From Data to AI with the
MACHINE LEARNING
CANVAS
@louisdorard #BDS16
“A breakthrough in machine
learning would be worth ten
Microsofts”
–Bill Gates
“In the next 20 years, machine
learning will have more impact
than mobile has.”
–Vinod Khosla
Skiing down the Gartner hype cycle
with Waldo & the Machine Learning Canvas
@louisdorard
WHO’S YOUR PAPI?
(Predictive Application Programming Interface)
What is ML?
Bedrooms | Bathrooms | Surface (foot²) | Year built | Type | Price ($)
3 | 1 | 860 | 1950 | house | 565,000
3 | 1 | 1012 | 1951 | house |
2 | 1.5 | 968 | 1976 | townhouse | 447,000
4 | | 1315 | 1950 | house | 648,000
3 | 2 | 1599 | 1964 | house |
3 | 2 | 987 | 1951 | townhouse | 790,000
1 | 1 | 530 | 2007 | condo | 122,000
4 | 2 | 1574 | 1964 | house | 835,000
4 | | | 2001 | house | 855,000
3 | 2.5 | 1472 | 2005 | house |
4 | 3.5 | 1714 | 2005 | townhouse |
2 | 2 | 1113 | 1999 | condo |
1 | | 769 | 1999 | condo | 315,000
last column = output (by convention)
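To make the "last column = output" idea concrete, here is a minimal sketch (assuming pandas and scikit-learn, which the slides don't prescribe) that learns from the rows whose price is known and estimates the missing prices; the 'type' column is left out because it would first need to be encoded as numbers.

```python
# Learn a price model from the rows where the price is known,
# then predict the rows where it is missing.
import pandas as pd
from sklearn.linear_model import LinearRegression

columns = ["bedrooms", "bathrooms", "surface_ft2", "year_built", "price"]
rows = [
    (3, 1.0,  860, 1950, 565_000),
    (2, 1.5,  968, 1976, 447_000),
    (3, 2.0,  987, 1951, 790_000),
    (1, 1.0,  530, 2007, 122_000),
    (4, 2.0, 1574, 1964, 835_000),
    (3, 1.0, 1012, 1951, None),   # price unknown -> to be predicted
    (3, 2.0, 1599, 1964, None),
]
df = pd.DataFrame(rows, columns=columns)

train = df[df["price"].notna()]
new = df[df["price"].isna()]
features = ["bedrooms", "bathrooms", "surface_ft2", "year_built"]

model = LinearRegression().fit(train[features], train["price"])
print(model.predict(new[features]))  # estimated prices for the two unknown rows
```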
Some use cases
• Real-estate: property → price (Zillow)
• Spam filtering: email → spam indicator (Gmail)
• City bikes: location, context → #bikes (BikePredict)
• Reduce churn: customer → churn indicator (ChurnSpotter)
• Anticipate demand: product, store, date → #sales (Blue Yonder)
RULES
@louisdorard
1. Descriptive analysis
2. Predictive analysis
3. Prescriptive analysis
4. Automated decisions
(Big?) Data analysis
reporting & old-school BI…
now we’re talking!
Decisions from predictions
1. Show churn rate against time
2. Predict which customers will churn next
3. Suggest what to do about each customer
(e.g. propose to switch plan, send promotional offer, etc.)
Churn analysis
• Who: SaaS company selling monthly subscription
• Question asked: “Is this customer going to leave within 1 month?”
• Input: customer
• Output: no-churn or churn
• Data collection: history up until 1 month ago
Churn prediction
Assume we know who’s going to churn. What do we do?
• Contact them (in which order?)
• Switch to different plan
• Give special offer
• No action?
Churn prediction → prevention
“3. Suggest what to do about each customer”
→ prioritised list of actions, based on…
• Customer representation
• Churn prediction
• Prediction confidence
• Revenue brought by customer
• Constraints on frequency of solicitations
Churn prevention
• Taking action for each TP (true positive) and FP (false positive) has a cost
• For each TP we “gain”: (success rate of action) × (revenue /cust. /month)
• Imagine…
• perfect predictions
• revenue /cust. /month = 10€
• success rate of action = 20%
• cost of action = 2€
• What is the ROI?
Churn prevention ROI
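One way to answer that question, as a minimal sketch in Python: the numbers come from the slide, while the months_retained horizon is an assumption added here to show why the answer depends on how long a retained customer keeps paying.

```python
# Churn-prevention ROI with the numbers from the slide, assuming perfect
# predictions (every contacted customer really was about to churn).
revenue_per_customer_per_month = 10.0  # €
action_success_rate = 0.20             # 20% of contacted churners are retained
action_cost = 2.0                      # € per contacted customer

def roi(months_retained: float) -> float:
    """ROI of soliciting one would-be churner, over an assumed retention horizon."""
    expected_gain = action_success_rate * revenue_per_customer_per_month * months_retained
    return (expected_gain - action_cost) / action_cost

print(roi(1))  # 0.0 -> break-even if a retained customer only stays one more month
print(roi(6))  # 5.0 -> clearly positive over a six-month horizon
```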
Machine Learning Canvas
The Canvas Concept
The Machine Learning Canvas
The Machine Learning Canvas (v0.4). Header fields: Designed for, Designed by, Date, Iteration.
Decisions: How are predictions used to make decisions that provide the proposed value to the end-user?
ML task: Input, output to predict, type of problem.
Value Propositions: What are we trying to do for the end-user(s) of the predictive system? What objectives are we serving?
Data Sources: Which raw data sources can we use (internal and external)?
Collecting Data: How do we get new data to learn from (inputs and outputs)?
Making Predictions: When do we make predictions on new inputs? How long do we have to featurize a new input and make a prediction?
Offline Evaluation: Methods and metrics to evaluate the system before deployment.
Features: Input representations extracted from raw data sources.
Building Models: When do we create/update models with new training data? How long do we have to featurize training inputs and create a model?
Live Evaluation and Monitoring: Methods and metrics to evaluate the system after deployment, and to quantify value creation.
machinelearningcanvas.com by Louis Dorard, Ph.D. Licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
• (Not an adaptation of the Business Model Canvas)
• Describe the Learning part of a predictive system / an intelligent
application:
• What data are we learning from?
• How are we using predictions powered by that learning?
• How are we making sure that the whole thing “works” through time?
The Machine Learning Canvas
Cross Industry Standard Process for Data Mining
By Kenneth Jensen - Own work, CC BY-SA 3.0
ML Canvas
“The Machine Learning Canvas is providing our clients real business value by supplying the first critical entry point for their implementation of predictive applications.”
–Ingolf Mollat, Principal Consultant at Blue Yonder
GOAL (what, why, who) / LEARN (how) / PREDICT (how) / EVALUATE (how well)
background
specifics
Domain Integration
Predictive Engine
Designed for: Customer retention | Designed by: Louis Dorard | Date: Sept. 2016 | Iteration: 1

Decisions:
On 1st day of every month:
• Filter out ‘no-churn’
• Sort remaining by descending (churn prob.) x (monthly revenue) and show prediction path for each
• Solicit customers

ML task:
Predict answer to “Is this customer going to churn in the coming month?”
• Input: customer
• Output: ‘churn’ or ‘no-churn’ class (‘churn’ is the Positive class)
• Binary Classification

Value Propositions:
Context:
• Company sells SaaS with monthly subscription
• End-user of predictive system is CRM team
We want to help them…
• Identify important clients who may churn, so appropriate action can be taken
• Reduce churn rate among high-revenue customers
• Improve success rate of retention efforts by understanding why customers may churn

Data Sources:
• CRM tool
• Payments database
• Website analytics
• Customer support
• Emailing to customers

Collecting Data:
Every month, we see which of last month’s customers churned or not, by looking through the payments database.
Associated inputs are customer “snapshots” taken last month.

Making Predictions:
Every month we (re-)featurize all current customers and make predictions for them. We do this overnight.

Features:
Basic customer info at time t (age, city, etc.)
Events between (t - 1 month) and t:
• Usage of product: # times logged in, functionalities used, etc.
• Cust. support interactions
• Other contextual, e.g. devices used

Building Models:
Every month we create a new model from the previous month’s customers.

Live Evaluation and Monitoring:
• Monitor churn rate
• Monitor (#non-churn among solicited) / #solicitations
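Stitching those boxes together, here is what the overnight monthly job could look like; a minimal sketch assuming pandas and scikit-learn (the canvas does not prescribe tooling), with column names such as 'customer_id', 'churned' (0/1) and 'monthly_revenue' made up for illustration.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def monthly_churn_job(last_month: pd.DataFrame, current: pd.DataFrame) -> pd.DataFrame:
    """One pass of the overnight job: build a model from last month's labelled
    snapshots, score current customers, and rank the likely churners."""
    feature_cols = [c for c in last_month.columns
                    if c not in ("customer_id", "churned")]

    # Building Models: learn from last month's customers (inputs + observed outputs)
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(last_month[feature_cols], last_month["churned"])

    # Making Predictions: probability of the positive ('churn') class
    churn_prob = model.predict_proba(current[feature_cols])[:, 1]

    # Decisions: drop predicted 'no-churn', rank by (churn prob.) x (monthly revenue)
    scored = current.assign(churn_prob=churn_prob)
    scored = scored[scored["churn_prob"] >= 0.5].copy()
    scored["priority"] = scored["churn_prob"] * scored["monthly_revenue"]
    return scored.sort_values("priority", ascending=False)
```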
• We predict customer would churn but they don’t…
• Great! Prevention works!
• Sh*t! Data inconsistent…
• (Store which actions were taken?)
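The last bullet hints at the fix. A minimal sketch of it, assuming a pandas workflow and a hypothetical `actions` log (none of this is prescribed by the slides): record which customers were solicited, so that next month's training data is not polluted by our own interventions.

```python
import pandas as pd

# Hypothetical log of the retention actions taken this month.
actions = pd.DataFrame({"customer_id": [17, 42],
                        "action": ["promo_offer", "plan_switch"]})

def next_training_set(last_month: pd.DataFrame, actions: pd.DataFrame) -> pd.DataFrame:
    """Drop solicited customers from the training data (an alternative is to
    keep them and add the action taken as an extra input feature)."""
    solicited = set(actions["customer_id"])
    return last_month[~last_month["customer_id"].isin(solicited)]
```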
Decisions (revised to carve out a hold-out set; the other boxes are unchanged):
On 1st day of every month:
• Randomly filter out 50% of customers (hold-out set)
• Filter out ‘no-churn’
• Sort remaining by descending (churn prob.) x (monthly revenue) and show prediction path for each
• Solicit customers
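A minimal sketch of that 50% split, assuming a pandas DataFrame of current customers (the variable and column names are assumptions): only the treated half is solicited, so the hold-out half can later serve as a comparison group.

```python
import pandas as pd

def split_for_live_eval(current: pd.DataFrame, seed: int) -> tuple[pd.DataFrame, pd.DataFrame]:
    """Randomly set aside 50% of customers as a hold-out group; the rest go
    through the filter/sort/solicit steps above."""
    holdout = current.sample(frac=0.5, random_state=seed)
    treated = current.drop(holdout.index)
    return holdout, treated
```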
Building Models (revised):
Every month we create a new model from the previous month’s hold-out set (or the whole set, when initializing this system).
We do this overnight (along with making predictions).
Live Evaluation and Monitoring (revised):
• Accuracy of last month’s predictions on hold-out set
• Compare churn rate & lost revenue between last month’s hold-out set and remaining set
• Monitor (#non-churn among solicited) / #solicitations
• Monitor ROI (based on diff. in lost revenue & cost of solicitations)
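A minimal sketch of those live metrics, under assumed data structures (two pandas DataFrames of last month's customers with 'churned' and 'monthly_revenue' columns, plus the campaign cost); since the hold-out and treated groups are the same size, their lost revenue can be compared directly.

```python
import pandas as pd

def live_metrics(holdout: pd.DataFrame, treated: pd.DataFrame,
                 n_solicited: int, cost_per_solicitation: float) -> dict:
    """Compare what happened to the untouched hold-out half vs. the treated half."""
    lost_rev_holdout = holdout.loc[holdout["churned"].astype(bool), "monthly_revenue"].sum()
    lost_rev_treated = treated.loc[treated["churned"].astype(bool), "monthly_revenue"].sum()
    campaign_cost = n_solicited * cost_per_solicitation
    return {
        "churn_rate_holdout": holdout["churned"].mean(),
        "churn_rate_treated": treated["churned"].mean(),
        # ROI based on the difference in lost revenue and the cost of solicitations
        "roi": (lost_rev_holdout - lost_rev_treated - campaign_cost) / campaign_cost,
    }
```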
Offline Evaluation (now filled in):
Before soliciting customers:
• Evaluate new model’s accuracy on pre-defined customer profiles
• Simulate decisions taken on last month’s customers (using model learnt from customers 2 months ago). Compute ROI w. different # customers to solicit & hypotheses on retention success rate (is it >0?)
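A minimal sketch of that simulation, under assumptions of my own: a pandas DataFrame `last_month` with 'churn_prob' from the model built two months ago, the observed 'churned' outcome and 'monthly_revenue'; the success rate and cost per solicitation are hypotheses, as the canvas says.

```python
import pandas as pd

def simulated_roi(last_month: pd.DataFrame, n_to_solicit: int,
                  success_rate: float, cost_per_solicitation: float) -> float:
    """Replay last month's decisions: solicit the top-priority customers and
    estimate what the campaign would have returned."""
    priority = last_month["churn_prob"] * last_month["monthly_revenue"]
    solicited = (last_month.assign(priority=priority)
                 .sort_values("priority", ascending=False)
                 .head(n_to_solicit))
    # Revenue we would have saved: actual churners we reached, times the
    # hypothesised success rate of the retention action.
    saved = success_rate * solicited.loc[solicited["churned"].astype(bool),
                                         "monthly_revenue"].sum()
    cost = n_to_solicit * cost_per_solicitation
    return (saved - cost) / cost

# Sweep the decision parameters before going live: is the ROI > 0 anywhere?
# for n in (50, 100, 200):
#     for rate in (0.1, 0.2, 0.3):
#         print(n, rate, simulated_roi(last_month, n, rate, 2.0))
```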
Making Predictions (revised):
Every month we (re-)featurize all current customers and make predictions for them.
We do this overnight (along with building the model that powers these predictions and evaluating it).
Building Models (revised):
Every month we create a new model from the previous month’s hold-out set (or the whole set, when initializing this system).
We do this overnight (along with offline evaluation and making predictions).
Decisions (final version):
On 1st day of every month:
• Randomly filter out 50% of customers (hold-out set)
• Filter out ‘no-churn’
• Sort remaining by descending (churn prob.) x (monthly revenue) and show prediction path for each
• Solicit as many customers as suggested by simulation
• Assist data scientists, software engineers, and product and business managers in aligning their activities
• Make sure all efforts are directed at solving the right problem!
• Choose the right algorithm / infrastructure / ML solution prior to implementation
• Guide project management
• machinelearningcanvas.com
Why fill in the ML Canvas?
“Great predictive modeling is an important part of the solution, but it no longer stands on its own; as products become more sophisticated, it disappears into the plumbing.”
–Jeremy Howard
twitter.com/louisdorard
2 Shameless Plugs
follow us: @papisdotio
WE’RE HIRING!
THANK YOU!
Intro to machine learning for web folks @ BlendWebMixLouis Dorard
 
A developer's overview of the world of predictive APIs
A developer's overview of the world of predictive APIsA developer's overview of the world of predictive APIs
A developer's overview of the world of predictive APIsLouis Dorard
 
Using predictive APIs to create smarter apps
Using predictive APIs to create smarter appsUsing predictive APIs to create smarter apps
Using predictive APIs to create smarter appsLouis Dorard
 
Predictive APIs at APIdays Berlin
Predictive APIs at APIdays BerlinPredictive APIs at APIdays Berlin
Predictive APIs at APIdays BerlinLouis Dorard
 
Data Summit Brussels: Introduction
Data Summit Brussels: IntroductionData Summit Brussels: Introduction
Data Summit Brussels: IntroductionLouis Dorard
 
Exploration & Exploitation Challenge 2011
Exploration & Exploitation Challenge 2011Exploration & Exploitation Challenge 2011
Exploration & Exploitation Challenge 2011Louis Dorard
 

More from Louis Dorard (11)

Machine Learning: je m'y mets demain!
Machine Learning: je m'y mets demain!Machine Learning: je m'y mets demain!
Machine Learning: je m'y mets demain!
 
Trusting AI with important decisions
Trusting AI with important decisionsTrusting AI with important decisions
Trusting AI with important decisions
 
Predictive apps for startups
Predictive apps for startupsPredictive apps for startups
Predictive apps for startups
 
Pragmatic Machine Learning @ ML Spain
Pragmatic Machine Learning @ ML SpainPragmatic Machine Learning @ ML Spain
Pragmatic Machine Learning @ ML Spain
 
Future of AI-powered automation in business
Future of AI-powered automation in businessFuture of AI-powered automation in business
Future of AI-powered automation in business
 
Intro to machine learning for web folks @ BlendWebMix
Intro to machine learning for web folks @ BlendWebMixIntro to machine learning for web folks @ BlendWebMix
Intro to machine learning for web folks @ BlendWebMix
 
A developer's overview of the world of predictive APIs
A developer's overview of the world of predictive APIsA developer's overview of the world of predictive APIs
A developer's overview of the world of predictive APIs
 
Using predictive APIs to create smarter apps
Using predictive APIs to create smarter appsUsing predictive APIs to create smarter apps
Using predictive APIs to create smarter apps
 
Predictive APIs at APIdays Berlin
Predictive APIs at APIdays BerlinPredictive APIs at APIdays Berlin
Predictive APIs at APIdays Berlin
 
Data Summit Brussels: Introduction
Data Summit Brussels: IntroductionData Summit Brussels: Introduction
Data Summit Brussels: Introduction
 
Exploration & Exploitation Challenge 2011
Exploration & Exploitation Challenge 2011Exploration & Exploitation Challenge 2011
Exploration & Exploitation Challenge 2011
 

Recently uploaded

10 Differences between Sales Cloud and CPQ, Blanka Doktorová
10 Differences between Sales Cloud and CPQ, Blanka Doktorová10 Differences between Sales Cloud and CPQ, Blanka Doktorová
10 Differences between Sales Cloud and CPQ, Blanka DoktorováCzechDreamin
 
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptxIOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptxAbida Shariff
 
ODC, Data Fabric and Architecture User Group
ODC, Data Fabric and Architecture User GroupODC, Data Fabric and Architecture User Group
ODC, Data Fabric and Architecture User GroupCatarinaPereira64715
 
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...Product School
 
When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...Elena Simperl
 
Assuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyesAssuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyesThousandEyes
 
Custom Approval Process: A New Perspective, Pavel Hrbacek & Anindya Halder
Custom Approval Process: A New Perspective, Pavel Hrbacek & Anindya HalderCustom Approval Process: A New Perspective, Pavel Hrbacek & Anindya Halder
Custom Approval Process: A New Perspective, Pavel Hrbacek & Anindya HalderCzechDreamin
 
Connector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a buttonConnector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
 
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...Product School
 
Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
 
JMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and GrafanaJMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and GrafanaRTTS
 
SOQL 201 for Admins & Developers: Slice & Dice Your Org’s Data With Aggregate...
SOQL 201 for Admins & Developers: Slice & Dice Your Org’s Data With Aggregate...SOQL 201 for Admins & Developers: Slice & Dice Your Org’s Data With Aggregate...
SOQL 201 for Admins & Developers: Slice & Dice Your Org’s Data With Aggregate...CzechDreamin
 
How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...Product School
 
Free and Effective: Making Flows Publicly Accessible, Yumi Ibrahimzade
Free and Effective: Making Flows Publicly Accessible, Yumi IbrahimzadeFree and Effective: Making Flows Publicly Accessible, Yumi Ibrahimzade
Free and Effective: Making Flows Publicly Accessible, Yumi IbrahimzadeCzechDreamin
 
Introduction to Open Source RAG and RAG Evaluation
Introduction to Open Source RAG and RAG EvaluationIntroduction to Open Source RAG and RAG Evaluation
Introduction to Open Source RAG and RAG EvaluationZilliz
 
Speed Wins: From Kafka to APIs in Minutes
Speed Wins: From Kafka to APIs in MinutesSpeed Wins: From Kafka to APIs in Minutes
Speed Wins: From Kafka to APIs in Minutesconfluent
 
Salesforce Adoption – Metrics, Methods, and Motivation, Antone Kom
Salesforce Adoption – Metrics, Methods, and Motivation, Antone KomSalesforce Adoption – Metrics, Methods, and Motivation, Antone Kom
Salesforce Adoption – Metrics, Methods, and Motivation, Antone KomCzechDreamin
 
Designing Great Products: The Power of Design and Leadership by Chief Designe...
Designing Great Products: The Power of Design and Leadership by Chief Designe...Designing Great Products: The Power of Design and Leadership by Chief Designe...
Designing Great Products: The Power of Design and Leadership by Chief Designe...Product School
 
Unpacking Value Delivery - Agile Oxford Meetup - May 2024.pptx
Unpacking Value Delivery - Agile Oxford Meetup - May 2024.pptxUnpacking Value Delivery - Agile Oxford Meetup - May 2024.pptx
Unpacking Value Delivery - Agile Oxford Meetup - May 2024.pptxDavid Michel
 
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...Product School
 

Recently uploaded (20)

10 Differences between Sales Cloud and CPQ, Blanka Doktorová
10 Differences between Sales Cloud and CPQ, Blanka Doktorová10 Differences between Sales Cloud and CPQ, Blanka Doktorová
10 Differences between Sales Cloud and CPQ, Blanka Doktorová
 
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptxIOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
 
ODC, Data Fabric and Architecture User Group
ODC, Data Fabric and Architecture User GroupODC, Data Fabric and Architecture User Group
ODC, Data Fabric and Architecture User Group
 
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
 
When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...
 
Assuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyesAssuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyes
 
Custom Approval Process: A New Perspective, Pavel Hrbacek & Anindya Halder
Custom Approval Process: A New Perspective, Pavel Hrbacek & Anindya HalderCustom Approval Process: A New Perspective, Pavel Hrbacek & Anindya Halder
Custom Approval Process: A New Perspective, Pavel Hrbacek & Anindya Halder
 
Connector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a buttonConnector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a button
 
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
 
Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with Parameters
 
JMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and GrafanaJMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and Grafana
 
SOQL 201 for Admins & Developers: Slice & Dice Your Org’s Data With Aggregate...
SOQL 201 for Admins & Developers: Slice & Dice Your Org’s Data With Aggregate...SOQL 201 for Admins & Developers: Slice & Dice Your Org’s Data With Aggregate...
SOQL 201 for Admins & Developers: Slice & Dice Your Org’s Data With Aggregate...
 
How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...
 
Free and Effective: Making Flows Publicly Accessible, Yumi Ibrahimzade
Free and Effective: Making Flows Publicly Accessible, Yumi IbrahimzadeFree and Effective: Making Flows Publicly Accessible, Yumi Ibrahimzade
Free and Effective: Making Flows Publicly Accessible, Yumi Ibrahimzade
 
Introduction to Open Source RAG and RAG Evaluation
Introduction to Open Source RAG and RAG EvaluationIntroduction to Open Source RAG and RAG Evaluation
Introduction to Open Source RAG and RAG Evaluation
 
Speed Wins: From Kafka to APIs in Minutes
Speed Wins: From Kafka to APIs in MinutesSpeed Wins: From Kafka to APIs in Minutes
Speed Wins: From Kafka to APIs in Minutes
 
Salesforce Adoption – Metrics, Methods, and Motivation, Antone Kom
Salesforce Adoption – Metrics, Methods, and Motivation, Antone KomSalesforce Adoption – Metrics, Methods, and Motivation, Antone Kom
Salesforce Adoption – Metrics, Methods, and Motivation, Antone Kom
 
Designing Great Products: The Power of Design and Leadership by Chief Designe...
Designing Great Products: The Power of Design and Leadership by Chief Designe...Designing Great Products: The Power of Design and Leadership by Chief Designe...
Designing Great Products: The Power of Design and Leadership by Chief Designe...
 
Unpacking Value Delivery - Agile Oxford Meetup - May 2024.pptx
Unpacking Value Delivery - Agile Oxford Meetup - May 2024.pptxUnpacking Value Delivery - Agile Oxford Meetup - May 2024.pptx
Unpacking Value Delivery - Agile Oxford Meetup - May 2024.pptx
 
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
 
