The Road to Data Science - Joel Grus, June 2015

Seattle DAML Meetup
Joel Grus
June 23, 2015
Data Science from Scratch
About me
Old-school DAML-er
Wrote a book ---------->
SWE at Google
Formerly data science at
VoloMetrix, Decide,
Farecast
The Road to Data Science
My Grad School
Fareology
Data Science Is A Broad Field
[Venn diagram: overlapping circles labeled "Some Stuff", "More Stuff", and "Even More Stuff"; the central overlap is "Data Science", and the other regions are labeled "People who think they're data scientists, but they're not really data scientists", "People who are a danger to everyone around them", and "People who say 'machine learnings'"]
a data scientist should be able to
run a regression, write a sql query, scrape a web
site, design an experiment, factor matrices, use a
data frame, pretend to understand deep learning,
steal from the d3 gallery, argue r versus python,
think in mapreduce, update a prior, build a
dashboard, clean up messy data, test a hypothesis,
talk to a businessperson, script a shell, code on a
whiteboard, hack a p-value, machine-learn a model.
specialization is for engineers.
JOEL GRUS
A lot of stuff!
What Are Hiring Managers Looking For?
Let's check LinkedIn
grad students!
Learning Data Science
"I want to be a data scientist." "Great!"
The Math Way
"I like to start with matrix decompositions. How's your measure theory?"
The Math Way
The Good:
Solid foundation
Math is the noblest known pursuit
The Bad:
Some weirdos don't think math is fun
Can be pretty forbidding
Can miss practical skills
"So, did you count the words in that document?" "No, but I have an elegant proof that the number of words is finite!"
OK, Let's Try Again
"I want to be a data scientist." "Great!"
The Tools Way
"Here's a list of the 25 libraries you really ought to know. How's your R programming?"
The Tools Way
The Good:
Don't have to understand the math
Practical
Can get started doing fun stuff right away
The Bad:
Don't have to understand the math
Can get started doing bad science right away
"So, did you build that model?" "Yes, and it fits the training data almost perfectly!"
OK, Maybe Not That Either
So Then What?
Example: k-means clustering
Unsupervised machine learning technique
Given a set of points, group them into k clusters in a way that minimizes the within-cluster sum-of-squares
i.e. in a way such that the clusters are as "small" as possible (for a particular conception of "small")
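That objective can be stated concretely. Here's a minimal sketch (not from the talk; it assumes points as tuples and a hard assignment of each point to a cluster) that computes the within-cluster sum-of-squares:

```python
def within_cluster_ss(points, assignments, means):
    """Sum of squared distances from each point to its cluster's mean."""
    return sum(
        sum((x - m) ** 2 for x, m in zip(point, means[cluster]))
        for point, cluster in zip(points, assignments)
    )

# two tiny clusters, around (0, 0) and around (10, 10)
points = [(0, 0), (1, 1), (10, 10), (11, 11)]
assignments = [0, 0, 1, 1]
means = [(0.5, 0.5), (10.5, 10.5)]
print(within_cluster_ss(points, assignments, means))  # 2.0
```

k-means searches for the assignment (and means) that make this number as small as possible.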
The Math Way
The Tools Way
# a 2-dimensional example
x <- rbind(matrix(rnorm(100, sd = 0.3), ncol = 2),
           matrix(rnorm(100, mean = 1, sd = 0.3), ncol = 2))
colnames(x) <- c("x", "y")
(cl <- kmeans(x, 2))
plot(x, col = cl$cluster)
points(cl$centers, col = 1:2, pch = 8, cex = 2)
The Tools Way
>>> from sklearn import cluster, datasets
>>> iris = datasets.load_iris()
>>> X_iris = iris.data
>>> y_iris = iris.target
>>> k_means = cluster.KMeans(n_clusters=3)
>>> k_means.fit(X_iris)
KMeans(copy_x=True, init='k-means++', ...
>>> print(k_means.labels_[::10])
[1 1 1 1 1 0 0 0 0 0 2 2 2 2 2]
>>> print(y_iris[::10])
[0 0 0 0 0 1 1 1 1 1 2 2 2 2 2]
So What To Do?
Bootcamps?
Data Science from Scratch
This is to certify that Joel Grus has honorably completed the course of study outlined in the book Data Science from Scratch: First Principles with Python, and is entitled to all the Rights, Privileges, and Honors thereunto appertaining.
Joel Grus, June 23, 2015
Certificate Programs?
Hey! Data scientists!
Learning By Building
You don't really understand something until you build it
For example, I understand garbage disposals much better now that I had to replace one that was leaking water all over my kitchen
More relevantly, I thought I understood hypothesis testing until I tried to write a book chapter + code about it
Learning By Building
Functional Programming
Break Things Down Into Small Functions
So you don't end up with something like this
Don't Mutate
Example: k-means clustering
Given a set of points, group them into k clusters in a way that minimizes the within-cluster sum-of-squares
Global optimization is hard, so use a greedy iterative approach
Fun Motivation: Image Posterization
Image consists of pixels
Each pixel is a triplet (R,G,B)
Imagine pixels as points in space
Find k clusters of pixels
Recolor each pixel to its cluster mean
I think it's fun, anyway
8 colors
Example: k-means clustering
given some points, find k clusters by
  choose k "means"
  repeat:
    assign each point to cluster of closest "mean"
    recompute mean of each cluster
sounds simple! let's code!
import random  # needed for random.sample; also assumes a mean(points) helper

def k_means(points, k, num_iters=10):
    means = list(random.sample(points, k))
    assignments = [None for _ in points]
    for _ in range(num_iters):
        # assign each point to closest mean
        for i, point_i in enumerate(points):
            d_min = float('inf')
            for j, mean_j in enumerate(means):
                d = sum((x - y)**2
                        for x, y in zip(point_i, mean_j))
                if d < d_min:
                    d_min = d
                    assignments[i] = j
        # recompute means
        for j in range(k):
            cluster = [point
                       for i, point in enumerate(points)
                       if assignments[i] == j]
            means[j] = mean(cluster)
    return means

Walking through it:
start with k randomly chosen points
start with no cluster assignments
for each iteration
for each point
for each mean
compute the distance
assign the point to the cluster of the mean with the smallest distance
find the points in each cluster
and compute the new means
Not impenetrable, but a lot less helpful than it could be
Can we make it simpler?
Break Things Down Into Small Functions
def k_means(points, k, num_iters=10):
    # start with k of the points as "means"
    means = random.sample(points, k)
    # and iterate finding new means
    for _ in range(num_iters):
        means = new_means(points, means)
    return means

def new_means(points, means):
    # assign points to clusters
    # each cluster is just a list of points
    clusters = assign_clusters(points, means)
    # return the cluster means
    return [mean(cluster)
            for cluster in clusters]

def assign_clusters(points, means):
    # one cluster for each mean
    # each cluster starts empty
    clusters = [[] for _ in means]
    # assign each point to cluster
    # corresponding to closest mean
    for point in points:
        index = closest_index(point, means)
        clusters[index].append(point)
    return clusters

def closest_index(point, means):
    # return index of closest mean
    return argmin(distance(point, mean)
                  for mean in means)

def argmin(xs):
    # return index of smallest element
    return min(enumerate(xs),
               key=lambda pair: pair[1])[0]
To Recap
k_means(points, k, num_iters=10)
new_means(points, means)
assign_clusters(points, means)
closest_index(point, means)
argmin(xs)
distance(point1, point2)
mean(points)
add(point1, point2)
scalar_multiply(c, point)
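The last four helpers in that list never appear on the slides. Here are hedged sketches, consistent with how they're used above (assuming points are lists of numbers):

```python
def add(point1, point2):
    # componentwise sum of two points
    return [x + y for x, y in zip(point1, point2)]

def scalar_multiply(c, point):
    # multiply every component by c
    return [c * x for x in point]

def mean(points):
    # componentwise mean of a (non-empty) list of points
    total = points[0]
    for p in points[1:]:
        total = add(total, p)
    return scalar_multiply(1 / len(points), total)

def distance(point1, point2):
    # Euclidean distance between two points
    return sum((x - y) ** 2 for x, y in zip(point1, point2)) ** 0.5

print(mean([[0, 0], [2, 4]]))    # [1.0, 2.0]
print(distance([0, 0], [3, 4]))  # 5.0
```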
As a Pedagogical Tool
Can be used "top down" (as we did here)
Implement high-level logic
Then implement the details
Nice for exposition
Can also be used "bottom up"
Implement small pieces
Build up to high-level logic
Good for workshops
Example: Decision Trees
Want to predict whether
a given Meetup is worth
attending (True) or not
(False)
Inputs are dictionaries
describing each Meetup
{ "group" : "DAML",
"date" : "2015-06-23",
"beer" : "free",
"food" : "dim sum",
"speaker" : "@joelgrus",
"location" : "Google",
"topic" : "shameless self-promotion" }
{ "group" : "Seattle Atheists",
"date" : "2015-06-23",
"location" : "Round the Table",
"beer" : "none",
"food" : "none",
"topic" : "Godless Game Night" }
Example: Decision Trees
[Decision tree diagram: root node "beer?"; the "free" branch predicts True, the "none" branch predicts False, and the "paid" branch leads to a "speaker?" node whose "@jakevdp" / "@joelgrus" branches predict True / False]
Example: Decision Trees
class LeafNode:
    def __init__(self, prediction):
        self.prediction = prediction

    def predict(self, input_dict):
        return self.prediction

class DecisionNode:
    def __init__(self, attribute, subtree_dict):
        self.attribute = attribute
        self.subtree_dict = subtree_dict

    def predict(self, input_dict):
        value = input_dict.get(self.attribute)
        subtree = self.subtree_dict[value]
        return subtree.predict(input_dict)
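A quick usage sketch of these classes, building the "beer?" tree from the diagram (the classes are restated so the snippet runs standalone; the branch outcomes are illustrative, not from the slides):

```python
class LeafNode:
    def __init__(self, prediction):
        self.prediction = prediction

    def predict(self, input_dict):
        # a leaf always predicts the same thing
        return self.prediction

class DecisionNode:
    def __init__(self, attribute, subtree_dict):
        self.attribute = attribute
        self.subtree_dict = subtree_dict

    def predict(self, input_dict):
        # look up the input's value for this attribute,
        # then recurse into the matching subtree
        subtree = self.subtree_dict[input_dict.get(self.attribute)]
        return subtree.predict(input_dict)

# illustrative: free beer -> go; no beer -> skip; paid -> depends on speaker
tree = DecisionNode("beer", {
    "free": LeafNode(True),
    "none": LeafNode(False),
    "paid": DecisionNode("speaker", {
        "@jakevdp": LeafNode(True),
        "@joelgrus": LeafNode(False),
    }),
})
print(tree.predict({"beer": "free"}))                         # True
print(tree.predict({"beer": "paid", "speaker": "@jakevdp"}))  # True
```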
Example: Decision Trees
Again inspiration from functional programming:
type Input = Map.Map String String
data Tree = Predict Bool                    -- always predict a specific value
          | Subtrees String                 -- look at (e.g.) the "beer" entry
                     (Map.Map String Tree)  -- a map from each possible "beer" value to a subtree
Example: Decision Trees
type Input = Map.Map String String
data Tree = Predict Bool
          | Subtrees String (Map.Map String Tree)

predict :: Tree -> Input -> Bool
predict (Predict b) _ = b
predict (Subtrees a subtrees) input =
    predict subtree input
  where subtree = subtrees Map.! (input Map.! a)
Example: Decision Trees
We can do the same: we'll say a decision tree is either
True
False
(attribute, subtree_dict)
e.g.
("beer",
 { "free" : True,
   "none" : False,
   "paid" : ("speaker",
             {...})})
Example: Decision Trees
def predict(tree, input_dict):
    # leaf node predicts itself
    if tree in (True, False):
        return tree
    else:
        # destructure tree
        attribute, subtree_dict = tree
        # find appropriate subtree
        value = input_dict[attribute]
        subtree = subtree_dict[value]
        # classify using subtree
        return predict(subtree, input_dict)
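The tuple encoding can be exercised the same way. A self-contained sketch (predict restated so it runs standalone; branch outcomes illustrative):

```python
def predict(tree, input_dict):
    # a tree is either a bool (leaf) or an (attribute, subtree_dict) pair
    if tree in (True, False):
        return tree
    attribute, subtree_dict = tree
    # recurse into the subtree matching the input's value for this attribute
    return predict(subtree_dict[input_dict[attribute]], input_dict)

# illustrative tree mirroring the "beer?" example
tree = ("beer", {
    "free": True,
    "none": False,
    "paid": ("speaker", {"@jakevdp": True, "@joelgrus": False}),
})
print(predict(tree, {"beer": "none"}))                         # False
print(predict(tree, {"beer": "paid", "speaker": "@jakevdp"}))  # True
```

No classes needed: the "either a leaf or a branching pair" shape carries over directly from the Haskell data declaration.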
Not Just For Data Science
In Conclusion
Teaching data science is fun, if you're smart
about it
Learning data science is fun, if you're smart
about it
Writing a book is not that much fun
Having written a book is pretty fun
Making slides is actually kind of fun
Functional programming is a lot of fun
Thanks!
@joelgrus
joelgrus@gmail.com
joelgrus.com
Scaling decision trees - George Murray, July 2015

Recently uploaded

GDSC Mikroskil Members Onboarding 2023.pdf by
GDSC Mikroskil Members Onboarding 2023.pdfGDSC Mikroskil Members Onboarding 2023.pdf
GDSC Mikroskil Members Onboarding 2023.pdfgdscmikroskil
58 views62 slides
SUMIT SQL PROJECT SUPERSTORE 1.pptx by
SUMIT SQL PROJECT SUPERSTORE 1.pptxSUMIT SQL PROJECT SUPERSTORE 1.pptx
SUMIT SQL PROJECT SUPERSTORE 1.pptxSumit Jadhav
18 views26 slides
Update 42 models(Diode/General ) in SPICE PARK(DEC2023) by
Update 42 models(Diode/General ) in SPICE PARK(DEC2023)Update 42 models(Diode/General ) in SPICE PARK(DEC2023)
Update 42 models(Diode/General ) in SPICE PARK(DEC2023)Tsuyoshi Horigome
38 views16 slides
Design of machine elements-UNIT 3.pptx by
Design of machine elements-UNIT 3.pptxDesign of machine elements-UNIT 3.pptx
Design of machine elements-UNIT 3.pptxgopinathcreddy
33 views31 slides
MK__Cert.pdf by
MK__Cert.pdfMK__Cert.pdf
MK__Cert.pdfHassan Khan
15 views1 slide
Ansari: Practical experiences with an LLM-based Islamic Assistant by
Ansari: Practical experiences with an LLM-based Islamic AssistantAnsari: Practical experiences with an LLM-based Islamic Assistant
Ansari: Practical experiences with an LLM-based Islamic AssistantM Waleed Kadous
5 views29 slides

Recently uploaded(20)

GDSC Mikroskil Members Onboarding 2023.pdf by gdscmikroskil
GDSC Mikroskil Members Onboarding 2023.pdfGDSC Mikroskil Members Onboarding 2023.pdf
GDSC Mikroskil Members Onboarding 2023.pdf
gdscmikroskil58 views
SUMIT SQL PROJECT SUPERSTORE 1.pptx by Sumit Jadhav
SUMIT SQL PROJECT SUPERSTORE 1.pptxSUMIT SQL PROJECT SUPERSTORE 1.pptx
SUMIT SQL PROJECT SUPERSTORE 1.pptx
Sumit Jadhav 18 views
Update 42 models(Diode/General ) in SPICE PARK(DEC2023) by Tsuyoshi Horigome
Update 42 models(Diode/General ) in SPICE PARK(DEC2023)Update 42 models(Diode/General ) in SPICE PARK(DEC2023)
Update 42 models(Diode/General ) in SPICE PARK(DEC2023)
Design of machine elements-UNIT 3.pptx by gopinathcreddy
Design of machine elements-UNIT 3.pptxDesign of machine elements-UNIT 3.pptx
Design of machine elements-UNIT 3.pptx
gopinathcreddy33 views
Ansari: Practical experiences with an LLM-based Islamic Assistant by M Waleed Kadous
Ansari: Practical experiences with an LLM-based Islamic AssistantAnsari: Practical experiences with an LLM-based Islamic Assistant
Ansari: Practical experiences with an LLM-based Islamic Assistant
M Waleed Kadous5 views
BCIC - Manufacturing Conclave - Technology-Driven Manufacturing for Growth by Innomantra
BCIC - Manufacturing Conclave -  Technology-Driven Manufacturing for GrowthBCIC - Manufacturing Conclave -  Technology-Driven Manufacturing for Growth
BCIC - Manufacturing Conclave - Technology-Driven Manufacturing for Growth
Innomantra 6 views
Design_Discover_Develop_Campaign.pptx by ShivanshSeth6
Design_Discover_Develop_Campaign.pptxDesign_Discover_Develop_Campaign.pptx
Design_Discover_Develop_Campaign.pptx
ShivanshSeth637 views
2023Dec ASU Wang NETR Group Research Focus and Facility Overview.pptx by lwang78
2023Dec ASU Wang NETR Group Research Focus and Facility Overview.pptx2023Dec ASU Wang NETR Group Research Focus and Facility Overview.pptx
2023Dec ASU Wang NETR Group Research Focus and Facility Overview.pptx
lwang78109 views
Web Dev Session 1.pptx by VedVekhande
Web Dev Session 1.pptxWeb Dev Session 1.pptx
Web Dev Session 1.pptx
VedVekhande11 views
Design of Structures and Foundations for Vibrating Machines, Arya-ONeill-Pinc... by csegroupvn
Design of Structures and Foundations for Vibrating Machines, Arya-ONeill-Pinc...Design of Structures and Foundations for Vibrating Machines, Arya-ONeill-Pinc...
Design of Structures and Foundations for Vibrating Machines, Arya-ONeill-Pinc...
csegroupvn5 views
ASSIGNMENTS ON FUZZY LOGIC IN TRAFFIC FLOW.pdf by AlhamduKure
ASSIGNMENTS ON FUZZY LOGIC IN TRAFFIC FLOW.pdfASSIGNMENTS ON FUZZY LOGIC IN TRAFFIC FLOW.pdf
ASSIGNMENTS ON FUZZY LOGIC IN TRAFFIC FLOW.pdf
AlhamduKure6 views
MongoDB.pdf by ArthyR3
MongoDB.pdfMongoDB.pdf
MongoDB.pdf
ArthyR345 views

The Road to Data Science - Joel Grus, June 2015

  • 1. Joel Grus Seattle DAML Meetup June 23, 2015 Data Science from Scratch
  • 2. About me Old-school DAML-er Wrote a book ----------> SWE at Google Formerly data science at VoloMetrix, Decide, Farecast
  • 4. The Road to Data Science My
  • 10. Data Science Is A Broad Field Some Stuff More Stuff Even More Stuff Data Science People who think they're data scientists, but they're not really data scientists People who are a danger to everyone around them People who say "machine learnings"
  • 12. a data scientist should be able to JOEL GRUS
  • 13. a data scientist should be able to run a regression, JOEL GRUS
  • 14. a data scientist should be able to run a regression, write a sql query, JOEL GRUS
  • 15. a data scientist should be able to run a regression, write a sql query, scrape a web site, JOEL GRUS
  • 16. a data scientist should be able to run a regression, write a sql query, scrape a web site, design an experiment, JOEL GRUS
  • 17. a data scientist should be able to run a regression, write a sql query, scrape a web site, design an experiment, factor matrices, JOEL GRUS
  • 18. a data scientist should be able to run a regression, write a sql query, scrape a web site, design an experiment, factor matrices, use a data frame, JOEL GRUS
  • 19. a data scientist should be able to run a regression, write a sql query, scrape a web site, design an experiment, factor matrices, use a data frame, pretend to understand deep learning, JOEL GRUS
  • 20. a data scientist should be able to run a regression, write a sql query, scrape a web site, design an experiment, factor matrices, use a data frame, pretend to understand deep learning, steal from the d3 gallery, JOEL GRUS
  • 21. a data scientist should be able to run a regression, write a sql query, scrape a web site, design an experiment, factor matrices, use a data frame, pretend to understand deep learning, steal from the d3 gallery, argue r versus python, JOEL GRUS
  • 22. a data scientist should be able to run a regression, write a sql query, scrape a web site, design an experiment, factor matrices, use a data frame, pretend to understand deep learning, steal from the d3 gallery, argue r versus python, think in mapreduce, JOEL GRUS
  • 23. a data scientist should be able to run a regression, write a sql query, scrape a web site, design an experiment, factor matrices, use a data frame, pretend to understand deep learning, steal from the d3 gallery, argue r versus python, think in mapreduce, update a prior, JOEL GRUS
  • 24. a data scientist should be able to run a regression, write a sql query, scrape a web site, design an experiment, factor matrices, use a data frame, pretend to understand deep learning, steal from the d3 gallery, argue r versus python, think in mapreduce, update a prior, build a dashboard, JOEL GRUS
  • 25. a data scientist should be able to run a regression, write a sql query, scrape a web site, design an experiment, factor matrices, use a data frame, pretend to understand deep learning, steal from the d3 gallery, argue r versus python, think in mapreduce, update a prior, build a dashboard, clean up messy data, JOEL GRUS
  • 26. a data scientist should be able to run a regression, write a sql query, scrape a web site, design an experiment, factor matrices, use a data frame, pretend to understand deep learning, steal from the d3 gallery, argue r versus python, think in mapreduce, update a prior, build a dashboard, clean up messy data, test a hypothesis, JOEL GRUS
  • 27. a data scientist should be able to run a regression, write a sql query, scrape a web site, design an experiment, factor matrices, use a data frame, pretend to understand deep learning, steal from the d3 gallery, argue r versus python, think in mapreduce, update a prior, build a dashboard, clean up messy data, test a hypothesis, talk to a businessperson, JOEL GRUS
  • 28. a data scientist should be able to run a regression, write a sql query, scrape a web site, design an experiment, factor matrices, use a data frame, pretend to understand deep learning, steal from the d3 gallery, argue r versus python, think in mapreduce, update a prior, build a dashboard, clean up messy data, test a hypothesis, talk to a businessperson, script a shell, JOEL GRUS
  • 29. a data scientist should be able to run a regression, write a sql query, scrape a web site, design an experiment, factor matrices, use a data frame, pretend to understand deep learning, steal from the d3 gallery, argue r versus python, think in mapreduce, update a prior, build a dashboard, clean up messy data, test a hypothesis, talk to a businessperson, script a shell, code on a whiteboard, JOEL GRUS
  • 30. a data scientist should be able to run a regression, write a sql query, scrape a web site, design an experiment, factor matrices, use a data frame, pretend to understand deep learning, steal from the d3 gallery, argue r versus python, think in mapreduce, update a prior, build a dashboard, clean up messy data, test a hypothesis, talk to a businessperson, script a shell, code on a whiteboard, hack a p-value, JOEL GRUS
  • 31. a data scientist should be able to run a regression, write a sql query, scrape a web site, design an experiment, factor matrices, use a data frame, pretend to understand deep learning, steal from the d3 gallery, argue r versus python, think in mapreduce, update a prior, build a dashboard, clean up messy data, test a hypothesis, talk to a businessperson, script a shell, code on a whiteboard, hack a p-value, machine-learn a model. JOEL GRUS
  • 32. a data scientist should be able to run a regression, write a sql query, scrape a web site, design an experiment, factor matrices, use a data frame, pretend to understand deep learning, steal from the d3 gallery, argue r versus python, think in mapreduce, update a prior, build a dashboard, clean up messy data, test a hypothesis, talk to a businessperson, script a shell, code on a whiteboard, hack a p-value, machine-learn a model. specialization is for engineers. JOEL GRUS
  • 33. A lot of stuff!
  • 34. What Are Hiring Managers Looking For?
  • 35. What Are Hiring Managers Looking For? Let's check LinkedIn
  • 37. a data scientist should be able to run a regression, write a sql query, scrape a web site, design an experiment, factor matrices, use a data frame, pretend to understand deep learning, steal from the d3 gallery, argue r versus python, think in mapreduce, update a prior, build a dashboard, clean up messy data, test a hypothesis, talk to a businessperson, script a shell, code on a whiteboard, hack a p-value, machine-learn a model. specialization is for engineers. JOEL GRUS grad students!
  • 39. I want to be a data scientist. Great!
  • 40. The Math Way I like to start with matrix decompositions. How's your measure theory?
  • 41. The Math Way The Good: Solid foundation Math is the noblest known pursuit
  • 42. The Math Way The Good: Solid foundation Math is the noblest known pursuit The Bad: Some weirdos don't think math is fun Can be pretty forbidding Can miss practical skills
  • 43. So, did you count the words in that document? No, but I have an elegant proof that the number of words is finite!
  • 44. OK, Let's Try Again
  • 45. I want to be a data scientist. Great!
  • 46. The Tools Way Here's a list of the 25 libraries you really ought to know. How's your R programming?
  • 47. The Tools Way The Good: Don't have to understand the math Practical Can get started doing fun stuff right away
  • 48. The Tools Way The Good: Don't have to understand the math Practical Can get started doing fun stuff right away The Bad: Don't have to understand the math Can get started doing bad science right away
  • 49. So, did you build that model? Yes, and it fits the training data almost perfectly!
  • 50. OK, Maybe Not That Either
  • 52. Example: k-means clustering Unsupervised machine learning technique Given a set of points, group them into k clusters in a way that minimizes the within-cluster sum-of-squares i.e. in a way such that the clusters are as "small" as possible (for a particular conception of "small")
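For concreteness, the within-cluster sum-of-squares objective can be computed directly. This is a small illustrative sketch, not code from the deck:

```python
def within_cluster_ss(clusters):
    """Sum, over all clusters, of squared distances from each point
    to its own cluster's mean."""
    total = 0.0
    for cluster in clusters:
        # component-wise mean of the points in this cluster
        center = [sum(xs) / len(cluster) for xs in zip(*cluster)]
        total += sum(sum((x - c) ** 2 for x, c in zip(p, center))
                     for p in cluster)
    return total

# a tight clustering scores lower than a mixed-up one
good = [[(0, 0), (0, 1)], [(10, 10), (10, 11)]]
bad  = [[(0, 0), (10, 10)], [(0, 1), (10, 11)]]
print(within_cluster_ss(good) < within_cluster_ss(bad))  # True
```

k-means searches for an assignment of points to k clusters that makes this quantity small.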
  • 56. The Tools Way
        # a 2-dimensional example
        x <- rbind(matrix(rnorm(100, sd = 0.3), ncol = 2),
                   matrix(rnorm(100, mean = 1, sd = 0.3), ncol = 2))
        colnames(x) <- c("x", "y")
        (cl <- kmeans(x, 2))
        plot(x, col = cl$cluster)
        points(cl$centers, col = 1:2, pch = 8, cex = 2)
  • 57. The Tools Way
        >>> from sklearn import cluster, datasets
        >>> iris = datasets.load_iris()
        >>> X_iris = iris.data
        >>> y_iris = iris.target
        >>> k_means = cluster.KMeans(n_clusters=3)
        >>> k_means.fit(X_iris)
        KMeans(copy_x=True, init='k-means++', ...
        >>> print(k_means.labels_[::10])
        [1 1 1 1 1 0 0 0 0 0 2 2 2 2 2]
        >>> print(y_iris[::10])
        [0 0 0 0 0 1 1 1 1 1 2 2 2 2 2]
  • 58. So What To Do?
  • 60. Data Science from Scratch This is to certify that Joel Grus has honorably completed the course of study outlined in the book Data Science from Scratch: First Principles with Python, and is entitled to all the Rights, Privileges, and Honors thereunto appertaining. Joel Grus, June 23, 2015 Certificate Programs?
  • 62. Learning By Building You don't really understand something until you build it For example, I understand garbage disposals much better now that I had to replace one that was leaking water all over my kitchen More relevantly, I thought I understood hypothesis testing, until I tried to write a book chapter + code about it.
  • 64. Break Things Down Into Small Functions
  • 65. So you don't end up with something like this
  • 67. Example: k-means clustering Given a set of points, group them into k clusters in a way that minimizes the within-cluster sum-of-squares Global optimization is hard, so use a greedy iterative approach
  • 68. Fun Motivation: Image Posterization Image consists of pixels Each pixel is a triplet (R,G,B) Imagine pixels as points in space Find k clusters of pixels Recolor each pixel to its cluster mean I think it's fun, anyway 8 colors
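The posterization recipe on this slide can be sketched end to end in plain Python. This is my own minimal sketch, not code from the deck; a real image would supply the pixel list (e.g. loaded via PIL), and the tiny k-means here stands in for the one developed later in the talk:

```python
import random

def posterize(pixels, k, num_iters=10):
    """Recolor each (R, G, B) pixel to the mean of its k-means cluster."""
    random.seed(0)  # deterministic for the example
    means = random.sample(pixels, k)
    for _ in range(num_iters):
        # assign each pixel to the closest mean
        clusters = [[] for _ in means]
        for p in pixels:
            distances = [sum((a - b) ** 2 for a, b in zip(p, m))
                         for m in means]
            clusters[distances.index(min(distances))].append(p)
        # recompute each mean, keeping the old one if a cluster goes empty
        means = [tuple(sum(c) / len(cluster) for c in zip(*cluster))
                 if cluster else m
                 for cluster, m in zip(clusters, means)]
    # recolor: map every pixel to its cluster's mean
    def closest(p):
        return min(means, key=lambda m: sum((a - b) ** 2
                                            for a, b in zip(p, m)))
    return [closest(p) for p in pixels]

# a tiny "image": mostly-red and mostly-blue pixels
pixels = [(250, 10, 10), (240, 5, 20), (10, 10, 250), (20, 0, 240)]
posterized = posterize(pixels, k=2)
print(len(set(posterized)))  # at most k = 2 distinct colors remain
```

Every output pixel is one of the k cluster means, which is exactly what produces the posterized, flat-color look.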
  • 69. Example: k-means clustering given some points, find k clusters by choose k "means" repeat: assign each point to cluster of closest "mean" recompute mean of each cluster sounds simple! let's code!
  • 70. def k_means(points, k, num_iters=10):
            means = list(random.sample(points, k))
            assignments = [None for _ in points]
            for _ in range(num_iters):
                # assign each point to closest mean
                for i, point_i in enumerate(points):
                    d_min = float('inf')
                    for j, mean_j in enumerate(means):
                        d = sum((x - y)**2 for x, y in zip(point_i, mean_j))
                        if d < d_min:
                            d_min = d
                            assignments[i] = j
                # recompute means
                for j in range(k):
                    cluster = [point for i, point in enumerate(points)
                               if assignments[i] == j]
                    means[j] = mean(cluster)
            return means
  • 71. def k_means(points, k, num_iters=10): means = list(random.sample(points, k)) assignments = [None for _ in points] for _ in range(num_iters): # assign each point to closest mean for i, point_i in enumerate(points): d_min = float('inf') for j, mean_j in enumerate(means): d = sum((x - y)**2 for x, y in zip(point_i, mean_j)) if d < d_min: d_min = d assignments[i] = j # recompute means for j in range(k): cluster = [point for i, point in enumerate(points) if assignments[i] == j] means[j] = mean(cluster) return means start with k randomly chosen points
  • 72. def k_means(points, k, num_iters=10): means = list(random.sample(points, k)) assignments = [None for _ in points] for _ in range(num_iters): # assign each point to closest mean for i, point_i in enumerate(points): d_min = float('inf') for j, mean_j in enumerate(means): d = sum((x - y)**2 for x, y in zip(point_i, mean_j)) if d < d_min: d_min = d assignments[i] = j # recompute means for j in range(k): cluster = [point for i, point in enumerate(points) if assignments[i] == j] means[j] = mean(cluster) return means start with k randomly chosen points start with no cluster assignments
  • 73. def k_means(points, k, num_iters=10): means = list(random.sample(points, k)) assignments = [None for _ in points] for _ in range(num_iters): # assign each point to closest mean for i, point_i in enumerate(points): d_min = float('inf') for j, mean_j in enumerate(means): d = sum((x - y)**2 for x, y in zip(point_i, mean_j)) if d < d_min: d_min = d assignments[i] = j # recompute means for j in range(k): cluster = [point for i, point in enumerate(points) if assignments[i] == j] means[j] = mean(cluster) return means start with k randomly chosen points start with no cluster assignments for each iteration
  • 74. def k_means(points, k, num_iters=10): means = list(random.sample(points, k)) assignments = [None for _ in points] for _ in range(num_iters): # assign each point to closest mean for i, point_i in enumerate(points): d_min = float('inf') for j, mean_j in enumerate(means): d = sum((x - y)**2 for x, y in zip(point_i, mean_j)) if d < d_min: d_min = d assignments[i] = j # recompute means for j in range(k): cluster = [point for i, point in enumerate(points) if assignments[i] == j] means[j] = mean(cluster) return means start with k randomly chosen points start with no cluster assignments for each iteration for each point
  • 75. def k_means(points, k, num_iters=10): means = list(random.sample(points, k)) assignments = [None for _ in points] for _ in range(num_iters): # assign each point to closest mean for i, point_i in enumerate(points): d_min = float('inf') for j, mean_j in enumerate(means): d = sum((x - y)**2 for x, y in zip(point_i, mean_j)) if d < d_min: d_min = d assignments[i] = j # recompute means for j in range(k): cluster = [point for i, point in enumerate(points) if assignments[i] == j] means[j] = mean(cluster) return means start with k randomly chosen points start with no cluster assignments for each iteration for each point for each mean
  • 76. def k_means(points, k, num_iters=10): means = list(random.sample(points, k)) assignments = [None for _ in points] for _ in range(num_iters): # assign each point to closest mean for i, point_i in enumerate(points): d_min = float('inf') for j, mean_j in enumerate(means): d = sum((x - y)**2 for x, y in zip(point_i, mean_j)) if d < d_min: d_min = d assignments[i] = j # recompute means for j in range(k): cluster = [point for i, point in enumerate(points) if assignments[i] == j] means[j] = mean(cluster) return means start with k randomly chosen points start with no cluster assignments for each iteration for each point for each mean compute the distance
  • 77. def k_means(points, k, num_iters=10): means = list(random.sample(points, k)) assignments = [None for _ in points] for _ in range(num_iters): # assign each point to closest mean for i, point_i in enumerate(points): d_min = float('inf') for j, mean_j in enumerate(means): d = sum((x - y)**2 for x, y in zip(point_i, mean_j)) if d < d_min: d_min = d assignments[i] = j # recompute means for j in range(k): cluster = [point for i, point in enumerate(points) if assignments[i] == j] means[j] = mean(cluster) return means start with k randomly chosen points start with no cluster assignments for each iteration for each point for each mean compute the distance assign the point to the cluster of the mean with the smallest distance
  • 78. def k_means(points, k, num_iters=10): means = list(random.sample(points, k)) assignments = [None for _ in points] for _ in range(num_iters): # assign each point to closest mean for i, point_i in enumerate(points): d_min = float('inf') for j, mean_j in enumerate(means): d = sum((x - y)**2 for x, y in zip(point_i, mean_j)) if d < d_min: d_min = d assignments[i] = j # recompute means for j in range(k): cluster = [point for i, point in enumerate(points) if assignments[i] == j] means[j] = mean(cluster) return means start with k randomly chosen points start with no cluster assignments for each iteration for each point for each mean compute the distance assign the point to the cluster of the mean with the smallest distance find the points in each cluster
  • 79. def k_means(points, k, num_iters=10): means = list(random.sample(points, k)) assignments = [None for _ in points] for _ in range(num_iters): # assign each point to closest mean for i, point_i in enumerate(points): d_min = float('inf') for j, mean_j in enumerate(means): d = sum((x - y)**2 for x, y in zip(point_i, mean_j)) if d < d_min: d_min = d assignments[i] = j # recompute means for j in range(k): cluster = [point for i, point in enumerate(points) if assignments[i] == j] means[j] = mean(cluster) return means start with k randomly chosen points start with no cluster assignments for each iteration for each point for each mean compute the distance assign the point to the cluster of the mean with the smallest distance find the points in each cluster and compute the new means
  • 80. def k_means(points, k, num_iters=10): means = list(random.sample(points, k)) assignments = [None for _ in points] for _ in range(num_iters): # assign each point to closest mean for i, point_i in enumerate(points): d_min = float('inf') for j, mean_j in enumerate(means): d = sum((x - y)**2 for x, y in zip(point_i, mean_j)) if d < d_min: d_min = d assignments[i] = j # recompute means for j in range(k): cluster = [point for i, point in enumerate(points) if assignments[i] == j] means[j] = mean(cluster) return means Not impenetrable, but a lot less helpful than it could be
  • 81. def k_means(points, k, num_iters=10): means = list(random.sample(points, k)) assignments = [None for _ in points] for _ in range(num_iters): # assign each point to closest mean for i, point_i in enumerate(points): d_min = float('inf') for j, mean_j in enumerate(means): d = sum((x - y)**2 for x, y in zip(point_i, mean_j)) if d < d_min: d_min = d assignments[i] = j # recompute means for j in range(k): cluster = [point for i, point in enumerate(points) if assignments[i] == j] means[j] = mean(cluster) return means Not impenetrable, but a lot less helpful than it could be Can we make it simpler?
  • 82. Break Things Down Into Small Functions
  • 83. def k_means(points, k, num_iters=10):
            # start with k of the points as "means"
            means = random.sample(points, k)
            # and iterate finding new means
            for _ in range(num_iters):
                means = new_means(points, means)
            return means
  • 84. def new_means(points, means):
            # assign points to clusters
            # each cluster is just a list of points
            clusters = assign_clusters(points, means)
            # return the cluster means
            return [mean(cluster) for cluster in clusters]
  • 85. def assign_clusters(points, means):
            # one cluster for each mean
            # each cluster starts empty
            clusters = [[] for _ in means]
            # assign each point to cluster
            # corresponding to closest mean
            for point in points:
                index = closest_index(point, means)
                clusters[index].append(point)
            return clusters
  • 86. def closest_index(point, means):
            # return index of closest mean
            return argmin(distance(point, mean) for mean in means)

        def argmin(xs):
            # return index of smallest element
            return min(enumerate(xs), key=lambda pair: pair[1])[0]
  • 87. To Recap
        Before: k_means(points, k, num_iters=10), mean(points)
        After:  k_means(points, k, num_iters=10)
                  new_means(points, means)
                    assign_clusters(points, means)
                      closest_index(point, means)
                        argmin(xs)
                        distance(point1, point2)
                    mean(points)
                      add(point1, point2)
                      scalar_multiply(c, point)
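The recap slide names a few lower-level helpers (distance, mean, add, scalar_multiply) without showing them. Below is one self-contained sketch of the whole refactored pipeline; the helper definitions are my assumptions about the book's versions, not copied from the deck, and new_means gets a small guard so an empty cluster keeps its old mean:

```python
import math
import random
from functools import reduce

# vector helpers (sketched, in the book's from-scratch spirit)
def add(point1, point2):
    return [x + y for x, y in zip(point1, point2)]

def scalar_multiply(c, point):
    return [c * x for x in point]

def mean(points):
    return scalar_multiply(1 / len(points), reduce(add, points))

def distance(point1, point2):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(point1, point2)))

# the refactored functions from the slides
def argmin(xs):
    return min(enumerate(xs), key=lambda pair: pair[1])[0]

def closest_index(point, means):
    return argmin(distance(point, m) for m in means)

def assign_clusters(points, means):
    clusters = [[] for _ in means]
    for point in points:
        clusters[closest_index(point, means)].append(point)
    return clusters

def new_means(points, means):
    clusters = assign_clusters(points, means)
    # tweak vs. the slides: keep the old mean if a cluster goes empty
    return [mean(c) if c else m for c, m in zip(clusters, means)]

def k_means(points, k, num_iters=10):
    means = random.sample(points, k)
    for _ in range(num_iters):
        means = new_means(points, means)
    return means

random.seed(0)
points = [[0.0, 0.0], [0.1, 0.2], [9.9, 10.0], [10.2, 9.8]]
means = sorted(k_means(points, k=2))
print(means)  # one center near (0.05, 0.1), one near (10.05, 9.9)
```

Each function now does one small, testable thing, which is the point of the refactor.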
  • 88. As a Pedagogical Tool Can be used "top down" (as we did here) Implement high-level logic Then implement the details Nice for exposition Can also be used "bottom up" Implement small pieces Build up to high-level logic Good for workshops
  • 89. Example: Decision Trees
        Want to predict whether a given Meetup is worth attending (True) or not (False)
        Inputs are dictionaries describing each Meetup
        { "group"    : "DAML",
          "date"     : "2015-06-23",
          "beer"     : "free",
          "food"     : "dim sum",
          "speaker"  : "@joelgrus",
          "location" : "Google",
          "topic"    : "shameless self-promotion" }
        { "group"    : "Seattle Atheists",
          "date"     : "2015-06-23",
          "location" : "Round the Table",
          "beer"     : "none",
          "food"     : "none",
          "topic"    : "Godless Game Night" }
  • 90. Example: Decision Trees
        [diagram: the two example inputs feed a decision tree that splits first on
        "beer" (free → True, none → False), with the "paid" branch splitting again
        on "speaker" (@jakevdp / @joelgrus)]
  • 91. Example: Decision Trees
        class LeafNode:
            def __init__(self, prediction):
                self.prediction = prediction
            def predict(self, input_dict):
                return self.prediction

        class DecisionNode:
            def __init__(self, attribute, subtree_dict):
                self.attribute = attribute
                self.subtree_dict = subtree_dict
            def predict(self, input_dict):
                value = input_dict.get(self.attribute)
                subtree = self.subtree_dict[value]
                return subtree.predict(input_dict)
  • 92. Example: Decision Trees
        Again inspiration from functional programming:
        type Input = Map.Map String String
        data Tree = Predict Bool
                  | Subtrees String (Map.Map String Tree)
        "Predict" always predicts a specific value; "Subtrees" looks at one attribute
        (e.g. the "beer" entry) and maps each possible value to a subtree
  • 93. Example: Decision Trees
        type Input = Map.Map String String
        data Tree = Predict Bool
                  | Subtrees String (Map.Map String Tree)

        predict :: Tree -> Input -> Bool
        predict (Predict b) _ = b
        predict (Subtrees a subtrees) input = predict subtree input
          where subtree = subtrees Map.! (input Map.! a)
  • 94. Example: Decision Trees
        type Input = Map.Map String String
        data Tree = Predict Bool
                  | Subtrees String (Map.Map String Tree)
        We can do the same; we'll say a decision tree is either
          True
          False
          (attribute, subtree_dict)
        ("beer", { "free" : True,
                   "none" : False,
                   "paid" : ("speaker", {...}) })
  • 95. Example: Decision Trees
        predict :: Tree -> Input -> Bool
        predict (Predict b) _ = b
        predict (Subtrees a subtrees) input = predict subtree input
          where subtree = subtrees Map.! (input Map.! a)

        def predict(tree, input_dict):
            # leaf node predicts itself
            if tree in (True, False):
                return tree
            else:
                # destructure tree
                attribute, subtree_dict = tree
                # find appropriate subtree
                value = input_dict[attribute]
                subtree = subtree_dict[value]
                # classify using subtree
                return predict(subtree, input_dict)
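To exercise the tuple-based version, here is a compact form of predict together with the example tree from the slides; the outcomes on the "paid"/"speaker" branch are invented for illustration, since the slide elides them:

```python
def predict(tree, input_dict):
    # a leaf node (True or False) predicts itself
    if tree in (True, False):
        return tree
    # otherwise destructure, pick the subtree for this input, and recurse
    attribute, subtree_dict = tree
    return predict(subtree_dict[input_dict[attribute]], input_dict)

# the tree from the slides; the "paid" branch's leaves are hypothetical
tree = ("beer", {
    "free": True,
    "none": False,
    "paid": ("speaker", {"@jakevdp": True, "@joelgrus": True}),
})

print(predict(tree, {"beer": "free"}))                          # True
print(predict(tree, {"beer": "none"}))                          # False
print(predict(tree, {"beer": "paid", "speaker": "@joelgrus"}))  # True
```

Because the tree is just nested tuples and dicts, predict is a three-line recursion, mirroring the Haskell pattern match above.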
  • 96. Not Just For Data Science
  • 97. In Conclusion Teaching data science is fun, if you're smart about it Learning data science is fun, if you're smart about it Writing a book is not that much fun Having written a book is pretty fun Making slides is actually kind of fun Functional programming is a lot of fun

Editor's Notes

  1. hedge fund jerks
  2. sql jockeys
  3. I can do some of these
  24. typed in "data science" into LinkedIn Jobs
  26. for those of us without PhDs
  27. https://www.flickr.com/photos/arlophoto/5616233274
  29. Norvig