Using Spark with MongoDB to Build "Big Data” Analytics Applications
Agile Data Science 2.0
http://bit.ly/agile_data_slides_mongo
Agile Data Science 2.0
Russell Jurney
3
Skills: Data Engineer 85%, Data Scientist 85%, Visualization Software Engineer 85%, Writer 85%, Teacher 50%
Russell Jurney is a veteran data
scientist and thought leader. He
coined the term Agile Data Science in
the book of that name from O’Reilly
in 2012, which outlines the first agile
development methodology for data
science. Russell has constructed
numerous full-stack analytics
products over the past ten years and
now works with clients helping them
extract value from their data assets.
Russell Jurney
Principal Consultant at Data Syndrome
Russell Jurney
Data Syndrome, LLC
Email : russell.jurney@gmail.com
Web : datasyndrome.com
Principal Consultant
Product Consulting
We build analytics products and systems
consisting of big data viz, predictions,
recommendations, reports and search.
Corporate Training
We offer training courses for data scientists, engineers, and data science teams.
Video Training
We offer video training courses that rapidly
acclimate you with a technology and
technique.
Agile Data Science 2.0 5
What makes data science “agile data science”?
Theory
Agile Data Science 2.0
Agile Manifesto
6
Four principles for software engineering
Agile Data Science 2.0 7
Yes. Building applications is a fundamental skill for today’s data scientist.
Data Products or Data Science?
Agile Data Science 2.0 8
Data Products
or
Data Science?
Agile Data Science 2.0 9
If someone else has to start over and rebuild it, it ain’t agile.
Big Data or Data Science?
Agile Data Science 2.0 10
Goal of
Methodology
The goal of agile data science in <140 characters: to
document and guide exploratory data analysis to
discover and follow the critical path to a compelling
product.
Agile Data Science 2.0
Agile Data Science Manifesto
11
Seven Principles for Agile Data Science
1. Iterate, iterate, iterate: tables, charts, reports, predictions
2. Ship intermediate output. Even failed experiments have output
3. Prototype experiments over implementing tasks
4. Integrate the tyrannical opinion of data in product management
5. Climb up and down the data-value pyramid as we work
6. Discover and pursue the critical path to a killer product
7. Get Meta. Describe the process, not just the end-state
Agile Data Science 2.0
Iterate, iterate, iterate: tables, charts, reports, predictions
12
Seven Principles for Agile Data Science
Agile Data Science 2.0
Ship intermediate output. Even failed experiments have output.
13
Seven Principles for Agile Data Science
Agile Data Science 2.0
Prototype experiments over implementing tasks
14
Seven Principles for Agile Data Science
Agile Data Science 2.0
Climb up and down the data-value pyramid
15
Seven Principles for Agile Data Science
People will pay more for the things toward the top, but you need the things
on the bottom to get the things above. They are foundational. See:
Maslow's hierarchy of needs.
Data Value Pyramid
Agile Data Science 2.0
Discover and pursue the critical path to a killer product
16
Seven Principles for Agile Data Science
In analytics, the end-goal moves or is complex in nature, so we model as a
network of tasks rather than as a strictly linear process.
Critical Path
Agile Data Science 2.0
Get Meta. Describe the process, not just the end state
17
Seven Principles for Agile Data Science
Agile Data Science 2.0
Doing Agile Wrong
18
Iterating between sprints, rather than within them, is a bad idea.
Agile Data Science 2.0
Data Science Teams
19
The roles in a data science team are many, necessitating generalists
Agile Data Science 2.0
Data Science Releases
20
Snowballing releases towards a broader audience
Agile Data Science 2.0 21
Decorate your environment by visualizing your dataset!
Collage
Agile Data Science 2.0 22
Things we use to build the apps
Tools
Agile Data Science 2.0
Agile Data Science 2.0 Stack
23
Example of a high productivity stack for “big” data applications
Apache Spark: Batch and Realtime
Apache Kafka: Realtime Queue
MongoDB: Document Store
Flask: Simple Web App
ElasticSearch: Search
d3.js: Visualization
Agile Data Science 2.0
Flow of Data Processing
24
Tools and processes in collecting, refining, publishing and decorating data
{“hello”: “world”}
Agile Data Science 2.0
Architecture of Agile Data Science
25
How the system comes together
Data Syndrome: Agile Data Science 2.0
Apache Spark Ecosystem
26
HDFS, Amazon S3, Spark, Spark SQL, Spark MLlib, Spark Streaming
Agile Data Science 2.0 27
SQL or dataflow programming?
Programming Models
Agile Data Science 2.0 28
Describing what you want and letting the planner figure out how
SQL
SELECT associations2.object_id, associations2.term_id,
       associations2.cat_ID, associations2.term_taxonomy_id
FROM (
  SELECT objects_tags.object_id, objects_tags.term_id,
         wp_cb_tags2cats.cat_ID, categories.term_taxonomy_id
  FROM (
    SELECT wp_term_relationships.object_id,
           wp_term_taxonomy.term_id,
           wp_term_taxonomy.term_taxonomy_id
    FROM wp_term_relationships
    LEFT JOIN wp_term_taxonomy
      ON wp_term_relationships.term_taxonomy_id = wp_term_taxonomy.term_taxonomy_id
    ORDER BY object_id ASC, term_id ASC
  ) AS objects_tags
  LEFT JOIN wp_cb_tags2cats
    ON objects_tags.term_id = wp_cb_tags2cats.tag_ID
  LEFT JOIN (
    SELECT wp_term_relationships.object_id,
           wp_term_taxonomy.term_id AS cat_ID,
           wp_term_taxonomy.term_taxonomy_id
    FROM wp_term_relationships
    LEFT JOIN wp_term_taxonomy
      ON wp_term_relationships.term_taxonomy_id = wp_term_taxonomy.term_taxonomy_id
    WHERE wp_term_taxonomy.taxonomy = 'category'
    GROUP BY object_id, cat_ID, term_taxonomy_id
    ORDER BY object_id, cat_ID, term_taxonomy_id
  ) AS categories
    ON wp_cb_tags2cats.cat_ID = categories.term_id
  WHERE objects_tags.term_id = wp_cb_tags2cats.tag_ID
  GROUP BY object_id, term_id, cat_ID, term_taxonomy_id
  ORDER BY object_id ASC, term_id ASC, cat_ID ASC
) AS associations2
LEFT JOIN categories
  ON associations2.object_id = categories.object_id
WHERE associations2.cat_ID <> categories.cat_ID
GROUP BY object_id, term_id, cat_ID, term_taxonomy_id
ORDER BY object_id, term_id, cat_ID, term_taxonomy_id
Agile Data Science 2.0 29
Flowing data through operations to effect change
Dataflow Programming
Agile Data Science 2.0 30
The best of both worlds!
SQL AND
Dataflow
Programming
# Flights that were late arriving...
late_arrivals = on_time_dataframe.filter(on_time_dataframe.ArrDelayMinutes > 0)
total_late_arrivals = late_arrivals.count()

# Flights that left late but made up time to arrive on time...
on_time_heros = on_time_dataframe.filter(
  (on_time_dataframe.DepDelayMinutes > 0)
  &
  (on_time_dataframe.ArrDelayMinutes <= 0)
)
total_on_time_heros = on_time_heros.count()

# Get the percentage of flights that are late, rounded to 1 decimal place
pct_late = round((total_late_arrivals / (total_flights * 1.0)) * 100, 1)

print("Total flights: {:,}".format(total_flights))
print("Late departures: {:,}".format(total_late_departures))
print("Late arrivals: {:,}".format(total_late_arrivals))
print("Recoveries: {:,}".format(total_on_time_heros))
print("Percentage Late: {}%".format(pct_late))

# Why are flights late? Let's look at some delayed flights and the delay causes
late_flights = spark.sql("""
SELECT
  ArrDelayMinutes,
  WeatherDelay,
  CarrierDelay,
  NASDelay,
  SecurityDelay,
  LateAircraftDelay
FROM
  on_time_performance
WHERE
  WeatherDelay IS NOT NULL
  OR CarrierDelay IS NOT NULL
  OR NASDelay IS NOT NULL
  OR SecurityDelay IS NOT NULL
  OR LateAircraftDelay IS NOT NULL
ORDER BY
  FlightDate
""")
late_flights.sample(False, 0.01).show()

# Calculate the percentage contribution to delay for each source
total_delays = spark.sql("""
SELECT
  ROUND(SUM(WeatherDelay)/SUM(ArrDelayMinutes) * 100, 1) AS pct_weather_delay,
  ROUND(SUM(CarrierDelay)/SUM(ArrDelayMinutes) * 100, 1) AS pct_carrier_delay,
  ROUND(SUM(NASDelay)/SUM(ArrDelayMinutes) * 100, 1) AS pct_nas_delay,
  ROUND(SUM(SecurityDelay)/SUM(ArrDelayMinutes) * 100, 1) AS pct_security_delay,
  ROUND(SUM(LateAircraftDelay)/SUM(ArrDelayMinutes) * 100, 1) AS pct_late_aircraft_delay
FROM on_time_performance
""")
total_delays.show()

# Generate a histogram of the weather and carrier delays
weather_delay_histogram = on_time_dataframe \
  .select("WeatherDelay") \
  .rdd \
  .flatMap(lambda x: x) \
  .histogram(10)
print("{}\n{}".format(weather_delay_histogram[0], weather_delay_histogram[1]))

# Eyeball the first to define our buckets
weather_delay_histogram = on_time_dataframe \
  .select("WeatherDelay") \
  .rdd \
  .flatMap(lambda x: x) \
  .histogram([1, 15, 30, 60, 120, 240, 480, 720, 24*60.0])
print(weather_delay_histogram)

# Transform the data into something easily consumed by d3
record = {'key': 1, 'data': []}
for label, count in zip(weather_delay_histogram[0], weather_delay_histogram[1]):
  record['data'].append(
    {
      'label': label,
      'count': count
    }
  )

# Save to Mongo directly, since this is a dict, not a DataFrame or RDD
from pymongo import MongoClient
client = MongoClient()
client.relato.weather_delay_histogram.insert_one(record)
Agile Data Science 2.0 31
Batch processing still dominant for charts, reports, search indexes and
training predictive models
Apache Airflow
for
Batch Scheduling
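To make the batch scheduling concrete, here is a minimal sketch of an Airflow DAG that runs a nightly PySpark job with spark-submit. The DAG id, schedule, script name and paths are illustrative assumptions; the actual DAG used to train the model appears later in the deck.

from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.bash_operator import BashOperator

default_args = {
  'owner': 'agile_data_science',
  'start_date': datetime(2016, 12, 1),
  'retries': 1,
  'retry_delay': timedelta(minutes=5),
}

# Hypothetical nightly DAG; the real training DAG is shown in the deployment section
nightly_dag = DAG(
  'agile_data_science_nightly_batch',
  default_args=default_args,
  schedule_interval='@daily'
)

# Run one PySpark batch job via spark-submit; filename and base_path are placeholders
run_batch_job = BashOperator(
  task_id='pyspark_nightly_batch',
  bash_command='spark-submit --master local[8] '
               '{{ params.base_path }}/{{ params.filename }} {{ params.base_path }}',
  params={'base_path': '/path/to/project', 'filename': 'ch02/batch_job.py'},
  dag=nightly_dag
)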
Agile Data Science 2.0 32
FAA on-time performance data
Data
Data Syndrome: Agile Data Science 2.0
Collect and Serialize Events in JSON
I never regret using JSON
33
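A minimal sketch of collecting and serializing events as JSON Lines in Python; the event fields and the file path are illustrative assumptions, not taken from the slides.

import json
from datetime import datetime, timezone

def serialize_event(event, path="data/events.jsonl"):
  """Append one event per line as JSON ('JSON Lines')."""
  event["timestamp"] = datetime.now(timezone.utc).isoformat()
  with open(path, "a") as f:
    f.write(json.dumps(event) + "\n")

serialize_event({"hello": "world"})
serialize_event({"Carrier": "DL", "FlightNum": "2010", "Origin": "ATL"})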
Data Syndrome: Agile Data Science 2.0
FAA On-Time Performance Records
95% of commercial flights
34
http://www.transtats.bts.gov/Fields.asp?table_id=236
Data Syndrome: Agile Data Science 2.0
FAA On-Time Performance Records
95% of commercial flights
35
"Year","Quarter","Month","DayofMonth","DayOfWeek","FlightDate","UniqueCarrier","AirlineID","Carrier","TailNum","FlightNum",
"OriginAirportID","OriginAirportSeqID","OriginCityMarketID","Origin","OriginCityName","OriginState","OriginStateFips",
"OriginStateName","OriginWac","DestAirportID","DestAirportSeqID","DestCityMarketID","Dest","DestCityName","DestState",
"DestStateFips","DestStateName","DestWac","CRSDepTime","DepTime","DepDelay","DepDelayMinutes","DepDel15","DepartureDelayGroups",
"DepTimeBlk","TaxiOut","WheelsOff","WheelsOn","TaxiIn","CRSArrTime","ArrTime","ArrDelay","ArrDelayMinutes","ArrDel15",
"ArrivalDelayGroups","ArrTimeBlk","Cancelled","CancellationCode","Diverted","CRSElapsedTime","ActualElapsedTime","AirTime",
"Flights","Distance","DistanceGroup","CarrierDelay","WeatherDelay","NASDelay","SecurityDelay","LateAircraftDelay",
"FirstDepTime","TotalAddGTime","LongestAddGTime","DivAirportLandings","DivReachedDest","DivActualElapsedTime","DivArrDelay",
"DivDistance","Div1Airport","Div1AirportID","Div1AirportSeqID","Div1WheelsOn","Div1TotalGTime","Div1LongestGTime",
"Div1WheelsOff","Div1TailNum","Div2Airport","Div2AirportID","Div2AirportSeqID","Div2WheelsOn","Div2TotalGTime",
"Div2LongestGTime","Div2WheelsOff","Div2TailNum","Div3Airport","Div3AirportID","Div3AirportSeqID","Div3WheelsOn",
"Div3TotalGTime","Div3LongestGTime","Div3WheelsOff","Div3TailNum","Div4Airport","Div4AirportID","Div4AirportSeqID",
"Div4WheelsOn","Div4TotalGTime","Div4LongestGTime","Div4WheelsOff","Div4TailNum","Div5Airport","Div5AirportID",
"Div5AirportSeqID","Div5WheelsOn","Div5TotalGTime","Div5LongestGTime","Div5WheelsOff","Div5TailNum"
Data Syndrome: Agile Data Science 2.0
openflights.org Database
Airports, Airlines, Routes
36
Data Syndrome: Agile Data Science 2.0
Scraping the FAA Registry
Airplane Data by Tail Number
37
Data Syndrome: Agile Data Science 2.0
Wikipedia Airlines Entries
Descriptions of Airlines
38
Data Syndrome: Agile Data Science 2.0
National Centers for Environmental Information
Historical Weather Observations
39
Agile Data Science 2.0 40
How to store it and how to fetch it…
Data Structure
and
Access Patterns
Data Syndrome: Agile Data Science 2.0
First Order Form
Storing groups of documents in key/value stores
41
Key/Value and some Document Stores
Partition Tolerance
No Single Master
Data Syndrome: Agile Data Science 2.0
First Order Form
Storing groups of documents in key/value stores
42
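A sketch of first order form in practice: a whole group of related documents stored under one key and fetched back only by that key. Redis and the key name are assumptions chosen for illustration.

import json
import redis  # assumed key/value store client, for illustration only

r = redis.Redis()

# Group all of one tail number's flight documents under a single key
flights_for_tail = [
  {"TailNum": "N16954", "FlightDate": "2015-12-16", "ArrDelayMinutes": 5.0},
  {"TailNum": "N16954", "FlightDate": "2015-12-17", "ArrDelayMinutes": 0.0},
]
r.set("flights:N16954", json.dumps(flights_for_tail))

# The only access pattern: fetch the whole group back by its key
group = json.loads(r.get("flights:N16954"))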
Data Syndrome: Agile Data Science 2.0
Second Order Form
Using column stores and compound keys to enable range scans
43
Range Scans for Fun and Profit
One or more Masters
Data Syndrome: Agile Data Science 2.0
Second Order Form
Using column stores and compound keys to enable range scans
44
# Achieve many access patterns via key composition:
Requirement: SELECT FLIGHTS WHERE FlightDate < X AND FlightDate > Y
Compose Key: TailNum + FlightDate, ex. N16954-2015-12-16
Range Scan: FROM: N16954-2015-12-01
TO: N16954-2016-01-01
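A sketch of what that range scan looks like against a real column store; HBase via the happybase client is an assumption for illustration, and the table and column names are hypothetical.

import happybase  # assumed HBase client, for illustration only

connection = happybase.Connection('localhost')
table = connection.table('flights')  # hypothetical table keyed by TailNum + FlightDate

# Write a flight under its composed key
table.put(b'N16954-2015-12-16', {b'f:ArrDelayMinutes': b'5.0'})

# Range scan: all December 2015 flights for tail number N16954
for key, data in table.scan(row_start=b'N16954-2015-12-01',
                            row_stop=b'N16954-2016-01-01'):
  print(key, data)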
Data Syndrome: Agile Data Science 2.0
Third Order Form
Using databases with B-Trees to query and analyze raw records
45
SELECT COUNT(*), SUM(…)
FROM …
WHERE …
GROUP BY …
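As a hedged example, third order form means arbitrary ad hoc aggregation directly against the raw flight records; the query below is illustrative and uses the on_time_performance table registered elsewhere in the deck.

carrier_delays = spark.sql("""
  SELECT Carrier, COUNT(*) AS total_flights, SUM(ArrDelayMinutes) AS total_delay_minutes
  FROM on_time_performance
  WHERE FlightDate BETWEEN '2015-12-01' AND '2015-12-31'
  GROUP BY Carrier
  ORDER BY total_delay_minutes DESC
""")
carrier_delays.show()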
Agile Data Science 2.0 46
Using the two systems together for fun and profit!
PySpark and MongoDB
Data Syndrome: Agile Data Science 2.0
Working with Spark and MongoDB
The data model for data processing with Spark and MongoDB
47
Simple Web
Data Syndrome: Agile Data Science 2.0
Working with Spark and MongoDB
The data model for data processing with Spark and MongoDB
48
Process Data
Publish Data
Process and Publish Data Model
Data Syndrome: Agile Data Science 2.0
Publishing Data to MongoDB
Storing data from PySpark to MongoDB
49
# PyMongo Spark Setup
import pymongo
import pymongo_spark
pymongo_spark.activate()

# Loading our data
csv_lines = sc.textFile("data/example.csv")

# Processing our data
data = csv_lines.map(lambda line: line.split(","))
schema_data = data.map(lambda x: {'name': x[0], 'company': x[1], 'title': x[2]})

# Publishing our data to MongoDB
schema_data.saveToMongoDB('mongodb://localhost:27017/agile_data_science.executives')
Data Syndrome: Agile Data Science 2.0
Verifying Data in MongoDB
Storing data from PySpark to MongoDB
50
> db.executives.find()
{ "_id" : ObjectId("5894fce281d0edd9a2016c76"), "name" : "Russell Jurney", "company" : "Relato", "title" : "CEO" }
{ "_id" : ObjectId("5894fce281d0edd9a2016c77"), "name" : "Florian Liebert", "company" : "Mesosphere", "title" : "CEO" }
{ "_id" : ObjectId("5894fce281d0edd9a2016c75"), "name" : "Don Brown", "company" : "Rocana", "title" : "CIO" }
Data Syndrome: Agile Data Science 2.0
Loading Data from MongoDB
Loading data from MongoDB to PySpark
51
# PyMongo Spark Setup
import pymongo
import pymongo_spark
pymongo_spark.activate()

# Loading our data from MongoDB
schema_data = sc.mongoRDD('mongodb://localhost:27017/agile_data_science.executives')

# Processing data
names = schema_data.map(lambda record: record["name"])
names.collect()
Agile Data Science 2.0 52
Working our way up the data value pyramid
Climbing the Stack
Agile Data Science 2.0 53
Starting by “plumbing” the system from end to end
Plumbing
Data Syndrome: Agile Data Science 2.0
Publishing Flight Records
Plumbing our master records through to the web
54
Simple Web
Data Syndrome: Agile Data Science 2.0
Publishing Flight Records to MongoDB
Plumbing our master records through to the web
55
import pymongo
import pymongo_spark

# Important: activate pymongo_spark.
pymongo_spark.activate()

# Load the parquet file
on_time_dataframe = spark.read.parquet('data/on_time_performance.parquet')

# Convert to RDD of dicts and save to MongoDB
as_dict = on_time_dataframe.rdd.map(lambda row: row.asDict())
as_dict.saveToMongoDB('mongodb://localhost:27017/agile_data_science.on_time_performance')
Data Syndrome: Agile Data Science 2.0
Publishing Flight Records to ElasticSearch
Plumbing our master records through to the web
56
# Load the parquet file
on_time_dataframe = spark.read.parquet('data/on_time_performance.parquet')

# Save the DataFrame to Elasticsearch
on_time_dataframe.write.format("org.elasticsearch.spark.sql") \
  .option("es.resource", "agile_data_science/on_time_performance") \
  .option("es.batch.size.entries", "100") \
  .mode("overwrite") \
  .save()
Data Syndrome: Agile Data Science 2.0
Putting Records on the Web
Plumbing our master records through to the web
57
from flask import Flask, render_template, request
from pymongo import MongoClient
from bson import json_util

# Set up Flask and Mongo
app = Flask(__name__)
client = MongoClient()

# Controller: Fetch a flight record and display it
@app.route("/on_time_performance")
def on_time_performance():
  carrier = request.args.get('Carrier')
  flight_date = request.args.get('FlightDate')
  flight_num = request.args.get('FlightNum')

  flight = client.agile_data_science.on_time_performance.find_one({
    'Carrier': carrier,
    'FlightDate': flight_date,
    'FlightNum': int(flight_num)
  })

  return json_util.dumps(flight)

if __name__ == "__main__":
  app.run(debug=True)
Data Syndrome: Agile Data Science 2.0
Putting Records on the Web
Plumbing our master records through to the web
58
Data Syndrome: Agile Data Science 2.0
Putting Records on the Web
Plumbing our master records through to the web
59
Agile Data Science 2.0 60
Getting to know your data
Tables and Charts
Data Syndrome: Agile Data Science 2.0
Tables in PySpark
Back end development in PySpark
61
# Load the parquet file
on_time_dataframe = spark.read.parquet('data/on_time_performance.parquet')

# Use SQL to look at the total flights by month across 2015
on_time_dataframe.registerTempTable("on_time_dataframe")
total_flights_by_month = spark.sql(
  """SELECT Month, Year, COUNT(*) AS total_flights
  FROM on_time_dataframe
  GROUP BY Year, Month
  ORDER BY Year, Month"""
)

# This map/asDict trick makes the rows print a little prettier. It is optional.
flights_chart_data = total_flights_by_month.rdd.map(lambda row: row.asDict())
flights_chart_data.collect()

# Save chart to MongoDB
import pymongo_spark
pymongo_spark.activate()
flights_chart_data.saveToMongoDB(
  'mongodb://localhost:27017/agile_data_science.flights_by_month'
)
Data Syndrome: Agile Data Science 2.0
Tables in Flask and Jinja2
Front end development in Flask: controller and template
62
# Controller: Fetch a flight table
@app.route("/total_flights")
def total_flights():
  total_flights = client.agile_data_science.flights_by_month.find({},
    sort = [
      ('Year', 1),
      ('Month', 1)
    ])
  return render_template('total_flights.html', total_flights=total_flights)

{% extends "layout.html" %}
{% block body %}
<div>
  <p class="lead">Total Flights by Month</p>
  <table class="table table-condensed table-striped" style="width: 200px;">
    <thead>
      <th>Month</th>
      <th>Total Flights</th>
    </thead>
    <tbody>
      {% for month in total_flights %}
      <tr>
        <td>{{month.Month}}</td>
        <td>{{month.total_flights}}</td>
      </tr>
      {% endfor %}
    </tbody>
  </table>
</div>
{% endblock %}
Data Syndrome: Agile Data Science 2.0
Tables
Visualizing data
63
Data Syndrome: Agile Data Science 2.0
Charts in Flask and d3.js
Visualizing data with JSON and d3.js
64
# Serve the chart's data via an asynchronous request (formerly known as 'AJAX')
@app.route("/total_flights.json")
def total_flights_json():
  total_flights = client.agile_data_science.flights_by_month.find({},
    sort = [
      ('Year', 1),
      ('Month', 1)
    ])
  return json_util.dumps(total_flights, ensure_ascii=False)
var width = 960,
height = 350;
var y = d3.scale.linear()
.range([height, 0]);
// We define the domain once we get our data in d3.json, below
var chart = d3.select(".chart")
.attr("width", width)
.attr("height", height);
d3.json("/total_flights.json", function(data) {
y.domain([0, d3.max(data, function(d) { return d.total_flights; })]);
var barWidth = width / data.length;
var bar = chart.selectAll("g")
.data(data)
.enter()
.append("g")
.attr("transform", function(d, i) {
return "translate(" + i * barWidth + ",0)"; });
bar.append("rect")
.attr("y", function(d) { return y(d.total_flights); })
.attr("height", function(d) { return height - y(d.total_flights); })
.attr("width", barWidth - 1);
bar.append("text")
.attr("x", barWidth / 2)
.attr("y", function(d) { return y(d.total_flights) + 3; })
.attr("dy", ".75em")
.text(function(d) { return d.total_flights; });
});
Data Syndrome: Agile Data Science 2.0
Charts
Visualizing data
65
Data Syndrome: Agile Data Science 2.0
Charts in Flask and d3.js
Visualizing data with JSON and d3.js
66
# Serve the chart's data via an asynchronous request (formerly known as 'AJAX')
@app.route("/total_flights.json")
def total_flights_json():
  total_flights = client.agile_data_science.flights_by_month.find({},
    sort = [
      ('Year', 1),
      ('Month', 1)
    ])
  return json_util.dumps(total_flights, ensure_ascii=False)

var width = 960,
    height = 350;

var y = d3.scale.linear()
  .range([height, 0]);
// We define the domain once we get our data in d3.json, below

var chart = d3.select(".chart")
  .attr("width", width)
  .attr("height", height);

d3.json("/total_flights.json", function(data) {

  var defaultColor = 'steelblue';
  var modeColor = '#4CA9F5';

  var maxY = d3.max(data, function(d) { return d.total_flights; });
  y.domain([0, maxY]);

  var varColor = function(d, i) {
    if(d['total_flights'] == maxY) { return modeColor; }
    else { return defaultColor; }
  }

  var barWidth = width / data.length;
  var bar = chart.selectAll("g")
    .data(data)
    .enter()
    .append("g")
    .attr("transform", function(d, i) { return "translate(" + i * barWidth + ",0)"; });

  bar.append("rect")
    .attr("y", function(d) { return y(d.total_flights); })
    .attr("height", function(d) { return height - y(d.total_flights); })
    .attr("width", barWidth - 1)
    .style("fill", varColor);

  bar.append("text")
    .attr("x", barWidth / 2)
    .attr("y", function(d) { return y(d.total_flights) + 3; })
    .attr("dy", ".75em")
    .text(function(d) { return d.total_flights; });
});
Data Syndrome: Agile Data Science 2.0
Charts
Visualizing data
67
Agile Data Science 2.0 68
Exploring your data through interaction
Reports
Data Syndrome: Agile Data Science 2.0
Creating Interactive Ontologies from Semi-Structured Data
Extracting and visualizing entities
69
Data Syndrome: Agile Data Science 2.0
Home Page
Extracting and decorating entities
70
Data Syndrome: Agile Data Science 2.0
Airline Entities on openflights.org
Extracting and decorating entities
71
Data Syndrome: Agile Data Science 2.0
Airline Entities on openflights.org
Extracting and decorating entities
72
# download.sh
curl -Lko /tmp/airlines.dat https://raw.githubusercontent.com/jpatokal/openflights/master/data/airlines.dat
mv /tmp/airlines.dat $PROJECT_HOME/data/airlines.csv
Data Syndrome: Agile Data Science 2.0
Airline Entity Data in FAA On-Time Performance Records
Extracting and decorating entities
73
airlines = spark.read.format('com.databricks.spark.csv') \
  .options(header='false', nullValue='N') \
  .load('data/airlines.csv')
airlines.show()
# Is Delta around?
airlines.filter(airlines.C3 == 'DL').show()
+----+---------------+----+---+---+-----+-------------+---+
| C0| C1| C2| C3| C4| C5| C6| C7|
+----+---------------+----+---+---+-----+-------------+---+
|2009|Delta Air Lines|null| DL|DAL|DELTA|United States| Y|
+----+---------------+----+---+---+-----+-------------+---+
Data Syndrome: Agile Data Science 2.0
Airline Entity Preparation
Extracting and decorating entities
74
# Drop fields except for C1 as name, C3 as carrier code
airlines.registerTempTable("airlines")
airlines = spark.sql("SELECT C1 AS Name, C3 AS CarrierCode from airlines")
# Join our 14 carrier codes to the airlines table to get our set of airlines
our_airlines = carrier_codes.join(
airlines, carrier_codes.Carrier == airlines.CarrierCode
)
our_airlines = our_airlines.select('Name', 'CarrierCode')
our_airlines.show()
our_airlines.repartition(1).write.json("data/our_airlines.json")
Data Syndrome: Agile Data Science 2.0
Airline Entity Preparation
Extracting and decorating entities
75
cat data/our_airlines.jsonl:
{"Name":"American Airlines","CarrierCode":"AA"}
{"Name":"Spirit Airlines","CarrierCode":"NK"}
{"Name":"Hawaiian Airlines","CarrierCode":"HA"}
{"Name":"Alaska Airlines","CarrierCode":"AS"}
{"Name":"JetBlue Airways","CarrierCode":"B6"}
{"Name":"United Airlines","CarrierCode":"UA"}
{"Name":"US Airways","CarrierCode":"US"}
{"Name":"SkyWest","CarrierCode":"OO"}
{"Name":"Virgin America","CarrierCode":"VX"}
{"Name":"Southwest Airlines","CarrierCode":"WN"}
{"Name":"Delta Air Lines","CarrierCode":"DL"}
{"Name":"Atlantic Southeast Airlines","CarrierCode":"EV"}
{"Name":"Frontier Airlines","CarrierCode":"F9"}
{"Name":"American Eagle Airlines","CarrierCode":"MQ"}
Data Syndrome: Agile Data Science 2.0
Wikipedia Enrichment
Using Wikipedia to enhance the Airline page
76
Data Syndrome: Agile Data Science 2.0
Wikipedia Enrichment
Using Wikipedia to enhance the Airline page
77
import wikipedia
from bs4 import BeautifulSoup
import tldextract
import utils  # project helper module with JSON Lines readers/writers

# Load our airlines...
our_airlines = utils.read_json_lines_file('data/our_airlines.jsonl')

# Build a new list that includes Wikipedia data
with_url = []
for airline in our_airlines:
  # Get the Wikipedia page for the airline name
  wikipage = wikipedia.page(airline['Name'])

  # Get the summary
  summary = wikipage.summary
  airline['summary'] = summary

  # Get the HTML of the page
  page = BeautifulSoup(wikipage.html())

  # Task: get the logo from the right 'vcard' column
  # 1) Get the vcard table
  vcard_table = page.find_all('table', class_='vcard')[0]
  # 2) The logo is always the first image inside this table
  first_image = vcard_table.find_all('img')[0]
  # 3) Set the URL to the image
  logo_url = 'http:' + first_image.get('src')
  airline['logo_url'] = logo_url

  # Task: get the company website
  # 1) Find the 'Website' table header
  th = page.find_all('th', text='Website')[0]
  # 2) Find the parent tr element
  tr = th.parent
  # 3) Find the a (link) tag within the tr
  a = tr.find_all('a')[0]
  # 4) Finally, get the href of the a tag
  url = a.get('href')
  airline['url'] = url

  # Get the domain to display with the URL
  url_parts = tldextract.extract(url)
  airline['domain'] = url_parts.domain + '.' + url_parts.suffix

  with_url.append(airline)

utils.write_json_lines_file(with_url, 'data/our_airlines_with_wiki.jsonl')
Data Syndrome: Agile Data Science 2.0
Airline Entity
Extracting and decorating entities
78
mongoimport -d agile_data_science -c airlines \
  --file data/our_airlines_with_wiki.jsonl
Data Syndrome: Agile Data Science 2.0
Airline Entity
Extracting and decorating entities
79
db.airlines.findOne();
{
"_id" : ObjectId("57c0e656818573ed12d584d1"),
"CarrierCode" : "AA",
"url" : "http://www.aa.com",
"logo_url" : "http://upload.wikimedia.org/.../300px-American...
_2013.svg.png",
"Name" : "American Airlines",
"summary" : "American Airlines, Inc. (AA), commonly referred to as
American..."
}
Data Syndrome: Agile Data Science 2.0
Airline Entity
Extracting and decorating entities
80
# Controller: Fetch an airline entity page
@app.route("/airline/<carrier_code>")
def airline(carrier_code):
  airline_summary = client.agile_data_science.airlines.find_one(
    {'CarrierCode': carrier_code}
  )
  airline_airplanes = client.agile_data_science.airplanes_per_carrier.find_one(
    {'Carrier': carrier_code}
  )
  return render_template(
    'airlines.html',
    airline_summary=airline_summary,
    airline_airplanes=airline_airplanes,
    carrier_code=carrier_code
  )
Data Syndrome: Agile Data Science 2.0
Airline Entity
Extracting and decorating entities
81
{% extends "layout.html" %}
{% block body %}
<!-- Navigation guide -->
/ <a href="/airlines">Airlines</a>
/ <a href="/airline/{{carrier_code}}">{{carrier_code}}</a>
<!-- Logo -->
<img src="{{airline_summary.logo_url}}" style="float: right;"/>
<p class="lead">
<!-- Airline name and website-->
{{airline_summary.Name}}
/ <a href="{{airline_summary.url}}">{{airline_summary.domain}}</a>
</p>
<!-- Summary -->
<p style="text-align: justify;">{{airline_summary.summary}}</p>
<h4>Fleet: {{airline_airplanes.FleetCount}} Planes</h4>
<ul class="nav nav-pills">
{% for tail_number in airline_airplanes.TailNumbers -%}
<li class="button">
<a href="/airplane/{{tail_number}}">{{tail_number}}</a>
</li>
{% endfor -%}
</ul>
{% endblock %}
Data Syndrome: Agile Data Science 2.0
A Single Airline Entity @ /airline/DL
Extracting and decorating entities
82
Data Syndrome: Agile Data Science 2.0
Summarizing all Airline Entities @ /airplanes
Describing entities in aggregate
83
Data Syndrome: Agile Data Science 2.0
Summarizing Airlines 2.0
Describing entities in aggregate
84
Data Syndrome: Agile Data Science 2.0
Summarizing Airlines 3.0
Describing entities in aggregate
85
Data Syndrome: Agile Data Science 2.0
Summarizing Airlines 3.0 - Entity Resolution Problem
Describing entities in aggregate
86
...
AIRBUS
AIRBUS INDUSTRIE
...
EMBRAER
EMBRAER S A
...
GULFSTREAM AEROSPACE
GULFSTREAM AEROSPACE CORP
...
MCDONNELL DOUGLAS
MCDONNELL DOUGLAS AIRCRAFT CO
MCDONNELL DOUGLAS CORPORATION
...
Data Syndrome: Agile Data Science 2.0
Summarizing Airlines 3.0 - Entity Resolution Hack
Describing entities in aggregate
87
# Detect the longest common beginning string in a pair of strings
def longest_common_beginning(s1, s2):
  if s1 == s2:
    return s1
  min_length = min(len(s1), len(s2))
  i = 0
  while i < min_length:
    if s1[i] == s2[i]:
      i += 1
    else:
      break
  return s1[0:i]

# Compare two manufacturers, returning a tuple describing the result
def compare_manufacturers(mfrs):
  mfr1 = mfrs[0]
  mfr2 = mfrs[1]
  lcb = longest_common_beginning(mfr1, mfr2)
  len_lcb = len(lcb)
  record = {
    'mfr1': mfr1,
    'mfr2': mfr2,
    'lcb': lcb,
    'len_lcb': len_lcb,
    'eq': mfr1 == mfr2
  }
  return record

# Pair every unique instance of Manufacturer field with every
# other for comparison
comparison_pairs = manufacturer_variety.join(manufacturer_variety)

# Do the comparisons
comparisons = comparison_pairs.rdd.map(compare_manufacturers)

# Matches have > 5 starting chars in common
matches = comparisons.filter(lambda f: f['eq'] == False and f['len_lcb'] > 5)

#
# Now we create a mapping of duplicate keys from their raw value
# to the one we're going to use
#

# 1) Group the matches by the longest common beginning ('lcb')
common_lcbs = matches.groupBy(lambda x: x['lcb'])

# 2) Emit the raw value for each side of the match along with the key, our 'lcb'
mfr1_map = common_lcbs.map(
  lambda x: [(y['mfr1'], x[0]) for y in x[1]]).flatMap(lambda x: x)
mfr2_map = common_lcbs.map(
  lambda x: [(y['mfr2'], x[0]) for y in x[1]]).flatMap(lambda x: x)

# 3) Combine the two sides of the comparison's records
map_with_dupes = mfr1_map.union(mfr2_map)

# 4) Remove duplicates
mfr_dedupe_mapping = map_with_dupes.distinct()

# 5) Convert mapping to dataframe to join to airplanes dataframe
mapping_dataframe = mfr_dedupe_mapping.toDF()

# 6) Give the mapping column names
mapping_dataframe.registerTempTable("mapping_dataframe")
mapping_dataframe = spark.sql(
  "SELECT _1 AS Raw, _2 AS NewManufacturer FROM mapping_dataframe"
)
Data Syndrome: Agile Data Science 2.0
Summarizing Airlines 4.0
Describing entities in aggregate
88
Data Syndrome: Agile Data Science 2.0
Searching Flights
Using ElasticSearch to search for flights
89
Agile Data Science 2.0 90
Predicting the future for fun and profit
Predictions
Data Syndrome: Agile Data Science 2.0
Back End Design
Deep Storage and Spark vs Kafka and Spark Streaming
91
Batch: Historical Data → Train Model
Realtime: Realtime Data → Apply Model
Data Syndrome: Agile Data Science 2.0 92
jQuery in the web client submits a form to create the prediction request, and
then polls another URL every few seconds until the prediction is ready. The
request generates a Kafka event, which a Spark Streaming worker processes
by applying the model we trained in batch. Having done so, it inserts a record
for the prediction in MongoDB, where the Flask app finds it and sends it to the
web client the next time it polls the server.
Front End Design
/flights/delays/predict/classify_realtime/
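A minimal sketch of the submit side of this flow: a Flask controller that turns the form into a Kafka message for the Spark Streaming worker to consume. The kafka-python client, topic name, and form fields are assumptions for illustration, not taken from the slides.

import json, uuid
from datetime import datetime, timezone

from flask import Flask, request
from bson import json_util
from kafka import KafkaProducer  # assumed client library (kafka-python)

app = Flask(__name__)
producer = KafkaProducer(bootstrap_servers='localhost:9092')
PREDICTION_TOPIC = 'flight_delay_classification_request'  # assumed topic name

@app.route("/flights/delays/predict/classify_realtime", methods=['POST'])
def classify_flight_delays_realtime():
  """Turn a prediction request form into a Kafka event and hand back its UUID."""
  prediction_features = {
    'Carrier': request.form.get('Carrier'),
    'Origin': request.form.get('Origin'),
    'Dest': request.form.get('Dest'),
    'FlightDate': request.form.get('FlightDate'),
    'DepDelay': float(request.form.get('DepDelay', 0.0)),
    'Timestamp': datetime.now(timezone.utc).isoformat(),
    'UUID': str(uuid.uuid4()),
  }
  producer.send(PREDICTION_TOPIC, json.dumps(prediction_features).encode('utf-8'))
  return json_util.dumps({"status": "OK", "id": prediction_features['UUID']})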
Data Syndrome: Agile Data Science 2.0
User Interface
Where the user submits prediction requests
93
Data Syndrome: Agile Data Science 2.0
String Vectorization
From properties of items to vector format
94
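A minimal sketch of the vectorization step in Spark ML: nominal string columns become numeric indexes, which are then assembled with continuous features into a single vector column. The tiny inline DataFrame is an illustrative assumption; the column names follow the training script later in the deck.

from pyspark.ml.feature import StringIndexer, VectorAssembler

# Tiny illustrative DataFrame standing in for the flight features
df = spark.createDataFrame(
  [("AA", 14.0, 606.0), ("DL", 0.0, 368.0)],
  ["Carrier", "DepDelay", "Distance"]
)

# Turn the nominal Carrier field into a numeric index
indexed = StringIndexer(inputCol="Carrier", outputCol="Carrier_index").fit(df).transform(df)

# Combine continuous fields and the index into a single feature vector
assembler = VectorAssembler(
  inputCols=["DepDelay", "Distance", "Carrier_index"],
  outputCol="Features_vec"
)
assembler.transform(indexed).show(truncate=False)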
Data Syndrome: Agile Data Science 2.0 95
The scikit-learn version of this model was 166 lines. Spark MLlib is very powerful!
http://bit.ly/train_model_spark
190 Line Model
#!/usr/bin/env python

import sys, os, re

# Pass date and base path to main() from airflow
def main(base_path):

  # Default to "."
  try: base_path
  except NameError: base_path = "."
  if not base_path:
    base_path = "."

  APP_NAME = "train_spark_mllib_model.py"

  # If there is no SparkSession, create the environment
  try:
    sc and spark
  except NameError as e:
    import findspark
    findspark.init()
    import pyspark
    import pyspark.sql

    sc = pyspark.SparkContext()
    spark = pyspark.sql.SparkSession(sc).builder.appName(APP_NAME).getOrCreate()

  #
  # {
  #   "ArrDelay":5.0,"CRSArrTime":"2015-12-31T03:20:00.000-08:00","CRSDepTime":"2015-12-31T03:05:00.000-08:00",
  #   "Carrier":"WN","DayOfMonth":31,"DayOfWeek":4,"DayOfYear":365,"DepDelay":14.0,"Dest":"SAN","Distance":368.0,
  #   "FlightDate":"2015-12-30T16:00:00.000-08:00","FlightNum":"6109","Origin":"TUS"
  # }
  #
  from pyspark.sql.types import StringType, IntegerType, FloatType, DoubleType, DateType, TimestampType
  from pyspark.sql.types import StructType, StructField
  from pyspark.sql.functions import udf

  schema = StructType([
    StructField("ArrDelay", DoubleType(), True),        # "ArrDelay":5.0
    StructField("CRSArrTime", TimestampType(), True),   # "CRSArrTime":"2015-12-31T03:20:00.000-08:00"
    StructField("CRSDepTime", TimestampType(), True),   # "CRSDepTime":"2015-12-31T03:05:00.000-08:00"
    StructField("Carrier", StringType(), True),         # "Carrier":"WN"
    StructField("DayOfMonth", IntegerType(), True),     # "DayOfMonth":31
    StructField("DayOfWeek", IntegerType(), True),      # "DayOfWeek":4
    StructField("DayOfYear", IntegerType(), True),      # "DayOfYear":365
    StructField("DepDelay", DoubleType(), True),        # "DepDelay":14.0
    StructField("Dest", StringType(), True),            # "Dest":"SAN"
    StructField("Distance", DoubleType(), True),        # "Distance":368.0
    StructField("FlightDate", DateType(), True),        # "FlightDate":"2015-12-30T16:00:00.000-08:00"
    StructField("FlightNum", StringType(), True),       # "FlightNum":"6109"
    StructField("Origin", StringType(), True),          # "Origin":"TUS"
  ])

  input_path = "{}/data/simple_flight_delay_features.jsonl.bz2".format(
    base_path
  )
  features = spark.read.json(input_path, schema=schema)
  features.first()

  #
  # Check for nulls in features before using Spark ML
  #
  null_counts = [(column, features.where(features[column].isNull()).count()) for column in features.columns]
  cols_with_nulls = filter(lambda x: x[1] > 0, null_counts)
  print(list(cols_with_nulls))

  #
  # Add a Route variable to replace FlightNum
  #
  from pyspark.sql.functions import lit, concat
  features_with_route = features.withColumn(
    'Route',
    concat(
      features.Origin,
      lit('-'),
      features.Dest
    )
  )
  features_with_route.show(6)

  #
  # Use pyspark.ml.feature.Bucketizer to bucketize ArrDelay into on-time, slightly late, very late (0, 1, 2)
  #
  from pyspark.ml.feature import Bucketizer

  # Setup the Bucketizer
  splits = [-float("inf"), -15.0, 0, 30.0, float("inf")]
  arrival_bucketizer = Bucketizer(
    splits=splits,
    inputCol="ArrDelay",
    outputCol="ArrDelayBucket"
  )

  # Save the bucketizer
  arrival_bucketizer_path = "{}/models/arrival_bucketizer_2.0.bin".format(base_path)
  arrival_bucketizer.write().overwrite().save(arrival_bucketizer_path)

  # Apply the bucketizer
  ml_bucketized_features = arrival_bucketizer.transform(features_with_route)
  ml_bucketized_features.select("ArrDelay", "ArrDelayBucket").show()

  #
  # Extract features with tools in pyspark.ml.feature
  #
  from pyspark.ml.feature import StringIndexer, VectorAssembler

  # Turn category fields into indexes
  for column in ["Carrier", "Origin", "Dest", "Route"]:
    string_indexer = StringIndexer(
      inputCol=column,
      outputCol=column + "_index"
    )

    string_indexer_model = string_indexer.fit(ml_bucketized_features)
    ml_bucketized_features = string_indexer_model.transform(ml_bucketized_features)

    # Drop the original column
    ml_bucketized_features = ml_bucketized_features.drop(column)

    # Save the pipeline model
    string_indexer_output_path = "{}/models/string_indexer_model_{}.bin".format(
      base_path,
      column
    )
    string_indexer_model.write().overwrite().save(string_indexer_output_path)

  # Combine continuous, numeric fields with indexes of nominal ones
  # ...into one feature vector
  numeric_columns = [
    "DepDelay", "Distance",
    "DayOfMonth", "DayOfWeek",
    "DayOfYear"]
  index_columns = ["Carrier_index", "Origin_index",
                   "Dest_index", "Route_index"]
  vector_assembler = VectorAssembler(
    inputCols=numeric_columns + index_columns,
    outputCol="Features_vec"
  )
  final_vectorized_features = vector_assembler.transform(ml_bucketized_features)

  # Save the numeric vector assembler
  vector_assembler_path = "{}/models/numeric_vector_assembler.bin".format(base_path)
  vector_assembler.write().overwrite().save(vector_assembler_path)

  # Drop the index columns
  for column in index_columns:
    final_vectorized_features = final_vectorized_features.drop(column)

  # Inspect the finalized features
  final_vectorized_features.show()

  # Instantiate and fit random forest classifier on all the data
  from pyspark.ml.classification import RandomForestClassifier
  rfc = RandomForestClassifier(
    featuresCol="Features_vec",
    labelCol="ArrDelayBucket",
    predictionCol="Prediction",
    maxBins=4657,
    maxMemoryInMB=1024
  )
  model = rfc.fit(final_vectorized_features)

  # Save the new model over the old one
  model_output_path = "{}/models/spark_random_forest_classifier.flight_delays.5.0.bin".format(
    base_path
  )
  model.write().overwrite().save(model_output_path)

  # Evaluate model using test data
  predictions = model.transform(final_vectorized_features)

  from pyspark.ml.evaluation import MulticlassClassificationEvaluator
  evaluator = MulticlassClassificationEvaluator(
    predictionCol="Prediction",
    labelCol="ArrDelayBucket",
    metricName="accuracy"
  )
  accuracy = evaluator.evaluate(predictions)
  print("Accuracy = {}".format(accuracy))

  # Check the distribution of predictions
  predictions.groupBy("Prediction").count().show()

  # Check a sample
  predictions.sample(False, 0.001, 18).orderBy("CRSDepTime").show(6)

if __name__ == "__main__":
  main(sys.argv[1])
Data Syndrome: Agile Data Science 2.0 96
Using the model in realtime via Spark Streaming!
Deploying the Model
Data Syndrome: Agile Data Science 2.0
Deployment of Model Training to Airflow
Using Airflow to batch schedule the training of our model
97
training_dag = DAG(
  'agile_data_science_batch_prediction_model_training',
  default_args=default_args
)

# We use the same two commands for all our PySpark tasks
pyspark_bash_command = """
spark-submit --master {{ params.master }} \
  {{ params.base_path }}/{{ params.filename }} \
  {{ params.base_path }}
"""
pyspark_date_bash_command = """
spark-submit --master {{ params.master }} \
  {{ params.base_path }}/{{ params.filename }} \
  {{ ds }} {{ params.base_path }}
"""

# Gather the training data for our classifier
extract_features_operator = BashOperator(
  task_id = "pyspark_extract_features",
  bash_command = pyspark_bash_command,
  params = {
    "master": "local[8]",
    "filename": "ch08/extract_features.py",
    "base_path": "{}/".format(PROJECT_HOME)
  },
  dag=training_dag
)

# Train and persist the classifier model
train_classifier_model_operator = BashOperator(
  task_id = "pyspark_train_classifier_model",
  bash_command = pyspark_bash_command,
  params = {
    "master": "local[8]",
    "filename": "ch08/train_spark_mllib_model.py",
    "base_path": "{}/".format(PROJECT_HOME)
  },
  dag=training_dag
)

# The model training depends on the feature extraction
train_classifier_model_operator.set_upstream(extract_features_operator)
Data Syndrome: Agile Data Science 2.0
Deployment of Model Training to Airflow
Using Airflow to batch schedule the training of our model
98
Data Syndrome: Agile Data Science 2.0
Connecting to Kafka
Creating a direct stream to the Kafka queue containing our prediction requests
99
#
# Process Prediction Requests in Streaming
#
from pyspark.streaming.kafka import KafkaUtils

stream = KafkaUtils.createDirectStream(
  ssc,
  [PREDICTION_TOPIC],
  {
    "metadata.broker.list": BROKERS,
    "group.id": "0",
  }
)

object_stream = stream.map(lambda x: json.loads(x[1]))
object_stream.pprint()
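Between receiving the request stream above and storing the result (next slide), the streaming worker applies the transformers and model saved during training. A hedged sketch of that step, assuming the model paths from the training script and the spark, ssc, and object_stream objects already defined:

from pyspark.sql.functions import lit, concat
from pyspark.ml.feature import StringIndexerModel, VectorAssembler
from pyspark.ml.classification import RandomForestClassificationModel

# Load the transformers and model persisted by the training job (paths assumed from it)
string_indexer_models = {}
for column in ["Carrier", "Origin", "Dest", "Route"]:
  string_indexer_models[column] = StringIndexerModel.load(
    "models/string_indexer_model_{}.bin".format(column)
  )
vector_assembler = VectorAssembler.load("models/numeric_vector_assembler.bin")
rfc = RandomForestClassificationModel.load(
  "models/spark_random_forest_classifier.flight_delays.5.0.bin"
)

def classify_prediction_requests(rdd):
  """Apply the trained classifier to one micro-batch of prediction requests."""
  if rdd.isEmpty():
    return
  request_df = spark.createDataFrame(rdd)
  # Recreate the Route feature, then index and assemble exactly as in training
  request_df = request_df.withColumn(
    'Route', concat(request_df.Origin, lit('-'), request_df.Dest)
  )
  for column, indexer_model in string_indexer_models.items():
    request_df = indexer_model.transform(request_df)
  features = vector_assembler.transform(request_df)
  predictions = rfc.transform(features)
  predictions.select("UUID", "Prediction").show()

object_stream.foreachRDD(classify_prediction_requests)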
Data Syndrome: Agile Data Science 2.0
Storing to Mongo from PySpark Streaming
Putting the result where our web application can find it
100
# Store to Mongo
if final_predictions.count() > 0:
  final_predictions.rdd.map(lambda x: x.asDict()).saveToMongoDB(
    "mongodb://localhost:27017/agile_data_science.flight_delay_classification_response"
  )
Data Syndrome: Agile Data Science 2.0
Fetching Results from Mongo
Getting the prediction request back to our front end
101
@app.route("/flights/delays/predict/classify_realtime/response/<unique_id>")
def classify_flight_delays_realtime_response(unique_id):
  """Serves predictions to polling requestors"""
  prediction = client.agile_data_science.flight_delay_classification_response.find_one(
    {
      "UUID": unique_id
    }
  )
  response = {"status": "WAIT", "id": unique_id}
  if prediction:
    response["status"] = "OK"
    response["prediction"] = prediction
  return json_util.dumps(response)
/flights/delays/predict/classify_realtime/
Data Syndrome: Agile Data Science 2.0
Final Result of Deployed Prediction
The bang for the buck!
102
python ch08/make_predictions_streaming.py .
Data Syndrome: Agile Data Science 2.0
Final Result of Deployed Prediction
The bang for the buck!
103
-------------------------------------------
Time: 2016-12-20 18:06:50
-------------------------------------------
{
'Dest': 'ORD',
'DayOfYear': 360,
'FlightDate': '2016-12-25',
'Distance': 606.0,
'DayOfMonth': 25,
'UUID': 'a01b5ccb-49f1-4c4d-af34-188c6ae0bbf0',
'FlightNum': '2010',
'Carrier': 'AA',
'DepDelay': -100.0,
'DayOfWeek': 6,
'Timestamp': '2016-12-20T18:06:45.307114',
'Origin': 'ATL'
}
-------------------------------------------
Time: 2016-12-20 18:06:50
-------------------------------------------
Row(Carrier='AA', DayOfMonth=25, DayOfWeek=6, DayOfYear=360, DepDelay=-100.0,
Dest='ORD', Distance=606.0, FlightDate=datetime.datetime(2016,
12, 25, 0, 0, tzinfo=<iso8601.Utc>), FlightNum='2010',
Origin='ATL', Timestamp=datetime.datetime(2016, 12, 20, 18,
6, 45, 307114, tzinfo=<iso8601.Utc>),
UUID='a01b5ccb-49f1-4c4d-af34-188c6ae0bbf0')
Data Syndrome: Agile Data Science 2.0
Final Result of Deployed Prediction
The bang for the buck!
104
+-------+----------+---------+---------+
|Carrier|...| UUID|
+-------+----------+---------+---------+
| AA|...|a01b5ccb-49f1-4c4...|
+-------+----------+---------+---------+
+-------+----------+---------+---------+
|Carrier|...| Features_vec|
+-------+----------+---------+---------+
| AA|...|(8009,[2,38,51,34...|
+-------+----------+---------+---------+
+-------+----------+---------+---------+
|Carrier|...| Features_vec|
+-------+----------+---------+---------+
| AA|...|(8009,[2,38,51,34...|
+-------+----------+---------+---------+
+-------+----------+---------+---------+
|Carrier|...|Prediction|
+-------+----------+---------+---------+
| AA|...| 0.0|
+-------+----------+---------+---------+
Data Syndrome: Agile Data Science 2.0
Final Result of Deployed Prediction
The bang for the buck!
105
Data Syndrome: Agile Data Science 2.0 106
Next steps for learning more about Agile Data Science 2.0
Next Steps
Building Full-Stack Data Analytics Applications with Spark
http://bit.ly/agile_data_science
Available Now on O’Reilly Safari: http://bit.ly/agile_data_safari
Agile Data Science 2.0
Agile Data Science 2.0 108
Realtime Predictive
Analytics
Rapidly learn to build entire predictive systems driven by
Kafka, PySpark, Spark Streaming, and Spark MLlib, with a web
front end built in Python/Flask and jQuery.
Available for purchase at http://datasyndrome.com/video
Data Syndrome Russell Jurney
Principal Consultant
Email : rjurney@datasyndrome.com
Web : datasyndrome.com
Data Syndrome, LLC
Product Consulting
We build analytics products
and systems consisting of
big data viz, predictions,
recommendations, reports
and search.
Corporate Training
We offer training courses for data scientists, engineers, and data science teams.
Video Training
We offer video training
courses that rapidly
acclimate you with a
technology and technique.

  • 12. Agile Data Science 2.0 Iterate, iterate, iterate: tables, charts, reports, predictions 12 Seven Principles for Agile Data Science
  • 13. Agile Data Science 2.0 Ship intermediate output. Even failed experiments have output. 13 Seven Principles for Agile Data Science —>
  • 14. Agile Data Science 2.0 Prototype experiments over implementing tasks 14 Seven Principles for Agile Data Science
  • 15. Agile Data Science 2.0 Climb up and down the data-value pyramid 15 Seven Principles for Agile Data Science People will pay more for the things towards the top, but you need the things on the bottom to have the things above. They are foundational. See: Maslow’s Theory of Needs. Data Value Pyramid
  • 16. Agile Data Science 2.0 Discover and pursue the critical path to a killer product 16 Seven Principles for Agile Data Science In analytics, the end-goal moves or is complex in nature, so we model as a network of tasks rather than as a strictly linear process. Critical Path
  • 17. Agile Data Science 2.0 Get Meta. Describe the process, not just the end state 17 Seven Principles for Agile Data Science
  • 18. Agile Data Science 2.0 Doing Agile Wrong 18 Iterating between sprints, not within them, is a bad idea.
  • 19. Agile Data Science 2.0 Data Science Teams 19 The roles in a data science team are many, necessitating generalists
  • 20. Agile Data Science 2.0 Data Science Releases 20 Snowballing releases towards a broader audience
  • 21. Agile Data Science 2.0 21 Decorate your environment by visualizing your dataset! Collage
  • 22. Agile Data Science 2.0 22 Things we use to build the apps Tools
  • 23. Agile Data Science 2.0 Agile Data Science 2.0 Stack 23 Apache Spark Apache Kafka MongoDB Batch and Realtime Realtime Queue Document Store Flask Simple Web App Example of a high productivity stack for “big” data applications ElasticSearch Search d3.js Visualization
  • 24. Agile Data Science 2.0 Flow of Data Processing 24 Tools and processes in collecting, refining, publishing and decorating data {“hello”: “world”}
  • 25. Agile Data Science 2.0 Architecture of Agile Data Science 25 How the system comes together
  • 26. Data Syndrome: Agile Data Science 2.0 Apache Spark Ecosystem 26 HDFS, Amazon S3, Spark, Spark SQL, Spark MLlib, Spark Streaming
  • 27. Agile Data Science 2.0 27 SQL or dataflow programming? Programming Models
  • 28. Agile Data Science 2.0 28 Describing what you want and letting the planner figure out how SQL SELECT associations2.object_id, associations2.term_id, associations2.cat_ID, associations2.term_taxonomy_id
 FROM (SELECT objects_tags.object_id, objects_tags.term_id, wp_cb_tags2cats.cat_ID, categories.term_taxonomy_id
 FROM (SELECT wp_term_relationships.object_id, wp_term_taxonomy.term_id, wp_term_taxonomy.term_taxonomy_id
 FROM wp_term_relationships
 LEFT JOIN wp_term_taxonomy ON wp_term_relationships.term_taxonomy_id = wp_term_taxonomy.term_taxonomy_id
 ORDER BY object_id ASC, term_id ASC) 
 AS objects_tags
 LEFT JOIN wp_cb_tags2cats ON objects_tags.term_id = wp_cb_tags2cats.tag_ID
 LEFT JOIN (SELECT wp_term_relationships.object_id, wp_term_taxonomy.term_id as cat_ID, wp_term_taxonomy.term_taxonomy_id
 FROM wp_term_relationships
 LEFT JOIN wp_term_taxonomy ON wp_term_relationships.term_taxonomy_id = wp_term_taxonomy.term_taxonomy_id
 WHERE wp_term_taxonomy.taxonomy = 'category'
 GROUP BY object_id, cat_ID, term_taxonomy_id
 ORDER BY object_id, cat_ID, term_taxonomy_id) 
 AS categories on wp_cb_tags2cats.cat_ID = categories.term_id
 WHERE objects_tags.term_id = wp_cb_tags2cats.tag_ID
 GROUP BY object_id, term_id, cat_ID, term_taxonomy_id
 ORDER BY object_id ASC, term_id ASC, cat_ID ASC) 
 AS associations2
 LEFT JOIN categories ON associations2.object_id = categories.object_id
 WHERE associations2.cat_ID <> categories.cat_ID
 GROUP BY object_id, term_id, cat_ID, term_taxonomy_id
 ORDER BY object_id, term_id, cat_ID, term_taxonomy_id
  • 29. Agile Data Science 2.0 29 Flowing data through operations to effect change Dataflow Programming
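For contrast with the SQL on the previous slide, a minimal dataflow sketch in PySpark follows. The file path and field positions are illustrative assumptions, not code from the deck.

 # Minimal dataflow sketch (assumed path and field positions, for illustration only)
 from pyspark import SparkContext

 sc = SparkContext(appName="dataflow_sketch")

 lines = sc.textFile("data/example_flights.csv")      # read raw lines
 records = lines.map(lambda line: line.split(","))    # parse each line into fields
 late = records.filter(lambda f: float(f[2]) > 0.0)   # keep late flights (field 2 assumed to be delay)
 by_carrier = late.map(lambda f: (f[0], 1)).reduceByKey(lambda a, b: a + b)  # count per carrier (field 0 assumed)
 print(by_carrier.take(5))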
  • 30. Agile Data Science 2.0 30 The best of both worlds! SQL AND Dataflow Programming # Flights that were late arriving...
 late_arrivals = on_time_dataframe.filter(on_time_dataframe.ArrDelayMinutes > 0)
 total_late_arrivals = late_arrivals.count()
 
 # Flights that left late but made up time to arrive on time...
 on_time_heros = on_time_dataframe.filter(
 (on_time_dataframe.DepDelayMinutes > 0)
 &
 (on_time_dataframe.ArrDelayMinutes <= 0)
 )
 total_on_time_heros = on_time_heros.count()
 
 # Get the percentage of flights that are late, rounded to 1 decimal place
 # (total_flights and total_late_departures are computed earlier, outside this excerpt)
 pct_late = round((total_late_arrivals / (total_flights * 1.0)) * 100, 1)
 
 print("Total flights: {:,}".format(total_flights))
 print("Late departures: {:,}".format(total_late_departures))
 print("Late arrivals: {:,}".format(total_late_arrivals))
 print("Recoveries: {:,}".format(total_on_time_heros))
 print("Percentage Late: {}%".format(pct_late))
 
 # Why are flights late? Let's look at some delayed flights and the delay causes
 late_flights = spark.sql("""
 SELECT
 ArrDelayMinutes,
 WeatherDelay,
 CarrierDelay,
 NASDelay,
 SecurityDelay,
 LateAircraftDelay
 FROM
 on_time_performance
 WHERE
 WeatherDelay IS NOT NULL
 OR
 CarrierDelay IS NOT NULL
 OR
 NASDelay IS NOT NULL
 OR
 SecurityDelay IS NOT NULL
 OR
 LateAircraftDelay IS NOT NULL
 ORDER BY
 FlightDate
 """)
 late_flights.sample(False, 0.01).show()

 # Calculate the percentage contribution to delay for each source
 total_delays = spark.sql("""
 SELECT
 ROUND(SUM(WeatherDelay)/SUM(ArrDelayMinutes) * 100, 1) AS pct_weather_delay,
 ROUND(SUM(CarrierDelay)/SUM(ArrDelayMinutes) * 100, 1) AS pct_carrier_delay,
 ROUND(SUM(NASDelay)/SUM(ArrDelayMinutes) * 100, 1) AS pct_nas_delay,
 ROUND(SUM(SecurityDelay)/SUM(ArrDelayMinutes) * 100, 1) AS pct_security_delay,
 ROUND(SUM(LateAircraftDelay)/SUM(ArrDelayMinutes) * 100, 1) AS pct_late_aircraft_delay
 FROM on_time_performance
 """)
 total_delays.show()
 
 # Generate a histogram of the weather and carrier delays
 weather_delay_histogram = on_time_dataframe \
   .select("WeatherDelay") \
   .rdd \
   .flatMap(lambda x: x) \
   .histogram(10)

 print("{}\n{}".format(weather_delay_histogram[0], weather_delay_histogram[1]))
 
 # Eyeball the first to define our buckets
 weather_delay_histogram = on_time_dataframe \
   .select("WeatherDelay") \
   .rdd \
   .flatMap(lambda x: x) \
   .histogram([1, 15, 30, 60, 120, 240, 480, 720, 24*60.0])
 print(weather_delay_histogram)

 # Transform the data into something easily consumed by d3
 record = {'key': 1, 'data': []}
 for label, count in zip(weather_delay_histogram[0], weather_delay_histogram[1]):
 record['data'].append(
 {
 'label': label,
 'count': count
 }
 )
 
 # Save to Mongo directly, since this is a Tuple not a dataframe or RDD
 from pymongo import MongoClient
 client = MongoClient()
 client.relato.weather_delay_histogram.insert_one(record)
  • 31. Agile Data Science 2.0 31 Batch processing still dominant for charts, reports, search indexes and training predictive models Apache Airflow for Batch Scheduling
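For readers new to Airflow, a minimal sketch of a DAG that batch-schedules a spark-submit job nightly follows; the DAG id, schedule, and script path are assumptions, and a fuller training DAG appears later in the deck (slide 97).

 # Minimal Airflow DAG sketch (dag id, schedule and script path are assumptions)
 from datetime import datetime, timedelta
 from airflow import DAG
 from airflow.operators.bash_operator import BashOperator

 default_args = {
   "owner": "agile_data_science",
   "start_date": datetime(2016, 12, 1),
   "retries": 1,
   "retry_delay": timedelta(minutes=5),
 }

 dag = DAG(
   "nightly_spark_batch",
   default_args=default_args,
   schedule_interval="@daily"
 )

 # Each nightly run re-computes the charts, reports and model inputs in batch
 run_spark_job = BashOperator(
   task_id="run_spark_job",
   bash_command="spark-submit --master local[8] $PROJECT_HOME/ch02/example_batch_job.py",
   dag=dag
 )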
  • 32. Agile Data Science 2.0 32 FAA on-time performance data Data
  • 33. Data Syndrome: Agile Data Science 2.0 Collect and Serialize Events in JSON I never regret using JSON 33
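As a concrete illustration of collecting events as newline-delimited JSON ("JSON Lines"), a small sketch follows; the file name and event fields are assumptions.

 # Sketch: serialize events as newline-delimited JSON (file name and fields assumed)
 import json

 events = [
   {"Carrier": "DL", "FlightDate": "2015-12-16", "ArrDelayMinutes": 5.0},
   {"Carrier": "AA", "FlightDate": "2015-12-16", "ArrDelayMinutes": 0.0},
 ]

 with open("data/example_events.jsonl", "w") as f:
   for event in events:
     f.write(json.dumps(event) + "\n")

 # Reading the events back is symmetric
 with open("data/example_events.jsonl") as f:
   loaded = [json.loads(line) for line in f]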
  • 34. Data Syndrome: Agile Data Science 2.0 FAA On-Time Performance Records 95% of commercial flights 34 http://www.transtats.bts.gov/Fields.asp?table_id=236
  • 35. Data Syndrome: Agile Data Science 2.0 FAA On-Time Performance Records 95% of commercial flights 35 "Year","Quarter","Month","DayofMonth","DayOfWeek","FlightDate","UniqueCarrier","AirlineID","Carrier","TailNum","FlightNum", "OriginAirportID","OriginAirportSeqID","OriginCityMarketID","Origin","OriginCityName","OriginState","OriginStateFips", "OriginStateName","OriginWac","DestAirportID","DestAirportSeqID","DestCityMarketID","Dest","DestCityName","DestState", "DestStateFips","DestStateName","DestWac","CRSDepTime","DepTime","DepDelay","DepDelayMinutes","DepDel15","DepartureDelayGroups", "DepTimeBlk","TaxiOut","WheelsOff","WheelsOn","TaxiIn","CRSArrTime","ArrTime","ArrDelay","ArrDelayMinutes","ArrDel15", "ArrivalDelayGroups","ArrTimeBlk","Cancelled","CancellationCode","Diverted","CRSElapsedTime","ActualElapsedTime","AirTime", "Flights","Distance","DistanceGroup","CarrierDelay","WeatherDelay","NASDelay","SecurityDelay","LateAircraftDelay", "FirstDepTime","TotalAddGTime","LongestAddGTime","DivAirportLandings","DivReachedDest","DivActualElapsedTime","DivArrDelay", "DivDistance","Div1Airport","Div1AirportID","Div1AirportSeqID","Div1WheelsOn","Div1TotalGTime","Div1LongestGTime", "Div1WheelsOff","Div1TailNum","Div2Airport","Div2AirportID","Div2AirportSeqID","Div2WheelsOn","Div2TotalGTime", "Div2LongestGTime","Div2WheelsOff","Div2TailNum","Div3Airport","Div3AirportID","Div3AirportSeqID","Div3WheelsOn", "Div3TotalGTime","Div3LongestGTime","Div3WheelsOff","Div3TailNum","Div4Airport","Div4AirportID","Div4AirportSeqID", "Div4WheelsOn","Div4TotalGTime","Div4LongestGTime","Div4WheelsOff","Div4TailNum","Div5Airport","Div5AirportID", "Div5AirportSeqID","Div5WheelsOn","Div5TotalGTime","Div5LongestGTime","Div5WheelsOff","Div5TailNum"
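To make the hand-off from this CSV to the Parquet file used in later slides concrete, a hedged sketch of loading and persisting it with PySpark follows; it assumes an active SparkSession named spark, as elsewhere in the deck, and the CSV file name is an assumption.

 # Sketch: load the on-time performance CSV and persist it as Parquet (CSV name assumed)
 on_time_dataframe = spark.read.csv(
   "data/On_Time_On_Time_Performance_2015.csv.gz",
   header=True,
   inferSchema=True
 )
 on_time_dataframe.write.mode("overwrite").parquet("data/on_time_performance.parquet")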
  • 36. Data Syndrome: Agile Data Science 2.0 openflights.org Database Airports, Airlines, Routes 36
  • 37. Data Syndrome: Agile Data Science 2.0 Scraping the FAA Registry Airplane Data by Tail Number 37
  • 38. Data Syndrome: Agile Data Science 2.0 Wikipedia Airlines Entries Descriptions of Airlines 38
  • 39. Data Syndrome: Agile Data Science 2.0 National Centers for Environmental Information Historical Weather Observations 39
  • 40. Agile Data Science 2.0 40 How to store it and how to fetch it… Data Structure and Access Patterns
  • 41. Data Syndrome: Agile Data Science 2.0 First Order Form Storing groups of documents in key/value stores 41 Key/Value and some Document Stores Partition Tolerance No Single Master
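In MongoDB terms, first order form means fetching a whole group of documents by a single key; a minimal pymongo sketch follows, with the collection and key names as illustrative assumptions.

 # Sketch: fetch a pre-grouped document by its key (collection and key are assumptions)
 from pymongo import MongoClient

 client = MongoClient()
 flights_for_tail = client.agile_data_science.flights_by_tail_number.find_one(
   {"TailNum": "N16954"}
 )
 print(flights_for_tail)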
  • 42. Data Syndrome: Agile Data Science 2.0 First Order Form Storing groups of documents in key/value stores 42
  • 43. Data Syndrome: Agile Data Science 2.0 Second Order Form Using column stores and compound keys to enable range scans 43 Range Scans for Fun and Profit One or more Masters
  • 44. Data Syndrome: Agile Data Science 2.0 Second Order Form Using column stores and compound keys to enable range scans 44 # Achieve many access patterns via key composition: Requirement: SELECT FLIGHTS WHERE FlightDate < X AND FlightDate > Y Compose Key: TailNum + FlightDate, ex. N16954-2015-12-16 Range Scan: FROM: N16954-2015-12-01 TO: N16954-2015-01-01
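Here is a hedged sketch of that range scan expressed against MongoDB, using the composed key stored as an ordinary string field; the collection name, the field name, and the end date of the range are illustrative assumptions.

 # Sketch: range scan over a composed TailNum-FlightDate key (names and dates assumed)
 from pymongo import MongoClient

 client = MongoClient()
 flights = client.agile_data_science.flights_by_tail_and_date.find({
   "composed_key": {
     "$gte": "N16954-2015-12-01",
     "$lt":  "N16954-2016-01-01",
   }
 }).sort("composed_key", 1)

 for flight in flights:
   print(flight)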
  • 45. Data Syndrome: Agile Data Science 2.0 Third Order Form Using databases with B-Trees to query and analyze raw records 45 SELECT COUNT(*), SUM(*) … FROM … GROUP BY … WHERE …
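Since MongoDB is the publishing store in this stack, the GROUP BY style query above can also be sketched as an aggregation pipeline; the match condition and the summed fields here are assumptions.

 # Sketch: a SQL-style GROUP BY as a MongoDB aggregation pipeline (filters and fields assumed)
 from pymongo import MongoClient

 client = MongoClient()
 pipeline = [
   {"$match": {"Carrier": "DL"}},
   {"$group": {
     "_id": "$Origin",
     "total_flights": {"$sum": 1},
     "total_delay_minutes": {"$sum": "$ArrDelayMinutes"},
   }},
   {"$sort": {"total_flights": -1}},
 ]
 for row in client.agile_data_science.on_time_performance.aggregate(pipeline):
   print(row)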
  • 46. Agile Data Science 2.0 46 Using the two systems together for fun and profit! PySpark and MongoDB
  • 47. Data Syndrome: Agile Data Science 2.0 Working with Spark and MongoDB The data model for data processing with Spark and MongoDB 47 Simple Web
  • 48. Data Syndrome: Agile Data Science 2.0 Working with Spark and MongoDB The data model for data processing with Spark and MongoDB 48 Process Data Publish Data Process and Publish Data Model
  • 49. Data Syndrome: Agile Data Science 2.0 Publishing Data to MongoDB Storing data from PySpark to MongoDB 49 # PyMongo Spark Setup
 import pymongo
 import pymongo_spark
 pymongo_spark.activate()

 # Loading our data
 csv_lines = sc.textFile("data/example.csv")

 # Processing our data
 data = csv_lines.map(lambda line: line.split(","))
 schema_data = data.map(lambda x: {'name': x[0], 'company': x[1], 'title': x[2]})

 # Publishing our data to MongoDB
 schema_data.saveToMongoDB('mongodb://localhost:27017/agile_data_science.executives')
  • 50. Data Syndrome: Agile Data Science 2.0 Verifying Data in MongoDB Storing data from PySpark to MongoDB 50 > db.executives.find() { "_id" : ObjectId("5894fce281d0edd9a2016c76"), "name" : "Russell Jurney", "company" : "Relato", "title" : "CEO" } { "_id" : ObjectId("5894fce281d0edd9a2016c77"), "name" : "Florian Liebert", "company" : "Mesosphere", "title" : "CEO" } { "_id" : ObjectId("5894fce281d0edd9a2016c75"), "name" : "Don Brown", "company" : "Rocana", "title" : "CIO" }
  • 51. Data Syndrome: Agile Data Science 2.0 Loading Data from MongoDB Loading data from MongoDB to PySpark 51 # PyMongo Spark Setup
 import pymongo
 import pymongo_spark
 pymongo_spark.activate()

 # Loading our data from MongoDB
 schema_data = sc.mongoRDD('mongodb://localhost:27017/agile_data_science.executives')

 # Processing data
 names = schema_data.map(lambda record: record["name"])
 names.collect()
  • 52. Agile Data Science 2.0 52 Working our way up the data value pyramid Climbing the Stack
  • 53. Agile Data Science 2.0 53 Starting by “plumbing” the system from end to end Plumbing
  • 54. Data Syndrome: Agile Data Science 2.0 Publishing Flight Records Plumbing our master records through to the web 54 Simple Web
  • 55. Data Syndrome: Agile Data Science 2.0 Publishing Flight Records to MongoDB Plumbing our master records through to the web 55 import pymongo
 import pymongo_spark
 # Important: activate pymongo_spark.
 pymongo_spark.activate()
 # Load the parquet file
 on_time_dataframe = spark.read.parquet('data/on_time_performance.parquet')
 # Convert to RDD of dicts and save to MongoDB
 as_dict = on_time_dataframe.rdd.map(lambda row: row.asDict())
 as_dict.saveToMongoDB('mongodb://localhost:27017/agile_data_science.on_time_performance')
  • 56. Data Syndrome: Agile Data Science 2.0 Publishing Flight Records to ElasticSearch Plumbing our master records through to the web 56 # Load the parquet file
 on_time_dataframe = spark.read.parquet('data/on_time_performance.parquet')
 
 # Save the DataFrame to Elasticsearch
 on_time_dataframe.write.format("org.elasticsearch.spark.sql") \
   .option("es.resource", "agile_data_science/on_time_performance") \
   .option("es.batch.size.entries", "100") \
   .mode("overwrite") \
   .save()
  • 57. Data Syndrome: Agile Data Science 2.0 Putting Records on the Web Plumbing our master records through to the web 57 from flask import Flask, render_template, request
 from pymongo import MongoClient
 from bson import json_util
 
 # Set up Flask and Mongo
 app = Flask(__name__)
 client = MongoClient()
 
 # Controller: Fetch a flight record and display it
 @app.route("/on_time_performance")
 def on_time_performance():

   carrier = request.args.get('Carrier')
   flight_date = request.args.get('FlightDate')
   flight_num = request.args.get('FlightNum')

   flight = client.agile_data_science.on_time_performance.find_one({
     'Carrier': carrier,
     'FlightDate': flight_date,
     'FlightNum': int(flight_num)
   })

   return json_util.dumps(flight)

 if __name__ == "__main__":
   app.run(debug=True)
  • 58. Data Syndrome: Agile Data Science 2.0 Putting Records on the Web Plumbing our master records through to the web 58
  • 59. Data Syndrome: Agile Data Science 2.0 Putting Records on the Web Plumbing our master records through to the web 59
  • 60. Agile Data Science 2.0 60 Getting to know your data Tables and Charts
  • 61. Data Syndrome: Agile Data Science 2.0 Tables in PySpark Back end development in PySpark 61 # Load the parquet file
 on_time_dataframe = spark.read.parquet('data/on_time_performance.parquet')
 
 # Use SQL to look at the total flights by month across 2015
 on_time_dataframe.registerTempTable("on_time_dataframe")
 total_flights_by_month = spark.sql(
 """SELECT Month, Year, COUNT(*) AS total_flights
 FROM on_time_dataframe
 GROUP BY Year, Month
 ORDER BY Year, Month"""
 )
 
 # This map/asDict trick makes the rows print a little prettier. It is optional.
 flights_chart_data = total_flights_by_month.rdd.map(lambda row: row.asDict())
 flights_chart_data.collect()
 
 # Save chart to MongoDB
 import pymongo_spark
 pymongo_spark.activate()
 flights_chart_data.saveToMongoDB(
 'mongodb://localhost:27017/agile_data_science.flights_by_month'
 )
  • 62. Data Syndrome: Agile Data Science 2.0 Tables in Flask and Jinja2 Front end development in Flask: controller and template 62 # Controller: Fetch a flight table
 @app.route("/total_flights")
 def total_flights():
   total_flights = client.agile_data_science.flights_by_month.find({},
     sort = [
       ('Year', 1),
       ('Month', 1)
     ])
   return render_template('total_flights.html', total_flights=total_flights)

 {% extends "layout.html" %}
 {% block body %}
 <div>
 <p class="lead">Total Flights by Month</p>
 <table class="table table-condensed table-striped" style="width: 200px;">
 <thead>
 <th>Month</th>
 <th>Total Flights</th>
 </thead>
 <tbody>
 {% for month in total_flights %}
 <tr>
 <td>{{month.Month}}</td>
 <td>{{month.total_flights}}</td>
 </tr>
 {% endfor %}
 </tbody>
 </table>
 </div>
 {% endblock %}
  • 63. Data Syndrome: Agile Data Science 2.0 Tables Visualizing data 63
  • 64. Data Syndrome: Agile Data Science 2.0 Charts in Flask and d3.js Visualizing data with JSON and d3.js 64 # Serve the chart's data via an asynchronous request (formerly known as 'AJAX')
 @app.route("/total_flights.json")
 def total_flights_json():
   total_flights = client.agile_data_science.flights_by_month.find({},
     sort = [
       ('Year', 1),
       ('Month', 1)
     ])
   return json_util.dumps(total_flights, ensure_ascii=False)

 var width = 960,
     height = 350;

 var y = d3.scale.linear()
   .range([height, 0]);
 // We define the domain once we get our data in d3.json, below

 var chart = d3.select(".chart")
   .attr("width", width)
   .attr("height", height);

 d3.json("/total_flights.json", function(data) {

   y.domain([0, d3.max(data, function(d) { return d.total_flights; })]);

   var barWidth = width / data.length;
   var bar = chart.selectAll("g")
     .data(data)
     .enter()
     .append("g")
     .attr("transform", function(d, i) { return "translate(" + i * barWidth + ",0)"; });

   bar.append("rect")
     .attr("y", function(d) { return y(d.total_flights); })
     .attr("height", function(d) { return height - y(d.total_flights); })
     .attr("width", barWidth - 1);

   bar.append("text")
     .attr("x", barWidth / 2)
     .attr("y", function(d) { return y(d.total_flights) + 3; })
     .attr("dy", ".75em")
     .text(function(d) { return d.total_flights; });
 });
  • 65. Data Syndrome: Agile Data Science 2.0 Charts Visualizing data 65
  • 66. Data Syndrome: Agile Data Science 2.0 Charts in Flask and d3.js Visualizing data with JSON and d3.js 66 # Serve the chart's data via an asynchronous request (formerly known as 'AJAX')
 @app.route("/total_flights.json")
 def total_flights_json():
   total_flights = client.agile_data_science.flights_by_month.find({},
     sort = [
       ('Year', 1),
       ('Month', 1)
     ])
   return json_util.dumps(total_flights, ensure_ascii=False)

 var width = 960,
 height = 350;
 
 var y = d3.scale.linear()
 .range([height, 0]);
 // We define the domain once we get our data in d3.json, below
 
 var chart = d3.select(".chart")
 .attr("width", width)
 .attr("height", height);
 
 d3.json("/total_flights.json", function(data) {
 
 var defaultColor = 'steelblue';
 var modeColor = '#4CA9F5';
 
 var maxY = d3.max(data, function(d) { return d.total_flights; });
 y.domain([0, maxY]);
 
 var varColor = function(d, i) {
 if(d['total_flights'] == maxY) { return modeColor; }
 else { return defaultColor; }
 }
 var barWidth = width / data.length;
 var bar = chart.selectAll("g")
 .data(data)
 .enter()
 .append("g")
 .attr("transform", function(d, i) { return "translate(" + i * barWidth + ",0)"; });
 
 bar.append("rect")
 .attr("y", function(d) { return y(d.total_flights); })
 .attr("height", function(d) { return height - y(d.total_flights); })
 .attr("width", barWidth - 1)
 .style("fill", varColor);
 
 bar.append("text")
 .attr("x", barWidth / 2)
 .attr("y", function(d) { return y(d.total_flights) + 3; })
 .attr("dy", ".75em")
 .text(function(d) { return d.total_flights; });
 });
  • 67. Data Syndrome: Agile Data Science 2.0 Charts Visualizing data 67
  • 68. Agile Data Science 2.0 68 Exploring your data through interaction Reports
  • 69. Data Syndrome: Agile Data Science 2.0 Creating Interactive Ontologies from Semi-Structured Data Extracting and visualizing entities 69
  • 70. Data Syndrome: Agile Data Science 2.0 Home Page Extracting and decorating entities 70
  • 71. Data Syndrome: Agile Data Science 2.0 Airline Entities on openflights.org Extracting and decorating entities 71
  • 72. Data Syndrome: Agile Data Science 2.0 Airline Entities on openflights.org Extracting and decorating entities 72 # download.sh
 curl -Lko /tmp/airlines.dat https://raw.githubusercontent.com/jpatokal/openflights/master/data/airlines.dat
 mv /tmp/airlines.dat $PROJECT_HOME/data/airlines.csv
  • 73. Data Syndrome: Agile Data Science 2.0 Airline Entity Data in FAA On-Time Performance Records Extracting and decorating entities 73
 airlines = spark.read.format('com.databricks.spark.csv') \
   .options(header='false', nullValue='N') \
   .load('data/airlines.csv')
 airlines.show()

 # Is Delta around?
 airlines.filter(airlines.C3 == 'DL').show()

 +----+---------------+----+---+---+-----+-------------+---+
 | C0| C1| C2| C3| C4| C5| C6| C7|
 +----+---------------+----+---+---+-----+-------------+---+
 |2009|Delta Air Lines|null| DL|DAL|DELTA|United States| Y|
 +----+---------------+----+---+---+-----+-------------+---+
  • 74. Data Syndrome: Agile Data Science 2.0 Airline Entity Preparation Extracting and decorating entities 74
 # Drop fields except for C1 as name, C3 as carrier code
 airlines.registerTempTable("airlines")
 airlines = spark.sql("SELECT C1 AS Name, C3 AS CarrierCode from airlines")

 # Join our 14 carrier codes to the airlines table to get our set of airlines
 our_airlines = carrier_codes.join(
   airlines,
   carrier_codes.Carrier == airlines.CarrierCode
 )
 our_airlines = our_airlines.select('Name', 'CarrierCode')
 our_airlines.show()
 our_airlines.repartition(1).write.json("data/our_airlines.json")
  • 75. Data Syndrome: Agile Data Science 2.0 Airline Entity Preparation Extracting and decorating entities 75 cat data/our_airlines.jsonl:
 {"Name":"American Airlines","CarrierCode":"AA"}
 {"Name":"Spirit Airlines","CarrierCode":"NK"}
 {"Name":"Hawaiian Airlines","CarrierCode":"HA"}
 {"Name":"Alaska Airlines","CarrierCode":"AS"}
 {"Name":"JetBlue Airways","CarrierCode":"B6"}
 {"Name":"United Airlines","CarrierCode":"UA"}
 {"Name":"US Airways","CarrierCode":"US"}
 {"Name":"SkyWest","CarrierCode":"OO"}
 {"Name":"Virgin America","CarrierCode":"VX"}
 {"Name":"Southwest Airlines","CarrierCode":"WN"}
 {"Name":"Delta Air Lines","CarrierCode":"DL"}
 {"Name":"Atlantic Southeast Airlines","CarrierCode":"EV"}
 {"Name":"Frontier Airlines","CarrierCode":"F9"}
 {"Name":"American Eagle Airlines","CarrierCode":"MQ"}
  • 76. Data Syndrome: Agile Data Science 2.0 Wikipedia Enrichment Using Wikipedia to enhance the Airline page 76
  • 77. Data Syndrome: Agile Data Science 2.0 Wikipedia Enrichment Using Wikipedia to enhance the Airline page 77 import wikipedia from bs4 import BeautifulSoup import tldextract # Load our airlines... our_airlines = utils.read_json_lines_file('data/our_airlines.jsonl') # Build a new list that includes Wikipedia data with_url = [] for airline in our_airlines: # Get the Wikipedia page for the airline name wikipage = wikipedia.page(airline['Name']) # Get the summary summary = wikipage.summary airline['summary'] = summary # Get the HTML of the page page = BeautifulSoup(wikipage.html()) # Task: get the logo from the right 'vcard' column # 1) Get the vcard table vcard_table = page.find_all('table', class_='vcard')[0] # 2) The logo is always the first image inside this table first_image = vcard_table.find_all('img')[0] # 3) Set the URL to the image logo_url = 'http:' + first_image.get('src') airline['logo_url'] = logo_url # Task: get the company website # 1) Find the 'Website' table header th = page.find_all('th', text='Website')[0] # 2) Find the parent tr element tr = th.parent # 3) Find the a (link) tag within the tr a = tr.find_all('a')[0] # 4) Finally, get the href of the a tag url = a.get('href') airline['url'] = url # Get the domain to display with the URL url_parts = tldextract.extract(url) airline['domain'] = url_parts.domain + '.' + url_parts.suffix with_url.append(airline) utils.write_json_lines_file(with_url, 'data/our_airlines_with_wiki.jsonl')
  • 78. Data Syndrome: Agile Data Science 2.0 Airline Entity Extracting and decorating entities 78 mongoimport -d agile_data_science -c airlines --file data/our_airlines_with_wiki.jsonl
  • 79. Data Syndrome: Agile Data Science 2.0 Airline Entity Extracting and decorating entities 79 db.airlines.findOne(); { "_id" : ObjectId("57c0e656818573ed12d584d1"), "CarrierCode" : "AA", "url" : "http://www.aa.com", "logo_url" : "http://upload.wikimedia.org/.../300px-American... _2013.svg.png", "Name" : "American Airlines", "summary" : "American Airlines, Inc. (AA), commonly referred to as American..." }
  • 80. Data Syndrome: Agile Data Science 2.0 Airline Entity Extracting and decorating entities 80
 # Controller: Fetch an airline entity page
 @app.route("/airline/<carrier_code>")
 def airline(carrier_code):
   airline_summary = client.agile_data_science.airlines.find_one(
     {'CarrierCode': carrier_code}
   )
   airline_airplanes = client.agile_data_science.airplanes_per_carrier.find_one(
     {'Carrier': carrier_code}
   )
   return render_template(
     'airlines.html',
     airline_summary=airline_summary,
     airline_airplanes=airline_airplanes,
     carrier_code=carrier_code
   )
  • 81. Data Syndrome: Agile Data Science 2.0 Airline Entity Extracting and decorating entities 81 {% extends "layout.html" %} {% block body %} <!-- Navigation guide --> / <a href="/airlines">Airlines</a> / <a href="/airline/{{carrier_code}}">{{carrier_code}}</a> <!-- Logo --> <img src="{{airline_summary.logo_url}}" style="float: right;"/> <p class="lead"> <!-- Airline name and website--> {{airline_summary.Name}} / <a href="{{airline_summary.url}}">{{airline_summary.domain}}</a> </p> <!-- Summary --> <p style="text-align: justify;">{{airline_summary.summary}}</p> <h4>Fleet: {{airline_airplanes.FleetCount}} Planes</h4> <ul class="nav nav-pills"> {% for tail_number in airline_airplanes.TailNumbers -%} <li class="button"> <a href="/airplane/{{tail_number}}">{{tail_number}}</a> </li> {% endfor -%} </ul> {% endblock %}
  • 82. Data Syndrome: Agile Data Science 2.0 A Single Airline Entity @ /airline/DL Extracting and decorating entities 82
  • 83. Data Syndrome: Agile Data Science 2.0 Summarizing all Airline Entities @ /airplanes Describing entities in aggregate 83
  • 84. Data Syndrome: Agile Data Science 2.0 Summarizing Airlines 2.0 Describing entities in aggregate 84
  • 85. Data Syndrome: Agile Data Science 2.0 Summarizing Airlines 3.0 Describing entities in aggregate 85
  • 86. Data Syndrome: Agile Data Science 2.0 Summarizing Airlines 3.0 - Entity Resolution Problem Describing entities in aggregate 86 ... AIRBUS AIRBUS INDUSTRIE ... EMBRAER EMBRAER S A ... GULFSTREAM AEROSPACE GULFSTREAM AEROSPACE CORP ... MCDONNELL DOUGLAS MCDONNELL DOUGLAS AIRCRAFT CO MCDONNELL DOUGLAS CORPORATION ...
  • 87. Data Syndrome: Agile Data Science 2.0 Summarizing Airlines 3.0 - Entity Resolution Hack Describing entities in aggregate 87 # Detect the longest common beginning string in a pair of strings def longest_common_beginning(s1, s2): if s1 == s2: return s1 min_length = min(len(s1), len(s2)) i = 0 while i < min_length: if s1[i] == s2[i]: i += 1 else: break return s1[0:i] # Compare two manufacturers, returning a tuple describing the result def compare_manufacturers(mfrs): mfr1 = mfrs[0] mfr2 = mfrs[1] lcb = longest_common_beginning(mfr1, mfr2) len_lcb = len(lcb) record = { 'mfr1': mfr1, 'mfr2': mfr2, 'lcb': lcb, 'len_lcb': len_lcb, 'eq': mfr1 == mfr2 } return record # Pair every unique instance of Manufacturer field with every # other for comparison comparison_pairs = manufacturer_variety.join(manufacturer_variety) # Do the comparisons comparisons = comparison_pairs.rdd.map(compare_manufacturers) # Matches have > 5 starting chars in common matches = comparisons.filter(lambda f: f['eq'] == False and f['len_lcb'] > 5) # # Now we create a mapping of duplicate keys from their raw value # to the one we're going to use # # 1) Group the matches by the longest common beginning ('lcb') common_lcbs = matches.groupBy(lambda x: x['lcb']) # 2) Emit the raw value for each side of the match along with the key, our 'lcb' mfr1_map = common_lcbs.map( lambda x: [(y['mfr1'], x[0]) for y in x[1]]).flatMap(lambda x: x) mfr2_map = common_lcbs.map( lambda x: [(y['mfr2'], x[0]) for y in x[1]]).flatMap(lambda x: x) # 3) Combine the two sides of the comparison's records map_with_dupes = mfr1_map.union(mfr2_map) # 4) Remove duplicates mfr_dedupe_mapping = map_with_dupes.distinct() # 5) Convert mapping to dataframe to join to airplanes dataframe mapping_dataframe = mfr_dedupe_mapping.toDF() # 6) Give the mapping column names mapping_dataframe.registerTempTable("mapping_dataframe") mapping_dataframe = spark.sql( "SELECT _1 AS Raw, _2 AS NewManufacturer FROM mapping_dataframe" )
  • 88. Data Syndrome: Agile Data Science 2.0 Summarizing Airlines 4.0 Describing entities in aggregate 88
  • 89. Data Syndrome: Agile Data Science 2.0 Searching Flights Using ElasticSearch to search for flights 89
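A minimal sketch of querying the index written earlier with the elasticsearch Python client follows; the query string and result handling are assumptions, not the deck's code.

 # Sketch: search flights in the Elasticsearch index (query and size are assumptions)
 from elasticsearch import Elasticsearch

 es = Elasticsearch()
 results = es.search(
   index="agile_data_science",
   body={
     "query": {
       "query_string": {"query": "Origin: ATL AND Dest: ORD"}
     },
     "size": 10,
   }
 )
 for hit in results["hits"]["hits"]:
   print(hit["_source"])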
  • 90. Agile Data Science 2.0 90 Predicting the future for fun and profit Predictions
  • 91. Data Syndrome: Agile Data Science 2.0 Back End Design Deep Storage and Spark vs Kafka and Spark Streaming 91 / Batch Realtime Historical Data Train Model Apply Model Realtime Data
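To make the "Apply Model" half of this diagram concrete, a sketch of loading the batch-trained classifier and applying it follows; the model path mirrors the training script later in the deck, and the feature DataFrame is assumed to already exist.

 # Sketch: load the batch-trained classifier and apply it to prepared feature vectors
 # (final_vectorized_features is assumed to be built as in the training script)
 from pyspark.ml.classification import RandomForestClassificationModel

 model_path = "models/spark_random_forest_classifier.flight_delays.5.0.bin"
 model = RandomForestClassificationModel.load(model_path)

 predictions = model.transform(final_vectorized_features)
 predictions.select("Prediction").show(5)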
  • 92. Data Syndrome: Agile Data Science 2.0 92 jQuery in the web client submits a form to create the prediction request, and then polls another URL every few seconds until the prediction is ready. The request generates a Kafka event, which a Spark Streaming worker processes by applying the model we trained in batch. Having done so, it inserts a record for the prediction in MongoDB, where the Flask app sends it to the web client the next time it polls the server. Front End Design /flights/delays/predict/classify_realtime/
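A hedged sketch of the submit side of this flow follows: the Flask controller assigns a UUID, emits the request to Kafka with kafka-python, and returns the id the client polls on. The topic name, the form fields, and the reuse of app and json_util from the earlier Flask code are assumptions.

 # Sketch: emit a prediction request to Kafka from Flask (topic and fields assumed;
 # app and json_util come from the Flask setup shown earlier in the deck)
 import json, uuid
 from flask import request
 from kafka import KafkaProducer

 producer = KafkaProducer(bootstrap_servers="localhost:9092")

 @app.route("/flights/delays/predict/classify_realtime", methods=["POST"])
 def classify_flight_delays_realtime():
   prediction_request = {
     "UUID": str(uuid.uuid4()),
     "Carrier": request.form.get("Carrier"),
     "FlightDate": request.form.get("FlightDate"),
     "Origin": request.form.get("Origin"),
     "Dest": request.form.get("Dest"),
     "DepDelay": float(request.form.get("DepDelay", 0.0)),
   }
   producer.send(
     "flight_delay_classification_request",
     json.dumps(prediction_request).encode("utf-8")
   )
   return json_util.dumps({"status": "OK", "id": prediction_request["UUID"]})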
  • 93. Data Syndrome: Agile Data Science 2.0 User Interface Where the user submits prediction requests 93
  • 94. Data Syndrome: Agile Data Science 2.0 String Vectorization From properties of items to vector format 94
  • 95. Data Syndrome: Agile Data Science 2.0 95 The scikit-learn version was 166 lines. Spark MLlib is very powerful! http://bit.ly/train_model_spark 190 Line Model #!/usr/bin/env python
 
 import sys, os, re
 
 # Pass date and base path to main() from airflow
 def main(base_path):
 
 # Default to "."
 try: base_path
 except NameError: base_path = "."
 if not base_path:
 base_path = "."
 
 APP_NAME = "train_spark_mllib_model.py"
 
 # If there is no SparkSession, create the environment
 try:
 sc and spark
 except NameError as e:
 import findspark
 findspark.init()
 import pyspark
 import pyspark.sql
 
 sc = pyspark.SparkContext()
 spark = pyspark.sql.SparkSession(sc).builder.appName(APP_NAME).getOrCreate()
 
 #
 # {
 # "ArrDelay":5.0,"CRSArrTime":"2015-12-31T03:20:00.000-08:00","CRSDepTime":"2015-12-31T03:05:00.000-08:00",
 # "Carrier":"WN","DayOfMonth":31,"DayOfWeek":4,"DayOfYear":365,"DepDelay":14.0,"Dest":"SAN","Distance":368.0,
 # "FlightDate":"2015-12-30T16:00:00.000-08:00","FlightNum":"6109","Origin":"TUS"
 # }
 #
 from pyspark.sql.types import StringType, IntegerType, FloatType, DoubleType, DateType, TimestampType
 from pyspark.sql.types import StructType, StructField
 from pyspark.sql.functions import udf
 
 schema = StructType([
 StructField("ArrDelay", DoubleType(), True), # "ArrDelay":5.0
 StructField("CRSArrTime", TimestampType(), True), # "CRSArrTime":"2015-12-31T03:20:00.000-08:00"
 StructField("CRSDepTime", TimestampType(), True), # "CRSDepTime":"2015-12-31T03:05:00.000-08:00"
 StructField("Carrier", StringType(), True), # "Carrier":"WN"
 StructField("DayOfMonth", IntegerType(), True), # "DayOfMonth":31
 StructField("DayOfWeek", IntegerType(), True), # "DayOfWeek":4
 StructField("DayOfYear", IntegerType(), True), # "DayOfYear":365
 StructField("DepDelay", DoubleType(), True), # "DepDelay":14.0
 StructField("Dest", StringType(), True), # "Dest":"SAN"
 StructField("Distance", DoubleType(), True), # "Distance":368.0
 StructField("FlightDate", DateType(), True), # "FlightDate":"2015-12-30T16:00:00.000-08:00"
 StructField("FlightNum", StringType(), True), # "FlightNum":"6109"
 StructField("Origin", StringType(), True), # "Origin":"TUS"
 ])
 
 input_path = "{}/data/simple_flight_delay_features.jsonl.bz2".format(
 base_path
 )
 features = spark.read.json(input_path, schema=schema)
 features.first()
 
 #
 # Check for nulls in features before using Spark ML
 #
 null_counts = [(column, features.where(features[column].isNull()).count()) for column in features.columns]
 cols_with_nulls = filter(lambda x: x[1] > 0, null_counts)
 print(list(cols_with_nulls))
 
 #
 # Add a Route variable to replace FlightNum
 #
 from pyspark.sql.functions import lit, concat
 features_with_route = features.withColumn(
 'Route',
 concat(
 features.Origin,
 lit('-'),
 features.Dest
 )
 )
 features_with_route.show(6)
 
 #
 # Use pyspark.ml.feature.Bucketizer to bucketize ArrDelay into early, on-time, slightly late and very late (0, 1, 2, 3)
 #
 from pyspark.ml.feature import Bucketizer
 
 # Setup the Bucketizer
 splits = [-float("inf"), -15.0, 0, 30.0, float("inf")]
 arrival_bucketizer = Bucketizer(
 splits=splits,
 inputCol="ArrDelay",
 outputCol="ArrDelayBucket"
 )
 
 # Save the bucketizer
 arrival_bucketizer_path = "{}/models/arrival_bucketizer_2.0.bin".format(base_path)
 arrival_bucketizer.write().overwrite().save(arrival_bucketizer_path)
 
 # Apply the bucketizer
 ml_bucketized_features = arrival_bucketizer.transform(features_with_route)
 ml_bucketized_features.select("ArrDelay", "ArrDelayBucket").show()
 
 #
 # Extract features tools in with pyspark.ml.feature
 #
 from pyspark.ml.feature import StringIndexer, VectorAssembler
 
 # Turn category fields into indexes
 for column in ["Carrier", "Origin", "Dest", "Route"]:
 string_indexer = StringIndexer(
 inputCol=column,
 outputCol=column + "_index"
 )
 
 string_indexer_model = string_indexer.fit(ml_bucketized_features)
 ml_bucketized_features = string_indexer_model.transform(ml_bucketized_features)
 
 # Drop the original column
 ml_bucketized_features = ml_bucketized_features.drop(column)
 
 # Save the pipeline model
 string_indexer_output_path = "{}/models/string_indexer_model_{}.bin".format(
 base_path,
 column
 )
 string_indexer_model.write().overwrite().save(string_indexer_output_path)
 
 # Combine continuous, numeric fields with indexes of nominal ones
 # ...into one feature vector
 numeric_columns = [
 "DepDelay", "Distance",
 "DayOfMonth", "DayOfWeek",
 "DayOfYear"]
 index_columns = ["Carrier_index", "Origin_index",
 "Dest_index", "Route_index"]
 vector_assembler = VectorAssembler(
 inputCols=numeric_columns + index_columns,
 outputCol="Features_vec"
 )
 final_vectorized_features = vector_assembler.transform(ml_bucketized_features)
 
 # Save the numeric vector assembler
 vector_assembler_path = "{}/models/numeric_vector_assembler.bin".format(base_path)
 vector_assembler.write().overwrite().save(vector_assembler_path)
 
 # Drop the index columns
 for column in index_columns:
 final_vectorized_features = final_vectorized_features.drop(column)
 
 # Inspect the finalized features
 final_vectorized_features.show()
 
 # Instantiate and fit random forest classifier on all the data
 from pyspark.ml.classification import RandomForestClassifier
 rfc = RandomForestClassifier(
 featuresCol="Features_vec",
 labelCol="ArrDelayBucket",
 predictionCol="Prediction",
 maxBins=4657,
 maxMemoryInMB=1024
 )
 model = rfc.fit(final_vectorized_features)
 
 # Save the new model over the old one
 model_output_path = "{}/models/spark_random_forest_classifier.flight_delays.5.0.bin".format(
 base_path
 )
 model.write().overwrite().save(model_output_path)
 
 # Evaluate model using test data
 predictions = model.transform(final_vectorized_features)
 
 from pyspark.ml.evaluation import MulticlassClassificationEvaluator
 evaluator = MulticlassClassificationEvaluator(
 predictionCol="Prediction",
 labelCol="ArrDelayBucket",
 metricName="accuracy"
 )
 accuracy = evaluator.evaluate(predictions)
 print("Accuracy = {}".format(accuracy))
 
 # Check the distribution of predictions
 predictions.groupBy("Prediction").count().show()
 
 # Check a sample
 predictions.sample(False, 0.001, 18).orderBy("CRSDepTime").show(6)
 
 if __name__ == "__main__":
 main(sys.argv[1])

  • 96. Data Syndrome: Agile Data Science 2.0 96 Using the model in realtime via Spark Streaming! Deploying the Model
  • 97. Data Syndrome: Agile Data Science 2.0 Deployment of Model Training to Airflow Using Airflow to batch schedule the training of our model 97 training_dag = DAG( 'agile_data_science_batch_prediction_model_training', default_args=default_args ) # We use the same two commands for all our PySpark tasks pyspark_bash_command = """ spark-submit --master {{ params.master }} {{ params.base_path }}/{{ params.filename }} {{ params.base_path }} """ pyspark_date_bash_command = """ spark-submit --master {{ params.master }} {{ params.base_path }}/{{ params.filename }} {{ ds }} {{ params.base_path }} """ # Gather the training data for our classifier extract_features_operator = BashOperator( task_id = "pyspark_extract_features", bash_command = pyspark_bash_command, params = { "master": "local[8]", "filename": "ch08/extract_features.py", "base_path": "{}/".format(PROJECT_HOME) }, dag=training_dag ) # Train and persist the classifier model train_classifier_model_operator = BashOperator( task_id = "pyspark_train_classifier_model", bash_command = pyspark_bash_command, params = { "master": "local[8]", "filename": "ch08/train_spark_mllib_model.py", "base_path": "{}/".format(PROJECT_HOME) }, dag=training_dag ) # The model training depends on the feature extraction train_classifier_model_operator.set_upstream(extract_features_operator)
  • 98. Data Syndrome: Agile Data Science 2.0 Deployment of Model Training to Airflow Using Airflow to batch schedule the training of our model 98
  • 99. Data Syndrome: Agile Data Science 2.0 Connecting to Kafka Creating a direct stream to the Kafka queue containing our prediction requests 99 #
 # Process Prediction Requests in Streaming
 #
 from pyspark.streaming.kafka import KafkaUtils
 
 stream = KafkaUtils.createDirectStream(
 ssc,
 [PREDICTION_TOPIC],
 {
 "metadata.broker.list": BROKERS,
 "group.id": "0",
 }
 )
 
 object_stream = stream.map(lambda x: json.loads(x[1]))
 object_stream.pprint()
  • 100. Data Syndrome: Agile Data Science 2.0 Storing to Mongo from PySpark Streaming Putting the result where our web application can find it 100 # Store to Mongo
 if final_predictions.count() > 0:
   final_predictions.rdd.map(lambda x: x.asDict()).saveToMongoDB(
     "mongodb://localhost:27017/agile_data_science.flight_delay_classification_response"
   )
  • 101. Data Syndrome: Agile Data Science 2.0 Fetching Results from Mongo Getting the prediction request back to our front end 101
 @app.route("/flights/delays/predict/classify_realtime/response/<unique_id>")
 def classify_flight_delays_realtime_response(unique_id):
   """Serves predictions to polling requestors"""
   prediction = client.agile_data_science.\
     flight_delay_classification_response.find_one(
       {"UUID": unique_id}
     )
   response = {"status": "WAIT", "id": unique_id}
   if prediction:
     response["status"] = "OK"
     response["prediction"] = prediction
   return json_util.dumps(response)

 /flights/delays/predict/classify_realtime/
  • 102. Data Syndrome: Agile Data Science 2.0 Final Result of Deployed Prediction The bang for the buck! 102 python ch08/make_predictions_streaming.py .
  • 103. Data Syndrome: Agile Data Science 2.0 Final Result of Deployed Prediction The bang for the buck! 103 ------------------------------------------- Time: 2016-12-20 18:06:50 ------------------------------------------- { 'Dest': 'ORD', 'DayOfYear': 360, 'FlightDate': '2016-12-25', 'Distance': 606.0, 'DayOfMonth': 25, 'UUID': 'a01b5ccb-49f1-4c4d-af34-188c6ae0bbf0', 'FlightNum': '2010', 'Carrier': 'AA', 'DepDelay': -100.0, 'DayOfWeek': 6, 'Timestamp': '2016-12-20T18:06:45.307114', 'Origin': 'ATL' } ------------------------------------------- Time: 2016-12-20 18:06:50 ------------------------------------------- Row(Carrier='AA', DayOfMonth=25, DayOfWeek=6, DayOfYear=360, DepDelay=-100.0, Dest='ORD', Distance=606.0, FlightDate=datetime.datetime(2016, 12, 25, 0, 0, tzinfo=<iso8601.Utc>), FlightNum='2010', Origin='ATL', Timestamp=datetime.datetime(2016, 12, 20, 18, 6, 45, 307114, tzinfo=<iso8601.Utc>), UUID='a01b5ccb-49f1-4c4d-af34-188c6ae0bbf0')
  • 104. Data Syndrome: Agile Data Science 2.0 Final Result of Deployed Prediction The bang for the buck! 104 +-------+----------+---------+---------+ |Carrier|...| UUID| +-------+----------+---------+---------+ | AA|...|a01b5ccb-49f1-4c4...| +-------+----------+---------+---------+ +-------+----------+---------+---------+ |Carrier|...| Features_vec| +-------+----------+---------+---------+ | AA|...|(8009,[2,38,51,34...| +-------+----------+---------+---------+ +-------+----------+---------+---------+ |Carrier|...| Features_vec| +-------+----------+---------+---------+ | AA|...|(8009,[2,38,51,34...| +-------+----------+---------+---------+ +-------+----------+---------+---------+ |Carrier|...|Prediction| +-------+----------+---------+---------+ | AA|...| 0.0| +-------+----------+---------+---------+
  • 105. Data Syndrome: Agile Data Science 2.0 Final Result of Deployed Prediction The bang for the buck! 105
  • 106. Data Syndrome: Agile Data Science 2.0 106 Next steps for learning more about Agile Data Science 2.0 Next Steps
  • 107. Building Full-Stack Data Analytics Applications with Spark http://bit.ly/agile_data_science Available Now on O’Reilly Safari: http://bit.ly/agile_data_safari Agile Data Science 2.0
  • 108. Agile Data Science 2.0 108 Realtime Predictive Analytics Rapidly learn to build entire predictive systems driven by Kafka, PySpark, Spark Streaming and Spark MLlib, with a web front-end using Python/Flask and jQuery. Available for purchase at http://datasyndrome.com/video
  • 109. Data Syndrome Russell Jurney Principal Consultant Email : rjurney@datasyndrome.com Web : datasyndrome.com Data Syndrome, LLC Product Consulting We build analytics products and systems consisting of big data viz, predictions, recommendations, reports and search. Corporate Training We offer training courses for data scientists and engineers and data science teams, Video Training We offer video training courses that rapidly acclimate you with a technology and technique.