How Apache Spark 
fits into the 
Big Data landscape 
Licensed under a Creative Commons Attribution- 
NonCommercial-NoDerivatives 4.0 International License
What is Spark?
What is Spark? 
Developed in 2009 at UC Berkeley AMPLab, then 
open sourced in 2010, Spark has since become 
one of the largest OSS communities in big data, 
with over 200 contributors in 50+ organizations 
spark.apache.org 
“Organizations that are looking at big data challenges – 
including collection, ETL, storage, exploration and analytics – 
should consider Spark for its in-memory performance and 
the breadth of its model. It supports advanced analytics 
solutions on Hadoop clusters, including the iterative model 
required for machine learning and graph analysis.” 
Gartner, Advanced Analytics and Data Science (2014)
What is Spark?
What is Spark? 
Spark Core is the general execution engine for the 
Spark platform that other functionality is built atop: 
• in-memory computing capabilities deliver speed 
• general execution model supports wide variety 
of use cases 
• ease of development – native APIs in Java, Scala, 
Python (+ SQL, Clojure, R)
What is Spark? 
WordCount in 3 lines of Spark 
WordCount in 50+ lines of Java MR
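For reference, a minimal sketch of that 3-line WordCount in Spark's Scala API 
(the input and output paths are placeholders, as on the slide): 

val file = sc.textFile("hdfs://...") 
val counts = file.flatMap(line => line.split(" ")) 
                 .map(word => (word, 1)) 
                 .reduceByKey(_ + _) 
counts.saveAsTextFile("hdfs://...")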
What is Spark? 
Sustained exponential growth, as one of the most 
active Apache projects ohloh.net/orgs/apache
A Brief History
A Brief History: Functional Programming for Big Data 
Theory, Eight Decades Ago: 
what can be computed? 
Haskell Curry 
haskell.org 
Alonzo Church 
wikipedia.org 
Praxis, Four Decades Ago: 
algebra for applicative systems 
John Backus 
acm.org 
David Turner 
wikipedia.org 
Reality, Two Decades Ago: 
machine data from web apps 
Pattie Maes 
MIT Media Lab
A Brief History: Functional Programming for Big Data 
circa late 1990s: 
explosive growth of e-commerce and machine data 
meant that workloads could no longer fit on a 
single computer… 
notable firms led the shift to horizontal scale-out 
on clusters of commodity hardware, especially 
for machine learning use cases at scale
Stakeholder Customers 
RDBMS 
SQL Query 
result sets 
recommenders 
+ 
classifiers 
Web Apps 
customer 
transactions 
Algorithmic 
Modeling 
Logs 
event 
history 
aggregation 
dashboards 
Product 
Engineering 
UX 
DW ETL 
Middleware 
models servlets
Amazon 
“Early Amazon: Splitting the website” – Greg Linden 
glinden.blogspot.com/2006/02/early-amazon-splitting- 
website.html 
eBay 
“The eBay Architecture” – Randy Shoup, Dan Pritchett 
addsimplicity.com/adding_simplicity_an_engi/ 
2006/11/you_scaled_your.html 
addsimplicity.com.nyud.net:8080/downloads/ 
eBaySDForum2006-11-29.pdf 
Inktomi (YHOO Search) 
“Inktomi’s Wild Ride” – Eric Brewer (0:05:31 ff) 
youtu.be/E91oEn1bnXM 
Google 
“Underneath the Covers at Google” – Jeff Dean (0:06:54 ff) 
youtu.be/qsan-GQaeyk 
perspectives.mvdirona.com/2008/06/11/ 
JeffDeanOnGoogleInfrastructure.aspx 
MIT Media Lab 
“Social Information Filtering for Music Recommendation” – Pattie Maes 
pubs.media.mit.edu/pubs/papers/32paper.ps 
ted.com/speakers/pattie_maes.html
A Brief History: Functional Programming for Big Data 
circa 2002: 
mitigate the risk of losing large distributed 
workloads to disk failures on commodity hardware… 
Google File System 
Sanjay Ghemawat, Howard Gobioff, Shun-Tak Leung 
research.google.com/archive/gfs.html 
MapReduce: Simplified Data Processing on Large Clusters 
Jeffrey Dean, Sanjay Ghemawat 
research.google.com/archive/mapreduce.html
A Brief History: Functional Programming for Big Data 
2002: MapReduce @ Google 
2004: MapReduce paper 
2006: Hadoop @ Yahoo! 
2008: Hadoop Summit 
2010: Spark paper 
2014: Apache Spark becomes an Apache top-level project 
A Brief History: Functional Programming for Big Data 
General Batch Processing: MapReduce 
Specialized Systems (iterative, interactive, streaming, graph, etc.): 
Pregel, Giraph, Dremel, Drill, Impala, Tez, S4, Storm, F1, MillWheel, GraphLab 
MR doesn’t compose well for large applications, 
and so specialized systems emerged as workarounds 
A Brief History: Functional Programming for Big Data 
circa 2010: 
a unified engine for enterprise data workflows, 
based on commodity hardware a decade later… 
Spark: Cluster Computing with Working Sets 
Matei Zaharia, Mosharaf Chowdhury, 
Michael Franklin, Scott Shenker, Ion Stoica 
people.csail.mit.edu/matei/papers/2010/hotcloud_spark.pdf 
Resilient Distributed Datasets: A Fault-Tolerant Abstraction for 
In-Memory Cluster Computing 
Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, 
Justin Ma, Murphy McCauley, Michael Franklin, Scott Shenker, Ion Stoica 
usenix.org/system/files/conference/nsdi12/nsdi12-final138.pdf
A Brief History: Functional Programming for Big Data 
In addition to simple map and reduce operations, 
Spark supports SQL queries, streaming data, and 
complex analytics such as machine learning and 
graph algorithms out-of-the-box. 
Better yet, combine these capabilities seamlessly 
into one integrated workflow…
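As a rough illustration of that combination, here is a sketch using the Spark 1.x 
APIs shown later in this deck; the event schema, file path, and parameters are 
assumptions for illustration, not from the original slides: 

// hypothetical input: comma-separated (user, amount) events 
case class Event(user: String, amount: Double) 

val sqlContext = new org.apache.spark.sql.SQLContext(sc) 
import sqlContext._ 

val events = sc.textFile("hdfs://.../events.csv") 
  .map(_.split(",")) 
  .map(e => Event(e(0), e(1).toDouble)) 
events.registerAsTable("events") 

// SQL over the same RDD… 
val totals = sql("SELECT user, SUM(amount) FROM events GROUP BY user") 
totals.collect().foreach(println) 

// …and machine learning on it, in the same program 
import org.apache.spark.mllib.clustering.KMeans 
import org.apache.spark.mllib.linalg.Vectors 

val features = events.map(e => Vectors.dense(e.amount)).cache() 
val model = KMeans.train(features, 2, 20)   // k = 2 clusters, 20 iterations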
TL;DR: Generational trade-offs for handling Big Compute 
Photo from John Wilkes’ keynote talk @ #MesosCon 2014
TL;DR: Generational trade-offs for handling Big Compute 
Cheap Memory: recompute (RDD) 
Cheap Storage: replicate (DFS) 
Cheap Network: reference (URI) 
TL;DR: Applicative Systems and Functional Programming – RDDs 
(diagram: a chain of transformations produces new RDDs; an action materializes a value) 
// transformed RDDs 
val errors = lines.filter(_.startsWith("ERROR")) 
val messages = errors.map(_.split("\t")).map(r => r(1)) 
messages.cache() 

// action 1 
messages.filter(_.contains("mysql")).count()
TL;DR: Big Compute in Applicative Systems, by the numbers… 
1. Express business logic in a preferred native 
language (Scala, Java, Python, Clojure, SQL, R, etc.) 
leveraging FP/closures 
2. Build a graph of what must be computed 
3. Rewrite graph into stages using graph reduction 
to determine how to move/combine predicates, 
where synchronization barriers are required, 
what can be computed in parallel, etc. 
(see the sketch after this list) 
(Wadsworth, Henderson, Turner, et al.) 
4. Handle synchronization using Akka and 
reactive programming, with an LRU 
to manage in-memory working sets (RDDs) 
5. Profit
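A small sketch of steps 1–3 in the shell; the file path and field layout here are 
hypothetical, but the point stands: transformations only build the lineage graph, 
and the scheduler rewrites it into stages at the shuffle once an action runs. 

// step 1: express logic with closures; step 2: this only builds the lineage graph 
val clicks = sc.textFile("hdfs://.../clicks") 
val perUser = clicks.map(_.split("\t")) 
                    .map(cols => (cols(0), 1)) 
                    .reduceByKey(_ + _)      // shuffle boundary: a stage split 

// step 3: inspect the operator graph the scheduler will rewrite into stages 
println(perUser.toDebugString) 

// nothing runs until an action is invoked 
perUser.count()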
TL;DR: Big Compute…Implications 
Of course, if you can define the structure of workloads 
in terms of abstract algebra, this all becomes much more 
interesting – having vast implications on machine learning 
at scale, IoT, industrial applications, optimization in general, 
etc., as we retool the industrial plant 
However, we’ll leave that for another talk… 
http://justenoughmath.com/
Spark Deconstructed
Spark Deconstructed: Log Mining Example 
// load error messages from a log into memory 
// then interactively search for various patterns 
// https://gist.github.com/ceteri/8ae5b9509a08c08a1132 

// base RDD 
val lines = sc.textFile("hdfs://...") 

// transformed RDDs 
val errors = lines.filter(_.startsWith("ERROR")) 
val messages = errors.map(_.split("\t")).map(r => r(1)) 
messages.cache() 

// action 1 
messages.filter(_.contains("mysql")).count() 

// action 2 
messages.filter(_.contains("php")).count()
Spark Deconstructed: Log Mining Example 
(diagram: Driver coordinating three Workers) 
We start with Spark running on a cluster… 
submitting code to be evaluated on it:
Spark Deconstructed: Log Mining Example 
// base RDD 
val lines = sc.textFile("hdfs://...") 

// transformed RDDs 
val errors = lines.filter(_.startsWith("ERROR")) 
val messages = errors.map(_.split("\t")).map(r => r(1)) 
messages.cache() 

// action 1 
messages.filter(_.contains("mysql")).count() 

// action 2 (discussing the other part) 
messages.filter(_.contains("php")).count()
Spark Deconstructed: Log Mining Example 
At this point, take a look at the transformed 
RDD operator graph: 
scala> messages.toDebugString 
res5: String = 
MappedRDD[4] at map at <console>:16 (3 partitions) 
MappedRDD[3] at map at <console>:16 (3 partitions) 
FilteredRDD[2] at filter at <console>:14 (3 partitions) 
MappedRDD[1] at textFile at <console>:12 (3 partitions) 
HadoopRDD[0] at textFile at <console>:12 (3 partitions)
Spark Deconstructed: Log Mining Example 
(diagram: Driver and three Workers) 
// base RDD 
val lines = sc.textFile("hdfs://...") 

// transformed RDDs 
val errors = lines.filter(_.startsWith("ERROR")) 
val messages = errors.map(_.split("\t")).map(r => r(1)) 
messages.cache() 

// action 1 
messages.filter(_.contains("mysql")).count() 

// action 2 
messages.filter(_.contains("php")).count()
Spark Deconstructed: Log Mining Example 
(diagram: Driver and three Workers; the input file is split into HDFS blocks 1, 2, 3) 
// base RDD 
val lines = sc.textFile("hdfs://...") 

// transformed RDDs 
val errors = lines.filter(_.startsWith("ERROR")) 
val messages = errors.map(_.split("\t")).map(r => r(1)) 
messages.cache() 

// action 1 
messages.filter(_.contains("mysql")).count() 

// action 2 
messages.filter(_.contains("php")).count()
Spark Deconstructed: Log Mining Example 
(diagram: Driver and three Workers holding HDFS blocks 1, 2, 3) 
// base RDD 
val lines = sc.textFile("hdfs://...") 

// transformed RDDs 
val errors = lines.filter(_.startsWith("ERROR")) 
val messages = errors.map(_.split("\t")).map(r => r(1)) 
messages.cache() 

// action 1 
messages.filter(_.contains("mysql")).count() 

// action 2 
messages.filter(_.contains("php")).count()
Spark Deconstructed: Log Mining Example 
(diagram: each Worker reads its HDFS block) 
// base RDD 
val lines = sc.textFile("hdfs://...") 

// transformed RDDs 
val errors = lines.filter(_.startsWith("ERROR")) 
val messages = errors.map(_.split("\t")).map(r => r(1)) 
messages.cache() 

// action 1 
messages.filter(_.contains("mysql")).count() 

// action 2 
messages.filter(_.contains("php")).count()
Spark Deconstructed: Log Mining Example 
(diagram: each Worker processes its block and caches the data: cache 1, 2, 3) 
// base RDD 
val lines = sc.textFile("hdfs://...") 

// transformed RDDs 
val errors = lines.filter(_.startsWith("ERROR")) 
val messages = errors.map(_.split("\t")).map(r => r(1)) 
messages.cache() 

// action 1 
messages.filter(_.contains("mysql")).count() 

// action 2 
messages.filter(_.contains("php")).count()
Spark Deconstructed: Log Mining Example 
(diagram: cached data now resident in memory on each Worker) 
// base RDD 
val lines = sc.textFile("hdfs://...") 

// transformed RDDs 
val errors = lines.filter(_.startsWith("ERROR")) 
val messages = errors.map(_.split("\t")).map(r => r(1)) 
messages.cache() 

// action 1 
messages.filter(_.contains("mysql")).count() 

// action 2 
messages.filter(_.contains("php")).count()
// base RDD 
val lines = sc.textFile("hdfs://...") 

// transformed RDDs 
val errors = lines.filter(_.startsWith("ERROR")) 
val messages = errors.map(_.split("\t")).map(r => r(1)) 
messages.cache() 

// action 1 
messages.filter(_.contains("mysql")).count() 

// action 2 
messages.filter(_.contains("php")).count() 
Spark Deconstructed: Log Mining Example 
(diagram: Driver and Workers with blocks and cached data) 
discussing the other part
Spark Deconstructed: Log Mining Example 
(diagram: action 2 is computed from the cached data on each Worker) 
// base RDD 
val lines = sc.textFile("hdfs://...") 

// transformed RDDs (discussing the other part) 
val errors = lines.filter(_.startsWith("ERROR")) 
val messages = errors.map(_.split("\t")).map(r => r(1)) 
messages.cache() 

// action 1 
messages.filter(_.contains("mysql")).count() 

// action 2 
messages.filter(_.contains("php")).count()
Spark Deconstructed: Log Mining Example 
(diagram: Driver and Workers with cached data) 
// base RDD 
val lines = sc.textFile("hdfs://...") 

// transformed RDDs (discussing the other part) 
val errors = lines.filter(_.startsWith("ERROR")) 
val messages = errors.map(_.split("\t")).map(r => r(1)) 
messages.cache() 

// action 1 
messages.filter(_.contains("mysql")).count() 

// action 2 
messages.filter(_.contains("php")).count()
Spark Deconstructed: 
Looking at the RDD transformations and 
actions from another perspective… 
(diagram: transformations chain RDDs; an action produces a value) 
// load error messages from a log into memory 
// then interactively search for various patterns 
// https://gist.github.com/ceteri/8ae5b9509a08c08a1132 

// base RDD 
val lines = sc.textFile("hdfs://...") 

// transformed RDDs 
val errors = lines.filter(_.startsWith("ERROR")) 
val messages = errors.map(_.split("\t")).map(r => r(1)) 
messages.cache() 

// action 1 
messages.filter(_.contains("mysql")).count() 

// action 2 
messages.filter(_.contains("php")).count() 
Spark Deconstructed: 
(diagram: base RDD) 
// base RDD 
val lines = sc.textFile("hdfs://...")
Spark Deconstructed: 
(diagram: transformations chain RDDs) 
// transformed RDDs 
val errors = lines.filter(_.startsWith("ERROR")) 
val messages = errors.map(_.split("\t")).map(r => r(1)) 
messages.cache()
Spark Deconstructed: 
(diagram: an action on the transformed RDD returns a value) 
// action 1 
messages.filter(_.contains("mysql")).count()
Unifying the Pieces
Unifying the Pieces: Spark SQL 
// http://spark.apache.org/docs/latest/sql-programming-guide.html 

val sqlContext = new org.apache.spark.sql.SQLContext(sc) 
import sqlContext._ 

// define the schema using a case class 
case class Person(name: String, age: Int) 

// create an RDD of Person objects and register it as a table 
val people = sc.textFile("examples/src/main/resources/people.txt") 
  .map(_.split(",")) 
  .map(p => Person(p(0), p(1).trim.toInt)) 

people.registerAsTable("people") 

// SQL statements can be run using the SQL methods provided by sqlContext 
val teenagers = sql("SELECT name FROM people WHERE age >= 13 AND age <= 19") 

// results of SQL queries are SchemaRDDs and support all the 
// normal RDD operations… 
// columns of a row in the result can be accessed by ordinal 
teenagers.map(t => "Name: " + t(0)).collect().foreach(println)
Unifying the Pieces: Spark Streaming 
// http://spark.apache.org/docs/latest/streaming-programming-guide.html 

import org.apache.spark.streaming._ 
import org.apache.spark.streaming.StreamingContext._ 

// create a StreamingContext with a SparkConf configuration 
val ssc = new StreamingContext(sparkConf, Seconds(10)) 

// create a DStream that will connect to serverIP:serverPort 
val lines = ssc.socketTextStream(serverIP, serverPort) 

// split each line into words 
val words = lines.flatMap(_.split(" ")) 

// count each word in each batch 
val pairs = words.map(word => (word, 1)) 
val wordCounts = pairs.reduceByKey(_ + _) 

// print a few of the counts to the console 
wordCounts.print() 

ssc.start()             // start the computation 
ssc.awaitTermination()  // wait for the computation to terminate
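To try this locally, one simple approach (an assumption, not from the original 
slides) is to feed the socket with netcat: run nc -lk 9999 in another terminal, 
then pass the matching host and port as serverIP and serverPort.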
MLI: An API for Distributed Machine Learning 
Evan Sparks, Ameet Talwalkar, et al. 
International Conference on Data Mining (2013) 
http://arxiv.org/abs/1310.5426 
Unifying the Pieces: MLlib 
// http://spark.apache.org/docs/latest/mllib-guide.html 

val train_data = // RDD of Vector 
val model = KMeans.train(train_data, k=10) 

// evaluate the model 
val test_data = // RDD of Vector 
test_data.map(t => model.predict(t)).collect().foreach(println)
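Filling in the elided RDDs with a few synthetic points makes the sketch runnable; 
the data, k, and iteration count below are assumptions chosen for illustration: 

import org.apache.spark.mllib.clustering.KMeans 
import org.apache.spark.mllib.linalg.Vectors 

// synthetic training data: two obvious clusters (assumed for illustration) 
val train_data = sc.parallelize(Seq( 
  Vectors.dense(0.0, 0.0), Vectors.dense(0.1, 0.1), 
  Vectors.dense(9.0, 9.0), Vectors.dense(9.1, 9.1) 
)) 
val model = KMeans.train(train_data, 2, 20)   // k = 2, 20 iterations 

// evaluate the model on held-out points 
val test_data = sc.parallelize(Seq(Vectors.dense(0.2, 0.2), Vectors.dense(8.9, 9.2))) 
test_data.map(t => model.predict(t)).collect().foreach(println)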
Unifying the Pieces: GraphX 
// http://spark.apache.org/docs/latest/graphx-programming-guide.html 

import org.apache.spark.graphx._ 
import org.apache.spark.rdd.RDD 

case class Peep(name: String, age: Int) 

val vertexArray = Array( 
  (1L, Peep("Kim", 23)), (2L, Peep("Pat", 31)), 
  (3L, Peep("Chris", 52)), (4L, Peep("Kelly", 39)), 
  (5L, Peep("Leslie", 45)) 
) 
val edgeArray = Array( 
  Edge(2L, 1L, 7), Edge(2L, 4L, 2), 
  Edge(3L, 2L, 4), Edge(3L, 5L, 3), 
  Edge(4L, 1L, 1), Edge(5L, 3L, 9) 
) 

val vertexRDD: RDD[(Long, Peep)] = sc.parallelize(vertexArray) 
val edgeRDD: RDD[Edge[Int]] = sc.parallelize(edgeArray) 
val g: Graph[Peep, Int] = Graph(vertexRDD, edgeRDD) 

val results = g.triplets.filter(t => t.attr > 7) 

for (triplet <- results.collect) { 
  println(s"${triplet.srcAttr.name} loves ${triplet.dstAttr.name}") 
}
Unifying the Pieces: Summary 
Demo, if time permits (perhaps in the hallway): 
Twitter Streaming Language Classifier 
databricks.gitbooks.io/databricks-spark-reference-applications/ 
twitter_classifier/README.html 
For many more Spark resources online, check: 
databricks.com/spark-training-resources
TL;DR: Engineering is about costs 
Sure, maybe you’ll squeeze slightly better performance 
by using many specialized systems… 
However, putting on an Eng Director hat, would you 
also be prepared to pay the corresponding costs of: 
• learning curves for your developers across 
several different frameworks 
• ops for several different kinds of clusters 
• maintenance + troubleshooting mission-critical 
apps across several systems 
• tech-debt for OSS that ignores the math (80 yrs!) 
plus the fundamental h/w trade-offs
Integrations
Spark Integrations: 
(diagram: clean up your data, discover insights, run sophisticated analytics, 
use lots of different data sources, integrate with many other systems) 
cloud-based notebooks… ETL… the Hadoop ecosystem… 
widespread use of PyData… advanced analytics in streaming… 
rich custom search… web apps for data APIs… 
low-latency + multi-tenancy…
Spark Integrations: Unified platform for building Big Data pipelines 
Databricks Cloud 
databricks.com/blog/2014/07/14/databricks-cloud-making- 
big-data-easy.html 
youtube.com/watch?v=dJQ5lV5Tldw#t=883
Spark Integrations: The proverbial Hadoop ecosystem 
Spark + Hadoop + HBase + etc. 
mapr.com/products/apache-spark 
vision.cloudera.com/apache-spark-in-the-apache-hadoop-ecosystem/ 
hortonworks.com/hadoop/spark/ 
databricks.com/blog/2014/05/23/pivotal-hadoop-integrates-the- 
full-apache-spark-stack.html 
(diagram: unified compute + Hadoop ecosystem)
Spark Integrations: Leverage widespread use of Python 
Spark + PyData 
spark-summit.org/2014/talk/A-platform-for-large-scale-neuroscience 
cwiki.apache.org/confluence/display/SPARK/PySpark+Internals 
(diagram: unified compute + PyData)
Spark Integrations: Advanced analytics for streaming use cases 
Kafka + Spark + Cassandra 
datastax.com/documentation/datastax_enterprise/4.5/ 
datastax_enterprise/spark/sparkIntro.html 
http://helenaedelson.com/?p=991 
github.com/datastax/spark-cassandra-connector 
github.com/dibbhatt/kafka-spark-consumer 
(diagram: unified compute + data streams + columnar key-value store)
Spark Integrations: Rich search, immediate insights 
Spark + ElasticSearch 
databricks.com/blog/2014/06/27/application-spotlight-elasticsearch.html 
elasticsearch.org/guide/en/elasticsearch/hadoop/current/spark.html 
spark-summit.org/2014/talk/streamlining-search-indexing-using-elastic-search-and-spark 
(diagram: unified compute + document search)
Spark Integrations: Building data APIs with web apps 
Spark + Play 
typesafe.com/blog/apache-spark-and-the-typesafe-reactive- 
platform-a-match-made-in-heaven 
(diagram: unified compute + web apps)
Spark Integrations: The case for multi-tenancy 
Spark + Mesos 
spark.apache.org/docs/latest/running-on-mesos.html 
+ Mesosphere + Google Cloud Platform 
ceteri.blogspot.com/2014/09/spark-atop-mesos-on-google-cloud.html 
(diagram: unified compute + cluster resources)
Advanced Topics
Advanced Topics: 
Other BDAS projects running atop Spark for 
graphs, sampling, and memory sharing: 
• BlinkDB 
• Tachyon
Advanced Topics: BlinkDB 
BlinkDB blinkdb.org/ 
massively parallel, approximate query engine for 
running interactive SQL queries on large volumes 
of data 
• allows users to trade off query accuracy 
for response time 
• enables interactive queries over massive 
data by running queries on data samples 
• presents results annotated with meaningful 
error bars
Advanced Topics: BlinkDB 
“Our experiments on a 100 node cluster show that 
BlinkDB can answer queries on up to 17 TBs of data 
in less than 2 seconds (over 200 x faster than Hive), 
within an error of 2-10%.” 
BlinkDB: Queries with Bounded Errors and 
Bounded Response Times on Very Large Data 
Sameer Agarwal, Barzan Mozafari, Aurojit Panda, 
Henry Milner, Samuel Madden, Ion Stoica 
EuroSys (2013) 
dl.acm.org/citation.cfm?id=2465355
Advanced Topics: BlinkDB 
Introduction to using BlinkDB 
Sameer Agarwal 
youtu.be/Pc8_EM9PKqY 
Deep Dive into BlinkDB 
Sameer Agarwal 
youtu.be/WoTTbdk0kCA
Advanced Topics: Tachyon 
Tachyon tachyon-project.org/ 
• fault-tolerant distributed file system enabling 
reliable file sharing at memory speed across 
cluster frameworks 
• achieves high performance by leveraging lineage 
information and using memory aggressively 
• caches working-set files in memory, thereby 
avoiding trips to disk for datasets that are 
frequently read 
• enables different jobs/queries and frameworks 
to access cached files at memory speed
Advanced Topics: Tachyon 
More details: 
tachyon-project.org/Command-Line-Interface.html 
ampcamp.berkeley.edu/big-data-mini-course/ 
tachyon.html 
timothysc.github.io/blog/2014/02/17/bdas-tachyon/
Advanced Topics: Tachyon 
Introduction to Tachyon 
Haoyuan Li 
youtu.be/4lMAsd2LNEE
Case Studies
Summary: Case Studies 
Spark at Twitter: Evaluation & Lessons Learnt 
Sriram Krishnan 
slideshare.net/krishflix/seattle-spark-meetup-spark- 
at-twitter 
• Spark can be more interactive, efficient than MR 
• Support for iterative algorithms and caching 
• More generic than traditional MapReduce 
• Why is Spark faster than Hadoop MapReduce? 
• Fewer I/O synchronization barriers 
• Less expensive shuffle 
• the more complex the DAG, the greater the 
performance improvement
Summary: Case Studies 
Using Spark to Ignite Data Analytics 
ebaytechblog.com/2014/05/28/using-spark-to-ignite- 
data-analytics/
Summary: Case Studies 
Hadoop and Spark Join Forces in Yahoo 
Andy Feng 
spark-summit.org/talk/feng-hadoop-and-spark-join- 
forces-at-yahoo/
Summary: Case Studies 
Collaborative Filtering with Spark 
Chris Johnson 
slideshare.net/MrChrisJohnson/collaborative-filtering- 
with-spark 
• collab filter (ALS) for music recommendation 
• Hadoop suffers from I/O overhead 
• show a progression of code rewrites, converting 
a Hadoop-based app into efficient use of Spark
Summary: Case Studies 
Why Spark is the Next Top (Compute) Model 
Dean Wampler 
slideshare.net/deanwampler/spark-the-next-top- 
compute-model 
• Hadoop: most algorithms are much harder to 
implement in this restrictive map-then-reduce 
model 
• Spark: fine-grained “combinators” for 
composing algorithms 
• slide #67, any questions?
Summary: Case Studies 
Open Sourcing Our Spark Job Server 
Evan Chan 
engineering.ooyala.com/blog/open-sourcing-our-spark- 
job-server 
• github.com/ooyala/spark-jobserver 
• REST server for submitting, running, managing 
Spark jobs and contexts 
• company vision for Spark is as a multi-team big 
data service 
• shares Spark RDDs in one SparkContext among 
multiple jobs
Summary: Case Studies 
Beyond Word Count: 
Productionalizing Spark Streaming 
Ryan Weald 
spark-summit.org/talk/weald-beyond-word-count- 
productionalizing-spark-streaming/ 
blog.cloudera.com/blog/2014/03/letting-it-flow-with- 
spark-streaming/ 
• overcoming 3 major challenges encountered 
while developing production streaming jobs 
• write streaming applications the same way 
you write batch jobs, reusing code 
• stateful, exactly-once semantics out of the box 
• integration of Algebird
Summary: Case Studies 
Installing the Cassandra / Spark OSS Stack 
Al Tobey 
tobert.github.io/post/2014-07-15-installing-cassandra- 
spark-stack.html 
• install+config for Cassandra and Spark together 
• spark-cassandra-connector integration 
• examples show a Spark shell that can access 
tables in Cassandra as RDDs with types pre-mapped 
and ready to go
Summary: Case Studies 
One platform for all: real-time, near-real-time, 
and offline video analytics on Spark 
Davis Shepherd, Xi Liu 
spark-summit.org/talk/one-platform-for-all-real- 
time-near-real-time-and-offline-video-analytics- 
on-spark
Resources
certification: 
Apache Spark developer certificate program 
• http://oreilly.com/go/sparkcert 
• defined by Spark experts @Databricks 
• assessed by O’Reilly Media 
• preview @Strata NY
community: 
spark.apache.org/community.html 
video+slide archives: spark-summit.org 
local events: Spark Meetups Worldwide 
resources: databricks.com/spark-training-resources 
workshops: databricks.com/spark-training 
Intro to Spark 
Spark 
AppDev 
Spark 
DevOps 
Spark 
DataSci 
Distributed ML 
on Spark 
Streaming Apps 
on Spark 
Spark + 
Cassandra
books: 
Fast Data Processing 
with Spark 
Holden Karau 
Packt (2013) 
shop.oreilly.com/product/ 
9781782167068.do 
Spark in Action 
Chris Fregly 
Manning (2015*) 
sparkinaction.com/ 
Learning Spark 
Holden Karau, 
Andy Konwinski, 
Matei Zaharia 
O’Reilly (2015*) 
shop.oreilly.com/product/ 
0636920028512.do
events: 
Strata NY + Hadoop World 
NYC, Oct 15-17 
strataconf.com/stratany2014 
Big Data TechCon 
SF, Oct 27 
bigdatatechcon.com 
Strata EU 
Barcelona, Nov 19-21 
strataconf.com/strataeu2014 
Data Day Texas 
Austin, Jan 10 
datadaytexas.com 
Strata CA 
San Jose, Feb 18-20 
strataconf.com/strata2015 
Spark Summit East 
NYC, Mar 18-19 
spark-summit.org/east 
Spark Summit 2015 
SF, Jun 15-17 
spark-summit.org
presenter: 
monthly newsletter for updates, 
events, conf summaries, etc.: 
liber118.com/pxn/ 
Just Enough Math 
O’Reilly, 2014 
justenoughmath.com 
preview: youtu.be/TQ58cWgdCpA 
Enterprise Data Workflows 
with Cascading 
O’Reilly, 2013 
shop.oreilly.com/product/ 
0636920028536.do

Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdf
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project Setup
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial Buildings
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdf
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQL
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365
 
Hot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort Service
Hot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort ServiceHot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort Service
Hot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort Service
 

How Apache Spark fits into the Big Data landscape

  • 15. A Brief History: Functional Programming for Big Data – timeline: 2002 MapReduce @ Google; 2004 MapReduce paper; 2006 Hadoop @ Yahoo!; 2008 Hadoop Summit; 2010 Spark paper; 2014 Apache Spark top-level
  • 16. A Brief History: Functional Programming for Big Data – General batch processing: MapReduce. Specialized systems (iterative, interactive, streaming, graph, etc.): Pregel, Giraph, Dremel, Drill, Tez, Impala, GraphLab, Storm, S4, F1, MillWheel. MR doesn’t compose well for large applications, and so specialized systems emerged as workarounds
  • 17. A Brief History: Functional Programming for Big Data circa 2010: a unified engine for enterprise data workflows, based on commodity hardware a decade later… Spark: Cluster Computing with Working Sets Matei Zaharia, Mosharaf Chowdhury, Michael Franklin, Scott Shenker, Ion Stoica people.csail.mit.edu/matei/papers/2010/hotcloud_spark.pdf ! Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy McCauley, Michael Franklin, Scott Shenker, Ion Stoica usenix.org/system/files/conference/nsdi12/nsdi12-final138.pdf
  • 18. A Brief History: Functional Programming for Big Data In addition to simple map and reduce operations, Spark supports SQL queries, streaming data, and complex analytics such as machine learning and graph algorithms out-of-the-box. Better yet, combine these capabilities seamlessly into one integrated workflow…
  • 19. TL;DR: Generational trade-offs for handling Big Compute Photo from John Wilkes’ keynote talk @ #MesosCon 2014
  • 20. TL;DR: Generational trade-offs for handling Big Compute – Cheap Memory → recompute (RDD); Cheap Storage → replicate (DFS); Cheap Network → reference (URI)
  • 21. TL;DR: Applicative Systems and Functional Programming – RDDs (diagram: RDD → transformations → RDD → action → value) // transformed RDDs! val errors = lines.filter(_.startsWith("ERROR"))! val messages = errors.map(_.split("\t")).map(r => r(1))! messages.cache()! ! // action 1! messages.filter(_.contains("mysql")).count()
  • 22. TL;DR: Big Compute in Applicative Systems, by the numbers… 1. Express business logic in a preferred native language (Scala, Java, Python, Clojure, SQL, R, etc.) leveraging FP/closures 2. Build a graph of what must be computed 3. Rewrite graph into stages using graph reduction to determine how to move/combine predicates, where synchronization barriers are required, what can be computed in parallel, etc. (Wadsworth, Henderson, Turner, et al.) 4. Handle synchronization using Akka and reactive programming, with an LRU to manage in-memory working sets (RDDs) 5. Profit
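As a concrete illustration of steps 2-4 above, a minimal spark-shell sketch (assuming a Spark 1.x shell with an existing SparkContext sc; the HDFS path is hypothetical) shows that transformations only extend the operator graph, and nothing runs until an action forces evaluation:

    // transformations only extend the operator graph; nothing executes yet
    val lines  = sc.textFile("hdfs:///var/log/app.log")   // hypothetical input path
    val errors = lines.filter(_.startsWith("ERROR"))       // graph grows, no work done
    val fields = errors.map(_.split("\t"))                 // still no work done
    println(fields.toDebugString)                          // inspect the staged plan
    val n = fields.count()                                 // action: schedule stages, run tasks
    println(s"error records: $n")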
  • 23. TL;DR: Big Compute…Implications Of course, if you can define the structure of workloads in terms of abstract algebra, this all becomes much more interesting – having vast implications for machine learning at scale, IoT, industrial applications, optimization in general, etc., as we retool the industrial plant. However, we’ll leave that for another talk… http://justenoughmath.com/
  • 25. Spark Deconstructed: Log Mining Example // load error messages from a log into memory! // then interactively search for various patterns! // https://gist.github.com/ceteri/8ae5b9509a08c08a1132! ! // base RDD! val lines = sc.textFile("hdfs://...")! ! // transformed RDDs! val errors = lines.filter(_.startsWith("ERROR"))! val messages = errors.map(_.split("\t")).map(r => r(1))! messages.cache()! ! // action 1! messages.filter(_.contains("mysql")).count()! ! // action 2! messages.filter(_.contains("php")).count()
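For running the same example outside the shell, a minimal standalone-application sketch (the app name and input path are hypothetical; assumes a Spark 1.x dependency and launching via spark-submit):

    import org.apache.spark.{SparkConf, SparkContext}

    object LogMining {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("LogMining")   // hypothetical app name
        val sc   = new SparkContext(conf)

        val lines    = sc.textFile("hdfs:///logs/app.log")   // hypothetical path
        val errors   = lines.filter(_.startsWith("ERROR"))
        val messages = errors.map(_.split("\t")).map(r => r(1))
        messages.cache()

        println("mysql errors: " + messages.filter(_.contains("mysql")).count())
        println("php errors:   " + messages.filter(_.contains("php")).count())

        sc.stop()
      }
    }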
  • 26. Spark Deconstructed: Log Mining Example (diagram: Driver plus three Workers) We start with Spark running on a cluster… submitting code to be evaluated on it:
  • 27. Spark Deconstructed: Log Mining Example // base RDD! val lines = sc.textFile("hdfs://...")! ! // transformed RDDs! val errors = lines.filter(_.startsWith("ERROR"))! val messages = errors.map(_.split("\t")).map(r => r(1))! messages.cache()! ! // action 1! messages.filter(_.contains("mysql")).count()! ! // action 2 (discussing the other part)! messages.filter(_.contains("php")).count()
  • 28. Spark Deconstructed: Log Mining Example At this point, take a look at the transformed RDD operator graph: scala> messages.toDebugString! res5: String = ! MappedRDD[4] at map at <console>:16 (3 partitions)! MappedRDD[3] at map at <console>:16 (3 partitions)! FilteredRDD[2] at filter at <console>:14 (3 partitions)! MappedRDD[1] at textFile at <console>:12 (3 partitions)! HadoopRDD[0] at textFile at <console>:12 (3 partitions)
  • 29. Spark Deconstructed: Log Mining Example (diagram: Driver plus three Workers) // base RDD! val lines = sc.textFile("hdfs://...")! ! // transformed RDDs! val errors = lines.filter(_.startsWith("ERROR"))! val messages = errors.map(_.split("\t")).map(r => r(1))! messages.cache()! ! // action 1! messages.filter(_.contains("mysql")).count()! ! // action 2 (discussing the other part)! messages.filter(_.contains("php")).count()
  • 30. Spark Deconstructed: Log Mining Example (diagram: Driver; three Workers holding HDFS blocks 1-3) // base RDD! val lines = sc.textFile("hdfs://...")! ! // transformed RDDs! val errors = lines.filter(_.startsWith("ERROR"))! val messages = errors.map(_.split("\t")).map(r => r(1))! messages.cache()! ! // action 1! messages.filter(_.contains("mysql")).count()! ! // action 2 (discussing the other part)! messages.filter(_.contains("php")).count()
  • 31. Spark Deconstructed: Log Mining Example (diagram: Driver; three Workers holding HDFS blocks 1-3) // base RDD! val lines = sc.textFile("hdfs://...")! ! // transformed RDDs! val errors = lines.filter(_.startsWith("ERROR"))! val messages = errors.map(_.split("\t")).map(r => r(1))! messages.cache()! ! // action 1! messages.filter(_.contains("mysql")).count()! ! // action 2 (discussing the other part)! messages.filter(_.contains("php")).count()
  • 32. Spark Deconstructed: Log Mining Example (diagram: Driver; three Workers each reading an HDFS block) // base RDD! val lines = sc.textFile("hdfs://...")! ! // transformed RDDs! val errors = lines.filter(_.startsWith("ERROR"))! val messages = errors.map(_.split("\t")).map(r => r(1))! messages.cache()! ! // action 1! messages.filter(_.contains("mysql")).count()! ! // action 2 (discussing the other part)! messages.filter(_.contains("php")).count()
  • 33. Spark Deconstructed: Log Mining Example (diagram: Driver; Workers processing and caching data into caches 1-3) // base RDD! val lines = sc.textFile("hdfs://...")! ! // transformed RDDs! val errors = lines.filter(_.startsWith("ERROR"))! val messages = errors.map(_.split("\t")).map(r => r(1))! messages.cache()! ! // action 1! messages.filter(_.contains("mysql")).count()! ! // action 2 (discussing the other part)! messages.filter(_.contains("php")).count()
  • 34. Spark Deconstructed: Log Mining Example (diagram: Driver; Workers with blocks 1-3 and caches 1-3) // base RDD! val lines = sc.textFile("hdfs://...")! ! // transformed RDDs! val errors = lines.filter(_.startsWith("ERROR"))! val messages = errors.map(_.split("\t")).map(r => r(1))! messages.cache()! ! // action 1! messages.filter(_.contains("mysql")).count()! ! // action 2 (discussing the other part)! messages.filter(_.contains("php")).count()
  • 35. Spark Deconstructed: Log Mining Example (diagram: Driver; Workers with blocks 1-3 and caches 1-3; discussing the other part) // base RDD! val lines = sc.textFile("hdfs://...")! ! // transformed RDDs! val errors = lines.filter(_.startsWith("ERROR"))! val messages = errors.map(_.split("\t")).map(r => r(1))! messages.cache()! ! // action 1! messages.filter(_.contains("mysql")).count()! ! // action 2! messages.filter(_.contains("php")).count()
  • 36. Spark Deconstructed: Log Mining Example (diagram: Driver; Workers processing from caches 1-3) // base RDD! val lines = sc.textFile("hdfs://...")! ! // transformed RDDs (discussing the other part)! val errors = lines.filter(_.startsWith("ERROR"))! val messages = errors.map(_.split("\t")).map(r => r(1))! messages.cache()! ! // action 1! messages.filter(_.contains("mysql")).count()! ! // action 2! messages.filter(_.contains("php")).count()
  • 37. Spark Deconstructed: Log Mining Example (diagram: Driver; Workers with blocks 1-3 and caches 1-3) // base RDD! val lines = sc.textFile("hdfs://...")! ! // transformed RDDs (discussing the other part)! val errors = lines.filter(_.startsWith("ERROR"))! val messages = errors.map(_.split("\t")).map(r => r(1))! messages.cache()! ! // action 1! messages.filter(_.contains("mysql")).count()! ! // action 2! messages.filter(_.contains("php")).count()
  • 38. Spark Deconstructed: Looking at the RDD transformations and actions from another perspective… (diagram: RDD → transformations → RDD → action → value) // load error messages from a log into memory! // then interactively search for various patterns! // https://gist.github.com/ceteri/8ae5b9509a08c08a1132! ! // base RDD! val lines = sc.textFile("hdfs://...")! ! // transformed RDDs! val errors = lines.filter(_.startsWith("ERROR"))! val messages = errors.map(_.split("\t")).map(r => r(1))! messages.cache()! ! // action 1! messages.filter(_.contains("mysql")).count()! ! // action 2! messages.filter(_.contains("php")).count()
  • 39. Spark Deconstructed: RDD // base RDD! val lines = sc.textFile("hdfs://...")
  • 40. Spark Deconstructed: transformations (diagram: RDD → transformations → RDD) // transformed RDDs! val errors = lines.filter(_.startsWith("ERROR"))! val messages = errors.map(_.split("\t")).map(r => r(1))! messages.cache()
  • 41. Spark Deconstructed: action (diagram: RDD → transformations → RDD → action → value) // action 1! messages.filter(_.contains("mysql")).count()
  • 43. Unifying the Pieces: Spark SQL // http://spark.apache.org/docs/latest/sql-programming-guide.html! ! val sqlContext = new org.apache.spark.sql.SQLContext(sc)! import sqlContext._! ! // define the schema using a case class! case class Person(name: String, age: Int)! ! // create an RDD of Person objects and register it as a table! val people = sc.textFile("examples/src/main/resources/people.txt").map(_.split(",")).map(p => Person(p(0), p(1).trim.toInt))! ! people.registerAsTable("people")! ! // SQL statements can be run using the SQL methods provided by sqlContext! val teenagers = sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")! ! // results of SQL queries are SchemaRDDs and support all the ! // normal RDD operations…! // columns of a row in the result can be accessed by ordinal! teenagers.map(t => "Name: " + t(0)).collect().foreach(println)
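A further usage sketch against the same registered "people" table (same Spark 1.x SQLContext as above; the aggregate query is illustrative, not from the deck):

    // reuse the "people" table registered above; results come back as rows
    val countsByAge = sql("SELECT age, COUNT(*) FROM people GROUP BY age")
    countsByAge.collect().foreach(println)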
  • 44. Unifying the Pieces: Spark Streaming // http://spark.apache.org/docs/latest/streaming-programming-guide.html! ! import org.apache.spark.streaming._! import org.apache.spark.streaming.StreamingContext._! ! // create a StreamingContext with a SparkConf configuration! val ssc = new StreamingContext(sparkConf, Seconds(10))! ! // create a DStream that will connect to serverIP:serverPort! val lines = ssc.socketTextStream(serverIP, serverPort)! ! // split each line into words! val words = lines.flatMap(_.split(" "))! ! // count each word in each batch! val pairs = words.map(word => (word, 1))! val wordCounts = pairs.reduceByKey(_ + _)! ! // print a few of the counts to the console! wordCounts.print()! ! ssc.start() // start the computation! ssc.awaitTermination() // wait for the computation to terminate
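One way to try that streaming snippet locally is a sketch like the following (assumes Spark 1.x in local mode; the host, port, and app name are placeholders; feed text into the socket with nc -lk 9999 in another terminal):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    // local[2]: one core for the receiver, one for processing
    val sparkConf = new SparkConf().setMaster("local[2]").setAppName("StreamingWordCount")
    val ssc = new StreamingContext(sparkConf, Seconds(10))

    val lines = ssc.socketTextStream("localhost", 9999)   // placeholder host/port
    val wordCounts = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
    wordCounts.print()

    ssc.start()             // start the computation
    ssc.awaitTermination()  // wait for it to terminate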
  • 45. MLI: An API for Distributed Machine Learning Evan Sparks, Ameet Talwalkar, et al. International Conference on Data Mining (2013) http://arxiv.org/abs/1310.5426 Unifying the Pieces: MLlib // http://spark.apache.org/docs/latest/mllib-guide.html! ! val train_data = // RDD of Vector! val model = KMeans.train(train_data, k=10)! ! // evaluate the model! val test_data = // RDD of Vector! test_data.map(t => model.predict(t)).collect().foreach(println)!
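Filling in the elided RDDs with a hypothetical construction, as a sketch only (the paths, delimiter, k, and iteration count are illustrative, not from the deck):

    import org.apache.spark.mllib.clustering.KMeans
    import org.apache.spark.mllib.linalg.Vectors

    // hypothetical input: one whitespace-delimited numeric feature vector per line
    val train_data = sc.textFile("hdfs:///data/train.txt")
      .map(line => Vectors.dense(line.split(' ').map(_.toDouble)))
    val model = KMeans.train(train_data, 10, 20)   // k = 10, maxIterations = 20

    // evaluate the model on held-out vectors
    val test_data = sc.textFile("hdfs:///data/test.txt")
      .map(line => Vectors.dense(line.split(' ').map(_.toDouble)))
    test_data.map(t => model.predict(t)).collect().foreach(println)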
  • 46. Unifying the Pieces: GraphX // http://spark.apache.org/docs/latest/graphx-programming-guide.html! ! import org.apache.spark.graphx._! import org.apache.spark.rdd.RDD! ! case class Peep(name: String, age: Int)! ! val vertexArray = Array(! (1L, Peep("Kim", 23)), (2L, Peep("Pat", 31)),! (3L, Peep("Chris", 52)), (4L, Peep("Kelly", 39)),! (5L, Peep("Leslie", 45))! )! val edgeArray = Array(! Edge(2L, 1L, 7), Edge(2L, 4L, 2),! Edge(3L, 2L, 4), Edge(3L, 5L, 3),! Edge(4L, 1L, 1), Edge(5L, 3L, 9)! )! ! val vertexRDD: RDD[(Long, Peep)] = sc.parallelize(vertexArray)! val edgeRDD: RDD[Edge[Int]] = sc.parallelize(edgeArray)! val g: Graph[Peep, Int] = Graph(vertexRDD, edgeRDD)! ! val results = g.triplets.filter(t => t.attr > 7)! ! for (triplet <- results.collect) {! println(s"${triplet.srcAttr.name} loves ${triplet.dstAttr.name}")! }
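A small follow-on sketch using the same graph g (not from the deck): compute in-degrees and join them back to the vertex attributes:

    // count incoming edges per person and join back to the Peep attributes
    val inDeg = g.inDegrees   // VertexRDD[Int]
    g.vertices.join(inDeg).collect().foreach {
      case (_, (peep, d)) => println(s"${peep.name} is loved by $d peep(s)")
    }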
  • 47. Unifying the Pieces: Summary Demo, if time permits (perhaps in the hallway): Twitter Streaming Language Classifier databricks.gitbooks.io/databricks-spark-reference-applications/twitter_classifier/README.html ! For many more Spark resources online, check: databricks.com/spark-training-resources
  • 48. TL;DR: Engineering is about costs Sure, maybe you’ll squeeze slightly better performance by using many specialized systems… However, putting on an Eng Director hat, would you also be prepared to pay the corresponding costs of: • learning curves for your developers across several different frameworks • ops for several different kinds of clusters • maintenance + troubleshooting mission-critical apps across several systems • tech-debt for OSS that ignores the math (80 yrs!) plus the fundamental h/w trade-offs
  • 50. Spark Integrations: Discover Insights; Clean Up Your Data; Run Sophisticated Analytics; Integrate With Many Other Systems; Use Lots of Different Data Sources – cloud-based notebooks… ETL… the Hadoop ecosystem… widespread use of PyData… advanced analytics in streaming… rich custom search… web apps for data APIs… low-latency + multi-tenancy…
  • 51. Spark Integrations: Unified platform for building Big Data pipelines Databricks Cloud databricks.com/blog/2014/07/14/databricks-cloud-making-big-data-easy.html youtube.com/watch?v=dJQ5lV5Tldw#t=883
  • 52. Spark Integrations: The proverbial Hadoop ecosystem Spark + Hadoop + HBase + etc. mapr.com/products/apache-spark vision.cloudera.com/apache-spark-in-the-apache-hadoop-ecosystem/ hortonworks.com/hadoop/spark/ databricks.com/blog/2014/05/23/pivotal-hadoop-integrates-the-full-apache-spark-stack.html (diagram: unified compute + Hadoop ecosystem)
  • 53. Spark Integrations: Leverage widespread use of Python Spark + PyData spark-summit.org/2014/talk/A-platform-for-large-scale-neuroscience cwiki.apache.org/confluence/display/SPARK/PySpark+Internals (diagram: unified compute + PyData)
  • 54. Spark Integrations: Advanced analytics for streaming use cases Kafka + Spark + Cassandra datastax.com/documentation/datastax_enterprise/4.5/datastax_enterprise/spark/sparkIntro.html http://helenaedelson.com/?p=991 github.com/datastax/spark-cassandra-connector github.com/dibbhatt/kafka-spark-consumer (diagram: unified compute + data streams + columnar key-value store)
  • 55. Spark Integrations: Rich search, immediate insights Spark + ElasticSearch databricks.com/blog/2014/06/27/application-spotlight-elasticsearch.html elasticsearch.org/guide/en/elasticsearch/hadoop/current/spark.html spark-summit.org/2014/talk/streamlining-search-indexing-using-elastic-search-and-spark (diagram: unified compute + document search)
  • 56. Spark Integrations: Building data APIs with web apps Spark + Play typesafe.com/blog/apache-spark-and-the-typesafe-reactive-platform-a-match-made-in-heaven (diagram: unified compute + web apps)
  • 57. Spark Integrations: The case for multi-tenancy Spark + Mesos spark.apache.org/docs/latest/running-on-mesos.html + Mesosphere + Google Cloud Platform ceteri.blogspot.com/2014/09/spark-atop-mesos-on-google-cloud.html (diagram: unified compute + cluster resources)
  • 59. Advanced Topics: Other BDAS projects running atop Spark for graphs, sampling, and memory sharing: • BlinkDB • Tachyon
  • 60. Advanced Topics: BlinkDB blinkdb.org/ massively parallel, approximate query engine for running interactive SQL queries on large volumes of data • allows users to trade off query accuracy for response time • enables interactive queries over massive data by running queries on data samples • presents results annotated with meaningful error bars
  • 61. Advanced Topics: BlinkDB “Our experiments on a 100-node cluster show that BlinkDB can answer queries on up to 17 TBs of data in less than 2 seconds (over 200x faster than Hive), within an error of 2-10%.” BlinkDB: Queries with Bounded Errors and Bounded Response Times on Very Large Data Sameer Agarwal, Barzan Mozafari, Aurojit Panda, Henry Milner, Samuel Madden, Ion Stoica EuroSys (2013) dl.acm.org/citation.cfm?id=2465355
  • 62. Advanced Topics: BlinkDB Introduction to using BlinkDB Sameer Agarwal youtu.be/Pc8_EM9PKqY Deep Dive into BlinkDB Sameer Agarwal youtu.be/WoTTbdk0kCA
  • 63. Advanced Topics: Tachyon Tachyon tachyon-project.org/ • fault tolerant distributed file system enabling reliable file sharing at memory-speed across cluster frameworks • achieves high performance by leveraging lineage information and using memory aggressively • caches working set files in memory thereby avoiding going to disk to load datasets that are frequently read • enables different jobs/queries and frameworks to access cached files at memory speed
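As a usage sketch, assuming the Tachyon client jar is on the Spark classpath (the master address and paths below are placeholders), Spark jobs can read and write through Tachyon’s URI scheme so the working set stays in memory across jobs:

    // read an input cached in Tachyon and write results back through Tachyon
    val lines  = sc.textFile("tachyon://master:19998/data/app.log")      // placeholder address/path
    val errors = lines.filter(_.startsWith("ERROR"))
    errors.saveAsTextFile("tachyon://master:19998/data/app-errors")      // placeholder output path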
  • 64. Advanced Topics: Tachyon More details: tachyon-project.org/Command-Line-Interface.html ampcamp.berkeley.edu/big-data-mini-course/tachyon.html timothysc.github.io/blog/2014/02/17/bdas-tachyon/
  • 65. Advanced Topics: Tachyon Introduction to Tachyon Haoyuan Li youtu.be/4lMAsd2LNEE
  • 67. Summary: Case Studies Spark at Twitter: Evaluation & Lessons Learnt Sriram Krishnan slideshare.net/krishflix/seattle-spark-meetup-spark-at-twitter • Spark can be more interactive and efficient than MR • Support for iterative algorithms and caching • More generic than traditional MapReduce • Why is Spark faster than Hadoop MapReduce? • Fewer I/O synchronization barriers • Less expensive shuffle • The more complex the DAG, the greater the performance improvement
  • 68. Summary: Case Studies Using Spark to Ignite Data Analytics ebaytechblog.com/2014/05/28/using-spark-to-ignite-data-analytics/
  • 69. Summary: Case Studies Hadoop and Spark Join Forces at Yahoo Andy Feng spark-summit.org/talk/feng-hadoop-and-spark-join-forces-at-yahoo/
  • 70. Summary: Case Studies Collaborative Filtering with Spark Chris Johnson slideshare.net/MrChrisJohnson/collaborative-filtering-with-spark • collab filter (ALS) for music recommendation • Hadoop suffers from I/O overhead • show a progression of code rewrites, converting a Hadoop-based app into efficient use of Spark
  • 71. Summary: Case Studies Why Spark is the Next Top (Compute) Model Dean Wampler slideshare.net/deanwampler/spark-the-next-top-compute-model • Hadoop: most algorithms are much harder to implement in this restrictive map-then-reduce model • Spark: fine-grained “combinators” for composing algorithms • slide #67, any questions?
  • 72. Summary: Case Studies Open Sourcing Our Spark Job Server Evan Chan engineering.ooyala.com/blog/open-sourcing-our-spark-job-server • github.com/ooyala/spark-jobserver • REST server for submitting, running, managing Spark jobs and contexts • company vision for Spark is as a multi-team big data service • shares Spark RDDs in one SparkContext among multiple jobs
  • 73. Summary: Case Studies Beyond Word Count: Productionalizing Spark Streaming Ryan Weald spark-summit.org/talk/weald-beyond-word-count-productionalizing-spark-streaming/ blog.cloudera.com/blog/2014/03/letting-it-flow-with-spark-streaming/ • overcoming 3 major challenges encountered while developing production streaming jobs • write streaming applications the same way you write batch jobs, reusing code • stateful, exactly-once semantics out of the box • integration of Algebird
  • 74. Summary: Case Studies Installing the Cassandra / Spark OSS Stack Al Tobey tobert.github.io/post/2014-07-15-installing-cassandra-spark-stack.html • install+config for Cassandra and Spark together • spark-cassandra-connector integration • examples show a Spark shell that can access tables in Cassandra as RDDs with types pre-mapped and ready to go
  • 75. Summary: Case Studies One platform for all: real-time, near-real-time, and offline video analytics on Spark Davis Shepherd, Xi Liu spark-summit.org/talk/one-platform-for-all-real-time-near-real-time-and-offline-video-analytics-on-spark
  • 77. certification: Apache Spark developer certificate program • http://oreilly.com/go/sparkcert • defined by Spark experts @Databricks • assessed by O’Reilly Media • preview @Strata NY
  • 78. community: spark.apache.org/community.html video+slide archives: spark-summit.org local events: Spark Meetups Worldwide resources: databricks.com/spark-training-resources workshops: databricks.com/spark-training Intro to Spark Spark AppDev Spark DevOps Spark DataSci Distributed ML on Spark Streaming Apps on Spark Spark + Cassandra
  • 79. books: Fast Data Processing with Spark Holden Karau Packt (2013) shop.oreilly.com/product/9781782167068.do Spark in Action Chris Fregly Manning (2015*) sparkinaction.com/ Learning Spark Holden Karau, Andy Konwinski, Matei Zaharia O’Reilly (2015*) shop.oreilly.com/product/0636920028512.do
  • 80. events: Strata NY + Hadoop World NYC, Oct 15-17 strataconf.com/stratany2014 Big Data TechCon SF, Oct 27 bigdatatechcon.com Strata EU Barcelona, Nov 19-21 strataconf.com/strataeu2014 Data Day Texas Austin, Jan 10 datadaytexas.com Strata CA San Jose, Feb 18-20 strataconf.com/strata2015 Spark Summit East NYC, Mar 18-19 spark-summit.org/east Spark Summit 2015 SF, Jun 15-17 spark-summit.org
  • 81. presenter: monthly newsletter for updates, events, conf summaries, etc.: liber118.com/pxn/ Just Enough Math O’Reilly, 2014 justenoughmath.com preview: youtu.be/TQ58cWgdCpA Enterprise Data Workflows with Cascading O’Reilly, 2013 shop.oreilly.com/product/0636920028536.do