Intro to Apache Spark 
Paco Nathan @pacoid 
(BS MathSci 86 / MS CS 86) 
Stanford ICME, 2014-10-28
What is Spark?
What is Spark? 
Developed in 2009 at UC Berkeley AMPLab, then 
open sourced in 2010, Spark has since become 
one of the largest OSS communities in big data, 
with over 200 contributors in 50+ organizations 
spark.apache.org 
“Organizations that are looking at big data challenges – 
including collection, ETL, storage, exploration and analytics – 
should consider Spark for its in-memory performance and 
the breadth of its model. It supports advanced analytics 
solutions on Hadoop clusters, including the iterative model 
required for machine learning and graph analysis.” 
Gartner, Advanced Analytics and Data Science (2014)
What is Spark?
What is Spark? 
Spark Core is the general execution engine for the 
Spark platform that other functionality is built atop: 
• in-memory computing capabilities deliver speed 
• general execution model supports wide variety 
of use cases 
• ease of development – native APIs in Java, Scala, 
Python (+ SQL, Clojure, R)
What is Spark? 
WordCount in 3 lines of Spark 
WordCount in 50+ lines of Java MR
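To make the comparison concrete, here is a minimal sketch of the Spark side, assuming a spark-shell session that provides sc; the output path is a placeholder:

// WordCount in roughly three lines of Spark (Scala)
val textFile = sc.textFile("hdfs://...")
val counts = textFile.flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
counts.saveAsTextFile("hdfs://.../wordcount-output")  // placeholder output path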
What is Spark? 
Sustained exponential growth, as one of the most 
active Apache projects ohloh.net/orgs/apache
A Brief History
A Brief History: Functional Programming for Big Data 
Theory, Eight Decades Ago: 
what can be computed? 
Haskell Curry 
haskell.org 
Alonzo Church 
wikipedia.org 
Praxis, Four Decades Ago: 
algebra for applicative systems 
John Backus 
acm.org 
David Turner 
wikipedia.org 
Reality, Two Decades Ago: 
machine data from web apps 
Pattie Maes 
MIT Media Lab
A Brief History: Functional Programming for Big Data 
The Big Data Problem – 
A single machine can no longer 
process or even store all the data! 
The most feasible approach is to 
distribute over large clusters…
Google Datacenter 
How do we program this thing?
A Brief History: Functional Programming for Big Data 
circa 2002: 
mitigate risk of large distributed workloads lost 
due to disk failures on commodity hardware… 
Google File System 
Sanjay Ghemawat, Howard Gobioff, Shun-Tak Leung 
research.google.com/archive/gfs.html 
MapReduce: Simplified Data Processing on Large Clusters 
Jeffrey Dean, Sanjay Ghemawat 
research.google.com/archive/mapreduce.html
A Brief History: Functional Programming for Big Data 
Timeline (2002–2014): 
• 2002: MapReduce @ Google 
• 2004: MapReduce paper 
• 2006: Hadoop @ Yahoo! 
• 2008: Hadoop Summit 
• 2010: Spark paper 
• 2014: Apache Spark becomes a top-level Apache project
A Brief History: Functional Programming for Big Data 
General batch processing: MapReduce 
Specialized systems (iterative, interactive, streaming, graph, etc.): 
Pregel, Giraph, Dremel, Drill, Impala, S4, Storm, F1, MillWheel, GraphLab, Tez 
MR doesn’t compose well for large applications, 
and so specialized systems emerged as workarounds
A Brief History: Functional Programming for Big Data 
circa 2010: 
a unified engine for enterprise data workflows, 
based on commodity hardware a decade later… 
Spark: Cluster Computing with Working Sets 
Matei Zaharia, Mosharaf Chowdhury, 
Michael Franklin, Scott Shenker, Ion Stoica 
people.csail.mit.edu/matei/papers/2010/hotcloud_spark.pdf 
Resilient Distributed Datasets: A Fault-Tolerant Abstraction for 
In-Memory Cluster Computing 
Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, 
Justin Ma, Murphy McCauley, Michael Franklin, Scott Shenker, Ion Stoica 
usenix.org/system/files/conference/nsdi12/nsdi12-final138.pdf
A Brief History: Functional Programming for Big Data 
In addition to simple map and reduce operations, 
Spark supports SQL queries, streaming data, and 
complex analytics such as machine learning and 
graph algorithms out-of-the-box. 
Better yet, combine these capabilities seamlessly 
into one integrated workflow…
TL;DR: Generational trade-offs for handling Big Compute 
• Cheap memory: recompute (RDD) 
• Cheap storage: replicate (DFS) 
• Cheap network: reference (URI)
TL;DR: Applicative Systems and Functional Programming – RDDs 
[diagram: a chain of RDDs linked by transformations, ending in an action that returns a value] 

// transformed RDDs 
val errors = lines.filter(_.startsWith("ERROR")) 
val messages = errors.map(_.split("\t")).map(r => r(1)) 
messages.cache() 

// action 1 
messages.filter(_.contains("mysql")).count()
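For a self-contained illustration of the same idea, here is a sketch that assumes only a running SparkContext sc (e.g. in spark-shell); the sample log lines are made up:

// transformations are lazy: they only build up the RDD lineage
val lines = sc.parallelize(Seq("ERROR\tmysql is down", "INFO\tall good", "ERROR\tphp timeout"))
val errors = lines.filter(_.startsWith("ERROR"))
val messages = errors.map(_.split("\t")).map(r => r(1))
messages.cache()

// an action triggers evaluation of that lineage and returns a value
val n = messages.filter(_.contains("mysql")).count()  // n == 1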
A Brief History: Smashing The Previous Petabyte Sort Record 
databricks.com/blog/2014/10/10/spark-petabyte-sort.html
Spark Deconstructed
Spark Deconstructed: Log Mining Example 
// load error messages from a log into memory 
// then interactively search for various patterns 
// https://gist.github.com/ceteri/8ae5b9509a08c08a1132 

// base RDD 
val lines = sc.textFile("hdfs://...") 

// transformed RDDs 
val errors = lines.filter(_.startsWith("ERROR")) 
val messages = errors.map(_.split("\t")).map(r => r(1)) 
messages.cache() 

// action 1 
messages.filter(_.contains("mysql")).count() 

// action 2 
messages.filter(_.contains("php")).count()
Spark Deconstructed: Log Mining Example 
[cluster diagram: one Driver and three Workers] 
We start with Spark running on a cluster, then submit code to be evaluated on it:
Spark Deconstructed: Log Mining Example 
// base RDD 
val lines = sc.textFile("hdfs://...") 

// transformed RDDs 
val errors = lines.filter(_.startsWith("ERROR")) 
val messages = errors.map(_.split("\t")).map(r => r(1)) 
messages.cache() 

// action 1 
messages.filter(_.contains("mysql")).count() 

// action 2 ("the other part", discussed later) 
messages.filter(_.contains("php")).count()
Spark Deconstructed: Log Mining Example 
At this point, take a look at the transformed 
RDD operator graph: 
scala> messages.toDebugString 
res5: String = 
MappedRDD[4] at map at <console>:16 (3 partitions) 
  MappedRDD[3] at map at <console>:16 (3 partitions) 
    FilteredRDD[2] at filter at <console>:14 (3 partitions) 
      MappedRDD[1] at textFile at <console>:12 (3 partitions) 
        HadoopRDD[0] at textFile at <console>:12 (3 partitions)
Spark Deconstructed: Log Mining Example 
[cluster diagram: one Driver and three Workers; same code as above, with action 2 highlighted as "the other part" to discuss later]
Spark Deconstructed: Log Mining Example 
[cluster diagram: one Driver and three Workers, each holding an HDFS block (block 1, block 2, block 3); same code as above]
Spark Deconstructed: Log Mining Example 
[cluster diagram: each Worker reads its HDFS block (block 1, block 2, block 3); same code as above]
Spark Deconstructed: Log Mining Example 
[cluster diagram: each Worker processes its block and caches the data (cache 1, cache 2, cache 3); same code as above]
Spark Deconstructed: Log Mining Example 
[cluster diagram: cached partitions (cache 1, cache 2, cache 3) remain on the Workers after action 1 completes; same code as above]
Spark Deconstructed: Log Mining Example – discussing the other part (action 2) 
[cluster diagram: Driver and three Workers, each with an HDFS block and a cached partition] 
// base RDD 
val lines = sc.textFile("hdfs://...") 

// transformed RDDs 
val errors = lines.filter(_.startsWith("ERROR")) 
val messages = errors.map(_.split("\t")).map(r => r(1)) 
messages.cache() 

// action 1 
messages.filter(_.contains("mysql")).count() 

// action 2 
messages.filter(_.contains("php")).count()
Spark Deconstructed: Log Mining Example 
[cluster diagram: for action 2, each Worker processes from its cached partition (cache 1, cache 2, cache 3); same code as above]
Spark Deconstructed: Log Mining Example 
[cluster diagram: cached partitions remain on the Workers; same code as above]
Unifying the Pieces
Unifying the Pieces: Spark SQL 
// http://spark.apache.org/docs/latest/sql-programming-guide.html 

val sqlContext = new org.apache.spark.sql.SQLContext(sc) 
import sqlContext._ 

// define the schema using a case class 
case class Person(name: String, age: Int) 

// create an RDD of Person objects and register it as a table 
val people = sc.textFile("examples/src/main/resources/people.txt") 
  .map(_.split(",")) 
  .map(p => Person(p(0), p(1).trim.toInt)) 

people.registerAsTable("people") 

// SQL statements can be run using the SQL methods provided by sqlContext 
val teenagers = sql("SELECT name FROM people WHERE age >= 13 AND age <= 19") 

// results of SQL queries are SchemaRDDs and support all the 
// normal RDD operations… 
// columns of a row in the result can be accessed by ordinal 
teenagers.map(t => "Name: " + t(0)).collect().foreach(println)
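Since the result is a SchemaRDD, ordinary RDD operations apply to it directly; a small sketch, assuming the teenagers value defined above:

// count the matching rows, or pull the first column back to the driver
val teenCount = teenagers.count()
val names = teenagers.map(t => t(0)).collect()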
Unifying the Pieces: Spark Streaming 
// http://spark.apache.org/docs/latest/streaming-programming-guide.html 

import org.apache.spark.streaming._ 
import org.apache.spark.streaming.StreamingContext._ 

// create a StreamingContext with a SparkConf configuration 
val ssc = new StreamingContext(sparkConf, Seconds(10)) 

// create a DStream that will connect to serverIP:serverPort 
val lines = ssc.socketTextStream(serverIP, serverPort) 

// split each line into words 
val words = lines.flatMap(_.split(" ")) 

// count each word in each batch 
val pairs = words.map(word => (word, 1)) 
val wordCounts = pairs.reduceByKey(_ + _) 

// print a few of the counts to the console 
wordCounts.print() 

ssc.start() // start the computation 
ssc.awaitTermination() // wait for the computation to terminate
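The snippet above assumes sparkConf, serverIP, and serverPort are already defined; a hedged sketch of that setup is below (the app name, master, and port are illustrative placeholders, and the socket source pairs with something like nc -lk 9999 on the server side):

import org.apache.spark.SparkConf

// local[2]: at least two threads, one for the socket receiver and one for processing
val sparkConf = new SparkConf().setAppName("NetworkWordCount").setMaster("local[2]")
val serverIP = "localhost"
val serverPort = 9999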
Unifying the Pieces: MLlib 
See also: MLI: An API for Distributed Machine Learning 
Evan Sparks, Ameet Talwalkar, et al. 
International Conference on Data Mining (2013) 
http://arxiv.org/abs/1310.5426 

// http://spark.apache.org/docs/latest/mllib-guide.html 

val train_data = // RDD of Vector 
val model = KMeans.train(train_data, k=10) 

// evaluate the model 
val test_data = // RDD of Vector 
test_data.map(t => model.predict(t)).collect().foreach(println)
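The slide elides the training and test RDDs; a runnable sketch under the MLlib RDD API of that era follows. The toy vectors, k = 2, and maxIterations = 20 are illustrative assumptions, and the common KMeans.train overload takes (data, k, maxIterations) positionally:

import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

// toy training data: two well-separated clusters
val train_data = sc.parallelize(Seq(
  Vectors.dense(0.0, 0.0), Vectors.dense(0.1, 0.1),
  Vectors.dense(9.0, 9.0), Vectors.dense(9.1, 9.1)))

val model = KMeans.train(train_data, 2, 20)  // (data, k, maxIterations)

// evaluate the model on held-out points
val test_data = sc.parallelize(Seq(Vectors.dense(0.05, 0.05), Vectors.dense(8.9, 9.2)))
test_data.map(t => model.predict(t)).collect().foreach(println)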
Unifying the Pieces: GraphX 
// http://spark.apache.org/docs/latest/graphx-programming-guide.html 

import org.apache.spark.graphx._ 
import org.apache.spark.rdd.RDD 

case class Peep(name: String, age: Int) 

val vertexArray = Array( 
  (1L, Peep("Kim", 23)), (2L, Peep("Pat", 31)), 
  (3L, Peep("Chris", 52)), (4L, Peep("Kelly", 39)), 
  (5L, Peep("Leslie", 45)) 
) 
val edgeArray = Array( 
  Edge(2L, 1L, 7), Edge(2L, 4L, 2), 
  Edge(3L, 2L, 4), Edge(3L, 5L, 3), 
  Edge(4L, 1L, 1), Edge(5L, 3L, 9) 
) 

val vertexRDD: RDD[(Long, Peep)] = sc.parallelize(vertexArray) 
val edgeRDD: RDD[Edge[Int]] = sc.parallelize(edgeArray) 
val g: Graph[Peep, Int] = Graph(vertexRDD, edgeRDD) 

val results = g.triplets.filter(t => t.attr > 7) 

for (triplet <- results.collect) { 
  println(s"${triplet.srcAttr.name} loves ${triplet.dstAttr.name}") 
}
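As a small follow-up sketch, assuming the graph g built above: GraphX also exposes derived views such as inDegrees, which can be joined back against the vertex attributes:

// inDegrees is a VertexRDD[Int] of incoming-edge counts per vertex;
// innerJoin pairs each count with its vertex attribute to recover the name
val inDeg = g.inDegrees.innerJoin(g.vertices) { (_, deg, peep) => s"${peep.name}: $deg" }
inDeg.map(_._2).collect.foreach(println)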
Resources
community: 
spark.apache.org/community.html 
video+slide archives: spark-summit.org 
local events: Spark Meetups Worldwide 
global events: goo.gl/2YqJZK 
resources: databricks.com/spark-training-resources 
workshops: databricks.com/spark-training
books: 
• Fast Data Processing with Spark, Holden Karau, Packt (2013): shop.oreilly.com/product/9781782167068.do 
• Spark in Action, Chris Fregly, Manning (2015*): sparkinaction.com/ 
• Learning Spark, Holden Karau, Andy Konwinski, Matei Zaharia, O’Reilly (2015*): shop.oreilly.com/product/0636920028512.do
certification: 
Apache Spark developer certificate program 
• http://oreilly.com/go/sparkcert 
To prepare for the Spark certification exam, we recommend that you: 
• are comfortable coding the advanced exercises in Spark Camp 
or related training 
• have mastered the material released so far in the O'Reilly book, 
Learning Spark 
• have some hands-on experience developing Spark apps in 
production already 
The test includes questions in Scala, Python, Java, and SQL. However, 
deep proficiency in any of those languages is not required, since the 
questions focus on Spark and its model of computation.
events: 
• Strata EU, Barcelona, Nov 19-21: strataconf.com/strataeu2014 
• Data Day Texas, Austin, Jan 10: datadaytexas.com 
• Strata CA, San Jose, Feb 18-20: strataconf.com/strata2015 
• Spark Summit East, NYC, Mar 18-19: spark-summit.org/east 
• Spark Summit 2015, SF, Jun 15-17: spark-summit.org
