INTRODUCTION TO APACHE SPARK
BY SAMY DINDANE
OUTLINE
History of "Big Data" engines
Apache Spark: What is it and what's special about it?
Apache Spark: What is it used for?
Apache Spark: API
Tools and software usually used with Apache Spark
Demo
HISTORY OF "BIG DATA" ENGINES
BATCH VS STREAMING
HISTORY OF "BIG DATA" ENGINES
2011 - Hadoop MapReduce: Batch, on-disk processing
2011 - Apache Storm: Realtime
2014 - Apache Tez
2014 - Apache Spark: Batch and near-realtime, in-memory processing
2015 - Apache Flink: Realtime, in-memory processing
APACHE SPARK: WHAT IS IT AND WHAT'S SPECIAL ABOUT IT?
WHY SPARK?
Most machine learning algorithms are iterative; each
iteration can improve the results
With a disk-based approach, each iteration's output is written to disk, making the processing slow
HADOOP MAPREDUCE EXECUTION FLOW
SPARK EXECUTION FLOW
Spark is a distributed data processing engine
Started in 2009
Open source & written in Scala
Compatible with Hadoop's data
It runs in memory and on disk
Runs 10 to 100 times faster than Hadoop MapReduce
Applications can be written in Java, Scala, Python & R
Supports batch and near-realtime workflows (micro-batches)
Spark has four main modules on top of its core: Spark SQL, Spark Streaming, MLlib (machine learning) and GraphX (graph processing); a small Spark SQL sketch follows below
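As a taste of one of these modules, here is a minimal Spark SQL sketch (Spark 1.x API, run inside the Spark shell where sc is predefined; the JSON file path is hypothetical):

import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)                      // sc is the shell's SparkContext
val people = sqlContext.read.json("/tmp/people.json")    // hypothetical input file
people.printSchema()                                     // the schema is inferred from the data
people.filter(people("age") > 21).show()                 // SQL-like operations on a DataFrame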
APACHE SPARK: WHAT IS IT USED FOR?
CAPTURE AND EXTRACT DATA
Data can come from several sources:
Databases
Flat files
Web and mobile applications' logs
Data feeds from social media
IoT devices
TRANSFORM DATA
Data in an analytics pipeline needs transformation (a sketch follows this list):
Check and correct quality issues
Handle missing values
Cast fields into specific data types
Compute derived fields
Split or merge records to change the granularity
Join with other datasets
Restructure data
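A minimal sketch of a few of these transformations with the DataFrame API (Spark 1.x, inside the Spark shell where sqlContext is predefined; file paths and column names are hypothetical):

import org.apache.spark.sql.functions._

val users = sqlContext.read.json("/tmp/users.json")          // hypothetical input
val countries = sqlContext.read.json("/tmp/countries.json")  // hypothetical reference dataset

val cleaned = users
    .na.fill("unknown", Seq("city"))                          // handle missing values
    .withColumn("age", col("age").cast("int"))                // cast a field into a specific type
    .withColumn("full_name", concat_ws(" ", col("first_name"), col("last_name")))  // derived field
    .join(countries, col("country_code") === col("code"))     // join with another dataset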
STORE DATA
Data can then be stored in several ways (a sketch follows this list):
As self-describing files (Parquet, JSON, XML)
SQL databases
Search databases (Elasticsearch, Solr)
Wide-column / key-value stores (HBase, Cassandra)
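For example, the transformed DataFrame from the sketch above could be written out as self-describing Parquet files and read back later (paths are hypothetical):

cleaned.write.parquet("/tmp/users_cleaned.parquet")                    // self-describing, columnar files
val reloaded = sqlContext.read.parquet("/tmp/users_cleaned.parquet")   // the schema travels with the data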
QUERY, ANALYZE, VISUALIZE
With Spark Shell, notebooks, Kibana, etc.
APACHE SPARK: API
EXECUTION FLOW
RESILIENT DISTRIBUTED DATASETS
RDDs are the fundamental data unit in Spark
Resilient: If data in memory is lost, it can be recreated
Distributed: Stored in memory across the cluster
Dataset: The initial data can come from a file or be created programmatically
RDDS
Immutable and partitioned collections of elements
Basic operations: map, filter, reduce, persist (see the sketch below)
Several implementations: PairRDD, DoubleRDD,
SequenceFileRDD
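A minimal sketch of these basic operations inside the Spark shell (sc is predefined):

val numbers = sc.parallelize(1 to 1000000)        // create an RDD programmatically
val even = numbers.filter(_ % 2 == 0)             // transformation: keep even numbers
val squares = even.map(n => n.toLong * n)         // transformation: square each element
squares.persist()                                 // keep the computed RDD in memory for reuse
val total = squares.reduce(_ + _)                 // action: triggers the actual computation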
HISTORY
2011 (Spark release) - RDD API
2013 - introduction of the DataFrame API: adds the concept of a schema and allows Spark to manage it for more efficient serialization and deserialization
2015 - introduction of the Dataset API
OPERATIONS ON RDDS
Transformations
Actions
TRANSFORMATIONS
Create a new dataset from an existing RDD, e.g. map, filter, flatMap
ACTIONS
Return a value to the driver program after running a computation on the dataset, e.g. collect, count, reduce
EXAMPLE OF MAP AND FILTER TRANSFORMATIONS
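A minimal sketch of map and filter, run inside the Spark shell (the input file path is hypothetical):

val lines = sc.textFile("README.md")                             // hypothetical input file
val sparkLines = lines.filter(line => line.contains("Spark"))    // transformation: lazily recorded
val lineLengths = sparkLines.map(line => line.length)            // transformation: still lazy
lineLengths.collect()                                            // action: triggers the whole pipeline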
HOW TO RUN SPARK PROGRAMS?
Inside Spark Shell
Using a notebook
As a Spark application
By submitting a Spark application to spark-submit
INSIDE SPARK SHELL
Run ./bin/spark-shell
val textFile = sc.textFile("README.md")
val lines = textFile.filter(line => line contains "Spark")
lines.collect()
USING A NOTEBOOK
There are many Spark notebooks; we are going to use http://spark-notebook.io/
spark-notebook
open http://localhost:9000/
AS A SPARK APPLICATION
By adding spark-core and other Spark modules as project dependencies and using the Spark API inside the application code
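With sbt, for instance, the dependency could be declared like this (a sketch; the Scala and Spark versions are assumptions and should match your cluster):

// build.sbt (versions are assumptions; adjust them to your environment)
scalaVersion := "2.10.6"
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.6.1"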
import org.apache.spark.{SparkConf, SparkContext}

def main(args: Array[String]) {
    val conf = new SparkConf()
        .setAppName("Sample Application")
        .setMaster("local")
    val sc = new SparkContext(conf)
    val logData = sc.textFile("/tmp/spark/README.md")
    val lines = logData.filter(line => line contains "Spark")
    lines.collect()
    sc.stop()
}
BY SUBMITTING A SPARK APPLICATION TO SPARK-SUBMIT
./bin/spark-submit \
    --class <main-class> \
    --master <master-url> \
    --deploy-mode <deploy-mode> \
    --conf <key>=<value> \
    ... # other options
    <application-jar> \
    [application-arguments]
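A hypothetical invocation for the sample application above (the class name, jar path and master URL are assumptions):

./bin/spark-submit \
    --class com.example.SampleApplication \
    --master local[4] \
    target/scala-2.10/sample-application.jar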
TERMINOLOGY
SparkContext: The connection to a Spark cluster; the entry point of every Spark application
Worker node: A node that runs application code in the cluster
Task: A unit of work
Job: Consists of multiple tasks
Executor: A process on a worker node that runs the tasks
TOOLS AND SOFTWARE USUALLY USED WITH APACHE SPARK
HDFS: HADOOP DISTRIBUTED FILE SYSTEM
Simple: Uses many servers as one big computer
Reliable: Detects failures, has redundant storage
Fault-tolerant: Auto-retry, self-healing
Scalable: Scales (almost) linearly with disks and CPUs
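Spark can read its input directly from HDFS; a minimal sketch inside the Spark shell (the namenode host, port and path are hypothetical):

val logs = sc.textFile("hdfs://namenode:8020/data/logs/access.log")   // hypothetical HDFS URI
logs.count()                                                          // action: count the lines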
APACHE KAFKA
A DISTRIBUTED AND REPLICATED MESSAGING SYSTEM
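Kafka is often the input source of a Spark Streaming job. A minimal sketch using the Spark 1.x receiver-based Kafka API (requires the spark-streaming and spark-streaming-kafka dependencies; host names, group and topic names are hypothetical):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

val conf = new SparkConf().setAppName("KafkaWordCount").setMaster("local[2]")
val ssc = new StreamingContext(conf, Seconds(10))              // micro-batches of 10 seconds

// Receive (key, message) pairs from the "events" topic through ZooKeeper
val messages = KafkaUtils.createStream(ssc, "zookeeper-host:2181",
    "spark-consumer-group", Map("events" -> 1))
val counts = messages.map(_._2)                                // keep only the message payload
    .flatMap(_.split(" "))
    .map(word => (word, 1L))
    .reduceByKey(_ + _)
counts.print()

ssc.start()
ssc.awaitTermination()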
APACHE ZOOKEEPER
ZOOKEEPER IS A DISTRIBUTED, OPEN-SOURCE COORDINATION SERVICE FOR DISTRIBUTED APPLICATIONS
Coordination: Needed when multiple nodes need to
work together
Examples:
Group membership
Locking
Leader election
Synchronization
Publisher/subscriber
APACHE MESOS
Mesos is built using the same principles as the Linux
kernel, only at a different level of abstraction.
The Mesos kernel runs on every machine and provides applications (e.g., Hadoop, Spark, Kafka, Elasticsearch) with APIs for resource management and scheduling across entire datacenter and cloud environments.
A cluster manager that:
Runs distributed applications
Abstracts CPU, memory, storage, and other resources
Handles resource allocation
Handles application isolation
Has a Web UI for viewing the cluster's state
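For example, a Spark application can be pointed at a Mesos cluster simply by changing the master URL (a sketch; the host is hypothetical, 5050 is Mesos' default master port):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
    .setAppName("Sample Application")
    .setMaster("mesos://mesos-master:5050")   // hypothetical Mesos master
val sc = new SparkContext(conf)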
NOTEBOOKS
Spark Notebook: Allows performing reproducible
analysis with Scala, Apache Spark and more
Apache Zeppelin: A web-based notebook that enables
interactive data analytics
THE END
APACHE SPARK
Is a fast distributed data processing engine
Runs in memory (and on disk)
Can be used with Java, Scala, Python & R
Its main data structure is a Resilient Distributed
Dataset