Introduction to
Scala and Spark
Contents
• Hadoop quick introduction
• An introduction to spark
• Spark – Architecture & Programming Model
1
Hadoop
• Open-source software for distributed storage of large datasets on commodity hardware
• Provides a programming model/framework for processing large datasets in parallel
2
[Diagram: MapReduce data flow: Input → Map tasks → Reduce tasks → Output]
Limitations of Map Reduce
• Slow due to replication, serialization, and disk IO
• Inefficient for:
– Iterative algorithms (Machine Learning, Graphs & Network Analysis)
– Interactive Data Mining (R, Excel, Ad hoc Reporting, Searching)
3
[Diagram: iterative MapReduce jobs: each iteration reads its input from HDFS and writes its output back to HDFS, repeating the Input → Map → Reduce → Output flow]
Solutions?
• Leverage memory:
– Load data into memory
– Replace disks with SSDs
4
Apache Spark
• A big data analytics cluster-computing framework written in Scala
• Originally open-sourced by the AMPLab at UC Berkeley
• Provides in-memory analytics based on RDDs
• Highly compatible with the Hadoop Storage API
– Can run on top of a Hadoop cluster
• Developers can write programs in multiple programming languages
5
Spark architecture
6
[Diagram: the Spark Driver (Master) and the Cluster Manager coordinate Spark Workers, each with its own cache, running alongside HDFS DataNodes that hold the data blocks]
Spark
7
[Diagram: iter. 1 → iter. 2 → …, with an HDFS read and an HDFS write around every iteration]
Spark
8
[Diagram: iter. 1 → iter. 2 → …, with a single HDFS read of the input; later iterations work on cached data]
Not tied to the two-stage MapReduce paradigm:
1. Extract a working set
2. Cache it
3. Query it repeatedly
[Chart: logistic regression runtime in Hadoop vs. Spark]
Spark Programming Model
9
The user (developer) writes the driver program:

sc = new SparkContext
rdd = sc.textFile("hdfs://…")
rdd.filter(…)
rdd.cache()
rdd.count()
rdd.map(…)

[Diagram: the Driver Program's SparkContext talks to the Cluster Manager, which schedules Tasks on Executors (each with a cache) on the Worker Nodes; data comes from the HDFS DataNodes]
Spark Programming Model
10
The user (developer) writes the same driver program as before; each step produces an RDD (Resilient Distributed Dataset):
• Immutable data structure
• In-memory (explicitly)
• Fault tolerant
• Parallel data structure
• Controlled partitioning to optimize data placement
• Can be manipulated using a rich set of operators
RDD
• Programming interface: programmers can perform three types of operations
11
Transformations
• Create a new dataset from an existing one.
• Lazy in nature: they are executed only when some action is performed.
• Examples: map(func), filter(func), distinct()

Actions
• Return a value to the driver program, or export data to a storage system, after performing a computation.
• Examples: count(), reduce(func), collect(), take()

Persistence
• For caching datasets in memory for future operations.
• Option to store on disk, in RAM, or mixed (storage level).
• Examples: persist(), cache()
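• A minimal sketch tying the three operation types together (it assumes an existing SparkContext sc, as in the earlier slides; the input path is hypothetical):

val words = sc.textFile("hdfs://…/words.txt")     // load a text file as an RDD[String]
val longWords = words.filter(w => w.length > 5)   // transformation: lazy, nothing runs yet
longWords.cache()                                  // persistence: keep the filtered RDD in memory
val n = longWords.count()                          // action: triggers the computation and fills the cache
val sample = longWords.take(10)                    // action: reuses the cached RDD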
How Spark works
• RDD: Parallel collection with partitions
• User applications create RDDs, transform them, and run actions.
• This results in a DAG (Directed Acyclic Graph) of operators.
• The DAG is compiled into stages.
• Each stage is executed as a series of Tasks (one Task for each partition).
12
Example
13
sc.textFile(“/wiki/pagecounts”) RDD[String]
textFile
Example
14
sc.textFile("/wiki/pagecounts")
.map(line => line.split("\t"))
RDD[String]
textFile
map
RDD[Array[String]]
Example
15
sc.textFile("/wiki/pagecounts")
.map(line => line.split("\t"))
.map(r => (r(0), r(1).toInt))
RDD[String]
textFile map
RDD[Array[String]]
RDD[(String, Int)]
map
Example
16
sc.textFile("/wiki/pagecounts")
.map(line => line.split("\t"))
.map(r => (r(0), r(1).toInt))
.reduceByKey(_ + _)
RDD[String]
textFile map
RDD[Array[String]]
RDD[(String, Int)]
map
RDD[(String, Int)]
reduceByKey
Example
17
sc.textFile("/wiki/pagecounts")
.map(line => line.split("\t"))
.map(r => (r(0), r(1).toInt))
.reduceByKey(_ + _, 3)
.collect()
RDD[String]
RDD[Array[String]]
RDD[(String, Int)]
RDD[(String, Int)]
reduceByKey
Array[(String, Int)]
collect
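• Putting the steps together, a runnable version of the pipeline looks roughly like this (it assumes tab-separated pagecounts lines, as in the slides):

val counts = sc.textFile("/wiki/pagecounts")   // RDD[String]
  .map(line => line.split("\t"))               // RDD[Array[String]]
  .map(r => (r(0), r(1).toInt))                // RDD[(String, Int)]
  .reduceByKey(_ + _, 3)                       // RDD[(String, Int)] in 3 partitions
  .collect()                                   // Array[(String, Int)] on the driver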
Execution Plan
Stages are sequences of RDDs that don't have a shuffle in between.
18
[Diagram: textFile → map → map (Stage 1) | reduceByKey → collect (Stage 2)]
Execution Plan
19
[Diagram: textFile → map → map (Stage 1) | reduceByKey → collect (Stage 2)]
Stage 1:
1. Read the HDFS split
2. Apply both the maps
3. Start the partial reduce
4. Write shuffle data
Stage 2:
1. Read shuffle data
2. Final reduce
3. Send the result to the driver program
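• To see where Spark places the stage boundary, you can print the lineage of the final RDD with toDebugString (a quick check, reusing the pagecounts pipeline from the example):

val pagecounts = sc.textFile("/wiki/pagecounts")
  .map(line => line.split("\t"))
  .map(r => (r(0), r(1).toInt))
  .reduceByKey(_ + _, 3)
println(pagecounts.toDebugString)   // the ShuffledRDD marks the boundary between Stage 1 and Stage 2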
Stage Execution
• Create a task for each Partition in the new RDD
• Serialize the Task
• Schedule and ship Tasks to Slaves
And all this happens internally (you don't need to do anything)
20
[Diagram: one Task per partition (Task 1, Task 2, …)]
Spark Executor (Slaves)
21
[Diagram: each executor core (Core 1, Core 2, Core 3) runs a loop of Fetch Input → Execute Task → Write Output]
Summary of Components
• Task: The fundamental unit of execution in Spark
• Stage: Set of Tasks that run in parallel
• DAG: Logical Graph of RDD operations
• RDD: Parallel dataset with partitions
22
Start the docker container
From
• https://github.com/sequenceiq/docker-spark
docker run -i -t -h sandbox sequenceiq/spark:1.1.1-ubuntu /etc/bootstrap.sh -bash
• Run the Spark shell using yarn or local
spark-shell --master yarn-client --driver-memory 1g --executor-memory 1g --executor-cores 2
23
Running the example and Shell
• To run the examples
– $ run-example SparkPi 10
• We can start a Spark shell via
– spark-shell --master local[n]
• The --master option specifies the master URL for a distributed cluster
• Example applications are also provided in Python
– spark-submit examples/src/main/python/pi.py 10
24
Collections and External Datasets
• A Collection can be parallelized using the SparkContext
– val data = Array(1, 2, 3, 4, 5)
– val distData = sc.parallelize(data)
• Spark can create distributed datasets from HDFS, Cassandra, HBase, Amazon S3, etc.
• Spark supports text files, SequenceFiles, and any other Hadoop InputFormat
• Files can be read from a local or remote URI (hdfs://, s3n://)
– scala> val distFile = sc.textFile("data.txt")
– distFile: RDD[String] = MappedRDD@1d4cee08
– distFile.map(s => s.length).reduce((a,b) => a + b)
25
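• Both ways of creating an RDD accept an optional number of partitions (slices); a small sketch (the log path is hypothetical):

val distData = sc.parallelize(1 to 100, 10)      // 10 partitions; roughly 2-3 per CPU is a common rule of thumb
val logs = sc.textFile("hdfs://…/logs.txt", 8)   // ask for at least 8 partitions (default is one per HDFS block)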
RDD operations
• Sum the lengths of the lines in a file
– val lines = sc.textFile("data.txt")
– val lineLengths = lines.map(s => s.length)
– val totalLength = lineLengths.reduce((a, b) => a + b)
• If we want to use lineLengths later we can run
– lineLengths.persist()
• This causes lineLengths to be saved in memory after the first time it is computed (persist() should be called before the reduce)
26
Passing a function to Spark
• Spark relies on Scala's anonymous function syntax
– (x: Int) => x * x
• Which is shorthand for
new Function1[Int, Int] {
def apply(x: Int) = x * x
}
• We can define functions with more parameters, or with none
– (x: Int, y: Int) => "(" + x + ", " + y + ")"
– () => { System.getProperty("user.dir") }
• The syntax is shorthand for
– Function1[-T, +R] … Function22[…]
27
Passing a function to Spark
object MyFunctions {
def func1(s: String): String = s + s
}
file.map(MyFunctions.func1)
class MyClass {
def func1(s: String): String = { ... }
def doStuff(rdd: RDD[String]): RDD[String] = { rdd.map(func1) }
}
28
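• One caveat noted in the Spark programming guide: a closure that references a field or method of an enclosing object causes the whole object to be shipped to the cluster; copying the field into a local variable avoids that. A sketch (MyOtherClass is made up for illustration):

import org.apache.spark.rdd.RDD

class MyOtherClass(val prefix: String) {
  def doStuff(rdd: RDD[String]): RDD[String] = {
    val p = prefix           // copy the field into a local val so only the String is serialized,
    rdd.map(x => p + x)      // not the whole MyOtherClass instance
  }
}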
Working with Key-Value Pairs
• We can set up RDDs of key-value pairs, which are represented as the Tuple2 type
– val lines = sc.textFile("data.txt")
– val pairs = lines.map(s => (s, 1))
– val counts = pairs.reduceByKey((a, b) => a + b)
• We can use counts.sortByKey() to sort
• And finally counts.collect() to bring them back
• NOTE: when using custom objects as keys we must make sure they implement equals() together with hashCode()
http://docs.oracle.com/javase/7/docs/api/java/lang/Object.html#hashCode()
29
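• A Scala case class is a convenient custom key because the compiler generates equals() and hashCode() for it; a sketch reusing the lines RDD from above (the Page fields and the tab-separated input are assumptions):

case class Page(project: String, title: String)    // equals() and hashCode() are generated automatically

val pageCounts = lines
  .map(_.split("\t"))
  .map(r => (Page(r(0), r(1)), 1))
  .reduceByKey(_ + _)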
Transformations
• There are several transformations supported by
Spark
– Map
– Filter
– flatMap
– mapPartitions
– ….
• When are they executed?
30
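• They are executed lazily: a transformation only records the lineage, and nothing is computed until an action runs. For example:

val nums = sc.parallelize(1 to 1000000)
val squares = nums.map(n => n.toLong * n)   // nothing happens yet
val evens = squares.filter(_ % 2 == 0)      // still nothing
println(evens.count())                      // the action triggers the whole pipeline (prints 500000)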
Actions
• The following table lists some of the common actions
supported:
– Reduce
– Collect
– Count
– First
– Take
– takeSample
31
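• A few of them on a small RDD (the expected results are shown as comments):

val rdd = sc.parallelize(Seq(5, 1, 4, 2, 3))
rdd.reduce(_ + _)           // 15
rdd.collect()               // Array(5, 1, 4, 2, 3)
rdd.count()                 // 5
rdd.first()                 // 5
rdd.take(3)                 // Array(5, 1, 4)
rdd.takeSample(false, 2)    // 2 random elements, sampled without replacement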
RDD Persistence
• One of the most important capabilities in Spark is persisting
(or caching) a dataset in memory across operations
• Caching is a key tool for iterative algorithms and fast
interactive use
• You can mark an RDD to be persisted using the persist() or
cache() methods on it
• The first time it is computed in an action, it will be kept in
memory on the nodes. Spark’s cache is fault-tolerant – if any
partition of an RDD is lost, it will automatically be recomputed
using the transformations that originally created it.
32
RDD persistence
• In addition, each persisted RDD can be stored using a different storage level
• For example, we can persist
– the dataset on disk,
– in memory but as serialized Java objects (to save space), replicate it
across nodes,
– off-heap in Tachyon
• Note: In Python, stored objects will always be serialized with
the Pickle library, so it does not matter whether you choose a
serialized level.
• Spark also automatically persists some intermediate data in
shuffle operations (e.g. reduceByKey), even without users
calling persist
33
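• Storage levels are chosen by passing a StorageLevel to persist(); cache() is shorthand for MEMORY_ONLY. A sketch (the file path is hypothetical):

import org.apache.spark.storage.StorageLevel

val bigLog = sc.textFile("hdfs://…/big.log")
bigLog.persist(StorageLevel.MEMORY_ONLY_SER)   // serialized in memory: more compact, but more CPU to read
// other levels include MEMORY_AND_DISK, DISK_ONLY, MEMORY_ONLY_2 (replicated) and OFF_HEAP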
Which Storage Level to Choose?
• Use MEMORY_ONLY if the dataset fits in main memory
• If not, try using MEMORY_ONLY_SER and selecting a fast
serialization library to make the objects much more space-
efficient, but still reasonably fast to access.
• Don’t spill to disk unless the functions that computed your
datasets are expensive, or they filter a large amount of the
data. Otherwise, recomputing a partition may be as fast as
reading it from disk.
• Use the replicated storage levels if you want fast fault
recovery
• Use OFF_HEAP in environments with large amounts of memory or multiple concurrent applications
34
Shared Variables
• Normally, when functions are executed on a remote node they work on separate copies of the variables
• However, Spark does provide two types of shared variables for two common usages:
– Broadcast variables
– Accumulators
35
Broadcast Variables
• Broadcast variables allow the programmer to keep a
read-only variable cached on each machine rather
than shipping a copy of it with tasks.
scala> val broadcastVar = sc.broadcast(Array(1, 2, 3))
broadcastVar: org.apache.spark.broadcast.Broadcast[Array[Int]] =
Broadcast(0)
scala> broadcastVar.value
res0: Array[Int] = Array(1, 2, 3)
36
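• A typical use is shipping a small lookup table once and reading it inside tasks; a sketch (the lookup contents are made up):

val lookup = sc.broadcast(Map(1 -> "one", 2 -> "two", 3 -> "three"))
val named = sc.parallelize(Seq(1, 2, 3, 2))
  .map(n => lookup.value.getOrElse(n, "unknown"))   // tasks read the broadcast value; it is shipped once per node, not per task
named.collect()   // Array(one, two, three, two)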
Accumulators
• Accumulators are variables that are only “added” to through
an associative operation and can therefore be efficiently
supported in parallel
• Spark natively supports accumulators of numeric types, and
programmers can add support for new types
• Note: not yet supported on Python
scala> val accum = sc.accumulator(0, "My Accumulator")
accum: spark.Accumulator[Int] = 0
scala> sc.parallelize(Array(1, 2, 3, 4)).foreach(x => accum += x)
scala> accum.value
res7: Int = 10
37
Accumulators
object VectorAccumulatorParam extends AccumulatorParam[Vector] {
def zero(initialValue: Vector): Vector = {
Vector.zeros(initialValue.size)
}
def addInPlace(v1: Vector, v2: Vector): Vector = {
v1 += v2
}
}
// Then, create an Accumulator of this type:
val vecAccum = sc.accumulator(new Vector(...))(VectorAccumulatorParam)
38
Spark Examples
• Let’s walk through
http://spark.apache.org/examples.html
• Other examples are on
• Basic Sample => https://github.com/apache/spark/tree/master/examples/src/main/scala/org/apache/spark/examples
• Streaming Samples => https://github.com/apache/spark/tree/master/examples/src/main/scala/org/apache/spark/examples/streaming
39
Create a Self Contained App in
Scala
/* SimpleApp.scala */
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
object SimpleApp {
def main(args: Array[String]) {
val logFile = "YOUR_SPARK_HOME/README.md" // Should be some file on your system
val conf = new SparkConf().setAppName("Simple Application")
val sc = new SparkContext(conf)
val logData = sc.textFile(logFile, 2).cache()
val numAs = logData.filter(line => line.contains("a")).count()
val numBs = logData.filter(line => line.contains("b")).count()
println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
}
}
40
Create a Self Contained App in
Scala
Create a build.sbt file
name := "Simple Project"
version := "1.0"
scalaVersion := "2.10.4"
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.2.0"
41
Project folder
• This is how the project directory should look
$ find .
.
./simple.sbt
./src
./src/main
./src/main/scala
./src/main/scala/SimpleApp.scala
• With sbt package we can create the jar
• To submit the job
$ YOUR_SPARK_HOME/bin/spark-submit \
--class "SimpleApp" \
--master local[4] \
target/scala-2.10/simple-project_2.10-1.0.jar
42
Gradle Project
• https://github.com/fabiofumarola/spark-demo
43
Spark Streaming
44
A simple example
• We create a local StreamingContext with two execution threads and a batch interval of 1 second.
import org.apache.spark._
import org.apache.spark.streaming._
import org.apache.spark.streaming.StreamingContext._

// Create a local StreamingContext with two working threads and a batch interval of 1 second.
// The master requires 2 cores to prevent a starvation scenario.
val conf = new SparkConf().setMaster("local[2]").setAppName("NetworkWordCount")
val ssc = new StreamingContext(conf, Seconds(1))
45
A simple example (continued)
• Using this context, we can create a DStream that represents
streaming data from a TCP source
val lines = ssc.socketTextStream("localhost", 9999)
• Split each line into words
val words = lines.flatMap(_.split(" "))
• Count each word in the batch
import org.apache.spark.streaming.StreamingContext._
val pairs = words.map(word => (word, 1))
val wordCounts = pairs.reduceByKey(_ + _)
wordCounts.print()
46
A simple example (continued)
• Note that when these lines are executed, Spark Streaming
only sets up the computation it will perform when it is
started, and no real processing has started yet
ssc.start() // Start the computation
ssc.awaitTermination() // Wait for the computation to terminate
• Start netcat as a data server by using
– nc -lk 9999
47
A simple example (continued)
• If you have already downloaded and built Spark, you
can run this example as follows. You will first need to
run Netcat (a small utility found in most Unix-like
systems) as a data server by using
– nc -lk 9999
• Run the example by
– run-example streaming.NetworkWordCount localhost 9999
• http://spark.apache.org/docs/latest/streaming-programming-guide.html
48
Editor's Notes
1. Resilient Distributed Datasets (RDDs) are the distributed memory abstraction that lets programmers perform in-memory parallel computations on large clusters, and do so in a highly fault-tolerant manner. This is the main concept around which the whole Spark framework revolves. There are currently two types of RDDs:
– Parallelized collections: created by calling the parallelize method on an existing Scala collection. The developer can specify the number of slices to cut the dataset into; ideally 2-3 slices per CPU.
– Hadoop datasets: distributed datasets created from any file stored on HDFS or other storage systems supported by Hadoop (S3, HBase, etc.), using SparkContext's textFile method. The default is 1 slice per file block.
2. Transformations, like map, take an RDD as input, pass and process each element through a function, and return a new transformed RDD as output. By default, each transformed RDD is recomputed every time you run an action on it, unless you specify that the RDD should be cached in memory; Spark will then try to keep the elements around the cluster for faster access. RDDs can be persisted on disk as well. Caching is the key tool for iterative algorithms. Using persist, one can specify the storage level for an RDD; cache is just shorthand for the default storage level, which is MEMORY_ONLY.
– MEMORY_ONLY: store the RDD as deserialized Java objects in the JVM. If the RDD does not fit in memory, some partitions will not be cached and will be recomputed on the fly each time they are needed. This is the default level.
– MEMORY_AND_DISK: store the RDD as deserialized Java objects in the JVM. If the RDD does not fit in memory, store the partitions that don't fit on disk, and read them from there when they are needed.
– MEMORY_ONLY_SER: store the RDD as serialized Java objects (one byte array per partition). This is generally more space-efficient than deserialized objects, especially when using a fast serializer, but more CPU-intensive to read.
– MEMORY_AND_DISK_SER: similar to MEMORY_ONLY_SER, but spill partitions that don't fit in memory to disk instead of recomputing them on the fly each time they are needed.
– DISK_ONLY: store the RDD partitions only on disk.
– MEMORY_ONLY_2, MEMORY_AND_DISK_2, etc.: same as the levels above, but replicate each partition on two cluster nodes.
Which storage level is best? A few things to consider: try to keep as much as possible in memory; try not to spill to disk unless the computed datasets are expensive to recompute; use replication only if you want fast fault recovery.