© 2018 YASH Technologies | www.yash.com | Confidential
Apache Spark
- Mahesh Pandit
2
Agenda
 What is Big Data
 Sources for Big Data
 Big Data Exploration
 Hadoop Introduction
 Limitations of Hadoop
 Why Spark
 Spark Ecosystem
 Features of Spark
 Use cases of Spark
 Spark Architecture
 Spark Structured APIs
 Spark RDD
 DataFrame & Datasets
 Best Sources to Learn
 Questions & Answers
3
BIG DATA
 What is Big Data
 Source of Big Data
4
BIG DATA
5
What is Big Data
 Data sets that are too large or complex for traditional data-processing application software to deal with adequately.
 Big data can be described by the following characteristics (the five V's):
 Volume – scale of data
 Velocity – speed at which data is generated and processed
 Variety – different forms of data
 Veracity – uncertainty of data
 Value – data is of no use until it is turned into value
6
Sources of Big Data
 Human-generated data
 Machine-generated data
7
Apache Hadoop
 Introduction
 Storage in Hadoop(HDFS)
 Limitations of Hadoop
8
Apache Hadoop
 Hadoop is an open-source distributed processing framework that manages data processing and storage for big data applications running on clustered systems.
 It was inspired by Google's "Google File System" paper.
 Its co-founders are Doug Cutting and Mike Cafarella.
 Hadoop 0.1.0 was released in April 2006.
 Apache Hadoop has two main parts:
 Hadoop Distributed File System (HDFS)
 MapReduce
 It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.
9
Data storage in HDFS
10
Limitations of Hadoop
11
Introduction to Spark
 Introduction
 Spark Ecosystem
 Iterative & Interactive Data Mining by Spark
 Features of Spark
 Spark Cluster Manager
 Spark Use Cases
12
Why Spark?
13
Top Companies Using Spark
14
Introduction to Spark
 Apache Spark is an open-source, distributed, general-purpose cluster-computing framework.
 It provides high-level APIs in Java, Scala, Python and R.
 It provides an optimized engine that supports general execution graphs.
 Spark can be up to 100 times faster than MapReduce for in-memory workloads.
 It also supports a rich set of higher-level tools, including Spark SQL, Spark Streaming, MLlib (machine learning) and GraphX (graph processing).
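As an illustration of the high-level API, a minimal Scala sketch (the application name and local master are placeholders, not from the slides) that starts a SparkSession and runs a small computation:

import org.apache.spark.sql.SparkSession

// Entry point for the high-level DataFrame/Dataset API
val spark = SparkSession.builder()
  .appName("SparkIntroExample")  // placeholder application name
  .master("local[*]")            // run locally; on a cluster this would be YARN, Mesos or Standalone
  .getOrCreate()

// A small distributed computation executed by Spark's optimized engine
val evens = spark.range(1, 1000).filter("id % 2 = 0").count()
println(s"Even ids below 1000: $evens")

spark.stop()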
15
Apache Spark ecosystem
16
17
Iterative and Interactive Data Mining
 MapReduce simplified "Big Data" analysis on large clusters, but it is inefficient at handling iterative algorithms and interactive data mining.
 This led to a new technology, Spark, which provides abstractions for leveraging distributed memory.
[Diagram: the input is processed one time into distributed memory and then reused across iterations and interactive queries.]
18
Features of Spark
19
Lazy Evaluations
20
DAG(Directed Acyclic Graph)
21
Spark Cluster Managers
There are 3 types of cluster managers:
 Spark Standalone
 Hadoop YARN
 Apache Mesos
22
Spark Use Cases
23
Apache Spark Architecture
 SparkContext
 Spark Driver
 DAG Scheduler
 Stages
 Executor
24
Spark Context
 It is the entry point to Apache Spark functionality.
 It allows your Spark application to access the Spark cluster with the help of a resource manager.
 The resource manager can be one of these three: Spark Standalone, YARN, or Apache Mesos.
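A minimal Scala sketch (the application name and local master are placeholders) of creating a SparkContext through a SparkConf:

import org.apache.spark.{SparkConf, SparkContext}

// Configure the application name and the cluster manager to connect to;
// "local[*]" runs Spark locally, while a real deployment would point at
// Spark Standalone, YARN or Apache Mesos
val conf = new SparkConf()
  .setAppName("SparkContextExample")  // placeholder application name
  .setMaster("local[*]")

val sc = new SparkContext(conf)

println(sc.parallelize(1 to 100).sum())  // 5050.0

sc.stop()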
25
Functions of SparkContext in Apache Spark
26
Workflow of Spark Architecture
 The client submits the Spark user application code. When the application code is submitted, the driver implicitly converts the user code containing transformations and actions into a logical directed acyclic graph (DAG).
 After that, it converts the logical DAG into a physical execution plan with many stages.
 After converting into a physical execution plan, it creates physical execution units called tasks under each stage. The tasks are then bundled and sent to the cluster.
27
Workflow of Spark Architecture (continued)
 Now the driver talks to the cluster manager and negotiates resources. The cluster manager launches executors on worker nodes on behalf of the driver.
 At this point, the driver sends tasks to the executors based on data placement.
 When executors start, they register themselves with the driver, so the driver has a complete view of the executors that are executing its tasks.
 While tasks execute, the driver program monitors the set of executors and also schedules future tasks based on data placement.
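To connect this workflow to code, a minimal sketch (assuming a SparkSession named spark and a placeholder input path) of a job whose user code the driver turns into a DAG, stages and tasks:

// Assumes an existing SparkSession named spark; the input path is a placeholder
val lines = spark.sparkContext.textFile("path/of/input/file")

// Transformations: the driver only records them as a logical DAG, nothing runs yet
val words  = lines.flatMap(_.split(" "))
val pairs  = words.map(w => (w, 1))
val counts = pairs.reduceByKey(_ + _)  // requires a shuffle, so it marks a stage boundary

// Action: the driver builds the physical plan (two stages here), creates one task
// per partition in each stage, and sends the bundled tasks to the executors
counts.collect().foreach(println)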
28
Spark Executor
 Executors are worker processes that run on worker nodes and are in charge of running individual tasks in a given Spark job.
 They are launched at the start of a Spark application.
 They typically run for the entire lifetime of the application.
 As soon as they have run a task, they send the results to the driver.
 Executors also provide in-memory storage for Spark RDDs that are cached by user programs, through the Block Manager.
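A sketch of how executor resources are typically configured; the property names are standard Spark settings, while the application name and values are only illustrative:

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("ExecutorSizingExample")   // placeholder application name
  .set("spark.executor.instances", "4")  // number of executors to launch (YARN/Kubernetes)
  .set("spark.executor.cores", "2")      // tasks each executor can run in parallel
  .set("spark.executor.memory", "4g")    // memory per executor, shared by tasks and cached blocks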
29
Stages in Spark
 A stage is a step in the physical execution plan.
 A stage is a set of parallel tasks, one task per partition.
 Each job is divided into smaller sets of tasks called stages.
30
Spark Stage - An Introduction to Physical Execution plan
Stages in Apache Spark fall into two categories:
1. ShuffleMapStage
 It is an intermediate stage in the physical execution of the DAG.
 It produces data for other stage(s) through a shuffle.
 It can contain any number of pipelined operations, such as map and filter, before the shuffle operation.
2. ResultStage
 A ResultStage runs a function on an RDD to execute a Spark action in the user program.
 It is considered the final stage in a Spark job.
31
Spark Structured APIs
 RDD
 DataFrame
 Dataset
32
Spark Structured APIs
The Structured APIs are a tool for manipulating all sorts of data, from unstructured log files to semi-structured CSV files and highly structured Parquet files.
Spark RDD
 It is a read-only, partitioned collection of records. It allows a programmer to perform in-memory computations.
Spark DataFrame
 Unlike an RDD, the data is organized into named columns (like a table in an RDBMS).
 A DataFrame in Spark allows developers to impose a structure onto a distributed collection of data, allowing higher-level abstraction.
Spark Dataset
 Datasets in Apache Spark are an extension of the DataFrame API which provides a type-safe, object-oriented programming interface.
 They take advantage of Spark's Catalyst optimizer by exposing expressions and data fields to a query planner.
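A minimal sketch (assuming a SparkSession named spark with its implicits imported; the Sale record type is illustrative) showing the same records through the three abstractions:

// Assumes an existing SparkSession named spark
import spark.implicits._

case class Sale(item: String, amount: Int)  // illustrative record type

// RDD: a distributed collection of objects, with no schema known to Spark
val rdd = spark.sparkContext.parallelize(Seq(Sale("book", 12), Sale("pen", 3)))

// DataFrame: the same data organized into named columns
val df = rdd.toDF()
df.select("item", "amount").show()

// Dataset: named columns plus compile-time types via the Sale case class
val ds = df.as[Sale]
ds.filter(_.amount > 5).show()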
33
RDD(Resilient Distributed Dataset)
 Introduction
 Features of RDD
 Ways to Create RDD
 Working of RDD
 RDD Persistence and Caching
 RDD Operations
34
Spark RDD
 RDD stands for Resilient Distributed Dataset.
 It is the fundamental data structure of Apache Spark.
 RDDs are an immutable collection of objects that is computed on different nodes of the cluster.
Resilient
 i.e. fault-tolerant with the help of the RDD lineage graph (DAG), and so able to recompute missing or damaged partitions after node failures.
Distributed
 since the data resides on multiple nodes.
Dataset
 represents the records of the data you work with. The user can load the data set externally, from a JSON file, CSV file, text file or a database via JDBC, with no specific data structure.
RDD Properties
35
RDD Properties
[Diagram: an input dataset stored as HDFS blocks (Block 1–5 on nodes N1–N5) is read with textFile() into an HDFS RDD, filtered twice (e.g. a log-file RDD filtered down to an error RDD), and finally counted with count(). The figure highlights three RDD properties:]
Property #1: a list of partitions
Property #2: a compute function for each partition
Property #3: a list of dependencies on parent RDDs
36
Ways to Create RDD
Ways to create an RDD in Spark:
 From a parallelized collection
 From external data
 From an existing RDD
37
Ways to Create RDD
Parallelized collection (parallelizing)
// Create a pair RDD from an in-memory collection and sort it by key
val data = spark.sparkContext.parallelize(Seq(("maths", 52), ("english", 75), ("science", 82), ("computer", 65), ("maths", 85)))
val sorted = data.sortByKey()
sorted.foreach(println)
External datasets
// Each reader returns a DataFrame/Dataset; .rdd converts it to an RDD
val csvRDD = spark.read.csv("path/of/csv/file").rdd
val jsonRDD = spark.read.json("path/of/json/file").rdd
val textRDD = spark.read.textFile("path/of/text/file").rdd
Creating an RDD from an existing RDD
// map() derives a new RDD of (first letter, word) pairs from the words RDD
val words = spark.sparkContext.parallelize(Seq("the", "quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog"))
val wordPair = words.map(w => (w.charAt(0), w))
wordPair.foreach(println)
38
Working of RDD
39
MEMORY_ONLY
 The RDD is stored as deserialized Java objects in the JVM.
 If the RDD does not fit in memory, some partitions are not cached and are recomputed each time they are needed.
 The space used for storage is very high; the CPU computation time is low.
 Data is stored in memory only; the disk is not used.
MEMORY_AND_DISK
 The RDD is stored as deserialized Java objects in the JVM.
 When the RDD does not fit in memory, the excess partitions are stored on disk and read back from disk whenever required.
 The space used for storage is high; the CPU computation time is medium.
 It makes use of both in-memory and on-disk storage.
MEMORY_ONLY_SER
 The RDD is stored as serialized Java objects.
 It is more space-efficient than storing deserialized objects.
 The storage space is low; the CPU computation time is high.
 Data is stored in memory only; the disk is not used.
MEMORY_AND_DISK_SER
 Partitions are stored as serialized Java objects; partitions that do not fit in memory are spilled to disk rather than recomputed each time they are needed.
 The space used for storage is low; the CPU computation time is high.
 It makes use of both in-memory and on-disk storage.
DISK_ONLY
 The RDD is stored only on disk.
 The space used for storage is low; the CPU computation time is high.
 It makes use of on-disk storage only.
Advantages of In-memory Processing
 It is good for real-time risk management and fraud detection.
 The data becomes highly accessible.
 The computation speed of the system increases.
 It improves complex event processing.
 A large amount of data can be cached.
 It is economical, as the cost of RAM has fallen over time.
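To make these storage levels concrete, a minimal sketch (assuming a SparkSession named spark and a placeholder log path) of persisting an RDD with an explicit storage level:

import org.apache.spark.storage.StorageLevel

// Assumes an existing SparkSession named spark; the path is a placeholder
val lines  = spark.sparkContext.textFile("path/of/log/file")
val errors = lines.filter(_.contains("ERROR"))

// Keep deserialized objects in memory, spilling to disk when memory is short;
// cache() would be shorthand for persist(StorageLevel.MEMORY_ONLY)
errors.persist(StorageLevel.MEMORY_AND_DISK)

errors.count()      // the first action materializes the RDD and stores its partitions
errors.count()      // subsequent actions reuse the persisted partitions

errors.unpersist()  // release the storage when it is no longer needed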
42
Spark RDD Operations
 Apache Spark RDD operations are of two kinds: transformations and actions.
 Transformations: operations that are applied on an RDD to create a new RDD.
 Actions: operations that are applied on an RDD to instruct Apache Spark to perform the computation and send the result back to the driver.
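A minimal sketch (assuming a SparkSession named spark) showing that transformations are lazy and only an action triggers the computation:

// Assumes an existing SparkSession named spark
val numbers = spark.sparkContext.parallelize(1 to 10)

// Transformations: nothing is executed yet, Spark only records the lineage
val doubled = numbers.map(_ * 2)
val large   = doubled.filter(_ > 10)

// Action: triggers the actual computation and returns the result to the driver
val result = large.collect()  // Array(12, 14, 16, 18, 20)
println(result.mkString(", "))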
43
RDD Transformation
 A Spark transformation is a function that produces a new RDD from existing RDDs.
 Applying transformations builds an RDD lineage, containing all the parent RDDs of the final RDD(s).
 The RDD lineage is also known as the RDD operator graph or RDD dependency graph.
 It is a logical execution plan, i.e., a Directed Acyclic Graph (DAG) of all the parent RDDs of an RDD.
 Transformations are lazy in nature, i.e., they are executed only when an action is called.
 There are 2 types:
 Narrow transformations
 Wide transformations
44
Narrow transformation
 All the elements required to compute the records in a single partition live in a single partition of the parent RDD.
 Examples: map, flatMap, filter, union, sample, mapPartitions
Wide transformation
 The elements required to compute the records in a single partition may live in many partitions of the parent RDD.
 Examples: intersection, distinct, reduceByKey, groupByKey, join, cartesian, repartition
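A minimal sketch (assuming a SparkSession named spark) contrasting a narrow and a wide transformation on the same pair RDD:

// Assumes an existing SparkSession named spark
val pairs = spark.sparkContext.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))

// Narrow: each output partition depends on exactly one parent partition
val upper = pairs.map { case (k, v) => (k.toUpperCase, v) }

// Wide: values for the same key may live in many parent partitions,
// so Spark shuffles data across the cluster
val sums = upper.reduceByKey(_ + _)

sums.collect().foreach(println)  // e.g. (A,4), (B,2)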
45
Spark DataFrame and Dataset
 Introduction to DataFrame
 DataFrame from various sources
 Dataset in Spark
 Features of Dataset
46
DATAFRAME
 Data organized as a distributed collection into named columns is what we call a Spark DataFrame.
 It is similar to a table in a relational database.
 We can construct a Spark DataFrame from a wide array of sources, for example structured data files, tables in Hive, and external databases.
 It can also handle petabytes of data.
Ways of creating a Spark DataFrame: from local data frames, from Hive tables, and from data sources.
47
DataFrame from various Data Sources
Ways to create a DataFrame in Spark:
 Hive data
 CSV data
 JSON data
 RDBMS data
 XML data
 Parquet data
 Cassandra data
 RDDs
[Diagram: a DataFrame laid out as rows and named columns.]
48
a. From local data frames
 The simplest way to create a Spark DataFrame is to convert a local R data frame into a Spark DataFrame.
df <- as.DataFrame(faithful)
# Displays the first part of the Spark DataFrame
head(df)
## eruptions waiting
##1 3.600 79
##2 1.800 54
##3 3.333 74
49
b. From Data Sources
 The general method for creating DataFrames from data sources is read.df in SparkR (spark.read in Python and Scala).
 This method takes the path of the file to load.
 Spark supports reading JSON, CSV and Parquet files natively.
# in Python
df = spark.read.format("json").load("/data/flight-data/json/2015-summary.json")
df.printSchema()
# root
# |-- ... (column names and types inferred from the JSON file)
50
c. From Hive tables
 We can also use Hive tables to create SparkDataFrames. For this, we need to create a SparkSession with Hive support, which can access tables in the Hive MetaStore.
 SparkR attempts to create a SparkSession with Hive support enabled by default (enableHiveSupport = TRUE).
sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
sql("LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src")
# Queries can be expressed in HiveQL.
results <- sql("FROM src SELECT key, value")
# results is now a SparkDataFrame
head(results)
## key value
## 1 238 val_238
## 2 86 val_86
51
Dataset in Spark
 The Dataset emerged to overcome the limitations of RDDs and DataFrames.
 In a DataFrame there is no provision for compile-time type safety, so data cannot be manipulated safely without knowing its structure.
 In an RDD there is no automatic optimization, so we optimize manually when needed.
 A Dataset is an interface that provides the benefits of RDDs (strong typing) together with Spark SQL's optimization.
 Datasets are strictly a Java Virtual Machine (JVM) language feature and work only with Scala and Java.
 A Spark Dataset provides both type safety and an object-oriented programming interface.
 It represents structured queries with encoders and is an extension of the DataFrame API.
 Conceptually, Datasets are equivalent to a table in a relational database or a DataFrame in R or Python.
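A minimal sketch (assuming a SparkSession named spark with its implicits imported; the Person record type is illustrative) of creating a typed Dataset from a case class, which is where the compile-time safety mentioned above shows up:

// Assumes an existing SparkSession named spark
import spark.implicits._

case class Person(name: String, age: Long)  // illustrative record type

// The encoder derived for Person stores the rows in Spark's optimized binary format
val people = Seq(Person("Michael", 29), Person("Andy", 30)).toDS()

// Typed operations: these compile only if the fields exist with the right types
val adultNames = people.filter(p => p.age >= 18).map(p => p.name)

adultNames.show()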
52
Features of Dataset in Spark
Features of a Spark Dataset:
 Optimized queries
 Analysis at compile time
 Persistent storage
 Inter-convertible (with RDDs and DataFrames)
 Faster computation
 Less memory consumption
 Single API for Java and Scala
 Strongly typed
53
Best Books for Spark and Scala
54
Questions & Answers
© 2018 YASH Technologies | www.yash.com | Confidential
Feel free to write to me at:
mahesh.pandit@yash.com
in case of any queries / clarifications.