This document summarizes Spark as a service on YARN clusters and discusses key features:
- Spark on YARN allows running multiple workflows like Spark and Hadoop on the same cluster and improves resource utilization. The application master can dynamically request more containers as needed.
- Qubole YARN clusters support autoscaling to upscale and downscale based on load and use spot instances for cost savings.
- Spark applications were limited by initial resource allocation. Dynamic provisioning allows applications to request more executors or release unused executors to improve performance and cluster utilization.
5. Disadvantages of Hadoop 1
• Limited to MapReduce only
• Separate Map and Reduce slots => underutilization
• The JobTracker is heavily loaded with job scheduling, monitoring, and resource allocation
7. Advantages of Spark on YARN
• General-purpose cluster for running multiple workflows; the AM can have custom logic for scheduling
• The AM can ask for more containers when required and give up containers when they are free
• This becomes even better when YARN clusters can autoscale
• Gets features like spot nodes, which bring additional challenges
8. Advantages of Spark on YARN
• Qubole YARN clusters can upscale and downscale based on load, and support spot instances
10. Spark Provisioning: Problems
• A Spark application starts with a fixed number of resources and holds on to them for as long as it is alive
• It is sometimes difficult to estimate the resources required by a job, since the AM is long running
• This becomes especially limiting when YARN clusters can autoscale
11. Dynamic Provisioning
• Speed up Spark commands by using free resources in the YARN cluster, and release resources back to the RM when they are free
12. Spark on YARN basics
[Diagram: Driver, AM, Executor-1 … Executor-n]
• Cluster Mode: Driver and AM run in the same JVM, inside a YARN container
• Client Mode: Driver and AM run in separate JVMs (a client-mode sketch follows below)
• Driver and AM talk using Actors to handle both cases
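A minimal client-mode sketch for the Spark 1.x era (app name is illustrative; cluster mode is normally launched through spark-submit, since the driver then runs inside the AM container on the cluster):

import org.apache.spark.{SparkConf, SparkContext}

// Client mode: this JVM is the driver; the AM is launched separately on YARN.
// Assumes HADOOP_CONF_DIR points at the cluster's configuration.
val conf = new SparkConf()
  .setMaster("yarn-client")          // Spark 1.x style master; cluster mode would go through spark-submit
  .setAppName("client-mode-example") // illustrative app name
val sc = new SparkContext(conf)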
13. Dynamic Provisioning: Problem Statement
• Two parts:
– The Spark AM has no way to ask for additional containers or to give up free containers
– Automating the process of requesting and releasing containers; cached data in containers makes this difficult
15. Dynamic Provisioning: Part 1
• Implementation of 2 new APIs (a usage sketch follows the calls below):
// Request 5 extra executors
sc.requestExecutors(5)
// Kill executors with IDs 1, 15, and 16
sc.killExecutors(Seq("1", "15", "16"))
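A minimal usage sketch of the two calls above; the input path and executor IDs are illustrative, and the IDs would be looked up as shown on the next slide:

// Grow the application before a heavy job, then shrink it afterwards.
sc.requestExecutors(5)

val wordCounts = sc.textFile("hdfs:///data/logs")   // illustrative input path
  .flatMap(_.split("\\s+"))
  .map((_, 1))
  .reduceByKey(_ + _)
println(wordCounts.count())

// Release executors that are now idle (IDs are illustrative).
sc.killExecutors(Seq("4", "5"))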
16. requestExecutors
[Diagram: Driver ↔ AM reporter thread managing executors E1 … En]
• The AM has a reporter thread that keeps a count of the number of executors (conceptual sketch below)
• The reporter thread was already used to restart executors that had died
• The driver increments the executor count when sc.requestExecutors is called
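A conceptual sketch of that interaction, not the actual Spark/Qubole code: the reporter thread tracks a target executor count, the driver raises it via sc.requestExecutors, and each heartbeat asks YARN for the containers still missing.

// Conceptual only: class and method names are illustrative.
class ReporterState(initialExecutors: Int) {
  private var target = initialExecutors

  // Invoked (indirectly) when the driver calls sc.requestExecutors(n).
  def addExecutors(n: Int): Unit = synchronized { target += n }

  // Invoked on each AM heartbeat: how many new containers to request from the
  // ResourceManager, covering both dead executors and newly requested ones.
  def containersToRequest(running: Int): Int = synchronized {
    math.max(target - running, 0)
  }
}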
17. removeExecutors
• To kill executors, one must say precisely which executors need to be killed (a combined sketch follows at the end of this slide)
• The driver maintains a list of all executors, which can be obtained by:
sc.executorStorageStatuses.foreach(x => println(x.blockManagerId.executorId))
• What is cached in each executor is also available using:
sc.executorStorageStatuses.foreach(x => println(s"memUsed = ${x.memUsed} diskUsed = ${x.diskUsed}"))
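Putting the two calls together, an illustrative sketch that releases executors holding no cached or spilled data (it uses the accessor shown above; in stock Spark 1.x the equivalent call is sc.getExecutorStorageStatus):

// Find executors with nothing cached or spilled and release them.
val idleIds = sc.executorStorageStatuses
  .filter(s => s.memUsed == 0 && s.diskUsed == 0)
  .map(_.blockManagerId.executorId)
  .filter(_ != "<driver>")           // never target the driver's block manager

if (idleIds.nonEmpty) sc.killExecutors(idleIds.toSeq)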
18. Removing Executors: Tradeoffs
• The BlockManager in each executor can hold cached RDDs, shuffle data and broadcast data
• Killing an executor with shuffle data requires the stage to be rerun
• To avoid this, use the external shuffle service introduced in Spark 1.2 (configuration sketch below)
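A minimal sketch of turning the external shuffle service on from the application side; it assumes the YARN NodeManagers are already running Spark's shuffle aux-service:

import org.apache.spark.{SparkConf, SparkContext}

// With the external shuffle service, shuffle files outlive the executor that
// wrote them, so executors can be removed without rerunning stages.
val conf = new SparkConf()
  .setAppName("shuffle-service-example")   // illustrative app name
  .set("spark.shuffle.service.enabled", "true")
val sc = new SparkContext(conf)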
20. Upscaling Heuristics
• Request as many executors as there are pending tasks
• Request executors in rounds while there are pending tasks, doubling the number of executors added in each round, bounded by some upper limit (sketched below)
• Request executors by estimating the workload
• Introduced --max-executors as an extra parameter
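An illustrative sketch of the round-based doubling heuristic, not the exact implementation; the function and parameter names are made up:

// How many executors to ask for in this round, doubling each round while
// tasks remain pending and respecting the --max-executors style upper bound.
def executorsToAdd(pendingTasks: Int, addedLastRound: Int,
                   currentExecutors: Int, maxExecutors: Int): Int = {
  if (pendingTasks <= 0) 0
  else {
    val wanted = math.max(addedLastRound * 2, 1)
    math.max(math.min(wanted, maxExecutors - currentExecutors), 0)
  }
}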
21. Downscaling Heuristics
• Remove executors when they are idle
• Remove executors if they have been idle for X seconds
• Can't downscale executors holding shuffle data or broadcast data
• --num-executors acts as the minimum number of executors (see the configuration sketch below)
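These heuristics roughly correspond to upstream Spark's dynamic allocation settings (available since Spark 1.2); a sketch with illustrative values:

import org.apache.spark.{SparkConf, SparkContext}

// Dynamic allocation settings; the values below are illustrative.
val conf = new SparkConf()
  .setAppName("dynamic-allocation-example")
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.shuffle.service.enabled", "true")              // keep shuffle data when executors go away
  .set("spark.dynamicAllocation.minExecutors", "2")          // --num-executors style lower bound
  .set("spark.dynamicAllocation.maxExecutors", "50")         // --max-executors style upper bound
  .set("spark.dynamicAllocation.executorIdleTimeout", "60")  // remove executors idle for 60 seconds
val sc = new SparkContext(conf)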
22. Scope
• Kill executors on spot nodes first
• Flag for not killing executors if they have shuffle data
23. Where is the code?
• https://github.com/apache/spark/pull/2840
• https://github.com/apache/spark/pull/2746
25. What is involved?
• Spark programs should be able to access the Hive metastore
• Other Qubole services can be producers or consumers of data and metadata (Hive, Presto, Pig, etc.)
34. Basic cluster organization
• DB instance in the Qubole account
• SSH tunnel from the master to the metastore DB
• Metastore server running on the master on port 10000
• On master and slave nodes, hive-site.xml (a HiveContext sketch follows below):
hive.metastore.uris=thrift://master_ip:10000
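With hive-site.xml on the classpath pointing at the metastore server above, a Spark 1.x style program can read the same tables; the table name below is illustrative:

import org.apache.spark.sql.hive.HiveContext

// HiveContext picks up hive-site.xml and talks to the metastore at
// thrift://master_ip:10000 configured above.
val hiveContext = new HiveContext(sc)
hiveContext.sql("SHOW TABLES").collect().foreach(println)
hiveContext.sql("SELECT COUNT(*) FROM sample_table").collect().foreach(println)  // illustrative table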
36. Problems
• YARN overhead should be 20% (TPC-H)
• Parquet needs a higher PermGen (workaround sketch below)
• Cached tables use the actual table
• alter table recover partitions is not supported
• VPC clusters have slow access to the metastore
• SchemaRDD is gone, so old jars don't run
• Hive jars are needed on the system classpath
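For the PermGen point above, a common workaround on pre-Java-8 JVMs is to raise MaxPermSize for the executors; a sketch with an illustrative size (the driver's equivalent option has to be passed at launch time, e.g. via spark-defaults.conf or spark-submit, since the driver JVM is already running when SparkConf is built):

import org.apache.spark.{SparkConf, SparkContext}

// Illustrative workaround for Parquet's PermGen pressure on pre-Java-8 JVMs.
val conf = new SparkConf()
  .setAppName("permgen-example")
  .set("spark.executor.extraJavaOptions", "-XX:MaxPermSize=256m")  // size is illustrative
val sc = new SparkContext(conf)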