2. What is Spark?
• A fast and general engine for large-scale data processing
• An open source implementation of Resilient Distributed Datasets (RDD)
• Has an advanced DAG execution engine that supports cyclic data flow and in-memory computing
3. Why Spark?
• Fast
– Runs iterative programs, such as machine learning, up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk
– Runs HiveQL-compatible queries up to 100x faster than Hive (with Shark/Spark SQL)
4. Why Spark?
• Easy to use
– Fluent Scala/Java/Python API
– Interactive shell
– 2-5x less code (than Hadoop MapReduce)
5. Why Spark?
• Easy to use
– Fluent Scala/Java/Python API
– Interactive shell
– 2-5x less code (than Hadoop MapReduce)
sc.textFile("hdfs://...")    // load text lines from HDFS
  .flatMap(_.split(" "))     // split lines into words
  .map(_ -> 1)               // pair each word with a count of 1
  .reduceByKey(_ + _)        // sum the counts per word
  .collectAsMap()            // action: fetch the result to the driver
6. Why Spark?
• Easy to use
– Fluent Scala/Java/Python API
– Interactive shell
– 2-5x less code (than Hadoop MapReduce)
sc.textFile("hdfs://...")
  .flatMap(_.split(" "))
  .map(_ -> 1)
  .reduceByKey(_ + _)
  .collectAsMap()
Can you write down WordCount in 30 seconds with Hadoop MapReduce?
7. Why Spark?
• Unified big data pipeline for:
– Batch/Interactive (Spark Core vs MR/Tez)
– SQL (Shark/Spark SQL vs Hive)
– Streaming (Spark Streaming vs Storm)
– Machine learning (MLlib vs Mahout)
– Graph (GraphX vs Giraph)
9. An Even Bigger Picture...
• Components of the Spark stack focus on big data analysis and are compatible with existing Hadoop storage systems
• Users don’t need to pay an expensive ETL cost to use the Spark stack
10. “One Stack To Rule Them All”
• Well, mostly :-)
• And don't forget Shark/Spark SQL vs Hive
12. Resilient Distributed Datasets
• Conceptually, RDDs can be roughly viewed as partitioned, locality-aware distributed vectors
• An RDD…
– either points to a direct data source
– or applies some transformation to its parent RDD(s) to generate new data elements
• Computation can be represented by lazily evaluated lineage DAGs composed of connected RDDs
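A minimal sketch of this laziness, assuming a live SparkContext named sc (as in the later slides):
// Each transformation only creates a new RDD node recording its
// parent RDD and the function to apply; nothing is computed yet.
val nums = sc.parallelize(1 to 1000000)  // data source RDD
val evens = nums.filter(_ % 2 == 0)      // lineage: nums -> evens
val scaled = evens.map(_ * 10)           // lineage: evens -> scaled
// Only the action below walks the lineage DAG and triggers a job.
val n = scaled.count()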
13. Resilient Distributed Datasets
• Frequently used RDDs can be materialized and cached in-memory to accelerate computation
• Spark scheduler takes data locality into account
14. The In-Memory Magic
• “In fact, one study [1] analyzed the access patterns in the Hive warehouses at Facebook and discovered that for the vast majority (96%) of jobs, the entire inputs could fit into a fraction of the cluster’s total memory.”
[1] G. Ananthanarayanan, A. Ghodsi, S. Shenker, and I. Stoica. Disk-locality in datacenter computing considered irrelevant. In HotOS ’11, 2011.
15. The In-Memory Magic
• Without cache
– Elements are accessed in an iterator-based streaming style
– One element at a time, no bulk copy
– Space complexity is almost O(1) when there are only narrow dependencies
16. The In-Memory Magic
• With cache
– One block per RDD partition
– LRU cache eviction
– Locality aware
– Evicted blocks can be recomputed in parallel with the help of the RDD lineage DAG
17. Play with Error Logs
sc.textFile("hdfs://<input>")          // load log lines from HDFS
  .filter(_.startsWith("ERROR"))       // keep only error lines
  .map(_.split(" ")(1))                // extract the second field
  .saveAsTextFile("hdfs://<output>")   // action: triggers the job
18. Step by Step
The RDD.saveAsTextFile() action triggers a job. Tasks are started on scheduled executors.
19. Step by Step
The task pulls data lazily from the final RDD (the MappedRDD here).
20. Step by Step
A record (text line) is pulled out from HDFS...
29. A Synthesized Iterator
new Iterator[String] {
  private var head: String = _
  private var headDefined: Boolean = false

  def hasNext: Boolean = headDefined || {
    do {
      try head = readOneLineFromHDFS(...) // (1) read from HDFS
      catch {
        case _: EOFException => return false
      }
    } while (!head.startsWith("ERROR"))   // (2) filter closure
    headDefined = true                    // mark the buffered element as available
    true
  }

  def next: String = if (hasNext) {
    headDefined = false                   // consume the buffered element
    head.split(" ")(1)                    // (3) map closure
  } else {
    throw new NoSuchElementException("...")
  }
}
Constant space complexity!
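For intuition, plain Scala iterators exhibit the same one-element-at-a-time behavior (a sketch; the file name is illustrative):
import scala.io.Source
// Nothing is read from disk until `fields` is consumed; each line flows
// through filter and map one element at a time, in constant space.
val lines = Source.fromFile("part-00000").getLines()
val fields = lines.filter(_.startsWith("ERROR")).map(_.split(" ")(1))
fields.foreach(println)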
30. If Cached...
val cached = sc
  .textFile("hdfs://<input>")
  .filter(_.startsWith("ERROR"))
  .cache()

cached
  .map(_.split(" ")(1))
  .saveAsTextFile("hdfs://<output>")
31. If Cached...
All filtered error messages are cached in memory before being passed to the next RDD. LRU cache eviction is applied when memory is insufficient.
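When the cached blocks are no longer needed, they can also be dropped explicitly rather than waiting for LRU eviction (a sketch; unpersist is the standard RDD API, and cached is the RDD from the previous slide):
// Free the cached blocks once downstream jobs no longer need them.
cached.unpersist()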
32. But… I Don’t Have Enough Memory to Cache All Data
33. Don’t Worry
• In most cases, you only need to cache a fraction of hot data extracted/transformed from the whole input dataset
• Spark performance degrades linearly & gracefully as memory decreases
38. Driver and SparkContext
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._

val sc = new SparkContext(...)

• A SparkContext initializes the application driver; the driver then registers the application with the cluster manager and gets a list of executors
• From then on, the driver takes full responsibility
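For reference, a minimal concrete version of the constructor call elided above (the app name and master URL are illustrative placeholders, not from the original slides):
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("spark-internals-demo") // hypothetical application name
  .setMaster("local[4]")              // e.g. run locally with 4 worker threads
val sc = new SparkContext(conf)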
39. WordCount Revisited
val lines = sc.textFile("input")
val words = lines.flatMap(_.split(" "))
val ones = words.map(_ -> 1)
val counts = ones.reduceByKey(_ + _)
val result = counts.collectAsMap()
• The RDD lineage DAG is built on the driver side with:
– Data source RDD(s)
– Transformation RDD(s), which are created by transformations
40. WordCount Revisited
val lines = sc.textFile("input")
val words = lines.flatMap(_.split(" "))
val ones = words.map(_ -> 1)
val counts = ones.reduceByKey(_ + _)
val result = counts.collectAsMap()
• Once an action is triggered on the driver side, a job is submitted to the DAG scheduler of the driver
41. WordCount Revisited
• The DAG scheduler cuts the DAG into stages (at shuffle boundaries) and turns each partition of a stage into a single task
• The DAG scheduler decides what to run
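One way to inspect the stage cut from the driver is RDD.toDebugString, which prints the lineage (a sketch; the exact output format varies across Spark versions):
val counts = sc.textFile("input")
  .flatMap(_.split(" "))
  .map(_ -> 1)
  .reduceByKey(_ + _)

// The printed lineage is indented at the shuffle boundary: the
// ShuffledRDD forms one stage, the textFile/flatMap/map chain another.
println(counts.toDebugString)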
42. WordCount Revisited
• Tasks are then scheduled to executors by the driver-side task scheduler, according to resource and locality constraints
• The task scheduler decides where to run
43. WordCount Revisited
val lines = sc.textFile("input")
val words = lines.flatMap(_.split(" "))
val ones = words.map(_ -> 1)
val counts = ones.reduceByKey(_ + _)
val result = counts.collectAsMap()
• Within a task, the lineage DAG of the corresponding stage is serialized, together with the closures of its transformations, then sent to and executed on scheduled executors
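Because those closures are shipped to executors, anything they capture must be serializable. A sketch of the classic pitfall (Helper is a hypothetical class, not from the slides):
// Must extend Serializable, or task serialization fails on the driver.
class Helper extends Serializable {
  def extract(line: String): String = line.split(" ")(1)
}

val helper = new Helper
// `helper` is captured by the closure and serialized into each task.
val fields = sc.textFile("input").map(helper.extract)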
44. WordCount Revisited
val lines = sc.textFile("input")
val words = lines.flatMap(_.split(" "))
val ones = words.map(_ -> 1)
val counts = ones.reduceByKey(_ + _)
val result = counts.collectAsMap()
• The reduceByKey transformation introduces a shuffle
• Shuffle outputs are written to the local FS on the mapper side, then downloaded by reducers
45. WordCount Revisited
val lines = sc.textFile("input")
val words = lines.flatMap(_.split(" "))
val ones = words.map(_ -> 1)
val counts = ones.reduceByKey(_ + _)
val result = counts.collectAsMap()
• reduceByKey automatically combines values within a single partition locally on the mapper side, and then reduces them globally on the reducer side
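To make the map-side combine concrete, a sketch contrasting reduceByKey with groupByKey, which performs no combine and shuffles every record (both are standard pair-RDD methods; ones is the RDD from the code above):
// reduceByKey pre-aggregates within each partition before the shuffle,
// so at most one record per key per partition crosses the network.
val combined = ones.reduceByKey(_ + _)

// groupByKey ships every (word, 1) pair and aggregates afterwards;
// same result, far more shuffle traffic.
val grouped = ones.groupByKey().mapValues(_.sum)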
46. WordCount Revisited
val lines = sc.textFile("input")
val words = lines.flatMap(_.split(" "))
val ones = words.map(_ -> 1)
val counts = ones.reduceByKey(_ + _)
val result = counts.collectAsMap()
• At last, the results of the action are sent back to the driver, and the job finishes
48. How To Control Parallelism?
• Can be specified in a number of ways
– RDD partition number
• sc.textFile("input", minSplits = 10)
• sc.parallelize(1 to 10000, numSlices = 10)
– Mapper side parallelism
• Usually inherited from parent RDD(s)
– Reducer side parallelism
• rdd.reduceByKey(_ + _, numPartitions = 10)
• rdd.reduceByKey(partitioner = p, _ + _)
• …
49. How To Control Parallelism?
• “Zoom in/out”
– RDD.repartition(numPartitions: Int)
– RDD.coalesce(numPartitions: Int, shuffle: Boolean)
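A small sketch of both calls, assuming an existing RDD named rdd:
// Increase the partition count; repartition always shuffles.
val wider = rdd.repartition(200)

// Shrink the partition count without a shuffle (narrow dependency);
// pass shuffle = true to redistribute data while changing the count.
val narrower = rdd.coalesce(10, shuffle = false)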
50. A Trap of Partitions
sc.textFile("input", minSplits = 2)
  .map { line =>
    val Array(key, value) = line.split(",")
    key.toInt -> value.toInt
  }
  .reduceByKey(_ + _)
  .saveAsTextFile("output")
• Run in local mode
• 60+GB input file
51. A Trap of Partitions
sc.textFile("input", minSplits = 2)
  .map { line =>
    val Array(key, value) = line.split(",")
    key.toInt -> value.toInt
  }
  .reduceByKey(_ + _)
  .saveAsTextFile("output")
Hmm... Pretty cute?
52. A Trap of Partitions
sc.textFile("input", minSplits = 2)
  .map { line =>
    val Array(key, value) = line.split(",")
    key.toInt -> value.toInt
  }
  .reduceByKey(_ + _)
  .saveAsTextFile("output")
• Actually, this may generate 4,000,000 temporary files!
• Why 4M? (Hint: 2K × 2K)
53. A Trap of Partitions
sc.textFile("input", minSplits = 2)
  .map { line =>
    val Array(key, value) = line.split(",")
    key.toInt -> value.toInt
  }
  .reduceByKey(_ + _)
  .saveAsTextFile("output")
• 1 partition per HDFS split
• As with the mapper task number in Hadoop MapReduce, the minSplits parameter is only a hint at the split number
54. A Trap of Partitions
sc.textFile("input", minSplits = 2)
  .map { line =>
    val Array(key, value) = line.split(",")
    key.toInt -> value.toInt
  }
  .reduceByKey(_ + _)
  .saveAsTextFile("output")
• The actual split number is also controlled by other properties, such as minSplitSize and the default FS block size
55. A Trap of Partitions
sc.textFile("input", minSplits = 2)
  .map { line =>
    val Array(key, value) = line.split(",")
    key.toInt -> value.toInt
  }
  .reduceByKey(_ + _)
  .saveAsTextFile("output")
• In this case, the final split size equals the local FS block size, which is 32MB by default, and 60GB / 32MB ≈ 2K
• reduceByKey therefore generates 2K × 2K ≈ 4M shuffle outputs
56. A Trap of Partitions
sc.textFile("input", minSplits = 2).coalesce(2)
  .map { line =>
    val Array(key, value) = line.split(",")
    key.toInt -> value.toInt
  }
  .reduceByKey(_ + _)
  .saveAsTextFile("output")
• Use RDD.coalesce() to control the partition number precisely
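A quick sanity check of the map-side partition count before the shuffle (a sketch; partitions.size is the standard way to read the count):
val parsed = sc.textFile("input", minSplits = 2).coalesce(2)
// With M map partitions and R reduce partitions, the shuffle writes
// roughly M × R files; coalesce(2) forces M down to 2 here.
println(parsed.partitions.size) // 2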
57. Summary
• In-memory magic
– Iterator-based, streaming-style data access
– Usually you don’t need to cache the whole dataset
– When memory is not enough, Spark performance degrades linearly & gracefully
59. References
• Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing
• An Architecture for Fast and General Data Processing on Large Clusters
• Spark Internals (video, slides)
• Shark: SQL and Rich Analytics at Scale
• Spark Summit 2013