This document discusses contributing to Apache Spark. It provides an overview of finding issues to work on, the different components of Spark one could contribute to, and the process for contributing code changes through pull requests and code reviews. Key steps include searching Spark's JIRA issue tracker for starter issues, choosing a component to work in, making code and test changes, submitting a pull request for review, addressing review feedback, and getting the change merged once approved.
Big data with Python on kubernetes (pyspark on k8s) - Big Data Spain 2018Holden Karau
Big Data applications are increasingly being run on Kubernetes. Data scientists commonly use python-based workflows, with tools like PySpark and Jupyter for wrangling large amounts of data. The Kubernetes community over the past year has been actively investing in tools and support for frameworks such as Apache Spark, Jupyter and Apache Airflow. Attendees will learn how these tools can be used together to build a scalable self-service platform for data science on Kubernetes as well as the benefits that Kubernetes can provide over traditional options.
Validating Big Data Pipelines - Big Data Spain 2018Holden Karau
As big data jobs move from the proof-of-concept phase into powering real production services, we have to start considering what will happen when everything eventually goes wrong (such as recommending inappropriate products or other decisions taken on bad data). This talk will attempt to convince you that we will all eventually get aboard the failboat (especially with ~40% of respondents automatically deploying their Spark jobs results to production), and it’s important to automatically recognize when things have gone wrong so we can stop deployment before we have to update our resumes.
The magic of (data parallel) distributed systems and where it all breaks - Re...Holden Karau
Distributed systems can seem magical, and sometimes all of the magic works and our job succeeds. However, if you've worked with them for long enough, you've found a few places where the magic starts to break down, and discovered that it's actually a collection of several hundred garden gnomes* rather than a single large garden gnome.
This talk will use Apache Spark, Beam, Flink, Kafka, and Map Reduce to explore the world of data parallel distributed systems. We'll start with some happy pieces of magic, like how we can combine different transformations into a single pass over the data, working between different languages, data partitioning, and lambda serialization. After each new piece of magic is introduced we'll look at how it breaks in one (or two) of the systems.
Come to be told it's not your fault everything is broken, or, if your distributed software still works, to get an exciting preview of everything that's going to go wrong. Don't work with distributed systems? Come to be reassured you've made good life choices.
Intro - End to end ML with Kubeflow @ SignalConf 2018Holden Karau
There are many great tools for training machine learning models, ranging from scikit-learn to Apache Spark and TensorFlow. However, many of these systems largely leave open the question of how to use our models outside of the batch world (like in a reactive application). Different options exist for persisting the results and using them for live training, and we will explore the trade-offs of the different formats and their corresponding serving/prediction layers.
Validating big data jobs - Spark AI Summit EUHolden Karau
As big data jobs move from the proof-of-concept phase into powering real production services, we have to start considering what will happen when everything eventually goes wrong (such as recommending inappropriate products or other decisions taken on bad data). This talk will attempt to convince you that we will all eventually get aboard the failboat (especially with ~40% of respondents automatically deploying their Spark jobs' results to production), and it's important to automatically recognize when things have gone wrong so we can stop deployment before we have to update our resumes.
Figuring out when things have gone terribly wrong is trickier than it first appears, since we want to catch the errors before our users notice them (or failing that before CNN notices them). We will explore general techniques for validation, look at responses from people validating big data jobs in production environments, and libraries that can assist us in writing relative validation rules based on historical data.
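As a taste of what such a relative validation rule can look like, here is a minimal sketch in PySpark. It assumes you have already collected per-run record counts from previous jobs; the function and threshold are illustrative, not from any particular validation library.

```python
# A minimal sketch of a relative validation rule based on historical data.
# `historical_counts` and the tolerance are illustrative assumptions.
from pyspark.sql import DataFrame

def validate_count(df: DataFrame, historical_counts: list, tolerance: float = 0.5) -> int:
    """Fail loudly if this run's record count strays too far from history."""
    current = df.count()
    mean = sum(historical_counts) / len(historical_counts)
    if abs(current - mean) > tolerance * mean:
        raise ValueError(
            "Record count %d is outside +/-%d%% of the historical mean %.0f"
            % (current, tolerance * 100, mean))
    return current
```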
For folks working in streaming, we will talk about the unique challenges of attempting to validate in a real-time system, and what we can do besides keeping an up-to-date resume on file for when things go wrong. To keep the talk interesting, real-world examples (with company names removed) will be presented, as well as several Creative Commons licensed cat pictures and an adorable panda GIF.
If you've seen Holden's previous testing Spark talks, this can be viewed as a deep dive on the second half, focused on what else we need to do besides good testing practices to create production-quality pipelines. If you haven't seen the testing talks, watch those on YouTube after you come see this one.
Building Recoverable (and optionally async) Pipelines with Apache Spark (+ s...Holden Karau
Have you ever had a Spark job fail in its second-to-last stage after a "trivial" update, or been part of the way through debugging a pipeline and wished you could look at its data, or had an "exploratory" notebook turn into something less exploratory? Come join me for a surprisingly simple adventure into how to build recoverable, more debuggable pipelines. Then join me on the adventure wherein we find out our "simple" solution has a bunch of hidden flaws, how to work around them, and end on the reminder of how important it is to test your code.
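To make the idea concrete, here is a minimal sketch of the naive recoverable-stage pattern in PySpark: persist each stage's output and reuse it on reruns. The paths and names are illustrative, and, as the talk warns, this simple version has hidden flaws (for example, a partially written directory from a crashed run will be treated as a finished stage).

```python
# A minimal sketch of a "recoverable" pipeline stage; illustrative, not the
# talk's exact code. Beware partially written output from crashed runs.
from pyspark.sql.utils import AnalysisException

def recoverable_stage(spark, name, compute, base_path="/tmp/pipeline"):
    path = "%s/%s.parquet" % (base_path, name)
    try:
        return spark.read.parquet(path)       # skip recompute if output exists
    except AnalysisException:
        df = compute()
        df.write.mode("overwrite").parquet(path)
        return spark.read.parquet(path)       # read back so later stages recover too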
Validating big data pipelines - Scala eXchange 2018Holden Karau
Note: the link to the resource page should have been http://bit.ly/2QRVw0S
As big data jobs move from the proof-of-concept phase into powering real production services, you will need to consider what will happen when everything eventually goes wrong (such as recommending inappropriate products or other decisions taken on bad data).
During this talk, you will discover that you will eventually get aboard the failboat (especially with ~40% of respondents automatically deploying their Spark jobs' results to production). It's important to automatically recognise when things have gone wrong, so you can stop deployment before you have to update your resume.
Figuring out when things have gone terribly wrong is trickier than it first appears, since you want to catch the errors before your users notice them (or failing that before CNN notices them). We will explore general techniques for validation, look at responses from people validating big data jobs in production environments, and libraries that can assist you in writing relative validation rules based on historical data. For folks working in streaming, you will learn about the unique challenges of attempting to validate in a real-time system, and what you can do besides keeping an up-to-date resume on file for when things go wrong.
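For the streaming case, one approach is to validate each micro-batch before publishing it. The sketch below uses Structured Streaming's foreachBatch (available from Spark 2.4); `events` stands in for any streaming DataFrame, and the empty-batch rule is purely illustrative.

```python
# A hedged sketch of per-micro-batch validation in Structured Streaming.
# `events` is an assumed streaming DataFrame; paths and rules are illustrative.
def validate_and_write(batch_df, batch_id):
    if batch_df.count() == 0:
        # an unexpectedly empty batch often means the upstream feed broke
        raise ValueError("Batch %d was empty; refusing to publish" % batch_id)
    batch_df.write.mode("append").parquet("/tmp/validated-output")

query = events.writeStream.foreachBatch(validate_and_write).start()
```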
You will discover code examples in Apache Spark, as well as learn about similar concepts in Apache BEAM (a cross-platform tool), but the techniques should be applicable across systems.
Real-world examples (with company names removed) will be presented, as well as several Creative Commons licensed cat pictures.
Getting started contributing to Apache SparkHolden Karau
Are you interested in contributing to Apache Spark? This workshop and associated slides walk through the basics of contributing to Apache Spark as a developer. This advice is based on my 3 years of contributing to Apache Spark but should not be considered official in any way.
Debugging Spark: Scala and Python - Super Happy Fun Times @ Data Day Texas 2018Holden Karau
Apache Spark is one of the most popular big data projects, offering greatly improved performance over traditional MapReduce models. Much of Apache Spark’s power comes from lazy evaluation along with intelligent pipelining, which can make debugging more challenging. Holden Karau and Joey Echeverria explore how to debug Apache Spark applications, the different options for logging in Spark’s variety of supported languages, and some common errors and how to detect them.
Spark’s own internal logging can often be quite verbose. Holden and Joey demonstrate how to effectively search logs from Apache Spark to spot common problems and discuss options for logging from within your program itself. Spark’s accumulators have gotten a bad rap because of how they interact in the event of cache misses or partial recomputes, but Holden and Joey look at how to effectively use Spark’s current accumulators for debugging before gazing into the future to see the data property type accumulators that may be coming to Spark in future versions. And in addition to reading logs and instrumenting your program with accumulators, Spark’s UI can be of great help for quickly detecting certain types of problems. Holden and Joey cover how to quickly use the UI to figure out if certain types of issues are occurring in your job.
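As a small illustration of the accumulator-for-debugging idea discussed here, the sketch below counts malformed records during parsing. The input path is illustrative, and note the caveat from the talk: recomputed partitions can inflate the count.

```python
# A minimal sketch of using an accumulator to count malformed records.
# Remember that cache misses / partial recomputes can inflate the count.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
bad_records = spark.sparkContext.accumulator(0)

def parse(line):
    try:
        return [int(line)]
    except ValueError:
        bad_records.add(1)
        return []

parsed = spark.sparkContext.textFile("/tmp/input.txt").flatMap(parse)
parsed.count()                                  # force evaluation first
print("malformed records:", bad_records.value)
```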
The talk will wrap up with Holden trying to get everyone to buy several copies of her new book, High Performance Spark.
Many of the recent big data systems, like Hadoop, Spark, and Kafka, are written primarily in JVM languages. At the same time, there is a wealth of tools for data science and data analytics that exist outside of the JVM. Holden Karau and Rachel Warren explore the state of the current big data ecosystem and explain how to best work with it in non-JVM languages. While much of the focus will be on Python + Spark, the talk will also include interesting anecdotes about how these lessons apply to other systems (including Kafka).
Holden and Rachel detail how to bridge the gap using PySpark and discuss other solutions like Kafka Streams as well. They also outline the challenges of pure Python solutions like dask. Holden and Rachel start with the current architecture of PySpark and its evolution. They then turn to the future, covering Arrow-accelerated interchange for Python functions, how to expose Python machine learning models into Spark, and how to use systems like Spark to accelerate training of traditional Python models. They also dive into what other similar systems are doing as well as what the options are for (almost) completely ignoring the JVM in the big data space.
Python users will learn how to more effectively use systems like Spark and understand how the design is changing. JVM developers will gain an understanding of how to work with Python code from data scientists and Python developers while avoiding the traditional trap of needing to rewrite everything.
Apache Spark is one of the most popular big data projects, offering greatly improved performance over traditional MapReduce models. Much of Apache Spark’s power comes from lazy evaluation along with intelligent pipelining, which can make debugging more challenging. This talk will examine how to debug Apache Spark applications, the different options for logging in PySpark, as well as some common errors and how to detect them.
Spark’s own internal logging can often be quite verbose, and this talk will examine how to effectively search logs from Apache Spark to spot common problems. In addition to the internal logging, this talk will look at options for logging from within our program itself.
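Two common PySpark logging tricks are sketched below: turning down Spark's own verbosity, and borrowing the JVM's log4j logger on the driver so your messages land in the same logs. Note that sc._jvm is a private interface, so treat this as a pragmatic hack rather than a stable API.

```python
# A sketch of driver-side logging tricks in PySpark; sc._jvm is a private
# interface, so this is a pragmatic hack rather than a stable API.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark.sparkContext.setLogLevel("WARN")          # quiet the internal logging

log4j = spark.sparkContext._jvm.org.apache.log4j
logger = log4j.LogManager.getLogger("my_app")
logger.warn("driver-side message, interleaved with Spark's own logs")
```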
Spark's accumulators have gotten a bad rap because of how they interact in the event of cache misses or partial recomputes, but this talk will look at how to effectively use Spark's current accumulators for debugging, as well as take a look at the data property type accumulators which may be coming to Spark in a future version.
In addition to reading logs, and instrumenting our program with accumulators, Spark’s UI can be of great help for quickly detecting certain types of problems.
Debuggers are a wonderful tool; however, when you have 100 computers, the "wonder" can be a bit more like "pain". This talk will look at how to connect remote debuggers, but also remind you that it's probably not the easiest path forward.
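For reference, wiring up a JDWP remote debugger to executors is just a configuration change, as in the hedged sketch below. The port is illustrative, and with 100 executors you will feel the "pain" mentioned above, since every executor JVM opens its own debug endpoint.

```python
# A hedged sketch of attaching a JDWP remote debugger to executors;
# the port is illustrative, and each executor JVM needs its own connection.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .config("spark.executor.extraJavaOptions",
                 "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005")
         .getOrCreate())
```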
Accelerating Big Data beyond the JVM - Fosdem 2018Holden Karau
Many popular big data technologies (such as Apache Spark, BEAM, Flink, and Kafka) are built in the JVM, and many interesting tools are built in other languages (ranging from Python to CUDA). For simple operations the cost of copying the data can quickly dominate, and in complex cases can limit our ability to take advantage of specialty hardware. This talk explores how improved formats are being integrated to reduce these hurdles to co-operation.
Many popular big data technologies (such as Apache Spark, BEAM, and Flink) are built in the JVM, while many interesting AI tools are built in other languages, some of which require copying data to the GPU. As many folks have experienced, while we may wish we could spend all of our time playing with cool algorithms, we often need to spend more of our time working on data prep. Having to copy our data slowly between the JVM and the target language of computation can remove much of the benefit of being able to access our specialized tooling. Thankfully, as illustrated in the soon-to-be-released Spark 2.3, Apache Arrow and related tools offer the ability to reduce this overhead. This talk will explore how Arrow is being integrated into Spark, and how it can be integrated into other systems, but also the limitations and the places where Apache Arrow will not magically save us.
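A minimal sketch of the Arrow-accelerated path described above, using the configuration flag from Spark 2.3: with it enabled, toPandas() transfers columnar Arrow batches instead of pickled rows (and silently falls back to the slow path for unsupported data types).

```python
# A minimal sketch of enabling the Arrow-based JVM -> Python transfer
# added in Spark 2.3; the example DataFrame is illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark.conf.set("spark.sql.execution.arrow.enabled", "true")

df = spark.range(1000000).selectExpr("id", "id * 2 AS doubled")
pdf = df.toPandas()        # the JVM -> Python copy is now much cheaper
```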
Link: https://fosdem.org/2018/schedule/event/big_data_outside_jvm/
Validating Big Data Jobs—Stopping Failures Before Production on Apache Spark...Databricks
As big data jobs move from the proof-of-concept phase into powering real production services, we have to start considering what will happen when everything eventually goes wrong (such as recommending inappropriate products or other decisions taken on bad data). This talk will attempt to convince you that we will all eventually get aboard the failboat (especially with ~40% of respondents automatically deploying their Spark jobs' results to production), and it's important to automatically recognize when things have gone wrong so we can stop deployment before we have to update our resumes.
Figuring out when things have gone terribly wrong is trickier than it first appears, since we want to catch the errors before our users notice them (or failing that before CNN notices them). We will explore general techniques for validation, look at responses from people validating big data jobs in production environments, and libraries that can assist us in writing relative validation rules based on historical data.
For folks working in streaming, we will talk about the unique challenges of attempting to validate in a real-time system, and what we can do besides keeping an up-to-date resume on file for when things go wrong. To keep the talk interesting, real-world examples (with company names removed) will be presented, as well as several Creative Commons licensed cat pictures and an adorable panda GIF.
If you've seen Holden's previous testing Spark talks, this can be viewed as a deep dive on the second half, focused on what else we need to do besides good testing practices to create production-quality pipelines. If you haven't seen the testing talks, watch those on YouTube after you come see this one.
Debugging PySpark: Spark Summit East talk by Holden KarauSpark Summit
Apache Spark is one of the most popular big data projects, offering greatly improved performance over traditional MapReduce models. Much of Apache Spark’s power comes from lazy evaluation along with intelligent pipelining, which can make debugging more challenging. This talk will examine how to debug Apache Spark applications, the different options for logging in Spark’s variety of supported languages, as well as some common errors and how to detect them.
Spark’s own internal logging can often be quite verbose, and this talk will examine how to effectively search logs from Apache Spark to spot common problems. In addition to the internal logging, this talk will look at options for logging from within our program itself.
Spark's accumulators have gotten a bad rap because of how they interact in the event of cache misses or partial recomputes, but this talk will look at how to effectively use Spark's current accumulators for debugging, as well as take a look at the data property type accumulators which may be coming to Spark in a future version.
In addition to reading logs, and instrumenting our program with accumulators, Spark’s UI can be of great help for quickly detecting certain types of problems.
Testing and validating distributed systems with Apache Spark and Apache Beam ...Holden Karau
As distributed data parallel systems, like Spark, are used for more mission-critical tasks, it is important to have effective tools for testing and validation. This talk explores the general considerations and challenges of testing systems like Spark through spark-testing-base and other related libraries.
With over 40% of folks automatically deploying the results of their Spark jobs to production, testing is especially important. Many of the tools for working with big data systems (like notebooks) are great for exploratory work, and can give a false sense of security (as well as additional excuses not to test). This talk explores why testing these systems is hard, special considerations for simulating "bad" partitioning, figuring out when your streaming tests are finished, and solutions to these challenges.
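The basic pattern the talk builds on can be as simple as the sketch below: a local SparkSession driven by pytest. Libraries such as spark-testing-base layer helpers (DataFrame equality, simulating "bad" partitioning, streaming test termination) on top of this idea; the test itself is illustrative.

```python
# A minimal sketch of unit testing Spark code with pytest and a local
# SparkSession; the tested transformation is illustrative.
import pytest
from pyspark.sql import SparkSession

@pytest.fixture(scope="session")
def spark():
    return (SparkSession.builder
            .master("local[2]")
            .appName("pipeline-tests")
            .getOrCreate())

def test_dedupe_keeps_one_row_per_key(spark):
    df = spark.createDataFrame([(1, "a"), (1, "a"), (2, "b")], ["id", "v"])
    ids = sorted(r.id for r in df.dropDuplicates(["id"]).collect())
    assert ids == [1, 2]
```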
Big Data Beyond the JVM - Strata San Jose 2018Holden Karau
Many of the recent big data systems, like Hadoop, Spark, and Kafka, are written primarily in JVM languages. At the same time, there is a wealth of tools for data science and data analytics that exist outside of the JVM. Holden Karau and Rachel Warren explore the state of the current big data ecosystem and explain how to best work with it in non-JVM languages. While much of the focus will be on Python + Spark, the talk will also include interesting anecdotes about how these lessons apply to other systems (including Kafka).
Holden and Rachel detail how to bridge the gap using PySpark and discuss other solutions like Kafka Streams as well. They also outline the challenges of pure Python solutions like dask. Holden and Rachel start with the current architecture of PySpark and its evolution. They then turn to the future, covering Arrow-accelerated interchange for Python functions, how to expose Python machine learning models into Spark, and how to use systems like Spark to accelerate training of traditional Python models. They also dive into what other similar systems are doing as well as what the options are for (almost) completely ignoring the JVM in the big data space.
Python users will learn how to more effectively use systems like Spark and understand how the design is changing. JVM developers will gain an understanding of how to work with Python code from data scientists and Python developers while avoiding the traditional trap of needing to rewrite everything.
Extending spark ML for custom models now with python!Holden Karau
Are you interested in adding your own custom algorithms to Spark ML? This is the talk for you! See the companion examples in High Performance Spark and the Sparkling ML project.
Making the big data ecosystem work together with Python & Apache Arrow, Apach...Holden Karau
Slides from PyData London exploring how the big data ecosystem (currently) works together as well as how different parts of the ecosystem work with Python. Proof-of-concept examples are provided using nltk & spacy with Spark. Then we look to the future and how we can improve.
A fast introduction to PySpark with a quick look at Arrow based UDFsHolden Karau
This talk will introduce Apache Spark (one of the most popular big data tools), the different built-ins (from SQL to ML), and, of course, everyone's favorite wordcount example. Once we've got the nice parts out of the way, we'll talk about some of the limitations and the work being undertaken to improve them. We'll also look at the cases where Spark is more like trying to hammer in a screw. Since we want to finish on a happy note, we will close out by looking at the new vectorized UDFs in PySpark 2.3.
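A minimal sketch of a vectorized UDF in the PySpark 2.3 style follows; the function operates on whole pandas Series at a time instead of one value per call, which is where the Arrow-based speedup comes from.

```python
# A minimal sketch of a Spark 2.3-style vectorized (pandas) UDF.
from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf, PandasUDFType

spark = SparkSession.builder.getOrCreate()

@pandas_udf("double", PandasUDFType.SCALAR)
def plus_one(v):
    return v + 1.0      # pandas Series in, pandas Series out

spark.range(5).select(plus_one("id").alias("id_plus_one")).show()
```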
Using Spark ML on Spark Errors - What do the clusters tell us?Holden Karau
If you're subscribed to user@spark.apache.org, or work in a large company, you may see some common Spark error messages. If you've attended Spark Summit over the past few years, you may have seen talks like "Top K Mistakes in Spark." While cool non-machine-learning-based tools do exist to examine Spark's logs, they don't use machine learning and are therefore not as cool, but are also limited by the amount of effort humans can put into writing rules for them. This talk will look at what happens when we train "regular" clustering models on stack traces, and explore DL models for classifying user messages to the Spark list. Come for the reassurance that the robots are not yet able to fix themselves, and stay to learn how to work better with the help of our robot friends. The tl;dr of this talk is that Spark ML on Spark output, plus a little bit of TensorFlow, is fun for the whole family, but probably shouldn't automatically respond to user list posts just yet.
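A hedged sketch of the "regular" clustering idea: featurize error messages with tokenization plus hashing term frequencies, then run KMeans. The sample messages and the choice of k are purely illustrative.

```python
# A hedged sketch of clustering Spark error messages with Spark ML.
from pyspark.ml import Pipeline
from pyspark.ml.clustering import KMeans
from pyspark.ml.feature import HashingTF, Tokenizer
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
errors = spark.createDataFrame([
    ("java.lang.OutOfMemoryError: Java heap space",),
    ("java.lang.OutOfMemoryError: GC overhead limit exceeded",),
    ("org.apache.spark.SparkException: Task not serializable",),
], ["message"])

pipeline = Pipeline(stages=[
    Tokenizer(inputCol="message", outputCol="tokens"),
    HashingTF(inputCol="tokens", outputCol="features"),
    KMeans(k=2, featuresCol="features"),
])
model = pipeline.fit(errors)
model.transform(errors).select("message", "prediction").show(truncate=False)
```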
Getting Started Contributing to Apache Spark – From PR, CR, JIRA, and BeyondDatabricks
With the community working on preparing the next versions of Apache Spark, you may be asking yourself 'how do I get involved in contributing to this?' With such a large volume of contributions, it can be hard to know how to begin contributing yourself. Holden Karau offers a developer-focused head start, walking you through how to find good issues, format code, find reviewers, and what to expect in the code review process. In addition to looking at how to contribute code, we explore some of the other ways you can contribute to Apache Spark, from helping test release candidates to doing the all-important code reviews, bug triage, and more (like answering questions).
A Glimpse At The Future Of Apache Spark 3.0 With Deep Learning And KubernetesLightbend
In this special guest webinar with Holden Karau, speaker, author and Developer Advocate at Google, we’ll take a walk through some of the interesting JIRAs, look at external components being developed (like deep learning support), and also talk about the future of running real-time Spark workloads on Kubernetes.
Are general purpose big data systems eating the world?Holden Karau
Every time there is a new piece of big data technology, we often see many different specific implementations of the concepts, which eventually consolidate down to a few viable options and then frequently end up getting rolled into part of another larger project. This talk will examine this trend in the big data ecosystem, look at the exceptions to the "rule", and look at how better interchange formats like Apache Arrow have the potential to change this going forward. In addition to general vague happy feelings (or sad, depending on your ideas about how software should be made), this talk will look at some specific examples with deep learning, so if anyone is looking for a little bit of pixie dust to sprinkle on a failing business plan to take to Silicon Valley to raise a Series A, you'll get something out of this as well.
Video - https://www.youtube.com/watch?v=P_YKrLFZQJo
Debugging Apache Spark - Scala & Python super happy fun times 2017Holden Karau
Apache Spark is one of the most popular big data projects, offering greatly improved performance over traditional MapReduce models. Much of Apache Spark’s power comes from lazy evaluation along with intelligent pipelining, which can make debugging more challenging. Holden Karau and Joey Echeverria explore how to debug Apache Spark applications, the different options for logging in Spark’s variety of supported languages, and some common errors and how to detect them.
Spark's own internal logging can often be quite verbose. Holden and Joey demonstrate how to effectively search logs from Apache Spark to spot common problems and discuss options for logging from within your program itself. Spark's accumulators have gotten a bad rap because of how they interact in the event of cache misses or partial recomputes, but Holden and Joey look at how to effectively use Spark's current accumulators for debugging before gazing into the future to see the data property type accumulators that may be coming to Spark in future versions. And in addition to reading logs and instrumenting your program with accumulators, Spark's UI can be of great help for quickly detecting certain types of problems. Holden and Joey cover how to quickly use the UI to figure out if certain types of issues are occurring in your job.
Sharing (or stealing) the jewels of python with big data & the jvm (1)Holden Karau
With the new Apache Arrow integration in PySpark 2.3, it is now starting to become reasonable to look to the Python world and ask "what else do we want to steal besides TensorFlow?", or, as a Python developer, to look and say "how can I get my code into production without it being rewritten into a mess of Java?"
Regardless of your specific side(s) in the JVM/Python divide, collaboration is getting a lot faster, so let's learn how to share! In this brief talk we will examine sharing some of the wonders of spaCy with the Java world, which still has a somewhat lackluster set of options for NLP.
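A hedged sketch of sharing spaCy with Spark via a Python UDF follows. The model name is an assumption, and the lazy per-worker load avoids trying to serialize the spaCy pipeline itself from the driver.

```python
# A hedged sketch of calling spaCy from a PySpark UDF; the model name
# "en_core_web_sm" is an assumption, loaded lazily once per Python worker.
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import ArrayType, StringType

spark = SparkSession.builder.getOrCreate()
_nlp = None

def entities(text):
    global _nlp
    if _nlp is None:                    # load once per Python worker
        import spacy
        _nlp = spacy.load("en_core_web_sm")
    return [ent.text for ent in _nlp(text).ents]

entities_udf = udf(entities, ArrayType(StringType()))
df = spark.createDataFrame([("Holden spoke at FOSDEM in Brussels",)], ["text"])
df.select(entities_udf("text").alias("entities")).show(truncate=False)
```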
Keeping the fun in functional w/ Apache Spark @ Scala Days NYCHolden Karau
Apache Spark has been a great driver of not only Scala adoption, but introducing a new generation of developers to functional programming concepts. As Spark places more emphasis on its newer DataFrame & Dataset APIs, it’s important to ask ourselves how we can benefit from this while still keeping our fun functional roots. We will explore the cases where the Dataset APIs empower us to do cool things we couldn’t before, what the different approaches to serialization mean, and how to figure out when the shiny new API is actually just trying to steal your lunch money (aka CPU cycles).
Scala vs. Python: Which Language Should be learned in 2020NexSoftsys
Scala and Python are two of the most popular programming languages in use in 2020. This presentation covers each language's pros and cons, their standout features, and their support for emerging technologies, and lists the differences between these two popular languages.
An introduction into Spark ML plus how to go beyond when you get stuckData Con LA
Abstract:-
This talk will introduce Spark's new machine learning framework (Spark ML) and how to train basic models with it. A companion Jupyter notebook for people to follow along with will be provided. Once we've got the basics down, we'll look at what to do when we find we need more than the tools available in Spark ML (and I'll try to convince people to contribute to my latest side project, Sparkling ML).
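As a flavor of what training a basic model with Spark ML looks like, here is a minimal sketch; the tiny inline dataset and column names are illustrative stand-ins for the notebook's data.

```python
# A minimal sketch of a basic Spark ML training pipeline; data is illustrative.
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import VectorAssembler
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
train = spark.createDataFrame([
    (0.0, 1.2, 0.0), (1.0, 3.4, 1.0),
    (0.5, 0.1, 0.0), (2.2, 4.0, 1.0),
], ["f1", "f2", "label"])

pipeline = Pipeline(stages=[
    VectorAssembler(inputCols=["f1", "f2"], outputCol="features"),
    LogisticRegression(maxIter=10),
])
model = pipeline.fit(train)
model.transform(train).select("features", "label", "prediction").show()
```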
Bio:-
Holden Karau is a transgender Canadian, Apache Spark committer, an active open source contributor, and coauthor of Learning Spark and High Performance Spark. When not in San Francisco working as a software development engineer at IBM’s Spark Technology Center, Holden speaks internationally about Spark and holds office hours at coffee shops at home and abroad. She makes frequent contributions to Spark, specializing in PySpark and machine learning. Prior to IBM, she worked on a variety of distributed, search, and classification problems at Alpine, Databricks, Google, Foursquare, and Amazon. She holds a bachelor of mathematics in computer science from the University of Waterloo. Outside of computers she enjoys scootering and playing with fire.
Debugging PySpark - Spark Summit East 2017Holden Karau
Apache Spark is one of the most popular big data projects, offering greatly improved performance over traditional MapReduce models. Much of Apache Spark’s power comes from lazy evaluation along with intelligent pipelining, which can make debugging more challenging. This talk will examine how to debug Apache Spark applications, the different options for logging in Spark’s variety of supported languages, as well as some common errors and how to detect them.
Spark’s own internal logging can often be quite verbose, and this talk will examine how to effectively search logs from Apache Spark to spot common problems. In addition to the internal logging, this talk will look at options for logging from within our program itself.
Spark's accumulators have gotten a bad rap because of how they interact in the event of cache misses or partial recomputes, but this talk will look at how to effectively use Spark's current accumulators for debugging, as well as take a look at the data property type accumulators which may be coming to Spark in a future version.
In addition to reading logs, and instrumenting our program with accumulators, Spark’s UI can be of great help for quickly detecting certain types of problems.
Video: https://www.youtube.com/watch?v=A0jYQlxc2FU&feature=youtu.be
Contributing to Apache Airflow | Journey to becoming Airflow's leading contri...Kaxil Naik
From not knowing Python (let alone Airflow), and from submitting a first PR that fixed a typo, to becoming an Airflow Committer, PMC Member, Release Manager, and the #1 committer this year, this talk walks through Kaxil's journey in the Airflow world.
The second part of this talk explains:
how you can also start your OSS journey by contributing to Airflow
expanding your familiarity with different parts of the Airflow codebase
continuing to commit regularly & steadily to become an Airflow Committer (including the current guidelines for becoming a Committer)
the different mediums of communication (dev list, users list, Slack channel, GitHub Discussions, etc.)
Beyond Wordcount with spark datasets (and scalaing) - Nike PDX Jan 2018Holden Karau
Apache Spark is one of the most popular big data systems, but once the shiny finish starts to wear off you can find yourself wondering if you've accidentally deployed a Ford Pinto into production. This talk will look at the challenges that come with scaling Spark jobs. Also, the talk will explore Spark's new(ish) Dataset/DataFrame API, as well as how it’s evolving in Spark 2.3 with improved Python support.
If you're already a Spark user, come to find out why it’s not all your fault. If you aren't already a Spark user, come to find out how to save yourself from some of the pitfalls once you move beyond the example code.
Check out Holden's newest book, High Performance Spark, for more information!
From https://niketechtalksjan2018.splashthat.com/
Does Django scale? How to manage traffic peaks? What happens when the database grows too big? How to find (and fix) the bottlenecks?
We will overview the basic concepts, use metrics to find bottlenecks, and finally see some tips and tricks to improve the scalability and performance of a Django project (a small query-optimization sketch follows the topic list below).
Main topics:
- System architecture
- Database performance
- Finding bottlenecks
- Monitoring, profiling, debugging
- Query optimization
- Dealing with a slow admin
- Queues and workers
- Faster tests
Talk given at #EuroPython 2016: https://ep2016.europython.eu/conference/talks/efficient-django
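As promised above, here is a hedged sketch of one classic Django bottleneck fix from the "Query optimization" topic: avoiding N+1 queries with select_related. The models are illustrative, not from the talk.

```python
# A hedged sketch of avoiding N+1 queries in Django; models are illustrative.
from django.db import models

class Author(models.Model):
    name = models.CharField(max_length=100)

class Book(models.Model):
    title = models.CharField(max_length=200)
    author = models.ForeignKey(Author, on_delete=models.CASCADE)

def titles_with_authors():
    # Without select_related, each b.author.name triggers an extra query;
    # with it, Django fetches everything in a single JOINed query.
    return [
        "%s by %s" % (b.title, b.author.name)
        for b in Book.objects.select_related("author")
    ]
```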
Spark 2.0 is a major release of Apache Spark that brought many changes to Spark's APIs and libraries. In this KnolX we will look at some of the improvements made in Spark 2.0; these slides also introduce some of the new features in Spark 2.0, like the SparkSession API and Structured Streaming.
A view from the ivory tower: Participating in Apache as a member of academiaMichael Mior
"Academics in an ivory tower" conjures images of people toiling away, nicely insulated from many of the concerns of reality. While this has its advantages, anyone who's tried to use a project written for a research paper under a deadline can attest that it doesn't always result in useful code. While completing my PhD, I found an Apache project that fit well with the work I was doing, so I rolled up my sleeves to write some code to make it more useful for solving my own problems. I've since had the opportunity to join the project's PMC, and now, as a faculty member, I continue to find value in encouraging my own students to contribute to Apache projects. I'll discuss how academics and Apache projects can find mutual benefit in close collaboration.
Wireless Communication SystemJeyaPerumal1
Wireless communication involves the transmission of information over a distance without the help of wires, cables or any other forms of electrical conductors.
Wireless communication is a broad term that incorporates all procedures and forms of connecting and communicating between two or more devices using a wireless signal through wireless communication technologies and devices.
Features of Wireless Communication
The evolution of wireless technology has brought many advancements and effective features.
The transmitted distance can be anywhere between a few meters (for example, a television's remote control) and thousands of kilometers (for example, radio communication).
Wireless communication can be used for cellular telephony, wireless access to the internet, wireless home networking, and so on.
Bridging the Digital Gap Brad Spiegel Macon, GA Initiative.pptxBrad Spiegel Macon GA
Brad Spiegel Macon GA’s journey exemplifies the profound impact that one individual can have on their community. Through his unwavering dedication to digital inclusion, he’s not only bridging the gap in Macon but also setting an example for others to follow.
APNIC Foundation, presented by Ellisha Heppner at the PNG DNS Forum 2024APNIC
Ellisha Heppner, Grant Management Lead, presented an update on APNIC Foundation to the PNG DNS Forum held from 6 to 10 May, 2024 in Port Moresby, Papua New Guinea.
This 7-second Brain Wave Ritual Attracts Money To You.!nirahealhty
Discover the power of a simple 7-second brain wave ritual that can attract wealth and abundance into your life. By tapping into specific brain frequencies, this technique helps you manifest financial success effortlessly. Ready to transform your financial future? Try this powerful ritual and start attracting money today!
Multi-cluster Kubernetes Networking- Patterns, Projects and GuidelinesSanjeev Rampal
Talk presented at Kubernetes Community Day, New York, May 2024.
Technical summary of Multi-Cluster Kubernetes Networking architectures with focus on 4 key topics.
1) Key patterns for Multi-cluster architectures
2) Architectural comparison of several OSS/ CNCF projects to address these patterns
3) Evolution trends for the APIs of these projects
4) Some design recommendations & guidelines for adopting/ deploying these solutions.
5. @holdenkarau
What we are going to explore together!
Getting a change into Apache Spark & the components involved:
● The current state of the Apache Spark dev community
● Reason to contribute to Apache Spark
● Different ways to contribute
● Places to find things to contribute
● Tooling around code & doc contributions
Torsten Reuschling
6. @holdenkarau
Who I think you wonderful humans are?
● Nice* people
● Don’t mind pictures of cats
● May know some Apache Spark?
● Want to contribute to Apache Spark
7. @holdenkarau
Why I’m assuming you might want to contribute:
● Fix your own bugs/problems with Apache Spark
● Learn more about distributed systems (for fun or profit)
● Improve your Scala/Python/R/Java experience
● You <3 functional programming and want to trick more
people into using it
● “Credibility” of some vague type
● You just like hacking on random stuff and Spark seems
shiny
8. @holdenkarau
What’s the state of the Spark dev community?
● Really large number of contributors
● Active PMC & committers somewhat concentrated
○ Better than we used to be
● Also a lot of SF Bay Area - but certainly not exclusively so
gigijin
9. @holdenkarau
How can we contribute to Spark?
● Direct code in the Apache Spark code base
● Code in packages built on top of Spark
● Code reviews
● Yak shaving (aka fixing things that Spark uses)
● Documentation improvements & examples
● Books, Talks, and Blogs
● Answering questions (mailing lists, stack overflow, etc.)
● Testing & Release Validation
Andrey
10. @holdenkarau
Which is right for you?
● Direct code in the Apache Spark code base
○ High visibility, some things can only really be done here
○ Can take a lot longer to get changes in
● Code in packages built on top of Spark
○ Really great for things like formats or standalone features
● Yak shaving (aka fixing things that Spark uses)
○ Super important to do sometimes - can take even longer to get in
romana klee
11. @holdenkarau
Which is right for you? (continued)
● Code reviews
○ High visibility to PMC, can be faster to get started, easier to time box
○ Less tracked in metrics
● Documentation improvements & examples
○ Lots of places to contribute - mixed visibility - large impact
● Advocacy: Books, Talks, and Blogs
○ Can be high visibility
romana klee
12. @holdenkarau
Contributing Code Directly to Spark
● Maybe we encountered a bug we want to fix
● Maybe we’ve got a feature we want to add
● Either way we should see if other people are doing it
● And if what we want to do is complex, it might be better
to find something simple to start with
● It’s dangerous to go alone - take this
https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark
Jon Nelson
13. @holdenkarau
The different pieces of Spark
[Diagram: Apache Spark “Core”, with SQL & DataFrames, Streaming, Language APIs (Scala, Java, Python, & R), Graph Tools (Bagel & GraphX), Spark ML, MLlib, and Community Packages layered on top; running on Spark on YARN, Spark on Mesos, or Standalone Spark]
14. @holdenkarau
The different pieces of Spark: 2.0+
[Diagram: Apache Spark “Core”, with SQL & DataFrames, Streaming, Structured Streaming, Language APIs (Scala, Java, Python, & R), Graph Tools (Bagel & GraphX), Spark ML, MLlib, and Community Packages layered on top]
15. @holdenkarau
The different pieces of Spark: 3+?
[Diagram: as above - Apache Spark “Core”, SQL & DataFrames, Streaming, Structured Streaming, Language APIs (Scala, Java, Python, & R), Graph Tools (Bagel & GraphX), Spark ML, MLlib, Community Packages - now running on Spark on YARN, Spark on Mesos, Spark on Kubernetes, or Standalone Spark]
16. @holdenkarau
Choosing a component?
● Core
○ Conservative to external changes, but biggest impact
● ML / MLlib
○ ML is the home of the future - you can improve existing algorithms - new algorithms face an uphill battle
● Structured Streaming
○ Current API is in a lot of flux so it is difficult for external participation
● SQL
○ Lots of fun stuff - very active - I have limited personal experience
● Python / R
○ Improve coverage of current APIs, structural change hard
● GraphX - dead; see GraphFrames instead
Rikki's Refuge
17. @holdenkarau
Choosing a component? (cont)
● Kubernetes
○ New, lots of active work and reviewers
● YARN
○ Old faithful, always needs a little work. Hadoop 3 support
● Mesos
○ Needs some love, probably an easy-ish path to committer (still hard)
● Standalone
○ Not a lot going on
Rikki's Refuge
18. @holdenkarau
Onto JIRA - Issue tracking funtimes
● It’s like Bugzilla or FogBugz
● There is an Apache JIRA for many Apache projects
● You can (and should) sign up for an account
● All changes in Spark (now) require a JIRA
● https://www.youtube.com/watch?v=ca8n9uW3afg
● Check it out at:
○ https://issues.apache.org/jira/browse/SPARK
19. @holdenkarau
What can we do with ASF JIRA?
● Search for issues (remember to filter to Spark project)
● Create new issues
○ search first to see if someone else has reported it
● Comment on issues to let people know we are working on it
● Ask people for clarification or help
○ e.g. “Reading this I think you want the null values to be replaced by a string when processing - is that correct?”
○ @mentions work here too
20. @holdenkarau
What can’t we do with ASF JIRA?
● Assign issues (to ourselves or other people)
○ In lieu of assigning we can “watch” & comment
● Post long design documents (create a Google Doc & link to it from the JIRA)
● Tag issues
○ While we can add tags, they often get removed
22. @holdenkarau
Finding a good “starter” issue:
● There are explicit starter tags in JIRA we can search for
● But often the starter tag isn’t applied
● Read through and look for simple issues
● Pick something in the same component you eventually want
to work in
○ And/or consider improving the non-Scala language APIs for the component(s) you want to work on.
● Look at the reporter and commenters - is there a
committer or someone whose name you recognize?
● Leave a comment that says you are going to start working
on this
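For example, a JIRA search (JQL) along these lines can surface candidates - the exact label values here are an assumption, so adjust as needed:
  project = SPARK AND status = Open AND labels in (starter, Starter) ORDER BY created DESC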
23. @holdenkarau
Find an issue you want to work on
https://issues.apache.org/jira/browse/SPARK
Also grep for TODO in components you are interested in (e.g.
grep -r TODO ./python/pyspark or grep -R TODO ./core/src)
Look between language APIs and see if anything is missing
that you think is interesting -
http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.package
http://spark.apache.org/docs/latest/api/python/index.html
neko kabachi
24. @holdenkarau
Explore things that make sense to revisit
https://issues.apache.org/jira/browse/SPARK
Consider looking for issues which we couldn’t fix due to our compatibility requirements and should revisit for 3+
Maurizio Zanetti
27. @holdenkarau
But before we get too far:
● Spark wishes to maintain compatibility between releases
● We’re working on 3 though so this is the time to break things
Meagan Fisher
28. @holdenkarau
Getting at the code: yay for GitHub :)
● https://github.com/apache/spark
● Make a fork of it
● Clone it locally
dougwoods
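A minimal sketch of that setup, assuming your fork lives under YOUR_USERNAME on GitHub:
  # clone your fork and track apache/spark as "upstream"
  git clone https://github.com/YOUR_USERNAME/spark.git
  cd spark
  git remote add upstream https://github.com/apache/spark.git
  git fetch upstream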
31. @holdenkarau
What about documentation changes?
● Still use JIRAs to track
● We can’t edit the wiki :(
● But a lot of documentation lives in docs/*.md
Kreg Steppe
32. @holdenkarau
Building Spark’s docs
./docs/README.md has a lot of info - but quickly:
SKIP_API=1 jekyll build
SKIP_API=1 jekyll serve --watch
*Requires a recent-ish jekyll - the install instructions assume Ruby 2.0 only; on Debian-based systems s/gem/gem2.0/
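Putting that together, a typical local docs loop might look like this (a sketch, assuming jekyll is already installed):
  cd docs
  SKIP_API=1 jekyll build          # one-off build into _site/
  SKIP_API=1 jekyll serve --watch  # local preview that rebuilds on edits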
33. @holdenkarau
Finding your way around the project
● Organized into sub-projects by directory
● IntelliJ is very popular with Spark developers
○ The free version is fine
● Some people like using emacs + ensime or magit too
● Language specific code is in each sub directory
34. @holdenkarau
Testing the issue
The spark-shell can often be a good way to verify that the issue reported in the JIRA is still occurring and to come up with a reasonable test.
Once you’ve got a handle on the issue in the spark-shell (or if you decide to skip that step) check out ./[component]/src/test for Scala or doctests for Python
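For instance, a quick reproduction session might look like this - the DataFrame and filter here are stand-ins for whatever the JIRA actually describes:
  $ ./bin/spark-shell
  scala> val df = spark.range(10).toDF("value")
  scala> df.filter($"value" > 5).show()  // does the reported behavior still occur?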
35. @holdenkarau
While we get our code working:
● Remember to follow the style guides
○ https://cwiki.apache.org/confluence/display/SPARK/Spark+Code+Style+Guide
● Please always add tests
○ For development we can run scala test with “sbt [module]/testOnly”
○ In python we can specify module with ./python/run-tests
● ./dev/lint-scala & ./dev/lint-python check for some style
● Changing the API? Make sure we pass or you update MiMa!
○ Sometimes it’s OK to make breaking changes, and MiMa can be a bit overzealous so adding exceptions is common
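As a concrete sketch (the suite and module names here are just examples):
  ./build/sbt "core/testOnly org.apache.spark.util.UtilsSuite"  # one Scala suite
  ./python/run-tests --modules=pyspark-core                     # one Python module
  ./dev/lint-scala && ./dev/lint-python                         # style checks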
36. @holdenkarau
A bit more on MiMa
● Spark wishes to maintain binary compatibility
○ in non-experimental components
○ 3.0 can be different
● MiMa exclusions can be added if we verify (and document
how we verified) the compatibility
● Often MiMa is a bit oversensitive, so don’t feel stressed - feel free to ask for help if confused
Julie Krawczyk
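When an exclusion really is justified, it goes in project/MimaExcludes.scala; a hypothetical entry (the problem type and method name are illustrative) looks roughly like:
  // verified: this method was experimental, and the verification is documented in the PR
  ProblemFilters.exclude[DirectMissingMethodProblem](
    "org.apache.spark.SomeClass.someRemovedMethod")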
37. @holdenkarau
Making the change:
No arguing about which editor please - kthnx
Making a doc change? Look inside docs/*.md
Making a code change? grep, IntelliJ, or GitHub’s in-project code search can all help you find what you’re looking for.
39. @holdenkarau
Yay! Let’s make a PR :)
● Push to your branch
● Visit github
● Create PR (put JIRA name in title as well as component)
○ Components control where our PR shows up in https://spark-prs.appspot.com/
● If you’ve been whitelisted, tests will run
● Otherwise will wait for someone to verify
● Tag it “WIP” if it’s a work in progress (but maybe wait)
[puamelia]
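A sketch of that flow, with SPARK-XXXXX standing in for your actual issue number:
  git checkout -b SPARK-XXXXX-short-description
  # ...edit, commit...
  git push origin SPARK-XXXXX-short-description
  # then on GitHub open a PR titled like:
  #   [SPARK-XXXXX][PYTHON] Short description of the change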
40. @holdenkarau
Code review time
● Note: this is after the pull request creation
● I believe code reviews should be done in the open
○ With the exception of when we are deciding if we want to try and submit a change
○ Even then should have hopefully decided that back at the JIRA stage
● My personal beliefs & your org’s may not align
● If you have the time you can contribute by reviewing others’ code too (please!)
Mitchell Joyce
41. @holdenkarau
And now onto the actual code review...
● Most often committers will review your code (eventually)
● Other people can help too
● People can be very busy (check the release schedule)
● If you don’t get traction try pinging people
○ Me ( @holdenkarau - I'm not an expert everywhere but I can look)
○ The author of the JIRA (even if not a committer)
○ The shepherd of the JIRA (if applicable)
○ The person who wrote the code you are changing (git blame)
○ Active committers for the component
Mitchell Joyce
42. @holdenkarau
What does the review look like?
● LGTM - Looks good to me
○ Individual thinks the code looks good - ready to merge (sometimes LGTM pending tests or LGTM but check with @[name]).
● SGTM - Sounds good to me (normally in response to a suggestion)
● Sometimes get sent back to the drawing board
● Not all PRs get in - it’s OK!
○ Don’t feel bad & don’t get discouraged.
● Mixture of in-line comments & general comments
● You can see some videos of my live reviews at
http://bit.ly/holdenLiveOSS
Phil Long
52. @holdenkarau
That’s a pretty standard small PR
● It took some time to get merged in
● It was fairly simple
● Review cycles are long - so move on to other things
● Only two reviewers
● Apache Spark Jenkins comments on build status :)
○ “Jenkins retest this please” is great
● Big PRs - like making PySpark pip installable - can have > 10 reviewers and take a long time
● Sometimes it can be hard to find reviewers - tag your PRs & ping people on github
James Joel
53. @holdenkarau
Don’t get discouraged
David Martyn Hunt
It is normal to not get every pull request accepted
Sometimes other people will “scoop” you on your pull request
Sometimes people will be super helpful with your pull request
54. @holdenkarau
Don’t get discouraged
David Martyn Hunt
If you don’t hear anything there is a good chance it is a “soft no” - but you can ping me and I can try and help.
The community has been trying to get better at explicit “Won’t Fix” or saying no on PRs
55. @holdenkarau
So who was that “Spark QA”/SparkJenkins/etc.?
● Automated pull request builder
● Jenkins based
● Runs all of the tests & style checks
● Lives in Berkeley
● Test logs live on, artifacts not so much
● https://amplab.cs.berkeley.edu/jenkins
56. @holdenkarau
Some changes require even more testing
● spark-perf (common for ML changes)
● spark-sql-perf (common for SQL changes)
● spark-integration-tests (integration testing)
Image of FLG by Eric Kilby
57. @holdenkarau
While we are waiting:
● Keep merging in master when we get out of sync
● If we don’t, Jenkins can’t run :(
● We get out of sync surprisingly quickly!
● If our pull request gets older than 30 days it might get auto-closed
● If you don’t hear anything try pinging the dev list to see if it’s a “soft no” (and/or ping me :))
Moyan Brenn
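Keeping in sync is just a couple of commands, assuming the “upstream” remote from earlier:
  git fetch upstream
  git merge upstream/master
  git push origin SPARK-XXXXX-short-description  # updates the open PR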
58. @holdenkarau
In review: Where do we get started?
● Search for “starter” on JIRA
● Look on the mailing list for problems
● Stack Overflow - lots of questions, some of which are bugs
● grep for TODO, broken, and FIXME
● Compare APIs between languages
● Customer/user reports?
Serena
59. @holdenkarau
What about doing reviews?
● You don’t need to be an expert (it will just be slower)
● It’s OK to leave suggestions like “I think this does X but it’s a little confusing - maybe add a comment”
● First pass reviews from others are super useful
● Helping people find the right reviewers is useful
● We have over 450 open pull requests (> 150 “active”)
● You can drill down by component in
https://spark-prs.appspot.com/
60. @holdenkarau
What about when we want to make big changes?
● Talk with the community
○ Developer mailing list dev@spark.apache.org
○ User mailing list user@spark.apache.org
● Consider if it can be published as a spark-package
● Create a public design document (google doc normally)
● Be aware this will be somewhat of an uphill battle (I’m sorry)
● You can look at SPIPs (Spark’s version of PEPs)
61. @holdenkarau
Other resources:
● “Contributing to Apache Spark” - https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark
● Programming guide (along with JavaDoc, PyDoc, ScalaDoc, etc.) - http://spark.apache.org/docs/latest/
● Developer list - http://apache-spark-developers-list.1001551.n3.nabble.com/
62. @holdenkarau
What things can be good Spark packages?
● Input formats (especially Spark SQL, Streaming)
● Machine learning pipeline components & algorithms
● Testing support
● Monitoring data sinks
● Deployment tools
frankieleon
63. @holdenkarau
Making your own package
● Relatively simple - need to publish to maven central
● Listed on http://spark-packages.org
● Cross building (Spark versions) not super easy
○ I use a perl script (don’t tell on me)
● If you’re building with sbt check out https://github.com/databricks/sbt-spark-package to make it easy to publish
● Used to do API compatibility checks
● Sometimes flaky - just republish if it doesn’t go through
frankieleon
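With sbt-spark-package, the build configuration is only a few settings; this build.sbt fragment is a sketch with placeholder names and versions:
  // org/name as listed on spark-packages.org
  spName := "yourorg/your-package"
  // Spark version to build against
  sparkVersion := "2.3.0"
  // which Spark modules your package depends on
  sparkComponents ++= Seq("core", "sql")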
64. @holdenkarau
How about writing a book?
● Can be lots of fun
● Can also take up 100% of your “free” time
● Can get you invited to more nerd parties
● Most of the publishers are looking to improve/broaden their Spark book lineup
● Like an old book that hasn’t been updated? Talk to the publisher about updating it.
Kreg Steppe
65. @holdenkarau
How about yak shaving?
● Lots of areas need shaving
● JVM deps are easier to update, Python deps are not :(
● Things built on top are a great place to go yak shaving
○ Jupyter etc.
Jason Crane
66. @holdenkarau
Testing/Release Validation
● Join the dev@ list and look for [VOTE] threads
○ Check and see if Spark deploys on your environment
○ If your application still works, or if we need to fix something
○ Great way to keep your Spark application working with less work
● Adding more automated tests is good too
○ Especially integration tests
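To test a release candidate against your own application, you can point your build at the staging repository from the [VOTE] email; the repository id (orgapachespark-NNNN below) varies per RC, so treat this sbt fragment as a sketch:
  // the staging repo URL comes from the vote thread on dev@
  resolvers += "Spark RC staging" at
    "https://repository.apache.org/content/repositories/orgapachespark-NNNN/"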
67. @holdenkarau
Spark Videos
● Apache Spark Youtube Channel
● My Spark videos on YouTube -
○ http://bit.ly/holdenSparkVideos
● Spark Summit 2014 training
● Paco’s Introduction to Apache Spark
Paul Anderson
68. @holdenkarau
[Book covers:]
Learning Spark
Fast Data Processing with Spark (out of date)
Fast Data Processing with Spark (2nd edition)
Advanced Analytics with Spark
Spark in Action
High Performance Spark
Learning PySpark
69. @holdenkarau
High Performance Spark!
You can buy it today! On the internet!
Cats love it*
*Or at least the box it comes in. If buying for a cat, get print rather than e-book.
71. @holdenkarau
And some upcoming talks:
● March
○ Dataworks Barcelona -- tomorrow
○ Strata San Francisco -- next week
● April
○ Spark Summit
● May
○ KiwiCoda Mania
● June
○ "Secret" (for another week or so)
● July
○ OSCON Portland
○ Skills Matter in London
72. @holdenkarau
k thnx bye :)
If you care about Spark testing and
don’t hate surveys:
http://bit.ly/holdenTestingSpark
Will tweet results
“eventually” @holdenkarau
Do you want more realistic benchmarks? Share your UDFs!
http://bit.ly/pySparkUDF
It’s performance review season, so help a friend out and fill out this survey with your talk feedback
http://bit.ly/holdenTalkFeedback