With the community preparing the next versions of Apache Spark, you may be asking yourself, "How do I get involved in contributing to this?" With such a large volume of contributions, it can be hard to know where to begin. Holden Karau offers a developer-focused head start, walking you through how to find good issues, format code, find reviewers, and what to expect in the code review process. In addition to looking at how to contribute code, we explore some of the other ways you can contribute to Apache Spark, from helping test release candidates to doing the all-important code reviews, bug triage, and more (like answering questions).
Getting started contributing to Apache SparkHolden Karau
Are you interested in contributing to Apache Spark? This workshop and associated slides walk through the basics of contributing to Apache Spark as a developer. This advice is based on my 3 years of contributing to Apache Spark but should not be considered official in any way.
Accelerating Big Data beyond the JVM - Fosdem 2018Holden Karau
Many popular big data technologies (such as Apache Spark, BEAM, Flink, and Kafka) are built in the JVM, and many interesting tools are built in other languages (ranging from Python to CUDA). For simple operations the cost of copying the data can quickly dominate, and in complex cases can limit our ability to take advantage of specialty hardware. This talk explores how improved formats are being integrated to reduce these hurdles to co-operation.
Many popular big data technologies (such as Apache Spark, BEAM, and Flink) are built on the JVM, while many interesting AI tools are built in other languages, some of which require copying data to the GPU. As many folks have experienced, while we may wish we could spend all of our time playing with cool algorithms, we often need to spend more of our time working on data prep. Having to copy our data slowly between the JVM and the target language of computation can remove much of the benefit of being able to access our specialized tooling. Thankfully, as illustrated in the soon-to-be-released Spark 2.3, Apache Arrow and related tools offer the ability to reduce this overhead. This talk will explore how Arrow is being integrated into Spark and how it can be integrated into other systems, but also its limitations and the places where Apache Arrow will not magically save us.
Link: https://fosdem.org/2018/schedule/event/big_data_outside_jvm/
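The copy-cost argument in the abstract above can be illustrated without Spark at all. This is a stdlib-only sketch (not code from the talk): it contrasts serializing a column of floats one value at a time with packing the whole column into a single contiguous buffer, which is the basic idea behind Arrow's columnar format.

```python
# Sketch (stdlib only): per-element copies between runtimes vs. moving a
# whole column as one contiguous buffer, the idea behind columnar formats
# like Apache Arrow.
import pickle
import struct

values = [float(i) for i in range(1000)]

# Row-at-a-time: one serialization call (and one small object) per value.
row_wise = [pickle.dumps(v) for v in values]

# Column-at-a-time: one contiguous buffer for the entire column.
columnar = struct.pack(f"{len(values)}d", *values)

print(len(columnar))                  # 8000 bytes: 8 bytes per float64
print(sum(len(b) for b in row_wise))  # far larger, plus per-call overhead
```

The per-value path pays framing overhead and a function call for every element; the columnar path is one cheap, copyable buffer, which is why batched interchange dominates for simple operations.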
A Glimpse At The Future Of Apache Spark 3.0 With Deep Learning And KubernetesLightbend
In this special guest webinar with Holden Karau, speaker, author and Developer Advocate at Google, we’ll take a walk through some of the interesting JIRAs, look at external components being developed (like deep learning support), and also talk about the future of running real-time Spark workloads on Kubernetes.
Sharing (or stealing) the jewels of python with big data & the jvm (1)Holden Karau
With the new Apache Arrow integration in PySpark 2.3, it is now starting to become reasonable to look to the Python world and ask "what else do we want to steal besides TensorFlow?", or, as a Python developer, to ask "how can I get my code into production without it being rewritten into a mess of Java?"
Regardless of your specific side(s) in the JVM/Python divide, collaboration is getting a lot faster, so let's learn how to share! In this brief talk we will examine sharing some of the wonders of spaCy with the Java world, which still has a somewhat lackluster set of options for NLP.
Are general purpose big data systems eating the world?Holden Karau
Every time there is a new piece of big data technology, we often see many different specific implementations of the same concepts, which eventually consolidate down to a few viable options and then frequently end up getting rolled into part of another, larger project. This talk will examine this trend in the big data ecosystem, look at the exceptions to the "rule", and look at how better interchange formats like Apache Arrow have the potential to change this going forward. In addition to general vague happy feelings (or sad ones, depending on your ideas about how software should be made), this talk will look at some specific examples with deep learning, so if anyone is looking for a little bit of pixie dust to sprinkle on a failing business plan to take to Silicon Valley to raise a Series A, you'll get something out of this as well.
Video - https://www.youtube.com/watch?v=P_YKrLFZQJo
Making the big data ecosystem work together with Python & Apache Arrow, Apach...Holden Karau
Slides from PyData London exploring how the big data ecosystem (currently) works together as well as how different parts of the ecosystem work with Python. Proof-of-concept examples are provided using NLTK & spaCy with Spark. Then we look to the future and how we can improve.
Overcoming the Fear of Contributing to Open SourceAll Things Open
Presented by: Rizel Scarlett
Presented at the All Things Open 2021
Raleigh, NC, USA
Raleigh Convention Center
Abstract: If you're feeling uncertain about contributing to an open source project for the first time, I understand. Navigating the open source space can feel intimidating. In this talk, audience members will learn how to confidently navigate the open source space and gain inspiration to make their first contribution.
Debugging PySpark - Spark Summit East 2017Holden Karau
Apache Spark is one of the most popular big data projects, offering greatly improved performance over traditional MapReduce models. Much of Apache Spark’s power comes from lazy evaluation along with intelligent pipelining, which can make debugging more challenging. This talk will examine how to debug Apache Spark applications, the different options for logging in Spark’s variety of supported languages, as well as some common errors and how to detect them.
Spark’s own internal logging can often be quite verbose, and this talk will examine how to effectively search logs from Apache Spark to spot common problems. In addition to the internal logging, this talk will look at options for logging from within our program itself.
Spark’s accumulators have gotten a bad rap because of how they interact in the event of cache misses or partial recomputes, but this talk will look at how to effectively use Spark’s current accumulators for debugging, as well as a look to the future at the data property type accumulators which may be coming to Spark in a future version.
In addition to reading logs and instrumenting our program with accumulators, Spark’s UI can be of great help for quickly detecting certain types of problems.
Video: https://www.youtube.com/watch?v=A0jYQlxc2FU&feature=youtu.be
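The accumulator caveat mentioned in the abstract above can be shown without a cluster. This is a plain-Python sketch (not code from the talk, and a deliberately minimal stand-in for Spark's real `Accumulator`): it simulates a partition being recomputed after a cache miss, which runs the task's side effects twice.

```python
# Sketch (plain Python, no Spark required): why accumulator values can be
# misleading when a partition is recomputed after a cache miss.

class Accumulator:
    """Minimal stand-in for a Spark accumulator: tasks only add to it."""
    def __init__(self):
        self.value = 0

    def add(self, n):
        self.value += n

def process_partition(records, acc):
    """A 'task' that counts bad records as a side effect."""
    good = []
    for r in records:
        if r < 0:
            acc.add(1)          # count invalid records
        else:
            good.append(r)
    return good

acc = Accumulator()
partition = [1, -2, 3, -4]

process_partition(partition, acc)   # first run of the task
process_partition(partition, acc)   # partition recomputed (e.g. cache miss)

print(acc.value)  # 4, not 2: the side effect ran on both executions
```

The takeaway is that accumulator values are fine as a debugging signal (roughly "did bad records occur?") but should not be trusted as exact counts unless you know the lineage was not recomputed.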
Contributing to Apache Airflow | Journey to becoming Airflow's leading contri...Kaxil Naik
From not knowing Python (let alone Airflow) and submitting a first PR that fixed a typo, to becoming an Airflow Committer, PMC Member, Release Manager, and the #1 committer this year, this talk walks through Kaxil’s journey in the Airflow world.
The second part of this talk explains:
How you can also start your OSS journey by contributing to Airflow
Expanding familiarity with different parts of the Airflow codebase
Committing regularly and steadily to become an Airflow Committer (including the current guidelines for becoming a Committer)
The different mediums of communication (dev list, users list, Slack channel, GitHub Discussions, etc.)
Debugging Apache Spark - Scala & Python super happy fun times 2017Holden Karau
Apache Spark is one of the most popular big data projects, offering greatly improved performance over traditional MapReduce models. Much of Apache Spark’s power comes from lazy evaluation along with intelligent pipelining, which can make debugging more challenging. Holden Karau and Joey Echeverria explore how to debug Apache Spark applications, the different options for logging in Spark’s variety of supported languages, and some common errors and how to detect them.
Spark’s own internal logging can often be quite verbose. Holden and Joey demonstrate how to effectively search logs from Apache Spark to spot common problems and discuss options for logging from within your program itself. Spark’s accumulators have gotten a bad rap because of how they interact in the event of cache misses or partial recomputes, but Holden and Joey look at how to effectively use Spark’s current accumulators for debugging before gazing into the future to see the data property type accumulators that may be coming to Spark in future versions. And in addition to reading logs and instrumenting your program with accumulators, Spark’s UI can be of great help for quickly detecting certain types of problems. Holden and Joey cover how to quickly use the UI to figure out if certain types of issues are occurring in our job.
Keeping the fun in functional w/ Apache Spark @ Scala Days NYCHolden Karau
Apache Spark has been a great driver of not only Scala adoption, but introducing a new generation of developers to functional programming concepts. As Spark places more emphasis on its newer DataFrame & Dataset APIs, it’s important to ask ourselves how we can benefit from this while still keeping our fun functional roots. We will explore the cases where the Dataset APIs empower us to do cool things we couldn’t before, what the different approaches to serialization mean, and how to figure out when the shiny new API is actually just trying to steal your lunch money (aka CPU cycles).
Simplifying training deep and serving learning models with big data in python...Holden Karau
More Serious Business Kitty Description:
While some deep learning systems have promised to not require any kind of data preparation or cleaning, in practice many folks find that effectively training their models requires some amount of data preparation and often we spend more time on our data preparation than anything else. This talk will examine tools for data preparation that can be used at scale on "big-data" and then how to use their results on-line at serving time (where we hopefully no longer require a cluster to predict every new user).
Less Serious Business Kitty Description:
Deep learning, in addition to being a world-class tool for detecting the presence of cats, requires large amounts of data for training. As much as vendors may say "no data prep required", they are all lying*. This talk will look at tools to build a deep learning pipeline with feature prep on top of existing big data technologies, without rewriting your code for serving.
Traditionally, feature prep done in a big data system like Spark, Flink, or Beam would have to be rewritten for the online serving component. This is about as much fun as when we have to rewrite our sample Python code into Java because, for some reason, that's what a lot of companies associate with "production." Come for the deep learning buzzwords, stay to learn how to perform online serving without writing Java code.
*All vendors are optimists when it comes to their own products, including the vendors who pay Holden and Gris, but they pay us, so it's ok.
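One way to avoid the batch-vs-serving rewrite the abstract above describes is to keep feature prep in a single pure function. This is a hedged, plain-Python sketch with hypothetical field names (not the talk's actual pipeline): in a real system the batch side would be something like a Spark map or pandas UDF over the same function.

```python
# Sketch (plain Python, hypothetical record fields): one pure feature-prep
# function shared by the batch training pipeline and the online serving path,
# so serving does not require a rewrite in another language.

def prep_features(record):
    """Pure feature-prep step: bucket and derive features from one raw record."""
    return {
        "age_bucket": min(record["age"] // 10, 9),  # cap at bucket 9
        "name_len": len(record["name"]),
    }

# Batch: map the same function over a dataset (in Spark, this could be
# rdd.map(prep_features) or a vectorized UDF over a DataFrame).
training_rows = [{"age": 34, "name": "Kit"}, {"age": 71, "name": "Professor"}]
train_features = [prep_features(r) for r in training_rows]

# Serving: call the identical function on a single incoming request.
request = {"age": 25, "name": "Timbit"}
online_features = prep_features(request)

print(online_features)  # {'age_bucket': 2, 'name_len': 6}
```

Because the function is pure and dependency-free, the training pipeline and the serving endpoint cannot drift apart, which is the usual failure mode when the two paths are maintained separately.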
Many of the recent big data systems, like Hadoop, Spark, and Kafka, are written primarily in JVM languages. At the same time, there is a wealth of tools for data science and data analytics that exist outside of the JVM. Holden Karau and Rachel Warren explore the state of the current big data ecosystem and explain how to best work with it in non-JVM languages. While much of the focus will be on Python + Spark, the talk will also include interesting anecdotes about how these lessons apply to other systems (including Kafka).
Holden and Rachel detail how to bridge the gap using PySpark and discuss other solutions like Kafka Streams as well. They also outline the challenges of pure Python solutions like dask. Holden and Rachel start with the current architecture of PySpark and its evolution. They then turn to the future, covering Arrow-accelerated interchange for Python functions, how to expose Python machine learning models into Spark, and how to use systems like Spark to accelerate training of traditional Python models. They also dive into what other similar systems are doing as well as what the options are for (almost) completely ignoring the JVM in the big data space.
Python users will learn how to more effectively use systems like Spark and understand how the design is changing. JVM developers will gain an understanding of how to work with Python code from data scientists and Python developers while avoiding the traditional trap of needing to rewrite everything.
Data Lakehouse Symposium | Day 1 | Part 1Databricks
The world of data architecture began with applications. Next came data warehouses. Then text was organized into a data warehouse.
Then one day the world discovered a whole new kind of data that was being generated by organizations. The world found that machines generated data that could be transformed into valuable insights. This was the origin of what is today called the data lakehouse. The evolution of data architecture continues today.
Come listen to industry experts describe this transformation of ordinary data into a data architecture that is invaluable to business. Simply put, organizations that take data architecture seriously are going to be at the forefront of business tomorrow.
This is an educational event.
Several of the authors of the book Building the Data Lakehouse will be presenting at this symposium.
Data Lakehouse Symposium | Day 1 | Part 2Databricks
The world of data architecture began with applications. Next came data warehouses. Then text was organized into a data warehouse.
Then one day the world discovered a whole new kind of data that was being generated by organizations. The world found that machines generated data that could be transformed into valuable insights. This was the origin of what is today called the data lakehouse. The evolution of data architecture continues today.
Come listen to industry experts describe this transformation of ordinary data into a data architecture that is invaluable to business. Simply put, organizations that take data architecture seriously are going to be at the forefront of business tomorrow.
This is an educational event.
Several of the authors of the book Building the Data Lakehouse will be presenting at this symposium.
5 Critical Steps to Clean Your Data Swamp When Migrating Off of HadoopDatabricks
In this session, learn how to quickly supplement your on-premises Hadoop environment with a simple, open, and collaborative cloud architecture that enables you to generate greater value with scaled application of analytics and AI on all your data. You will also learn five critical steps for a successful migration to the Databricks Lakehouse Platform along with the resources available to help you begin to re-skill your data teams.
Democratizing Data Quality Through a Centralized PlatformDatabricks
Bad data leads to bad decisions and broken customer experiences. Organizations depend on complete and accurate data to power their business, maintain efficiency, and uphold customer trust. With thousands of datasets and pipelines running, how do we ensure that all data meets quality standards, and that expectations are clear between producers and consumers? Investing in shared, flexible components and practices for monitoring data health is crucial for a complex data organization to rapidly and effectively scale.
At Zillow, we built a centralized platform to meet our data quality needs across stakeholders. The platform is accessible to engineers, scientists, and analysts, and seamlessly integrates with existing data pipelines and data discovery tools. In this presentation, we will provide an overview of our platform’s capabilities, including:
Giving producers and consumers the ability to define and view data quality expectations using a self-service onboarding portal
Performing data quality validations using libraries built to work with Spark
Dynamically generating pipelines that can be abstracted away from users
Flagging data that doesn’t meet quality standards at the earliest stage and giving producers the opportunity to resolve issues before use by downstream consumers
Exposing data quality metrics alongside each dataset to provide producers and consumers with a comprehensive picture of health over time
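As a toy illustration (plain Python, not Zillow's actual stack, and with hypothetical names), the kind of completeness expectation such a platform evaluates over each dataset can be sketched as:

```python
def null_rate(rows, column):
    """Fraction of rows where `column` is missing or None -- the same
    completeness check a Spark-based validation library would express
    over a DataFrame column."""
    if not rows:
        return 0.0
    missing = sum(1 for r in rows if r.get(column) is None)
    return missing / len(rows)


def passes_expectation(rows, column, max_null_rate=0.01):
    """Flag the dataset before downstream consumers see it: producers
    get a chance to resolve issues when this returns False."""
    return null_rate(rows, column) <= max_null_rate
```

In a real pipeline the same predicate would run as a Spark aggregation at the earliest stage of the pipeline, with its result exposed as a per-dataset health metric over time.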
Learn to Use Databricks for Data Science - Databricks
Data scientists face numerous challenges throughout the data science workflow that hinder productivity. As organizations continue to become more data-driven, a collaborative environment is more critical than ever — one that provides easier access and visibility into the data, reports and dashboards built against the data, reproducibility, and insights uncovered within the data. Join us to hear how Databricks’ open and collaborative platform simplifies data science by enabling you to run all types of analytics workloads, from data preparation to exploratory analysis and predictive analytics, at scale — all on one unified platform.
Why APM Is Not the Same As ML Monitoring - Databricks
Application performance monitoring (APM) has become the cornerstone of software engineering allowing engineering teams to quickly identify and remedy production issues. However, as the world moves to intelligent software applications that are built using machine learning, traditional APM quickly becomes insufficient to identify and remedy production issues encountered in these modern software applications.
As a lead software engineer at New Relic, my team built high-performance monitoring systems including Insights, Mobile, and SixthSense. As I transitioned to building ML monitoring software, I found the architectural principles and design choices underlying APM were not a good fit for this brand new world. In fact, blindly following APM designs led us down paths that would have been better left unexplored.
In this talk, I draw upon my (and my team’s) experience building an ML Monitoring system from the ground up and deploying it on customer workloads running large-scale ML training with Spark as well as real-time inference systems. I will highlight how the key principles and architectural choices of APM don’t apply to ML monitoring. You’ll learn why, understand what ML Monitoring can successfully borrow from APM, and hear what is required to build a scalable, robust ML Monitoring architecture.
The Function, the Context, and the Data—Enabling ML Ops at Stitch Fix - Databricks
Autonomy and ownership are core to working at Stitch Fix, particularly on the Algorithms team. We enable data scientists to deploy and operate their models independently, with minimal need for handoffs or gatekeeping. By writing a simple function and calling out to an intuitive API, data scientists can harness a suite of platform-provided tooling meant to make ML operations easy. In this talk, we will dive into the abstractions the Data Platform team has built to enable this. We will go over the interface data scientists use to specify a model and what that hooks into, including online deployment, batch execution on Spark, and metrics tracking and visualization.
Stage Level Scheduling Improving Big Data and AI Integration - Databricks
In this talk, I will dive into the stage level scheduling feature added to Apache Spark 3.1. Stage level scheduling extends upon Project Hydrogen by improving big data ETL and AI integration, and it also enables multiple other use cases. It is beneficial any time the user wants to change container resources between stages in a single Apache Spark application, whether those resources are CPU, memory, or GPUs. One of the most popular use cases is enabling end-to-end scalable Deep Learning and AI to efficiently use GPU resources. In this type of use case, users read from a distributed file system, do data manipulation and filtering to get the data into the format the Deep Learning algorithm needs for training or inference, and then send the data into a Deep Learning algorithm. Using stage level scheduling combined with accelerator-aware scheduling enables users to seamlessly go from ETL to Deep Learning running on the GPU by adjusting the container requirements for different stages in Spark within the same application. This makes writing these applications easier and can help with hardware utilization and costs.
There are other ETL use cases where users want to change CPU and memory resources between stages, for instance when there is data skew or when the data size is much larger in certain stages of the application. In this talk, I will go over the feature details, cluster requirements, the API, and use cases. I will demo how the stage level scheduling API can be used by Horovod to seamlessly go from data preparation to training using the TensorFlow Keras API on GPUs.
The talk will also touch on other new Apache Spark 3.1 functionality, such as pluggable caching, which can be used to enable faster dataframe access when operating from GPUs.
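A minimal sketch of the API described above, using the `pyspark.resource` classes added in Spark 3.1 (the resource amounts and the `sc`/`rdd` arguments here are illustrative, not a recommendation):

```python
def train_with_gpus(rdd):
    """Sketch: request GPU containers for the training stages only.

    Assumes a Spark 3.1+ cluster with accelerator-aware scheduling
    configured; upstream ETL stages keep the default CPU-only profile.
    """
    from pyspark.resource import (ExecutorResourceRequests,
                                  ResourceProfileBuilder,
                                  TaskResourceRequests)

    # Executor-level resources for the Deep Learning stage.
    exec_reqs = ExecutorResourceRequests().cores(4).memory("8g").resource("gpu", 1)
    # Per-task resources: one GPU per training task.
    task_reqs = TaskResourceRequests().cpus(1).resource("gpu", 1)

    profile = ResourceProfileBuilder().require(exec_reqs).require(task_reqs).build

    # Only stages computing this RDD run under the GPU profile.
    return rdd.withResources(profile)
```

The key point is that the profile is attached per RDD, so a single application can move from CPU-sized ETL containers to GPU containers without being resubmitted.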
Simplify Data Conversion from Spark to TensorFlow and PyTorch - Databricks
In this talk, I would like to introduce an open-source tool built by our team that simplifies the data conversion from Apache Spark to deep learning frameworks.
Imagine you have a large dataset, say 20 GBs, and you want to use it to train a TensorFlow model. Before feeding the data to the model, you need to clean and preprocess your data using Spark. Now you have your dataset in a Spark DataFrame. When it comes to the training part, you may have the problem: How can I convert my Spark DataFrame to some format recognized by my TensorFlow model?
The existing data conversion process can be tedious. For example, to convert an Apache Spark DataFrame to a TensorFlow Dataset file format, you need to either save the Apache Spark DataFrame on a distributed filesystem in Parquet format and load the converted data with third-party tools such as Petastorm, or save it directly in TFRecord files with spark-tensorflow-connector and load it back using TFRecordDataset. Both approaches take more than 20 lines of code to manage the intermediate data files, rely on different parsing syntax, and require extra attention for handling vector columns in the Spark DataFrames. In short, all these engineering frictions greatly reduce data scientists’ productivity.
The Databricks Machine Learning team contributed a new Spark Dataset Converter API to Petastorm to simplify this tedious data conversion process. With the new API, it takes a few lines of code to convert a Spark DataFrame to a TensorFlow Dataset or a PyTorch DataLoader with default parameters.
In the talk, I will use an example to show how to use the Spark Dataset Converter to train a TensorFlow model and how simple it is to go from single-node training to distributed training on Databricks.
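A sketch of the converter API contributed to Petastorm (the class and function names follow the `petastorm.spark` module; the cache directory, model, and batch/epoch numbers are illustrative):

```python
def train_on_spark_df(spark, df, model, cache_dir):
    """Convert a Spark DataFrame to a tf.data.Dataset and train on it.

    Requires a Spark session, a Keras-style model, and a DBFS/filesystem
    URL for the intermediate cache (all supplied by the caller).
    """
    from petastorm.spark import SparkDatasetConverter, make_spark_converter

    # The converter materializes the DataFrame once, as Parquet, under
    # this cache directory; repeated epochs reuse the cached files.
    spark.conf.set(SparkDatasetConverter.PARENT_CACHE_DIR_URL_CONF, cache_dir)

    converter = make_spark_converter(df)
    with converter.make_tf_dataset() as dataset:  # yields a tf.data.Dataset
        model.fit(dataset, epochs=1, steps_per_epoch=len(converter) // 32)

    converter.delete()  # clean up the cached Parquet files
```

Compared with the manual Parquet-then-Petastorm or TFRecord routes above, the intermediate file management and vector-column handling are hidden behind the converter.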
Scaling your Data Pipelines with Apache Spark on Kubernetes - Databricks
There is no doubt Kubernetes has emerged as the next generation of cloud native infrastructure to support a wide variety of distributed workloads. Apache Spark has evolved to run both Machine Learning and large scale analytics workloads, and there is growing interest in running Apache Spark natively on Kubernetes. By combining the flexibility of Kubernetes with Apache Spark’s scalable data processing, you can run data and machine learning pipelines on this infrastructure while effectively utilizing the resources at your disposal.
In this talk, Rajesh Thallam and Sougata Biswas will share how to effectively run your Apache Spark applications on Google Kubernetes Engine (GKE) and Google Cloud Dataproc, and orchestrate the data and machine learning pipelines with managed Apache Airflow on GKE (Google Cloud Composer). The following topics will be covered:
– Understanding key traits of Apache Spark on Kubernetes
– Things to know when running Apache Spark on Kubernetes, such as autoscaling
– Demonstrating analytics pipelines running on Apache Spark, orchestrated with Apache Airflow on a Kubernetes cluster
Scaling and Unifying SciKit Learn and Apache Spark Pipelines - Databricks
Pipelines have become ubiquitous, as the need for stringing multiple functions to compose applications has gained adoption and popularity. Common pipeline abstractions such as “fit” and “transform” are even shared across divergent platforms such as Python Scikit-Learn and Apache Spark.
Scaling pipelines at the level of simple functions is desirable for many AI applications; however, it is not directly supported by Ray’s parallelism primitives. In this talk, Raghu will describe a pipeline abstraction that takes advantage of Ray’s compute model to efficiently scale arbitrarily complex pipeline workflows. He will demonstrate how this abstraction cleanly unifies pipeline workflows across multiple platforms such as Scikit-Learn and Spark, and achieves nearly optimal scale-out parallelism on pipelined computations.
Attendees will learn how pipelined workflows can be mapped to Ray’s compute model and how they can both unify and accelerate their pipelines with Ray.
Sawtooth Windows for Feature Aggregations - Databricks
In this talk about Zipline, we will introduce a new type of windowing construct called a sawtooth window. We will describe various properties of sawtooth windows that we utilize to achieve online-offline consistency, while still maintaining high throughput, low read latency, and tunable write latency for serving machine learning features. We will also talk about a simple deployment strategy for correcting feature drift caused by operations over change data that are not abelian groups.
We want to present multiple anti-patterns utilizing Redis in unconventional ways to get the maximum out of Apache Spark. All examples presented are tried and tested in production at scale at Adobe. The most common integration is spark-redis, which interfaces with Redis as a DataFrame backing store or as an upstream for Structured Streaming. We deviate from the common use cases to explore where Redis can plug gaps while scaling out high-throughput applications in Spark.
Niche 1 : Long Running Spark Batch Job – Dispatch New Jobs by polling a Redis Queue
· Why?
o Custom queries on top of a table; we load the data once and query N times
· Why not Structured Streaming
· Working Solution using Redis
Niche 2 : Distributed Counters
· Problems with Spark Accumulators
· Utilize Redis Hashes as distributed counters
· Precautions for retries and speculative execution
· Pipelining to improve performance
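A sketch of Niche 2 under assumed names (redis-py client; the key layout is hypothetical): aggregate locally within each partition, then flush the partial counts to a Redis hash in a single pipelined round-trip.

```python
def count_partition(rows):
    """Pure local aggregation, run inside mapPartitions before flushing,
    so each executor sends one batched update rather than one per row."""
    counts = {}
    for row in rows:
        counts[row] = counts.get(row, 0) + 1
    return counts


def flush_counts(local_counts, redis_url, hash_key):
    """Flush a partition's partial counts into a Redis hash used as a
    distributed counter. HINCRBY is atomic, so concurrent executors
    never lose increments the way Spark accumulators can mislead."""
    import redis

    r = redis.Redis.from_url(redis_url)
    # Pipelining: all HINCRBYs go out in one round-trip.
    with r.pipeline(transaction=False) as pipe:
        for field, n in local_counts.items():
            pipe.hincrby(hash_key, field, n)
        pipe.execute()
```

Because HINCRBY is atomic but not idempotent, retried or speculatively executed tasks would double-count; one common precaution (per the bullet above) is to record a guard entry keyed by partition and attempt ID before flushing.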
Re-imagine Data Monitoring with whylogs and Spark - Databricks
In the era of microservices, decentralized ML architectures, and complex data pipelines, data quality has become a bigger challenge than ever. When data is involved in complex business processes and decisions, bad data can, and will, affect the bottom line. As a result, ensuring data quality across the entire ML pipeline is both costly and cumbersome, while data monitoring is often fragmented and performed ad hoc. To address these challenges, we built whylogs, an open source standard for data logging. It is a lightweight data profiling library that enables end-to-end data profiling across the entire software stack. The library implements a language- and platform-agnostic approach to data quality and data monitoring. It can work with different modes of data operations, including streaming, batch, and IoT data.
In this talk, we will provide an overview of the whylogs architecture, including its lightweight statistical data collection approach and various integrations. We will demonstrate how the whylogs integration with Apache Spark achieves large scale data profiling, and we will show how users can apply this integration into existing data and ML pipelines.
Raven: End-to-end Optimization of ML Prediction Queries - Databricks
Machine learning (ML) models are typically part of prediction queries that consist of a data processing part (e.g., for joining, filtering, cleaning, featurization) and an ML part invoking one or more trained models. In this presentation, we identify significant and unexplored opportunities for optimization. To the best of our knowledge, this is the first effort to look at prediction queries holistically, optimizing across both the ML and SQL components.
We will present Raven, an end-to-end optimizer for prediction queries. Raven relies on a unified intermediate representation that captures both data processing and ML operators in a single graph structure.
This allows us to introduce optimization rules that
(i) reduce unnecessary computations by passing information between the data processing and ML operators
(ii) leverage operator transformations (e.g., turning a decision tree to a SQL expression or an equivalent neural network) to map operators to the right execution engine, and
(iii) integrate compiler techniques to take advantage of the most efficient hardware backend (e.g., CPU, GPU) for each operator.
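As a toy illustration of rule (ii) (the helper below is hypothetical; Raven's actual rewrite handles full trees and neural-network equivalents), a depth-1 decision tree maps directly to a SQL expression, which lets the split run inside the SQL engine instead of a separate ML runtime:

```python
def tree_to_sql(feature, threshold, left_val, right_val):
    """Rewrite a single decision-tree split as an equivalent SQL
    expression. A full tree becomes nested CASE expressions."""
    return (f"CASE WHEN {feature} <= {threshold} "
            f"THEN {left_val} ELSE {right_val} END")
```

For example, `tree_to_sql("age", 30, 1, 0)` produces a predicate the optimizer can push into the data processing part of the query.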
We have implemented Raven as an extension to Spark’s Catalyst optimizer to enable the optimization of SparkSQL prediction queries. Our implementation also allows the optimization of prediction queries in SQL Server. As we will show, Raven is capable of improving prediction query performance on Apache Spark and SQL Server by up to 13.1x and 330x, respectively. For complex models, where GPU acceleration is beneficial, Raven provides up to 8x speedup compared to state-of-the-art systems. As part of the presentation, we will also give a demo showcasing Raven in action.
Processing Large Datasets for ADAS Applications using Apache Spark - Databricks
Semantic segmentation is the classification of every pixel in an image/video. The segmentation partitions a digital image into multiple objects to simplify/change the representation of the image into something that is more meaningful and easier to analyze [1][2]. The technique has a wide variety of applications ranging from perception in autonomous driving scenarios to cancer cell segmentation for medical diagnosis.
Exponential growth in the datasets that require such segmentation is driven by improvements in the accuracy and quality of the sensors generating the data extending to 3D point cloud data. This growth is further compounded by exponential advances in cloud technologies enabling the storage and compute available for such applications. The need for semantically segmented datasets is a key requirement to improve the accuracy of inference engines that are built upon them.
Streamlining the accuracy and efficiency of these systems directly affects the value of the business outcome for organizations that are developing such functionalities as a part of their AI strategy.
This presentation details workflows for labeling, preprocessing, modeling, and evaluating performance/accuracy. Scientists and engineers leverage domain-specific features/tools that support the entire workflow from labeling the ground truth, handling data from a wide variety of sources/formats, developing models and finally deploying these models. Users can scale their deployments optimally on GPU-based cloud infrastructure to build accelerated training and inference pipelines while working with big datasets. These environments are optimized for engineers to develop such functionality with ease and then scale against large datasets with Spark-based clusters on the cloud.
Massive Data Processing in Adobe Using Delta Lake - Databricks
At Adobe Experience Platform, we ingest TBs of data every day and manage PBs of data for our customers as part of the Unified Profile offering. At the heart of this is complex ingestion of a mix of normalized and denormalized data, with various linkage scenarios powered by a central Identity Linking Graph. This helps power various marketing scenarios that are activated across multiple platforms and channels, like email and advertisements. We will go over how we built a cost-effective and scalable data pipeline using Apache Spark and Delta Lake, and share our experiences.
What are we storing?
Multi Source – Multi Channel Problem
Data Representation and Nested Schema Evolution
Performance Trade Offs with Various formats
Go over anti-patterns used
(String FTW)
Data Manipulation using UDFs
Writer Worries and How to Wipe them Away
Staging Tables FTW
Datalake Replication Lag Tracking
Performance Time!
Building RAG with self-deployed Milvus vector database and Snowpark Container... - Zilliz
This talk will give hands-on advice on building RAG applications with an open-source Milvus database deployed as a Docker container. We will also introduce the integration of Milvus with Snowpark Container Services.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Enhancing adoption of Open Source Libraries: A case study on Albumentations.AI - Vladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
A tale of scale & speed: How the US Navy is enabling software delivery from l... - sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... - Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs - Alex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
Pushing the limits of ePRTC: 100ns holdover for 100 days - Adtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Securing your Kubernetes cluster: a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Generative AI Deep Dive: Advancing from Proof of Concept to Production - Aggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
How to Get CNIC Information System with Paksim Ga.pptx - danishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Getting Started Contributing to Apache Spark – From PR, CR, JIRA, and Beyond
1. Thanks for coming early!
Want to make clothes from code?
https://haute.codes
Want to hear about a KF book?
http://www.introtomlwithkubeflow.com
Teach kids Apache Spark?
http://distributedcomputing4kids.com
3. @holdenkarau
Who am I?
Holden
● Preferred pronouns: she/her
● Co-author of the Learning Spark & High Performance Spark books
● Spark PMC & Committer
● Twitter @holdenkarau
● Live stream code & reviews: http://bit.ly/holdenLiveOSS
● Spark Dev in the bay area (no longer @ Google)
5. @holdenkarau
What we are going to explore together!
Getting a change into Apache Spark & the components
involved:
● The current state of the Apache Spark dev community
● Reasons to contribute to Apache Spark
● Different ways to contribute
● Places to find things to contribute
● Tooling around code & doc contributions
Torsten Reuschling
6. @holdenkarau
Who I think you wonderful humans are?
● Nice* people
● Don’t mind pictures of cats
● May know some Apache Spark?
● Want to contribute to Apache Spark
7. @holdenkarau
Why I’m assuming you might want to contribute:
● Fix your own bugs/problems with Apache Spark
● Learn more about distributed systems (for fun or profit)
● Improve your Scala/Python/R/Java experience
● You <3 functional programming and want to trick more
people into using it
● “Credibility” of some vague type
● You just like hacking on random stuff and Spark seems
shiny
8. @holdenkarau
What’s the state of the Spark dev community?
● Really large number of contributors
● Active PMC & Committers somewhat concentrated
○ Better than we used to be
● Also a lot of SF Bay Area - but certainly not exclusively
so
gigijin
9. @holdenkarau
How can we contribute to Spark?
● Direct code in the Apache Spark code base
● Code in packages built on top of Spark
● Code reviews
● Yak shaving (aka fixing things that Spark uses)
● Documentation improvements & examples
● Books, Talks, and Blogs
● Answering questions (mailing lists, stack overflow, etc.)
● Testing & Release Validation
Andrey
10. @holdenkarau
Which is right for you?
● Direct code in the Apache Spark code base
○ High visibility, some things can only really be done here
○ Can take a lot longer to get changes in
● Code in packages built on top of Spark
○ Really great for things like formats or standalone features
● Yak shaving (aka fixing things that Spark uses)
○ Super important to do sometimes - can take even longer to get in
romana klee
11. @holdenkarau
Which is right for you? (continued)
● Code reviews
○ High visibility to PMC, can be faster to get started, easier to time
box
○ Less tracked in metrics
● Documentation improvements & examples
○ Lots of places to contribute - mixed visibility - large impact
● Advocacy: Books, Talks, and Blogs
○ Can be high visibility
romana klee
12. @holdenkarau
Testing/Release Validation
● Join the dev@ list and look for [VOTE] threads
○ Check and see if Spark deploys on your environment
○ If your application still works, or if we need to fix something
○ Great way to keep your Spark application working with less work
● Adding more automated tests is good too
○ Especially integration tests
● Check out release previews
○ Run mirrors of your production workloads if possible and compare the
results
○ The earlier we know the easier it is to improve
○ Even if we can't fix it, gives you a heads up on coming changes
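Comparing a production workload's results between a current release and a release candidate can be sketched with plain Python; this is a hypothetical helper (not a Spark API) for diffing two runs' aggregate results, with a small tolerance so float round-off doesn't drown out real mismatches:

```python
# Hypothetical sketch: compare aggregate results from a baseline run and a
# release-candidate run of the same workload. Any mismatch is worth
# reporting on the dev@ [VOTE] thread. The tolerance only absorbs float
# round-off, not real correctness differences.

def compare_runs(baseline, candidate, tolerance=1e-9):
    """Return (key, baseline_value, candidate_value) for every difference."""
    mismatches = []
    for key in sorted(set(baseline) | set(candidate)):
        old, new = baseline.get(key), candidate.get(key)
        if old is None or new is None:
            # Present in only one of the two runs.
            mismatches.append((key, old, new))
        elif isinstance(old, float) and abs(old - new) > tolerance:
            mismatches.append((key, old, new))
        elif not isinstance(old, float) and old != new:
            mismatches.append((key, old, new))
    return mismatches
```

An empty list back means the candidate matched your baseline; anything else is exactly the kind of early heads-up the slide is asking for.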
13. @holdenkarau
Helping users
● Join the user@ list to answer people's questions
○ You'll probably want to make some filter rules so you see the
relevant ones
○ I tried this with ML once -- it didn't go great. Now I look for
specific Python questions.
● Contribute to docs (internal and external)
● Stackoverflow questions
● Blog posts
● Tools to explain errors
● Pay it forward
● Stream your experiences -- there is value in not being
alone
Mitchell Friedman
14. @holdenkarau
Contributing Code Directly to Spark
● Maybe we encountered a bug we want to fix
● Maybe we’ve got a feature we want to add
● Either way we should see if other people are doing it
● And if what we want to do is complex, it might be better
to find something simple to start with
● It’s dangerous to go alone - take this
http://spark.apache.org/contributing.html
Jon Nelson
15. @holdenkarau
The different pieces of Spark: 3+?
[Diagram: the Spark ecosystem]
● Apache Spark “Core”
● SQL & DataFrames
● Streaming & Structured Streaming
● Language APIs: Scala, Java, Python, & R
● Graph tools: Bagel & GraphX
● Spark ML & MLlib
● Community packages
● Cluster managers: Spark on YARN, Spark on Mesos, Spark on Kubernetes, & Standalone Spark
16. @holdenkarau
Choosing a component?
● Core
○ Conservative about external changes, but biggest impact
● ML / MLlib
○ ML is the home of the future - you can improve existing algorithms - new algorithms face an uphill battle
● Structured Streaming
○ Current API is in a lot of flux so it is difficult for external
participation
● SQL
○ Lots of fun stuff - very active - I have limited personal experience
● Python / R
○ Improve coverage of current APIs, improve performance
Rikki's Refuge
17. @holdenkarau
Choosing a component? (cont)
● GraphX - See (external) GraphFrames instead
● Kubernetes
○ New, lots of active work and reviewers
● YARN
○ Old faithful, always needs a little work.
● Mesos
○ Needs some love, probably easy-ish-path to committer (still hard)
● Standalone
○ Not a lot going on
Rikki's Refuge
18. @holdenkarau
Onto JIRA - Issue tracking funtimes
● It’s like Bugzilla or FogBugz
● There is an Apache JIRA for many Apache projects
● You can (and should) sign up for an account
● All changes in Spark (now) require a JIRA
● https://www.youtube.com/watch?v=ca8n9uW3afg
● Check it out at:
○ https://issues.apache.org/jira/browse/SPARK
19. @holdenkarau
What we can do with ASF JIRA?
● Search for issues (remember to filter to Spark project)
● Create new issues
○ search first to see if someone else has reported it
● Comment on issues to let people know we are working on it
● Ask people for clarification or help
○ e.g. “Reading this I think you want the null values to be replaced by
a string when processing - is that correct?”
○ @mentions work here too
20. @holdenkarau
What can’t we do with ASF JIRA?
● Assign issues (to ourselves or other people)
○ In lieu of assigning we can “watch” & comment
● Post long design documents (create a Google Doc & link to
it from the JIRA)
● Tag issues
○ While we can add tags, they often get removed
22. @holdenkarau
Finding a good “starter” issue:
● https://issues.apache.org/jira/browse/SPARK
○ Has a starter issue tag, but it's inconsistently applied
● Instead read through and look for simple issues
● Pick something in the same component you eventually want to work in
● Look at the reporter and commenters - is there a committer or someone
whose name you recognize?
● Leave a comment that says you are going to start working on this
● Look for old issues that we couldn't fix because of API compatibility
23. @holdenkarau
Going beyond reported issues:
● Read the user list & look for issues
● Grep for TODO in components you are interested in (e.g. grep -r TODO ./python/pyspark or grep -R TODO ./core/src)
● Look between language APIs and see if anything is missing that you think is interesting
● Check deprecations (internal & external)
neko kabachi
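The grep commands above can also be done portably with the Python standard library; this is an illustrative sketch (the extensions and paths are examples, not Spark conventions) that walks a source tree and collects every TODO:

```python
# A rough stdlib equivalent of `grep -r TODO`: walk a source tree and
# collect (path, line number, text) for every line mentioning TODO.
# The default extensions are illustrative, not a Spark convention.
import os

def find_todos(root, extensions=(".py", ".scala")):
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(extensions):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="replace") as handle:
                for lineno, line in enumerate(handle, start=1):
                    if "TODO" in line:
                        hits.append((path, lineno, line.strip()))
    return hits
```

Pointing it at, say, a checkout's `python/pyspark` directory gives you a list to skim for something small and interesting to pick up.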
24. @holdenkarau
While we are here: Bug Triage
● Add tags as you go
○ e.g. Found a good starter issue in another area? Tag it!
● Things that are questions in the bug tracker?
○ Redirect folks to the dev/user lists gently and helpfully
● Data correctness issues tagged as "minor"?
○ Retag them as "blocker" so important issues aren't missed
● Additional information required to be useful?
○ Let people know what would help the bug be more actionable
● Old issue - not sure if it's fixed?
○ Try and repro. A repro from a 2nd person is so valuable
● It's ok not to look at all of the issues
Carol VanHook
27. @holdenkarau
But before we get too far:
● Spark wishes to maintain compatibility between releases
● We're working on 3 though, so this is the time to break things
Meagan Fisher
28. @holdenkarau
Getting at the code: yay for GitHub :)
● https://github.com/apache/spark
● Make a fork of it
● Clone it locally
dougwoods
31. @holdenkarau
What about documentation changes?
● Still use JIRAs to track
● We can’t edit the wiki :(
● But a lot of documentation lives in docs/*.md
Kreg Steppe
32. @holdenkarau
Building Spark’s docs
./docs/README.md has a lot of info - but quickly:
SKIP_API=1 jekyll build
SKIP_API=1 jekyll serve --watch
*Requires a recentish jekyll - the install instructions assume Ruby 2.0 only; on Debian-based systems substitute gem2.0 for gem
33. @holdenkarau
Finding your way around the project
● Organized into sub-projects by directory
● IntelliJ is very popular with Spark developers
○ The free version is fine
● Some people like using emacs + ensime or magit too
● Language specific code is in each sub directory
34. @holdenkarau
Testing the issue
The spark-shell can often be a good way to verify the issue
reported in the JIRA is still occurring and come up with a
reasonable test.
Once you’ve got a handle on the issue in the spark-shell (or
if you decide to skip that step) check out
./[component]/src/test for Scala or doctests for Python
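As a toy illustration of the doctest style used on the Python side (the function and names below are made up for this example, not PySpark APIs):

```python
# A made-up example of the doctest style: the expected output lives in the
# docstring and the doctest runner executes it. Real PySpark tests live
# under ./python/pyspark -- nothing here is a Spark API.
import doctest

def normalize_name(name):
    """Strip whitespace and lowercase a column name.

    >>> normalize_name("  UserId ")
    'userid'
    >>> normalize_name("age")
    'age'
    """
    return name.strip().lower()

if __name__ == "__main__":
    # failed == 0 means every docstring example passed.
    print(doctest.testmod().failed)
```

The nice part of this style is that the examples double as documentation, which is why so much of the PySpark API is tested this way.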
35. @holdenkarau
While we get our code working:
● Remember to follow the style guides
○ http://spark.apache.org/contributing.html#code-style-guide
● Please always add tests
○ For development we can run scala test with “sbt [module]/testOnly”
○ In python we can specify module with ./python/run-tests -m
● ./dev/lint-scala & ./dev/lint-python check for some style
● Changing the API? Make sure we pass or you update MiMa!
○ Sometimes it's OK to make breaking changes, and MiMa can be a bit overzealous, so adding exceptions is common
36. @holdenkarau
A bit more on MiMa
● Spark wishes to maintain binary compatibility
○ in non-experimental components
○ 3.0 can be different
● MiMa exclusions can be added if we verify (and document
how we verified) the compatibility
● Often MiMa is a bit over sensitive so don’t feel stressed
- feel free to ask for help if confused
Julie
Krawczyk
37. @holdenkarau
Making the change:
No arguing about which editor please - kthnx
Making a doc change? Look inside docs/*.md
Making a code change? grep or intellij or github inside
project codesearch can all help you find what you're looking
for.
39. @holdenkarau
Yay! Let’s make a PR :)
● Push to your branch
● Visit github
● Create PR (put JIRA name in title as well as component)
○ Components control where our PR shows up in
https://spark-prs.appspot.com/
● If you’ve been whitelisted tests will run
● Otherwise will wait for someone to verify
● Tag it “WIP” if it's a work in progress (but maybe wait)
[puamelia]
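The branch-and-commit flow behind that push can be sketched end-to-end; this is a hypothetical walkthrough in a throwaway repo (the JIRA id SPARK-12345, branch name, and commit message are all made up):

```shell
# Hypothetical sketch of the branch/commit flow behind a Spark PR, run in
# a throwaway repo. The JIRA id (SPARK-12345) and messages are made up;
# in real life you'd push the branch to your fork and open the PR on
# GitHub with the JIRA id and component in the title.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q demo
cd demo
git config user.email "you@example.com"
git config user.name "You"
# Branch named after the JIRA issue you commented on:
git checkout -q -b SPARK-12345-fix-null-handling
echo "fix" > fix.txt
git add fix.txt
# The [SPARK-XXXXX][COMPONENT] title convention carries over to the PR:
git commit -q -m "[SPARK-12345][PYTHON] Fix null handling in foo"
git branch --show-current
```

From here the real flow is `git push origin <branch>` to your fork, then opening the PR on GitHub.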
40. @holdenkarau
Code review time
● Note: this is after the pull request creation
● I believe code reviews should be done in the open
○ With an exception of when we are deciding if we want to try and
submit a change
○ Even then we should hopefully have decided that back at the JIRA stage
● My personal beliefs & your org’s may not align
● If you have the time you can contribute by reviewing others' code too (please!)
Mitchell
Joyce
41. @holdenkarau
And now onto the actual code review...
● Most often committers will review your code (eventually)
● Other people can help too
● People can be very busy (check the release schedule)
● If you don’t get traction try pinging people
○ Me ( @holdenkarau - I'm not an expert everywhere but I can look)
○ The author of the JIRA (even if not a committer)
○ The shepherd of the JIRA (if applicable)
○ The person who wrote the code you are changing (git blame)
○ Active committers for the component
Mitchell
Joyce
42. @holdenkarau
What does the review look like?
● LGTM - Looks good to me
○ Individual thinks the code looks good - ready to merge (sometimes
LGTM pending tests or LGTM but check with @[name]).
● SGTM - Sounds good to me (normally in response to a
suggestion)
● Sometimes get sent back to the drawing board
● Not all PRs get in - it's ok!
○ Don’t feel bad & don’t get discouraged.
● Mixture of in-line comments & general comments
● You can see some videos of my live reviews at
http://bit.ly/holdenLiveOSS
Phil Long
52. @holdenkarau
That’s a pretty standard small PR
● It took some time to get merged in
● It was fairly simple
● Review cycles are long - so move on to other things
● Only two reviewers
● Apache Spark Jenkins comments on build status :)
○ “Jenkins retest this please” is great
● Big PRs - like making PySpark pip installable can have >
10 reviewers and take a long time
● Sometimes it can be hard to find reviewers - tag your PRs
& ping people on github
James Joel
53. @holdenkarau
Don’t get discouraged
David Martyn Hunt
It is normal to not get every pull request accepted
Sometimes other people will “scoop” you on your
pull request
Sometimes people will be super helpful with your
pull request
54. @holdenkarau
When things don't go well...
If you don’t hear anything there is a good chance it is a “soft no”
The community has been trying to get better at explicit “Won’t Fix” or saying no on PRs
If folks say "no" (explicitly or implicitly) it doesn't mean your idea isn't awesome
If your idea doesn't fit in Spark at present, see if you can make it as a library
If you can't make a library see what hooks Spark would need to make those libraries possible and
suggest them.
55. @holdenkarau
While we are waiting:
● Keep merging in master when we get out of sync
● If we don’t, Jenkins can’t run :(
● We get out of sync surprisingly quickly!
● If our pull request gets older than 30 days it might get
auto-closed
● If you don’t hear anything, try pinging the dev list to see if it's a “soft no” (and/or ping me :))
Moyan Brenn
56. Open Source code reviews are like Mermaid School
1) They help you grow your skills
2) Build on your existing skills (e.g. swimming or Scala)
3) You get better with time but you need to start
4) People (read sometimes management*) don't
understand how they help you grow your skills and don't
want to pay for it
5) Coffee makes it better
57. Why the community needs you
● Many projects suffer from maintainer burn out
○ Some of this comes from the pressure to review too much code
● Reviewing code is less “fun”
○ and we have a largely fun-motivated contributor base
● Some projects are limited by reviewers not coding
○ Spark has > 500 open PRs
● More diverse reviewers: more diverse solutions
● Experienced reviewers become blind to “the way it’s
always been done”
● Represent the user(s)
Jerry Lai
59. Benefits you get from OSS reviews
● Grow skills
● See the world*
● Faster recognition
● Deeper integration in community
● The ability to contribute with fixed amounts of time
*Of open source & maybe the real world
60. See more of the world
● Starter issues are often designed to only touch a few
things
● Even moving beyond starter issues, there’s only so
many hours in the day and you can’t write everything
● Helps you gain a better understanding of the project as a whole
● Lets you take skills between projects faster
○ Know what good Python looks like? Great, many projects need help
with that
Vania Rivalta
61. Possible Faster Recognition
● Generally more contributors than reviewers
● Reviewers stand out
● Reviews can be the difference between a contributor
and someone trusted to make their own changes to the
project
● Allows you to work with more people
Sham Hardy
62. Easier to control your time
● Getting code into large OSS projects can take lots of
time
● Want to contribute a new PR? You will often need to shepherd a PR for an extended period of time
● “One more bug”
● With reviews: do what you can, but you don’t have to be
continuously responding to provide value
Rob Hill
63. Finding a good first PR to review
● Smaller PRs can be better
● Something you care about
● Often easier to be one of the early reviewers, so if it's late stage maybe stay away
● You can drill down by component in
https://spark-prs.appspot.com/
64. Doing that first review:
● Feel free to leave comments like
○ “I’m new to the project; reading this I think its intention is X - is that correct? Maybe we could add a comment here”
○ Look for when changes are getting out of sync with docs “Can we
update the docs or create a follow up issue to do that?”
○ Style: Is there a style guide? Does this follow it? Does this follow
general “good” style?
○ Building: Does this build on your platform?
○ Look around for duplicated logic elsewhere in the codebase
○ Find the original author and ping them to take a look
● Get your IDE set up and jump to definition a lot
● Be prepared to look at the library's documentation
65. Communicate carefully please
● The internet is scary enough
● “This sucks” can be heartbreaking
● You don’t know how much time someone put in
● Make it clear you are new to the project (gives you
some more leeway) & sets expectations
● Understand folks can get defensive about designs:
sometimes it’s not worth the argument
● People are allowed to be wrong on the internet
● It’s ok to be scared
ivva
66. Phrasing matters a lot
Instead of… → try:
● “This is slow” → “Could we do this faster?”
● “This is hard to understand” → “I'm confused, is it doing X & could we add a comment?”
● “This library sucks” → “Have you looked at X?”
● “No one would ever use this” → “What's the usage pattern?”
● “You're using this wrong” → “X has problem Y, how about Z?”
67. OSS reviews videos (live & recorded):
https://www.youtube.com/user/holdenkarau
Depending on time we can do one now….
68. @holdenkarau
What about when we want to make big changes?
● Talk with the community
○ Developer mailing list dev@spark.apache.org
○ User mailing list user@spark.apache.org
● First change? Try and build some karma first
● Consider if it can be published as a spark-package
● Create a public design document (google doc normally)
● Be aware this will be somewhat of an uphill battle (I’m
sorry)
● You can look at SPIPs (Spark's versions of PEPs)
69. @holdenkarau
How about yak shaving?
● Lots of areas need shaving
● JVM deps are easier to update, Python deps are not :(
● Things built on top are a great place to go yak shaving
○ Jupyter etc.
Jason Crane
70. @holdenkarau
● Learning Spark
● Fast Data Processing with Spark (Out of Date)
● Fast Data Processing with Spark (2nd edition)
● Advanced Analytics with Spark
● Spark in Action
● High Performance Spark
● Learning PySpark
71. @holdenkarau
High Performance Spark!
You can buy it today! On the internet!
Cats love it*
*Or at least the box it comes in. If buying for a cat, get
print rather than e-book.
73. @holdenkarau
Local to Amsterdam?
● I'll be back for ITNext at the end of the month
● Have spark/oss questions?
○ Let me know and we can set up office hours
● Also know of any good halloween parties?
○ I've got a cool costume but I'm told y'all don't normally celebrate
:(
74. @holdenkarau
k thnx bye :)
If you care about Spark testing and
don’t hate surveys:
http://bit.ly/holdenTestingSpark
Will tweet results
“eventually” @holdenkarau
Do you want more realistic
benchmarks? Share your UDFs!
http://bit.ly/pySparkUDF
I want to give better talks and feedback is welcome:
http://bit.ly/holdenTalkFeedback