This deck gives an overview of Spark SQL as of Spark 1.6. The slides are not especially beginner-friendly, but since there was no Korean-language material on the topic, I am sharing them in case they help someone.
Rather than the slides themselves, I recommend reading the references on the last page.
Feel free to take and reuse this material, as long as you credit the source.
In Spark SQL the physical plan provides the fundamental information about the execution of the query. The objective of this talk is to convey understanding and familiarity of query plans in Spark SQL, and use that knowledge to achieve better performance of Apache Spark queries. We will walk you through the most common operators you might find in the query plan and explain some relevant information that can be useful in order to understand some details about the execution. If you understand the query plan, you can look for the weak spot and try to rewrite the query to achieve a more optimal plan that leads to more efficient execution.
The main content of this talk is based on Spark source code but it will reflect some real-life queries that we run while processing data. We will show some examples of query plans and explain how to interpret them and what information can be taken from them. We will also describe what is happening under the hood when the plan is generated focusing mainly on the phase of physical planning. In general, in this talk we want to share what we have learned from both Spark source code and real-life queries that we run in our daily data processing.
Deep Dive into Spark SQL with Advanced Performance Tuning with Xiao Li & Wenc... - Databricks
Spark SQL is a highly scalable and efficient relational processing engine with easy-to-use APIs and mid-query fault tolerance. It is a core module of Apache Spark. Spark SQL can process, integrate and analyze data from diverse data sources (e.g., Hive, Cassandra, Kafka and Oracle) and file formats (e.g., Parquet, ORC, CSV, and JSON). This talk will dive into the technical details of Spark SQL, spanning the entire lifecycle of a query execution. The audience will get a deeper understanding of Spark SQL and learn how to tune Spark SQL performance.
What is the core of Spark? RDD! (RDD paper review) - Yongho Ha
Spark has been getting even more attention than Hadoop lately.
To understand the core of Spark, you need to understand its central data structure: Resilient Distributed Datasets (RDDs).
Let's walk through the original paper to see how RDDs work.
http://www.cs.berkeley.edu/~matei/papers/2012/sigmod_shark_demo.pdf
This presentation introduces every aspect of the Kafka ecosystem:
- Concepts: clears up commonly confused concepts such as topic vs partition vs replication, producer vs consumer vs consumer group, group leader vs group coordinator, ...
- Advanced concepts: delivery semantics; idempotent producers; isolation levels; differences between offsets such as High Watermark, Log End Offset, Last Stable Offset, ...
- Kafka architecture: explains all Kafka components such as brokers, controllers, zookeeper, ...
- Overview of Kafka security: TLS/SSL, SASL, Kerberos, ...
- Overview of the Kafka ecosystem: Kafka Streams, Kafka Connect, Schema Registry, monitoring tools.
- Kafka in Golang: how to use a Kafka client in Go.
- Comparison with other message queues such as RabbitMQ.
"The common use cases of Spark SQL include ad hoc analysis, logical warehouse, query federation, and ETL processing. Spark SQL also powers the other Spark libraries, including structured streaming for stream processing, MLlib for machine learning, and GraphFrame for graph-parallel computation. For boosting the speed of your Spark applications, you can perform the optimization efforts on the queries prior employing to the production systems. Spark query plans and Spark UIs provide you insight on the performance of your queries. This talk discloses how to read and tune the query plans for enhanced performance. It will also cover the major related features in the recent and upcoming releases of Apache Spark.
"
Performant Streaming in Production: Preventing Common Pitfalls when Productio... - Databricks
Running a stream in a development environment is relatively easy. However, some topics can cause serious issues in production when they are not addressed properly.
The Top Five Mistakes Made When Writing Streaming Applications with Mark Grov... - Databricks
So you know you want to write a streaming app, but any non-trivial streaming app developer would have to think about these questions:
– How do I manage offsets?
– How do I manage state?
– How do I make my Spark Streaming job resilient to failures? Can I avoid some failures?
– How do I gracefully shutdown my streaming job?
– How do I monitor and manage my streaming job (i.e. re-try logic)?
– How can I better manage the DAG in my streaming job?
– When do I use checkpointing, and for what? When should I not use checkpointing?
– Do I need a WAL when using a streaming data source? Why? When don’t I need one?
This session will share practices that no one talks about when you start writing your streaming app, but you’ll inevitably need to learn along the way.
Deep dive into stateful stream processing in structured streaming by Tathaga... - Databricks
Stateful processing is one of the most challenging aspects of distributed, fault-tolerant stream processing. The DataFrame APIs in Structured Streaming make it very easy for the developer to express their stateful logic, either implicitly (streaming aggregations) or explicitly (mapGroupsWithState). However, there are a number of moving parts under the hood which make all the magic possible. In this talk, I am going to dive deeper into how stateful processing works in Structured Streaming. In particular, I am going to discuss the following: – Different stateful operations in Structured Streaming – How state data is stored in a distributed, fault-tolerant manner using State Stores – How you can write custom State Stores for saving state to external storage systems.
This introduction to Spark starts from the concept of big data, then covers the emergence of big data analytics platforms (Hadoop) and the background that led to Spark.
It explains the concept of RDDs and the Spark SQL library in some detail (including brief explanations of the Tungsten engine and the Catalyst optimizer).
It ends with a short hands-on section covering installation and interactive analysis.
The original ppt is public, so feel free to adapt it whenever and however you need, as long as you credit the source.
Images and materials taken from other slides or blogs are credited in small print, but the sources of some materials found while writing the early versions of this ppt are unclear. If you know a source, let me know and I will update the credits. (Tips appreciated!)
Distributed Databases Deconstructed: CockroachDB, TiDB and YugaByte DB - YugabyteDB
Slides for the "Distributed Databases Deconstructed: CockroachDB, TiDB and YugaByte DB" webinar by Amey Banarse, Principal Data Architect at Yugabyte, recorded on Oct 30, 2019 at 11 AM Pacific.
Playback here: https://vimeo.com/369929255
Fine Tuning and Enhancing Performance of Apache Spark Jobs - Databricks
Apache Spark defaults provide decent performance for large data sets, but leave room for significant performance gains if you can tune parameters to your resources and workload.
Improving SparkSQL Performance by 30%: How We Optimize Parquet Pushdown and P... - Databricks
Parquet is a very popular column-based format. Spark can automatically filter out useless data using parquet files' statistics (such as min-max statistics) via pushdown filters. Spark users can also enable the parquet vectorized reader to read parquet files in batches. These features improve Spark performance greatly and save both CPU and IO. Parquet is the default data format of the data warehouse at Bytedance. In practice, we found that parquet pushdown filters worked poorly, reading far too much unnecessary data, because the statistics had no discrimination across parquet row groups (column data is out of order when written to parquet files by ETL jobs).
FPGA-Based Acceleration Architecture for Spark SQL - Qi Xie and Quanfu Wang - Spark Summit
In this session we will present a configurable FPGA-based Spark SQL acceleration architecture. It aims to leverage the highly parallel computing capability of FPGAs to accelerate Spark SQL queries, and, because FPGAs have higher power efficiency than CPUs, to lower power consumption at the same time. The architecture consists of SQL query decomposition algorithms and fine-grained FPGA-based Engine Units which perform basic computations such as substring, arithmetic, and logic operations. Using the SQL query decomposition algorithm, we can decompose a complex SQL query into basic operations and feed each into an Engine Unit according to its pattern. The Engine Units are highly configurable and can be chained together to perform complex Spark SQL queries, so that one SQL query is ultimately transformed into a hardware pipeline. We will present benchmark results comparing queries on the FPGA-based Spark SQL acceleration architecture (XEON E5 plus FPGA) to Spark SQL queries on XEON E5 alone, showing 10x ~ 100x improvement, and we will demonstrate one SQL query workload from a real customer.
Arbitrary Stateful Aggregations using Structured Streaming in Apache Spark - Databricks
In this talk, we will introduce some of the new available APIs around stateful aggregation in Structured Streaming, namely flatMapGroupsWithState. We will show how this API can be used to power many complex real-time workflows, including stream-to-stream joins, through live demos using Databricks and Apache Kafka.
How to Build a Scylla Database Cluster that Fits Your Needs - ScyllaDB
Sizing a database cluster makes or breaks your application. Too small and you can't sustain spikes in usage or recover from a node loss or an operational slowdown. Too big and your cluster will cost more and waste valuable human resources.
Since different workloads have different requirements, successful sizing of your application should be optimized for both throughput and latency performance. However, in many cases, the requirements for each contradict each other.
In this webinar, we explain how to remediate the contradicting forces and build a sustainable cluster to meet both performance and resiliency requirements.
Native Support of Prometheus Monitoring in Apache Spark 3.0 - Databricks
All production environments require monitoring and alerting. Apache Spark has a configurable metrics system that allows users to report Spark metrics to a variety of sinks. Prometheus is one of the popular open-source monitoring and alerting toolkits used together with Apache Spark.
What is Apache Spark | Apache Spark Tutorial For Beginners | Apache Spark Tra... - Edureka!
This Edureka "What is Spark" tutorial will introduce you to the big data analytics framework Apache Spark. This tutorial is ideal both for beginners and for professionals who want to learn or brush up on their Apache Spark concepts. Below are the topics covered in this tutorial:
1) Big Data Analytics
2) What is Apache Spark?
3) Why Apache Spark?
4) Using Spark with Hadoop
5) Apache Spark Features
6) Apache Spark Architecture
7) Apache Spark Ecosystem - Spark Core, Spark Streaming, Spark MLlib, Spark SQL, GraphX
8) Demo: Analyze Flight Data Using Apache Spark
Parallelization of Structured Streaming Jobs Using Delta Lake - Databricks
We’ll tackle the problem of running streaming jobs from another perspective using Databricks Delta Lake, while examining some of the current issues that we faced at Tubi while running regular structured streaming. A quick overview on why we transitioned from parquet data files to delta and the problems it solved for us in running our streaming jobs.
Slides for the Data Syndrome one-hour course on PySpark. Introduces basic operations, Spark SQL, Spark MLlib and exploratory data analysis with PySpark. Shows how to use pylab with Spark to create histograms.
Properly shaping partitions and your jobs to enable powerful optimizations, eliminate skew and maximize cluster utilization. We will explore various Spark Partition shaping methods along with several optimization strategies including join optimizations, aggregate optimizations, salting and multi-dimensional parallelism.
Jump Start with Apache Spark 2.0 on Databricks - Databricks
Apache Spark 2.0 has laid the foundation for many new features and functionality. Its main three themes—easier, faster, and smarter—are pervasive in its unified and simplified high-level APIs for Structured data.
In this introductory part lecture and part hands-on workshop you’ll learn how to apply some of these new APIs using Databricks Community Edition. In particular, we will cover the following areas:
What’s new in Spark 2.0
SparkSessions vs SparkContexts
Datasets/Dataframes and Spark SQL
Introduction to Structured Streaming concepts and APIs
Cloud DW benchmark using TPC-DS (Snowflake vs Redshift vs EMR Hive) - SANG WON PARK
Data architecture has been changing rapidly over the past few years,
and Cloud DW has drawn attention as an alternative to the limits (performance, cost, operations, etc.) of the existing Hadoop-based Data Lake;
many companies have already adopted one or are evaluating adoption.
This deck explains the concept of Cloud DW
and compares the various Cloud DW products on the market from a performance/cost perspective to find which one fits a given company's environment.
- Why are companies paying attention to Cloud DW?
- What products are on the market?
- Which product should we adopt for our business environment?
- How do Cloud DW solutions perform?
- How do they perform compared to the existing Data Lake (EMR)?
- How do similar Cloud DWs (Snowflake vs Redshift) compare?
Going forward, the data market will rapidly develop a new ecosystem around Cloud DW, including ELT, Data Mesh, and Reverse ETL,
which will call for technical review and thought from the perspective of data engineers and data architects.
https://blog.naver.com/freepsw/222654809552
"Structured Streaming was a new streaming API introduced to Spark over 2 years ago in Spark 2.0, and was announced GA as of Spark 2.2. Databricks customers have processed over a hundred trillion rows in production using Structured Streaming. We received dozens of questions on how to best develop, monitor, test, deploy and upgrade these jobs. In this talk, we aim to share best practices around what has worked and what hasn't across our customer base.
We will tackle questions around how to plan ahead, what kind of code changes are safe for structured streaming jobs, how to architect streaming pipelines which can give you the most flexibility without sacrificing performance by using tools like Databricks Delta, how to best monitor your streaming jobs and alert if your streams are falling behind or are actually failing, as well as how to best test your code."
100% Serverless big data scale production Deep Learning System - hoondong kim
- Big-data-scale deep learning training system (with GPU Docker PaaS on Azure Batch AI)
- Deep learning serving layer (with auto scale-out mode on Web App for Linux Docker)
- BigDL, Keras, Tensorflow, Horovod, TensorflowOnAzure
Training material for the backend developers who attended a hackathon.
I tried to make it easy, but it still seems to have been quite difficult.
I was probably too ambitious. Next time I'll make it simpler!
- Audience: people hoping to become backend developers (job seekers, career changers), up to about 5 years of experience
- Main content: what happens during backend development (the work of a development team)
- May be quoted for non-commercial purposes (source attribution required).
Machine Learning Model Serving with Backend.AI - Jeongkyu Shin
Serving a machine learning model at the service level is a lot of work. A variety of tools are being developed and released to facilitate the serving process. TensorFlow Serving is the best of them right now, but its Docker-image-baking-based serving process is not easy, and not flexible or controllable enough. In this session, I will discuss how to simplify the serving process of TensorFlow models by using Backend.AI and TensorFlow Serving.
I will introduce the Backend.AI serving mode (on the trunk, but official since 1.6), then demonstrate how to use it to conveniently serve various TensorFlow models with TensorFlow Serving on the fly.
An Introduction to Backend.AI, for use in the Just Model It event. It covers the overview of Backend.AI, its main features, and examples, and introduces a scenario for developing an end-to-end ML model using Backend.AI.
AI Serving Infra and an AI DevOps Cycle for Auto-Scalable Deep Learning Production - hoondong kim
[Slides from a TensorFlow-KR offline seminar]
How to build AI serving infrastructure and an AI DevOps cycle for auto-scalable deep learning production (sharing a method for serving 10,000 TPS of TensorFlow inference on Azure Docker PaaS).
Backend.AI (https://backend.ai) is an infrastructure management framework specialized for machine learning that lets multiple users share computing resources safely and efficiently in cloud and on-premises environments. This talk introduces how it compares to widely used open-source technologies such as OpenStack and Kubernetes, and presents the framework's architecture, underlying technologies, and use cases, along with a demo.
3. Spark Intro
• An in-memory, general-purpose cluster computing engine
- Up to 100x faster than Hadoop MapReduce (as claimed on the official site)
• Unified Engine
- Provides batch/stream processing, SQL, machine learning, and graph processing
• Supports multiple languages
- Java, Scala, Python, R
• Runs in many environments via several cluster managers
- Standalone, YARN, Mesos, etc.
12. Why Spark Streaming?
I want to process data in real time, right away
"Can't we monitor the website?"
"When something is added or deleted, reflect it immediately"
"I want to train machine learning models on real-time data"
"I want to know about problems as soon as they happen"
Website monitoring / Fraud detection / ML from streaming data
13. Why Spark Streaming?
• Integration with Batch Processing
I want to handle batch and streaming together in the same framework
"Batch on MapReduce, streaming on Storm..."
"Maintenance is so painful"
"Can't we easily turn a batch job into a streaming job?"
16. What is Spark Streaming?
• Run small batches to approximate streaming
- Split the data stream into time intervals
- Run a batch job over each chunk
- Each batch is processed exactly like a regular Spark job
[Diagram: a live data stream enters Spark Streaming, which turns it into batches of input data (RDDs); the Spark Engine processes each batch and emits the processed results.]
17. Streaming Context
• The very first instance you create when using Spark Streaming
- Similar to SparkContext and SparkSession
• Also carries the information about how often batch processing should run
• Created from a SparkConf or a SparkContext (see the sketch below)
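A minimal sketch of creating a StreamingContext; the local two-thread master and the 1-second batch interval are illustrative choices, not from the deck:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingContextExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("streaming-context-example")
      .setMaster("local[2]") // at least 2 threads: one for the receiver, one for processing

    // The second argument is the batch interval: how often a new batch is formed.
    val ssc = new StreamingContext(conf, Seconds(1))

    // ... define DStreams and transformations here ...

    ssc.start()            // start receiving and processing
    ssc.awaitTermination() // block until the job is stopped
  }
}
```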
18. Programming Model - DStream
• Discretized Stream (DStream)
- A data model representing a continuous, endlessly generated flow of data
- Gathers the data arriving in each interval and turns it into an RDD
- Effectively a sequence of RDDs (see the sketch below)
Reference: zero-to-streaming-spark-and-cassandra
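A minimal word-count sketch of the DStream model, reusing the StreamingContext pattern above; the socket source and port are illustrative assumptions:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setAppName("dstream-example").setMaster("local[2]")
val ssc  = new StreamingContext(conf, Seconds(1))

// lines is a DStream[String]: logically one RDD of lines per 1-second batch.
val lines  = ssc.socketTextStream("localhost", 9999)
val counts = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
counts.print() // prints the head of each batch's resulting RDD

ssc.start()
ssc.awaitTermination()
```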
19. A walkthrough with pictures
[Diagram: live data stream → Spark Streaming → batches of input data → Spark Engine → processed result]
22. Example – Playing with Twitter data
• The examples we will look at
- Counting the tweets generated per second
- Every second, finding the most used words among the tweets from the last 10 seconds (see the sketch below)
- Counting tweets per user from this point on
Example repository: https://github.com/eoriented/spark-streaming-tutorial
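A hedged sketch of the windowed example, using the external spark-streaming-twitter integration; the actual tutorial code lives in the repository above, and a configured ssc plus twitter4j credentials are assumed:

```scala
import org.apache.spark.streaming.Seconds
import org.apache.spark.streaming.twitter.TwitterUtils

val tweets = TwitterUtils.createStream(ssc, None) // DStream[twitter4j.Status]
val words  = tweets.flatMap(_.getText.split(" "))

// Most used words over the last 10 seconds, recomputed every second.
val topWords = words
  .map((_, 1))
  .reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(10), Seconds(1))
  .transform(_.sortBy(_._2, ascending = false))

topWords.print()
```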
34. Data source
• Supported data sources
- Default data sources
• Socket
• Files (HDFS-compatible file systems supported)
• RDD Queue
- Advanced data sources (external integration libraries)
• Kafka
• Flume
• Kinesis
• Twitter
- Or implement your own Receiver (see the sketches below)
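A brief sketch of the built-in sources, assuming the ssc from earlier; the host, port, and directory are placeholders:

```scala
import scala.collection.mutable
import org.apache.spark.rdd.RDD

val fromSocket = ssc.socketTextStream("localhost", 9999) // socket source
val fromFiles  = ssc.textFileStream("hdfs:///incoming/") // picks up new files in a directory

val rddQueue  = mutable.Queue[RDD[Int]]()                // RDD queue source (handy for tests)
val fromQueue = ssc.queueStream(rddQueue)
```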
35. Data source
• Custom Receiver
- What if the data source you want doesn't exist?
• Implement a custom Receiver
- http://spark.apache.org/docs/latest/streaming-custom-receivers.html
- Implement the onStart and onStop methods (a sketch follows)
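A minimal custom Receiver sketch following the linked guide: implement onStart/onStop and hand each record to Spark with store(). The socket-based source itself is an illustrative assumption:

```scala
import java.io.{BufferedReader, InputStreamReader}
import java.net.Socket
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.receiver.Receiver

class SocketLineReceiver(host: String, port: Int)
    extends Receiver[String](StorageLevel.MEMORY_AND_DISK_2) {

  def onStart(): Unit = {
    // Start a thread that connects and feeds records into Spark via store().
    new Thread("Socket Line Receiver") {
      override def run(): Unit = receive()
    }.start()
  }

  def onStop(): Unit = {
    // Nothing to do here: the receive thread checks isStopped() and exits itself.
  }

  private def receive(): Unit = {
    try {
      val socket = new Socket(host, port)
      val reader = new BufferedReader(new InputStreamReader(socket.getInputStream))
      var line   = reader.readLine()
      while (!isStopped && line != null) {
        store(line) // hand one record to Spark Streaming
        line = reader.readLine()
      }
      reader.close()
      socket.close()
      restart("Trying to connect again") // ask Spark to restart this receiver
    } catch {
      case t: Throwable => restart("Error receiving data", t)
    }
  }
}

// Usage: val lines = ssc.receiverStream(new SocketLineReceiver("localhost", 9999))
```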
37. Fault tolerance
• Checkpoints
- Metadata checkpoints
• For recovering from driver failures
- Data checkpoints
• For quickly restoring the latest state data
- File systems
• HDFS, S3, the local FS (for testing), etc. can be used (see the sketch below)
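A minimal checkpoint sketch; the directory is an assumed location. StreamingContext.getOrCreate either restores the context from the checkpoint or builds a fresh one:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val checkpointDir = "hdfs:///spark/checkpoints" // assumed path

def createContext(): StreamingContext = {
  val conf = new SparkConf().setAppName("checkpoint-example")
  val ssc  = new StreamingContext(conf, Seconds(1))
  ssc.checkpoint(checkpointDir) // enable metadata and data checkpointing
  // ... define the streaming computation here ...
  ssc
}

// Restores from the checkpoint after a driver failure, or creates a new context.
val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
ssc.start()
ssc.awaitTermination()
```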
38. Performance considerations
• Batch / window size
- Around 500 ms is a reasonable size
- Recommended: start with a large batch size and shrink it step by step to find the right one
• Parallelism
- Increase the number of receivers
• It is more efficient for several receivers to share the load than for one receiver to take it all
- Repartitioning
• Repartition the input stream before processing (see the sketch below)
• Memory tuning
- Tune GC options
- Adjust when RDDs are removed via the spark.cleaner.ttl option
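A hedged sketch of the receiver and repartitioning tips above, assuming the ssc from earlier; the receiver count and partition count are illustrative numbers:

```scala
// Several receivers pull data in parallel, then their streams are unioned.
val numReceivers = 4
val streams      = (1 to numReceivers).map(_ => ssc.socketTextStream("localhost", 9999))
val unioned      = ssc.union(streams)

// Spread each batch across more tasks before the heavy processing stage.
val repartitioned = unioned.repartition(16)
```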
The choice of language is up to you, but if you have no preference, Scala is recommended
- Its API is more concise than Java's, and it performs better than Python (Spark itself is written in Scala, and new features land in Scala first).
Why was it built?
What was built?
Getting familiar with the terminology
Spark Streaming at a glance
How to use it
Before looking at the second example
Set the window size and slide interval to multiples of the batch size
Structured Streaming will only be introduced briefly
An example of handling streaming with the traditional MapReduce pattern
- The user sends an open event when opening the app and a close event when closing it
Consistency
With one reducer handling open and another handling close, if the open reducer falls behind the close reducer, MySQL can temporarily show more closes than opens, breaking data consistency
Fault tolerance
What happens if a mapper or a reducer dies?
Out-of-order data
Data arriving from multiple sources in different orders can cause problems
The input data stream is treated as an input table.
Every record arriving on the stream is handled like a new row appended to the input table
The results are updated into a Result Table. Whenever the result table is updated, the changed result rows must be written to an external sink
Output is defined as what gets written to external storage
The entire Result Table is updated and then written out (see the sketch below)
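A minimal Structured Streaming sketch of the model in these notes; the socket source and console sink are assumed choices. Each trigger appends new rows to the logical input table and rewrites the result table to the sink:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("structured-example").getOrCreate()
import spark.implicits._

// The stream is treated as an unbounded input table of lines.
val lines = spark.readStream
  .format("socket")
  .option("host", "localhost")
  .option("port", 9999)
  .load()

// The aggregation defines the result table.
val counts = lines.as[String].flatMap(_.split(" ")).groupBy("value").count()

// "complete" output mode rewrites the full result table on every trigger.
val query = counts.writeStream
  .outputMode("complete")
  .format("console")
  .start()

query.awaitTermination()
```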
How should late-arriving data be handled?
You cannot keep old data around forever just to handle records that arrive much later (a day, a week)
Watermarks were introduced as a way to properly handle data that arrives long after the fact
The idea is to give events an expiry period.
The watermark is determined by taking the latest event time among all events that arrived before the trigger fired, and subtracting the predefined expiry period (see the sketch below)
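A minimal watermark sketch, assuming a streaming DataFrame named events with a "timestamp" column and a 10-minute expiry period: state for windows older than (max event time seen minus 10 minutes) is dropped, matching the rule above:

```scala
import org.apache.spark.sql.functions.window
import spark.implicits._ // for the $"..." column syntax; spark is the SparkSession above

// events is assumed to be a streaming DataFrame with a "timestamp" column.
val windowedCounts = events
  .withWatermark("timestamp", "10 minutes")   // the expiry period
  .groupBy(window($"timestamp", "5 minutes")) // 5-minute event-time windows
  .count()
```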