Presentation at Presto Conference Tokyo 2019
- Arm Treasure Data
- PlazmaDB indexes
- Real-time and archive storages
- Schema-on-read data processing
- Physical partition maintenance via presto-stella plugin
1. Presto At Arm Treasure Data: 2019 Updates
Kai Sasaki, Taro L. Saito
Arm Treasure Data
Presto Conference Tokyo 2019, June 11th, 2019
2. About Me: Kai Sasaki
● Kai Sasaki (@Lewuathe)
● https://www.lewuathe.com/
● Software Engineer at Arm
● MPP team maintaining the Presto cluster and surrounding ecosystems
● OSS Contributor
○ Presto, Hadoop, Spark, TensorFlow
5. Treasure Data = Unified Data Platform
6. The Architecture of Treasure Data
[Architecture diagram: device data, logs, and batch data enter through data collection (2 million records / sec.), land in cloud storage as PlazmaDB with table schema (130 trillion records), and are consumed by distributed data processing (1 billion rows processed / sec.) via jobs, job management, SQL editor, scheduler, workflows, and machine learning; built from Treasure Data OSS and third-party OSS.]
8. Presto Usage (2019)
● 3x more usage since 2017
● 3,500+ users (400+ customer accounts)
● 600,000+ queries / day
● 100 trillion rows processed / day (= 1.2 billion rows processed / sec.)
9. PlazmaDB: MessagePack DBMS
● Fluentd -> MessagePack -> Arm Treasure Data
● Generating table schema from the input MessagePack data
■ No need to worry about schema changes: adding columns and promoting column types are managed by the service
● Apply schema-on-read to provide table data to Presto/Hive/Spark, etc.
● Storage format: our internal MessagePack Columnar format (MPC1) for schema-on-read
[Diagram: schema-free data updates the table schema, which generates a reader set (int column reader, string column reader, ...) used by the table reader, bridging data collection and distributed data processing.]
10. Treasure Data Storage Architecture
● Real-time vs. archive storage
● Provides access to recent data in the real-time storage
● Stores optimized partitions into the archive storage by using MapReduce jobs (LogMergeJob)
11. TD_INTERVAL UDF
● Supports human-friendly time window specifications (see the example below)
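For example, a minimal query using TD_INTERVAL to restrict a scan to the previous day in JST; the table name here is a placeholder:

    -- Count yesterday's records (JST); www_access is a placeholder table
    SELECT COUNT(1)
    FROM www_access
    WHERE TD_INTERVAL(time, '-1d', 'JST')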
12. PlazmaDB Partition Indexes
● Q: How can we get a list of partition files?
● Limitations of the S3 API:
■ LIST operation on S3 files is quite slow
■ No range filtering of S3 files
○ Time range queries are not supported
● PlazmaDB partition indexes
● Manage indexes to partition files on S3
● Implemented on top of PostgreSQL
■ SQL + PL/Python functions
● Use GiST indexes (B-tree) to support time range filtering
■ dataset id, (partition start_time, end_time)
● Support transactional partition insertion + deletion for a single table
■ INSERT INTO, DELETE are atomic operations in TD (a hypothetical sketch of this design follows below)
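As a rough illustration of this design (a sketch only, not the actual PlazmaDB schema; all table, column, and index names are hypothetical), a PostgreSQL partition index with GiST-based time range filtering could look like this:

    -- Hypothetical sketch, not the real PlazmaDB schema
    CREATE EXTENSION IF NOT EXISTS btree_gist;  -- allows bigint columns in GiST indexes

    CREATE TABLE partitions (
        dataset_id  bigint    NOT NULL,
        time_range  int8range NOT NULL,  -- [start_time, end_time) in unix time
        s3_path     text      NOT NULL   -- S3 object key of the MPC1 partition file
    );

    -- Composite GiST index: equality on dataset_id, range search on time_range
    CREATE INDEX partitions_time_idx ON partitions USING gist (dataset_id, time_range);

    -- List partition files overlapping a query's time window
    SELECT s3_path
    FROM partitions
    WHERE dataset_id = 42
      AND time_range && int8range(1546300800, 1548979200);

Wrapping the INSERT and DELETE of partition rows in a single PostgreSQL transaction is what would make partition insertion and deletion atomic per table.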
13. Extension to Presto
● No major changes have been made to the Presto master branch
● Fork: https://github.com/treasure-data/presto (almost no diff)
● This is a strategy for catching up with the latest master.
● Extension modules are added in a separate internal repository (td-presto)
● td-presto-server
■ Extends the presto-server main to inject our own modules
■ Adds a split-resource manager for throttling query resource usage
● td-presto-plugin
■ Metadata management
○ A bridge to the TD API (table metadata API)
○ PlazmaDB: partition indexes
■ MPC1 file reader
○ S3 I/O request manager for pipelining many S3 GET requests
■ Treasure Data specific UDFs
○ https://support.treasuredata.com/hc/en-us/articles/360001450828-Supported-Presto-and-TD-Functions
● td-presto-stella
■ Partition maintenance module implemented as Presto connectors
14. Stella Plugin
● A Presto plugin for maintaining fragmented partitions
● Too-small partitions
● Too-large partitions
● Uses Presto to merge/split partitions
● Guidelines
■ Less than 1M records / partition
■ 250MB / partition
● Uses a CTAS statement for merging partitions:
■ CREATE TABLE stella (account_id = xxx, database = xxx, table = xxx, max_file_size = xxx, max_time_range = xxx)
15. Ecosystems Around Presto
16. Prestobase: Presto Gateway (api-presto)
● Prestobase is a proxy gateway to Presto clusters that supports standard Presto clients (e.g., presto-cli, JDBC, ODBC, etc.)
● Written in Scala
17. td-spark: Apache Spark Driver for Treasure Data
● td-spark provides a way to use TD tables as a data source for Spark applications
● Supporting both read and write modes extends TD to further use cases
● Python:
$ pip install td-pyspark
or
$ docker pull armtd/td-spark-pyspark
● Scala: add td-spark-assembly.jar to the Spark class path.
https://support.treasuredata.com/hc/en-us/articles/360000716627-Apache-Spark-Driver-td-spark-Release
18. Internal Optimization
19. Data-Driven System Optimization
● TD is one of the biggest users of TD
● Query logs
● Collecting all Presto query logs since 2015
● Query statements, performance statistics, logs, etc.
● Logs are our valuable assets
● To understand user activities and enable data-driven decisions (an illustrative log query follows below)
[Diagram: user queries produce logs; query logs are collected and analyzed, feeding machine learning and query optimization back into the system.]
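To make "analyze query logs" concrete, here is one illustrative query over such logs (the table and column names presto_query_logs, query_signature, and cpu_time_ms are hypothetical) that surfaces the heaviest recurring query patterns over the last 30 days:

    -- Hypothetical log table and columns; finds the most expensive query patterns
    SELECT query_signature,
           COUNT(*)         AS runs,
           SUM(cpu_time_ms) AS total_cpu_ms
    FROM presto_query_logs
    WHERE TD_INTERVAL(time, '-30d')
    GROUP BY 1
    ORDER BY total_cpu_ms DESC
    LIMIT 100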
20. Checking Query Correctness And Performance
● Upgrading query engine versions
● Need to check customer query compatibility, performance degradation, etc.
● Testing all 500,000 queries / day = 15M queries / month is impractical
● Use ML techniques to effectively reduce the problem size
● Simulate all possible customer query patterns to check compatibility
● Compute checksums of queries and record performance results to TD (a hedged checksum example follows below)
[Diagram: 15,000,000 user queries are clustered into query signatures (100,000 query patterns), minimized into 100,000 small queries, and simulated; the simulation results and stats are recorded.]
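One way to compare result correctness across engine versions (an illustration only, not necessarily the exact mechanism used at TD) is Presto's order-insensitive checksum() aggregate: run the same query on the old and new clusters and compare the row counts and checksums.

    -- Illustrative only; www_access and its columns are placeholders
    SELECT COUNT(*)          AS row_count,
           checksum(user_id) AS user_id_checksum,
           checksum(path)    AS path_checksum
    FROM (
        SELECT user_id, path
        FROM www_access
        WHERE TD_INTERVAL(time, '-1d')
    ) t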
21. Query Metric Analysis
● Resource usage prediction
○ Based on historical metric data
● Further optimization leveraging the prediction results
● Working with an internship student from UCB
22. Presto Resource Manager
● Collecting system metrics in a robust manner is challenging
● Unified metric collector for Presto
○ Cluster metric management
○ Query routing optimization
24. Challenge: Optimizing Query Workload As A Whole
● 2,000 query patterns (5,000 queries in a day)
● A real example from our production workload
● How can we improve the data processing as a whole?
25. Detecting Redundant Data Processing
● Redundancy in queries
○ Same table scans, joins, aggregations, UDF processing, etc.
● Related work:
○ Selecting Subexpressions to Materialize at Datacenter Scale (Microsoft, VLDB 2018)
■ Extract the best common sub-expressions from query graphs
■ Linear programming
● Challenges
● Updating sub-expression caches
■ Cache invalidation
● Combining cached results + query results for time-series data
26. Challenge: Maximizing Machine Resource Utilization
● Uneven CPU usage due to regional differences
● US: global customers
● Tokyo: only Japan customers
● Semi-scheduled auto-scaling
● Using past stats + runtime metrics
● Distributing workloads to off-peak times
● Early data processing
● Optimizing query scheduling
27. Idea: Using Presto As A Backend of Other Query Engines
● Presto is efficient for table scans, filters, and aggregations
● Can't we use the power of Presto to accelerate other query engines?
[Diagram: a primary query engine serving a secondary query engine.]
28. Idea: Presto-Presto Connector
● Launch multiple Presto clusters with different configurations
● Presto 1
● Caches common sub-expressions (e.g., materialized views or in-memory storage)
● Presto 2
● Delegates sub-expression processing to the upstream Presto cluster
● Challenge:
● Extracting appropriate sub-queries to run at the upstream Presto (an illustrative sketch follows below)
[Diagram: with a sub-query cache, pre-computed results are reused.]
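For illustration only (hypothetical tables, not the actual connector), a common sub-expression shared by many downstream queries could be pre-computed once in the upstream cluster and then reused:

    -- Pre-compute a shared daily aggregation once (e.g., in the caching cluster)
    CREATE TABLE daily_access_summary AS
    SELECT TD_TIME_FORMAT(time, 'yyyy-MM-dd', 'JST') AS day,
           path,
           COUNT(*) AS hits
    FROM www_access
    GROUP BY 1, 2;

    -- Downstream queries reuse the pre-computed result instead of rescanning www_access
    SELECT day, SUM(hits) AS product_hits
    FROM daily_access_summary
    WHERE path LIKE '/product/%'
    GROUP BY 1;

The hard part, as noted above, is choosing which sub-expressions to materialize and keeping them fresh as new data arrives.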
29. Missing: Binary Protocol
● Currently Presto sends query results in JSON format (v1 protocol)
● Need a faster data transfer method
○ Idea: using MessagePack for binary data representation
○ Support parallel query result transfer
[Diagram: the slow v1 JSON protocol vs. binary data transfer.]
30. Join the Arm Treasure Data Team!
● Solving challenges in real-world data processing
● Building a scalable data processing platform on the cloud
■ Enabled 400+ companies to use Presto for their data processing
● Stream data ingestion systems
■ PlazmaDB indexes, physical storage optimization
● Advanced analytics with Presto, Hive, Spark, and their workflows
● CDP (Customer Data Platform)
● Building a data platform for managing our customers' customer data
● Supporting non-engineers (e.g., marketers, executives) in managing their own data