This document provides an overview of automatic parameter tuning for databases and big data systems. It discusses the challenges of tuning many parameters across different systems and workloads. The document then covers various approaches to parameter tuning, including rule-based, cost modeling, simulation-based, experiment-driven, machine learning, and adaptive tuning. Recent works that use machine learning methods like Gaussian processes and reinforcement learning for automatic parameter tuning are also summarized.
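Of the approaches listed, the experiment-driven one is the easiest to show concretely: propose a configuration, run the workload, keep the best result. A minimal random-search sketch in Python, where `run_workload` and the two knobs are hypothetical stand-ins for an actual benchmark and real system parameters:

```python
import random

# Hypothetical parameter space, standing in for real knobs
# such as buffer pool size (MB) and degree of parallelism.
PARAM_SPACE = {
    "buffer_mb": (64, 4096),
    "parallelism": (1, 32),
}

def run_workload(config):
    """Stand-in for running a benchmark; returns latency (lower is better)."""
    # Synthetic cost surface: penalize small buffers and extreme parallelism.
    buf, par = config["buffer_mb"], config["parallelism"]
    return 1000.0 / buf + abs(par - 8) * 2.0

def random_search(budget=50, seed=0):
    """Experiment-driven tuning: try `budget` random configs, keep the best."""
    rng = random.Random(seed)
    best_cfg, best_cost = None, float("inf")
    for _ in range(budget):
        cfg = {
            "buffer_mb": rng.randint(*PARAM_SPACE["buffer_mb"]),
            "parallelism": rng.randint(*PARAM_SPACE["parallelism"]),
        }
        cost = run_workload(cfg)
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    return best_cfg, best_cost

best_cfg, best_cost = random_search()
print(best_cfg, round(best_cost, 2))
```

The machine-learning approaches surveyed later replace the random proposal step with a surrogate model (for example a Gaussian process) that uses past measurements to pick the most promising next experiment.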
Introduction To Machine Learning | Edureka!
** Data Science Certification Training: https://www.edureka.co/data-science **
This Edureka PPT on "Introduction To Machine Learning" will help you understand the basics of Machine Learning and how it can be used to solve real-world problems. The following topics are covered in this session:
Need For Machine Learning
What is Machine Learning?
Machine Learning Definitions
Machine Learning Process
Types Of Machine Learning
Type Of Problems Solved Using Machine Learning
Demo
YouTube Video: https://youtu.be/BuezNNeOGCI
Blog Series: http://bit.ly/data-science-blogs
Data Science Training Playlist: http://bit.ly/data-science-playlist
Follow us to never miss an update in the future.
YouTube: https://www.youtube.com/user/edurekaIN
Instagram: https://www.instagram.com/edureka_learning/
Facebook: https://www.facebook.com/edurekaIN/
Twitter: https://twitter.com/edurekain
LinkedIn: https://www.linkedin.com/company/edureka
What is Deep Learning | Deep Learning Simplified | Deep Learning Tutorial | E... | Edureka!
This Edureka "What is Deep Learning" video will help you understand the relationship between Deep Learning, Machine Learning, and Artificial Intelligence, and how Deep Learning came into the picture. This tutorial discusses Artificial Intelligence, Machine Learning and its limitations, how Deep Learning overcame those limitations, and various real-life applications of Deep Learning.
Below are the topics covered in this tutorial:
1. What Is Artificial Intelligence?
2. What Is Machine Learning?
3. Limitations Of Machine Learning
4. Deep Learning To The Rescue
5. What Is Deep Learning?
6. Deep Learning Applications
To take a structured training on Deep Learning, you can check complete details of our Deep Learning with TensorFlow course here: https://goo.gl/VeYiQZ
*** Apache Spark and Scala Certification Training: https://www.edureka.co/apache-spark-scala-training ***
This Edureka PPT on "RDD Using Spark" will provide you with detailed and comprehensive knowledge of RDDs, which are considered to be the backbone of Apache Spark. You will learn about the various Transformations and Actions that can be performed on RDDs. This PPT will cover the following topics:
Need for RDDs
What are RDDs?
Features of RDDs
Creation of RDDs using Spark
Operations performed on RDDs
RDDs using Spark: Pokemon Use Case
Blog Series: http://bit.ly/2VRogGx
Complete Apache Spark and Scala playlist: http://bit.ly/2In8IXD
Hello.
I am Dongmin Lee (이동민), and I presented on the topic "Safety First: Reinforcement Learning" at the '1st Deep Learning Conference All Together'.
The conference link is below:
https://tykimos.github.io/2018/06/28/ISS_1st_Deep_Learning_Conference_All_Together/
A rough outline follows:
1. What is Artificial Intelligence?
2. What is Reinforcement Learning?
3. What is Artificial General Intelligence?
4. Planning and Learning
5. Safe Reinforcement Learning
This material also explains the paper "Imagination-Augmented Agents for Deep Reinforcement Learning" in detail.
I hope many of you find it helpful!
Today, financial services firms rely on data as the basis of their industry. In the absence of the means of production for physical goods, data is the raw material used to create value for, and capture value from, the market. However, as data volume and variety increase, so does the susceptibility to fraud and the temptation for hackers. Learn how an enterprise data hub built on Hadoop enables advanced security and machine learning on much more descriptive and real-time data to detect and prevent fraud, from payment encryption to anti-money-laundering processes.
What Is Machine Learning? | Machine Learning Basics | Edureka!
YouTube Link: https://youtu.be/hjh1ikznScg
*** Machine Learning Certification Training - https://www.edureka.co/machine-learning-certification-training ***
This PPT covers the basics of Machine Learning. It explains why machine learning came into existence and how it solved major problems. It also describes the various types of Machine Learning with real-life examples.
Tekhnosfera, Mail.ru Group and Lomonosov Moscow State University. Course "Algorithms for Intelligent Processing of Large Volumes of Data", Lecture 11: "Fundamentals of Neural Networks".
Lecturer: Pavel Nesterov
The biological neuron and neural networks. The McCulloch-Pitts artificial neuron and artificial neural networks. The Rosenblatt and Rumelhart perceptrons. The backpropagation algorithm. Learning momentum, regularization in neural networks, local learning rates, the softmax layer. Different training regimes.
Course lecture videos: https://www.youtube.com/playlist?list=PLrCZzMib1e9pyyrqknouMZbIPf4l3CwUP
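Of the lecture topics above, the softmax layer is easy to show concretely: it turns a vector of raw scores into a probability distribution. A minimal stdlib sketch, not tied to the lecture's own code:

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution.

    Subtracting the max before exponentiating is the standard trick for
    numerical stability; it does not change the result.
    """
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print([round(p, 3) for p in probs])
```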
This presentation covers the Decision Tree as a supervised machine learning technique, covering the Information Gain and Gini Index methods and their related algorithms.
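The two split criteria mentioned, information gain (based on entropy) and the Gini index, are both computed directly from class counts at a node; a minimal Python sketch:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits; information gain is the
    drop in entropy from a parent node to its children."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gini(labels):
    """Gini impurity: probability of mislabeling a randomly drawn sample."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

labels = ["yes", "yes", "no", "no"]
print(entropy(labels), gini(labels))  # evenly split node: entropy 1.0, gini 0.5
```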
Beyond SQL: Speeding up Spark with DataFrames | Databricks
In this talk I describe how you can use Spark SQL DataFrames to speed up Spark programs, even without writing any SQL. By writing programs using the new DataFrame API you can write less code, read less data and let the optimizer do the hard work.
Gradient Boosted Regression Trees in scikit-learn | DataRobot
Slides of the talk "Gradient Boosted Regression Trees in scikit-learn" by Peter Prettenhofer and Gilles Louppe held at PyData London 2014.
Abstract:
This talk describes Gradient Boosted Regression Trees (GBRT), a powerful statistical learning technique with applications in a variety of areas, ranging from web page ranking to environmental niche modeling. GBRT is a key ingredient of many winning solutions in data-mining competitions such as the Netflix Prize, the GE Flight Quest, or the Heritage Health Prize.
I will give a brief introduction to the GBRT model and regression trees, focusing on intuition rather than mathematical formulas. The majority of the talk will be dedicated to an in-depth discussion of how to apply GBRT in practice using scikit-learn. We will cover important topics such as regularization, model tuning, and model interpretation that should significantly improve your score on Kaggle.
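To make the boosting intuition concrete, the core loop, fit a small tree to the current residuals and add a damped copy of it to the ensemble, can be sketched with decision stumps in pure Python (a toy illustration, not the scikit-learn implementation the talk covers):

```python
def fit_stump(x, residuals):
    """Best single-feature stump: a threshold with left/right mean predictions."""
    best = None
    for t in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= t]
        right = [r for xi, r in zip(x, residuals) if xi > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    return best[1:]  # (threshold, left_mean, right_mean)

def fit_gbrt(x, y, n_rounds=20, lr=0.5):
    """Gradient boosting for squared error: each stump fits the residuals."""
    pred = [sum(y) / len(y)] * len(y)  # start from the mean prediction
    for _ in range(n_rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        t, lm, rm = fit_stump(x, residuals)
        # Learning rate damps each stump's contribution (regularization).
        pred = [p + lr * (lm if xi <= t else rm) for xi, p in zip(x, pred)]
    return pred

x = [1, 2, 3, 4, 5, 6]
y = [1.0, 1.2, 0.9, 4.0, 4.1, 3.9]
pred = fit_gbrt(x, y)
mse = sum((p - yi) ** 2 for p, yi in zip(pred, y)) / len(y)
print(round(mse, 4))
```

The `lr` (learning rate) and `n_rounds` knobs here mirror the regularization and model-tuning trade-off the talk discusses: smaller steps need more rounds but generalize better.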
KNN Algorithm using Python | How KNN Algorithm works | Python Data Science Tr... | Edureka!
** Python for Data Science: https://www.edureka.co/python **
This Edureka tutorial on the KNN Algorithm will help you build your base by covering the theoretical, mathematical, and implementation parts of the KNN algorithm in Python. Topics covered in this tutorial include:
1. What is KNN Algorithm?
2. Industrial Use case of KNN Algorithm
3. How things are predicted using KNN Algorithm
4. How to choose the value of K?
5. KNN Algorithm Using Python
6. Implementation of KNN Algorithm from scratch
Check out our playlist: http://bit.ly/2taym8X
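Item 6 above, implementing KNN from scratch, fits in a few lines; a minimal sketch using Euclidean distance and majority vote (stdlib only):

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points.

    `train` is a list of (features, label) pairs.
    """
    # Sort all training points by distance to the query, then take the top k.
    dists = sorted(
        (math.dist(features, query), label) for features, label in train
    )
    top_k = [label for _, label in dists[:k]]
    return Counter(top_k).most_common(1)[0][0]

train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((5.0, 5.0), "B"), ((5.2, 4.9), "B")]
print(knn_predict(train, (1.1, 0.9)))  # majority of the 3 nearest are "A"
```

Choosing k (topic 4 above) trades off noise sensitivity (small k) against blurring class boundaries (large k); odd values avoid ties in two-class problems.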
Apache Spark Data Source V2 with Wenchen Fan and Gengliang Wang | Databricks
As a general computing engine, Spark can process data from various data management/storage systems, including HDFS, Hive, Cassandra and Kafka. For flexibility and high throughput, Spark defines the Data Source API, which is an abstraction of the storage layer. The Data Source API has two requirements.
1) Generality: support reading/writing most data management/storage systems.
2) Flexibility: customize and optimize the read and write paths for different systems based on their capabilities.
Data Source API V2 is one of the most important features coming with Spark 2.3. This talk will dive into the design and implementation of Data Source API V2, with comparison to the Data Source API V1. We also demonstrate how to implement a file-based data source using the Data Source API V2 for showing its generality and flexibility.
Engineering Fast Indexes for Big-Data Applications: Spark Summit East talk by... | Spark Summit
Contemporary computing hardware offers massive new performance opportunities. Yet high-performance programming remains a daunting challenge.
We present some of the lessons learned while designing faster indexes, with a particular emphasis on compressed bitmap indexes. Compressed bitmap indexes accelerate queries in popular systems such as Apache Spark, Git, Elastic, Druid and Apache Kylin.
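The core idea behind a bitmap index is that each attribute value maps to a bitset of matching row IDs, so a conjunctive filter becomes a single bitwise AND; a minimal uncompressed sketch in Python using arbitrary-precision ints as bitsets (the systems named above use compressed formats such as Roaring, which this sketch does not attempt):

```python
from collections import defaultdict

class BitmapIndex:
    """Maps each attribute value to a bitset of row IDs (one bit per row)."""

    def __init__(self):
        self.bitmaps = defaultdict(int)

    def add(self, row_id, value):
        self.bitmaps[value] |= 1 << row_id

    def rows_matching(self, value):
        """Decode a bitset back into a sorted list of row IDs."""
        bits, row_id, out = self.bitmaps[value], 0, []
        while bits:
            if bits & 1:
                out.append(row_id)
            bits >>= 1
            row_id += 1
        return out

# Index two columns of a tiny table.
color, size = BitmapIndex(), BitmapIndex()
rows = [("red", "S"), ("blue", "M"), ("red", "M"), ("red", "L")]
for i, (c, s) in enumerate(rows):
    color.add(i, c)
    size.add(i, s)

# "color = red AND size = M" is one bitwise AND over two bitsets.
matches = color.bitmaps["red"] & size.bitmaps["M"]
print(bin(matches))  # only row 2 matches both predicates
```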
This Hadoop presentation will help you understand the different tools present in the Hadoop ecosystem. This Hadoop video will take you through an overview of the important tools of the Hadoop ecosystem, which include Hadoop HDFS, Hadoop Pig, Hadoop Yarn, Hadoop Hive, Apache Spark, Mahout, Apache Kafka, Storm, Sqoop, Apache Ranger, and Oozie, and will also discuss the architecture of these tools. It will cover the different tasks of Hadoop such as data storage, data processing, cluster resource management, data ingestion, machine learning, streaming, and more. Now, let us get started and understand each of these tools in detail.
Below topics are explained in this Hadoop ecosystem presentation:
1. What is Hadoop ecosystem?
1. Pig (Scripting)
2. Hive (SQL queries)
3. Apache Spark (Real-time data analysis)
4. Mahout (Machine learning)
5. Apache Ambari (Management and monitoring)
6. Kafka & Storm
7. Apache Ranger & Apache Knox (Security)
8. Oozie (Workflow system)
9. Hadoop MapReduce (Data processing)
10. Hadoop Yarn (Cluster resource management)
11. Hadoop HDFS (Data storage)
12. Sqoop & Flume (Data collection and ingestion)
What is this Big Data Hadoop training course about?
The Big Data Hadoop and Spark developer course has been designed to impart in-depth knowledge of Big Data processing using Hadoop and Spark. The course is packed with real-life projects and case studies to be executed in the CloudLab.
What are the course objectives?
This course will enable you to:
1. Understand the different components of the Hadoop ecosystem such as Hadoop 2.7, Yarn, MapReduce, Pig, Hive, Impala, HBase, Sqoop, Flume, and Apache Spark
2. Understand Hadoop Distributed File System (HDFS) and YARN as well as their architecture, and learn how to work with them for storage and resource management
3. Understand MapReduce and its characteristics, and assimilate some advanced MapReduce concepts
4. Get an overview of Sqoop and Flume and describe how to ingest data using them
5. Create database and tables in Hive and Impala, understand HBase, and use Hive and Impala for partitioning
6. Understand different types of file formats, Avro Schema, using Avro with Hive and Sqoop, and Schema evolution
7. Understand Flume, Flume architecture, sources, flume sinks, channels, and flume configurations
8. Understand HBase, its architecture, data storage, and working with HBase. You will also understand the difference between HBase and RDBMS
9. Gain a working knowledge of Pig and its components
10. Do functional programming in Spark
11. Understand resilient distributed datasets (RDDs) in detail
12. Implement and build Spark applications
13. Learn Spark SQL, creating, transforming, and querying DataFrames
14. Understand the common use-cases of Spark and the various interactive algorithms
Learn more at https://www.simplilearn.com/big-data-and-analytics/big-data-and-hadoop-training.
Unlocking Operational Intelligence from the Data LakeMongoDB
Hadoop-based data lakes are enabling enterprises and governments to efficiently capture and analyze unprecedented volumes of data. Join this webinar to learn how digital transformation is driving the rise of the data lake, the role Hadoop plays in generating new classes of analytics and insight, the critical capabilities you need to evaluate in an operational database for your data lake, and more.
Analyzing petabytes of smartmeter data using Cloud Bigtable, Cloud Dataflow, ... | Edwin Poot
The energy industry is in transition due to the exponential growth of data generated by the ever-increasing number of connected devices that comprise the Smart Grid. Learn how Energyworx uses GCP to collect and ingest this IoT data with ease, helping its customers uncover hidden value from this data and allowing them to create new business models and concepts.
Applying linear regression and predictive analytics | MariaDB plc
In this session Alejandro Infanzon, Solutions Engineer, introduces the linear regression and statistical functions that debuted in MariaDB ColumnStore 1.2, and how you can use them to support powerful analytics. He explains how to perform even-more-powerful analytics by writing multi-parameter user-defined functions (UDFs) – also new in MariaDB ColumnStore 1.2.
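The simple linear regression such SQL statistical functions compute, the slope and intercept of a least-squares line, can be checked against a few lines of Python (a stdlib sketch of the math, not MariaDB ColumnStore code):

```python
def linear_fit(xs, ys):
    """Ordinary least squares for y = slope*x + intercept, closed form:
    slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]  # exactly y = 2x + 1
slope, intercept = linear_fit(xs, ys)
print(slope, intercept)  # 2.0 1.0
```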
Next-generation enterprise-class architecture - Massimo Brignoli | Data Driven Innovation
The rise of data lakes - Companies today are flooded with data, and the classic data warehouse struggles to churn through it, in both volume and variety. Many have started looking at architectures called data lakes, with Hadoop as the reference technology. But is this solution right for everything? Come learn how to operationalize data lakes to build modern data-management architectures.
Simplified Machine Learning, Text, and Graph Analytics with Pivotal Greenplum | VMware Tanzu
Data is at the center of digital transformation; using data to drive action is how transformation happens. But data is messy, and it’s everywhere. It’s in the cloud and on-premises. It’s in different types and formats. By the time all this data is moved, consolidated, and cleansed, it can take weeks to build a predictive model.
Even with data lakes, efficiently integrating multi-structured data from different data sources and streams is a major challenge. Enterprises struggle with a stew of data integration tools, application integration middleware, and various data quality and master data management software. How can we simplify this complexity to accelerate and de-risk analytic projects?
The data warehouse—once seen as only for traditional business intelligence applications — has learned new tricks. Join James Curtis from 451 Research and Pivotal’s Bob Glithero for an interactive discussion about the modern analytic data warehouse. In this webinar, we’ll share insights such as:
- Why after much experimentation with other architectures such as data lakes, the data warehouse has reemerged as the platform for integrated operational analytics
- How consolidating structured and unstructured data in one environment—including text, graph, and geospatial data—makes in-database, highly parallel, analytics practical
- How bringing open-source machine learning, graph, and statistical methods to data accelerates analytical projects
- How open-source contributions from a vibrant community of Postgres developers reduces adoption risk and accelerates innovation
We thank you in advance for joining us.
Presenter : Bob Glithero, PMM, Pivotal and James Curtis Senior Analyst, 451 Research
Enabling Fast Data Strategy: What’s new in Denodo Platform 6.0 | Denodo
In this presentation, you will see the new functionalities of the Denodo 6.0 detailing dynamic query optimization engine, managing enterprise deployments, and using information self-service for discovery and search.
This presentation is part of the Fast Data Strategy Conference, and you can watch the video here goo.gl/DzRtkg.
Making Big Data Analytics with Hadoop fast & easy (webinar slides) | Yellowfin
Looking to analyze your Big Data assets to unlock real business benefits today? But are you sick of all the theories, hype, and hoopla?
View these slides from Actian and Yellowfin’s "Big Data Analytics with Hadoop" Webinar to discover how we’re making Big Data Analytics fast and easy.
Hold on as we go from data in Hadoop to dashboard in just 40-minutes.
Learn how to combine Hadoop with the most advanced Big Data technologies, and world’s easiest BI solution, to quickly generate real business value from Big Data Analytics.
Watch as we use live CDR data stored in Hadoop – quickly connecting, preparing, optimizing and analyzing this data in a tangible real-world use case from the telecommunications industry – to easily deliver actionable insights to anyone, anywhere, anytime.
To learn more about Yellowfin, and to try its intuitive Business Intelligence platform today, go here: http://www.yellowfinbi.com
To learn more about Actian, and its next generation suite of Big Data technologies, go here: http://www.actian.com/
Importance of ML Reproducibility & Applications with MLflow | Databricks
With data as a valuable currency and the architecture of reliable, scalable Data Lakes and Lakehouses continuing to mature, it is crucial that machine learning training and deployment techniques keep up to realize value. Reproducibility, efficiency, and governance in training and production environments rest on the shoulders of both point in time snapshots of the data and a governing mechanism to regulate, track, and make best use of associated metadata.
This talk will outline the challenges and importance of building and maintaining reproducible, efficient, and governed machine learning solutions as well as posing solutions built on open source technologies – namely Delta Lake for data versioning and MLflow for efficiency and governance.
This is our contribution to Data Science projects, as developed at our startup. These are part of partner trainings and of in-house design, development, and testing of course material and concepts in Data Science and Engineering. It covers data ingestion, data wrangling, feature engineering, data analysis, data storage, data extraction, querying data, and formatting and visualizing data for various dashboards. Data is prepared for accurate ML model predictions and Generative AI apps.
This is our project work at our startup for Data Science. It is part of our internal training and is focused on data management for AI, ML, and Generative AI apps.
Do compilers look anything like a data pipeline? How do you do data testing to ensure end-to-end provenance and enforce engineering guarantees for your data products? What baby steps should you consider when assembling your team?
A Common Problem:
- My Reports run slow
- Reports take 3 hours to run
- We don’t have enough time to run our reports
- It takes 5 minutes to view the first page!
As the report processing time increases, so does the frustration level.
Auto-Pilot for Apache Spark Using Machine Learning | Databricks
At Qubole, users run Spark at scale in the cloud (900+ concurrent nodes). At such scale, tuning Spark configurations is essential for efficiently running SLA-critical jobs, but it continues to be a difficult undertaking, largely driven by trial and error. In this talk, we address the problem of auto-tuning SQL workloads on Spark; the same technique can also be adapted for non-SQL Spark workloads.
In our earlier work [1], we proposed a model based on simple rules and insights. It was simple yet effective at optimizing queries and finding the right instance types to run them. However, with respect to auto-tuning Spark configurations, we saw scope for improvement. On exploration, we found previous work addressing auto-tuning with machine learning techniques. One major drawback of the simple model [1] is that it cannot use multiple runs of a query to improve its recommendation, whereas the major drawback of machine learning techniques is that they lack domain-specific knowledge. Hence, we decided to combine both techniques.
Our auto-tuner interacts with both models to arrive at good configurations. Once the user selects a query to auto-tune, the next configuration is computed from the models and the query is run with it. Metrics from the event log of the run are fed back to the models to obtain the next configuration. The auto-tuner continues exploring configurations until it exhausts the fixed budget specified by the user. We found that, in practice, this method gives much better configurations than those chosen even by experts on real workloads, and converges quickly to an optimal configuration. In this talk, we will present the novel ML model technique and the way it was combined with our earlier approach. Results on real workloads will be presented, along with limitations and challenges in productionizing them.
[1] Margoor et al., "Automatic Tuning of SQL-on-Hadoop Engines", 2018, IEEE CLOUD
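The fixed-budget feedback loop described above, run a configuration, collect metrics, propose the next configuration, can be sketched as simple hill climbing. This is only an illustration of the loop structure, not Qubole's model: `measure` is a hypothetical stand-in for running the query and reading its event log, and the knob names are invented:

```python
import random

def measure(config):
    """Hypothetical stand-in for running the query and collecting metrics.

    Synthetic runtime surface with an optimum near 8 cores / 4 GB memory.
    """
    return (config["cores"] - 8) ** 2 + (config["memory_gb"] - 4) ** 2 + 10.0

def auto_tune(start, budget=30, seed=1):
    """Hill-climb from a rule-based starting config within a fixed trial budget."""
    rng = random.Random(seed)
    best_cfg, best_time = dict(start), measure(start)
    for _ in range(budget - 1):
        # Perturb one knob of the best configuration found so far.
        candidate = dict(best_cfg)
        knob = rng.choice(["cores", "memory_gb"])
        candidate[knob] = max(1, candidate[knob] + rng.choice([-2, -1, 1, 2]))
        t = measure(candidate)
        if t < best_time:  # feedback: keep the config only if it improved
            best_cfg, best_time = candidate, t
    return best_cfg, best_time

start = {"cores": 2, "memory_gb": 2}  # e.g. seeded by simple rule-based model
best_cfg, best_time = auto_tune(start)
print(best_cfg, best_time)
```

The talk's approach replaces the blind perturbation step with model-guided proposals, which is what lets it converge in far fewer trials than trial and error.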
Similar to Automatic Parameter Tuning for Databases and Big Data Systems (20)
Adjusting primitives for graph : SHORT REPORT / NOTESSubhajit Sahu
Graph algorithms, like PageRank Compressed Sparse Row (CSR) is an adjacency-list based graph representation that is
Multiply with different modes (map)
1. Performance of sequential execution based vs OpenMP based vector multiply.
2. Comparing various launch configs for CUDA based vector multiply.
Sum with different storage types (reduce)
1. Performance of vector element sum using float vs bfloat16 as the storage type.
Sum with different modes (reduce)
1. Performance of sequential execution based vs OpenMP based vector element sum.
2. Performance of memcpy vs in-place based CUDA based vector element sum.
3. Comparing various launch configs for CUDA based vector element sum (memcpy).
4. Comparing various launch configs for CUDA based vector element sum (in-place).
Sum with in-place strategies of CUDA mode (reduce)
1. Comparing various launch configs for CUDA based vector element sum (in-place).
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...Subhajit Sahu
Abstract — Levelwise PageRank is an alternative method of PageRank computation which decomposes the input graph into a directed acyclic block-graph of strongly connected components, and processes them in topological order, one level at a time. This enables calculation for ranks in a distributed fashion without per-iteration communication, unlike the standard method where all vertices are processed in each iteration. It however comes with a precondition of the absence of dead ends in the input graph. Here, the native non-distributed performance of Levelwise PageRank was compared against Monolithic PageRank on a CPU as well as a GPU. To ensure a fair comparison, Monolithic PageRank was also performed on a graph where vertices were split by components. Results indicate that Levelwise PageRank is about as fast as Monolithic PageRank on the CPU, but quite a bit slower on the GPU. Slowdown on the GPU is likely caused by a large submission of small workloads, and expected to be non-issue when the computation is performed on massive graphs.
Data Centers - Striving Within A Narrow Range - Research Report - MCG - May 2...pchutichetpong
M Capital Group (“MCG”) expects to see demand and the changing evolution of supply, facilitated through institutional investment rotation out of offices and into work from home (“WFH”), while the ever-expanding need for data storage as global internet usage expands, with experts predicting 5.3 billion users by 2023. These market factors will be underpinned by technological changes, such as progressing cloud services and edge sites, allowing the industry to see strong expected annual growth of 13% over the next 4 years.
Whilst competitive headwinds remain, represented through the recent second bankruptcy filing of Sungard, which blames “COVID-19 and other macroeconomic trends including delayed customer spending decisions, insourcing and reductions in IT spending, energy inflation and reduction in demand for certain services”, the industry has seen key adjustments, where MCG believes that engineering cost management and technological innovation will be paramount to success.
MCG reports that the more favorable market conditions expected over the next few years, helped by the winding down of pandemic restrictions and a hybrid working environment will be driving market momentum forward. The continuous injection of capital by alternative investment firms, as well as the growing infrastructural investment from cloud service providers and social media companies, whose revenues are expected to grow over 3.6x larger by value in 2026, will likely help propel center provision and innovation. These factors paint a promising picture for the industry players that offset rising input costs and adapt to new technologies.
According to M Capital Group: “Specifically, the long-term cost-saving opportunities available from the rise of remote managing will likely aid value growth for the industry. Through margin optimization and further availability of capital for reinvestment, strong players will maintain their competitive foothold, while weaker players exit the market to balance supply and demand.”
Data Centers - Striving Within A Narrow Range - Research Report - MCG - May 2...
Automatic Parameter Tuning for Databases and Big Data Systems
1. Speedup your Analytics: Automatic Parameter Tuning for Databases and Big Data Systems
VLDB 2019 Tutorial
Jiaheng Lu, University of Helsinki
Yuxing Chen, University of Helsinki
Herodotos Herodotou, Cyprus University of Technology
Shivnath Babu, Duke University / Unravel Data Systems
2. VLDB 2019 Speedup Your Analytics: Automatic Parameter Tuning for Databases and Big Data Systems 2
Outline
Motivation and Background
History and Classification
Parameter Tuning on Databases
Parameter Tuning on Big Data Systems
Applications of Automatic Parameter Tuning
Open Challenges and Discussion
3.
Modern applications are being built on a collection of distributed systems
4.
But: running distributed applications reliably & efficiently is hard
5.
My app failed
6.
My data pipeline is missing SLA
7.
My cloud cost is out of control
8.
There are many challenges
9.
Self-driving Systems
Automate physical data layout
Index and view recommendation
Knob tuning, buffer pool sizes, etc.
Plan optimization
10.
Different Optimization Levels of Self-driving Systems
Levels: automate physical data layout; index and view recommendation; knob tuning, buffer pool sizes, etc.; plan optimization
The focus of this tutorial: knob tuning
11.
Effectiveness of Knob (Parameter) Tuning
12.
Selected Performance-aware Parameters in PostgreSQL
Parameter Name Description Default value
bgwriter_lru_maxpages Max number of buffers written by the background writer 100
checkpoint_segments Max number of log file segments between WAL checkpoints 3
checkpoint_timeout Max time between automatic WAL checkpoints 5 min
deadlock_timeout Waiting time on locks for checking for deadlocks 1 sec
effective_cache_size Size of the disk cache accessible to one query 4 GB
effective_io_concurrency Number of disk I/O operations to be executed concurrently 1 or 0
shared_buffers Memory size for shared memory buffers 128MB
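Knobs like these are set in postgresql.conf or with ALTER SYSTEM. As a minimal sketch, the snippet below renders such statements from a knob dictionary; the values in PG_KNOBS are hypothetical starting points for illustration only, not recommendations (good settings depend on RAM, storage, and workload).

```python
# Hypothetical values for illustration, NOT universal recommendations.
PG_KNOBS = {
    "shared_buffers": "4GB",            # default 128MB
    "effective_cache_size": "12GB",     # default 4GB
    "effective_io_concurrency": "8",    # default 1 or 0
    "checkpoint_timeout": "15min",      # default 5min
    "deadlock_timeout": "1s",           # default 1s
    "bgwriter_lru_maxpages": "200",     # default 100
}

def alter_system_statements(knobs):
    """Render knob settings as ALTER SYSTEM statements (applied after a
    configuration reload or a server restart, depending on the knob)."""
    return ["ALTER SYSTEM SET {} = '{}';".format(k, v) for k, v in knobs.items()]
```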
13.
Selected Performance-aware Parameters in Hadoop
Parameter Name Description Default value
dfs.block.size The default block size for files stored in HDFS 128MB
mapreduce.map.tasks Number of map tasks 2
mapreduce.reduce.tasks Number of reduce tasks 1
mapreduce.job.reduce.slowstart.completedmaps Min percent of map tasks completed before scheduling reduce tasks 0.05
mapreduce.map.combine.minspills Min number of map output spill files present for using the combine function 3
mapreduce.reduce.merge.inmem.threshold Max number of shuffled map output pairs before initiating merging during the shuffle 1000
14.
Selected Performance-aware Parameters in Spark
Parameter Name Description Default value
spark.driver.cores Number of cores used by the Spark driver process 1
spark.driver.memory Memory size for the driver process 1 GB
spark.sql.shuffle.partitions Number of shuffle tasks (partitions) 200
spark.executor.cores Number of cores for each executor 1
spark.files.maxPartitionBytes Max number of bytes to group into one partition during file reading 128MB
spark.memory.fraction Fraction of memory for execution and storage; a low fraction may cause frequent spills or cached data eviction 0.6
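Knobs like these are typically passed per job as spark-submit --conf flags. A small illustrative sketch that renders such flags; the values in SPARK_CONF are hypothetical, tuned-for-one-job examples rather than defaults or recommendations.

```python
# Hypothetical tuned values for one job, NOT defaults or recommendations.
SPARK_CONF = {
    "spark.driver.memory": "4g",
    "spark.executor.cores": "4",
    "spark.sql.shuffle.partitions": "400",
    "spark.memory.fraction": "0.6",
}

def spark_submit_args(conf):
    """Render a configuration as spark-submit '--conf key=value' arguments."""
    args = []
    for key, value in conf.items():
        args += ["--conf", "{}={}".format(key, value)]
    return args
```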
15.
Challenge 1: A Huge Number of Systems
16.
Challenge 2: Many Parameters in a Single System
17.
Challenge 3: Diverse Workloads and System Complexity
An example: the Spark framework
18.
Franco Pepe, chef:
"There is no pizza recipe. Every time the dough was made there were no scales, recipes, machinery."
There is no knob-tuning recipe. Every time, we need to configure the parameters based on the bottlenecks of different jobs and environments.
-- VLDB 2019 tutorial
19.
Running Examples of Parameter Tuning (Hadoop)*
Workloads
➢ Terasort: Sort a terabyte of data
➢ N-gram: Compute the inverted list of N-gram data
➢ PageRank: Compute pagerank of graphs
Hadoop platform with MapReduce
* Juwei Shi, Jia Zou, Jiaheng Lu, Zhao Cao, Shiqiang Li, Chen Wang:
MRTuner: A Toolkit to Enable Holistic Optimization for MapReduce Jobs.
PVLDB 7(13): 1319-1330 (2014)
20.
Running Examples of Parameter Tuning (Hadoop)
Problem: Given a MapReduce (or Spark) job with its input data and running cluster, find the parameter settings that optimize the execution time of the job (i.e., minimize the job execution time)
21.
Tuned Key Parameters in Hadoop
Parameter Name Description
MapInputSplit Split number for map jobs
MapOutputBuffer Buffer size of map output
MapOutputCompression Whether the map output data is compressed
ReduceCopy Time to start the copy in Reduce phase
ReduceInputBuffer Input buffer size of Reduce
ReduceOutput Reduce output block size
22.
Impact of Parameters on Selected Jobs
Figure. Parameter impact on the TeraSort, N-gram, and PageRank jobs
23.
Comparison between Hadoop-X and MRTuner with Different Parameters
24.
50 Years of Knob Tuning
Rule-based & DBA guideline (1960s)
Cost-model (1970s)
Experiments-driven (1980s)
Simulation (2000s)
Adaptive model (2006)
Machine learning (2007)
25.
Classification of Existing Approaches
Approach Main Idea
Rule-based Based on the experience of human experts
Cost Modeling Using statistical cost functions
Simulation-based Modular or complete system simulation
Experiment-driven Execute an experiment with different parameter settings
Machine Learning Employ machine learning methods
Adaptive Tune configuration parameters adaptively
26.
Rule-based Approach
➢ Assist users based on the experience of human experts
Parameter Name Default Description Recommendation
dfs.replication in HDFS 3 Lower it to reduce replication cost; higher replication can make data local to more workers, but adds space overhead IF (running time is the critical goal and there is enough space) set 5; otherwise set 3
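The IF/THEN recommendation above can be sketched in a few lines; the 0.5 free-space threshold is an assumption added only to make "enough space" concrete.

```python
def recommend_dfs_replication(running_time_critical: bool,
                              free_space_ratio: float) -> int:
    """Encode the slide's rule for dfs.replication in HDFS.

    Higher replication makes data local to more workers (faster jobs)
    but costs more space. The 0.5 free-space threshold is an assumption.
    """
    if running_time_critical and free_space_ratio > 0.5:
        return 5
    return 3
```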
27.
Cost Modeling Approach
➢ Build performance prediction models by using statistical cost functions
Figure. Operations, parameter values, and statistics are combined with cost constants and cost formulas to produce a cost estimation
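A toy illustration of the idea: invented cost constants plus a simplistic read + CPU + write formula turn job statistics into a time estimate. Real cost models are far more detailed; the constants and formula here are assumptions for illustration only.

```python
# Hypothetical cost constants (as if measured on a reference machine).
COST = {"read_mb_per_s": 100.0, "write_mb_per_s": 80.0, "cpu_us_per_record": 5.0}

def estimate_map_time_s(input_mb, records, output_mb, constants=COST):
    """Cost formula: time = read + cpu + write, combining cost constants
    with operation statistics (data sizes, record counts)."""
    read = input_mb / constants["read_mb_per_s"]
    cpu = records * constants["cpu_us_per_record"] / 1e6
    write = output_mb / constants["write_mb_per_s"]
    return read + cpu + write
```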
28.
Simulation-based Approach
➢ Build performance models based on modular or complete simulation
29.
Experiment-driven Approach
➢ Execute the experiments repeatedly with different parameter settings, guided by a search algorithm
Figure. A feedback loop: input knobs and a tuning goal drive experiments, which exploit knobs and output recommended knobs
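The loop can be sketched as below, with uniform random sampling standing in for a real search algorithm and run_experiment standing in for an actual run on the system; both stand-ins are assumptions for illustration.

```python
import random

def experiment_driven_search(run_experiment, knob_ranges, budget, seed=0):
    """Repeatedly execute experiments with different settings, keep the best.

    run_experiment(conf) -> execution time; stands in for a real run.
    knob_ranges: {knob_name: (low, high)} continuous ranges (an assumption).
    """
    rng = random.Random(seed)
    best_conf, best_time = None, float("inf")
    for _ in range(budget):
        conf = {k: rng.uniform(lo, hi) for k, (lo, hi) in knob_ranges.items()}
        t = run_experiment(conf)
        if t < best_time:
            best_conf, best_time = conf, t
    return best_conf, best_time
```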
30.
Machine Learning Approach
➢ Establish performance models by employing machine learning methods
➢ Consider the complex system as a whole and assume no knowledge of system internals
31.
Adaptive Approach
➢ Tune parameters adaptively while an application is running
➢ Adjust the parameter settings as the environment changes
Figure. The online COLT (2006) strategy: a self-tuning module recommends index selections as new queries and new environments arrive
32.
Outline
Motivation and Background
History and Classification
Parameter Tuning on Databases
Parameter Tuning on Big Data Systems
Applications of Automatic Parameter Tuning
Open Challenges and Discussion
33.
What and How to Tune?
➢ What to configure?
❖ Which parameters (knobs)?
❖ Which are most important?
➢ How to tune (for the best throughput)?
❖ Increase the buffer size?
❖ More parallelism on writing?
"I am a database, I am running queries — run faster? higher throughput?"
Figure. Tuning guitar knobs to the right notes (frequencies)
34.
What to Tune – Some Important Knobs for Throughput
Parameter Name Brief Description and Use Default
bgwriter_delay Background writer's delay between activity rounds 200ms
bgwriter_lru_maxpages Max number of buffers written by the background writer 100
checkpoint_segments Max number of log file segments between WAL checkpoints 3
checkpoint_timeout Max time between automatic WAL checkpoints 5min
deadlock_timeout Waiting time on locks for checking for deadlocks 1s
default_statistics_target Default statistics target for table columns 100
effective_cache_size Effective size of the disk cache accessible to one query 4GB
shared_buffers Memory size for shared memory buffers 128MB
Knob categories: memory, cache, timeout settings, threads
35.
What are the Important Parameters and How to Choose
➢ Those that affect performance most (chosen manually)
❖ Based on expert experience
❖ Based on the default documentation
"If you want higher throughput, you had better tune the memory-related parameters."
Performance-sensitive parameters are important!
Parameters that have a strong correlation to performance are important!
36.
What are the Important Parameters and How to Choose
➢ Those that affect performance most
➢ Strongest correlation between parameters and the objective function (model)
❖ Linear regression model for independent parameters:
❑ Regularized version of least squares – Lasso (OtterTune 2017)
✔ Interpretable, stable, and computationally efficient in higher dimensions
❖ Deep learning model (CDBTune 2019)
❑ Important input parameters gain higher weights during training
Figure. Model weights mapped to knobs
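The Lasso-based ranking idea can be sketched from scratch: standardize the knob columns, fit with an L1 penalty by coordinate descent, and rank knobs by coefficient magnitude. This is a toy stand-in for illustration, not OtterTune's implementation.

```python
def lasso_rank(X, y, lam=0.1, iters=100):
    """Toy coordinate-descent Lasso for knob ranking.

    X: rows of knob values; y: measured performance metric.
    Returns knob indices sorted by importance (|coefficient|), plus the
    coefficients in standardized space.
    """
    n, d = len(X), len(X[0])
    cols = list(zip(*X))
    means = [sum(c) / n for c in cols]
    scales = [max((sum((v - m) ** 2 for v in c) / n) ** 0.5, 1e-12)
              for c, m in zip(cols, means)]
    # Standardize so the L1 penalty treats knobs on different scales fairly.
    Z = [[(row[j] - means[j]) / scales[j] for j in range(d)] for row in X]
    ybar = sum(y) / n
    yc = [v - ybar for v in y]
    w = [0.0] * d
    for _ in range(iters):
        for j in range(d):
            # Correlation of knob j with the partial residual.
            rho = sum(Z[i][j] * (yc[i] - sum(w[k] * Z[i][k]
                                             for k in range(d) if k != j))
                      for i in range(n)) / n
            # Soft-thresholding zeroes out weakly correlated knobs.
            w[j] = (rho - lam) if rho > lam else (rho + lam) if rho < -lam else 0.0
    return sorted(range(d), key=lambda j: -abs(w[j])), w
```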
37.
How to Tune – Key Tuning Goals
➢ Avoidance: to identify and avoid error-prone configuration settings
➢ Ranking: to rank parameters according to the performance impact
➢ Profiling: to classify and store useful log information from previous runs
➢ Prediction: to predict the database or workload performance under
hypothetical resource or parameter changes
➢ Tuning: to recommend parameter values to achieve objective goals
38.
How to Tune – Tuning Methods
Methods Approach Methodology Target Level
Rule-based SPEX (2013) Constraint inference Avoidance
Rule-based Xu (2015) Configuration navigation Ranking
Cost-model STMM (2006) Cost model Tuning
Simulation-based Dushyanth (2005) Trace-based simulation Prediction
Simulation-based ADDM (2005) DAG model & simulation Profiling, tuning
Experiment-driven SARD (2008) P&B statistical design Ranking
Experiment-driven iTuned (2009) LHS & Gaussian Process Profiling, tuning
Machine Learning Rodd (2016) Neural networks Tuning
Machine Learning OtterTune (2017) Gaussian Process Ranking, tuning
Machine Learning CDBTune (2019) Deep RL Tuning
Adaptive COLT (2006) Cost vs. gain analysis Profiling, tuning
39.
Relational Database Tuning Methods
Figure. Developing trend (2005–2017): methods put in more training cost to uncover more knobs over time (rule-based, cost modeling, simulation-based, experiment-driven, machine learning)
Figure. Required expert knowledge on the system, by method (rule-based, simulation-based, experiment-driven, machine learning, adaptive, cost modeling)
40.
Tuning Method: Rule-based
➢ Tuning based on rules derived from DBAs' expertise, experience, and knowledge, or rule-of-thumb default recommendations
Rules come from trusted parties:
Expert: "Guarantee cache memory to accelerate queries …"
Rule of thumb: "Better not change the deadlock timeout if …"
Documents: "Default settings work in most of the cases …"
41.
Tuning Method: Cost Modeling
➢ A cost model establishes a performance model through cost functions, based on a deep understanding of system components
Figure. Operations, parameter values, and statistics are combined with cost constants and cost formulas to produce a cost estimate
42.
Tuning Method: Cost Modeling (STMM)
➢ STMM: Adaptive Self-Tuning Memory in DB2 (2006)
❖ Reallocates memory for several critical components (e.g., compiled statement cache, sort, and buffer pools)
43.
Tuning Method: Simulation-based
➢ A simulation-based approach simulates workloads in one environment and learns experience or builds models to predict the performance in another
"Running the job here is (1) expensive, or (2) slows down concurrent jobs, or (3) …" — often the production environment
"Simulate it in a small environment with a tiny portion of the data …" — often a test environment
44.
Tuning Method: Experiment-driven
➢ An experiment-driven approach relies on repeated executions of the same workload under different configuration settings to tune parameter values
Figure. A feedback loop: input knobs and a tuning goal drive experiments, which exploit knobs and output recommended knobs
Classic paper: Tuning Database Configuration Parameters with iTuned, 2009
45.
Tuning Method: Machine Learning
➢ Machine learning (ML) approaches aim to tune parameters automatically by taking advantage of ML methods
Figure. Training logs feed ML training to build an ML model; given input knobs and a goal, the model recommends knobs, and actual run logs are fed back
46.
Tuning Method: Machine Learning (OtterTune 2017)
➢ Factor analysis: transform high-dimensional metrics into a few factors
➢ K-means: cluster distinct metrics
➢ Lasso: rank parameters
➢ Gaussian Process: predict and tune performance
Figure. OtterTune system architecture
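The Gaussian Process predictor at the heart of such tuners can be sketched in a few lines. This toy version assumes a single 1-D knob, an RBF kernel, and computes only the posterior mean k*ᵀ(K + σ²I)⁻¹y; OtterTune's actual GP machinery is considerably richer.

```python
import math

def rbf(a, b, ls=1.0):
    """RBF (squared-exponential) kernel with length scale ls."""
    return math.exp(-0.5 * ((a - b) / ls) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting (tiny helper, no NumPy)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, x_star, noise=1e-6, ls=1.0):
    """GP posterior mean: k*^T (K + sigma^2 I)^-1 y.
    In a tuner, xs are tried knob settings and ys the measured performance."""
    K = [[rbf(a, b, ls) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    return sum(rbf(x_star, xi, ls) * ai for xi, ai in zip(xs, alpha))
```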
47.
Tuning Method: Machine Learning (CDBTune 2019)
➢ Reinforcement learning
▪ State: knobs and metrics
▪ Reward: performance change
▪ Action: recommended knobs
▪ Policy: deep neural network
➢ Key idea
▪ Feedback: trial-and-error method
▪ Recommend -> good/bad
▪ Deep deterministic policy gradient
▪ Actor-critic algorithm
Figure. CDBTune deep deterministic policy gradient
Reward: throughput and latency performance change Δ from time t − 1 to t and from the initial time to time t
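A hedged sketch of such a reward signal, combining the change from the previous step and from the initial state, with higher throughput and lower latency both rewarded. The equal weighting and relative-change form are assumptions for illustration, not the paper's exact formula.

```python
def reward(throughput_t, latency_t,
           throughput_0, latency_0,
           throughput_prev, latency_prev):
    """CDBTune-style reward sketch: sum of relative performance changes
    from the initial time and from time t-1 to time t (assumed weighting)."""
    def delta(cur, base, higher_is_better):
        change = (cur - base) / base
        return change if higher_is_better else -change
    d_tps = (delta(throughput_t, throughput_0, True)
             + delta(throughput_t, throughput_prev, True))
    d_lat = (delta(latency_t, latency_0, False)
             + delta(latency_t, latency_prev, False))
    return 0.5 * d_tps + 0.5 * d_lat
```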
48.
Tuning Method: Adaptive
➢ An adaptive approach changes parameter configurations online as the environment or query workload changes
Figure. The online COLT (2006) strategy: a self-tuning module recommends index selections as new queries and new environments arrive
49.
Differences in Tuning Databases & Big Data Systems in Research Papers
Aspect: Relational Database vs. Big Data System
Parameters: more parameters on memory vs. more parameters on vcores
Resources: often fixed resources vs. now more varying resources
Scalability: often a single machine vs. often many machines in a distributed environment
Metrics: throughput and latency vs. time and resource cost
50.
References (1/1)
➢ Narayanan, Dushyanth, Eno Thereska, and Anastassia Ailamaki. "Continuous resource monitoring for self-predicting DBMS." 13th IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems. IEEE, 2005.
➢ Dias, K., Ramacher, M., Shaft, U., Venkataramani, V., & Wood, G. "Automatic Performance Diagnosis and Tuning in Oracle." CIDR (pp. 84-94), 2005.
➢ Schnaitter, K., Abiteboul, S., Milo, T., & Polyzotis, N. "COLT: Continuous On-Line Tuning." Proceedings of the 2006 ACM SIGMOD International Conference on Management of Data (pp. 793-795). ACM, 2006. (COLT)
➢ Storm, Adam J., et al. "Adaptive self-tuning memory in DB2." Proceedings of the 32nd International Conference on Very Large Data Bases. VLDB Endowment, 2006. (STMM)
➢ Debnath, Biplob K., David J. Lilja, and Mohamed F. Mokbel. "SARD: A statistical approach for ranking database tuning parameters." 2008 IEEE 24th International Conference on Data Engineering Workshop. IEEE, 2008. (SARD)
➢ Babu, S., Borisov, N., Duan, S., Herodotou, H., & Thummala, V. "Automated Experiment-Driven Management of (Database) Systems." HotOS, 2009.
➢ Duan, Songyun, Vamsidhar Thummala, and Shivnath Babu. "Tuning database configuration parameters with iTuned." Proceedings of the VLDB Endowment 2.1: 1246-1257, 2009. (iTuned)
➢ Rodd, Sunil F., and Umakanth P. Kulkarni. "Adaptive tuning algorithm for performance tuning of database management system." arXiv preprint arXiv:1005.0972, 2010.
➢ Van Aken, D., Pavlo, A., Gordon, G. J., & Zhang, B. "Automatic database management system tuning through large-scale machine learning." Proceedings of the 2017 ACM International Conference on Management of Data (pp. 1009-1024). ACM, 2017. (OtterTune)
➢ Zhang, J., Liu, Y., Zhou, K., Li, G., Xiao, Z., Cheng, B., & Ran, M. "An end-to-end automatic cloud database tuning system using deep reinforcement learning." Proceedings of the 2019 International Conference on Management of Data (pp. 415-432). ACM, 2019. (CDBTune)
51.
Outline
Motivation and Background
History and Classification
Parameter Tuning on Databases
Parameter Tuning on Big Data Systems
Applications of Automatic Parameter Tuning
Open Challenges and Discussion
52.
Ecosystems for Big Data Analytics
MapReduce-based Systems Spark-based Systems
Resource Managers
Distributed File Systems
53.
Executing Analytics Workloads
Goal: execute an analytics workload (e.g., a MapReduce workload) in < 2 hours
Decisions at three levels:
Job: task parallelism, use compression, …
Platform: container settings, executor cores, …
Resources: number of nodes, machine specs, …
Figure. Tasks flow from input to output; containers (C) run on NodeManagers (NM) over DataNodes (DN)
54.
Effect of Job-level Configuration Parameters
➢ 190+ parameters in Hadoop; 15-20 impact performance
➢ 200+ parameters in Spark; 20-30 impact performance
Scenario: 2 MapReduce jobs, 50GB of data, 16-node EC2 cluster
Figure. Two-dimensional projections of a multi-dimensional response surface for Word Co-occurrence and Terasort
55.
Tuning Challenges
➢ High-dimensional space of configuration parameters
➢ Non-linear effect of hardware/applications/parameters on performance
➢ Heavy use of programming languages (e.g., Java/Python)
➢ Lack of schema & statistics for input data residing in files
➢ Terabyte-scale data cycles
56.
Applying Cost-based Optimization
57.
Applying Cost-based Optimization
Profile: collect concise & general summaries of execution
Predict: estimate the impact of hypothetical changes on execution
Optimize
58.
Profiling MapReduce Job Execution
⮚ Use dynamic instrumentation
❖ Supports unmodified MapReduce programs
⮚ Use sampling techniques
❖ Minimize the overhead of profiling
Figure. A job profile captures dataflow statistics, dataflow counters, cost statistics, and cost counters from MapReduce job execution
59.
Predicting Job Profiles in Starfish
Figure. A measured job profile (dataflow statistics and counters, cost statistics and counters) is mapped to a predicted profile using cardinality models, relative black-box models, and analytical models
60.
Job Optimization & Resource Provisioning
Figure. Given a job profile, cluster resources r2, and input data d2, the Job Optimizer enumerates independent subspaces and applies Recursive Random Search; the Elastisizer additionally enumerates cloud resource options
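Recursive Random Search, the optimizer used by Starfish, can be sketched as sample-then-shrink: sample the space uniformly, then repeatedly shrink the search box around the best point found and re-sample. The round, sample, and shrink constants below are invented for illustration.

```python
import random

def recursive_random_search(f, bounds, rounds=6, samples=30, shrink=0.5, seed=0):
    """Toy Recursive Random Search over a box-shaped configuration space.

    f maps a configuration point to predicted cost (lower is better);
    bounds is a list of (low, high) per dimension.
    """
    rng = random.Random(seed)
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    best_x, best_y = None, float("inf")
    for _ in range(rounds):
        for _ in range(samples):
            x = [rng.uniform(l, h) for l, h in zip(lo, hi)]
            y = f(x)
            if y < best_y:
                best_x, best_y = x, y
        # Shrink the search box around the current best point,
        # clamped to the original bounds.
        for d, (l0, h0) in enumerate(bounds):
            half = (hi[d] - lo[d]) * shrink / 2
            lo[d] = max(l0, best_x[d] - half)
            hi[d] = min(h0, best_x[d] + half)
    return best_x, best_y
```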
61.
MapReduce Cost Modeling Approaches
Approach Modeling Optimization Target Level
Starfish (2011-13) Analytical & relative black-box models Recursive Random Search Job, Platform, Cloud
ARIA (2011) Analytical models Lagrange Multipliers Job, Platform
HPM (2011) Scaling models & LR Brute-force Search Platform
Predator (2012) Analytical models Grid Hill Climbing Job
MRTuner (2014) PTC analytical models Grid-based search Job, Platform
CRESP (2014) Analytical models & LR Brute-force Search Platform, Cloud
MR-COF (2015) Analytical models & MRPerf simulation Genetic Algorithm Job
IHPM (2016) Scaling models & LWLR Lagrange Multipliers Platform, Cloud
62.
Spark Cost Modeling Approaches
➢ Focus: profile using small clusters and data samples; predict performance on large clusters with the full data
➢ Unique features:
❖ Ernest (2016): focuses on machine learning Spark applications
❖ Assurance (2017): mixes white-box models with simulation
❖ DynamiConf (2017): optimizes the degree of parallelism for tasks
63.
Cost Modeling Approach: Pros & Cons
Pros: Very efficient for predicting performance; good accuracy in many (not complex) scenarios
Cons: Hard to capture the complexity of system internals & pluggable components (e.g., schedulers); models are often based on simplified assumptions; not effective on heterogeneous clusters
64.
Simulation-based Approach
➢ Key objective: accurately predict MapReduce job performance at fine granularity
❖ Sorry, no fully-fledged Spark simulator is available at this point!
➢ Use cases:
❖ Find optimal configuration settings
❖ Find cluster settings based on user requirements
❖ Identify performance bottlenecks
❖ Test new pluggable components (e.g., schedulers)
➢ Common technique: discrete event simulation
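A minimal discrete event simulation in this spirit: a priority queue of completion events drives a wave-based task scheduler with a fixed number of slots. This is vastly simpler than a real Hadoop simulator (no network, no sub-phases), and the wave model is an assumption for illustration.

```python
import heapq

def simulate_tasks(num_tasks, num_slots, task_time):
    """Discrete event simulation of identical tasks on a fixed slot pool:
    each completion event frees a slot for the next pending task.
    Returns the total makespan."""
    events = []  # min-heap of (finish_time, slot)
    pending = num_tasks
    # Seed the initial wave of tasks.
    for slot in range(min(num_slots, num_tasks)):
        heapq.heappush(events, (task_time, slot))
        pending -= 1
    makespan = 0.0
    while events:
        now, slot = heapq.heappop(events)
        makespan = now
        if pending > 0:
            heapq.heappush(events, (now + task_time, slot))
            pending -= 1
    return makespan
```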
65.
HSim: Hadoop Simulator
Produces a detailed fine-grained execution trace
Figure. The master node runs the JobTracker plus a Job Reader and a Cluster Reader; each slave node runs a TaskTracker with MapperSim and ReducerSim tasks, communicating via heartbeats. Inputs: job specs (data properties, conf parameters) and cluster specs (network topology, node resources)
66.
Comparison of Hadoop Simulators
Simulator Network Traffic Hardware Properties MapReduce Execution MapReduce Scheduling Conf Parameters
MRPerf (2009) Yes (ns-2) Yes Task sub-phases No Only few
MRSim (2010) Yes (GridSim) Yes Task sub-phases No Several
Mumak (2009) No No Only task level No Only few
SimMR (2011) No No Task sub-phases FIFO, Deadline Only few
SimMapRed (2011) Yes (GridSim) Yes Task sub-phases Several Only few
HSim (2013) Yes (GridSim) Yes Task sub-phases FIFO, FAIR Several
67.
Simulation-based Approach: Pros & Cons
Pros: High accuracy in simulating dynamic system behaviors; efficient for predicting fine-grained performance
Cons: Hard to comprehensively simulate complex internal dynamics; unable to capture dynamic cluster utilization; not very efficient for finding optimal settings
68.
General Experiment-driven Architecture
Figure. A Parameter Search Engine proposes configurations; an Application Executor runs the MR/Spark application on [a sample of] the input data on the Hadoop/Spark cluster; a Performance Analyzer inspects the logs and feeds results back to the search engine, which eventually outputs the best parameter settings
69.
Experiment-driven Approaches
(Same architecture, with different parameter search engines:)
Panacea (2012): grid search on independent subspaces
Gunther (2013): genetic search algorithm
Petridis (2016): search tree based on heuristics
AutoTune (2018): multiple bound and search algorithm
70. VLDB 2019 Speedup Your Analytics: Automatic Parameter Tuning for Databases and Big Data Systems 70
Gounaris (2018) Experiment-driven Approach
➢ Offline:
❖ Use benchmarking applications (sort-by-key, shuffling, k-means)
❖ Test parameter values separately and in pairs (117 runs for 15 parameters)
❖ Create candidate configurations (9 complex parameter configurations)
➢ Online:
❖ Test all candidate configurations on the Spark application and keep the best one
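The offline step can be sketched as run-list generation: each parameter's values are tested separately with everything else at defaults, then selected pairs are crossed. The parameter names and values below are placeholders; the exact 117-run design for 15 parameters is specific to the paper.

```python
from itertools import product

# Hypothetical parameters with default and candidate values.
DEFAULTS = {"p1": 0, "p2": 0, "p3": 0}
CANDIDATES = {"p1": [1, 2], "p2": [1, 2, 3], "p3": [1]}

def single_param_runs():
    # One run per candidate value, all other parameters at defaults.
    for p, values in CANDIDATES.items():
        for v in values:
            yield dict(DEFAULTS, **{p: v})

def pairwise_runs(pairs):
    # Cross the candidate values of selected parameter pairs.
    for a, b in pairs:
        for va, vb in product(CANDIDATES[a], CANDIDATES[b]):
            yield dict(DEFAULTS, **{a: va, b: vb})

runs = list(single_param_runs()) + list(pairwise_runs([("p1", "p2")]))
# Each run would be executed offline with a benchmarking application;
# the best-performing combinations become the candidate configurations.
```

Here the toy design yields 6 single-parameter runs plus 6 pairwise runs; the real design trades run count against coverage of parameter interactions.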
Experiment-driven Approach: Pros & Cons
➢ Pros:
❖ Finds good settings based on real test runs on real systems
❖ Works across different system versions and hardware
➢ Cons:
❖ Very time consuming, as it requires multiple actual runs
❖ Not cost effective for ad-hoc analytics applications
Machine Learning (ML) Approaches
➢ Three categories of ML approaches:
1) Build a historical store and use similarity measures
2) Perform clustering and ML modeling per cluster
3) Train and utilize a ML model per application
[Diagram] Offline phase: historical data → feature selection → modeling. Online phase: for a new job, the model output drives prediction and optimization to find the best configuration.
ML Approaches with Historical Stores
➢ Kavulya (2010)
❖ Offline: execute MR jobs, collect data, and store <Features, Execution time> pairs
❖ Online: for a new MR job, retrieve similar jobs and their execution times via k-Nearest-Neighbors, then predict the execution time with locally-weighted linear regression
➢ PStorM (2014)
❖ Offline: execute MR jobs, collect data, and store <Features, Job profile> pairs
❖ Online: a sample execution of the new MR job yields its features; XGBoost probing builds the new job profile, which the Starfish Optimizer uses to find the optimal configuration
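The Kavulya-style online phase can be sketched as a k-NN lookup over the historical store. The store contents are invented, and a distance-weighted average stands in for the locally-weighted linear regression used in the actual work.

```python
import math

# Hypothetical historical store: (job feature vector, execution time in s).
STORE = [
    ((10.0, 2.0, 1.0), 120.0),
    ((12.0, 2.5, 1.0), 135.0),
    ((50.0, 8.0, 4.0), 610.0),
    ((55.0, 9.0, 4.0), 650.0),
]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict_runtime(job_features, k=3):
    """Find the k most similar stored jobs and predict the new job's
    execution time as a distance-weighted average of their times."""
    neighbors = sorted(STORE, key=lambda rec: distance(rec[0], job_features))[:k]
    weights = [1.0 / (distance(f, job_features) + 1e-9) for f, _ in neighbors]
    return sum(w * t for w, (_, t) in zip(weights, neighbors)) / sum(weights)

pred = predict_runtime((11.0, 2.2, 1.0))
```

A job close to the two small jobs in the store gets a prediction near their 120-135 s range, dominated by the nearest neighbors' weights.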
ML Approaches with Clustering
➢ AROMA (2012)
❖ Offline: cluster job utilization data with k-medoid clustering and train a Support Vector Machine (SVM) model per cluster
❖ Online: a sample execution of the new MR job yields its utilization data; find its cluster and use that cluster's SVM model with a Pattern Search algorithm to determine the optimal resources & configuration
➢ PPABS (2013)
❖ Offline: cluster job utilization data with k-means++ clustering and compute the optimal configuration per cluster via Simulated Annealing
❖ Online: a sample execution of the new MR job yields its utilization data; find its cluster and apply that cluster's precomputed optimal configuration
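The clustering-based online phase reduces to nearest-centroid assignment (the PPABS-style flow). The centroids, feature dimensions, and per-cluster configurations below are invented for illustration.

```python
import math

# Hypothetical cluster centroids of job utilization vectors
# (CPU %, IO MB/s, shuffle MB), each paired with the optimal
# configuration found offline for that cluster.
CLUSTERS = {
    "cpu-bound": ((90.0, 20.0, 50.0), {"mapreduce.job.reduces": 32}),
    "io-bound": ((30.0, 200.0, 400.0), {"mapreduce.task.io.sort.mb": 400}),
}

def find_cluster(utilization):
    # Assign the job to the cluster with the nearest centroid.
    return min(CLUSTERS, key=lambda c: math.dist(CLUSTERS[c][0], utilization))

def tune(new_job_utilization):
    """Online phase: a sample run yields the utilization vector, the job
    is assigned to its nearest cluster, and that cluster's precomputed
    configuration is applied."""
    return CLUSTERS[find_cluster(new_job_utilization)][1]
```

A sample run showing high CPU and little IO would map to the "cpu-bound" cluster and receive its stored configuration, avoiding any per-job search online.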
ML Approaches with App Modeling
➢ Offline phase: sample executions of the MR job / Spark application produce training data used to train a machine learning model
➢ Online phase: the trained model predicts execution times while a search algorithm explores configurations to find the optimal one
➢ Systems (ML model + search algorithm):
❖ Chen (2015): Tree-based Regression + Random Hill Climbing
❖ Guolu (2016): Decision Tree (C5.0) + Recursive Random Search
❖ RFHOC (2016): Random Forest + Genetic Algorithm
❖ Hernandez (2017): Boosted Regression Trees + heuristic algorithm
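The model-plus-search online phase can be sketched as follows. A toy analytic function stands in for the trained per-application model (e.g., the random forest in RFHOC), and plain random search stands in for the various search algorithms; parameter names and values are invented.

```python
import random

def predicted_runtime(conf):
    """Stand-in for a trained per-application model; here a toy analytic
    function with a known optimum so the sketch is self-contained."""
    return abs(conf["sort_mb"] - 200) + abs(conf["reduces"] - 16) * 2

SPACE = {"sort_mb": range(50, 451, 50), "reduces": [4, 8, 16, 32, 64]}

def random_search(n=200, seed=0):
    # Model calls are cheap compared to real runs, so many probes are fine.
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(n):
        conf = {k: rng.choice(list(v)) for k, v in SPACE.items()}
        cost = predicted_runtime(conf)
        if cost < best_cost:
            best, best_cost = conf, cost
    return best

best = random_search()
```

This is the key economy of the approach: once the model is trained, thousands of candidate configurations can be evaluated without touching the cluster.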
Machine Learning Approach: Pros & Cons
➢ Pros:
❖ Ability to capture complex system dynamics
❖ Independence from system internals and hardware
❖ Learning based on real observations of system performance
➢ Cons:
❖ Requires large training sets, which are expensive to collect
❖ Training from history logs leads to data under-fitting
❖ Typically low accuracy for unseen analytics applications
Adaptive Approach
➢ Key idea: track the execution of a job and change its configuration in an online fashion in order to improve performance
[Diagram] A MapReduce job runs as successive map waves (Waves 1-3), a shuffle, and reduce waves (Waves 1-2). Between waves:
1. Collect statistics from the previous wave(s)
2. Set better configurations for the next wave
Adaptive tuning systems (online algorithm in parentheses):
➢ MROnline (2014): gray-box hill climbing
➢ Ant (2014): Genetic Algorithm
➢ JellyFish (2015): model-based hill climbing
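The wave-to-wave loop can be sketched as hill climbing on a single parameter, roughly the MROnline idea. The parameter name, step size, and `wave_stats_runtime` are stand-ins; in a real system the runtime comes from measured statistics of the completed wave.

```python
def wave_stats_runtime(conf):
    """Placeholder: in a real system this is the measured duration of the
    last map/reduce wave under the given configuration (here, a toy
    quadratic with its optimum at io_sort_mb = 300)."""
    return (conf["io_sort_mb"] - 300) ** 2 / 100 + 40

def hill_climb_step(conf, step=50):
    """After each wave, try neighboring values of a parameter and keep
    the direction that improved the measured wave runtime."""
    current = wave_stats_runtime(conf)
    for delta in (+step, -step):
        cand = dict(conf, io_sort_mb=conf["io_sort_mb"] + delta)
        if wave_stats_runtime(cand) < current:
            return cand
    return conf  # no neighbor improves: keep the current setting

conf = {"io_sort_mb": 100}
for _ in range(6):  # one adjustment per completed wave
    conf = hill_climb_step(conf)
```

Starting from 100, the climber moves in 50-unit steps toward the optimum and then stays put, which mirrors how adaptive tuners converge only if the job runs long enough to provide several waves of feedback.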
The KERMIT (2016) Approach
[Diagram] KERMIT sits alongside the YARN Resource Manager on the master node. Node Managers on the slave nodes host containers running Spark and MapReduce applications, which send resource requests to the Resource Manager. KERMIT then:
1. Observes container performance
2. Adjusts memory & CPU allocations to maximize performance
Adaptive Approach: Pros & Cons
➢ Pros:
❖ Finds good settings based on actual task runs
❖ Able to adjust to dynamic runtime status
❖ Works well for ad-hoc analytics applications
➢ Cons:
❖ Only applies to long-running analytics applications
❖ Inappropriate configuration can cause issues (e.g., stragglers)
❖ Neglects efficient resource utilization in the whole system
Outline
Motivation and Background
History and Classification
Parameter Tuning on Databases
Parameter Tuning on Big Data Systems
Applications of Automatic Parameter Tuning
Open Challenges and Discussion
Auto Parameter Tuning in Database Systems
➢ Oracle Self-driving Database
❖ Automatically sets various memory parameters and the use of compression, using machine learning
➢ IBM DB2 Self-tuning Memory Manager
❖ Dynamically distributes available memory resources among
buffer pools, locking memory, package cache, and sort memory
➢ Azure SQL Database Automatic Tuning
❖ Memory buffer settings, index management, plan choice
correction
Auto Parameter Tuning in Big Data Systems
➢ Databricks Optimized Autoscaling
❖ Automatically scale number of executors in Spark up and
down
➢ Spotfire Data Science Autotuning
❖ Automatically set Spark parameters for number and
memory size of executors
➢ Sparklens: Qubole’s Spark Tuning Tool
❖ Automatically set memory of Spark executors
Auto Parameter Tuning with Unravel
[Screenshot] Unravel recommends concrete parameter settings for performance, e.g.:
❖ spark.driver.cores = 2
❖ spark.executor.cores = 10
❖ …
❖ spark.sql.shuffle.partitions = 300
❖ spark.sql.autoBroadcastJoinThreshold = 20MB
❖ …
❖ SKEW('orders', 'o_custId') = true
❖ spark.catalog.cacheTable("orders") = true
❖ …
Today, tuning is often by trial-and-error.
A New World
INPUTS
1. App = Spark Query
2. Goal = Speedup
“I need to make this app faster”
[Timeline: app duration vs. time] Recommendations improve as tuning proceeds:
➢ In the blink of an eye, the user gets recommendations to make the app 30% faster
➢ As the user finishes checking email, she has a verified run that is 60% faster
➢ When the user comes back from lunch, there is a verified run that is 90% faster
A New World
[Figure] Techniques combined: Reinforcement Learning and Response Surface Methodology
Autotuning Workflow
[Diagram] Given an (App, Goal) pair, the orchestrator asks the probe algorithm for the next configuration X_next to try on the cluster services (on-premises and cloud). Monitoring data from each run is added to the historic & probe data, which the recommendation algorithm uses to produce tuning recommendations.
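The workflow can be sketched as a probe-and-recommend loop. All names here (`probe_algorithm`, `run_on_cluster`, the parameter space, the cost function) are placeholders; a real probe algorithm would use designs such as response surface methodology or a reinforcement learning policy rather than uniform random probing.

```python
import random

def probe_algorithm(history, space, rng):
    """Pick the next configuration X_next to try. Random probing stands in
    for smarter designs; `history` would inform those choices."""
    return {k: rng.choice(v) for k, v in space.items()}

def run_on_cluster(conf):
    """Placeholder for executing the app with conf and measuring duration."""
    return 100 + abs(conf["executors"] - 8) * 10

def orchestrate(space, budget=10, seed=1):
    rng = random.Random(seed)
    history = []  # historic data + probe data
    for _ in range(budget):
        x_next = probe_algorithm(history, space, rng)
        history.append((x_next, run_on_cluster(x_next)))
    # Recommendation algorithm: return the best observed configuration.
    return min(history, key=lambda rec: rec[1])[0]

best = orchestrate({"executors": [2, 4, 8, 16], "cores": [1, 2, 4]})
```

The budget caps how many verified runs the user pays for, which is what lets recommendations improve incrementally (30%, 60%, 90% faster) as more probe data accumulates.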
Outline
Motivation and Background
History and Classification
Parameter Tuning on Databases
Parameter Tuning on Big Data Systems
Applications of Automatic Parameter Tuning
Open Challenges and Discussion
Putting it all Together
➢ Cost Modeling
❖ Pros: Very efficient for predicting performance; good accuracy in many (not complex) scenarios
❖ Cons: Hard to capture complexity of system internals & pluggable components (e.g., schedulers); models often based on simplified assumptions; not effective on heterogeneous clusters
➢ Simulation-based
❖ Pros: High accuracy in simulating dynamic system behaviors; efficient for predicting fine-grained performance
❖ Cons: Hard to comprehensively simulate complex internal dynamics; unable to capture dynamic cluster utilization; not very efficient for finding optimal settings
➢ Experiment-driven
❖ Pros: Finds good settings based on real test runs on real systems; works across different system versions and hardware
❖ Cons: Very time consuming, as it requires multiple actual runs; not cost effective for ad-hoc analytics applications
➢ Machine Learning
❖ Pros: Ability to capture complex system dynamics; independence from system internals and hardware; learning based on real observations of system performance
❖ Cons: Requires large training sets, which are expensive to collect; training from history logs leads to data under-fitting; typically low accuracy for unseen analytics applications
➢ Adaptive
❖ Pros: Finds good settings based on actual task runs; able to adjust to dynamic runtime status; works well for ad-hoc analytics applications
❖ Cons: Only applies to long-running analytics applications; inappropriate configuration can cause issues (e.g., stragglers); neglects efficient resource utilization in the whole system
Open Challenges
➢ Clusters are becoming heterogeneous in nature, both for compute and storage
➢ The proliferation of the Cloud leads to multi-tenancy, overheads, and performance-interaction issues
➢ Real-time analytics pushes boundaries on latency requirements and on combinations of systems
➢ Overall, ensuring good and robust system performance at scale poses new challenges
References
➢ Shivnath Babu. Towards Automatic Optimization of MapReduce Programs. In Proceedings of the 1st ACM
Symposium on Cloud Computing (SoCC), pages 137–142. ACM, 2010.
➢ Shivnath Babu, Herodotos Herodotou, et al. Massively Parallel Databases and MapReduce Systems.
Foundations and Trends® in Databases, 5(1):1–104, 2013.
➢ Liang Bao, Xin Liu, and Weizhao Chen. Learning-based Automatic Parameter Tuning for Big Data Analytics
Frameworks. arXiv preprint arXiv:1808.06008, 2018.
➢ Zhendong Bei, Zhibin Yu, Huiling Zhang, Wen Xiong, Chengzhong Xu, Lieven Eeckhout, and Shengzhong Feng.
RFHOC: A Random-Forest Approach to Auto-Tuning Hadoop’s Configuration. IEEE Transactions on Parallel and
Distributed Systems (TPDS), 27(5):1470–1483, 2016.
➢ Chi-Ou Chen, Ye-Qi Zhuo, Chao-Chun Yeh, Che-Min Lin, and Shih-Wei Liao. Machine Learning-Based
Configuration Parameter Tuning on Hadoop System. In Proceedings of the 2015 IEEE International Congress
on Big Data (BigData Congress), pages 386–392. IEEE, 2015.
➢ Keke Chen, James Powers, Shumin Guo, and Fengguang Tian. CRESP: Towards Optimal Resource Provisioning
for MapReduce Computing in Public Clouds. IEEE Transactions on Parallel and Distributed Systems (TPDS),
25(6):1403–1412, 2014.
➢ Dazhao Cheng, Jia Rao, Yanfei Guo, and Xiaobo Zhou. Improving MapReduce Performance in Heterogeneous
Environments with Adaptive Task Tuning. In Proceedings of the 15th International Middleware Conference,
pages 97–108. ACM, 2014.
➢ Jeffrey Dean and Sanjay Ghemawat. MapReduce: Simplified Data Processing on Large Clusters.
Communications of the ACM, 51(1):107–113, 2008.
➢ Xiaoan Ding, Yi Liu, and Depei Qian. Jellyfish: Online Performance Tuning with Adaptive Configuration and
Elastic Container in Hadoop YARN. In Proceedings of the 2015 IEEE 21st International Conference on Parallel
and Distributed Systems (ICPADS), pages 831–836. IEEE, 2015.
➢ Mostafa Ead, Herodotos Herodotou, Ashraf Aboulnaga, and Shivnath Babu. PStorM: Profile Storage and
Matching for Feedback-Based Tuning of MapReduce Jobs. In Proceedings of the 17th International
Conference on Extending Database Technology (EDBT), pages 1–12. OpenProceedings, March 2014.
➢ Mikhail Genkin, Frank Dehne, Maria Pospelova, Yabing Chen, and Pablo Navarro. Automatic, On-Line Tuning
of YARN Container Memory and CPU Parameters. In Proceedings of the 2016 IEEE 18th International
Conference on High Performance Computing and Communications (HPCC), pages 317–324. IEEE, 2016.
➢ Anastasios Gounaris, Georgia Kougka, Ruben Tous, Carlos Tripiana Montes, and Jordi Torres. Dynamic
Configuration of Partitioning in Spark Applications. IEEE Transactions on Parallel and Distributed Systems
(TPDS), 28(7):1891–1904, 2017.
➢ Anastasios Gounaris and Jordi Torres. A Methodology for Spark Parameter Tuning. Big Data Research, 11:22–
32, March 2017.
➢ Suhel Hammoud, Maozhen Li, Yang Liu, Nasullah Khalid Alham, and Zelong Liu. MRSim: A Discrete Event
Based MapReduce Simulator. In Proceedings of the Seventh International Conference on Fuzzy Systems and
Knowledge Discovery (FSKD), volume 6, pages 2993–2997. IEEE, 2010.
➢ Álvaro Brandón Hernández, María S Perez, Smrati Gupta, and Victor Muntés-Mulero. Using Machine Learning
to Optimize Parallelism in Big Data Applications. Future Generation Computer Systems, pages 1–12, 2017.
➢ Herodotos Herodotou and Shivnath Babu. Profiling, What-if Analysis, and Cost-based Optimization of
MapReduce Programs. Proceedings of the VLDB Endowment, 4(11):1111–1122, 2011.
➢ Herodotos Herodotou and Shivnath Babu. A What-if Engine for Cost-based MapReduce Optimization. IEEE
Data Engineering Bulletin, 36(1):5–14, 2013.
➢ Herodotos Herodotou, Fei Dong, and Shivnath Babu. No One (Cluster) Size Fits All: Automatic Cluster Sizing
for Data-intensive Analytics. In Proceedings of the 2nd ACM Symposium on Cloud Computing (SoCC). ACM,
2011.
➢ Herodotos Herodotou, Harold Lim, Gang Luo, Nedyalko Borisov, Liang Dong, Fatma Bilgen Cetin, and Shivnath
Babu. Starfish: A Self-tuning System for Big Data Analytics. In Proceedings of the Fifth Biennial Conference on
Innovative Data Systems Research (CIDR), volume 11, pages 261–272, 2011.
➢ Soila Kavulya, Jiaqi Tan, Rajeev Gandhi, and Priya Narasimhan. An Analysis of Traces from a Production
MapReduce Cluster. In Proceedings of the 2010 10th IEEE/ACM International Conference on Cluster, Cloud
and Grid Computing, pages 94–103. IEEE Computer Society, 2010.
➢ Mukhtaj Khan, Yong Jin, Maozhen Li, Yang Xiang, and Changjun Jiang. Hadoop Performance Modeling for Job
Estimation and Resource Provisioning. IEEE Transactions on Parallel and Distributed Systems (TPDS),
27(2):441–454, 2016.
➢ Palden Lama and Xiaobo Zhou. AROMA: Automated Resource Allocation and Configuration of MapReduce
Environment in the Cloud. In Proceedings of the 9th international conference on Autonomic computing
(ICAC), pages 63–72. ACM, 2012.
➢ Min Li, Liangzhao Zeng, Shicong Meng, Jian Tan, Li Zhang, Ali R Butt, and Nicholas Fuller. MROnline:
MapReduce Online Performance Tuning. In Proceedings of the 23rd International Symposium on High-
performance Parallel and Distributed Computing (HPDC), pages 165–176. ACM, 2014.
➢ Guangdeng Liao, Kushal Datta, and Theodore L Willke. Gunther: Search-based Auto-tuning of MapReduce. In
Proceedings of the European Conference on Parallel Processing (Euro-Par), pages 406–419. Springer, 2013.
➢ Chao Liu, Deze Zeng, Hong Yao, Chengyu Hu, Xuesong Yan, and Yuanyuan Fan. MR-COF: A Genetic
MapReduce Configuration Optimization Framework. In Proceedings of the International Conference on
Algorithms and Architectures for Parallel Processing (ICA3PP), pages 344–357. Springer, 2015.
➢ Jun Liu, Nishkam Ravi, Srimat Chakradhar, and Mahmut Kandemir. Panacea: Towards Holistic Optimization of
MapReduce Applications. In Proceedings of the Tenth International Symposium on Code Generation and
Optimization (CGO), pages 33–43. ACM, 2012.
➢ Yang Liu, Maozhen Li, Nasullah Khalid Alham, and Suhel Hammoud. HSim: A MapReduce Simulator in Enabling
Cloud Computing. Future Generation Computer Systems, 29(1):300–308, 2013.
➢ Mumak: Map-Reduce Simulator, 2010. https://issues.apache.org/jira/browse/MAPREDUCE-728.
➢ Panagiotis Petridis, Anastasios Gounaris, and Jordi Torres. Spark Parameter Tuning via Trial-and-Error. In
Proceedings of the INNS Conference on Big Data, pages 226–237. Springer, 2016.
➢ Juwei Shi, Jia Zou, Jiaheng Lu, Zhao Cao, Shiqiang Li, and Chen Wang. MRTuner: a Toolkit to Enable Holistic
Optimization for MapReduce Jobs. Proceedings of the VLDB Endowment, 7(13):1319–1330, 2014.
➢ Rekha Singhal and Praveen Singh. Performance Assurance Model for Applications on SPARK Platform. In
Proceedings of the Technology Conference on Performance Evaluation and Benchmarking (TPCTC), pages
131–146. Springer, 2017.
➢ Fei Teng, Lei Yu, and Frederic Magoulès. SimMapReduce: A Simulator for Modeling MapReduce Framework.
In Proceedings of the 5th FTRA International Conference on Multimedia and Ubiquitous Engineering (MUE),
pages 277–282. IEEE, 2011.
➢ Vinod Kumar Vavilapalli, Arun C Murthy, Chris Douglas, Sharad Agarwal, Mahadev Konar, Robert Evans,
Thomas Graves, Jason Lowe, Hitesh Shah, Siddharth Seth, et al. Apache Hadoop YARN: Yet Another Resource
Negotiator. In Proceedings of the 4th annual Symposium on Cloud Computing (SoCC), page 5. ACM, 2013.
➢ Shivaram Venkataraman, Zongheng Yang, Michael J Franklin, Benjamin Recht, and Ion Stoica. Ernest: Efficient
Performance Prediction for Large-Scale Advanced Analytics. In Proceedings of the 13th USENIX Symposium on
Networked Systems Design and Implementation (NSDI), pages 363–378. USENIX Association, 2016.
➢ Abhishek Verma, Ludmila Cherkasova, and Roy H. Campbell. ARIA: Automatic Resource Inference and
Allocation for MapReduce Environments. In Proceedings of the 8th ACM International Conference on
Autonomic Computing (ICAC), ICAC ’11, pages 235–244, New York, NY, USA, 2011. ACM.
➢ Abhishek Verma, Ludmila Cherkasova, and Roy H Campbell. Play it Again, SimMR! In Proceedings of the 2011
IEEE International Conference on Cluster Computing (CLUSTER), pages 253–261. IEEE, 2011.
➢ Abhishek Verma, Ludmila Cherkasova, and Roy H. Campbell. Resource Provisioning Framework for MapReduce
Jobs with Performance Goals. In Proceedings of the ACM/IFIP/USENIX 12th International Middleware
Conference, pages 165–186. Springer, 2011.
➢ Guanying Wang, Ali R Butt, Prashant Pandey, and Karan Gupta. A Simulation Approach to Evaluating Design
Decisions in MapReduce Setups. In Proceedings of the 2009 IEEE International Symposium on Modeling,
Analysis & Simulation of Computer and Telecommunication Systems (MASCOTS), pages 1–11. IEEE, 2009.
➢ Guolu Wang, Jungang Xu, and Ben He. A Novel Method for Tuning Configuration Parameters of Spark based on
Machine Learning. In Proceedings of the 2016 IEEE 18th International Conference on High Performance
Computing and Communications (HPCC), pages 586–593. IEEE, 2016.
➢ Kewen Wang, Xuelian Lin, and Wenzhong Tang. Predator - An Experience Guided Configuration Optimizer for
Hadoop MapReduce. In Proceedings of the IEEE 4th International Conference on Cloud Computing Technology
and Science (CloudCom), pages 419–426. IEEE, 2012.
➢ Dili Wu and Aniruddha Gokhale. A Self-tuning System based on Application Profiling and Performance Analysis
for Optimizing Hadoop MapReduce Cluster Configuration. In Proceedings of the 20th International Conference
on High Performance Computing (HiPC), pages 89–98. IEEE, 2013.