This document summarizes several performance improvements in MySQL 5.6 including index condition pushdown, multi-range read, batched key access, and persistent optimizer statistics. It provides examples of query execution with and without these optimizations, showing significant performance gains in MySQL 5.6 for queries that push index conditions down to the storage engine, read multiple index ranges sequentially, and access keys in batches for joins.
New features in Performance Schema 5.7 in action - Sveta Smirnova
The document discusses new features in Performance Schema 5.7, including improved instrumentation for locks, memory usage, stored routines, prepared statements, and variables. It provides examples of using Performance Schema tables like METADATA_LOCKS, TABLE_HANDLES, and prepared_statements_instances to diagnose issues such as locks preventing DDL statements from completing and inconsistently timed stored procedure executions. It also suggests practices for identifying memory usage and optimizing prepared statement performance.
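The metadata-lock diagnosis described above can be sketched with a query like the following. This is a minimal sketch, assuming MySQL 5.7+ with the `wait/lock/metadata/sql/mdl` instrument enabled; the table and column names are from the Performance Schema, but the scenario (a stalled ALTER TABLE) is hypothetical.

```sql
-- Enable metadata-lock instrumentation (disabled by default in 5.7)
UPDATE performance_schema.setup_instruments
SET enabled = 'YES'
WHERE name = 'wait/lock/metadata/sql/mdl';

-- When an ALTER TABLE hangs, list who holds or waits for MDL on tables:
-- a PENDING row is the blocked DDL; GRANTED rows on the same object
-- identify the blocking sessions (often an open transaction).
SELECT object_schema, object_name, lock_type, lock_status, owner_thread_id
FROM performance_schema.metadata_locks
WHERE object_type = 'TABLE'
ORDER BY object_schema, object_name, lock_status;
```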
This document provides an agenda and overview for a MySQL Query Tuning 101 presentation. The summary includes:
1. The agenda covers topics like identifying slow queries, using indexes, the EXPLAIN tool, and other optimization techniques.
2. When queries run slow, the presenter will discuss using indexes to improve performance by allowing MySQL to access data more efficiently.
3. The EXPLAIN tool is covered as a way to estimate query execution and see how MySQL utilizes indexes. Different EXPLAIN output will be demonstrated using examples from an employees database.
Percona Live 2016 (https://www.percona.com/live/data-performance-conference-2016/sessions/why-use-explain-formatjson). Although EXPLAIN FORMAT=JSON was first presented a long time ago, there still aren't many resources that explain how and why to use it. The most advertised feature is visual EXPLAIN in MySQL Workbench, but this format can do more than create nice pictures. It prints additional information that can't be found in good old tabular EXPLAIN, and can help to solve many tricky performance issues. In this session, I will not only describe which additional information we can get with the new syntax, but also provide examples showing how to use it to diagnose production issues.
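As a sketch of the extra detail the session refers to, the query below (assuming the standard `employees` sample database) shows the JSON syntax; fields such as `attached_condition`, `used_key_parts`, and per-table cost information appear in this output but not in tabular EXPLAIN.

```sql
-- FORMAT=JSON prints a nested plan with details tabular EXPLAIN omits
EXPLAIN FORMAT=JSON
SELECT emp_no, first_name, last_name
FROM employees
WHERE hire_date > '1999-01-01'\G
```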
This document discusses troubleshooting MySQL performance issues. It begins with an overview of MySQL server architecture and important components like the optimizer and storage engines. It then covers various diagnostic instruments for troubleshooting like log files, the Information Schema, and the Performance Schema. Specific issues covered include single statement performance using EXPLAIN, internal concurrency issues detected via locks diagnostics from sources like SHOW PROCESSLIST and the Performance Schema. The document provides examples of using these diagnostic tools to analyze and optimize query performance.
Basic MySQL Troubleshooting for Oracle Database Administrators - Sveta Smirnova
This document provides an overview of basic MySQL troubleshooting techniques for Oracle database administrators. It covers MySQL server architecture including connectors, clients, APIs, storage engines, and plugins. It then discusses basic troubleshooting techniques such as error processing, access privileges, and using system variables, performance schema, and EXPLAIN to analyze query execution plans. The document is intended to help Oracle DBAs understand fundamental aspects of MySQL administration.
This document provides a summary of a presentation on practical MySQL tuning. It discusses measuring critical system resources like CPU, memory, I/O and network usage to identify bottlenecks. It also covers rough tuning of MySQL parameters like the InnoDB buffer pool size, log file size and key buffer size. Further tuning includes application optimizations like query tuning with EXPLAIN, index tuning, and schema design. The presentation also discusses scaling MySQL through approaches like caching, sharding, replication and optimizing architecture and data distribution. Regular performance monitoring, together with simulating increased load, is emphasized as an aid to capacity planning.
Performance Schema for MySQL Troubleshooting - Sveta Smirnova
The Performance Schema in MySQL provides tables and instruments for troubleshooting issues like locks, I/O bottlenecks, slow queries, memory usage, and replication failures. It contains over 500 instruments in MySQL 5.6 and over 800 in 5.7. The tables provide visibility into the internal workings of MySQL to analyze and optimize performance.
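A typical starting point for the slow-query analysis mentioned above is the statement digest summary table. This is a sketch using real Performance Schema table and column names; note that timer columns are in picoseconds, hence the division.

```sql
-- Top 5 statement digests by total execution time
SELECT digest_text,
       count_star,
       sum_timer_wait / 1e12 AS total_seconds
FROM performance_schema.events_statements_summary_by_digest
ORDER BY sum_timer_wait DESC
LIMIT 5;
```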
MySQL is a relational database management system. It provides tools for managing data, including creating, querying, updating and deleting data in databases. Some key features include:
- Creating, altering and dropping databases, tables, indexes, users and more.
- Inserting, selecting, updating and deleting data with SQL statements.
- Backup and restore capabilities using mysqldump to backup entire databases or tables.
- Security features including user accounts and privileges to control access.
- Performance optimization using indexes, partitioning, query tuning and more.
- Data types for different kinds of data like numbers, dates, text, JSON and more.
Performance Schema for MySQL Troubleshooting - Sveta Smirnova
The Performance Schema provides detailed information for troubleshooting and optimizing MySQL. It collects instrumentation data on server operations, statements, memory usage, locks and connections. The data can be used to identify slow queries, statements not using indexes, memory consumption trends over time, and more. Configuration and enabling specific instruments allows controlling the level of detail collected.
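For the "statements not using indexes" case mentioned above, the digest summary table keeps a per-digest counter. A minimal sketch against the real table:

```sql
-- Statement digests that ran without using any index
SELECT digest_text, count_star, sum_no_index_used
FROM performance_schema.events_statements_summary_by_digest
WHERE sum_no_index_used > 0
ORDER BY sum_no_index_used DESC
LIMIT 5;
```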
Introducing new SQL syntax and improving performance with preparse Query Rewr... - Sveta Smirnova
This document discusses a new preparse query rewrite plugin for MySQL that allows adding new SQL syntax like the FILTER clause from SQL:2003. The plugin works by catching the query before parsing and rewriting parts using regular expressions. It also describes extending the plugin to support custom optimizer hints by modifying and restoring thread-specific variable values.
Modern query optimisation features in MySQL 8 - Mydbops
The slides cover MySQL 8 (a huge leap forward): its indexing capabilities, execution plan enhancements, optimizer improvements, and many other recent query optimization features.
MySQL 5.6 introduces several new query optimization features over MySQL 5.5, including:
1) Filesort optimization for queries with a filesort but a short LIMIT, improving performance by more than 2x in one example.
2) Index Condition Pushdown, which pushes conditions from the WHERE clause down into the index evaluation, making one example query more than 5x faster by reducing the number of rows accessed.
3) Other optimizations such as Multi-Range Read, which improves performance of queries that access multiple ranges or indexes in a single query. The document provides examples comparing execution plans and performance between MySQL 5.5 and 5.6 to demonstrate the benefits of the new optimization features.
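The Index Condition Pushdown behavior described above can be sketched as follows. This assumes a hypothetical `employees` table with a composite index on `(last_name, first_name)`; the exact speedup depends on data distribution.

```sql
-- The last_name prefix bounds the index range scan; the first_name
-- condition cannot, but with ICP (MySQL 5.6+) it is evaluated inside
-- the storage engine before full rows are fetched from the table.
EXPLAIN SELECT *
FROM employees
WHERE last_name LIKE 'Smi%'
  AND first_name LIKE '%a';
-- When ICP applies, the Extra column shows "Using index condition"
```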
The document discusses how the PostgreSQL query planner works. It explains that a query goes through several stages including parsing, rewriting, planning/optimizing, and execution. The optimizer or planner has to estimate things like the number of rows and cost to determine the most efficient query plan. Statistics collected by ANALYZE are used for these estimates but can sometimes be inaccurate, especially for n_distinct values. Increasing the default_statistics_target or overriding statistics on columns can help address underestimation issues. The document also discusses different plan types like joins, scans, and aggregates that the planner may choose between.
Moving to the NoSQL side: MySQL JSON functions - Sveta Smirnova
This document provides an overview of JSON functions that have been added to MySQL to support NoSQL and manipulation of JSON documents. It discusses the history and improvements of the JSON functions, provides examples of how various functions such as json_valid, json_contains_key, json_extract, json_append, json_replace, json_set, json_remove, json_search, json_merge, json_depth, and json_count work, and describes how to install and compile the JSON functions. The JSON functions allow users to validate, search, modify, and work with JSON documents directly in MySQL.
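For illustration, here is what a few of these operations look like using the native JSON functions that later shipped in MySQL 5.7; note the earlier labs UDFs described in the document use slightly different names and signatures.

```sql
SELECT JSON_VALID('{"name": "Ada"}');                             -- 1
SELECT JSON_EXTRACT('{"user": {"name": "Ada"}}', '$.user.name');  -- "Ada"
SELECT JSON_SET('{"a": 1}', '$.b', 2);                            -- {"a": 1, "b": 2}
SELECT JSON_SEARCH('["x", "y"]', 'one', 'y');                     -- "$[1]"
```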
The document discusses improvements to the MariaDB query optimizer. It notes that while MySQL is widely used for web applications and OLTP, it is not well-suited for complex analytics queries on large datasets due to issues with disk access strategies and subquery optimizations. MariaDB 5.3 includes multi-range read and batched key access features that improve disk access, reducing query times by over 10x on benchmark tests. It also includes many additional subquery optimization strategies beyond those in earlier MySQL versions.
The optimizer trace provides a detailed log of the actions taken by the query optimizer. It traces the major stages of query optimization including join preparation, join optimization, and join execution. During join optimization, it records steps like condition processing, determining table dependencies, estimating rows for plans, considering different execution plans, and choosing the best join order. The trace helps understand why certain query plans are chosen and catch differences in plans that may occur due to factors like database version changes.
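Obtaining such a trace is a short session-level exercise. A minimal sketch (the SELECT being traced is a placeholder query against a hypothetical `employees` table):

```sql
-- Enable tracing for this session, run the statement of interest,
-- then read the trace from INFORMATION_SCHEMA
SET optimizer_trace = 'enabled=on';
SELECT * FROM employees WHERE emp_no = 10001;
SELECT trace FROM information_schema.optimizer_trace\G
SET optimizer_trace = 'enabled=off';
```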
Using PostgreSQL statistics to optimize performance - Alexey Ermakov
The document discusses using statistics in PostgreSQL to optimize performance. It describes how the planner estimates row counts in tables and selectivity of query conditions. Default estimators are used if no statistics are collected. Statistics are gathered on tables and indexes to estimate selectivity. Partial indexes can be useful when not all values need to be indexed. Monitoring and diagnosing performance issues is also covered.
New features in Performance Schema 5.7 in action - Sveta Smirnova
New features in Performance Schema 5.7 in action provides an overview of Performance Schema improvements in MySQL 5.7 and 8.0 including new tables, instruments, and variables. It demonstrates how to use Performance Schema to diagnose locks, memory usage, stored routines, and prepared statements. Examples show identifying blocking locks, measuring memory usage by thread, and instrumentation of stored procedure execution and prepared statement statistics.
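The per-thread memory measurement mentioned above can be sketched with the 5.7 memory summary tables. Note that most `memory/%` instruments are disabled by default, so only instrumented allocations appear.

```sql
-- Threads ranked by currently allocated instrumented memory
SELECT thread_id,
       SUM(current_number_of_bytes_used) AS bytes_used
FROM performance_schema.memory_summary_by_thread_by_event_name
GROUP BY thread_id
ORDER BY bytes_used DESC
LIMIT 5;
```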
This presentation discusses troubleshooting MySQL performance issues. It begins by explaining how to determine if MySQL is the source of slow performance by measuring query response times. The presentation then covers various aspects to check like queries, server options, hardware resources, and replication setup. Specific diagnostic tools are also introduced like EXPLAIN, slow query log, and various system monitoring commands. Finally, it recommends following a structured process of tuning queries, options, and hardware to isolate and resolve any performance problems.
Query Optimization with MySQL 5.6: Old and New Tricks - Percona Live London 2013 - Jaime Crespo
Tutorial delivered at Percona MySQL Conference Live London 2013.
It doesn't matter what new SSD technologies appear, or what the latest breakthroughs in flushing algorithms are: the number one cause of MySQL applications being slow is poor execution plans for SQL queries. While the latest GA version provided a huge amount of transparent optimizations, especially for JOINs and subqueries, it is still the developer's responsibility to take advantage of all the new MySQL 5.6 features.
In this tutorial we will present attendees with a sample PHP application that has poor response time. Through practical examples, we will suggest step-by-step strategies to improve its performance, including:
* Checking MySQL & InnoDB configuration
* Internal (performance_schema) and external tools for profiling (pt-query-digest)
* New EXPLAIN tools
* Simple and multiple column indexing
* Covering index technique
* Index condition pushdown
* Batch key access
* Subquery optimization
The document discusses how to use EXPLAIN to optimize SQL queries. EXPLAIN shows how tables are joined, which indexes are used, and how many records are examined. It returns information like the number of tables, join types, and data access methods. The fastest access strategies are const, which uses a primary or unique key to look up at most one row, and eq_ref, which joins on a unique index. EXPLAIN helps identify inefficient queries to improve performance.
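The const and eq_ref access types described above can be demonstrated with two short queries, assuming the standard `employees` sample schema where `emp_no` is the primary key of both tables:

```sql
-- const: primary-key equality, at most one row,
-- resolved once before execution
EXPLAIN SELECT * FROM employees WHERE emp_no = 10001;

-- eq_ref: for each row read from titles, exactly one matching
-- employees row is fetched via its PRIMARY key
EXPLAIN SELECT e.last_name, t.title
FROM titles t
JOIN employees e ON e.emp_no = t.emp_no;
```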
This document discusses techniques for efficient pagination over large datasets using MySQL. The typical solution of using LIMIT and OFFSET can degrade performance as the offset increases. The document proposes using additional criteria like a "last seen" value combined with ordering to retrieve the next page without an offset. This allows fetching the next result set using an index scan. Testing showed a 6x improvement in query throughput over using large offsets. Additional enhancements like secondary indexes and caching are discussed to further optimize pagination.
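The "last seen" technique can be sketched as follows, assuming a hypothetical `articles` table with an indexed `id` column; `100010` stands in for the last id returned by the previous page:

```sql
-- Offset pagination: the server still reads and discards 100000 rows
SELECT id, title FROM articles
ORDER BY id
LIMIT 10 OFFSET 100000;

-- Keyset ("last seen") pagination: the index seeks directly
-- to the start of the next page, no rows are discarded
SELECT id, title FROM articles
WHERE id > 100010
ORDER BY id
LIMIT 10;
```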
MySQL/MariaDB query optimizer tuning tutorial from Percona Live 2013 - Sergey Petrunya
The document discusses techniques for identifying and addressing problems with a database query optimizer. It describes old and new tools for catching slow queries, such as the slow query log, SHOW PROCESSLIST, and the Performance Schema. It also provides examples of using these tools to analyze query plans, identify inefficient plans, and determine if optimizer settings or query structure need to be modified to address performance issues.
This document discusses new SQL syntax and query rewrite plugins in MySQL. It introduces FILTER clauses and custom optimizer hints and describes how to implement them using query rewrite plugins. Key points covered include the plugin interface, parsing and rewriting queries, managing memory, and customizing variables. The goal is to add new SQL features and control query execution through simple syntax extensions. Examples of implementing FILTER clauses and custom optimizer hints are provided to demonstrate how rewrite plugins work.
16 MySQL Optimization #burningkeyboards - Denis Ristic
The document discusses MySQL optimization. It provides details on using EXPLAIN to analyze query performance and the slow query log. It also summarizes using mysqltuner.pl to analyze a MySQL configuration and make recommendations such as disabling unused storage engines, defragmenting tables, enabling the slow query log, and adjusting certain variables like query_cache_size, tmp_table_size, and table_cache. Additional resources on MySQL optimization are also listed.
The document is about explaining the MySQL EXPLAIN statement. It provides an overview of EXPLAIN, how to read the query execution plan (QEP) produced by EXPLAIN, examples of QEPs, and limitations of the MySQL optimizer.
Performance Schema for MySQL Troubleshooting - Sveta Smirnova
Percona Live (https://www.percona.com/live/data-performance-conference-2016/sessions/performance-schema-mysql-troubleshooting)
The Performance Schema in MySQL version 5.6, released in February 2013, is a very powerful tool that can help DBAs discover why even the trickiest performance issues occur. Version 5.7 introduces even more instruments and tables. And while all these give you great power, you can get stuck choosing which instrument to use.
In this session, I will start with a description of a typical problem, then show how to use the Performance Schema to find out what causes the issue and the reason for the unwanted behavior, and how the resulting information can help you solve the particular problem.
Traditionally, Performance Schema sessions teach what is contained in its tables. I will, in contrast, start from a performance issue, then demonstrate which instruments and tables can help solve it. We will discuss how to set up the Performance Schema so that it has minimal impact on your server.
Talend is an open source integration software provider specializing in data integration, master data management, data quality, big data integration, and enterprise application integration. It offers a platform and tools like Talend Data Integration and Talend Data Quality that help with Extract, Transfer, and Load (ETL) processes, ensuring data quality, and enabling integration across various data sources and targets. The presentation concluded with a live demo of Talend's enterprise service bus functionality for intelligent routing, mediation, and service enablement using open standards.
O documento apresenta o calendário editorial de 2014 da publicação Meio & Mensagem, com as datas de fechamento, material e circulação de cada edição do periódico ao longo dos meses do ano, além de informar datas importantes para envio de reservas e materiais.
Haiku Deck is a presentation tool that allows users to create Haiku style slideshows. The tool encourages users to get started making their own Haiku Deck presentations which can be shared on SlideShare. A call to action is given to users to get started creating their own Haiku Deck presentations.
This document provides a summary of Patricia Meredith's qualifications and experience as a registered nurse. It outlines her extensive experience working in various clinical settings including hospitals, forensic nursing, community nursing and as a clinic nurse. It also lists her education qualifications including certificates in various clinical areas as well as a diploma of rural mental health nursing and science in nursing. Key skills highlighted include holistic patient care, communication, and ongoing professional development. The document provides referees and contact details.
Demo: How to get your Digital Aadhaar (eAadhaar) in DigiLockerAmit Ranjan
You can get a digital copy of your Aadhaar (eAadhaar issued by UIDAI, Unique Identification Authority of India) directly in your DigiLocker account. All you have to do is to sign up for a DigiLocker account and sync it with Aadhaar - the digital Aadhaar automatically shows up in your issued documents section.
Query Optimization with MySQL 5.7 and MariaDB 10: Even newer tricksJaime Crespo
Tutorial delivered at Percona Live London 2014, where we explore new features and techniques for faster queries with MySQL 5.6 and 5.7 and MariaDB 10, including the newest options in MySQL 5.7.5 and MariaDB 10.1.
Download here the virtual machine with the example database: http://dbahire.com/pluk14
Update: WordPress has a workaround for STRICT mode: https://core.trac.wordpress.org/ticket/26847
Query Optimization with MySQL 5.6: Old and New TricksMYXPLAIN
The document discusses query optimization techniques for MySQL 5.6, including both established techniques and new features in 5.6. It provides an overview of tools for profiling queries such as EXPLAIN, the slow query log, and the performance schema. It also covers indexing strategies like compound indexes and index condition pushdown.
The document provides tips for optimizing performance of MySQL databases by discussing settings for variables in MySQLD to optimize memory usage and query processing, settings for the MyISAM and InnoDB storage engines to improve performance, and methods for examining slow query logs and using EXPLAIN to identify and address inefficient queries.
MySQLinsanity! This document provides an overview of Stanley Huang's MySQL performance tuning experience and expertise. It begins with introductions and background on Stanley Huang. It then discusses the typical phases of MySQL performance tuning projects, including SQL tuning and RDBMS tuning. Specific tips are provided around topics like slow query logging, index usage, partitioning, and server configuration. The document concludes with an invitation for questions.
MariaDB 10.5 new features for troubleshooting (mariadb server fest 2020)Valeriy Kravchuk
The recently released MariaDB 10.5 GA includes many new, useful features, but I’d like to concentrate on those helping DBAs and support engineers to find out what’s going on when a problem occurs.
Specifically I present and discuss the Performance Schema updates to match MySQL 5.7 instrumentation, new tables in the INFORMATION_SCHEMA to monitor the internals of a generic thread pool and improvements of ANALYZE for statements.
The document discusses three common ways to improve performance of a MySQL database that is experiencing high load:
1. Upgrade hardware by adding more RAM, faster disks, or more powerful CPUs. This provides a temporary fix but can become exponentially more expensive and does not address underlying issues.
2. Change MySQL configuration settings like tmp_table_size or sort_buffer_size to optimize for specific bottlenecks shown in global status variables, but there are no "silver bullets" and misconfigurations must be addressed.
3. Improve indexing and tune queries by addressing issues like temporary tables on disk, full table scans, and lack of indexes causing full joins or sorting, which can have long term benefits over simply adding resources
Talk at "Istanbul Tech Talks" in Istanbul, April, 17, 2018. http://www.istanbultechtalks.com/
In this talk I will show how to get started with MySQL Query Tuning. I will make short introduction into physical table structure and demonstrate how it may influence query execution time. Then we will discuss basic query tuning instruments and techniques, mainly EXPLAIN command with its latest variations. You will learn how to understand its output and how to rewrite query or change table structure to achieve better performance.
The slow query log aggregates queries that took longer than a threshold to run and examines more than a minimum number of rows. Tools like mk-query-digest and mysqldumpslow can analyze the slow query log to provide summaries of the longest running queries, number of calls, and other metrics to help identify optimization opportunities. The top query in this example was a SELECT statement joining multiple tables that accounted for over 99% of the total execution time recorded in the log.
MariaDB and Clickhouse Percona Live 2019 talkAlexander Rubin
Running an analytical (OLAP) workload on top of MySQL can be slow and painful. A specifically designed storage format ("Column Store") can significantly improve analytical queries' performance. There are a number of opensource column store databases around. In this talk, I will focus on two of them which can support MySQL protocol: MariaDB ColumnStore and ClickHouse.
I will show some realtime benchmarks and use cases, and demonstrate how MariaDB ColumnStore and ClickHouse can be used for typical OLAP queries.
New Features
● Developer and SQL Features
● DBA and Administration
● Replication
● Performance
By Amit Kapila at India PostgreSQL UserGroup Meetup, Bangalore at InMobi.
http://technology.inmobi.com/events/india-postgresql-usergroup-meetup-bangalore
Advanced Query Optimizer Tuning and AnalysisMYXPLAIN
The document discusses techniques for identifying and addressing problems with a database query optimizer. It describes how to use tools like the slow query log, SHOW PROCESSLIST, and PERFORMANCE SCHEMA to find slow queries and examine their execution plans. The document provides examples of analyzing queries, identifying inefficient plans, and determining appropriate actions like rewriting queries or adjusting optimizer settings.
This document provides an overview of PostgreSQL topics including:
- Installation and configuration best practices such as using package management and configuring logging
- Routine maintenance activities like vacuuming and backups
- Upgrades and the differences between major, minor, and bugfix versions
- Advanced SQL topics like window functions, common table expressions, and querying slow queries
This document discusses various Oracle SQL concepts including query optimization, execution plans, joins, indexes, and full table scans. It provides guidance on understanding how Oracle processes and executes SQL queries, the importance of statistics and selectivity, and techniques for writing efficient queries such as predicate pushing and query transformations. The goal is to help readers gain a conceptual understanding of Oracle's internals to formulate more efficient SQL.
MySQL® 5.7 is a great release which has a lot to offer, especially in the development and replication areas. It provides a lot of new optimizer features for developers to take advantage of, a much more powerful GIS function and high performance JSON data type, allowing for a more powerful store for semi-structured data. It also features dramatically improved Performance Schema, Parallel and Multi-Source replication, allowing you to scale much further than ever before, just to give you a taste. In this webinar, we will provide an overview of the most important MySQL 5.7 features.
This webinar will be part of a 3-part series which will include MySQL 5.7 for Developers and MySQL 5.7 for DBAs.
Memcached Functions For My Sql Seemless Caching In My SqlMySQLConference
The document summarizes memcached Functions for MySQL, which are user defined functions (UDFs) that allow MySQL to interact with the memcached caching server. The UDFs provide functions to store, retrieve, delete data from memcached as well as retrieve server stats and configure client behaviors. The UDFs are written in C using the libmemcached client library and MySQL UDF API. They allow caching data from MySQL queries in memcached and combining data from MySQL tables and memcached in queries.
Common Schema is a MySQL DBA toolkit that provides a self-contained database schema with tables, views, and stored routines to help with monitoring, security, and analyzing schema objects. It can be installed by running an SQL script and provides built-in documentation and help functions.
Common Schema is a MySQL DBA toolkit that provides a self-contained database schema with tables, views, and stored routines. It allows users to monitor servers, analyze security and objects, and access documentation directly from SQL queries. The presentation introduces Common Schema's key capabilities and provides examples of monitoring status variables, accessing help documentation, and analyzing data size and object information.
New Tuning Features in Oracle 11g - How to make your database as boring as po...Sage Computing Services
One of the key problems that have haunted Oracle sites since the introduction of the cost based optimiser is the ability to provide a stable level of performance over time. The very responsiveness of the CBO to factors such as changes in statistics and initialisation parameters can lead to sudden changes in performance levels. Oracle 11g is set to introduce a number of features that will assist the DBA in providing a stable environment for mission critical applications. Excitement is for out of work time, (and for developers). The aim of most database administrators is to have as boring a working life as possible. Oracle 11g may help us achieve those aims.
This presentation discusses some of those features including:
Capture and replay of workload
Automatic SGA tuning
Managing and fixing plans
The 11g Automatic Tuning Advisor
2. Some improvements in MySQL 5.6
• Basic configuration changes
• EXPLAIN for DML queries
Performance Improvements
• Index Condition Pushdown
• Multi-Range Read
• File Sort Optimization
• Persistent Optimizer Stats
• Partitioning Improvements
3. Some basic configuration changes
• InnoDB File Per Table is enabled by default
• Larger Buffer Pool and Transaction Log files
• Optimized Row-Based Replication
• Multi-Threaded Slaves
• Reduced Performance Schema overhead
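These configuration changes can be sketched as a my.cnf fragment; the buffer pool and log file sizes below are illustrative placeholders, not recommendations, and must be tuned to the workload:

```ini
[mysqld]
# Enabled by default in 5.6: each InnoDB table gets its own .ibd file
innodb_file_per_table   = 1
# Illustrative sizes only -- adjust to available RAM and write volume
innodb_buffer_pool_size = 4G
innodb_log_file_size    = 512M
# Optimized row-based replication: log only the changed columns
binlog_row_image        = minimal
# Multi-threaded slave: apply replication events in parallel, per database
slave_parallel_workers  = 4
```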
4. EXPLAIN for DML queries
EXPLAIN for DML statements (INSERT/UPDATE/DELETE) is available
starting with this version of MySQL.
EXPLAIN DELETE FROM coupon\G
***************** 1. row ***************************
id: 1
select_type: SIMPLE
table: NULL
type: NULL
possible_keys: NULL
key: NULL
key_len: NULL
ref: NULL
rows: 1548305
Extra: Deleting all rows
1 row in set (0.00 sec)
5. Index Condition Pushdown Optimization
• Index Condition Pushdown (ICP) is an optimization for the case
where MySQL retrieves rows from a table using an index.
• Without ICP, the storage engine traverses the index to locate
rows in the base table and returns them to the MySQL server
which evaluates the WHERE condition for the rows.
• With ICP, if parts of the WHERE condition can be evaluated by
using only fields from the index, MySQL pushes this part of
the WHERE condition down to the storage engine. The storage
engine then evaluates the pushed index condition using the
index entry, and only if it is satisfied is the row read from
the table.
• Index Condition Pushdown optimization is used for the range,
ref, eq_ref, and ref_or_null access methods when there is a
need to access full table rows
• Can be used for InnoDB and MyISAM tables.
• Not supported with partitioned tables in MySQL 5.6
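ICP is on by default in 5.6 and can be toggled through optimizer_switch, which is handy for before/after comparisons on the same server; a sketch (session scope):

```sql
-- Disable ICP for this session to reproduce the 5.5-style plan
SET optimizer_switch = 'index_condition_pushdown=off';

-- Re-enable it; with ICP, eligible plans show
-- "Using index condition" in the Extra column of EXPLAIN
SET optimizer_switch = 'index_condition_pushdown=on';
```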
6. ICP
Let's say we want to execute the query below; we will
compare its execution in MySQL 5.5 and MySQL
5.6.
SELECT * FROM coupon
WHERE store_id = 1525
AND name LIKE '%Memorial%' ;
Index is on (`store_id`,`name`)
7. Without ICP (5.5)
mysql> EXPLAIN SELECT * FROM coupon
-> WHERE store_id = 1525 AND
-> name LIKE '%Memorial%' \G
*********** 1. row ****************
id: 1
select_type: SIMPLE
table: coupon
type: ref
possible_keys: idx_test_icp,idx_test_icp_2
key: idx_test_icp
key_len: 4
ref: const
rows: 638280
Extra: Using where
1 row in set (0.00 sec)
SHOW STATUS LIKE 'Hand%';
+----------------------------+--------+
| Variable_name | Value |
+----------------------------+--------+
| Handler_commit | 1 |
| Handler_delete | 0 |
| Handler_discover | 0 |
| Handler_prepare | 0 |
| Handler_read_first | 0 |
| Handler_read_key | 1 |
| Handler_read_last | 0 |
| Handler_read_next | 316312 |
| Handler_read_prev | 0 |
| Handler_read_rnd | 0 |
| Handler_read_rnd_next | 84 |
| Handler_rollback | 0 |
| Handler_savepoint | 0 |
| Handler_savepoint_rollback | 0 |
| Handler_update | 0 |
| Handler_write | 82 |
+----------------------------+--------+
9. Comparison of ICP Execution
• Execution time for this example:
MySQL 5.5: 12.76 sec
MySQL 5.6: 0.15 sec
• The Results are consistent across multiple
executions
10. Multi-Range Read (MRR)
• Reads data from disk sequentially instead of via random reads.
• For secondary indexes, the order for the index entries on disk is different
than the order of disk blocks for the full rows.
• Instead of retrieving the full rows using a sequence of small out-of-order
reads, MRR scans one or more index ranges used in a query, sorts the
associated disk blocks for the row data, then reads those disk blocks using
larger sequential I/O requests. The speedup benefits operations such as
range index scans and equi-joins on indexed columns.
In the example below, the index is as follows:
KEY `idx_test_icp_2` (`store_id`,`custom_sort_order_rank_goupd_id`),
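A sketch of how MRR usage can be verified and forced; the cost-based heuristic often declines to use MRR, so mrr_cost_based=off is useful for testing (query is illustrative):

```sql
-- Force MRR regardless of the cost estimate (session scope)
SET optimizer_switch = 'mrr=on,mrr_cost_based=off';

-- "Using MRR" should now appear in the Extra column for
-- eligible range scans on a secondary index
EXPLAIN SELECT * FROM coupon WHERE store_id BETWEEN 50 AND 1000;

-- read_rnd_buffer_size bounds the buffer MRR uses to sort row IDs
SET SESSION read_rnd_buffer_size = 4 * 1024 * 1024;
```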
13. Comparison of MRR Execution
• Execution time for this example:
MySQL 5.5: (1.82 sec)
MySQL 5.6 (w/MRR, wo/ICP): (0.09 Sec)
• The results are consistent between executions
14. Batched Key Access (BKA)
• It retrieves keys in batches and allows MRR
usage for JOINs, as an alternative to standard
Nested Loop Join execution
• Not enabled by default; we need to set it as
below
SET optimizer_switch='mrr=on,mrr_cost_based=off,batched_key_access=on';
16. With BKA (5.6)
EXPLAIN SELECT c.coupon_id as c_id,
`c` . *,`st`.`name` AS `store`
FROM `coupon` AS `c`
JOIN `store` AS `st`
ON st.store_id = c.store_id
WHERE (st.store_id > 50 AND st.store_id < 1000)\G
**************** 1. row ***************
id: 1
select_type: SIMPLE
table: st
type: range
possible_keys: PRIMARY
key: PRIMARY
key_len: 4
ref: NULL
rows: 1210
Extra: Using index condition; Using MRR
**************** 2. row ***************
id: 1
select_type: SIMPLE
table: c
type: ref
possible_keys: idx_test_icp,idx_test_icp_2,idx_store
key: idx_test_icp
key_len: 4
ref: sonicsave.st.store_id
rows: 103
Extra: Using join buffer (Batched Key Access)
2 rows in set (0.00 sec)
mysql> SHOW STATUS LIKE 'Hand%';
+----------------------------+--------+
| Variable_name | Value |
+----------------------------+--------+
| Handler_commit | 1 |
| Handler_delete | 0 |
| Handler_discover | 0 |
| Handler_external_lock | 4 |
| Handler_mrr_init | 0 |
| Handler_prepare | 0 |
| Handler_read_first | 0 |
| Handler_read_key | 941 |
| Handler_read_last | 0 |
| Handler_read_next | 573892 |
| Handler_read_prev | 0 |
| Handler_read_rnd | 0 |
| Handler_read_rnd_next | 65 |
| Handler_rollback | 0 |
| Handler_savepoint | 0 |
| Handler_savepoint_rollback | 0 |
| Handler_update | 0 |
| Handler_write | 63 |
+----------------------------+--------+
17. Comparison of BKA Execution
• Execution time for this example:
MySQL 5.5: (13.78 sec)
MySQL 5.6: (9.73 sec)
• The results are consistent between executions
• We can also gain some performance
improvement by increasing join_buffer_size;
join_buffer_size does not affect execution time
in the 5.5 version
• In the example above, join_buffer_size is set
to 50MB
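The session settings used for the BKA comparison above can be sketched as follows; BKA reads keys in batches sized by the join buffer, so a larger buffer can mean fewer, larger MRR sweeps:

```sql
-- 50MB join buffer, matching the example above; the right
-- size depends on the join and available memory
SET SESSION join_buffer_size = 50 * 1024 * 1024;
SET optimizer_switch = 'mrr=on,mrr_cost_based=off,batched_key_access=on';
```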
18. Extended Secondary Keys
• Implicit primary keys inside secondary keys
can be used for filtering (ref, range, etc), not
only for covering index or sorting.
• use_index_extensions should be on; it is
enabled by default in 5.6
• In the example below, the index is
KEY `idx_name` (`name`(30))
19. Extended Secondary Keys
mysql> EXPLAIN SELECT * FROM coupon
-> WHERE name = '25% off and Free Shipping on $150+ order.'
-> AND coupon_id > 100000 AND coupon_id < 500000\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: coupon
type: range
possible_keys: PRIMARY,idx_name
key: idx_name
key_len: 36
ref: NULL
rows: 41
Extra: Using index condition; Using where
1 row in set (0.00 sec)
20. Duplicate Key Check
In MySQL 5.6, if you create a duplicate index it will show a warning.
Example: there is already an index on the name column, KEY `idx_name` (`name`(30)).
Create another one with the same definition:
CREATE INDEX `idx_duplicate_name` ON coupon(name(30));
Query OK, 0 rows affected, 1 warning (23.34 sec)
Records: 0 Duplicates: 0 Warnings: 1
SHOW WARNINGS\G
*************************** 1. row ***************************
Level: Note
Code: 1831
Message: Duplicate index 'idx_duplicate_name' defined on the table
'coupon'. This is deprecated and will be disallowed in a future
release.
1 row in set (0.01 sec)
21. Filesort with Short LIMIT
• For queries that combine ORDER BY
non_indexed_column with a LIMIT N clause,
this feature speeds up the sort when the
contents of the N rows fit into the sort buffer.
Works with all storage engines.
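A hedged sketch of the mechanism: the server keeps only the top-N rows in a priority queue inside the sort buffer, so the optimization applies while those rows fit in sort_buffer_size (sizes illustrative):

```sql
-- The optimization applies when the LIMIT rows fit in the sort buffer
SET SESSION sort_buffer_size = 2 * 1024 * 1024;
SELECT * FROM coupon ORDER BY page_title LIMIT 100;

-- Sort_merge_passes staying at 0 suggests the sort fit in memory
SHOW SESSION STATUS LIKE 'Sort_merge_passes';
```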
22. Filesort with Short LIMIT
EXPLAIN SELECT * FROM coupon ORDER BY page_title
LIMIT 100\G
********************** 1. row **********************
id: 1
select_type: SIMPLE
table: coupon
type: ALL
possible_keys: NULL
key: NULL
key_len: NULL
ref: NULL
rows: 1548305
Extra: Using filesort
1 row in set (0.00 sec)
23. Filesort with Short LIMIT Comparison
• Query : SELECT * FROM coupon ORDER BY
page_title LIMIT 100;
• MySQL 5.6 : 3.56 Sec
• MySQL 5.5 : 10.25 Sec
• The results are consistent between executions
24. Join Order
• The table-ordering algorithm has been
optimized, which leads to better query plans
when joining many tables
25. Persistent Optimizer Stats
• Provides improved accuracy of InnoDB index
statistics, and consistency across MySQL
restarts.
• This is controlled by the variable
innodb_stats_persistent, which is enabled by
default.
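The persistent statistics live in the mysql.innodb_table_stats and mysql.innodb_index_stats tables and are refreshed by ANALYZE TABLE; a short sketch (table name illustrative):

```sql
-- Recalculate and persist statistics for one table
ANALYZE TABLE coupon;

-- Inspect what the optimizer will use across restarts
SELECT * FROM mysql.innodb_table_stats WHERE table_name = 'coupon';

-- Per-table override of the global innodb_stats_persistent setting
ALTER TABLE coupon STATS_PERSISTENT = 1, STATS_AUTO_RECALC = 1;
```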
26. Partitioning Improvements
Explicit Partition Selection
• With partitioned tables, MySQL can restrict
processing to only the relevant portions of a
big data set.
• You can directly specify which partitions are
used in a query, DML, or data load operation,
rather than repeating all the partitioning
criteria in each statement
27. Partition Selection Examples
SELECT * FROM coupon PARTITION (p0, p2);
DELETE FROM coupon PARTITION (p0, p1);
UPDATE coupon PARTITION (p0) SET store_id = 2
WHERE name = 'Jill';
SELECT e.id, s.city FROM employees AS e JOIN
stores PARTITION (p1) AS s ...;
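Explicit selection can be paired with EXPLAIN PARTITIONS (available in 5.6) to confirm which partitions a statement will actually touch; a sketch against the coupon table:

```sql
-- The "partitions" column of the plan should list only p0 and p2
EXPLAIN PARTITIONS SELECT * FROM coupon PARTITION (p0, p2);
```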
28. Replication Improvement
Multi-Threaded Slaves
• Using multiple execution threads to apply
replication events to slave servers.
• The multi-threaded slave splits work between
worker threads based on the database name,
allowing updates to be applied in parallel
rather than sequentially.
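A minimal sketch of enabling multi-threaded apply on a 5.6 replica; the worker count is illustrative, and in 5.6 parallelism is per-database only, so a single-database workload gains nothing:

```sql
-- The SQL thread must be restarted for the change to take effect
STOP SLAVE SQL_THREAD;
-- Up to this many worker threads, one per database being applied
SET GLOBAL slave_parallel_workers = 4;
START SLAVE SQL_THREAD;
```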