Flexviews is a materialized view solution for MySQL. This set of slides introduces Flexviews concepts and gives some examples of how to use it and what to use it for.
New features in Performance Schema 5.7 in action - Sveta Smirnova
Tutorial for PerconaLive Amsterdam - September, 2015
Virtual machine can be downloaded from https://drive.google.com/file/d/0B67lYkmv0ZcsNE5wTFJUNC1sUlk/view?usp=sharing
Percona Live 2016 (https://www.percona.com/live/data-performance-conference-2016/sessions/why-use-explain-formatjson). Although EXPLAIN FORMAT=JSON was first presented a long time ago, there still aren't many resources that explain how and why to use it. The most advertised feature is visual EXPLAIN in MySQL Workbench, but this format can do more than create nice pictures. It prints additional information that can't be found in good old tabular EXPLAIN, and can help to solve many tricky performance issues. In this session, I will not only describe which additional information we can get with the new syntax, but also provide examples showing how to use it to diagnose production issues.
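A quick illustration of that richer output (the table t1 and its indexed column a are hypothetical; the field names are those produced by MySQL's JSON explain):

```sql
mysql> EXPLAIN FORMAT=JSON SELECT * FROM t1 WHERE a > 10\G
-- The JSON document includes fields such as "cost_info",
-- "used_key_parts" and "attached_condition", none of which
-- appear in the tabular EXPLAIN output.
```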
Resolving performance problems is not too difficult when you can access and change the offending code. Tuning packaged applications, however, requires a different set of skills and techniques. This presentation covers methods of identifying problems in packaged applications, and discusses some tuning techniques that can be used in situations where you can't change the code. Topics include Initialisation Parameters, Stored Outlines, and SQL Profiles.
This slide deck describes the Flexviews materialized view toolkit for MySQL:
http://flexvie.ws
Learn how to use incrementally refreshable materialized views, and how they can improve your performance.
Too many of us have been taught that views are nothing more than stored SQL statements. The goal of this presentation is to challenge this notion. Or, to be precise, to take you one step further, from technicalities to the huge role that well-designed views can play in contemporary database solutions.
Current views are very advanced. They can be built on top of user-defined functions; they can utilize extremely complex INSTEAD-OF triggers (including composite ones), and they can even have indexes! As a result, with all of this added functionality views can serve as an isolation layer between UI-driven data representation (heavily denormalized and customized) and DBA-driven data representation (normalized with referential integrity constraints and foreign keys).
Contemporary IT solutions often take this transformation completely out of the database, usually moving it to the middle-tier. This presentation will show that keeping business logic IN the database provides you with much greater flexibility, manageability, and performance. Of course, there are some traps and pitfalls, but these are also avoidable. Real-world examples will be provided to show that the role of views is seriously underestimated.
Performance Schema for MySQL Troubleshooting - Sveta Smirnova
Percona Live (https://www.percona.com/live/data-performance-conference-2016/sessions/performance-schema-mysql-troubleshooting)
The performance schema in MySQL version 5.6, released in February, 2013, is a very powerful tool that can help DBAs discover why even the trickiest performance issues occur. Version 5.7 introduces even more instruments and tables. And while all these give you great power, you can get stuck choosing which instrument to use.
In this session, I will start with a description of a typical problem, then show you how to use the performance schema to find out what causes the issue and the reason for the unwanted behavior, and how the resulting information can help you solve that particular problem.
Traditionally, performance schema sessions teach what is contained in its tables. I will, in contrast, start from a performance issue, then demonstrate which instruments and tables can help solve it. We will also discuss how to set up the performance schema so that it has minimal impact on your server.
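As a minimal sketch of that setup step (table and column names as in MySQL 5.7's performance_schema; the instrument pattern is just an example):

```sql
-- Enable statement instrumentation and its history consumer at runtime
mysql> UPDATE performance_schema.setup_instruments
    ->    SET ENABLED = 'YES', TIMED = 'YES'
    ->  WHERE NAME LIKE 'statement/sql/%';

mysql> UPDATE performance_schema.setup_consumers
    ->    SET ENABLED = 'YES'
    ->  WHERE NAME = 'events_statements_history';

-- Then inspect the slowest recent statements
mysql> SELECT EVENT_NAME, SQL_TEXT, TIMER_WAIT
    ->   FROM performance_schema.events_statements_history
    ->  ORDER BY TIMER_WAIT DESC LIMIT 5;
```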
Maximizing SQL Reviews and Tuning with pt-query-digest - Pythian
PalominoDB's Mark Filipi feels that pt-query-digest is one of the more valuable components of the Percona Toolkit available as OSS to DBAs. In this talk, Mark will teach with an eye towards real-world test cases, output reviews, and anecdotal production experience.
Another year goes by, and most likely, another data access framework has been invented. It will claim to be the fastest, smartest way to talk to the database, and just like all those that came before it, it will not be. Because the best database access tool has been around for more than 30 years now, and that is PL/SQL. Although we all sometimes fall prey to the mindset of "Oh look, a shiny new tool, we should start using it," the performance and simplicity of PL/SQL remain unmatched. This session looks at the failings of other data access languages, why even a cursory knowledge of PL/SQL will make you a better developer, and how to get the most out of PL/SQL when it comes to database performance.
Slides from OOW13
It's the age-old problem. The SQL statement that needs to run in 5 seconds unfortunately runs in 10 seconds, or 10 minutes, or 10 hours. A SQL statement gets emailed to you with the simple subject title: "Make it faster". We'll start from this point in the process, and look at what you can do to tackle this common issue.
In this talk about performance for the 2018-01-25 Dallas Oracle Users Group meeting, Cary Millsap talks about application design, ways to control Oracle sql_trace, measurement intrusion, and function interposition.
OpenWorld 2018 - Common Application Developer Disasters - Connor McDonald
Two of the critical requirements of a database are:
- run fast
- data integrity
The database can achieve both of these things, but only as long as you understand its mechanisms correctly. If you don't, things can go downhill fast.
Robert Pankowecki - Did the SQL database vendors deceive us? - SegFaultConf
Imagine that changes (domain events) occur in your application. We would like to expose those changes externally, so that we can build reports, read models, and sagas from them, and synchronize data. Will this task turn out to be hard or easy if we use a SQL database? What have we gained by using an RDBMS/SQL, and what have we lost, perhaps irrevocably? In this talk I will tell you how I wanted to build a certain feature for the Rails Event Store library, why it turned out to be harder than I thought, about the MVCC model in PostgreSQL, and whether there is a way to work around it and emulate READ UNCOMMITTED mode. Or could the whole problem be approached completely differently, by hooking into the Write-Ahead Log (WAL) and winning that way? I will also show how, in my opinion, using exactly the same concepts that underlie Event Sourcing and databases, we could build APIs so that every time I write an integration with service X I don't have to wonder whether its authors understand the concept of idempotency. Or how we could achieve simplicity by using Convergent Replicated Data Types (CRDTs). Perhaps as a community we can do better than REST on top of CRUD. We will consider whether the SQL vendors have scrambled our brains, made us forget the simplest thing that could possibly work, and led us into the mess we currently find ourselves in. Or maybe we have only ourselves to blame? TL;DR: Couldn't our applications work the way databases work under the hood? Does it all have to be so heavy and complicated if we want microservices, especially in a small team that isn't keen on adding a fifth database to the technology stack?
Replication Troubleshooting in Classic VS GTID - Mydbops
This presentation will help you troubleshoot MySQL replication for the most common issues you might face, with a simple comparison of how to solve them in the different replication methods (classic vs. GTID).
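As one concrete example of that difference, skipping a single failed event works quite differently in the two modes (a sketch; the GTID value below is a placeholder):

```sql
-- Classic (binlog file/position) replication:
mysql> STOP SLAVE;
mysql> SET GLOBAL sql_slave_skip_counter = 1;
mysql> START SLAVE;

-- GTID replication: inject an empty transaction for the offending GTID
mysql> STOP SLAVE;
mysql> SET GTID_NEXT = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:123';
mysql> BEGIN; COMMIT;
mysql> SET GTID_NEXT = 'AUTOMATIC';
mysql> START SLAVE;
```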
OSMC 2008 | Monitoring MySQL by Geert Vanderkelen - NETWAYS
Monitoring MySQL has a long history within Nagios. Several plugins are available already. In addition to that, there are probably lots of plugins that have been developed by the community. We take a look at some of these and discuss what kind of additional useful information could be pulled out of a MySQL Server for monitoring it even better. A simple example on how to write such plugins will be shown, also using NDB API for monitoring MySQL Cluster. Now that MySQL Enterprise Monitor (MEM) is available, we'll go through the possibilities for combining the two platforms. We will also discuss the NDOUtils for storing configuration and event data using MySQL.
This talk starts with a brief overview of MySQL itself: some history, where it is heading, and why it is so successful.
The optimizer must try to be all things to all people, and similarly, the collection of optimizer statistics must try to satisfy the needs of all. Many DBAs just leave it at that. But the optimizer offers so much more. With a little extra effort and discipline, we can move beyond a "one-size-fits-all" policy and maximize the benefit of all of the optimizer features. We'll look at the tools now available under DBMS_STATS to get more stability and better performance from optimizer statistics.
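For example (a sketch; DEMO.T is a hypothetical table), per-table preferences let individual tables deviate from the database-wide policy, and locking protects a carefully constructed set of statistics from the automatic job:

```sql
-- Override METHOD_OPT for this table only: no histograms
SQL> exec dbms_stats.set_table_prefs('DEMO','T','METHOD_OPT','FOR ALL COLUMNS SIZE 1');

-- Freeze the current statistics on the table
SQL> exec dbms_stats.lock_table_stats('DEMO','T');
```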
64. SQL> select client_name, status
2 from DBA_AUTOTASK_CLIENT;
CLIENT_NAME STATUS
------------------------------------ --------
auto optimizer stats collection ENABLED
auto space advisor ENABLED
sql tuning advisor ENABLED
120. Cursors referencing PEOPLE in other sessions:

select * from PEOPLE

insert into PEOPLE
select * from ...

select ...
from PEOPLE, DEPT
where ...

delete from T
where X in ( select PID ...
             from PEOPLE )

declare
  v people.name%type;
begin
  ...

while statistics are gathered:

SQL> begin
  2    dbms_stats.gather_table_stats(
  3      'DEMO',
  4      'PEOPLE');
  5  end;
  6  /
125. SQL> desc DBMS_STATS
PROCEDURE GATHER_TABLE_STATS
Argument Name Type In/Out Default?
----------------------- -------------- ------ --------
OWNNAME VARCHAR2 IN
TABNAME VARCHAR2 IN
PARTNAME VARCHAR2 IN DEFAULT
ESTIMATE_PERCENT NUMBER IN DEFAULT
BLOCK_SAMPLE BOOLEAN IN DEFAULT
METHOD_OPT VARCHAR2 IN DEFAULT
DEGREE NUMBER IN DEFAULT
GRANULARITY VARCHAR2 IN DEFAULT
CASCADE BOOLEAN IN DEFAULT
STATTAB VARCHAR2 IN DEFAULT
STATID VARCHAR2 IN DEFAULT
STATOWN VARCHAR2 IN DEFAULT
NO_INVALIDATE BOOLEAN IN DEFAULT
STATTYPE VARCHAR2 IN DEFAULT
FORCE BOOLEAN IN DEFAULT
129. SQL> select
2 x.ksppinm name,
3 y.kspftctxvl value
4 from
5 sys.x$ksppi x,
6 sys.x$ksppcv2 y
7 where
8 x.indx+1 = y.kspftctxpn and
9 x.ksppinm = '_optimizer_invalidation_period'
NAME VALUE
------------------------------ ------------
_optimizer_invalidation_period 18000
130. The same cursors referencing PEOPLE in other sessions:

select * from PEOPLE

insert into PEOPLE
select * from ...

select ...
from PEOPLE, DEPT
where ...

delete from T
where X in ( select PID ...
             from PEOPLE )

declare
  v people.name%type;
begin
  ...

while statistics are gathered again:

SQL> exec dbms_stats.gather_table_stats(
       user,'PEOPLE');
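Rolling cursor invalidation (governed by the _optimizer_invalidation_period parameter shown earlier, 18000 seconds by default) can be overridden per call through the NO_INVALIDATE parameter of GATHER_TABLE_STATS; a minimal sketch, assuming the same PEOPLE table:

```sql
-- Invalidate dependent cursors immediately rather than over the rolling
-- window; the default is DBMS_STATS.AUTO_INVALIDATE
SQL> exec dbms_stats.gather_table_stats( user, 'PEOPLE', no_invalidate => false );
```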
215. SQL> select * from t where dte = '12-AUG-2011';
----------------------------------------------------
| Id | Operation | Name | Rows |
----------------------------------------------------
| 0 | SELECT STATEMENT | | 2381 |
| 1 | TABLE ACCESS BY INDEX ROWID| T | 2381 |
|* 2 | INDEX RANGE SCAN | IX | 2381 |
----------------------------------------------------
216. SQL> select * from t where dte = '14-AUG-2011';
----------------------------------------------------
| Id | Operation | Name | Rows |
----------------------------------------------------
| 0 | SELECT STATEMENT | | 1 |
| 1 | TABLE ACCESS BY INDEX ROWID| T | 1 |
|* 2 | INDEX RANGE SCAN | IX | 1 |
----------------------------------------------------
231. SQL> desc VEHICLE
Name Null? Type
-------------------------- -------- -------------
ID NUMBER
MAKE VARCHAR2(6)
MODEL VARCHAR2(6)
SQL> select count(*)
2 from VEHICLE;
COUNT(*)
------------
2,157,079
273. SQL> desc DBA_TABLES
Name Null? Type
----------------------------- -------- -------------
OWNER NOT NULL VARCHAR2(30)
TABLE_NAME NOT NULL VARCHAR2(30)
...
NUM_ROWS NUMBER
274. SQL> desc DBA_TAB_COLS
Name Null? Type
----------------------------- -------- -------------
OWNER NOT NULL VARCHAR2(30)
TABLE_NAME NOT NULL VARCHAR2(30)
COLUMN_NAME NOT NULL VARCHAR2(30)
...
NUM_DISTINCT NUMBER
LOW_VALUE RAW(32)
HIGH_VALUE RAW(32)
...
275. SQL> desc PEOPLE
Name Null? Type
----------------------------- -------- -------------
PID NUMBER
GENDER CHAR(1)
NAME VARCHAR2(47)
AGE NUMBER
DEATH_RATE NUMBER
276. SQL> alter session set sql_trace = true;
Session altered.
SQL> begin
  2    dbms_stats.gather_table_stats(
  3      'DEMO',
  4      'PEOPLE');
  5  end;
  6  /
293. SQL> set serverout on
SQL> declare
2 type t_bucket is table of varchar2(1);
3 l_synopsis t_bucket;
4 l_splits number := 0;
5 l_hash int;
6 l_min_val int := 0;
7 l_synopsis_size int := 16;
8 begin
9 for i in ( select single_char from one_pass ) loop
10 l_hash := ascii(i.single_char);
11
12 if l_synopsis.count = l_synopsis_size then
13 l_min_val :=
14 case
15 when l_min_val = 0 then 64
16 when l_min_val = 64 then 96
17 when l_min_val = 96 then 112
18 when l_min_val = 112 then 120
19 end;
20 l_splits := l_splits + 1;
294. 22
23 for j in 1 .. l_min_val loop
24 if l_synopsis.exists(j) then
25 l_synopsis.delete(j);
26 end if;
27 end loop;
28 end if;
29
30 if l_hash > l_min_val then
31 l_synopsis(l_hash) := 'Y';
32 end if;
33 end loop;
34 dbms_output.put_line(l_synopsis.count *
35 power(2,l_splits));
36 end;
37 /
Splitting, keeping entries above 64
Splitting, keeping entries above 96
Splitting, keeping entries above 112
88
313. SQL> desc SYS.COL_USAGE$
Name Null? Type
----------------------- -------- ---------
OBJ# NUMBER
INTCOL# NUMBER
EQUALITY_PREDS NUMBER
EQUIJOIN_PREDS NUMBER
NONEQUIJOIN_PREDS NUMBER
RANGE_PREDS NUMBER
LIKE_PREDS NUMBER
NULL_PREDS NUMBER
TIMESTAMP DATE
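The raw contents of SYS.COL_USAGE$ above can be summarized in readable form with DBMS_STATS.REPORT_COL_USAGE (available in recent Oracle versions); a sketch, assuming a DEMO.PEOPLE table:

```sql
SQL> set long 10000
SQL> select dbms_stats.report_col_usage('DEMO','PEOPLE')
  2  from dual;
```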
314. SQL> create table T (
  2    skew varchar2(10),     -- 5 special values
  3    even number);          -- even distribution
Table created.
SQL> insert into T
  2  select
  3    case
  4      when rownum > 99995 then 'SPECIAL'
  5      else dbms_random.string('U',8)
  6    end,
  7    mod(rownum,200)
  8  from dual
  9  connect by level <= 100000
 10  /
100000 rows created.
315. SQL> exec dbms_stats.gather_table_stats(
user,'T', estimate_percent=>null);
PL/SQL procedure successfully completed.
SQL> select COLUMN_NAME,NUM_DISTINCT,DENSITY,
2 ( select count(*)
3 from user_tab_histograms
4 where table_name = 'T'
5 and column_name = c.column_name ) hist_cnt
6 from user_tab_cols c
7 where table_name = 'T'
8 order by table_name,COLUMN_ID
9 /
COLUMN_NAME NUM_DISTINCT DENSITY HIST_CNT
------------------- ------------ ---------- ----------
SKEW 99996 .00001 2
EVEN 200 .005 2
316. SQL> select * from T where skew = 'SPECIAL';
SKEW EVEN
---------- ----------
SPECIAL 196
SPECIAL 197
SPECIAL 198
SPECIAL 199
SPECIAL 0
5 rows selected.
SQL> select * from T where even = 5;
SKEW EVEN
---------- ----------
IBRXGVIE 5
[snip]
500 rows selected.
317. SQL> exec dbms_stats.gather_table_stats(
user,'T', estimate_percent=>null);
PL/SQL procedure successfully completed.
SQL> select COLUMN_NAME,NUM_DISTINCT,DENSITY,
2 ( select count(*)
3 from user_tab_histograms
4 where table_name = 'T'
5 and column_name = c.column_name ) hist_cnt
6 from user_tab_cols c
7 where table_name = 'T'
8 order by table_name,COLUMN_ID
9 /
COLUMN_NAME NUM_DISTINCT DENSITY HIST_CNT
------------------- ------------ ---------- ----------
SKEW 99996 .00001 2
EVEN 200 .000005 200
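The 200-bucket histogram that appeared on EVEN above comes from the default METHOD_OPT of 'FOR ALL COLUMNS SIZE AUTO', which consults the recorded column usage. Histograms can also be requested or suppressed explicitly; a minimal sketch for the same table T:

```sql
SQL> begin
  2    dbms_stats.gather_table_stats(
  3      user, 'T',
  4      estimate_percent => null,
  5      -- histogram on SKEW, single bucket (no histogram) on EVEN
  6      method_opt => 'for columns skew size 254 even size 1' );
  7  end;
  8  /
```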
324. SQL> desc CONFERENCE_ATTENDEES
Name Null? Type
----------------------------- -------- --------------
PID NUMBER
GENDER CHAR(1)
NAME VARCHAR2(40)
AGE NUMBER
DEATH_RATE NUMBER
335. SQL> desc INDEX_STATS
Name Null? Type
----------------------------- -------- --------------
HEIGHT NUMBER
BLOCKS NUMBER
NAME VARCHAR2(30)
PARTITION_NAME VARCHAR2(30)
...
OPT_CMPR_COUNT NUMBER
OPT_CMPR_PCTSAVE NUMBER