This document discusses using MySQL 5.1 partitions to improve performance. It begins with an introduction to Giuseppe Maxia, the presenter. It then provides an overview of MySQL partitions, explaining that they allow logical splitting of tables in a way that is transparent to users. It discusses how partitions can address issues like having too much data, not enough RAM, or needing to store and purge historical data quickly. It then demonstrates how to create partitions and discusses partitioning strategies, limitations, and when partitions should be used.
3. Giuseppe Maxia
• a.k.a. The Data Charmer
• MySQL Community Team Lead
• Long time hacking with MySQL features
• Formerly a database consultant, designer, and coder
• A passion for QA
• An even greater passion for open source
• ... and community
• Passionate blogger
• http://datacharmer.blogspot.com
27. What partitions can do
• logical split
• but you can also split the data physically
• granular partitioning (subpartitioning)
• different methods (range, list, hash, key)
29. 5 main reasons to use partitions
• To make single inserts and selects faster
• To make range selects faster
• To help split the data across different paths
• To store historical data efficiently
• To delete large chunks of data INSTANTLY
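The last point can be made concrete: with range partitioning, purging a chunk is a metadata operation via ALTER TABLE ... DROP PARTITION (the table and partition names below are illustrative, not from the deck):

```sql
-- Assuming a table range-partitioned by year: dropping a
-- partition discards all of its rows at once, with no
-- row-by-row DELETE.
ALTER TABLE mytable DROP PARTITION p1998;
```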
34. HOW TO MAKE PARTITIONS
• RTFM ...
• No, seriously, the manual has everything
• But if you absolutely insist ...
35. HOW TO MAKE PARTITIONS
CREATE TABLE t1 (
id int
) ENGINE=InnoDB
# or MyISAM, ARCHIVE
PARTITION BY RANGE (id)
(
PARTITION P1 VALUES LESS THAN (10),
PARTITION P2 VALUES LESS THAN (20)
)
36. HOW TO MAKE PARTITIONS
CREATE TABLE t1 (
id int
) ENGINE=InnoDB
PARTITION BY LIST (id)
(
PARTITION P1 VALUES IN (1,2,4),
PARTITION P2 VALUES IN (3,5,9)
)
37. HOW TO MAKE PARTITIONS
CREATE TABLE t1 (
id int not null primary key
) ENGINE=InnoDB
PARTITION BY HASH (id)
PARTITIONS 10;
38. HOW TO MAKE PARTITIONS
CREATE TABLE t1 (
id int not null primary key
) ENGINE=InnoDB
PARTITION BY KEY ()
PARTITIONS 10;
39. Limitations
• Can partition only by INTEGER columns
• OR you can partition by an expression, which must return an integer
• Maximum 1024 partitions
• If you have a Unique Key or PK, the partition column must be part of that key
• No Foreign Key support
• No Fulltext or GIS support
40. PARTITIONING BY DATE
CREATE TABLE t1 (
d date
) ENGINE=InnoDB
PARTITION BY RANGE (year(d))
(
PARTITION P1 VALUES LESS THAN (1999),
PARTITION P2 VALUES LESS THAN (2000)
)
41. PARTITIONING BY DATE
CREATE TABLE t1 (
d date
) ENGINE=InnoDB
PARTITION BY RANGE (to_days(d))
(
PARTITION P1 VALUES LESS THAN (to_days('1999-01-01')),
PARTITION P2 VALUES LESS THAN (to_days('2000-01-01'))
)
43. Partition pruning
1a - unpartitioned table - SINGLE RECORD
select * from table_name where colx = 120
44. Partition pruning
1a - unpartitioned table - SINGLE RECORD
[diagram: the query goes through the single INDEX and DATA files of the unpartitioned table]
select * from table_name where colx = 120
45. Partition pruning
1b - unpartitioned table - SINGLE RECORD
select * from table_name where colx = 350
46. Partition pruning
1c - unpartitioned table - RANGE
select * from table_name where colx between 120 and 230
47. Partition pruning
2a - table partitioned by colx - SINGLE RECORD
[diagram: partitions 1-99, 100-199, 200-299, 300-399, 400-499, 500-599]
select * from table_name where colx = 120
48. Partition pruning
2a - table partitioned by colx - SINGLE RECORD
[diagram: partitions 1-99, 100-199, 200-299, 300-399, 400-499, 500-599; only the DATA and INDEX of the 100-199 partition are touched]
select * from table_name where colx = 120
49. Partition pruning
2b - table partitioned by colx - SINGLE RECORD
[diagram: partitions 1-99, 100-199, 200-299, 300-399, 400-499, 500-599]
select * from table_name where colx = 350
50. Partition pruning
2c - table partitioned by colx - RANGE
[diagram: partitions 1-99, 100-199, 200-299, 300-399, 400-499, 500-599]
select * from table_name where colx between 120 and 230
51. Partition pruning
before:
EXPLAIN select * from table_name where colx = 120
in 5.1:
EXPLAIN PARTITIONS select * from table_name where colx = 120
52. Partition pruning - not using partition column
explain partitions select count(*)
from table_name where colx=120\G
***** 1. row ****
id: 1
select_type: SIMPLE
table: table_name
partitions: p01,p02,p03,p04,p05,p06,p07,p08,p09,p10,p11,p12,p13,p14,p15,p16,p17,p18,p19
type: index
...
53. Partition pruning - not using partition column
explain partitions select count(*)
from table_name where colx between 120 and 230\G
***** 1. row ****
id: 1
select_type: SIMPLE
table: table_name
partitions: p01,p02,p03,p04,p05,p06,p07,p08,p09,p10,p11,p12,p13,p14,p15,p16,p17,p18,p19
type: index
...
54. Partition pruning - table partitioned by colx
explain partitions select count(*)
from table_name where colx between 120 and 230\G
***** 1. row ****
id: 1
select_type: SIMPLE
table: table_name
partitions: p02,p03
type: index
...
56. When to use partitions?
• if you have large tables
• if you know that you will always query for the partitioning column
• if you have historical tables that you want to purge quickly
• if your indexes are larger than the available RAM
61. components used for testing
• MySQL Sandbox
> creates MySQL instances in seconds
> http://launchpad.net/mysql-sandbox
• MySQL Employees Test Database
> has ~ 4 mil records in 6 tables
> http://launchpad.net/test-db
• Command line ability
> fingers
> ingenuity
64. How many records
select count(*) from salaries;
+----------+
| count(*) |
+----------+
| 2844047 |
+----------+
1 row in set (0.01 sec)
65. How many records in 1998
not partitioned
select count(*) from salaries
where from_date between
'1998-01-01' and '1998-12-31';
+----------+
| count(*) |
+----------+
| 247489 |
+----------+
1 row in set (1.52 sec)
# NOT PARTITIONED
66. How many records in 1998
PARTITIONED
select count(*) from salaries
where from_date between
'1998-01-01' and '1998-12-31';
+----------+
| count(*) |
+----------+
| 247489 |
+----------+
1 row in set (0.41 sec)
# partition p15
67. How many records in 1999
not partitioned
select count(*) from salaries
where from_date between
'1999-01-01' and '1999-12-31';
+----------+
| count(*) |
+----------+
| 260957 |
+----------+
1 row in set (1.54 sec)
# NOT PARTITIONED
68. How many records in 1999
PARTITIONED
select count(*) from salaries
where from_date between
'1999-01-01' and '1999-12-31';
+----------+
| count(*) |
+----------+
| 260957 |
+----------+
1 row in set (0.17 sec)
# partition p16
69. Deleting 1998 records
not partitioned
delete from salaries where
from_date between '1998-01-01'
and '1998-12-31';
Query OK, 247489 rows affected
(19.13 sec)
# NOT PARTITIONED
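For contrast, the partitioned counterpart of this purge is not a DELETE at all: on a salaries table range-partitioned by year(from_date), the same 1998 purge can be done by dropping the partition that covers 1998 (p15 is the partition the earlier count slide reports for 1998, assuming that year falls entirely in one partition):

```sql
-- Drops every 1998 row at once, instead of a 19-second
-- row-by-row DELETE.
ALTER TABLE salaries DROP PARTITION p15;
```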
72. Replication schemes
[diagram: an InnoDB, not partitioned MASTER replicating to three slaves:
- InnoDB, not partitioned SLAVE: concurrent insert, concurrent read
- InnoDB, partitioned by range SLAVE: concurrent batch processing
- MyISAM, partitioned by range SLAVE: large batch processing]
78. RIGHT
PARTITION BY RANGE(year(from_date))
select count(*) from salaries
where from_date between
'1998-01-01' and '1998-12-31';
+----------+
| count(*) |
+----------+
| 247489 |
+----------+
1 row in set (0.46 sec)
79. EXPLAIN
explain partitions select count(*)
from salaries where year(from_date) = 1998\G
***** 1. row ****
id: 1
select_type: SIMPLE
table: salaries
partitions: p01,p02,p03,p04,p05,p06,p07,p08,p09,p10,p11,p12,p13,p14,p15,p16,p17,p18,p19
type: index
...
80. EXPLAIN
explain partitions select count(*)
from salaries where from_date
between '1998-01-01' and '1998-12-31'\G
***** 1. row ****
id: 1
select_type: SIMPLE
table: salaries
partitions: p14,p15
...
85. Benchmarking results (huge server)
engine                         6 month range
InnoDB                         4 min 30s
MyISAM                         25.03s
Archive                        22 min 25s
InnoDB partitioned by month    13.19s
MyISAM partitioned by year     6.31s
MyISAM partitioned by month    4.45s
Archive partitioned by year    16.67s
Archive partitioned by month   8.97s
87. The partition helper
• The INFORMATION_SCHEMA.PARTITIONS table
• The partition helper
> http://datacharmer.blogspot.com/2008/12/partition-helper-improving-usability.html
> A Perl script that creates partitioning statements
• ... anything you are creating right now :)
88. the partition helper
./partitions_helper
--table=mytable --column=prod_id
--interval=1000 --start=1 --end=10000
ALTER TABLE mytable
PARTITION by range (prod_id)
(
partition p001 VALUES LESS THAN (1000)
, partition p002 VALUES LESS THAN (2000)
, partition p003 VALUES LESS THAN (3000)
, partition p004 VALUES LESS THAN (4000)
, partition p005 VALUES LESS THAN (5000)
, partition p006 VALUES LESS THAN (6000)
, partition p007 VALUES LESS THAN (7000)
, partition p008 VALUES LESS THAN (8000)
, partition p009 VALUES LESS THAN (9000)
, partition p010 VALUES LESS THAN (10000)
);
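The helper itself is a Perl script; purely as an illustration of the generation logic (this is a sketch, not the actual script), a few lines of Python can reproduce the integer-interval output shown above:

```python
def make_range_partitions(table, column, interval, start, end):
    """Emit an ALTER TABLE statement with one RANGE partition
    per interval, mimicking the partitions_helper output."""
    lines = [f"ALTER TABLE {table}",
             f"PARTITION by range ({column})",
             "("]
    # Upper bounds: start-1+interval, then every `interval` up to `end`.
    bounds = range(start - 1 + interval, end + 1, interval)
    for i, bound in enumerate(bounds, start=1):
        sep = "" if i == 1 else ", "
        lines.append(f"{sep}partition p{i:03d} VALUES LESS THAN ({bound})")
    lines.append(");")
    return "\n".join(lines)

# Same invocation as the slide: --column=prod_id --interval=1000
# --start=1 --end=10000
print(make_range_partitions("mytable", "prod_id", 1000, 1, 10000))
```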
89. the partition helper
./partitions_helper
--table=mytable --column=d --interval=year
--start=2004-01-01 --end=2009-01-01
ALTER TABLE mytable
PARTITION by range (to_days(d))
(
partition p001 VALUES LESS THAN (to_days('2004-01-01'))
, partition p002 VALUES LESS THAN (to_days('2005-01-01'))
, partition p003 VALUES LESS THAN (to_days('2006-01-01'))
, partition p004 VALUES LESS THAN (to_days('2007-01-01'))
, partition p005 VALUES LESS THAN (to_days('2008-01-01'))
, partition p006 VALUES LESS THAN (to_days('2009-01-01'))
);
90. the partition helper
./partitions_helper --table=mytable
--column=d --interval=month
--start=2008-01-01 --end=2009-01-01
ALTER TABLE mytable
PARTITION by range (to_days(d))
(
partition p001 VALUES LESS THAN (to_days('2008-01-01'))
, partition p002 VALUES LESS THAN (to_days('2008-02-01'))
, partition p003 VALUES LESS THAN (to_days('2008-03-01'))
, partition p004 VALUES LESS THAN (to_days('2008-04-01'))
, partition p005 VALUES LESS THAN (to_days('2008-05-01'))
, partition p006 VALUES LESS THAN (to_days('2008-06-01'))
, partition p007 VALUES LESS THAN (to_days('2008-07-01'))
, partition p008 VALUES LESS THAN (to_days('2008-08-01'))
, partition p009 VALUES LESS THAN (to_days('2008-09-01'))
, partition p010 VALUES LESS THAN (to_days('2008-10-01'))
, partition p011 VALUES LESS THAN (to_days('2008-11-01'))
, partition p012 VALUES LESS THAN (to_days('2008-12-01'))
, partition p013 VALUES LESS THAN (to_days('2009-01-01'))
);
92. TIPS
• For large tables (indexes > available RAM)
> DO NOT USE INDEXES
> PARTITIONS ARE MORE EFFICIENT
93. TIPS
• Before adopting partitions in production, benchmark!
• you can create partitions of different sizes
> in historical tables, you can partition
– current data by day
– recent data by month
– distant past data by year
> ALL IN THE SAME TABLE!
• If you want to enforce a constraint on an integer column, you may set a partition by list, with only the values you want to admit.
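The last tip can be sketched in SQL (table name, column name, and the admitted values here are illustrative): with a single LIST partition, an INSERT of any value outside the list is rejected, so the partitioning acts as a crude integer constraint:

```sql
CREATE TABLE t_codes (
  code int
) ENGINE=InnoDB
PARTITION BY LIST (code)
(
  PARTITION admitted VALUES IN (1,2,3,5)
);
-- insert into t_codes values (4);
-- fails: the table has no partition for value 4
```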
94. TIPS
• For large historical tables, consider using ARCHIVE + partitions
> You see benefits for very large amounts of data (> 500 MB)
> Very effective when you run statistics
> Almost the same performance as MyISAM
> but 1/10 of the storage
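A minimal sketch of the ARCHIVE + partitions combination (table name and ranges are illustrative; note that ARCHIVE tables support only INSERT and SELECT, which fits append-only historical data):

```sql
CREATE TABLE history (
  d date,
  amount int
) ENGINE=ARCHIVE
PARTITION BY RANGE (year(d))
(
  PARTITION p1998 VALUES LESS THAN (1999),
  PARTITION p1999 VALUES LESS THAN (2000)
);
```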
102. Annoyances with partitions
• CREATE TABLE only keeps the result of the expression for each partition.
• (you can use the information_schema to ease the pain in this case)
103. Annoyances with partitions
CREATE TABLE t1 (
d date
) ENGINE=InnoDB
PARTITION BY RANGE (to_days(d))
(
PARTITION P1 VALUES LESS THAN (to_days('1999-01-01')),
PARTITION P2 VALUES LESS THAN (to_days('2000-01-01'))
)
104. Annoyances with partitions
CREATE TABLE `t1` (
`d` date DEFAULT NULL
) ENGINE=InnoDB DEFAULT
CHARSET=latin1
/*!50100 PARTITION BY RANGE
(to_days(d))
(PARTITION P1 VALUES LESS THAN
(730120) ENGINE = InnoDB,
PARTITION P2 VALUES LESS THAN
(730485) ENGINE = InnoDB) */
105. Fixing annoyances with partitions
select partition_name part,
partition_expression expr,
partition_description val
from information_schema.partitions
where table_name='t1';
+------+------------+--------+
| part | expr | val |
+------+------------+--------+
| P1 | to_days(d) | 730120 |
| P2 | to_days(d) | 730485 |
+------+------------+--------+
106. fixing annoyances with partitions
select partition_name part ,
partition_expression expr,
from_days(partition_description) val
from information_schema.partitions
where table_name='t1';
+------+------------+------------+
| part | expr | val |
+------+------------+------------+
| P1 | to_days(d) | 1999-01-01 |
| P2 | to_days(d) | 2000-01-01 |
+------+------------+------------+