This document provides documentation for Percona XtraDB Cluster, an open-source high availability and scalability solution for MySQL users. It includes sections on installation from binaries or source code, key features like high availability and multi-master replication, FAQs, how-tos, limitations, and other documentation. Percona XtraDB Cluster provides synchronous replication across multiple MySQL/Percona Server nodes, allowing for high availability and the ability to write to any node.
Built-in MySQL Replication is well known for making it easy to scale reads. However, the asynchronous nature of this replication brings some limitations and known issues. This talk describes another way of doing MySQL replication, using the synchronous replication available in Percona XtraDB Cluster. The open source solution is explained and compared to traditional asynchronous MySQL replication, and some known use cases are described. Percona XtraDB Cluster is an open source, high availability and high scalability solution for MySQL clustering. Features include: synchronous replication, multi-master replication support, parallel replication, and automatic node provisioning.
Upgrading MySQL databases does not come without risk. There is no guarantee that no problems will occur when you move to a new major MySQL version.
Should we just upgrade and rollback immediately if problems occur? But what if these problems only happen a few days after migrating to this new version?
You might have a database environment that is risk-averse, where you really have to be sure that this new MySQL version will handle the workload properly.
Examples:
- Both MySQL 5.6 and 5.7 include many changes to the MySQL optimizer. These are expected to improve the performance of my queries, but is that really the case? What if there is a performance regression? How will this affect my database performance?
- There are also many incompatible changes documented in the release notes. How do I know whether my workload is affected? It's a lot to read.
- Can I go immediately from MySQL 5.5 to 5.7 and skip MySQL 5.6 even though the MySQL documentation states that this is not supported?
- Many companies have staging environments, but is there a QA team and do they really test all functionality, under a similar workload?
This presentation will show you a process, using open source tools, for performing these types of migrations, with a focus on assessing risk and fixing any problems you might run into prior to the migration.
This process can then be used for various changes:
- MySQL upgrades for major version upgrades
- Switching storage engines
- Changing hardware architecture
Additionally, we will describe ways to do the actual migration and rollback with the least amount of downtime.
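For context, one open-source tool commonly used for this kind of pre-migration risk assessment is pt-upgrade from Percona Toolkit, which replays captured queries against two server versions and reports differences in results, errors, and warnings. A hypothetical invocation might look like the following (hostnames, the user name, and the log path are placeholders):

```
# Replay queries from a slow query log against a 5.5 and a 5.7 test server,
# comparing results, errors, and warnings between the two versions.
pt-upgrade /var/log/mysql/slow.log \
    h=test-55.example.com,u=upgrade_check \
    h=test-57.example.com,u=upgrade_check \
    --ask-pass
```

Running the replay against test servers restored from a production backup keeps the assessment representative of the real workload without touching production.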
Advanced Percona XtraDB Cluster in a nutshell... la suite (Kenny Gryp)
This document provides a hands-on tutorial for advanced Percona XtraDB Cluster users. It discusses setting up a 3 node PXC cluster environment in VirtualBox and bootstrapping the initial cluster. It then covers topics like avoiding state snapshot transfers when restarting MySQL, recovering from clean and unclean shutdowns, and reproducing and diagnosing different types of conflicts through examples.
This document provides an overview of Percona XtraDB Cluster, a high availability and data replication solution for MySQL databases. It discusses key features like synchronous multi-master replication, parallel transaction application across nodes, and automatic node provisioning to maintain data consistency even during network failures. It also notes some current limitations around supported table types, optimistic transaction locking, and write scalability being limited by the weakest node. The document aims to explain how Percona XtraDB Cluster improves on traditional MySQL replication to provide both high availability and data consistency for mission critical database applications.
Building Apache Cassandra clusters for massive scale (Alex Thompson)
Covering theory and operational aspects of bringing up Apache Cassandra clusters - this presentation can be used as a field reference. Presented by Alex Thompson at the Sydney Cassandra Meetup.
Percona XtraDB Cluster is a high availability and high scalability solution for MySQL clustering. Percona XtraDB Cluster integrates Percona Server with the Galera synchronous replication library in a single product package which enables you to create a cost-effective MySQL cluster.
This tutorial will cover the following topics:
- Migration from a standard MySQL master-slave architecture to PXC
- Configuration differences between standard MySQL and XtraDB Cluster
- How to add a node, what SST and IST mean, and how to use them
- How to back up the cluster
- How to monitor the cluster
- Two-node clusters: why this isn't ideal, and the reasons and steps for setting one up anyway
- Galera Arbitrator: what it is
- How to maintain the cluster
- Setting up load balancing for XtraDB Cluster
- How to handle the cluster in the cloud
- Tips and tricks
- ...and, if available, PXC 5.6 with Galera 3!
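To give a flavour of the configuration involved, a minimal my.cnf fragment for a node joining a PXC cluster might look like the sketch below. The addresses, cluster name, and SST credentials are placeholders, and the SST method shown assumes Percona XtraBackup is installed on all nodes:

```
[mysqld]
# Galera requires row-based replication and InnoDB
binlog_format            = ROW
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2

# Cluster membership: the joiner contacts these peers; an SST or IST
# is performed automatically depending on how far behind the node is
wsrep_provider           = /usr/lib/libgalera_smm.so
wsrep_cluster_name       = my_pxc_cluster
wsrep_cluster_address    = gcomm://10.0.0.1,10.0.0.2,10.0.0.3
wsrep_node_address       = 10.0.0.3

# Non-blocking state snapshot transfer via XtraBackup
wsrep_sst_method         = xtrabackup-v2
wsrep_sst_auth           = sstuser:sstpassword
```

With a configuration along these lines, starting mysqld on the new node triggers the state transfer from a donor without manual data copying.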
Webinar Slides: Migrating to Galera Cluster (Severalnines)
This document discusses considerations for migrating to Galera Cluster replication from MySQL or other database systems. It covers differences in supported features between Galera and MySQL, including storage engines, tables without primary keys, auto-increment handling, and DDL processing. It also addresses multi-master conflicts, long transactions, LOAD DATA processing, and using Galera with MySQL replication. An overview of online migration is provided along with guidance on validating schemas and checking for compatibility prior to migration.
The document discusses high availability and scalability in MySQL. It describes various techniques for achieving high availability including replication, clustering, and shared storage solutions. It also discusses different approaches for scaling MySQL including replication, sharding, and clustering. MySQL replication is described as asynchronous with a single master and multiple read-only slaves. MySQL Cluster provides synchronous replication across nodes and automatic failover for high availability.
Pacemaker is a high availability cluster resource manager that can be used to provide high availability for MySQL databases. It monitors MySQL instances, with data replicated between nodes using MySQL replication. If the primary MySQL node fails, Pacemaker detects the failure and fails over to the secondary node, bringing the MySQL service back online without downtime. Pacemaker manages shared storage and virtual IP failover to ensure connections go directly to the active MySQL node. It is important to monitor replication state and lag to ensure data consistency between nodes.
This document discusses online migration from an existing MySQL master-slave setup to a Galera cluster. It outlines the steps to enable binary logging on the slave, dump the schema and data, load these into the first Galera node to initialize replication, and transition reads to the Galera cluster while writes continue on the master, initially at around 90%, before being cut over fully to the cluster. Operational checklists, backup procedures, and disaster recovery options for the new Galera cluster configuration are also reviewed.
RAC - Installing your First Cluster and Database (Nikhil Kumar)
RAC - Installing your First RAC
Abstract: Oracle Real Application Clusters has been one of the hottest technologies in the market since 2001; prior to this it was known as OPS in 8i. Oracle has brought a revolution in the field of databases by enhancing RAC technologies in each version. This presentation gives an introduction to RAC and the features introduced in each version of RAC. It contains a demo of building Oracle Clusterware from scratch. We will also discuss the new components and their features during installation. The presentation and demo are done on version 11gR2, which will be used as the base for our next presentation, viz. the upgrade of RAC 11gR2 to 12c RAC.
This presentation gives brief insight into RAC infrastructure setup. Sometimes DBAs aren't fully aware of the prerequisite and verification steps that need to be performed before installing Clusterware, so this session covers things to consider before installing Clusterware and the best practices followed during the whole process.
Agenda
Introduction of RAC
Installation of Clusterware
Creating a diskgroup / adding disks to a diskgroup using ASMCA
Creation of an ACFS volume
Installation of a RAC database using DBCA
This document provides instructions for an exercise to familiarize users with cluster administration basics in Data ONTAP. The objectives are to connect to the command shell, explore the command hierarchy, manage privileges and licenses, and install and configure OnCommand System Manager. The tasks include connecting to the cluster shell, exploring commands and options, comparing privilege levels, using tab completion, installing and configuring OnCommand System Manager, and managing feature licenses.
This document provides an overview of virtualization overheads and benchmarks. It discusses different types of hypervisors like VMware ESX, KVM, Hyper-V, and Xen. It describes the overheads introduced at the CPU, memory, disk, and network levels. It covers how hardware assists have helped reduce overheads. It also discusses nested scheduling of I/O and page caches. Benchmark results show the impact of different hypervisor and guest OS configuration combinations on performance. The document concludes with sections on benchmarking virtualization performance and isolation.
This document provides instructions for setting up different types of MySQL replication architectures:
1) It describes how to configure basic master-slave replication between two servers with step-by-step instructions for configuring the master and slave servers.
2) It also provides a second method for implementing master-slave replication with additional details on configuring the replication user and importing databases.
3) Finally, it outlines how to set up a master-master replication configuration between two MySQL servers to provide high availability, with each server acting as both a master and slave.
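The master-slave steps described above boil down to a handful of statements; a minimal sketch in classic MySQL syntax might be the following (the host, account, password, and binary log coordinates are placeholders to be taken from `SHOW MASTER STATUS` on the real master):

```
-- On the master: create a dedicated replication account
CREATE USER 'repl'@'10.0.0.%' IDENTIFIED BY 'secret';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.0.0.%';

-- On the slave: point it at the master's binary log and start replicating
CHANGE MASTER TO
    MASTER_HOST='10.0.0.1',
    MASTER_USER='repl',
    MASTER_PASSWORD='secret',
    MASTER_LOG_FILE='mysql-bin.000001',
    MASTER_LOG_POS=4;
START SLAVE;
```

For a master-master setup, the same `CHANGE MASTER TO` step is simply repeated in the other direction, with each server also acting as a slave of its peer.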
This document summarizes a presentation about MySQL Group Replication. The presentation discusses how Group Replication provides enhanced high availability for MySQL databases by allowing multiple MySQL servers to act as equal masters that can handle writes and remain available even if one server fails. It covers the theory behind Group Replication, how to configure and use it, and management of Group Replication deployments.
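As a sketch of what such a configuration can look like, a minimal multi-primary Group Replication fragment for one member's my.cnf might be the following (the group name UUID and the addresses are placeholders):

```
[mysqld]
server_id                = 1
# Group Replication requires GTIDs and row-based binary logging
gtid_mode                = ON
enforce_gtid_consistency = ON
binlog_format            = ROW
binlog_checksum          = NONE
plugin_load_add          = 'group_replication.so'

# The group name must be a valid UUID shared by all members
group_replication_group_name          = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'
group_replication_local_address       = '10.0.0.1:33061'
group_replication_group_seeds         = '10.0.0.1:33061,10.0.0.2:33061,10.0.0.3:33061'
# OFF enables multi-primary mode, so every member accepts writes
group_replication_single_primary_mode = OFF
```

The first member then bootstraps the group with `START GROUP_REPLICATION` after setting `group_replication_bootstrap_group = ON` for that one start; subsequent members simply join.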
Training Slides: Intermediate 202: Performing Cluster Maintenance with Zero-D... (Continuent)
Join us for this intermediate training session as we explore how to leverage the power of Tungsten Clustering to perform database and OS maintenance with zero downtime. This training is for anyone new to Continuent without prior experience, but will also serve as a wonderful refresher for any current users. Basic MySQL knowledge is assumed.
AGENDA
- Review the cluster architecture
- Describe the rolling maintenance process
- Explore what happens during a master switch
- Discuss cluster states
- Demonstrate rolling maintenance
- Re-cap commands and resources used during the demo
Streaming Replication Made Easy in v9.3 (Sameer Kumar)
This document discusses setting up streaming replication in PostgreSQL v9.3 to enable high availability. It covers preparing primary and standby servers, configuring wal_level and max_wal_senders on the primary, taking a backup and restoring on the standby, creating a recovery.conf file, starting the servers to test replication, triggering failover by promoting the standby, handling multiple replicas without rebuilding, and rebuilding the original primary as a new standby. Monitoring replication status is also addressed using views like pg_stat_replication.
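The key settings mentioned above fit in a few configuration lines; a minimal sketch for PostgreSQL 9.3 might look like this (hosts, the replication user, and the password are placeholders):

```
# postgresql.conf on the primary
wal_level       = hot_standby   # ship enough WAL detail for a hot standby
max_wal_senders = 3             # allow up to three standbys/backup streams

# pg_hba.conf on the primary: allow the standby to connect for replication
# host  replication  repluser  10.0.0.2/32  md5

# recovery.conf on the standby (created after restoring the base backup)
standby_mode     = 'on'
primary_conninfo = 'host=10.0.0.1 port=5432 user=repluser password=secret'
```

Once the standby starts, replication progress can be checked on the primary with `SELECT * FROM pg_stat_replication;`, and failover is triggered by promoting the standby (e.g. `pg_ctl promote`).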
ZFS and MySQL on Linux, the Sweet Spots (Jervin Real)
The document discusses using ZFS as the storage backend for MySQL and Percona XtraDB Cluster. It finds that while ZFS can provide reliable storage, encryption, compression and backups for MySQL, direct performance is limited by disk throughput. Adding an NVMe SLOG helps improve performance for a large MySQL dataset, but is still limited by the underlying storage. ZFS snapshots provide an alternative to XtraBackup for state snapshot transfers in Percona XtraDB Cluster that keeps the donor node available. Testing backups with ZFS snapshots on MySQL shows initial steady performance, but degradation over time as reads saturate the disks.
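The SST alternative mentioned above relies on standard ZFS snapshot tooling rather than XtraBackup; a command-line sketch might be the following (the pool/dataset names and the target host are placeholders, and the data should be quiesced or briefly locked when the snapshot is taken):

```
# Take a consistent snapshot of the MySQL dataset on the donor
# (e.g. while briefly holding FLUSH TABLES WITH READ LOCK)
zfs snapshot tank/mysql@sst-20180101

# Stream the snapshot to the joining node's dataset over SSH
zfs send tank/mysql@sst-20180101 | ssh joiner zfs recv -F tank/mysql
```

Because the snapshot is taken copy-on-write, the donor's MySQL instance stays available for the duration of the transfer, which is the main advantage over a blocking SST method.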
This document provides an overview of troubleshooting storage performance issues in vSphere environments. It discusses using vCenter performance charts and ESXTop to analyze latency and I/O statistics at the storage path, disk, and LUN level. The document also covers topics like disk alignment, considerations for using SCSI versus SATA disks, identifying APD issues, multipathing, and how VMware uses SCSI reservations for metadata locking on shared VMFS datastores.
Group Replication went Generally Available at the end of 2016. It introduces 'synchronous' active:active multi-master replication, in addition to the asynchronous and semi-synchronous replication that have long been available in MySQL.
As with any new feature, and especially one introducing active:active multi-master replication, it takes a while before companies adopt the software in production database environments.
For example, even though MySQL 5.7 has been GA for more than a year, adoption is only starting to increase recently.
We can, and should, expect the same from Group Replication. As with every release, bugs will be found, and with new features, best practices still need to be formed out of practical experience.
After giving a short introduction on what Group Replication is, I will cover my experience so far in evaluating Group Replication.
This document summarizes a presentation about Percona XtraDB Cluster (PXC), a high availability solution for MySQL. It introduces PXC and how it uses synchronous replication and the Galera library to provide scalability and redundancy. A simple demonstration of PXC is provided. The presenter is identified as Javier Tomás Zon, a platform reliability engineer at Percona with over 12 years of system administration experience, including 8 years working with MySQL.
Upgrading MySQL version 5.5.30 to 5.6.10 (Vasudeva Rao)
The document provides steps to upgrade a MySQL database from version 5.5.30 to 5.6.10 on a Linux server. It involves downloading the MySQL 5.6 RPM files, stopping the existing 5.5 server, moving the existing data directory, removing the 5.5 RPMs, installing the 5.6 RPMs, moving the data directory back, starting the 5.6 server, and running mysql_upgrade to convert the database to the new version's format. Additional configuration changes for the new 5.6 version are also recommended.
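As a rough command-line sketch of the sequence described (the exact RPM file names are placeholders and the data directory move/restore steps are elided for brevity):

```
# Stop the old server and remove the 5.5 packages
service mysql stop
rpm -e --nodeps MySQL-server-5.5.30 MySQL-client-5.5.30

# Install the 5.6 packages
rpm -ivh MySQL-server-5.6.10-1.el6.x86_64.rpm \
         MySQL-client-5.6.10-1.el6.x86_64.rpm

# Start the new server and upgrade the system tables in place
service mysql start
mysql_upgrade -u root -p
```

`mysql_upgrade` checks all tables for incompatibilities with the new version and upgrades the system tables, and should always be run after a binary upgrade of this kind.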
This document provides instructions for implementing an Oracle 11g R2 Real Application Cluster on a Red Hat Enterprise Linux 5.0 system using a two-node configuration. It describes pre-installation steps including hardware and network configuration, installing prerequisite packages and libraries, and configuring the Oracle ASM library driver. Detailed steps are provided for installing Oracle Grid Infrastructure and database software, and configuring the single client access name and storage area network.
This document discusses Go programming patterns and best practices presented by MegaEase, an enterprise cloud native architecture provider. It covers topics like slices, interfaces, performance optimization, and common Go mistakes. Examples are provided to demonstrate slice internals, deep comparison, interface patterns, and how to check interface compliance.
With employees based in countries around the globe which provide 24x7 services to MySQL users worldwide, Percona provides enterprise-grade MySQL Support, Consulting, Training, Managed Services, and Server Development services to companies ranging from large organizations, such as Cisco Systems, Alcatel-Lucent, Groupon, and the BBC, to recent startups building MySQL-powered solutions for businesses and consumers.
This document provides an introduction to parallel synchronous replication using Percona XtraDB Cluster (PXC). It discusses the limitations of traditional MySQL replication and how PXC implements a data-centric approach with synchronous multi-master replication between nodes. Key features of PXC highlighted include parallel replication, data consistency, and automatic provisioning of new nodes. The document also covers integration with load balancers and limitations to be aware of for write-intensive or large transaction workloads.
Multi Source Replication With MySQL 5.7 @ Verisure (Kenny Gryp)
Verisure migrated their data warehouse from using Tungsten Replicator to native multi-source replication in MySQL 5.7 to simplify operations. They loaded data from production shards into the new data warehouse setup using XtraBackup backups and improved replication capacity with MySQL's parallel replication features. Some issues were encountered with replication lag reporting and crashes during the upgrade but most were resolved. Monitoring and management tools also required updates to support the new multi-source replication configuration.
#VirtualDesignMaster 3 Challenge 3 – James Brownvdmchallenge
While things on Mars have been going well, since we now have multiple options for our infrastructure, the fact remains that we are working on the colonization of a foreign planet.
The document provides information on MongoDB replication and sharding. Replication allows for redundancy and increased data availability by synchronizing data across multiple database servers. A replica set consists of a primary node that receives writes and secondary nodes that replicate the primary. Sharding partitions data across multiple machines or shards to improve scalability and allow for larger data sets and higher throughput. Sharded clusters have shards that store data, config servers that store metadata, and query routers that direct operations to shards.
Troubleshooting common oslo.messaging and RabbitMQ issuesMichael Klishin
This document discusses common issues with oslo.messaging and RabbitMQ and how to diagnose and resolve them. It provides an overview of oslo.messaging and how it uses RabbitMQ for RPC calls and notifications. Examples are given of where timeouts could occur in RPC calls. Methods for debugging include enabling debug logging, examining RabbitMQ queues and connections, and correlating logs from services. Specific issues covered include RAM usage, unresponsive nodes, rejected TCP connections, TLS connection failures, and high latency. General tips emphasized are using tools to gather data and consulting log files.
This document provides instructions for installing and configuring Cloudian object storage software and CloudBerry Lab products to enable cloud backup services. Key steps include installing Linux, Cloudian, third party software, and configuring Cloudian. Credentials from Cloudian can then be used to configure CloudBerry Managed Backup for offering backup services to customers using the private cloud storage. Standalone CloudBerry products like Backup, Explorer and Drive can also access the storage.
Percona Cluster with Master_Slave for Disaster RecoveryRam Gautam
The document describes setting up asynchronous master-slave database replication between a production database cluster and a disaster recovery database cluster using Percona tools. It provides configuration details for the master and slave databases including enabling binary logging and setting the server IDs. The process involves taking a backup of the master database using Innobackupex, preparing the backup, and copying it to the slave database server. Replication is then started by configuring the master to replicate and the slave as a replica.
Real Application Cluster (RAC) allows multiple computers to simultaneously run Oracle RDBMS while accessing a single database, providing clustering. RAC provides high availability, scalability, and ease of administration by making multiple instances transparent to users. Nodes must have identical environments. Oracle Clusterware manages node additions and removals. Instances from different nodes write to the same physical database. The presentation covers RAC architecture, components, startup sequence, single instance configuration, node eviction, and tips for monitoring and improving the RAC environment.
Christian Johannsen presents on evaluating Apache Cassandra as a cloud database. Cassandra is optimized for cloud infrastructure with features like transparent elasticity, scalability, high availability, easy data distribution and redundancy. It supports multiple data types, is easy to manage, low cost, supports multiple infrastructures and has security features. A demo of DataStax OpsCenter and Apache Spark on Cassandra is shown.
A Detailed Look At cassandra.yaml (Edward Capriolo, The Last Pickle) | Cassan...DataStax
Successfully running Apache Cassandra in production often means knowing what configuration settings to change and which ones to leave as default. Over the years the cassandra.yaml file has grown to provide a number of settings that can improve stability and performance. While the file contains plenty of helpful comments, there is more to be said about the settings and when to change them.
In this talk Edward Capriolo, Consultant at The Last Pickle, will break down the parameters in the configuration files. Looking at those that are essential to getting started, those that impact performance, those that improve availability, the exotic ones, and the ones that should not be played with. This talk is ideal for someone someone setting up Cassandra for the first time up to people with deployments in productions and wondering what the more exotic configuration options do.
About the Speaker
Edward Capriolo Consultant, The Last Pickle
Long time Apache Cassandra user, big data enthusiast.
- Galera is a MySQL clustering solution that provides true multi-master replication with synchronous replication and no single point of failure.
- It allows high availability, data integrity, and elastic scaling of databases across multiple nodes.
- Companies like Percona and MariaDB have integrated Galera to provide highly available database clusters.
This document provides instructions for installing and configuring Cloudian object storage software and CloudBerry backup products to enable an individual or company to become a cloud backup provider using their own hardware. It outlines the requirements, installation steps, and configuration of Cloudian, Linux server, third party software, and CloudBerry Managed Backup. Following these steps allows one to offer cloud backup services to customers using a private cloud storage built on their own resources.
Training Slides: Basics 102: Introduction to Tungsten ClusteringContinuent
This document provides an introduction to Continuent Tungsten clustering. It discusses key benefits like high availability, multi-site deployment, and ease of use. It examines the clustering architecture including topologies, automatic and manual failover, and rolling maintenance procedures. Commands for monitoring and managing the cluster are also reviewed, including cctrl and tpm diag. A demo shows using cctrl to perform a manual failover by promoting a slave to master.
Container Performance Analysis Brendan Gregg, NetflixDocker, Inc.
The document summarizes a talk on container performance analysis. It discusses identifying bottlenecks at the host, container, and kernel level using various Linux performance tools. It also provides an overview of how containers work in Linux using namespaces and control groups (cgroups). Specifically, it demonstrates analyzing resource usage and limitations for containers using tools like docker stats, systemd-cgtop, and investigating namespaces.
Developing Realtime Data Pipelines With Apache KafkaJoe Stein
Developing Realtime Data Pipelines With Apache Kafka. Apache Kafka is publish-subscribe messaging rethought as a distributed commit log. A single Kafka broker can handle hundreds of megabytes of reads and writes per second from thousands of clients. Kafka is designed to allow a single cluster to serve as the central data backbone for a large organization. It can be elastically and transparently expanded without downtime. Data streams are partitioned and spread over a cluster of machines to allow data streams larger than the capability of any single machine and to allow clusters of co-ordinated consumers. Messages are persisted on disk and replicated within the cluster to prevent data loss. Each broker can handle terabytes of messages without performance impact. Kafka has a modern cluster-centric design that offers strong durability and fault-tolerance guarantees.
Cassandra Day Atlanta 2015: Software Development with Apache Cassandra: A Wal...DataStax Academy
Adding a new technology to your development process can be challenging, and the distributed nature of Apache Cassandra can make it daunting. However the drivers, utilities and tooling now available for Apache Cassandra make this process as familiar as possible to developers, with a few minor caveats. After all, it is still a distributed system.
In this presentation, we will do several quick iterations through a simple Java project, demonstrating the following:
• Creating and modifying a data model
• Writing some code working with this model
• Using your local environment for single and multi-node cluster tests
• Integration testing with Jenkins
• Sending it off to production
New and existing users will leave this presentation with the necessary knowledge to make their next Apache Cassandra-based project a success.
Large Scale Data center Solution Guide: eBGP based designDhiman Chowdhury
Network Automation provides IT administrators and network operators significant benefits. This solution guide
describes an approach to build data centers using Layer3 BGP routing protocol.
It also summarizes on some design philosophies for data center and why E-BGP is better suited.
■ Large-scale data center requirements
■ Large-scale data center topologies
■ Large-scale data center routing
■ EBGP-routed large-scale Clos topology-based data cente
Percona Cluster Installation with High AvailabilityRam Gautam
This document provides configuration instructions for setting up a 3-node Percona XtraDB Cluster with high availability and load balancing. It describes:
1. Installing and configuring Percona 5.7 on three CentOS nodes to form a database cluster with a shared configuration and automatic replication.
2. Configuring HAProxy on two load balancing servers to distribute connections across the database nodes and provide health monitoring.
3. Configuring Keepalived on the load balancing servers to provide a virtual IP address and failover capability so that if one load balancer fails, the other will take over.
This document provides an introduction and overview of MySQL, including how to download and access MySQL, basic commands to manage databases and tables, examples of SQL queries, and how to modify data. It covers topics such as creating databases and tables, selecting, joining, aggregating data, and updating records in MySQL. Examples demonstrate how to retrieve customer names, loan amounts, branch details, and more from the sample banking database.
This document provides an introduction to MySQL, an open source relational database management system. It discusses that MySQL is pronounced "my-es-que-el" and includes both a SQL server and client programs. It also summarizes that MySQL AB is the commercial entity behind MySQL that provides marketing, development, services, support and consulting. Additionally, it notes that MySQL is the most popular open source database with over 100 million downloads, it is certified for SAP applications, and is widely used by developers along with PHP and Apache.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Percona XtraDB Cluster Documentation, Release Beta
Percona XtraDB Cluster is a High Availability and Scalability solution for MySQL users.
Percona XtraDB Cluster provides:
• Synchronous replication. A transaction is either committed on all nodes or on none.
• Multi-master replication. You can write to any node.
• Parallel applying of events on the slave. True “parallel replication”.
• Automatic node provisioning.
• Data consistency. No more unsynchronized slaves.
Percona XtraDB Cluster is fully compatible with MySQL and Percona Server in the following sense:
• Data compatibility. Percona XtraDB Cluster works with databases created in MySQL / Percona Server.
• Application compatibility. No application changes, or only minimal ones, are required to start working with
Percona XtraDB Cluster.
CHAPTER ONE
INTRODUCTION
1.1 About Percona XtraDB Cluster
Percona XtraDB Cluster is free, open-source MySQL High Availability software.
1.1.1 General introduction
1. The Cluster consists of Nodes. The recommended configuration is at least 3 nodes, but the cluster can run with
2 nodes as well.
2. Each Node is a regular MySQL / Percona Server setup. This means you can convert your existing MySQL /
Percona Server into a Node and build the Cluster using it as a base. Conversely, you can detach a Node from the
Cluster and use it as a regular standalone server.
3. Each Node contains a full copy of the data. This defines XtraDB Cluster behavior in many ways, with obvious
benefits and drawbacks.
Benefits of this approach:
• When you execute a query, it is executed locally on the node. All data is available locally; no remote
access is needed.
• No central management. You can lose any node at any point in time, and the cluster will continue to
function.
• A good solution for scaling a read workload. You can send read queries to any of the nodes.
Drawbacks:
• Overhead of joining a new node. The new node has to copy the full dataset from one of the existing nodes.
If that is 100GB, it copies 100GB.
• This can’t be used as an effective write scaling solution. There might be some improvement in write
throughput when you send write traffic to 2 nodes versus all traffic to 1 node, but don’t expect much: all
writes still have to be applied on all nodes.
• You have several duplicates of the data: for 3 nodes, 3 duplicates.
1.1.2 What is the core difference between Percona XtraDB Cluster and MySQL Replication?
Let’s take a look at the well-known CAP theorem for Distributed systems. Characteristics of Distributed systems:
C - Consistency (all your data is consistent on all nodes),
A - Availability (your system is AVAILABLE to handle requests in case of failure of one or several nodes),
P - Partitioning tolerance (in case of inter-node connection failure, each node is still available to handle
requests).
The CAP theorem says that each Distributed system can provide only two out of these three properties.
MySQL replication has: Availability and Partitioning tolerance.
Percona XtraDB Cluster has: Consistency and Availability.
That is, MySQL replication does not guarantee Consistency of your data, while Percona XtraDB Cluster provides data
Consistency. (And yes, Percona XtraDB Cluster loses the Partitioning tolerance property.)
1.1.3 Components
Percona XtraDB Cluster is based on Percona Server with XtraDB and includes the Write Set Replication patches. It
uses the Galera library, version 2.x, a generic Synchronous Multi-Master replication plugin for transactional
applications. The Galera library is developed by Codership Oy.
Galera 2.x supports new features such as:
• Incremental State Transfer (IST), especially useful for WAN deployments,
• RSU, Rolling Schema Update. A schema change does not block operations against the table.
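For illustration, a rolling schema update can be selected per session before issuing DDL. This is a sketch assuming the wsrep_OSU_method variable shipped with the wsrep patches; verify it exists in your build before relying on it:

```sql
-- Hypothetical session: switch this node to Rolling Schema Update mode.
SET SESSION wsrep_OSU_method='RSU';
-- The DDL is applied locally while the node is temporarily desynced;
-- repeat on each node in turn.
ALTER TABLE t1 ADD COLUMN c2 INT;
-- Return to the default Total Order Isolation mode.
SET SESSION wsrep_OSU_method='TOI';
```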
1.2 Resources
In general there are 4 resources that need to be different when you want to run several MySQL/Galera nodes on one
host:
1. data directory
2. mysql client port and/or address
3. galera replication listen port and/or address
4. receive address for state snapshot transfer
and later an incremental state transfer receive address will be added to the bunch. (I know, it is kind of a lot, but we
don’t see how it can be meaningfully reduced yet).
The first two are the usual mysql stuff.
You have probably figured out the third. It is also possible to pass it via:
wsrep_provider_options="gmcast.listen_addr=tcp://127.0.0.1:5678"
as with most other Galera options. This may save you some extra typing.
The fourth one is wsrep_sst_receive_address. This is the address at which the node will listen for and receive the
state. Note that in a Galera cluster the _joining_ nodes wait for connections from donors. This goes contrary to
tradition and seems to confuse people time and again, but there are good reasons it was made that way.
If you use mysqldump SST, it should be the same as the mysql client connection address, and you also need to set the
wsrep_sst_auth variable to hold a user:password pair. The user should be privileged enough to read system tables on
the donor and create system tables on this node. For simplicity, that could be just the root user. Note that this also
means you need to properly set up the privileges on the new node before attempting to join the cluster.
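A minimal my.cnf sketch for the mysqldump case (the address, port, and credentials below are hypothetical placeholders, not values from this documentation):

```ini
wsrep_sst_method=mysqldump
# user:password pair privileged enough to read system tables on the
# donor and create system tables on this node
wsrep_sst_auth=root:rootpass
# must match the mysql client connection address of this node
wsrep_sst_receive_address=192.168.0.2:3306
```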
If you use rsync or xtrabackup SST, wsrep_sst_auth is not necessary unless your SST script makes use of it.
wsrep_sst_receive_address can be anything local (it may even be the same on all nodes, provided you start them one
at a time).
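Putting the four resources together, per-node settings for running two nodes on a single host might look like the following sketch (data directories, ports, and addresses are illustrative placeholders; adjust them to your environment):

```ini
# node 1 (its own my.cnf)
[mysqld]
datadir=/var/lib/mysql-node1
port=3306
wsrep_provider_options="gmcast.listen_addr=tcp://127.0.0.1:4567"
wsrep_sst_receive_address=127.0.0.1:4444

# node 2 (a separate my.cnf)
[mysqld]
datadir=/var/lib/mysql-node2
port=3307
wsrep_provider_options="gmcast.listen_addr=tcp://127.0.0.1:4568"
wsrep_sst_receive_address=127.0.0.1:4445
```

Each node gets a distinct data directory, client port, Galera replication listen address, and SST receive address, which covers all four resources listed above.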
CHAPTER TWO
INSTALLATION
2.1 Installing Percona XtraDB Cluster from Binaries
2.1.1 Distribution-specific installation notes
SUSE 11
Although there is no specific build for SUSE, the build for RHEL is suitable for this distribution.
• Download the RPM package of XtraBackup for your architecture at
http://www.percona.com/downloads/XtraBackup/LATEST/
• Copy to a directory (like /tmp) and extract the RPM:
rpm2cpio xtrabackup-1.5-9.rhel5.$ARCH.rpm | cpio -idmv
• Copy binaries to /usr/bin:
cp ./usr/bin/xtrabackup_55 /usr/bin
cp ./usr/bin/tar4ibd /usr/bin
cp ./usr/bin/innobackupex-1.5.1 /usr/bin
• If you use a version prior to 1.6, the stock perl causes an issue with the backup script’s version detection. Edit
/usr/bin/innobackupex-1.5.1 and comment out the lines as shown below:
$perl_version = chr($required_perl_version[0])
. chr($required_perl_version[1])
. chr($required_perl_version[2]);
#if ($^V lt $perl_version) {
#my $version = chr(48 + $required_perl_version[0])
# . "." . chr(48 + $required_perl_version[1])
# . "." . chr(48 + $required_perl_version[2]);
#print STDERR "$prefix Warning: " .
# "Your perl is too old! Innobackup requires\n";
#print STDERR "$prefix Warning: perl $version or newer!\n";
#}
Ready-to-use binaries are available from the Percona XtraDB Cluster download page, including:
• RPM packages for RHEL 5 and RHEL 6
• Debian packages
• Generic .tar.gz packages
2.1.2 Using Percona Software Repositories
Percona yum Testing Repository
The Percona yum repository supports popular RPM-based operating systems, including the Amazon Linux AMI.
The easiest way to install the Percona Yum repository is to install an RPM that configures yum and installs the Percona
GPG key. You can also do the installation manually.
Automatic Installation
Execute the following command as a root user, replacing x86_64 with i386 if you are not running a 64-bit
operating system:
$ rpm -Uhv http://repo.percona.com/testing/centos/6/os/noarch/percona-testing-0.0-1.noarch.rpm
You may also want to install the Percona stable repository, which provides the Percona-shared-compat rpm needed to
satisfy dependencies:
$ rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
The RPMs for the automatic installation are available at http://www.percona.com/downloads/percona-release/ and
include source code.
Install XtraDB Cluster
The following command will install the Cluster packages:
$ yum install Percona-XtraDB-Cluster-server Percona-XtraDB-Cluster-client xtrabackup
Percona provides repositories for yum (RPM packages for Red Hat, CentOS, Amazon Linux AMI, and Fedora) and
apt (.deb packages for Ubuntu and Debian) for software such as Percona Server, XtraDB, XtraBackup, and Percona
Toolkit. This makes it easy to install and update your software and its dependencies through your operating system’s
package manager.
This is the recommended way of installing where possible.
2.1.3 Initial configuration
In order to start using XtraDB Cluster, you need to configure the my.cnf file. The following options are required:
wsrep_provider -- the path to the Galera library.
wsrep_cluster_address -- the cluster connection URL.
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
Additional parameters to tune:
wsrep_slave_threads # number of threads used to apply replicated events
wsrep_sst_method
Example:
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_address=gcomm://10.11.12.206
wsrep_slave_threads=8
wsrep_sst_method=rsync
#wsrep_sst_method=xtrabackup - alternative way to do SST
wsrep_cluster_name=percona_test_cluster
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
2.1.4 Install XtraBackup SST method
To use Percona XtraBackup as the State Transfer method (copying a snapshot of data between nodes), you can use the regular
xtrabackup package together with a version of the innobackupex script that supports Galera information. You can take the
innobackupex script from the source code repository.
To tell a node to use xtrabackup, specify in my.cnf:
wsrep_sst_method=xtrabackup
2.2 Compiling and Installing from Source Code
The source code is available from the Launchpad project here. The easiest way to get the code is to branch the desired
release with bzr, for example:
bzr branch lp:percona-xtradb-cluster
You should then have a directory named after the release you branched, such as percona-xtradb-cluster.
2.2.1 Compiling on Linux
Prerequisites
The following packages and tools must be installed to compile Percona XtraDB Cluster from source. These might
vary from system to system.
In Debian-based distributions, you need to run:
$ apt-get install build-essential flex bison automake autoconf bzr
libtool cmake libaio-dev mysql-client libncurses-dev zlib1g-dev
In RPM-based distributions, you need to run:
$ yum install cmake gcc gcc-c++ libaio libaio-devel automake autoconf bzr
bison libtool ncurses5-devel
Compiling
The easiest way to build the binaries is to run the script:
BUILD/compile-pentium64-wsrep
If you are comfortable with cmake, you can compile with cmake, adding -DWITH_WSREP=1 to the parameters.
Examples of how to build RPM and DEB packages can be found in the packaging/percona directory of the source code.
CHAPTER THREE: FEATURES
3.1 High Availability
In a basic setup with 3 nodes, Percona XtraDB Cluster will continue to function if you take any of the nodes down.
At any point in time you can shut down any node to perform maintenance or make configuration changes. Even in
unplanned situations, such as a node crash or network unavailability, the cluster will continue to work and you will be
able to run queries on the working nodes.
If data changed while a node was down, there are two options the node may use when it rejoins the
cluster: State Snapshot Transfer (SST) and Incremental State Transfer (IST).
• SST is a full copy of data from one node to another. It is used when a new node joins the cluster and has
to transfer data from an existing node. There are three SST methods available in Percona XtraDB Cluster:
mysqldump, rsync and xtrabackup (Percona XtraBackup with support for XtraDB Cluster will be released
soon; currently you need to use our source code repository). The downside of mysqldump and rsync is that
your cluster becomes READ-ONLY while data is being copied from one node to another (SST applies the FLUSH
TABLES WITH READ LOCK command). Xtrabackup SST does not require a READ LOCK for the entire
syncing process, only for syncing the .frm files (the same as with a regular backup).
• Even so, SST may be intrusive, which is why the IST mechanism exists. If you take your node down for a
short period of time and then start it, the node can fetch only those changes made while it was
down. This is done using a caching mechanism on the nodes: each node keeps a ring-buffer cache (the size is
configurable) of the last N changes, and can transfer part of this cache to a joining node. Obviously, IST is only
possible if the number of changes to transfer is less than N. If it exceeds N, the joining node has to
perform SST.
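The IST-or-SST decision above can be sketched in a few lines of logic. This is an illustrative model only (the function and parameter names are invented, not Galera's actual API): the donor can serve IST only while every write-set the joiner is missing is still in its ring-buffer cache.

```python
# Sketch of the IST-vs-SST decision described above (illustrative names,
# not the actual Galera implementation).

def choose_state_transfer(joiner_seqno: int, donor_seqno: int,
                          cache_window: int) -> str:
    """Return 'IST' if the donor's ring-buffer cache still holds every
    write-set the joiner is missing, otherwise 'SST'."""
    missing = donor_seqno - joiner_seqno
    if missing <= 0:
        return "none"          # joiner is already up to date
    oldest_cached = donor_seqno - cache_window + 1
    if joiner_seqno + 1 >= oldest_cached:
        return "IST"           # all missing write-sets are still cached
    return "SST"               # cache rolled over; full copy required
```

In other words, sizing the cache (N above) directly determines how long a node may stay down and still rejoin via the cheap IST path.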
You can monitor the current state of a node by using:
SHOW STATUS LIKE 'wsrep_local_state_comment';
When it is 'Synced (6)', the node is ready to handle traffic.
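As a minimal sketch of such a health check, the status value can be inspected from whatever MySQL client library you use; the helper below is hypothetical and only encodes the "Synced means ready" rule from the text:

```python
# Minimal sketch: decide whether a node is ready for traffic from the
# value of wsrep_local_state_comment (names and values per the text above).

def node_ready(state_comment: str) -> bool:
    """A node is ready to handle traffic once its state is 'Synced'."""
    return state_comment.strip().startswith("Synced")

# Example row as a driver (e.g. MySQLdb/PyMySQL) might return it:
row = ("wsrep_local_state_comment", "Synced (6)")
```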
3.2 Multi-Master replication
Multi-master replication means the ability to write to any node in the cluster without worrying that it will eventually
get into an out-of-sync situation, as regularly happens with standard MySQL replication if you imprudently write to the
wrong server. This is a long-awaited feature, and there has been growing demand for it for the last two years or even
more.
With Percona XtraDB Cluster you can write to any node, and the cluster guarantees consistency of writes. That is,
a write is either committed on all the nodes or not committed at all. For simplicity, the diagram shows
a two-node example, but the same logic applies with N nodes:
All queries are executed locally on the node, with special handling only at COMMIT. When the COMMIT
is issued, the transaction has to pass certification on all the nodes. If it does not pass, you will receive an ERROR as
the response to that query. After that, the transaction is applied on the local node.
Response time of COMMIT consists of several parts:
• Network round-trip time,
• Certification time,
• Local applying
Please note that applying the transaction on remote nodes does not affect the response time of COMMIT, as it happens
in the background after the certification response.
The two important consequences of this architecture:
• First: we can have several appliers working in parallel. This gives us true parallel replication. A slave can
have many parallel threads, tuned by the variable wsrep_slave_threads.
• Second: there may be a small period of time when a slave is out of sync with the master. This happens
because the master may apply events faster than the slave. If you read from the slave, you may
read data that has not changed yet; you can see that in the diagram. However, this behavior can
be changed with the variable wsrep_causal_reads=ON. In this case, a read on the slave will wait
until the event is applied (which will increase the response time of the read). This gap between the slave
and the master is the reason why this replication is called "virtually synchronous replication" rather than true
"synchronous replication".
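The causal-reads behavior can be modeled with a small sketch: the read blocks until the slave's applied position reaches the master's commit position. The names are invented for illustration; the real wait happens inside the Galera provider.

```python
# Toy model of wsrep_causal_reads=ON: a read on the slave blocks until the
# slave has applied at least up to the master's commit seqno. Purely
# illustrative; real Galera does this inside the provider.

def causal_read(slave_applied_seqno: int, master_commit_seqno: int,
                apply_step) -> int:
    """Advance the slave (via apply_step) until it has caught up with the
    master's commit position, then return the seqno the read observes."""
    while slave_applied_seqno < master_commit_seqno:
        slave_applied_seqno = apply_step(slave_applied_seqno)
    return slave_applied_seqno
```

The extra read latency is exactly the time spent in that wait loop, which is why enabling causal reads trades response time for read-your-writes consistency.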
The described behavior of COMMIT also has a second serious implication. If you run write transactions on two
different nodes, the cluster uses an optimistic locking model. That means a transaction does not check for possible
locking conflicts during its individual queries, but rather at the COMMIT stage, and you may get an ERROR response to
the COMMIT. This is worth mentioning because it is one of the incompatibilities with regular InnoDB that you might experience.
In InnoDB, DEADLOCK and LOCK TIMEOUT errors usually happen in response to a particular query, not to
COMMIT. It is good practice to check the error code after the COMMIT query, but there are still many applications that
do not do that.
If you plan to use the multi-master capabilities of XtraDB Cluster and run write transactions on several nodes, you
need to make sure you handle the response to the COMMIT query.
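A common way to handle this, sketched here with an invented exception type standing in for your driver's error (the cluster reports a certification failure as MySQL error 1213, ER_LOCK_DEADLOCK), is to wrap the whole transaction in a bounded retry loop:

```python
# Hedged sketch of handling a certification failure on COMMIT. The cluster
# reports it as MySQL error 1213 (ER_LOCK_DEADLOCK); a common pattern is to
# retry the whole transaction a bounded number of times.

ER_LOCK_DEADLOCK = 1213

class DeadlockError(Exception):
    """Placeholder for the driver exception carrying errno 1213."""
    errno = ER_LOCK_DEADLOCK

def run_transaction_with_retry(txn, max_retries: int = 3):
    """Run txn() and retry it when COMMIT is rejected with a deadlock."""
    for attempt in range(max_retries):
        try:
            return txn()        # txn issues its queries and its COMMIT
        except DeadlockError:
            if attempt == max_retries - 1:
                raise           # give up after the last attempt
```

The key point is that the retry must re-run the entire transaction, not just re-issue the COMMIT, because the aborted transaction's work was rolled back.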
CHAPTER FOUR: FAQ
4.1 Frequently Asked Questions
4.1.1 Q: How do you solve locking issues like auto increment?
A: For auto-increment in particular, the cluster changes auto_increment_offset for each new node. In a single-node
workload, locking is handled the usual way InnoDB handles locks. With a write load on several nodes, the cluster uses
optimistic locking, and an application may receive a lock error in response to a COMMIT query.
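The offset scheme can be illustrated with a short sketch: with auto_increment_increment equal to the cluster size and a distinct auto_increment_offset per node, each node draws from a disjoint arithmetic sequence, so generated values never collide. The helper below is illustrative only:

```python
# Illustration of per-node auto_increment_offset: node i generates the
# sequence offset_i, offset_i + N, offset_i + 2N, ... where N is
# auto_increment_increment (typically the cluster size).

def autoinc_values(offset: int, increment: int, count: int):
    """First `count` auto-increment values a node would generate."""
    return [offset + k * increment for k in range(count)]

# Three nodes, increment 3, offsets 1..3 -> disjoint sequences:
seqs = [autoinc_values(o, 3, 4) for o in (1, 2, 3)]
```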
4.1.2 Q: What if one of the nodes crashes and InnoDB recovery rolls back some transactions?
A: When a node crashes, after the restart it will copy the whole dataset from another node (if there were changes to the
data since the crash).
4.1.3 Q: How can I check the Galera node health?
A: Your check can be as simple as:
SELECT * FROM someinnodbtable WHERE id=1;
Three different results are possible:
• You get the row with id=1 (node is healthy)
• Unknown error (node is online but Galera is not connected/synced with the cluster)
• Connection error (node is not online)
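A monitoring script can map those three outcomes onto states. The sketch below uses placeholder exception classes standing in for whatever your MySQL client library actually raises:

```python
# Sketch of interpreting the probe query's outcome as one of the three
# results listed above. The exception names are placeholders for whatever
# your client library raises.

class GaleraNotSyncedError(Exception):
    """Stands in for the 'unknown error' case: online but not synced."""

class NodeConnectionError(Exception):
    """Stands in for a connection error: node is not online."""

def classify_health(probe):
    """Run probe() (the SELECT above) and map the outcome to a state."""
    try:
        probe()
    except GaleraNotSyncedError:
        return "online-but-not-synced"
    except NodeConnectionError:
        return "offline"
    return "healthy"
```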
4.1.4 Q: Is there a chance to have different table structure on the nodes?
What I mean is, say, having 4 nodes and 4 tables like sessions_a, sessions_b, sessions_c and sessions_d, with each
table on only one of the nodes?
A: Not at the moment for InnoDB tables. But it will work for MEMORY tables.
4.1.5 Q: What if a node fails and/or there is a network issue between nodes?
A: Then the quorum mechanism in XtraDB Cluster will decide which nodes can accept traffic, and will shut down the
nodes that do not belong to the quorum. Later, when the failure is fixed, those nodes will need to copy data from the working cluster.
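The quorum rule can be sketched as a strict-majority check: a partition may keep serving traffic only if it can see more than half of the previous membership. This toy function (not the real Galera implementation, which also supports weighted nodes) makes clear why two-node clusters are fragile:

```python
# Toy strict-majority quorum check (illustrative only; real Galera quorum
# also supports weighted nodes).

def has_quorum(reachable_nodes: int, total_nodes: int) -> bool:
    """A partition keeps quorum only with a strict majority of members."""
    return 2 * reachable_nodes > total_nodes
```

With 3 nodes, losing one still leaves a majority (2 of 3); with 2 nodes, any split leaves each side at exactly half, so neither keeps quorum, hence the 3-node minimum recommendation.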
4.1.6 Q: How would it handle split brain?
A: It would not handle it. Split brain is a hard stop; XtraDB Cluster cannot resolve it. That is why the minimal
recommendation is to have 3 nodes. However, there is a possibility to allow a node to keep handling traffic with the option:
wsrep_provider_options="pc.ignore_sb = yes"
4.1.7 Q: Is it possible to set up a cluster without state transfer?
A: It is possible in two ways:
1. By default, Galera reads the starting position from the text file <datadir>/grastate.dat. Just make this file identical on
all nodes, and there will be no state transfer upon start.
2. With the wsrep_start_position variable: start the nodes with the same UUID:seqno value and there you are.
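Option 2 needs the UUID:seqno pair from a node's grastate.dat. A sketch of extracting it follows (the file layout matches the usual grastate.dat format; treat the parser as illustrative):

```python
# Sketch: extract the UUID:seqno start position from grastate.dat text,
# suitable for wsrep_start_position. Illustrative parser, not a robust one.

def start_position(grastate_text: str) -> str:
    """Parse 'key: value' lines and return 'uuid:seqno'."""
    fields = {}
    for line in grastate_text.splitlines():
        line = line.strip()
        if ":" in line and not line.startswith("#"):
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return f"{fields['uuid']}:{fields['seqno']}"

sample = """\
# GALERA saved state
version: 2.1
uuid:    4c286ccc-2792-11e1-0800-94bd91e32efa
seqno:   0
"""
```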
4.1.8 Q: I have a two-node setup. When node1 fails, node2 does not accept commands, why?
A: This is expected behaviour, to prevent split brain. See the previous question.
4.1.9 Q: What TCP ports are used by Percona XtraDB Cluster?
A: You may need to open up to 4 ports if you are using a firewall.
1. Regular MySQL port, default 3306.
2. Port for group communication, default 4567. It can be changed by the option:
wsrep_provider_options ="gmcast.listen_addr=tcp://0.0.0.0:4010; "
3. Port for State Transfer, default 4444. It can be changed by the option:
wsrep_sst_receive_address=10.11.12.205:5555
4. Port for Incremental State Transfer, default port for group communication + 1 (4568). It can be changed by the
option:
wsrep_provider_options = "ist.recv_addr=10.11.12.206:7777; "
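The port relationships above can be captured in a tiny helper (names invented), which also encodes the rule that IST defaults to the group communication port + 1:

```python
# The four ports listed above, with the documented default derivation:
# IST defaults to the group communication port + 1. Helper is illustrative.

def cluster_ports(mysql=3306, group_comm=4567, sst=4444, ist=None):
    """Return the ports a firewall must allow for one cluster node."""
    if ist is None:
        ist = group_comm + 1   # default IST port
    return {"mysql": mysql, "group_comm": group_comm, "sst": sst, "ist": ist}
```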
4.1.10 Q: Is there “async” mode for Cluster or only “sync” commits are supported?
A: There is no "async" mode; all commits are synchronous on all nodes. Or, to be fully correct, the commits are
"virtually" synchronous, which means the transaction must pass "certification" on the nodes, not a physical commit.
"Certification" means a guarantee that the transaction does not conflict with other transactions on the corresponding
node.
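Certification itself can be pictured as a conflict check on the sets of keys each write-set touches. This is a deliberately simplified model, not Galera's actual algorithm (which tracks certification intervals by seqno):

```python
# Toy certification check: two write-sets conflict when they modify
# overlapping keys; only the first to certify may commit. Simplified model
# of the idea, not Galera's real interval-based algorithm.

def certify(writeset_keys: set, certified_keys: set) -> bool:
    """Return True (and record the keys) if this write-set touches no key
    already certified in the same interval, else False."""
    if writeset_keys & certified_keys:
        return False           # conflict: transaction is aborted
    certified_keys |= writeset_keys
    return True
```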
4.1.11 Q: Does it work with regular MySQL replication?
A: Yes. On the node you are going to use as master, you should enable the log-bin and log-slave-updates options.
4.1.12 Q: Init script (/etc/init.d/mysql) does not start
A: Try to disable SELinux. A quick way is:
echo 0 > /selinux/enforce
CHAPTER FIVE: HOW-TO
5.1 How to set up a 3 node cluster on a single box
This is how to set up a 3-node cluster on a single physical box.
Assume you installed Percona XtraDB Cluster from binary .tar.gz into directory
/usr/local/Percona-XtraDB-Cluster-5.5.17-22.1-3673.Linux.x86_64.rhel6
Now we need to create a couple of my.cnf files and a couple of data directories.
Assume we created (see the contents of the files at the end of this document):
• /etc/my.4000.cnf
• /etc/my.5000.cnf
• /etc/my.6000.cnf
and data directories:
• /data/bench/d1
• /data/bench/d2
• /data/bench/d3
and assume the local IP address is 10.11.12.205.
Then we should be able to start the initial node as follows (from directory /usr/local/Percona-XtraDB-Cluster-5.5.17-22.1-
3673.Linux.x86_64.rhel6):
bin/mysqld --defaults-file=/etc/my.4000.cnf
The following output will let us know that the node was started successfully:
111215 19:01:49 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 0)
111215 19:01:49 [Note] WSREP: New cluster view: global state: 4c286ccc-2792-11e1-0800-94bd91e32efa:0,
And you can check the ports in use:
netstat -anp | grep mysqld
tcp 0 0 0.0.0.0:4000 0.0.0.0:* LISTEN 8218/mysqld
tcp 0 0 0.0.0.0:4010 0.0.0.0:* LISTEN 8218/mysqld
After the first node, we start the second and third:
bin/mysqld --defaults-file=/etc/my.5000.cnf
bin/mysqld --defaults-file=/etc/my.6000.cnf
Successful start will produce the following output:
111215 19:22:26 [Note] WSREP: Shifting JOINER -> JOINED (TO: 2)
111215 19:22:26 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 2)
111215 19:22:26 [Note] WSREP: Synchronized with group, ready for connections
Now you can connect to any node and create a database, which will automatically be propagated to the other nodes:
mysql -h127.0.0.1 -P5000 -e "CREATE DATABASE hello_peter"
Configuration file (/etc/my.4000.cnf):
/etc/my.4000.cnf
[mysqld]
gdb
datadir=/data/bench/d1
basedir=/usr/local/Percona-XtraDB-Cluster-5.5.17-22.1-3673.Linux.x86_64.rhel6
port = 4000
socket=/tmp/mysql.4000.sock
user=root
binlog_format=ROW
wsrep_provider=/usr/local/Percona-XtraDB-Cluster-5.5.17-22.1-3673.Linux.x86_64.rhel6/lib/libgalera_sm
wsrep_cluster_address=gcomm://
wsrep_provider_options = "gmcast.listen_addr=tcp://0.0.0.0:4010; "
wsrep_sst_receive_address=10.11.12.205:4020
wsrep_slave_threads=2
wsrep_cluster_name=trimethylxanthine
wsrep_sst_method=rsync
wsrep_node_name=node4000
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2
Configuration file (/etc/my.5000.cnf). PLEASE note the difference in wsrep_cluster_address:
/etc/my.5000.cnf
[mysqld]
gdb
datadir=/data/bench/d2
basedir=/usr/local/Percona-XtraDB-Cluster-5.5.17-22.1-3673.Linux.x86_64.rhel6
port = 5000
socket=/tmp/mysql.5000.sock
user=root
binlog_format=ROW
wsrep_provider=/usr/local/Percona-XtraDB-Cluster-5.5.17-22.1-3673.Linux.x86_64.rhel6/lib/libgalera_sm
wsrep_cluster_address=gcomm://10.11.12.205:4010
wsrep_provider_options = "gmcast.listen_addr=tcp://0.0.0.0:5010; "
wsrep_sst_receive_address=10.11.12.205:5020
wsrep_slave_threads=2
wsrep_cluster_name=trimethylxanthine
wsrep_sst_method=rsync
wsrep_node_name=node5000
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2
Configuration file (/etc/my.6000.cnf). PLEASE note the difference in wsrep_cluster_address:
/etc/my.6000.cnf
[mysqld]
gdb
datadir=/data/bench/d3
basedir=/usr/local/Percona-XtraDB-Cluster-5.5.17-22.1-3673.Linux.x86_64.rhel6
port = 6000
socket=/tmp/mysql.6000.sock
user=root
binlog_format=ROW
wsrep_provider=/usr/local/Percona-XtraDB-Cluster-5.5.17-22.1-3673.Linux.x86_64.rhel6/lib/libgalera_sm
wsrep_cluster_address=gcomm://10.11.12.205:4010
wsrep_provider_options = "gmcast.listen_addr=tcp://0.0.0.0:6010; "
wsrep_sst_receive_address=10.11.12.205:6020
wsrep_slave_threads=2
wsrep_cluster_name=trimethylxanthine
wsrep_sst_method=rsync
wsrep_node_name=node6000
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2
5.2 How to set up a 3 node cluster in an EC2 environment
This is how to set up a 3-node cluster in an EC2 environment.
Assume you are running m1.xlarge instances with OS Red Hat Enterprise Linux 6.1 64-bit.
Install XtraDB Cluster from RPM:
1. Install Percona’s regular and testing repositories:
rpm -Uhv http://repo.percona.com/testing/centos/6/os/noarch/percona-testing-0.0-1.noarch.rpm
rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
2. Install Percona XtraDB Cluster packages:
yum install Percona-XtraDB-Cluster-server Percona-XtraDB-Cluster-client
3. Create data directories:
mkdir -p /mnt/data
mysql_install_db --datadir=/mnt/data
4. Stop the firewall. The cluster requires a couple of TCP ports to operate. The easiest way:
service iptables stop
If you want to open only specific ports, you need to open ports 3306, 4444, 4567 and 4568. For example, for port 4567
(substitute 192.168.0.1 with your IP):
iptables -A INPUT -i eth0 -p tcp -m tcp --source 192.168.0.1/24 --dport 4567 -j ACCEPT
5. Create /etc/my.cnf files.
On the first node (assume IP 10.93.46.58):
[mysqld]
datadir=/mnt/data
user=mysql
binlog_format=ROW
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_address=gcomm://
wsrep_slave_threads=2
wsrep_cluster_name=trimethylxanthine
wsrep_sst_method=rsync
wsrep_node_name=node1
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2
On the second node:
[mysqld]
datadir=/mnt/data
user=mysql
binlog_format=ROW
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_address=gcomm://10.93.46.58
wsrep_slave_threads=2
wsrep_cluster_name=trimethylxanthine
wsrep_sst_method=rsync
wsrep_node_name=node2
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2
On the third (and subsequent) nodes the config is similar, with the following change:
wsrep_node_name=node3
6. Start mysqld
On the first node:
/usr/sbin/mysqld
or
mysqld_safe
You should be able to see the following in the console (or in the error log file):
111216 0:16:42 [Note] /usr/sbin/mysqld: ready for connections.
Version: ’5.5.17’ socket: ’/var/lib/mysql/mysql.sock’ port: 3306 Percona XtraDB Cluster (GPL), Rel
111216 0:16:42 [Note] WSREP: Assign initial position for certification: 0, protocol version: 1
111216 0:16:42 [Note] WSREP: Synchronized with group, ready for connections
On the second (and following nodes):
/usr/sbin/mysqld
or
mysqld_safe
You should be able to see the following in the console (or in the error log file):
111216 0:21:39 [Note] WSREP: Flow-control interval: [12, 23]
111216 0:21:39 [Note] WSREP: Shifting OPEN -> PRIMARY (TO: 0)
111216 0:21:39 [Note] WSREP: New cluster view: global state: f912d2eb-27a2-11e1-0800-f34c520a3d4b:0,
111216 0:21:39 [Warning] WSREP: Gap in state sequence. Need state transfer.
111216 0:21:41 [Note] WSREP: Running: ’wsrep_sst_rsync ’joiner’ ’10.93.62.178’ ’’ ’/mnt/data/’ ’/etc
111216 0:21:41 [Note] WSREP: Prepared SST request: rsync|10.93.62.178:4444/rsync_sst
111216 0:21:41 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
111216 0:21:41 [Note] WSREP: Assign initial position for certification: 0, protocol version: 1
111216 0:21:41 [Note] WSREP: prepared IST receiver, listening in: tcp://10.93.62.178:4568
111216 0:21:41 [Note] WSREP: Node 1 (node2) requested state transfer from ’*any*’. Selected 0 (node1
111216 0:21:41 [Note] WSREP: Shifting PRIMARY -> JOINER (TO: 0)
111216 0:21:41 [Note] WSREP: Requesting state transfer: success, donor: 0
111216 0:21:42 [Note] WSREP: 0 (node1): State transfer to 1 (node2) complete.
111216 0:21:42 [Note] WSREP: Member 0 (node1) synced with group.
111216 0:21:42 [Note] WSREP: SST complete, seqno: 0
111216 0:21:42 [Note] Plugin ’FEDERATED’ is disabled.
111216 0:21:42 InnoDB: The InnoDB memory heap is disabled
111216 0:21:42 InnoDB: Mutexes and rw_locks use GCC atomic builtins
111216 0:21:42 InnoDB: Compressed tables use zlib 1.2.3
111216 0:21:42 InnoDB: Using Linux native AIO
111216 0:21:42 InnoDB: Initializing buffer pool, size = 128.0M
111216 0:21:42 InnoDB: Completed initialization of buffer pool
111216 0:21:42 InnoDB: highest supported file format is Barracuda.
111216 0:21:42 InnoDB: Waiting for the background threads to start
111216 0:21:43 Percona XtraDB (http://www.percona.com) 1.1.8-20.1 started; log sequence number 15979
111216 0:21:43 [Note] Event Scheduler: Loaded 0 events
111216 0:21:43 [Note] WSREP: Signalling provider to continue.
111216 0:21:43 [Note] WSREP: Received SST: f912d2eb-27a2-11e1-0800-f34c520a3d4b:0
111216 0:21:43 [Note] WSREP: SST finished: f912d2eb-27a2-11e1-0800-f34c520a3d4b:0
111216 0:21:43 [Note] /usr/sbin/mysqld: ready for connections.
Version: ’5.5.17’ socket: ’/var/lib/mysql/mysql.sock’ port: 3306 Percona XtraDB Cluster (GPL), Rel
111216 0:21:43 [Note] WSREP: 1 (node2): State transfer from 0 (node1) complete.
111216 0:21:43 [Note] WSREP: Shifting JOINER -> JOINED (TO: 0)
111216 0:21:43 [Note] WSREP: Member 1 (node2) synced with group.
111216 0:21:43 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 0)
111216 0:21:43 [Note] WSREP: Synchronized with group, ready for connections
When all nodes are in the SYNCED state, your cluster is ready!
7. Connect to the database on any node and create a database:
mysql
> CREATE DATABASE hello_tom;
The new database will be propagated to all nodes.
Enjoy!
5.3 How to Execute Kewpie Tests
To use kewpie for testing, it is recommended to use this MP, as it removes dbqp and integrates kewpie (and cuts the size
down to 25MB from 400+). To execute tests:
cd kewpie ; ./kewpie.py [--force ] [--libeatmydata] [--wsrep-provider-path=...]
The defaults are to run the cluster_basic and cluster_randgen suites against a 3 node cluster. Cluster_basic is used
for small atomic tests like ADD/DROP single/multiple columns on a table and ensuring the change is replicated.
cluster_randgen is used for high stress transactional loads. There are single and multi-threaded variants. The load is
a mix of INSERT/UPDATE/DELETE/SELECT statements. This includes both regular transactions, single queries,
ROLLBACK’s and SAVEPOINTs, and a mix of good and bad SQL statements.
To view all options, one may look at "./kewpie.py --help". Basic documentation is also available as sphinx docs in the
kewpie/docs folder. Here are some of the most used options:
--force
Run all tests despite failures (default is to stop test execution on the first failure)
--libeatmydata
Use libeatmydata if installed. This can greatly speed up testing in many cases. Can be used in conjunction with
--libeatmydata-path to specify where the library is located.
--wsrep-provider-path
By default, we expect / look for the library in /usr/lib/galera/libgalera_smm.so (where it ends up via 'make install', at
least on Ubuntu). If one has an alternate library/location, specify it with this option.
Any additional suites may be run this way:
./kewpie.py [options] --suite=any/suitedir/from/kewpie/percona_tests
./kewpie.py --suite=crashme
5.4 How to Report Bugs
All bugs can be reported on Launchpad. Please note that error.log files from all the nodes need to be submitted.
CHAPTER SIX: PERCONA XTRADB CLUSTER LIMITATIONS
6.1 Percona XtraDB Cluster Limitations
There are some limitations you should be aware of. Some of them will be eliminated later as the product is improved;
some are design limitations.
• Currently replication works only with the InnoDB storage engine. Any writes to tables of other types, including
system (mysql.*) tables, are not replicated. However, DDL statements are replicated at statement level, and
changes to mysql.* tables will get replicated that way. So you can safely issue CREATE USER..., but issuing
INSERT INTO mysql.user... will not be replicated.
• The DELETE operation is unsupported on tables without a primary key. Also, rows in tables without a primary key may
appear in a different order on different nodes. As a result, SELECT...LIMIT... may return slightly different result sets.
• Unsupported queries:
– LOCK/UNLOCK TABLES cannot be supported in multi-master setups.
– lock functions (GET_LOCK(), RELEASE_LOCK()... )
• The query log cannot be directed to a table. If you enable query logging, you must forward the log to a file (log_output
= FILE). Use general_log and general_log_file to enable query logging and set the log file name.
• The maximum allowed transaction size is defined by wsrep_max_ws_rows and wsrep_max_ws_size. Anything
bigger (e.g. a huge LOAD DATA) will be rejected.
• Due to cluster-level optimistic concurrency control, a transaction issuing COMMIT may still be aborted at that
stage. There can be two transactions writing to the same rows and committing on separate XtraDB Cluster nodes,
and only one of them can successfully commit. The failing one will be aborted. For cluster-level aborts,
XtraDB Cluster returns the deadlock error code: (Error: 1213 SQLSTATE: 40001 (ER_LOCK_DEADLOCK)).
• XA transactions cannot be supported due to possible rollback on commit.
• The write throughput of the whole cluster is limited by the weakest node. If one node becomes slow, the whole cluster
is slow. If you have requirements for stable high performance, it should be supported by corresponding
hardware (10Gb network, SSD).
• The minimal recommended cluster size is 3 nodes.
• DDL statements are problematic and may stall the cluster. Support for DDL will be improved later, but it will
always require special treatment.
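For the transaction-size limitation above, one practical workaround is to split a huge load into bounded chunks, committing each separately. A sketch follows (the chunk size must be chosen below your wsrep_max_ws_rows / wsrep_max_ws_size settings, and note that per-chunk commits change the load's atomicity):

```python
# Split a large load into chunks small enough to stay under the
# wsrep_max_ws_rows write-set limit. Each chunk would be inserted and
# committed as its own transaction. Illustrative helper only.

def chunk_rows(rows, max_ws_rows: int):
    """Yield successive slices of `rows`, each no larger than max_ws_rows."""
    for i in range(0, len(rows), max_ws_rows):
        yield rows[i:i + max_ws_rows]
```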
CHAPTER EIGHT: MISC
8.1 Glossary
LSN Each InnoDB page (usually 16kb in size) contains a log sequence number, or LSN. The LSN is the system
version number for the entire database. Each page’s LSN shows how recently it was changed.
InnoDB Storage engine which provides ACID-compliant transactions and foreign key support, among other improvements
over MyISAM. It is the default engine for MySQL as of the 5.5 series.
MyISAM Previous default storage engine for MySQL for versions prior to 5.5. It doesn’t fully support transactions
but in some scenarios may be faster than InnoDB. Each table is stored on disk in 3 files: .frm, .MYD, .MYI.
IST Incremental State Transfer. Functionality which lets a node catch up with the group by receiving only the missing
writesets instead of a whole state snapshot, but only if the writesets are still in the donor's writeset cache.
XtraBackup Percona XtraBackup is an open-source hot backup utility for MySQL-based servers that doesn't lock
your database during the backup.
XtraDB Percona XtraDB is an enhanced version of the InnoDB storage engine, designed to better scale on modern
hardware, and including a variety of other features useful in high performance environments. It is fully back-
wards compatible, and so can be used as a drop-in replacement for standard InnoDB. More information here
.
XtraDB Cluster Percona XtraDB Cluster is a high availability solution for MySQL.
Percona XtraDB Cluster Percona XtraDB Cluster is a high availability solution for MySQL.
my.cnf This file refers to the database server's main configuration file. Most Linux distributions place it at
/etc/mysql/my.cnf, but the location and name depend on the particular installation. Note that this is
not the only way of configuring the server; some systems do not have one at all and rely on command-line
options to start the server, plus its default values.
datadir The directory in which the database server stores its databases. Most Linux distributions use
/var/lib/mysql by default.
ibdata Default prefix for tablespace files, e.g. ibdata1 is a 10MB autoextendable file that MySQL creates for the
shared tablespace by default.
innodb_file_per_table InnoDB option to use separate .ibd files for each table.
split brain Split brain occurs when two parts of a computer cluster are disconnected, each part believing that the
other is no longer running. This problem can lead to data inconsistency.
.frm For each table, the server will create a file with the .frm extension containing the table definition (for all
storage engines).
29
34. Percona XtraDB Cluster Documentation, Release Beta
.ibd On a multiple-tablespace setup (innodb_file_per_table enabled), MySQL stores each newly created table in
a file with a .ibd extension.
.MYD Each MyISAM table has a .MYD (MYData) file which contains its data.
.MYI Each MyISAM table has a .MYI (MYIndex) file which contains the table's indexes.
.MRG Each table using the MERGE storage engine, besides a .frm file, has a .MRG file containing the names
of the MyISAM tables associated with it.
.TRG File containing the triggers associated with a table, e.g. mytable.TRG. Together with the .TRN file, they represent
all the trigger definitions.
.TRN File containing the trigger names associated with a table, e.g. mytable.TRN. Together with the .TRG file, they
represent all the trigger definitions.
.ARM Each table with the Archive storage engine has an .ARM file which contains its metadata.
.ARZ Each table with the Archive storage engine has an .ARZ file which contains its data.
.CSM Each table with the CSV storage engine has a .CSM file which contains its metadata.
.CSV Each table with the CSV storage engine has a .CSV file which contains its data (a standard
Comma Separated Values file).
.opt MySQL stores options of a database (like charset) in a file with a .opt extension in the database directory.