Percona XtraBackup provides fully open-source, high-performance, non-blocking backups for MySQL databases. It can create full or incremental backups of InnoDB or MyISAM tables, with features like parallel backups, compression, point-in-time recovery, and streaming. Backups are then prepared and restored efficiently with the innobackupex command-line tool.
Highly efficient backups with Percona XtraBackup, by Nilnandan Joshi
Percona XtraBackup is a free, open-source MySQL hot backup tool that performs non-blocking backups for InnoDB and XtraDB databases. This talk covers:
- How it works with MySQL/Percona Server and what features it provides
- The difference between xtrabackup and innobackupex
- How to take and restore full, incremental and partial backups
- How to use features like streaming, compression, remote and compact backups
- How to troubleshoot issues with xtrabackup
This presentation shows best practices for taking backups with Percona XtraBackup: how to compress, encrypt, stream and speed them up.
2. www.percona.com
Introduction
Percona XtraBackup provides a fully open-source, free, high-performance, non-blocking backup system for InnoDB or XtraDB tables.
It can also back up MyISAM, Archive, Merge, and other SQL-level objects.
It is a reliable, widely-used alternative to Oracle's MySQL Enterprise Backup.
3. Features
- Full binary backups
- Incremental backups
- Partial backups
- Compressed backups
- Compact backups
- Encrypted backups
- Streaming backups
- LRU backups
- Online (non-blocking)*
- Parallel backups & compression
- InnoDB & MyISAM
- Point-in-time recovery support
- Export individual tables; restore to a different server
- MySQL replication & Galera support
- Throttling
- OS buffer optimizations
- MySQL compatibility and MySQL forks' support
- Easy migration
- Open source (GPL); cost: free
- Community help and commercial support
4. Compatibility
Versions <= 2.0 are compatible with:
- Oracle MySQL 5.0, 5.1, 5.5 and 5.6
- Equivalent versions of Percona Server, MariaDB, and Drizzle
Version 2.1 is compatible with:
- Oracle MySQL 5.1 (plugin version [1]), 5.5, 5.6
- Equivalent versions of Percona Server, MariaDB, and Drizzle
[1] The InnoDB Engine plugin 1.0 is distributed, but not enabled, by default. See:
http://dev.mysql.com/doc/innodb-plugin/1.0/en/innodb-plugin-installation.html
5. xtrabackup vs. innobackupex
Percona XtraBackup includes two main executables:
- xtrabackup is the low-level binary that handles the creation and consistency checking of InnoDB backups.
- innobackupex is a higher-level Perl wrapper for xtrabackup that is also responsible for the rest of the operations, including the backup of other engines.
You usually want to use innobackupex.
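To make the division of labour concrete, here is a hedged sketch of the two entry points side by side (paths and credentials are hypothetical; flags are from the XtraBackup 2.x series):

```shell
# Low-level binary: copies and consistency-checks InnoDB data only;
# .frm files and non-InnoDB tables are not handled
xtrabackup --backup --datadir=/var/lib/mysql \
    --target-dir=/data/backups/raw

# Perl wrapper: drives xtrabackup and also copies MyISAM tables,
# .frm files and other SQL-level objects (the usual choice)
innobackupex --user=root --password=secret /data/backups
```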
7. Installation
The most straightforward way is installing the Percona apt or yum repositories.
Then install the package with:
- yum install percona-xtrabackup
- aptitude update && aptitude install percona-xtrabackup
Other installation options can be found at http://www.percona.com/downloads/XtraBackup/
8. Requirements and Limitations
Only fully supported on Linux machines (innobackupex is a Perl script).
The package should take care of all dependencies (Perl interpreter, MySQL driver, other libraries).
Local access to the MySQL datadir is required; READ, WRITE and EXECUTE filesystem privileges are needed.
This makes xtrabackup incompatible with RDS or some web hosting providers.
11. Command Line Execution
Minimal options for a full backup:
# innobackupex /path/to/backup/
• The actual backup is stored in a timestamped subdirectory.
• It reads the datadir and InnoDB paths from my.cnf.
• A more complete set of options:
# innobackupex --host=localhost --user=root --password=secret /tmp
12. What it looks like
root@nilnandan-Dell-XPS:/var/lib/mysql# sudo innobackupex --user=root --password=root /home/nilnandan/backup
InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
and Percona LLC and/or its affiliates 2009-2013. All Rights Reserved.
This software is published under the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.
Get the latest version of Percona XtraBackup, documentation, and help resources:
http://www.percona.com/xb/p
140709 13:52:04 innobackupex: Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_group=xtrabackup' as 'root' (using password: YES).
140709 13:52:04 innobackupex: Connected to MySQL server
140709 13:52:04 innobackupex: Executing a version check against the server...
140709 13:52:04 innobackupex: Done.
14. Files in Target Directory
A hierarchy of files and directories mirroring the original database structure.
- backup-my.cnf: Not a backup of the server configuration. It only contains the minimal InnoDB settings at the time of the backup needed to execute the --apply-log phase.
- xtrabackup_info: Contains all information about the xtrabackup binary, server version, start and end time of the backup, binlog position, etc.
- xtrabackup_checkpoints: Metadata about the backup (type of backup, LSN, etc.)
- xtrabackup_logfile: Data needed for the --apply-log phase.
15. Preparation Phase
Before the backup directory can be used, we must:
- Make sure that the InnoDB tablespaces have consistent, non-corrupt data
- Create new transaction logs
innobackupex executes xtrabackup twice:
- First, to do a controlled crash recovery, applying the log entries with xtrabackup's embedded InnoDB. This redoes all written committed transactions and undoes uncommitted ones.
- Second, to create the empty logs.
16. Preparation Phase (cont.)
Only xtrabackup/innobackupex and the backup directory are needed.
It can be done on a different machine than the original server (we have the xtrabackup binary and the InnoDB configuration parameters).
Generally, you do it just before restore. If you only do full backups and want to minimize the MTTR (Mean Time To Recover), you can prepare immediately after the backup.
17. Command Line Execution
The simplest form:
$ innobackupex --apply-log /media/backups/2017-07-08_13-22-12
You may specify a memory parameter:
$ innobackupex --apply-log --use-memory=4G /media/backups/2017-07-08_13-22-12
This increases the buffer pool available for the recovery phase (100MB by default) to improve performance.
18. What It Looks Like
nilnandan@nilnandan-Dell-XPS:~/backup$ sudo innobackupex --apply-log 2014-07-09_13-52-04
InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
and Percona LLC and/or its affiliates 2009-2013. All Rights Reserved.
This software is published under the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.
Get the latest version of Percona XtraBackup, documentation, and help resources:
http://www.percona.com/xb/p
IMPORTANT: Please check that the apply-log run completes successfully.
At the end of a successful apply-log run innobackupex prints "completed OK!".
140710 10:06:57 innobackupex: Starting ibbackup with command: xtrabackup --defaults-file="/home/nilnandan/backup/2014-07-09_13-52-04/backup-my.cnf" --defaults-group="mysqld" --prepare --target-dir=/home/nilnandan/backup/2014-07-09_13-52-04 --tmpdir=/tmp
19. Restore
After --apply-log, we have a consistent full raw backup.
We can just move it back to the datadir with standard OS commands once the server is shut down.
As a physical backup, restoration is as fast as you can copy the directory contents to the local filesystem.
There are also specific commands for restoration in XtraBackup.
20. Copy Back
The integrated command to restore a backup is:
# innobackupex --copy-back /media/backups/2017-07-08_13-22-12
It complains if the datadir, as read from the config file, is not empty [1] (it will not overwrite existing data).
Only partial restores can be done to a running server.
The xtrabackup_* files are not needed.
[1] We can override this behavior with the option --force-non-empty-directories, but it will never overwrite existing files.
21. What It Looks Like
root@nilnandan-Dell-XPS:~# sudo innobackupex --copy-back /home/nilnandan/backup/2014-07-09_13-52-04/
...
innobackupex: Starting to copy files in '/home/nilnandan/backup/2014-07-09_13-52-04'
innobackupex: back to original data directory '/var/lib/mysql'
innobackupex: Copying '/home/nilnandan/backup/2014-07-09_13-52-04/xtrabackup_info' to '/var/lib/mysql/xtrabackup_info'
innobackupex: Creating directory '/var/lib/mysql/mysql'
innobackupex: Copying '/home/nilnandan/backup/2014-07-09_13-52-04/mysql/time_zone_leap_second.frm' to
...
innobackupex: Starting to copy InnoDB log files
innobackupex: in '/home/nilnandan/backup/2014-07-09_13-52-04'
innobackupex: back to original InnoDB log directory '/var/lib/mysql'
innobackupex: Copying '/home/nilnandan/backup/2014-07-09_13-52-04/ib_logfile1' to '/var/lib/mysql/ib_logfile1'
innobackupex: Copying '/home/nilnandan/backup/2014-07-09_13-52-04/ib_logfile0' to '/var/lib/mysql/ib_logfile0'
innobackupex: Finished copying back files.
140710 11:04:12 innobackupex: completed OK!
root@nilnandan-Dell-XPS:~#
22. Move Back
As an alternative to --copy-back, we can also do:
# innobackupex --move-back /media/backups/2017-07-08_13-22-12
This may be faster or more convenient if the backup is already on the same partition or we don't have enough free space.
23. Restoring Permissions
After restore, as file permissions are preserved, you will probably have to fix file ownership:
# chown -R mysql:mysql /var/lib/mysql
If MySQL cannot write to the data directory or some of its files, it will refuse to start.
24. Incremental Backup
Differential & Incremental Backups
- Differential backup: saves only the difference in the data since the last full backup
- Incremental backup: saves only the difference in the data since the last backup (full or another incremental)
XtraBackup can perform both, which can lead to important savings in space (and potentially, time), especially if the backups are very frequent or the database is very static.
25. Incremental Backup (cont.)
Differential and incremental backups work the same way with xtrabackup:
- Every page stores the Log Sequence Number (LSN) of when it was last changed
- Only pages that have a higher LSN than the base backup (full or incremental) are copied
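As a sketch of this LSN-based mechanism in practice, the following takes a full base backup and then an incremental that copies only pages changed since it (paths and credentials are hypothetical; flags are from the XtraBackup 2.x series):

```shell
# Full (base) backup into a fixed, non-timestamped directory
innobackupex --user=root --password=secret \
    --no-timestamp /data/backups/full

# Incremental backup: only pages with an LSN higher than the
# base backup's last checkpoint are copied
innobackupex --user=root --password=secret \
    --incremental --no-timestamp \
    --incremental-basedir=/data/backups/full \
    /data/backups/inc1
```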
27. Incremental Backup (cont.)
Restoration of Incremental Backups
- At prepare time, the new pages have to be applied, in order, to the full backup before restore
- This is the reason why you want to wait to fully apply the log until just before restore time
- Uncommitted transactions must not be undone, as they may have finished in a subsequent incremental backup
- Intermediate backups have to apply changes with the option --redo-only
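A hedged sketch of that prepare sequence, assuming a base backup in /data/backups/full and a single incremental in /data/backups/inc1 (hypothetical paths):

```shell
# 1. Replay committed changes on the base backup, but keep
#    uncommitted transactions for now (--redo-only)
innobackupex --apply-log --redo-only /data/backups/full

# 2. Apply the incremental delta on top of the base; keep
#    --redo-only for every incremental except the last one
innobackupex --apply-log /data/backups/full \
    --incremental-dir=/data/backups/inc1

# 3. Final prepare: roll back uncommitted transactions and
#    create fresh transaction logs
innobackupex --apply-log /data/backups/full
```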
29. Partial Backups
When using xtrabackup (or innobackupex), we can filter out some of the tables, effectively copying only some databases or tables.
Two main restrictions apply for InnoDB tables:
- The original database must use innodb_file_per_table, as xtrabackup works at the filesystem level
- Recovering individual tables is only possible in MySQL 5.6. Previous versions require Percona XtraDB extensions with tablespace import enabled
30. www.percona.com
Partial Backups (cont..)
Command Line Options for Segmented Backups:
--include : Matches the fully qualified name
(database.table) against a regular expression
--databases : Matches a list of databases (or tables)
separated by spaces.
--tables-file : Matches a list of tables, one per line,
stored on the specified file
31. www.percona.com
Partial Backups (cont..)
Examples
Backup only the test database:
# innobackupex --include='^test[.]' /data/backups
Backup all except the system databases:
$ mysql -N -e "SELECT concat(table_schema, '.',
table_name) FROM information_schema.tables WHERE
table_schema NOT IN ('mysql', 'performance_schema',
'information_schema')" > /tmp/tables
# innobackupex --tables-file=/tmp/tables
/data/backups
32. www.percona.com
Streaming
Xtrabackup allows streaming the backup files
to the UNIX standard output
This allows post-processing them or sending them to a
remote location while the backup is ongoing
33. www.percona.com
Streaming (cont..)
The stream option accepts two parameters:
--stream=tar
It sends all files to the standard output in the tar
archive format (single file without compression)
--stream=xbstream
The standard output will use the included
xbstream format, which allows simultaneous
compression, encryption and parallelization
A temporary path is still required
34. www.percona.com
Streaming (cont..)
Using tar:
# innobackupex --stream=tar /tmp > backup.tar
# innobackupex --stream=tar /tmp |
bzip2 - > backup.tar.bz2
To extract, the tar -i option (ignore zeros) must be
used:
# tar -xivf backup.tar
# tar -jxif backup.tar.bz2
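The reason -i matters can be seen with plain GNU tar, independently of xtrabackup: a stream containing end-of-archive zero blocks in the middle (which is what innobackupex emits) is silently truncated without it. A small self-contained demonstration:

```shell
# Build two small tar archives and concatenate them: the result has
# end-of-archive zero blocks in the middle, like an innobackupex stream
workdir=$(mktemp -d)
cd "$workdir"
echo one > a.txt
echo two > b.txt
tar -cf a.tar a.txt
tar -cf b.tar b.txt
cat a.tar b.tar > joined.tar
rm a.txt b.txt

# Plain extraction stops at the first end-of-archive marker,
# restoring only a.txt
tar -xf joined.tar

# -i (--ignore-zeros) reads past the marker and restores both files
rm -f a.txt b.txt
tar -xif joined.tar
ls a.txt b.txt
```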
35. www.percona.com
Streaming (cont..)
Using xbstream
# innobackupex --stream=xbstream /tmp >
backup.xbstream
To extract, use the xbstream binary provided with
Percona Xtrabackup:
# xbstream -x -C /root < backup.xbstream
36. www.percona.com
Compression
We can use --compress to produce quicklz-
compressed files:
# innobackupex --compress /media/backups
To decompress, we can use --decompress or
qpress* to decompress the individual .qp files
manually:
# innobackupex --decompress
/media/backups/2017-07-08_13-22-12
# find . -name "*.qp" -execdir qpress -d {} . \;
37. www.percona.com
Compression and xbstream
The --compress flag can be combined with
xbstream (not with tar):
# innobackupex --compress --stream=xbstream /tmp
> /media/backups/backup.xbstream
Before preparation, we must first extract with xbstream,
then decompress:
# xbstream -x -C /media/backups < backup.xbstream
# innobackupex --decompress /media/backups
38. www.percona.com
Parallel Copy and Compression
If using xbstream, we can use --parallel to archive
several files at the same time:
# innobackupex --parallel=4 --stream=xbstream
/tmp > /media/backups/backup.xbstream
If we are using compression, we can parallelize the
process for each file with --compress-threads:
# innobackupex --compress --compress-threads=8
--stream=xbstream /tmp > /media/backups/backup.xbstream
39. www.percona.com
Remote Backups
xtrabackup can automatically send the backup files
to a remote host by redirecting the output to
applications like ssh or netcat:
# innobackupex --stream=tar /tmp
| ssh user@host "cat - >
/media/backups/backup.tar"
# ssh user@host "( nc -l 9999 >
/media/backups/backup.tar & )"
&& innobackupex --stream=tar /tmp
| nc host 9999
40. www.percona.com
Throttling Remote Backups
The pv (pipe viewer) utility can be useful for both
monitoring and limiting the bandwidth of the copy
process
The following limits the transmission to 10MB/s:
# innobackupex --stream=tar ./
| pv -q -L10m
| ssh user@desthost
"cat - > /media/backups/backup.tar"
41. www.percona.com
Compact Backups
In order to reduce backup size, we can skip the copy
of secondary index pages:
# innobackupex --compact /media/backups
The space saved will depend on the size of the
secondary indexes. For example:
#backup size without --compact
2.0G 2018-02-01_10-18-38
#backup size taken with --compact option
1.4G 2018-02-01_10-29-48
42. www.percona.com
Preparing Compact Backups
The downside of compact backups is that the
prepare phase will take longer, as the secondary
keys have to be regenerated ( --rebuild-indexes ):
# cat xtrabackup_checkpoints
backup_type = full-backuped
from_lsn = 0
to_lsn = 9541454813
last_lsn = 9541454813
compact = 1
# innobackupex --apply-log --rebuild-indexes
/media/backups/2017-07-08_13-22-12
43. www.percona.com
Rebuilding Indexes
The --rebuild-threads option can be used to speed up
the recreation of the keys, by doing it in parallel:
# innobackupex --apply-log --rebuild-indexes
--rebuild-threads=8 /media/backups/2017-07-08_13-
22-12
Rebuilding indexes has the side effect of
defragmenting them, as they are rebuilt in order. We
can also rebuild the indexes of regular full backups.
44. www.percona.com
Encrypted Backups
Xtrabackup has support for symmetric encryption
through the libgcrypt library for both local and
xbstream backups
Before using encryption, a key must be generated.
For example, to create a 256-bit key:
# openssl enc -aes-256-cbc -pass pass:mypass -P -md sha1
salt=9C8CE740FD0FE6AD
key=0A632731C0EABA6131B6683068E04629524FC8FC1E0F69037D5ED03AF126BD0B
iv =03E872849A8DA83647A5FCEFDF1C68F2
* Please note that specifying passwords on the command line input is considered
insecure.
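To avoid putting a password on the command line at all, a key can also be generated directly with openssl rand and stored in a file for later use with --encrypt-key-file. A minimal sketch (the /tmp/xb_keyfile path is a placeholder; the file must contain only the key, with no trailing newline):

```shell
# Generate a random 256-bit key (64 hex characters) into a key file
openssl rand -hex 32 | tr -d '\n' > /tmp/xb_keyfile

# Sanity check: the file should hold exactly 64 characters
wc -c < /tmp/xb_keyfile
```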
45. www.percona.com
Encrypted Backups (cont..)
We can now use the previous value of iv as our
encryption key:
$ innobackupex --encrypt=AES256
--encrypt-key="03E872849A8DA83647A5FCEFDF1C68F2"
/data/backups
Options:
--encrypt : Defines the algorithm to be used
(supports AES128, AES192 and AES256)
--encrypt-key : Defines the encryption key to use
directly on the command line.
--encrypt-key-file : Defines the encryption key by
pointing to a file that contains it. This is recommended
over the --encrypt-key option
46. www.percona.com
Speeding up Encryption
Parallel encryption is possible by encrypting more
than one file at a time (--parallel) and using more than one
thread for each file (--encrypt-threads):
# innobackupex --encrypt=AES256
--encrypt-key-file=/media/backups/keyfile
--stream=xbstream --encrypt-threads=4 /tmp
> /media/backups/backup.xbstream
We can also change the buffer for each encryption thread
with --encrypt-chunk-size (default 64K)
47. www.percona.com
Decrypting Backups
Before preparing it, the backup must be unarchived (if
xbstream was used) and decrypted. We can use --decrypt or
do it for each individual .xbcrypt file using xbcrypt
(included with xtrabackup):
# innobackupex --decrypt=AES256
--encrypt-key="A1EDC73815467C083B0869508406637E"
/data/backups/2013-08-01_08-31-35/
# for i in `find . -iname "*.xbcrypt"`; do
xbcrypt -d --encrypt-key-file=/media/backups/filekey
--encrypt-algo=AES256 < $i
> $(dirname $i)/$(basename $i .xbcrypt) && rm $i; done
--parallel can speed up the process as well
48. www.percona.com
TROUBLESHOOTING AND
PERFORMANCE
Locking Issues
By default, innobackupex will lock the tables (FTWRL)
during the last phase of the backup
You can use --no-lock if you are only backing up
InnoDB tables and no DDL statements are executing
during that last phase*
You can also use --no-lock when backing up other
engines, provided no DDL statements are executed and
non-InnoDB tables are not modified (e.g. no users are
created in the mysql database)*
You can use --rsync to minimize the copy time while
locked
* Please note that --no-lock also makes it impossible to obtain reliable binary log
coordinates.
49. www.percona.com
File Not Found
2017-07-08 12:56:56 7f0dfe4737e0 InnoDB: Operating system error
number 2 in a file operation.
InnoDB: The error means the system cannot find the path specified.
2017-07-08 12:56:56 7f0dfe4737e0 InnoDB: File name ./ibdata1
2017-07-08 12:56:56 7f0dfe4737e0 InnoDB: File operation call: 'open'
returned OS error 71.
2017-07-08 12:56:56 7f0dfe4737e0 InnoDB: Cannot continue operation.
innobackupex: Error: ibbackup child process has died at /usr/bin/
innobackupex line 389.
Set a correct datadir option in /etc/my.cnf
50. www.percona.com
Permission Denied
InnoDB: ./ibdata1 can't be opened in read-write mode
xtrabackup: Could not open or create data files.
[...]
xtrabackup: error: xb_data_files_init() failed with error code 1000
innobackupex: Error: ibbackup child process has died at /usr/bin/
innobackupex line 389.
innobackupex: Error: Failed to create backup directory /backups/
2017-07-08_12-19-01: Permission denied at /usr/bin/innobackupex line
389.
Use the Linux root user or any other account with
read/write permissions on datadir and on target-dir
51. www.percona.com
MySQL Does Not Start After
Recovery
Starting MySQL server... FAILED
The server quit without updating PID file
Did you change file permissions of the files to the
mysql user?
Did you remember to apply the log before restart?
Failing to do so can also corrupt the backup.
Did you restore an appropriate my.cnf file? Previous
versions of MySQL (< 5.6) do not start if transaction log
files have a different size than specified/default
52. www.percona.com
The Backup Takes Too Long
As XtraBackup performs physical backups, the speed
should be close to a filesystem copy. If IO impact
is not a concern, we can speed it up by:
Using parallelization ( --parallel , --compress-threads ,
--encrypt-threads , --rebuild-threads )
Using compression ( --compress ) if the extra CPU cost
compensates for the lower bandwidth required
Using incremental backups (especially with Percona
extensions) to perform rolling full backups