(Valerie and Ivan)
Hi, I’m Valerie. I’m in the open source database cluster at Pythian.
Hi, I’m Ivan...
Today we’re here to referee a battle of the online schema change methods.
(Valerie starts)
First we’ll walk through a few definitions,
give a basic overview of online DDL,
and then discuss alternate methods and tools.
Then we’ll engage in battle!
What does it mean when we say we want to do an “online” schema change?
Online can mean several things.
The change may be happening only to metadata of the table, the data definition files.
The change might be happening in-place (not a table copy per se, but sometimes requiring reorganization), and allowing changes to the data (like inserts and updates) to happen at the same time.
The change might require a full table copy on disk, but also not block statements.
With some engines, this can even mean the changes are made in temporary memory structures, and written by row to the main storage as data is accessed.
So it means a lot of things at the database level. From the application’s perspective, though, it means statements aren’t blocked while your schema change is happening.
Why do we care about changes being online? Again, from the perspective of the application, statements can continue to run against the database, reading and writing without issue, while the change is happening. That means no downtime for your application.
There’s no promise that performance might not dip some, and there may be a brief pause at the beginning or end of a long-running change, but in general there will be no application failures.
As database technologies have matured, they’ve all started to allow more native capabilities for online changes.
Native online schema changes have been supported in MariaDB since version 10.0, and in MySQL since version 5.6.
Status output about progress of long-running changes has improved, but remains limited. Remember this when we get to the battle: when a change has been running for 2 days, a manager might ask you when it’s going to be done.
You can specify three alter algorithms: default, copy, or inplace. If nothing is specified, then the default behavior will occur for this type of change.
You might specify “copy” if you want to force a table rebuild, but in general, you’ll want to choose inplace. If the alter doesn’t support inplace, you’ll receive an error message. If you get into the habit of specifying algorithm=inplace, you’ll be prepared if a change will require a table copy.
You can specify four alter locks: default, none, shared, exclusive. Again, get into the habit of using “none” to specify that the alter won’t block operations.
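For example, a minimal sketch of forcing the in-place, non-blocking path (table and column names here are just placeholders):

  ALTER TABLE orders
    ADD COLUMN notes VARCHAR(255),
    ALGORITHM=INPLACE, LOCK=NONE;

If InnoDB can’t honor the requested algorithm or lock level for that particular change, the statement errors out right away instead of silently falling back to a blocking table copy.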
Here are some of the changes that still require a table copy.
A critical thing to consider is how much replication lag will be incurred on any single-threaded replicas. That SQL thread will be blocked while the replica is running the change. If it’s a long-running change, that’s a lot of lag.
Another important thing to highlight is that you can’t pause a native online alter. Once it starts, you’re committed to completing it, or stopping it and waiting for what can be an expensive rollback.
Information about native online alters above was specific to InnoDB.
A newer engine, TokuDB, offers unique online alter functionality. The several available alters are completed in memory and then pushed down to storage as a part of regular Toku operations.
You just need to be careful about how you plan changes in Toku. Do changes one at a time; for example, don’t change names and data types at the same time.
If you need to do something like add a column and drop a column, separate those alters.
If you need to drop a column that has an index, drop the index online first, then drop the column online.
Changing the size of a basic datatype will be done online, but only if the column doesn’t have an index attached.
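As a rough illustration of splitting changes like that (table and column names are placeholders), issue two separate statements rather than one combined alter:

  ALTER TABLE events ADD COLUMN source VARCHAR(64);
  ALTER TABLE events DROP COLUMN legacy_flag;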
The most critical limitations of online alters in TokuDB are centered around performance.
Two things stand out. First, a table lock with a flush is done at the end of the change, so if there are many dirty pages to flush, that step can take several minutes. Second, because Toku pushes the changes down as part of its regular operations, a very busy workload (or even a batch workload, if timed poorly) means this background work can use a lot of system resources.
Take these limitations very seriously, especially in the presence of replication. Test your workload very well.
Adding indexes in TokuDB is a specific online alter case. It’s important to use the correct syntax, highlighted here.
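As best we recall from the TokuDB documentation, the hot path is the CREATE INDEX form (not ALTER TABLE ... ADD INDEX) with tokudb_create_index_online enabled; a sketch, with placeholder names:

  SET tokudb_create_index_online=ON;
  CREATE INDEX idx_created_at ON events (created_at);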
Again, we will see a block with single-threaded replication, causing replication lag.
For another newer engine, RocksDB, all changes require a table copy, except creating/dropping indexes.
Community support may soon be saving the day here.
With Amazon RDS, traditional native online alter functionality is available.
In addition, Aurora Fast DDL allows online changes by using versioning; consider the overhead of the additional background processes involved.
Specific to RDS, if you have ephemeral storage, and you’re doing any change that requires temp space, the required temp space can fill your allocation.
Aurora Fast DDL is also limited in nonstandard cases, like on tables with partitions.
(Ivan takes over)
Okay. Online alters are pretty robust these days, with most standard table definitions and changes covered.
Alternate methods and tools exist to help with changes not covered by online alter functionality, and, for all changes, to allow for better insight and control.
We’ll cover two alternate methods briefly.
A method commonly used in the past, often for large tables or for replication scenarios requiring read replicas to be completely up to date, is the rolling schema update.
In this method, the replica is stopped and changes are made there. Then a failover is done in conjunction with the application change requiring the new schema (if the application code isn’t forward- and backward-compatible), the change is done on the old primary, and replication is restored.
The major limitation here is the time-consuming manual process, compounded in the presence of complex replication setups.
Any writes to the replica can cause serious data inconsistency when GTIDs are used, resulting in errant transactions that need to be manually resolved.
Here’s an example of how GTID modifies the rolling schema approach. If we’re not careful, we can end up with duplicate statements or broken slaves after we promote a new master.
Just as a public service announcement: in case errant transactions do occur, here’s how they would be resolved.
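The usual fix is to inject an empty transaction with the errant GTID on every server that is missing it, so it is marked as executed and never re-fetched; a sketch (the UUID and sequence number are placeholders):

  -- run on each server missing the errant transaction
  SET GTID_NEXT='aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:123';
  BEGIN; COMMIT;
  SET GTID_NEXT='AUTOMATIC';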
Another commonly used schema change process in the past was just to take complete downtime. The downside of downtime is… well, downtime, and it doesn’t work well in today’s 24x7 environments.
A “light” way to do this is to do changes during relative downtime. That is, do changes when the application is relatively quiet, to avoid performance degradation or replication lag.
Alternate tools were developed to address the limitations of online alters, before the native online DDL was quite robust enough.
These tools remain useful, however.
pt-online-schema-change is a well-known and widely used tool.
With this tool, a new table is created, rows are written in chunks, and concurrent changes are applied via triggers. At the end of the alter, a table rename is done.
pt-online-schema-change can be controlled in various ways when starting the change, including controlling the size of the chunks, and specifying which replicas will be monitored for replication lag.
When starting the tool, you can also specify the thresholds of variables related to workload and replication lag.
To avoid a major limitation, don’t drop the old table right after the table rename. Do this at a later time when load is quiet, to avoid stalls.
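Putting those options together, a hedged sketch of an invocation (database, table, replica host, and the alter itself are placeholders):

  pt-online-schema-change \
    --alter "ADD COLUMN notes VARCHAR(255)" \
    --chunk-size=1000 --max-lag=5 \
    --check-slave-lag h=replica1.example.com \
    --no-drop-old-table \
    --execute D=mydb,t=orders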
The use of triggers has been the most widely critiqued feature of pt-online-schema-change, since triggers are a poorly optimized part of the database code.
Other limitations exist around foreign keys, but this can be configured to a certain degree.
Especially note the bug around foreign keys including an underscore in the name.
Another tool that is similar to pt-osc is Facebook’s OSC, created specifically for their environment, but open-sourced and available to all, so I think it’s interesting to look at.
Facebook OSC works similarly to pt-osc: we create a table with the new structure and then add triggers on the old table.
We start dumping rows in chunks from the source table and loading onto the new table, while replaying changes.
A checksum is performed, using a consistent snapshot taken at the beginning, followed by a quick replay to fill the gap between the shadow table and the existing table.
At the end, the old and new tables are swapped.
The most important thing to keep in mind is the tool is designed to run rolling schema updates, as changes do not propagate to the slaves.
The main difference from pt-osc is that the triggers write to a change-capture table rather than to the target table itself, which allows the change replay to be asynchronous.
Also, rather than INSERT ... SELECT as pt-osc does, the tool uses SELECT ... INTO OUTFILE and LOAD DATA INFILE. This avoids the gap-lock mechanism, thus reducing locking.
Since the tool relies on a file with a create table statement, you can easily integrate changes with your source control mechanism.
The main limitations of Facebook online schema change are no support for triggers and foreign keys.
Only one migration can be run at a time.
Python 2.7 support is tricky if you are running an older distribution.
Your application needs to be able to tolerate “table not found” errors for a short period.
As mentioned before, the tool is meant to be run on each of your servers individually.
Issues with trigger-based tools led to the development of Github’s gh-ost, which is a triggerless online schema migration solution.
It is testable and provides the ability to pause changes, dynamic control/reconfiguration, auditing, and many operational perks.
Gh-ost works in the same way as other tools by copying the table rows in chunks, but rather than using triggers, gh-ost will attach itself as a virtual replica, and capture ongoing changes directly from the binary logs.
In addition to the benefits of not having triggers, gh-ost can be reconfigured on the fly by sending commands to a UNIX socket.
Gh-ost can attach to either the master or a slave, as long as log-slave-updates is set. An accurate progress counter, which requires a full scan of the table, is also available and can be offloaded to a slave.
gh-ost can operate directly on a slave, and also do a test migration. In that mode, the replication is left stopped so you can manually compare and checksum the old and new tables.
One of the available mechanisms for throttling is the usual check for threads connected/running. If you hit max-load, the operation is paused, while critical load is used to abort the change. There are some thresholds available to do a second check before aborting.
The throttling configuration can be set to check for slave lag; you can also specify which slaves are monitored.
Since there are no triggers, you can interactively pause an ongoing change by sending the command to the unix socket.
The dynamic reconfiguration is possible by sending the commands via unix socket. You can check the status and also query or modify the chunk size. The operation can also be throttled manually.
The cutover can be delayed by specifying a command line argument, and manually triggered on a low traffic period.
If the interactive commands don’t work because the -U flag is not available, be sure you have the correct version of netcat installed.
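With a working netcat, the interactive commands look something like this (the socket path is assumed to follow gh-ost’s default naming for a mydb.orders migration):

  echo status | nc -U /tmp/gh-ost.mydb.orders.sock
  echo throttle | nc -U /tmp/gh-ost.mydb.orders.sock
  echo "chunk-size=500" | nc -U /tmp/gh-ost.mydb.orders.sock
  echo no-throttle | nc -U /tmp/gh-ost.mydb.orders.sock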
The main thing that prevents using gh-ost is having foreign keys or triggers on the table.
You also need at least a slave running row-based replication, although gh-ost can flip the format for you if you let it.
Newer column types are also not supported yet, and a master-master setup where you write to both masters at the same time is also a problem.
Now we move on to some use cases, and let the battle begin!
RDS and Aurora take away many complexities of database administration, so you can think of your database as a black box.
Unique to Aurora, consider that the underlying storage for nodes is shared.
Any online alter method or tool must take these two constraints into account.
Related, stopping and starting the replica is done through procedures, not directly in the database.
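For example, on RDS MySQL the stop/start pair is exposed as stored procedures rather than STOP SLAVE / START SLAVE:

  CALL mysql.rds_stop_replication;
  CALL mysql.rds_start_replication;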
Let’s battle!
Given the differences in database administration statements, there are several complications.
In Amazon’s version of MySQL, either RDS MySQL or Aurora, privileges are limited. This often turns out to be an issue with operational tasks, and online schema changes are no exception.
Commands requiring the SUPER privilege are disabled, the processlist doesn’t give usable IP information, filters are in place, and read_only can’t be modified.
Also, binlogs aren’t enabled by default, but these can be configured by setting the backup retention appropriately.
With Amazon, gh-ost wins! The additional flexibility provided by dynamic reconfiguration, plus the reduced overhead of having no triggers, gives gh-ost the crown here, as long as you don’t have foreign keys.
For Aurora, the only available option is to run against the master, as read replicas effectively use the same underlying database files.
Enable binlogs, and set binlog format to “row” in the parameter group.
For gh-ost, use --assume-rbr so it does not get stuck trying to set the binlog format, and use --allow-on-master (specifying the master in the --host flag).
And here’s an example of the gh-ost statement.
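A sketch of that statement (endpoint, credentials, schema, and the alter itself are placeholders):

  gh-ost \
    --host=aurora-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com \
    --user=ghost --password=... \
    --database=mydb --table=orders \
    --alter="ADD COLUMN notes VARCHAR(255)" \
    --allow-on-master --assume-rbr \
    --execute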
For RDS, the recommended way is to run the tool against a replica. We also need to manually give gh-ost the master’s external IP address (via --assume-master-host).
And here’s an example of the gh-ost statement.
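A sketch of that statement, pointed at the replica with the master’s address supplied by hand (all names are placeholders):

  gh-ost \
    --host=myapp-replica.xxxx.us-east-1.rds.amazonaws.com \
    --assume-master-host=myapp-master.xxxx.us-east-1.rds.amazonaws.com \
    --user=ghost --password=... \
    --database=mydb --table=orders \
    --alter="ADD COLUMN notes VARCHAR(255)" \
    --assume-rbr \
    --execute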
Next contestant up is Tungsten.
Tungsten uses an external replication process as a replacement for the native built-in replication.
Notably, when doing operational tasks such as online schema changes, replication status via “show slave status” is not visible inside the database.
Let’s battle!
The complications with native online DDL remain with Tungsten, and additional complications arise with external tools.
With pt-osc, because of the lack of replication information, the replicas have to be specified manually via the DSN table. And because of the triggers used in pt-osc, binlog format matters. With statement-based replication, pt-osc works out of the box. With row-based replication, you’ll need to use a Tungsten plugin. Mixed replication is not recommended.
With gh-ost, you can’t detect replication because of the lack of status within the database, and you can’t start or stop replication either.
For Tungsten, gh-ost wins. It’s the least difficult to configure correctly for Tungsten, and allows more control than pt-osc.
Gh-ost can be configured with Tungsten to run against the master or a replica. Here’s an overview of the process when running against a replica, which is what we’ll recommend here.
Some important configuration notes: you will need to set log-slave-updates in the Tungsten configuration itself, which is distinct from the database configuration.
In the gh-ost statement, specify the master with assume-master-host, and use the tungsten flag.
Gh-ost requires binlog format to be set to row wherever it is running, so in this case, you’ll set the replica’s binlog-format to row. If this isn’t already set, you will need to restart the Tungsten replicator as well.
And here’s an example of the gh-ost configuration.
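A sketch of that configuration, run against the replica with the --tungsten flag and the master supplied explicitly (host and schema names are placeholders):

  gh-ost \
    --tungsten \
    --host=replica1.example.com \
    --assume-master-host=master1.example.com \
    --database=mydb --table=orders \
    --alter="ADD COLUMN notes VARCHAR(255)" \
    --execute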
Up next, Galera.
Galera has built-in methods for making schema changes, but completely online DDL is not one of them.
You can choose total order isolation, but this blocks all writes to the cluster. Without taking application downtime, we don’t want this.
You can use rolling schema updates, removing one node at a time as described earlier in alternate methods. This method incurs the same complications as described earlier: it is a manual process and requires the application to be forward- and backward-compatible.
Maybe we should use an alternate tool.
Let’s battle!
Alternate tools have their own complications, but do a better job with online alters than the built-in methods.
With pt-osc, only InnoDB is supported, and the online schema upgrade setting has to be total order isolation, so there will be minimal blocking.
With gh-ost, there’s no official support, so you can run into untested or undocumented issues. In Galera 5.6, there was a workaround to use the two-step cutover, but this does not work in Galera 5.7.
And pt-osc wins! It is the more straightforward option for a Galera cluster.
To use pt-osc with Galera, set the online schema upgrade setting to total order isolation. This does not cause downtime, but indicates that Galera will process the changes on all nodes.
Other than that, no special arguments are required. You won’t see any replication lag.
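A minimal sketch, assuming wsrep_OSU_method is switched to total order isolation first (database and table names are placeholders):

  mysql -e "SET GLOBAL wsrep_OSU_method='TOI'"
  pt-online-schema-change \
    --alter "ADD COLUMN notes VARCHAR(255)" \
    --execute D=mydb,t=orders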
(Valerie takes over)
Multisource replication can introduce several operational complexities, online schema change among them.
Here is what multisource replication looks like. It’s often found in environments running MariaDB, because it has been available for a while.
The main disadvantage to using native online alter with this setup is the space on the replica, which is already at a premium. If the change is not metadata-only, then you can get a surprise in disk utilization on the replica.
Let’s battle!
Gh-ost wins here, for the ability to control the change in what can be a complex replication environment. If we run into any surprises with disk utilization or otherwise, we can pause and sort it out.
Run gh-ost on the replica, specifying assume-master-host so gh-ost knows which master the table change should be run on.
Daisy-chained replication introduces similar complexities with operational tasks.
Here’s an example of daisy-chained replication. The middle replica has log-slave-updates turned on, and changes are sent down the line to another replica.
Let’s battle!
Because gh-ost is unwilling to set RBR when it detects a slave, you would need to set binlog_format=ROW on the intermediate slave beforehand.
To avoid that additional change, pt-osc wins here for changes that aren’t metadata only.
Run pt-online-schema-change against the master, of course.
Daisy-chained replication can incur replication lag on a downstream replica as a matter of course. If that’s the case in your environment, exclude the downstream replica in any replication lag checks in the pt-osc configuration to avoid unnecessary pauses in the change.
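A hedged sketch of that, assuming you’ve populated a DSN table (here percona.dsns) listing only the replicas whose lag should be monitored, with the downstream replica left out:

  pt-online-schema-change \
    --alter "ADD COLUMN notes VARCHAR(255)" \
    --recursion-method=dsn=h=master1.example.com,D=percona,t=dsns \
    --max-lag=5 \
    --execute D=mydb,t=orders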
Now for two use cases that are fairly common in our experience: very large tables, and very busy tables.
The first we’ll tackle is very large tables.
What is very large? Let’s define that somewhat vaguely as having many rows of a wide table. That means: it depends. But you know very large when you see it in your environment.
(We will exclude for the moment the once-in-a-lifetime very-very-very large table situation, for which there’s no canned recommendation.)
Let’s battle!
All the things that can complicate a schema change become exacerbated with a very large table. Replication lag becomes laggier. Expensive rollbacks become unaffordable. Disk space headroom is likely not available. You’re getting major resource contention.
In this case, pt-online-schema-change wins, because in both benchmarks and real world cases it has been proven to be faster.
When you’re considering all the complications above, reducing the time a change runs by even 12 hours can be positive.
Be absolutely sure you have the disk space required before starting, though (see very-very-very large table disclaimer above).
Again, test in your environment with your workload. Experiment with setting flags in a test environment (you won’t be able to change those on the fly).
Set high timeouts and retries to allow the tool to work without constant throttling.
Here are some pt-osc typical settings for a very large table.
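Something along these lines, with all names and thresholds as placeholders to tune for your own environment:

  pt-online-schema-change \
    --alter "ADD COLUMN notes VARCHAR(255)" \
    --chunk-time=0.5 --max-lag=10 \
    --max-load Threads_running=50 \
    --critical-load Threads_running=200 \
    --set-vars lock_wait_timeout=60 \
    --no-drop-old-table \
    --execute D=mydb,t=big_table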
The other common scenario is a very busy table. Unlike the case with very large tables, recommendations here can cover very-very-very busy tables as well.
What is “busy”? Busy tables are those with many changes in a short period of time. Think of an e-commerce site on Black Friday.
Busy tables can be large tables, and so this use case is often intertwined with the previous one. There’s a particular feature of busy tables to take into consideration, though, when making a recommendation.
Let’s battle!
Again, the typical complications with online changes become exacerbated with very busy tables.
There is an additional problem with native online DDL: concurrent changes during the alter are recorded in a temporary online alter log (sized by innodb_online_alter_log_max_size), and if that log fills up, the operation fails. If you configure it too high, the table lock at the end of the change, while that log is applied, is extended.
In the case of very busy tables, gh-ost wins. This is true even when the table is also very large, because a very busy table typically has periods when it is even busier (think of the daytime workload versus the middle of the night), and you will want the ability to pause the change during those very-very-very busy times.
Plan ahead, and use your testing to predict how long the change will take. Schedule it so that the cutover is done manually after the copy completes, coordinated with any application changes and kept out of a very-very-very busy window.
Again, external tools are going to work better than native online DDL.
Configure the change to allow for small chunks. In this case, you’ll want low timeouts to avoid performance degradation, and configure for retries.
As mentioned above, schedule the cutover to happen manually at a time when there is low traffic workload.
And here’s how you would configure gh-ost to run a change with a very busy table.
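A hedged sketch, with small chunks, conservative load thresholds, and a postponed (manual) cutover; all names and numbers are placeholders:

  gh-ost \
    --host=replica1.example.com \
    --assume-master-host=master1.example.com \
    --database=mydb --table=hot_table \
    --alter="ADD COLUMN notes VARCHAR(255)" \
    --chunk-size=500 \
    --max-load=Threads_running=25 \
    --critical-load=Threads_running=200 \
    --critical-load-interval-millis=5000 \
    --max-lag-millis=3000 \
    --postpone-cut-over-flag-file=/tmp/ghost.postpone.flag \
    --execute

Deleting the flag file (or sending unpostpone over the socket) is what lets the cutover proceed, which you’d do by hand in a quiet window.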
We’ve refereed a battle for several setups you’re likely to run into. What we haven’t talked about is a change that isn’t going to require a table copy.
A handful of changes are only going to change the metadata. It’s useful to check documentation for your version of MariaDB or MySQL to see if a change is done only to that metadata, before engaging one of the complex processes described above.
A common example is dropping an index. This works on metadata only in most recent versions of both MariaDB and MySQL.
Let’s battle! We’ll say our battle involves a change to drop an index.
And in this case, native online DDL wins.
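For example, dropping an index in place, with placeholder names:

  ALTER TABLE orders DROP INDEX idx_notes, ALGORITHM=INPLACE, LOCK=NONE;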
So keep an eye on release notes and documentation. Database products continue to mature, and continue to allow more online features.
OK, that wraps up our online schema change battle.
Are there any questions?