Recently, I was working with a client who asked about using Percona XtraBackup to take a snapshot of a particular MySQL schema and then reload it with a different schema name on the same server. It caught me off guard because I'd never really thought about it: typically, I've used XtraBackup simply to clone a server (for replication or migration) or to move an existing schema to a new server. However, given the import/export functionality of XtraBackup combined with Percona Server (Exporting and Importing Tables), it did seem possible.
Further discussion with the client clarified the use case: a client makes a mistake and needs to compare the old data to the current data on the live server. Using mysqldump works well for this, but it can be quite slow on larger schemas.
One of the downsides to mysqldump is the need to scan the full tables and, in turn, load that data into and pollute the buffer pool. Note that this can be somewhat mitigated using innodb_old_blocks_time, but that is outside the scope of this post. Similarly, reloading the data will be a very I/O-intensive operation (redo logs, binlogs, etc.) that further pollutes the buffer pool. Enter XtraBackup…
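For the curious, that mitigation is just a dynamic server variable; a minimal sketch (the value here is only an example):

SET GLOBAL innodb_old_blocks_time = 1000; -- keep freshly scanned pages in the "old" sublist for 1s before promotion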
As a quick refresher, XtraBackup works by copying the (potentially dirty) tablespace files while streaming the redo logs to ensure that all transactions are also captured. The --apply-log phase simply utilizes the built-in crash recovery to apply those redo logs to the copied tablespaces and voila, you have a consistent binary backup at a point in time.
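In its simplest form, that is a two step process along these lines (credentials, paths, and the TIMESTAMP directory are placeholders):

# take the backup, streaming redo logs alongside the tablespace copies
innobackupex --user=root --password=xxxx /backups/
# prepare it, i.e. run crash recovery against the copied tablespaces
innobackupex --apply-log /backups/TIMESTAMP/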
When running Percona Server, you can utilize the --export flag during the --apply-log phase and then re-import those files into a running server. However, you need to have the table structures already in place on the destination. Fortunately, those can be generated easily using mysqldump --no-data.
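For example (again, the schema name orig and the paths are placeholders):

# same prepare step, but --export also generates the per-table .exp files
innobackupex --apply-log --export /backups/TIMESTAMP/
# capture just the table definitions from the original schema
mysqldump --no-data orig > orig-schema.sql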
Now, enough with the theory; here is the procedure I used:
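In outline, assuming the original schema is named orig and the side-by-side copy orig_old (credentials, paths, and the TIMESTAMP directory are placeholders to adapt):

1. Take the backup:
innobackupex --user=root --password=xxxx /backups/
2. Prepare it, generating the per-table .exp files:
innobackupex --apply-log --export /backups/TIMESTAMP/
3. Dump just the structure of the original schema:
mysqldump --no-data orig > orig-schema.sql
4. Create the new schema on the live server:
mysql -e "CREATE DATABASE orig_old"
5. Load the table structures into it:
mysql orig_old < orig-schema.sql
6. Allow the running Percona Server to import tablespaces (innodb_file_per_table must also be enabled; older releases used innodb_expand_import instead):
mysql -e "SET GLOBAL innodb_import_table_from_xtrabackup=1"
7. Discard the empty tablespaces, one ALTER TABLE ... DISCARD TABLESPACE per table (see the scripts below):
mysql orig_old < discard-ddl.sql
8. Copy the .ibd and .exp files from the backup into the new schema's directory:
cp /backups/TIMESTAMP/orig/*.ibd /backups/TIMESTAMP/orig/*.exp /var/lib/mysql/orig_old/
9. Fix the file ownership:
chown -R mysql:mysql /var/lib/mysql/orig_old/
10. Import the tablespaces, one ALTER TABLE ... IMPORT TABLESPACE per table:
mysql orig_old < import-ddl.sql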
Now, you have an old version of the schema running side by side with the current version on the same server. This will allow you to compare and restore values (potentially corrupted via user error or other issues) with SQL rather than needing to import from a remote server. This allows for more targeted restores and easier comparison, and lets remote users compare on a live system without needing to be granted access to another "backup" server.
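For example, something as simple as this (the table and column names here are hypothetical) will surface rows that differ between the two copies:

SELECT o.id, o.email AS old_email, n.email AS live_email
FROM orig_old.users AS o
JOIN orig.users AS n USING (id)
WHERE o.email <> n.email;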
As noted by one of my colleagues (thanks, Bill Karwin!), the cumbersome part of this process is the DISCARD/IMPORT TABLESPACE step, as it is done manually for each table (automating it is currently a blueprint for innobackupex). He also shared this helpful snippet to generate all of those statements for steps 7 and 10 as two scripts:
mysql -N -B <<'EOF' > discard-ddl.sql
SELECT CONCAT('ALTER TABLE ', table_name, ' DISCARD TABLESPACE;') AS _ddl
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA='orig' AND ENGINE='InnoDB';
EOF

mysql -N -B <<'EOF' > import-ddl.sql
SELECT CONCAT('ALTER TABLE ', table_name, ' IMPORT TABLESPACE;') AS _ddl
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA='orig' AND ENGINE='InnoDB';
EOF
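With the side-by-side copy living in orig_old as above, the generated files are then simply replayed at steps 7 and 10:

mysql orig_old < discard-ddl.sql
# ... copy the .ibd/.exp files into place ...
mysql orig_old < import-ddl.sql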
For full disclosure, here are the versions I was using on my test VM:
Some other use cases that come to mind for this technique would be: