
xtrabackup-0.5, bugfixes, incremental backup introduction

April 7, 2009 | Posted In: Percona Software


I am happy to announce the next build of our backup tool. This version contains several bugfixes and introduces an initial implementation of incremental backup.

Incremental backup works as follows. When you take a regular backup, at the end of the procedure you will see output like this:
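(The message wording below is reconstructed from later xtrabackup versions and shown with the LSN from this post, so treat the exact text as illustrative:)

xtrabackup: The latest check point (for incremental): '1319:813219999'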

which gives the starting point 1319:813219999 for subsequent incremental backups. This point is the LSN of the last checkpoint operation. Now, the next time you want to copy only the changed pages, you can do an incremental run:
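For example (a sketch only: the --incremental-lsn option name and the target directory here are assumptions; see the draft page linked below for the exact syntax):

xtrabackup --backup --target-dir=/data/backup/inc1 --incremental-lsn=1319:813219999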

Only the changed pages (those with an LSN greater than the given one) will be copied to the specified directory. You may have several incremental directories and apply them one by one; see also the alternative sketched below.
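As a commenter notes below, each new increment can instead point at the previous one with --incremental-basedir, which lets the backup process read the starting LSN from that directory's checkpoint file. A hedged sketch with hypothetical paths:

xtrabackup --backup --target-dir=/data/backup/inc2 --incremental-basedir=/data/backup/inc1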

The current version does not allow copying incremental changes to a remote box or to a stream; it is a local copy only for now, but we are going to change that in the next release. Besides printing the last checkpoint LSN to the output, we also store it in the xtrabackup_checkpoint file so that scripts can use it, as sketched below.
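A minimal script sketch, assuming the checkpoint file stores the LSN in a line like "to_lsn = 1319:813219999" (the exact key name in 0.5 may differ) and using hypothetical paths:

# Hypothetical format: pull the LSN out of a "to_lsn = <LSN>" line in the previous increment
LSN=$(awk '/to_lsn/ {print $3}' /data/backup/inc2/xtrabackup_checkpoint)
# Start the next increment from that LSN
xtrabackup --backup --target-dir=/data/backup/inc3 --incremental-lsn="$LSN"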
You can read more about incremental backups on our draft page: https://www.percona.com/docs/wiki/percona-xtrabackup:spec:incremental

You can download the current binaries here: https://www.percona.com/mysql/xtrabackup/0.5/. There are RPMs for RHEL4 and RHEL5 (also compatible with CentOS), DEBs for Debian/Ubuntu, and a tar.gz for Mac OS / Intel 64-bit.
At the same link you can find a generic .tar.gz with binaries that run on any modern Linux distribution, as well as the source code if you do not want to deal with Bazaar and Launchpad.

The project lives on Launchpad: https://launchpad.net/percona-xtrabackup, and you can report bugs to the Launchpad bug system: https://launchpad.net/percona-xtrabackup/+filebug. The documentation is available on our Wiki.

For general questions use our Percona-discussions group, and for development questions the Percona-dev group.

For support, commercial, and sponsorship inquiries, contact Percona.

Vadim Tkachenko

Vadim Tkachenko co-founded Percona in 2006 and serves as its Chief Technology Officer. Vadim leads Percona Labs, which focuses on technology research and performance evaluations of Percona’s and third-party products. Percona Labs designs no-gimmick tests of hardware, filesystems, storage engines, and databases that surpass the standard performance and functionality scenario benchmarks. Vadim’s expertise in LAMP performance and multi-threaded programming helps optimize MySQL and InnoDB internals to take full advantage of modern hardware. Oracle Corporation and its predecessors have incorporated Vadim’s source code patches into the mainstream MySQL and InnoDB products. He also co-authored the book High Performance MySQL: Optimization, Backups, and Replication, 3rd Edition.

6 Comments

  • Great stuff! Just testing this. From the documentation it seems to show a different use case for incremental.

    It states you just pass the last increment’s directory in as --incremental-basedir= and the backup process should read the LSN from the checkpoint file in that last increment’s directory. I’ve tested that and it seems to work. This suggests we just need to know the path to our last increment and pass it in each time we take an increment, and we don’t actually need to script the reading of the incremental LSN.

    Have I understood this correctly?

    Thanks again
    Leon

  • Well, I am very impressed so far. I have mounted a remote drive using FUSE and sshfs and am incrementally backing up straight to that, which is working great. I’m not using it in production yet, as I need to test the restores.

    I have a couple of questions, if you could possibly answer them, as I can’t seem to make sense of the documents.

    When doing a --prepare, let’s say the current database server has died: we first download our backup to a new server at /data/backup, and we then want to restore the data to /data. What would the prepare command look like?

    Also, are triggers/user accounts backed up, or is it just data and schema?

    Thanks very much. I am honestly very impressed with it and am looking forward to compression etc. in future releases.

  • Leon,

    To restore the data you can copy the backup directly to the final /data directory and run:
    innobackupex --apply-log /data
    It should execute the prepare and create iblogs ready to use; that is, MySQL will be ready to start.

    As for triggers/users, it depends on which tool you use.
    The xtrabackup binary works only with InnoDB tables.
    innobackupex handles the whole instance, including MyISAM tables, user accounts, triggers, views, etc.

    As for compression, you can already use it in stream mode, i.e.:
    innobackupex --stream=tar tmp | gzip - > backup.tar.gz

    Are you looking for a different compression method?

  • Hi,
    Thanks for the response. I am using InnoDB. I have actually just been using xtrabackup at this point to create backups. As I am doing incremental backups, I didn’t think compression was supported yet.

    Is it OK to use the xtrabackup command to back up and innobackupex to restore?

    The above prepare looks pretty easy; how would increments be added after the main backup?

    Thanks again

    Leon

  • Actually, I see that incremental is not supported in innobackupex; do you know when this will be available?
    For now I guess I need to back up triggers/user accounts myself and use xtrabackup to get the data.
    How would the prepare work using xtrabackup with the above example?

    Would I copy all my backup data onto a new server at /data and issue this?

    xtrabackup --prepare --datadir=/data

    Then how would incrementals work? Let’s say the incremental is at /data/backup/02; would this command be correct usage?

    xtrabackup --prepare --datadir=/data --incremental-dir=/data/backup/02
