
xtrabackup-0.3, binaries and stream backup

March 13, 2009 | Posted In: Percona Software


We are releasing the next version of xtrabackup – an online backup solution for MySQL 5.0 / 5.1 with the standard InnoDB version, the InnoDB plugin, and XtraDB. We still consider it an alpha version, though it has shown perfectly stable results in our tests.

Let me address two frequently asked questions about xtrabackup:
1) Does it work only with XtraDB, or with InnoDB as well?
A: xtrabackup is designed to work with the standard version of InnoDB in MySQL 5.0. MySQL 5.1 with standard InnoDB or the InnoDB plugin is also supported. It can fully serve as a drop-in replacement for the innobackup tool and for InnoDB Hot Backup online backup.

2) Do we need to run a patched MySQL, since the build instructions mention a patch for MySQL?
A: xtrabackup can be run against any version of MySQL: community release, enterprise release, Percona builds, OurDelta distributions. You do not need to patch MySQL to run backups. The patch for MySQL is needed only to build the xtrabackup binaries from source code. If you are not comfortable with building from source, we provide binaries for xtrabackup-0.3.

The new version of xtrabackup contains a feature that InnoDB Hot Backup is missing: the backup can be produced as a stream and copied to a remote box or tape, or compressed, without the need to store a full copy of the database on local disks. Very often there is no space to store a second copy of the database, and you would have to mount SAN/NAS storage for extra space. Now you can run xtrabackup in streaming mode and get the copy on a remote box.
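As a sketch of how such a streaming run might look (the innobackupex flag names in the comments below are assumptions, not taken from this post), the backup output is simply piped to the next consumer:

```shell
# Hypothetical streaming invocations (option names assumed, not from this post):
#   innobackupex-1.5.1 --stream=tar ./ | ssh user@remotebox "cat - > /backups/db.tar"
#   innobackupex-1.5.1 --stream=tar ./ | gzip > /backups/db.tar.gz
# The pipe pattern itself, demonstrated with plain tar on a toy "datadir":
mkdir -p /tmp/demo_datadir
echo "dummy page data" > /tmp/demo_datadir/ibdata1
# Stream the directory and compress on the fly, no intermediate copy on disk:
tar cf - -C /tmp demo_datadir | gzip > /tmp/demo_backup.tar.gz
gzip -t /tmp/demo_backup.tar.gz && echo "stream backup OK"
```

The point of the pipe is that the full archive never has to exist on the source machine's disks; only the compressed or remote copy is materialized.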
Other new features in 0.3: it can be compiled against MySQL 5.1, and it supports OS X (a sponsored feature).

We are now working on the next feature, one that is not available to InnoDB users in any form so far: incremental and differential backups. With it you will be able to copy only the data CHANGED since a given backup; there will be no need to copy a whole 250GB datafile if only 1GB of data changed since yesterday.
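The idea, as elaborated in the comments below, is that every InnoDB page carries the LSN of its last modification in its header (FIL_PAGE_LSN, 8 bytes at byte offset 16 of each 16KB page), so an incremental backup can copy only pages whose LSN exceeds the LSN recorded by the last full backup. A toy illustration of that check (not the actual xtrabackup code; the datafile here is fabricated):

```shell
PAGE_SIZE=16384
BACKUP_LSN=200   # LSN recorded at the time of the (pretend) full backup

# Build a toy two-page "datafile" whose page LSNs are 100 and 500
# (8-byte big-endian values written at offset 16 of each page):
dd if=/dev/zero of=/tmp/toy_ibd bs=$PAGE_SIZE count=2 2>/dev/null
printf '\0\0\0\0\0\0\0\144' | dd of=/tmp/toy_ibd bs=1 seek=16 conv=notrunc 2>/dev/null
printf '\0\0\0\0\0\0\1\364' | dd of=/tmp/toy_ibd bs=1 seek=$((PAGE_SIZE + 16)) conv=notrunc 2>/dev/null

# Scan every page; print the ones an incremental backup would copy:
for page in 0 1; do
  off=$((page * PAGE_SIZE + 16))
  lsn=$(od -An -tu1 -j"$off" -N8 /tmp/toy_ibd |
        awk '{for(i=1;i<=NF;i++) v=v*256+$i} END {print v}')
  if [ "$lsn" -gt "$BACKUP_LSN" ]; then
    echo "copy page $page (LSN $lsn > $BACKUP_LSN)"
  fi
done
```

Only page 1 (LSN 500) passes the filter; page 0 (LSN 100) predates the full backup and is skipped. The scan still has to read every page header, which is why, as noted in the comments, there is no cheap way to count changed pages in advance.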

You can download the current binaries – RPMs for RHEL4 and RHEL5 (also compatible with CentOS) and DEBs for Debian/Ubuntu – here:
https://www.percona.com/mysql/xtrabackup/0.3/.
At the same link you can find a general .tar.gz with binaries that can be run on any modern Linux distribution, as well as the source code if you do not want to deal with Bazaar and Launchpad.

The project lives on Launchpad: https://launchpad.net/percona-xtrabackup, and you can report bugs to the Launchpad bug system:
https://launchpad.net/percona-xtrabackup/+filebug. The documentation is available on our Wiki.

For general questions use our Percona-discussions group, and for development questions the Percona-dev group.

For support, commercial, and sponsorship inquiries, contact Percona.

Vadim Tkachenko

Vadim Tkachenko co-founded Percona in 2006 and serves as its Chief Technology Officer. Vadim leads Percona Labs, which focuses on technology research and performance evaluations of Percona’s and third-party products. Percona Labs designs no-gimmick tests of hardware, filesystems, storage engines, and databases that surpass the standard performance and functionality scenario benchmarks. Vadim’s expertise in LAMP performance and multi-threaded programming help optimize MySQL and InnoDB internals to take full advantage of modern hardware. Oracle Corporation and its predecessors have incorporated Vadim’s source code patches into the mainstream MySQL and InnoDB products. He also co-authored the book High Performance MySQL: Optimization, Backups, and Replication 3rd Edition.

11 Comments

  • Wonderful!
    The streaming option is great.

    Vadim: can you please elaborate on how incremental backups are made? I’m assuming you’re not using binlogs, but rather reading internal ibdata. In what manner does InnoDB allow for getting a delta from a given timestamp / transaction pos?
    Thanks

  • If this thing does what you say it does, we’ve definitely found our new favored backup tool. Looking forward to seeing a final version of this tool. Thank you for your work.

  • Shlomi,

    Incremental backups are not finished yet; you can’t use them in the current version.

    You are right – we are reading the internal ibdata. There is such a thing as the LSN (Log Sequence Number), and each page has one, so basically we can retrieve only the pages that are newer than a given LSN.

  • Vadim,

    thanks for the info. In that case, the incremental backup would be those pages for which the LSN is larger than the one recorded in the full backup, right?
    How do you handle pages which contain uncommitted data? That is, tx#1 started, tx#2 started and committed, and due to the dirty-pages percentage (or another reason), the page for tx#2 was flushed, along with the changes of tx#1.
    Do you also back up the undo buffer, then?

    It’s also interesting to estimate the percentage of pages which will change between incremental backups. Is there some way to ask InnoDB how many pages have an LSN greater than “x”?

  • Shlomi,

    Yes, in the incremental backup we will copy only the pages with an LSN greater than the one in the full backup.
    We do back up the undo buffer; it is located in the ibdata1 system tablespace, which also holds the insert buffer.

    There is no easy way to tell how many pages changed since LSN ‘x’ – other than scanning all the pages on disk.

  • Would it be possible to output the binary log position that corresponds to the end of the backup? I don’t know if you have access to this info, but that would allow for a hot backup to be made, and then you could use the normal mysql binary logs to deal with changes made after the backup.

  • Apologies; I looked at the sample output and see that the binary log position is already printed at the time of the backup. I’m assuming all uncompleted transactions after that point in time are rolled back as part of the backup process from the InnoDB logs.

  • Neil,

    There is a file, xtrabackup_binlog_info, which contains the binary log position corresponding to the time of the backup.
    To do point-in-time recovery you should start from this position.
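The recovery step Vadim describes might be sketched as follows (the two-column layout of xtrabackup_binlog_info and the exact mysqlbinlog invocation are assumptions here, not taken from this thread):

```shell
# Hypothetical sketch: parse xtrabackup_binlog_info (format assumed to be
# "<binlog file><TAB><position>") and replay changes made after the backup.
printf 'mysql-bin.000042\t1234\n' > /tmp/xtrabackup_binlog_info
read -r BINLOG_FILE BINLOG_POS < /tmp/xtrabackup_binlog_info
echo "replay from $BINLOG_FILE at position $BINLOG_POS"
# On a real server, after restoring the backup you would apply the binlogs:
#   mysqlbinlog --start-position="$BINLOG_POS" /var/log/mysql/"$BINLOG_FILE" | mysql
```

Everything committed before the recorded position is already in the restored datafiles; replaying the binary logs from that position forward brings the server to the desired point in time.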

  • innobackupex fails if I have innodb_data_home_dir & innodb_log_group_home_dir in my my.cnf:


    innodb_data_home_dir = /var/lib/mysql
    innodb_log_group_home_dir = /var/lib/mysql

    #>innobackupex-1.5.1 --databases=saturn --user=root --password=******* --slave-info /tmp

    InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy.
    All Rights Reserved.

    This software is published under
    the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.

    IMPORTANT: Please check that the backup run completes successfully.
    At the end of a successful backup run innobackup
    prints “innobackup completed OK!”.

    innobackupex: Using mysql Ver 14.14 Distrib 5.1.31, for unknown-linux-gnu (x86_64) using readline 5.1
    innobackupex: Using mysql server version 5.1.31-percona-log

    innobackupex: Created backup directory /tmp/2009-03-21_00-40-56
    090321 00:40:56 innobackupex: Starting mysql with options: --unbuffered --password=****** --user=root
    090321 00:40:56 innobackupex: Connected to database with mysql child process (pid=32443)
    090321 00:41:00 innobackupex: Connection to database server closed

    090321 00:41:00 innobackupex: Starting ibbackup with command: xtrabackup --backup --suspend-at-end --target-dir=/tmp/2009-03-21_00-40-56
    innobackupex: Waiting for ibbackup (pid=32450) to suspend
    innobackupex: Suspend file '/tmp/2009-03-21_00-40-56/xtrabackup_suspended'

    xtrabackup Ver alpha-0.3 for 5.0.77 unknown-linux-gnu (x86_64)
    >> log scanned up to (34 1558149409)
    090321 0:41:00 InnoDB: Operating system error number 2 in a file operation.
    InnoDB: The error means the system cannot find the path specified.
    InnoDB: File name /tmp/2009-03-21_00-40-56/var/lib/mysql/ibdata1
    InnoDB: File operation call: 'open'.
    InnoDB: Cannot continue operation.
    innobackupex: Error: ibbackup child process has died at /usr/bin/innobackupex-1.5.1 line 427.
