
Backing up binary log files with mysqlbinlog

 | January 18, 2012 |  Posted In: Insight for DBAs, MySQL


Backing up binary logs is an essential part of a good backup infrastructure, as it gives you the possibility of point-in-time recovery. After restoring a database from backup, you have the option to recover changes that happened after the backup was taken. The problem with this approach was that you had to do periodic filesystem-level backups of the binary log files, which could still lead to data loss depending on how often you backed them up.
Recently, in MySQL 5.6, mysqlbinlog gained a new feature that supports connecting to remote MySQL instances and dumping binary log data to local disk ( http://dev.mysql.com/doc/refman/5.6/en/mysqlbinlog-backup.html ). This can be used as the foundation of our live binary log backups.

The wrapper script below will connect to the remote server specified in the config and ensure the mysqlbinlog utility is up and running. By default, if you do not supply a binary log file name, mysqlbinlog deletes and overwrites all of them, which is undesired behaviour in our case, so we have to supply the name of the last binary log. That last file will still be overwritten, hence we make a backup of it first.
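The script itself is missing from this copy of the post. The sketch below reconstructs its logic from the description and from the comment thread (which references $MBL, $BACKUPDIR, $LASTFILE and a `cd $BACKUPDIR` near the top); the paths, the glob, and the restart loop are assumptions, not the original code:

```shell
#!/bin/bash
# Reconstructed sketch of the live-binlog wrapper; variable names follow
# the comment thread, everything else is an assumption.
. /etc/livebinlog/server2.conf   # hypothetical config path

cd "$BACKUPDIR" || exit 1

while true; do
    # mysqlbinlog will overwrite the last binlog, so keep a timestamped
    # copy first. NB: the backup dir must already be seeded with at least
    # one binlog file, otherwise $LASTFILE is empty.
    LASTFILE=$(ls mysql-bin.[0-9]* 2>/dev/null | grep -v orig | tail -n 1)
    [ -n "$LASTFILE" ] && cp "$LASTFILE" "$LASTFILE.orig.$(date +%s)"

    # --stop-never keeps streaming events until the connection drops;
    # the loop then restarts the dump.
    "$MBL" --raw --read-from-remote-server --stop-never \
        --host "$MYSQLHOST" --port "$MYSQLPORT" \
        -u "$MYSQLUSER" -p"$MYSQLPASS" "$LASTFILE"

    echo "$(date) mysqlbinlog exited, retrying in 10 seconds" >&2
    sleep 10
done
```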

Configuration file:
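The config block is also missing here; a plausible sketch, using the variable names seen in the comment thread and illustrative values:

```shell
# /etc/livebinlog/server2.conf -- illustrative values, not from the post
MBL=/usr/bin/mysqlbinlog               # path to the MySQL 5.6 mysqlbinlog binary
MYSQLHOST=server2.example.com
MYSQLPORT=3306
MYSQLUSER=repl                         # needs the REPLICATION SLAVE privilege
MYSQLPASS=secret
BACKUPDIR=/var/backup/binlogs/server2
```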

Starting in the background with logging to /var/log/livebinlog/server2.log:
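The original start command is not preserved; something along these lines, with a hypothetical script path:

```shell
# nohup detaches the wrapper from the terminal; the redirect captures
# both stdout and stderr in the log file mentioned above.
nohup /usr/local/bin/live-binlog-backup.sh >> /var/log/livebinlog/server2.log 2>&1 &
```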

As a great addition, older logfiles that have already been rotated can be checked against the MySQL server’s copies to verify that they are identical. For this purpose you can use rsync in “dry-run” mode.
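For instance (hostname and paths here are illustrative):

```shell
# Dry-run (-n) plus checksum comparison (-c): lists any rotated binlogs in
# the backup dir that differ from the server's copies, without copying.
rsync -n -c -v server2:/var/lib/mysql/mysql-bin.0* /var/backup/binlogs/server2/
```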

Please note that MySQL 5.6 is not yet released as GA, but you can use its mysqlbinlog to back up MySQL 5.1 and 5.5 databases.


8 Comments

  • Many thanks – I’m right there developing my homebrew backup system. Your script ran straight out of the box for me – thanks again. I get this notice: “nohup: ignoring input and redirecting stderr to stdout”. Google says something about standard error, standard output and nohup.out but it’s a bit above my paygrade.

  • Many thanks for this addition. I have tried this on my production system and all went well, with one exception: once all the binlogs have been backed up and we are left with the last one, we start getting multiple backups of the same binlog, identified only by timestamp. Is this how it’s meant to work?

  • Thanks for this blog post, it opened my eyes on the need to make live backup of binlogs.

    Unfortunately we work on Windows, so I started to work on a PowerShell script to achieve this with some more features.

    Result is on my blog : http://mysql.on.windows.free.fr/index.php/mysql-binlog-dumper/

    Thanks a lot for inspiration.

    Hope this helps

  • Two questions about this line
    $MBL --raw --read-from-remote-server --stop-never --host $MYSQLHOST --port $MYSQLPORT -u $MYSQLUSER -p$MYSQLPASS $LASTFILE
    1. At start, $LASTFILE would be null, and the script wouldn’t work as you don’t specify the binlog file
    2. If I ran this command, the binlog would end up in the script dir, not in $BACKUPDIR

    Might be something wrong!
    Would be glad if answered!

  • @jadd
    I agree with your $LASTFILE statement, this should be populated by looking at the binlog folder instead of the backup folder, or by running show binary logs on the server.
    The $BACKUPDIR would contain the backup files, as the script does a cd $BACKUPDIR on line 4 before doing anything else.

    Other than that, the script looks ok.

  • If the remote server has used PURGE BINARY LOGS, then you must first find the remote server’s first binary log name; like jadd said, the first binary log name should be present in the backup dir. Here I collect what needs to be done:
    1: the remote server must have server-id set
    2: grant the repl user the REPLICATION CLIENT privilege so it can find the remote server’s first binary log name
    3: the first log name in the backup dir must not be null; use this command to find it:
    FIRSTFILE=$(mysql -h $HOST -u$USER -p$PASSWD -Bse "show binary logs" | grep -v 0$ | sort -n | head -n1 | awk '{print $1}')
