Streaming MySQL Backups with Percona XtraBackup – Another Alternative

Today we are going to present another way to make a database copy between servers using the Percona XtraBackup utility.  How does it differ from the many other methods already on the web?  Not much, but it is interesting in terms of performance and availability.

We will combine the xbstream utility with the power of pigz and socat to take advantage of multiple processors where available, and at the same time reduce network bandwidth usage in cases where the network is the bottleneck.  Let’s explain each component:

socat: This stands for SOcket CAT. It is a utility for data transfer between two addresses.

What makes socat so versatile is the fact that an address can represent a network socket, any file descriptor, a Unix domain datagram or stream socket, TCP and UDP (over both IPv4 and IPv6), SOCKS 4/4a over IPv4/IPv6, SCTP, PTY, datagram and stream sockets, named and unnamed pipes, raw IP sockets, OpenSSL, or, on Linux, even any arbitrary network device.

pigz: This stands for parallel implementation of gzip, and it is a fully functional replacement for gzip that exploits multiple processors and multiple cores to the hilt when compressing data.

xbstream: a streaming archive utility included with Percona XtraBackup that can process several files in parallel.

Packages required: pigz, socat, and, of course, Percona XtraBackup

Servers Involved in the Task

Source: the source database where the backup will be taken (MySQL 5.7 installed on CentOS 7.8)

Target: the destination where the backup will be sent (MySQL 5.7 installed on CentOS 7.8)


1. Install the required packages in the Source and Target nodes:

If you haven’t installed Percona XtraBackup (on both servers), please follow the instructions in the Percona XtraBackup installation documentation.
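As a sketch, on CentOS 7 the packages can be installed roughly like this (repository setup via the percona-release tool; package names are for the XtraBackup 2.4 series, which matches MySQL 5.7):

```shell
# pigz and socat come from the EPEL repository
sudo yum install -y epel-release
sudo yum install -y pigz socat

# Enable the Percona repository and install XtraBackup 2.4 (for MySQL 5.7)
sudo yum install -y https://repo.percona.com/yum/percona-release-latest.noarch.rpm
sudo yum install -y percona-xtrabackup-24
```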

Make sure you have a user with proper privileges to do the backup on Source db:
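A minimal sketch of such a user, with the privileges XtraBackup needs on MySQL 5.7 (the user name and password here are placeholders; adjust them to your environment):

```shell
# Run on the Source server; bkpuser/S3cret! are illustrative values
mysql -uroot -p -e "CREATE USER 'bkpuser'@'localhost' IDENTIFIED BY 'S3cret!';
GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'bkpuser'@'localhost';
FLUSH PRIVILEGES;"
```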

2. On Target, execute:

Stop current database service (if any):
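Assuming a default systemd service name for MySQL on CentOS:

```shell
# Stop MySQL on the Target so the datadir can be replaced
sudo systemctl stop mysqld
```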

Delete datadir contents (assuming it was installed with default settings) and make sure you’re logged in to the Target server!!
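With the default datadir location, this amounts to something like the following. This is destructive, so verify the host before running it:

```shell
# DANGER: wipes the MySQL data directory -- confirm you are on the Target!
hostname
sudo rm -rf /var/lib/mysql/*
```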

Finally, we’re going to execute the command to receive the backup from the source (Source):
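A sketch of the receiving pipeline: socat listens on a TCP port (4444 here is an arbitrary choice), pigz decompresses the stream, and xbstream unpacks it into the empty datadir. Thread counts are illustrative; tune them to your CPU count:

```shell
# On the Target: receive, decompress, and unpack the backup stream
socat -u TCP-LISTEN:4444,reuseaddr STDOUT | pigz -dc | xbstream -x -C /var/lib/mysql --parallel=4
```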

3. On Source, execute the command to send the backup to target (Target).
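The sending side mirrors the pipeline: xtrabackup streams the backup in xbstream format, pigz compresses it in parallel, and socat ships it over TCP. Hostname, port, credentials, and thread counts below are placeholders:

```shell
# On the Source: back up, compress, and send to the Target
xtrabackup --backup --stream=xbstream --parallel=4 \
  --user=bkpuser --password='S3cret!' --target-dir=./ \
  | pigz -c -p4 | socat -u STDIN TCP:target_host:4444
```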

You will see an output like this:

4. Once finished with step 3, you will see an output like this on the Target node:

And step 2 will be finished too, so you must execute the following commands in Target node:
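These commands would look roughly as follows, assuming the default datadir and service name (the --use-memory value is an illustrative tuning knob):

```shell
# Prepare the backup so the data files are consistent,
# then fix ownership and start MySQL on the restored datadir
xtrabackup --prepare --use-memory=2G --target-dir=/var/lib/mysql
sudo chown -R mysql:mysql /var/lib/mysql
sudo systemctl start mysqld
```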

From the documentation:

After you made a backup with the xtrabackup --backup option, you’ll first need to prepare it in order to restore it. Data files are not point-in-time consistent until they’ve been prepared, because they were copied at different times as the program ran, and they might have been changed while this was happening. If you try to start InnoDB with these data files, it will detect corruption and crash itself to prevent you from running on damaged data. The xtrabackup --prepare step makes the files perfectly consistent at a single instant in time, so you can run InnoDB on them.

And you have cloned a new database!

You can, of course, set up this new database as a replica with these additional steps executed in the Target node:

Look at the content of the file xtrabackup_binlog_info, it will be something like:
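The file holds the binary log file name and position at the moment the backup was taken; the values below are purely illustrative:

```shell
cat /var/lib/mysql/xtrabackup_binlog_info
# e.g.:  mysql-bin.000007    74701245
```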

(We’re assuming you have created the following user/grants in the Source db; if not, create them.)
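A sketch of such a replication user (name, host pattern, and password are placeholders):

```shell
# Run on the Source server
mysql -uroot -p -e "CREATE USER 'repl'@'%' IDENTIFIED BY 'R3plica!';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';"
```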

Connect to the database, and run:
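On MySQL 5.7 this would be a CHANGE MASTER TO statement using the coordinates from xtrabackup_binlog_info; the host, credentials, and coordinates below are illustrative:

```shell
# Run on the Target server
mysql -uroot -p -e "CHANGE MASTER TO
  MASTER_HOST='source_host',
  MASTER_USER='repl',
  MASTER_PASSWORD='R3plica!',
  MASTER_LOG_FILE='mysql-bin.000007',
  MASTER_LOG_POS=74701245;
START SLAVE;"
```

Afterwards, `SHOW SLAVE STATUS\G` should report both the IO and SQL threads running.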

And that’s all.  Happy streaming!


Comment (1)

  • Simon J Mudd

    Hi Fernando,

    A couple of thoughts:

    1. Shouldn’t you be demonstrating this with MySQL 8.0 as that’s the current GA version? MySQL 5.7 is fine, but quite old. There are plenty of good reasons to use 8.0 in favour of 5.7.

    2. The use of pigz: Provide some performance statistics vs using xtrabackup in a “traditional fashion” so we can see the gain that’s possible in speed. It may be sometimes the network is the bottleneck and others it is not so having an idea of the improvement you can get, even in your test environment would be useful. Try this for a ~500 GB to 1 TB sized database as that’s not a trivial size and is large enough for improved performance to make a difference.

    3. I’m also curious: Is xtrabackup speed on 5.7 any different to 8.0, either doing a standard cross server backup or using pigz? We rarely look at comparisons in backup speeds so it would be interesting to see if there are any noticeable differences for what from the user perspective is likely to be the same “dataset”. I wouldn’t expect any but there might be.

    4. How do such backup / clone speeds compare to using something like “native cloning”? That’s new in 8.0 and requires copying between servers of the exact same version, but I remember looking a long time ago and performance (low clone time) was pretty good.

    A couple of extra comments on these points might make a good follow up to this post.



    January 8, 2021 at 2:03 pm
