Streaming Percona XtraBackup for MySQL to Multiple Destinations

Have you ever had to provision a large number of instances from a single backup? The most common use case is moving to new hardware, but there are other scenarios as well. This kind of procedure can involve multiple backup/restore operations, which can easily become a pain to administer. Let’s look at a potential way to make it easier using Percona XtraBackup, a tool that provides fast and reliable backups of your MySQL data while the system is running.

Leveraging Named Pipes

As per the Linux manual page, a FIFO special file (a named pipe) is similar to a pipe except that it is accessed as part of the filesystem. It can be opened by multiple processes for reading or writing.

For this particular case, we can leverage FIFOs and the netcat utility to build a “chain” of streams from one target host to the next.

The idea is we take the backup on the source server and pipe it over the network to the first target. In this target, we create a FIFO that is then piped over the network to the next target. We can then repeat this process until we reach the final target.

Since a FIFO can be opened by multiple processes, we can use tee to duplicate the incoming stream: one copy is used to restore the backup locally, while the other is piped over to the next host.
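The duplication trick can be sketched with plain shell tools, before any network transfer is involved. Here tee writes one copy of a stream into a FIFO (read by one consumer) while passing another copy along on stdout; the file names are purely illustrative.

```shell
# create a named pipe (FIFO)
mkfifo /tmp/demo.fifo

# consumer 1: read whatever arrives on the FIFO
cat /tmp/demo.fifo > /tmp/copy1 &

# tee duplicates stdin: one copy goes into the FIFO,
# the other continues on stdout (here, into a file)
printf 'backup bytes' | tee /tmp/demo.fifo > /tmp/copy2

wait
# /tmp/copy1 and /tmp/copy2 now hold identical data
```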


In order to perform the following operations, we need the netcat, percona-xtrabackup, and qpress packages installed.

Assume we have the following servers:

  • source, target1, target2, target3, target4

We can set up a “chain” of streams as follows:

  • source -> target1 -> target2 -> target3 -> target4

Looking at the representation above, we have to build the chain in reverse order to ensure the “listener” end is started before the “sender” tries to connect. Let’s see what the process looks like:

  1. Create listener on the final node that extracts the stream (e.g. target4):
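A sketch of the final listener, assuming OpenBSD-style netcat, port 9999, and /var/lib/mysql as the restore directory (all of these are assumptions to adjust for your environment):

```shell
# target4: last node in the chain, so there is nothing to forward.
# Listen on the assumed port and extract the xbstream directly.
nc -l 9999 | xbstream -x -C /var/lib/mysql -p 4
```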

    Note: the -p argument specifies the number of worker threads for reading/writing. It should be sized based on the available resources.
  2. Setup the next listener node. On target3:
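On the intermediate nodes, the listener both extracts a local copy and forwards the same bytes downstream. A sketch for target3, with the port, FIFO path, and restore directory assumed:

```shell
# target3: create a FIFO to carry the forwarded copy of the stream
mkfifo /tmp/backup.fifo

# reader: send the FIFO contents on to the next node in the chain
cat /tmp/backup.fifo | nc target4 9999 &

# receive from upstream, duplicate into the FIFO with tee,
# and extract the other copy locally at the same time
nc -l 9999 | tee /tmp/backup.fifo | xbstream -x -C /var/lib/mysql -p 4
```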
  3. Repeat step 2 for all the remaining nodes in the chain (minding the order).
On target2:
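Same pattern as before, only the next hop changes (port and paths remain assumptions):

```shell
# target2: receive, extract locally, and forward to target3
mkfifo /tmp/backup.fifo
cat /tmp/backup.fifo | nc target3 9999 &
nc -l 9999 | tee /tmp/backup.fifo | xbstream -x -C /var/lib/mysql -p 4
```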

On target1:
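And the first hop, which will receive the stream directly from the source (again with assumed port and paths):

```shell
# target1: receive from the source, extract locally, forward to target2
mkfifo /tmp/backup.fifo
cat /tmp/backup.fifo | nc target2 9999 &
nc -l 9999 | tee /tmp/backup.fifo | xbstream -x -C /var/lib/mysql -p 4
```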

    Note that we can introduce as many intermediate targets as we need.
  4. Finally, we start the backup on the source, and send it to the first target node:
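A sketch of the source-side command; the port, temporary target directory, and thread counts are assumptions to size against the available hardware:

```shell
# source: take a compressed backup and stream it as xbstream
# over the network to the first target in the chain
xtrabackup --backup --stream=xbstream --compress \
  --parallel=4 --compress-threads=4 \
  --target-dir=/tmp/backup | nc target1 9999
```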

If we got it right, all the servers should start populating their target directories.

Wrapping Up

After the backup streaming is done, we need to decompress and prepare the backup on each node:
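Something along these lines on every node, assuming the same restore directory as above (decompression requires qpress to be installed):

```shell
# decompress the .qp files in place, then run the prepare phase
# so the data files are consistent and ready to use
xtrabackup --decompress --parallel=4 --target-dir=/var/lib/mysql
xtrabackup --prepare --target-dir=/var/lib/mysql
```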

Also, adjust permissions and start the restored server:
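For example, assuming a systemd-managed server and the default datadir:

```shell
# hand ownership of the restored files to the mysql user, then start
chown -R mysql:mysql /var/lib/mysql
systemctl start mysql
```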


We have seen how using named pipes, in combination with netcat, can make our lives easier when having to distribute a single backup across many different target hosts. As a final note, keep in mind that netcat sends the output over the network unencrypted. If transferring over the public internet, it makes sense to use Percona XtraBackup encryption, or replace netcat with ssh.
