In this blog post, we will compare the performance of backing up a MySQL database using mysqldump, the MySQL Shell Instance Dump utility, mysqlpump, mydumper, and Percona XtraBackup. All of these options are open source and free for the entire community.
To start, let’s see the results of the test.
The benchmark was run on an m5dn.8xlarge instance with 128 GB of RAM, 32 vCPUs, and two NVMe disks of 600 GB each (one for the backup and the other for the MySQL data). The MySQL version was 8.0.26, configured with an 89 GB buffer pool, 20 GB of redo log, and a sample database of 177 GB (more details below).
We can observe the results in the chart below:

And if we analyze the chart only for the multi-threaded options:

For each tool, I ran the command three times, using 16, 32, and 64 threads. The exception is mysqldump, which has no parallel option and only runs in single-threaded mode.
We can observe some interesting outcomes, which I will discuss after the test details below.
These are the specs of the benchmark:
The my.cnf configuration:
[mysqld]
innodb_buffer_pool_size = 89G
innodb_log_file_size = 10G
For the test, I used sysbench to populate MySQL. To load the data, I chose the tpcc method:
$ ./tpcc.lua --mysql-user=sysbench --mysql-password='sysbench' --mysql-db=percona --time=300 --threads=64 --report-interval=1 --tables=10 --scale=100 --db-driver=mysql prepare
Before starting the comparison, I ran mysqldump once and discarded the results in order to warm up the cache; otherwise, the test would be biased, since the first backup would have to fetch data from disk rather than from the cache.
With everything set, I started the mysqldump with the following options:
$ time mysqldump --all-databases --max-allowed-packet=4294967295 --single-transaction -R --master-data=2 --flush-logs | gzip > /backup/dump.dmp.gz
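For completeness, a dump produced this way can later be restored by streaming it back through gunzip into the mysql client; the path and credentials below are illustrative assumptions, not part of the benchmark:

```shell
# Hypothetical restore of the compressed dump (path and credentials are assumptions)
gunzip < /backup/dump.dmp.gz | mysql --user=root --password
```

Note that, like the backup itself, this restore pipeline is single-threaded.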
For the Shell utility:
$ mysqlsh
MySQL JS > shell.connect('root@localhost:3306');
MySQL localhost:3306 ssl test JS > util.dumpInstance("/backup", {ocimds: true, compatibility: ["strip_restricted_grants", "ignore_missing_pks"], threads: 16})
For mydumper:
$ time mydumper --threads=16 --trx-consistency-only --events --routines --triggers --compress --outputdir /backup/ --logfile /backup/log.out --verbose=2
PS: To use zstd, the command line does not change, but you need to download the zstd binaries.
For mysqlpump:
$ time mysqlpump --default-parallelism=16 --all-databases > backup.out
For xtrabackup:
$ time xtrabackup --backup --parallel=16 --compress --compress-threads=16 --datadir=/mysql_data/ --target-dir=/backup/
And what do the results tell us?
The parallel methods deliver similar throughput. The mydumper tool cut its execution time by 50% when using zstd instead of gzip, so the compression method makes a big difference for mydumper.
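That gap is easy to reproduce outside of mydumper. A rough, self-contained sketch (the sample data is synthetic, and the zstd step is skipped if the binary is not installed):

```shell
# Generate ~15 MB of repetitive, dump-like text (synthetic sample data)
seq 1 500000 | awk '{print "INSERT INTO t VALUES (" $1 ");"}' > /tmp/sample.sql

# gzip: single-threaded DEFLATE
time gzip -kf /tmp/sample.sql                                  # -> /tmp/sample.sql.gz

# zstd: typically several times faster at a comparable ratio
command -v zstd >/dev/null && time zstd -kf /tmp/sample.sql    # -> /tmp/sample.sql.zst

# Compare the resulting sizes
ls -l /tmp/sample.sql*
```

On this kind of compressible text, the speed advantage of zstd over gzip is consistent with the roughly halved mydumper execution time observed here.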
For the util.dumpInstance utility, one advantage is that the tool stores data in both binary and text format and uses zstd compression by default. Like mydumper, it uses multiple files to store the data and has a good compression ratio.
XtraBackup took third place, only a few seconds behind MySQL Shell. The main advantage of XtraBackup is its flexibility, providing, for example, point-in-time recovery (PITR) and encryption.
Next, mysqlpump is more efficient than mydumper with gzip, but only by a small margin. Both are logical backup methods and work in the same way. I tested mysqlpump with zstd compression, but the results were the same, which is why I didn't add it to the chart. One possible explanation is that mysqlpump streams the data to a single file.
Lastly, mysqldump has the most predictable behavior, with similar execution times across different runs. The lack of parallelism and compression is a disadvantage; however, since it has been present since the earliest MySQL versions, it is still widely used as a logical backup method, based on Percona cases.
Please let me know in the comments below what you thought about this blog post, whether I missed something, or whether it helped you. I will be glad to discuss it!
Finally, you can reach us through the social networks, our forum, or access our material using the links presented below:
Resources