
InnoDB Page Compression: the Good, the Bad and the Ugly

November 20, 2017 | Posted In: Insight for DBAs, JSON, MySQL


In this blog post, we’ll look at some of the facets of InnoDB page compression.

Somebody recently asked me about the best way to handle JSON data compression in MySQL. I took a quick look at InnoDB page compression and wanted to share my findings.

There is also some great material on this topic that was prepared and presented by Yura Sorokin at Percona Live Europe 2017: https://www.percona.com/live/e17/sessions/percona-xtradb-compressed-columns-with-dictionaries-an-alternative-to-innodb-table-compression. Yura also implemented Compressed Columns in Percona Server.

First, the good part.

InnoDB page compression is actually really easy to use and provides a decent compression ratio. To use it, I just ran CREATE TABLE commententry (...) COMPRESSION="zlib"; – and that’s all. For my experiment, I used a subset of Reddit comments stored in JSON (described here: Big Dataset: All Reddit Comments – Analyzing with ClickHouse).
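For reference, here is roughly what that looks like end to end. The column layout below is just a placeholder (the original table definition isn’t shown in this post); the only part that matters for compression is the COMPRESSION attribute:

CREATE TABLE commententry (
  id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  doc JSON NOT NULL
) COMPRESSION="zlib";

-- For an existing table, the attribute only applies to newly written pages,
-- so set it and then rebuild the table to compress what is already there:
ALTER TABLE commententry COMPRESSION="zlib";
OPTIMIZE TABLE commententry;

Note that page compression only works for file-per-table tablespaces, and it requires a filesystem that supports sparse files and hole punching (ext4 and XFS do).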

This method got me a compressed table of 3.9GB, compared to 8.4GB for the uncompressed table – about a 2.15x compression ratio.

Now, the bad part.

As InnoDB page compression uses “hole punching,” the standard Linux utilities do not always handle files created this way properly. In fact, to see the size “3.9GB” I had to use du --block-size=1 tablespace_name.ibd, as the standard ls -l tablespace_name.ibd shows the wrong size (8.4GB). There is a similar limitation when copying files: the standard cp old_file new_file may not always work, and to be sure I had to use cp --sparse=always old_file new_file.
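Roughly, the commands look like this (the file name is just an example):

# ls reports the apparent size of the hole-punched tablespace (8.4GB here)
ls -l commententry.ibd

# du with --block-size=1 reports the bytes actually allocated on disk (3.9GB here)
du --block-size=1 commententry.ibd

# copy while preserving the holes; a plain cp may materialize them and
# inflate the copy back toward the uncompressed size
cp --sparse=always commententry.ibd commententry_copy.ibd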

Speaking about copying, here’s the ugly part.

The actual time to copy the sparse file was really bad.

On a fairly fast device (a Samsung SM863), copying the sparse file mentioned above (3.9GB in its compressed size) took 52 minutes! That’s shocking, so let me repeat it: 52 minutes to copy a 3.9GB file on an enterprise SATA SSD.

By comparison, copying the regular 8.4GB file takes 9 seconds! That’s 9 seconds versus 52 minutes.

To be fair, the NVMe device (Intel® SSD DC D3600) handles sparse files much better: it took only 12 seconds to copy the same sparse file on this device.

Having considered all this, it is hard to recommend that you use InnoDB page compression for serious production. Well, unless you power your database servers with NVMe storage.

For JSON data, the Compressed Columns feature in Percona Server for MySQL should work quite well, using a compression dictionary to store the JSON keys – give it a try!
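For the curious, here is a rough sketch of what that looks like in Percona Server 5.7. The dictionary contents and table layout are just placeholders – the idea is that the dictionary pre-seeds zlib with strings (such as recurring JSON keys) that appear in every document, so even short values compress well:

-- define a dictionary with strings that repeat across documents
SET @dict = 'author body controversiality created_utc score subreddit';
CREATE COMPRESSION_DICTIONARY reddit_keys (@dict);

-- compress the column using that dictionary
-- (a JSON column is used here; TEXT/BLOB columns work the same way)
CREATE TABLE commententry (
  id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  doc JSON COLUMN_FORMAT COMPRESSED WITH COMPRESSION_DICTIONARY reddit_keys
);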

Vadim Tkachenko

Vadim Tkachenko co-founded Percona in 2006 and serves as its Chief Technology Officer. Vadim leads Percona Labs, which focuses on technology research and performance evaluations of Percona’s and third-party products. Percona Labs designs no-gimmick tests of hardware, filesystems, storage engines, and databases that surpass the standard performance and functionality scenario benchmarks.

Vadim’s expertise in LAMP performance and multi-threaded programming helps optimize MySQL and InnoDB internals to take full advantage of modern hardware. Oracle Corporation and its predecessors have incorporated Vadim’s source code patches into the mainstream MySQL and InnoDB products.

He also co-authored the book High Performance MySQL: Optimization, Backups, and Replication, 3rd Edition.

9 Comments

  • Vadim, when you talk about the SATA SM863 vs the NVMe DC D3600, is it about NVMe vs SATA or some other device differences?

    With just 2 devices it is very hard to say what is representative here – the relatively modest overhead you see on NVMe, or the major slowdown you have on the SM863. Could it be that this particular device is extremely poor at handling sparse/fragmented files?

      • You talking xtrabackup here? Well, we have a large number of very large tables (hundreds of gigabytes) and we leverage InnoDB compression on all of them. Needless to say, we never experienced such bad performance; our backups take a reasonable time… following what you described, copying a 1 TB compressed table should take days, which is not the case….

        • Rick,

          To clarify,
          Are you using InnoDB Page Compression https://dev.mysql.com/doc/refman/5.7/en/innodb-page-compression.html
          or InnoDB Table Compression https://dev.mysql.com/doc/refman/5.7/en/innodb-table-compression.html ?
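
          Roughly, the two look like this (table names are just placeholders):

          -- page compression: transparent page-level zlib/lz4, relies on filesystem hole punching
          CREATE TABLE t_page (...) COMPRESSION="zlib";

          -- table compression: compressed ROW_FORMAT, no hole punching involved
          CREATE TABLE t_table (...) ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;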

          • My bad, of course we use table-level compression, which doesn’t exhibit the same behaviour (I guess it is not using hole punching). In this case I would recommend everyone to steer away from page-level compression if they want to take backups in a reasonable time 🙂

  • What about InnoDB table compression? Would you recommend that?

    Between InnoDB table compression and Percona Server Compressed Columns, how would you choose?

    • Andy,

      In this case it depends on your use case. Please also check https://www.percona.com/live/e17/sessions/percona-xtradb-compressed-columns-with-dictionaries-an-alternative-to-innodb-table-compression
