InnoDB Page Compression: the Good, the Bad and the Ugly

In this blog post, we’ll look at some of the facets of InnoDB page compression.

Somebody recently asked me about the best way to handle JSON data compression in MySQL. I took a quick look at InnoDB page compression and wanted to share my findings.

There is also some great material on this topic that was prepared and presented by Yura Sorokin at Percona Live Europe 2017. Yura also implemented Compressed Columns in Percona Server.

First, the good part.

InnoDB page compression is actually really easy to use and provides a decent compression ratio. To use it, I just ran CREATE TABLE commententry (...) COMPRESSION="zlib"; – and that’s all. By the way, for my experiment I used the subset of Reddit comments stored in JSON (described here: Big Dataset: All Reddit Comments – Analyzing with ClickHouse).

This method gave me a compressed table of 3.9GB, compared to 8.4GB for the uncompressed table: about a 2.15x compression ratio.
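The arithmetic behind that ratio is straightforward; here is a quick check using the two sizes quoted above:

```python
# Quick check of the compression ratio quoted above (sizes in GB, from the text).
uncompressed = 8.4
compressed = 3.9
ratio = uncompressed / compressed
print(f"{ratio:.2f}x")  # → 2.15x
```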

Now, the bad part.

Because InnoDB page compression uses “hole punching,” the standard Linux utilities do not always handle files created this way properly. In fact, to see the actual on-disk size of 3.9GB I had to use du --block-size=1 tablespace_name.ibd, as the standard ls -l tablespace_name.ibd reports the apparent size (8.4GB), not the allocated one. There is a similar limitation on copying files: the standard cp old_file new_file may not preserve the holes, so to be sure I had to use cp --sparse=always old_file new_file.
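The apparent-versus-allocated distinction is easy to reproduce without MySQL at all. The following sketch (plain Python on a typical Linux filesystem; the exact allocated size depends on the filesystem) creates a 100MiB sparse file and shows the two sizes, mirroring what ls -l and du report for a hole-punched .ibd file:

```python
import os
import tempfile

# Create a sparse file: seek past the end and write a single byte.
# Everything before that byte is a "hole" occupying no disk blocks,
# the same mechanism InnoDB page compression relies on.
fd, path = tempfile.mkstemp()
try:
    with os.fdopen(fd, "wb") as f:
        f.seek(100 * 1024 * 1024 - 1)
        f.write(b"\0")
    st = os.stat(path)
    apparent = st.st_size            # what ls -l reports
    allocated = st.st_blocks * 512   # what du reports
    print(f"apparent:  {apparent} bytes")
    print(f"allocated: {allocated} bytes")
finally:
    os.unlink(path)
```

On ext4 or xfs the allocated size is a few KB at most, while the apparent size is the full 100MiB.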

Speaking about copying, here’s the ugly part.

The actual time to copy the sparse file was really bad.

On a fairly fast device (a Samsung SM863 SATA SSD), copying the sparse file mentioned above, with its compressed size of 3.9GB, took 52 minutes! That’s shocking, so let me repeat it: 52 minutes to copy a 3.9GB file on an enterprise SATA SSD.

By comparison, copying the regular 8.4GB file takes 9 seconds. Nine seconds versus 52 minutes.

To be fair, the NVMe device (Intel® SSD DC D3600) handles sparse files much better: it took only 12 seconds to copy the same sparse file on this device.
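To put those timings in perspective, here is a back-of-the-envelope calculation of the effective copy throughput each case implies (assuming the decimal-GB sizes quoted above):

```python
# Effective copy throughput implied by the timings in the post
# (sizes in decimal GB, as quoted).
GB = 1e9

copies = {
    "sparse 3.9GB, SATA SSD (52 min)": (3.9 * GB, 52 * 60),
    "regular 8.4GB, SATA SSD (9 s)":   (8.4 * GB, 9),
    "sparse 3.9GB, NVMe (12 s)":       (3.9 * GB, 12),
}

for name, (size_bytes, seconds) in copies.items():
    mb_per_s = size_bytes / seconds / 1e6
    print(f"{name}: {mb_per_s:.2f} MB/s")
```

The sparse copy on the SATA SSD works out to roughly 1.25 MB/s, versus over 900 MB/s for the regular file and about 325 MB/s for the sparse file on NVMe.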

Having considered all this, it is hard to recommend InnoDB page compression for serious production use, unless you power your database servers with NVMe storage.

For JSON data, Compressed Columns in Percona Server for MySQL should work quite well, using a dictionary to store the JSON keys – give it a try!


Comments (9)

  • Peter Zaitsev

Vadim, when you talk about the SATA SM863 vs. the NVMe DC D3600, is it about NVMe vs. SATA or some other device differences?

With just 2 devices it is very hard to say what is representative here – the relatively modest overhead you see on NVMe, or the major slowdown you have on the SM863. Could it be that this particular device is extremely poor at handling sparse/fragmented files?

    November 20, 2017 at 2:02 pm
  • Vadim Tkachenko


While it might well be device-related, I think NVMe devices are commonly known to handle sparse files better.

    November 20, 2017 at 2:53 pm
  • Rick Pizzi

Sorry Vadim, I miss the rationale – why would someone ever want to copy an .ibd file?

    November 20, 2017 at 3:31 pm
    • Vadim Tkachenko


      How would you do InnoDB backup if not copying files?

      November 20, 2017 at 3:33 pm
      • Rick Pizzi

Are you talking about xtrabackup here? Well, we have a large number of very large tables (hundreds of gigabytes) and we leverage InnoDB compression on all of them. Needless to say, we have never experienced such bad performance; our backups take a reasonable time. Following what you described, copying a 1TB compressed table should take days, which is not the case…

        November 20, 2017 at 3:35 pm
  • Andy

    What about InnoDB table compression? Would you recommend that?

    Between InnoDB table compression and Percona Server Compressed Columns how would you choose?

    November 22, 2017 at 5:11 am

