Evaluating Database Compression Methods

NOTE: This blog post contains some mistakes in the data. While many of our conclusions are still valid, we’ve updated the evaluation in a new blog post. Please use that post in conjunction with this one.

Vadim Tkachenko and I have been working with Fractal Tree® storage engines (Fractal Tree engines are available in Percona Server for MySQL and MongoDB as TokuDB and PerconaFT, respectively). While doing so, we’ve become interested in evaluating database compression methods, to see how to make compression algorithms work even better than they do currently.

In this blog post, I will discuss what we found in our compression research.


Before we get to evaluating database compression methods, let’s review what compression properties are most relevant to databases in general. The first thing to consider is compression and decompression performance. Databases tend to be very sensitive to decompression performance, as it is often done in the “foreground” – adding to client response latency. Compression performance, on the other hand, is less critical because it can typically run in the background without adding client latency. It can, however, cause an issue if the database fills its data compression queue and “chokes.” The database workload also affects compression performance demands. If the data is loaded only once and essentially becomes read only, it might make sense to spend extra time compressing it – as long as the better compression ratio is achieved without impact to decompression speed.

The next important thing to consider is the compression block size, which can significantly affect compression ratio and performance. In some cases, the compression block size is fixed. Most InnoDB installations, for example, use a 16KB block size. In MySQL 5.7 it is possible to change block size from 4KB to 64KB, but since this setting applies to the whole MySQL instance it isn’t commonly used. TokuDB and PerconaFT allow a much more flexible compression block size configuration. Larger compression block sizes tend to give a better compression ratio and may be more optimal for sequential scan workloads, but if you’re accessing data in a random fashion you may see significant overhead as the complete block typically must be decompressed.

Of course, compression will also depend on the data you’re compressing, and different algorithms may be more optimal at handling different types of data. Additionally, different data structures in databases may structure data more or less optimally for compression. For example, if a database already implements prefix compression for data in the indexes, indexes are likely to be less compressible with block compression systems.
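This data dependence is easy to see with any general-purpose compressor. Here is a quick illustration using Python’s standard-library zlib (the sample “row” is made up for illustration): the same algorithm and level produce wildly different ratios on random bytes versus repetitive row-like data.

```python
import os
import zlib

# Compress ~64KB of incompressible (random) bytes vs. highly
# repetitive row-like bytes with the same algorithm and level.
random_block = os.urandom(64 * 1024)
repetitive_block = b"2016-04-07,JFK,LAX,on-time\n" * 2500  # ~66KB of similar "rows"

random_ratio = len(random_block) / len(zlib.compress(random_block, 6))
repetitive_ratio = len(repetitive_block) / len(zlib.compress(repetitive_block, 6))

print(f"random data ratio:     {random_ratio:.2f}")
print(f"repetitive data ratio: {repetitive_ratio:.2f}")
```

Random data barely compresses at all (the output can even be slightly larger than the input), while the repetitive rows shrink by orders of magnitude.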

Let’s examine what choices we have when it comes to the compression algorithm selection and configuration. Typically for a given block size – which is essentially a database configuration setting – you will have a choice to select the compression algorithm (such as zlib), the library version and the compression level.  

Comparing different algorithms was tough until lzbench was introduced. lzbench allows for a simple comparison of different compression libraries through a single interface.

For our test, we loaded different kinds of data in an uncompressed InnoDB table and then used it as a source for lzbench.

This method is a good way to represent database structures and is likely to be more realistic than testing compression on the source text files.
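For readers who want to approximate the methodology without lzbench, here is a minimal sketch of the same per-block idea using Python’s standard-library zlib – the row format and sizes are made up for illustration, and this is not the benchmark harness we actually used:

```python
import time
import zlib

def benchmark_blocks(data: bytes, block_size: int, level: int = 6):
    """Compress data block-by-block (as a database engine would) and
    report the compression ratio and throughput in MB/s."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    start = time.perf_counter()
    compressed = [zlib.compress(b, level) for b in blocks]
    elapsed = time.perf_counter() - start
    ratio = len(data) / sum(len(c) for c in compressed)
    mb_per_sec = len(data) / (1024 * 1024) / elapsed if elapsed else float("inf")
    return ratio, mb_per_sec

# Stand-in for a table dump; the real tests used rows exported from InnoDB.
sample = b"1305,2016-04-07,AA,JFK,LAX,0912,0917,5,0\n" * 20000
ratio, speed = benchmark_blocks(sample, 16 * 1024)
print(f"16KB blocks: ratio {ratio:.2f}, {speed:.0f} MB/s")
```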

All results shown here are for “OnTime Air Performance.” We tried a variety of data, and even though the numbers varied the main outcomes are the same. You can see results for our other data types in this document.

The results for compression are heavily CPU dependent. All the data below is single thread compression benchmarks run on an Intel Xeon E5-2643 v2 @ 3.5Ghz.

Below are some of the most interesting results we found.

Comparing Compression Algorithms

[Chart: compression and decompression speed by algorithm, 16KB blocks, low compression level (MB/sec)]

Using a standard 16KB block size and a low compression level, we can see a huge range of compression and decompression speeds. Compression ranged from 30MB per second for LZMA to more than 1GB per second for LZ4, and decompression from 100MB per second to 3.5GB per second for the same pair.

Now let’s look at the compression ratios achieved.

[Chart: compression ratio by algorithm]

You can see a large variety of outcomes for this data set as well, with ratios ranging from 1.89:1 (LZ4) to 6.57:1 (LZMA). Notice how the fastest and slowest compression libraries achieve the worst and best compression: better compression generally comes at disproportionately more CPU usage. Achieving 3.5 times more compression (LZMA) requires spending 37 times more CPU resources. This ratio, though, is not at all fixed: for example, Brotli provides 2.9 times better compression at 9 times higher CPU cost while Snappy manages to provide 1.9 times better compression than LZ4 with only 1.7 times more CPU cost.
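The ratio-versus-CPU trade-off is easy to reproduce with the two algorithms that ship in the Python standard library, zlib and LZMA. This is an illustration only – our actual numbers came from lzbench – and the synthetic rows are made up:

```python
import time
import zlib
import lzma

# Varied row-like data; both compressors see the same input.
data = "".join(f"{i},2016,AA,JFK,LAX,{i % 97}\n" for i in range(20000)).encode()

for name, fn in (("zlib", lambda d: zlib.compress(d, 6)),
                 ("lzma", lambda d: lzma.compress(d, preset=6))):
    start = time.perf_counter()
    out = fn(data)
    elapsed = time.perf_counter() - start
    print(f"{name}: ratio {len(data) / len(out):.2f} in {elapsed * 1000:.1f} ms")
```

On data like this, LZMA produces a noticeably smaller output than zlib, but takes considerably longer to do it.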

[Chart: decompression-to-compression speed ratio by algorithm]

Another interesting compression algorithm property is how much faster decompression is than compression. The variance between algorithms here is much smaller, which suggests that default compression levels are chosen so that compression is roughly 2 to 3.5 times slower than decompression.

Block Size Impact on Compression

Now let’s look at how the compression block size affects compression and decompression performance.

[Chart: compression speed vs block size]

On-Time Performance Data Compression Speed vs Block Size (MB/sec)

Compression Method           4KB      16KB     64KB     128KB    256KB    256KB/4KB
quicklz 1.5.0 level 1        128.62   299.42   467.9    518.97   550.8    4.28
zstd v0.4.1 level 1          177.77   304.16   357.38   396.65   396.02   2.23
snappy 1.1.3                 674.99   644.08   622.24   626.79   629.83   0.93
lzma 9.38 level 1            18.65    30.23    36.43    37.44    38.01    2.04
zlib 1.2.8 level 1           64.73    110.34   128.85   124.74   124.1    1.92
lz4 r131                     996.11   1114.35  1167.11  1067.69  1043.86  1.05
brotli 2015-10-29 level 1    64.92    123.92   170.52   177.1    179.51   2.77


[Chart: decompression speed vs block size]


If we look at compression and decompression speed versus block size, we can see that there is a difference both for compression and decompression, and that it depends a lot on the compression algorithm. QuickLZ, using these settings, compresses 4.3 times faster with 256KB blocks rather than 4KB blocks. It is interesting that LZ4, which I would consider a “similar” fast compression algorithm, is not at all similar, demonstrating only minimal changes in compression and decompression performance with increased block size.

Snappy is perhaps the most curious compression algorithm of them all. It has lower performance when compressing and decompressing larger blocks.

Let’s examine how compression ratio varies with different block sizes.

[Chart: compression ratio vs block size]

On-Time Performance Data Compression Ratio vs Block Size

Compression Method           4KB    16KB   64KB   128KB  256KB  256KB/4KB
quicklz 1.5.0 level 1        3.09   3.91   4.56   4.79   4.97   1.61
zstd v0.4.1 level 1          3.95   5.24   6.41   6.82   7.17   1.82
lzma 9.38 level 1            4.86   6.57   7.96   8.43   8.71   1.79
zlib 1.2.8 level 1           3.79   4.73   5.33   5.44   5.50   1.45
lz4 r131                     1.75   1.89   1.99   2.00   2.01   1.15
brotli 2015-10-29 level 1    4.12   5.47   6.61   7.00   7.35   1.78

We can see all the compression libraries perform better with larger block sizes, though how much better varies. LZ4 only benefits a little from larger blocks, with just a 15% better compression ratio from 4KB to 256KB, while Zstd, Brotli and LZMA all get about an 80% better compression ratio with large block sizes. This is another area where I would expect results to be data dependent: with highly repetitive data, gains are likely to be more significant with larger block sizes.

Compression library gains from larger block sizes decrease as the base block size increases. For example, most compression libraries are able to get at least a 20% better compression ratio going from a 4KB to a 16KB block size, while going from 64KB to 256KB only allows for a 4-6% better compression ratio – at least for this data set.
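The block-size effect is easy to reproduce with any block compressor. Here is a minimal sketch using Python’s standard-library zlib (level 1) on synthetic row-like data – the exact numbers depend entirely on the data, so treat it as an illustration, not a benchmark:

```python
import zlib

def block_ratio(data: bytes, block_size: int) -> float:
    """Compression ratio when each block is compressed independently,
    the way a database page/block compressor works."""
    compressed_size = sum(
        len(zlib.compress(data[i:i + block_size], 1))
        for i in range(0, len(data), block_size)
    )
    return len(data) / compressed_size

rows = b"2015-10-29,UA,ORD,SFO,delayed,23\n" * 16384  # ~512KB of row-like data
for size in (4096, 16384, 65536, 262144):
    print(f"{size // 1024:>3}KB blocks: ratio {block_ratio(rows, size):.2f}")
```

Each compressed block starts with an empty history, so larger blocks let the compressor exploit redundancy that spans what would otherwise be block boundaries.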

Compression Level Impact

Now let’s review what the compression level does to compression performance and ratios.

[Chart: compression speed vs compression level]

On-Time Performance Data Compression Speed vs Compression Level (MB/sec)

Compression Method   1        2        3        4        5        6        7        8        9        Max
zstd v0.4.1          404.25   415.92   235.32   217.69   207.01   146.96   124.08   94.93    82.43    21.87
lzma 9.38            39.1     37.96    36.52    35.07    30.85    3.69     3.69     3.69     3.69     3.69
zlib 1.2.8           120.25   114.52   84.14    76.91    53.97    33.06    25.94    14.77    6.92     6.92
brotli 2015-10-29    172.97   179.71   148.3    135.66   119.74   56.08    50.13    29.43    5.46     0.39

Note: Not every compression algorithm provides level selection, so we’re only looking at the ZSTD, LZMA, ZLIB and BROTLI compression libraries. Also, not every library provides ten compression levels. If more than ten levels were available, the first nine and the maximum compression level were tested. If fewer than ten levels were available (as with LZMA), the result for the maximum compression level was used to fill the gaps.

As you might expect, higher compression levels generally mean slower compression. For most compression libraries, the difference between the fastest and slowest compression levels is 10-20 times – with the exception of Brotli, where the highest compression level means really slow compression (more than 400 times slower than the fastest).

[Chart: decompression speed vs compression level]

On-Time Performance Data Decompression Speed vs Compression Level (MB/sec)

Compression Method   1        2        3        4        5        6        7        8        9        Max
zstd v0.4.1          827.61   848.71   729.93   809.72   796.61   904.85   906.55   843.01   894.91   893.31
lzma 9.38            128.91   142.28   148.57   148.72   148.75   157.67   157.67   157.67   157.67   157.67
zlib 1.2.8           386.59   404.28   434.77   415.5    418.28   438.07   441.02   448.56   453.64   453.64
brotli 2015-10-29    476.89   481.89   543.69   534.24   512.68   505.55   513.24   517.55   521.84   499.26

This is where things get really interesting. With a higher compression level, decompression speed doesn’t change much – if anything, it becomes higher. If you think about it, it makes sense: during the compression phase we’re searching for patterns in data and building some sort of dictionary, and extensive pattern searches can be very slow. Decompression, however, just restores the data using the same dictionary and doesn’t spend much time finding data patterns. The smaller the compressed data size is, the better the performance should be.
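The asymmetry is easy to demonstrate with zlib from the Python standard library (an illustration, not our benchmark setup): two streams produced at different levels differ in size, yet decompression runs the same code path on both and restores identical data.

```python
import zlib

# ~400KB of varied, row-like data (a synthetic stand-in for table rows)
data = "".join(
    f"{i},2016-04-07,AA,JFK,LAX,{i % 60},{i % 7}\n" for i in range(12000)
).encode()

fast = zlib.compress(data, 1)  # fastest level
best = zlib.compress(data, 9)  # slowest, best-ratio level

print(f"level 1: {len(fast)} bytes; level 9: {len(best)} bytes")

# Decompression takes the same code path no matter which level
# produced the stream, and both restore identical data.
assert zlib.decompress(fast) == data
assert zlib.decompress(best) == data
```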

Let’s examine the compression ratio.

[Chart: compression ratio vs compression level]

On-Time Performance Data Compression Ratio vs Compression Level

Compression Method   1      2      3      4      5      6      7      8      9      Max
zstd v0.4.1          –      –      –      –      –      –      –      –      –      –
lzma 9.38            8.20   8.71   8.95   8.96   8.96   10.45  10.45  10.45  10.45  10.45
brotli 2015-10-29    7.35   7.35   7.41   7.46   7.51   8.70   8.76   8.80   8.83   10.36

As we can see, higher compression levels indeed improve the compression ratio most of the time. The ZSTD library seems to be a strange exception, where a higher compression level does not always mean a better ratio. We can also see that BROTLI’s extremely slow compression mode can produce a significant boost to compression, getting it to the level of compression LZMA achieves – quite an accomplishment.

Different compression levels don’t have the same effect on compression ratios as different compression methods do. While we saw a 3.5 times compression ratio difference between LZ4 and LZMA, the largest ratio difference between the fastest and slowest levels is 1.4x (Brotli) – a 20-30% improvement in compression ratio is more typical.

An important point, however, is that the compression ratio improvement from higher compression levels comes at no decompression slowdown – in contrast to using a more complicated compression algorithm to achieve better compression.

In practice, this means having control over the compression level is very important, especially for workloads where data is written once and read frequently. In that case, you could choose a higher compression level, since the data doesn’t need to be recompressed often. Another factor is that the compression level is very easy to change dynamically, unlike the compression algorithm. In theory, a database engine could dynamically choose the compression level based on the workload – a higher compression level can be used if there are plenty of CPU resources available, and a lower compression level if the system can’t keep up with compressing data.
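As a sketch of that idea – all names and thresholds here are hypothetical, not taken from any real engine – a policy might map background-queue pressure to a zlib level:

```python
import zlib

def choose_level(queue_depth: int, max_queue: int = 100) -> int:
    """Hypothetical policy: pick a zlib level from how backed up the
    background compression queue is (thresholds are illustrative)."""
    load = queue_depth / max_queue
    if load < 0.25:
        return 9   # plenty of CPU headroom: spend it on a better ratio
    if load < 0.75:
        return 6   # balanced default
    return 1       # falling behind: favor speed

page = b"some page payload " * 400
for depth in (10, 50, 90):
    level = choose_level(depth)
    print(f"queue depth {depth}: level {level}, "
          f"{len(zlib.compress(page, level))} bytes")
```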


It is interesting to note a few records from all of these tests. Among all the methods tried, the lowest compression ratio was LZ4 with a 4KB block size, providing a 1.75 compression ratio. The highest ratio was LZMA with a 256KB block size, providing a maximum compression ratio of 10.45.

LZ4 is fastest both in compression and decompression, showing the best compression speed of 1167MB per second with a 64KB block size, and a decompression speed of 3666MB per second with a 16KB block size.   

LZMA is generally the slowest at both compression and decompression, compressing at 0.88MB per second with a 16KB block size at the maximum compression level, and decompressing at 82MB per second with a 4KB block size. Only Brotli at the highest compression level compressed data more slowly (at 0.39MB per second).

Looking at these numbers, we see three orders of magnitude difference in compression performance, and a 50 times difference in decompression performance. This demonstrates how the right compression algorithm choice and settings can make or break compression for your application.

Conclusions

  • Snappy looks like a great and fast compression algorithm, offering pretty decent compression with performance that is likely to be good enough for most workloads.
  • Zstd is new and not yet 100% stable, but once it is it will be a great replacement for zlib as a general purpose compression algorithm. It gives a better compression ratio and better compression and decompression speed. At higher levels of compression, it is able to get close to LZMA’s compression ratio, at least for some kinds of data, while having a much better decompression speed.
  • LZMA remains the choice when you want the highest compression ratio at all costs. However, it will be all costs indeed! LZMA is slow at both compression and decompression. More often than not, LZMA is chosen without a clear understanding of how slow it is, leading to performance issues.
  • Generally, it’s better to get the compression ratio you’re looking for by adjusting the compression level rather than by the type of algorithm, as the compression level affects compression performance more – and may even positively impact decompression performance.
  • If your system allows it, choosing larger block sizes can be best for your workload. Larger block sizes generally have a better compression ratio and a better compression and decompression speed. However, if you have many “random” data lookups, constant decompression of large blocks of data is likely to negate all of those benefits.  
  • In our tests, block sizes up to 64K provided the most benefit, with further increases showing minimal impact. It is possible, however, that these diminishing returns on adjusting the block size upward depend significantly on the data type.

Data and Details

Raw results, data sources and additional details for our evaluation of database compression methods are available in Google Docs.


Comments (2)

  • alexandre

    Hi Peter
    I’m dealing with MySQL 5.7.11 on Windows Server 2008 64-bit and having trouble with punch hole support on Windows. innodb_page_size is 32K and the NTFS filesystem has been formatted with 2KB blocks. While creating the table, a warning is displayed that the FS doesn’t support hole punching. I’ve tried to enable disk compression on the volume, but the issue persists. Is there any step-by-step guide on how to do this on Windows? I’m going to try with 1KB NTFS blocks since 16KB is the default.

    April 7, 2016 at 2:13 pm
  • alexandre

    By the way, excellent article. It’s really nice to think that we could trade off some CPU usage in order to speed up databases in general, while saving some money on storage size, since SSD is still a price-sensitive matter. Even LZ4 seems to me a possibility to enable by default. No serious database today doesn’t count compression among its most basic toolset.
    The only thing I feel is lacking today is more flexibility with incremental batch loads. There should be an option to load data into separate pages (even reading original data) without touching the original pages, and then at the end flush the whole change into logs (ideally compressed).

    April 7, 2016 at 3:24 pm
