A few weeks ago I wrote about Innodb caching, with the main idea being that you may need more cache than you think, because Innodb caches data in pages, not rows, and so the whole page needs to be in memory even if you need only one row from it. I created a simple benchmark which shows a worst-case scenario by picking a random set of primary key values from a sysbench table and reading them over and over again.
This time I decided to “zoom in” on the point where the performance drop happens – a 2x increase in the number of rows per step hides a lot of detail, so I start with a number of rows for which everything is still in cache for all runs and increase the number of rows tested by 20% per step. I’m trying the standard Innodb page size, a 4KB page size, and the 16K page size compressed to 4K. The data in this case compresses perfectly (all pages can be compressed to the desired 4K value), so this tests a best case for compression. Here is the graph:
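The benchmark loop can be sketched roughly as follows. This is a minimal sketch, not the exact script used here: the table name `sbtest`, key range, and the commented-out cursor are assumptions for illustration.

```python
import random

def pick_keys(n, max_id, seed=42):
    """Pick a fixed random set of n primary key values; the same set is
    reused for every pass so steady-state qps reflects cache fit."""
    rng = random.Random(seed)
    return [rng.randint(1, max_id) for _ in range(n)]

def point_select(key):
    """Point lookup by primary key against the sysbench table (name assumed)."""
    return "SELECT c FROM sbtest WHERE id = %d" % key

# Each step grows the key set by 20% and hammers it repeatedly:
# for key in pick_keys(9216, 10_000_000):
#     cursor.execute(point_select(key))   # hypothetical DB cursor
```

The important property is that the key set is fixed per step: once it no longer fits in the buffer pool, every lookup risks a disk read of a whole page for a single row.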
The results are quite interesting from a lot of angles. First, we can see how quickly performance can drop in the worst-case scenario once data is no longer in cache. For 16K pages we see over a 10x drop, from 7630 qps to 580 qps, as the number of keys accessed increases only 20%, from 9216 to 11059.
The results for 4K pages start dropping earlier. This is because the database actually takes more space with 4K pages – 2.7GB instead of 2.2GB – because of larger per-page overhead. They also drop much more gradually than the 16K results: they still retain almost half of their performance at 27.5K keys even though the drop starts at about 9.2K keys. That is less than a 3x performance loss for a 3x increase in the number of keys, which is very different from the 10x performance drop for 16K pages. I do not have a good explanation for why this happens.
The compression results come as a huge disappointment. The compressed .ibd file was only 620MB, so if Innodb chose to keep only compressed pages in memory, it should be able to keep about 1/5 of the pages cached in the 128M buffer pool. If that were the policy Innodb had chosen, it could be compared to the 4K page case, just with 1/4 of the data (plus the overhead needed for page decompression). In reality, however, the results are a lot closer to the 16K page results, with a rather quick drop. About the same qps (530) is reached at 15925 keys compared to 11059 – about 40% more keys.
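The back-of-the-envelope arithmetic behind that 1/5 figure, using the sizes quoted above:

```python
compressed_mb = 620    # size of the compressed .ibd file
buffer_pool_mb = 128   # innodb_buffer_pool_size used in the benchmark

# If only compressed 4K pages were kept in the buffer pool, the fraction
# of the data that could stay cached would be roughly:
fraction_cached = buffer_pool_mb / compressed_mb
print(round(fraction_cached, 2))  # ~0.21, i.e. about 1/5 of the pages
```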
Where the compression results were good, though, was with a very low number of keys, when everything could be kept uncompressed in the buffer pool, as well as in the completely disk-IO-bound case. It is hard to see on the graph, but with a large number of keys the compressed results were the best of all, probably because the data size was smallest.
Let’s introduce one more metric. Say an “acceptable” result for our application is 2000 qps, which is about 25% of the performance when data fits in memory. I picked this number for simplicity, though I think it is quite reasonable as a target. With this definition, 16K pages provide acceptable performance with 9216 keys touched, 4K pages with 27518, and 16K compressed with 11516. This gives us 3x more keys we can fit in memory with 4K pages, which is pretty good, even though not quite the 4x increase we could hope for in theory. The shortfall is explained by the fact that the data size is about 23% larger (2.7GB vs 2.2GB) with 4K pages.
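Checking that claim with the numbers above: in theory 4K pages could fit 4x as many one-row-per-page keys in the same buffer pool, scaled down by the larger on-disk size.

```python
keys_16k = 9216     # keys sustaining 2000 qps with 16K pages
keys_4k = 27518     # keys sustaining 2000 qps with 4K pages
size_16k_gb = 2.2   # data size with 16K pages
size_4k_gb = 2.7    # data size with 4K pages

observed_gain = keys_4k / keys_16k                # ~2.99x
theoretical_gain = 4 * size_16k_gb / size_4k_gb   # 4x scaled by size overhead, ~3.26x
print(round(observed_gain, 2), round(theoretical_gain, 2))
```

The observed ~3x gain lands close to the size-adjusted theoretical ceiling of ~3.26x.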
In this case Percona Server 5.5 was tested, which allows changing the page size without recompiling the server, though I expect the results for 16K and 16K compressed to be very similar on MySQL 5.5.
As a summary: if you have a very random access pattern with small rows, consider using 4K pages; compression may not give you much gain in cache-fit efficiency.
Next I should look more at the performance of different storage engines in this regard. I’m very curious how PBXT and TokuDB would perform in this kind of benchmark. I should also modify the benchmark to do writes to a fixed number of keys, to see if that makes any difference. Finally, I should post my benchmark script so everyone interested can repeat my results.