November 28, 2014

How to improve InnoDB performance by 55% for write-bound loads

During April’s Percona Live MySQL Conference and Expo 2014, I attended a talk on MySQL 5.7 performance and scalability given by Dimitri Kravtchuk, the Oracle MySQL benchmark specialist. He mentioned at some point that the InnoDB double write buffer was a real performance killer. For those who don’t know what the InnoDB double write buffer is: it is a disk buffer where pages are written before being written to the actual data file. Upon restart, pages in the double write buffer are rewritten to their data files if complete. This avoids data file corruption from half-written pages. I knew it had an impact on performance (on ZFS, which is transactional, I always disable it), but I never realized how large that impact could be. Back from PLMCE, a friend had dropped off a Dell R320 server at my place, asking me to set up the OS and test it. What better way to test a new server than to run benchmarks on it, so here we go!

ZFS is not the only transactional filesystem; ext4, with the option “data=journal”, can also be transactional. So the question is: is it better to have the InnoDB double write buffer enabled, or to rely on the ext4 transaction journal? And if the latter is better, how does it compare with xfs, the filesystem I used to propose but which does not support transactions?

Methodology

The goal is to stress the double write buffer, so the load has to be write intensive. The server has a simple mirror of two 7.2k rpm drives. There is no controller write cache and the drives’ write caches are disabled. I decided to use the Percona tpcc-mysql benchmark tool with 200 warehouses; the total dataset size was around 18GB, fitting entirely within the InnoDB buffer pool (the server has 24GB of RAM). Here’s the relevant part of the my.cnf:
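No exact listing is given here; a configuration consistent with the setup described (including the O_DIRECT and buffer pool dump/load behavior mentioned later in the post) would look roughly like the fragment below. Every value not stated in the text is an assumption:

```ini
[mysqld]
# Buffer pool sized so the ~18GB dataset fits in memory (server has 24GB)
innodb_buffer_pool_size = 20G                  # assumed value
innodb_flush_method     = O_DIRECT             # mentioned later in the post
innodb_flush_log_at_trx_commit = 1             # assumed; durable commits
# Reload the warmed buffer pool across restarts (MySQL 5.6 feature)
innodb_buffer_pool_dump_at_shutdown = ON
innodb_buffer_pool_load_at_startup  = ON
# Toggled between runs: uncommented = double write buffer disabled
#skip-innodb_doublewrite
```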

So, I generated the dataset for 200 warehouses, added the keys but not the foreign key constraints, loaded all that into the buffer pool with a few queries and dumped the buffer pool. Then, with MySQL stopped, I did a file-level backup to a different partition. I used the MySQL 5.6.16 version that comes with Ubuntu 14.04; at the time, Percona Server was not yet available for 14.04. Each benchmark followed this procedure:

  1. Stop MySQL
  2. Unmount /var/lib/mysql
  3. Comment or uncomment skip-innodb_doublewrite in my.cnf
  4. Mount /var/lib/mysql with the specific options
  5. Copy the reference backup to /var/lib/mysql
  6. Start MySQL and wait for the buffer pool load to complete
  7. Start tpcc from another server
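Scripted, the steps above might look like the following minimal sketch. The service name, device, and paths are assumptions, and the destructive steps are wrapped in a function that is only defined here, not called:

```shell
#!/bin/sh
# Sketch of one benchmark iteration; paths, device and service names are
# assumptions, not the exact ones used in the post.

# Toggle the skip-innodb_doublewrite line in a my.cnf: "on" comments it out
# (double write buffer enabled), "off" uncomments it (buffer disabled).
set_doublewrite() {    # usage: set_doublewrite on|off /path/to/my.cnf
    if [ "$1" = "on" ]; then
        sed -i 's/^skip-innodb_doublewrite/#skip-innodb_doublewrite/' "$2"
    else
        sed -i 's/^#[[:space:]]*skip-innodb_doublewrite/skip-innodb_doublewrite/' "$2"
    fi
}

run_benchmark() {      # usage: run_benchmark on|off "mount options"
    service mysql stop
    umount /var/lib/mysql
    set_doublewrite "$1" /etc/mysql/my.cnf
    mount -o "$2" /dev/mapper/data /var/lib/mysql   # device name assumed
    cp -a /backup/mysql/. /var/lib/mysql/           # reference backup
    service mysql start            # then wait for the buffer pool load
    # finally, launch tpcc_start from the load-generator box
}
```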

The tpcc_start parameters I used were the following:

I used 32 connections, let the tool run for 300s of warm-up, enough to reach a steady level of dirty pages, and then let the benchmark run for one hour, reporting results every minute.
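Put together, those parameters correspond to an invocation roughly like the one below; the host, port and credentials are placeholders, and the flags follow the tpcc-mysql README. The command is stored in a variable and echoed rather than executed:

```shell
# Reconstructed tpcc-mysql invocation: 200 warehouses, 32 connections,
# 300s warm-up, 3600s run, 60s report interval.  Host and credentials
# are placeholders.
TPCC_CMD="./tpcc_start -h127.0.0.1 -P3306 -dtpcc -utpcc -ptpcc \
-w200 -c32 -r300 -l3600 -i60"
echo "$TPCC_CMD"
```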

Results

Test              | Double write buffer | File system options | Average NOTPM over 1h
------------------|---------------------|---------------------|----------------------
ext4_dw           | Yes                 | rw                  | 690
ext4_dionolock_dw | Yes                 | rw,dioread_nolock   | 668
ext4_nodw         | No                  | rw                  | 1107
ext4trx_nodw      | No                  | rw,data=journal     | 1066
xfs_dw            | Yes                 | rw,noatime (xfs)    | 754

From the above table: the first test I did was plain ext4 with the InnoDB double write buffer enabled, and it yielded 690 new order transactions per minute (NOTPM). Reading the ext4 documentation, I also wanted to try the “dioread_nolock” option, which is supposed to reduce mutex contention; this time I got slightly less, 668 NOTPM. The difference is within the measurement error and isn’t significant. Removing the InnoDB double write buffer, although unsafe, boosted the throughput to 1107 NOTPM, a 60% increase! Wow, the double write buffer indeed has a huge impact. But what is the cost of asking the file system to replace the InnoDB double write buffer? Surprisingly, the performance level is only slightly lower, at 1066 NOTPM, even though vmstat reported twice the amount of writes. I had to redo the tests a few times to convince myself: getting a 55% increase in performance on the same hardware is not common, except when some trivial configuration error has been made. Finally, xfs with the InnoDB double write buffer enabled, the setup I used to propose to customers, came in at 754 NOTPM, about 10% higher than ext4 with the double write buffer and close to what I was expecting. The graphic below presents the numbers in a more visual form.

TPCC NOTPM for various configurations


In terms of performance stability, you’ll find below a graph of the per-minute NOTPM output for three of the tests: ext4 non-transactional with the double write buffer, ext4 transactional without the double write buffer, and xfs with the double write buffer. The dispersion is qualitatively similar for all three. The values presented above are simply the averages of those data sets.

TPCC NOTPM evolution over time


Safety

InnoDB data corruption is not fun, and removing the InnoDB double write buffer is a bit scary. To make sure it is safe, I executed the following procedure ten times:

  1. Start MySQL and wait for recovery and for the buffer pool load to complete
  2. Check the error log for any sign of corruption
  3. Start tpcc from another server
  4. After about 10 minutes, physically unplug the server
  5. Plug the server back in and restart it

I observed no corruption. I was still a bit concerned: what if the test itself was wrong? I removed the “data=journal” mount option and did a new run; I got corruption on the first try. So, given the procedure I followed and the number of crash tests, I think it is reasonable to assume it is safe to replace the InnoDB double write buffer with the ext4 transactional journal.

I also looked at the ext4 kernel sources and changelog. Before kernel 3.2, O_DIRECT wasn’t supported with data=journal, and MySQL would have issued a warning in the error log. With recent kernels, O_DIRECT is faked for data=journal: it is silently mapped to O_DSYNC, which is exactly what is needed here. Indeed, I tried “innodb_flush_method = O_DSYNC” and got the same results. With older kernels, I strongly advise using the “innodb_flush_method = O_DSYNC” setting to make sure files are opened in a way that makes them transactional on ext4. As always, test thoroughly; I only tested on Ubuntu 14.04.
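Concretely, the whole ext4-as-doublewrite setup boils down to one mount option plus two my.cnf lines; the device and mount point below are examples, not the ones from the benchmark:

```
# /etc/fstab entry -- data=journal makes ext4 journal the data as well
/dev/sda3   /var/lib/mysql   ext4   rw,data=journal   0   2

# my.cnf -- skip the InnoDB double write buffer; on kernels older than 3.2,
# set O_DSYNC explicitly instead of relying on the O_DIRECT mapping
[mysqld]
skip-innodb_doublewrite
innodb_flush_method = O_DSYNC
```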

Impacts on MyISAM

Since we are no longer really using O_DIRECT, even if it is set in my.cnf, the OS file cache will be used for InnoDB data. If the database only uses InnoDB, that’s not a big deal, but if MyISAM is significantly used, this may cause performance issues, since MyISAM relies on the OS file cache. Be warned.

Fast SSDs

If your SSD setup doesn’t offer a transactional file system like FusionIO’s directFS, a very interesting alternative would be to mix spinning drives and SSDs. For example, suppose we have a mirror of spinning drives behind a RAID controller with a write cache (and a BBU), plus SSD storage on a PCIe card. To reduce the write load on the SSD, we could send the file system journal to the spinning drives using the “journal_path=path” or “journal_dev=devnum” options of ext4. The RAID controller write cache would do an excellent job of merging the write operations for the file system journal, and the number of write operations going to the SSD would be cut in half. I don’t have access to such a setup, but it seems very promising performance-wise.
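Such a mixed setup can be sketched as follows. The device names are hypothetical, and nothing touches a disk unless you explicitly set EXT4_RUN=1; by default the script only prints the commands it would run:

```shell
#!/bin/sh
# External ext4 journal on spinning drives, data on SSD.
# Device names are hypothetical; commands are only printed unless EXT4_RUN=1.
JOURNAL_DEV=${JOURNAL_DEV:-/dev/sda3}   # partition on the RAID-backed mirror
DATA_DEV=${DATA_DEV:-/dev/fioa1}        # PCIe SSD partition

run() { if [ "${EXT4_RUN:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi; }

# 1. Format the spinning-drive partition as a dedicated external journal.
run mke2fs -O journal_dev -b 4096 "$JOURNAL_DEV"
# 2. Create the data filesystem on the SSD, attached to the external journal.
run mkfs.ext4 -b 4096 -J device="$JOURNAL_DEV" "$DATA_DEV"
# 3. Mount with full data journaling, as in the benchmarks above.
run mount -t ext4 -o rw,data=journal "$DATA_DEV" /var/lib/mysql
```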

Conclusion

Like ZFS, ext4 can be transactional, and replacing the InnoDB double write buffer with the file system transaction journal yields a 55% increase in performance for a write-intensive workload. Performance gains are also expected for SSD and mixed spinning/SSD configurations.

About Yves Trudeau

Yves is a Principal Consultant at Percona, specializing in technologies such as MySQL Cluster, Pacemaker and DRBD. He was previously a senior consultant for MySQL and Sun Microsystems. He holds a Ph.D. in Experimental Physics.

Comments

  1. Nils says:

    So if it is using the OS file cache, doesn’t that mean some unnecessary double buffering (once in the buffer pool, once in the OS cache)?

    I was toying with an idea similar to yours with the RAID controller and BBU; I was thinking of just putting the doublewrite buffer (and maybe the InnoDB logs) onto the RAID controller, perhaps just attaching two small SSDs. I may post again once I have some hardware available.

    If someone were to come up with a sort of PCIe Ramdisk with a BBU (or better, capacitors) at a competitive price, that would be a great alternative to RAID controllers, especially since the CoW filesystems like ZFS and btrfs usually also implement RAID in software.

  2. What is the perf benefit for a system with HW RAID and battery backed write cache?

  3. Alan Kent says:

    Did you try sending the double-buffer to a separate disk drive or SSD? Just wondering if the double-buffer sync-ing on the same drive as the main database is causing lots of head thrashing. (I don’t have deep MySQL specific experience, but this was a problem on another system. Giving the write-ahead buffer a separate drive helped.)

  4. Yves,

    The question I have in this case: do we know why this happens? Did you have a chance to check the amount of writes going to the device in this case?

    Here is what puzzles me. As I understand it, ZFS simply stores the new data in a new location, hence it only needs to update metadata to show where the new data is located.

    EXT4 with data=journal, if I understand correctly, stores data in the journal as well as in the location where it should be, so it should still double the amount of writes, as InnoDB does.

    If I’m correct, then don’t we have some problem in how the doublewrite buffer is implemented in InnoDB? I do not see a big reason why the filesystem should be able to do it a lot more efficiently. Or has the design of data=journal changed in EXT4?

    I also wonder whether the results for SSDs and RAID with BBU are the same, as this is where you can get dramatic differences: any “sync” is very expensive on conventional hard drives and much cheaper on a system with BBU or SSD.

  5. @MarkC: If you send the ext4 journal to a file on an array behind a RAID controller with a write cache and a BBU, it may actually never be written to disk and stay in the write cache memory, since the max journal size with 4KB pages is 400MB. In such a case, spinning disks will even be faster than SSDs.

    @Alan: The issue with the double write buffer is really internal to MySQL, those 64 pages are running hot and force some kind of serialization.

    @Peter: My explanation for this is the serialization effect of the double write buffer: in order to use some of the 64 pages in it, a thread must wait for another thread to complete its write to the data file. I did measure the amount of writes in both cases and yes, they are the same, but the ext4 journal seems to handle concurrency better. As for testing with SSDs and a RAID controller with BBU, I’d love to do that; any hardware I could use for a few hours?

  6. Dimitri says:

    Hi Yves,

    your results are looking very promising!! thanks for sharing!
    I’m also curious if using O_DSYNC instead of the “classic” O_DIRECT for RW workloads will not bring any other impact within other test cases…

    then, indeed, this EXT4 data journaling is not solving the issue with “double amount of data written”.. – however, keeping in mind arriving to the market various HW solutions for “battery protected memory” (even small amount, but if 400MB is already enough then it’ll look more than attractive, and “double write amount” will be completely avoided and keep a longer life for flash storage ;-))

    the only point with this EXT4 feature remains unclear for me: will EXT4 really guarantee the absence of partially written pages?.. (keeping in mind that EXT4 is operating with 4K, while InnoDB default is 16K, and EXT4 knows just nothing about InnoDB data, so for EXT4 “committed” 2 pages of 4K will remain safe, while there will be still 2 other pages “not committed”, breaking a 16K page for InnoDB).. — any idea?..

    and, yes, regarding generally the “double-write buffer” feature – its main bottleneck on a fast storage will be its own kind of serialization due to its implementation design.. One of the solutions here could be to have several such buffers (ex. one per BP instance) to increase the throughput (while the issue with twice-written data will still remain); or bypass the “buffer” and let each page double write go independently of the others, etc..

    while the best solution could be probably to have just a single write but on a different place, but I’m not sure at the end we’ll really have less IO writes ;-)

    or the best of the best is simply to avoid double write buffer completely by involving atomic writes, but unfortunately currently it’s available only on Fusion-io and not yet a common standard..

    Rgds,
    -Dimitri

  8. Hi Dimitri,
    When you use O_SYNC or O_DSYNC with data=journal, a 16KB write operation is transactional: you can’t have a fraction of the 16KB written. It is the exact same principle as a transaction in a database. The best way to convince yourself is to try :) I did many, many tests recently. I am trying to get hold of a server with a RAID controller, BBU, spinning drives and a PCIe SSD card. I’ll redo the benchmarks, splitting the journals (ext4 and InnoDB) onto the spinning drives and the datafile onto the SSD.

    Regards,

    Yves

  9. Dimitri says:

    Hi Yves,

    great then if it’s so!..
    (and means I have to replay all my tests with the following options as well ;-))

    Rgds,
    -Dimitri

  10. Nils says:

    I may have access soon to a system with a RAID Controller, BBU and a combination of SSD (only SATA though) and HDD. I will run a few tests with different combinations of log, data, doublewrite etc. location.

    Interestingly, there is already a controller for a PCIe based RAM drive that backs up into Flash, the IDT 89HF08P08AG3 [1]. Seems like nobody so far built a device based on it.

    [1] http://www.techpowerup.com/171760/idt-announces-worlds-first-pci-express-gen-3-nvme-nv-dram-controller.html

  11. Nils says:

    Just a follow-up to this post, I tried it on a busy system and there was a rather peculiar effect where the system would just stall at a determinable (is that a word?) moment. The result was a single MySQLd process hogging 100% of one CPU and every other thread stalling. This didn’t happen with data=writeback and I think I have isolated the problem to this mount option. Use caution.

  12. @Nils: I pushed quite hard on the server I tested and I never saw that behavior. I may also get access to a server with a RAID controller, BBU and a Fusion-io card; can you tell me more about your setup so that I can try to reproduce your results?
