The performance effects of new patches

We are going to show the effects of the new patches applied to the Percona HighPerf release. As you can see from the following graphs, there is a significant difference compared to the normal version when the data is bigger than the buffer pool (the right graph shows CPU usage).

The workload emulates TPC-C and has the same characteristics as DBT-2 (it is not DBT-2, but custom scripts; we will publish them eventually). There are no delays between transactions (no thinking time, no keying time), and it uses the MySQL C API and server-side prepared statements.

The server has an 8-core CPU and RAID storage (RAID 10, 6 disks). The data is populated at a scale factor of 40 warehouses (about 4 GB), which is sufficiently larger than the data cache of the storage.

Main common settings:

innodb_buffer_pool_size = 2048M
innodb_thread_concurrency = 0
innodb_max_dirty_pages_pct = 70
innodb_flush_method = O_DIRECT

The next graphs show the frequency of IO, and you can see that the new patches deliver more IO performance.

So, how do the patches contribute to the improvements in this case?


The patch splits the global InnoDB buffer_pool mutex into several mutexes and eliminates waiting on flush IO and on the mutex when there are not enough free buffers. It helps if you see performance drops when the data does not fit in memory.

Attention (Windows only): you should not use this with AWE. It is not tested at all.


Generally, RAID storage can parallelize IO accesses across several disks. This patch makes the number of IO threads configurable on Unix/Linux and spreads the work evenly across the threads for parallel use. By default InnoDB uses
1 insert buffer thread, 1 log thread, 1 read thread, 1 write thread. With innodb_file_io_threads = N, InnoDB will use
1 insert buffer thread, 1 log thread, N/2-1 read threads, N/2-1 write threads. You can start from “innodb_file_io_threads = 10” for almost all RAID storage. The more disks the RAID has, the bigger the value that may make sense.

* This test uses “innodb_file_io_threads = 34” (though that value is excessive)
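For example, the suggested starting point would look like this in my.cnf, with the resulting thread split following the N/2-1 rule above:

innodb_file_io_threads = 10
# => 1 insert buffer thread + 1 log thread + 4 read threads + 4 write threads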


This patch makes more IO-related parameters configurable. Normally you should not need to change the following parameters, but in some extreme cases they are worth tuning.

The following parameters are added.

innodb_read_ahead (default 3)

This controls whether read-ahead is enabled:
3: normal
2: enable linear read-ahead only
1: enable random read-ahead only
0: disable both
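As a hypothetical illustration (not a value used in this test): if random read-ahead hurts your workload but linear read-ahead helps, you would set

innodb_read_ahead = 2
# linear read-ahead only; random read-ahead disabled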

innodb_ibuf_contract_const (default 5) – Usual value
innodb_ibuf_contract_burst (default 20) – Burst value

The bigger the value, the harder InnoDB tries not to accumulate records in the insert buffer
(though more IO occurs at the same time).

A large value of “Ibuf: size” (e.g. > 1000) tends to hurt performance. If you have a powerful IO system, you might set bigger values.

innodb_ibuf_contract_* is the number of pages InnoDB tries to read into
the buffer pool in order to merge the insert buffer contents into them.

innodb_ibuf_contract_const is used when InnoDB runs a batch of insert buffer merges every 10 seconds, and
innodb_ibuf_contract_burst is used when the IO system is idle – a comment from the InnoDB code: /* If there were less than 5 i/os during the one second sleep, we assume that there is free disk i/o capacity available, and it makes sense to do an insert buffer merge. */

innodb_buf_flush_const (default 10) – Usual value
innodb_buf_flush_burst (default 100) – Burst value

These control the number of blocks flushed at once.
(Flushing: writing modified db pages in flush_list order.)

If you use an extremely fast device (e.g. ramfs), a bigger value may help performance.
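As a sketch only (the values here are assumptions for illustration, not tested recommendations), a very fast device might tolerate something like:

innodb_buf_flush_const = 100
innodb_buf_flush_burst = 1000
# flush more blocks per pass; only sensible when the device can absorb the writes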

* This test uses
“innodb_read_ahead = 0”
(Both kinds of read-ahead are disabled)
“innodb_ibuf_contract_const = 50000”
“innodb_ibuf_contract_burst = 50000”
(For our workload it is better not to store many records in the insert buffer)
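Putting it together, the full set of InnoDB settings used in this test (common settings plus the patch parameters listed above) would look like this in my.cnf:

innodb_buffer_pool_size = 2048M
innodb_thread_concurrency = 0
innodb_max_dirty_pages_pct = 70
innodb_flush_method = O_DIRECT
innodb_file_io_threads = 34
innodb_read_ahead = 0
innodb_ibuf_contract_const = 50000
innodb_ibuf_contract_burst = 50000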

In conclusion, if you are using fast RAID storage, and/or observe performance decreases caused by a shortage of free buffers, you may want to try the Percona HighPerf release.


Comments (14)

  • Alexey Kupersthokh Reply

The numbers are very exciting, but I would also like to hear about these concerns:
1) How stable is the release compared to the original 5.0.63 version? Do you, or are you going to, generally recommend it to your customers? If not, what is usually on the other side of the scale?
    2) Do you know any plans of the MySQL development team to include the same patches to their GA?
3) How would you estimate the fairness of the test? Does the environment, the test, or anything else place the HighPerf release in a good light? I mean that, for example, “innodb_file_io_threads = 34” seems “very optimal” for the patched release. Does the normal release have its values close to optimal too?
    4) Do you see any other disadvantages in using the HighPerf release?

    September 10, 2008 at 12:10 am
  • Matic Reply

Where can these patches be downloaded from?

    September 10, 2008 at 1:18 am
  • Matic Reply

    Never mind my previous post, I’ve found the links at

    September 10, 2008 at 1:29 am
  • Hakan Küçükyılmaz Reply


    congratulations on this huge performance gain. I have one
    detail question. Why do you set
    innodb_max_dirty_pages_pct = 70

    I see that the default is 90. Did you do some measurements around it?

    September 10, 2008 at 6:38 am
  • Vadim Reply


1) We consider it less stable compared to standard MySQL, as it is less tested. We are going to recommend this release to customers who experience scaling problems, especially on fast IO systems.

2) These are fixes to InnoDB, and as Oracle owns it, it is hard to say anything about plans – Oracle’s policy is not to disclose any development plans. From the bug report I see InnoDB has plans to research these patches.

3) Sorry, I do not fully understand what you are asking. We show results where the patches show a good improvement. For sure there are cases where you will not see the improvements – for example, clearly CPU-bound cases, when the data fits into the buffer pool – but there should not be any performance degradation with our patches either.

4) The disadvantage I see is that it is less tested at this moment. We are testing it on our production systems, on TPC-C, TPC-E, and TPC-H emulation workloads, and on intensive sysbench benchmarks, and we have had no problems so far. And this is the help we expect from the community – to help us with testing our releases.

    September 10, 2008 at 10:22 am
  • Vadim Reply


We did not do any special investigation around innodb_max_dirty_pages_pct to show you results; we think that in this case
InnoDB will be less aggressive in flushing dirty buffer pool pages to disk. With the default configuration we see periodic significant drops in TPM when InnoDB starts the flushing process.

    September 10, 2008 at 10:26 am
  • Hakan Küçükyılmaz Reply


I see those periodic drops in TPM when running DBT-2, too. I will try your setting;
maybe it will help.

    Thanks for explaining the variable.



    September 10, 2008 at 10:29 am
  • Xabier Eizmendi Reply

We are experiencing some problems while testing these patches. Truncating tables is very slow (5 minutes to truncate a 32-million-row table). It used to take seconds with the official version.
We are using MySQL 5.0.67.

    Has anyone else had the same problem?

    September 12, 2008 at 2:25 am
  • Koa McCullough Reply

    Is Percona offering support packages for these new builds?

    September 12, 2008 at 3:22 pm
  • Vadim Reply


    Yes, if you are customer.

    September 12, 2008 at 3:30 pm
  • Vadim Reply


Do you use the highperf release?

Can you provide a dump of the table so we can test it as well?

    September 12, 2008 at 3:38 pm
  • Olivier Reply


How did you make these graphs?
– What is a session for you? A connection to the host? Doing what?
– What tools did you use to benchmark IO/CPU/TPM? Sysbench?

    September 25, 2008 at 8:19 am
  • Vadim Reply


These are custom scripts to prepare the graphs; we also have custom scripts to emulate the TPC-C load, where a session is an active connection to MySQL.

    September 25, 2008 at 6:50 pm
  • S2eve Reply

innodb_thread_concurrency = 0 is not a safe setting

    February 9, 2010 at 11:58 pm

Leave a Reply