Multiple purge threads in Percona Server 5.1.56 and MySQL 5.6.2

Being an MVCC-implementing storage engine, part of InnoDB's duties is to get rid of (purge) the old versions of records as they become obsolete. In MySQL 5.1 this is done by the master InnoDB thread. Since then, InnoDB has been moving towards parallelized purge: MySQL 5.5 has an option for a single separate dedicated purge thread, and in MySQL 5.6.2 one can have multiple dedicated purge threads.

Percona Server 5.1 supports multiple purge threads too, although using more than one is currently considered experimental. Unfortunately, this patch has not been ported to Percona Server 5.5 yet.

Let’s test these two implementations and find out what benefits, if any, the additional purge threads bring.

The test workload builds a long history list and then lets the purge thread(s) work through it while a regular OLTP load runs on the server. The OLTP part is provided by Sysbench: a single-table workload with 16 client threads. The growth of the history list, and the subsequent purge work, is ensured by a long-running transaction at the REPEATABLE READ isolation level, started WITH CONSISTENT SNAPSHOT. In the middle of the workload this transaction commits, and so the purge begins.
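Schematically, the history-building part of the workload looks like this (the actual scripts are on the Benchmark Wiki; this is just a sketch):

```sql
-- Open a long-running transaction that pins the read view, so purge
-- cannot remove any record versions created after this point.
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION WITH CONSISTENT SNAPSHOT;

-- ... meanwhile the Sysbench OLTP load runs in other connections,
-- growing the history list the whole time ...

-- Halfway through the run the transaction commits, releasing the
-- read view; the purge thread(s) can now work through the backlog.
COMMIT;
```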

The tests were performed on Percona’s R900 machine. The scripts and results are on the Benchmark Wiki, together with the relevant my.cnf settings.
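The original settings table is not reproduced here; as a minimal sketch, the knobs relevant to this test looked roughly like the following (the innodb_io_capacity, innodb_purge_batch_size, and buffer pool values are the ones quoted in the comment discussion below; everything else here is an assumption, and note that the purge thread count option is innodb_purge_threads in MySQL 5.6 but innodb_use_purge_thread in Percona Server 5.1):

```ini
[mysqld]
# Number of dedicated purge threads, varied per experiment: 0, 1, 2, 4, 8
# (MySQL 5.6 spelling; Percona Server 5.1 uses innodb_use_purge_thread)
innodb_purge_threads     = 4
innodb_io_capacity       = 200
innodb_purge_batch_size  = 20    # hardcoded in 5.1
innodb_buffer_pool_size  = 16G   # per the comment discussion below
```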

Percona Server 5.1 Results

[Graphs: Percona Server 5.1.56-rel12.7 with 0, 1, 2, 4, and 8 dedicated purge threads]

For better presentation, let’s also slice this data the other way:
[Graphs: Percona Server 5.1.56-rel12.7 history list length; Percona Server 5.1.56-rel12.7 TPS]

From these results we can see that the multiple purge threads achieve their goal of processing the history list faster. Of course, this comes at the cost of lower TPS. There is one possible area for improvement: once the purge threads fully catch up with the history list (e.g. at around 2800 seconds in the 4-thread experiment), their activity could be throttled better so as not to penalize TPS further.

MySQL 5.6.2 Results

Now let’s test MySQL 5.6.2. It is important to remember that multiple purge thread support in 5.6.2 is very experimental and is likely to receive a lot of tuning in the future. In fact, as of this writing, the code in the trunk is already different.

[Graphs: MySQL 5.6.2 with 0, 1, 2, 4, and 8 dedicated purge threads]

Huh? The dedicated purge threads in 5.6.2 completely fail to stop the history list growth, but at least the additional threads do not penalize TPS. On the other hand, TPS does drop slightly over time in the second half of the experiments.

The same data presented the other way:
[Graphs: MySQL 5.6.2 history list length; MySQL 5.6.2 TPS]

Here it becomes very clear that varying the number of dedicated purge threads causes no significant differences. The workload is single-table, and the InnoDB team advises using just one purge thread for such workloads. The Percona Server 5.1 implementation, on the other hand, is effective in this setting with multiple threads too.

The ineffectiveness of 5.6.2 here was found by Dimitri Kravtchuk before, and the linked post suggests a fix: in 5.6.2 the purge activity is interspersed with short sleeps, and apparently those sleep delays are too long, so let’s try not sleeping at all. This is a quick and dirty change and by no means a replacement for proper future 5.6 tuning.
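The actual patch is on the Benchmark Wiki; schematically, the change amounts to removing the inter-batch delay from the purge coordinator loop. The sketch below is illustrative pseudocode, not the real 5.6.2 source (the identifiers and loop shape are assumptions for the sake of the example):

```c
/* Illustrative sketch of a purge coordinator loop, not the real code. */
for (;;) {
    /* Purge one batch of undo log records. */
    n_pages_purged = do_purge_batch(batch_size);

    if (n_pages_purged == 0) {
        break;  /* history list fully processed */
    }

    /* os_thread_sleep(delay);  <-- the "no-wait" change: skip the
       inter-batch sleep so purge runs through the backlog at full
       speed instead of being throttled by the sleep delay. */
}
```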

MySQL 5.6.2 Results With Purge Sleeps Removed

I’ve made a tiny change along these lines (the patch can be found on the Benchmark Wiki with the rest of the scripts and results), and here are the updated 5.6.2 results.

[Graph: MySQL 5.6.2-no-wait, 0 dedicated purge threads]
The difference against baseline 5.6.2 is within the noise if no dedicated purge threads are used.

[Graph: MySQL 5.6.2-no-wait, 1 dedicated purge thread]
We can see that the change has made a difference: the purge thread started working through the history list, at the cost of TPS.

[Graphs: MySQL 5.6.2-no-wait with 2, 4, and 8 dedicated purge threads]

Even with the change, multiple dedicated purge threads are not effective enough: they only slow the growth of the history list. But TPS in the second half of the experiments is stable.

Finally, the last two graphs:
[Graphs: MySQL 5.6.2-no-wait history list length; MySQL 5.6.2-no-wait TPS]

Again, we can see that there is no difference in history list length or TPS caused by the number of dedicated purge threads, as long as that number is more than one.


To conclude, the only viable multiple purge thread implementation at the moment, at least for the tested setting, seems to be the one in Percona Server 5.1. The MySQL 5.6.2 multiple purge threads have major issues, and just removing the purge sleeps is not enough to solve them. However, MySQL 5.6.2 is experimental code, and I am sure that the next MySQL 5.6 versions will contain a fixed and much better performing implementation, which I am looking forward to.


Comments (13)

  • Laurynas Biveinis

    @James: yes, these I plan to test when I revisit these experiments.

    May 3, 2011 at 12:00 am
  • James Day

    Laurynas, thanks. When the Oracle one is finished code, without all the FIXMEs it has in 5.6.2, you could test innodb_purge_batch_size to see whether it makes a difference. Also, innodb_io_capacity matters, because there are cases where the Percona builds ignore it while the Oracle ones don’t. If no mention is made of innodb_io_capacity, I tend to assume misconfiguration. For the moment, though, you were just testing an unfinished implementation, so there is not a lot to learn.

    May 3, 2011 at 12:00 am
  • Laurynas Biveinis

    For both MySQL 5.6.2 and Percona Server 5.1:
    innodb_io_capacity = 200
    innodb_purge_batch_size = 20 (in 5.1 hardcoded of course)

    May 3, 2011 at 12:00 am
  • Laurynas Biveinis

    @Peter: I plan to revisit this with the next MySQL 5.6 version and possibly multi-table sysbench, and also will dig deeper into setting and results then – including CPU vs I/O bound workloads.

    May 3, 2011 at 12:00 am
  • James Day

    What were the settings for innodb_io_capacity and innodb_purge_batch_size?

    May 3, 2011 at 12:00 am
  • Peter Zaitsev

    Yes, I know it is. Though my preference would be to have less messing with the system. Having scripts that constantly change options is very fragile. I’d pay a bit of performance to have a robust system which just handles changing workloads well.

    May 3, 2011 at 12:00 am
  • Dimitri


    don’t forget that “innodb_max_purge_lag” is dynamic ;-) – so if you’ve some planned night tasks just set it to zero during this period.. – then bring it back to the needed value once activity is back to the normal..
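    For example (the restored value below is just a placeholder, use whatever your production setting is):

    ```sql
    -- before the night batch: 0 disables the purge-lag delay entirely
    SET GLOBAL innodb_max_purge_lag = 0;
    -- ... night tasks run ...
    -- afterwards, restore the production value
    SET GLOBAL innodb_max_purge_lag = 100000;
    ```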

    Purge has an important cost, and for the moment all proposed solutions to speed up the purge also considerably slow down the TPS level, which is why “Purge Thread + max purge lag” looks like the most optimal combination to me for the moment..


    May 3, 2011 at 12:00 am
  • Peter Zaitsev


    innodb_max_purge_lag can indeed be good for uniform workloads, but this is not always the case.
    Consider, for example, using mysqldump --single-transaction nightly to do the backup, which creates a large increase in history size. In this case I do not want my OLTP workload to be penalized more than it should be; however, I would like the history to be cleaned up eventually, which requires multiple purge threads for I/O-bound processing.

    May 3, 2011 at 12:00 am
  • Dimitri


    thanks for pointing to MT purge issue in 5.6.2! – well, work is still in progress here..

    then, regarding the Purge Thread itself: anyone expecting “normal” or “as designed” InnoDB operation should always keep a Purge Thread enabled! I’ve explained before why, and I very much hope that since MySQL 5.6 there will be no other option than “enabled” :-)

    and for the moment, for optimal performance, I still prefer a combination of “Purge Thread + max purge lag setting” rather than several Purge Threads, as it gave me better performance results in the past even with XtraDB (so with MySQL 5.6.2 I did not even try several threads yet, as I know there will be mutex contention for the moment; Sunny mentioned it in his blog post as well)..


    May 3, 2011 at 12:00 am
  • Peter Zaitsev


    Is this a CPU-bound or an I/O-bound purge operation? In this case, with short transactions in Sysbench, each probably generating 1-2KB of undo space, there may be just 10-20GB of undo space, which with a 16GB buffer pool can still be CPU-bound.

    What also hints at this is that Percona Server is able to catch up with the workload even with 1 purge thread, while I know from practice there are many cases when it does not.

    Running the test with a smaller buffer pool size (say 1GB), or a longer run/larger data set, may show this case. I expect there are more gains to be had if you can trigger an I/O-bound purge.

    I’d also use just update-key in sysbench instead of OLTP, as it would create undo space faster by avoiding all the read queries.

    May 3, 2011 at 12:00 am
  • claudio nanni

    Very interesting investigation, and clear presentation, thanks.

    May 3, 2011 at 12:00 am
  • arun

    I faced a purge issue causing huge I/O activity on our poor hardware (RAID 5), but strangely, I tried all possible combinations of the max_purge_lag variable and waited for more than 10 seconds, and it did not have any effect on the load; MySQL kept on doing the background I/O work, causing problems for us today.

    Is this a known issue in MySQL 5.0.x?

    August 11, 2011 at 9:34 pm
  • Laurynas

    Arun –

    I’m not familiar with 5.0 very much, but have you read ? Do you monitor history length value on your workload?

    August 25, 2011 at 8:51 am

Comments are closed.
