5.4 in-memory TPC-C-like load

Continuing my benchmarks of 5.4 (https://www.percona.com/blog/2009/04/30/looking-on-54-io-bound-benchmarks/), I tried an in-memory load: basically, I changed the buffer pool from 3GB to 15GB, while the database size is 10GB. The results are on the same spreadsheet http://spreadsheets.google.com/ccc?key=rYZB2dd2j1pQsvWs2kFvTsg&hl=en#, page CPUBound.
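For reference, the setup change amounts to a single my.cnf adjustment. This is a sketch: only innodb_buffer_pool_size is stated in the post; the other lines are illustrative placeholders, not my actual settings.

```ini
[mysqld]
# Raised from 3G so the 10GB database fits entirely in memory
innodb_buffer_pool_size = 15G
# Illustrative only -- not from the original benchmark config
innodb_log_file_size    = 512M
innodb_flush_method     = O_DIRECT
```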

I intentionally used a short warmup (120 sec) and a long run (2700 sec) to see how the different versions go through the warmup stage.

The graph: [throughput-over-time graph comparing 5.4, XtraDB in default mode, and XtraDB with adaptive_checkpoint]

In default mode I would say XtraDB performs almost the same as 5.4, but the dips are a bit worse than in 5.4.

Now about the dips – all of them are caused by InnoDB checkpoint activity: InnoDB does intensive flushing of buffer pool pages, and that basically stalls all user processes for some period of time.
XtraDB has a special mode, adaptive_checkpoint, and you can see the results for this mode. While the maximum performance is worse, there are no dips, and the average performance is better.
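As a sketch of how this mode is turned on: in early XtraDB releases the feature was exposed as the server variable innodb_adaptive_checkpoint (the exact variable name and accepted values varied across XtraDB releases, so treat this as an assumption to verify against your version's documentation).

```ini
[mysqld]
# Enable adaptive checkpointing (XtraDB-only variable; name/values
# are release-dependent -- check your XtraDB documentation)
innodb_adaptive_checkpoint = 1
```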

If Sun Performance engineers read this – I call your attention to this problem; please do not ignore it, since Sun is making changes to InnoDB anyway.

If InnoDB engineers read this and are interested – then yes, we are ready to provide the adaptive_checkpoint patch under a BSD license.

Also, I was asked whether setting a small innodb_max_dirty_pages_pct would help with such dips – you can see the results for 5.4 with innodb_max_dirty_pages_pct=15. There really are no dips, but the average performance is not acceptable. I also tried innodb_max_dirty_pages_pct=30, but the results in that case are similar to regular 5.4, so I do not show them.

Comments (11)

  • Ken Jacobs

    Good stuff, Vadim.

    As you know, we at Innobase do follow your work closely. As I mentioned to you at the MySQL Conference, and previously and recently in email to Peter, we have been and are quite interested in considering your patches for a future version of InnoDB, and can do so only if they are available to us under a BSD license. This is a great way to improve performance for all MySQL users, and to get the maximum distribution of your good work. We’re of course happy to acknowledge your contribution in source code and documentation as you wish. We are glad to see your public statement offering this patch (and I hope others) to us on a BSD license.

    I will also repeat what I’ve said before … we are and have always been open to community contributions that meet the following criteria:

    * technical correctness/robustness
    * appropriateness given our other product plans/directions
    * suitable license

    As Vasil described in this blog post (http://blogs.innodb.com/wp/2009/03/software-is-hard-sometimes/), we will evaluate community-contributed patches for correctness, make them portable, adjust as necessary to integrate with the latest version of the InnoDB Plugin, and conduct our own testing, to ensure InnoDB’s continued reliability.

We can’t commit a priori to accepting or even evaluating all contributions, since our resources are limited. But we are, and have been, open to receiving user contributions on the basis outlined above.

    The work you and the team at Percona have done to demonstrate the value of these changes is very helpful. Feel free to contact me directly by email to further discuss this topic.



    May 1, 2009 at 10:09 am
  • Mark Callaghan

This is great work. The performance results make it obvious. I first learned of this problem from Percona, reproduced it on my hardware, and then fixed it in my source tree.

    May 1, 2009 at 10:33 am
  • Sheeri K. Cabral

Interesting — it’s great that you have a patch that makes the dips smaller, but is the overall performance sacrifice worth it? I have to assume yes, otherwise you wouldn’t mention it, but it looks like the performance hit is about 500 transactions per second to avoid the dips.

    May 1, 2009 at 11:53 am
  • Mark Callaghan

Would you accept 20% greater throughput on average if that included extended windows where TPS dropped to 0? Some variance may be acceptable, but the current behavior is extreme. At the other extreme is Amazon, who really, really cares about variance in response time — http://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf

    May 1, 2009 at 12:03 pm
  • Vadim


Here are the numbers for max throughput and the average final result.

5.4 – 8524 (these are transactions per 10 sec, actually)
xtradb5 – 8507
xtradb5-adaptive – 7648

    and final result:
    5.4 – 36628.488 TPM
    xtradb5 – 35840.578 TPM
    xtradb5-adaptive – 37223.289 TPM

so you lose about 10% in maximal throughput, but the final result is even better.

And it is a trade-off; you decide what is preferable for you.
For some clients, where a slave is not able to recover after a dip and falls behind the master forever – adaptive_checkpoint is helpful.
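These percentages can be checked directly from the numbers above (a quick arithmetic sketch, using only the figures quoted in this comment):

```python
# Max throughput, measured in transactions per 10 sec
max_tps_10s = {"5.4": 8524, "xtradb5": 8507, "xtradb5-adaptive": 7648}
# Final average result, in TPM
final_tpm = {"5.4": 36628.488, "xtradb5": 35840.578, "xtradb5-adaptive": 37223.289}

# Peak-throughput loss of adaptive mode relative to 5.4
max_loss = 1 - max_tps_10s["xtradb5-adaptive"] / max_tps_10s["5.4"]
# Average-throughput gain of adaptive mode relative to 5.4
avg_gain = final_tpm["xtradb5-adaptive"] / final_tpm["5.4"] - 1

print(f"peak throughput loss vs 5.4: {max_loss:.1%}")  # ~10%
print(f"average (TPM) gain vs 5.4: {avg_gain:.1%}")    # ~1.6%
```

So the "about 10%" peak loss buys roughly a 1.6% higher average and, more importantly, no stalls.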

    May 1, 2009 at 1:30 pm
  • Sheeri

Vadim — that is good. I had assumed that the overall throughput was better, because otherwise why would you tout how great it is to lose the peaks, unless the average or overall was better? It’s good to have the numbers as proof. I didn’t want to assume, in case it was a wrong assumption. And of course, if the dips are detrimental to a client but a slight loss of the maximum is OK, then obviously the tradeoff would be appropriate.

The best bet would be for 5.4 and adaptive_checkpoint to work together — whether in a Percona release or an InnoDB/MySQL release — so that we could have the best maximal TPS, the best average TPS, and the fewest dips.

    May 1, 2009 at 5:40 pm
  • Dimitri

    Hi Vadim,

thanks for pointing it out! I already wrote about the benefit of Adaptive Checkpoint at http://dimitrik.free.fr/db_STRESS_MySQL_540_and_others_Apr2009.html#note_5397 (and a single graph says it better than words), but I did not realize there may be cases when performance with it may be slightly lower – I have always preferred a stable AVG TPS over a better MAX TPS (but in any case it’s important to know both).

What was your my.cnf during this test, and how many sessions did you run in parallel? I’ll try to replay a similar workload on Solaris/SPARC and see if we observe the same thing…


    May 3, 2009 at 4:30 am
  • Ryan Huddleston

Have you tried 5.4 with the new option innodb_extra_dirty_writes?

Also, I wonder whether the InnoDB changes added in 5.4 will be available to the InnoDB team to integrate into the next InnoDB Plugin?

    May 3, 2009 at 10:34 am
  • Mark Callaghan

innodb_extra_dirty_writes came from an earlier Google patch. I don’t think it does much, and it is not in the v3 Google patch. There are better changes for IO performance in the Percona and Google patches; 5.4 has not caught up to them yet.

    May 3, 2009 at 11:59 am
  • Carlo Curino

I think it might be relevant to mention here… I’ve run several tests of a disk-bound TPC-C on MySQL 5.4, both with and without O_DIRECT, and also tested what happens when running multiple separate workloads both on the same MySQL instance and on two MySQL instances on the same machine. I collected several HW statistics and reported everything here: http://relationalcloud.com/index.php?title=Experiments

Let me know what you think, guys.

    April 14, 2010 at 9:43 pm
  • Vadim


I do not see how many user connections you used in your runs?

    April 16, 2010 at 10:09 pm
