MySQL 5.5.4 in tpcc-like workload

MySQL 5.5.4 is a great release with performance improvements, so let's see how it performs in a
tpcc-like workload.

The full details are on the Wiki page.

I took MySQL 5.5.4 with InnoDB 1.1 and the tpcc-mysql benchmark with 200W (about 18GB worth of data);
the InnoDB log files are 3.8GB in size, and I ran it with different buffer pools from 20GB to 6GB. The storage is a FusionIO 320GB MLC card with XFS-nobarrier.
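For reference, here is a my.cnf sketch consistent with the setup described above. Values the post does not state (the log file split, the flush method) are my assumptions, not the actual benchmark configuration:

```ini
# Sketch of the benchmark configuration described above.
# Values not stated in the post are assumptions -- adjust for your system.
[mysqld]
innodb_buffer_pool_size   = 20G       # varied from 20G down to 6G across runs
innodb_log_files_in_group = 2         # assumed split
innodb_log_file_size      = 1900M     # 2 x 1900M ~= 3.8G of redo log in total
innodb_flush_method       = O_DIRECT  # assumed; common for benchmarks on fast storage
# Storage: FusionIO 320GB MLC card, XFS mounted with nobarrier
```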

While the raw results are available on the Wiki, here are the graphical results.

I intentionally put all the lines on the same graph to show the trends.

It seems adaptive_flushing is not able to keep up, and you see periodic drops when InnoDB starts flushing. I hope the InnoDB team will fix it before the 5.5 GA.
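For readers who want to check whether adaptive flushing is the culprit on their own systems, a minimal diagnostic sketch (variable names as in the InnoDB plugin / 5.5; verify against your version):

```sql
-- Check whether adaptive flushing is enabled
SHOW GLOBAL VARIABLES LIKE 'innodb_adaptive_flushing';

-- Checkpoint age can be watched in the LOG section of:
SHOW ENGINE INNODB STATUS\G

-- To test whether adaptive flushing itself causes the drops,
-- it can be disabled at runtime (it is a dynamic variable):
SET GLOBAL innodb_adaptive_flushing = OFF;
```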

I expect the reasonable request to compare it with Percona Server/XtraDB, so here is
the same load on our server:

As you see, our adaptive_checkpoint algorithm performs much more stably.
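For reference, the XtraDB algorithm mentioned here is controlled by a separate XtraDB-only server variable; the value names below are my recollection of the XtraDB options of that era, so check the release notes for your build:

```ini
# XtraDB-only setting selecting its checkpointing algorithm
# (values in XtraDB of that era included none, reflex, estimate).
[mysqld]
innodb_adaptive_checkpoint = estimate
```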

And for a direct comparison, here are the side-by-side results for the 10GB buffer_pool case.

So as you see, InnoDB is doing great at trying to keep performance even; in the previous release there was about a 1.7x difference. I expect to see more improvements in the 5.5 GA.

UPDATE: (9-May-2010)
I posted results with innodb_io_capacity=500 and 2000 for MySQL 5.5.4.


So when the data fits into memory (buffer_pool=24GB), it seems innodb_io_capacity=2000 helps to avoid the periodic drops in MySQL 5.5.4.

It does not help, though, when the buffer_pool is smaller than the dataset.
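Since innodb_io_capacity is a dynamic variable in 5.5, trying the higher value does not require a restart; a quick sketch:

```sql
-- Raise the background-flushing I/O hint at runtime
SET GLOBAL innodb_io_capacity = 2000;
-- or persist it in my.cnf under [mysqld]:
--   innodb_io_capacity = 2000
```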


Comments (25)

  • angel

    I like it very much, thank you for sharing

    April 21, 2010 at 12:00 am
  • Andy

    XtraDB looks great!

    Could you also benchmark XtraDB against PBXT? They have an interesting architecture. And their engine level replication looks great. Is engine level replication something that can be done for InnoDB/XtraDB too?

    April 21, 2010 at 2:40 pm
  • Vadim


    Thank you.
    Frankly, I had a hard time making PBXT run the tpcc-mysql workload. The PBXT team is going to fix all the issues,
    and after that I will give PBXT another try.

    As for engine-level replication for InnoDB/XtraDB, there is the Galera replication project, but there is some development
    to be done to make it production-ready.

    April 21, 2010 at 2:49 pm
  • Yasufumi

    It depends on the workload, as I said at UC2010.
    In another workload, adaptive_flushing may be better.
    Both methods are worth trying. (But I don't know which is best for now… :-))

    InnoDB is now comparatively well improved,
    so each of the improvement methods is not a universal improvement
    (each is specific to some workloads).

    We should investigate
    “which method is good for which workload”.

    April 21, 2010 at 4:41 pm
  • Andy


    Isn't Galera synchronous? Wouldn't that mean even lower performance than MySQL's built-in async replication?

    Will XtraDB be based on 5.5/InnoDB 1.1 moving forward?

    April 21, 2010 at 4:56 pm
  • Andy


    Galera is synchronous, wouldn’t that make it even slower than MySQL’s built-in async replication?

    April 21, 2010 at 7:33 pm
  • Sebastien

    Thanks for the benchmark.
    Did you notice an extra heavy load with Percona Server against MySQL (5.1.44 in my case)?
    Especially on renaming big tables?
    Thanks again for the work!

    April 22, 2010 at 1:27 am
  • Dimitri

    Hi Vadim,

    it's an interesting observation you have here.. – is it possible to replay the same test but with the I/O capacity set to 2000 rather than 500?..
    and with just a 20GB buffer pool?

    As you have a similar performance drop on XtraDB too at the beginning of the test, it makes me think it's not really related to the flushing algorithm, but more likely to the configuration settings.. But only real testing can give a real answer here 🙂


    April 22, 2010 at 8:51 am
  • Robin

    Very interesting Vadim – thanks for posting! I’d like to see the PBXT comparison too (when PBXT is ready).


    April 22, 2010 at 9:21 am
  • Baron Schwartz

    I’ve seen the InnoDB adaptive flushing algorithm behave like the graphs Vadim showed, in production servers. I do not think the algorithm is optimal. I am not sure I believe in I/O capacity as a root cause/root solution. It feels like a hack. The best solution would be for the database to write as much data as needed to maintain steady-state, not to write as much as it thinks the disks can handle. (I know I am over-simplifying.)

    April 22, 2010 at 9:40 am
  • Dimitri

    @Baron – I don't agree with you at all.. – I like the IO capacity setting very much and think it's a very powerful solution. And if you want to write as much data as needed – just set it to 1M; where is the problem? 😉

    Regarding the adaptive flushing algorithm – personally, I don't think it's optimal.. But I don't think the ones from XtraDB are either 🙂 More testing and analyzing on various workloads is still needed to find the most optimal way..


    April 23, 2010 at 4:42 am
  • Vadim


    I am testing with different innodb_io_capacity values. I was going to look at the effect of this setting anyway 🙂

    April 23, 2010 at 6:42 am
  • peter


    Thanks for the results. Indeed, variance is quite important in production and unfortunately is not often represented in benchmarks. In practice you're not interested in the average but in the lowest point your workload regularly drops to, as you probably would not like to see your web site down for 5 minutes each hour because of a “hiccup”.

    I see XtraDB also has one hiccup at large buffer pool sizes, at about 10 minutes. It looks like it is not repeated and can be considered part of the warmup process, but we need to understand why it happens and, better, get rid of it.

    April 23, 2010 at 7:59 am
  • peter


    I honestly agree with Baron. io_capacity is not the best solution, especially in how it is used to measure whether the server is idle, etc. I think there can be better and more dynamic ways to handle IO, though they would probably change the IO architecture completely. IO performance may not be constant, ranging from running a backup concurrently to having shared storage with virtualization or EC2. One could measure the IO response times. Generally you want to keep the IO subsystem busy with background operations, but not to the extent that foreground operations' latency is affected. Good per-IO priorities could also really help here, so you could submit a lot of IOs from background threads and simply let the kernel handle them with idle priority when it has nothing else to do. I worked with a research team at DIKU on this a while back and the results were quite encouraging.

    April 23, 2010 at 8:05 am
  • peter


    Did you test the “heavy artillery” in XtraDB 10 or MySQL 5.5.4 – I mean asynchronous IO for MySQL, and fast checksums and small page sizes for XtraDB?

    April 23, 2010 at 8:40 am
  • Vadim


    Asynchronous IO is “on” by default in 5.5.4, so I tested with it.

    Also, I used “fast_checksums” in XtraDB, but I see it does not affect the results much (a couple of percent).

    April 23, 2010 at 8:49 am
  • Vadim


    I posted graphs with different io_capacity values for the 10GB buffer_pool here:

    We may find that io_capacity=1000 is somewhat better, but not dramatically.

    You are very welcome to repeat the results on my box with FusionIO cards; I can give you access.

    April 23, 2010 at 4:00 pm
  • pingshan

    I am still using MySQL 5.0.x

    April 23, 2010 at 8:31 pm
  • Vadim


    For 5.0 we have Percona Server 5.0, which shows performance comparable to XtraDB.

    April 23, 2010 at 9:02 pm
  • Baron Schwartz

    Vadim, any idea what is going on with the ioc500 graph in Percona Server in the graphs linked from your comment #14 above?

    April 24, 2010 at 5:51 am
  • Vadim


    No 🙁 I do not have a clear explanation…

    April 24, 2010 at 9:18 am
  • Dimitri

    @Peter – I think the solution with IO capacity will still be lighter than any other.. as well as extremely simple and easily applicable 😉 Especially with all the incoming SSD-based solutions we'll save a huge amount of CPU cycles.. But well – it's my opinion, and I'm not ready to change it until you present me with another working solution 😉

    @Vadim – I agree to connect to your server and try to analyze it together – let's get in contact on Monday (by mail or Skype) – I'll also need another Linux host/PC to set up dim_STAT and fully monitor your workload.. (let's take it off-list)

    I’m very curious to understand what’s going wrong here..


    April 24, 2010 at 10:28 am
  • Vadim


    I posted results with innodb_io_capacity=500 and 2000 for MySQL 5.5.4.

    So when the data fits into memory (buffer_pool=24GB), it seems innodb_io_capacity=2000 helps to avoid the periodic drops in MySQL 5.5.4.

    It does not help, though, when the buffer_pool is smaller than the dataset.

    May 9, 2010 at 6:05 pm
