
Looking at 5.4 – IO-bound benchmarks

 | April 30, 2009 |  Posted In: Benchmarks


With a lot of talk around MySQL 5.4, I decided to check how it performs in our benchmarks. For a first shot I took our tpcc-like IO-bound benchmark (100W, ~10GB of data, 3GB buffer_pool) and ran it on our Dell PowerEdge R900 box (16 cores, 32GB of RAM, RAID 10 on 8 SAS 2.5″ 15K RPM disks). For comparison I took XtraDB-release5 and the Percona 5.0.77-highperf release.
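For reference, the InnoDB side of a setup like this might look as follows (an illustrative my.cnf fragment – only the 3GB buffer pool is stated above; the other values are assumptions):

```ini
# Illustrative InnoDB settings for the IO-bound run described above.
innodb_buffer_pool_size = 3G          # stated in the post (~10GB of data, so IO-bound)
innodb_log_file_size    = 256M        # assumed; not given in the post
innodb_flush_log_at_trx_commit = 1    # assumed durable setting
```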

For raw results you can check my Google Spreadsheet (it is also being updated with my upcoming CPU benchmarks and benchmarks on SSD & FusionIO); the graph is posted there as well:

Results are in TPM (transactions per minute, more is better).

So I can confirm that the MySQL team did a great job with 5.4 – it shows the best results.
You can find more results on the blog of Dimitri, one of Sun's performance engineers.

For our part, we will look at the recent improvements and at the Google V3 patches and integrate them into the next release of XtraDB, so stay tuned 🙂

Vadim Tkachenko

Vadim Tkachenko co-founded Percona in 2006 and serves as its Chief Technology Officer. Vadim leads Percona Labs, which focuses on technology research and performance evaluations of Percona’s and third-party products. Percona Labs designs no-gimmick tests of hardware, filesystems, storage engines, and databases that surpass the standard performance and functionality scenario benchmarks. Vadim’s expertise in LAMP performance and multi-threaded programming help optimize MySQL and InnoDB internals to take full advantage of modern hardware. Oracle Corporation and its predecessors have incorporated Vadim’s source code patches into the mainstream MySQL and InnoDB products. He also co-authored the book High Performance MySQL: Optimization, Backups, and Replication 3rd Edition.


  • Vadim,

    How long was the run? I see the time starts at 0, but it does not have an end.
    It is also quite interesting that XtraDB shows some warmup while all the other runs seem to have none.

  • Hey Vadim,

    Do you know if the dip in the 5.4 throughput was from checkpointing (i.e., something that adaptive checkpointing could address as well)?



  • Mark,

    Yes, it was because of checkpointing; you can see more interesting graphs on the “CPUbound” sheet. I will also publish them today.

    As I understand it, the warmup was not plotted on this graph, right?

    One thing we want to see on the graph is result stability (due to the checkpointing issues) – it would be a good idea to ensure the runs we publish have enough run time to write through (cycle) the log files 3x or so – this would also show the checkpoint dips several times if they are present.

    One more thing – I’d be careful calling benchmarks CPU-bound – I assume you mean “in memory” here – because there are log writes and dirty page flushes from writes, there is no guarantee the disk will not still be the bottleneck.

    For read-only benchmarks, in-memory typically means CPU-bound, but that is not always the case for write-intensive ones.
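The log-cycling rule of thumb above can be sketched numerically (all inputs here are hypothetical; in practice the log write rate would be measured, e.g. by sampling MySQL's Innodb_os_log_written status counter twice and taking the delta):

```python
# Rough estimate of the run time needed to cycle through the redo logs 3x.
# The sizes and rate below are assumed values for illustration only.
log_file_size = 256 * 1024**2      # innodb_log_file_size (assumed 256MB)
log_files_in_group = 2             # innodb_log_files_in_group (assumed default)
log_write_rate = 5 * 1024**2       # bytes/sec, measured from the counter delta

total_log_bytes = log_file_size * log_files_in_group
seconds_per_cycle = total_log_bytes / log_write_rate
run_time = 3 * seconds_per_cycle   # cycle through the logs three times

print(round(run_time / 60, 1), "minutes")  # minimum benchmark run length
```

With these assumed numbers, a run would need to last at least about five minutes just to cycle the logs three times; a slower log write rate or larger log files would stretch that proportionally.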
