MySQL 5.0, 5.1 and InnoDB Plugin CPU Efficiency

We’ve recently done benchmarks comparing different MySQL versions in terms of their CPU efficiency in a TPC-C-like workload. We ran them a couple of weeks ago, so MySQL 5.0.67, MySQL 5.1.29 and InnoDB Plugin 1.0.1 were used; these are not the very latest releases, though we do not think the results would differ much with today’s versions.

Results are as follows:

MySQL 5.0, 5.1, InnoDB Plugin, TPC-C

The system was a 2× quad-core Xeon E5310 running CentOS 5, with data stored on ramfs. We controlled the number of cores used via /sys/devices/system/cpu/cpuX/online. The maximum performance for each core count was taken, and it was reached when the number of sessions matched the number of cores. Just one “data warehouse” was used to keep the data set small.
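For reference, the core-count control can be scripted against sysfs. The sketch below only prints the commands rather than running them (applying them requires root, and cpu0 typically cannot be taken offline); the core numbers are illustrative:

```shell
# Print the sysfs commands that would take cores 4-7 offline, leaving a
# 4-core configuration; pipe the output through "sudo sh" to apply it.
# Note: cpu0 usually cannot be taken offline on Linux.
offline_cores() {
  for cpu in "$@"; do
    printf 'echo 0 > /sys/devices/system/cpu/cpu%s/online\n' "$cpu"
  done
}

offline_cores 4 5 6 7
```

Bringing the cores back is the same pattern with `echo 1` instead of `echo 0`.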

As you can see, there is some gain for MySQL from the read-write lock split patch (found in Percona builds), though it is not very significant for this workload. To isolate the effect of this patch, we applied only it rather than the full patch set.

MySQL 5.1 is 4% slower than MySQL 5.0 with two cores and just 2% slower with 8 cores, thus showing slightly better scalability.

MySQL 5.1 with the Plugin (compiled in) is a further 3% slower than stock MySQL 5.1 with 2 cores and about 6% slower with 8 cores, meaning the regression from the Plugin increases with the number of cores.

If you not only run the InnoDB Plugin but also use the new “Barracuda” file format, you see just a 1% slowdown with 2 cores and about half a percent with 8 cores, which can be attributed to measurement error.

This tells us there are workloads where MySQL 5.1 is slower than 5.0, and the same applies to the new InnoDB code. Newer does not mean more efficient; on the contrary, new features often come with larger code and longer execution paths.

Another thing to note: if you’re using the InnoDB Plugin, consider the new Barracuda format, but do so only after careful testing, as this format will not be recognized by older InnoDB versions.
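For illustration, switching to Barracuda on the InnoDB Plugin 1.0.x looks roughly like this (a sketch, not from our benchmark scripts; Barracuda also requires innodb_file_per_table, and the table definition here is made up):

```shell
# Sketch: enable the Barracuda file format (InnoDB Plugin 1.0.x) and
# create a table using the new DYNAMIC row format, which stores long
# columns fully off-page. Tablespaces created this way are not readable
# by the older built-in InnoDB, so test on a copy first.
mysql -e "SET GLOBAL innodb_file_format = Barracuda;"
mysql -e "SET GLOBAL innodb_file_per_table = 1;"
mysql test -e "CREATE TABLE t1 (id INT PRIMARY KEY, doc TEXT)
               ENGINE=InnoDB ROW_FORMAT=DYNAMIC;"
```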

Note: these are completely CPU-bound test conditions; the data fits in the buffer pool, and furthermore data and logs are on ramfs, so no I/O is ever needed.

UPDATE: Some people asked about CPU usage under these conditions. Here is the graph:

MySQL 5.0, 5.1, InnoDB Plugin, CPU Usage

The CPU usage is normalized to the number of CPU cores used, so 100% in the 2-core case means 100% of two cores, in the 8-core case 100% of eight cores, and so on.
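As a made-up numeric example of this normalization: tools like top report busy time summed over cores, so 640% busy on the 8-core box normalizes to 80%:

```shell
# Normalize raw CPU busy time (summed over all cores, as top reports
# it) to a 0-100% scale for a given core count. Input values here are
# illustrative, not measurements from the benchmark.
normalize() {
  awk -v busy="$1" -v cores="$2" 'BEGIN { printf "%.0f\n", busy / cores }'
}

normalize 640 8   # 8 cores, 640% total busy -> 80
normalize 180 2   # 2 cores, 180% total busy -> 90
```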

As you can see, in general the more cores we use, the more idle CPU we see.

It is also very interesting to see the correlation between CPU usage and performance. The Plugin uses less CPU with 8 cores and delivers less performance; this usually indicates synchronization is the issue. The Barracuda format uses more CPU while delivering better performance, so it is probably better with latching too, though it is hard to say anything about its CPU efficiency.

The RW-lock patch does best in this case: it increases performance while decreasing CPU usage.

Note: when I say a drop in CPU usage indicates concurrency issues, that does not mean an increase in CPU usage rules them out. There are two ways to wait on mutexes and other synchronization objects: either everything blocks, leaving too few runnable threads to keep the CPUs busy, or threads spin on a spinlock, wasting CPU cycles. Detailed profiling tells you which one is happening.
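A quick first look, short of full profiling, is the SEMAPHORES section of the InnoDB status output, which reports OS waits versus spin rounds (a sketch, assuming the mysql client is on the path and can reach the server):

```shell
# Print just the SEMAPHORES section of SHOW ENGINE INNODB STATUS.
# High "OS waits" suggests threads blocking; a high "spin rounds per
# wait" figure suggests CPU cycles burned spinning. Section boundaries
# may vary slightly between InnoDB versions.
mysql -e "SHOW ENGINE INNODB STATUS\G" |
  sed -n '/^SEMAPHORES$/,/^TRANSACTIONS$/p'
```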


Comments (15)

  • Daniel Lyons

    What are the units of the y-axis of your chart?

    December 4, 2008 at 12:21 am
  • René Leonhardt

    Could you repeat the test with MySQL 5.1.30 and Innodb Plugin 1.0.2?

    December 4, 2008 at 1:43 am
  • Antxon

    Data stored on ramfs? How do you achieve that? Are all the changes you made to the DB lost after rebooting the system?

    December 4, 2008 at 2:44 am
  • Tim Soderstrom

    +1 on re-running the tests with the latest MySQL and InnoDB plugin. Even though the results may not change much, it still doesn’t hurt to laugh in the face of assumption 🙂

    December 4, 2008 at 7:04 am
  • peter


    The X axis is number of CPU cores enabled and Y is transactions per minute.

    December 4, 2008 at 9:43 am
  • peter


    Sure, you normally would not run like this in production. This is just a benchmark focusing on CPU usage only, so we avoid I/O by storing data in memory.

    December 4, 2008 at 9:44 am
  • peter

    Rene, Tim

    We run benchmarks at certain times using releases available at that point in time. There are tons of benchmarks which can be run and which are in the pipeline while there is only so much time we have to do them. I’m not sure we will get to repeating this benchmark for minor updates though we may do it again in a few months to see how things change.

    December 4, 2008 at 9:52 am
  • Ivan

    I would like to know what the CPU usage (in terms of top or sar data) shows.

    As far as I understood, mysql was not able to get near 100% cpu usage during peak loads on 8 core boxes.

    Did your tests confirm that?


    December 4, 2008 at 10:55 am
  • Martin Holzhauer

    I think ramfs is used to get better benchmark results, because with ramfs there are lower write latencies, so the results are more realistic for a plain CPU core benchmark.

    December 5, 2008 at 2:35 am
  • Ken Jacobs

    Peter, these are interesting results. It is important to clarify that the Barracuda file format only ENABLES new features that require an on-disk format change. Unless you create tables that use compression, or unless your table takes advantage of the new DYNAMIC row format (where long columns are stored off-page), there should be no changes on disk or in internal processing. We would not expect a significant difference in performance between enabling the default Antelope format and the new Barracuda format.

    I know you’re not using compression for this test. (And it wouldn’t make sense anyway with an in-memory database, since compression is a way to trade off using a little more CPU (and memory possibly) to reduce disk size and most importantly i/o.) You could verify that no compression operations are being done by looking at the new Information Schema table INNODB_CMP.

    So what could explain this difference? Perhaps you have some very long rows, where columns are stored completely off-page? Can you please post the CREATE TABLE commands (or tell us whether you believe this to be occurring)? Assuming both tests used tables created and indexed using the InnoDB Plugin with its new Fast Index Creation capability (which does not require Barracuda format), there should be no difference in the density of the index vs. the traditional way indexes are created.

    Do you have any other ideas as to what could explain these differences between Antelope and Barracuda formats? Or between the Plugin and the built-in InnoDB? If you have the ability to do some instruction tracing/monitoring, so we could see in which routines the code is spending its time, that would be most helpful.

    Last comment: while these results are interesting, without a little more data and some further analysis, it is difficult to draw any reliable conclusions.

    We appreciate your interest in InnoDB and help!

    December 5, 2008 at 12:55 pm
  • Ken Jacobs

    And, by the way, we look forward to seeing any testing or experiences or comments you might have on the newly released InnoDB Plugin 1.0.2 for MySQL 5.1.30! 😉

    December 5, 2008 at 12:56 pm
  • peter

    Thanks Ken,

    The compression was not used, I’m quite sure, though I do not have the tables handy to check right now. We have not done profiling to see where exactly the difference comes from, but the code is slightly different, which is a good explanation for a slight performance difference.

    I simply wanted to share the results we got from a simple investigation focused on understanding what we should use in 5.1-percona builds, as well as understanding how MySQL 5.1 will impact our customers with simple workloads (not using 5.1 features).

    December 5, 2008 at 1:31 pm
  • Kevin Burton

    You didn’t test any Percona builds?

    December 8, 2008 at 9:16 am
  • peter


    I will post that separately. In this case we just looked at Plugin vs normal release 🙂

    December 8, 2008 at 9:18 am
  • Ivan Korsun


    Can anybody help me with my question –,298545,298545#msg-298545
    I think it’s because of 8 cores on my Dell PowerEdge 6850.

    Best regards.

    January 8, 2010 at 4:25 pm
