
MySQL wins C’T Database Contest

 | August 29, 2006 |  Posted In: Benchmarks, Events and Announcements


Today MySQL published a press release with the results of the C’T Database Contest (results in German are available at http://www.mysql.de/ct-dbcontest).

Peter and I spent quite some time working on this project while employed by MySQL, and it is great to see the results finally made public.

The story began about a year ago, when C’T magazine called for a database competition based on the Dell DVD Store benchmark (details available here: http://firebird.sourceforge.net/connect/ct-dbContest.html).

The most interesting results (more orders per minute is better 🙂 ):
MySQL5/PHP (our solution) : 3664 orders per minute
DB2/Java : 1537 opm
Oracle / Java: 1412 opm
PostgreSQL / PHP: 120 opm

MySQL5/PHP on two boxes: 6000 opm.
I wonder how C’T got such low results, because on the same hardware I got ~7000 opm on one box and
~12000 opm on two boxes, but that does not matter.

Vadim Tkachenko

Vadim Tkachenko co-founded Percona in 2006 and serves as its Chief Technology Officer. Vadim leads Percona Labs, which focuses on technology research and performance evaluations of Percona’s and third-party products. Percona Labs designs no-gimmick tests of hardware, filesystems, storage engines, and databases that surpass the standard performance and functionality scenario benchmarks. Vadim’s expertise in LAMP performance and multi-threaded programming help optimize MySQL and InnoDB internals to take full advantage of modern hardware. Oracle Corporation and its predecessors have incorporated Vadim’s source code patches into the mainstream MySQL and InnoDB products. He also co-authored the book High Performance MySQL: Optimization, Backups, and Replication 3rd Edition.


  • Such an ironic post considering this quote from the previous post on this blog:

    “In general any benchmarks I see which show something being absolute winner I smell something fishy.”

  • Mike,

    Yeah, but this is a different story.
    That was a competition between independent teams, so I think it is fair enough.
    Furthermore, I don’t pretend MySQL is the fastest – just that our solution is the fastest.

  • Mike,

    There is no contradiction here. This is _one benchmark_; it is not about winning across the board in all kinds of benchmarks.

    One thing I do like about the DVD Store benchmark is that it was defined at the web response level, and people were free to use whatever they liked at the lower levels – add caching as they like, etc.

    In many cases benchmarks are defined at the database level and you need to show performance for particular queries. This can be misleading, as you simply might not design your database that way or write your queries that way. You can still solve the task, but do it differently.

  • Just one remark regarding the “on two boxes” claim: according to the contest rules, the database and app server were running on the same box. Another box acted as the client. When testing the MySQL solution, the client box was not able to saturate the server box, so c’t added another *client* box to the setup. Then the *single* server box delivered ~6000 opm.

  • Congratulations on the win – yeah, it sounds like spam 🙂
    I wonder what the Firebird results were in that contest; from the paper I see only the MySQL/Oracle ones.

  • Mariuz,

    Thank you!

    I don’t know why there are no Firebird results. Perhaps the Firebird guys did not send a solution.

  • Well, I think the scale of the victory makes it very questionable. I think this contest illustrated which team had more time to spend on optimizing. I also wonder how many real-world required features were dropped (foreign keys, etc.) in order for MySQL to be this fast. Anyway, the results illustrate that most submissions were made for MySQL. That’s it. I don’t think it really says much about the performance of the entries.

  • Lucas,

    There are always ways to criticize benchmark results. If queries and schema are fixed (so you do not have much room to tune unless you change the server), the benchmark can easily be accused of being biased towards a particular vendor, storage model, or isolation concept. If you allow flexibility, it can be blamed on which team had the most time and money to spend.

    Speaking of real-world required features like foreign keys – you would be surprised how many MySQL applications actually do not have foreign keys defined and still do what they need to do.

    Regarding the impact of this on the benchmark – every competition benchmark is a game. There are rules, and you simply play by those rules to get the best results. Check the TPC-C results database, for example, to have some fun. You can think of it like Formula 1 racing – you would not drive the same car on normal roads 🙂

    The fact that most results were submitted with MySQL simply illustrates MySQL’s market position, at least in this market of people who read this magazine and have time to spend creating one implementation or another.

  • Yes, but without foreign keys it’s a ridiculous comparison. The point is: any benchmark for a shop that ends up with MySQL at 3600 and PostgreSQL at 120 tells me zip, nada, nothing. And putting out a press release on this is pathetic, IMHO. It also does not really compare how quickly you can develop for a given RDBMS, as the time allocated was not constant per submission, and AFAIK MySQL already had an implementation to work off of. The only thing worth noting in capital letters from this contest is that MySQL saw the most submissions.

  • 2. Vadim, of course this is different – in this benchmark MySQL was the clear winner leaving the competitors far behind =)

  • Hmm, after reading through the PDF (even though I’m not that good at German – there is an interesting table of results at the lower left of the last page), we can clearly see that it was probably a mistake for MySQL AB to use this article in a press release. These are exactly the things Peter/Vadim were talking about regarding benchmark results in a previous blog entry on this site.

    We can see that the different teams most likely put different amounts of time into this, as well as having different solutions regarding caching, etc. It would be more interesting if the code/solution from each team could be cross-tested with each database/API (for example, if the 3664 opm MySQL result was using memcached, how would the 120 opm PostgreSQL result be affected by such caching?) – THEN we would better know which DB is good or bad, and how much time you need to put into a solution before it gets decent performance.

    Those of us who have worked with a few databases can clearly see that there is something fishy about these results when the PostgreSQL solution gets only 120 opm while MySQL gets 3664 (even though there are MySQL results as low as 137 opm in this benchmark).

    I can cite a recent personal experience with the fulltext engine I am building (in Perl, with MySQL as the backend): in the beginning (2–3 months ago) it had an indexing speed below 50 posts/sec, while it now reaches 300–350 posts/sec (or more) and basically does the same thing. What changed were small API tricks along with a different algorithm.

    An example: in the Perl API for communicating with MySQL (DBI/DBD::mysql) there is not just one way to fire a query but a range of ways, depending on your needs. A small benchmark presented at http://search.cpan.org/src/TIMB/DBI_AdvancedTalk_2004/sld017.htm shows that to retrieve just one column from MySQL over DBI/DBD, performance ranges from 51,155 fetches per second for the slowest of the three methods tested to 348,140 for the fastest – a boost of 680% just for fetching a single column from a MySQL table through Perl.

    On the other hand it’s fun to see MySQL at the top, but in this case I can’t say it was a fair enough test to get that excited about MySQL being the winner…
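    The same effect – the client API call pattern mattering as much as the database – is easy to reproduce in any language. A small Python sketch, purely illustrative and unrelated to the Perl DBI numbers above (it uses sqlite3 only so the example is self-contained):

```python
import sqlite3

# Illustrative only: shows that both fetch patterns return identical data,
# while differing in the number of per-row API round trips – the kind of
# client-side detail that can dominate a driver micro-benchmark.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10000)])

def fetch_one_by_one():
    """One Python-level fetchone() call per row."""
    cur = conn.execute("SELECT id FROM t")
    rows = []
    while True:
        row = cur.fetchone()
        if row is None:
            break
        rows.append(row[0])
    return rows

def fetch_all_at_once():
    """A single bulk fetchall() call."""
    cur = conn.execute("SELECT id FROM t")
    return [r[0] for r in cur.fetchall()]

# Same result either way; only the call overhead differs.
assert fetch_one_by_one() == fetch_all_at_once()
```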

  • Lukas,

    If you see 3600 for MySQL and 120 for PostgreSQL, it does indeed look strange, especially with Oracle scoring 1400.
    But that is a reason to blame the given implementation, not the benchmark.

    I’m quite sure a large number of the changes we made – schema changes, query changes, caching – would work on PostgreSQL as well.

    In fact, as far as I remember, the PostgreSQL team analyzed their results and could have gotten much better numbers after certain changes.

    Regarding the time allocated – that was not part of the game rules in this case, and it rarely is. Look at the TPC-C or SPECjAppServer benchmarks, for example – vendors spend millions of dollars optimizing products just for these benchmarks, and that is not taken into account.

  • Apachez,

    Right. It is more about the implementation than about MySQL, as I wrote in the previous comment.

    I do not think this benchmark is quite like the one I wrote about in the previous post – both good and bad MySQL results were published. The press release itself, of course, only mentions the best results, but at least it links to the full article.

    Regarding implementation – it is always more a human question than a database question. You can build great-performing applications on Oracle, MySQL, or PostgreSQL, and you can build poorly performing ones. In this case it shows that MySQL can deliver great results, and that there are people on the MySQL team who know how to optimize applications.

  • Guys,
    My opinion is that this is not a competition between databases, but between DB + app server stacks. So there is nothing to say about the databases alone 🙂

  • Gentlemen,

    I respect the MySQL team and their product. I had some pretty interesting conversations with them at LinuxWorld around 3 years ago. At that point, they showed a very nice chart clearly indicating that Oracle, DB2, and MS SQL were far behind, and MySQL was the fastest server on the planet.

    Unfortunately, those tests were made for MySQL without transactions, but for the other servers with transactions. I have around 10 years of Informix and MS SQL (yes, it’s a shame, I know) administration. And as far as I know, transactions normally slow down DML operations by around 2–2.5 times. So I smiled and walked away.

    What is demonstrated now shows complete disrespect for testing methodology. I’d support the observation that 120 orders per minute for PostgreSQL doesn’t make any sense. I conducted some tests several months ago, and PostgreSQL _with transactions_ was about 15–25% slower than MySQL without transactions. Informix with transactions (buffered mode) came head-to-head. In reality that means a very simple thing: MySQL is actually a pretty slow server – unless, of course, you don’t care about your data integrity and therefore don’t use transactions.

    So, with all due respect for the effort, I suggest the MySQL team shift their understanding of their server’s performance closer to reality. This is said not to annoy anyone, but to make MySQL better, eventually.

  • HappySquirrel,

    MySQL’s solution does use transactions, especially for payment processing.
    MySQL’s solution also uses caching techniques, which is why I think the difference is so big.

  • This is one of the most annoying threads I have stumbled upon recently, so I’ll give my contribution: it only shows that MySQL AB is as fully into the FUD thing as the other database vendors are.
    This might mean nothing to any seasoned database expert (as shown in the posts above), but it will hopefully do for MySQL AB sales, and of course it shows that the guys who optimized MySQL and got the prize are great at their work (at least compared to the other entries).

  • In any case, something with these tests is fundamentally wrong. Cache or no cache – there is something in these tests that is MySQL-specific. For instance, 3664 orders/min is around 60 orders/sec, or around 16 ms per order. On my Informix server, an insert into a table with ~10 fields and a primary key takes between 0.4 and 0.5 ms, and PostgreSQL takes 0.6–0.8 ms for the same operation. So we are talking about 30 operations per transaction. Now, the time consumed depends heavily on the execution method. At the very least it should be a stored procedure; otherwise we are just comparing drivers and the essentials of Java vs. PHP. If someone can show the number of inserts/updates/selects per order, we can discuss it a little better.
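    The arithmetic above, spelled out as a quick sanity check:

```python
# 3664 orders per minute, as reported for the MySQL5/PHP contest entry.
opm = 3664
orders_per_sec = opm / 60.0              # ~61 orders per second
ms_per_order = 1000.0 / orders_per_sec   # ~16.4 ms per order
print(orders_per_sec, ms_per_order)
```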

  • HappySquirrel, how does PostgreSQL get an fsync completed in 0.8 ms? The MySQL number of 16 ms/transaction is about right for a single-drive situation. Group commit can achieve numbers like that on average, of course, but I’m assuming you were running the same test on both.

    Your comment about transactions being slower doesn’t necessarily apply to MySQL. For example, LiveJournal switched from MyISAM without transactions to InnoDB with transactions and saw a big increase in the amount of work their (write-limited) servers could do. Which way is faster depends on the situation. Sometimes it is MyISAM for the non-transactional work (say, a catalogue updated daily) and InnoDB for the transactional part; sometimes pure InnoDB works best.

    You’re not really likely to persuade me that MySQL is slow, since I am somewhat involved with a place doing a couple of billion queries per day on it (across multiple servers). We wouldn’t still be using it if it were actually slow rather than fast. The same applies to the many other really busy and big places that use it.

  • By the way, didn’t “opm” mean orders per minute?

    I would guess that more than one SQL access is involved (cached or not) for each order completed through this virtual DVD store.

  • Vadim,

    I noticed that the MySQL result in the PR, 3,663 opm, differs from the result (1,967 opm) in Dell’s publication for the DVD online store test on the Dell PowerEdge 2800 at http://www.dell.com/downloads/global/solutions/mysql_network_2800.pdf
    Are there hardware configuration, MySQL/PHP/driver source (different versions of the src from linux.dell.com/dvdstore in the two publications?), or tuning changes causing the different results? It would also be great if there were an English version of the report for the PR result, including the above information.


  • Jenny,

    The difference is that Dell used the original Dell DVD Store MySQL/PHP sources, while the C’T competition used the version optimized
    by the MySQL Benchmark Team. There are also different hardware configurations.
    Sorry, there is no English version, and I don’t think there will be one.
    If you are looking for the optimized version, you can download it here: http://www.heise.de/ct/dbcontest/teilnehmer.shtml
    (again in German, but Google Translator should help).


  • LS,
    I just stumbled upon this thread, which is interesting because it relates to our activities in the C’t benchmark.
    Claiming victory is good for marketing, but bad for science/engineering – especially if selective picking is applied as well. You don’t need to understand German to see that the ‘newcomer on the block’ MonetDB/SQL beat MySQL in the Java strand.

    Moreover, we have run many examples where out-of-the-box processing in MySQL has questionable performance. An educated DBA is necessary to improve it. A simple run of TPC-H at a scale factor that just pushes you out of memory is sufficient to experience a dreadful system. Clearly a market niche to generate revenue.

    From a technology point of view, it would be nice if the MySQL guys again installed and published sql-bench results on multiple platforms and solicited a continuous comparison – not to mention the multi-user version promised long ago.

    regards, Martin

  • Martin,

    The benchmark was an application benchmark rather than a pure database benchmark. We chose to go with PHP, not Java, and got the best results that way. There is actually more than one MySQL implementation, and the fact that someone did a poor Java implementation that works better with MonetDB than with MySQL does not prove much. I mean, if we had provided an optimized Java solution, it might have been faster with MySQL, or it might not – we do not really know.

    Speaking of out-of-the-box performance of MySQL – simply forget about it. Out of the box, MySQL is tuned to consume 16–32 MB of RAM, so you can install it and have it running on your laptop without affecting other things. For real workloads you need to tune it, e.g. by using one of the sample configurations. This may be good or bad, but it is not relevant to benchmark results 🙂

    Regarding MySQL performance – sure, there are cases where performance is poor; TPC-H is an especially bad one. It is also true that you can run out of memory on a misconfigured MySQL.

    All benchmarks are different, and the results of one often have nothing to do with the results of another, even if the benchmarks are similar. I never said MySQL has the best performance for every workload – for this particular one, however, it has pretty good results.
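    For context, "using one of the sample configurations" in the MySQL 5.0 era mostly meant raising a handful of buffer sizes from their tiny defaults. The values below are illustrative examples only, not the contest configuration:

```ini
# Illustrative my.cnf fragment – example values, not the contest settings
[mysqld]
key_buffer_size         = 256M   # MyISAM index cache (default was a few MB)
innodb_buffer_pool_size = 1G     # InnoDB data/index cache
innodb_log_file_size    = 256M
query_cache_size        = 32M
```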
