Whenever I see benchmark results I try to understand whether it is a technical benchmark – made by people seeking the truth – or one made by a marketing department to wash your brains. Watch out: whenever you treat marketing benchmarks as technical ones, you make wrong decisions. Take a look at the MySQL 5.0 Benchmarks Whitepaper and guess which type this one is?
You can also compare it to my MySQL Performance 5.0 vs 4.1 presentation to have some fun.
What can we see? Out of all the MySQL 4.1 vs 5.0 benchmarks which were done, only the benchmarks showing MySQL 5.0 is faster were selected, and a bunch of other benchmarks showing 5.0 is actually slower than 4.1 were swept under the table.
In general, whenever I see benchmarks which show something as the absolute winner, I smell something fishy. In software development there is no free lunch, and your decision to speed something up will often mean something else becomes slower. There are of course exceptions when optimizing very bad code, but that barely applies to MySQL – Monty is not a bad developer at all 🙂
So in the case of MySQL 5.0, a lot of great features were implemented and new optimizations were added which made certain things much faster, sometimes hundreds of times faster. This however came at the cost of a larger and more complex code base, which slowed some things down by a few percent. This is normal, and most software developers would understand it. For non-developers reading this article for some reason – think about the following comparison: Windows 3.1 used to boot in 10 seconds on my old 486 desktop. Would you expect Windows XP to be able to do the same?
It was similar with MySQL 4.1 – a number of great performance fixes, but also a good number of cases where it became a few percent slower (especially due to handling multiple character sets).
Of course I would not say it is normal and nothing can be done in all cases. There are performance bugs; some of them, such as InnoDB scalability, were known long before MySQL 5.0 was stable but were not fixed, while others, such as the broken group commit, could have been handled better.
The same applies to comparing MySQL to other vendors. If you ask me whether MySQL is faster than Oracle or PostgreSQL, I would say yes in many cases, but not in all cases. There are cases when MySQL sucks and you had better do things differently or use another database. For example it happens with many types of subqueries, in cases when a hash or sort-merge join is needed, and with many other advanced optimizer features. In many cases I can help to work around MySQL limitations or design the application so it uses MySQL strengths, but that is another story.
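To give one concrete flavor of the subquery problem: the MySQL optimizer, up to and including 5.0, would often execute an `IN (SELECT ...)` subquery as a dependent subquery, re-running it for every outer row, while an equivalent manual rewrite to a JOIN could use the indexed nested-loop join MySQL is good at. The sketch below only demonstrates that the two query forms are equivalent; it uses SQLite via Python for a self-contained runnable example (the schema and names are made up for illustration, not taken from any real benchmark):

```python
import sqlite3

# Toy schema, purely illustrative (table and column names are assumptions)
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, country TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
INSERT INTO customers VALUES (1, 'US'), (2, 'DE'), (3, 'US');
INSERT INTO orders VALUES (10, 1), (11, 2), (12, 3), (13, 1);
""")

# Subquery form: on MySQL up to 5.0 this often ran as a dependent
# subquery, re-executed once per row of `orders`.
subquery = conn.execute(
    "SELECT COUNT(*) FROM orders "
    "WHERE customer_id IN (SELECT id FROM customers WHERE country = 'US')"
).fetchone()[0]

# Equivalent JOIN form: plays to MySQL's strength, the indexed
# nested-loop join, and was usually much faster on those versions.
join_form = conn.execute(
    "SELECT COUNT(*) FROM orders o "
    "JOIN customers c ON c.id = o.customer_id WHERE c.country = 'US'"
).fetchone()[0]

print(subquery, join_form)  # both count orders from US customers: 3 3
```

The rewrite is only safe when the join cannot duplicate outer rows (here `customers.id` is unique); that is exactly the kind of application-level workaround I mean.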
I am always surprised why marketing departments (not only MySQL Marketing, but any marketing) try to show only one side of the coin. Do people really believe it, or is it just that “everyone does it”, and so showing weaknesses together with strengths would be taken as a sign of a poor product?
In my opinion honesty and openness are the best practice. During my work for MySQL I recommended not to use MySQL for certain applications (i.e. when queries and schema could not be changed while migrating from Oracle) more than once or twice, and interestingly enough many of these customers stayed with MySQL for other applications or checked out MySQL again when a new version was released.
In general I also have to admit it is hard to find a good technical benchmark these days; competitive benchmarks which are done inside companies are rarely published. Benchmarks run by individuals are very often of mediocre quality, performed without proper tuning or with other mistakes. One reason for that is that it is hard to be an expert in everything. For example, if I were to benchmark MySQL vs PostgreSQL and try my best to get the best performance out of both of them, I still know MySQL much better, so it would not be 100% fair.
To get the best results you need to make vendors compete in the same benchmark. It can either be done by some industry body – such as TPC or SPEC – or by mass media – the eWeek benchmark or the recent c't benchmark are of this type. Interestingly enough, even in TPC or SPEC benchmarks vendors try to avoid directly comparable results – for example, Oracle and Microsoft would run TPC-C on slightly different configurations, or Sun would not publish SPECjAppServer benchmarks on exactly the same configuration.
When looking at any benchmark, also make sure to check how well it applies to your application. The other big secret about benchmarks is that their results all too often have nothing to do with performance in your real environment.