eWeek tests open source stacks and .NET
Peter Zaitsev
The results make me ask a number of questions – why would WAMP perform 6 times better than LAMP? Why would Python be faster than PHP on Windows but slower on Linux? Without having such basic questions answered, I can't accept the results as valid. If you run a benchmark and something unexpected pops up, drill into it – quite frequently it turns out to be a configuration problem, a bug, or something similar.
That, however, is not my main concern with this benchmark. Here is where to start:
No specifications – A good benchmark starts with specifications. What exactly does the benchmarked system have to do? Normally you would optimize the system to do only what needs to be done, strip everything else, and configure it appropriately. For example, different CMSes do different things internally – some may do live full-text search indexing on update, store page access statistics, etc.
No configuration published – At least I could not find it: how exactly each of the stacks was configured, what hardware was used, etc. A full disclosure report is a very important component of any serious benchmark. It should be detailed enough for a third party to be able to reproduce the same results.
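To illustrate what I mean, a minimal disclosure section could look like the sketch below – every line here is a placeholder for what eWeek would need to publish, not their actual setup:

```
Hardware:   exact CPU model and count, RAM, disk subsystem, network
OS:         distribution/version, kernel or service pack level
Stack:      Apache/IIS version, PHP/Python/.NET runtime version, MySQL version
Config:     httpd.conf, php.ini, my.cnf etc. attached verbatim
Workload:   load-generation tool and parameters, concurrency, run length, warm-up
```

With this much published, a third party can rebuild the setup and check whether the 6x gap reproduces.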
Misleading benchmark title – The results are called a "stack comparison", while I would rather call it a benchmark of different content management systems. Why was XOOPS chosen for PHP, or Plone for Python? Different projects have different quality and different performance goals.
Getting back to the results – I'm most surprised by the WAMP vs LAMP numbers (and the similar results for Python). Typically a CMS is CPU bound, so unless there are configuration differences – such as an accelerator (eAccelerator, APC) installed on one system but not the other – the results should be close. There could be a 20% difference because of different compilers and OS efficiency, but it should not be that high.
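The kind of drill-down I mean can be as simple as diffing the PHP configuration from both machines before trusting the numbers. A minimal sketch – the ini contents below are made-up placeholders standing in for the real files from each box, not eWeek's configs:

```shell
# Stand-ins for php.ini pulled from the Linux and Windows machines.
cat > php-linux.ini <<'EOF'
memory_limit = 8M
extension = eaccelerator.so
EOF
cat > php-windows.ini <<'EOF'
memory_limit = 8M
EOF

# Any +/- line in the diff is a candidate explanation for a large gap,
# e.g. an opcode cache (eAccelerator, APC) enabled on one side only.
diff -u php-linux.ini php-windows.ini | grep '^[+-][a-z]' || true
```

Here the diff immediately surfaces that one side loads eAccelerator and the other does not – exactly the sort of difference that can produce a severalfold gap in a CPU-bound PHP workload.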
If someone knows what was really done, let me know. I would be quite curious to take a look, at least at the WAMP vs LAMP results.