As you have likely seen, Sun has posted new SPECjAppServer results; more information from Tom Daly can be found here. These results are quite interesting to me, as I worked on some of the previous SPECjAppServer benchmarks several years ago while employed by MySQL.
These are great results, and they are relevant to many of us because commodity x86-based hardware was used for the test. So it is not just about Sun; it is about Open Source software on commodity hardware.
As usual with results from such benchmarks, there is no direct comparison available. The configurations Tom compares the results to are not Open Source and run on hardware of a different class, so it is really hard to see what caused the difference. It would be very interesting to see, for example, results on the same hardware just running PostgreSQL instead of MySQL, or with the Sun box replaced by a comparable Dell or HP box. Unfortunately such benchmarks are more about marketing than a fair technical comparison, and we, the technical people, can just assume the team tried to get the most out of the given configuration, look for tuning ideas, or try to read between the lines.
For example, we can see that an 8-core system was used for J2EE while 4 cores were enough for the database. Does this mean the J2EE system got saturated before the database? Or does it mean MySQL did not scale well to 8 cores on this benchmark?
Now let's look at the Java settings:
JDBC Pool (for EJBs): max-pool-size=100 steady-pool-size=50
cachePrepStmts=true prepStmtCacheSize=512 alwaysSendSetIsolation=false
useLocalSessionState=true useServerPreparedStmts=false useReadAheadInput=false
elideSetAutoCommit=true useUsageAdvisor=false
This gives you some hints about what you can try for your JDBC configuration. Some caching options are a must (otherwise JDBC will make a lot of extra calls checking various server settings which typically never change). We can also see server-side prepared statements were disabled for the run (same as we found a few years ago). Prepared statements generally should have helped this benchmark, because it runs a lot of identical queries and prepared statements can be more efficient on the server side, but I guess something is not as well optimized with them as it could be, which makes it better to disable them.
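As a minimal sketch, the properties above can simply be appended to the Connector/J URL; the host and database names below are placeholders, and the property spellings are taken from the disclosure as written (exact spellings can vary between Connector/J versions):

```java
// Sketch: building a Connector/J URL that carries the benchmark's
// connection properties. Host/database are made-up placeholders.
public class JdbcUrlExample {
    static String buildUrl(String host, String db) {
        return "jdbc:mysql://" + host + "/" + db
                + "?cachePrepStmts=true"            // cache client-side prepared statements
                + "&prepStmtCacheSize=512"          // up to 512 cached statements
                + "&alwaysSendSetIsolation=false"   // skip redundant SET ISOLATION round trips
                + "&useLocalSessionState=true"      // trust client-side session state
                + "&useServerPreparedStmts=false"   // server-side prepared statements off
                + "&useReadAheadInput=false"
                + "&elideSetAutoCommit=true"        // skip redundant autocommit toggles
                + "&useUsageAdvisor=false";
    }

    public static void main(String[] args) {
        System.out.println(buildUrl("dbhost", "specdb"));
    }
}
```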
The MySQL settings are probably the most interesting part:
MySQL 5.0 Tuning in /etc/my.cnf (included in FDA)
sql-mode = IGNORE_SPACE
transaction-isolation = READ-COMMITTED
max_allowed_packet = 1M
table_cache = 6000
read_rnd_buffer_size = 2M
sort_buffer_size = 32k
thread_cache = 16
query_cache_size = 0M
thread_concurrency = 8
log-output = FILE
long_query_time = 1
innodb_data_home_dir = /data/mysql/var
innodb_data_file_path = ibdata1:10000M:autoextend
innodb_log_group_home_dir = /log/mysql/var/
innodb_checksums = 0
innodb_doublewrite = 0
innodb_buffer_pool_size = 5000m
innodb_additional_mem_pool_size = 20M
innodb_log_file_size = 1600M
innodb_log_buffer_size = 16M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 300
innodb_thread_concurrency = 0
innodb_sync_spin_loops = 40
innodb_locks_unsafe_for_binlog = 1
innodb_flush_method = O_DIRECT
First, the options are somewhat benchmark-optimized – for example, doublewrite is disabled, checksums are disabled, and the binary log is disabled. This is probably not what you would run in production, but I guess the benchmark does not require these properties and so they are tuned away… though I bet the other vendors do the same thing, tuning database options for the benchmark rather than running production-reasonable options.
It is interesting, though, that the slow query log remained enabled – probably it caused so little overhead, because there were no slow queries, that it was just left intact.
Other important settings:
transaction-isolation = READ-COMMITTED – this is indeed a good setting for many workloads, so unless you have a repeatable-read requirement, consider using it.
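For reference, this can be set globally in my.cnf, as in the disclosure above, or per session in SQL:

```sql
-- per-session equivalent of the my.cnf line
-- transaction-isolation = READ-COMMITTED
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
```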
sort_buffer_size = 32k – it is interesting how much this was really investigated. This is much smaller than the default. If you have queries which only do small sorts, it is indeed better to have a smaller sort buffer, because allocating it will likely be faster.
query_cache_size = 0M – the query cache is disabled. Not a big surprise, though as this is the default value, I believe there was an attempt to enable it and it did not make things better.
max_heap_table_size=200M – I’m not sure why this is set. Are any explicit MEMORY tables used in the benchmark? Otherwise you also need to boost tmp_table_size to deal with implicit in-memory temporary tables.
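If implicit temporary tables are the target, the two limits need to be raised together, since the smaller of the two is what applies; a hypothetical fragment (the tmp_table_size value here is my assumption, not from the disclosure):

```
# the smaller of these two caps in-memory implicit temporary tables
max_heap_table_size = 200M
tmp_table_size      = 200M
```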
innodb_file_per_table – so I assume storing each table in its own file worked better for this benchmark. Depending on the workload it can be a bit slower or faster, but it is surely better for operations.
innodb_log_file_size = 1600M – another benchmark optimization. Such large logs will typically cause too long a recovery time in case of a crash, so people settle on smaller logs for a little less performance.
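As a rough illustration (the numbers below are illustrative, not from the disclosure): with the default two log files this gives 2 × 1600M = 3200M of redo space that crash recovery may have to work through, while a more production-conservative sketch might look like:

```
# smaller redo logs: a bit less peak throughput,
# considerably faster crash recovery
innodb_log_file_size      = 256M
innodb_log_files_in_group = 2     # the default
```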
innodb_thread_concurrency = 0 – Good! So InnoDB performed best on this workload without restricting the number of threads inside the InnoDB kernel. Again, this is rather workload specific.
innodb_sync_spin_loops = 40 – this tuning option is rarely used, so I’m curious how much speed benefit it really provided in this case.
innodb_max_dirty_pages_pct=15 – another interesting one. So it was better to restrict the amount of dirty pages InnoDB can have to get better performance. This was probably done to deal with the “dips” which can affect peak response times a lot. Fuzzy checkpointing could be done a lot better in InnoDB (and we have patches for this).
Operating System Notes from database server:
UFS Options for log and data
mysqld moved into the FX scheduling class via priocntl -s -c FX mysql-pid
“nologging” looks a bit scary here, but again, a potential fsck on power failure is probably not a problem under the benchmark specs. What really surprises me here is: why not ZFS? The ZFS-and-MySQL success story is all over Sun’s blogs, so why are benchmarks still run on UFS? It would be really cool to know how ZFS would score here. And it is really OK for it to be a little bit slower – it has many nice features which are worth a bit of performance overhead.
My best wishes to Tom Daly and his team in further benchmark result improvements. I guess we’ll see more of these to come.