I guess the first reaction to a new storage engine is: show me the benefits. So here is a benchmark I ran on one of our servers. It is a Dell 2950 with 8 CPU cores, RAID10 over 6 disks with a BBU, and 32GB of RAM on board, running CentOS 5.2. This is a fairly typical server we recommend for running MySQL. Importantly, I used the noop I/O scheduler instead of the default CFQ. Disclaimer: please note you may not see similar benefits on less powerful servers, as the most important fixes in XtraDB are related to utilization of multiple cores and disks. Results may also differ if the load is CPU bound.
I compared three MySQL 5.1.30 trees: MySQL 5.1.30 with standard InnoDB, MySQL 5.1.30 with InnoDB-plugin-1.0.2, and MySQL 5.1.30 with XtraDB (all plugins statically compiled into MySQL).
For the benchmark I used scripts that emulate a TPC-C load, with 40 warehouses (about 4GB of data) and 20 client connections. Please note I used innodb_buffer_pool_size = 2G and innodb_flush_method = O_DIRECT to emulate an I/O-bound load.
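For reference, a run like this is typically driven with the tpcc-mysql toolkit; a rough sketch of the invocation follows. The host, database name, credentials, and run times here are assumptions, and the exact flags differ between tpcc-mysql versions, so check the version you have.

```shell
# Load 40 warehouses into a pre-created tpcc database
# (host, db name, and credentials are placeholders)
./tpcc_load localhost tpcc root "" 40

# Run the benchmark: 40 warehouses, 20 connections,
# ramp-up and measurement durations in seconds are illustrative
./tpcc_start -h localhost -d tpcc -u root -p "" \
             -w 40 -c 20 -r 60 -l 3600
```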
InnoDB parameters:

    innodb_additional_mem_pool_size = 16M
    innodb_buffer_pool_size = 2G
    innodb_data_file_path = ibdata1:10M:autoextend
    innodb_file_io_threads = 4
    innodb_thread_concurrency = 16
    innodb_flush_log_at_trx_commit = 1
    innodb_log_buffer_size = 8M
    innodb_log_file_size = 256M
    innodb_log_files_in_group = 3
    innodb_max_dirty_pages_pct = 90
    innodb_flush_method = O_DIRECT
    innodb_file_per_table = 1
And for XtraDB I additionally used:

    innodb_io_capacity = 10000
    innodb_adaptive_checkpoint = 1
    innodb_write_io_threads = 16
    innodb_read_io_threads = 16
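A quick way to confirm these settings actually took effect after restart is to query the server; a sketch assuming a local mysql client with credentials already configured:

```shell
# Verify the XtraDB-specific I/O settings are active
mysql -e "SHOW VARIABLES LIKE 'innodb_io_capacity'"
mysql -e "SHOW VARIABLES LIKE 'innodb\_%\_io\_threads'"
```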
So here are the results:
Results are in NOTPM (New Order Transactions Per Minute); more is better. As you can see, XtraDB is roughly 1.5x better than standard InnoDB in 5.1.30, and the gap over InnoDB-plugin-1.0.2 is even bigger.
And here is CPU utilization for all tested engines:
As you can see, XtraDB also utilizes the CPUs better.
Finally, let me show you why I chose the noop I/O scheduler instead of CFQ; here are the results for XtraDB with both:
A 4x difference is just giant. And it is important to remember, as Linux kernels 2.6.18+ (which are used in CentOS / RedHat 5.2) ship with CFQ as the default scheduler.
So

    echo noop > /sys/block/sda/queue/scheduler

should be one of the first things you do on a new server (you will also need to change the kernel boot parameters to make the setting survive a reboot).
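A minimal sketch of checking the active scheduler and making the change permanent on CentOS 5; the device name sda and the exact kernel line are assumptions, so adjust them for your system:

```shell
# Show available schedulers; the active one appears in square brackets
cat /sys/block/sda/queue/scheduler

# Switch to noop at runtime (takes effect immediately, lost on reboot)
echo noop > /sys/block/sda/queue/scheduler

# To make it permanent, append elevator=noop to the kernel line
# in /boot/grub/grub.conf, e.g. (kernel version is illustrative):
#   kernel /vmlinuz-2.6.18-92.el5 ro root=/dev/VolGroup00/LogVol00 elevator=noop
```

Remember to apply the runtime change to every data disk in the array, not just sda, if your RAID exposes multiple block devices.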