In my recent benchmarks, such as this one about the Virident tachIOn card, I used different values for innodb_buffer_pool_size (13GB, 52GB, and 144GB) while testing a 100GB tpcc-mysql database. This was needed in order to test different memory/dataset size ratios. But why is it important, and how does it affect how InnoDB works internally? Let me show some details.
Internally, InnoDB uses two lists for flushing (writing pages from the InnoDB buffer pool memory to disk): the LRU list and the flush list. You can see these lists tracked in the output of SHOW ENGINE INNODB STATUS:
Pending writes: LRU 0, flush list 129, single page 0
It is important to understand which list flushing comes from, because that determines which InnoDB tuning settings apply.
Why does this matter? If your workload is mostly bound by the LRU list, then innodb_adaptive_flushing, innodb_max_dirty_pages_pct, innodb_io_capacity, and innodb_log_file_size do not matter, and changing these settings won’t help.
Ironically, these settings are the most important ones for tuning MySQL 5.5 to get stable performance (as you may see from Dimitri’s post, http://dimitrik.free.fr/blog/archives/2010/12/mysql-performance-analyzing-perconas-tpcclike-workload-on-mysql-55.html). But in standard MySQL you can’t really detect which kind of flushing is being used. There is a variable in SHOW STATUS, innodb_buffer_pool_pages_flushed, but it shows the sum of flushing from both the LRU list and the flush list.
In Percona Server 5.5, I added the SHOW STATUS variable innodb_buffer_pool_pages_LRU_flushed, which shows only the pages flushed from the LRU list.
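Since the stock counter is the sum of both kinds of flushing, on Percona Server you can derive the flush-list-only rate by subtraction. Here is a minimal sketch of that arithmetic; the counter values are hypothetical samples, not real measurements:

```python
# Sketch: split total flushing into LRU and flush-list portions on
# Percona Server 5.5, where innodb_buffer_pool_pages_flushed is the
# sum of both lists and innodb_buffer_pool_pages_LRU_flushed counts
# only LRU flushes. All numbers below are made up for illustration.

def flush_breakdown(total_flushed, lru_flushed):
    """Return (lru, flush_list) page counts from the two cumulative
    counters over the same interval."""
    return lru_flushed, total_flushed - lru_flushed

# Two hypothetical samples of the cumulative counters, e.g. 10s apart:
t0 = {"flushed": 1_000_000, "lru_flushed": 800_000}
t1 = {"flushed": 1_050_000, "lru_flushed": 849_000}

lru, flist = flush_breakdown(t1["flushed"] - t0["flushed"],
                             t1["lru_flushed"] - t0["lru_flushed"])
print(lru, flist)  # → 49000 1000 : almost all flushing came from the LRU list
```

In this made-up interval, 49,000 of the 50,000 flushed pages came from the LRU list, which is the kind of profile the 13GB case below shows.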
As you can see, in the 13GB case (where the memory-to-data ratio is about 1/10), all flushing comes from the LRU list,
while with 52GB, the LRU list plays only a minor role.
(BTW, I am really impressed at what a stable level of reads and writes per second the Virident tachIOn card provides.)
And here is a graph with the percentage of dirty pages for each case:
As you can see, with 13GB we are under the limit of innodb_max_dirty_pages_pct=75, and
with 52GB we are reaching the 75% maximum for dirty pages.
How can you detect in regular MySQL that your flushing comes from the LRU list? You could periodically check SHOW ENGINE INNODB STATUS, and see something like
Pending writes: LRU N, flush list 0, single page 0
where N>0 and the flush list count is 0.
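A small sketch of how such periodic checking could be scripted, parsing the "Pending writes" line in the same format as the examples above (the sample line here is hypothetical):

```python
import re

# Sketch: extract the per-list pending write counts from one
# "Pending writes" line of SHOW ENGINE INNODB STATUS output.

def pending_writes(status_line):
    """Return (lru, flush_list) counts, or None if the line
    does not match the expected format."""
    m = re.search(r"Pending writes: LRU (\d+), flush list (\d+)",
                  status_line)
    if not m:
        return None
    return int(m.group(1)), int(m.group(2))

# Hypothetical status line captured while polling:
line = "Pending writes: LRU 34, flush list 0, single page 0"
lru, flist = pending_writes(line)
if lru > 0 and flist == 0:
    print("flushing is coming from the LRU list")
```

Run against a periodic capture of SHOW ENGINE INNODB STATUS, a consistently nonzero LRU count with a zero flush list count indicates LRU-list flushing.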
A second way to detect it is when the percentage of dirty pages is less than innodb_max_dirty_pages_pct and the checkpoint age is less than 3/4 of innodb_log_file_size * innodb_log_files_in_group. If both conditions hold and you observe intensive writes, InnoDB is using LRU-list flushing.
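This second heuristic can be written down as a simple predicate. A minimal sketch, with all input values hypothetical:

```python
# Sketch of the heuristic above: if the dirty-page percentage stays
# below innodb_max_dirty_pages_pct AND the checkpoint age stays below
# 3/4 of the total redo log capacity, yet writes are intensive, then
# the flushing must be coming from the LRU list.

def lru_flushing_likely(dirty_pct, max_dirty_pct,
                        checkpoint_age, log_file_size,
                        log_files_in_group, writes_intensive):
    log_capacity = log_file_size * log_files_in_group
    return (writes_intensive
            and dirty_pct < max_dirty_pct
            and checkpoint_age < 0.75 * log_capacity)

# Hypothetical example: 40% dirty pages against a 75% limit, a 1GB
# checkpoint age with two 2GB redo logs, and heavy writes observed:
GB = 1024 ** 3
print(lru_flushing_likely(40, 75, 1 * GB, 2 * GB, 2, True))  # → True
```

In the 13GB case above, both conditions hold (dirty pages well under 75%, checkpoint age far from the limit) while writes stay intensive, which is exactly the LRU-flushing signature.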
So why is it important to know what the mix of InnoDB flushing types is in your workload? There are a couple of reasons:
(Post edited by Fred Linhoss)