In my recent benchmarks, such as this one about the Virident tachIOn card, I used different values for innodb_buffer_pool_size (13GB, 52GB, and 144GB) to test a 100GB tpcc-mysql database. This was needed to test different memory-to-dataset-size ratios. But why is this ratio important, and how does it affect how InnoDB works internally? Let me show some details.
Internally, InnoDB uses two lists for flushing (writing pages from the InnoDB buffer pool to disk): the LRU list and the flush list. You can see these lists tracked in SHOW ENGINE INNODB STATUS:
...
Pending writes: LRU 0, flush list 129, single page 0
...
It is important to understand which list the flushing is coming from, because that determines which InnoDB tuning parameters actually apply.
Why? If your workload is mostly bound by LRU list flushing, then innodb_adaptive_flushing, innodb_max_dirty_pages_pct, innodb_io_capacity, and innodb_log_file_size do not matter, and changing them won't help (a quick way to review their current values is shown below).
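As a quick, read-only sanity check, here is one way to review the current values of those settings (this is standard SHOW VARIABLES syntax, nothing specific to this benchmark setup):

-- Flush-list-related settings; if your flushing is LRU-bound, tuning these won't help.
SHOW GLOBAL VARIABLES
WHERE Variable_name IN ('innodb_adaptive_flushing',
                        'innodb_max_dirty_pages_pct',
                        'innodb_io_capacity',
                        'innodb_log_file_size');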
Ironically, these are the settings that matter most for tuning MySQL 5.5 to get stable performance (as you may see from Dimitri's post, http://dimitrik.free.fr/blog/archives/2010/12/mysql-performance-analyzing-perconas-tpcclike-workload-on-mysql-55.html). But in stock MySQL you can't really detect which kind of flushing is being used. There is a SHOW STATUS variable, innodb_buffer_pool_pages_flushed, but it shows the sum of flushing from both the LRU list and the flush list.
In Percona Server 5.5, I added the SHOW STATUS variable innodb_buffer_pool_pages_LRU_flushed, which shows only pages flushed by the LRU list.
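To illustrate, here is a minimal way to compare the two counters (the second one is the Percona Server 5.5 addition and does not exist in stock MySQL); the flush-list portion is simply the difference between them:

-- Total pages flushed, from both the LRU list and the flush list:
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_flushed';
-- Pages flushed from the LRU list only (Percona Server 5.5):
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_LRU_flushed';
-- flush list flushes = pages_flushed - pages_LRU_flushed;
-- sample both counters twice and take the deltas to get per-interval rates.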
So, let’s see what flushing we get with innodb_buffer_pool_size set to 13GB and to 52GB in Percona Server 5.5, on the Cisco UCS C250 server with the Virident tachIOn PCI-E Flash card.

As you can see, in the 13GB case (where the memory-to-data ratio is about 1:10), all flushing comes from the LRU list, while with 52GB the LRU list hardly comes into play.
Here are also graphs of reads and writes per second for both cases:

(BTW, I am really impressed at what a stable level of reads and writes per second the Virident tachIOn card provides.)
And here is a graph with the percentage of dirty pages for each case:
As you can see, with 13GB we stay under the innodb_max_dirty_pages_pct=75 limit, while with 52GB we reach the 75% maximum for dirty pages.
How can you detect in regular MySQL that your flushing comes from the LRU list? You can periodically check SHOW ENGINE INNODB STATUS and look for something like:
Pending writes: LRU N, flush list 0, single page 0
where N>0, and flush list is 0.
A second way to detect it: the percentage of dirty pages stays below innodb_max_dirty_pages_pct, and the checkpoint age stays below 3/4 of innodb_log_file_size * innodb_log_files_in_group. If both conditions hold and you still observe intensive writes, InnoDB is flushing from the LRU list; a minimal way to check both is sketched below.
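Here is a rough sketch of that second check. It assumes Percona Server's extended Innodb_checkpoint_age status counter; in stock MySQL you would instead compute the checkpoint age as "Log sequence number" minus "Last checkpoint at" from the LOG section of SHOW ENGINE INNODB STATUS:

-- Dirty-page percentage: compare against innodb_max_dirty_pages_pct.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_total';
-- dirty % = pages_dirty / pages_total * 100

-- Checkpoint age: compare against 3/4 * innodb_log_file_size * innodb_log_files_in_group.
SHOW GLOBAL STATUS LIKE 'Innodb_checkpoint_age';  -- Percona Server extension, not in stock MySQL
SHOW GLOBAL VARIABLES LIKE 'innodb_log_file_size';
SHOW GLOBAL VARIABLES LIKE 'innodb_log_files_in_group';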
So why is it important to know the mix of InnoDB flushing types in your workload? There are a couple of reasons:
(Post edited by Fred Linhoss)