Choosing innodb_buffer_pool_size

Peter Zaitsev
My last post about Innodb Performance Optimization got a lot of comments about choosing a proper innodb_buffer_pool_size, and indeed I oversimplified things a bit too much, so let me write a better description.
The Innodb Buffer Pool is by far the most important option for Innodb performance, and it must be set correctly. I’ve seen a lot of clients who suffered badly from leaving it at the default value (8M). So if you have a dedicated MySQL box and you’re only using Innodb tables, you will want to give the Innodb Buffer Pool all the memory you do not need for other purposes.
This of course assumes your database is large enough to need a large buffer pool; if not, setting the buffer pool a bit larger than your database size will be enough. You should also account for growth, of course. You need the buffer pool a bit (say 10%) larger than your data (the total size of your Innodb tablespaces) because it does not only contain data pages – it also contains adaptive hash indexes, the insert buffer, and locks, which also take some space. It is not that critical, though – for most workloads, if your Innodb Buffer Pool is 10% smaller than your database size you would not lose much anyway.
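As a back-of-the-envelope sketch, the 10% headroom rule above can be written out; the 40GB data size is an assumed example, not a recommendation:

```shell
# Sketch of the sizing rule above; data_gb is an assumed example value.
data_gb=40                               # total size of Innodb tablespaces, GB
pool_gb=$(( data_gb + data_gb / 10 ))    # add ~10% for hash indexes, insert buffer, locks
echo "suggested innodb_buffer_pool_size: ${pool_gb}G"
```

On a real box you would substitute the actual total size of your Innodb tablespaces, plus expected growth.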
You may also choose to set the buffer pool as if your database were already larger than the amount of memory you have – so you do not forget to readjust it later. This is also a valid approach: on a dedicated MySQL server you may not have a good use for that memory anyway.
Another thing you should take into account is that Innodb allocates more memory in structures related to the buffer pool than you specify for it – I just checked on our set of boxes which use a 12GB buffer pool (with a very small additional pool and log buffer), and the total amount of memory allocated by Innodb is typically a bit over 13GB. Vadim posted some numbers on this a while back.
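Based on the observation above (12GB configured, a bit over 13GB allocated), you can budget roughly 8-10% on top of the configured pool; the exact factor below is my assumption derived from that one data point, not a fixed constant:

```shell
# Rough overhead sketch: total Innodb allocation vs. configured pool size.
# The 8% factor is an assumption based on the 12GB -> ~13GB observation above.
pool_mb=$(( 12 * 1024 ))                 # 12GB configured pool, in MB
total_mb=$(( pool_mb * 108 / 100 ))      # plus ~8% for internal structures
echo "budget about ${total_mb} MB for Innodb in total"
```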
After you have decided on database size, you need to check whether there are any restrictions on the Innodb Buffer Pool size you can use. Typically you would see restrictions only on 32-bit systems, but these can still be spotted in the wild, especially in the Windows world. The restriction normally applies to the total amount of memory a process can allocate, so make sure to leave space for other MySQL needs while factoring this in.
The next step is to decide how much memory you need for other purposes. First there are OS needs – your system processes, page tables, socket buffers, etc. all need memory. I would put this at 256M for small boxes up to 5% of memory size on big boxes, though it can be even less than that. Besides operating system needs you also have MySQL needs – these include the MySQL buffers: the query cache, key_buffer, MySQL threads, temporary tables, and the per-thread sort buffers which can be allocated. There are also things like the Innodb additional memory pool (which can grow beyond the memory you allocated for it, especially if you have a large number of tables).
I could give you some numbers – for example, sum up all your global buffers and add 1MB for each connection you’re planning to have – but in reality the number can vary significantly depending on the load. Idle connections, for example, will consume significantly less memory than connections working with huge temporary tables or otherwise running complex queries. It is usually much better to simply check it: start MySQL with a 10GB Innodb buffer pool, for example, and see how large RSS and VSZ get in “ps” output on Unix systems. If RSS gets to 12GB, you know about 2GB is needed for other stuff, and you can increase that estimate a bit to be on the safe side and scale appropriately.
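To check this on a live box, something like the following works; here I parse a captured sample line, so the numbers are illustrative assumptions rather than measurements:

```shell
# Sketch: read mysqld RSS/VSZ from ps output (the sample line is made up).
# On a live system you would use: ps -o comm,rss,vsz -C mysqld
sample="mysqld 12582912 13631488"        # comm, RSS in KB, VSZ in KB
rss_gb=$(echo "$sample" | awk '{printf "%d", $2 / 1024 / 1024}')
overhead_gb=$(( rss_gb - 10 ))           # assuming a 10GB configured buffer pool
echo "RSS ${rss_gb}GB => about ${overhead_gb}GB used for other needs"
```

Let the server warm up and take real load before measuring, otherwise per-connection and temporary-table memory will be underrepresented.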
The third important memory consumer is the OS cache. You want to bypass the cache for your Innodb tables, but there are other things you need the OS cache for – MyISAM tables (the mysql database, temporary tables, etc.) will need it, as will .frm files, binary logs, and relay logs. The Innodb transactional logs also like to be cached, otherwise the OS will need to do reads to serve writes to these log files, as IO to the log files is not aligned to the page boundary. Finally, you likely have some system scripts/processes running which also need some cache. The number can differ a lot depending on system workload, but generally I’d say values from 200MB to 1GB are good estimates.
Eliminate Double Buffering – This is again very important for the buffer pool size choice. You do not want the OS to cache what Innodb is already caching. The Innodb cache is more efficient than the OS cache because there is no copying, due to the adaptive hash indexes, the ability to buffer writes, and a number of other factors, so you want your OS to give way to Innodb. Note it is not enough to block the OS from swapping – as I already mentioned, the OS cache will be needed for other things, and if you do not make Innodb bypass OS buffering, Innodb tablespace IO will wipe out the cache because it typically makes up most of the IO on the system. On Windows you do not need to do anything. On Linux, FreeBSD, and Solaris you need to set innodb_flush_method=O_DIRECT. On other operating systems you may be able to select it at the OS level, but make sure to do it. There is a small niche case when it hurts – when you do not have a RAID with BBU and your workload is very write-intensive – but there are always exceptions.
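In my.cnf this is just one line; a minimal fragment might look like this (the buffer pool value is an assumed example for a dedicated box, not a recommendation):

```ini
# my.cnf sketch for a dedicated Innodb box on Linux/FreeBSD/Solaris
[mysqld]
innodb_buffer_pool_size = 12G
innodb_flush_method     = O_DIRECT   # bypass OS cache for tablespace IO
```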
Make your OS Happy – The other challenge you may have is making your OS happy and avoiding swapping out the MySQL process or other important processes to make room for the file cache. The OS may find it unfair that there is a MySQL process consuming 95% of memory while the cache is just a couple of percent. Some people try to solve this by disabling the swap file, but it can hurt in another way – the OS may kill the MySQL process when it runs out of memory (or thinks it is running out of memory), which may happen, e.g., in case of an unexpected connection spike. Plus not all kernels work well with swap disabled, and there are other reasons against it. For some people having no swap works, though they usually play it safe, keeping enough “free” memory used as cache and buffers. Kevin Burton wrote a good post about his experiments.
Depending on the OS you may want to make different VM memory adjustments. You may want to make MySQL use Large Pages for allocating the Innodb Buffer Pool and a few other buffers, which may have other performance benefits as well. Tuning your VM to be less eager to swap things out with echo 0 > /proc/sys/vm/swappiness is another helpful change, though it does not always save you from swapping. Some specific kernel versions may have other settings to play with. Finally, you can try locking MySQL in memory by using --memlock – just be careful, as in case of a memory usage spike you may have the MySQL server killed by the OS instead of temporarily swapping a few things out.
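As persistent configuration on Linux, these knobs would look roughly like the sketch below; the option names are real, but whether you want memlock at all depends on the OOM-kill trade-off just described:

```ini
# /etc/sysctl.conf sketch: make the VM less eager to swap
vm.swappiness = 0

# my.cnf sketch: Large Pages for the buffer pool (HugePages must be
# configured in the OS first) and locking mysqld in memory
[mysqld]
large-pages
memlock
```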
Two things to note about OS swapping. First, looking at “swap used” is not really helpful, because you do not know what lies in the swap – there are portions of both the MySQL process and other processes which you do not need during normal operation, so it is OK for them to end up in swap. Make sure, however, that active swapping is not happening, i.e. your vmstat “si/so” columns are zero on Linux. A couple of swaps per minute would not hurt much, but if you’re doing 100+ pages per second of swap IO, you’re in trouble.
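A quick way to watch for this is to pull the si/so columns out of vmstat; the sample line below is fabricated for illustration:

```shell
# Sketch: flag heavy swap activity from vmstat's si/so columns (fields 7 and 8).
# Live usage would be: vmstat 1 5 ; the data line below is a made-up sample.
sample="0 0 2048 512000 8000 900000 0 120 5 30 200 400 5 1 94 0"
si=$(echo "$sample" | awk '{print $7}')
so=$(echo "$sample" | awk '{print $8}')
if [ "$si" -gt 100 ] || [ "$so" -gt 100 ]; then
    status="trouble: sustained swap IO (si=${si}, so=${so})"
else
    status="ok"
fi
echo "$status"
```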
Second, many people think: who cares if some of the buffer pool is swapped out? I would have one IO to fetch a page with a small buffer pool, and now I have one IO to fetch page data that was swapped out. This is very wrong thinking. First, the OS has to swap out even clean pages from the buffer pool, while Innodb can simply discard such pages under memory pressure. But what is more important, Innodb algorithms are finely tuned around what is in memory and what is on disk – for example, Innodb tries to avoid holding latches while doing IO, while locks can be held while accessing pages in the buffer pool. If a page turns out to be in swap rather than in memory, you will have other threads piling up waiting on the same lock until IO completion, while they may well have all the data they need to proceed right in the Innodb buffer pool.
I guess these clarifications work better than a 70-80% recommendation, and of course you should not stick to just 50GB if you have 64GB of memory – values of 56-60GB would likely make more sense. Depending on a bunch of other settings, 12GB or 14GB may well be a good choice for a 16GB box, though I would be careful at values close to 14GB, as there is not much room left for other things.
P.S. I have only described Innodb Buffer Pool selection for a dedicated Innodb system. If you have a fair amount of MyISAM, Archive, PBXT, Falcon, or other storage engines, then you get into a complex balancing game on top of all these factors.