
MySQL 5.7 Performance Tuning Immediately After Installation

 | October 12, 2016 |  Posted In: InnoDB, MySQL


This blog updates Stephane Combaudon's post on MySQL performance tuning settings and covers what to tune in MySQL 5.7 immediately after installation.

A few years ago, Stephane Combaudon wrote a blog post on Ten MySQL performance tuning settings after installation that covers the (now) older versions of MySQL: 5.1, 5.5 and 5.6. In this post, I will look into what to tune in MySQL 5.7 (with a focus on InnoDB).

The good news is that MySQL 5.7 has significantly better default values. Morgan Tocker created a page with a complete list of features in MySQL 5.7, which is a great reference point. For example, the following variables are now set to sensible values by default:

  • innodb_file_per_table = ON
  • innodb_stats_on_metadata = OFF
  • innodb_buffer_pool_instances = 8 (or 1 if innodb_buffer_pool_size < 1G)
  • query_cache_type = 0 and query_cache_size = 0 (the query cache is disabled)
  • sync_binlog = 1

In MySQL 5.7, there are only four really important variables that need to be changed. However, there are other InnoDB and global MySQL variables that might need to be tuned for a specific workload and hardware.

To start, add the following settings to my.cnf under the [mysqld] section. You will need to restart MySQL:


Variable Value
innodb_buffer_pool_size Start with 50%–70% of total RAM. Does not need to be larger than the database size.
innodb_flush_log_at_trx_commit
  • 1   (default; fully durable)
  • 0/2 (more performance, less reliability)
innodb_log_file_size 128M – 2G (does not need to be larger than the buffer pool)
innodb_flush_method O_DIRECT (avoid double buffering)
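Put together, the settings above might look like the following my.cnf fragment. The concrete numbers are illustrative for a dedicated database server with 16G of RAM, not universal recommendations:

```ini
[mysqld]
# Roughly 50-70% of total RAM on a dedicated database server
innodb_buffer_pool_size = 10G
# 1 (the default) is fully durable; 0 or 2 trade reliability for speed
innodb_flush_log_at_trx_commit = 1
# 128M-2G; larger logs mean fewer checkpoints but longer crash recovery
innodb_log_file_size = 1G
# Bypass the OS page cache to avoid double buffering
innodb_flush_method = O_DIRECT
```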


What is next?

Those are a good starting point for any new installation. A number of other variables can increase MySQL performance for some workloads. Usually, I would set up a MySQL monitoring/graphing tool (for example, the Percona Monitoring and Management platform) and then check the MySQL dashboard to perform further tuning.

What can we tune further based on the graphs?

InnoDB buffer pool size. Look at the graphs:

[Graph: InnoDB buffer pool size vs. available RAM]

[Graph: InnoDB buffer pool free pages]

As we can see, we can probably benefit from increasing the InnoDB buffer pool size a bit to ~10G, as we have RAM available and the number of free pages is small compared to the total buffer pool.
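That "free pages vs. total pages" check can be sketched as a simple heuristic. The 5% threshold is my assumption, not an official guideline; the inputs correspond to the Innodb_buffer_pool_pages_free and Innodb_buffer_pool_pages_total status counters:

```python
def buffer_pool_looks_tight(pages_free, pages_total, threshold=0.05):
    """Heuristic: if free pages are only a small fraction of the buffer
    pool, the working set may not fit and growing the pool could help.
    Inputs come from SHOW GLOBAL STATUS counters
    Innodb_buffer_pool_pages_free and Innodb_buffer_pool_pages_total."""
    return pages_free / pages_total < threshold

# Example: an 8G pool (524,288 x 16K pages) with only 10,000 free pages
print(buffer_pool_looks_tight(10_000, 524_288))  # → True
```

Note that in MySQL 5.7, innodb_buffer_pool_size is a dynamic variable, so the pool can be grown online with SET GLOBAL innodb_buffer_pool_size = ... without a restart.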

InnoDB log file size. Look at the graph:

[Graph: InnoDB log file usage per hour]

As we can see here, InnoDB usually writes 2.26 GB of data per hour, which exceeds the total size of the log files (2G). We can now increase the innodb_log_file_size variable and restart MySQL. Alternatively, use “show engine InnoDB status” to calculate a good InnoDB log file size.
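The usual calculation is to take two "Log sequence number" readings from show engine InnoDB status about a minute apart, and size the logs to hold roughly an hour of writes. A sketch of that arithmetic (the one-hour rule of thumb and the two log files are assumptions matching common defaults):

```python
def recommended_log_file_size(lsn_start, lsn_end, interval_s,
                              hours_of_writes=1.0, n_log_files=2):
    """Size innodb_log_file_size so total redo capacity holds about
    hours_of_writes worth of redo, given two "Log sequence number"
    readings (in bytes) taken interval_s seconds apart."""
    bytes_per_hour = (lsn_end - lsn_start) * 3600 / interval_s
    return bytes_per_hour * hours_of_writes / n_log_files

# Example: 64 MiB of redo written in 60 seconds
size = recommended_log_file_size(1_000_000_000, 1_067_108_864, 60)
print(f"{size / 2**30:.2f} GiB per log file")  # → 1.88 GiB per log file
```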

Other variables

There are a number of other InnoDB variables that can be further tuned:


Setting innodb_autoinc_lock_mode=2 (interleaved mode) can remove the need for the table-level AUTO-INC lock (and can increase performance when multi-row INSERT statements are used to insert values into tables with an auto_increment primary key). This requires binlog_format=ROW or MIXED (and ROW is the default in MySQL 5.7).

innodb_io_capacity and innodb_io_capacity_max

This is a more advanced tuning, and it only makes sense when you are performing a lot of writes all the time (it does not apply to reads, i.e., SELECTs). If you really need to tune it, the best method is to find out how many IOPS the system can do. For example, if the server has one SSD drive, we can set innodb_io_capacity_max=6000 and innodb_io_capacity=3000 (50% of the max). It is a good idea to run sysbench or another benchmark tool to measure the disk throughput.
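The 50%-of-max rule from that example can be written down directly. This is a sketch of the reasoning above, not an official formula; the input is whatever sustained random-write IOPS figure your disk benchmark reports:

```python
def io_capacity_settings(benchmarked_iops, fraction=0.5):
    """Derive (innodb_io_capacity, innodb_io_capacity_max) from a
    benchmarked sustainable IOPS figure: use the measured number as
    the max, and a fraction of it (50% here) as the steady-state
    innodb_io_capacity."""
    return int(benchmarked_iops * fraction), benchmarked_iops

# The single-SSD example from the text:
print(io_capacity_settings(6000))  # → (3000, 6000)
```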

But do we need to worry about this setting? Look at the graph of the buffer pool’s “dirty pages”:

[Graph: InnoDB buffer pool dirty pages]

In this case, the total amount of dirty pages is high, and it looks like InnoDB can’t keep up with flushing them. If we have a fast disk subsystem (i.e., SSD), we might benefit from increasing innodb_io_capacity and innodb_io_capacity_max.

Conclusion or TL;DR version

The new MySQL 5.7 defaults are much better for general purpose workloads. At the same time, we still need to configure InnoDB variables to take advantage of the amount of RAM on the box. After installation, follow these steps:

  1. Add InnoDB variables to my.cnf (as described above) and restart MySQL
  2. Install a monitoring system (e.g., the Percona Monitoring and Management platform)
  3. Look at the graphs and determine if MySQL needs to be tuned further
Alexander Rubin

Alexander joined Percona in 2013. He has worked with MySQL since 2000 as a DBA and application developer. Before joining Percona, he did MySQL consulting as a principal consultant for over 7 years (starting with MySQL AB in 2006, then Sun Microsystems, and then Oracle). He has helped many customers design large, scalable and highly available MySQL systems and optimize MySQL performance. Alexander has also helped customers design Big Data stores with Apache Hadoop and related technologies.


  • Alex, this is very useful. One thing I see a lot is people tuning innodb_io_capacity and innodb_io_capacity_max too high: because they think they have 12k provisioned IOPS, they can set max to 12000. This is dangerous, as we see a lot of issues in systems that set this too high. I think DBAs should give it a lot of thought before going over 5000 and understand why they are doing it.

    The manual also says it’s better to have this too low than too high.

  • You should set innodb_autoinc_lock_mode=2. If you have many inserts on one table from many clients, you can otherwise see the table lock in the InnoDB engine status.

    innodb_buffer_pool_size should be set as high as possible: on a small system (512 MB) about 70%, and on a large system (> 32 GB) up to 90%, when it only hosts MySQL.

  • Can you please post “after” graphs of each? I think that would help everyone understand how the recommended change in the various settings affects the graphs. For the last graph in particular, you say “InnoDB can’t keep up with flushing them.” Is that because the graph shows peaks/valleys? Would a straight-ish line indicate better performance?

  • Manjot, it’s also worth reading the latest version of the description of innodb_io_capacity and innodb_io_capacity_max. They were updated a few days ago to give better tuning guidance. In general, set them high enough but don’t set them a lot higher than you actually need.

    The reason for this is that the io operations can end up just being wasted and you can instead use them for foreground threads or faster response times. Whether that matters to you depends on just how many io operations you have available for use without negative performance effect. If you have a vast excess of io operations you might just choose to set them high to keep down possible crash recovery time, wasteful though it would be in cases where io operation shortage is a limiting factor.

    You’ll often find people used to higher-end setups suggesting far higher values than the defaults, but they don’t have the same need to consider the lower-end setups without a DBA that the defaults have to deal with. Notice how Alexander didn’t just give higher values – he’s been around the block and knows it takes a bit of measuring, not just picking numbers. And that measure-then-respond approach is what you really should be doing.

    It’s very likely that a normal production server will have far more free io operations off peak than at peak times. So you might increase the settings off peak, and perhaps also turn off innodb_adaptive_flushing and increase innodb_adaptive_flushing_lwm to say 50 (%), to catch up on background work, such as change buffer merging if you find your change buffer getting to its limit during the busy times. You may need to raise the LWM because otherwise adaptive flushing may just turn on again, and promptly cut the io rate that you’ve been deliberately trying to raise to get ahead of future anticipated demand.

    The LWM setting and adaptive flushing off are a pair of interesting “tricks” to increase the efficiency of redo log space use. If you normally find that you have say 20% of redo space used, you might halve the log file size and set the LWM to 40%. Since stock MySQL normally has the redo logs cached by the OS (it didn’t support O_DIRECT for redo logs until recently), you may then find that you’ve freed up some RAM you can allocate to other things. Use caution with an LWM higher than 60-65% and don’t set it higher than 70% unless you really know what you’re doing; you need to allow a fair bit of margin above the LWM for adaptive flushing to keep you out of trouble if something unexpected happens.

    I tend to find that these days I’m increasing the InnoDB redo log sizes to reduce the amount of dirty page flushing done by adaptive flushing. That’s usually to free it up for use by the page cleaner flushing, purge or the foreground threads. But of course only when there is actual io shortage. If there isn’t then it’s pointless unless you’re trying to cut writes to prolong SSD lifetime or have some other reason to do it.

    Longer term the server will presumably end up paying attention to available io operations and use that information to adjust background thread work rates in the context of what the foreground work is demanding to keep up with the workload, potentially eliminating the need for us to have to think about them. But even monitoring those rates is relatively new so global optimisation like that is some way off.

  • You set innodb_log_file_size to 256M instantly. In another blog post here it’s stated that this is not good practice. What is right?
