How to calculate a good InnoDB log file size

Peter wrote a post a while ago about choosing a good InnoDB log file size. Not to pick on Peter, but the post actually kind of talks about a lot of things and then doesn’t tell you how to choose a good log file size! So I thought I’d clarify it a little.

The basic point is that your log file needs to be big enough to let InnoDB optimize its I/O, but not so big that recovery takes a long time. That much Peter covered really well. But how do you choose that size? I’ll show you a rule of thumb that works pretty well.

In most cases, when people give you a formula for choosing a configuration setting, you should look at it with skepticism. But in this case you can calculate a reasonable value, believe it or not. Run these queries at your server’s peak usage time:
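The queries themselves are a pair of InnoDB status snapshots taken 60 seconds apart. A session along these lines does it, using the mysql client's pager to filter the output down to the log sequence number (the numbers shown here are illustrative, and the exact LSN format varies by version — older servers print it as two numbers):

```sql
mysql> pager grep sequence
PAGER set to 'grep sequence'
mysql> SHOW ENGINE INNODB STATUS\G SELECT SLEEP(60); SHOW ENGINE INNODB STATUS\G
Log sequence number 1001000000
1 row in set (0.06 sec)

1 row in set (1 min 0.00 sec)

Log sequence number 1002923835
1 row in set (0.05 sec)
```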

Notice the log sequence number. That’s the total number of bytes written to the transaction log. So, now you can see how many MB have been written to the log in one minute. (The technique I showed here works on all versions of MySQL. In 5.0 and newer, you can just watch Innodb_os_log_written from SHOW GLOBAL STATUS, too.)
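The arithmetic is trivial, but to make it concrete, here is a small sketch (using hypothetical LSN readings, not measurements from any real server) of the calculation:

```python
# Two hypothetical "Log sequence number" readings taken 60 seconds apart.
# The LSN is a running byte count of log writes, so the difference between
# two readings is the number of bytes written to the log in that interval.
lsn_start = 1_001_000_000
lsn_end = 1_002_923_835

mb_per_min = (lsn_end - lsn_start) / 1024 / 1024
mb_per_hour = mb_per_min * 60

print(round(mb_per_min, 2))   # ~1.83 MB of log writes per minute
print(round(mb_per_hour))     # ~110 MB per hour
```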

As a rough rule of thumb, you can make the log big enough that it can hold at most an hour or so of logs. That’s generally plenty of data for InnoDB to work with; an hour’s worth is more than enough for it to reorder the writes into sequential I/O during the flushing and checkpointing process. At this rate, this server could use about 110MB of logs, total. Round that up to 128MB for good measure. Since there are two log files by default, divide it in half, and set each log file to 64MB.
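In my.cnf terms, that works out to:

```ini
# 128MB of total log capacity, split across the default two files
innodb_log_file_size = 64M
innodb_log_files_in_group = 2
```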

Does that look surprisingly small? It might. I commonly see log file sizes in the gigabyte range. But that’s generally a mistake. The server I used for the measurements above is a big one doing a lot of work, not a toy. Log file sizes can’t be left at the default 5MB for any real workload, but they often don’t need to be as big as you might think, either.

If this rule-of-thumb calculation ends up showing you that your log file size ought to be many gigabytes, well, you have a more active write workload. Perhaps you’re inserting a lot of big rows or something. In this case you might want to make the log smaller so you don’t end up with GB of logs. But also realize this: the recovery time depends not only on the total log file size, but the number of entries in it. If you’re writing huge entries to the log, fewer log entries will fit into a given log file size, which will generally make recovery faster than you might expect with a big log.

However, most of the time when I run this calculation, I end up finding that the log file size needs to be a lot smaller than it’s configured to be. In part that’s because InnoDB’s log entries are very compact. The other reason is that the common advice to size the logs as a fraction of the buffer pool size is just wrong.

One final note: huge buffer pools or really unusual workloads may require bigger (or smaller!) log sizes. This is where formulas break down and judgment and experience are needed. But this “rule of thumb” is generally a good sane place to start.


Comments (52)

  • Oz Solomon

    Baron, thanks for a great article.

    I’ve read in a few places that changing innodb_log_file_size after the fact can cause problems, especially when you have slaves. In your experience, is this something I should be worried about, or is it safe to update the conf file and restart?

    (I’m using 5.0.45 and 5.0.67 community)


    November 22, 2008 at 7:19 pm
  • Baron Schwartz

    Hi Oz,

    This parameter isn’t related to replication on either master or slave.

    If you change the parameter, you need to shut down cleanly and then move the log files away and restart. InnoDB will fail on restart if existing log files don’t match the configured size.
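    In shell terms, the procedure is roughly this (a sketch only — paths, datadir, and startup commands vary by system):

    ```shell
    mysqladmin shutdown                  # clean shutdown first
    cd /var/lib/mysql                    # assuming the default datadir
    mv ib_logfile0 ib_logfile1 /tmp/     # move the old logs aside, don't delete yet
    # edit innodb_log_file_size in my.cnf, then restart:
    mysqld_safe &                        # InnoDB creates new logs at the configured size
    # once it starts successfully, discard the saved logs
    ```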

    November 22, 2008 at 7:28 pm
  • Sheeri K. Cabral

    Interesting, I never thought about it in such depth — I usually just log on to the system and see how often the log rotates — for instance, just now here’s a sample from a production machine:

    /var/lib/mysql> ls -lrth ib_logfile*
    -rw-r--r-- 1 mysql mysql 400M Nov 22 09:00 ib_logfile1
    -rw-r--r-- 1 mysql mysql 400M Nov 23 10:51 ib_logfile0

    Granted, this is a slow time, but as you can see, there’s at *least* about 2 hours’ worth of space in the logs, as the logfile was “rotated” at 9 am, and it’s 10:51 right now and the logfile is still current. I can logon later and see when the ib_logfile0 rotated as well.

    Your way works too, but depends on getting the right 60 seconds for the average workload at peak time. Certainly your method lands you in the right ballpark, and my method is inaccurate for many other reasons — I’m getting the average over what is usually a large file size, so if the peak time lasts 1 hour and the log rotates approximately every 2 hours, I probably don’t want to cut the log file size in 1/2, maybe more like 2/3 or 3/4, because the volume is averaged.

    I think both methods definitely have their merit. I’ve found it’s easier to have clients understand their log file size is too big because “look, you only want an hour’s worth, and here I can show you with an ‘ls -l’, you have at least 4 hours’ worth, so let’s try cutting it in half.”

    Though I think the nature of our work can be different — we have 100% transparency to the client, and only in dire emergencies do we do stuff without getting the client’s approval first. Most of what we do is recommendations (most recommendations are followed, but still)….we’re more a part of the DBA team, and Percona’s structure (from what I understand) is more of a “we’ll help you get standing on your feet, and prepare you for the future” but aren’t a day-to-day part of the existing DBA team.

    November 23, 2008 at 9:04 am
    • Jette

      I guess this has changed since 2008. These days the two files always have the same mtime. So I guess your “trick” doesn’t work anymore.

      Quote: The first log file will always have the checkpoint info written to it, so that is why even if ib_logfile1 is the “active” logfile, you will still see writes to ib_logfile0.

      July 27, 2017 at 3:37 pm
  • Sheeri K. Cabral

    (oh, and it’s fairly easy to have monitoring mark when a file was rotated, so I can have it figured out automatically. Then again, the nature of our work is also different — your clients have more dire emergencies than ours, likely because yours come to you in dire emergencies, and we may have some come to us in a dire emergency, but often they stay along for general day-to-day DBA stuff….)

    But — this would be a good check for a MySQL Monitoring system, whether it’s the Enterprise offering from Sun, or a cacti monitoring thing (frequency of log rotation) or an OurDelta thing.

    November 23, 2008 at 9:07 am
  • Matt

    Sheeri, Look at your timestamps again – it’s more like 26 hours than 2!

    November 24, 2008 at 2:00 am
  • Baron Schwartz

    Actually, I think most of our work is not as you imagine — it is routine remote work, just like yours, and we get approval just like you. Percona does everything Pythian does, from what I know, except we don’t do Oracle and SQL Server (much).

    November 24, 2008 at 7:52 am
  • Ketan Bengali

    Hi Baron,

    Thanks a lot for the article. It was really good. I had a query about the innodb_max_dirty_pages parameter and its impact on log file size. Consider a case where innodb_buffer_size is set to 2GB and max_dirty_pages is set to 90%. What this means is that at any point in time MySQL could have up to (2*0.9) 1.8GB worth of committed and unflushed data in memory. After a MySQL crash and during recovery, MySQL should recover 1.8GB worth of data from the log files. Even if the log files are not utilized heavily, keeping their size small and overwriting them would result in a permanently unrecoverable MySQL instance. Has anyone faced this problem before? Is (innodb_max_dirty_pages*innodb_buffer_size) even used in calculating log file size? If not, then how does MySQL recover unflushed data after a crash?


    Ketan. Bengali

    November 24, 2008 at 9:04 am
  • Kari Pätilä

    Does anyone have an idea why I can’t set innodb_log_file_size without crashing MySQL? The restart says OK, but nothing seems to work. Other innodb variables produce no problems.

    November 24, 2008 at 11:43 am
  • Baron Schwartz

    Ketan, that’s not how InnoDB works. Full records don’t go in the logs, only very compact changes. Just because there is 1.8GB of dirty buffers doesn’t mean 1.8GB of changes have been made to them.

    And even if the logs are too small, InnoDB will still not get itself into an unrecoverable situation. If it has to, it will block further changes to data until it has a chance to free up some room to work.

    November 24, 2008 at 11:47 am
  • Baron Schwartz

    Kari, to change the log file size, you have to shut MySQL down cleanly and move the old log files out of the way, then restart. Keep the old logs until it starts successfully (creating new ones) and then discard them.

    November 24, 2008 at 11:50 am
  • Kari Pätilä

    Thank you for the quick response.

    November 24, 2008 at 12:57 pm
  • Ketan Bengali

    Thanks a lot Baron for clarifying it. It was really useful.


    Ketan Bengali

    November 25, 2008 at 9:47 am
  • Jesper Wisborg Krogh

    I have got the impression that if you get the error message:

    081223 7:54:18 InnoDB: ERROR: the age of the last checkpoint is xxxxx,
    InnoDB: which exceeds the log group capacity yyyyy.
    InnoDB: If you are using big BLOB or TEXT rows, you must set the
    InnoDB: combined size of log files at least 10 times bigger than the
    InnoDB: largest such row.

    then it is necessary to increase the value of innodb_log_file_size. Is that only a matter of obtaining maximal performance, or can it have more severe implications as well, such as losing ACID compliance, corrupt data, etc.?

    December 22, 2008 at 7:04 pm
  • Baron Schwartz

    No. That error is InnoDB telling you, “hey you! I cannot work with the resources you gave me! I refuse!” and NOT doing work instead. You’d get into trouble if InnoDB tried to do something it couldn’t. InnoDB is very careful not to break those guarantees you mentioned.

    December 22, 2008 at 7:55 pm
  • Jesper Wisborg Krogh

    That sounds good; however, I assume there is no alternative other than increasing the value of innodb_log_file_size or making the transactions smaller? In our case innodb_log_file_size is already at 512 MB, so we would prefer not to increase it all that much.

    December 22, 2008 at 8:05 pm
  • Baron Schwartz

    Jesper, I think something else is wrong. Have you *confirmed* that the log files are 512MB? What do you get from this?

    SHOW GLOBAL VARIABLES LIKE 'innodb_log_file_size';

    I think you have a typo in your my.cnf, or you’re putting the variables in the wrong my.cnf file, or something like that. Either that, or you’re working with blobs that are hundreds of MB or some other extremely unusual workload. Or an InnoDB bug (very unlikely). Either way, if you can’t figure out what’s wrong after checking these things, I think you probably need expert help 😐

    December 22, 2008 at 8:40 pm
  • Jesper Wisborg Krogh

    mysql> SHOW GLOBAL VARIABLES LIKE 'innodb_log_file_size';
    | Variable_name | Value |
    | innodb_log_file_size | 536870912 |
    1 row in set (0.00 sec)

    The error message in the log states that the age of the last checkpoint is 966364465 and the capacity is 966363956. I’m not sure why the capacity is a bit smaller than two times the innodb_log_file_size, but I assume some space is used for bookkeeping.

    It is one particular transaction that seems to trigger the error, and it involves a table with a total of nine columns: 3 unsigned ints, 4 ints, 1 unsigned tinyint, and one datetime. It is a cache table, so the transaction first deletes all rows and then repopulates it, followed by a couple of table scans to manipulate the data in an ordered update. The total number of rows is 1286113 (determined with SELECT COUNT(*)). The file containing the data for the table is 772 MB. According to SHOW TABLE STATUS the stats are:

    Data_length: 165838848
    Index_length: 219807744
    Data_free: 0
    Comment: InnoDB free: 399360 kB

    Other things are going on in the server at the same time. I tried a test on our development system, and it will run on an idle system with innodb_log_file_size set to 512 MB, so it might also be that we can get around it by rescheduling the tasks; however, I suspect even then we would be running close to the limit. Which brings me to: is there any way to monitor whether you are getting close to using all the available space in the InnoDB log files?

    December 22, 2008 at 9:05 pm
  • Baron Schwartz

    I think you need to be nicer to your database. Try building the new table in table_new, so you can commit the transaction after every command, and then swapping table_current and table_new with RENAME TABLE. And now your free advice tokens for the day are all spent 🙂

    December 22, 2008 at 9:19 pm
  • Jesper Wisborg Krogh

    Thanks a lot – I had considered that as an option, as well as keeping track of which rows actually need to be changed. It will of course require a bit more work and bookkeeping, but it should be doable, and I believe in this case it is the best long-term solution.

    Merry Christmas

    December 22, 2008 at 9:25 pm
  • Adam

    Heh, I am messing with this now for my research and I’m quite certain it says right in the readme that it should be 25% of your buffer pool size, which should be 50-80% of available memory. To me it seems like 64M is indeed way too low.

    January 4, 2009 at 1:21 pm
  • Baron Schwartz

    That is exactly the point of writing this post. Those guidelines are wrong. Both of them.

    January 4, 2009 at 2:37 pm
  • Vladimir Rusinov

    Hi. Here is a translation into Russian (if anyone is interested):

    February 1, 2009 at 4:31 pm
  • John Marc

    Yow! Guess I need to up this!

    -rw-rw---- 1 mysql mysql 100M Mar 28 16:49 ib_logfile0
    -rw-rw---- 1 mysql mysql 100M Mar 28 16:50 ib_logfile1

    Using Baron’s calcs I am looking at 200MB, and Sheeri’s method says 600MB?

    March 28, 2009 at 1:55 pm
  • Andy Geers

    I’m slightly confused by this article – it seems to match very closely the advice given by the InnoDB developers for the innodb_log_buffer_size setting here:

    … rather than the innodb_log_*file*_size setting that you are talking about. Can you confirm that you definitely mean the setting you’re referring to?

    April 20, 2009 at 4:38 am
  • Baron Schwartz

    Yes, I mean innodb_log_file_size. About the slides — actually that advice was from Peter Zaitsev. He’s not giving you all the details you need to understand what he was talking about. You would have had to be in that talk to understand it.

    April 20, 2009 at 3:28 pm
  • Catalin


    Do you guys know why my InnoDB data and log files haven’t changed?
    ibdata1 has 1.8G
    ib_logfile0 has 5M
    ib_logfile1 has 5M

    Here are my settings:

    innodb_data_home_dir = /var/lib/mysql/
    innodb_data_file_path = ibdata1:1000M;ibdata2:10M:autoextend
    innodb_log_group_home_dir = /var/lib/mysql/
    innodb_log_arch_dir = /var/lib/mysql/
    innodb_buffer_pool_size = 2048M
    innodb_additional_mem_pool_size = 20M
    innodb_log_file_size = 512M
    innodb_log_files_in_group = 2
    innodb_log_buffer_size = 8M
    innodb_thread_concurrency = 8
    innodb_flush_log_at_trx_commit = 1
    innodb_lock_wait_timeout = 50

    I stopped mysqld, deleted the data+log files, but when I start mysql again the log shows:

    100601 09:08:30 mysqld started
    InnoDB: The first specified data file ./ibdata1 did not exist:
    InnoDB: a new database to be created!
    100601 9:08:30 InnoDB: Setting file ./ibdata1 size to 10 MB
    InnoDB: Database physically writes the file full: wait…
    100601 9:08:30 InnoDB: Log file ./ib_logfile0 did not exist: new to be created
    InnoDB: Setting log file ./ib_logfile0 size to 5 MB
    InnoDB: Database physically writes the file full: wait…
    100601 9:08:30 InnoDB: Log file ./ib_logfile1 did not exist: new to be created
    InnoDB: Setting log file ./ib_logfile1 size to 5 MB
    InnoDB: Database physically writes the file full: wait…
    InnoDB: Doublewrite buffer not found: creating new
    InnoDB: Doublewrite buffer created
    InnoDB: Creating foreign key constraint system tables
    InnoDB: Foreign key constraint system tables created
    100601 9:08:30 InnoDB: Started; log sequence number 0 0
    100601 9:08:30 [Note] /usr/libexec/mysqld: ready for connections.
    Version: '5.0.77' socket: '/var/lib/mysql/mysql.sock' port: 3306 Source distribution
    A mysqld process already exists at Tue Jun 1 09:21:35 BST 2010
    100601 9:39:04 InnoDB: ERROR: the age of the last checkpoint is 9434190,
    InnoDB: which exceeds the log group capacity 9433498.
    InnoDB: If you are using big BLOB or TEXT rows, you must set the
    InnoDB: combined size of log files at least 10 times bigger than the
    InnoDB: largest such row.

    June 1, 2010 at 1:56 am
  • Baron Schwartz

    Could you ask this question on our forum instead?

    June 1, 2010 at 7:45 am

    When calculating InnoDB log file space usage: if you consider only the “Log sequence number” and don’t account for checkpoints (“Last checkpoint at”) at all, and InnoDB has reached a checkpoint multiple times within the 60 seconds between status outputs, you might be setting the file size inappropriately.