
How to calculate a good InnoDB log file size

November 21, 2008 | Posted In: Insight for DBAs


Peter wrote a post a while ago about choosing a good InnoDB log file size. Not to pick on Peter, but the post actually kind of talks about a lot of things and then doesn’t tell you how to choose a good log file size! So I thought I’d clarify it a little.

The basic point is that your log file needs to be big enough to let InnoDB optimize its I/O, but not so big that recovery takes a long time. That much Peter covered really well. But how do you choose that size? I’ll show you a rule of thumb that works pretty well.

In most cases, when people give you a formula for choosing a configuration setting, you should look at it with skepticism. But in this case you can calculate a reasonable value, believe it or not. Run these queries at your server’s peak usage time:
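One way to take the measurement is from the mysql client, using a pager to filter the voluminous status output down to the line we care about (a sketch based on the examples in the comments below; the log sequence numbers shown are illustrative):

```
mysql> pager grep sequence
PAGER set to 'grep sequence'
mysql> show engine innodb status\G select sleep(60); show engine innodb status\G
Log sequence number 3836410803
Log sequence number 3838334638
```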

Notice the log sequence number. That’s the total number of bytes written to the transaction log. So, now you can see how many MB have been written to the log in one minute. (The technique I showed here works on all versions of MySQL. In 5.0 and newer, you can just watch Innodb_os_log_written from SHOW GLOBAL STATUS, too.)
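The arithmetic can be sketched in shell; the two log sequence numbers here are hypothetical readings taken 60 seconds apart:

```shell
# Two hypothetical "Log sequence number" readings taken 60 seconds apart
lsn_start=3836410803
lsn_end=3838334638
# Bytes written per minute, scaled to an hour and converted to MB (integer math)
mb_per_hour=$(( (lsn_end - lsn_start) * 60 / 1024 / 1024 ))
echo "about ${mb_per_hour} MB of redo log written per hour"
```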

As a rough rule of thumb, you can make the log big enough that it can hold at most an hour or so of logs. That’s generally plenty of data for InnoDB to work with; an hour’s worth is more than enough so that it can reorder the writes to use sequential I/O during the flushing and checkpointing process. At this rate, this server could use about 110 MB of logs, total. Round it up to 128 for good measure. Since there are two log files by default, divide that in half, and now you can set
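A sketch of the resulting my.cnf setting (half of 128 MB per file, with the default two log files):

```
innodb_log_file_size = 64M
```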

Does that look surprisingly small? It might. I commonly see log file sizes in the gigabyte range. But that’s generally a mistake. The server I used for the measurements above is a big one doing a lot of work, not a toy. Log file sizes can’t be left at the default 5MB for any real workload, but they often don’t need to be as big as you might think, either.

If this rule-of-thumb calculation ends up showing you that your log file size ought to be many gigabytes, well, you have a more active write workload. Perhaps you’re inserting a lot of big rows or something. In this case you might want to make the log smaller so you don’t end up with GB of logs. But also realize this: the recovery time depends not only on the total log file size, but the number of entries in it. If you’re writing huge entries to the log, fewer log entries will fit into a given log file size, which will generally make recovery faster than you might expect with a big log.

However, most of the time when I run this calculation, I end up finding that the log file size needs to be a lot smaller than it’s configured to be. In part that’s because InnoDB’s log entries are very compact. The other reason is that the common advice to size the logs as a fraction of the buffer pool size is just wrong.

One final note: huge buffer pools or really unusual workloads may require bigger (or smaller!) log sizes. This is where formulas break down and judgment and experience are needed. But this “rule of thumb” is generally a good sane place to start.

Baron Schwartz

Baron is the lead author of High Performance MySQL. He is a former Percona employee.


  • Baron, thanks for a great article.

    I’ve read in a few places that changing innodb_log_file_size after the fact can cause problems, especially when you have slaves. In your experience, is this something I should be worried about, or is it safe to update the conf file and restart?

    (I’m using 5.0.45 and 5.0.67 community)


  • Hi Oz,

    This parameter isn’t related to replication on either master or slave.

    If you change the parameter, you need to shut down cleanly and then move the log files away and restart. InnoDB will fail on restart if existing log files don’t match the configured size.

  • Interesting, I never thought about it in such depth — I usually just log on to the system and see how often the log rotates — for instance, just now here’s a sample from a production machine:

    /var/lib/mysql> ls -lrth ib_logfile*
    -rw-r--r-- 1 mysql mysql 400M Nov 22 09:00 ib_logfile1
    -rw-r--r-- 1 mysql mysql 400M Nov 23 10:51 ib_logfile0

    Granted, this is a slow time, but as you can see, there’s at *least* about 2 hours’ worth of space in the logs, as the logfile was “rotated” at 9 am, and it’s 10:51 right now and the logfile is still current. I can logon later and see when the ib_logfile0 rotated as well.

    Your way works too, but depends on getting the right 60 seconds for the average workload at peak time. Certainly your method lands you in the right ballpark, and my method is inaccurate for many other reasons — I’m getting the average over what is usually a large file size, so if the peak time lasts 1 hour and the log rotates approximately every 2 hours, I probably don’t want to cut the log file size in 1/2, maybe more like 2/3 or 3/4, because the volume is averaged.

    I think both methods definitely have their merit. I’ve found it’s easier to have clients understand their log file size is too big because “look, you only want an hour’s worth, and here I can show you with an ‘ls -l’, you have at least 4 hours’ worth, so let’s try cutting it in half.”

    Though I think the nature of our work can be different — we have 100% transparency to the client, and only in dire emergencies do we do stuff without getting the client’s approval first. Most of what we do is recommendations (most recommendations are followed, but still)….we’re more a part of the DBA team, and Percona’s structure (from what I understand) is more of a “we’ll help you get standing on your feet, and prepare you for the future” but aren’t a day-to-day part of the existing DBA team.

  • (Oh, and it’s fairly easy to have monitoring mark when a file was rotated, so I can have it figured out automatically. Then again, the nature of our work is also different — your clients have more dire emergencies than ours, likely because yours come to you in dire emergencies; we may have some come to us in a dire emergency, but often they stay along for general day-to-day DBA stuff….)

    But — this would be a good check for a MySQL Monitoring system, whether it’s the Enterprise offering from Sun, or a cacti monitoring thing (frequency of log rotation) or an OurDelta thing.

  • Actually, I think most of our work is not as you imagine — it is routine remote work, just like yours, and we get approval just like you. Percona does everything Pythian does, from what I know, except we don’t do Oracle and SQL Server (much).

  • Hi Baron,

    Thanks a lot for the article. It was really good. I had a query about the innodb_max_dirty_pages parameter and its impact on log file size. Consider a case where innodb_buffer_size is set to 2GB and max_dirty_pages is set to 90%. What this means is that at any point in time MySQL could have up to (2*0.9) 1.8GB worth of committed and unflushed data in memory. After a MySQL crash and during recovery, MySQL should recover 1.8GB worth of data from the log files. Even if the log files are not utilized heavily, keeping them small and overwriting them would result in a permanently unrecoverable MySQL instance. Has anyone faced this problem before? Is the product (innodb_max_dirty_pages*innodb_buffer_size) even used in calculating log file size? If not, then how does MySQL recover unflushed data after a crash?


    Ketan. Bengali

  • Does anyone have an idea why I can’t set innodb_log_file_size without crashing MySQL? The restart says OK, but nothing seems to work. Other innodb variables produce no problems.

  • Ketan, that’s not how InnoDB works. Full records don’t go in the logs, only very compact changes. Just because there is 1.8GB of dirty buffers doesn’t mean 1.8GB of changes have been made to them.

    And even if the logs are too small, InnoDB will still not get itself into an unrecoverable situation. If it has to, it will block further changes to data until it has a chance to free up some room to work.

  • Kari, to change the log file size, you have to shut MySQL down cleanly and move the old log files out of the way, then restart. Keep the old logs until it starts successfully (creating new ones) and then discard them.
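    A sketch of that procedure as shell commands (the paths are examples; adjust for your system, and don't run this without a backup):

    ```
    mysqladmin shutdown                  # clean shutdown; never kill -9 mysqld here
    mv /var/lib/mysql/ib_logfile* /tmp/  # move, don't delete, until restart succeeds
    # edit my.cnf to set the new innodb_log_file_size, then restart mysqld;
    # InnoDB will create new log files at the configured size
    ```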

  • I have got the impression that if you get the error message:

    081223 7:54:18 InnoDB: ERROR: the age of the last checkpoint is xxxxx,
    InnoDB: which exceeds the log group capacity yyyyy.
    InnoDB: If you are using big BLOB or TEXT rows, you must set the
    InnoDB: combined size of log files at least 10 times bigger than the
    InnoDB: largest such row.

    then it is necessary to increase the value of innodb_log_file_size. Is that only a matter of obtaining maximal performance or can it have more severe implications as well such as losing ACID compliancy, corrupt data, etc.?

  • No. That error is InnoDB telling you, “hey you! I cannot work with the resources you gave me! I refuse!” and NOT doing work instead. You’d get into trouble if InnoDB tried to do something it couldn’t. InnoDB is very careful not to break those guarantees you mentioned.

  • That sounds good; however, I assume there is no alternative other than to increase the value of innodb_log_file_size or to make the transactions smaller? In our case innodb_log_file_size is already at 512 MB, so we would prefer not to increase it all that much.

  • Jesper, I think something else is wrong. Have you *confirmed* that the log files are 512MB? What do you get from this?

    SHOW GLOBAL VARIABLES LIKE 'innodb_log_file_size';

    I think you have a typo in your my.cnf, or you’re putting the variables in the wrong my.cnf file, or something like that. Either that, or you’re working with blobs that are hundreds of MB or some other extremely unusual workload. Or an InnoDB bug (very unlikely). Either way, if you can’t figure out what’s wrong after checking these things, I think you probably need expert help 😐

  • mysql> SHOW GLOBAL VARIABLES LIKE 'innodb_log_file_size';
    | Variable_name | Value |
    | innodb_log_file_size | 536870912 |
    1 row in set (0.00 sec)

    The error message in the log states that the age of the last checkpoint is 966364465 and the capacity is 966363956. I’m not sure why the capacity is a bit smaller than two times the innodb_log_file_size, but I assume some space is being used for bookkeeping.

    It is one particular transaction that seems to trigger the error; the table has a total of nine columns: 3 unsigned ints, 4 ints, 1 unsigned tinyint, and one datetime. It is a cache table, so the transaction first deletes all rows and then repopulates it, followed by a couple of table scans to manipulate the data in an ordered update. The total number of rows is 1286113 (determined with SELECT COUNT(*)). The file containing the data for the table is 772 MB. According to SHOW TABLE STATUS the stats are:

    Data_length: 165838848
    Index_length: 219807744
    Data_free: 0
    Comment: InnoDB free: 399360 kB

    Other things are going on in the server at the same time. I tried to do a test on our development system and it will run on an idle system with innodb_log_file_size set to 512 MB, so it might also be that we can get around it by rescheduling the tasks; however, I suspect even so we would be running close to the limit. Which brings me to: is there any way to monitor whether you are getting close to using all the available space in the InnoDB log files?

  • I think you need to be nicer to your database. Try building the new table in table_new, so you can commit the transaction after every command, and then swapping the table_current and table_new with RENAME TABLE. And now your free advice tokens for the day are all spent 🙂

  • Thanks a lot – I had considered that as an option, as well as keeping track of which rows actually need to be changed. It will of course require a bit more work and bookkeeping, but it should be doable, and I believe in this case it is the best long-term solution.

    Merry Christmas

  • Heh, I am messing with this now for my research and I’m quite certain it says right in the readme that it should be 25% of your buffer pool size, which should be 50-80% of available memory. To me it seems like 64M is indeed way too low.

  • Yow! Guess I need to up this!

    -rw-rw—- 1 mysql mysql 100M Mar 28 16:49 ib_logfile0
    -rw-rw—- 1 mysql mysql 100M Mar 28 16:50 ib_logfile1

    Using Baron’s calcs I am looking at 200MB, and Sheeri’s method says 600MB?

  • I’m slightly confused by this article – it seems to match very closely the advice given by the InnoDB developers for the innodb_log_buffer_size setting here: http://www.innodb.com/wp/wp-content/uploads/2007/05/uc2007-innodb-performance-optimization.pdf

    … rather than the innodb_log_*file*_size setting that you are talking about. Can you confirm that you definitely mean the setting you’re referring to?

  • Yes, I mean innodb_log_file_size. About the slides — actually that advice was from Peter Zaitsev. He’s not giving you all the details you need to understand what he was talking about. You would have had to be in that talk to understand it.

  • Hi,

    Do you guys know why my innodb data and log files haven’t changed ?
    ibdata1 has 1.8G
    ib_logfile0 has 5M
    ib_logfile1 has 5M

    Here are my settings:

    innodb_data_home_dir = /var/lib/mysql/
    innodb_data_file_path = ibdata1:1000M;ibdata2:10M:autoextend
    innodb_log_group_home_dir = /var/lib/mysql/
    innodb_log_arch_dir = /var/lib/mysql/
    innodb_buffer_pool_size = 2048M
    innodb_additional_mem_pool_size = 20M
    innodb_log_file_size = 512M
    innodb_log_files_in_group = 2
    innodb_log_buffer_size = 8M
    innodb_thread_concurrency = 8
    innodb_flush_log_at_trx_commit = 1
    innodb_lock_wait_timeout = 50

    I stopped mysqld, deleted the data+log files, but when I start mysql again the log shows:

    100601 09:08:30 mysqld started
    InnoDB: The first specified data file ./ibdata1 did not exist:
    InnoDB: a new database to be created!
    100601 9:08:30 InnoDB: Setting file ./ibdata1 size to 10 MB
    InnoDB: Database physically writes the file full: wait…
    100601 9:08:30 InnoDB: Log file ./ib_logfile0 did not exist: new to be created
    InnoDB: Setting log file ./ib_logfile0 size to 5 MB
    InnoDB: Database physically writes the file full: wait…
    100601 9:08:30 InnoDB: Log file ./ib_logfile1 did not exist: new to be created
    InnoDB: Setting log file ./ib_logfile1 size to 5 MB
    InnoDB: Database physically writes the file full: wait…
    InnoDB: Doublewrite buffer not found: creating new
    InnoDB: Doublewrite buffer created
    InnoDB: Creating foreign key constraint system tables
    InnoDB: Foreign key constraint system tables created
    100601 9:08:30 InnoDB: Started; log sequence number 0 0
    100601 9:08:30 [Note] /usr/libexec/mysqld: ready for connections.
    Version: ‘5.0.77’ socket: ‘/var/lib/mysql/mysql.sock’ port: 3306 Source distribution
    A mysqld process already exists at Tue Jun 1 09:21:35 BST 2010
    100601 9:39:04 InnoDB: ERROR: the age of the last checkpoint is 9434190,
    InnoDB: which exceeds the log group capacity 9433498.
    InnoDB: If you are using big BLOB or TEXT rows, you must set the
    InnoDB: combined size of log files at least 10 times bigger than the
    InnoDB: largest such row.

  • When calculating InnoDB log file space usage, what if we consider only the “Log sequence number” and don’t account for checkpoints (“Last checkpoint at”) at all? If InnoDB has reached a checkpoint multiple times within the 60 seconds between status outputs, might you be setting the file size inappropriately?

    Anyway, this is out of date as of MySQL 5.5: recovery times have been greatly improved, and so has the whole log file flushing algorithm. In 5.5 you generally want larger log files, since recovery is cheaper; large InnoDB log files thus let you take advantage of the new algorithm.
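    The checkpoint age mentioned above can be computed from the same status output; here is a sketch with awk (the status excerpt is hypothetical, and the script assumes the single-number LSN format of newer servers):

    ```shell
    # Hypothetical excerpt from SHOW ENGINE INNODB STATUS; newer servers print
    # the LSN as one number (5.0-era servers print two words, e.g. "49 2161386250")
    status_sample='Log sequence number 1001000
    Last checkpoint at 1000000'
    # Checkpoint age = log sequence number minus last checkpoint; compare it to
    # the combined size of your log files to see how close you are to the limit
    age=$(printf '%s\n' "$status_sample" |
      awk '/Log sequence number/ {lsn=$NF} /Last checkpoint at/ {ckpt=$NF}
           END {print lsn - ckpt}')
    echo "checkpoint age: $age bytes"
    ```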

  • One thing I don’t quite understand:
    when changing the MySQL iblog size, we need to shut down MySQL, move or remove the old iblog files, and restart with the new size indicated in my.cnf. Doesn’t the old iblog contain “uncommitted changes”/InnoDB crash recovery information? If we move or remove the old iblog files, where does InnoDB get that information upon the next restart?

  • If there’s no need for a crash recovery, the old log files are never read at all. This is a rule in InnoDB: log files are only read during recovery, and are otherwise just written to during normal operation.

  • What if the calculation above shows that I should have an innodb log file size of 1-2 MB? I’m running Percona server on an ec2 small instance with the XFS filesystem for the mysql data directory.

  • Hi!

    I come from Viet Nam. I want to check the capacity (free space and used space) of the datafiles in InnoDB (MySQL). What command should I use?
    I am just researching MySQL; I hope you can help me. Big thanks!

  • Hello Sheeri K. Cabral,

    My log files look like this:

    ls -lrth ib_logfile*
    -rw-rw—-. 1 mysql mysql 256M Nov 28 12:26 ib_logfile0
    -rw-rw—-. 1 mysql mysql 256M Nov 28 12:26 ib_logfile1

    Do I need to split the log files?

  • It seems that in 5.5 the second log file is touched even if a rotation has not occurred, that is to say that the ls -l approach is misleading.


    # ls -l /var/lib/mysql/ib_logfile* && date && mysql -BNe 'show engine innodb status\G' | grep sequence && sleep 60 && date && mysql -BNe 'show engine innodb status\G' | grep sequence && ls -l /var/lib/mysql/ib_logfile*
    -rw-rw—- 1 mysql mysql 536870912 May 21 22:23 /var/lib/mysql/ib_logfile0
    -rw-rw—- 1 mysql mysql 536870912 May 21 22:23 /var/lib/mysql/ib_logfile1
    Tue May 21 22:23:15 CDT 2013
    Log sequence number 17750417191
    Tue May 21 22:24:15 CDT 2013
    Log sequence number 17750981011
    -rw-rw—- 1 mysql mysql 536870912 May 21 22:24 /var/lib/mysql/ib_logfile0
    -rw-rw—- 1 mysql mysql 536870912 May 21 22:24 /var/lib/mysql/ib_logfile1

    Of course it is worth reiterating the point made by AFTAB: crash recovery is far cheaper in modern versions of InnoDB:


    And from the manual:


    “If you kept the size of your redo log files artificially low because recovery took a long time, you can consider increasing the file size.”

  • Scratch my previous comment; this is seemingly not repeatable today (despite the fact that I ran this test several times over the course of ten minutes yesterday; maybe there is some cleanup thread that can do this under certain conditions):

    # ls -l /var/lib/mysql/ib_logfile* && date && mysql -BNe 'show engine innodb status\G' | grep sequence && sleep 60 && date && mysql -BNe 'show engine innodb status\G' | grep sequence && ls -l /var/lib/mysql/ib_logfile*
    -rw-rw—- 1 mysql mysql 536870912 May 22 13:33 /var/lib/mysql/ib_logfile0
    -rw-rw—- 1 mysql mysql 536870912 May 22 12:55 /var/lib/mysql/ib_logfile1
    Wed May 22 13:34:00 CDT 2013
    Log sequence number 18342929533
    Wed May 22 13:35:00 CDT 2013
    Log sequence number 18343465682
    -rw-rw—- 1 mysql mysql 536870912 May 22 13:35 /var/lib/mysql/ib_logfile0
    -rw-rw—- 1 mysql mysql 536870912 May 22 12:55 /var/lib/mysql/ib_logfile1

  • According to this link, https://mariadb.atlassian.net/browse/MDEV-4662 Jesper is correct and I am wrong: when InnoDB warns about not enough space in the logs, it’s overwritten its last checkpoint. Thanks to Jeremy Cole for finding out the truth.

  • Hi, when examining our server (a bloated Drupal installation with a lot of data), I got over 400MB written per minute (yes, not an error, this is what Drupal does when too bloated, with a high number of visits…)

    My innodb log size is 256MB right now (well, 2 files of 256M)… Does it make any sense to try a 1GB size, or more? I know recovery time will be much greater…

  • mysql> pager grep sequence;
    PAGER set to 'grep sequence'
    mysql> show engine innodb status\G select sleep(60); show engine innodb status\G
    Log sequence number 49 2161386250

    You also state that you can get the same information with “show global status”
    Innodb_os_log_written | 3067036160

    That’s not the same information; in fact it’s an entirely different number.
    I wouldn’t trust this blog, especially since there is no mention of what problems will be caused by changing the logfile size.

  • @Jack Tors you’re supposed to look at the DIFFERENCE between these numbers after 60 seconds, not the current literal values.

  • How long can the db survive with “InnoDB: ERROR: the age of the last checkpoint is…which exceeds the log group capacity…”?
    Is this something that should be fixed immediately, or is it just a performance problem?

  • Do you think that 5 MB or 64 MB is a realistic value for a real-world workload? These are my numbers:
    select (283659589588754 – 283659393725516)/1024/1024 as MB_per_min;
    | MB_per_min |
    | 186.78973961 |

    This is about 10 GB for one hour… 🙁 I am reading this because I am hitting this problem at backup: https://www.percona.com/forums/questions-discussions/percona-xtrabackup/14029-backup-fails-with-log-block-numbers-mismatch

  • Looks like the formatting in the example commands went wrong: in “show engine innodb statusG select sleep(60); show engine innodb statusG”, add a backslash before each “G”.
