Rotating MySQL Slow Logs Safely

This blog post is part two of two. Like part one, published Wednesday, this is a cross-post from Groupon’s engineering blog. Thanks again to Kyle Oppenheim at Groupon. One more reminder that I’ll be at the Percona Live MySQL Conference and Expo next week in Santa Clara, California, so look for me there. You can check out the sessions I’ll be leading here.

In my last post, I described a solution for keeping the caches of a MySQL standby server hot using MySQL slow logs with long_query_time set to 0. Here are some lessons we learned when logging a high volume of queries to the slow log.

Do not use copytruncate

Logrotate offers two techniques for log rotation (your log rotation scheme likely offers similar options with a different name):

  1. copytruncate – Copies the file to a new name, and then truncates the original file.
  2. no copytruncate – Uses the rename() system call to move the file to a new name, and then expects the daemon to be signaled to reopen its log file.

MySQL protects slow log writes with a mutex. Truncation can block MySQL because the OS serializes access to the inode during the truncate operation, and while a write is stalled that mutex is held, stalling other sessions as well. This problem is particularly evident on the ext3 file system (it is far less pronounced on xfs).

Use FLUSH LOGS instead of sending SIGHUP

When copytruncate is disabled, MySQL must be told to reopen the slow log file. There are two options for signaling:

  1. Send a HUP signal to the mysqld process.
  2. Run FLUSH LOGS; from the mysql console, or use the mysqladmin flush-logs utility.

These options should be equivalent, but MySQL bug 65481 explains that the HUP signal also flushes tables in addition to logs. Flushing tables can impact running queries.

Disable MySQL slow logs during rotation

Flushing logs takes time. Meanwhile, queries are still being executed. To prevent MySQL from filling the slow log buffer, we disable the MySQL slow logs temporarily during log rotation.
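As a sketch, the pause/flush/resume sequence (using the same values Kyle Oppenheim describes in the comments below; 2000 seconds is simply an "effectively off" threshold, not a magic number) might look like:

```sql
-- Save the current threshold, then raise it so effectively nothing qualifies as "slow"
SET @lqt_save = @@global.long_query_time;
SET GLOBAL long_query_time = 2000;
-- Give in-flight queries a moment to finish writing to the old file
SELECT SLEEP(2);
-- Reopen log files (the renamed slow log file is released here)
FLUSH LOGS;
-- Restore the original threshold
SET GLOBAL long_query_time = @lqt_save;
```

Note that all five statements must run in the same session for the @lqt_save user variable to survive.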

Putting it all together

Here is a logrotate configuration file for a slow log that illustrates these best practices:
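A minimal sketch of such a configuration, reconstructed from the postrotate steps described in the comments below (the log path, rotation schedule, and use of /root/.my.cnf for credentials are assumptions, not the original values):

```
/var/log/mysql/mysql-slow.log {
    daily
    rotate 7
    missingok
    notifempty
    nocopytruncate
    sharedscripts
    postrotate
        # Pause slow logging, wait for buffered writes, reopen files, resume
        /usr/bin/mysql --defaults-extra-file=/root/.my.cnf -e "
            SET @lqt_save = @@global.long_query_time;
            SET GLOBAL long_query_time = 2000;
            SELECT SLEEP(2);
            FLUSH LOGS;
            SET GLOBAL long_query_time = @lqt_save;"
    endscript
}
```

The nocopytruncate and postrotate/FLUSH LOGS combination implements the rename-then-reopen approach recommended above.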


Comments (21)

  • martin.arrieta


    One important thing to check is whether you are using persistent connections.

    The global variable long_query_time is read when a connection is created, so if you are using a connection pool or persistent connections this will not be useful, because it will not affect connections that were already open.

    There is a Percona Server feature to solve this issue.

    You can set “slow_query_log_use_global_control” in 5.5 (or “use_global_log_slow_control” in 5.1) to 1, and the server will use the global long_query_time instead of the session one.

    PS 5.1
    PS 5.5



    April 18, 2013 at 10:55 am
  • Andrew

    On MySQL 5.5+ I would think it’s preferable to use FLUSH /*!50503 SLOW */ LOGS rather than a general FLUSH LOGS, which will touch a lot of other (likely more expensive) log files as well.

    April 18, 2013 at 11:09 am
  • Peter Boros


    Martin, Andrew, you are right on these. You can also use SET GLOBAL slow_query_log = on|off to turn slow log on and off, like Bill does in his tool here.

    April 18, 2013 at 11:24 am
  • ruochen

    Hi, Peter Boros,

    Thanks for the good idea!
    I have security misgivings about using mysql -e, which needs a user and password to be set up on the box.

    April 19, 2013 at 3:11 am
  • astrostl

    “To prevent MySQL from filling the slow log buffer, we disable the MySQL slow logs temporarily during log rotation.”

    Wouldn’t that be in the prerotate section? I don’t see one in the stanza provided.

    April 19, 2013 at 12:27 pm
  • Kyle Oppenheim

    @astrostl, the slow logs are disabled in the postrotate stanza. The order of operations is:

    1. logrotate renames the log file. MySQL continues to write to the renamed file because it has not yet reopened the file.
    2. logrotate runs the postrotate stanza
    2a. Slow logging is paused (set global long_query_time=2000;)
    2b. There is a small wait for buffered logs to be flushed (select sleep(2);)
    2c. File handles are reopened (FLUSH LOGS;)
    2d. Slow logging resumed (set global long_query_time=@lqt_save;)

    April 19, 2013 at 1:59 pm
  • astrostl

    Roger that. It was “during log rotation” that threw me. Since there are only pre and post methods, I would consider “during” to imply started in pre, and stopped in post.

    April 19, 2013 at 2:50 pm
  • braa

    Is this method usable for a setup with replication? What will happen with the binlogs after FLUSH LOGS?

    August 14, 2013 at 6:06 am
  • Peter Boros

    Hi braa,

    The current binary log will be closed, and a new one will be opened on FLUSH LOGS, regardless of max_binlog_size. This won’t cause you any replication trouble. If you are on MySQL 5.5 or above, you can use FLUSH SLOW LOGS to rotate only the slow logs.

    August 14, 2013 at 8:29 am
  • braa

    Thanks for your response, Peter! It’s just my case!
    I’ll try to use FLUSH SLOW LOGS.

    August 14, 2013 at 8:36 am
  • braa

    Peter, I’ve got the task of benchmarking my MySQL server with some queries, and I haven’t got a clue where to begin!
    Please give me a direction to dig.

    August 22, 2013 at 4:43 am
  • monty

    The sample logrotate configuration file uses sharedscripts even though it is used to rotate a single file.

    Are similar considerations needed to safely rotate the MySQL general and error logs? Can they be merged into a single logrotate configuration file with the slow log?

    December 16, 2013 at 5:27 pm
  • Kyle Oppenheim

    @monty, it appears that I cargo-culted the sharedscripts option from our standard config. You are correct that it makes little difference in this case.

    You raised a good question regarding the general and error logs. At Groupon we use the same recipe for rotating error logs. I suspect that using copytruncate without a postrotate command would likely be fine unless you have large error logs. (Our error logs are on the order of 100 bytes per hour.) Honestly, I never tried the alternative. 🙂

    December 16, 2013 at 10:03 pm
  • Valerie Parham-Thompson

    I agree with ruochen about security misgivings. I’m looking for, or working on, a way to rotate slow logs without requiring a super user. Wondering if there’s a way to do this within MySQL as an event.

    January 13, 2014 at 10:16 am
  • Kyle Oppenheim

    @Valerie, you could create a new MySQL user for log rotation. That user would need the RELOAD privilege. You won’t be able to use the part of the recipe that changes the long query time, as SET GLOBAL requires the SUPER privilege. I haven’t tried this, but I imagine you could put the SET GLOBAL calls in a stored procedure that executes as a user that has the SUPER privilege and call it from an unprivileged user.

    January 13, 2014 at 11:48 am
  • Valerie Parham-Thompson

    Thanks, Kyle. I’ll give that a try. For now, I’ve got it working via a MySQL event, but it doesn’t do the archive/gzip like I want.

    January 19, 2014 at 9:49 am
  • linzuxiong

    Please format the content.

    June 27, 2015 at 3:10 am
  • veeresh

    Can you please tell how we need to do the same in Windows for this situation?

    December 22, 2015 at 6:02 am
  • Fabian

    I encountered an issue on a server which has a root password. Logrotate apparently does not read .my.cnf in the home folder. Thus it cannot log in to the MySQL server and execute the postrotate statement. The result is that the file is never released.

    I worked around that by creating a user with SUPER and RELOAD privilege. Since the password has to be passed on the command line, it is advisable to make the logrotate config exclusively readable for root.

    October 13, 2016 at 9:22 am
    • Dale Anderson

      @Fabian – This can also be addressed by storing the credentials in /root/.my.cnf (600 permissions), and then referring to the file in the mysql command:
      /usr/local/bin/mysql --defaults-extra-file=/root/.my.cnf -e "….

      February 11, 2017 at 12:22 pm
  • MySQL Rockstar

    Where is copytruncate in the script provided?

    July 14, 2017 at 2:29 am
