
Log rotate and the (deleted) MySQL log file mystery

November 12, 2014 | Posted In: Insight for DBAs, MariaDB, MySQL, Percona Monitoring Plugins, Percona Server for MySQL


Did your logging stop working after you set up logrotate? Then this post might be for you.

Archive your log files!

Some time ago, Peter Boros wrote about Rotating MySQL Slow Logs safely, explaining the steps of a “best practice” log rotate/archive. This post will add more info about the topic.

When running logrotate for MySQL (after properly setting up the /etc/logrotate.d/mysql conf file) from anacron, there's a situation you might face if the user and password used to execute the "flush logs" command are stored in, for example, the /root/.my.cnf file.
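The credentials file mentioned above typically looks like the following (the values are illustrative; the mysql client locates this file via $HOME):

```
# /root/.my.cnf -- read by the mysql client via $HOME
[client]
user     = root
password = your_password_here
```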

The situation:

You might find out that you have a new MySQL log file ready to receive data, but nothing is being written to it.

Why did this happen?

The logrotate script is executed, but the postrotate fails to successfully flush logs. If this happened to you, you might think, “I’ve lost my slow log file!” The good news: You didn’t lose it. What just happened is that your MySQL log file is no longer visible from the filesystem perspective, but the file still exists and is still receiving data.

So where is it? How can I find it again?

Through the file descriptor. If your mysqld is still running, you can find your log under the /proc/[pid of mysqld process]/fd path:
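The commands targeting mysqld below are illustrative (they assume a running mysqld); the rest of the block is a self-contained simulation of the same "deleted but still open" mechanism using the current shell's own pid:

```shell
# Illustrative: for a live mysqld you would inspect its fd table, e.g.
#   ls -l /proc/$(pidof mysqld)/fd | grep deleted
# Self-contained simulation using this shell's pid ($$):
tmp=$(mktemp)
echo "slow query entries" > "$tmp"
exec 3< "$tmp"        # keep the file open on fd 3, just as mysqld keeps its log open
rm "$tmp"             # the name is gone from the filesystem, but the inode survives
ls -l /proc/$$/fd/3   # the symlink target now ends in "(deleted)"
```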

And how big is it? lsof can give us the answer with the file descriptor number, which for this example is 501:
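The lsof invocation is sketched in a comment (it assumes mysqld's pid and the fd number 501 from this post); the runnable part shows an equivalent size check through the fd symlink, simulated with the current shell:

```shell
# Illustrative lsof call for mysqld (501 being the fd number from this post):
#   lsof -p $(pidof mysqld) | grep -w 501
# The SIZE/OFF column reports the size in bytes. Without lsof, stat -L on the
# fd symlink gives the same answer; simulated here with this shell's own fd:
tmp=$(mktemp)
printf 'some log data\n' > "$tmp"   # 14 bytes
exec 4< "$tmp"
rm "$tmp"
stat -Lc %s /proc/$$/fd/4           # prints the deleted file's size in bytes: 14
```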

The output of lsof tells us that the file size is 976746174 bytes, which is approximately 931MB.

Can I recover the file contents?

Yes, you can. You just need the "cat" command and the file descriptor number, which in this case is 501:
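The mysqld recovery command is sketched in a comment (pid and fd number are assumptions for illustration); the runnable part demonstrates the same recovery against the current shell's own deleted file:

```shell
# Illustrative: for mysqld the recovery would look like (501 being the fd number)
#   cat /proc/$(pidof mysqld)/fd/501 > /path/to/recovered-slow.log
# Self-contained demonstration with this shell's own fd:
tmp=$(mktemp)
echo "recover me" > "$tmp"
exec 5< "$tmp"
rm "$tmp"
cat /proc/$$/fd/5   # the deleted file's contents are still readable
```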

Remember that once you execute a successful "flush logs" command in the MySQL client, the old contents will disappear, so recover them prior to any further log rotation.

How did this happen?

Let’s examine the logrotate script:
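A typical /etc/logrotate.d/mysql along these lines (a sketch only; the exact paths and options vary by distro and setup) would be:

```
# /etc/logrotate.d/mysql -- illustrative sketch
/var/lib/mysql/mysql-slow.log {
    weekly
    rotate 4
    missingok
    notifempty
    compress
    postrotate
        /usr/bin/mysql -e 'flush logs'
    endscript
}
```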

Everything seems okay, except for one thing: when executed from cron, the HOME environment variable will be blank. Meaning: /usr/bin/mysql won't be able to find the file with the access credentials (user and password) and thus cannot execute the "flush logs" command.

What is the solution?

Set the HOME variable on the postrotate line: env HOME=/root/
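With that change, the postrotate section (sketched under the same illustrative paths as above) would read:

```
postrotate
    env HOME=/root/ /usr/bin/mysql -e 'flush logs'
endscript
```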

Can I get an alert if this happens to me?

Yes! With the Percona Nagios Plugin pmp-check-mysql-deleted-files. The Percona Nagios Plugin, which like all Percona software is free, looks at the files that the mysqld process has open and warns if any that shouldn't be have been deleted. For example: a slow MySQL log file that has been deleted by a poorly written logrotate script. (Download the Percona Nagios Plugin here)

In conclusion: Don’t fall into a situation where you suddenly realize, to your horror, that you’ve lost your slow MySQL log file. And if you do, relax: Recover your MySQL log file contents and add the proper monitoring alert. Problem solved!

Daniel Guzmán Burgos

Daniel studied Electronic Engineering, but quickly became interested in all things data. He has worked as a DBA since 2007 for several companies, including one of the major social gaming companies in Latin America at the time. Daniel lives in Cali, Colombia, and in his free time he loves to spend time with his family and go out to eat at any kind of restaurant. Working for Percona since 2014, he is part of the Consulting team.

9 Comments

  • Instead of mysql -e ‘………’ (which can also fail if credentials are elsewhere, or for a few other reasons), sending a COM_DEBUG (essentially a mysqladmin debug) with SIGHUP (like in other web servers etc.) should also do. A similar thing is done in init scripts and/or packaging post-installs, where credentials are not required for non-SQL operations (like logrotate) and cannot be expected to be present at known locations.

  • Sending SIGHUP involves more things than just flushing logs, as explained here http://bugs.mysql.com/bug.php?id=65481, so if you just want to flush logs, the ideal thing is to use the CLI with the right credentials.

  • A simpler solution may just be to use the copytruncate option in logrotate. I use this in a lot of cases where the apps don’t have built-in log rotation. The slow query log and mysql logs don’t get appended to all that much in a lot of cases (my logs are still small and go back a year). Sure, there is some tiny risk of losing a bit of data during the copytruncate, but in the past 12 years of me using this I’ve never had a complaint that someone lost data in any log file.

    Not everyone’s db is the same of course, but for the past year on my main prod database the slow query log is 1.6G and error.log is 22k (all of this data feeds into splunk as well; pretty rare to reference the log files directly).

  • mysqladmin with the wrong credentials will still return true, which is why it’s normally not required to set HOME for it.

    If your ~/.my.cnf contains any setting of non-default location of socket or port then mysqladmin will also need HOME=/root/ set.

  • Burn:

    Logrotate is not a Percona tool, nor does it come with Percona rpms. Logrotate is a tool created by RedHat that comes with several Linux distros.

    However, Percona Server does come with a feature to manage Slow Query Log Rotation and Expiration in case you don’t want to/can’t use Logrotate: https://www.percona.com/doc/percona-server/5.6/flexibility/slowlog_rotation.html

  • For the user/password, use mysqladmin --defaults-file=/etc/mysql/debian.cnf on Ubuntu 14.04, so there is no need to set HOME.

  • I had the same issue on mysql 5.0 when I executed logrotate through crontab (it worked perfectly on mysql 5.5 though) and got my script fixed by passing HOME in cron itself.
    But even after executing flush-logs successfully, mysql is not able to write into the --err-log file, and lsof shows the (deleted) files are still there.
    Is there a way I can get rid of these (deleted) error files (there are 2 FDs for the deleted error file) without stopping mysql?
    Thanks
