Getting a MySQL Core File on Linux

A core file can be quite helpful for troubleshooting MySQL crashes, yet it is not always easy to get, especially on recent Linux distributions, which have security features that prevent core files from being dumped by setuid processes (and MySQL Server most commonly starts as "root" and then switches to the "mysql" user). Before you enable core files you should consider two things: disk space and restart time. The core file will contain all of the MySQL Server's memory, including the buffer pool, which can be tens or even hundreds of gigabytes, and writing that much data to disk can take a very long time. If you append the pid to core file names, which you probably should, since different samples often help developers find the problem faster, you may end up using many times MySQL's memory footprint in disk space.
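If you want a rough upper bound on core size before enabling this, the process's virtual memory size is a reasonable proxy. A small sketch (core_size_kb is just an illustrative helper name, not a standard tool):

```shell
#!/bin/sh
# Rough upper bound on core file size: the process's virtual memory size
# in kilobytes. core_size_kb is an illustrative name, not a standard tool.
core_size_kb() {
  ps -o vsz= -p "$1"
}

# For mysqld you would use: core_size_kb "$(pidof mysqld)"
core_size_kb "$$"
```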

You have to make a couple of changes to enable core files. First, add the "core-file" option to my.cnf, which instructs MySQL to dump core on a crash. This alone is unlikely to be enough, though.
I found you need to make several other changes:
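For reference, the my.cnf part is a single option in the server section:

```ini
[mysqld]
# Dump a core file if mysqld crashes
core-file
```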

echo 2 > /proc/sys/fs/suid_dumpable
mkdir /tmp/corefiles
chmod 777 /tmp/corefiles
echo "/tmp/corefiles/core" > /proc/sys/kernel/core_pattern
echo "1" > /proc/sys/kernel/core_uses_pid

First we enable dumping cores from suid applications, then we create a separate directory for core files. That is a good idea anyway, since you can put it on a different partition and avoid running out of space, but the real reason is that I could not get a core dumped to /var/lib/mysql (the datadir) on my system (Ubuntu). You might be lucky and it might work on yours. I also enable multiple "versions" of core files, distinguished by pid, which I find quite helpful.
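With core_uses_pid set to 1, the kernel appends the crashing process's pid to the name, but only when core_pattern does not already contain a %p specifier. A small sketch of the resulting naming (core_name is purely illustrative, not anything the kernel or MySQL ships; it does not perform %p substitution itself):

```shell
#!/bin/sh
# Illustrative only: predict the file name the kernel will use given
# core_pattern, core_uses_pid and a pid. (%p substitution is not done here.)
core_name() {
  pattern=$1; uses_pid=$2; pid=$3
  # core_uses_pid appends ".pid" only when the pattern has no %p specifier
  if [ "$uses_pid" = "1" ] && [ "${pattern#*%p}" = "$pattern" ]; then
    printf '%s.%s\n' "$pattern" "$pid"
  else
    printf '%s\n' "$pattern"
  fi
}

core_name /tmp/corefiles/core 1 12345   # prints /tmp/corefiles/core.12345
```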

After you have configured core dumping, I suggest you test it, for example on a test box running the same operating system. This is important: core file handling on Linux has changed many times, and what worked on one system might not be enough on another.

To check that it works, you can run kill -SIGSEGV `pidof mysqld`, which triggers the same code path as a real crash from accessing the wrong memory area. You will even see a stack trace, probably something like this from the main thread:

stack_bottom = (nil) thread_stack 0x40000
The manual page at contains
information that should help you find out what is causing the crash.
Writing a core file
Segmentation fault (core dumped)

Note the end of this message: you should see "Segmentation fault (core dumped)" after "Writing a core file". If the core file was not written, you will see just "Segmentation fault" with no "core dumped" attached.
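If you script this check, one simple approach is to look for the newest core file in the dump directory after sending the signal. A sketch (the directory name follows the setup above; latest_core is a hypothetical helper, not a standard command):

```shell
#!/bin/sh
# Hypothetical helper: print the newest core file in the dump directory,
# or print nothing and return non-zero if no core was written.
latest_core() {
  dir=${1:-/tmp/corefiles}
  f=$(ls -t "$dir"/core* 2>/dev/null | head -n 1)
  [ -n "$f" ] && printf '%s\n' "$f"
}

# Usage sketch after: kill -SIGSEGV `pidof mysqld`
#   latest_core || echo "no core file was written"
```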

If you're looking for more core file options and further explanation, check out this Fromdual page; it has a lot of good information.

Now that I have explained how to get core files from MySQL, I should say they are often impractical: waiting, sometimes over half an hour, for the dump to complete, and then working with a huge file, is not very convenient. The alternative in many cases is to attach with "gdb -p `pidof mysqld`", type "continue", and let MySQL run. If it crashes you will have a live process ready to work with in GDB, which is even more helpful than a core file. The disadvantage, of course, is that you cannot restart the server while debugging it.

P.S. Also, do not forget to install the "debuginfo" package if you expect to do any MySQL profiling or crash analysis. It does not slow MySQL down, yet it is very helpful when working with gdb, oprofile, etc.


Comment (1)

  • Simon Mudd

    I was trying to do the same, adjusting it to my environment (and also puppetising it), and could not get things to work as expected. It seems there is a _requirement_ to make the coredump directory chmod 777: in my case the mysql user had full access to the directory I was trying to dump to, but the core dump did not appear. Changing the directory permissions as you stated made all the difference. It's likely that something like this will catch other people too.

    I'm not sure if this is documented, or entirely why this is the case, but it does seem rather counterintuitive and took a while to figure out. So thanks for the orientation…

    Note: I implemented a couple of the proc changes via sysctl (/etc/sysctl.conf), which has the advantage of persisting across reboots, unlike changing the parameters on the running kernel, so it may be more convenient as part of a permanent configuration.
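    In that spirit, the persistent equivalents of the /proc/sys writes from the post would look something like this in /etc/sysctl.conf (applied with sysctl -p; the core_pattern value matches the directory chosen above):

```ini
# /etc/sysctl.conf -- persistent equivalents of the /proc/sys writes above
fs.suid_dumpable = 2
kernel.core_pattern = /tmp/corefiles/core
kernel.core_uses_pid = 1
```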

    April 19, 2012 at 8:17 am

