How to Extract All Running Queries (Including the Last Executed Statement) from a Core File?

This post builds on the How to obtain the “LES” (Last Executed Statement) from an Optimized Core Dump? post written about a year ago.

A day after that post was released, Shane Bester wrote an improved version, How to obtain all executing queries from a core file on his blog. Reading that post is key to understanding what follows.

I am faced with some complex bugs that would benefit greatly from SQL testcases. Extracting the last executed statement (and, ideally, all queries running at the time of the crash/assert) is crucial for generating good testcases. For example, you may have a full SQL trace from RQG or elsewhere, but what if the actual crashing statement never made it to the logs? This happens often, especially if you use an older version of RQG or use --log-output=FILE logging.

The idea here is to extract all running queries (failing/crashing mysqld or not) from a core file, and then add these SQL statements to the end of a testcase to ensure the crash/assert/Valgrind issue is reproduced.

Now, I found that not all mysqld core files have the global threads variable available, which rules out using Shane’s script as a universal automated solution. So it looks like we need another solution, one that gets all queries out and works in all cases.

Using Shane’s idea of macros seemed like the way forward (i.e. loop through the threads, and through the frames of each thread), but the next problem was that gdb stops as soon as an error is found. This is a known shortcoming of gdb’s macro language: at frame x of thread y, gdb will stop execution because “there is no such thread/frame”. And one does not know upfront which frame number the do_command frame will have, or even whether there is one. Either condition raises an error and causes gdb macros to abort immediately. Not usable, thus.

There seemed to be only one other solution mentioned online: using Python inside gdb, as Python has a “try:” statement. However, this is not really a good general solution either, as it requires gdb to be compiled --with-python, and so it requires per-server customization of gdb.

Looking further, I found a workaround.

One can specify the function name like this:
print do_command::thd->query_string.string.str

This is a great way to avoid having to cycle through all frames when checking statements in an automated script. It also allows one to run this print statement for all threads by sending the following input to gdb (and you can do so using the < shell redirector):

thread 1
print do_command::thd->query_string.string.str
thread 2
print do_command::thd->query_string.string.str
thread 3
print do_command::thd->query_string.string.str
...

The idea here is that the redirector into gdb keeps executing statements one after the other, even if an error occurs. The result: a nice list of all running queries at the time of the assert/crash.
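Rather than typing these thread/print pairs by hand, the input file can be generated with a small shell loop. This is only a sketch: the upper bound of 100 threads is an arbitrary assumption, and thread numbers that do not exist in the core are harmless, since gdb merely prints an error and the redirected input continues.

```shell
# Generate a gdb command file with a thread/print pair per thread.
# 100 is an arbitrary upper bound; non-existent thread numbers only
# cause gdb to print an error, after which redirected input continues.
for i in $(seq 1 100); do
  printf 'thread %d\n' "$i"
  printf 'print do_command::thd->query_string.string.str\n'
done > extract_query.gdb
```

The generated file can then be fed to gdb with the < redirector exactly as described above.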

For a full version of the script, branch lp:percona-qa (bzr branch lp:percona-qa) and check out the extract_query.gdb file. For an example of how to post-process the results, you could use something similar to these shell statements:

gdb ${BIN} ${CORE} >/dev/null 2>&1 < extract_query.gdb
grep '^\$' /tmp/gdb_PARSE.txt | sed 's/^[$0-9a-fx =]*"//;s/["\t ]*$//;s/$/;/' | grep -v '^$' > ./queries.txt

In this example, ${BIN} points to the mysqld binary, ${CORE} points to the core file, and /tmp/gdb_PARSE.txt is generated by the extract_query.gdb gdb script. The resulting queries.txt file will contain all queries, each nicely terminated by the usual “;” SQL terminator used in MySQL.
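To illustrate what the post-processing does, here is the grep/sed pipeline (with the grep pattern written as '^\$' to match a literal dollar sign) applied to a mocked-up line of gdb output. The $1 value, address, and query below are invented for illustration:

```shell
# gdb's print output looks like: $1 = 0xADDRESS "QUERY TEXT"
# The grep keeps only such lines; the sed strips the prefix up to the
# opening quote, strips the closing quote, and appends a ';' terminator.
echo '$1 = 0x7f3c5c004ef0 "SELECT * FROM t1 WHERE id=5"' \
  | grep '^\$' \
  | sed 's/^[$0-9a-fx =]*"//;s/["\t ]*$//;s/$/;/'
# → SELECT * FROM t1 WHERE id=5;
```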


Comments (7)

  • kris

    1. I need query for 5 fields, out of 10 fields

    January 30, 2014 at 11:16 pm
  • Roel Van de Paar

    @kris, as per our discussion via email, your question was not related to this article. Thanks

    February 2, 2014 at 3:45 pm
  • Roel Van de Paar

    The variable name has changed in more recent versions of MySQL. Now you can use:

    print do_command::thd->m_query_string.str

    I will leave both (the old and the new) in my script, as this makes it universally usable.

    January 29, 2015 at 7:47 pm
  • Michael Mior

    Is this script actually posted anywhere? It seems like it would be quite useful, but I couldn’t find a link.

    April 14, 2015 at 6:37 pm
  • Roel Van de Paar


    Yes! Simply:

    $ sudo yum install git # (or apt-get)
    $ git clone

    Check out percona-qa/extract_query.gdb (a gdb script). For some implementation examples, search for the filename above, and let me know if you have any other questions!

    April 14, 2015 at 8:15 pm
  • Nickolay Ihalainen

    Missing backslash in the first grep; the correct command uses grep '^\$' so that lines starting with a literal $ are matched.

    If you have empty output, check /tmp/gdb_PARSE.txt:
    it could say that the thd variable is optimized out.
    In this case you should find where the thd value is defined, for example in Percona Server 5.6.38-83.0 running on CentOS 7.4:

    How to easily test the extract_query.gdb script:
    docker pull percona:latest; docker run --privileged --rm -it --name p57 -e MYSQL_ALLOW_EMPTY_PASSWORD=1 percona:latest # replace latest with a specific version if needed
    docker exec -it p57 bash # log in to the just-created docker container

    Good luck, and have your database instrumented before you get a crash!

    January 13, 2018 at 12:12 am
    • Roel Van de Paar

      @Nickolay, great update! Thank you

      January 13, 2018 at 12:19 am
