NAME

pt-stalk - Gather forensic data about MySQL when a problem occurs.




SYNOPSIS

Usage: pt-stalk [OPTIONS]

pt-stalk watches for a trigger condition to become true, and then collects data to help diagnose problems. It is designed to run as a daemon with root privileges, so that you can diagnose intermittent problems that you cannot observe directly. You can also use it to execute a custom command, or to gather the data on demand without waiting for the trigger to happen.


RISKS

The following section is included to inform users about the potential risks, whether known or unknown, of using this tool. The two main categories of risks are those created by the nature of the tool (e.g. read-only tools vs. read-write tools) and those created by bugs.

pt-stalk is a read-write tool; it collects data from the system and writes it into a series of files. It should be very low-risk. Some of the options can cause intrusive data collection to be performed, however, so if you enable any non-default options, you should read their documentation carefully.

At the time of this release, we know of no bugs that could cause serious harm to users.

The authoritative source for updated information is always the online issue tracking system, where issues that affect this tool will be marked as such. You can see a list of such issues there.

See also “BUGS” for more information on filing bugs and getting help.


DESCRIPTION

Sometimes a problem happens infrequently and for a short time, giving you no chance to see the system when it happens. How do you solve intermittent MySQL problems when you can’t observe them? That’s why pt-stalk exists. In addition to using it when there’s a known problem on your servers, it is a good idea to run pt-stalk all the time, even when you think nothing is wrong. You will appreciate the data it gathers when a problem occurs, because problems such as MySQL lockups or spikes of activity typically leave no evidence to use in root cause analysis.

This tool does two things: it watches a server (typically MySQL) for a trigger to occur, and it gathers diagnostic data. To use it effectively, you need to define a good trigger condition. A good trigger is sensitive enough to fire reliably when a problem occurs, so that you don’t miss a chance to solve problems. On the other hand, a good trigger isn’t prone to false positives, so you don’t gather information when the server is functioning normally.

The most reliable triggers for MySQL tend to be the number of connections to the server, and the number of queries running concurrently. These are available in the SHOW GLOBAL STATUS command as Threads_connected and Threads_running. Sometimes Threads_connected is not a reliable indicator of trouble, but Threads_running usually is. Your job, as the tool’s user, is to define an appropriate trigger condition for the tool. Choose carefully, because the quality of your results will depend on the trigger you choose.
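
To make this concrete, here is a small sketch, not pt-stalk’s actual code, of what a status-based trigger boils down to: read one counter from SHOW GLOBAL STATUS-style output and compare it to a threshold. The sample output and the threshold of 25 are made up for illustration.

```shell
#!/bin/sh
# Made-up sample of "mysql -e 'SHOW GLOBAL STATUS'" output.
sample_status="Threads_connected 12
Threads_running 28"

# Print the value of one status variable from that output.
get_status_var() {
    printf '%s\n' "$sample_status" | awk -v var="$1" '$1 == var { print $2 }'
}

value=$(get_status_var Threads_running)
if [ "$value" -gt 25 ]; then
    echo "trigger fired: Threads_running=$value"
fi
```

pt-stalk performs the equivalent of this check once per interval between checks.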

You can define the trigger with the --function, --variable, and --threshold options, among others. Please read the documentation for --function to learn how to do this.

The pt-stalk tool, by default, simply watches MySQL repeatedly until the trigger becomes true. It then gathers diagnostics for a while, and sleeps afterwards for some time to prevent repeatedly gathering data if the condition remains true. In crude pseudocode, omitting some subtleties:

while true; do
  if --variable from --function is greater than --threshold; then
    if the condition has been true for --cycles consecutive checks; then
      capture diagnostics for --run-time seconds
      exit if --iterations is exceeded
      sleep for --sleep seconds
    fi
  fi
  clean up data that's older than --retention-time
  sleep for --interval seconds
done

The diagnostic data is written to files whose names begin with a timestamp, so you can distinguish samples from each other in case the tool collects data multiple times. The pt-sift tool is designed to help you browse and analyze the resulting samples of data.

Although this sounds simple enough, in practice there are a number of subtleties, such as detecting when the disk is beginning to fill up so that the tool doesn’t cause the server to run out of disk space. This tool handles these types of potential problems, so it’s a good idea to use this tool instead of writing something from scratch and possibly experiencing some of the hazards this tool is designed to prevent.


OPTIONS

You can use standard Percona Toolkit configuration files to set command-line options.

You will probably want to run the tool as a daemon and customize at least the diagnostic threshold. Here’s a sample configuration file for triggering when there are more than 20 queries running at once:
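
A minimal sketch of such a file, assuming the standard Percona Toolkit option-file format (one option per line, with the leading dashes dropped); the values shown are illustrative:

```
# pt-stalk.conf -- sample configuration (illustrative)
daemonize
dest=/var/lib/pt-stalk
variable=Threads_running
threshold=20
```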


If you’re not running the tool as it’s designed (as a root user, daemonized) then you’ll need to set several options, such as --dest, to locations that are writable by non-root users.



--collect

default: yes; negatable: yes

Collect system information. You can negate this option to make the tool watch the system but not actually gather any diagnostic data.

See also --stalk.


--collect-gdb

Collect GDB stacktraces. This is achieved by attaching to MySQL and printing stack traces from all threads. This will freeze the server for some period of time, ranging from a second or so to much longer on very busy systems with a lot of memory and many threads in the server. For this reason, it is disabled by default. However, if you are trying to diagnose a server stall or lockup, freezing the server causes no additional harm, and the stack traces can be vital for diagnosis.

In addition to freezing the server, there is also some risk of the server crashing or performing badly after GDB detaches from it.


--collect-oprofile

Collect oprofile data. This is achieved by starting an oprofile session, letting it run for the collection time, and then stopping and saving the resulting profile data in the system’s default location. Please read your system’s oprofile documentation to learn more about this.


--collect-strace

Collect strace data. This is achieved by attaching strace to the server, which will make it run very slowly until strace detaches. The same cautions apply as those listed in --collect-gdb. You should not enable this option together with --collect-gdb, because GDB and strace can’t attach to the server process simultaneously.


--collect-tcpdump

Collect tcpdump data. This option causes tcpdump to capture all traffic on all interfaces for the port on which MySQL is listening. You can later use pt-query-digest to decode the MySQL protocol and extract a log of query traffic from it.


--config

type: string

Read this comma-separated list of config files. If specified, this must be the first option on the command line.


--cycles

type: int; default: 5

The number of times the trigger condition must be true before collecting data. This helps prevent false positives, and makes the trigger condition less likely to fire when the problem recovers quickly.


--daemonize

Daemonize the tool. This causes the tool to fork into the background and log its output as specified in --log.


--dest

type: string; default: /var/lib/pt-stalk

Where to store the diagnostic data. Each time the tool collects data, it writes to a new set of files, which are named with the current system timestamp.


--disk-bytes-free

type: size; default: 100M

Don’t collect data if the disk has less than this much free space. This prevents the tool from filling up the disk with diagnostic data.

If the --dest directory contains a previously captured sample of data, the tool will measure its size and use that as an estimate of how much data is likely to be gathered this time, too. It will then be even more pessimistic, and will refuse to collect data unless the disk has enough free space to hold the sample and still have the desired amount of free space. For example, if you’d like 100MB of free space and the previous diagnostic sample consumed 100MB, the tool won’t collect any data unless the disk has 200MB free.

Valid size value suffixes are k, M, G, and T.


--disk-pct-free

type: int; default: 5

Don’t collect data if the disk has less than this percent free space. This prevents the tool from filling up the disk with diagnostic data.

This option works similarly to --disk-bytes-free but specifies a percentage margin of safety instead of a bytes margin of safety. The tool honors both options, and will not collect any data unless both margins are satisfied.
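
The combined check can be sketched as follows; this is a simplified illustration of the logic described above, not the tool’s actual code, with hypothetical numbers:

```shell
#!/bin/sh
# Both safety margins must be satisfied before data is collected.
bytes_free_limit=$((100 * 1024 * 1024))   # --disk-bytes-free (100M)
pct_free_limit=5                          # --disk-pct-free (5%)

ok_to_collect() {
    avail_bytes=$1
    total_bytes=$2
    avail_pct=$(( 100 * avail_bytes / total_bytes ))
    [ "$avail_bytes" -ge "$bytes_free_limit" ] &&
        [ "$avail_pct" -ge "$pct_free_limit" ]
}

# 1G free out of 10G total: both margins are satisfied.
if ok_to_collect $((1024 * 1024 * 1024)) $((10 * 1024 * 1024 * 1024)); then
    echo collect
else
    echo skip
fi
```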


--function

type: string; default: status

Specifies what to watch for a diagnostic trigger. The default value watches SHOW GLOBAL STATUS, but you can also watch SHOW PROCESSLIST or supply a plugin file with your own custom code. This function supplies the value of --variable, which is then compared against --threshold to see if the trigger condition is met. Additional options may be required as well; see below. Possible values:

  • status

This value specifies that the source of data for the diagnostic trigger is SHOW GLOBAL STATUS. The value of --variable then defines which status counter is the trigger.

  • processlist

This value specifies that the data for the diagnostic trigger comes from SHOW FULL PROCESSLIST. The trigger value is the count of processes whose --variable column matches the --match option. For example, to trigger when more than 10 processes are in the “statistics” state, use the following options:

--function processlist --variable State \
  --match statistics --threshold 10

In addition, you can specify a file that contains your custom trigger function, written in Unix shell script. This can be a wrapper that executes anything you wish. If the argument to --function is a file, then it takes precedence over built-in functions, so if there is a file in the working directory named “status” or “processlist”, the tool will use that file as a plugin, even though those are otherwise recognized as reserved words for this option.

The plugin file works by providing a function called trg_plugin, and the tool simply sources the file and executes the function. For example, the function might look like the following:

trg_plugin() {
   mysql $EXT_ARGV -e "SHOW ENGINE INNODB STATUS" \
      | grep -c "has waited at"
}

This snippet will count the number of mutex waits inside of InnoDB. It illustrates the general principle: the function must output a number, which is then compared to the threshold as usual. The $EXT_ARGV variable contains the MySQL options mentioned in the “SYNOPSIS” above.

The plugin should not alter the tool’s existing global variables. Prefix any plugin-specific global variables with “PLUGIN_” or make them local.
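
A plugin does not have to query MySQL at all. For instance, this hypothetical plugin (the /proc/stat approach is Linux-specific and is an illustration, not something from the tool’s documentation) fires on processes blocked in uninterruptible sleep, which often indicates I/O stalls:

```shell
#!/bin/sh
# Hypothetical trigger plugin: output the number of processes currently
# blocked on I/O (Linux reports this as "procs_blocked" in /proc/stat).
# pt-stalk compares this number to --threshold as usual.
trg_plugin() {
    awk '/^procs_blocked/ { print $2 }' /proc/stat
}
```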


--help

Print help and exit.


--interval

type: int; default: 1

Interval, in seconds, between checks for the diagnostic trigger.


--iterations

type: int

Exit after collecting diagnostics this many times. By default, the tool watches the server forever; this option is useful when, for example, you want to collect once and then exit.


--log

type: string; default: /var/log/pt-stalk.log

Print all output to this file when daemonized.


--match

type: string

The pattern to use when watching SHOW PROCESSLIST. See the documentation for --function for details.


--notify-by-email

type: string

Send mail to this list of addresses when data is collected.


--pid

type: string; default: /var/run/

Create a PID file when daemonized.


--plugin

type: string

Load a plugin to hook into the tool and extend its functionality. The specified file does not need to be executable, nor does its first line need to be a shebang line. It only needs to define one or more of these Bash functions:


before_stalk

Called before stalking.


before_collect

Called when the stalk condition is triggered, before running a collector process as a backgrounded subshell.


after_collect

Called after running a collector process. The PID of the collector process is passed as the first argument. This hook is called before after_collect_sleep.


after_collect_sleep

Called after sleeping --sleep seconds for the collector process to finish. This hook is called after after_collect.


after_interval_sleep

Called after sleeping --interval seconds after each trigger check.


after_stalk

Called after stalking. Since pt-stalk stalks forever by default, this hook is only called if --iterations is specified.

For example, a very simple plugin that touches a file when a collector process is triggered:

before_collect() {
   touch /tmp/foo
}

Since the plugin is completely sourced (imported) into the tool’s namespace, be careful not to define other functions or global variables that already exist in the tool. You should prefix all plugin-specific functions and global variables with plugin_ or PLUGIN_.

Plugins have access to all command line options but they should not modify them. Each option is a global variable like $OPT_DEST which corresponds to --dest. Therefore, the global variable for each command line option is OPT_ plus the option name in all caps with hyphens replaced by underscores.
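
The mapping is mechanical, as this small helper (an illustration of the rule, not part of the tool) shows:

```shell
#!/bin/sh
# Convert a command line option name to its global variable name:
# strip the leading dashes, replace hyphens with underscores, upcase,
# and prepend OPT_.
opt_to_var() {
    printf 'OPT_%s\n' "$(printf '%s' "$1" | sed -e 's/^--//' -e 's/-/_/g' \
        | tr '[:lower:]' '[:upper:]')"
}

opt_to_var --dest              # OPT_DEST
opt_to_var --disk-bytes-free   # OPT_DISK_BYTES_FREE
```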

Plugins can stop the tool by setting the global variable OKTORUN to 1. In this case, the global variable EXIT_REASON should also be set to indicate why the tool was stopped.


--prefix

type: string

The filename prefix for diagnostic samples. By default, samples have a timestamp prefix based on the current local time, such as 2011_12_06_14_02_02, which is December 6, 2011 at 14:02:02.
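
The default prefix corresponds to this strftime-style date format (a sketch; the tool’s internal code may differ):

```shell
#!/bin/sh
# Build a pt-stalk-style sample prefix from the current local time,
# e.g. 2011_12_06_14_02_02.
prefix=$(date +'%Y_%m_%d_%H_%M_%S')
echo "$prefix"
```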


--retention-time

type: int; default: 30

Number of days to retain collected samples. Any samples that are older will be purged.


--run-time

type: int; default: 30

How long the tool will collect data when it triggers. This should not be longer than --sleep. It is usually not necessary to change this; if the default 30 seconds hasn’t gathered enough diagnostic data, running longer is not likely to do so. In fact, in many cases a shorter collection period is appropriate.

This value is used two other times. After collecting, the collect subprocess will wait another --run-time seconds for its commands to finish. Some commands can take a while if the system is running very slowly (which is likely the case, given that a collection was triggered). Since empty files are deleted, the extra wait gives commands time to finish and write their data. The value is potentially used again just before the tool exits, to wait once more for any collect subprocesses to finish. In most cases this won’t happen because of the aforementioned extra wait. If it does happen, the tool will log “Waiting up to N seconds for collectors to finish...” where N is three times --run-time. In both cases, after waiting, the tool kills all of its subprocesses.


--sleep

type: int; default: 300

How long to sleep after collecting data. This prevents the tool from triggering continuously, which might be a problem if the collection process is intrusive. It also prevents filling up the disk or gathering too much data to analyze reasonably.


--stalk

default: yes; negatable: yes

Watch the server and wait for the trigger to occur. You can negate this option to make the tool gather diagnostic data once and exit immediately. This is useful if a problem is already happening, but pt-stalk is not running and you only want to collect diagnostic data.

If this option is negated, --daemonize, --log, --pid, and other stalking-related options have no effect; the tool simply collects diagnostic data and exits. Safeguard options, like --disk-bytes-free and --disk-pct-free, are still respected.

See also --collect.


--threshold

type: int; default: 25

The threshold at which the diagnostic trigger should fire. See --function for details.


--variable

type: string; default: Threads_running

The variable to compare against the threshold. See --function for details.


--verbose

type: int; default: 2

Print more or less information while running. Since the tool is designed to be a long-running daemon, the default verbosity level only prints the most important information. If you run the tool interactively, you may want to use a higher verbosity level.

=====  =====================================
Level  Output
=====  =====================================
0      Errors
1      Warnings
2      Matching triggers and collection info
3      Non-matching triggers
=====  =====================================

--version

Print the tool’s version and exit.


ENVIRONMENT

This tool does not require any environment variables for configuration, although its behavior can be influenced through several environment variables. Keep in mind that these are expert settings, and should not be needed in most cases.

Specifically, the variables that can be set are:













For example, during collection iostat is called with the -dx argument, but if you have an NFS partition, you might also need the -n flag. Instead of editing the source, you can call pt-stalk as

CMD_IOSTAT="iostat -n" pt-stalk ...

which will do exactly what you need. Combined with the plugin hooks, this gives you fine-grained control over what the tool does.


SYSTEM REQUIREMENTS

This tool requires Bash v3 or newer. Certain options require other programs:

--collect-gdb requires gdb

--collect-oprofile requires opcontrol and opreport

--collect-strace requires strace

--collect-tcpdump requires tcpdump


BUGS

For a list of known bugs, see the online issue tracking system.

Please report bugs there. Include the following information in your bug report:

  • Complete command-line used to run the tool
  • Tool --version
  • MySQL version of all servers involved
  • Output from the tool including STDERR
  • Input files (log/dump/config files, etc.)

If possible, include debugging output by running the tool with PTDEBUG; see “ENVIRONMENT”.


DOWNLOADING

Visit the Percona website to download the latest release of Percona Toolkit. Or, get the latest release from the command line:




You can also get individual tools from the latest release:


Replace TOOL with the name of any tool.


AUTHORS

Baron Schwartz, Justin Swanhart, Fernando Ipar, and Daniel Nichter


ABOUT PERCONA TOOLKIT

This tool is part of Percona Toolkit, a collection of advanced command-line tools developed by Percona for MySQL support and consulting. Percona Toolkit was forked from two projects in June, 2011: Maatkit and Aspersa. Those projects were created by Baron Schwartz and developed primarily by him and Daniel Nichter, both of whom are employed by Percona. Visit the Percona website for more software developed by Percona.


VERSION

pt-stalk 2.1.10
