Profiling MySQL queries from Performance Schema

When optimizing queries and investigating performance issues, MySQL comes with built-in support for profiling queries via SET profiling = 1;. This is simple to use and already quite useful, so why bother with the PERFORMANCE_SCHEMA alternative?

Because the built-in profiling will be removed soon (it is already deprecated as of MySQL 5.6 and 5.7). Moreover, it can only be enabled per session, which means you cannot capture profiling information for queries running from other connections. If you are using Percona Server, the profiling option for log_slow_verbosity is a nice alternative; unfortunately, not everyone is using Percona Server.

Now, for a quick demo: I execute a simple query and profile it below. Note that all of these commands are executed from a single session to my test instance.
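A minimal sketch of the built-in approach, for reference; the test table and query here are hypothetical:

```sql
-- Enable the (deprecated) session profiler, run a query, then inspect it.
SET profiling = 1;

SELECT DISTINCT k FROM sbtest.sbtest1 WHERE pad LIKE '%0%';  -- hypothetical test query

SHOW PROFILES;              -- lists recently profiled statements with their Query_IDs
SHOW PROFILE FOR QUERY 1;   -- per-stage Status and Duration (in seconds)
```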

To demonstrate how we can achieve the same with Performance Schema, we first identify our current connection id. In the real world, you would want the connection/processlist id of the thread you want to watch, e.g. from SHOW PROCESSLIST.
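For example, the Performance Schema thread id of the current connection can be looked up like this (a sketch; for another connection, substitute its processlist id):

```sql
-- Map the client-visible connection id to the performance_schema thread id.
SELECT THREAD_ID INTO @thread_id
FROM performance_schema.threads
WHERE PROCESSLIST_ID = CONNECTION_ID();
-- For another session, use: WHERE PROCESSLIST_ID = <Id from SHOW PROCESSLIST>
```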

Next, we identify the bounding EVENT_IDs for the statement stages. We look for the statement we want to profile using a query against the events_statements_history_long table. Your LIMIT clause may vary depending on how many queries the server is getting.
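A sketch of such a query, assuming @thread_id holds the thread id of the connection being watched; TIMER_WAIT is in picoseconds, so dividing by 1000000000 yields milliseconds:

```sql
SELECT EVENT_ID, END_EVENT_ID,
       TRUNCATE(TIMER_WAIT/1000000000, 6) AS duration_ms,
       SQL_TEXT
FROM performance_schema.events_statements_history_long
WHERE THREAD_ID = @thread_id   -- thread id of the watched connection
ORDER BY EVENT_ID DESC
LIMIT 3;                       -- adjust to how busy the server is
```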

From the results above, we are mostly interested in the EVENT_ID and END_EVENT_ID values from the second row; these give us the range of stage events for this particular query in the events_stages_history_long table.
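A sketch of the stages query; the EVENT_ID bounds (123 and 456 are placeholders) come from the statement row identified above, and @thread_id is the watched connection's thread id:

```sql
SELECT EVENT_NAME,
       TRUNCATE(TIMER_WAIT/1000000000, 6) AS duration_ms
FROM performance_schema.events_stages_history_long
WHERE THREAD_ID = @thread_id
  AND EVENT_ID BETWEEN 123 AND 456  -- placeholders: the statement's EVENT_ID and END_EVENT_ID
ORDER BY EVENT_ID;
```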

As you can see, the results are pretty close: not exactly the same, but close. SHOW PROFILE shows Duration in seconds, while the results above are in milliseconds.

Some limitations to this method though:

  • As we’ve seen, it takes a few hoops to dish out the information we need. Because we have to manually identify the statement to profile, this procedure may not be easy to port into tools like the sys schema or pstop.
  • Only possible if Performance Schema is enabled (it has been enabled by default since MySQL 5.6.6, yay!)
  • Does not cover all the metrics of the native profiling, e.g. CONTEXT SWITCHES, BLOCK IO, SWAPS
  • Depending on how busy the server you are running the tests on is, the history tables may be too small, so you either have to increase their size (e.g. via the performance_schema_events_stages_history_long_size variable) or risk losing the history too early. Using ps_history might help in this case, though with a little modification to the queries.
  • The resulting Duration per event may vary; I would think this may be due to additional timer overhead, as described for the performance_timers table. In any case, we hope to get this cleared up when this bug is fixed.

Comments (5)

  • Vlad

    I could not repeat this with MySQL 5.6.11.

    mysql> SHOW VARIABLES LIKE 'performance_schema';
    +--------------------+-------+
    | Variable_name      | Value |
    +--------------------+-------+
    | performance_schema | ON    |
    +--------------------+-------+
    1 row in set (0.00 sec)

    Empty set (0.00 sec)

    April 17, 2015 at 10:35 am
  • Jervin Real


    Have you enabled your consumers and instruments for statements and stages?

    April 17, 2015 at 10:58 am
  • Jayaram


    I could not get any rows from events_statements_history_long even after enabling setup_instruments and setup_consumers

    version 5.6.24 (Percona)

    June 17, 2015 at 6:39 am
  • Daniel Guzmán Burgos

    You’ll have to enable the proper consumers and instruments. What Jervin said in the comment above is correct but the consumers query does have a typo. The fixed query is:

    UPDATE performance_schema.setup_consumers
    SET ENABLED = 'YES'
    WHERE NAME LIKE 'events_statements%' OR NAME LIKE 'events_stages%';

    Instruments one is okay:

    UPDATE performance_schema.setup_instruments
    SET ENABLED = 'YES'
    WHERE NAME LIKE 'statement/%' OR NAME LIKE 'stage/%';

    February 2, 2016 at 12:01 pm
  • littlemisstechy

    It’s a shame SET profiling = 1; is deprecated > 5.5. I just activated performance_schema, and a query that took 0.51 seconds now takes 2.32 seconds. There may be other reasons (hence why I am optimizing in the first place), but for now, this tool is doing a great job of impacting performance!

    November 25, 2016 at 7:14 am
