This feature adds microsecond time resolution and additional statistics to the slow query log output. It lets you enable or disable the slow query log at runtime, adds logging for the slave SQL thread, and adds fine-grained control over what and how much to log into the slow query log.
You can use Percona Toolkit's pt-query-digest tool to aggregate similar queries together and report on those that consume the most execution time.
log_slow_filter: Filters the slow query log by the query's execution plan. The value is a comma-delimited string and can contain any combination of the following values:
qc_miss: The query was not found in the query cache.
full_scan: The query performed a full table scan.
full_join: The query performed a full join (a join without indexes).
tmp_table: The query created an implicit internal temporary table.
tmp_table_on_disk: The query’s temporary table was stored on disk.
filesort: The query used a filesort.
filesort_on_disk: The filesort was performed on disk.
Values are OR’ed together. If the string is empty, then the filter is disabled. If it is not empty, then queries will only be logged to the slow log if their execution plan matches one of the types of plans present in the filter.
For example, to log only queries that perform a full table scan, set the value to full_scan. To log only queries that use on-disk temporary storage for intermediate results, set the value to tmp_table_on_disk,filesort_on_disk.
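For instance, the filter can be changed at runtime with a standard SET statement (a sketch; the filter values are taken from the list above):

```sql
-- Log only queries that performed a full table scan or a full
-- (index-less) join; queries with other execution plans are skipped.
SET GLOBAL log_slow_filter = 'full_scan,full_join';

-- Check the setting that new connections will pick up.
SHOW GLOBAL VARIABLES LIKE 'log_slow_filter';
```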
log_slow_rate_type: Specifies the semantics of log_slow_rate_limit: session or query.

log_slow_rate_limit: The behavior of this variable depends on log_slow_rate_type. It specifies that only a fraction of sessions/queries should be logged. Logging is enabled for every nth session/query. By default, n is 1, so logging is enabled for every session/query. Please note: when log_slow_rate_type is session, rate limiting is disabled for the replication thread.

With log_slow_rate_type set to session, this option lets you log full sessions, so you have complete records of sessions for later analysis; but you can rate-limit the number of sessions that are logged. Note that this feature will not work well if your application uses any type of connection pooling or persistent connections. Also note that after changing the variable in session mode, you must reconnect for the change to take effect.

With log_slow_rate_type set to query, this option lets you log just a sample of queries for later analysis. For example, if you set the value to 100, then one percent of queries will be logged.

Note that every query has a globally unique query_id, and every connection can have its own (session) value of this variable. The decision whether to log is made as follows:

If log_slow_rate_limit is 1, every query is logged.
If log_slow_rate_limit is greater than 1, every 1/log_slow_rate_limit query (or session) is logged at random.

This allows flexible configuration of the logging behavior.
For example, if you set the value to 100, then one percent of sessions/queries will be logged. Percona Server also adds information about log_slow_rate_limit to the slow query log: whenever rate limiting is in effect, it is reflected in the slow query log for each written query. The output looks like this:
Log_slow_rate_type: query Log_slow_rate_limit: 10
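For example, per-query sampling at one percent (as in the text above) can be configured like this (a sketch using the two variables just described):

```sql
-- Sample roughly one query in a hundred into the slow query log.
SET GLOBAL log_slow_rate_type  = 'query';
SET GLOBAL log_slow_rate_limit = 100;
```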
log_slow_sp_statements: When set to TRUE, statements executed by stored procedures are logged to the slow query log if it is open. The CALL statement itself is no longer logged to the slow query log, as that would count the same query twice and lead to incorrect results.
Example of the improved stored procedure slow query log entry:
mysql> DELIMITER //
mysql> CREATE PROCEDURE improved_sp_log()
       BEGIN
        SELECT * FROM City;
        SELECT * FROM Country;
       END//
mysql> DELIMITER ;
mysql> CALL improved_sp_log();
When we check the slow query log after running the stored procedure, with log_slow_sp_statements set to TRUE, it should look like this:
# Time: 150109 11:38:55
# User@Host: root[root] @ localhost
# Thread_id: 40 Schema: world Last_errno: 0 Killed: 0
# Query_time: 0.012989 Lock_time: 0.000033 Rows_sent: 4079 Rows_examined: 4079 Rows_affected: 0 Rows_read: 4079
# Bytes_sent: 161085
# Stored routine: world.improved_sp_log
SET timestamp=1420803535;
SELECT * FROM City;
# User@Host: root[root] @ localhost
# Thread_id: 40 Schema: world Last_errno: 0 Killed: 0
# Query_time: 0.001413 Lock_time: 0.000017 Rows_sent: 4318 Rows_examined: 4318 Rows_affected: 0 Rows_read: 4318
# Bytes_sent: 194601
# Stored routine: world.improved_sp_log
SET timestamp=1420803535;
SELECT * FROM Country;
When log_slow_sp_statements is set to FALSE:
- An entry is added to the slow query log for the CALL statement only, and not for any of the individual statements run inside that stored procedure.
- The execution time reported for the CALL statement is the total execution time of the CALL, including all of its statements.
If we run the same stored procedure with log_slow_sp_statements set to FALSE, the slow query log should look like this:
# Time: 150109 11:51:42
# User@Host: root[root] @ localhost
# Thread_id: 40 Schema: world Last_errno: 0 Killed: 0
# Query_time: 0.013947 Lock_time: 0.000000 Rows_sent: 4318 Rows_examined: 4318 Rows_affected: 0 Rows_read: 4318
# Bytes_sent: 194612
SET timestamp=1420804302;
CALL improved_sp_log();
Support for logging stored procedures doesn’t involve triggers, so they won’t be logged even if this feature is enabled.
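The two behaviors described above can be toggled at runtime (a sketch; TRUE/FALSE may equivalently be written as ON/OFF):

```sql
-- Log each statement executed inside stored procedures ...
SET GLOBAL log_slow_sp_statements = TRUE;

-- ... or log only the outer CALL with its total execution time.
SET GLOBAL log_slow_sp_statements = FALSE;
```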
log_slow_verbosity: Specifies how much information to include in the slow query log. The value is a comma-delimited string and can contain any combination of the following values:
microtime: Log queries with microsecond precision.
query_plan: Log information about the query’s execution plan.
innodb: Log InnoDB statistics.
minimal: Equivalent to enabling just microtime.
standard: Equivalent to enabling microtime,innodb.
full: Equivalent to all other values OR'ed together without the profiling options.
profiling: Enables profiling of all queries in all connections.
profiling_use_getrusage: Enables usage of the getrusage function.
Values are OR’ed together.
For example, to enable microsecond query timing and InnoDB statistics, set this option to microtime,innodb, or equivalently to standard. To turn all options on, set the option to full.
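As a sketch, the two settings just mentioned would be applied like this:

```sql
-- Microsecond timing plus InnoDB statistics:
SET GLOBAL log_slow_verbosity = 'microtime,innodb';

-- Everything except the profiling options:
SET GLOBAL log_slow_verbosity = 'full';
```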
slow_query_log_use_global_control: Specifies which variables have global scope instead of local. For such variables, the global variable value is used in the current session, but without copying this value to the session value. The value is a "flag" variable: you can specify multiple values separated by commas.
none: All variables use local scope.
log_slow_filter: The global variable log_slow_filter has effect (instead of the local one).
log_slow_rate_limit: The global variable log_slow_rate_limit has effect (instead of the local one).
log_slow_verbosity: The global variable log_slow_verbosity has effect (instead of the local one).
long_query_time: The global variable long_query_time has effect (instead of the local one).
min_examined_row_limit: The global variable min_examined_row_limit has effect (instead of the local one).
all: All global variables have effect (instead of the local ones).
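For example, to make the global values of two of these variables override any session-level settings (a sketch using names from the list above):

```sql
-- Sessions keep their own values for everything else, but these
-- two variables are always read from the global scope.
SET GLOBAL slow_query_log_use_global_control = 'log_slow_filter,log_slow_rate_limit';
```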
This variable specifies an additional query execution time threshold for the slow query log. When a query's execution time exceeds this threshold, the query is written to the slow query log unconditionally; that is, log_slow_rate_limit will not apply to it.
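As a sketch, a ten-second unconditional threshold could be combined with rate limiting. Note that the variable name used here, slow_query_log_always_write_time, does not appear in the text above; it is Percona Server's documented name for this feature:

```sql
-- Sample one percent of queries ...
SET GLOBAL log_slow_rate_type  = 'query';
SET GLOBAL log_slow_rate_limit = 100;
-- ... but always log anything slower than 10 seconds.
SET GLOBAL slow_query_log_always_write_time = 10;
```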
The feature adds more information to the slow log output. Here is a sample log entry:
# Time: 130601 8:01:06.058915
# User@Host: root[root] @ localhost  Id: 42
# Schema: imdb Last_errno: 0 Killed: 0
# Query_time: 7.725616 Lock_time: 0.000328 Rows_sent: 4 Rows_examined: 1543720 Rows_affected: 0
# Bytes_sent: 272 Tmp_tables: 0 Tmp_disk_tables: 0 Tmp_table_sizes: 0
# QC_Hit: No Full_scan: Yes Full_join: No Tmp_table: No Tmp_table_on_disk: No
# Filesort: No Filesort_on_disk: No Merge_passes: 0
SET timestamp=1370073666;
SELECT id,title,production_year FROM title WHERE title = 'Bambi';
Another example (with profiling enabled in log_slow_verbosity):
# Time: 130601 8:03:20.700441
# User@Host: root[root] @ localhost  Id: 43
# Schema: imdb Last_errno: 0 Killed: 0
# Query_time: 7.815071 Lock_time: 0.000261 Rows_sent: 4 Rows_examined: 1543720 Rows_affected: 0
# Bytes_sent: 272
# Profile_starting: 0.000125 Profile_starting_cpu: 0.000120
# Profile_checking_permissions: 0.000021 Profile_checking_permissions_cpu: 0.000021
# Profile_Opening_tables: 0.000049 Profile_Opening_tables_cpu: 0.000048
# Profile_init: 0.000048 Profile_init_cpu: 0.000049
# Profile_System_lock: 0.000049 Profile_System_lock_cpu: 0.000048
# Profile_optimizing: 0.000024 Profile_optimizing_cpu: 0.000024
# Profile_statistics: 0.000036 Profile_statistics_cpu: 0.000037
# Profile_preparing: 0.000029 Profile_preparing_cpu: 0.000029
# Profile_executing: 0.000012 Profile_executing_cpu: 0.000012
# Profile_Sending_data: 7.814583 Profile_Sending_data_cpu: 7.811634
# Profile_end: 0.000013 Profile_end_cpu: 0.000012
# Profile_query_end: 0.000014 Profile_query_end_cpu: 0.000014
# Profile_closing_tables: 0.000023 Profile_closing_tables_cpu: 0.000023
# Profile_freeing_items: 0.000051 Profile_freeing_items_cpu: 0.000050
# Profile_logging_slow_query: 0.000006 Profile_logging_slow_query_cpu: 0.000006
# Profile_total: 7.815085 Profile_total_cpu: 7.812127
SET timestamp=1370073800;
SELECT id,title,production_year FROM title WHERE title = 'Bambi';
Each slow log entry now contains a connection identifier, so you can trace all the queries coming from a single connection. This is the same value that is shown in the Id column of SHOW FULL PROCESSLIST or returned by the CONNECTION_ID() function.
Each entry also contains a schema name, so you can trace all the queries whose default database was set to a particular schema.
# Id: 43 Schema: imdb
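To correlate log entries with a live session, a connection can report its own identifier using the standard MySQL function:

```sql
-- Returns the same value that appears as Id in the slow log
-- entries and in SHOW FULL PROCESSLIST for this connection.
SELECT CONNECTION_ID();
```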
Query_time and Lock_time are logged with microsecond resolution; this is the original functionality offered by this feature.
The feature also adds information about how many rows were examined for SELECT queries, and how many were examined and affected for UPDATE, DELETE, and INSERT queries:
# Query_time: 0.962742 Lock_time: 0.000202 Rows_sent: 4 Rows_examined: 1543719 Rows_affected: 0
Values and context:
Rows_examined: Number of rows scanned (SELECT)
Rows_affected: Number of rows changed (UPDATE, DELETE, INSERT)
The feature provides information about the number of bytes sent for the result of the query and the number of temporary tables created for its execution, differentiated by whether they were created in memory or on disk, together with the total number of bytes used by them.
# Bytes_sent: 8053 Tmp_tables: 1 Tmp_disk_tables: 0 Tmp_table_sizes: 950528
Values and context:
Bytes_sent: The number of bytes sent for the result of the query
Tmp_tables: Number of temporary tables created in memory for the query
Tmp_disk_tables: Number of temporary tables created on disk for the query
Tmp_table_sizes: Total size in bytes of all temporary tables used in the query
Each query can be executed in various ways. For example, it may use indexes or do a full table scan, or a temporary table may be needed. These are the things that you can usually see by running
EXPLAIN on the query. The feature will now allow you to see the most important facts about the execution in the log file.
# QC_Hit: No Full_scan: Yes Full_join: No Tmp_table: No Tmp_table_on_disk: No
# Filesort: No Filesort_on_disk: No Merge_passes: 0
The values and their meanings are documented with the log_slow_filter variable.
The final part of the output is the InnoDB usage statistics. MySQL currently shows many per-session statistics for operations with
SHOW SESSION STATUS, but that does not include those of InnoDB, which are always global and shared by all threads. This feature lets you see those values for a given query.
# InnoDB_IO_r_ops: 6415 InnoDB_IO_r_bytes: 105103360 InnoDB_IO_r_wait: 0.001279
# InnoDB_rec_lock_wait: 0.000000 InnoDB_queue_wait: 0.000000
# InnoDB_pages_distinct: 6430
innodb_IO_r_ops: Counts the number of page read operations scheduled. The actual number of read operations may be different, but since this can be done asynchronously, there is no good way to measure it.
innodb_IO_r_bytes: Similar to innodb_IO_r_ops, but the unit is bytes.
innodb_IO_r_wait: Shows how long (in seconds) it took InnoDB to actually read the data from storage.
innodb_rec_lock_wait: Shows how long (in seconds) the query waited for row locks.
innodb_queue_wait: Shows how long (in seconds) the query spent either waiting to enter the InnoDB queue or inside that queue waiting for execution.
innodb_pages_distinct: Counts approximately the number of unique pages the query accessed. The approximation is based on a small hash array representing the entire buffer pool, because it could take a lot of memory to map all the pages. The inaccuracy grows with the number of pages accessed by a query, because there is a higher probability of hash collisions.
If the query did not use InnoDB tables, that information is written into the log instead of the above statistics.