In my webinar on Using Percona Monitoring and Management (PMM) for MySQL Troubleshooting, I showed how to use direct queries to ClickHouse for advanced query analysis tasks. In the follow-up webinar Q&A, I promised to describe it in more detail and share some queries, so here it is.
PMM uses ClickHouse to store query performance data which gives us great performance and a very high compression ratio. ClickHouse stores data in column-store format so it handles denormalized data very well. As a result, all query performance data is stored in one simple “metrics” table:
hash of query fingerprint
Name of service (IP or hostname of DB server by default)
MySQL: database; PostgreSQL: schema
client user name
client IP or hostname
Name of replication set
Type of service
Custom labels names
Custom labels values
Identifier of the agent that collects and sends metrics
qan-agent-type-invalid = 0
Agent type that collects the metrics: slowlog, perf schema, etc.
Time when collection of bucket started
Duration of collection bucket
mysql digest_text; query without data
One query example from the set found in the bucket
EXAMPLE_FORMAT_INVALID = 0
EXAMPLE = 1
FINGERPRINT = 2
Indicates that collecting real query examples is prohibited
Indicates whether the query example was too long and had to be truncated
EXAMPLE_TYPE_INVALID = 0
RANDOM = 1
SLOWEST = 2
FASTEST = 3
WITH_ERROR = 4
Indicates how the query example was picked
Metrics of query example in JSON format.
Number of queries with warnings in the bucket
List of warnings
Count of each warning in the bucket
Number of queries with errors in the bucket
List of Last_errno
Count of each Last_errno in the bucket
Number of queries in this bucket
Number of times the statement execution time was recorded
Total statement execution time in seconds
Smallest value of query_time in the bucket
Biggest value of query_time in the bucket
99th percentile of query_time values in the bucket
The time to acquire locks in seconds.
The number of rows sent to the client.
Number of rows scanned – SELECT.
Number of rows changed – UPDATE.
The number of rows read from tables.
The number of merge passes that the sort algorithm has had to do.
Counts the number of page read operations scheduled.
Similar to innodb_IO_r_ops
Shows how long (in seconds) it took InnoDB to actually read the data from storage.
Shows how long (in seconds) the query waited for row locks.
Shows how long (in seconds) the query spent either waiting to enter the InnoDB queue or inside that queue waiting for execution.
Counts approximately the number of unique pages the query accessed.
The length of the query text.
The number of bytes sent to all clients.
Number of temporary tables created in memory for the query.
Number of temporary tables created on disk for the query.
Total Size in bytes for all temporary tables used in the query.
Query Cache hits.
The query performed a full table scan.
The query performed a full join (a join without indexes).
The query created an implicit internal temporary table.
The query's temporary table was stored on disk.
The query used a filesort.
The filesort was performed on disk.
The number of joins that used a range search on a reference table.
The number of joins that used ranges on the first table.
The number of joins without keys that check for key usage after each row.
The number of sorts that were done using ranges.
The number of sorted rows.
The number of sorts that were done by scanning the table.
The number of queries without index.
The number of queries without good index.
The number of returned documents.
The response length of the query result in bytes.
The number of scanned documents.
Total number of shared block cache hits by the statement.
Total number of shared blocks read by the statement.
Total number of shared blocks dirtied by the statement.
Total number of shared blocks written by the statement.
Total number of local block cache hits by the statement.
Total number of local blocks read by the statement.
Total number of local blocks dirtied by the statement.
Total number of local blocks written by the statement.
Total number of temp blocks read by the statement.
Total number of temp blocks written by the statement.
Total time the statement spent reading blocks.
Total time the statement spent writing blocks.
I provided the whole table structure here as it includes descriptions for many columns. Note that not all columns will contain data for every database engine in every configuration, and some are not used at all yet.
Before we get to queries, let me explain some general design considerations for this table.
We do not store performance information for every single query; it is not always available to begin with (for example, when using MySQL Performance Schema). Even if it were available, with modern database engines capable of serving 1M+ QPS it would still be a lot of data to store and process.
Instead, we aggregate statistics into “buckets”, which correspond to the sort key of the “metrics” table:
ORDER BY (queryid,service_name,database,schema,username,client_host,period_start)
You can think of the Sort Key as similar to a Clustered Index in MySQL. Basically, for every period (1 minute by default) we store information for every queryid, service_name, database, schema, username, and client_host combination.
Period_Start is stored in the UTC timezone.
QueryID – is a hash which identifies unique query pattern, such as “select c from sbtest1 where id=?”
Service_Name is the name of the database instance
Database – the database or catalog. We use this term in the PostgreSQL sense, not the MySQL one
Schema – the schema, which in MySQL is also referred to as the database
UserName – the database-level user name that ran the given query
Client_Host – HostName or IP of the Client
This data storage format allows us to provide very detailed workload analysis. For example, you can see whether there is a difference in performance profile between different schemas, which is very valuable for many applications that use the “tenant per schema” approach. Or you can see the specific workloads that different users generate on your database fleet.
Another thing you may notice is that each metric for each grouping bucket stores several statistical values, such as:
`m_query_time_cnt` Float32 COMMENT 'The statement execution time in seconds was met.',
`m_query_time_sum` Float32 COMMENT 'The statement execution time in seconds.',
`m_query_time_min` Float32 COMMENT 'Smallest value of query_time in bucket',
`m_query_time_max` Float32 COMMENT 'Biggest value of query_time in bucket',
`m_query_time_p99` Float32 COMMENT '99 percentile of value of query_time in bucket',
The _cnt value is the number of times this metric was reported. Every query should have query_time available, but many other measurements may not be available for every engine and configuration.
The _sum value is the sum of the metric across all _cnt queries, so if you want to compute _avg you should divide _sum by _cnt.
_min, _max, and _p99 store the minimum, maximum, and 99th percentile values.
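Putting this together, here is a sketch of computing per-query averages from the bucket statistics (assuming the “metrics” table shown above; note that taking max() of the per-bucket p99 values only gives an upper-bound approximation of the overall p99, since percentiles cannot be exactly combined across buckets):
# Average and worst-case p99 query time per queryid for the last hour
select queryid, sum(m_query_time_sum)/sum(m_query_time_cnt) avg_time, max(m_query_time_p99) p99_upper_bound from metrics where period_start>subtractHours(now(),1) group by queryid order by avg_time desc limit 10;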
How to Access ClickHouse
To access ClickHouse on the PMM Server, run the “clickhouse-client” command line tool.
If you’re deploying PMM with Docker you can just run:
docker exec -it pmm2-server clickhouse-client
Where pmm2-server is the name of the container you’re using for PMM.
Run “use pmm” to set the current schema to pmm.
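Once connected, a quick sanity-check session might look like this (a sketch; the exact output depends on your PMM version):
use pmm
show tables;
describe table metrics;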
ClickHouse uses an SQL-like language as its query language. I call it SQL-like as it does not implement the SQL standard fully, yet it has many additional and very useful extensions. You can find the complete ClickHouse query language reference here.
# Number of Queries for the period
select sum(num_queries) from metrics where period_start>'2020-03-18 00:00:00';
# Average Query Execution Time for Last 6 hours
select avg(m_query_time_sum/m_query_time_cnt) from metrics where period_start>subtractHours(now(),6);
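To see how execution time changes over time rather than as a single average, you can bucket the same calculation by interval; here is a sketch using ClickHouse’s toStartOfTenMinutes function:
# Average query execution time per 10-minute interval, last 6 hours
select toStartOfTenMinutes(period_start) ts, sum(m_query_time_sum)/sum(m_query_time_cnt) avg_time from metrics where period_start>subtractHours(now(),6) group by ts order by ts;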
# What are the most frequent query ids? Also calculate the total number of queries in the same query
select queryid,sum(num_queries) cnt from metrics where period_start>subtractHours(now(),6) group by queryid with totals order by cnt desc limit 10;
# How do the actual queries look for those IDs?
select any(example),sum(num_queries) cnt from metrics where period_start>subtractHours(now(),6) group by queryid order by cnt desc limit 10 \G
# Queries hitting a particular host
select any(example),sum(num_queries) cnt from metrics where period_start>subtractHours(now(),6) and node_name='mysql1' group by queryid order by cnt desc limit 10 \G
# Slowest instances of the queries
select any(example),sum(num_queries) cnt, max(m_query_time_max) slowest from metrics where period_start>subtractHours(now(),6) group by queryid order by slowest desc limit 10 \G
# Query pattern which resulted in the largest temporary table created
select example, m_tmp_table_sizes_max from metrics where period_start>subtractHours(now(),6) order by m_tmp_table_sizes_max desc limit 1 \G
# Slowest Queries Containing Delete in the text
select any(example),sum(num_queries) cnt, max(m_query_time_max) slowest from metrics where period_start>subtractHours(now(),6) and lowerUTF8(example) like '%delete%' group by queryid order by slowest desc limit 10 \G
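In the same spirit, here are a couple more sketches you might adapt (using the column names from the table structure above): total load per schema, which is useful for the “tenant per schema” analysis mentioned earlier, and queries that examine many rows per row sent, which often indicates inefficient access patterns.
# Total execution time per schema, last 6 hours
select schema, sum(m_query_time_sum) total_time, sum(num_queries) cnt from metrics where period_start>subtractHours(now(),6) group by schema order by total_time desc limit 10;
# Queries examining the most rows per row sent
select queryid, any(example), sum(m_rows_examined_sum)/sum(m_rows_sent_sum) ratio from metrics where period_start>subtractHours(now(),6) group by queryid having sum(m_rows_sent_sum)>0 order by ratio desc limit 10 \G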
I hope this gets you started!
If you create some other queries which you find particularly helpful, please feel free to leave them in the comments for others to enjoy!