Analyze Your Raw MySQL Query Logs with ClickHouse

by Peter Zaitsev
In this blog post, I’ll look at how you can analyze raw MySQL query logs with ClickHouse.
For typical query performance analyses, we have an excellent tool in Percona Monitoring and Management. You may want to go deeper, though. You might be longing for the ability to query raw MySQL “slow” query logs with SQL.
There are a number of tools to load MySQL slow query logs into a variety of data stores. For example, you can find posts showing how to do it with Logstash. While very flexible, these solutions have always looked too complicated and too limited in functionality to me.
By far the best solution to parse and load MySQL slow query logs (among multiple supported log types) is Charity Majors' Honeytail. It is a great self-contained tool written in Go that has excellent documentation and is very easy to get started with. The only catch is that it is designed to work only with the SaaS log monitoring platform Honeycomb.io.
My friends at Altinity replaced Honeytail's interface that writes logs to Honeycomb with one that writes data to ClickHouse – giving us Clicktail! Clicktail retains most of Honeytail's features, including support for MySQL slow query logs, yet uses ClickHouse instead of Honeycomb.io as the backend store.
Once we have Clicktail set up, slow query log data flows into ClickHouse essentially in real time, and we can query it:
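As a minimal sketch of what querying this data looks like – assuming Clicktail's default table name `clicktail.mysql_slow_log` and a `_time` timestamp column (check your Clicktail configuration for the actual schema) – we can first confirm events are arriving:

```sql
-- Sanity check: how many log events have landed, and over what time range?
-- Table and column names are assumptions based on Clicktail defaults.
SELECT
    count() AS events,
    min(_time) AS first_event,
    max(_time) AS last_event
FROM clicktail.mysql_slow_log
```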
This query computes the average latency and 99th-percentile latency for queries with a given fingerprint, as well as the load this query pattern generates.
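A sketch of such a query, assuming a `normalized_query` column holding the query fingerprint and a `query_time` column in seconds (both names are assumptions; adjust to your schema):

```sql
-- Per-fingerprint latency profile; "load" here means total query time consumed,
-- so sorting by it surfaces the patterns that cost the server the most.
SELECT
    normalized_query,
    count() AS cnt,
    avg(query_time) AS avg_latency,
    quantile(0.99)(query_time) AS p99_latency,
    sum(query_time) AS load
FROM clicktail.mysql_slow_log
WHERE _time > now() - INTERVAL 1 DAY
GROUP BY normalized_query
ORDER BY load DESC
LIMIT 10
```

Note ClickHouse's parametric aggregate syntax: `quantile(0.99)(query_time)` takes the quantile level as a parameter and the column as an argument.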
It is amazing to see how ClickHouse – which I'm running here on a single small Intel NUC computer – can process this query at almost 100 million rows per second:
This query shows us how many distinct InnoDB pages different queries touched during the last day. We can also easily find the slowest query in our log:
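Finding the slowest query might look like the following sketch (again assuming the `clicktail.mysql_slow_log` table with `_time`, `query_time`, and a `query` column holding the raw query text):

```sql
-- The single worst query of the last day, with its full original text.
SELECT
    _time,
    query_time,
    query
FROM clicktail.mysql_slow_log
WHERE _time > now() - INTERVAL 1 DAY
ORDER BY query_time DESC
LIMIT 1
```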
The ability to query both specific queries as presented in the log file, as well as their normalized versions, is very powerful.
You may also notice that in this case ClickHouse is not particularly fast. We would get better performance from traditional MySQL if we indexed the query_time column. This is the trade-off of an index-less column-store design: it allows very fast parallel scans, but it does not let us use indexes.
You may also notice that query #2 used a range on the _time column and ended up scanning fewer rows than the other queries. While ClickHouse does not use traditional indexes, it does allow you to define a sort key (one per table) that works much like a traditional index for range queries.
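For illustration, the sort key is declared with `ORDER BY` when a MergeTree table is created. This is a simplified sketch with made-up columns, not Clicktail's actual schema:

```sql
-- Sorting the table by _time lets range predicates on _time skip
-- whole blocks of data instead of scanning every row.
CREATE TABLE mysql_slow_log (
    _time DateTime,
    normalized_query String,
    query String,
    query_time Float32
) ENGINE = MergeTree()
ORDER BY _time
```

With the table sorted this way, a `WHERE _time > ...` range predicate only reads the relevant portion of the data, which is why a time-bounded query scans fewer rows.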
If you ever need to do some advanced digging in raw MySQL slow query logs, consider doing it with ClickHouse and Clicktail!