Understand Your PostgreSQL Workloads Better with pg_stat_monitor

May 5, 2022
Author: Matt Yonkovit

Awesome news: pg_stat_monitor has reached GA status!  Percona has long been associated with pushing the limits and understanding the nuances of running database systems at scale, so building a tool that brings more insight and detail into query performance and scale on PostgreSQL systems fits right in with our history. So what does pg_stat_monitor actually do, and why should you care?  Excellent question!

Currently, the de facto standard for collecting and reviewing query metrics is pg_stat_statements.  This extension collects query metrics and lets you go back and see which queries have impacted your system.  Querying the extension yields something like this:
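(The original post shows a screenshot of the output here.) As a rough sketch, a query along these lines produces that kind of summary; the column names follow pg_stat_statements on PostgreSQL 13+, so check them against your server's version:

```sql
-- Top statements by total execution time, from pg_stat_statements
SELECT query,
       calls,
       total_exec_time,   -- total_time on PostgreSQL 12 and earlier
       mean_exec_time,    -- mean_time on PostgreSQL 12 and earlier
       rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```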

You can see here that this particular statement has been executed 61,559 times and had a total execution time of 27,326 milliseconds, for a mean time of 0.44 ms.

You can also get metrics on whether a statement is writing data, generating WAL, and so on.  This is valuable for finding statements that may be missing the cache and hitting disk, or statements that may be blowing up your WAL.
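For example, sorting on the write and WAL columns is a quick way to surface those statements; note that wal_bytes only exists in pg_stat_statements on PostgreSQL 13+, so treat the exact column set as version-dependent:

```sql
-- Statements generating the most WAL, with cache-hit context
SELECT query,
       calls,
       wal_bytes,          -- WAL generated by this statement (PG 13+)
       shared_blks_hit,    -- blocks served from shared_buffers
       shared_blks_read    -- blocks read from disk / OS cache
FROM pg_stat_statements
ORDER BY wal_bytes DESC
LIMIT 10;
```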

While this data is great, it could be better. Specifically, it’s hard to determine whether problems are getting worse or better.  Also, what if that query that executed 61K times ran in 0.01 ms for 60K of those executions and in 1,000 ms for the other 1K?  The mean hides that entirely.  Collecting enough data to make better, more targeted optimization decisions is needed, and this is where pg_stat_monitor can help.

First let me show you the output from one of the collected queries (note I am only selecting a single bucket, more on that in a second):
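(The original post shows a screenshot of the output here.) The shape of that query is roughly as follows; bucket and bucket_start_time are pg_stat_monitor columns, and I'm assuming the pg_stat_statements-style timing column names carry over, so verify against your installed version:

```sql
-- Stats from a single time bucket in pg_stat_monitor
SELECT bucket,
       bucket_start_time,
       query,
       calls,
       total_exec_time
FROM pg_stat_monitor
WHERE bucket = 1
ORDER BY total_exec_time DESC;
```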

You can see there is a lot of extra data.   Let’s view these side by side:


There are 19 additional columns of collected data. Some of those extra columns are used to break the statistics down into more granular and useful views.

First up is the concept of “buckets”.  What are buckets? A bucket is a configurable slice of time. Instead of everything being stored in one big bucket, query stats are broken into timed buckets that let you look at how a query’s performance changes over a time period.  By default there is a maximum of 10 buckets, each containing 60 seconds of data (both are configurable).  This also means the query data is easily consumable by your favorite time-series database for even more historical analysis capabilities.  We use these buckets internally to pull data into our query analytics tool and store it in a ClickHouse time-series database to provide even more analytic capabilities.
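As a sketch, the two settings controlling this (names per the pg_stat_monitor docs, shown here with their default values) can be set like any other GUC:

```sql
-- 10 buckets of 60 seconds each (the defaults); a server restart
-- is typically required for these to take effect
ALTER SYSTEM SET pg_stat_monitor.pgsm_max_buckets = 10;
ALTER SYSTEM SET pg_stat_monitor.pgsm_bucket_time = 60;  -- seconds per bucket
```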

Note the difference between pg_stat_statements and pg_stat_monitor with regard to data retention: once all of its buckets are full, pg_stat_monitor starts overwriting the oldest one, so it is best used in conjunction with another monitoring tool if you need long-term storage of query data.

Next, you will notice the inclusion of user/connection details. Many applications use the same user, but have several endpoints connecting.  Breaking up data via the client IP helps track down that rogue user or application server causing issues.  
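A grouping query like the following, using pg_stat_monitor’s client_ip column (the aggregate column names here are my own aliases), is one way to sketch that breakdown:

```sql
-- Where is each user's load actually coming from?
SELECT userid,
       client_ip,
       sum(calls)           AS calls,
       sum(total_exec_time) AS total_time_ms
FROM pg_stat_monitor
GROUP BY userid, client_ip
ORDER BY total_time_ms DESC;
```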

You can get a full breakdown of the features, settings, and columns here in the docs:  https://percona.github.io/pg_stat_monitor/REL1_0_STABLE/USER_GUIDE.html 

But I want to highlight a few of the new metrics and capabilities I am most excited about.  For me, the most interesting is the ability to collect histogram data.  This lets you see which queries deviate from the norm.  One of the key things our support engineers are always looking at is P99 latency, and this helps with exactly that.  You can see Percona Monitoring and Management taking advantage of these features here:

PMM PostgreSQL

With histograms enabled, I can see, and help track down, where queries and performance deviate from the norm.
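The raw histogram data lives in the resp_calls column, which holds call counts per response-time range (column name taken from the pg_stat_monitor docs; treat it as an assumption for your version). A minimal look at it:

```sql
-- Per-query response-time distribution: resp_calls holds the
-- call count for each configured time range
SELECT queryid,
       query,
       resp_calls
FROM pg_stat_monitor
WHERE bucket = 1;
```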

Additionally, you will notice the inclusion of CPU time.  Why is this important?  Query timings include things like waiting on disk and network resources.  On a system with a CPU bottleneck, the queries taking the longest wall-clock time may or may not be the actual offenders.
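A sketch of using those columns (cpu_user_time and cpu_sys_time, as named in the pg_stat_monitor docs) to separate CPU-bound statements from wait-bound ones:

```sql
-- Statements burning the most CPU, shown against wall-clock time;
-- a large gap suggests time spent waiting rather than computing
SELECT query,
       total_exec_time,
       cpu_user_time,
       cpu_sys_time
FROM pg_stat_monitor
ORDER BY (cpu_user_time + cpu_sys_time) DESC
LIMIT 10;
```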

Finally, you can configure pg_stat_monitor to store explain plans from previously run queries.  This is incredibly useful when plans change over time, and you are trying to recreate what took place an hour or two ago.  
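As a sketch (the setting and column names here follow the pg_stat_monitor docs, so verify them for your version), enabling and reading captured plans looks roughly like:

```sql
-- Turn on plan capture (off by default)
ALTER SYSTEM SET pg_stat_monitor.pgsm_enable_query_plan = on;
SELECT pg_reload_conf();

-- Later: pull the plan that was captured alongside each query
SELECT query, query_plan
FROM pg_stat_monitor
WHERE query_plan IS NOT NULL;
```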

Gaining additional insight into and understanding of your workload is critical, and pg_stat_monitor can help you with both.  pg_stat_monitor enables end-to-end traceability, aggregated stats across configurable time windows, and per-query execution stats, but it is PMM that visualizes all of this and lets you get even more insight into PostgreSQL behavior.

Want to try this out for yourself?  The instructions are available here:  https://percona.github.io/pg_stat_monitor/REL1_0_STABLE/setup.html#installing-from-percona-repositories

Also, check out the video walkthrough where I installed the plugin:
