
Thread_Statistics and High Memory Usage

 | July 11, 2017 |  Posted In: MySQL, open source databases, Percona Server for MySQL


In this blog post, we’ll look at how using thread_statistics can cause high memory usage.

I was recently working on a high memory usage issue for one of our clients, and made an interesting discovery: memory usage was growing without bound. It was really tricky to diagnose.

Below, I am going to show you how to identify that having thread_statistics enabled causes high memory usage on busy systems with many threads.

Part 1: Issue Background

I had a server with 55.0G of available memory, running Percona Server for MySQL.

We calculated approximately how much memory MySQL can use in a worst-case scenario for max_connections=250: roughly, the global buffers (such as innodb_buffer_pool_size and key_buffer_size) plus max_connections times the sum of the per-connection buffers (sort, join, read and temporary-table buffers, binlog cache, thread stack).

So in our case, this shouldn’t have been more than ~12.5G.
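The estimate above can be sketched in a few lines. This is a minimal sketch of the standard "global buffers + max_connections × per-connection buffers" calculation; the setting names are real MySQL variables, but the byte values are illustrative placeholders chosen to land near the ~12.5G figure, not the client's actual configuration:

```python
# Rough worst-case MySQL memory estimate: global buffers plus
# max_connections times the per-connection buffers.
# All byte values below are illustrative placeholders, not real settings.
GIB = 1024 ** 3
MIB = 1024 ** 2

max_connections = 250

# Allocated once, shared by all connections.
global_buffers = {
    "innodb_buffer_pool_size": 8 * GIB,
    "key_buffer_size": 256 * MIB,
}

# Potentially allocated per connection in the worst case.
per_connection_buffers = {
    "sort_buffer_size": 2 * MIB,
    "join_buffer_size": 2 * MIB,
    "read_buffer_size": 1 * MIB,
    "read_rnd_buffer_size": 1 * MIB,
    "binlog_cache_size": 1 * MIB,
    "thread_stack": 256 * 1024,
    "tmp_table_size": 10 * MIB,
}

worst_case_bytes = (
    sum(global_buffers.values())
    + max_connections * sum(per_connection_buffers.values())
)
print(f"worst case: ~{worst_case_bytes / GIB:.1f}G")  # → worst case: ~12.5G
```

Note that this is only an estimate: as a commenter points out below the post, some buffers (join buffers in particular) can be allocated more than once per connection, so the real worst case can be higher.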

After MySQL Server was restarted, it allocated about 7G. After running for a week, it had reached 44G.

I checked everything that could be related to the high memory usage, for example operating system settings such as Transparent Huge Pages (THP), which was already disabled on this server. But I still didn’t find the cause, so I asked my teammates if they had any ideas.

Part 2: The Team to the Rescue

After brainstorming and reviewing the status, metrics and profiles again and again, my colleague Yves Trudeau pointed out that User Statistics was enabled on the server.

User Statistics adds several INFORMATION_SCHEMA tables, several commands, and the userstat variable. The tables and commands can be used to better understand server activity and to identify the different load sources. Check out the documentation for more information.

Since we saw many threads running, verifying this as the cause of the issue seemed like a good option.

Part 3: Cause Verification – Did It Really Eat Our Memory?

I decided to do some calculations and run the following test cases to verify the cause:

  1. Looking at the THREAD_STATISTICS table in the INFORMATION_SCHEMA, we can see that there is a row for each connection.
  2. The table has 22 columns, each of them a BIGINT (8 bytes), which gives us ~176 bytes per row.
  3. Let’s calculate how many rows we have in this table at this time (for example, with SELECT COUNT(*) FROM INFORMATION_SCHEMA.THREAD_STATISTICS;), and check once again in an hour.
  4. Now let’s check how much memory is currently in use (for example, the resident set size of the mysqld process).
  5. We have 12190801 rows in the THREAD_STATISTICS table, which at ~176 bytes per row is ~2G of data.
  6. Issuing FLUSH THREAD_STATISTICS; cleans up the collected statistics.
  7. Now let’s check again how much memory is in use.
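The arithmetic in steps 2 and 5 can be double-checked in a few lines. This sketch assumes 8 bytes per BIGINT column and ignores per-row and index overhead, so it is a lower bound on the table's true footprint:

```python
# Step 2: row size of INFORMATION_SCHEMA.THREAD_STATISTICS,
# assuming 22 BIGINT columns at 8 bytes each (per-row overhead ignored).
columns = 22
bigint_bytes = 8
row_bytes = columns * bigint_bytes
print(row_bytes)  # → 176

# Step 5: total in-memory size for the observed row count.
rows = 12_190_801
total_bytes = rows * row_bytes
print(f"~{total_bytes / 1024**3:.1f}G")  # → ~2.0G
```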

As we can see, memory usage dropped by approximately the 2G we had calculated earlier!

That was the root cause of the high memory usage in this case.

Conclusion

User Statistics (and thread_statistics in particular) is a great feature that lets us identify load sources and better understand server activity. At the same time, though, it can be dangerous (from a memory usage point of view) to use as a permanent monitoring solution, because there is no limit on how much memory it can consume.

As a reminder, thread_statistics is NOT enabled by default when you enable User Statistics (userstat). If you have enabled thread_statistics for monitoring purposes, please don’t forget to keep an eye on its memory usage.

As a next step, we are considering submitting a feature request to implement default limits that would help prevent out-of-memory issues on busy systems.


One Comment

  • Fernando Ipar: Very interesting post!

    I have a question: I’ve always thought it was impossible to establish a worst-case memory usage figure for MySQL, at least not without knowing a lot about the workload, so I’m surprised to see how you arrived at the 12.5G figure. For example, multiple join buffers may be needed for a single connection, depending on how many tables are joined and how they are joined (https://dev.mysql.com/doc/refman/5.6/en/server-system-variables.html#sysvar_join_buffer_size). So I think you could get to 12G of used memory or even more without using all 250 connections, and you could use a lot more than 12.5G if all 250 connections run queries that join multiple tables.
