Extending Percona Monitoring and Management for MySQL InnoDB Cluster with Custom Queries

A few days ago, a customer got in touch asking how they could use Percona Monitoring and Management (PMM) to monitor the roles played by each node in an InnoDB cluster. More specifically, they wanted to check when one of the nodes changed its role from Primary to Secondary, or vice versa. PMM allows for a high level of flexibility and customization through its support for custom queries; we just have to be creative in how we approach the problem. In this post, we present the steps we took to test our solution, including the deployment of a 3-node InnoDB Cluster hosted on the same server (for testing) and a PMM 2 server, and connecting them together. Even though this has already been covered in other blog posts and in the online documentation, we hope that providing the full procedure will be helpful if you want to create a similar test environment and try your hand at creating custom dashboards for InnoDB Cluster on PMM.

Deploying Our Test Cluster

Starting with a fresh CentOS 7.6 test server, we disabled SELinux (to make things simpler, as we will be using a set of non-standard ports in this environment) by setting SELINUX=disabled in /etc/selinux/config and restarting the server. Once this was done, we configured the Percona YUM repository in order to install the latest version of Percona Server for MySQL available at the time of writing.

Starting With a Regular Percona Server for MySQL

Note: if you are strictly looking to deploy a test InnoDB Cluster using local sandboxes, MySQL Shell includes built-in functionality for this, dba.deploySandboxInstance(). The procedure we use below is less straightforward but can be easily adjusted to deploy a regular cluster by having each instance installed on a dedicated server instead.

A fresh installation of Percona Server for MySQL 8.0 on CentOS leaves us with an empty datadir; we need to start the server in order to initialize it. But before we do, let's take advantage of the fact that the server has not yet been started and add some extra settings required for InnoDB Cluster to work by editing its main configuration file, /etc/my.cnf. Here's how it looks now:
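A minimal configuration along these lines covers the Group Replication requirements; the exact paths, ports, and server-id are illustrative, and the two mysqlx settings at the end are the ones used by MySQL Shell's X protocol:

```ini
[mysqld]
datadir = /var/lib/mysql
socket = /var/lib/mysql/mysql.sock
log-error = /var/log/mysqld.log
pid-file = /var/run/mysqld/mysqld.pid
port = 3306
server-id = 100
report-host = node-1
# Required by Group Replication
gtid-mode = ON
enforce-gtid-consistency = ON
binlog-checksum = NONE
# Used by MySQL Shell (X protocol)
mysqlx-port = 33060
mysqlx-socket = /var/lib/mysql/mysqlx.sock
```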

The last two settings are used to configure access to the MySQL Shell, which we will use to set up our InnoDB Cluster. We can now start the server:
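With the configuration in place, a simple systemd start both initializes the datadir and brings the instance up:

```shell
sudo systemctl start mysqld
```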

To access MySQL for the first time, we need to get hold of the temporary password from the log file:
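The temporary password is written to the error log during initialization, so grep recovers it:

```shell
sudo grep 'temporary password' /var/log/mysqld.log
```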

We can then connect to the server:
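Using the standard client and the temporary password recovered above:

```shell
mysql -uroot -p
```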

And set a new password:
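Something along these lines; the password here is just a placeholder:

```sql
ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyNewPass123!';
```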

In order to manage an InnoDB Cluster, we need a privileged user that can connect from any of the nodes (i.e., from other than localhost). Since our instances will all be hosted on the same server, we created a privileged account restricted to the server's IP address, using the same password as above:
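For example, assuming the server's private IP address is 192.168.0.10 (an illustrative value used throughout this post):

```sql
CREATE USER 'root'@'192.168.0.10' IDENTIFIED BY 'MyNewPass123!';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.0.10' WITH GRANT OPTION;
```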

The Group Replication plugin itself is not loaded by default, so we have to load it manually (or otherwise make use of plugin_load_add for this):
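Loading it manually is a single statement:

```sql
INSTALL PLUGIN group_replication SONAME 'group_replication.so';
```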

One last requirement to fulfill: the report-host defined in the MySQL configuration file needs to resolve internally. We do this by editing the server's /etc/hosts file and mapping the name to the server's private IP address:
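Using the same illustrative private IP address as above, the /etc/hosts entry looks like this:

```
192.168.0.10    node-1
```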

Creating an InnoDB Cluster

Now that the server is up and properly configured we can deploy our test cluster from it. This is done through the MySQL Shell, so let’s install the respective package:
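The Shell ships as its own package in the Percona repository:

```shell
sudo yum install percona-mysql-shell
```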

and access it:
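Launching it is just:

```shell
mysqlsh
```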

The Shell is an advanced MySQL client that provides JavaScript scripting capabilities by default. From it, we can validate that the user we created is “suitable for cluster usage”:
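The check takes an account and instance address; here we point it at the account we just created:

```js
dba.checkInstanceConfiguration('root@node-1:3306')
```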

Providing all goes well, we can connect to the server in the Shell:
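Either the \connect command or the shell.connect() method works; for example:

```js
shell.connect('root@node-1:3306')
```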

and proceed to create our test cluster. This being JavaScript, the cluster itself is instantiated as a variable named cluster (or whatever name you prefer):
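The cluster name passed as an argument below is arbitrary:

```js
var cluster = dba.createCluster('testCluster')
```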

We can verify the status of our cluster with:
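The status() method returns a JSON document describing the topology:

```js
cluster.status()
```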

Adding a Second Node

So far we have a 1-node test cluster up and running. Let’s add a couple of extra nodes to it. We start by editing the /etc/my.cnf configuration file of our server and adding a new mysqld option group below the existing [mysqld] one in order to define the new instance that will become our second node:
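A sketch of the second option group, again with illustrative paths and ports; systemd will match the [mysqld@group2] group name when we start the mysqld@group2 unit later on:

```ini
[mysqld@group2]
datadir = /var/lib/mysql2
socket = /var/lib/mysql2/mysql.sock
log-error = /var/log/mysqld2.log
pid-file = /var/run/mysqld/mysqld2.pid
port = 3307
server-id = 101
report-host = node-2
gtid-mode = ON
enforce-gtid-consistency = ON
binlog-checksum = NONE
mysqlx-port = 33070
mysqlx-socket = /var/lib/mysql2/mysqlx.sock
```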

As you can see above, the definition of this new mysqld instance is very similar to the main one; only a handful of settings, such as the port, server-id, report-host, and the per-instance paths, differ between them.

Before initializing this new instance, we had to create its own log file since the mysql user does not have write permission under /var/log:
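Creating the file and handing it to the mysql user is enough (the path matches the log-error setting used above):

```shell
sudo touch /var/log/mysqld2.log
sudo chown mysql:mysql /var/log/mysqld2.log
```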

Until not long ago, we would rely on mysqld_multi to manage multiple MySQL instances running on the same server, but this tool is no longer necessary on systemd-managed distributions; we just need to indicate which of the option groups defined in /etc/my.cnf we want the command to affect:
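With systemd, the option group name goes after the @ in the unit name:

```shell
sudo systemctl start mysqld@group2
```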

After we initialize the new instance, we need to do exactly as we did with the main instance we configured as our first node, starting by finding the temporary password attributed to it:
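This time the password lands in the second instance's own log file:

```shell
sudo grep 'temporary password' /var/log/mysqld2.log
```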

Then we connect to the instance to update the password, create the new account, and install the Group Replication plugin:
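Connecting through the second instance's socket (or port 3307), the statements are the same ones we ran on the first node; the password and IP address are the illustrative values used earlier:

```sql
ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyNewPass123!';
CREATE USER 'root'@'192.168.0.10' IDENTIFIED BY 'MyNewPass123!';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.0.10' WITH GRANT OPTION;
INSTALL PLUGIN group_replication SONAME 'group_replication.so';
```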

This second node reports itself as node-2, so we need to adjust /etc/hosts accordingly:
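Since both instances live on the same host, node-2 can share the same line as node-1:

```
192.168.0.10    node-1 node-2
```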

We can now go back to our MySQL Shell session and add the node to the existing cluster:
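Back in the Shell session where cluster was instantiated:

```js
cluster.addInstance('root@node-2:3307')
```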

During this process, the system will identify "errant GTIDs that do not exist in the cluster" in this new node; choose [C]lone to "completely overwrite the state of node-2:3307 with a physical snapshot from an existing cluster member". The seed instance, in this case, will be the only other node in the cluster, node-1.

Running cluster.status() again will now show a 2-node cluster:

Adding a Third Node

There are other ways to deploy a new mysqld instance besides initializing it from scratch. Adding a new option group to /etc/my.cnf remains indispensable, though:
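The third group follows the same pattern, again with illustrative values:

```ini
[mysqld@group3]
datadir = /var/lib/mysql3
socket = /var/lib/mysql3/mysql.sock
log-error = /var/log/mysqld3.log
pid-file = /var/run/mysqld/mysqld3.pid
port = 3308
server-id = 102
report-host = node-3
gtid-mode = ON
enforce-gtid-consistency = ON
binlog-checksum = NONE
mysqlx-port = 33080
mysqlx-socket = /var/lib/mysql3/mysqlx.sock
```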

For this third instance, instead of initializing the server and then adjusting its password, creating the new account, and installing the plugin, let's make a cold copy of the second instance's datadir and start from there (a binary backup made with Percona XtraBackup would also work):
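A cold copy means stopping the source instance first; the paths are the illustrative ones from the option groups above, and the third instance gets its own log file while we are at it:

```shell
sudo systemctl stop mysqld@group2
sudo cp -Rp /var/lib/mysql2 /var/lib/mysql3
sudo systemctl start mysqld@group2
sudo touch /var/log/mysqld3.log
sudo chown mysql:mysql /var/log/mysqld3.log
```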

Under the datadir, the auto.cnf file will contain the source server’s UUID while the mysqld-auto.cnf file will contain that server’s group replication settings. Let’s remove them to allow a fresh start for this new instance:
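Removing both files lets the instance generate a new UUID and join the group cleanly, after which we can start it:

```shell
sudo rm /var/lib/mysql3/auto.cnf /var/lib/mysql3/mysqld-auto.cnf
sudo systemctl start mysqld@group3
```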

One last edit to /etc/hosts:
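node-3 joins the same line:

```
192.168.0.10    node-1 node-2 node-3
```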

And we can add the new instance to the cluster from the Shell session:
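Again from the Shell session:

```js
cluster.addInstance('root@node-3:3308')
```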

This completes our 3-node InnoDB test cluster:

Note that at any time you can connect to the cluster by instantiating the cluster object in MySQL Shell once you are connected to any of the nodes that compose it:
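For example, connecting to the first node and retrieving the cluster object:

```js
shell.connect('root@node-1:3306')
var cluster = dba.getCluster()
cluster.status()
```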

Deploying a PMM Server

Deploying a Percona Monitoring and Management (PMM) version 2 server with Docker remains as simple as it has been since its early days:

  1. Make sure to have the latest Docker image of PMM2:

  2. Create the data container:

  3. Create the server container:
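The three steps above map to three Docker commands; the image tag and container names follow the conventions from the PMM documentation:

```shell
docker pull percona/pmm-server:2
docker create --volume /srv --name pmm-data percona/pmm-server:2 /bin/true
docker run --detach --restart always --publish 443:443 \
    --volumes-from pmm-data --name pmm-server percona/pmm-server:2
```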

That’s it! Now, it is just a matter of finding out which IP address has been assigned to pmm-server in order to access its web interface:
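docker inspect can extract it directly:

```shell
docker inspect -f '{{.NetworkSettings.IPAddress}}' pmm-server
```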

Monitoring the InnoDB Cluster on PMM

We start by installing the PMM 2 client package on our test server. Since the Percona Repository is already configured there, it is as simple as executing:
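The client is a single package:

```shell
sudo yum install pmm2-client
```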

Our test server is a Vagrant machine with two network interfaces, eth0 and eth1. The latter, not the former, is the one that can access the PMM server we just created, so when configuring the PMM client we need to indicate that communication should be established through its IP address; otherwise, the main network interface would be used by default:
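Assuming eth1 carries the illustrative address 192.168.0.10 and the PMM server answered on 172.17.0.2, the registration would look something like this:

```shell
sudo pmm-admin config --server-insecure-tls \
    --server-url=https://admin:admin@172.17.0.2:443 192.168.0.10
```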

Since we have 3 mysqld instances running on the server, we will have to set up 3 distinct agents (or exporters) to retrieve data from each of them. To differentiate the instances on PMM, we will name them node1, node2, and node3. And to make things simpler, we will use the existing main root user to connect to the instances:
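One pmm-admin add mysql per instance, each pointing at its own port; the password is the illustrative one used throughout this post:

```shell
sudo pmm-admin add mysql --username=root --password=MyNewPass123! \
    --query-source=perfschema node1 127.0.0.1:3306
sudo pmm-admin add mysql --username=root --password=MyNewPass123! \
    --query-source=perfschema node2 127.0.0.1:3307
sudo pmm-admin add mysql --username=root --password=MyNewPass123! \
    --query-source=perfschema node3 127.0.0.1:3308
```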

A final check to make sure everything is in order:
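Listing the configured services shows the agents attached to each instance:

```shell
sudo pmm-admin list
```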


Note: the mysql_perfschema_agent agents are used for Query Analytics (QAN).

Creating a Custom Query and Dashboard

Now that the test cluster is in place and monitored by a PMM server, we can finally get to focus on the main task at hand:

How can we monitor the roles played by each node in an InnoDB cluster on PMM?

We can find the current role played by each node in the cluster by querying the PERFORMANCE_SCHEMA:
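The replication_group_members table carries a MEMBER_ROLE column for each member:

```sql
SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_ROLE
  FROM performance_schema.replication_group_members;
```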

It is not possible to plot this information on a graph and track changes between the PRIMARY and SECONDARY roles as is. But we can be a little creative here and convert these values to numbers, say 1 for PRIMARY and 2 otherwise:
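A simple IF() does the conversion:

```sql
SELECT MEMBER_HOST, MEMBER_PORT,
       IF(MEMBER_ROLE = 'PRIMARY', 1, 2) AS ROLE
  FROM performance_schema.replication_group_members;
```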

Even though this data is not currently being collected by the MySQL exporter we can configure it to start doing so by creating a custom query:

Starting with version 1.15.0, PMM provides users the ability to take a SQL SELECT statement and turn the result set into a metric series in PMM.

On PMM 2, custom queries are stored under the /usr/local/percona/pmm2/collectors/custom-queries directory. You choose how often you want the data to be retrieved by placing the query in the respective resolution-level sub-directory. For this experiment, let's configure it to run often by editing /usr/local/percona/pmm2/collectors/custom-queries/mysql/high-resolution/queries-mysqld.yml as follows:
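A sketch of the custom query definition; the top-level key becomes the metric name prefix, and the WHERE condition shown here (matching on @@report_host and @@port) is one way to restrict the result to the instance being scraped:

```yaml
mysql_performance_schema_group_replication_member:
  # WHERE clause limits the result to the local member only
  query: "SELECT IF(MEMBER_ROLE = 'PRIMARY', 1, 2) AS ROLE FROM performance_schema.replication_group_members WHERE MEMBER_HOST = @@report_host AND MEMBER_PORT = @@port"
  metrics:
    - ROLE:
        usage: "GAUGE"
        description: "Role of the node in the InnoDB cluster (1 = PRIMARY, 2 = SECONDARY)"
```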

Note we modified the above query by adding a WHERE clause so it returns only the value for the role of the node being queried.

To create a graph to display this metric, click on the + ("plus") icon in the menu on the left side of the PMM web interface home page, then Create, Dashboard, and Add Query. In the Query section, at the bottom of that screen, enter the following under Metrics: mysql_performance_schema_group_replication_member_ROLE (that is the custom query name suffixed by the metric name, ROLE). Below it, in the Legend field, enter {{service_name}}; that's because we created all three MySQL instances on the same server, each being covered by a separate agent and thus considered a "service". In a production environment, you would probably use {{node_name}} instead, or maybe {{node_name}}.{{service_name}}.

Creating a Dashboard for our custom query

Finally, click on the little “disk” icon on the top of the screen, Save dashboard, and choose a name for it; we named ours “InnoDB Cluster node role”:

The custom dashboard showing the exact moment node2 is promoted to PRIMARY

Note there are other panel types available on PMM 2. Instead of the traditional Graph panel used for data visualization, we can choose a Discrete panel and map the members' roles back to their original names:

Using a Discrete panel to display the nodes' roles

There you have it: two dashboards to monitor InnoDB Cluster node roles powered by a custom query. What other metrics do you want to monitor?
