ProxySQL Behavior in the Percona Kubernetes Operator for Percona XtraDB Cluster

The Percona Kubernetes Operator for Percona XtraDB Cluster (PXC) comes with ProxySQL as part of the deal. And to be honest, the behavior of ProxySQL is pretty much the same as in a regular non-Kubernetes deployment. So why bother to write a blog about it? Because what happens around ProxySQL in the context of the operator is actually interesting.

ProxySQL is deployed in its own pod (which can be scaled just as the PXC pods can). Each ProxySQL pod has its own ProxySQL container and a sidecar container. If you are curious, you can find out which Kubernetes node holds a given pod.
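A minimal way to check it from the Kubernetes side is kubectl's wide output, whose NODE column shows where each pod is scheduled. The grep pattern assumes the operator's default pod naming (e.g. cluster1-proxysql-0); adjust it to your cluster name:

```bash
# The NODE column shows which Kubernetes node each ProxySQL pod is scheduled on.
# Filtering on "proxysql" assumes the operator's default pod naming.
kubectl get pods -o wide | grep proxysql
```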

Log in to it and ask for the running containers. You will see something like this:
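A runtime-agnostic way to get the same answer from the Kubernetes side is to list the containers declared in the pod spec (a sketch; the pod name cluster1-proxysql-0 assumes the operator's default cluster name):

```bash
# List the containers that make up one ProxySQL pod:
# expect the main proxysql container plus its sidecar container(s).
kubectl get pod cluster1-proxysql-0 -o jsonpath='{.spec.containers[*].name}{"\n"}'
```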

Now, what’s the purpose of the sidecar container in this case? To find out whether new PXC nodes (pods) have been added or, on the contrary, whether PXC pods have been removed (due to a scale-down), and to configure ProxySQL accordingly.
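If you want to watch the sidecar at work, you can tail its log and see it poll for PXC pods and adjust the ProxySQL configuration. The container name pxc-monit below is an assumption based on a typical operator deployment; confirm the exact name with the container list from the previous step:

```bash
# Follow the sidecar's log to see it discover PXC pods and reconfigure ProxySQL.
# "pxc-monit" is the assumed sidecar container name; verify it against your pod spec.
kubectl logs -f cluster1-proxysql-0 -c pxc-monit
```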

Adding and Removing PXC Nodes (Pods)

Let’s see it in action. A regular PXC Kubernetes deployment with 3 PXC pods, like this:
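A quick way to list them (the cluster1 prefix is the operator's default cluster name and is an assumption here):

```bash
# List the PXC and ProxySQL pods of the cluster; the three PXC pods should be Running.
kubectl get pods | grep -E 'pxc|proxysql'
```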

Will have the mysql_servers information as follows:
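A sketch of how to query it through ProxySQL's admin interface on port 6032, from inside the proxysql container. The pod name assumes the default cluster name, and the admin credentials come from the operator-managed secrets, so the admin user shown here is an assumption:

```bash
# Query ProxySQL's admin interface for the registered PXC nodes and their host groups.
# Port 6032 is ProxySQL's admin port; the password lives in the operator's secrets.
kubectl exec -it cluster1-proxysql-0 -c proxysql -- \
  mysql -h127.0.0.1 -P6032 -uadmin -p \
  -e "SELECT hostgroup_id, hostname, port, status FROM mysql_servers;"
```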

What do we have?

  • 3 PXC pods
  • 3 ProxySQL pods
  • The 3 PXC pods (or nodes) registered inside ProxySQL
  • And several host groups.

What are those host groups?

ProxySQL is using the native Galera support and has defined a writer host group (hg), a backup writer hg, and a reader hg. Looking back at the server configuration, we have 1 writer and 2 readers, and those same 2 readers are also backup writers.
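That mapping is visible in ProxySQL's mysql_galera_hostgroups table; a sketch, under the same assumptions about pod name and admin credentials as above:

```bash
# Show the Galera-aware host group mapping: writer, backup writer, reader and offline host groups.
kubectl exec -it cluster1-proxysql-0 -c proxysql -- \
  mysql -h127.0.0.1 -P6032 -uadmin -p \
  -e "SELECT writer_hostgroup, backup_writer_hostgroup, reader_hostgroup, offline_hostgroup, max_writers FROM mysql_galera_hostgroups;"
```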

And what are the query rules?
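They live in ProxySQL's mysql_query_rules table. A sketch of how to look at the routing-relevant columns, again assuming the default pod name and the admin credentials from the operator's secrets:

```bash
# Inspect the query rules that route traffic between the writer and reader host groups.
kubectl exec -it cluster1-proxysql-0 -c proxysql -- \
  mysql -h127.0.0.1 -P6032 -uadmin -p \
  -e "SELECT rule_id, active, match_digest, destination_hostgroup, apply FROM mysql_query_rules;"
```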