The Percona Kubernetes Operator for Percona XtraDB Cluster (PXC) comes with ProxySQL as part of the deal. And to be honest, the behavior of ProxySQL itself is pretty much the same as in a regular, non-Kubernetes deployment. So why bother writing a blog post about it? Because what happens around ProxySQL in the context of the operator is actually interesting.
ProxySQL is deployed in its own pods (which can be scaled just like the PXC pods can). Each ProxySQL pod runs the ProxySQL container itself plus sidecar containers. If you are curious, you can find out which node holds a given pod by running:
kubectl describe pod cluster1-proxysql-0 | grep Node:
Node:         ip-192-168-37-111.ec2.internal/192.168.37.111
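As for scaling, the number of ProxySQL pods is controlled through the custom resource rather than by touching the pods directly. A minimal sketch, assuming the cluster is called cluster1, lives in the pxc namespace (as in the outputs below), and that spec.proxysql.size in the custom resource controls the ProxySQL pod count:

# Hedged sketch: grow ProxySQL from 3 to 5 pods by patching the
# PerconaXtraDBCluster custom resource (short name: pxc).
kubectl -n pxc patch pxc cluster1 --type merge \
  -p '{"spec":{"proxysql":{"size":5}}}'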
Log in to the node you just found and list the running containers. You will see something like this:
[root@ip-192-168-37-111 ~]# docker ps | grep -i proxysql
d63c55d063c5   percona/percona-xtradb-cluster-operator   "/entrypoint.sh /usr…"   2 hours ago   Up 2 hours   k8s_proxysql-monit_cluster1-proxysql-0_pxc_631a2c34-5de2-4c0f-b02e-cb077df4ee13_0
d75002a3847e   percona/percona-xtradb-cluster-operator   "/entrypoint.sh /usr…"   2 hours ago   Up 2 hours   k8s_pxc-monit_cluster1-proxysql-0_pxc_631a2c34-5de2-4c0f-b02e-cb077df4ee13_0
e34d551594a8   percona/percona-xtradb-cluster-operator   "/entrypoint.sh /usr…"   2 hours ago   Up 2 hours   k8s_proxysql_cluster1-proxysql-0_pxc_631a2c34-5de2-4c0f-b02e-cb077df4ee13_0
Now, what is the purpose of the sidecar containers in this case? They watch for changes in the set of PXC nodes (pods): if new ones appear, or if existing ones are removed (due to a scale-down), they reconfigure ProxySQL accordingly.
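Conceptually (and only conceptually; this is not the operator's actual code), you can picture the sidecar's job as a loop that resolves the PXC headless service and reacts when the set of addresses changes:

#!/bin/bash
# Simplified, hypothetical sketch of the sidecar's discovery loop.
# Assumes the headless service cluster1-pxc in the pxc namespace; the
# real sidecar also handles users, hostgroups, failure cases, etc.
SERVICE="cluster1-pxc.pxc.svc.cluster.local"
known=""
while true; do
  # Each A record of the headless service is one PXC pod
  current=$(getent hosts "$SERVICE" | awk '{print $1}' | sort)
  if [ "$current" != "$known" ]; then
    echo "PXC pod set changed, reconfiguring ProxySQL..."
    # here: update mysql_servers and LOAD MYSQL SERVERS TO RUNTIME
    known="$current"
  fi
  sleep 5
done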
Adding and Removing PXC Nodes (Pods)
Let’s see it in action. Take a regular PXC Kubernetes deployment with three PXC pods:
kubectl get pod
NAME                  READY   STATUS    RESTARTS   AGE
cluster1-proxysql-0   3/3     Running   0          106m
cluster1-proxysql-1   3/3     Running   0          106m
cluster1-proxysql-2   3/3     Running   0          106m
cluster1-pxc-0        1/1     Running   0          131m
cluster1-pxc-1        1/1     Running   0          128m
cluster1-pxc-2        1/1     Running   0          129m
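To see how those pods are registered, connect to the ProxySQL admin interface (port 6032) from inside one of the ProxySQL pods. A hedged example, assuming the default proxyadmin user, whose password lives in the operator's secrets and may differ in your deployment:

# Hedged sketch: open the ProxySQL admin interface on the first pod
kubectl -n pxc exec -it cluster1-proxysql-0 -c proxysql -- \
  mysql -h127.0.0.1 -P6032 -uproxyadmin -p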
Once connected to the admin interface, the mysql_servers information looks like the following:
mysql> select hostgroup_id,hostname,status, weight from runtime_mysql_servers;
+--------------+---------------------------------------------------+--------+--------+
| hostgroup_id | hostname                                          | status | weight |
+--------------+---------------------------------------------------+--------+--------+
| 11           | cluster1-pxc-2.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
| 10           | cluster1-pxc-1.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
| 10           | cluster1-pxc-0.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
| 12           | cluster1-pxc-0.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
| 12           | cluster1-pxc-1.cluster1-pxc.pxc.svc.cluster.local | ONLINE | 1000   |
+--------------+---------------------------------------------------+--------+--------+
5 rows in set (0.00 sec)
What do we have?
- 3 PXC pods
- 3 ProxySQL pods
- The 3 PXC pods (or nodes) registered inside ProxySQL
- And several host groups.
What are those host groups?
mysql> select * from runtime_mysql_galera_hostgroups\G
*************************** 1. row ***************************
       writer_hostgroup: 11
backup_writer_hostgroup: 12
       reader_hostgroup: 10
      offline_hostgroup: 13
                 active: 1
            max_writers: 1
  writer_is_also_reader: 2
max_transactions_behind: 100
                comment: NULL
1 row in set (0.01 sec)
ProxySQL is using its native Galera support and has defined a writer hostgroup (11), a backup writer hostgroup (12), and a reader hostgroup (10). Looking back at the server configuration, we have one writer, two readers, and those same two readers are also backup writers. Note that writer_is_also_reader is set to 2, which means only the backup writers are also placed in the reader hostgroup; that is why the current writer (cluster1-pxc-2) does not show up in hostgroup 10.
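The operator configures all of this automatically, but for reference, the equivalent manual setup on a plain ProxySQL 2.x instance would look roughly like this (a sketch built from the values shown above, not the operator's own code):

-- Hypothetical manual equivalent of the operator's hostgroup setup
INSERT INTO mysql_galera_hostgroups
  (writer_hostgroup, backup_writer_hostgroup, reader_hostgroup,
   offline_hostgroup, active, max_writers, writer_is_also_reader,
   max_transactions_behind)
VALUES (11, 12, 10, 13, 1, 1, 2, 100);
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;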
And what are the query rules?
mysql> select rule_id, username, match_digest, active, destination_hostgroup from runtime_mysql_query_rules;
+---------+--------------+---------------------+--------+-----------------------+
| rule_id | username     | match_digest        | active | destination_hostgroup |
+---------+--------------+---------------------+--------+-----------------------+
| 1       | clustercheck | ^SELECT.*FOR UPDATE | 1      | 11                    |
| 2       | clustercheck | ^SELECT             | 1      | 10                    |
| 3       | monitor      | ^SELECT.*FOR UPDATE | 1      | 11                    |
| 4       | monitor      | ^SELECT             | 1      | 10                    |
| 5       | root         | ^SELECT.*FOR UPDATE | 1      | 11                    |
| 6       | root         | ^SELECT             | 1      | 10                    |
| 7       | xtrabackup   | ^SELECT.*FOR UPDATE | 1      | 11                    |
| 8       | xtrabackup   | ^SELECT             | 1      | 10                    |
+---------+--------------+---------------------+--------+-----------------------+
8 rows in set (0.00 sec)
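With these rules in place, you can verify the read/write split from any client that reaches ProxySQL. A hedged example, assuming the cluster1-proxysql service created by the operator and the root password stored in the operator's secrets:

# A plain SELECT matches "^SELECT" and is routed to the reader
# hostgroup (10):
mysql -h cluster1-proxysql -uroot -p -e "SELECT @@hostname"

# A locking read matches "^SELECT.*FOR UPDATE" and goes to the writer
# hostgroup (11), as does any statement that matches no rule:
mysql -h cluster1-proxysql -uroot -p -e "SELECT @@hostname FOR UPDATE"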