Scaling Percona XtraDB Cluster with ProxySQL in Kubernetes

In my previous post I looked at how to run Percona XtraDB Cluster in the Docker Swarm orchestration system, and today I want to review how we can do it in the more advanced Kubernetes environment.

There are already some existing posts on this topic from Patrick Galbraith and Raghavendra Prabhu. For this post, I will show how to run as many nodes as I want, see what happens if we add or remove nodes dynamically, and handle incoming traffic with ProxySQL (which routes queries to one of the working nodes). I also want to see if we can reuse the ReplicationController infrastructure from Kubernetes to scale the cluster to a given number of nodes.

These goals should be easy to accomplish using our existing Docker images for Percona XtraDB Cluster, and I will again rely on a running discovery service (right now the images only work with etcd).

The process of setting up Kubernetes yourself can be pretty involved (but it can be done; the Kubernetes documentation shows how). It is much more convenient to use a cloud that supports it already (Google Cloud, for example). I will use Microsoft Azure and follow its Kubernetes deployment guide. Unfortunately, the scripts from the guide install a previous version of Kubernetes (1.1.2), which does not allow me to use ConfigMap. To compensate, I will duplicate the ENVIRONMENT variable definitions for the Percona XtraDB Cluster and ProxySQL pods. This can be done more elegantly in recent versions of Kubernetes.

After getting Kubernetes running, starting Percona XtraDB Cluster with ProxySQL is easy using the following pxc.yaml file (which you can also find with our Docker sources).
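The file itself did not survive in this copy of the post; a minimal sketch of what such a pxc.yaml could contain is below. The controller name, label, image tag, and the ETCD_HOST value are all assumptions for illustration — the real file ships with the Percona Docker sources:

```yaml
# Hypothetical sketch of pxc.yaml -- names and values are illustrative
apiVersion: v1
kind: ReplicationController
metadata:
  name: pxc-rc
spec:
  replicas: 3            # three Percona XtraDB Cluster nodes to start
  selector:
    app: pxc
  template:
    metadata:
      labels:
        app: pxc
    spec:
      containers:
        - name: pxc
          image: percona/percona-xtradb-cluster
          env:
            - name: ETCD_HOST   # address of the etcd discovery service
              value: "10.0.0.1:2379"
```

Since this Kubernetes version predates ConfigMap, the same env entries would be repeated in the ProxySQL pod definition, as noted above.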

Here is the command to start the cluster:
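The command itself is missing from this copy; with a definition file like pxc.yaml, it would look something like this (a sketch, assuming the standard kubectl workflow):

```shell
# Create all objects defined in pxc.yaml: the PXC ReplicationController,
# the ProxySQL pod, and their services
kubectl create -f pxc.yaml
```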

The command will start three pods with Percona XtraDB Cluster and one pod with ProxySQL.

Percona XtraDB Cluster nodes will register themselves in the discovery service and we will need to add them to ProxySQL (it can be done automatically with scripting, for now it is a manual task):
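The manual step would look roughly like this: connect to the ProxySQL admin interface on port 6032 and insert the node addresses into the `mysql_servers` table. The endpoint placeholder and node IPs are illustrative — the real addresses come from the etcd discovery service:

```shell
# Register PXC nodes with ProxySQL via its admin interface (port 6032).
# Replace <proxysql-endpoint> and the node IPs with your actual values.
mysql -h <proxysql-endpoint> -P6032 -uadmin -padmin <<'SQL'
INSERT INTO mysql_servers (hostgroup_id, hostname, port)
VALUES (0, '10.244.1.5', 3306),
       (0, '10.244.1.6', 3306),
       (0, '10.244.1.7', 3306);
LOAD MYSQL SERVERS TO RUNTIME;  -- activate the new configuration
SAVE MYSQL SERVERS TO DISK;     -- persist it across ProxySQL restarts
SQL
```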

Increasing the cluster size can be done with the scale command:
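The scale command is not shown in this copy; it would look something like this (`pxc-rc` is an assumed controller name taken from the sketch above):

```shell
# Scale the PXC ReplicationController to six nodes. Scoping the command
# to a specific controller avoids also scaling the ProxySQL controller.
kubectl scale --replicas=6 ReplicationController/pxc-rc
```

New pods register themselves in etcd as they come up, but they still need to be added to ProxySQL's `mysql_servers` table as shown earlier.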

You can connect to the cluster using a single connection point provided by the ProxySQL service. You can find its address this way:
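A sketch of the lookup, assuming the service is named `proxysql` in the yaml definition:

```shell
# Show the ProxySQL service, including its endpoint IP and exposed ports
kubectl describe service proxysql
```

From there you can connect with a standard MySQL client, e.g. `mysql -h <endpoint-ip> -P3306 -uproxyuser -p` (the `proxyuser` account is created by the setup scripts).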

It exposes the endpoint IP address and two ports: 3306 for the MySQL connection and 6032 for the ProxySQL admin connection.

So you can see that scaling Percona XtraDB Cluster with ProxySQL in Kubernetes is pretty easy. In the next post, I want to run benchmarks in different Docker network environments.

Comments (5)

  • lex

    Can You post yaml’s for etcd ?

    June 24, 2016 at 7:08 am
  • Vadim Tkachenko June 24, 2016 at 12:01 pm
  • lex

    Thank You for explanation. So You use plain docker container in conjunction with kubernetes pods, I’ve managed to launch etcd as a pod in kubernetes but unfortunately the script inside proxy pod did not work correctly. It turned out to be wrong port for communication. After updating curl url with 2379 port all worked well.
    Thank You for publishing those images and configs. They are very handy in evaluating perconadb cluster performance and reliability inside Kubernetes environment.

    June 24, 2016 at 1:00 pm
  • lex

    One remark, Your script adds proxyuser with ip from the network which proxysql resides in. This works if You have one node in kubernetes. But if You have additional nodes, they will have their own network ranges (in bare metal environment with flannel networking) and proxyuser will not be able to authenticate on percona pods located on other nodes.

    June 27, 2016 at 6:13 am
  • Francisco Andrade

    Just an observation, when you run the scale command from your example, you’re scaling both proxysql and pxc servers to 6.

    To scale only the pxc servers you can use:
    $ kubectl scale --replicas=4 ReplicationController/pxc-rc

    November 30, 2017 at 11:26 am