In this post, we’ll look at scaling Percona XtraDB Cluster with ProxySQL in Docker Swarm.
In my previous post, I showed how to deploy Percona XtraDB Cluster on multiple nodes in a Docker network.
The intention is to be able to start/stop nodes and increase/decrease the cluster size dynamically. This means we need to track running nodes and also have an easy way to connect to the cluster.
So there are two components we need: service discovery to register nodes and ProxySQL to handle incoming traffic.
Service discovery support is already bundled with the Percona XtraDB Cluster Docker images, and I have experimental images for ProxySQL at https://hub.docker.com/r/perconalab/proxysql/.
For multi-node management, we also need an orchestration tool. Docker Swarm is a good place to start: it is simple and provides only basic functionality, but it is enough for this exercise. (For more complicated setups, consider Kubernetes.)
I assume you have Docker Swarm running, but if not, there is good material on how to get it rolling. You also need service discovery running (see http://chunqi.li/2015/11/09/docker-multi-host-networking/ and my previous post).
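For reference, here is a minimal way to bring up a single-node etcd instance for service discovery. This is only a sketch and not a production setup; the flags are from etcd v2, and 10.20.2.4 is the host IP reused in the .env file below:

# Minimal single-node etcd for service discovery (sketch, not production-ready)
etcd --name discovery \
     --advertise-client-urls http://10.20.2.4:2379 \
     --listen-client-urls http://0.0.0.0:2379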
To start a cluster with ProxySQL, we need a docker-compose definition file, docker-compose.yml:
version: '2'
services:
  proxy:
    image: perconalab/proxysql
    networks:
      - front
      - Theistareykjarbunga
    ports:
      - "3306:3306"
      - "6032:6032"
    env_file: .env
  percona-xtradb-cluster:
    image: percona/percona-xtradb-cluster:5.6
    networks:
      - Theistareykjarbunga
    ports:
      - "3306"
    env_file: .env
networks:
  Theistareykjarbunga:
    driver: overlay
  front:
    driver: overlay
For convenience, both proxy and percona-xtradb-cluster share the same environment file (.env):
MYSQL_ROOT_PASSWORD=secret
DISCOVERY_SERVICE=10.20.2.4:2379
CLUSTER_NAME=cluster15
MYSQL_PROXY_USER=proxyuser
MYSQL_PROXY_PASSWORD=s3cret
You can also get both files from https://github.com/percona/percona-docker/tree/master/pxc-56/swarm.
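Since the nodes register themselves at DISCOVERY_SERVICE, it is worth sanity-checking that etcd is reachable from the Docker hosts before starting anything. A quick check, assuming etcd's standard /health endpoint:

curl -s http://10.20.2.4:2379/health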
To start both the cluster node and proxy:
docker-compose up -d
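If you want to verify that the overlay networks were created, you can list them (Compose prefixes network names with the project name, which I assume is swarm in this example):

docker network ls --filter driver=overlay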
We can start as many Percona XtraDB Cluster nodes as we want:
docker-compose scale percona-xtradb-cluster=5
The command above will make sure that five nodes are running.
We can check it with docker ps:
docker ps
CONTAINER ID  IMAGE                               COMMAND           CREATED         STATUS         PORTS                                                 NAMES
725f5f2699cc  percona/percona-xtradb-cluster:5.6  "/entrypoint.sh " 34 minutes ago  Up 38 minutes  4567-4568/tcp, 10.20.2.66:10284->3306/tcp             smblade04/swarm_percona-xtradb-cluster_5
1c85ea1367e8  percona/percona-xtradb-cluster:5.6  "/entrypoint.sh " 34 minutes ago  Up 38 minutes  4567-4568/tcp, 10.20.2.66:10285->3306/tcp             smblade04/swarm_percona-xtradb-cluster_2
df87e9c1342e  percona/percona-xtradb-cluster:5.6  "/entrypoint.sh " 34 minutes ago  Up 38 minutes  4567-4568/tcp, 10.20.2.66:10283->3306/tcp             smblade04/swarm_percona-xtradb-cluster_4
cbb82f7a9789  perconalab/proxysql                 "/entrypoint.sh " 36 minutes ago  Up 40 minutes  10.20.2.66:3306->3306/tcp, 10.20.2.66:6032->6032/tcp  smblade04/swarm_proxy_1
59e049fe22a9  percona/percona-xtradb-cluster:5.6  "/entrypoint.sh " 36 minutes ago  Up 40 minutes  4567-4568/tcp, 10.20.2.66:10282->3306/tcp             smblade04/swarm_percona-xtradb-cluster_1
0921a2611c3c  percona/percona-xtradb-cluster:5.6  "/entrypoint.sh " 37 minutes ago  Up 42 minutes  4567-4568/tcp, 10.20.2.5:32774->3306/tcp              centos/swarm_percona-xtradb-cluster_3
We can see that Docker scheduled the containers on two different nodes. The ProxySQL container is smblade04/swarm_proxy_1, and the connection point is 10.20.2.66:6032.
To register the Percona XtraDB Cluster nodes in ProxySQL, we can just execute the following:
docker exec -it smblade04/swarm_proxy_1 add_cluster_nodes.sh
The script will connect to the service discovery at DISCOVERY_SERVICE (defined in the .env file) and register the nodes in ProxySQL.
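Under the hood, registering a node boils down to inserting it into ProxySQL's mysql_servers admin table and loading the configuration to runtime. If you ever need to add a node by hand, something like this should work (a sketch only; the node IP is taken from the stats output below, and hostgroup 0 is assumed):

mysql -h10.20.2.66 -P6032 -uadmin -padmin -e "
  INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (0, '10.0.14.2', 3306);
  LOAD MYSQL SERVERS TO RUNTIME;
  SAVE MYSQL SERVERS TO DISK;"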
To check that they are all running:
mysql -h10.20.2.66 -P6032 -uadmin -padmin
MySQL [(none)]> select * from stats.stats_mysql_connection_pool;
+-----------+-----------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| hostgroup | srv_host  | srv_port | status | ConnUsed | ConnFree | ConnOK | ConnERR | Queries | Bytes_data_sent | Bytes_data_recv | Latency_ms |
+-----------+-----------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| 0         | 10.0.14.2 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 212        |
| 0         | 10.0.14.4 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 155        |
| 0         | 10.0.14.5 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 136        |
| 0         | 10.0.14.6 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 123        |
| 0         | 10.0.14.7 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 287        |
+-----------+-----------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
We can connect to the cluster using the ProxySQL endpoint:
mysql -h10.20.2.66 -uproxyuser -ps3cret
mysql -h10.20.2.66 -P3306 -uproxyuser -ps3cret -e "SELECT @@hostname"
+--------------+
| @@hostname   |
+--------------+
| 59e049fe22a9 |
+--------------+
mysql -h10.20.2.66 -P3306 -uproxyuser -ps3cret -e "SELECT @@hostname"
+--------------+
| @@hostname   |
+--------------+
| 725f5f2699cc |
+--------------+
We can see that we connect to a different node every time.
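To see the load balancing in action without retyping the command, you can run the query in a loop:

# Each connection may land on a different cluster node
for i in $(seq 1 5); do
  mysql -h10.20.2.66 -P3306 -uproxyuser -ps3cret -N -e "SELECT @@hostname"
done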
Now if we want to get crazy and make sure we have ten Percona XtraDB Cluster nodes running, we can execute the following:
docker-compose scale percona-xtradb-cluster=10
Creating and starting swarm_percona-xtradb-cluster_6 ...
Creating and starting swarm_percona-xtradb-cluster_7 ...
Creating and starting swarm_percona-xtradb-cluster_8 ...
Creating and starting swarm_percona-xtradb-cluster_9 ...
Creating and starting swarm_percona-xtradb-cluster_10 ...
And Docker Swarm will make sure ten nodes are running.
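A quick way to confirm the node count is to count the matching container names:

docker ps --filter "name=percona-xtradb-cluster" --format "{{.Names}}" | wc -l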
I hope this demonstrates that you can easily start playing with multi-nodes using Percona XtraDB Cluster. In the next post, I will show how to use Percona XtraDB Cluster with Kubernetes.
Vadim,
It is not totally clear to me how we handle initialization of a new cluster versus stopping all the nodes and bringing an existing cluster back up.
In PXC we have --wsrep-new-cluster to set up a new cluster versus adding nodes to an existing cluster. I'm not sure how that works in this case.
Peter,
This is where the service discovery comes into play.
When a node starts, it checks whether there are any records for CLUSTER_NAME in the service discovery. If there are records, the node tries to join the existing nodes by their IP addresses. If there are no records for CLUSTER_NAME, the node initializes a new cluster. All of this logic is handled in the image's entrypoint script, pxc-entry.sh.
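In shell pseudocode, that decision looks roughly like this (a sketch based on the description above, not the actual contents of pxc-entry.sh; the etcd key path is hypothetical):

# Hypothetical sketch of the bootstrap decision in the entrypoint
ADDRS=$(curl -s "http://${DISCOVERY_SERVICE}/v2/keys/pxc/${CLUSTER_NAME}?recursive=true" \
        | jq -r '.node.nodes[]?.key' | sed 's!.*/!!' | paste -sd, -)
if [ -z "$ADDRS" ]; then
    # No records for CLUSTER_NAME: bootstrap a new cluster
    exec mysqld --wsrep-new-cluster
else
    # Records exist: join the running nodes by their IP addresses
    exec mysqld --wsrep_cluster_address="gcomm://${ADDRS}"
fi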
What about two hosts? Would it be possible to configure this so that it uses two (real) machines and deploys only a limited number of containers on each of them?
Example:
Server 1
- Percona 1
- Percona 2
Server 2
- Percona 3
- Percona 4
Also: how are the database files handled? In the "normal" Percona image, there are configuration options where I can mount an external directory for them. I can also do some other things, like adding a my.cnf file. Do those options work here too?
Is there any better documentation for this setup? It does not work correctly on the latest Docker 1.13. Everything starts, but none of the auto-discovery works.
Hi,
Thanks for this article, it was a great help for me.
But I still don't get the point of ProxySQL. Couldn't we just let Docker Swarm handle load balancing?
+1 for BlindPenguin's request to have database scaling between two hosts with replicated data.