
Running Percona XtraDB Cluster in a multi-host Docker network

June 10, 2016 | Posted In: Docker, MySQL


In this post, I’ll discuss how to run Percona XtraDB Cluster in a multi-host Docker network.

With our release of Percona XtraDB Cluster 5.7 beta, we’ve also decided to provide Docker images for both Percona XtraDB Cluster 5.6 and Percona XtraDB Cluster 5.7.

Starting one node is very easy, and not that different from starting a Percona Server image. The only extra requirement is to have the CLUSTER_NAME variable defined. The startup command might look like this:
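A minimal sketch of such a command, assuming the percona/percona-xtradb-cluster image; the password values, cluster name, and container name are placeholders:

    # Start a single Percona XtraDB Cluster node; CLUSTER_NAME is required
    docker run -d -p 3306:3306 \
      -e MYSQL_ROOT_PASSWORD=secret \
      -e CLUSTER_NAME=cluster1 \
      -e XTRABACKUP_PASSWORD=xtrabackup_secret \
      --name pxc-node1 \
      percona/percona-xtradb-cluster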

You might also notice that we can optionally define an XTRABACKUP_PASSWORD, which the xtrabackup@localhost user will use for the xtrabackup SST method.

Running Percona XtraDB Cluster in single-node mode kind of defeats the purpose of having a cluster. With our Docker images, we tried to address the following tasks:

  1. Run in a multi-host environment (followed by running in Docker Swarm and Kubernetes)
  2. Start as many nodes in the cluster as we want
  3. Register all nodes in the service discovery, so that the client can see how many nodes are running and their status
  4. Integrate with ProxySQL

Let’s review these points one by one.

This is where a multi-host Docker network becomes helpful. Recent Docker versions come with an overlay network driver, which we will use to run a virtual network over multiple boxes. Setting up the Docker overlay network is out of scope for this post, but check out this great introduction material on how to get it working.

With that running, we can create an overlay network for our cluster:
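Something like this, where the network name cluster1_net is a placeholder of my choosing:

    # Create an overlay network that spans the Docker hosts
    docker network create -d overlay cluster1_net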

Then we can start containers:
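For example, run on any of the participating hosts (the values are the same placeholders as above):

    # Start a cluster node attached to the overlay network
    docker run -d -p 3306 --net=cluster1_net \
      -e MYSQL_ROOT_PASSWORD=secret \
      -e CLUSTER_NAME=cluster1 \
      -e XTRABACKUP_PASSWORD=xtrabackup_secret \
      --name pxc-node1 \
      percona/percona-xtradb-cluster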

The cool bit is that we can start Percona XtraDB Cluster nodes on any host in the network, and they will communicate over the virtual network.

If you want to stay within a single Docker host (during testing, for example), you can still create a bridge network and use it in a single-host environment.
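A sketch, again with a placeholder network name:

    # Single-host alternative: a user-defined bridge network
    docker network create -d bridge cluster1_bridge_net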

The startup commands above will run… almost. The problem is that every additional node needs to know the address of the running cluster.

To address this (if you prefer a manual process), we introduced the CLUSTER_JOIN variable, which should point to the IP address of one of the running nodes (or be left empty to start a new cluster).

In this case, getting the startup commands above to work might look like this:
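A sketch under the same placeholder names; 10.0.0.2 stands in for whatever IP the first node actually received on the overlay network:

    # Bootstrap the first node: CLUSTER_JOIN is left empty
    docker run -d --net=cluster1_net \
      -e MYSQL_ROOT_PASSWORD=secret \
      -e CLUSTER_NAME=cluster1 \
      -e CLUSTER_JOIN= \
      --name pxc-node1 \
      percona/percona-xtradb-cluster

    # Each additional node points CLUSTER_JOIN at a running node's IP
    docker run -d --net=cluster1_net \
      -e MYSQL_ROOT_PASSWORD=secret \
      -e CLUSTER_NAME=cluster1 \
      -e CLUSTER_JOIN=10.0.0.2 \
      --name pxc-node2 \
      percona/percona-xtradb-cluster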

I think manually tracking IP addresses requires unnecessary extra work, especially if we want to start and stop nodes on the fly. So we also decided to use service discovery, especially since you already need it to run the Docker overlay network. Right now we support the etcd discovery service, but it isn’t a problem to add more (such as Consul).

Starting etcd is also out of scope for this post, but you can read about the procedure in the manual.

When you have etcd service discovery running (on the host 10.20.2.4:2379, for example), you can start the nodes:
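A sketch; DISCOVERY_SERVICE is the variable the image’s entrypoint reads for the etcd endpoint (it also appears in the comments below), and the remaining values are the same placeholders:

    # Start a node that registers itself in etcd
    docker run -d -p 3306 --net=cluster1_net \
      -e MYSQL_ROOT_PASSWORD=secret \
      -e CLUSTER_NAME=cluster1 \
      -e XTRABACKUP_PASSWORD=xtrabackup_secret \
      -e DISCOVERY_SERVICE=10.20.2.4:2379 \
      --name pxc-node1 \
      percona/percona-xtradb-cluster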

The node will register itself in the service discovery and will join the existing cluster named by $CLUSTER_NAME.

There is a convenient way to check all registered nodes:
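For example, by querying etcd’s v2 keys API directly; the pxc-cluster/$CLUSTER_NAME key layout here matches the entrypoint script quoted in the comments below:

    # List the nodes registered for the cluster in etcd
    curl http://10.20.2.4:2379/v2/keys/pxc-cluster/cluster1/?recursive=true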

With this, you can start as many cluster nodes as you want, on any host in the Docker network. Now it is convenient to put an SQL proxy in front of the cluster. In this case, we will use ProxySQL (I will show that in a follow-up post).

In later posts, we will also review how to run Percona XtraDB Cluster nodes in an orchestration environment (like Docker Swarm and Kubernetes).

Vadim Tkachenko

Vadim Tkachenko co-founded Percona in 2006 and serves as its Chief Technology Officer. Vadim leads Percona Labs, which focuses on technology research and performance evaluations of Percona’s and third-party products. Percona Labs designs no-gimmick tests of hardware, filesystems, storage engines, and databases that surpass the standard performance and functionality scenario benchmarks.

Vadim’s expertise in LAMP performance and multi-threaded programming helps optimize MySQL and InnoDB internals to take full advantage of modern hardware. Oracle Corporation and its predecessors have incorporated Vadim’s source code patches into the mainstream MySQL and InnoDB products.

He also co-authored the book High Performance MySQL: Optimization, Backups, and Replication 3rd Edition.

9 Comments

  • For service discovery, rather than depending on an external service like etcd, shouldn’t you just use Docker’s Internal DNS? That is, associate a name or network alias on the Docker run command and provide this name for the CLUSTER_JOIN env on subsequent node deployments?

    The /etc/resolv.conf in running containers now points to Docker’s internal DNS. No need to hard code IPs any more!

  • Kevin,

    My Docker knowledge is still limited, so I am open to ideas.
    How would you run, say, N nodes and register them in DNS, given that containers can start and stop on the fly?

    • With user defined networks, like you are using in this article, you pretty much get DNS for free. That is, you don’t have to register them in DNS, the docker run command does the registration for you when the container starts/stops. Even when the containers are running, you can use the “docker network” command to add/remove containers dynamically to any user defined network.

      See Docker documentation: https://docs.docker.com/engine/userguide/networking/dockernetworks/ and https://docs.docker.com/engine/userguide/networking/work-with-networks/

      It all simply works. Using an external Key-Value store or external DNS container for networking was required before Docker supported networking plugins (like the Overlay plugin). Note that the Overlay plugin requires a Key-Value store like etcd or consul for use by the Docker Daemon, but this is completely hidden from the Docker client. The Key-Value store is used by the Docker Daemon to implement the multi-host Overlay networks.

  • When I start the second Percona node with the command:

    sudo docker run -d -p 3306 --net=percona -e MYSQL_ROOT_PASSWORD=Theistareyk -e CLUSTER_NAME=cluster1 -e XTRABACKUP_PASSWORD=Theistare -e DISCOVERY_SERVICE=12.34.56.78:2379 --name percona2 percona/percona-xtradb-cluster

    I get errors in Docker container logs:

    2016-12-21T15:35:56.691657Z 0 [ERROR] WSREP: gcs/src/gcs_core.cpp:gcs_core_open():208: Failed to open backend connection: -131 (State not recoverable)
    2016-12-21T15:35:56.691821Z 0 [ERROR] WSREP: gcs/src/gcs.cpp:gcs_open():1407: Failed to open channel 'cluster1' at 'gcomm://10.0.0.3,10.0.0.3': -131 (State not recoverable)
    2016-12-21T15:35:56.691849Z 0 [ERROR] WSREP: gcs connect failed: State not recoverable
    2016-12-21T15:35:56.691861Z 0 [ERROR] WSREP: wsrep::connect(gcomm://10.0.0.3,10.0.0.3) failed: 7
    2016-12-21T15:35:56.691866Z 0 [ERROR] Aborting

    As I understand it, the new node tries to connect to nodes with IP 10.0.0.3,10.0.0.3, but I have only one node and its IP is not 10.0.0.3. How can I connect to the master node without this error?

    • I have the same issue with this docker image.
      There is a mistake in the pxc-entry.sh script,
      at line 135:
      i=$(curl http://$DISCOVERY_SERVICE/v2/keys/pxc-cluster/queue/$CLUSTER_NAME | jq -r '.node.nodes[].value')
      should be
      i=(curl http://$DISCOVERY_SERVICE/v2/keys/pxc-cluster/queue/$CLUSTER_NAME | jq -r '.node.nodes[].value')
      and line 139:
      i=$(curl http://$DISCOVERY_SERVICE/v2/keys/pxc-cluster/$CLUSTER_NAME/?quorum=true | jq -r '.node.nodes[]?.key' | awk -F'/' '{print $(NF)}')
      should be
      i=(curl http://$DISCOVERY_SERVICE/v2/keys/pxc-cluster/$CLUSTER_NAME/?quorum=true | jq -r '.node.nodes[]?.key' | awk -F'/' '{print $(NF)}')

      The script wants to assign an array to the variable i in shell format, like this: i=(10.0.0.2 10.0.0.3 ….)
      But with the $ added, like this: i=$(10.0.0.2 10.0.0.3 ….), the format is not correct on my system, and maybe on your system too.

  • Hi! I have only used Docker for a short time. But what about my solution for running a Percona cluster in Docker Swarm?

    I create the first node for bootstrap: docker service create --network skynet -e "CLUSTER_NAME=mycluster" -e "MYSQL_ROOT_PASSWORD=PassWord123" --name mysql_init percona/percona-xtradb-cluster:5.7.16

    Then I run the second node and join it to the cluster: docker service create --network skynet -e "CLUSTER_NAME=mycluster" -e "MYSQL_ROOT_PASSWORD=PassWord123" -e "CLUSTER_JOIN=mysql_init,mysql" --name mysql percona/percona-xtradb-cluster:5.7.16

    Then I no longer need the first node, so I must remove it: docker service rm mysql_init
    And now I can scale my galera cluster up and down: docker service scale mysql=3

      • Hi Vadim,

        Any luck with it? I played a little with XtraDB Cluster in the latest (17.09) Docker Swarm and couldn’t find a way to avoid having a bootstrap node. The only difference between the service definitions for bootstrap and non-bootstrap nodes is the "--wsrep-new-cluster" flag.
