Percona Live: Data Performance Conference 2016

April 18-21, 2016

Santa Clara, California

Clusternaut: Orchestrating Percona XtraDB Cluster with Kubernetes


19 April, 3:50 PM - 4:40 PM @ Ballroom D
50-minute conference talk


This talk is about orchestrating Percona XtraDB Cluster (PXC) nodes atop Google Container Engine (GKE) with Kubernetes. PXC provides synchronous replication among MySQL nodes through the WSREP (writeset replication) API and the Galera plugin that implements it, using extended virtual synchrony (EVS) for group communication and configuration. While PXC can be run in isolation, GKE provides other architectural elements that are vital in this design, such as fluentd for logging, etcd for coordination, and SkyDNS for DNS.

Key elements of the talk:

a) Details of PXC and the synchronous replication it provides while ensuring ACID compliance with MVCC. EVS will also be described, as will its CAP limitations. Finally, existing deployment strategies for PXC will be covered.

b) The Docker image built for PXC. The design is intended to be flexible and extensible, building either from git or from release packages.

c) The initial docker-compose setup, and its design and porting to Kubernetes. Docker-compose has been used for a while to bring up an N-node cluster with minimal configuration. Some elements of this design cannot be used with Kubernetes as is, so the details of the port will be discussed:

i) Each PXC node goes into a Pod. The same Pod may also contain other optional services such as xinetd or haproxy. While Pods alone may be sufficient, they would not make full use of Kubernetes; hence, Replication Controllers (RCs) are used to control Pod placement and lifetimes. The Pod and RC configuration will be discussed here.

ii) The architecture and bootstrapping of the cluster. PXC is a master-less cluster that requires a bootstrapped node for the others to connect to in order to form the cluster. Kubernetes does not allow direct linking among containers, but it does provide service endpoints: a 'cluster' service endpoint is created for cluster group communication and client connections.
The service endpoint also provides load balancing and high availability as desirable side effects. This addresses the agnostic approach and allows simpler, more elegant bootstrapping of PXC (a strong benefit of deployment with Kubernetes over a conventional PXC deployment).

iii) The database itself is mounted through volumes, which both Docker and Kubernetes support. This also provides persistence and separates the data from the design.

iv) Dynamic generation of the JSON Pod configuration is required so that certain runtime elements can be injected into it.

Finally, future work will be discussed: benchmarking, application Pods, CAP testing (akin to Jepsen), integration with Apache Mesos, and so on. The Go code for this is already up and running at



Raghavendra Prabhu

Software Engineer, Distributed Systems, Yelp


Raghavendra Prabhu works as a Software Engineer on the Distributed Systems team at Yelp. Prior to that, he was the Product Lead of the Galera-based Percona XtraDB Cluster (PXC) at Percona, where he also worked intermittently on XtraBackup and Percona Server. He joined Percona in the fall of 2011. Before that, he worked at Yahoo! SDC, Bangalore for three years as a Systems Engineer, primarily dealing with databases (MySQL and Yahoo Sherpa/PNUTS) and configuration management. Raghavendra's main interests include databases, virtualization and containers, distributed systems, and operating systems, including the Linux kernel. He also likes to contribute, and has contributed code upstream to several FOSS projects; for more details on that, visit In his spare time, he likes to read books and technical papers, listen to music, hack on software, go on nature hikes, and explore in general while on vacation. You may also often find him on IRC channels on freenode and/or OFTC.
