
Google Compute Engine adds Percona XtraDB Cluster to click-to-deploy process

Jay Janssen | November 21, 2014 | Posted In: Cloud and MySQL, MySQL, MySQL DBaaS, Percona XtraDB Cluster


I’m happy to announce that Google has added Click-to-deploy functionality for Percona XtraDB Cluster (PXC) on Google Compute Engine. This gives you the ability to rapidly spin up a cluster on Compute Engine virtual machines for experimentation, performance testing, or even production use.

What is Percona XtraDB Cluster?

Percona XtraDB Cluster is a virtually synchronous cluster of MySQL/InnoDB nodes. Unlike conventional MySQL asynchronous replication, which has a specific network topology of master and slaves, PXC’s nodes have no specific topology. Functionally, this means there are no masters and slaves, so you can read and write on any node.

Further, a failure in the cluster does not require re-arranging the replication topology. Instead, clients simply reconnect to another node and continue reading and writing.
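To make that concrete, here is a minimal sketch (the hostnames and test table are hypothetical, not from the original post) of writing through one node and reading the row back through another:

    # write a row via node 1 (hostnames are placeholders)
    mysql -h pxc-node-1 -u root -p -e "INSERT INTO test.t (id) VALUES (42);"

    # read it back via node 2 -- no promotion or failover step is needed
    mysql -h pxc-node-2 -u root -p -e "SELECT id FROM test.t;"

Because the writeset is replicated and certified cluster-wide at commit time, the row shows up on the other nodes almost immediately; for strict read-your-writes across nodes, wsrep_causal_reads can be enabled.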

We have a ton of material about Percona XtraDB Cluster in previous posts, in the PXC manual, and in various webinars. If you want a concentrated one-hour overview of Percona XtraDB Cluster, I’d recommend watching this webinar.

How do I use Click-to-deploy?

Simply visit Google Cloud’s solutions page at https://cloud.google.com/solutions/percona to get started. You are given a simple setup wizard that allows you to choose the size and quantity of nodes you want, the disk storage type and volume, etc. Once you click ‘Deploy Cluster’, your instances will launch and form a cluster automatically with sane default tunings. After that, it’s all up to you what you want to do.
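As a quick sanity check once the instances are up (my addition, not part of the wizard’s output), you can confirm that all the nodes joined and the cluster is in the Primary state using Galera’s wsrep status counters:

    # on any node: cluster size should equal the node count you chose in the wizard
    mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"

    # and the component should report Primary
    mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';"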

Seeing it in action

Once your instances launch, you can add an SSH key to access them (you can also install the Google Cloud SDK). This, handily, creates a new account based on the username in the SSH public key text. Once I add the key, I can simply ssh to the public IP of the instance and I have my own account:
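The original terminal capture didn’t survive here; roughly, the steps look like this (the key path and username are placeholders):

    # generate a key pair locally; the comment/username in the public key
    # becomes the account name on the instance
    ssh-keygen -t rsa -f ~/.ssh/gce-pxc -C jay

    # paste the contents of ~/.ssh/gce-pxc.pub into the instance's SSH keys
    # in the Google Cloud console, then connect:
    ssh -i ~/.ssh/gce-pxc jay@<instance-public-ip>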

Once there, I installed sysbench 0.5 from the Percona apt repo:
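The exact commands were lost in extraction; a sketch of the steps, assuming a 2014-era Ubuntu 14.04 (trusty) image, would be:

    # add Percona's apt repository and key (the key ID and release name are
    # assumptions for the period; adjust for your distro)
    sudo apt-key adv --keyserver keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A
    echo "deb http://repo.percona.com/apt trusty main" | sudo tee /etc/apt/sources.list.d/percona.list
    sudo apt-get update
    sudo apt-get install sysbench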

Now we can load some data and run a quick test:
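The commands themselves are also missing from this copy; with sysbench 0.5’s lua-based workloads, the load-and-test sequence would look roughly like this (table counts, sizes, and thread counts are illustrative, not the post’s exact settings):

    # create the test schema and rows
    sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua \
        --mysql-host=127.0.0.1 --mysql-user=root --mysql-password=... \
        --oltp-tables-count=8 --oltp-table-size=1000000 prepare

    # run an update-only workload; tps is reported at the end
    sysbench --test=/usr/share/doc/sysbench/tests/db/update_index.lua \
        --mysql-host=127.0.0.1 --mysql-user=root --mysql-password=... \
        --oltp-tables-count=8 --oltp-table-size=1000000 \
        --num-threads=32 --max-time=60 --max-requests=0 run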

So, we see ~1200 tps on an update test in our little cluster, not too bad!

What it is not

This wizard is a handy way to get a cluster set up for some experimentation or testing. However, it is not a managed service: after you launch it, you’re responsible for maintenance, tuning, etc. You could use it for production, but you may want some further fine-tuning, operational procedures, etc. All of this is absolutely something Percona can help you with.

Jay Janssen

Jay joined Percona in 2011 after seven years at Yahoo, working in a variety of fields including high-availability architectures, MySQL training, tool building, global server load balancing, multi-datacenter environments, operationalization, and monitoring. He holds a B.S. in Computer Science from Rochester Institute of Technology.

2 Comments

  • Hi,

    Is it possible to add external node(s) to this cluster? I mean, for example, I have a working server somewhere and want to keep it, but I also want to set up an XtraDB cluster on Google Compute Engine and want the old server to be part of the new cluster.

    Many thanks,

    Jozsef Toth

  • @Jozsef,
    There’s nothing magical about the cluster this tool creates: it’s a normal cluster, and you can spin up other machines and have them join it. Once the cluster is initially spun up by this tool, the management (adding/removing nodes, etc.) is up to you.

    The main limitation in what you are asking (and in any PXC cluster) is that all the nodes must be able to communicate with each other. So, provided you can work out the networking/firewall details between all these nodes, you can connect them (a rough config sketch follows after the caveats below).

    One caveat is that commit latency on the GCE cluster would increase by adding an external node. How much depends on how far away the extra node is.

    A second caveat is that this setup tool will create a new cluster from the nodes in GCE. You’d have to add the other node later, which would copy the data from the new cluster to the older node. If you want the data sync to go in the opposite direction, you’d need to bootstrap the old node as a cluster first and then connect the GCE nodes to it.

    A third caveat is what happens when the GCE nodes and your remote node lose network connectivity for whatever reason. What do you want to happen in terms of node failure? I’d consider this carefully, based on how Galera handles node failure and network partitions, before putting something like this into production.
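    As mentioned above, here is a rough sketch of the joining side (all names and addresses are placeholders, not output from the deploy tool):

        # my.cnf on the external node
        [mysqld]
        # must match the cluster name the GCE nodes are using
        wsrep_cluster_name=my-pxc-cluster
        # list the reachable addresses of the existing cluster members
        wsrep_cluster_address=gcomm://gce-node1-ip,gce-node2-ip,gce-node3-ip

    On restart, the node requests a state transfer (SST) from the cluster, so ports 4567 (group communication), 4444 (SST), and 4568 (IST) have to be reachable in both directions, in addition to 3306 for clients.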
