Here and in coming posts I am going to cover the main features of Percona XtraDB Cluster. The first feature is High Availability.
But before jumping to HA, let’s review the general architecture of Percona XtraDB Cluster.
1. The Cluster consists of Nodes. The recommended configuration is at least 3 nodes, but you can run it with 2 nodes as well.
2. Each Node is a regular MySQL / Percona Server setup. The point is that you can convert your existing MySQL / Percona Server into a Node and roll out the Cluster using it as a base. Or the other way around – you can detach a Node from the Cluster and use it as a regular standalone server.
3. Each Node contains a full copy of the data. This defines XtraDB Cluster behavior in many ways, and obviously there are benefits and drawbacks.
The benefit of such an approach is that reads are served locally on any node, and no data is lost when a single node goes down; the drawback is that every write has to be propagated to all nodes.
This basically defines how Percona XtraDB Cluster can be used for High Availability.
Basic setup: you run a 3-node cluster.
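As an illustration, here is a minimal my.cnf sketch of the wsrep settings for one node of such a setup. The node name, IP addresses, and library path below are examples and will differ on your system:

# wsrep settings for one node of a hypothetical 3-node cluster
# (node name, IPs, and provider path are example values)
[mysqld]
wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_cluster_name=my_cluster
wsrep_cluster_address=gcomm://192.168.0.1,192.168.0.2,192.168.0.3
wsrep_node_name=node1
wsrep_sst_method=xtrabackup
# Galera replication requires row-based binlog events
binlog_format=ROW
# interleaved auto-increment lock mode, required by Galera
innodb_autoinc_lock_mode=2
default_storage_engine=InnoDB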
The Percona XtraDB Cluster will continue to function when you take any of the nodes down.
At any point in time you can shut down any Node to perform maintenance or make configuration changes.
Or a Node may crash or become unreachable over the network. The Cluster will continue to work, and you can keep running queries on the working nodes.
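You can verify from any surviving node that the cluster has re-formed without the failed member. A quick check, with illustrative values for a 3-node cluster that has lost one node:

-- run on any working node; the values in comments are illustrative
SHOW STATUS LIKE 'wsrep_cluster_size';    -- 2 (one node is gone)
SHOW STATUS LIKE 'wsrep_cluster_status';  -- Primary (cluster is operational)
SHOW STATUS LIKE 'wsrep_ready';           -- ON (this node accepts queries)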
The biggest question here is: what will happen when the Node joins the cluster back, and there were changes to the data while the node was down?
Let’s look at this in detail.
There are two ways a Node may use when it joins the cluster: State Snapshot Transfer (SST) and Incremental State Transfer (IST).
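IST is only possible when the donor node still holds the writesets the joining node missed in its writeset cache (gcache); otherwise the node falls back to a full SST. The cache size is tunable through the provider options. A sketch, where the 1G value is just an example, not a recommendation:

# my.cnf: enlarge the writeset cache so a returning node can use IST
# instead of a full SST after longer outages (example value)
wsrep_provider_options="gcache.size=1G"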
The downside of the mysqldump and rsync SST methods is that your cluster becomes READ-ONLY for the time it takes to copy the data from one node to another (SST applies the FLUSH TABLES WITH READ LOCK command).
Xtrabackup SST, in contrast, does not require the READ LOCK for the full time, only for syncing the .frm files (the same as with a regular backup).
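Selecting the SST method is a one-line configuration change. A sketch, assuming a hypothetical sstuser account that xtrabackup uses to connect on the donor:

# my.cnf on each node: use xtrabackup for SST instead of
# the blocking mysqldump/rsync methods
wsrep_sst_method=xtrabackup
# credentials for xtrabackup on the donor (user/password are examples)
wsrep_sst_auth=sstuser:sstpassword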
You can monitor the current state of a Node by running SHOW STATUS LIKE 'wsrep_local_state_comment'; when the value is ‘Synced (6)’, the node is ready to handle traffic.
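For example (the output shown is illustrative; a node still catching up would report a state such as ‘Donor/Desynced’ or ‘Joining’ instead):

-- check whether this node is ready to serve traffic
SHOW STATUS LIKE 'wsrep_local_state_comment';
-- +---------------------------+------------+
-- | Variable_name             | Value      |
-- +---------------------------+------------+
-- | wsrep_local_state_comment | Synced (6) |
-- +---------------------------+------------+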