Auto-bootstrapping an all-down cluster with Percona XtraDB Cluster

One of the features added to Percona XtraDB Cluster (PXC) in recent releases is the ability for an existing cluster to auto-bootstrap after an all-node-down event. Suppose you lose power on all nodes simultaneously, or something similar happens to your cluster. Traditionally, this meant manually re-bootstrapping the cluster, but not any more.

How it works

Given the above all-down situation, if all nodes are able to restart and see each other, such that they all agree on what the last state was and that all nodes have returned, then they will decide it is safe to recover the PRIMARY state as a whole.

This requires:

  • All nodes went down hard — that is, a kill -9, kernel panic, server power failure, or similar event
  • All nodes from the last PRIMARY component are restarted and are able to see each other again.


Suppose I have a 3-node cluster in a stable state. I then kill all nodes simultaneously (simulating a power failure or similar event):
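The kill itself can be done with something like the following, run on each node at roughly the same time (a sketch of the simulation, not the only way to do it):

```shell
# Simulate a power failure: hard-kill every mysqld so it has no chance
# to clean up -- the unclean exit is what leaves gvwstate.dat behind.
# Run on each node at roughly the same time.
killall -9 mysqld
```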

I can see that each node maintained a state file in its datadir called ‘gvwstate.dat’. This contains the last known view of the cluster:
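The file looks something like this (the UUIDs below are placeholders, not values from my cluster); each member line records a node in the last known view:

```
my_uuid: b2c4d6e8-0000-11e4-8000-000000000001
#vwbeg
view_id: 3 a1b2c3d4-0000-11e4-8000-000000000002 5
bootstrap: 0
member: a1b2c3d4-0000-11e4-8000-000000000002 0
member: b2c4d6e8-0000-11e4-8000-000000000001 0
member: c3d5e7f9-0000-11e4-8000-000000000003 0
#vwend
```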

This file will not exist on a node that was shut down cleanly; it is written only when mysqld is terminated uncleanly. For auto-recovery to work, the file must exist and be identical on all of the nodes.
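As a quick sanity check before restarting, you could copy each node's gvwstate.dat to one machine and confirm the copies are identical. This helper function and the filenames are my own hypothetical sketch, not part of PXC:

```shell
# Hypothetical helper: succeeds only when every gvwstate.dat passed to it
# hashes to the same value, i.e. all nodes share the same last-known view.
same_gvwstate() {
    [ "$(md5sum "$@" | awk '{ print $1 }' | sort -u | wc -l)" -eq 1 ]
}

# Example usage (filenames are assumptions):
# same_gvwstate node1-gvwstate.dat node2-gvwstate.dat node3-gvwstate.dat
```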

I can now restart all 3 nodes more or less at the same time. Note that none of these nodes are bootstrapping and all of the nodes have the wsrep_cluster_address set to a proper list of the nodes in the cluster:
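For reference, the relevant settings in each node's my.cnf would look something like this (hostnames, cluster name, and the provider path are placeholders). The key point is that wsrep_cluster_address lists the members, rather than the empty gcomm:// used for a manual bootstrap:

```
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address=gcomm://node1,node2,node3
wsrep_cluster_name=testcluster
```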

I can indeed see that they all start successfully and enter the primary state:
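One way to verify this is from the mysql prompt on each node; I expect wsrep_cluster_status to be Primary and wsrep_cluster_size to be 3 (the statement below is a sketch; any client session works):

```
mysql> SHOW GLOBAL STATUS WHERE Variable_name IN
    -> ('wsrep_cluster_status', 'wsrep_cluster_size');
```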

Checking the logs, I can see this indication that the feature is working:
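For example, each node logs a line like the following on a successful recovery (quoted from a test run; the timestamp and thread id will differ, and exact wording may vary by version):

```
2014-12-05 17:03:25 2241 [Note] WSREP: promote to primary component
```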

Changing this behavior

This feature is enabled by default, but you can toggle it off with the pc.recovery setting in the wsrep_provider_options.
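For example, to turn it off in my.cnf (note that all provider options share a single string, so merge this with any existing wsrep_provider_options value):

```
[mysqld]
wsrep_provider_options="pc.recovery=FALSE"
```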

This feature covers an edge case where manual bootstrapping was previously necessary to recover properly. It was added in Percona XtraDB Cluster 5.6.19, but was broken due to this bug. It was fixed in PXC 5.6.21.

Comments (9)

  • Antonio Kang

    Hi Jay,

    I was testing this feature out on my VMs and I am having trouble starting mysql on all 3 of the nodes consistently.

    Sometimes I was able to start mysql on all 3 nodes after using the killall command listed in the tutorial, but other times I was not able to start up mysql on the nodes.

    Also, I was wondering: in what scenarios do you recommend using this feature?

    December 4, 2014 at 1:44 pm
  • Jay Janssen

    @Antonio — Are you using the latest release? It had some issues prior to that. You’d need to check the logs when it fails to recover, and also confirm that they all have a gvwstate.dat file in their datadirs before the restart.

    As for usage cases: it’s enabled by default, but it should gracefully handle auto-recovery in the off chance that you have a full cluster outage. The standard use case is a power failure and then recovery — once all the previous nodes from the last PRIMARY state recover, it should auto-bootstrap itself.

    December 5, 2014 at 7:59 am
  • Peter Zaitsev


    I wonder how we find which node was really discovered to be the latest and actually was PRIMARY and (hopefully) gave IST to the others.

    I am testing this and I see:

    2014-12-05 17:03:25 2241 [Note] WSREP: promote to primary component

    2014-12-05 17:03:25 2194 [Note] WSREP: promote to primary component

    What I’m doing is shutting the boxes off with a 5-second delay, and I want to ensure the last box down is actually picked so we indeed have the latest state.

    December 5, 2014 at 5:26 pm
  • Jay Janssen

    @Peter — my understanding is that this works by auto-rejoining the last PRIMARY component (if any). The only reason nodes might have different GTIDs is that apply is asynchronous. My understanding is that state transfer will happen normally in that case (and I guess a full SST), but I believe this is after the decision to go primary is reached. In this case, only the node(s) with the highest GTID should continue, while the others take a state transfer.

    December 7, 2014 at 2:08 pm
  • Morgan Jones


    You say that the cluster will recover if all nodes are started and they are able to recover the PRIMARY component. What will happen if they cannot recover the PRIMARY component for some reason? Will the nodes be left running, but not replicating? Will the user be able to access the database?


    April 6, 2015 at 2:09 pm
  • Jay Janssen

    @morgan — If the nodes cannot recover, then they should eventually timeout and exit. If this happens, you can manually bootstrap one and restart the others normally. During the timeout I would not expect apps to have any access at all.

    April 6, 2015 at 2:19 pm
  • Brian Kruger

    Been playing around with this. Adding some additional help for people if they come across this.

    If you do lose your whole cluster, then for this to work, all of the nodes listed in gvwstate.dat need to come back online. If a machine doesn’t come back after a power outage, for instance, you can edit gvwstate.dat to remove the dead host’s uuid and restart.

    The big question is: after going through this effort, is it easier to just re-bootstrap at that point?

    July 15, 2015 at 1:48 pm
  • Jay Janssen

    @Brian — gvwstate.dat is really a best-effort auto-recovery from an all-down situation. If you’re already logged in