All bugs can be reported on Launchpad. Please attach the error.log files from all the nodes.
For auto-increment, Percona XtraDB Cluster changes auto_increment_offset for each new node, so that nodes generate non-overlapping auto-increment values.
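For example, on a three-node cluster the settings on one node might look like this (the values are illustrative; the actual offset depends on the node's position in the cluster):

mysql> SHOW VARIABLES LIKE 'auto_increment%';
+--------------------------+-------+
| Variable_name            | Value |
+--------------------------+-------+
| auto_increment_increment | 3     |
| auto_increment_offset    | 2     |
+--------------------------+-------+

With these settings the node generates the sequence 2, 5, 8, ... while the other two nodes generate 1, 4, 7, ... and 3, 6, 9, ..., so inserts on different nodes never clash on auto-increment values.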
In a single-node workload, locking is handled the same way as in InnoDB. Under write load on several nodes, Percona XtraDB Cluster uses optimistic locking, and the application may receive a lock error in response to a COMMIT query.
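As an illustration (a hypothetical session; the table and values are made up), a transaction that fails certification against a conflicting write on another node is rejected at commit time with MySQL's standard deadlock error, and the application should retry it:

node1 mysql> BEGIN;
node1 mysql> UPDATE accounts SET balance = balance - 10 WHERE id = 1;
-- meanwhile the same row is updated and committed on another node
node1 mysql> COMMIT;
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction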
When a crashed node restarts, it will copy the whole dataset from another node (if there were changes to the data since the crash).
To check the health of a Galera node, use the following query:
SELECT 1 FROM dual;
The following results of the previous query are possible:
1. You get the row with id=1 (the node is healthy).
2. You get an error (the node is online, but Galera is not connected or synced with the cluster).
3. The connection fails (the node is not online).
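For a more specific check you can also inspect Galera's own status variables (a supplementary check, not part of the query above):

mysql> SHOW STATUS LIKE 'wsrep_local_state_comment';
+---------------------------+--------+
| Variable_name             | Value  |
+---------------------------+--------+
| wsrep_local_state_comment | Synced |
+---------------------------+--------+

A healthy node reports Synced; values such as Joined or Donor/Desynced mean the node is up but may not be ready to serve traffic.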
You can also check a node's health with the clustercheck script. First set up the clustercheck user:
GRANT USAGE ON *.* TO 'clustercheck'@'localhost' IDENTIFIED BY PASSWORD '*2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19';
You can then check a node's health by running the clustercheck script:
/usr/bin/clustercheck clustercheck password 0
If the node is running, you should get the following status:
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 40

Percona XtraDB Cluster Node is synced.
If the node isn't synced or is offline, the status will look like this:
HTTP/1.1 503 Service Unavailable
Content-Type: text/plain
Connection: close
Content-Length: 44

Percona XtraDB Cluster Node is not synced.
The clustercheck script has the following syntax:
<user> <pass> <available_when_donor=0|1> <log_file> <available_when_readonly=0|1> <defaults_extra_file>
When the script is run from xinetd, the arguments are passed via server_args. To make the node available to the application while it is a donor:
server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.local
To make the node available to the application also while it is in read-only state:
server_args = user pass 1 /var/log/log-file 1 /etc/my.cnf.local
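A minimal xinetd service definition might look like the following (a sketch: the service name mysqlchk and port 9200 are the usual conventions for this script, but they, the credentials, and the paths are assumptions here):

service mysqlchk
{
    disable        = no
    flags          = REUSE
    socket_type    = stream
    port           = 9200
    wait           = no
    user           = nobody
    server         = /usr/bin/clustercheck
    server_args    = user pass 1 /var/log/log-file 0 /etc/my.cnf.local
    log_on_failure += USERID
}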
Percona XtraDB Cluster populates the write set in memory before replication, and this sets a practical limit on the size of transactions. There are wsrep variables for the maximum row count and the maximum write set size, to make sure that the server does not run out of memory.
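The variables in question are wsrep_max_ws_rows and wsrep_max_ws_size; for example, in my.cnf (the values below are illustrative, not recommendations):

[mysqld]
# Maximum number of rows a single write set may contain
wsrep_max_ws_rows = 131072
# Maximum size of a single write set, in bytes (here 1 GB)
wsrep_max_ws_size = 1073741824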
For example, if there are four nodes with four tables and you want each table on a separate node, this is not possible for InnoDB tables. However, it will work for MEMORY tables.
The quorum mechanism in Percona XtraDB Cluster will decide which nodes can accept traffic and will shut down the nodes that do not belong to the quorum. Later when the failure is fixed, the nodes will need to copy data from the working cluster.
The algorithm for quorum is Dynamic Linear Voting (DLV). The quorum is preserved if (and only if) the sum weight of the nodes in a new component strictly exceeds half that of the preceding Primary Component, minus the nodes which left gracefully.
The mechanism is described in detail in Galera documentation.
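As a worked example (the node counts are hypothetical and every node is assumed to have the default weight of 1): if a six-node cluster loses two nodes to a crash, the remaining component has weight 4, and 4 > 6/2 = 3, so it keeps quorum and stays Primary. If the same cluster splits into two partitions of three nodes each, 3 > 3 is false on both sides, so neither partition keeps quorum. If two nodes are first shut down gracefully, the baseline drops to 4, and a surviving component of three nodes keeps quorum because 3 > 4/2 = 2.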
The quorum mechanism cannot handle split brain. If there is no way to decide on the primary component, Percona XtraDB Cluster has no way to resolve a split brain. The minimal recommendation is to have 3 nodes. However, it is possible to allow a node to handle traffic with the following option:
wsrep_provider_options="pc.ignore_sb = yes"
It is possible in two ways:
1. By default, Percona XtraDB Cluster reads the starting position from the text file <datadir>/grastate.dat. Make this file identical on all nodes, and there will be no state transfer after starting a node.
2. Use the wsrep_start_position variable to start the nodes with the same UUID:seqno value.
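For illustration, a grastate.dat file looks roughly like this (the uuid and seqno values are made up, and the version field varies between releases); the server option takes the same UUID:seqno pair:

# GALERA saved state
version: 2.1
uuid:    5ee99582-bb8d-11e2-b8e3-23de375c1d30
seqno:   8204503945773
cert_index:

wsrep_start_position="5ee99582-bb8d-11e2-b8e3-23de375c1d30:8204503945773"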
You may need to open up to four ports if you are using a firewall:
1. Regular MySQL port (default is 3306).
2. Port for group communication (default is 4567). It can be changed using the following option:
wsrep_provider_options ="gmcast.listen_addr=tcp://0.0.0.0:4010; "
3. Port for State Snapshot Transfer (default is 4444). It can be changed using the following option:
wsrep_sst_receive_address=10.11.12.205:5555
4. Port for Incremental State Transfer (default is the group communication port + 1, that is 4568). It can be changed using the following option:
wsrep_provider_options = "ist.recv_addr=10.11.12.206:7777; "
Percona XtraDB Cluster does not support an "async" mode: all commits are synchronous on all nodes. To be precise, commits are "virtually" synchronous, which means that a transaction must pass certification on the nodes, not be physically committed on them. Certification guarantees that the transaction does not conflict with other transactions on the corresponding node.
Yes, Percona XtraDB Cluster can be combined with regular MySQL replication. On the node you are going to use as master, you should enable the log-bin and log-slave-updates options.
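A minimal my.cnf fragment for the master node might look like this (the server-id and binlog name are arbitrary choices):

[mysqld]
server-id = 1
# Write a binary log so regular (asynchronous) slaves can replicate from this node
log-bin = mysql-bin
# Also write events received through Galera replication to the binary log,
# so slaves see the full write stream of the cluster
log-slave-updates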
Try to disable SELinux with the following command:
echo 0 > /selinux/enforce
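Note that this setting does not survive a reboot; the equivalent setenforce command and the persistent configuration are standard SELinux administration (not part of the original instructions):

# Same effect as the echo above: switch SELinux to permissive mode
setenforce 0
# To persist across reboots, set in /etc/selinux/config:
#   SELINUX=permissive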
This is a Debian/Ubuntu-specific error. Percona XtraDB Cluster uses the netcat-openbsd package, and this dependency has been fixed in recent releases. Future releases of Percona XtraDB Cluster will be compatible with any netcat (see bug #959970).
For general inquiries, please send us your question and someone will contact you.