Percona XtraDB Cluster operations MySQL Webinar follow-up questions answered

Thanks to all who attended my PXC Operations webinar last week.  If you missed it, you can watch the video here.

I wanted to take some time to answer the questions I didn’t get to during the broadcast.

Is there an easy way to leverage the xtrabackup SST and IST in an xtradb cluster to take your full and incremental backups of the cluster’s databases?

An SST is already a full backup of one of the nodes in your cluster.  If you want another backup, you may as well just run xtrabackup yourself (though don’t forget the discussion about locking from the talk).
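
If you do run Xtrabackup yourself, a minimal invocation might look like the following sketch (the paths are examples; --galera-info records the node’s wsrep position alongside the backup, which matters if you ever want to restore and rejoin the cluster):

```shell
# Sketch: a manual full backup with innobackupex (backup paths are examples).
# --galera-info saves the node's wsrep coordinates next to the backup files.
innobackupex --galera-info /backups/full

# Apply the log to make the backup consistent before any future restore.
innobackupex --apply-log /backups/full/TIMESTAMP
```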

IST is not affected by wsrep_sst_method; it works the same regardless of which SST method you use.  In theory an IST donation could be used for incremental backups, but I’m not aware of any system that currently does this.  There are a few limitations that would restrict its use:

  • IST is only valid for as long as all of the needed transactions are still available in the donor’s fixed-size gcache
  • Gcache files, despite existing on disk, are not persistent across restarts
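
The gcache size itself is tunable through the Galera provider options.  A my.cnf sketch (2G here is an arbitrary example; the default gcache.size is 128M):

```ini
[mysqld]
# Sketch: enlarge the ring-buffer gcache so the Donor can serve IST to
# Joiners that have been disconnected longer. The default size is 128M.
wsrep_provider_options = "gcache.size=2G"
```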

If you can use Xtrabackup for full backups, then I don’t see why you couldn’t use Xtrabackup’s incremental backup feature for your incrementals instead.  It would certainly be interesting for Galera to support pluggable IST methods so we could use Xtrabackup for IST instead of the current Gcache system, but that’s not something planned or in development that I’m aware of.
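
As a sketch of that approach (the directory names here are just examples), Xtrabackup’s incremental feature works independently of Galera entirely:

```shell
# Sketch: full plus incremental backups with innobackupex.
# Take the full base backup first, then record only the deltas against it.
innobackupex /backups/base
innobackupex --incremental /backups/inc1 \
             --incremental-basedir=/backups/base/TIMESTAMP
```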

Does replication of MyISAM form any bottlenecks in XtraDB Cluster? If so, how bad?

MyISAM replication in PXC/Galera is labeled as experimental, but I think that’s a misnomer.  It should be labeled “broken by design”.  MyISAM replication will never really work properly with Galera due to its non-transactional nature.  MyISAM DML is replicated with statement-based replication, and it operates similarly to how DDL (which is also not transactional in MySQL) is replicated under TOI.  To quote the manual:

TOI – Total Order Isolation – When this method is selected DDL is processed in the same order with regards to other transactions in each cluster node. This guarantees data consistency. In case of DDL statements cluster will have parts of database locked and it will behave like a single server. In some cases (like big ALTER TABLE) this could have impact on cluster’s performance and high availability, but it could be fine for quick changes that happen almost instantly (like fast index changes). When DDL is processed under total order isolation (TOI) the DDL statement will be replicated up front to the cluster. i.e. cluster will assign global transaction ID for the DDL statement before the DDL processing begins. Then every node in the cluster has the responsibility to execute the DDL in the given slot in the sequence of incoming transactions, and this DDL execution has to happen with high priority.

InnoDB replication allows more things to happen in parallel, but TOI tightens up the cluster so it behaves much more like a single instance.  So I expect any serious amount of MyISAM replication to perform pretty poorly in PXC, but I don’t have the benchmarks to prove it… yet.
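
Note that MyISAM replication is off by default in PXC; you have to opt in explicitly.  A my.cnf sketch (the value shown is the default, i.e. the safe setting):

```ini
[mysqld]
# MyISAM DML is not replicated unless this is enabled (default is OFF).
# Leave it off unless you accept the TOI-like serialization described above.
wsrep_replicate_myisam = OFF
```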

When adding nodes to a cluster, why would we see errors about the SST not looking like a tar archive?

This probably depends on what SST method you are using, but Xtrabackup streams its backup from Donor to Joiner over netcat in a tar stream.  The Joiner, therefore, is expecting a tar archive to start streaming in on that netcat port; if it receives anything else, or there is some kind of network disconnection, you may see this error.  I’d suggest checking the Donor and Joiner SST logs (especially the Donor’s) to see what went wrong.  Check the Donor’s datadir for an innobackup.backup.log file to see if Xtrabackup failed there for some reason.  The codership-team mailing list may be able to help further.
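
For example, a quick check on the Donor might look like this sketch (the datadir path is an assumption; adjust it for your installation):

```shell
# Sketch: look for errors from the last Xtrabackup-based SST on the Donor.
# DATADIR is an assumed default location; override it for your node.
DATADIR=${DATADIR:-/var/lib/mysql}
LOG="$DATADIR/innobackup.backup.log"
if [ -f "$LOG" ]; then
    # Show the most recent error lines from the SST's backup log.
    grep -i "error" "$LOG" | tail -n 20
else
    echo "no $LOG found; the last SST may not have used Xtrabackup here"
fi
```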

