Thanks to all who attended my webinar today. The session was recorded and will be available to watch for free here soon.
There were a lot of great questions asked during the session, so I’d like to take this opportunity to try to answer a few of them:
Well, if you noticed, all the current SST methods are just commonly used backup tools, and the SST is, in reality, just doing a full backup. So I’m not sure there’s much to leverage with SST.
The straightforward way to back up XtraDB Cluster (IMHO) would be simply to run your backup of choice on a node, optionally keeping that node out of production rotation while doing so. Whatever method you use should not take the node offline for so long that it requires a full SST when you bring it back online. If the backup does take the node offline, IST would be leveraged to catch it back up to the cluster.
We tend to like XtraBackup around here — and using this the node should be fully functional and online during the entire backup.
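As a sketch (the credentials, paths, and timestamped directory are made up, and the `--galera-info` option is worth verifying against your XtraBackup version), backing up a single node might look like:

```shell
# Take a full hot backup of one node with XtraBackup; the node stays online.
# --galera-info records the node's Galera replication position with the backup.
innobackupex --user=backup --password=secret --galera-info /backups/

# Prepare the resulting backup so it is consistent and ready to restore.
innobackupex --apply-log /backups/2012-09-20_12-00-00/
```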
You can write to any of the nodes in the cluster in parallel, with the proviso that deadlock errors may increase (see the slides/recording if you missed this). If that becomes a problem for you, you can simply set up your load balancing/application to write to a single node in the cluster.
Yes, provided you use CREATE USER or GRANT. DML on the mysql.* tables directly will not be replicated unless you are using the experimental MyISAM support. This is listed as a Galera limitation.
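To illustrate the distinction (account and database names here are invented):

```shell
# These statements replicate cluster-wide, because CREATE USER and GRANT
# are handled as DDL by Galera:
mysql -e "CREATE USER 'app'@'%' IDENTIFIED BY 'secret';"
mysql -e "GRANT SELECT, INSERT, UPDATE ON mydb.* TO 'app'@'%';"

# Direct DML against the mysql.* system tables does NOT replicate
# (barring the experimental MyISAM support) -- this change stays local:
mysql -e "DELETE FROM mysql.user WHERE User='olduser'; FLUSH PRIVILEGES;"
```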
The cluster doesn’t know the difference; each node just listens on 3306 or whatever you configure.
The HAproxy daemon will listen on those two separate ports with separate configurations, and the application must connect to the appropriate one, depending if it needs to read or write.
Splitting read and write DB handles is common practice in applications, so that method should apply perfectly well here.
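As a rough illustration (the ports, addresses, and node names are invented; check the directives against your HAProxy version), the two listeners might look something like this in haproxy.cfg:

```
# Writer port: send all writes to one node, with the others as hot
# standbys -- this sidesteps the cross-node deadlock issue for writes.
listen mysql-write
    bind *:3307
    mode tcp
    server node1 10.0.0.1:3306 check
    server node2 10.0.0.2:3306 check backup
    server node3 10.0.0.3:3306 check backup

# Reader port: balance reads across all nodes.
listen mysql-read
    bind *:3308
    mode tcp
    balance leastconn
    server node1 10.0.0.1:3306 check
    server node2 10.0.0.2:3306 check
    server node3 10.0.0.3:3306 check
```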
All the documentation I have on the system variables is on the Codership Wiki. It’s not clear from that page which variables are dynamic and which require a server restart, though some descriptions do mention setting them dynamically.
I would file a bug on the issue. As I mentioned the SSTs are all scripts, so it should be easy for you to fix yourself as well.
As I said, some ALTER TABLE operations, such as adding and removing indexes, can be done using Galera’s rolling schema upgrade feature.
If that won’t work with your change (i.e., your change will prevent replication from working if you only do it on some nodes), then I’d suggest pt-online-schema-change.
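For example (the database, table, and column names are hypothetical), such a change could be run through pt-online-schema-change like this:

```shell
# pt-online-schema-change rebuilds the table via a shadow copy and triggers,
# so every row change replicates as ordinary DML rather than one big ALTER.
# A --dry-run first is a good habit before the real run:
pt-online-schema-change --alter "MODIFY COLUMN notes TEXT" D=mydb,t=orders --dry-run
pt-online-schema-change --alter "MODIFY COLUMN notes TEXT" D=mydb,t=orders --execute
```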
I have not experimented with MyISAM support yet myself. Based on what I’ve heard, I wouldn’t trust it unless you specifically tested it very carefully.
The Percona Monitoring Plugins include a check that lets you set thresholds on an arbitrary expression over any SHOW STATUS and/or SHOW VARIABLES output from the monitored MySQL server. It should be easy to use this tool to implement at least some of the monitoring recommendations from Codership.
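As a minimal sketch of the idea (this is my own invention, not part of the plugins; the output format and exit codes just follow Nagios conventions), a bare-bones check on a node's wsrep status could look like:

```shell
#!/bin/sh
# Alert unless this node is Synced and part of the Primary component.
# wsrep_local_state_comment and wsrep_cluster_status are standard Galera
# status variables; -N suppresses the column-name header row.
STATE=$(mysql -N -e "SHOW STATUS LIKE 'wsrep_local_state_comment'" | awk '{print $2}')
CLUSTER=$(mysql -N -e "SHOW STATUS LIKE 'wsrep_cluster_status'" | awk '{print $2}')

if [ "$STATE" = "Synced" ] && [ "$CLUSTER" = "Primary" ]; then
    echo "OK: node Synced in Primary component"
    exit 0
else
    echo "CRITICAL: state=$STATE cluster=$CLUSTER"
    exit 2
fi
```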
There does not, however, AFAIK, exist any dedicated check specifically for XtraDB Cluster nodes, or one that verifies cluster consistency across all nodes. Please let me know if I’m mistaken, or if you want some tips to write one yourself :p