There were a lot of great questions asked during the session, so I’d like to take this opportunity to try to answer a few of them:
Q: Is there an easy way to leverage the xtrabackup SST and IST in an xtradb cluster to take your full and incremental backups of the cluster’s databases?
Well, if you noticed, all the current SST methods are just commonly used backup tools, and the SST is, in reality, just doing a full backup. So I’m not sure there’s much to leverage with SST.
The straightforward way to back up XtraDB Cluster (IMHO) would be to simply run your backup of choice on a node, optionally keeping that node out of production rotation while you do. Whatever method you use should not take the node offline for so long that it requires a full SST when you bring it back online. If the backup does take the node offline briefly, IST would be leveraged to catch it back up to the cluster.
We tend to like XtraBackup around here, and with it the node should remain fully functional and online during the entire backup.
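As a rough sketch, a full backup of one node with XtraBackup's innobackupex wrapper might look like the following; the user, password, and destination path are illustrative placeholders, not values from the webinar:

```shell
# Take a full backup from the local node while it stays online
# (credentials and target directory are examples only).
innobackupex --user=backup --password=SECRET /backups/full

# Prepare (apply the redo log to) the backup so it is consistent
# and restorable; point this at the timestamped directory created above.
innobackupex --apply-log /backups/full/2012-01-01_00-00-00
```

Incremental backups work the same way with the `--incremental` option pointed at the previous backup directory.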
Q: Can I run two Percona Server nodes as master-master?
You can write to any of the nodes in the cluster in parallel with the proviso about the potential for increase in deadlock errors (see the slides/recording if you missed this). If that becomes a problem for you, you can simply setup your load balancing/application to write to any single node in the cluster.
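Those deadlock errors surface in the application as failed transactions, so a common mitigation when writing to multiple nodes is a small retry wrapper around each transaction. Here is a minimal sketch in Python; the exception class and `flaky_txn` are stand-ins for a real driver error (such as MySQL error 1213) and a real database call:

```python
class DeadlockError(Exception):
    """Stand-in for the driver's deadlock error (e.g. MySQL error 1213)."""

def with_retries(txn, attempts=3):
    """Run a transaction callable, retrying it if it deadlocks."""
    for attempt in range(attempts):
        try:
            return txn()
        except DeadlockError:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error to the caller
            # A real application might sleep with backoff before retrying.

# Usage with a fake transaction that deadlocks once, then succeeds.
calls = {"n": 0}
def flaky_txn():
    calls["n"] += 1
    if calls["n"] == 1:
        raise DeadlockError()
    return "committed"

print(with_retries(flaky_txn))  # -> committed
```

The same idea applies whether the retries live in the application or in a data-access layer; the point is simply that certification failures are reported as ordinary deadlocks the client can retry.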
Q: Are adding/updating mysql user accounts replicated?
Yes, provided you use CREATE USER or GRANT. DML on the mysql.* tables directly will not be replicated unless you are using the experimental MyISAM support. This is listed as a Galera limitation.
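To make the distinction concrete, here is an illustrative pair of statements (user names and privileges are examples):

```sql
-- Replicated to every node in the cluster:
CREATE USER 'app'@'%' IDENTIFIED BY 'secret';
GRANT SELECT ON mydb.* TO 'app'@'%';

-- NOT replicated by default: direct DML on the MyISAM mysql.* tables
-- stays local to the node where it was executed.
DELETE FROM mysql.user WHERE User = 'app';
FLUSH PRIVILEGES;
```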
Q: In the HAproxy example, how do you specify with XtraDB Cluster the different ports for write and read traffic?
The cluster doesn’t know the difference; each node just listens on 3306 (or whatever port you configure).
The HAproxy daemon will listen on two separate ports with separate configurations, and the application must connect to the appropriate one, depending on whether it needs to read or write.
Splitting read and write DB handles is common practice in applications, so that method should apply perfectly well here.
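An illustrative HAproxy snippet for that setup might look like this; the IPs and port numbers are examples, and sending all writes to a single node (with the others as backups) is one way to sidestep the deadlock issue mentioned earlier:

```
# Writes: one active node, the rest as backups.
listen cluster-writes 0.0.0.0:3307
    mode tcp
    balance leastconn
    server node1 192.168.1.1:3306 check
    server node2 192.168.1.2:3306 check backup
    server node3 192.168.1.3:3306 check backup

# Reads: round-robin across all nodes.
listen cluster-reads 0.0.0.0:3308
    mode tcp
    balance roundrobin
    server node1 192.168.1.1:3306 check
    server node2 192.168.1.2:3306 check
    server node3 192.168.1.3:3306 check
```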
Q: Any settings that require a mysql restart or can they all be updated with a reload?
All the documentation I have on the system variables is on the Codership Wiki. It’s not clear from that page which are dynamic and which require a server restart, though some descriptions do mention setting them dynamically.
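In the absence of clear documentation, one pragmatic approach is simply to try; a variable that is not dynamic will reject `SET GLOBAL` with a "read only variable" error, in which case you need a my.cnf change and a restart. The variable below is just an example:

```sql
-- List the wsrep settings on a running node:
SHOW GLOBAL VARIABLES LIKE 'wsrep%';

-- Try changing one dynamically; if the server answers that it is a
-- read-only variable, it requires a my.cnf edit and a restart instead.
SET GLOBAL wsrep_slave_threads = 8;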
Q: The SST does not work with rsync when the option innodb_data_home_dir is set to a different path than the datadir option. Any comments on this?
I would file a bug on the issue. As I mentioned the SSTs are all scripts, so it should be easy for you to fix yourself as well.
Q: What about a humongous ALTER TABLE? Does it run on one node at a time in the cluster so it won’t affect overall availability?
As I said, some ALTER TABLE operations, such as adding and removing indexes, can be done using Galera’s rolling schema upgrade feature.
If that won’t work with your change (i.e., your change will prevent replication from working if you only do it on some nodes), then I’d suggest pt-online-schema-change.
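For reference, a pt-online-schema-change invocation looks roughly like this; the database, table, and column names are made up for illustration:

```shell
# Rewrite a large table online by copying it in the background;
# D= and t= name the database and table (example values).
pt-online-schema-change \
    --alter "ADD COLUMN flags INT NOT NULL DEFAULT 0" \
    --execute D=mydb,t=mytable
```

It performs the change by creating a shadow copy of the table and swapping it in, so each row change replicates as ordinary DML rather than one huge blocking DDL statement.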
Q: Does replication of MyISAM form any bottlenecks in XtraDB Cluster? If so, how bad?
I have not experimented with MyISAM support yet myself. Based on what I’ve heard, I wouldn’t trust it unless you specifically tested it very carefully.
There was a question about availability of Nagios or other monitoring plugins, and I misspoke:
The Percona Monitoring Plugins include a check that lets you set thresholds on an arbitrary expression built from any SHOW STATUS and/or SHOW VARIABLES values on the monitored MySQL server. It should be easy to use this tool to implement at least some of the monitoring recommendations from Codership.
There does not, however, AFAIK, exist any dedicated checks specifically for XtraDB Cluster nodes, or a check that will check cluster consistency across all nodes. Please let me know if I’m mistaken, or if you want some tips to write one yourself :p
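If you do want to write one yourself, the core logic is small. Here is a hypothetical Nagios-style check for a single node in Python, driven by the values of `SHOW GLOBAL STATUS LIKE 'wsrep%'` (stubbed here as a dict, since a real check would query the server); the thresholds are my own illustrative choices, not an official recommendation:

```python
# Nagios exit codes.
OK, WARNING, CRITICAL = 0, 1, 2

def check_node(status):
    """Return (exit_code, message) from a dict of wsrep status variables."""
    if status.get("wsrep_cluster_status") != "Primary":
        return CRITICAL, "node is in a non-Primary component"
    if status.get("wsrep_ready") != "ON":
        return CRITICAL, "node is not ready to accept queries"
    state = status.get("wsrep_local_state_comment")
    if state != "Synced":
        # e.g. Donor/Desynced while serving an SST.
        return WARNING, "node is %s, not Synced" % state
    return OK, "node is Synced and Primary"

# Usage with stubbed status values:
healthy = {"wsrep_cluster_status": "Primary",
           "wsrep_ready": "ON",
           "wsrep_local_state_comment": "Synced"}
print(check_node(healthy))  # -> (0, 'node is Synced and Primary')
```

Checking cluster-wide consistency (e.g. that every node reports the same `wsrep_cluster_conf_id`) would mean querying all nodes and comparing, which is a natural extension of the same idea.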