
MySQL Server 5.6 used in a cluster of clusters

I'll start with an apology or two. Firstly, I'm an Ops guy at heart, not a DBA, so I may use the wrong terminology; not in an effort to annoy, merely because I don't know any better. Secondly, I'm not sure if this is even the right place to ask this question, but it seemed like my best bet.

OK, with that out of the way, on to the question.

I've inherited a MySQL cluster (CentOS, Corosync, Pacemaker and MySQL Enterprise). It's a two-node cluster, with a third node (no MySQL, just Corosync/Pacemaker) acting as a quorum node. The MySQL instances are controlled by a Percona Replication Manager script (prm56_mysql).

Two VIPs are set up on the cluster, one allocated to each box: a Writer VIP on the Master and a Reader VIP on the Slave.
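For context, floating VIPs like these are typically declared in Pacemaker with the ocf:heartbeat:IPaddr2 resource agent. A minimal sketch of what ours presumably look like (the resource names and addresses below are placeholders, not taken from the actual cluster):

```shell
# Hedged sketch of Pacemaker VIP resources via the pcs shell.
# Resource names and IP addresses are placeholders.

# Writer VIP, to follow wherever the MySQL Master is promoted
pcs resource create writer_vip ocf:heartbeat:IPaddr2 \
    ip=192.0.2.10 cidr_netmask=24 op monitor interval=10s

# Reader VIP, kept on the Slave node
pcs resource create reader_vip ocf:heartbeat:IPaddr2 \
    ip=192.0.2.11 cidr_netmask=24 op monitor interval=10s

# Colocation/order constraints would then tie writer_vip to the
# promoted (Master) MySQL instance and reader_vip to the Slave.
```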
This is all working fine, and I'm more or less happy with how it's set up and functioning. I should mention it's using GTID replication with full local storage (no SAN or DRBD).

Tacked onto the edge of this is another node (MySQL, no Corosync/Pacemaker). This node is set up to slave from the Reader VIP allocated to the Slave in the original cluster.

I've now been tasked with turning this extra backup node into a Master/Slave cluster in its own right, and I'm not sure where to begin. I have managed to get all four servers running as two clusters (independent of each other), but I'm failing to understand the Corosync/Pacemaker configuration I would need in order to make the Master of the second cluster become a Slave of the first cluster (or even whether this is possible with the current software choices).

I'm just hoping someone out there has experience clustering clusters in this fashion and would be willing to point me in the right direction. Ideally there's a single line I can add to the Corosync configuration to leverage the existing prm script, so it tells the "master of the second cluster" that it is also a slave of the first cluster.
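To make the question concrete: at the MySQL level, since both clusters use GTIDs, what I imagine is something like the following run once on the second cluster's Master (the VIP address and credentials here are placeholders). What I can't work out is how to make Corosync/Pacemaker/PRM maintain this relationship across failovers rather than issuing it by hand:

```shell
# Run on the Master of the second cluster.
# Placeholder VIP address and replication credentials.
mysql -u root -p <<'SQL'
STOP SLAVE;
CHANGE MASTER TO
    MASTER_HOST = '192.0.2.11',     -- the Reader VIP of the first cluster
    MASTER_USER = 'repl',           -- placeholder replication user
    MASTER_PASSWORD = 'secret',
    MASTER_AUTO_POSITION = 1;       -- GTID-based positioning (MySQL 5.6+)
START SLAVE;
SQL
```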

I have full access to the servers, so I can get my hands on any configuration or log files that may help, but as an ops person I wasn't sure which ones, so I haven't attached any yet.