Orchestrator: Moving VIPs During Failover

In this post, I'll discuss how to move VIPs during a failover using Orchestrator.

In our previous post, we showed you how Orchestrator works. In this post, I am going to give you a proof of concept showing how Orchestrator can move VIPs during a failover. For this post, I'm assuming Orchestrator is already installed and able to manage the topology.


Orchestrator is a topology manager. Nothing more, nothing less. In the case of a failover, it will reorganize the topology, promote a new master and reconnect the slaves to it. But it won't make any DNS changes, and it won't move VIPs (or anything else).

However, Orchestrator supports hooks. Hooks are external scripts that can be invoked at different stages of the recovery process. There are six different hooks:

  • OnFailureDetectionProcesses
  • PreFailoverProcesses
  • PostIntermediateMasterFailoverProcesses
  • PostMasterFailoverProcesses
  • PostFailoverProcesses
  • PostUnsuccessfulFailoverProcesses

More details are in the Orchestrator manual.

With these hooks, we can call our own external scripts, which fit into our architecture and can make the necessary changes or let the application know who the new master is.

There are different ways to do this:

  • Updating a CNAME: if a CNAME is pointing to the master, an external script can easily do a DNS update after failover.
  • Moving a VIP to the new master: this solution is similar to the MHA and MHA-helper scripts (this post will discuss this solution).

When Orchestrator calls an external script, it can also use parameters. Here is an example using the parameters available with “PostFailoverProcesses”:
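A minimal sketch of how such a hook could look in the Orchestrator configuration file (commonly /etc/orchestrator.conf.json); the exact set of placeholder names is documented in the Orchestrator manual, and the log file path here is just an example:

```json
"PostFailoverProcesses": [
  "echo 'Recovered from {failureType} in {failureClusterAlias}. Failed: {failedHost}:{failedPort}; Successor: {successorHost}:{successorPort}' >> /tmp/recovery.log"
],
```

Orchestrator substitutes the curly-brace placeholders with the actual values before executing the command.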

Without these parameters, we wouldn’t know who the new master is and which host died.

Moving VIPs

As I already mentioned, in this post I am going to show you how you can move VIPs with Orchestrator. I think many people are familiar with MHA. This solution is a bit similar to what MHA and MHA-helper do.

The main requirement is also the main disadvantage: this solution requires SSH access from the Orchestrator node to the MySQL servers.

Adding User

First we have to add a user on the MySQL servers and Orchestrator node (you can change the username):
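For example (a sketch; run as root, and substitute your preferred username):

```shell
# On every MySQL server and on the Orchestrator node
useradd -m orchuser
```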

Adding sudo permissions:
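Something like the following in a sudoers drop-in (edit with visudo -f /etc/sudoers.d/orchuser); the binary paths are assumptions and vary by distribution:

```
# Allow the failover user to manage the VIP without a password
orchuser ALL=(ALL) NOPASSWD: /sbin/ip, /usr/sbin/arping
```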

We have to add the public key from the Orchestrator node on the MySQL servers to the “/home/orchuser/.ssh/authorized_keys” file.
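A sketch of the key distribution from the Orchestrator node (hostnames are examples):

```shell
# Generate a key pair for the Orchestrator node if one does not exist yet
ssh-keygen -t rsa
# Copy the public key to each MySQL server
ssh-copy-id orchuser@mysql1
ssh-copy-id orchuser@mysql2
```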

Now we can SSH from the Orchestrator server to the others without a password:
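A quick check (the hostname is an example); this should print the remote hostname without prompting for a password:

```shell
ssh orchuser@mysql1 hostname
```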

Failover Script

Now we need a failover script. I wrote two small bash scripts that can do this for us.

The first one is called orch_hook.sh. Orchestrator calls this script like so:
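A sketch of the hook entry in the Orchestrator configuration (the script path and log file are examples; the placeholders are substituted by Orchestrator):

```json
"PostFailoverProcesses": [
  "/usr/local/bin/orch_hook.sh {failureType} {failureClusterAlias} {failedHost} {successorHost} >> /tmp/orch.log"
],
```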

Because Orchestrator can handle multiple clusters, we have to define some cluster parameters:
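A sketch of how this can be done with one bash array per cluster alias (the VIP address below is an example/assumption, and the cluster alias is hard-coded here purely to demonstrate the lookup; in the real script it arrives as a positional parameter from Orchestrator):

```shell
#!/bin/bash
# One array per cluster alias: interface, VIP, SSH user
rep=( eth0 192.168.56.121 orchuser )

# In orch_hook.sh the cluster alias comes from Orchestrator as $2
cluster="rep"

# Indirect expansion: read the array whose name matches the cluster alias
ref="${cluster}[0]"; interface="${!ref}"
ref="${cluster}[1]"; vip="${!ref}"
ref="${cluster}[2]"; sshuser="${!ref}"

echo "cluster=$cluster interface=$interface vip=$vip sshuser=$sshuser"
```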

Where “rep” is the name of the cluster, “eth0” is the name of the interface where the VIP should be added, “” is the VIP on this cluster and “orchuser” is the SSH user. If we have multiple clusters, we have to add more arrays like this with the cluster details.

Orchestrator executes this script with parameters:
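For example, after a dead-master failover in the "rep" cluster, the expanded call might look like this (hostnames illustrative):

```shell
/usr/local/bin/orch_hook.sh DeadMaster rep mysql1 mysql2
```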

Once the script has identified the cluster, it calls the next script.

The next script is named orch_vip.sh. It is called by "orch_hook.sh" and moves the VIP to the new master. It is executed like this:
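Something like the following (the VIP address, elided here, would be filled in from the cluster array):

```shell
orch_vip.sh -d 1 -n mysql2 -i eth0 -I <vip_address> -u orchuser -o mysql1
```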

  • -d 1 means the master is dead
  • -n mysql2 is the new master
  • -i eth0 is the network interface
  • -I is the VIP
  • -u orchuser is the SSH user
  • -o mysql1 is the old master
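A heavily simplified sketch of what such a script does (this is a hypothetical reconstruction, not the original script; it only prints the SSH commands it would run, so nothing is executed against real hosts, and the VIP address is an example):

```shell
#!/bin/bash
# Simulate the invocation described above (values are illustrative)
set -- -d 1 -n mysql2 -i eth0 -I 192.168.56.121 -u orchuser -o mysql1

# Parse the options orch_hook.sh passes in
while getopts "d:n:i:I:u:o:" opt; do
    case "$opt" in
        d) isitdead="$OPTARG" ;;
        n) newmaster="$OPTARG" ;;
        i) interface="$OPTARG" ;;
        I) vip="$OPTARG" ;;
        u) sshuser="$OPTARG" ;;
        o) oldmaster="$OPTARG" ;;
        *) echo "usage: $0 -d <0|1> -n <new> -i <iface> -I <vip> -u <user> -o <old>" >&2
           exit 1 ;;
    esac
done

# Drop the VIP on the old master if it is still reachable
echo "ssh $sshuser@$oldmaster sudo ip addr del $vip/32 dev $interface"
# Bring the VIP up on the new master, then send gratuitous ARP replies
# so the network learns the VIP's new MAC address
echo "ssh $sshuser@$newmaster sudo ip addr add $vip/32 dev $interface"
echo "ssh $sshuser@$newmaster sudo arping -c 3 -A -I $interface $vip"
```

The gratuitous ARP step is what makes the move visible to the rest of the network quickly; without it, switches and peers may keep routing the VIP to the old master until their ARP caches expire.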

The script requires the “arping” and “mail” commands.


With these two small scripts, Orchestrator is able to move the VIP from the old master to the new master, and the application can work again. However, this script is not production-ready, and there could be cases it cannot handle. You can test it, but use it at your own risk.

I would appreciate any comments or pull requests so we can make it better. But stay tuned: in my next blog post, I am going to show you how Orchestrator can work with ProxySQL.
