Orchestrator is a replication topology manager for MySQL.
It has many great features.
Orchestrator’s manual is quite extensive and detailed, so the goal of this blog post is not to go through every installation and configuration step. Instead, it gives a global overview of how Orchestrator works, while mentioning some important and interesting settings.
Orchestrator is a Go application (binaries, including rpm and deb packages, are available for download).
It requires its own MySQL database as a backend to store all information related to the Orchestrator-managed database cluster topologies.
There should be at least one Orchestrator daemon, but it is recommended to run several Orchestrator daemons on different servers at the same time. They will all use the same backend database, but only one Orchestrator is “active” at any given moment in time. (You can check which one is active under the Status menu in the web interface, or in the active_node table in the backend database.)
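For example, you could check the active daemon straight from the backend database. A quick sketch (the schema name “orchestrator” is an assumption; use whatever backend database name you configured):

```sql
-- Which Orchestrator daemon currently holds the active role?
-- Assumes the backend schema is named "orchestrator".
SELECT * FROM orchestrator.active_node;
```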
If the Orchestrator MySQL database is gone, the monitored MySQL clusters keep working; Orchestrator just won’t be able to control the replication topologies anymore. This is similar to how MHA works: everything keeps running, but you cannot perform a failover until MHA is back up again.
At the moment, a MySQL backend is required, and there is no clear, tested support for making that backend itself highly available (HA). This might change in the future.
Orchestrator only needs a MySQL user with limited privileges (SUPER, PROCESS, REPLICATION SLAVE, RELOAD) to connect to the database servers. With those permissions, it can check the replication status of each node and perform replication changes if necessary. It supports several replication flavors: binlog file positions, MySQL and MariaDB GTID, Pseudo-GTID, and binlog servers.
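Creating such a user could look like the sketch below; the user name, host pattern, and password are placeholders to adapt (syntax shown for MySQL 5.6, matching the servers in the examples later on):

```sql
-- Hypothetical monitoring user for Orchestrator, granted only the
-- limited privileges listed above.
GRANT SUPER, PROCESS, REPLICATION SLAVE, RELOAD
  ON *.* TO 'orchestrator'@'192.168.%' IDENTIFIED BY 'change_me';
```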
There is no need to install any extra software on the database servers.
One example of what Orchestrator can do is promote a slave when the master is down. It will choose the most up-to-date slave to promote.
Let’s see what it looks like:
In this test we lost rep1 (the master), and Orchestrator promoted rep4 to be the new master, repointing the other servers to replicate from it.
With the default settings, if rep1 comes back, rep4 will resume replicating from rep1. This behavior can be changed by setting ApplyMySQLPromotionAfterMasterFailover: true in the configuration.
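In orchestrator.conf.json, that would look something like the excerpt below (surrounding options omitted):

```json
{
  "ApplyMySQLPromotionAfterMasterFailover": true
}
```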
Orchestrator has a nice command line interface too. Here are some examples:
```
> orchestrator -c topology -i rep1:3306 cli
rep1:3306 [OK,5.6.27-75.0-log,ROW,>>]
+ rep2:3306 [OK,5.6.27-75.0-log,ROW,>>,GTID]
+ rep3:3306 [OK,5.6.27-75.0-log,ROW,>>,GTID]
+ rep4:3306 [OK,5.6.27-75.0-log,ROW,>>,GTID]
+ rep5:3306 [OK,5.6.27-75.0-log,ROW,>>,GTID]
```
```
orchestrator -c relocate -i rep2:3306 -d rep4:3306
```
```
> orchestrator -c topology -i rep1:3306 cli
rep1:3306 [OK,5.6.27-75.0-log,ROW,>>]
+ rep3:3306 [OK,5.6.27-75.0-log,ROW,>>,GTID]
+ rep4:3306 [OK,5.6.27-75.0-log,ROW,>>,GTID]
  + rep2:3306 [OK,5.6.27-75.0-log,ROW,>>,GTID]
+ rep5:3306 [OK,5.6.27-75.0-log,ROW,>>,GTID]
```
As we can see, rep2 is now replicating from rep4.
One nice addition to the GUI is how it displays slow queries on all servers inside the replication tree. You can even kill bad queries from within the GUI.
Orchestrator’s daemon configuration can be found in /etc/orchestrator.conf.json. There are many configuration options; a few of the important ones are sketched below.
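The excerpt below is a minimal illustration of that file. The key names (backend connection settings, the topology user, and a recovery filter) follow Orchestrator’s sample configuration, but all the values are placeholder assumptions:

```json
{
  "MySQLOrchestratorHost": "127.0.0.1",
  "MySQLOrchestratorPort": 3306,
  "MySQLOrchestratorDatabase": "orchestrator",
  "MySQLOrchestratorUser": "orc_server_user",
  "MySQLOrchestratorPassword": "orc_server_password",
  "MySQLTopologyUser": "orc_client_user",
  "MySQLTopologyPassword": "orc_client_password",
  "RecoverMasterClusterFilters": ["*"]
}
```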

While it is a very feature-rich application, there are still some missing features and limitations we should be aware of.
One of the key missing features is that there is no easy way to promote a slave to be the new master. This could be useful in scenarios where the master server has to be upgraded, or during a planned failover (this is a known feature request).
In order to integrate Orchestrator into your HA architecture or your failover processes, you still need to manage many aspects yourself. All of this can be done using the different hooks available in Orchestrator, as sketched below.
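For illustration, hooks are configured as lists of shell commands in orchestrator.conf.json. The hook names and the {failedHost}/{successorHost}-style placeholders exist in Orchestrator, but the scripts invoked here are hypothetical:

```json
{
  "PreFailoverProcesses": [
    "echo 'Detected {failureType} on {failureCluster}' >> /var/log/orchestrator-recovery.log"
  ],
  "PostFailoverProcesses": [
    "/usr/local/bin/move-writer-vip.sh {failedHost} {successorHost}"
  ],
  "PostUnsuccessfulFailoverProcesses": [
    "/usr/local/bin/page-oncall.sh {failureCluster}"
  ]
}
```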
The work that needs to be done is comparable to having a setup with MHA or MySQLFailover.
This post also doesn’t completely describe the decision process Orchestrator uses to determine whether a server is down. As we understand it right now, the active Orchestrator node makes that decision. It also checks the replication state of the broken node’s slaves, to determine whether Orchestrator itself is the only one losing connectivity (in which case it should do nothing with the production servers). This is already a big improvement over MySQLFailover, MHA, or even MaxScale’s failover scripts, but it can still cause problems in some cases (more information can be found on Shlomi Noach’s blog).
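Our rough understanding of that “holistic” check, as a sketch in Go (illustrative pseudologic only, not Orchestrator’s actual code):

```go
package main

import "fmt"

// masterLooksDead sketches the "holistic" idea: a master is only declared
// dead when Orchestrator cannot reach it AND its slaves independently
// report that replication from it is broken.
func masterLooksDead(masterReachable bool, slaveIOThreadRunning []bool) bool {
	if masterReachable {
		return false
	}
	for _, running := range slaveIOThreadRunning {
		if running {
			// Some slave still receives events from the master, so the
			// problem is likely Orchestrator's own connectivity: do nothing.
			return false
		}
	}
	return true
}

func main() {
	// Orchestrator lost the master, but one slave still sees it:
	fmt.Println(masterLooksDead(false, []bool{true, false, false})) // false
	// Orchestrator lost the master and every slave reports broken replication:
	fmt.Println(masterLooksDead(false, []bool{false, false, false})) // true
}
```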
The amount of flexibility, power, and fun that this tool gives you, with a very simple installation process, is yet to be matched. Shlomi Noach did a great job developing this at Outbrain, Booking.com, and now at GitHub.
If you are looking for a MySQL topology manager, Orchestrator is definitely worth looking at.