
High availability with asynchronous replication… and transparent R/W split

February 25, 2016 | Posted In: GTID, High-availability, MariaDB, MaxScale, MySQL


In this post, the first of a MaxScale series, I describe how to use MariaDB's MaxScale and the MySQL Utilities with MySQL asynchronous replication.

When we talk about high availability with asynchronous replication, we usually think of MHA or PRM. But if we want to use the slave(s) transparently for READs, what can we use?

Description:

  • We have three MySQL servers, but one has very limited resources and will never be able to handle the production load; in fact, this node is used for backups and some back-office queries.
  • We would like to use one node as the master and the other two as slaves, but only one of them will be addressed by the application for READs. If needed, that same node will become the master.
  • The application doesn't distinguish between READ and WRITE connections, and it's impossible to change it.

To achieve our goals, we will use MaxScale and its R/W splitter. When using MaxScale with MariaDB asynchronous replication, it's possible to use MariaDB's replication manager, a wonderful tool written in Go. Unfortunately, that tool doesn't support standard MySQL, so I used Oracle's MySQL Utilities instead.

Our three nodes are:

  • percona1 (master)
  • percona2 (powerful slave)
  • percona3 (weak slave)

It’s mandatory in this solution to use GTID, as it’s the only method supported by the mysql-utilities we are using.

This is the MaxScale configuration:
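A minimal sketch of what `/etc/maxscale.cnf` could look like for this setup (MaxScale 1.x syntax; addresses, ports, credentials and the script path are placeholders):

```ini
[maxscale]
threads=4

# The R/W splitter only uses the two nodes able to handle the load
[Splitter Service]
type=service
router=readwritesplit
servers=percona1,percona2
user=maxscaleuser
passwd=maxscalepwd

[Splitter Listener]
type=listener
service=Splitter Service
protocol=MySQLClient
port=4006

# The monitor watches all three nodes and triggers the failover script
[Replication Monitor]
type=monitor
module=mysqlmon
servers=percona1,percona2,percona3
user=maxscaleuser
passwd=maxscalepwd
monitor_interval=1000
script=/usr/local/bin/failover.sh
events=master_down

[CLI]
type=service
router=cli

[CLI Listener]
type=listener
service=CLI
protocol=maxscaled
port=6603

[percona1]
type=server
address=192.168.90.2
port=3306
protocol=MySQLBackend

[percona2]
type=server
address=192.168.90.3
port=3306
protocol=MySQLBackend

[percona3]
type=server
address=192.168.90.4
port=3306
protocol=MySQLBackend
```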

As you can see, the Splitter Service contains only the two nodes able to handle the load.

To perform the failover, we define a script in the Replication Monitor section that MaxScale runs when the master goes down.

That script calls mysqlrpladmin from the MySQL Utilities.

In the script, we also define the following line to make sure the weak slave will never become a master:
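Something along these lines, assuming a `candidates` variable passed to mysqlrpladmin's `--candidates` option (user and password are placeholders):

```shell
# Only percona2 is listed as a promotion candidate,
# so the weak percona3 can never become the master.
candidates="repl_user:repl_pwd@percona2:3306"
```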

When everything is setup and running, you should see something like this:
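For example, `maxadmin list servers` (via the CLI listener) should report all three backends with their replication roles; the output below is illustrative, not captured from a real run:

```
# maxadmin -pmariadb list servers
Server     | Address       | Port  | Connections | Status
-----------+---------------+-------+-------------+-----------------
percona1   | 192.168.90.2  |  3306 |           0 | Master, Running
percona2   | 192.168.90.3  |  3306 |           0 | Slave, Running
percona3   | 192.168.90.4  |  3306 |           0 | Slave, Running
```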

So as you can see, MaxScale discovers on its own which server is the master; this doesn't need to be specified in the configuration.

You can also use the mysqlrpladmin utility to verify the cluster's health:
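An invocation would look something like this (it needs a live cluster to run; users, passwords and hosts are placeholders):

```shell
mysqlrpladmin \
  --master=repl_user:repl_pwd@percona1:3306 \
  --slaves=repl_user:repl_pwd@percona2:3306,repl_user:repl_pwd@percona3:3306 \
  health
```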

Try it with --verbose 😉

When we test with sysbench and stop the master, we see some errors due to disconnects; during the promotion of the new master, sysbench can't reconnect:
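The test can be driven with something like the following sysbench 0.5 OLTP run, pointed at the MaxScale listener rather than at a MySQL node directly (script path, host and credentials are placeholders):

```shell
sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua \
  --mysql-host=maxscale-host --mysql-port=4006 \
  --mysql-user=app_user --mysql-password=app_pwd \
  --max-time=300 --max-requests=0 --report-interval=1 run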

It took 8 seconds to fail over automatically.

Then we can see the status of the servers:
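After the failover, `maxadmin list servers` would show something like this (illustrative output): percona2 promoted, percona3 still a slave, and the old master down:

```
Server     | Address       | Port  | Connections | Status
-----------+---------------+-------+-------------+-----------------
percona1   | 192.168.90.2  |  3306 |           0 | Down
percona2   | 192.168.90.3  |  3306 |           0 | Master, Running
percona3   | 192.168.90.4  |  3306 |           0 | Slave, Running
```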

If we restart percona1, we now see:
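The restarted node comes back up, but since it is not replicating from the new master yet, it is only reported as Running (again, illustrative output):

```
Server     | Address       | Port  | Connections | Status
-----------+---------------+-------+-------------+-----------------
percona1   | 192.168.90.2  |  3306 |           0 | Running
percona2   | 192.168.90.3  |  3306 |           0 | Master, Running
percona3   | 192.168.90.4  |  3306 |           0 | Slave, Running
```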

To add the node back into the asynchronous replication topology as a slave, we need to use another MySQL utility, mysqlreplicate:
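The call points the restarted percona1 at the new master, percona2 (credentials are placeholders; it requires the live servers to run):

```shell
mysqlreplicate \
  --master=repl_user:repl_pwd@percona2:3306 \
  --slave=repl_user:repl_pwd@percona1:3306 \
  --rpl-user=repl:replpwd
```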

This is the source of failover.sh:
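The original listing is not reproduced here; the sketch below shows what such a script could look like, based on the behavior described above (credentials and hostnames are placeholders, and it can only run against a live cluster):

```shell
#!/bin/bash
# failover.sh -- called by MaxScale's Replication Monitor when the master goes down.

MYSQL_USER=repl_user
MYSQL_PWD=repl_pwd

# Only percona2 may be promoted; the weak percona3 is never a candidate.
candidates="$MYSQL_USER:$MYSQL_PWD@percona2:3306"

# The old master is dead, so we only pass the slaves; mysqlrpladmin
# promotes the best candidate and repoints the remaining slave to it.
mysqlrpladmin \
  --slaves="$MYSQL_USER:$MYSQL_PWD@percona2:3306,$MYSQL_USER:$MYSQL_PWD@percona3:3306" \
  --candidates="$candidates" \
  --force \
  failover
```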

In the next post, we will focus on the monitoring module used in this configuration.

Frédéric Descamps

Frédéric joined Percona in June 2011. He is an experienced open source consultant with expertise in infrastructure projects as well as development tracks and database administration, and a believer in the devops culture.

4 Comments

  • What if percona1 (master & MaxScale) fails?
    We would need another node to deploy MaxScale, which means we need at least 4 nodes to achieve true high availability?

  • Hi!
    Is this working for you with the new version? Everything seems to work, but MaxScale doesn't detect the new topology with the new master. I also tried to set the new master manually through maxadmin, but after a while it gives this: "lost_master. [Master, Running] -> [Slave, Running]".
    Do you have any idea about this?
