Automation: A case for synchronous replication


Just yesterday I wrote about the math of automatic failover; today I'll share my thoughts on what makes MySQL failover different from failover of many other components, and why the asynchronous nature of the standard replication solution causes problems with it.

Let's first think about the properties of the simple components we fail over: web servers, application servers, etc. We can put these behind a simple load balancer which provides both high availability and fault tolerance. The technology can be made very robust because any node can handle any request at any time, and they all work concurrently. This means I do not have to move traffic between the nodes, but simply choose which nodes participate in handling traffic, which can be done much more seamlessly. It also drastically reduces the risk of exposure in case of problems with the setup.
I've seen many cases where the setup was not perfect and a normally working server would be removed from rotation for some reason... with essentially zero impact on the end user. Performance problems are also handled differently: the load balancer can typically monitor the load on each node and stop sending new requests to an overloaded one, as well as use other techniques that take the load into account. This means performance problems with a node can naturally reduce the amount of traffic sent to it, without pushing us to decide whether a failover needs to happen. Finally, because there is no context, nodes can simply be put in and out of rotation without any complex procedure.
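The stateless rotation logic described above can be sketched in a few lines. This is a hypothetical, simplified model (real load balancers such as HAProxy implement it with health checks and weighting), but it shows why adding or removing a stateless node is trivial:

```python
# Minimal sketch of load-aware rotation for stateless nodes.
# Nodes carry no replication state, so joining or leaving the
# rotation is just a set operation -- no multi-step failover.

class Balancer:
    def __init__(self):
        self.nodes = {}  # node name -> current load (e.g. active connections)

    def add(self, name):
        self.nodes[name] = 0          # joins rotation immediately

    def remove(self, name):
        self.nodes.pop(name, None)    # leaves rotation; no state to migrate

    def pick(self):
        # Send the request to the least-loaded node ("leastconn" style),
        # so an overloaded node naturally receives less new traffic.
        name = min(self.nodes, key=self.nodes.get)
        self.nodes[name] += 1
        return name

lb = Balancer()
for n in ("web1", "web2", "web3"):
    lb.add(n)
lb.remove("web2")        # zero-impact removal: remaining nodes keep serving
print(lb.pick())
```

Note there is no "catch up" or "promote" step anywhere: removal is instantaneous because no node holds state the others lack.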

Let's now compare this to a MySQL Replication based solution. With MySQL Replication you must have one Master node which handles the write traffic. You may be able to send some traffic to the Slaves, but because of the asynchronous nature of replication this is rather tricky. There are also many states, and steps you need to perform in exactly the right order to maintain consistency in the database. You can't fail requests over gradually: you need to stop writes to the Master at once, then switch to the Slave properly (ensuring it has caught up completely) to avoid inconsistencies in the database. There are many states to think about: each slave can be at different binary log coordinates relative to the master. There can be events which are not yet fetched by the slaves, and others which are fetched but not yet processed. Replication may be lagging, broken, stopped, etc., all of which makes the failover process tricky.
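The "is this slave caught up?" check at the heart of a safe failover can be sketched as follows. The coordinates here are hypothetical; in practice you would compare the output of SHOW MASTER STATUS on the master against the IO-thread and SQL-thread positions reported by SHOW SLAVE STATUS on each slave:

```python
# Sketch of the catch-up check behind a safe MySQL failover.
# Coordinates are (binlog_file_index, position) pairs; values are made up.

def caught_up(master_pos, slave_fetched_pos, slave_executed_pos):
    # A slave may have fetched events it has not yet executed,
    # and the master may have events the slave has not yet fetched.
    # Both gaps must be closed before writes can move to the slave.
    return slave_fetched_pos >= master_pos and slave_executed_pos >= master_pos

master = (1042, 730)          # master's current binlog position
slave_fetched = (1042, 730)   # IO thread: everything is in the relay log
slave_executed = (1042, 512)  # SQL thread: still applying relay log events

print(caught_up(master, slave_fetched, slave_executed))  # False
```

In this example the slave has fetched everything but not executed it, so writes must stay stopped until the SQL thread drains the relay log. That waiting time is exactly the downtime window discussed below.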

Not only is the failover process for MySQL Replication tricky, it can also be long (consider lagging slaves needing a few minutes to catch up), and it is not downtime-free. While you have stopped writes on the master and are waiting for the slaves to catch up before you can start writing to them, you are experiencing downtime.

This means the risk with automated failover comes from two elements. First, the failover itself involves a short downtime, so a failover triggered at the wrong time causes downtime and is user visible. Second, due to the complexity of MySQL Replication, there is an increased chance that something goes wrong during failover, and you can end up with an inconsistent database, broken replication, and other problems.

I should note that the semi-synchronous replication available in MySQL 5.5 does not solve these problems. It helps ensure a slave can be caught up to the master without losing the last committed transactions, but the failover process is still complicated and slaves can still be lagging, as semi-synchronous replication does not wait for the event to be executed, only for it to be queued in the relay log.
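For reference, enabling semi-synchronous replication in MySQL 5.5 looks roughly like this (plugin and variable names as shipped with 5.5; treat this as a sketch, not a complete setup). Note that none of these settings change the key limitation above: the master's commit waits only until the event reaches a slave's relay log, not until it is executed there.

```sql
-- On the master:
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
SET GLOBAL rpl_semi_sync_master_enabled = 1;
SET GLOBAL rpl_semi_sync_master_timeout = 1000;  -- ms before falling back to async

-- On each slave:
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
SET GLOBAL rpl_semi_sync_slave_enabled = 1;
```

The timeout is worth noticing: if no slave acknowledges in time, the master silently falls back to fully asynchronous replication, so even the "no lost transactions" guarantee is conditional.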

Now let's see how Synchronous Replication solutions such as Percona XtraDB Cluster and MySQL Cluster are different. Synchronous replication essentially allows you to work with database nodes (SQL nodes in the case of MySQL Cluster) the same way you work with web servers: any of them can accept and handle a connection, you can send more or less traffic to each node, and you can remove nodes from rotation with only the current connections impacted, which can easily be reestablished by a well-written application. The failover process is also very simple in this case, which allows the use of simple solutions such as HAProxy with these technologies: a node is either part of the cluster and ready to serve traffic, or it is not. There is no complex multi-step failover process either; from the HA system's standpoint, if the node is part of the cluster, it can serve the traffic. As simple as that.
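A minimal HAProxy fragment for such a cluster can look roughly like this. The addresses are made up, and the port-9200 health check assumes Percona's clustercheck script is exposed via xinetd on each node, which is a common but optional arrangement:

```
listen mysql-cluster
    bind 0.0.0.0:3306
    mode tcp
    balance leastconn
    option httpchk                            # probe clustercheck on port 9200
    server node1 10.0.0.1:3306 check port 9200
    server node2 10.0.0.2:3306 check port 9200
    server node3 10.0.0.3:3306 check port 9200
```

The point is how little is here: the check answers a yes/no question ("is this node a synced cluster member?"), and HAProxy simply stops routing to nodes that answer no. There is no promotion step, no catch-up wait, no binlog coordinates.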

There is no magic or silver bullet here, of course. By making the HA solution simpler, clustering solutions become more complicated internally. They are much more tightly coupled, and there is a much higher chance of problems bringing down the complete cluster. Many decisions which would be taken by the HA system in the case of MySQL Replication are handled by the cluster internally. For example, if timeouts are misconfigured, transient network slowness can be seen as a loss of connection between all the nodes and stop operation. As such, these solutions come with a certain hardcoded level of automation: for example, if a node in Percona XtraDB Cluster discovers a problem, it will leave the cluster. You do not leave it "operating while broken" waiting for an operator to decide whether it really should be kicked out. This comes as a requirement of the "guarantees" the cluster provides, such as that it is safe to read and write from any node and the application will work properly.

The fact that cluster solutions do a lot of the dirty work internally does not mean the testing described in the previous article is not needed. On the contrary. Poorly tested clusters fail, and they often fail harder than MySQL Replication, taking a longer time to recover and get back to an operational state.

Conclusion: If you're looking to set up an HA solution for MySQL, check out synchronous replication based solutions in addition to MySQL Replication based ones. I would especially point out Percona XtraDB Cluster and other Galera based solutions as worth a look, as they provide the benefits of synchronous replication while retaining most of the operational and development properties of familiar InnoDB tables. MySQL Cluster is often a lot more powerful, but also a more complicated, solution. If you're going to Percona Live NY, there is a great Percona XtraDB Cluster tutorial, as well as talks about Percona XtraDB Cluster, Synchronous Replication and Galera Technology, MySQL Cluster production use, and a keynote and talk about Clustrix, another great MySQL-compatible synchronous replication solution.


Comments

  1. Jose Luis Martinez says

    What about good ol' Active-Passive Synchronous DRBD clusters?

    At least you get synchronous replication, so no data should be lost on failover. Having a DRBD cluster + N slaves seems like a stable, well-proven solution for lots of people. What is your experience / opinion on these types of clusters?

  2. says

    Joe,

    Yes. DRBD is essentially in the same boat, having a rather simple "state" because of being synchronous. There are some issues to be aware of with DRBD, though... some of these have actually gotten worse over the years.

    The DRBD passive replica wastes resources, though this is perhaps the least of the concerns, as HA never comes free anyway.

    Failover to a DRBD node is rather expensive. File system crash recovery needs to take place, then database recovery, and finally you start with cold caches and need to warm up. This is, BTW, where Percona Server's fast warmup feature is a great help.

    The DRBD overhead becomes a lot more significant in the era of commodity RAID with BBU and Flash. The network was fast compared to slow rotating drives, but it is slow compared to flash and RAID cache. This can be improved with fast networks (10GbE, Dolphin, etc.), but these are not readily available.

    You also need to understand which failures DRBD really protects you from. We had more than one customer where hardware glitches, MySQL bugs, or Kernel bugs had caused the database to become corrupted... and that corruption was faithfully propagated by DRBD.
