ProxySQL and MHA Integration

This blog post discusses ProxySQL and MHA integration, and how they work together.

MHA (Master High Availability Manager and tools for MySQL) is almost fully integrated with ProxySQL. This means you can rely on MHA's standard features to manage the failover, and on ProxySQL to manage the traffic and shift it from one server to another.

This is one of the main differences between using MHA with a VIP and using MHA with ProxySQL: with MHA/ProxySQL, there is no need to move IPs around or redefine DNS entries.

The following is an example of an MHA configuration file for use with ProxySQL:
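The configuration file itself did not survive in this copy of the post, so the sketch below is a hedged reconstruction: the hostnames (other than xxx.xxx.xxx.107, which appears later in the post), credentials, and paths are assumptions, not the author's actual values.

```ini
[server default]
# Illustrative credentials and paths; replace with your own
user=mha
password=mha
ssh_user=root
repl_user=replica
repl_password=replica
manager_workdir=/var/log/masterha/app1
manager_log=/var/log/masterha/app1/manager.log
remote_workdir=/var/log/masterha/app1
master_binlog_dir=/var/lib/mysql
# With ProxySQL handling the traffic there is no VIP to move; the sample
# master_ip_failover/master_ip_online_change scripts can be used once the
# "FIX ME" lines are commented out (see the NOTE below).
master_ip_failover_script=/usr/local/bin/master_ip_failover
master_ip_online_change_script=/usr/local/bin/master_ip_online_change

[server1]
hostname=xxx.xxx.xxx.107
candidate_master=1

[server2]
# Hypothetical second node
hostname=xxx.xxx.xxx.108
candidate_master=1
```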

NOTE: Be sure to comment out the "FIX ME" lines in the sample scripts.

After that, just install MHA as you normally would.

In ProxySQL, be sure to have all MHA users and the servers set.
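The original statements are not shown here, so this is a hedged sketch of what that setup looks like in the ProxySQL admin interface. The hostgroup numbers (600 for writes, 601 for reads) follow the comment discussion below; usernames, passwords, and the second host are placeholders.

```sql
-- Servers: writer hostgroup 600, reader hostgroup 601 (illustrative numbering)
INSERT INTO mysql_servers (hostgroup_id, hostname, port, max_replication_lag)
VALUES (600, 'xxx.xxx.xxx.107', 3306, 10);
INSERT INTO mysql_servers (hostgroup_id, hostname, port, max_replication_lag)
VALUES (601, 'xxx.xxx.xxx.108', 3306, 10);  -- hypothetical slave

-- Application users; credentials here mirror the sysbench example in the comments
INSERT INTO mysql_users (username, password, active, default_hostgroup, default_schema)
VALUES ('mha_W', 'test', 1, 600, 'test_mha');
INSERT INTO mysql_users (username, password, active, default_hostgroup, default_schema)
VALUES ('mha_R', 'test', 1, 601, 'test_mha');

LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL USERS TO RUNTIME; SAVE MYSQL USERS TO DISK;
```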

When using ProxySQL with standard replication, it's important to grant additional privileges to the ProxySQL monitor user: it must also have REPLICATION CLIENT, or it will fail to check the slave lag. The servers MUST also have a value defined for the max_replication_lag attribute, or the check will be ignored.
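In practice that means something like the following (the monitor account name is the ProxySQL default, but treat the credentials as placeholders):

```sql
-- On the MySQL servers (pre-8.0 syntax, matching the era of this post):
GRANT USAGE, REPLICATION CLIENT ON *.* TO 'monitor'@'%' IDENTIFIED BY 'monitor';

-- In the ProxySQL admin interface: max_replication_lag must be > 0,
-- otherwise the lag check is skipped for that server.
UPDATE mysql_servers SET max_replication_lag = 10;
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
```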

As a reminder:
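One piece worth recalling here is the mysql_replication_hostgroups entry this whole mechanism relies on; a hedged sketch, again using the 600/601 numbering from the comment thread:

```sql
-- ProxySQL moves servers between these hostgroups based on read_only:
-- read_only=0 -> writer hostgroup 600, read_only=1 -> reader hostgroup 601.
INSERT INTO mysql_replication_hostgroups (writer_hostgroup, reader_hostgroup, comment)
VALUES (600, 601, 'MHA-managed replication');
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
```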

OK, now that everything is ready, let's rock'n'roll!

Controlled failover

First of all, the masterha_manager should not be running or you will get an error.
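A quick way to confirm this, assuming the configuration file path used above:

```shell
# Should report the app as stopped; if it is running, stop it first:
masterha_check_status --conf=/etc/app1.cnf
# masterha_stop --conf=/etc/app1.cnf
```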

Now let’s start some traffic:
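The exact command was not preserved here; an illustrative sysbench run through ProxySQL's listener on port 6033, using the schema and credentials assumed above (sysbench options vary by version):

```shell
sysbench --test=oltp --mysql-host=127.0.0.1 --mysql-port=6033 \
  --mysql-user=mha_W --mysql-password=test --mysql-db=test_mha \
  --oltp-table-size=100000 --num-threads=4 --max-requests=0 \
  --max-time=600 run
```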

Let it run for a bit, then check:
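For instance, from the ProxySQL admin interface (port 6032), traffic per hostgroup can be checked like this:

```sql
SELECT hostgroup, srv_host, Queries, Bytes_data_sent
FROM stats_mysql_connection_pool
ORDER BY hostgroup, srv_host;
```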

Now perform the failover. To do this, instruct MHA to do a switch and to set the OLD master as a new slave:
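With MHA that is an online switchover; something along these lines, where the configuration path and the new master host (xxx.xxx.xxx.108) are assumptions:

```shell
masterha_master_switch --master_state=alive --conf=/etc/app1.cnf \
  --new_master_host=xxx.xxx.xxx.108 --orig_master_is_new_slave \
  --running_updates_limit=10000 --interactive=0
```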

Check what happened:

Check ProxySQL:
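Assuming the replication hostgroups sketched earlier, the new master should now sit in the writer hostgroup (600) and the old master in the reader hostgroup (601):

```sql
SELECT hostgroup_id, hostname, status
FROM runtime_mysql_servers
ORDER BY hostgroup_id;
```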

In this case the servers weren't behind the master, and the switch happened quite fast.

We can see that the WRITE operations, which are normally an issue given the need to move a VIP around or change name resolution, suffered only a brief hiccup.

Read operations were not affected at all. Nice, eh?

Do you know how long it takes to do a switch under these conditions? real 0m2.710s. Yes, 2.7 seconds.

This is more evidence that, most of the time, the cost of an MHA-based switch comes from the need to redirect traffic from A to B over the network.

Crash failover

What happens if, instead of a clean switch, we have to cover a real failover?

First of all, let’s start masterha_manager:
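A typical invocation, with the paths from the assumed configuration above:

```shell
# Run the manager in the background and keep it monitoring the master
nohup masterha_manager --conf=/etc/app1.cnf \
  --wait_on_monitor_error=60 --wait_on_failover_error=60 \
  >> /var/log/masterha/app1/manager.log 2>&1 &
```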

Then let's start the load again. Finally, go to the MySQL node acting as master (xxx.xxx.xxx.107) and crash it:
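A simple way to simulate a hard crash on that node:

```shell
# If mysqld_safe is running, kill it first so it does not restart mysqld
pkill -9 mysqld_safe
kill -9 $(pidof mysqld)
```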

As before, check what happened on the application side:

So it takes ~10 seconds to perform the failover.

To understand this better, let's see what happened in MHA-land:
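The timings come from the MHA manager log; with the paths assumed earlier, the detection and failover-completion lines can be pulled out like this:

```shell
# Log path is an assumption; adjust to your manager_log setting
grep -iE 'unreachable|failover' /var/log/masterha/app1/manager.log
```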

MHA sees the server failing at xx:47, but because of the retries and check validation, it fully acknowledges the downtime only at xx:56 (~8 seconds later).

Performing the whole failover then takes only ~2 seconds (again). Because no movable IPs or DNS changes were involved, the operations were fast. This is true when the servers have the binary log events locally; it's a different story if MHA also has to manage and push data from the binary log to MySQL.

As you can see, ProxySQL also helps reduce the time needed in this scenario by skipping the network-related operations entirely, and those are the operations that cause the most trouble in these cases.


Comments (6)

  • Alessandro Moretti Reply

    Is there a reason to use mysql_replication_hostgroups? This table implies that some host are read-only, in your case the 601 hostgroup. On failover will the MHA change the read_only on the old master to 1 and the new master to 0?

    December 8, 2016 at 12:48 pm
    • Marco Tusa Reply

      Hi Alessandro,
      yes, that is correct. Here we are talking about standard MySQL replication with the MHA controller. As such, we have a master with READ_ONLY=0 and slaves with READ_ONLY=1. In case of failover, MHA will change the READ_ONLY variable on the newly elected MASTER to 0.
      ProxySQL will “see” that event and will copy the entry to the 600 group, re-enabling writes against the newly elected Master.

      Hope this clarifies.

      December 8, 2016 at 12:53 pm
  • Wagner Bianchi Reply

    Marco, good article, as always!

    Just calling the attention to a small thing; when you create a database to hold sysbench tables on ProxySQL, you got in “test_mha” and when you run sysbench, you’re running it against “mha_test”:

    #: proxysql default_schema for mha_* users
    insert into mysql_users (username,password,active,default_hostgroup,default_schema) values ('mha_W','test',1,10,'test_mha');

    #: sysbench
    sysbench … --mysql-db=mha_test --db-driver=mysql … run

    If it’s not a problem, pardon myself. Cheers!

    January 26, 2017 at 9:54 am
  • BHUVANESHi Reply

    Hey, I'm using Orchestrator for failover. It also sets read_only=OFF on the new master, but I didn't see any hostgroup changes. Did you use any parameters in ProxySQL?

    November 10, 2018 at 8:49 am
    • Marco Tusa Reply

      Are you using the replication hostgroup table?
      The best way is to share the mysql_servers and replication hostgroup table information to debug your problem.

      November 10, 2018 at 10:34 am
