
Consul, ProxySQL and MySQL HA

September 16, 2016 | Posted In: MySQL


When it comes to “decision time” about which type of MySQL HA (high-availability) solution to implement, and how to architect the solution, many questions come to mind. The most important questions are:

  • “What are the best tools to provide HA and Load Balancing?”
  • “Should I be deploying this proxy tool on my application servers or on a standalone server?”

Ultimately, the best tool really depends on the needs of your application and your environment. You might already be using specific tools such as Consul or MHA, or you might be looking to implement tools that provide richer features. The dilemma of deploying a proxy instance per application host versus a standalone proxy instance is usually a trade-off between “a less effective load balancing algorithm” and “a single point of failure.” Neither is desirable, but there are ways to implement a solution that balances all aspects.

In this article, we’ll go through a solution that is suitable for an application that has not been coded to split reads and writes over separate MySQL instances. An application like this would rely on a proxy or third-party tool to split reads/writes, and preferably a solution that has high availability at the proxy layer. The solution described here is composed of ProxySQL, Consul and Master High Availability (MHA). Within this article, we’ll focus on the configuration required for ProxySQL and Consul, since there are many articles that cover MHA configuration (such as Miguel’s recent MHA Quick Start Guide blog post).

When deploying Consul in production, a minimum of three instances is recommended. In this example, the Consul agents run on the application server (appserver) as well as on the two “ProxySQL servers,” mysql1 and mysql2 (which act as the HA proxy pair). This is not a hard requirement, and these instances can easily run on other hosts or in Docker containers. MySQL is deployed locally on mysql1 and mysql2; however, this could just as well be 1..n separate standalone DB server instances:

[Diagram: Consul and ProxySQL architecture]

So let’s move on to the actual configuration of this HA solution, starting with Consul.

Installation of Consul:

Firstly, we’ll need to install the required packages, download the Consul archive and perform the initial configuration. We’ll need to perform the same installation on each of the nodes (i.e., appserver, mysql1 and mysql2).
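On CentOS/RHEL hosts, the installation amounts to something like the sketch below. The Consul version (0.7.0) and paths are illustrative assumptions; adjust them to your environment and repeat on appserver, mysql1 and mysql2:

```shell
# Install helper packages (bind-utils provides dig for testing DNS later)
sudo yum -y install unzip wget bind-utils

# Download and unpack the Consul binary (a single static executable);
# 0.7.0 is an example version -- check releases.hashicorp.com for the latest
wget https://releases.hashicorp.com/consul/0.7.0/consul_0.7.0_linux_amd64.zip
sudo unzip consul_0.7.0_linux_amd64.zip -d /usr/local/bin/

# Create the configuration and data directories
sudo mkdir -p /etc/consul.d /opt/consul
```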

Configuration of Consul on Application Server (used as ‘bootstrap’ node):

Now that we’re done with the installation on each of the hosts, let’s continue with the configuration. In this example we’ll bootstrap the Consul cluster using “appserver”:
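A minimal bootstrap configuration might look like the following sketch. The appserver IP (192.168.1.119), datacenter name and log settings are placeholder assumptions; substitute your own values:

```shell
# Write the bootstrap server configuration (run on appserver only)
sudo tee /etc/consul.d/consul.json >/dev/null <<'EOF'
{
  "datacenter": "dc1",
  "data_dir": "/opt/consul",
  "node_name": "appserver",
  "server": true,
  "bootstrap": true,
  "bind_addr": "192.168.1.119",
  "client_addr": "0.0.0.0",
  "ui": true,
  "log_level": "INFO"
}
EOF

# Start the agent in the background, logging to /var/log/consul.log
sudo sh -c 'consul agent -config-dir=/etc/consul.d >> /var/log/consul.log 2>&1 &'
```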

The service should now be started, and you can verify this in the log at “/var/log/consul.log”.

Configuration of Consul on Proxy Servers:

The next item is to configure each of the proxy Consul agents. Note that the “agent name” and the “IP address” need to be updated for each host (values for both must be unique):
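The sketch below shows a possible configuration for mysql1; on mysql2, change “node_name” to mysql2 and “bind_addr” to 192.168.1.121. The ProxySQL port (6033), the check credentials (myuser/mypass) and the appserver join address are all assumptions to adapt:

```shell
# Server agent configuration with the ProxySQL service and health check
sudo tee /etc/consul.d/consul.json >/dev/null <<'EOF'
{
  "datacenter": "dc1",
  "data_dir": "/opt/consul",
  "node_name": "mysql1",
  "server": true,
  "bind_addr": "192.168.1.120",
  "client_addr": "0.0.0.0",
  "retry_join": ["192.168.1.119"],
  "service": {
    "name": "proxysql",
    "port": 6033,
    "check": {
      "script": "mysqladmin ping -h 127.0.0.1 -P 6033 -u myuser -pmypass",
      "interval": "5s"
    }
  }
}
EOF

# Start the agent exactly as on the bootstrap node
sudo sh -c 'consul agent -config-dir=/etc/consul.d >> /var/log/consul.log 2>&1 &'
```

Note that a plain “mysqladmin ping” check has a caveat discussed in the comments below: it exits with code 1 (a warning, not a critical) when the server is unreachable, so you may want to wrap it in a script.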

At this point, we have Consul installed, configured and running on each of our hosts: appserver, mysql1 and mysql2. Now it’s time to install and configure ProxySQL on mysql1 and mysql2.

Installation & Configuration of ProxySQL:

The same procedure should be run on both mysql1 and mysql2 hosts:
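A condensed sketch, assuming CentOS 7 and ProxySQL 1.2.x (the RPM URL, backend IPs and the myuser/mypass application credentials are all examples):

```shell
# Install and start ProxySQL (check github.com/sysown/proxysql/releases
# for the current package for your distribution)
sudo yum -y install https://github.com/sysown/proxysql/releases/download/v1.2.2/proxysql-1.2.2-1-centos7.x86_64.rpm
sudo service proxysql start

# Configure backends and an application user via the admin interface
# (port 6032; the default admin/admin credentials should be changed in production)
mysql -u admin -padmin -h 127.0.0.1 -P 6032 <<'EOF'
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (0, '192.168.1.120', 3306);
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (0, '192.168.1.121', 3306);
INSERT INTO mysql_users (username, password, default_hostgroup) VALUES ('myuser', 'mypass', 0);
LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL USERS TO RUNTIME; SAVE MYSQL USERS TO DISK;
EOF
```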

MySQL Configuration:

We also need to perform one configuration step on the MySQL servers in order to create a user for ProxySQL to monitor the instances:
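For example (run on the current master so replication propagates the user to the slaves; the credentials are placeholders that must match ProxySQL’s mysql-monitor_* variables):

```shell
# Create the monitoring user on MySQL
mysql -u root -p <<'EOF'
CREATE USER 'monitor'@'%' IDENTIFIED BY 'monit0r';
GRANT USAGE, REPLICATION CLIENT ON *.* TO 'monitor'@'%';
EOF

# Point ProxySQL at these credentials (repeat on mysql1 and mysql2)
mysql -u admin -padmin -h 127.0.0.1 -P 6032 <<'EOF'
UPDATE global_variables SET variable_value='monitor' WHERE variable_name='mysql-monitor_username';
UPDATE global_variables SET variable_value='monit0r' WHERE variable_name='mysql-monitor_password';
LOAD MYSQL VARIABLES TO RUNTIME; SAVE MYSQL VARIABLES TO DISK;
EOF
```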

We can view the configuration of the monitor user on the ProxySQL host by checking the global variables on the admin interface:
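For instance:

```shell
# List all monitor-related variables on the ProxySQL admin interface
mysql -u admin -padmin -h 127.0.0.1 -P 6032 -e \
  "SELECT variable_name, variable_value FROM global_variables WHERE variable_name LIKE 'mysql-monitor%';"
```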

Testing Consul:

Now that Consul and ProxySQL are configured we can do some tests from the “appserver”. First, we’ll verify that the hosts we’ve added are both reporting [OK] on our DNS requests:
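For example, by querying the local agent’s DNS interface (port 8600 by default) for the proxysql service:

```shell
# Both healthy ProxySQL nodes should appear in the answer section
dig @127.0.0.1 -p 8600 proxysql.service.consul +short
```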

As you can see from the output above, DNS is reporting both 192.168.1.120 and 192.168.1.121 as available for the ProxySQL service. As soon as the ProxySQL check fails, the nodes will no longer report in the output above.

We can also view the status of our cluster and agents through the Consul Web GUI which runs on port 8500 of all the Consul servers in this configuration (e.g. http://192.168.1.120:8500/):

[Screenshot: Consul Web GUI]

Testing ProxySQL:

So now that we have this configured, we can run some basic tests to verify that ProxySQL is load balancing our connections:
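A quick way to do this from appserver is to open several connections through the Consul DNS name and see which backend answers each one. This sketch assumes the OS resolver forwards *.consul lookups to the local Consul agent (e.g., via dnsmasq), and that myuser/mypass is an application login defined both in ProxySQL and on the MySQL servers:

```shell
# Each connection resolves proxysql.service.consul via Consul DNS and is
# then routed by ProxySQL to one of the MySQL backends
for i in 1 2 3 4; do
  mysql -u myuser -pmypass -h proxysql.service.consul -P 6033 -BNe "SELECT @@hostname;"
done
```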

Perfect! We’re ready to use the hostname “proxysql.service.consul” to connect to our MySQL instances using a round-robin load balancing and HA proxy solution. If one of the two ProxySQL instances fails, we’ll continue communicating with the database through the other. Of course, this configuration is not limited to just two hosts, so feel free to add as many as you need. Be aware that in this example the two hosts’ replication hierarchy is managed by MHA in order to allow for master/slave promotion. By performing an automatic or manual failover using MHA, ProxySQL automatically detects the change in replication topology and redirects writes to the newly promoted master instance.

To make this configuration more durable, it is encouraged to create a more intelligent Consul check – i.e., one that verifies more than just the availability of the MySQL service (an example would be to select some data from a table). It is also recommended to fine-tune the interval of the check to suit the requirements of your application.
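As a sketch of such a check, the script below (all names illustrative) reads a row through ProxySQL and, importantly, exits with a code greater than 1 so that a failure registers as critical rather than a warning (see the comment below about mysqladmin ping returning 1):

```shell
#!/bin/sh
# Example Consul check script, e.g. /usr/local/bin/proxysql-check.sh
# Reads a row through ProxySQL so the check fails when ProxySQL or all
# backends are down; the table and credentials are illustrative.
if mysql -u monitor -pmonit0r -h 127.0.0.1 -P 6033 \
    -BNe "SELECT 1 FROM app.heartbeat LIMIT 1" >/dev/null 2>&1; then
  exit 0   # passing
else
  exit 2   # critical: Consul removes the node from DNS results
fi
```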

Nik Vyzas

Nik Vyzas joined the Percona Remote DBA Team in 2015 and is based out of Athens, Greece. Nik has 10+ years experience working with various SQL, NoSQL and Open-source databases. He has worked at a number of companies including Accenture and Pythian. When he is not working, Nik enjoys coding, gaming and cycling.

8 Comments

  • I’m planning to use the “read only failover” and “read-write split” support that comes embedded in JDBC Connector/J (directly from the app servers, to reduce moving parts). Is there any major advantage to using a proxy instead?

    • AFAIK Connector/J is replication aware but requires you to specify operations that are RO [conn.setReadOnly(true);] and RW [conn.setReadOnly(false);] – so that they can be routed to the suitable backend, each of which runs on a different port.

      With ProxySQL you use one port for all queries (both RW and RO) and the rules will handle routing the queries to a suitable MySQL server – so you don’t need to call setReadOnly() at all. This leads to less and simpler code since you don’t need to write different code for RO / RW operations.

      • Thanks for your reply. You’re right about the app modification… 🙁

        In my case I don’t have more than 4-8 app servers. Is it OK to put a ProxySQL (or MaxScale) instance on each of them to keep things simple?

        BTW, why is there no such thing as a Percona Proxy? It would be great to know that there is a proxy directly supported by the Percona expert community.

        • You can install your proxy solution directly on your application server; however, that way your load balancing will not be calculated against the full workload of all application servers, but rather individually. I assume in this configuration you would only set up one proxy instance per application server, so you should think about what will happen in case of failure, i.e., you should find a way to automatically fail over to another proxy instance or remove that application server from the application server pool. It really depends on how your end-to-end solution is designed.

  • Nice post, but there is something that should be improved in the Consul layout:
    ‘mysqladmin --ping’ returns status code 1 (i.e., a warning) if the ProxySQL DB cannot be reached. By default, Consul does not exclude warning hosts from its DNS (at least in recent versions), so this configuration can be a problem if ProxySQL goes down, because the node will not be removed from DNS.
    So ensure your check returns a status code other than 0 or 1 (which would then be a critical) if ProxySQL is not accessible/down. Alternatively, you can set the ‘only_passing’ option in the Consul config file to publish only passing checks: https://www.consul.io/docs/agent/options.html#only_passing

    Thanks for the post again.
