When it comes to “decision time” about which type of MySQL HA (high-availability) solution to implement, and how to architect the solution, many questions come to mind. The most important questions are:
- “What are the best tools to provide HA and Load Balancing?”
- “Should I be deploying this proxy tool on my application servers or on a standalone server?”
Ultimately, the best tool really depends on the needs of your application and your environment. You might already be using specific tools such as Consul or MHA, or you might be looking to implement tools that provide richer features. The dilemma of deploying a proxy instance per application host versus a standalone proxy instance is usually a trade-off between “a less effective load balancing algorithm” and “a single point of failure.” Neither is desirable, but there are ways to implement a solution that balances both concerns.
In this article, we’ll go through a solution that is suitable for an application that has not been coded to split reads and writes over separate MySQL instances. An application like this relies on a proxy or third-party tool to split reads/writes, and preferably on a solution that provides high availability at the proxy layer. The solution described here consists of ProxySQL, Consul and Master High Availability (MHA). We’ll focus on the configuration required for ProxySQL and Consul, since there are many articles that cover MHA configuration (such as Miguel’s recent MHA Quick Start Guide blog post).
When deploying Consul in production, a minimum of three instances is recommended. In this example, the Consul agents run on the application server (appserver) as well as on the two “ProxySQL servers” mysql1 and mysql2 (which act as the HA proxy pair). This is not a hard requirement, and these instances can easily run on another host or Docker container. MySQL is deployed locally on mysql1 and mysql2; however, this could just as well be 1..n separate standalone DB server instances.
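To summarize the layout assumed throughout the rest of this example (host names and IP addresses as used in the configuration snippets below):

appserver (192.168.1.119): application + Consul agent (bootstrap node)
mysql1    (192.168.1.120): MySQL + ProxySQL + Consul agent
mysql2    (192.168.1.121): MySQL + ProxySQL + Consul agent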
So let’s move on to the actual configuration of this HA solution, starting with Consul.
Installation of Consul:
Firstly, we’ll need to install the required packages, download the Consul archive and perform the initial configuration. We’ll need to perform the same installation on each of the nodes (i.e., appserver, mysql1 and mysql2).
### Install pre-requisite packages:
sudo yum -y install wget unzip bind-utils dnsmasq

### Install Consul:
sudo useradd consul
sudo mkdir -p /opt/consul /etc/consul.d
sudo touch /var/log/consul.log /etc/consul.d/proxysql.json
cd /opt/consul
sudo wget https://releases.hashicorp.com/consul/0.6.4/consul_0.6.4_linux_amd64.zip
sudo unzip consul_0.6.4_linux_amd64.zip
sudo ln -s /opt/consul/consul /usr/bin/consul
sudo chown consul:consul -R /etc/consul* /opt/consul* /var/log/consul.log
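As a quick sanity check before moving on (a minimal sketch; the exact output depends on the version installed), confirm the binary is on the PATH:

$ consul version
Consul v0.6.4
# ... snippet ...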
Configuration of Consul on Application Server (used as ‘bootstrap’ node):
Now that we’re done with the installation on each of the hosts, let’s continue with the configuration. In this example we’ll bootstrap the Consul cluster using “appserver”:
### Edit configuration files
$ sudo vi /etc/consul.conf
{
  "datacenter": "dc1",
  "data_dir": "/opt/consul/",
  "log_level": "INFO",
  "node_name": "agent1",
  "server": true,
  "ui": true,
  "bootstrap": true,
  "client_addr": "0.0.0.0",
  "advertise_addr": "192.168.1.119"   ## Add server IP here
}
######

$ sudo vi /etc/consul.d/proxysql.json
{"services": [
  {
    "id": "proxy1",
    "name": "proxysql",
    "address": "192.168.1.120",
    "tags": ["mysql"],
    "port": 6033,
    "check": {
      "script": "mysqladmin ping --host=192.168.1.120 --port=6033 --user=root --password=123",
      "interval": "3s"
    }
  },
  {
    "id": "proxy2",
    "name": "proxysql",
    "address": "192.168.1.121",
    "tags": ["mysql"],
    "port": 6033,
    "check": {
      "script": "mysqladmin ping --host=192.168.1.121 --port=6033 --user=root --password=123",
      "interval": "3s"
    }
  }
]
}
######

### Start Consul agent
$ sudo su - consul -c 'consul agent -config-file=/etc/consul.conf -config-dir=/etc/consul.d > /var/log/consul.log &'

### Setup DNSMASQ (as root)
echo "server=/consul/127.0.0.1#8600" > /etc/dnsmasq.d/10-consul
service dnsmasq restart

### Remember to add the localhost as a DNS server (this step can vary
### depending on how your DNS servers are managed... here I'm just
### adding the following line to resolv.conf):
sudo vi /etc/resolv.conf
#... snippet ...#
nameserver 127.0.0.1
#... snippet ...#

### Restart dnsmasq
sudo service dnsmasq restart
The service should now be started, and you can verify this in the log file “/var/log/consul.log”.
Configuration of Consul on Proxy Servers:
The next item is to configure each of the proxy Consul agents. Note that the “agent name” and the “IP address” need to be updated for each host (values for both must be unique):
### Edit configuration files
$ sudo vi /etc/consul.conf
{
  "datacenter": "dc1",
  "data_dir": "/opt/consul/",
  "log_level": "INFO",
  "node_name": "agent2",              ### Agent node name must be unique
  "server": true,
  "ui": true,
  "bootstrap": false,                 ### Disable bootstrap on joiner nodes
  "client_addr": "0.0.0.0",
  "advertise_addr": "192.168.1.xxx",  ### Set to local instance IP
  "dns_config": { "only_passing": true }
}
######

$ sudo vi /etc/consul.d/proxysql.json
{"services": [
  {
    "id": "proxy1",
    "name": "proxysql",
    "address": "192.168.1.120",
    "tags": ["mysql"],
    "port": 6033,
    "check": {
      "script": "mysqladmin ping --host=192.168.1.120 --port=6033 --user=root --password=123",
      "interval": "3s"
    }
  },
  {
    "id": "proxy2",
    "name": "proxysql",
    "address": "192.168.1.121",
    "tags": ["mysql"],
    "port": 6033,
    "check": {
      "script": "mysqladmin ping --host=192.168.1.121 --port=6033 --user=root --password=123",
      "interval": "3s"
    }
  }
]
}
######

### Start Consul agent:
$ sudo su - consul -c 'consul agent -config-file=/etc/consul.conf -config-dir=/etc/consul.d > /var/log/consul.log &'

### Join Consul cluster specifying 1st node IP e.g.
$ consul join 192.168.1.119

### Verify logs and look out for the following messages:
$ cat /var/log/consul.log
==> Starting Consul agent...
==> Starting Consul agent RPC...
==> Consul agent running!
         Node name: 'agent2'
        Datacenter: 'dc1'
            Server: true (bootstrap: false)
       Client Addr: 0.0.0.0 (HTTP: 8500, HTTPS: -1, DNS: 8600, RPC: 8400)
      Cluster Addr: 192.168.1.120 (LAN: 8301, WAN: 8302)
    Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
             Atlas:

==> Log data will now stream in as it occurs:

# ... snippet ...
    2016/09/05 19:48:04 [INFO] agent: Synced service 'consul'
    2016/09/05 19:48:04 [INFO] agent: Synced check 'service:proxysql1'
    2016/09/05 19:48:04 [INFO] agent: Synced check 'service:proxysql2'
# ... snippet ...
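Once both proxy nodes have joined, cluster membership can be confirmed from any of the agents. The output below is an approximation, assuming the second proxy node was named “agent3”:

$ consul members
Node    Address             Status  Type    Build  Protocol  DC
agent1  192.168.1.119:8301  alive   server  0.6.4  2         dc1
agent2  192.168.1.120:8301  alive   server  0.6.4  2         dc1
agent3  192.168.1.121:8301  alive   server  0.6.4  2         dc1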
At this point, we have Consul installed, configured and running on each of our hosts: appserver, mysql1 and mysql2. Now it’s time to install and configure ProxySQL on mysql1 and mysql2.
Installation & Configuration of ProxySQL:
The same procedure should be run on both mysql1 and mysql2 hosts:
### Install ProxySQL packages and initialise ProxySQL DB
sudo yum -y install https://github.com/sysown/proxysql/releases/download/v1.2.2/proxysql-1.2.2-1-centos7.x86_64.rpm
sudo service proxysql initial
sudo service proxysql stop

### Edit the ProxySQL configuration file to update username / password
vi /etc/proxysql.cnf

###
admin_variables=
{
        admin_credentials="admin:admin"
        mysql_ifaces="127.0.0.1:6032;/tmp/proxysql_admin.sock"
}
###

### Start ProxySQL
sudo service proxysql start

### Connect to ProxySQL and configure
mysql -P6032 -h127.0.0.1 -uadmin -padmin

### First we create a replication hostgroup:
mysql> INSERT INTO mysql_replication_hostgroups VALUES (10,11,'Standard Replication Groups');

### Add both nodes to the hostgroup 11 (ProxySQL will automatically put the writer node in hostgroup 10)
mysql> INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.120',11,3306,1000);
mysql> INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.121',11,3306,1000);

### Save server configuration
mysql> LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;

### Add query rules for RW split
mysql> INSERT INTO mysql_query_rules (active, match_pattern, destination_hostgroup, cache_ttl, apply) VALUES (1, '^SELECT .* FOR UPDATE', 10, NULL, 1);
mysql> INSERT INTO mysql_query_rules (active, match_pattern, destination_hostgroup, cache_ttl, apply) VALUES (1, '^SELECT .*', 11, NULL, 1);
mysql> LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;

### Finally configure ProxySQL user and save configuration
mysql> INSERT INTO mysql_users (username,password,active,default_hostgroup,default_schema) VALUES ('root','123',1,10,'test');
mysql> LOAD MYSQL USERS TO RUNTIME; SAVE MYSQL USERS TO DISK;
mysql> EXIT;
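To confirm the configuration took effect, you can check the runtime server list on the admin interface (a sketch, reusing the admin credentials defined above). Once ProxySQL’s monitor module has detected which backend has read_only=0, that server is moved to writer hostgroup 10:

mysql -P6032 -h127.0.0.1 -uadmin -padmin -e"SELECT hostgroup_id, hostname, port, status FROM runtime_mysql_servers;"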
MySQL Configuration:
We also need to perform one configuration step on the MySQL servers in order to create a user for ProxySQL to monitor the instances:
### ProxySQL's monitor user on the master MySQL server (default username and password is monitor/monitor)
mysql -h192.168.1.120 -P3306 -uroot -p123 -e"GRANT USAGE ON *.* TO monitor@'%' IDENTIFIED BY 'monitor';"
We can view the configuration of the monitor user on the ProxySQL host by checking the global variables on the admin interface:
mysql> SHOW VARIABLES LIKE 'mysql-monitor%';
+----------------------------------------+---------+
| Variable_name                          | Value   |
+----------------------------------------+---------+
| mysql-monitor_enabled                  | true    |
| mysql-monitor_connect_timeout          | 200     |
| mysql-monitor_ping_max_failures        | 3       |
| mysql-monitor_ping_timeout             | 100     |
| mysql-monitor_replication_lag_interval | 10000   |
| mysql-monitor_replication_lag_timeout  | 1000    |
| mysql-monitor_username                 | monitor |
| mysql-monitor_password                 | monitor |
| mysql-monitor_query_interval           | 60000   |
| mysql-monitor_query_timeout            | 100     |
| mysql-monitor_slave_lag_when_null      | 60      |
| mysql-monitor_writer_is_also_reader    | true    |
| mysql-monitor_history                  | 600000  |
| mysql-monitor_connect_interval         | 60000   |
| mysql-monitor_ping_interval            | 10000   |
| mysql-monitor_read_only_interval       | 1500    |
| mysql-monitor_read_only_timeout        | 500     |
+----------------------------------------+---------+
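If you prefer not to rely on the default monitor/monitor credentials, they can be changed through the admin interface. A sketch, with “monitor_user”/“s3cret” as purely illustrative values:

mysql> UPDATE global_variables SET variable_value='monitor_user' WHERE variable_name='mysql-monitor_username';
mysql> UPDATE global_variables SET variable_value='s3cret' WHERE variable_name='mysql-monitor_password';
mysql> LOAD MYSQL VARIABLES TO RUNTIME; SAVE MYSQL VARIABLES TO DISK;

Remember to create a matching user and grant on the MySQL backends if you do so.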
Testing Consul:
Now that Consul and ProxySQL are configured we can do some tests from the “appserver”. First, we’ll verify that the hosts we’ve added are both reporting [OK] on our DNS requests:
$ dig @127.0.0.1 -p 53 proxysql.service.consul

; <<>> DiG 9.9.4-RedHat-9.9.4-29.el7_2.3 <<>> @127.0.0.1 -p 53 proxysql.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9975
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;proxysql.service.consul.       IN      A

;; ANSWER SECTION:
proxysql.service.consul. 0      IN      A       192.168.1.121
proxysql.service.consul. 0      IN      A       192.168.1.120

;; Query time: 1 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Mon Sep 05 19:32:12 UTC 2016
;; MSG SIZE  rcvd: 158
As you can see from the output above, DNS is reporting both 192.168.1.120 and 192.168.1.121 as available for the ProxySQL service. As soon as the ProxySQL check fails on a node, that node will no longer be returned in the output above.
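To see this in action (a sketch; whether the node is actually excluded depends on the check reporting the node as critical, as discussed in the comments below), stop ProxySQL on one node and query DNS again:

### On mysql2:
sudo service proxysql stop

### On appserver - only the healthy instance should be returned:
dig @127.0.0.1 proxysql.service.consul +short
192.168.1.120

### Restore the service afterwards:
sudo service proxysql start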
We can also view the status of our cluster and agents through the Consul Web GUI which runs on port 8500 of all the Consul servers in this configuration (e.g. http://192.168.1.120:8500/):
Testing ProxySQL:
So now that we have this configured, we can do some basic tests to verify that ProxySQL is load balancing our connections:
[percona@appserver consul.d]$ mysql -hproxysql.service.consul -e"select @@hostname"
+--------------------+
| @@hostname         |
+--------------------+
| mysql1.localdomain |
+--------------------+
[percona@appserver consul.d]$ mysql -hproxysql.service.consul -e"select @@hostname"
+--------------------+
| @@hostname         |
+--------------------+
| mysql2.localdomain |
+--------------------+
Perfect! We’re ready to use the hostname “proxysql.service.consul” to connect to our MySQL instances through a round-robin load balancing and HA proxy solution. If one of the two ProxySQL instances fails, we’ll continue communicating with the database through the other. Of course, this configuration is not limited to just two hosts, so feel free to add as many as you need. Be aware that in this example the two hosts’ replication hierarchy is managed by MHA in order to allow for master/slave promotion. When an automatic or manual failover is performed with MHA, ProxySQL automatically detects the change in replication topology and redirects writes to the newly promoted master instance.
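To verify where ProxySQL actually sent those queries, the stats tables on the admin interface can be inspected (a sketch; the column list is trimmed for readability). The round-robin distribution seen above should be reflected in the Queries counter per backend:

mysql -P6032 -h127.0.0.1 -uadmin -padmin -e"SELECT hostgroup, srv_host, Queries FROM stats_mysql_connection_pool;"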
To make this configuration more durable, we encourage you to create a more intelligent Consul check – i.e., one that verifies more than just the availability of the MySQL service (for example, by selecting some data from a table). It is also recommended to fine-tune the interval of the check to suit the requirements of your application.
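As a minimal sketch of such a check (the script path, the test.heartbeat table and the credentials are illustrative, not part of the setup above), something like the following could be referenced from the “script” field in proxysql.json in place of the plain mysqladmin ping. Consul treats exit code 0 as passing, 1 as warning and anything else as critical, so failures should return 2 to ensure the node is removed from DNS:

#!/bin/bash
### /usr/local/bin/check_proxysql.sh (hypothetical path)
### Usage: check_proxysql.sh <proxysql_host>
HOST=${1:-127.0.0.1}

### Run a real query through ProxySQL; exit 2 (critical) on any failure
### so Consul removes the node from DNS rather than marking it a warning.
if mysql -h"$HOST" -P6033 -uroot -p123 -N -e"SELECT 1 FROM test.heartbeat LIMIT 1" >/dev/null 2>&1; then
    exit 0
else
    exit 2
fi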
I’m planning to use the “read only failover” and “read-write split” support that comes embedded with the JDBC Connector/J (directly from the app servers, to reduce moving parts). Is there any major advantage to using a proxy instead?
AFAIK Connector/J is replication aware but requires you to specify operations that are RO [conn.setReadOnly(true);] and RW [conn.setReadOnly(false);] – so that they can be routed to the suitable backend, each of which runs on a different port.
With ProxySQL you use one port for all queries (both RW and RO) and the rules will handle routing the queries to a suitable MySQL server – so you don’t need to call setReadOnly() at all. This leads to less, simpler code, since you don’t need to write separate code paths for RO / RW operations.
Thanks for your reply. You’re right about the app modification… 🙁
In my case I don’t have more than 4-8 app servers. Is it OK to put a ProxySQL (or MaxScale) on each of them to keep things simple?
BTW, why is there no such thing as a Percona Proxy? It would be great to know that there is a proxy directly supported by the Percona expert community.
You can install your proxy solution directly on your application server; however, in that case load balancing will not be calculated against the full workload of all application servers, but rather per server. I assume in this configuration you would only set up one proxy instance per application server, so you should think about what will happen in case of failure, i.e. you should find a way to automatically fail over to another proxy instance or remove that application server from the application server pool. It really depends on how your end-to-end solution is designed.
Would it be possible to use Consul’s built-in DNS instead of installing dnsmasq?
Nice post, but there is something that should be improved in the Consul layout:
“mysqladmin ping” returns status code 1 (i.e. a warning) if the ProxySQL instance cannot be reached. Consul, by default, does not exclude warning hosts from its DNS responses (at least in recent versions), so this configuration can be a problem if ProxySQL goes down, because the node will not be removed from DNS.
So, ensure you return a status code other than 0 or 1 (which would then be treated as critical) if ProxySQL is not accessible/down. Alternatively, you can tell Consul in its configuration file to only publish passing checks: https://www.consul.io/docs/agent/options.html#only_passing
Thanks for the post again.