Some time ago, I wrote about the new cluster feature of ProxySQL. In that post, we were working with three nodes; now we’ll work with even more! If you’ve installed one ProxySQL per application instance and would like to scale up to more nodes, then this post is for you. If this is new to you, though, read my earlier post first for more context.
Check the image below to understand the structure of “one ProxySQL per application”. This means you have ProxySQL and your application (Java, PHP, Apache server, etc.) installed in the same VM (virtual machine).
Having taken a look at that, you probably have a few questions, such as:
- What happens if you have 20 nodes synced and you now need to add 100 or more nodes?
- How can I sync the new nodes without introducing errors?
Don’t be scared; it’s a simple process.
Remember, there are only four tables that can be synced across the cluster. These tables are:
- mysql_query_rules
- mysql_servers
- mysql_users
- proxysql_servers
From a new ProxySQL cluster installation
After a fresh installation, all four of those tables are empty. That means that if we configure the instance as a new node in a cluster, all rows in those tables will be copied over from the existing nodes almost instantly.
Generally the installation is straightforward.
- Download the latest proxysql package from https://www.percona.com/downloads/proxysql/
- Install ProxySQL using rpm or dpkg depending on your OS
- Install the MySQL client package
- Start the proxysql service
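For instance, on a CentOS/RHEL 7 host, the steps above might look roughly like this (the package file name and the client package are illustrative; adjust them for your distribution and for the release you downloaded):

# Install the ProxySQL package downloaded from the Percona site
sudo yum install -y ./proxysql-*.x86_64.rpm

# Install a MySQL client (on CentOS 7 the "mariadb" package provides /usr/bin/mysql)
sudo yum install -y mariadb

# Start the service (older packages may use: sudo service proxysql start)
sudo systemctl start proxysql
sudo systemctl status proxysql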
Now ProxySQL is up and all of its configuration is at the default settings.
Connect to the ProxySQL console:
mysql -uadmin -padmin -h127.0.0.1 -P6032 --prompt='\u (\d)>'
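Once connected, you can confirm that the four syncable tables mentioned earlier are still empty on the new node; this is just a sanity check:

select 'mysql_query_rules' as tbl, count(*) as row_count from mysql_query_rules
union all select 'mysql_servers', count(*) from mysql_servers
union all select 'mysql_users', count(*) from mysql_users
union all select 'proxysql_servers', count(*) from proxysql_servers;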
Now there are two steps remaining:
- Configure global_variables table
- Configure proxysql_servers table
How to configure the global_variables table
Below is an example showing the minimal parameters to set; you can change the usernames and passwords according to your needs.
You can copy the cluster and monitoring credentials from the existing cluster by running the following query on one of its current nodes:
select * from global_variables
where variable_name in ('admin-admin_credentials', 'admin-cluster_password', 'mysql-monitor_password', 'admin-cluster_username', 'mysql-monitor_username');
You can then update the parameters on the new node by using this as a template:
update global_variables set variable_value='<REPLACE-HERE>' where variable_name='admin-admin_credentials';
update global_variables set variable_value='<REPLACE-HERE>' where variable_name='admin-cluster_username';
update global_variables set variable_value='<REPLACE-HERE>' where variable_name='admin-cluster_password';
update global_variables set variable_value='<REPLACE-HERE>' where variable_name='mysql-monitor_username';
update global_variables set variable_value='<REPLACE-HERE>' where variable_name='mysql-monitor_password';
update global_variables set variable_value=1000 where variable_name='admin-cluster_check_interval_ms';
update global_variables set variable_value=10 where variable_name='admin-cluster_check_status_frequency';
update global_variables set variable_value='true' where variable_name='admin-cluster_mysql_query_rules_save_to_disk';
update global_variables set variable_value='true' where variable_name='admin-cluster_mysql_servers_save_to_disk';
update global_variables set variable_value='true' where variable_name='admin-cluster_mysql_users_save_to_disk';
update global_variables set variable_value='true' where variable_name='admin-cluster_proxysql_servers_save_to_disk';
update global_variables set variable_value=3 where variable_name='admin-cluster_mysql_query_rules_diffs_before_sync';
update global_variables set variable_value=3 where variable_name='admin-cluster_mysql_servers_diffs_before_sync';
update global_variables set variable_value=3 where variable_name='admin-cluster_mysql_users_diffs_before_sync';
update global_variables set variable_value=3 where variable_name='admin-cluster_proxysql_servers_diffs_before_sync';
load admin variables to RUNTIME;
save admin variables to disk;
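After loading the variables to runtime, a quick way to confirm that the cluster settings took effect is to query the runtime_global_variables table:

select variable_name, variable_value
from runtime_global_variables
where variable_name like 'admin-cluster%'
order by variable_name;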
Configure the “proxysql_servers” table
At this point, keep in mind that you will need to INSERT into this table the IPs of all the ProxySQL nodes, including the new one.
Why? Because this table on the new node will carry the newest epoch time, so its contents will be propagated to the rest of the nodes and overwrite whatever their proxysql_servers tables listed before.
In our example, let’s assume the IP of the new node is 10.0.1.3 (i.e. node 3, below):
INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES ('10.0.1.1',6032,0,'p1');
INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES ('10.0.1.2',6032,0,'p2');
INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES ('10.0.1.3',6032,0,'p3');
LOAD PROXYSQL SERVERS TO RUNTIME;
SAVE PROXYSQL SERVERS TO DISK;
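Once the servers are loaded to runtime, it’s worth confirming that the new node sees all of its peers:

select hostname, port, weight, comment from runtime_proxysql_servers;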
If you already have many ProxySQL servers in the cluster, you can use mysqldump instead, as this will help speed up the process.
If that’s the case, you need to find the most up-to-date node and use mysqldump to export the data from its proxysql_servers table.
How would you find this node? There are a few stats tables in ProxySQL, and in this case we can use two of these to help identify the right node.
SELECT stats_proxysql_servers_checksums.hostname,
       stats_proxysql_servers_metrics.Uptime_s,
       stats_proxysql_servers_checksums.port,
       stats_proxysql_servers_checksums.name,
       stats_proxysql_servers_checksums.version,
       FROM_UNIXTIME(stats_proxysql_servers_checksums.epoch) epoch,
       stats_proxysql_servers_checksums.checksum,
       stats_proxysql_servers_checksums.diff_check
FROM stats_proxysql_servers_metrics
JOIN stats_proxysql_servers_checksums
  ON stats_proxysql_servers_checksums.hostname = stats_proxysql_servers_metrics.hostname
WHERE stats_proxysql_servers_metrics.Uptime_s > 0
ORDER BY epoch DESC;
Here’s an example of the output:
+------------+----------+------+-------------------+---------+---------------------+--------------------+------------+
| hostname   | Uptime_s | port | name              | version | epoch               | checksum           | diff_check |
+------------+----------+------+-------------------+---------+---------------------+--------------------+------------+
| 10.0.1.1   | 1190     | 6032 | mysql_users       | 2       | 2019-04-04 12:04:21 | 0xDB07AC7A298E1690 | 0          |
| 10.0.1.2   | 2210     | 6032 | mysql_users       | 2       | 2019-04-04 12:04:18 | 0xDB07AC7A298E1690 | 0          |
| 10.0.1.1   | 1190     | 6032 | mysql_query_rules | 1       | 2019-04-04 12:00:07 | 0xBC63D734643857A5 | 0          |
| 10.0.1.1   | 1190     | 6032 | mysql_servers     | 1       | 2019-04-04 12:00:07 | 0x0000000000000000 | 0          |
| 10.0.1.1   | 1190     | 6032 | proxysql_servers  | 1       | 2019-04-04 12:00:07 | 0x233638C097DE6190 | 0          |
| 10.0.1.2   | 2210     | 6032 | mysql_query_rules | 1       | 2019-04-04 11:43:13 | 0xBC63D734643857A5 | 0          |
| 10.0.1.2   | 2210     | 6032 | mysql_servers     | 1       | 2019-04-04 11:43:13 | 0x0000000000000000 | 0          |
| 10.0.1.2   | 2210     | 6032 | proxysql_servers  | 1       | 2019-04-04 11:43:13 | 0x233638C097DE6190 | 0          |
| 10.0.1.2   | 2210     | 6032 | admin_variables   | 0       | 1970-01-01 00:00:00 |                    | 0          |
| 10.0.1.2   | 2210     | 6032 | mysql_variables   | 0       | 1970-01-01 00:00:00 |                    | 0          |
| 10.0.1.1   | 1190     | 6032 | admin_variables   | 0       | 1970-01-01 00:00:00 |                    | 0          |
| 10.0.1.1   | 1190     | 6032 | mysql_variables   | 0       | 1970-01-01 00:00:00 |                    | 0          |
+------------+----------+------+-------------------+---------+---------------------+--------------------+------------+
For each table, we can see different versions, each one corresponding to a change to that table. In this case, we need to look for the latest epoch time for the “proxysql_servers” table. In the example above, we can see that the server with the IP address 10.0.1.1 has the latest version. We can now run the next command to get a backup of all the IP data from the current cluster.
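A minimal sketch of that export, run locally on the most up-to-date node (10.0.1.1 in this example) with the default admin credentials; depending on your mysqldump version you may need additional --skip options:

# Run on the most up-to-date node to dump only the proxysql_servers rows
mysqldump -h127.0.0.1 -P6032 -uadmin -padmin \
  --no-create-info --skip-triggers --skip-lock-tables --skip-add-locks --compact \
  main proxysql_servers > proxysql_servers.sql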