
Load balancing with ProxySQL

ProxySQL is a high-performance SQL proxy. It runs as a daemon watched by a monitoring process, which restarts the daemon in case of a crash to minimize downtime.

The daemon accepts incoming traffic from MySQL clients and forwards it to backend MySQL servers.

The proxy is designed to run continuously without needing to be restarted. Most configuration can be done at runtime using queries similar to SQL statements. These include runtime parameters, server grouping, and traffic-related settings.
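
For example, once you are connected to the admin interface (described in the Manual Configuration section below), runtime parameters can be inspected with ordinary SELECT statements. A minimal sketch, assuming the default admin port:

mysql@proxysql> SELECT variable_name, variable_value
                FROM global_variables
                WHERE variable_name LIKE 'mysql-monitor%';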

ProxySQL supports Percona XtraDB Cluster node status checks using the scheduler.

Note

For more information about ProxySQL, see the ProxySQL documentation.

Installing ProxySQL

ProxySQL is available from the Percona software repositories. If you used these repositories to install Percona XtraDB Cluster or any other Percona software, install ProxySQL with the corresponding command:

  • On Debian or Ubuntu:
$ sudo apt-get install proxysql
  • On Red Hat Enterprise Linux or CentOS:
$ sudo yum install proxysql

Alternatively, you can download packages from https://www.percona.com/downloads/proxysql/.

To start ProxySQL, run the following command:

$ sudo service proxysql start

Warning

Do not run ProxySQL with default credentials in production.

Before starting the proxysql service, you can change the defaults in the /etc/proxysql.cnf file by setting the admin_credentials variable. For more information, see Global Variables.
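
For example, the relevant section of /etc/proxysql.cnf might look like the following (a hedged sketch: the exact defaults vary between ProxySQL versions, and the password shown is illustrative):

admin_variables=
{
    admin_credentials="admin:StrongPa55"
    mysql_ifaces="127.0.0.1:6032"
}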

Automatic Configuration

The proxysql package from Percona includes the proxysql-admin tool for configuring Percona XtraDB Cluster nodes with ProxySQL.

Note

The proxysql-admin tool can only be used for initial ProxySQL configuration.

To view usage information, run proxysql-admin without any options:

Usage: proxysql-admin [ options ]

Options:
  --config-file                      Read login credentials from a configuration file (overrides any login credentials specified on the command line)
  --quick-demo                       Setup a quick demo with no authentication
  --proxysql-username=user_name      Username for connecting to the ProxySQL service
  --proxysql-password[=password]     Password for connecting to the ProxySQL service
  --proxysql-port=port_num           Port Nr. for connecting to the ProxySQL service
  --proxysql-hostname=host_name      Hostname for connecting to the ProxySQL service
  --cluster-username=user_name       Username for connecting to the Percona XtraDB Cluster node
  --cluster-password[=password]      Password for connecting to the Percona XtraDB Cluster node
  --cluster-port=port_num            Port Nr. for connecting to the Percona XtraDB Cluster node
  --cluster-hostname=host_name       Hostname for connecting to the Percona XtraDB Cluster node
  --cluster-app-username=user_name   Application username for connecting to the Percona XtraDB Cluster node
  --cluster-app-password[=password]  Application password for connecting to the Percona XtraDB Cluster node
  --monitor-username=user_name       Username for monitoring Percona XtraDB Cluster nodes through ProxySQL
  --monitor-password[=password]      Password for monitoring Percona XtraDB Cluster nodes through ProxySQL
  --enable, -e                       Auto-configure Percona XtraDB Cluster nodes into ProxySQL
  --disable, -d                      Remove any Percona XtraDB Cluster configurations from ProxySQL
  --node-check-interval=3000         Interval for monitoring node checker script (in milliseconds)
  --mode=[loadbal|singlewrite]       ProxySQL read/write configuration mode, currently supporting: 'loadbal' and 'singlewrite' (the default) modes
  --write-node=host_name:port        Writer node to accept write statements. This option is supported only when using --mode=singlewrite
                                     Can accept comma delimited list with the first listed being the highest priority.
  --include-slaves=host_name:port    Add specified slave node(s) to ProxySQL, these nodes will go into the reader hostgroup and will only be put into
                                     the writer hostgroup if all cluster nodes are down.  Slaves must be read only.  Can accept comma delimited list.
                                     If this is used make sure 'read_only=1' is in the slave's my.cnf
  --adduser                          Adds the Percona XtraDB Cluster application user to the ProxySQL database
  --syncusers                        Sync user accounts currently configured in MySQL to ProxySQL (deletes ProxySQL users not in MySQL)
  --version, -v                      Print version info

Note

Before using the proxysql-admin tool, ensure that ProxySQL and the Percona XtraDB Cluster nodes you want to add are running. For security purposes, make sure to change the default user settings in the ProxySQL configuration file.
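
For example, a quick pre-flight check might look like this (a hedged sketch; the node address is illustrative):

$ sudo service proxysql status
$ mysql -h 192.168.70.61 -P 3306 -u root -p \
    -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"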

Preparing Configuration File

It is recommended to provide connection and authentication information in the proxysql-admin configuration file (/etc/proxysql-admin.cnf), instead of specifying it on the command line.

By default, the configuration file contains the following:

# proxysql admin interface credentials.
export PROXYSQL_USERNAME="admin"
export PROXYSQL_PASSWORD="admin"
export PROXYSQL_HOSTNAME="localhost"
export PROXYSQL_PORT="6032"

# PXC admin credentials for connecting to pxc-cluster-node.
export CLUSTER_USERNAME="admin"
export CLUSTER_PASSWORD="admin"
export CLUSTER_HOSTNAME="localhost"
export CLUSTER_PORT="3306"

# proxysql monitoring user. proxysql admin script will create this user in pxc to monitor pxc-nodes.
export MONITOR_USERNAME="monitor"
export MONITOR_PASSWORD="monit0r"

# Application user to connect to pxc-node through proxysql
export CLUSTER_APP_USERNAME="proxysql_user"
export CLUSTER_APP_PASSWORD="passw0rd"

# ProxySQL read/write hostgroup
export WRITE_HOSTGROUP_ID="10"
export READ_HOSTGROUP_ID="11"

# ProxySQL read/write configuration mode.
export MODE="singlewrite"

# ProxySQL Cluster Node Priority File
export HOST_PRIORITY_FILE="/var/lib/proxysql/host_priority.conf"

Note

It is recommended to change default ProxySQL credentials before running ProxySQL in production. Make sure that you provide ProxySQL location and credentials in the configuration file.

Provide superuser credentials for one of the Percona XtraDB Cluster nodes. The proxysql-admin script will detect other nodes in the cluster automatically.
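
For example, the cluster-related lines in /etc/proxysql-admin.cnf might be adjusted as follows (the values are illustrative):

# PXC admin credentials for connecting to pxc-cluster-node.
export CLUSTER_USERNAME="root"
export CLUSTER_PASSWORD="r00tPa55"
export CLUSTER_HOSTNAME="192.168.70.61"
export CLUSTER_PORT="3306"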

Enabling ProxySQL

Use the --enable option to automatically configure a Percona XtraDB Cluster node into ProxySQL. The proxysql-admin tool will do the following:

  • Add Percona XtraDB Cluster node into the ProxySQL database
  • Add the proxysql_galera_checker monitoring script into the ProxySQL scheduler table if it is not available. This script checks for desynced nodes and temporarily deactivates them. It also calls the proxysql_node_monitor script, which checks cluster node membership and re-configures ProxySQL if the membership changes.
  • Create two new Percona XtraDB Cluster users with the USAGE privilege on the node and add them to ProxySQL configuration, if they are not already configured. One user is for monitoring cluster nodes, and the other one is for communicating with the cluster.

Note

Make sure to use superuser credentials from the cluster to set up the default users.

The following example shows how to add a Percona XtraDB Cluster node using the ProxySQL configuration file with all necessary connection and authentication information:

$ proxysql-admin --config-file=/etc/proxysql-admin.cnf --enable

This script will assist with configuring ProxySQL (currently only Percona XtraDB cluster in combination with ProxySQL is supported)

ProxySQL read/write configuration mode is singlewrite

Configuring ProxySQL monitoring user..
ProxySQL monitor username as per command line/config-file is monitor

User 'monitor'@'127.%' has been added with USAGE privilege

Configuring the Percona XtraDB Cluster application user to connect through ProxySQL
Percona XtraDB Cluster application username as per command line/config-file is proxysql_user

Percona XtraDB Cluster application user 'proxysql_user'@'127.%' has been added with the USAGE privilege, please make sure to grant appropriate privileges

Adding the Percona XtraDB Cluster server nodes to ProxySQL

Configuring singlewrite mode with the following nodes designated as priority order:

Write node info
+-----------+--------------+-------+---------+---------+
| hostname  | hostgroup_id | port  | weight  | comment |
+-----------+--------------+-------+---------+---------+
| 127.0.0.1 | 10           | 25000 | 1000000 | WRITE   |
+-----------+--------------+-------+---------+---------+

ProxySQL configuration completed!

ProxySQL has been successfully configured to use with Percona XtraDB Cluster

You can use the following login credentials to connect your application through ProxySQL

mysql --user=proxysql_user -p --host=localhost --port=6033 --protocol=tcp

Disabling ProxySQL

Use the --disable option to remove a Percona XtraDB Cluster node’s configuration from ProxySQL. The proxysql-admin tool will do the following:

  • Remove Percona XtraDB Cluster node from the ProxySQL database
  • Stop the ProxySQL monitoring daemon for this node

The following example shows how to disable ProxySQL and remove the Percona XtraDB Cluster node:

$ proxysql-admin --config-file=/etc/proxysql-admin.cnf --disable
ProxySQL configuration removed!

Additional Options

The following extra options can be used:

  • --adduser

    Add the Percona XtraDB Cluster application user to the ProxySQL database.

    $ proxysql-admin --config-file=/etc/proxysql-admin.cnf --adduser
    
    Adding Percona XtraDB Cluster application user to ProxySQL database
    Enter Percona XtraDB Cluster application user name: root
    Enter Percona XtraDB Cluster application user password:
    Added Percona XtraDB Cluster application user to ProxySQL database!
    
  • --syncusers

    Sync user accounts currently configured in Percona XtraDB Cluster to the ProxySQL database, except for users with no password and the admin user.

    Note

    This option also deletes users that are not in Percona XtraDB Cluster from the ProxySQL database.
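
    For example, an invocation using the same configuration file as above might look like this (output omitted):

    $ proxysql-admin --config-file=/etc/proxysql-admin.cnf --syncusers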

  • --galera-check-interval

    Set the interval (in milliseconds) at which the proxysql_galera_checker monitoring script runs when enabling ProxySQL for the cluster.

    $ proxysql-admin --config-file=/etc/proxysql-admin.cnf \
       --galera-check-interval=5000 --enable
    
  • --mode

    Set the read/write mode for Percona XtraDB Cluster nodes in the ProxySQL database, based on the hostgroup. Supported modes are loadbal and singlewrite.

    • singlewrite is the default mode. It accepts writes on only one node (based on the information you provide in --write-node); the remaining nodes accept only read statements.

      Use the --write-node option to control which hosts have priority to act as the writer at any given time. When used, it creates a configuration file, which is by default /var/lib/proxysql/host_priority.conf (configurable in proxysql-admin.cnf). Servers can be separated by commas, for example:

      10.0.0.51:3306, 10.0.0.52:3306
      

      In the previous example, 10.0.0.51:3306 will be in the writer hostgroup if it is ONLINE. If it is OFFLINE, then 10.0.0.52:3306 will go into the writer hostgroup. And if that node also goes down, then one of the remaining nodes will be randomly chosen for the writer hostgroup. The configuration file is deleted when --disable is used.

      singlewrite mode setup:

      $ sudo grep "MODE" /etc/proxysql-admin.cnf
      export MODE="singlewrite"
      $ sudo proxysql-admin --config-file=/etc/proxysql-admin.cnf --write-node=127.0.0.1:25000 --enable
      ProxySQL read/write configuration mode is singlewrite
      [..]
      ProxySQL configuration completed!
      

      To check the configuration you can run:

      mysql> SELECT hostgroup_id,hostname,port,status,comment FROM mysql_servers;
      +--------------+-----------+-------+--------+---------+
      | hostgroup_id | hostname  | port  | status | comment |
      +--------------+-----------+-------+--------+---------+
      | 11           | 127.0.0.1 | 25400 | ONLINE | READ    |
      | 10           | 127.0.0.1 | 25000 | ONLINE | WRITE   |
      | 11           | 127.0.0.1 | 25100 | ONLINE | READ    |
      | 11           | 127.0.0.1 | 25200 | ONLINE | READ    |
      | 11           | 127.0.0.1 | 25300 | ONLINE | READ    |
      +--------------+-----------+-------+--------+---------+
      5 rows in set (0.00 sec)
      
    • The loadbal mode uses a set of evenly weighted read/write nodes.

      loadbal mode setup:

      $ sudo proxysql-admin --config-file=/etc/proxysql-admin.cnf --mode=loadbal --enable
      
      This script will assist with configuring ProxySQL (currently only Percona XtraDB cluster in combination with ProxySQL is supported)
      
      ProxySQL read/write configuration mode is loadbal
      [..]
      ProxySQL has been successfully configured to use with Percona XtraDB Cluster
      
      You can use the following login credentials to connect your application through ProxySQL
      
      mysql --user=proxysql_user --password=*****  --host=127.0.0.1 --port=6033 --protocol=tcp
      
      mysql> SELECT hostgroup_id,hostname,port,status,comment FROM mysql_servers;
      +--------------+-----------+-------+--------+-----------+
      | hostgroup_id | hostname  | port  | status | comment   |
      +--------------+-----------+-------+--------+-----------+
      | 10           | 127.0.0.1 | 25400 | ONLINE | READWRITE |
      | 10           | 127.0.0.1 | 25000 | ONLINE | READWRITE |
      | 10           | 127.0.0.1 | 25100 | ONLINE | READWRITE |
      | 10           | 127.0.0.1 | 25200 | ONLINE | READWRITE |
      | 10           | 127.0.0.1 | 25300 | ONLINE | READWRITE |
      +--------------+-----------+-------+--------+-----------+
      5 rows in set (0.01 sec)
      
  • --quick-demo

    This option sets up a dummy ProxySQL configuration.

    $ sudo proxysql-admin --enable --quick-demo
    
    You have selected the dry test run mode. WARNING: This will create a test user (with all privileges) in the Percona XtraDB Cluster & ProxySQL installations.
    
    You may want to delete this user after you complete your testing!
    
    Would you like to proceed with '--quick-demo' [y/n] ? y
    
    Setting up proxysql test configuration!
    
    Do you want to use the default ProxySQL credentials (admin:admin:6032:127.0.0.1) [y/n] ? y
    Do you want to use the default Percona XtraDB Cluster credentials (root::3306:127.0.0.1) [y/n] ? n
    
    Enter the Percona XtraDB Cluster username (super user): root
    Enter the Percona XtraDB Cluster user password:
    Enter the Percona XtraDB Cluster port: 25100
    Enter the Percona XtraDB Cluster hostname: localhost
    
    
    ProxySQL read/write configuration mode is singlewrite
    
    Configuring ProxySQL monitoring user..
    
    User 'monitor'@'127.%' has been added with USAGE privilege
    
    Configuring the Percona XtraDB Cluster application user to connect through ProxySQL
    
    Percona XtraDB Cluster application user 'pxc_test_user'@'127.%' has been added with ALL privileges, this user is created for testing purposes
    
    Adding the Percona XtraDB Cluster server nodes to ProxySQL
    
    ProxySQL configuration completed!
    
    ProxySQL has been successfully configured to use with Percona XtraDB Cluster
    
    You can use the following login credentials to connect your application through ProxySQL
    
    mysql --user=pxc_test_user  --host=127.0.0.1 --port=6033 --protocol=tcp
    

Manual Configuration

This tutorial describes how to configure ProxySQL with three Percona XtraDB Cluster nodes, using a fourth node for ProxySQL itself:

Node    Host Name  IP address
Node 1  pxc1       192.168.70.61
Node 2  pxc2       192.168.70.62
Node 3  pxc3       192.168.70.63
Node 4  proxysql   192.168.70.64

ProxySQL can be configured either using the /etc/proxysql.cnf file or through the admin interface. Using the admin interface is preferable, because it allows you to change the configuration dynamically (without having to restart the proxy).

To connect to the ProxySQL admin interface, you need a mysql client. You can either connect to the admin interface from Percona XtraDB Cluster nodes that already have the mysql client installed (Node 1, Node 2, Node 3) or install the client on Node 4 and connect locally. For this tutorial, install the Percona XtraDB Cluster client on Node 4:

  • On Debian or Ubuntu:

    root@proxysql:~# apt-get install percona-xtradb-cluster-client-5.7
    
  • On Red Hat Enterprise Linux or CentOS:

    [root@proxysql ~]# yum install Percona-XtraDB-Cluster-client-57
    

To connect to the admin interface, use the credentials, host name and port specified in the global variables.

Warning

Do not use default credentials in production!

The following example shows how to connect to the ProxySQL admin interface with default credentials:

root@proxysql:~# mysql -u admin -padmin -h 127.0.0.1 -P 6032

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.1.30 (ProxySQL Admin Module)

Copyright (c) 2009-2016 Percona LLC and/or its affiliates
Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql@proxysql>

To see the ProxySQL databases and tables use the following commands:

mysql@proxysql> SHOW DATABASES;
+-----+---------+-------------------------------+
| seq | name    | file                          |
+-----+---------+-------------------------------+
| 0   | main    |                               |
| 2   | disk    | /var/lib/proxysql/proxysql.db |
| 3   | stats   |                               |
| 4   | monitor |                               |
+-----+---------+-------------------------------+
4 rows in set (0.00 sec)
mysql@proxysql> SHOW TABLES;
+--------------------------------------+
| tables                               |
+--------------------------------------+
| global_variables                     |
| mysql_collations                     |
| mysql_query_rules                    |
| mysql_replication_hostgroups         |
| mysql_servers                        |
| mysql_users                          |
| runtime_global_variables             |
| runtime_mysql_query_rules            |
| runtime_mysql_replication_hostgroups |
| runtime_mysql_servers                |
| runtime_scheduler                    |
| scheduler                            |
+--------------------------------------+
12 rows in set (0.00 sec)

For more information about admin databases and tables, see Admin Tables

Note

ProxySQL has 3 areas where the configuration can reside:

  • MEMORY (your current working place)
  • RUNTIME (the production settings)
  • DISK (durable configuration, saved inside an SQLITE database)

When you change a parameter, you change it in the MEMORY area. This is done by design so that you can test the changes before pushing them to production (RUNTIME) or saving them to disk.
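
A minimal sketch of that workflow, using one of the monitoring variables as an example (the new value is illustrative):

mysql@proxysql> UPDATE global_variables SET variable_value='2000'
              WHERE variable_name='mysql-monitor_ping_interval';
mysql@proxysql> LOAD MYSQL VARIABLES TO RUNTIME;
mysql@proxysql> SAVE MYSQL VARIABLES TO DISK;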

Adding cluster nodes to ProxySQL

To configure the backend Percona XtraDB Cluster nodes in ProxySQL, insert corresponding records into the mysql_servers table.

Note

ProxySQL uses the concept of hostgroups to group cluster nodes. This enables you to balance the load in a cluster by routing different types of traffic to different groups. There are many ways you can configure hostgroups (for example, master and slaves, read and write load, etc.), and every node can be a member of multiple hostgroups.

This example adds three Percona XtraDB Cluster nodes to the default hostgroup (0), which receives both write and read traffic:

mysql@proxysql> INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (0,'192.168.70.61',3306);
mysql@proxysql> INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (0,'192.168.70.62',3306);
mysql@proxysql> INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (0,'192.168.70.63',3306);

To see the nodes:

mysql@proxysql> SELECT * FROM mysql_servers;

+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 0            | 192.168.70.61 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 0            | 192.168.70.62 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 0            | 192.168.70.63 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
3 rows in set (0.00 sec)
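
As an aside, the same nodes could be split into a writer hostgroup and a reader hostgroup instead. The following is a hedged sketch (not used in the rest of this tutorial, and assuming an empty mysql_servers table) that routes SELECT statements to the readers with query rules:

mysql@proxysql> INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (0,'192.168.70.61',3306);
mysql@proxysql> INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (1,'192.168.70.62',3306);
mysql@proxysql> INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (1,'192.168.70.63',3306);
mysql@proxysql> INSERT INTO mysql_query_rules(rule_id, active, match_digest, destination_hostgroup, apply)
              VALUES (1, 1, '^SELECT.*FOR UPDATE', 0, 1);
mysql@proxysql> INSERT INTO mysql_query_rules(rule_id, active, match_digest, destination_hostgroup, apply)
              VALUES (2, 1, '^SELECT', 1, 1);
mysql@proxysql> LOAD MYSQL SERVERS TO RUNTIME;
mysql@proxysql> LOAD MYSQL QUERY RULES TO RUNTIME;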

Creating ProxySQL Monitoring User

To enable monitoring of Percona XtraDB Cluster nodes in ProxySQL, create a user with USAGE privilege on any node in the cluster and configure the user in ProxySQL.

The following example shows how to add a monitoring user on Node 2:

mysql@pxc2> CREATE USER 'proxysql'@'%' IDENTIFIED BY 'ProxySQLPa55';
mysql@pxc2> GRANT USAGE ON *.* TO 'proxysql'@'%';

The following example shows how to configure this user on the ProxySQL node:

mysql@proxysql> UPDATE global_variables SET variable_value='proxysql'
              WHERE variable_name='mysql-monitor_username';
mysql@proxysql> UPDATE global_variables SET variable_value='ProxySQLPa55'
              WHERE variable_name='mysql-monitor_password';

To load this configuration at runtime, issue a LOAD command. To save these changes to disk (ensuring that they persist after ProxySQL shuts down), issue a SAVE command.

mysql@proxysql> LOAD MYSQL VARIABLES TO RUNTIME;
mysql@proxysql> SAVE MYSQL VARIABLES TO DISK;

To ensure that monitoring is enabled, check the monitoring logs:

mysql@proxysql> SELECT * FROM monitor.mysql_server_connect_log ORDER BY time_start_us DESC LIMIT 6;
+---------------+------+------------------+----------------------+---------------+
| hostname      | port | time_start_us    | connect_success_time | connect_error |
+---------------+------+------------------+----------------------+---------------+
| 192.168.70.61 | 3306 | 1469635762434625 | 1695                 | NULL          |
| 192.168.70.62 | 3306 | 1469635762434625 | 1779                 | NULL          |
| 192.168.70.63 | 3306 | 1469635762434625 | 1627                 | NULL          |
| 192.168.70.61 | 3306 | 1469635642434517 | 1557                 | NULL          |
| 192.168.70.62 | 3306 | 1469635642434517 | 2737                 | NULL          |
| 192.168.70.63 | 3306 | 1469635642434517 | 1447                 | NULL          |
+---------------+------+------------------+----------------------+---------------+
6 rows in set (0.00 sec)
mysql> SELECT * FROM monitor.mysql_server_ping_log ORDER BY time_start_us DESC LIMIT 6;
+---------------+------+------------------+-------------------+------------+
| hostname      | port | time_start_us    | ping_success_time | ping_error |
+---------------+------+------------------+-------------------+------------+
| 192.168.70.61 | 3306 | 1469635762416190 | 948               | NULL       |
| 192.168.70.62 | 3306 | 1469635762416190 | 803               | NULL       |
| 192.168.70.63 | 3306 | 1469635762416190 | 711               | NULL       |
| 192.168.70.61 | 3306 | 1469635702416062 | 783               | NULL       |
| 192.168.70.62 | 3306 | 1469635702416062 | 631               | NULL       |
| 192.168.70.63 | 3306 | 1469635702416062 | 542               | NULL       |
+---------------+------+------------------+-------------------+------------+
6 rows in set (0.00 sec)

The previous examples show that ProxySQL is able to connect to and ping the nodes you added.

To enable monitoring of these nodes, load them at runtime:

mysql@proxysql> LOAD MYSQL SERVERS TO RUNTIME;

Creating ProxySQL Client User

ProxySQL must have users that can access backend nodes to manage connections.

To add a user, insert credentials into mysql_users table:

mysql@proxysql> INSERT INTO mysql_users (username,password) VALUES ('sbuser','sbpass');
Query OK, 1 row affected (0.00 sec)

Note

ProxySQL currently doesn’t encrypt passwords.

Load the user into runtime space and save these changes to disk (ensuring that they persist after ProxySQL shuts down):

mysql@proxysql> LOAD MYSQL USERS TO RUNTIME;
mysql@proxysql> SAVE MYSQL USERS TO DISK;

To confirm that the user has been set up correctly, you can try to log in:

root@proxysql:~# mysql -u sbuser -psbpass -h 127.0.0.1 -P 6033

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1491
Server version: 5.1.30 (ProxySQL)

Copyright (c) 2009-2016 Percona LLC and/or its affiliates
Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

To provide read/write access to the cluster for ProxySQL, add this user on one of the Percona XtraDB Cluster nodes:

mysql@pxc3> CREATE USER 'sbuser'@'192.168.70.64' IDENTIFIED BY 'sbpass';
Query OK, 0 rows affected (0.01 sec)

mysql@pxc3> GRANT ALL ON *.* TO 'sbuser'@'192.168.70.64';
Query OK, 0 rows affected (0.00 sec)

Adding Galera Support

By default, ProxySQL cannot detect a node that is not in the Synced state. To monitor the status of Percona XtraDB Cluster nodes, use the proxysql_galera_checker script, which is located at /usr/bin/proxysql_galera_checker.

To use this script, load it into ProxySQL Scheduler.

The following example shows how you can load the script for default ProxySQL configuration:

mysql@proxysql> INSERT INTO scheduler(id,active,interval_ms,filename,arg1,arg2,arg3,arg4,arg5)
  VALUES (1,'1','10000','/usr/bin/proxysql_galera_checker','0','-1','0','1',
  '/var/lib/proxysql/proxysql_galera_checker.log');

This Scheduler script accepts the following arguments:

Argument  Name                 Required  Description
arg1      HOSTGROUP WRITERS    YES       The ID of the hostgroup with nodes that will serve writes.
arg2      HOSTGROUP READERS    NO        The ID of the hostgroup with nodes that will serve reads.
arg3      NUMBER WRITERS       NO        Maximum number of nodes from the writer hostgroup that can be marked ONLINE. If set to 0, all nodes can be marked ONLINE.
arg4      WRITERS ARE READERS  NO        If set to 1 (default), ONLINE nodes in the writer hostgroup will prefer not to be ONLINE in the reader hostgroup.
arg5      LOG FILE             NO        File where node state checks and changes are logged (verbose).

To load the scheduler changes into the runtime space:

mysql@proxysql> LOAD SCHEDULER TO RUNTIME;

To make sure that the script has been loaded, check the runtime_scheduler table:

mysql@proxysql> SELECT * FROM runtime_scheduler\G
*************************** 1. row ***************************
         id: 1
interval_ms: 10000
   filename: /usr/bin/proxysql_galera_checker
       arg1: 0
       arg2: -1
       arg3: 0
       arg4: 1
       arg5: /var/lib/proxysql/proxysql_galera_checker.log
1 row in set (0.00 sec)

To check the status of available nodes, run the following command:

mysql@proxysql> SELECT hostgroup_id,hostname,port,status FROM mysql_servers;
+--------------+---------------+------+--------+
| hostgroup_id | hostname      | port | status |
+--------------+---------------+------+--------+
| 0            | 192.168.70.61 | 3306 | ONLINE |
| 0            | 192.168.70.62 | 3306 | ONLINE |
| 0            | 192.168.70.63 | 3306 | ONLINE |
+--------------+---------------+------+--------+
3 rows in set (0.00 sec)

Note

Each node can have the following status:

  • ONLINE: backend node is fully operational.
  • SHUNNED: backend node is temporarily taken out of use, because either too many connection errors happened in a short time, or replication lag exceeded the allowed threshold.
  • OFFLINE_SOFT: new incoming connections aren’t accepted, while existing connections are kept until they become inactive. In other words, connections are kept in use until the current transaction is completed. This allows you to gracefully detach a backend node (see the sketch after this list).
  • OFFLINE_HARD: existing connections are dropped, and new incoming connections aren’t accepted. This is equivalent to deleting the node from a hostgroup, or temporarily taking it out of the hostgroup for maintenance.

Testing Cluster with sysbench

You can install sysbench from Percona software repositories:

  • For Debian or Ubuntu:
root@proxysql:~# apt-get install sysbench
  • For Red Hat Enterprise Linux or CentOS
[root@proxysql ~]# yum install sysbench

Note

sysbench requires ProxySQL client user credentials that you created in Creating ProxySQL Client User.

  1. Create the database that will be used for testing on one of the Percona XtraDB Cluster nodes:

    mysql@pxc1> CREATE DATABASE sbtest;
    
  2. Populate the table with data for the benchmark on the ProxySQL node:

    root@proxysql:~# sysbench --report-interval=5 --num-threads=4 \
      --num-requests=0 --max-time=20 \
      --test=/usr/share/doc/sysbench/tests/db/oltp.lua \
      --mysql-user='sbuser' --mysql-password='sbpass' \
      --oltp-table-size=10000 --mysql-host=127.0.0.1 --mysql-port=6033 \
      prepare
    
  3. Run the benchmark on the ProxySQL node:

    root@proxysql:~# sysbench --report-interval=5 --num-threads=4 \
      --num-requests=0 --max-time=20 \
      --test=/usr/share/doc/sysbench/tests/db/oltp.lua \
      --mysql-user='sbuser' --mysql-password='sbpass' \
      --oltp-table-size=10000 --mysql-host=127.0.0.1 --mysql-port=6033 \
      run
    

ProxySQL stores collected data in the stats schema:

mysql@proxysql> SHOW TABLES FROM stats;
+--------------------------------+
| tables                         |
+--------------------------------+
| stats_mysql_query_rules        |
| stats_mysql_commands_counters  |
| stats_mysql_processlist        |
| stats_mysql_connection_pool    |
| stats_mysql_query_digest       |
| stats_mysql_query_digest_reset |
| stats_mysql_global             |
+--------------------------------+

For example, to see the number of commands that were run on the cluster:

mysql@proxysql> SELECT * FROM stats_mysql_commands_counters;
+-------------------+---------------+-----------+-----------+-----------+---------+---------+----------+----------+-----------+-----------+--------+--------+---------+----------+
| Command           | Total_Time_us | Total_cnt | cnt_100us | cnt_500us | cnt_1ms | cnt_5ms | cnt_10ms | cnt_50ms | cnt_100ms | cnt_500ms | cnt_1s | cnt_5s | cnt_10s | cnt_INFs |
+-------------------+---------------+-----------+-----------+-----------+---------+---------+----------+----------+-----------+-----------+--------+--------+---------+----------+
| ALTER_TABLE       | 0             | 0         | 0         | 0         | 0       | 0       | 0        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |
| ANALYZE_TABLE     | 0             | 0         | 0         | 0         | 0       | 0       | 0        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |
| BEGIN             | 2212625       | 3686      | 55        | 2162      | 899     | 569     | 1        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |
| CHANGE_MASTER     | 0             | 0         | 0         | 0         | 0       | 0       | 0        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |
| COMMIT            | 21522591      | 3628      | 0         | 0         | 0       | 1765    | 1590     | 272      | 1         | 0         | 0      | 0      | 0       | 0        |
| CREATE_DATABASE   | 0             | 0         | 0         | 0         | 0       | 0       | 0        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |
| CREATE_INDEX      | 0             | 0         | 0         | 0         | 0       | 0       | 0        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |
...
| DELETE            | 2904130       | 3670      | 35        | 1546      | 1346    | 723     | 19       | 1        | 0         | 0         | 0      | 0      | 0       | 0        |
| DESCRIBE          | 0             | 0         | 0         | 0         | 0       | 0       | 0        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |
...
| INSERT            | 19531649      | 3660      | 39        | 1588      | 1292    | 723     | 12       | 2        | 0         | 1         | 0      | 1      | 2       | 0        |
...
| SELECT            | 35049794      | 51605     | 501       | 26180     | 16606   | 8241    | 70       | 3        | 4         | 0         | 0      | 0      | 0       | 0        |
| SELECT_FOR_UPDATE | 0             | 0         | 0         | 0         | 0       | 0       | 0        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |
...
| UPDATE            | 6402302       | 7367      | 75        | 2503      | 3020    | 1743    | 23       | 3        | 0         | 0         | 0      | 0      | 0       | 0        |
| USE               | 0             | 0         | 0         | 0         | 0       | 0       | 0        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |
| SHOW              | 19691         | 2         | 0         | 0         | 0       | 0       | 1        | 1        | 0         | 0         | 0      | 0      | 0       | 0        |
| UNKNOWN           | 0             | 0         | 0         | 0         | 0       | 0       | 0        | 0        | 0         | 0         | 0      | 0      | 0       | 0        |
+-------------------+---------------+-----------+-----------+-----------+---------+---------+----------+----------+-----------+-----------+--------+--------+---------+----------+
45 rows in set (0.00 sec)
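
To see how the load was spread across the backend nodes, you can also query the connection pool statistics (a hedged sketch; column names can vary slightly between ProxySQL versions):

mysql@proxysql> SELECT hostgroup, srv_host, srv_port, status, Queries
                FROM stats_mysql_connection_pool;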

Automatic Fail-over

ProxySQL will automatically detect if a node is not available or not synced with the cluster.

You can check the status of all available nodes by running:

mysql@proxysql> SELECT hostgroup_id,hostname,port,status FROM mysql_servers;
+--------------+---------------+------+--------+
| hostgroup_id | hostname      | port | status |
+--------------+---------------+------+--------+
| 0            | 192.168.70.61 | 3306 | ONLINE |
| 0            | 192.168.70.62 | 3306 | ONLINE |
| 0            | 192.168.70.63 | 3306 | ONLINE |
+--------------+---------------+------+--------+
3 rows in set (0.00 sec)

To test the problem detection and fail-over mechanism, shut down Node 3:

root@pxc3:~# service mysql stop

ProxySQL will detect that the node is down and update its status to OFFLINE_SOFT:

mysql@proxysql> SELECT hostgroup_id,hostname,port,status FROM mysql_servers;
+--------------+---------------+------+--------------+
| hostgroup_id | hostname      | port | status       |
+--------------+---------------+------+--------------+
| 0            | 192.168.70.61 | 3306 | ONLINE       |
| 0            | 192.168.70.62 | 3306 | ONLINE       |
| 0            | 192.168.70.63 | 3306 | OFFLINE_SOFT |
+--------------+---------------+------+--------------+
3 rows in set (0.00 sec)

Now start Node 3 again:

root@pxc3:~# service mysql start

The script will detect the change and mark the node as ONLINE:

mysql@proxysql> SELECT hostgroup_id,hostname,port,status FROM mysql_servers;
+--------------+---------------+------+--------+
| hostgroup_id | hostname      | port | status |
+--------------+---------------+------+--------+
| 0            | 192.168.70.61 | 3306 | ONLINE |
| 0            | 192.168.70.62 | 3306 | ONLINE |
| 0            | 192.168.70.63 | 3306 | ONLINE |
+--------------+---------------+------+--------+
3 rows in set (0.00 sec)

Assisted Maintenance Mode

Usually, to take a node down for maintenance, you need to identify that node, update its status in ProxySQL to OFFLINE_SOFT, wait for ProxySQL to divert traffic from this node, and then initiate the shutdown or perform maintenance tasks. Percona XtraDB Cluster includes a special maintenance mode for nodes that enables you to take a node down without adjusting ProxySQL manually. The mode is controlled using the pxc_maint_mode variable, which is monitored by ProxySQL and can be set to one of the following values:

  • DISABLED: This is the default state that tells ProxySQL to route traffic to the node as usual.

  • SHUTDOWN: This state is set automatically when you initiate node shutdown.

    You may need to shut down a node when upgrading the OS, adding resources, changing hardware parts, relocating the server, etc.

    When you initiate node shutdown, Percona XtraDB Cluster does not send the signal immediately. Instead, it changes the state to pxc_maint_mode=SHUTDOWN and waits for a predefined period (10 seconds by default). When ProxySQL detects that the mode is set to SHUTDOWN, it changes the status of this node to OFFLINE_SOFT, which stops creation of new connections for the node. After the transition period, any long-running transactions that are still active are aborted.

  • MAINTENANCE: You can change to this state if you need to perform maintenance on a node without shutting it down.

    You may need to isolate the node for some time, so that it does not receive traffic from ProxySQL while you resize the buffer pool, truncate the undo log, defragment or check disks, etc.

    To do this, manually set pxc_maint_mode=MAINTENANCE. Control is not returned to the user for a predefined period (10 seconds by default). When ProxySQL detects that the mode is set to MAINTENANCE, it stops routing traffic to the node. Once control is returned, you can perform maintenance activity.

    Note

    Any data changes will still be replicated across the cluster.

    After you finish maintenance, set the mode back to DISABLED. When ProxySQL detects this, it starts routing traffic to the node again.

You can increase the transition period using the pxc_maint_transition_period variable to accommodate long-running transactions. If the period is long enough for all transactions to finish, there should hardly be any disruption in the cluster workload.

During the transition period, the node continues to receive existing write-set replication traffic. ProxySQL avoids opening new connections and starting transactions, but users can still open connections to monitor status.

Note

If you increase the transition period, the packaging script may interpret it as a server stall.
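
A minimal sketch of the manual maintenance flow on the node itself (the transition period value is illustrative):

mysql@pxc3> SET GLOBAL pxc_maint_transition_period=30;
mysql@pxc3> SET GLOBAL pxc_maint_mode=MAINTENANCE;

After you finish the maintenance work, set pxc_maint_mode back to DISABLED so that ProxySQL resumes routing traffic to the node.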
