In this post, the first in a series about MaxScale, I describe how to use MariaDB's MaxScale and the MySQL Utilities with MySQL asynchronous replication.
When we talk about high availability with asynchronous replication, we always think about MHA or PRM. But if we want to transparently use the slave(s) for READs, what can we use?
Description:
To achieve our goals, we will use MaxScale and its read/write splitting router. When using MaxScale and asynchronous replication with MariaDB, it's possible to use MariaDB's replication manager, which is a wonderful tool written in Go. Unfortunately, this tool doesn't support standard MySQL, so to replace it I used Oracle's MySQL Utilities.
Our three nodes are:

- percona1 (192.168.90.2)
- percona2 (192.168.90.3)
- percona3 (192.168.90.4)
It’s mandatory in this solution to use GTID, as it’s the only method supported by the mysql-utilities we are using.
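As a reminder, GTID mode must be enabled on all three nodes before any of this works. A minimal my.cnf fragment of the required settings (illustrative, for MySQL 5.6; the `server_id` value is an example and must be unique per node):

```
[mysqld]
server_id                = 1          # example value: must be unique on each node
log_bin                  = mysql-bin
log_slave_updates        = ON
gtid_mode                = ON
enforce_gtid_consistency = ON
```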
This is the MaxScale configuration:
```
[maxscale]
threads=4

[Splitter Service]
type=service
router=readwritesplit
servers=percona1, percona2
user=maxscale
passwd=264D375EC77998F13F4D0EC739AABAD4

[Splitter Listener]
type=listener
service=Splitter Service
protocol=MySQLClient
port=3306
socket=/tmp/ClusterMaster

[percona1]
type=server
address=192.168.90.2
port=3306
protocol=MySQLBackend

[percona2]
type=server
address=192.168.90.3
port=3306
protocol=MySQLBackend

[percona3]
type=server
address=192.168.90.4
port=3306
protocol=MySQLBackend

[Replication Monitor]
type=monitor
module=mysqlmon
servers=percona1, percona2, percona3
user=maxscale
passwd=264D375EC77998F13F4D0EC739AABAD4
monitor_interval=1000
script=/usr/local/bin/failover.sh
events=master_down

[CLI]
type=service
router=cli

[CLI Listener]
type=listener
service=CLI
protocol=maxscaled
address=localhost
port=6603
```
Note that the Splitter Service contains only the two nodes able to handle the load.
And to perform the failover, in the Replication Monitor section, we define a script to use when the master is down.
That script calls mysqlrpladmin from the mysql-utilities.
In the script we also define the following line to be sure the weak slave will never become a master.
```
never_master=192.168.90.4
```
When everything is set up and running, you should see something like this:
```
# maxadmin -pmariadb list servers
Servers.
-------------------+-----------------+-------+-------------+--------------------
Server             | Address         | Port  | Connections | Status
-------------------+-----------------+-------+-------------+--------------------
percona1           | 192.168.90.2    |  3306 |          15 | Master, Running
percona2           | 192.168.90.3    |  3306 |        1025 | Slave, Running
percona3           | 192.168.90.4    |  3306 |           0 | Slave, Running
-------------------+-----------------+-------+-------------+--------------------
```
So as you can see, Maxscale discovers on its own which server is the master; this doesn’t need to be specified in the configuration.
You can also use the mysqlrpladmin utility to verify the cluster's health:
```
# /usr/bin/mysqlrpladmin --rpl-user=repl:replpercona --master=manager:[email protected]:3306 --slaves=manager:[email protected]:3306,manager:[email protected]:3306 health
# Checking privileges.
#
# Replication Topology Health:
+---------------+-------+---------+--------+------------+---------+
| host          | port  | role    | state  | gtid_mode  | health  |
+---------------+-------+---------+--------+------------+---------+
| 192.168.90.2  | 3306  | MASTER  | UP     | ON         | OK      |
| 192.168.90.3  | 3306  | SLAVE   | UP     | ON         | OK      |
| 192.168.90.4  | 3306  | SLAVE   | UP     | ON         | OK      |
+---------------+-------+---------+--------+------------+---------+
```
Try it with --verbose 😉
When we test with sysbench, and we stop the master, we can see that there are some errors due to disconnects. Also, during the promotion of the new master, sysbench can’t reconnect:
```
[  20s] queue length: 0, concurrency: 0
[  21s] threads: 8, tps: 2.00, reads: 28.00, writes: 8.00, response time: 107.61ms (95%), errors: 0.00, reconnects:  0.00
[  21s] queue length: 0, concurrency: 0
[  22s] threads: 8, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects:  0.00
[  22s] queue length: 0, concurrency: 0
[  23s] threads: 8, tps: 1.00, reads: 14.00, writes: 4.00, response time: 100.85ms (95%), errors: 0.00, reconnects:  0.00
[  23s] queue length: 0, concurrency: 0
[  24s] threads: 8, tps: 0.00, reads: 11.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects:  0.00
[  24s] queue length: 0, concurrency: 1
[  25s] threads: 8, tps: 1.00, reads: 3.00, writes: 4.00, response time: 235.41ms (95%), errors: 0.00, reconnects:  0.00
[  25s] queue length: 0, concurrency: 0
[  26s] threads: 8, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects:  0.00
[  26s] queue length: 0, concurrency: 0
FATAL: unable to connect to MySQL server, aborting...
FATAL: error 1045: failed to create new session
FATAL: unable to connect to MySQL server, aborting...
FATAL: error 1045: failed to create new session
FATAL: unable to connect to MySQL server, aborting...
FATAL: error 1045: failed to create new session
[  27s] threads: 8, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects:  0.00
[  27s] queue length: 0, concurrency: 3
FATAL: unable to connect to MySQL server, aborting...
FATAL: error 1045: failed to create new session
[  28s] threads: 8, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects:  0.00
[  28s] queue length: 0, concurrency: 4
FATAL: unable to connect to MySQL server, aborting...
FATAL: error 1045: failed to create new session
[  29s] threads: 8, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects:  0.00
[  29s] queue length: 0, concurrency: 5
[  30s] threads: 8, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects:  0.00
[  30s] queue length: 0, concurrency: 5
FATAL: unable to connect to MySQL server, aborting...
FATAL: error 1045: failed to create new session
FATAL: unable to connect to MySQL server, aborting...
FATAL: error 1045: failed to create new session
[  31s] threads: 8, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects:  0.00
[  31s] queue length: 0, concurrency: 7
FATAL: unable to connect to MySQL server, aborting...
FATAL: error 1045: failed to create new session
WARNING: Both max-requests and max-time are 0, running endless test
sysbench 0.5:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 8
Target transaction rate: 1/sec
Report intermediate results every 1 second(s)
Random number generator seed is 0 and will be ignored


Threads started!

FATAL: unable to connect to MySQL server, aborting...
FATAL: error 1045: failed to create new session
PANIC: unprotected error in call to Lua API (Failed to connect to the database)
WARNING: Both max-requests and max-time are 0, running endless test
sysbench 0.5:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 8
Target transaction rate: 1/sec
Report intermediate results every 1 second(s)
Random number generator seed is 0 and will be ignored


Threads started!

[   1s] threads: 8, tps: 1.99, reads: 27.93, writes: 7.98, response time: 211.49ms (95%), errors: 0.00, reconnects:  0.00
[   1s] queue length: 0, concurrency: 0
[   2s] threads: 8, tps: 1.00, reads: 14.00, writes: 4.00, response time: 51.01ms (95%), errors: 0.00, reconnects:  0.00
[   2s] queue length: 0, concurrency: 0
[   3s] threads: 8, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects:  0.00
[   3s] queue length: 0, concurrency: 0
[   4s] threads: 8, tps: 1.00, reads: 13.99, writes: 4.00, response time: 80.28ms (95%), errors: 0.00, reconnects:  0.00
```
It took 8 seconds to fail over automatically.
Then we can see the status of the servers:
```
# maxadmin -pmariadb list servers
Servers.
-------------------+-----------------+-------+-------------+--------------------
Server             | Address         | Port  | Connections | Status
-------------------+-----------------+-------+-------------+--------------------
percona1           | 192.168.90.2    |  3306 |          17 | Down
percona2           | 192.168.90.3    |  3306 |        1025 | Master, Running
percona3           | 192.168.90.4    |  3306 |           0 | Slave, Running
-------------------+-----------------+-------+-------------+--------------------
```
If we restart percona1, we now see:
```
# maxadmin -pmariadb list servers
Servers.
-------------------+-----------------+-------+-------------+--------------------
Server             | Address         | Port  | Connections | Status
-------------------+-----------------+-------+-------------+--------------------
percona1           | 192.168.90.2    |  3306 |          17 | Running
percona2           | 192.168.90.3    |  3306 |        1025 | Master, Running
percona3           | 192.168.90.4    |  3306 |           0 | Slave, Running
-------------------+-----------------+-------+-------------+--------------------
```
To add the node again in the asynchronous replication as a slave, we need to use another MySQL utility, mysqlreplicate:
```
# mysqlreplicate --master=manager:[email protected] --slave=manager:[email protected] --rpl-user=repl:replpercona
# master on 192.168.90.3: ... connected.
# slave on 192.168.90.2: ... connected.
# Checking for binary logging on master...
# Setting up replication...
# ...done.
```
This is the source of failover.sh:
```
#!/bin/bash
# failover.sh
# wrapper script to mysqlrpladmin

# user:password pair, must have administrative privileges.
user=manager:percona
# user:password pair, must have REPLICATION SLAVE privileges.
repluser=repl:replpercona

never_master=192.168.90.4

ARGS=$(getopt -o '' --long 'event:,initiator:,nodelist:' -- "$@")

eval set -- "$ARGS"

while true; do
    case "$1" in
        --event)
            shift;
            event=$1
            shift;
            ;;
        --initiator)
            shift;
            initiator=$1
            shift;
            ;;
        --nodelist)
            shift;
            nodelist=$1
            shift;
            ;;
        --)
            shift;
            break;
            ;;
    esac
done

# find the candidates
for i in $(echo $nodelist | sed 's/,/\n/g')
do
    if [[ "$i" =~ "$never_master" ]]
    then
        # do nothing
        echo nothing >/dev/null
    else
        if [[ "$i" =~ "$initiator" ]]
        then
            # do nothing
            echo nothing >/dev/null
        else
            candidates="$candidates,${user}@${i}"
        fi
    fi
    if [[ "$i" =~ "$initiator" ]]
    then
        # do nothing
        echo nothing >/dev/null
    else
        slaves="$slaves,${user}@${i}"
    fi
done


cmd="/usr/bin/mysqlrpladmin --rpl-user=$repluser --slaves=${slaves#?} --candidates=${candidates#?} failover"
# uncomment following line for debug
#echo $cmd >> /tmp/fred
eval $cmd
```
In the next post, we will focus on the monitoring module used in this configuration.