In this blog post, we’ll discuss read-write split routing in MaxScale.
The two previous posts showed how to set up high availability (HA) with MaxScale using asynchronous replication, and how to monitor that replication.
Now let’s focus on the routing module performing read-write splits.
This is our current configuration:
```ini
[Splitter Service]
type=service
router=readwritesplit
servers=percona1, percona2
max_slave_replication_lag=30
user=maxscale
passwd=264D375EC77998F13F4D0EC739AABAD4
```
This router module is designed to spread the read queries across multiple servers (slaves by default), and send the write queries to a single server: the master.
This module is intended to work with asynchronous master-slave replication, but it also works with Galera replication if you plan to write to a single node.
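For context, a service like this is exposed to clients through a listener section. The following is only a sketch: the listener name and port 4006 are illustrative and not part of the setup above.

```ini
# Hypothetical listener wiring for the service above;
# the section name and port are illustrative.
[Splitter Listener]
type=listener
service=Splitter Service
protocol=MySQLClient
port=4006
```

Clients then connect to MaxScale on that port instead of connecting to a backend directly.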
So what is routed to the Master?
Example:
Let’s connect to MaxScale with the MySQL interactive client:
```sql
mysql> select @@hostname;
+------------+
| @@hostname |
+------------+
| percona2   |
+------------+

mysql> start transaction;
mysql> select @@hostname;
+------------+
| @@hostname |
+------------+
| percona1   |
+------------+

mysql> rollback;
mysql> select @@hostname;
+------------+
| @@hostname |
+------------+
| percona2   |
+------------+
```
Now let’s try with a READ ONLY transaction:
```sql
mysql> start transaction read only;
mysql> select @@hostname;
+------------+
| @@hostname |
+------------+
| percona1   |
+------------+
```
As we can see, MaxScale doesn’t support READ ONLY transactions: it treats them like any other transaction, which means they are routed to the master as writes.
We’ve already seen the optional max_slave_replication_lag parameter, but there are some others; one of them, use_sql_variables_in, accepts the values master and all (the latter being the default).
Now let’s play with weightby. With the following configuration, we will target 10% of the reads at percona2 and 90% at percona3:
```ini
[Splitter Service]
type=service
router=readwritesplit
servers=percona1, percona2, percona3
weightby=myweight
...

[percona2]
type=server
address=192.168.90.3
port=3306
protocol=MySQLBackend
myweight=1

[percona3]
type=server
address=192.168.90.4
port=3306
protocol=MySQLBackend
myweight=9
```
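To double-check the arithmetic, each slave's target share is simply its myweight value divided by the sum of all the weights. A quick shell sketch, using the values from the configuration above:

```shell
# Derive the target percentages from the myweight values above:
# each server's share is its weight divided by the sum of all weights.
w2=1   # percona2 myweight
w3=9   # percona3 myweight
total=$(( w2 + w3 ))

echo "percona2 $(( 100 * w2 / total ))%"   # 10%
echo "percona3 $(( 100 * w3 / total ))%"   # 90%
```

These are exactly the Target % figures we expect to see when we inspect the service.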
We restart MaxScale, and verify the settings of the service:
```shell
# maxadmin -pmariadb show service "Splitter Service"
Service 0x363b460
    Service:                               Splitter Service
    Router:                                readwritesplit (0x7fe7f1e88540)
    State:                                 Started
    Number of router sessions:             0
    Current no. of router sessions:        0
    Number of queries forwarded:           0
    Number of queries forwarded to master: 0
    Number of queries forwarded to slave:  0
    Number of queries forwarded to all:    0
    Master/Slave percentage:               0.00%
    Connection distribution based on myweight server parameter.
        Server               Target %  Connections  Operations
                                       Global  Router
        percona3              90.0%     0       0       0
        percona2              10.0%     0       0       0
        percona1             100.0%     0       0       0
    Started:              Wed Feb 24 22:39:27 2016
    Root user access:     Disabled
    Backend databases
        192.168.90.4:3306  Protocol: MySQLBackend
        192.168.90.3:3306  Protocol: MySQLBackend
        192.168.90.2:3306  Protocol: MySQLBackend
    Routing weight parameter: myweight
    Users data:           0x36397c0
    Total connections:    2
    Currently connected:  2
    SSL:                  Disabled
```
The target % looks correct, so let’s test it!
```shell
for i in `seq 1 10`;
do mysql -h 192.168.90.5 -BN -umanager -ppercona -e "select @@hostname; select sleep(10)" 2>/dev/null &
done
percona2
percona2
percona2
percona2
percona2
percona2
percona3
percona3
percona3
percona3
```
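A quick way to tally that output is to pipe the hostnames through sort and uniq; here the ten results from the loop above are hard-coded for illustration:

```shell
# Count how many of the ten connections landed on each backend
# (the hostname list is copied from the loop output above).
counts=$(printf '%s\n' \
  percona2 percona2 percona2 percona2 percona2 percona2 \
  percona3 percona3 percona3 percona3 |
  sort | uniq -c | sort -rn)
echo "$counts"   # percona2 was hit 6 times, percona3 only 4 times
```

Nowhere near the 1-to-9 split we configured.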
That doesn’t seem good! Let’s check the service again:
```shell
Service 0x363b460
    Service:                               Splitter Service
    Router:                                readwritesplit (0x7fe7f1e88540)
    State:                                 Started
    Number of router sessions:             10
    Current no. of router sessions:        10
    Number of queries forwarded:           30
    Number of queries forwarded to master: 0
    Number of queries forwarded to slave:  30
    Number of queries forwarded to all:    0
    Master/Slave percentage:               0.00%
    Connection distribution based on myweight server parameter.
        Server               Target %  Connections  Operations
                                       Global  Router
        percona3              90.0%     10      10      5
        percona2              10.0%     10      10      5
        percona1             100.0%     10      10      0
    Started:              Wed Feb 24 22:39:27 2016
    Root user access:     Disabled
    Backend databases
        192.168.90.4:3306  Protocol: MySQLBackend
        192.168.90.3:3306  Protocol: MySQLBackend
        192.168.90.2:3306  Protocol: MySQLBackend
    Routing weight parameter: myweight
    Users data:           0x36397c0
    Total connections:    12
    Currently connected:  12
    SSL:                  Disabled
```
Five operations for each slave... this looks like a plain 50%-50% load balancer.
So that doesn’t work as we expected. Let’s have a look at the other router options; there are several, and the online manual covers them in detail, in particular slave_selection_criteria, whose default value explains the behavior we just observed. But what if we want to use the weight setting and not spread the reads evenly across the slaves?
I found the answer on IRC, in the #maxscale channel on Freenode. Markus Makela (markusjm) explained to me that the default configuration in 1.3.0 is to use all the slaves and to load balance the individual statements across them. So to achieve what we want, we need to add these options to the service section:
```ini
router_options=slave_selection_criteria=LEAST_GLOBAL_CONNECTIONS
max_slave_connections=1
```
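Put together, the amended service section would look something like this (a sketch combining the original configuration with the two new options):

```ini
[Splitter Service]
type=service
router=readwritesplit
servers=percona1, percona2, percona3
weightby=myweight
router_options=slave_selection_criteria=LEAST_GLOBAL_CONNECTIONS
max_slave_connections=1
user=maxscale
passwd=264D375EC77998F13F4D0EC739AABAD4
```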
Let’s test it:
```shell
for i in `seq 1 10`;
do mysql -h 192.168.90.5 -BN -umanager -ppercona -e "select @@hostname; select sleep(10)" 2>/dev/null &
done
percona3
percona3
percona3
percona3
percona3
percona2
percona3
percona3
percona3
percona3

Service 0x1d88560
    Service:                               Splitter Service
    Router:                                readwritesplit (0x7f9c018c3540)
    State:                                 Started
    Number of router sessions:             10
    Current no. of router sessions:        10
    Number of queries forwarded:           30
    Number of queries forwarded to master: 0
    Number of queries forwarded to slave:  30
    Number of queries forwarded to all:    0
    Master/Slave percentage:               0.00%
    Connection distribution based on myweight server parameter.
        Server               Target %  Connections  Operations
                                       Global  Router
        percona3              90.0%      9       9      9
        percona2              10.0%      1       1      1
        percona1             100.0%     10      10      0
    Started:              Wed Feb 24 22:58:21 2016
    Root user access:     Disabled
    Backend databases
        192.168.90.4:3306  Protocol: MySQLBackend
        192.168.90.3:3306  Protocol: MySQLBackend
        192.168.90.2:3306  Protocol: MySQLBackend
    Routing weight parameter: myweight
    Users data:           0x1d8a480
    Total connections:    12
    Currently connected:  12
    SSL:                  Disabled
```
Yes! It worked as expected!
max_slave_connections sets the maximum number of slaves a router session uses at any moment. The default is to use all available slaves. When we set it to 1, we get one master and one slave connection per client, and the connections are balanced according to the server weights. The new mechanism uses statements instead of connections for load balancing (see MXS-588).
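If I recall the ReadWriteSplit documentation correctly, max_slave_connections also accepts a percentage of the available slaves instead of an absolute count; for example:

```ini
# Use half of the available slaves for each session (a sketch;
# check the ReadWriteSplit documentation for the exact semantics).
max_slave_connections=50%
```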
Finally, this routing module also supports routing hints. I’ll cover them in my next MaxScale post.
More information: https://github.com/mariadb-corporation/MaxScale/blob/develop/Documentation/Routers/ReadWriteSplit.md