Setting up an InnoDB Cluster requires three key components: Group Replication, MySQL Shell, and MySQL Router. In the previous post, we covered the process of building a 3-node InnoDB Cluster. In this post, we shift our focus to configuring MySQL Router and validating failover functionality.
We are using three InnoDB Cluster nodes along with one dedicated router node, and the corresponding entries need to be updated in the system’s hosts file to ensure proper name resolution across the setup.
# /etc/hosts entries:
172.31.29.239 ArunClusterND1
172.31.30.173 ArunClusterND2
172.31.20.173 ArunClusterND3
172.31.18.180 ArunClusterRouter
The next step in setting up an InnoDB Cluster environment is to install MySQL Router, a lightweight middleware that enables applications to connect transparently to the correct MySQL instance in the cluster. Once installed, MySQL Router must be bootstrapped using the mysqlrouter --bootstrap command with appropriate credentials, such as innoclusteradmin, to automatically configure routing based on the current cluster topology. This process sets up the necessary configuration files and directories to route read and write traffic to the correct nodes, enabling seamless failover and high availability for client connections.
root@ArunClusterRouter:~# mysqlrouter --bootstrap innoclusteradmin@ArunClusterND2:3306 --directory myrouter --user=root
Please enter MySQL password for innoclusteradmin:
# Bootstrapping MySQL Router 8.0.41 ((Ubuntu)) instance at '/root/myrouter'...

Fetching Cluster Members
trying to connect to mysql-server at ArunClusterND1:3306
- Creating account(s) (only those that are needed, if any)
- Verifying account (using it to run SQL queries that would be run by Router)
- Storing account in keyring
- Adjusting permissions of generated files
- Creating configuration /root/myrouter/mysqlrouter.conf

# MySQL Router configured for the InnoDB Cluster 'innodbtestcluster'

After this MySQL Router has been started with the generated configuration

    $ mysqlrouter -c /root/myrouter/mysqlrouter.conf

InnoDB Cluster 'innodbtestcluster' can be reached by connecting to:

## MySQL Classic protocol

- Read/Write Connections: localhost:6446
- Read/Only Connections: localhost:6447

## MySQL X protocol

- Read/Write Connections: localhost:6448
- Read/Only Connections: localhost:6449

root@ArunClusterRouter:~#
The mysqlrouter --bootstrap innoclusteradmin@ArunClusterND2:3306 command initializes and configures MySQL Router for integration with the InnoDB Cluster. During this process, MySQL Router connects to the specified cluster node to retrieve information about the cluster topology and automatically generates a configuration tailored to the cluster's setup. It creates the necessary internal accounts, stores credentials securely using the keyring, adjusts file permissions, and writes a new configuration under the specified directory (myrouter).
This configuration enables the router to provide seamless connection routing: it sets up ports for both the MySQL Classic and X Protocols, distinguishing between read/write and read-only traffic. After successful configuration, the Router is ready to be started using the generated configuration file, allowing client applications to connect to the InnoDB Cluster transparently via the defined local ports.
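On the application side, the port-to-role mapping printed by the bootstrap is usually captured in one place so a connection's intent (read-write vs. read-only) selects the router port. A minimal sketch, assuming this cluster's router host and default ports; router_endpoint is a hypothetical helper, not part of any MySQL library:

```python
# Router endpoints as reported by the bootstrap run above (assumed values).
ROUTER_HOST = "172.31.18.180"

ENDPOINTS = {
    ("classic", "rw"): 6446,  # classic protocol, read/write
    ("classic", "ro"): 6447,  # classic protocol, read-only
    ("x", "rw"): 6448,        # X protocol, read/write
    ("x", "ro"): 6449,        # X protocol, read-only
}

def router_endpoint(intent: str, protocol: str = "classic") -> tuple:
    """Return (host, port) for the given intent ('rw' or 'ro') and protocol."""
    key = (protocol, intent)
    if key not in ENDPOINTS:
        raise ValueError(f"unknown endpoint: protocol={protocol!r} intent={intent!r}")
    return ROUTER_HOST, ENDPOINTS[key]

print(router_endpoint("rw"))                # -> ('172.31.18.180', 6446)
print(router_endpoint("ro", protocol="x"))  # -> ('172.31.18.180', 6449)
```

Keeping this mapping in configuration rather than hard-coding node addresses is exactly what lets the application survive a failover unchanged.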
From the ArunClusterND1 node, we re-verified whether any mysql_router user was present in the cluster after running the mysqlrouter --bootstrap command. As shown in the output below, a user named mysql_router1_fqkfovl was indeed created, confirming that the bootstrap process successfully registered the Router with the cluster.
MySQL ArunClusterND1:3306 ssl SQL > SELECT user, host FROM mysql.user WHERE user LIKE 'mysql_router%';
+-----------------------+------+
| user                  | host |
+-----------------------+------+
| mysql_router1_fqkfovl | %    |
+-----------------------+------+
1 row in set (0.0004 sec)
MySQL ArunClusterND1:3306 ssl SQL >
The MySQL Router configuration, located here at /usr/bin/myrouter/mysqlrouter.conf, is automatically generated during the MySQL Router bootstrap process. The configuration includes several important settings. Under the [DEFAULT] section, it specifies paths for logging, runtime, and data storage, as well as SSL certificates and keys for secure communication. The connection and read timeouts are set to 5 and 30 seconds, respectively, while the dynamic state and keyring files are stored in the data folder. The default SSL mode for the client is set to PREFERRED, and the server SSL mode is configured as AS_CLIENT. Additionally, the configuration defines the user for the router and sets up the metadata cache for the InnoDB Cluster.
The [logger] section defines the logging level as INFO. For routing configurations, the file defines specific settings for both read-write and read-only routing for the MySQL InnoDB cluster, binding to different ports for each routing method. The routing strategies are first-available for primary and round-robin-with-fallback for secondary nodes. The configuration also enables HTTP server access with SSL on port 8443, and includes basic authentication settings in the [http_auth_realm] section. These default configurations are tailored for connecting MySQL Router to a group replication-enabled InnoDB Cluster, allowing it to route traffic efficiently to the appropriate nodes based on their roles.
root@ArunClusterRouter:~# cat /usr/bin/myrouter/mysqlrouter.conf
# File automatically generated during MySQL Router bootstrap
[DEFAULT]
user=root
logging_folder=/usr/bin/myrouter/log
runtime_folder=/usr/bin/myrouter/run
data_folder=/usr/bin/myrouter/data
keyring_path=/usr/bin/myrouter/data/keyring
master_key_path=/usr/bin/myrouter/mysqlrouter.key
connect_timeout=5
read_timeout=30
dynamic_state=/usr/bin/myrouter/data/state.json
client_ssl_cert=/usr/bin/myrouter/data/router-cert.pem
client_ssl_key=/usr/bin/myrouter/data/router-key.pem
client_ssl_mode=PREFERRED
server_ssl_mode=AS_CLIENT
server_ssl_verify=DISABLED
unknown_config_option=error

[logger]
level=INFO

[metadata_cache:bootstrap]
cluster_type=gr
router_id=1
user=mysql_router1_jt41pbi
metadata_cluster=myinnodbcluster
ttl=0.5
auth_cache_ttl=-1
auth_cache_refresh_interval=2
use_gr_notifications=0

[routing:bootstrap_rw]
bind_address=0.0.0.0
bind_port=6446
destinations=metadata-cache://myinnodbcluster/?role=PRIMARY
routing_strategy=first-available
protocol=classic

[routing:bootstrap_ro]
bind_address=0.0.0.0
bind_port=6447
destinations=metadata-cache://myinnodbcluster/?role=SECONDARY
routing_strategy=round-robin-with-fallback
protocol=classic

[routing:bootstrap_x_rw]
bind_address=0.0.0.0
bind_port=6448
destinations=metadata-cache://myinnodbcluster/?role=PRIMARY
routing_strategy=first-available
protocol=x

[routing:bootstrap_x_ro]
bind_address=0.0.0.0
bind_port=6449
destinations=metadata-cache://myinnodbcluster/?role=SECONDARY
routing_strategy=round-robin-with-fallback
protocol=x

[http_server]
port=8443
ssl=1
ssl_cert=/usr/bin/myrouter/data/router-cert.pem
ssl_key=/usr/bin/myrouter/data/router-key.pem

[http_auth_realm:default_auth_realm]
backend=default_auth_backend
method=basic
name=default_realm

[rest_router]
require_realm=default_auth_realm

[rest_api]

[http_auth_backend:default_auth_backend]
backend=metadata_cache

[rest_routing]
require_realm=default_auth_realm

[rest_metadata_cache]
require_realm=default_auth_realm

root@ArunClusterRouter:~#
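The two routing strategies in this configuration can be illustrated with a small simulation. This is only a sketch of the documented behaviour, not Router's actual implementation: first-available always picks the first healthy destination (used for the PRIMARY), while round-robin-with-fallback cycles through healthy secondaries and falls back to the primary when none are available:

```python
from itertools import count

def first_available(destinations, is_up):
    """first-available: always the first healthy destination in the list."""
    for dest in destinations:
        if is_up(dest):
            return dest
    return None

def make_round_robin_with_fallback(secondaries, primaries):
    """round-robin-with-fallback: rotate over healthy secondaries,
    else fall back to a primary."""
    counter = count()
    def pick(is_up):
        healthy = [s for s in secondaries if is_up(s)]
        if healthy:
            return healthy[next(counter) % len(healthy)]
        return first_available(primaries, is_up)
    return pick

# All nodes up: read connections rotate over the two secondaries.
all_up = lambda node: True
pick_ro = make_round_robin_with_fallback(
    ["ArunClusterND2:3306", "ArunClusterND3:3306"], ["ArunClusterND1:3306"])
print([pick_ro(all_up) for _ in range(4)])
# -> alternates ND2, ND3, ND2, ND3

# Both secondaries down: reads fall back to the primary.
only_primary = lambda node: node == "ArunClusterND1:3306"
print(pick_ro(only_primary))
# -> ArunClusterND1:3306
```

The fallback is why the read-only port keeps serving traffic even when every secondary is offline, at the cost of sending reads to the writer.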
To start MySQL Router, the myrouter/start.sh script is used. This wrapper script, generated during bootstrap, launches the MySQL Router service with the generated configuration. When executed, it writes a PID file, such as /root/myrouter/mysqlrouter.pid, containing the process ID of the running MySQL Router instance. Console logging then stops and output continues to the log file, so the Router runs quietly in the background.
Once MySQL Router has started, you can verify that it is running by checking for its process ID using the ps -ef | grep -i mysqlrouter command. This command lists all running processes related to MySQL Router. The output will show the process ID (PID) of MySQL Router, confirming that it is actively running. For example:
root@ArunClusterRouter:~# myrouter/start.sh
root@ArunClusterRouter:~# PID 3864 written to '/root/myrouter/mysqlrouter.pid'
stopping to log to the console. Continuing to log to filelog

root@ArunClusterRouter:~# ps -ef | grep -i mysqlrouter
root        3864       1  6 17:04 pts/1    00:00:05 /usr/bin/mysqlrouter -c /root/myrouter/mysqlrouter.conf
root        3894    1260  0 17:05 pts/1    00:00:00 grep --color=auto -i mysqlrouter
root@ArunClusterRouter:~#
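The PID file written at startup lends itself to a scripted liveness check: read the PID and send signal 0, which tests for process existence without delivering anything. A sketch under the assumption that the PID file lives where start.sh wrote it; pid_running is a hypothetical helper:

```python
import os
from pathlib import Path

def pid_running(pidfile: str) -> bool:
    """Return True if the PID recorded in pidfile refers to a live process."""
    try:
        pid = int(Path(pidfile).read_text().strip())
    except (OSError, ValueError):
        return False          # missing or malformed PID file
    try:
        os.kill(pid, 0)       # signal 0: existence check only
    except ProcessLookupError:
        return False          # stale PID file, process is gone
    except PermissionError:
        return True           # process exists but is owned by another user
    return True

# Demonstrated against our own process rather than a live Router:
demo = Path("demo.pid")
demo.write_text(str(os.getpid()))
print(pid_running("demo.pid"))   # our own PID is certainly alive
demo.unlink()
```

In this setup the real check would be pid_running("/root/myrouter/mysqlrouter.pid") on the router host.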
To ensure that MySQL Router is correctly set up and running, it’s important to verify that it is listening on the expected network ports. You can use netstat or ss commands to check the listening ports.
By running the netstat -tulnp | grep mysqlrouter command, you will get an output that shows the ports MySQL Router is listening on. The output should display entries like the following:
root@ArunClusterRouter:~# netstat -tulnp | grep mysqlrouter
tcp        0      0 0.0.0.0:6449      0.0.0.0:*      LISTEN      3864/mysqlrouter
tcp        0      0 0.0.0.0:6448      0.0.0.0:*      LISTEN      3864/mysqlrouter
tcp        0      0 0.0.0.0:6447      0.0.0.0:*      LISTEN      3864/mysqlrouter
tcp        0      0 0.0.0.0:6446      0.0.0.0:*      LISTEN      3864/mysqlrouter
tcp        0      0 0.0.0.0:8443      0.0.0.0:*      LISTEN      3864/mysqlrouter
root@ArunClusterRouter:~#
This confirms that MySQL Router is listening on the following ports: 6446 (classic protocol, read/write), 6447 (classic protocol, read-only), 6448 (X protocol, read/write), 6449 (X protocol, read-only), and 8443 (HTTP server for the REST interface).
Alternatively, you can use ss -tuln | grep 64 to get a more compact view of the active ports:
root@ArunClusterRouter:~# ss -tuln | grep 64
tcp   LISTEN 0      1024   0.0.0.0:6449   0.0.0.0:*
tcp   LISTEN 0      1024   0.0.0.0:6448   0.0.0.0:*
tcp   LISTEN 0      1024   0.0.0.0:6447   0.0.0.0:*
tcp   LISTEN 0      1024   0.0.0.0:6446   0.0.0.0:*
root@ArunClusterRouter:~#
These ports confirm that MySQL Router is actively listening and ready to handle traffic according to its configuration. With this confirmation, you can be sure that MySQL Router is correctly configured and ready to route traffic to the InnoDB Cluster as intended.
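The same verification can be automated: attempting a TCP connect with a short timeout tells you whether Router is accepting connections on each port, without parsing netstat output. A sketch using the default ports from the bootstrap; is_listening is a hypothetical helper:

```python
import socket

def is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

ROUTER_PORTS = [6446, 6447, 6448, 6449, 8443]

# On the router host itself, all of these should report True:
for port in ROUTER_PORTS:
    print(port, is_listening("127.0.0.1", port))
```

A check like this is a convenient building block for a monitoring probe against the router node.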
After configuring MySQL Router, it’s essential to verify that the traffic is being properly directed according to the desired routing strategy.
First, let’s confirm that the read-write traffic is being properly redirected to the primary node (ArunClusterND1). This is done by connecting to the MySQL Router on port 6446, which is configured for read-write traffic. The query executed shows the current user, hostname, and read-only settings for the connection:
root@ArunClusterRouter:~# mysql -uappuser -p -h172.31.18.180 -P6446 -e 'select user(), @@hostname, @@read_only, @@super_read_only'
Enter password:
+---------------------------+----------------+-------------+-------------------+
| user()                    | @@hostname     | @@read_only | @@super_read_only |
+---------------------------+----------------+-------------+-------------------+
| appuser@ArunClusterRouter | ArunClusterND1 | 0           | 0                 |
+---------------------------+----------------+-------------+-------------------+
root@ArunClusterRouter:~#
As expected, the hostname is ArunClusterND1, indicating that the read-write traffic is being correctly directed to the primary node. Additionally, both @@read_only and @@super_read_only are set to 0, confirming that the node is accepting read-write queries.
Next, let’s verify that the read-only traffic is being redirected to the secondary node (ArunClusterND2). For this, we connect to MySQL Router on port 6447, which is dedicated to read-only traffic. The result of the query executed here confirms that the traffic is correctly routed:
root@ArunClusterRouter:~# mysql -uappuser -p -h172.31.18.180 -P6447 -e 'select user(), @@hostname, @@read_only, @@super_read_only'
Enter password:
+---------------------------+----------------+-------------+-------------------+
| user()                    | @@hostname     | @@read_only | @@super_read_only |
+---------------------------+----------------+-------------+-------------------+
| appuser@ArunClusterRouter | ArunClusterND2 | 1           | 1                 |
+---------------------------+----------------+-------------+-------------------+
root@ArunClusterRouter:~#
Here, the hostname is ArunClusterND2, confirming that the read-only traffic is being properly routed to the secondary node. The @@read_only and @@super_read_only variables are both set to 1, indicating that this node is configured to accept only read queries and cannot perform write operations.
This behaviour demonstrates that the MySQL Router is correctly handling the read-write and read-only traffic, directing it to the appropriate nodes in the cluster.
After setting up the InnoDB Cluster and MySQL Router, it’s crucial to test the failover process to ensure that traffic is properly redirected when the primary node fails. In this test, we simulate a failover by promoting a secondary node to be the new primary and verifying that MySQL Router redirects read-write and read-only traffic accordingly.
To begin, we start by checking the current status of the cluster using MySQL Shell. The cluster is currently operating with ArunClusterND1 as the primary node, and both ArunClusterND2 and ArunClusterND3 as secondary nodes:
root@ArunClusterND1:/home/ubuntu# mysqlsh innoclusteradmin@localhost:3306
Please provide the password for 'innoclusteradmin@localhost:3306': ****
Save password for 'innoclusteradmin@localhost:3306'? [Y]es/[N]o/Ne[v]er (default No): y
MySQL Shell 8.0.36

Copyright (c) 2016, 2023, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type 'help' or '?' for help; 'quit' to exit.
Creating a session to 'innoclusteradmin@localhost:3306'
Fetching schema names for auto-completion... Press ^C to stop.
Your MySQL connection id is 7856
Server version: 8.0.41-0ubuntu0.24.04.1 (Ubuntu)
No default schema selected; type use <schema> to set one.
 MySQL localhost:3306 ssl Py >

 MySQL localhost:3306 ssl Py > cluster = dba.get_cluster()
 MySQL localhost:3306 ssl Py > cluster.status()
{
    "clusterName": "innodbtestcluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "ArunClusterND1:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "ArunClusterND1:3306": {
                "address": "ArunClusterND1:3306",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.41"
            },
            "ArunClusterND2:3306": {
                "address": "ArunClusterND2:3306",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.41"
            },
            "ArunClusterND3:3306": {
                "address": "ArunClusterND3:3306",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.41"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "ArunClusterND1:3306"
}
 MySQL localhost:3306 ssl Py >
At this point, ArunClusterND1:3306 is the primary, and ArunClusterND2 and ArunClusterND3 are secondary members of the cluster.
To test the failover, we manually promote ArunClusterND2 to the primary role with cluster.set_primary_instance(). The command switches ArunClusterND2:3306 from SECONDARY to PRIMARY, while ArunClusterND1 is demoted to SECONDARY:
 MySQL localhost:3306 ssl Py > cluster.set_primary_instance('ArunClusterND2:3306')
Setting instance 'ArunClusterND2:3306' as the primary instance of cluster 'innodbtestcluster'...

Instance 'ArunClusterND3:3306' remains SECONDARY.
Instance 'ArunClusterND2:3306' was switched from SECONDARY to PRIMARY.
Instance 'ArunClusterND1:3306' was switched from PRIMARY to SECONDARY.

The instance 'ArunClusterND2:3306' was successfully elected as primary.
 MySQL localhost:3306 ssl Py >
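Conceptually, the switchover is a state transition over the member list that preserves the single-primary invariant: exactly one member holds the PRIMARY role at any time. A minimal illustration of that invariant (not MySQL Shell's implementation):

```python
def set_primary(members: dict, target: str) -> dict:
    """Return a new role map with target as PRIMARY and every other
    member as SECONDARY (single-primary invariant)."""
    if target not in members:
        raise ValueError(f"{target} is not a cluster member")
    return {name: "PRIMARY" if name == target else "SECONDARY"
            for name in members}

members = {
    "ArunClusterND1:3306": "PRIMARY",
    "ArunClusterND2:3306": "SECONDARY",
    "ArunClusterND3:3306": "SECONDARY",
}
after = set_primary(members, "ArunClusterND2:3306")
print(after)
# -> ND2 is PRIMARY, ND1 and ND3 are SECONDARY
```

The real operation additionally waits for the old primary's transactions to be applied on the target before completing, which is why it is safe to run under load.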
We can also use the following performance_schema query to verify the current status of the Group Replication cluster nodes. This query retrieves important information such as the replication channel name, unique member ID, host, port, role (PRIMARY for the new writer node and SECONDARY for reader nodes), and the current state of each member (ONLINE, OFFLINE, etc.).
 MySQL localhost:3306 ssl SQL > SELECT
        rgm.CHANNEL_NAME,
        rgm.MEMBER_ID,
        rgm.MEMBER_HOST,
        rgm.MEMBER_PORT,
        rgm.MEMBER_ROLE,
        rgm.MEMBER_STATE,
        rgms.COUNT_TRANSACTIONS_IN_QUEUE,
        rgms.COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE,
        rgms.COUNT_TRANSACTIONS_REMOTE_APPLIED
    FROM
        performance_schema.replication_group_members rgm
    JOIN
        performance_schema.replication_group_member_stats rgms
        ON rgm.MEMBER_ID = rgms.MEMBER_ID\G
*************************** 1. row ***************************
                              CHANNEL_NAME: group_replication_applier
                                 MEMBER_ID: 217c72cb-30bf-11f0-8f67-0affef90b599
                               MEMBER_HOST: ArunClusterND1
                               MEMBER_PORT: 3306
                               MEMBER_ROLE: SECONDARY
                              MEMBER_STATE: ONLINE
               COUNT_TRANSACTIONS_IN_QUEUE: 0
COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE: 0
         COUNT_TRANSACTIONS_REMOTE_APPLIED: 476
*************************** 2. row ***************************
                              CHANNEL_NAME: group_replication_applier
                                 MEMBER_ID: 234d92e8-30bf-11f0-8d31-0affea343aeb
                               MEMBER_HOST: ArunClusterND2
                               MEMBER_PORT: 3306
                               MEMBER_ROLE: PRIMARY
                              MEMBER_STATE: ONLINE
               COUNT_TRANSACTIONS_IN_QUEUE: 0
COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE: 0
         COUNT_TRANSACTIONS_REMOTE_APPLIED: 27135
*************************** 3. row ***************************
                              CHANNEL_NAME: group_replication_applier
                                 MEMBER_ID: 24869d66-30bf-11f0-97ed-0afff8203dab
                               MEMBER_HOST: ArunClusterND3
                               MEMBER_PORT: 3306
                               MEMBER_ROLE: SECONDARY
                              MEMBER_STATE: ONLINE
               COUNT_TRANSACTIONS_IN_QUEUE: 0
COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE: 0
         COUNT_TRANSACTIONS_REMOTE_APPLIED: 20
3 rows in set (0.0007 sec)
 MySQL localhost:3306 ssl SQL >
The cluster.status() command in MySQL Shell provides a high-level overview of the InnoDB Cluster, showing each node’s role (PRIMARY or SECONDARY), mode (R/W or R/O), replication lag (abstracted), and overall cluster health. In contrast, the SQL query on replication_group_members and replication_group_member_stats gives a lower-level view with detailed metrics like transaction queues and the number of transactions applied, which helps in diagnosing replication lag and synchronization status. While cluster.status() is ideal for quick health checks, the SQL query is better suited for in-depth troubleshooting.
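Because cluster.status() returns a JSON document, its topology snapshot is easy to consume programmatically, for example in a monitoring check. A sketch using an excerpt of the post-failover status above (summarize is a hypothetical helper; a real check would feed it the live status output):

```python
import json

# Excerpt of the cluster.status() output after the failover.
status_json = """
{
  "clusterName": "innodbtestcluster",
  "defaultReplicaSet": {
    "primary": "ArunClusterND2:3306",
    "status": "OK",
    "topology": {
      "ArunClusterND1:3306": {"memberRole": "SECONDARY", "mode": "R/O", "status": "ONLINE"},
      "ArunClusterND2:3306": {"memberRole": "PRIMARY",   "mode": "R/W", "status": "ONLINE"},
      "ArunClusterND3:3306": {"memberRole": "SECONDARY", "mode": "R/O", "status": "ONLINE"}
    }
  }
}
"""

def summarize(status: dict) -> dict:
    """Reduce a cluster status document to the facts a health check needs."""
    rs = status["defaultReplicaSet"]
    return {
        "primary": rs["primary"],
        "cluster_ok": rs["status"] == "OK",
        "online": sorted(m for m, info in rs["topology"].items()
                         if info["status"] == "ONLINE"),
    }

print(summarize(json.loads(status_json)))
```

The same reduction works on the full document, since the extra keys (ssl, replicationLag, and so on) are simply ignored.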
After promoting ArunClusterND2 to the primary, we verify that MySQL Router is correctly routing traffic to the new primary and secondary nodes.
root@ArunClusterRouter:~# mysql -uappuser -p -h172.31.18.180 -P6446 -e 'select user(), @@hostname, @@read_only, @@super_read_only'
Enter password:
+---------------------------+----------------+-------------+-------------------+
| user()                    | @@hostname     | @@read_only | @@super_read_only |
+---------------------------+----------------+-------------+-------------------+
| appuser@ArunClusterRouter | ArunClusterND2 | 0           | 0                 |
+---------------------------+----------------+-------------+-------------------+
root@ArunClusterRouter:~#
As expected, the hostname is now ArunClusterND2, and the node is accepting read-write queries (@@read_only is 0).
Similarly, read-only traffic should now be redirected to ArunClusterND1, which has been demoted to a secondary:
root@ArunClusterRouter:~# mysql -uappuser -p -h172.31.18.180 -P6447 -e 'select user(), @@hostname, @@read_only, @@super_read_only'
Enter password:
+---------------------------+----------------+-------------+-------------------+
| user()                    | @@hostname     | @@read_only | @@super_read_only |
+---------------------------+----------------+-------------+-------------------+
| appuser@ArunClusterRouter | ArunClusterND1 | 1           | 1                 |
+---------------------------+----------------+-------------+-------------------+
root@ArunClusterRouter:~#
Here, the hostname is ArunClusterND1, confirming that read-only traffic is being correctly routed to the former primary, now secondary node, and is in read-only mode (@@read_only and @@super_read_only are 1).
Finally, we confirm the new cluster status after the failover. The new primary is now ArunClusterND2, while ArunClusterND1 has become a secondary:
 MySQL localhost:3306 ssl Py > cluster.status()
{
    "clusterName": "innodbtestcluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "ArunClusterND2:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "ArunClusterND1:3306": {
                "address": "ArunClusterND1:3306",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.41"
            },
            "ArunClusterND2:3306": {
                "address": "ArunClusterND2:3306",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.41"
            },
            "ArunClusterND3:3306": {
                "address": "ArunClusterND3:3306",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.41"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "ArunClusterND2:3306"
}
 MySQL localhost:3306 ssl Py >
With this successful failover, MySQL Router continues to route traffic according to the new cluster topology, ensuring that both read-write and read-only traffic are directed to the correct nodes.
This concludes the failover test, confirming that MySQL Router is functioning correctly and that the cluster can tolerate a primary node failure.
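One practical note for application developers: at the moment of switchover, Router closes existing connections and routes new ones to the new primary, so failover still surfaces as a dropped connection. A short retry loop around connection establishment absorbs this. The sketch below injects the connect function so it can be exercised without a live cluster; in a real application it would wrap your driver's connect call against the router's read-write port:

```python
import time

def connect_with_retry(connect, attempts=3, delay=0.1):
    """Call connect(); on ConnectionError, retry with a fixed delay."""
    last_error = None
    for _ in range(attempts):
        try:
            return connect()
        except ConnectionError as exc:
            last_error = exc
            time.sleep(delay)
    raise last_error

# Simulated failover: the first attempt fails (old primary gone),
# the second succeeds against the newly promoted primary.
outcomes = iter([ConnectionError("primary unavailable"), "ArunClusterND2:3306"])
def fake_connect():
    result = next(outcomes)
    if isinstance(result, Exception):
        raise result
    return result

print(connect_with_retry(fake_connect, delay=0.01))
# -> ArunClusterND2:3306
```

With Router handling the topology and the application handling the one dropped connection, a primary failure is invisible beyond a brief reconnect.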
For those looking to dive deeper, our Group Replication 101 training course offers a great opportunity. This instructor-led class covers all the topics mentioned above, with hands-on labs and expert guidance. Learn more and sign up at percona.com/training.