Setting up an InnoDB Cluster requires three key components: Group Replication, MySQL Shell, and MySQL Router. In the previous post, we covered the process of building a 3-node InnoDB Cluster. In this post, we shift our focus to configuring MySQL Router and validating failover functionality.
Environment overview
The environment consists of three InnoDB Cluster nodes and one dedicated router node. The corresponding entries must be added to each system’s hosts file so that hostnames resolve correctly across the setup.
# /etc/hosts entries:
172.31.29.239 ArunClusterND1
172.31.30.173 ArunClusterND2
172.31.20.173 ArunClusterND3
172.31.18.180 ArunClusterRouter
Bootstrapping MySQL Router
The next step in setting up an InnoDB Cluster environment is to install MySQL Router, a lightweight middleware that enables applications to connect transparently to the correct MySQL instance in the cluster. Once installed, MySQL Router must be bootstrapped using the mysqlrouter --bootstrap command with appropriate credentials, such as innoclusteradmin, to automatically configure routing based on the current cluster topology. This process sets up the necessary configuration files and directories to route read and write traffic to the correct nodes, enabling seamless failover and high availability for client connections.
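If MySQL Router is not yet present on the dedicated router node, install it before bootstrapping. A minimal sketch for the Ubuntu host used here; the exact package name (mysql-router or mysql-router-community) depends on whether you use the Ubuntu archive or the MySQL APT repository:

# Install MySQL Router on the router node (Ubuntu); the package name may differ per repository.
sudo apt-get update
sudo apt-get install -y mysql-router

# Confirm the installed version before bootstrapping.
mysqlrouter --version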
root@ArunClusterRouter:~# mysqlrouter --bootstrap innoclusteradmin@ArunClusterND2:3306 --directory myrouter --user=root
Please enter MySQL password for innoclusteradmin:
# Bootstrapping MySQL Router 8.0.41 ((Ubuntu)) instance at '/root/myrouter'...

Fetching Cluster Members
trying to connect to mysql-server at ArunClusterND1:3306
- Creating account(s) (only those that are needed, if any)
- Verifying account (using it to run SQL queries that would be run by Router)
- Storing account in keyring
- Adjusting permissions of generated files
- Creating configuration /root/myrouter/mysqlrouter.conf

# MySQL Router configured for the InnoDB Cluster 'innodbtestcluster'

After this MySQL Router has been started with the generated configuration

    $ mysqlrouter -c /root/myrouter/mysqlrouter.conf

InnoDB Cluster 'innodbtestcluster' can be reached by connecting to:

## MySQL Classic protocol

- Read/Write Connections: localhost:6446
- Read/Only Connections:  localhost:6447

## MySQL X protocol

- Read/Write Connections: localhost:6448
- Read/Only Connections:  localhost:6449

root@ArunClusterRouter:~#
MySQL Router configuration details
The mysqlrouter --bootstrap innoclusteradmin@ArunClusterND2:3306 command initializes and configures MySQL Router for integration with the InnoDB Cluster. During this process, MySQL Router connects to the specified cluster node to retrieve information about the cluster topology and automatically generates a configuration tailored to the cluster’s setup. It creates necessary internal accounts, stores credentials securely using the keyring, adjusts file permissions, and writes a new configuration under the specified directory (myrouter).
This configuration enables the router to provide seamless connection routing: it sets up ports for both the MySQL Classic and X Protocols, distinguishing between read/write and read-only traffic. After successful configuration, the Router is ready to be started using the generated configuration file, allowing client applications to connect to the InnoDB Cluster transparently via the defined local ports.
From the ArunClusterND1 node, we verified that a mysql_router user was created in the cluster after running the mysqlrouter --bootstrap command. As shown in the output below, a user named mysql_router1_fqkfovl was indeed created, confirming that the bootstrap process successfully registered the Router with the cluster.
MySQL ArunClusterND1:3306 ssl SQL > SELECT user, host FROM mysql.user WHERE user LIKE 'mysql_router%';
+-----------------------+------+
| user                  | host |
+-----------------------+------+
| mysql_router1_fqkfovl | %    |
+-----------------------+------+
1 row in set (0.0004 sec)
MySQL ArunClusterND1:3306 ssl SQL >
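To see exactly what this bootstrap-generated account is allowed to do, you can also inspect its grants from any cluster member. A small sketch, assuming the account name shown above (the random suffix is environment-specific and will differ in your setup):

# Inspect the grants of the bootstrap-generated Router account.
# The suffix 'fqkfovl' is specific to this environment.
mysql -uinnoclusteradmin -p -hArunClusterND1 -e "SHOW GRANTS FOR 'mysql_router1_fqkfovl'@'%';"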
The MySQL Router configuration, located at /usr/bin/myrouter/mysqlrouter.conf, is automatically generated during the MySQL Router bootstrap process. The configuration includes several important settings. Under the [DEFAULT] section, it specifies paths for logging, runtime, and data storage, as well as SSL certificates and keys for secure communication. The connection and read timeouts are set to 5 and 30 seconds, respectively, while the dynamic state and keyring files are stored in the data folder. The default SSL mode for the client is set to PREFERRED, and the server SSL mode is configured as AS_CLIENT. Additionally, the configuration defines the user the Router runs as and sets up the metadata cache for the InnoDB Cluster.
The [logger] section defines the logging level as INFO. For routing configurations, the file defines specific settings for both read-write and read-only routing for the MySQL InnoDB cluster, binding to different ports for each routing method. The routing strategies are first-available for primary and round-robin-with-fallback for secondary nodes. The configuration also enables HTTP server access with SSL on port 8443, and includes basic authentication settings in the [http_auth_realm] section. These default configurations are tailored for connecting MySQL Router to a group replication-enabled InnoDB Cluster, allowing it to route traffic efficiently to the appropriate nodes based on their roles.
root@ArunClusterRouter:~# cat /usr/bin/myrouter/mysqlrouter.conf
# File automatically generated during MySQL Router bootstrap
[DEFAULT]
user=root
logging_folder=/usr/bin/myrouter/log
runtime_folder=/usr/bin/myrouter/run
data_folder=/usr/bin/myrouter/data
keyring_path=/usr/bin/myrouter/data/keyring
master_key_path=/usr/bin/myrouter/mysqlrouter.key
connect_timeout=5
read_timeout=30
dynamic_state=/usr/bin/myrouter/data/state.json
client_ssl_cert=/usr/bin/myrouter/data/router-cert.pem
client_ssl_key=/usr/bin/myrouter/data/router-key.pem
client_ssl_mode=PREFERRED
server_ssl_mode=AS_CLIENT
server_ssl_verify=DISABLED
unknown_config_option=error

[logger]
level=INFO

[metadata_cache:bootstrap]
cluster_type=gr
router_id=1
user=mysql_router1_jt41pbi
metadata_cluster=myinnodbcluster
ttl=0.5
auth_cache_ttl=-1
auth_cache_refresh_interval=2
use_gr_notifications=0

[routing:bootstrap_rw]
bind_address=0.0.0.0
bind_port=6446
destinations=metadata-cache://myinnodbcluster/?role=PRIMARY
routing_strategy=first-available
protocol=classic

[routing:bootstrap_ro]
bind_address=0.0.0.0
bind_port=6447
destinations=metadata-cache://myinnodbcluster/?role=SECONDARY
routing_strategy=round-robin-with-fallback
protocol=classic

[routing:bootstrap_x_rw]
bind_address=0.0.0.0
bind_port=6448
destinations=metadata-cache://myinnodbcluster/?role=PRIMARY
routing_strategy=first-available
protocol=x

[routing:bootstrap_x_ro]
bind_address=0.0.0.0
bind_port=6449
destinations=metadata-cache://myinnodbcluster/?role=SECONDARY
routing_strategy=round-robin-with-fallback
protocol=x

[http_server]
port=8443
ssl=1
ssl_cert=/usr/bin/myrouter/data/router-cert.pem
ssl_key=/usr/bin/myrouter/data/router-key.pem

[http_auth_realm:default_auth_realm]
backend=default_auth_backend
method=basic
name=default_realm

[rest_router]
require_realm=default_auth_realm

[rest_api]

[http_auth_backend:default_auth_backend]
backend=metadata_cache

[rest_routing]
require_realm=default_auth_realm

[rest_metadata_cache]
require_realm=default_auth_realm

root@ArunClusterRouter:~#
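Alongside the static configuration, the bootstrap also creates the dynamic state file referenced by the dynamic_state option, in which the Router keeps the current list of cluster metadata servers and updates it as the topology changes. A quick check of that file, assuming the directory layout shown above:

# Show the dynamic state file the Router uses to track cluster members.
# The path comes from the dynamic_state setting in the generated configuration.
cat /usr/bin/myrouter/data/state.json

# Optionally pretty-print it if python3 is available.
python3 -m json.tool /usr/bin/myrouter/data/state.json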
Starting MySQL Router and verifying the process
To start MySQL Router, the myrouter/start.sh script is used. This script acts as a custom wrapper to initiate the MySQL Router service with the default configuration settings. When executed, the script generates a PID file, such as /root/myrouter/mysqlrouter.pid, which contains the process ID of the running MySQL Router instance. The log output from MySQL Router is directed to a file, and console logging is stopped to ensure smoother operations in the background.
Once MySQL Router has started, you can verify that it is running by checking for its process ID using the ps -ef | grep -i mysqlrouter command. This command lists all running processes related to MySQL Router. The output will show the process ID (PID) of MySQL Router, confirming that it is actively running. For example:
root@ArunClusterRouter:~# myrouter/start.sh
root@ArunClusterRouter:~# PID 3864 written to '/root/myrouter/mysqlrouter.pid'
stopping to log to the console. Continuing to log to filelog
root@ArunClusterRouter:~# ps -ef | grep -i mysqlrouter
root        3864       1  6 17:04 pts/1    00:00:05 /usr/bin/mysqlrouter -c /root/myrouter/mysqlrouter.conf
root        3894    1260  0 17:05 pts/1    00:00:00 grep --color=auto -i mysqlrouter
root@ArunClusterRouter:~#
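If the Router process does not appear in the list, the log file is the first place to look; the bootstrap directory also contains a matching stop.sh helper. A small sketch, assuming the paths generated by the --directory myrouter bootstrap shown earlier:

# Tail the Router log; the location follows the logging_folder setting of the
# configuration the Router was started with.
tail -n 20 /root/myrouter/log/mysqlrouter.log

# Stop the Router again with the generated helper script when needed.
/root/myrouter/stop.sh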
Validating MySQL Router port bindings
To ensure that MySQL Router is correctly set up and running, it’s important to verify that it is listening on the expected network ports. You can use netstat or ss commands to check the listening ports.
By running the netstat -tulnp | grep mysqlrouter command, you will get an output that shows the ports MySQL Router is listening on. The output should display entries like the following:
root@ArunClusterRouter:~# netstat -tulnp | grep mysqlrouter
tcp        0      0 0.0.0.0:6449        0.0.0.0:*           LISTEN      3864/mysqlrouter
tcp        0      0 0.0.0.0:6448        0.0.0.0:*           LISTEN      3864/mysqlrouter
tcp        0      0 0.0.0.0:6447        0.0.0.0:*           LISTEN      3864/mysqlrouter
tcp        0      0 0.0.0.0:6446        0.0.0.0:*           LISTEN      3864/mysqlrouter
tcp        0      0 0.0.0.0:8443        0.0.0.0:*           LISTEN      3864/mysqlrouter
root@ArunClusterRouter:~#
This confirms that MySQL Router is listening on the following ports:
- 6446 for classic protocol read/write traffic
- 6447 for classic protocol read-only traffic
- 6448 / 6449 for X Protocol read/write and read-only traffic
- 8443 for the HTTP server and REST API interface
Alternatively, you can use ss -tuln | grep 64 to get a more compact view of the active ports:
root@ArunClusterRouter:~# ss -tuln | grep 64
tcp   LISTEN 0      1024    0.0.0.0:6449    0.0.0.0:*
tcp   LISTEN 0      1024    0.0.0.0:6448    0.0.0.0:*
tcp   LISTEN 0      1024    0.0.0.0:6447    0.0.0.0:*
tcp   LISTEN 0      1024    0.0.0.0:6446    0.0.0.0:*
root@ArunClusterRouter:~#
These ports confirm that MySQL Router is actively listening and ready to handle traffic according to its configuration. With this confirmation, you can be sure that MySQL Router is correctly configured and ready to route traffic to the InnoDB Cluster as intended.
Testing read/write and read-only routing
After configuring MySQL Router, it’s essential to verify that the traffic is being properly directed according to the desired routing strategy.
First, let’s confirm that the read-write traffic is being properly redirected to the primary node (ArunClusterND1). This is done by connecting to the MySQL Router on port 6446, which is configured for read-write traffic. The query executed shows the current user, hostname, and read-only settings for the connection:
root@ArunClusterRouter:~# mysql -uappuser -p -h172.31.18.180 -P6446 -e 'select user(), @@hostname, @@read_only, @@super_read_only'
Enter password:
+---------------------------+----------------+-------------+-------------------+
| user()                    | @@hostname     | @@read_only | @@super_read_only |
+---------------------------+----------------+-------------+-------------------+
| appuser@ArunClusterRouter | ArunClusterND1 | 0           | 0                 |
+---------------------------+----------------+-------------+-------------------+
root@ArunClusterRouter:~#
As expected, the hostname is ArunClusterND1, indicating that the read-write traffic is being correctly directed to the primary node. Additionally, both @@read_only and @@super_read_only are set to 0, confirming that the node is accepting read-write queries.
Next, let’s verify that read-only traffic is being redirected to a secondary node. For this, we connect to MySQL Router on port 6447, which is dedicated to read-only traffic. The result of the query confirms that the traffic is correctly routed, in this case to ArunClusterND2:
root@ArunClusterRouter:~# mysql -uappuser -p -h172.31.18.180 -P6447 -e 'select user(), @@hostname, @@read_only, @@super_read_only'
Enter password:
+---------------------------+----------------+-------------+-------------------+
| user()                    | @@hostname     | @@read_only | @@super_read_only |
+---------------------------+----------------+-------------+-------------------+
| appuser@ArunClusterRouter | ArunClusterND2 | 1           | 1                 |
+---------------------------+----------------+-------------+-------------------+
root@ArunClusterRouter:~#
Here, the hostname is ArunClusterND2, confirming that the read-only traffic is being properly routed to the secondary node. The @@read_only and @@super_read_only variables are both set to 1, indicating that this node is configured to accept only read queries and cannot perform write operations.
This behaviour demonstrates that the MySQL Router is correctly handling the read-write and read-only traffic, directing it to the appropriate nodes in the cluster.
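Because the read-only route uses the round-robin-with-fallback strategy, repeated connections to port 6447 should be spread across the available secondaries. A quick way to observe this, sketched with the MYSQL_PWD environment variable purely for brevity in a test environment:

# Open several short-lived read-only connections and print which secondary served each one.
export MYSQL_PWD='<appuser password>'   # test-only convenience; avoid in production
for i in 1 2 3 4; do
    mysql -uappuser -h172.31.18.180 -P6447 -N -e 'select @@hostname;'
done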
Failover test
After setting up the InnoDB Cluster and MySQL Router, it’s crucial to test the failover process to ensure that traffic is properly redirected when the primary node fails. In this test, we simulate a failover by promoting a secondary node to be the new primary and verifying that MySQL Router redirects read-write and read-only traffic accordingly.
Verifying InnoDB Cluster status with MySQL Shell
To begin, we start by checking the current status of the cluster using MySQL Shell. The cluster is currently operating with ArunClusterND1 as the primary node, and both ArunClusterND2 and ArunClusterND3 as secondary nodes:
root@ArunClusterND1:/home/ubuntu# mysqlsh innoclusteradmin@localhost:3306
Please provide the password for 'innoclusteradmin@localhost:3306': ****
Save password for 'innoclusteradmin@localhost:3306'? [Y]es/[N]o/Ne[v]er (default No): y
MySQL Shell 8.0.36

Copyright (c) 2016, 2023, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type 'help' or '?' for help; 'quit' to exit.
Creating a session to 'innoclusteradmin@localhost:3306'
Fetching schema names for auto-completion... Press ^C to stop.
Your MySQL connection id is 7856
Server version: 8.0.41-0ubuntu0.24.04.1 (Ubuntu)
No default schema selected; type use <schema> to set one.
MySQL localhost:3306 ssl Py >
MySQL localhost:3306 ssl Py > cluster = dba.get_cluster()
MySQL localhost:3306 ssl Py > cluster.status()
{
    "clusterName": "innodbtestcluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "ArunClusterND1:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "ArunClusterND1:3306": {
                "address": "ArunClusterND1:3306",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.41"
            },
            "ArunClusterND2:3306": {
                "address": "ArunClusterND2:3306",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.41"
            },
            "ArunClusterND3:3306": {
                "address": "ArunClusterND3:3306",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.41"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "ArunClusterND1:3306"
}
MySQL localhost:3306 ssl Py >
At this point, ArunClusterND1:3306 is the primary, and ArunClusterND2 and ArunClusterND3 are secondary members of the cluster.
Promoting a secondary node to primary
To test the failover, we manually promote ArunClusterND2 to the primary role using cluster.set_primary_instance().
This command switches ArunClusterND2:3306 from SECONDARY to PRIMARY, while ArunClusterND1 is demoted to SECONDARY:
MySQL localhost:3306 ssl Py > cluster.set_primary_instance('ArunClusterND2:3306')
Setting instance 'ArunClusterND2:3306' as the primary instance of cluster 'innodbtestcluster'...

Instance 'ArunClusterND3:3306' remains SECONDARY.
Instance 'ArunClusterND2:3306' was switched from SECONDARY to PRIMARY.
Instance 'ArunClusterND1:3306' was switched from PRIMARY to SECONDARY.

The instance 'ArunClusterND2:3306' was successfully elected as primary.
MySQL localhost:3306 ssl Py >
Verifying InnoDB Cluster member status using SQL
We can also use the following performance_schema query to verify the current status of the Group Replication cluster nodes. This query retrieves important information such as the replication channel name, unique member ID, host, port, role (PRIMARY for the writer node and SECONDARY for reader nodes), and the current state of each member (ONLINE, OFFLINE, etc.).
MySQL localhost:3306 ssl SQL > SELECT rgm.CHANNEL_NAME, rgm.MEMBER_ID, rgm.MEMBER_HOST, rgm.MEMBER_PORT, rgm.MEMBER_ROLE, rgm.MEMBER_STATE, rgms.COUNT_TRANSACTIONS_IN_QUEUE, rgms.COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE, rgms.COUNT_TRANSACTIONS_REMOTE_APPLIED FROM performance_schema.replication_group_members rgm JOIN performance_schema.replication_group_member_stats rgms ON rgm.MEMBER_ID = rgms.MEMBER_ID\G
*************************** 1. row ***************************
                               CHANNEL_NAME: group_replication_applier
                                  MEMBER_ID: 217c72cb-30bf-11f0-8f67-0affef90b599
                                MEMBER_HOST: ArunClusterND1
                                MEMBER_PORT: 3306
                                MEMBER_ROLE: SECONDARY
                               MEMBER_STATE: ONLINE
                COUNT_TRANSACTIONS_IN_QUEUE: 0
 COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE: 0
          COUNT_TRANSACTIONS_REMOTE_APPLIED: 476
*************************** 2. row ***************************
                               CHANNEL_NAME: group_replication_applier
                                  MEMBER_ID: 234d92e8-30bf-11f0-8d31-0affea343aeb
                                MEMBER_HOST: ArunClusterND2
                                MEMBER_PORT: 3306
                                MEMBER_ROLE: PRIMARY
                               MEMBER_STATE: ONLINE
                COUNT_TRANSACTIONS_IN_QUEUE: 0
 COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE: 0
          COUNT_TRANSACTIONS_REMOTE_APPLIED: 27135
*************************** 3. row ***************************
                               CHANNEL_NAME: group_replication_applier
                                  MEMBER_ID: 24869d66-30bf-11f0-97ed-0afff8203dab
                                MEMBER_HOST: ArunClusterND3
                                MEMBER_PORT: 3306
                                MEMBER_ROLE: SECONDARY
                               MEMBER_STATE: ONLINE
                COUNT_TRANSACTIONS_IN_QUEUE: 0
 COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE: 0
          COUNT_TRANSACTIONS_REMOTE_APPLIED: 20
3 rows in set (0.0007 sec)
MySQL localhost:3306 ssl SQL >
The cluster.status() command in MySQL Shell provides a high-level overview of the InnoDB Cluster, showing each node’s role (PRIMARY or SECONDARY), mode (R/W or R/O), replication lag (abstracted), and overall cluster health. In contrast, the SQL query on replication_group_members and replication_group_member_stats gives a lower-level view with detailed metrics such as transaction queues and the number of transactions applied, which helps in diagnosing replication lag and synchronization status. While cluster.status() is ideal for quick health checks, the SQL query is better suited for in-depth troubleshooting.
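MySQL Shell can also bridge the two views: the status() method accepts an extended option that adds more Group Replication detail to the same report. A minimal, non-interactive sketch assuming the innoclusteradmin account used throughout this post:

# Run an extended cluster status check from the shell without opening an interactive session.
mysqlsh innoclusteradmin@ArunClusterND1:3306 --py -e "print(dba.get_cluster().status({'extended': 1}))"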
Verifying the traffic routing
After promoting ArunClusterND2 to the primary, we verify that MySQL Router is correctly routing traffic to the new primary and secondary nodes.
- Read-Write Traffic:
We check that read-write traffic is now directed to ArunClusterND2, which is now the primary:
root@ArunClusterRouter:~# mysql -uappuser -p -h172.31.18.180 -P6446 -e 'select user(), @@hostname, @@read_only, @@super_read_only'
Enter password:
+---------------------------+----------------+-------------+-------------------+
| user()                    | @@hostname     | @@read_only | @@super_read_only |
+---------------------------+----------------+-------------+-------------------+
| appuser@ArunClusterRouter | ArunClusterND2 | 0           | 0                 |
+---------------------------+----------------+-------------+-------------------+
root@ArunClusterRouter:~#
As expected, the hostname is now ArunClusterND2, and the node is accepting read-write queries (@@read_only is 0).
- Read-Only Traffic:
Similarly, read-only traffic should now be served by the remaining secondaries. In this test, the connection is routed to ArunClusterND1, which has been demoted to a secondary:
root@ArunClusterRouter:~# mysql -uappuser -p -h172.31.18.180 -P6447 -e 'select user(), @@hostname, @@read_only, @@super_read_only'
Enter password:
+---------------------------+----------------+-------------+-------------------+
| user()                    | @@hostname     | @@read_only | @@super_read_only |
+---------------------------+----------------+-------------+-------------------+
| appuser@ArunClusterRouter | ArunClusterND1 | 1           | 1                 |
+---------------------------+----------------+-------------+-------------------+
Here, the hostname is ArunClusterND1, confirming that read-only traffic is being correctly routed to the former primary, now secondary node, and is in read-only mode (@@read_only and @@super_read_only are 1).
Verifying the cluster status after failover
Finally, we confirm the new cluster status after the failover. The new primary is now ArunClusterND2, while ArunClusterND1 has become a secondary:
MySQL localhost:3306 ssl Py > cluster.status()
{
    "clusterName": "innodbtestcluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "ArunClusterND2:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "ArunClusterND1:3306": {
                "address": "ArunClusterND1:3306",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.41"
            },
            "ArunClusterND2:3306": {
                "address": "ArunClusterND2:3306",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.41"
            },
            "ArunClusterND3:3306": {
                "address": "ArunClusterND3:3306",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.41"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "ArunClusterND2:3306"
}
MySQL localhost:3306 ssl Py >
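If you want to return the cluster to its original topology after the test, the same switchover mechanism can be used to make ArunClusterND1 the primary again. A small sketch using a non-interactive MySQL Shell call; the interactive cluster.set_primary_instance() shown earlier works equally well:

# Promote ArunClusterND1 back to PRIMARY once the failover test is complete.
mysqlsh innoclusteradmin@ArunClusterND2:3306 --py -e "dba.get_cluster().set_primary_instance('ArunClusterND1:3306')"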
Conclusion
With this successful failover, MySQL Router continues to route traffic according to the new cluster topology, ensuring that both read-write and read-only traffic are directed to the correct nodes.
This concludes the failover test, confirming that MySQL Router is functioning correctly and that the cluster can tolerate a primary node failure.
For those looking to dive deeper, our Group Replication 101 training course offers a great opportunity. This instructor-led class covers all the topics mentioned above, with hands-on labs and expert guidance. Learn more and sign up at percona.com/training.