Setting up an InnoDB Cluster requires three key components: Group Replication, MySQL Shell, and MySQL Router. In the previous post, we covered the process of building a 3-node InnoDB Cluster. In this post, we shift our focus to configuring MySQL Router and validating failover functionality.

Environment overview

We are using three InnoDB Cluster nodes along with one dedicated router node, and matching entries need to be added to each node’s hosts file to ensure proper name resolution across the setup.
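For reference, a minimal hosts-file sketch is shown below; the IP addresses and the router node’s hostname (ArunClusterRouter) are placeholders for this example, so substitute your own values:

    # /etc/hosts (on every node; illustrative addresses only)
    192.168.56.101   ArunClusterND1
    192.168.56.102   ArunClusterND2
    192.168.56.103   ArunClusterND3
    192.168.56.104   ArunClusterRouter   # hypothetical name for the router node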

Bootstrapping MySQL Router

The next step in setting up an InnoDB Cluster environment is to install MySQL Router, a lightweight middleware that enables applications to connect transparently to the correct MySQL instance in the cluster. Once installed, MySQL Router must be bootstrapped using the mysqlrouter --bootstrap command with appropriate credentials, such as innoclusteradmin, to automatically configure routing based on the current cluster topology. This process sets up the necessary configuration files and directories to route read and write traffic to the correct nodes, enabling seamless failover and high availability for client connections.
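If the Router is not yet installed, it can be pulled in from the MySQL or distribution repositories; the exact package name depends on the repository you use, so treat the names below as examples:

    # RHEL/Oracle Linux with the MySQL community repository
    $ sudo yum install mysql-router-community

    # Debian/Ubuntu
    $ sudo apt install mysql-router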

MySQL Router configuration details

The mysqlrouter --bootstrap innoclusteradmin@ArunClusterND2:3306 command initializes and configures MySQL Router for integration with the InnoDB Cluster. During this process, MySQL Router connects to the specified cluster node to retrieve information about the cluster topology and automatically generates a configuration tailored to the cluster’s setup. It creates the necessary internal accounts, stores credentials securely using the keyring, adjusts file permissions, and writes a new configuration under the specified directory (myrouter).
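A minimal form of that invocation, assuming it is run as root and that the myrouter directory should be created under the current working directory (the --directory and --user options are inferred from the paths shown later in this post):

    $ mysqlrouter --bootstrap innoclusteradmin@ArunClusterND2:3306 \
          --directory myrouter --user=root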

This configuration enables the router to provide seamless connection routing: it sets up ports for both the MySQL Classic and X Protocols, distinguishing between read/write and read-only traffic. After successful configuration, the Router is ready to be started using the generated configuration file, allowing client applications to connect to the InnoDB Cluster transparently via the defined local ports.

From the ArunClusterND1 node, we re-verified whether any mysql_router user was present in the cluster after running the mysqlrouter --bootstrap command. As shown in the output below, a user named mysql_router1_fqkfovl was indeed created, confirming that the bootstrap process successfully registered the Router with the cluster.
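One way to perform that check from a cluster node, assuming the innoclusteradmin account can read the mysql.user table:

    $ mysql -u innoclusteradmin -p -e \
        "SELECT user, host FROM mysql.user WHERE user LIKE 'mysql_router%';"
    # expect the bootstrap-created account, e.g. mysql_router1_fqkfovl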

The default MySQL Router configuration, located at /usr/bin/myrouter/mysqlrouter.conf, is automatically generated during the MySQL Router bootstrap process. The configuration includes several important settings. Under the [DEFAULT] section, it specifies paths for logging, runtime, and data storage, as well as SSL certificates and keys for secure communication. The connection and read timeouts are set to 5 and 30 seconds, respectively, while the dynamic state and keyring files are stored in the data folder. The default SSL mode for the client is set to PREFERRED, and the server SSL mode is configured as AS_CLIENT. Additionally, the configuration defines the user for the Router and sets up the metadata cache for the InnoDB Cluster.

The [logger] section defines the logging level as INFO. For routing configurations, the file defines specific settings for both read-write and read-only routing for the MySQL InnoDB cluster, binding to different ports for each routing method. The routing strategies are first-available for primary and round-robin-with-fallback for secondary nodes. The configuration also enables HTTP server access with SSL on port 8443, and includes basic authentication settings in the [http_auth_realm] section. These default configurations are tailored for connecting MySQL Router to a group replication-enabled InnoDB Cluster, allowing it to route traffic efficiently to the appropriate nodes based on their roles.
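An abbreviated sketch of such a generated file is shown below; the cluster name (myCluster) is a placeholder, and paths, account names, and ports will match whatever your own bootstrap produced:

    [DEFAULT]
    logging_folder=/usr/bin/myrouter/log
    runtime_folder=/usr/bin/myrouter/run
    data_folder=/usr/bin/myrouter/data
    keyring_path=/usr/bin/myrouter/data/keyring
    master_key_path=/usr/bin/myrouter/mysqlrouter.key
    connect_timeout=5
    read_timeout=30
    dynamic_state=/usr/bin/myrouter/data/state.json
    client_ssl_mode=PREFERRED
    server_ssl_mode=AS_CLIENT

    [logger]
    level=INFO

    [metadata_cache:myCluster]
    cluster_type=gr
    router_id=1
    user=mysql_router1_fqkfovl
    metadata_cluster=myCluster
    ttl=0.5

    [routing:myCluster_rw]
    bind_port=6446
    destinations=metadata-cache://myCluster/?role=PRIMARY
    routing_strategy=first-available
    protocol=classic

    [routing:myCluster_ro]
    bind_port=6447
    destinations=metadata-cache://myCluster/?role=SECONDARY
    routing_strategy=round-robin-with-fallback
    protocol=classic

    # ... the equivalent X Protocol sections on 6448/6449 and the
    # [http_server]/[http_auth_realm] sections (port 8443) are omitted here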

Starting MySQL Router and verifying the process

To start MySQL Router, the myrouter/start.sh script is used. This script is a small wrapper, generated by the bootstrap process, that launches the MySQL Router service with the generated configuration. When executed, it creates a PID file, such as /root/myrouter/mysqlrouter.pid, which contains the process ID of the running MySQL Router instance. Log output is written to a log file rather than the console, so the Router runs cleanly in the background.
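A quick way to start it and check the PID file, using the paths mentioned above:

    $ ./myrouter/start.sh
    $ cat /root/myrouter/mysqlrouter.pid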

Once MySQL Router has started, you can verify that it is running by checking for its process ID using the ps -ef | grep -i mysqlrouter command. This command lists all running processes related to MySQL Router. The output will show the process ID (PID) of MySQL Router, confirming that it is actively running. For example:
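(Illustrative output only; the PID, timestamps, and configuration path will differ on your system.)

    $ ps -ef | grep -i mysqlrouter
    root     <pid>     1  ...  mysqlrouter -c .../myrouter/mysqlrouter.conf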

Validating MySQL Router port bindings

To ensure that MySQL Router is correctly set up and running, it’s important to verify that it is listening on the expected network ports. You can use netstat or ss commands to check the listening ports.

By running the netstat -tulnp | grep mysqlrouter command, you will get an output that shows the ports MySQL Router is listening on. The output should display entries like the following:
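(Illustrative entries, assuming the default bootstrap ports; bind addresses and PIDs will differ.)

    $ netstat -tulnp | grep mysqlrouter
    tcp   0   0 0.0.0.0:6446   0.0.0.0:*   LISTEN   <pid>/mysqlrouter
    tcp   0   0 0.0.0.0:6447   0.0.0.0:*   LISTEN   <pid>/mysqlrouter
    tcp   0   0 0.0.0.0:6448   0.0.0.0:*   LISTEN   <pid>/mysqlrouter
    tcp   0   0 0.0.0.0:6449   0.0.0.0:*   LISTEN   <pid>/mysqlrouter
    tcp   0   0 0.0.0.0:8443   0.0.0.0:*   LISTEN   <pid>/mysqlrouter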

This confirms that MySQL Router is listening on the following ports:

  • 6446 for Read/Write traffic
  • 6447 for Read-Only traffic
  • 6448 / 6449 for X Protocol read/write and read-only connections
  • 8443 for REST API or administrative interfaces (if configured)

Alternatively, you can use ss -tuln | grep 64 to get a more compact view of the same listening ports.

These entries confirm that MySQL Router is actively listening as configured and is ready to route traffic to the InnoDB Cluster as intended.

Testing read/write and read-only routing

After configuring MySQL Router, it’s essential to verify that the traffic is being properly directed according to the desired routing strategy.

First, let’s confirm that the read-write traffic is being properly redirected to the primary node (ArunClusterND1). This is done by connecting to the MySQL Router on port 6446, which is configured for read-write traffic. The query executed shows the current user, hostname, and read-only settings for the connection:
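(A minimal check, run from the router node; the 127.0.0.1 address and the innoclusteradmin account are assumptions for this sketch.)

    $ mysql -u innoclusteradmin -p -h 127.0.0.1 -P 6446 -e \
        "SELECT current_user(), @@hostname, @@read_only, @@super_read_only;"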

As expected, the hostname is ArunClusterND1, indicating that the read-write traffic is being correctly directed to the primary node. Additionally, both @@read_only and @@super_read_only are set to 0, confirming that the node is accepting read-write queries.

Next, let’s verify that the read-only traffic is being redirected to the secondary node (ArunClusterND2). For this, we connect to MySQL Router on port 6447, which is dedicated to read-only traffic. The result of the query executed here confirms that the traffic is correctly routed:
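(The same check against the read-only port, with the same assumptions as above.)

    $ mysql -u innoclusteradmin -p -h 127.0.0.1 -P 6447 -e \
        "SELECT current_user(), @@hostname, @@read_only, @@super_read_only;"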

Here, the hostname is ArunClusterND2, confirming that the read-only traffic is being properly routed to the secondary node. The @@read_only and @@super_read_only variables are both set to 1, indicating that this node is configured to accept only read queries and cannot perform write operations.

This behaviour demonstrates that the MySQL Router is correctly handling the read-write and read-only traffic, directing it to the appropriate nodes in the cluster.

Failover test

After setting up the InnoDB Cluster and MySQL Router, it’s crucial to test the failover process to ensure that traffic is properly redirected when the primary node fails. In this test, we simulate a failover by promoting a secondary node to be the new primary and verifying that MySQL Router redirects read-write and read-only traffic accordingly.

Verifying InnoDB Cluster status with MySQL Shell

To begin, we check the current status of the cluster using MySQL Shell. The cluster is currently operating with ArunClusterND1 as the primary node, and both ArunClusterND2 and ArunClusterND3 as secondary nodes:
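(One way to pull this up, assuming the innoclusteradmin account created when the cluster was built.)

    $ mysqlsh innoclusteradmin@ArunClusterND1:3306
    MySQL JS > var cluster = dba.getCluster()
    MySQL JS > cluster.status()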

At this point, ArunClusterND1:3306 is the primary, and ArunClusterND2 and ArunClusterND3 are secondary members of the cluster.

Promoting a secondary node to primary

To test the failover, we manually promote ArunClusterND2 to be the primary node:
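(Issued from the same MySQL Shell session, reusing the cluster handle obtained above.)

    MySQL JS > cluster.setPrimaryInstance('ArunClusterND2:3306')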

This command switches ArunClusterND2:3306 from a secondary to the primary role, while ArunClusterND1 is demoted to secondary.

Verifying InnoDB Cluster member status using SQL

We can also use the following performance_schema query to verify the current status of the Group Replication cluster nodes. This query retrieves important information such as the replication channel name, unique member ID, host, port, role (PRIMARY for the new writer node and SECONDARY for reader nodes), and the current state of each member (ONLINE, OFFLINE, etc.).
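A query along these lines returns that information; trim the column list as needed:

    mysql> SELECT channel_name, member_id, member_host, member_port,
                  member_role, member_state
           FROM performance_schema.replication_group_members;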

The cluster.status() command in MySQL Shell provides a high-level overview of the InnoDB Cluster, showing each node’s role (PRIMARY or SECONDARY), mode (R/W or R/O), replication lag (abstracted), and overall cluster health. In contrast, the SQL query on replication_group_members and replication_group_member_stats gives a lower-level view with detailed metrics like transaction queues and the number of transactions applied, which helps in diagnosing replication lag and synchronization status. While cluster.status() is ideal for quick health checks, the SQL query is better suited for in-depth troubleshooting.

Verifying the traffic routing

After promoting ArunClusterND2 to the primary, we verify that MySQL Router is correctly routing traffic to the new primary and secondary nodes.

  • Read-Write Traffic:
    We check that read-write traffic is now directed to ArunClusterND2, the new primary, by connecting to port 6446 again and running the same query as before.

As expected, the hostname is now ArunClusterND2, and the node is accepting read-write queries (@@read_only is 0).

  • Read-Only Traffic:
    Similarly, read-only traffic should now be redirected to ArunClusterND1, which has been demoted to a secondary; connecting to port 6447 confirms this.

Here, the hostname is ArunClusterND1, confirming that read-only traffic is being correctly routed to the former primary, now secondary node, and is in read-only mode (@@read_only and @@super_read_only are 1).

Verifying the cluster status after failover

Finally, we confirm the new cluster status after the failover. The new primary is now ArunClusterND2, while ArunClusterND1 has become a secondary.

Conclusion

With this successful failover, MySQL Router continues to route traffic according to the new cluster topology, ensuring that both read-write and read-only traffic are directed to the correct nodes.

This concludes the failover test, confirming that MySQL Router is functioning correctly and that the cluster can tolerate a primary node failure.

For those looking to dive deeper, our Group Replication 101 training course offers a great opportunity. This instructor-led class covers all the topics mentioned above, with hands-on labs and expert guidance. Learn more and sign up at percona.com/training.
