InnoDB Cluster or ClusterSet topologies already include secondary instances that can act as failover targets for the primary and also offload read requests. However, with MySQL 8.4, we now have the ability to attach a separate async replica to the cluster to serve various special/ad-hoc queries or reporting workloads. This also helps offload read traffic away from both the primary and the secondaries.

Let me now showcase a practical implementation and deployment of a read replica inside an InnoDB Cluster. For demo purposes, I am deploying the instances using the MySQL sandbox commands available in MySQL Shell.

Topology

InnoDB Cluster read replicas

Async read replica auto source failover

 

  • InnoDB cluster nodes

  • Async replica

Quickly deploying InnoDB Cluster

Bootstrapping the first node
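The commands below are a minimal sketch of the bootstrap, assuming sandbox port 3307 and the cluster name “myCluster” (both arbitrary choices for this demo):

```shell
# Deploy the first sandbox instance (port 3307 is an assumption for this demo)
mysqlsh -- dba deploy-sandbox-instance 3307

# Connect to it and create the InnoDB Cluster
mysqlsh root@127.0.0.1:3307 -- dba create-cluster myCluster
```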

Adding other nodes
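Assuming sandbox ports 3308 and 3309 for the remaining members, adding them could look like this (clone recovery provisions the data automatically):

```shell
# Deploy two more sandbox instances
mysqlsh -- dba deploy-sandbox-instance 3308
mysqlsh -- dba deploy-sandbox-instance 3309

# Add them to the cluster using clone-based provisioning
mysqlsh root@127.0.0.1:3307 -- cluster add-instance root@127.0.0.1:3308 --recoveryMethod=clone
mysqlsh root@127.0.0.1:3307 -- cluster add-instance root@127.0.0.1:3309 --recoveryMethod=clone
```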

Verifying cluster status
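For example, using MySQL Shell's command-line integration:

```shell
# Show the cluster topology and member states
mysqlsh root@127.0.0.1:3307 -- cluster status
```

A healthy three-node cluster reports "status": "OK" and lists all members as ONLINE.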

Adding async replica

Now we have the InnoDB cluster ready. Next, we can add the async replica.

Before adding the async replica, some prerequisites need to be met on the replica.

Prerequisites:

  • It should be a standalone MySQL instance running version 8.0.23 or higher.
  • There should be no unmanaged replication channels configured.
  • The instance should have the same credentials as those used to manage the cluster.
  • It is recommended to run “dba.checkInstanceConfiguration()” or “dba.configureInstance()” before attempting to create a read replica, so that the instance is configured compatibly with the InnoDB Cluster nodes.

Once all prerequisites are met, we can add the async replica by executing the below command on one of the InnoDB Cluster nodes.
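A sketch of the call, assuming the read replica runs on sandbox port 3310; from a MySQL Shell JS session connected to a cluster member:

```shell
# Fetch the cluster object and add the read replica (port 3310 is assumed)
mysqlsh root@127.0.0.1:3307 --js -e "
  var cluster = dba.getCluster();
  cluster.addReplicaInstance('127.0.0.1:3310', {recoveryMethod: 'clone'});
"
```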

Output:

By default, the source node will be the primary node (“127.0.0.1:3307”) of the cluster. In the cluster status, we can see the “readReplicas” section with the async node details.

We can verify the source instance using the replication status command below.
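On the read replica itself (port 3310 assumed), something like:

```shell
# Verify which cluster member the read replica is pulling from
mysql -h 127.0.0.1 -P 3310 -u root -p \
  -e "SHOW REPLICA STATUS\G" | grep -E "Source_Host|Source_Port|Replica_IO_Running|Replica_SQL_Running"
```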

Now comes the good part. If the primary goes down or the primary role switches to another node, the async replica automatically follows the new primary of the cluster.

Performing a manual switchover to node (“127.0.0.1:3308”)
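A sketch of the switchover, using the AdminAPI:

```shell
# Promote 127.0.0.1:3308 to primary; the old primary becomes a secondary
mysqlsh root@127.0.0.1:3307 -- cluster set-primary-instance 127.0.0.1:3308
```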

Output:

The async replica automatically chooses the new primary/source.

In the cluster status, we can see the “readReplicas” under the new primary now.

If we want the chosen source to be a secondary rather than the primary, we can set “replicationSources” to “secondary.” In this case, the async replica will always fail over to one of the secondaries instead of the primary.

E.g.,
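A sketch, reusing the assumed replica port 3310; replicationSources is an option of addReplicaInstance():

```shell
# Make the read replica follow a secondary instead of the primary
mysqlsh root@127.0.0.1:3307 --js -e "
  var cluster = dba.getCluster();
  cluster.addReplicaInstance('127.0.0.1:3310',
      {replicationSources: 'secondary', recoveryMethod: 'clone'});
"
```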

We can get more information, such as the label, replication source type, etc., with the help of the below command.
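For instance:

```shell
# Describe the topology, including read replica labels and their source type
mysqlsh root@127.0.0.1:3307 -- cluster describe
```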

Monitoring

Under the performance_schema and mysql databases, we have tables (replication_asynchronous_connection_failover and replication_asynchronous_connection_failover_managed) that keep the source node information and are updated when the cluster topology changes and the async replica has to switch its source. Currently, we see a higher weight (“80”) for the secondary nodes, as the source will be chosen only from among them.
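These tables can be queried directly on the read replica (port 3310 assumed):

```shell
# List the candidate sources and their weights on the async replica
mysql -h 127.0.0.1 -P 3310 -u root -p -e "
  SELECT * FROM performance_schema.replication_asynchronous_connection_failover;
  SELECT * FROM performance_schema.replication_asynchronous_connection_failover_managed\G"
```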

For the rest, we can track the async node information via cluster commands like “cluster.status()” and “cluster.describe(),” as we saw in the implementation above.

Traffic routing

MySQL Router comes in very handy when we talk about sending read-only traffic (ad-hoc, reporting, etc.) to these async nodes. First, we can set up an account for the MySQL Router service.
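A sketch of the account setup; the user name and password here are placeholders:

```shell
# On the primary, create a dedicated account for MySQL Router
mysql -h 127.0.0.1 -P 3307 -u root -p -e "
  CREATE USER 'mysqlrouter'@'%' IDENTIFIED BY 'R0uterPass!';"
```

When this account is passed to mysqlrouter --bootstrap via --account, the bootstrap process grants it the privileges it needs.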

Then, we can bootstrap the mysqlrouter with our existing cluster information and start the service.
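Assuming the account above and an arbitrary install directory:

```shell
# Bootstrap Router against the cluster and start the service
mysqlrouter --bootstrap root@127.0.0.1:3307 \
    --directory /opt/myrouter --account mysqlrouter --name Router1
/opt/myrouter/start.sh
```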

Below are the read & write endpoints available under both Classic and X protocols.
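Unless customized at bootstrap, the default port layout is:

```shell
# Default MySQL Router ports
#   6446 - classic protocol, read-write
#   6447 - classic protocol, read-only
#   6448 - X protocol, read-write
#   6449 - X protocol, read-only
```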

Here is the router(“Router1”) information.
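For example:

```shell
# List routers registered against this cluster
mysqlsh root@127.0.0.1:3307 -- cluster list-routers
```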

Next, we need to set the setRoutingOption() as below so the read-only traffic goes to the read replicas connected via the “Router1” service.
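A sketch, assuming the router was bootstrapped with the name Router1:

```shell
# Route Router1's read-only port to the read replicas only
mysqlsh root@127.0.0.1:3307 --js -e "
  var cluster = dba.getCluster();
  cluster.setRoutingOption('Router1', 'read_only_targets', 'read_replicas');
"
```

Other accepted values for read_only_targets are "secondaries" and "all".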

Output:

Finally, we can verify the “routingOptions()” status as below.
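For example:

```shell
# Show the effective routing options, globally and per router
mysqlsh root@127.0.0.1:3307 -- cluster routing-options
```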

Output:

For the connection test, we can simply connect to the router service over read-only port 6447.
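For example:

```shell
# Connect through Router's classic read-only port; the query shows which
# backend actually served the connection
mysql -h 127.0.0.1 -P 6447 -u root -p -e "SELECT @@port, @@hostname;"
```

With read_only_targets set to read_replicas, @@port should report the read replica's port (3310 in this demo).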

Recovery of the async replica

If the async replica somehow gets disconnected from the source node or breaks due to some unfixable error, we can try rejoining the instance via the “rejoinInstance()” function. Depending on the situation, we can use either incremental recovery or a full recovery via clone.
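A sketch of both variants, with the replica port assumed as 3310:

```shell
# Incremental recovery (default when the replica's GTID set is compatible)
mysqlsh root@127.0.0.1:3307 -- cluster rejoin-instance 127.0.0.1:3310

# Full re-provisioning via clone, for unrecoverable replication errors
mysqlsh root@127.0.0.1:3307 -- cluster rejoin-instance 127.0.0.1:3310 --recoveryMethod=clone
```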

Output:

Final thought

Setting up and managing isolated read replicas is quite easy and handy under the InnoDB Cluster and ClusterSet ecosystem. We can also use a dedicated routing endpoint and leverage MySQL Router to distribute read traffic over different replicas. This lets the main cluster's secondary nodes stay focused on critical production workloads and high availability, while less critical workloads are diverted to the async nodes.
