Having a standby cluster ensures maximum data availability and provides a disaster recovery solution. In this blog post, we will cover how to set up a standby cluster using streaming replication and how to create an ad-hoc/standby cluster that uses a remote pgBackRest repository. The source and destination clusters can be deployed in different namespaces, regions, or data centers, with no dependencies between them.

Let’s dive into each of these processes below.

Building a standby cluster using streaming replication

1) Below is the leader/primary cluster, which is already set up and running.
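As an illustration, the running primary can be checked with kubectl. A minimal sketch, assuming a default deployment with cluster name cluster1 in the postgres-operator namespace (adjust both to your setup):

```shell
# Check the operator and database pods in the primary namespace
kubectl get pods -n postgres-operator

# Check the PerconaPGCluster resource status
kubectl get perconapgcluster -n postgres-operator
```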


In order for the standby to connect to the leader/primary, we need to expose the service in the below part of the [cr.yaml] file.

The exact endpoint details will be used later in the standby cluster configuration.

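One way to expose the primary is via the expose option in the cluster spec. A sketch only; the exact field layout can vary between operator versions:

```yaml
# Fragment of the primary cluster's cr.yaml (illustrative)
spec:
  expose:
    type: LoadBalancer   # makes the primary service reachable from outside the cluster
```

Once applied, `kubectl get svc -n postgres-operator` shows the external endpoint to note down for the standby configuration.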

2) Next, we need to make sure we have copied all the certificates from the leader/primary cluster and deployed them on the standby cluster, which we set up under a different namespace [postgres-operator2].
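The copy can be done by exporting the secrets and re-creating them in the standby namespace. The secret name below (cluster1-cluster-cert) is an assumption; list the actual names with `kubectl get secrets -n postgres-operator` first:

```shell
# Export a TLS secret from the primary namespace
kubectl get secret cluster1-cluster-cert -n postgres-operator -o yaml > cluster-cert.yaml

# Edit cluster-cert.yaml and change the "namespace:" field to postgres-operator2,
# then re-create the secret on the standby side
kubectl apply -f cluster-cert.yaml -n postgres-operator2
```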

Delete the old certificates from the new/standby setup after taking a backup (if required).

Before applying the new secret changes, make sure to change the namespace [postgres-operator2] as per the new cluster.

3) If we change the certificate names to anything different, then we need to make the corresponding changes in the standby [cr.yaml] file and re-apply them there.
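In the standby [cr.yaml], the certificates are referenced from the secrets section. A sketch, with example secret names that must match whatever was copied over:

```yaml
# Standby cr.yaml fragment (secret names here are examples)
spec:
  secrets:
    customTLSSecret:
      name: cluster1-cert               # server TLS secret copied from the primary
    customReplicationTLSSecret:
      name: cluster1-replication-cert   # replication TLS secret copied from the primary
```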

Additionally, we need to enable the standby option and add the leader endpoint details in the standby [cr.yaml].
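For streaming replication, the standby section points at the primary endpoint noted earlier. A sketch with a placeholder host:

```yaml
# Standby cr.yaml fragment
spec:
  standby:
    enabled: true
    host: "<primary-endpoint>"   # e.g. the LoadBalancer address of the primary service
    port: 5432
```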

4) Finally, we can apply the modified changes.

Also, make sure to delete the pod and the associated PVC in case the changes are not reflected.

5) Verify the changes on the standby.

Primary/Leader:

Standby:
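One simple check is PostgreSQL’s recovery flag, which should be false on the primary and true on the standby. The pod names below are placeholders for your actual instance pods:

```shell
# On the primary: expect "f" (not in recovery)
kubectl exec -n postgres-operator -it <primary-instance-pod> -- \
  psql -U postgres -c "SELECT pg_is_in_recovery();"

# On the standby: expect "t" (in recovery, replaying WAL from the leader)
kubectl exec -n postgres-operator2 -it <standby-instance-pod> -- \
  psql -U postgres -c "SELECT pg_is_in_recovery();"
```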

 

Building a standby/ad-hoc cluster using a pgBackRest repository

1) Consider the standby cluster below.

2) Next, we need to set up our bucket/S3 credentials in a secret file.
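A sketch of this step, following pgBackRest's repoN-* key naming (repo2 here is an assumption that should match the repo index used in cr.yaml; the key values are placeholders):

```shell
# s3.conf holds the pgBackRest credentials for the remote repo
cat <<EOF > s3.conf
[global]
repo2-s3-key=<AWS_ACCESS_KEY_ID>
repo2-s3-key-secret=<AWS_SECRET_ACCESS_KEY>
EOF

# Pack it into the secret that cr.yaml will reference
kubectl create secret generic cluster1-pgbackrest-secrets \
  --from-file=s3.conf -n postgres-operator2
```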


Note – For configuration with other storage types (GCS, Azure Blob Storage, etc.), please refer to the manual – https://docs.percona.com/percona-operator-for-postgresql/2.0/backups-storage.html#__tabbed_1_3

3) Once the secret file is deployed, we need to add the remote bucket/endpoint details, along with the above secret [cluster1-pgbackrest-secrets], in the pgBackRest backup section of the [cr.yaml] file. The backups stored in the remote S3 repository are initiated by the primary cluster, which uses a similar pgBackRest configuration.
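A sketch of the relevant cr.yaml fragment; the bucket, endpoint, and region values are placeholders, and the repo name must match the repo index used in the credentials file:

```yaml
# cr.yaml fragment (illustrative)
spec:
  backups:
    pgbackrest:
      configuration:
        - secret:
            name: cluster1-pgbackrest-secrets   # the secret created above
      repos:
        - name: repo2                           # must match the repo2-* keys in s3.conf
          s3:
            bucket: "<bucket-name>"
            endpoint: "s3.amazonaws.com"
            region: "<region>"
```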

4) Also, enable the standby option and specify the target repo name in the [cr.yaml] file.
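In repo-based mode, the standby section references the shared repository instead of a streaming host. A sketch:

```yaml
# Standby cr.yaml fragment: bootstrap and sync from the shared repo
spec:
  standby:
    enabled: true
    repoName: repo2   # must match a repo defined under backups.pgbackrest.repos
```

The changes can then be applied with `kubectl apply -f cr.yaml -n postgres-operator2`.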

Finally, we can apply the modifications.

5) Verify data synchronization.

The existing pgBackRest backups will now be listed on the standby side as well.

Further, if we access the standby database, the synchronized data will be reflected there.
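The backup listing can be checked from the standby side with pgBackRest. The repo-host pod name below is an assumption based on common operator naming; verify it with `kubectl get pods -n postgres-operator2`:

```shell
# List the backups visible to the standby cluster
kubectl exec -n postgres-operator2 cluster1-repo-host-0 -- pgbackrest info
```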


If the changes are not reflected, try removing the old pod/PVC.

Summary:

The procedures discussed above outline a few ways to deploy a new standalone/standby cluster from a source primary cluster in a Kubernetes/Percona Operator-based environment. They also provide the flexibility to serve both purposes: maintaining a continuous data stream, or building a one-time cluster with the exact same data set.
