Percona Backup for MongoDB (PBM) supports snapshot-based physical backups. This is made possible by the backup cursor functionality present in Percona Server for MongoDB. 

In a previous post, we discussed Percona Backup for MongoDB and Disk Snapshots in Google Cloud Platform (part 1) and showed how to implement snapshot-based backups. Now, let’s see how to restore a snapshot-based backup in GCP.

For this demo, I have created a 2-shard MongoDB cluster (each shard consisting of a 3-node PSA replica set) deployed on Google Cloud Platform instances. Each instance has an extra persistent disk attached for storing the MongoDB data, and the PBM agent is installed as per the documentation.

Manual example

Let’s start by checking the details of the backup we are going to restore. Remember that we can also get the complete list of available backups by running pbm list.
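For instance, the backup list and the details of a single backup can be inspected from any node with the pbm client installed. The backup name below is a placeholder; use one reported by `pbm list`:

```shell
# List all available backups known to PBM
pbm list

# Inspect the details of one backup (placeholder name shown)
pbm describe-backup "2023-09-20T10:15:00Z"
```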

Here we can see the nodes that PBM had selected (one per replica set) to be snapshotted at the time of the backup.

Preparation

The first step of the restore is to shut down all mongos routers and arbiter nodes. The PBM agent is not meant to run on these node types, so PBM cannot stop them automatically.
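Assuming the services are managed with systemd (the unit names below may differ in your deployment), this looks like:

```shell
# On each mongos router host:
sudo systemctl stop mongos

# On each arbiter host:
sudo systemctl stop mongod
```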

Now we need to start the restore from any node that has the pbm client installed:
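The backup name below is a placeholder; use the one from `pbm list`. The restore status can be checked afterwards with `pbm describe-restore`:

```shell
# Start the restore of a snapshot-based backup (placeholder name shown)
pbm restore "2023-09-20T10:15:00Z"

# Check the restore status; the restore name is printed by the command above
pbm describe-restore "2023-09-20T10:25:00Z.restore"
```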

This step takes a few minutes while Percona Backup for MongoDB stops the database and cleans up the data directories on all nodes; it then provides the restore name and prompts you to copy the data.

Restore

Next, we use the snapshots to re-create the volumes for each member of the cluster. Let’s start with the config servers. 

We need to get the ID of the snapshot to restore. Here, we can search through available snapshots using the snapshot name and date as we saved it in the “Description” field. For example:
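A sketch of the lookup, assuming the PBM backup name was stored in each snapshot's "Description" field (the backup name is a placeholder):

```shell
# Find snapshots whose description contains the PBM backup name
BACKUP_NAME="2023-09-20T10:15:00Z"
gcloud compute snapshots list \
  --filter="description~${BACKUP_NAME}" \
  --format="table(name,description,creationTimestamp)"
```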

Now, we need to follow these steps for all three config servers:

1. Unmount and detach the old volume

2. Create a new volume based on the snapshot

3. Attach the new volume

4. Mount the volume
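The four steps above can be sketched with the gcloud CLI as follows. All instance, disk, snapshot, and zone names are placeholders for this demo, and the mount point and device name depend on your setup:

```shell
NODE="cfg-1"                       # placeholder instance name
ZONE="us-central1-a"               # placeholder zone
OLD_DISK="cfg-1-data"              # placeholder old data disk
NEW_DISK="cfg-1-data-restored"     # placeholder new disk name
SNAPSHOT="cfg-snapshot-xyz"        # snapshot found in the previous step

# 1. Unmount (run on the node itself) and detach the old data volume
sudo umount /data/db
gcloud compute instances detach-disk "${NODE}" --disk="${OLD_DISK}" --zone="${ZONE}"

# 2. Create a new volume based on the snapshot
gcloud compute disks create "${NEW_DISK}" \
  --source-snapshot="${SNAPSHOT}" --zone="${ZONE}"

# 3. Attach the new volume to the instance
gcloud compute instances attach-disk "${NODE}" --disk="${NEW_DISK}" --zone="${ZONE}"

# 4. Mount the volume (run on the node; the device path may differ)
sudo mount /dev/sdb /data/db
```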

We need to repeat the process for all Shard1 and Shard2 replica set members, using the proper snapshot in each case. 

Once that is done, the last step is to finish the restore with PBM:
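The restore name is the one printed when the restore was started, and the config file path is an assumption for this demo:

```shell
# Finish the restore; PBM will start mongod on all nodes again
pbm restore-finish "2023-09-20T10:25:00Z.restore" -c /etc/pbm/pbm_config.yaml
```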

Automating the restore

We have covered the manual approach; now let's see how we can automate the steps above.

High-level steps

The idea is to provide the script with the backup name we want to restore and the cluster’s topology. 

The script should:

  1. Shut down any remaining services not handled by PBM
  2. Run the pbm restore command
  3. Get the snapshot names to restore for each cluster role
  4. Unmount and detach the old volumes
  5. Create new volumes based on the snapshot for each cluster role
  6. Attach and mount the new volumes
  7. Run pbm restore-finish
  8. Start all services
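The flow above can be sketched as a bash outline. This is illustrative only: the function names are hypothetical stand-ins for the real logic, and the config file path is an assumption:

```shell
#!/bin/bash
# Illustrative outline only; function bodies omitted, names hypothetical
set -euo pipefail

backup_name="$1"

stop_remaining_services                          # mongos routers and arbiters
restore_name=$(start_pbm_restore "${backup_name}")
snapshots=$(lookup_snapshots "${backup_name}")   # one snapshot per cluster role
swap_volumes "${snapshots}"                      # detach, create, attach, mount
pbm restore-finish "${restore_name}" -c /etc/pbm/pbm_config.yaml
start_all_services
```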

Note: In order to manipulate instances, volumes, and snapshots, we need the instanceAdmin and storageAdmin IAM roles assigned to our user (or service account).

Sample script

The example script is available on GitHub. It requires the gcloud CLI to be installed. Keep in mind this is just a proof of concept with only basic error checking, so don't use it in production environments.

We call the script specifying:

  1. The backup name (as reported by pbm status)
  2. The list of nodes for the config server replica set
  3. The name and list of nodes for each shard’s replica set

For example:
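A hypothetical invocation could look like the following. The script name, argument syntax, and all node names here are placeholders; check the script's usage output for the actual interface:

```shell
# Hypothetical script name and topology; adjust to your cluster
./pbm_gcp_restore.sh \
  "2023-09-20T10:15:00Z" \
  --configsvr "cfg-1,cfg-2,cfg-3" \
  --shard "shard1:sh1-1,sh1-2,sh1-3" \
  --shard "shard2:sh2-1,sh2-2,sh2-3"
```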

Conclusion

Percona Backup for MongoDB provides the interface for making snapshot-based physical backups and restores. We have covered the complete restore process using snapshots in GCP and provided a bash script example. In a real production environment, automating the restore process using Ansible or similar tooling could also be a good idea. 

If you have any suggestions for feature requests or bug reports, make sure to let us know by creating a ticket in our public issue tracker. Pull requests are also more than welcome!
