Percona Backup for MongoDB (PBM) supports snapshot-based physical backups. This is made possible by the backup cursor functionality present in Percona Server for MongoDB.
In a previous post, we discussed Percona Backup for MongoDB and Disk Snapshots in Google Cloud Platform (part 1) and showed how to implement snapshot-based backups. Now, let’s see how to restore a snapshot-based backup in GCP.
For this demo, I have created a 2-shard MongoDB cluster (each shard consisting of a 3-node PSA replica set) deployed on Google Cloud Platform instances. Each instance has an extra persistent disk attached for storing the MongoDB data, and the PBM agent is installed as per the documentation.
Manual example
Let’s start by checking the details of the backup we are going to restore. Remember that we can also get the complete list of available backups by running pbm list.
# pbm describe-backup 2024-10-03T15:23:51Z
name: "2024-10-03T15:23:51Z"
opid: 66feb70725398000d9398e35
type: external
last_write_time: "2024-10-03T15:23:54Z"
last_transition_time: "2024-10-03T15:24:09Z"
mongodb_version: 7.0.14-8
fcv: "7.0"
pbm_version: 2.6.0
status: done
size_h: 0 B
replsets:
- name: shard1
  status: done
  node: gcp-test-mongodb-shard01svr1:27018
  last_write_time: "2024-10-03T15:23:53Z"
  last_transition_time: "2024-10-03T15:24:08Z"
  security: {}
- name: shard0
  status: done
  node: gcp-test-mongodb-shard00svr1:27018
  last_write_time: "2024-10-03T15:23:46Z"
  last_transition_time: "2024-10-03T15:24:09Z"
  security: {}
- name: mongo-cfg
  status: done
  node: gcp-test-mongodb-cfg02:27019
  last_write_time: "2024-10-03T15:23:54Z"
  last_transition_time: "2024-10-03T15:24:08Z"
  configsvr: true
  security: {}
Here we can see the nodes that PBM had selected (one per replica set) to be snapshotted at the time of the backup.
Preparation
The first step of the restore is to shut down all mongos routers and arbiter nodes. The PBM agent is not meant to run on those node types, so PBM cannot shut them down for you automatically.
# systemctl stop mongos
# systemctl stop mongod
Now we need to start the restore from any node that has the pbm client installed:
# pbm restore --external
Starting restore 2024-10-03T15:23:51Z from [external].....................................................Ready to copy data to the nodes data directory.
After the copy is done, run: pbm restore-finish 2024-10-03T15:23:51Z -c </path/to/pbm.conf.yaml>
Check restore status with: pbm describe-restore 2024-10-03T15:23:51Z -c </path/to/pbm.conf.yaml>
No other pbm command is available while the restore is running!
This step takes a few minutes while Percona Backup for MongoDB stops the database, cleans up data directories on all nodes, provides the restore name, and prompts you to copy the data.
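While waiting, we can check the restore state from another terminal with the command suggested in the output above. A minimal example, assuming the PBM configuration file lives at /etc/pbm-storage.conf (the same file we pass to restore-finish later):

# pbm describe-restore 2024-10-03T15:23:51Z -c /etc/pbm-storage.conf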
Restore
Next, we use the snapshots to re-create the volumes for each member of the cluster. Let’s start with the config servers.
We need to get the name of the snapshot to restore. We can search the available snapshots by name and by the backup date, which we saved in the “Description” field. For example:
# gcloud compute snapshots list --filter="name:gcp-test-mongodb-cfg01-data AND description:*2024-10-01-11-52*"
NAME                                           DISK_SIZE_GB  SRC_DISK                                                      STATUS
gcp-test-mongodb-cfg01-data-2024-10-01-11-52z  20            northamerica-northeast1-b/disks/gcp-test-mongodb-cfg01-data  READY
Now, we need to follow these steps for all three Config Servers:
1. Unmount and detach the old volume
# umount /var/lib/mongo
# gcloud compute disks list --filter="labels.Name=gcp-test-mongodb-cfg01-data" --format="value(name,zone)"
# gcloud compute instances detach-disk gcp-test-mongodb-cfg01 --disk=gcp-test-mongodb-cfg01-data
2. Create a new volume based on the snapshot
# gcloud compute disks create gcp-test-mongodb-cfg01-data-new --source-snapshot=gcp-test-mongodb-cfg01-data-2024-10-01-11-52z --zone=northamerica-northeast1-b --type=pd-balanced
3. Attach the new volume
# INSTANCE_ID=$(gcloud compute instances list --filter="name='gcp-test-mongodb-cfg01'" --format="get(name)")
# gcloud compute instances attach-disk $INSTANCE_ID --disk=gcp-test-mongodb-cfg01-data-new --device-name=persistent-disk-1 --zone=northamerica-northeast1-b
4. Mount the volume
# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda      8:0    0   20G  0 disk
├─sda1   8:1    0  200M  0 part /boot/efi
└─sda2   8:2    0 19.8G  0 part /
sdb      8:16   0   20G  0 disk
# mount /dev/sdb /var/lib/mongo
We need to repeat the process for all shard0 and shard1 replica set members, using the proper snapshot in each case.
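Since the per-node steps are identical, a small shell loop can save some typing. The sketch below is only illustrative: it assumes the naming scheme of this demo (data disks named <host>-data, snapshots suffixed with the backup date) and a single zone for all members, and the umount/mount steps still have to run on each host (e.g., via gcloud compute ssh):

# Illustrative sketch only: recreate the data disk of each shard0 member
# from its snapshot. Disk/snapshot names and the zone are assumptions
# based on this demo's naming scheme.
ZONE="northamerica-northeast1-b"
SNAP_SUFFIX="2024-10-01-11-52z"
for HOST in gcp-test-mongodb-shard00svr0 gcp-test-mongodb-shard00svr1; do
    DISK="${HOST}-data"
    # Detach the old data disk from the instance
    gcloud compute instances detach-disk "$HOST" --disk="$DISK" --zone="$ZONE"
    # Create a new disk from the node's snapshot and attach it
    gcloud compute disks create "${DISK}-new" --source-snapshot="${DISK}-${SNAP_SUFFIX}" --zone="$ZONE" --type=pd-balanced
    gcloud compute instances attach-disk "$HOST" --disk="${DISK}-new" --device-name=persistent-disk-1 --zone="$ZONE"
done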
Once that is done, the last step is to finish the restore with PBM:
# pbm restore-finish 2024-10-03T15:23:51Z -c /etc/pbm-storage.conf
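With the restore finished, remember to start the services we stopped manually at the beginning, on the arbiter and mongos hosts:

# systemctl start mongod    # on the arbiter nodes
# systemctl start mongos    # on the mongos routers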
Automating the restore
We have covered the manual approach; now let’s see how we can automate the above steps.
High-level steps
The idea is to provide the script with the backup name we want to restore and the cluster’s topology.
The script should:
- Shut down any remaining services not handled by PBM
- Run the pbm restore command
- Get the snapshot names to restore for each cluster role
- Unmount and detach the old volumes
- Create new volumes based on the snapshot for each cluster role
- Attach and mount the new volumes
- Run pbm restore-finish
- Start all services
Note: In order to manipulate instances, volumes, and snapshots, we need the Compute Engine instanceAdmin and storageAdmin IAM roles assigned to our user (or service account).
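For reference, granting those roles to a service account could look like the following sketch, where PROJECT_ID and the pbm-restore service account are placeholders for your environment:

# gcloud projects add-iam-policy-binding PROJECT_ID --member="serviceAccount:pbm-restore@PROJECT_ID.iam.gserviceaccount.com" --role="roles/compute.instanceAdmin.v1"
# gcloud projects add-iam-policy-binding PROJECT_ID --member="serviceAccount:pbm-restore@PROJECT_ID.iam.gserviceaccount.com" --role="roles/compute.storageAdmin"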
Sample script
The example script is available on GitHub. It requires the gcloud CLI to be installed. Keep in mind this is just a proof of concept, so don’t use it in production environments, as it does only basic error checking.
We call the script specifying:
- The backup name (as reported by pbm status)
- The list of nodes for the config server replica set
- The name and list of nodes for each shard’s replica set
For example:
# ./test_restore.sh 2024-10-03T15:23:51Z \
    --config-servers=gcp-test-mongodb-cfg00,gcp-test-mongodb-cfg01,gcp-test-mongodb-cfg02 \
    --shard0=gcp-test-mongodb-shard00svr0,gcp-test-mongodb-shard00svr1 \
    --shard1=gcp-test-mongodb-shard01svr0,gcp-test-mongodb-shard01svr1 \
    --arbiters=gcp-test-mongodb-shard01arb0,gcp-test-mongodb-shard00arb0 \
    --mongos=gcp-test-mongodb-mongos00
Conclusion
Percona Backup for MongoDB provides the interface for making snapshot-based physical backups and restores. We have covered the complete restore process using snapshots in GCP and provided a bash script example. In a real production environment, automating the restore process using Ansible or similar tooling could also be a good idea.
If you have any suggestions for feature requests or bug reports, make sure to let us know by creating a ticket in our public issue tracker. Pull requests are also more than welcome!