Percona Backup for MongoDB (PBM) supports snapshot-based physical backups. This is made possible by the backup cursor functionality present in Percona Server for MongoDB.
In a previous post, we discussed Percona Backup for MongoDB and Disk Snapshots in Amazon AWS and showed how to implement EBS snapshot-based backups. Now, let’s see how to restore a snapshot-based backup in AWS.
For this demo, I have created a two-shard MongoDB cluster, with each shard being a three-node PSA (Primary-Secondary-Arbiter) replica set, deployed on AWS EC2 instances. Each instance has an extra EBS volume attached for storing the MongoDB data, and the PBM agent is installed as per the documentation.
Manual example
Let’s start by checking the details of the backup we are going to restore. Remember that we can also get the complete list of available backups by running pbm list.
# pbm describe-backup 2024-10-01T11:52:10Z
name: "2024-10-01T11:52:10Z"
opid: 66fbe26a08467a8184d156cf
type: external
last_write_time: "2024-10-01T11:52:13Z"
last_transition_time: "2024-10-01T11:53:39Z"
mongodb_version: 7.0.14-8
fcv: "7.0"
pbm_version: 2.6.0
status: done
size_h: 0 B
replsets:
- name: shard0
  status: done
  node: aws-test-mongodb-shard00svr1:27018
  last_write_time: "2024-10-01T11:52:13Z"
  last_transition_time: "2024-10-01T11:52:16Z"
  security: {}
- name: shard1
  status: done
  node: aws-test-mongodb-shard01svr1:27018
  last_write_time: "2024-10-01T11:52:13Z"
  last_transition_time: "2024-10-01T11:52:16Z"
  security: {}
- name: mongo-cfg
  status: done
  node: aws-test-mongodb-cfg01:27019
  last_write_time: "2024-10-01T11:52:13Z"
  last_transition_time: "2024-10-01T11:52:16Z"
  configsvr: true
  security: {}
Here, we can see the nodes that PBM selected (one per replica set) to be snapshotted during the backup.
Preparation
The first step of the restore is to shut down all mongos routers and arbiter nodes. The PBM agent is not meant to run on those node types, so PBM cannot stop them for you automatically.
# systemctl stop mongos
# systemctl stop mongod
Now we need to start the restore from any node that has the pbm client installed:
# pbm restore --external
Starting restore 2024-10-01T12:00:34.14158023Z from [external].....................................................
Ready to copy data to the nodes data directory. After the copy is done, run: pbm restore-finish 2024-10-01T12:00:34.14158023Z -c </path/to/pbm.conf.yaml>
Check restore status with: pbm describe-restore 2024-10-01T12:00:34.14158023Z -c </path/to/pbm.conf.yaml>
No other pbm command is available while the restore is running!
This step takes a few minutes: Percona Backup for MongoDB stops the database and cleans up the data directories on all nodes, then provides the restore name and prompts you to copy the data.
Restore
Next, we use the snapshots to re-create the volumes for each member of the cluster. Let’s start with the config servers.
We need to get the ID of the snapshot to restore. Because we saved the backup name and date in each snapshot’s “Description” field, we can filter the available snapshots by it. For example:
# aws ec2 describe-snapshots \
    --filters "Name=tag:Name,Values=aws-test-mongodb-cfg01-data" "Name=description,Values=*2024-10-01-11-52*" \
    --query "Snapshots[*].SnapshotId" --output text
snap-000fa5dea5b8a133a
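If you plan to script this, the same lookup can be captured directly into a shell variable (the variable name here is just illustrative):

# SNAPSHOT_ID=$(aws ec2 describe-snapshots \
    --filters "Name=tag:Name,Values=aws-test-mongodb-cfg01-data" "Name=description,Values=*2024-10-01-11-52*" \
    --query "Snapshots[*].SnapshotId" --output text)
# echo $SNAPSHOT_ID
snap-000fa5dea5b8a133a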
Now, we need to follow these steps on all three config servers:
1. Unmount and detach the old volume
# umount /var/lib/mongo
# aws ec2 describe-volumes --filters "Name=tag:Name,Values=aws-test-mongodb-cfg01-data" \
    --query "Volumes[*].[VolumeId, AvailabilityZone]" --output text
vol-09bf698a54b681717    us-west-2a
# aws ec2 detach-volume --volume-id vol-09bf698a54b681717
2. Create a new volume based on the snapshot
# aws ec2 create-volume --snapshot-id snap-000fa5dea5b8a133a \
    --availability-zone us-west-2a --volume-type gp2
{
    "AvailabilityZone": "us-west-2a",
    "CreateTime": "2024-10-01T14:36:34+00:00",
    "Encrypted": false,
    "Size": 20,
    "SnapshotId": "snap-000fa5dea5b8a133a",
    "State": "creating",
    "VolumeId": "vol-07a9ffd45d2b79e91",
    "Iops": 100,
    "Tags": [],
    "VolumeType": "gp2",
    "MultiAttachEnabled": false
}
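Note that the new volume starts in the creating state. Before attaching it, we can block until it becomes available using the AWS CLI’s built-in waiter:

# aws ec2 wait volume-available --volume-ids vol-07a9ffd45d2b79e91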
3. Attach the new volume
# INSTANCE_ID=$(aws ec2 describe-instances \
    --filters "Name=tag:Name,Values=aws-test-mongodb-cfg01" \
    --query "Reservations[].Instances[].InstanceId" --output text)
# aws ec2 attach-volume --volume-id "vol-07a9ffd45d2b79e91" \
    --instance-id $INSTANCE_ID --device /dev/xvdf
Note: the device argument is just a placeholder. On Nitro-based instances, EBS volumes are exposed as NVMe devices and show up with NVMe device names like /dev/nvme*.
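If you need to confirm which NVMe device maps to the volume you just attached, one way is to check the device serial number, which for EBS volumes contains the volume ID without the hyphen (the output below is illustrative):

# lsblk -o +SERIAL /dev/nvme1n1
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS SERIAL
nvme1n1 259:1    0  20G  0 disk             vol07a9ffd45d2b79e91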
4. Mount the volume
# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
nvme0n1       259:0    0  100G  0 disk
├─nvme0n1p1   259:2    0    1M  0 part
└─nvme0n1p2   259:3    0  100G  0 part /
nvme1n1       259:1    0   20G  0 disk
# mount /dev/nvme1n1 /var/lib/mongo
We need to repeat the process for all shard0 and shard1 replica set members, using the proper snapshot in each case.
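Per the describe-backup output above, PBM snapshotted one node per shard (shard00svr1 and shard01svr1), and each shard’s snapshot gets restored onto every data-bearing member of that shard. Assuming the same <hostname>-data tag naming convention shown earlier, a quick loop (illustrative) retrieves the snapshot ID for each shard:

for HOST in aws-test-mongodb-shard00svr1 aws-test-mongodb-shard01svr1; do
    echo -n "$HOST: "
    aws ec2 describe-snapshots \
        --filters "Name=tag:Name,Values=${HOST}-data" "Name=description,Values=*2024-10-01-11-52*" \
        --query "Snapshots[*].SnapshotId" --output text
done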
Once that is done, the last step is to finish the restore with PBM:
# pbm restore-finish 2024-10-01T12:00:34.14158023Z -c /etc/pbm-storage.conf
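As the restore output suggested earlier, we can confirm that the restore completed successfully with pbm describe-restore, passing the same configuration file:

# pbm describe-restore 2024-10-01T12:00:34.14158023Z -c /etc/pbm-storage.conf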
Automating the restore
We have covered the manual approach; now let’s see how we can automate the steps above.
High-level steps
The idea is to provide the script with the backup name we want to restore and the cluster’s topology.
The script should (see the sketch after this list):
- Shut down any remaining services not handled by PBM
- Run the pbm restore command
- Get the snapshot IDs to restore for each cluster role
- Unmount and detach the old volumes
- Create new volumes based on the snapshot for each cluster role
- Attach and mount the new volumes
- Run the pbm restore-finish command
- Start all services
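A minimal bash skeleton of that flow could look like the following. Everything in it is illustrative: the SSH-based orchestration, the hardcoded node lists, and the description-pattern argument are assumptions of this sketch (the actual script takes the backup name and topology as arguments), and error handling is omitted.

#!/bin/bash
# Sketch only: ssh-based remote execution, node lists, and output parsing
# are illustrative assumptions; real tooling needs proper error handling.
set -euo pipefail

DESC_FILTER=$1    # snapshot description pattern, e.g. "*2024-10-01-11-52*"

# 1. Stop the services PBM does not manage
ssh aws-test-mongodb-mongos00 sudo systemctl stop mongos
for H in aws-test-mongodb-shard00arb0 aws-test-mongodb-shard01arb0; do
    ssh "$H" sudo systemctl stop mongod
done

# 2. Start the external restore and capture the restore name PBM prints
OUT=$(ssh aws-test-mongodb-cfg00 sudo pbm restore --external)
RESTORE_NAME=$(echo "$OUT" | awk '/Starting restore/ {print $3}')

# 3-6. Restore the backed-up node's snapshot onto one member of its replica set
swap_volume() {
    local SNAP_HOST=$1 TARGET=$2
    local SNAP OLD_VOL AZ NEW_VOL INSTANCE

    SNAP=$(aws ec2 describe-snapshots \
        --filters "Name=tag:Name,Values=${SNAP_HOST}-data" "Name=description,Values=${DESC_FILTER}" \
        --query "Snapshots[*].SnapshotId" --output text)
    read -r OLD_VOL AZ < <(aws ec2 describe-volumes \
        --filters "Name=tag:Name,Values=${TARGET}-data" \
        --query "Volumes[*].[VolumeId, AvailabilityZone]" --output text)

    ssh "$TARGET" sudo umount /var/lib/mongo
    aws ec2 detach-volume --volume-id "$OLD_VOL" >/dev/null
    aws ec2 wait volume-available --volume-ids "$OLD_VOL"

    NEW_VOL=$(aws ec2 create-volume --snapshot-id "$SNAP" --availability-zone "$AZ" \
        --volume-type gp2 --query "VolumeId" --output text)
    aws ec2 wait volume-available --volume-ids "$NEW_VOL"

    INSTANCE=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=${TARGET}" \
        --query "Reservations[].Instances[].InstanceId" --output text)
    aws ec2 attach-volume --volume-id "$NEW_VOL" --instance-id "$INSTANCE" \
        --device /dev/xvdf >/dev/null
    aws ec2 wait volume-in-use --volume-ids "$NEW_VOL"
    ssh "$TARGET" sudo mount /dev/nvme1n1 /var/lib/mongo
}

# One snapshot per replica set, restored to every data-bearing member
for T in aws-test-mongodb-cfg00 aws-test-mongodb-cfg01 aws-test-mongodb-cfg02; do
    swap_volume aws-test-mongodb-cfg01 "$T"
done
for T in aws-test-mongodb-shard00svr0 aws-test-mongodb-shard00svr1; do
    swap_volume aws-test-mongodb-shard00svr1 "$T"
done
for T in aws-test-mongodb-shard01svr0 aws-test-mongodb-shard01svr1; do
    swap_volume aws-test-mongodb-shard01svr1 "$T"
done

# 7. Finish the restore, then restart the remaining services
# (mongod and pbm-agent on the data nodes may also need a restart)
ssh aws-test-mongodb-cfg00 sudo pbm restore-finish "$RESTORE_NAME" -c /etc/pbm-storage.conf
for H in aws-test-mongodb-shard00arb0 aws-test-mongodb-shard01arb0; do
    ssh "$H" sudo systemctl start mongod
done
ssh aws-test-mongodb-mongos00 sudo systemctl start mongos

The aws ec2 wait calls block until each volume state transition completes, replacing the manual checks between steps.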
Sample script
To manipulate instances, volumes, and snapshots, we need the following IAM permissions:
{
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:DescribeVolumes",
                "ec2:DescribeSnapshots",
                "ec2:CreateSnapshot",
                "ec2:CreateVolume",
                "ec2:DescribeTags",
                "ec2:CreateTags",
                "ec2:DetachVolume",
                "ec2:AttachVolume"
            ],
            "Resource": "*"
        }
    ]
}
We can authenticate using either an IAM instance role or an access key/secret access key pair.
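For example, if using an instance role, the policy above can be attached inline with the AWS CLI (the role name, policy name, and file path below are placeholders):

# aws iam put-role-policy --role-name pbm-restore-role \
    --policy-name pbm-snapshot-restore \
    --policy-document file://pbm-restore-policy.json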
The example script is available on GitHub. It requires the AWS CLI to be installed. Keep in mind that this is just a proof of concept with only basic error checking, so don’t use it in production environments.
We call the script specifying:
- the backup name (as reported by pbm status)
- the list of nodes for the config server replica set
- the name and list of nodes for each shard’s replica set
For example:
# ./test_restore.sh 2024-10-01T11:52:10Z \
    --config-servers=aws-test-mongodb-cfg00,aws-test-mongodb-cfg01,aws-test-mongodb-cfg02 \
    --shard0=aws-test-mongodb-shard00svr0,aws-test-mongodb-shard00svr1 \
    --shard1=aws-test-mongodb-shard01svr0,aws-test-mongodb-shard01svr1 \
    --arbiters=aws-test-mongodb-shard01arb0,aws-test-mongodb-shard00arb0 \
    --mongos=aws-test-mongodb-mongos00
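After the script completes, it is worth verifying that the cluster is healthy and that PBM sees all of its agents again:

# pbm status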
Conclusion
Percona Backup for MongoDB provides the interface for making snapshot-based physical backups and restores. We have covered the complete restore process using snapshots in AWS and provided a bash script example. In a real production environment, automating the restore process using Ansible or similar tooling could also be a good idea.
If you have any suggestions for feature requests or bug reports, please let us know by creating a ticket in our public issue tracker. Pull requests are also welcome!