The Percona Kubernetes Operator for Percona XtraDB Cluster can send backups to Amazon S3 or S3-compatible storage. Every now and then at Support, we are asked how to send backups to Google Cloud Storage instead.
Google Cloud Storage offers an “interoperability mode” which is S3-compatible. However, there are a few details to take care of when using it.
Google Cloud Storage Configuration
First, select “Settings” under “Storage” in the Navigation Menu. Under Settings, select the Interoperability tab. If Interoperability is not yet enabled, click Enable Interoperability Access. This turns on the S3-compatible interface to Google Cloud Storage.
After enabling S3-compatible storage, an access key needs to be generated. There are two options: Access keys can be tied to Service accounts or User accounts. For production workloads, Google recommends Service account access keys, but for this example, a User account access key will be used for simplicity. The Interoperability page links to further documentation on the differences between the two, so this article does not go into those details.
To create a User account HMAC (Hash-based Message Authentication Code) key, scroll down to “User account HMAC” and click “Create a key”. This generates an access key and an accompanying secret. These will be used as the AWS access key and secret later on. The user account also needs access to the bucket that will be used for backups. This can be set up by selecting the bucket in Storage Browser and going to the Permissions tab.
Once a key has been created and the account permissions are verified to be correct, the Percona XtraDB Cluster (PXC) Operator needs to be configured to use the new keys.
First, the access key and secret need to be base64 encoded. For example:
$ echo -n GOOGFJDEWQ3KJFAS | base64
$ echo -n IFEWw99s0+ece3SXuf9q | base64
The -n parameter to echo is important: without it, a trailing line break is encoded along with the key and the resulting credential won’t work.
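The difference is easy to see by comparing the two forms (the key below is the example access key from above, not a real credential):

```shell
# With -n, only the 16 key bytes are encoded:
echo -n GOOGFJDEWQ3KJFAS | base64
# R09PR0ZKREVXUTNLSkZBUw==

# Without -n, echo appends a newline, which gets encoded too
# (note the different ending):
echo GOOGFJDEWQ3KJFAS | base64
# R09PR0ZKREVXUTNLSkZBUwo=
```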
Next, the base64-encoded values need to be stored as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in the deploy/backup-s3.yaml file in the PXC Operator directory, like this:
$ cat deploy/backup-s3.yaml
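A sketch of what the file might look like after the edit. The secret name my-cluster-name-backup-s3 follows the operator’s example files and may differ in your deployment; the data values are the base64 encodings of the example keys shown above:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-cluster-name-backup-s3
type: Opaque
data:
  AWS_ACCESS_KEY_ID: R09PR0ZKREVXUTNLSkZBUw==
  AWS_SECRET_ACCESS_KEY: SUZFV3c5OXMwK2VjZTNTWHVmOXE=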
After modifying the file, the secrets need to be stored in Kubernetes using:
$ kubectl apply -f deploy/backup-s3.yaml
In the PXC Operator’s deploy/cr.yaml, the backup destination is defined as follows:
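A sketch of the relevant storage entry, assuming a storage name of s3-google, a bucket named my-backup-bucket, and a secret named my-cluster-name-backup-s3; substitute your own names:

```yaml
backup:
  storages:
    s3-google:
      type: s3
      s3:
        # Name of the bucket created in Google Cloud Storage
        bucket: my-backup-bucket
        # Must match the metadata name in deploy/backup-s3.yaml
        credentialsSecret: my-cluster-name-backup-s3
        # The "Storage URI" from the Interoperability tab
        endpointUrl: https://storage.googleapis.com
```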
Here, bucket is the name of the bucket created in Google Cloud Storage; credentialsSecret must match the entry in backup-s3.yaml; and endpointUrl is the “Storage URI” shown in the Interoperability tab of Google Cloud Storage.
Now that the backup destination has been defined, taking an on-demand backup requires modifying the deploy/backup/backup.yaml file:
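A sketch of what the backup resource might look like, assuming the cluster is named cluster1 and the storage entry in cr.yaml is named s3-google (the apiVersion may vary between operator releases):

```yaml
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterBackup
metadata:
  name: backup1
spec:
  # Must match the name of the PXC cluster
  pxcCluster: cluster1
  # Must match the storage entry in cr.yaml
  storageName: s3-google
```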
Here, pxcCluster needs to match the name of the cluster, and storageName needs to match the entry in cr.yaml. After modifying the file, an on-demand backup can be started using:
$ kubectl apply -f deploy/backup/backup.yaml
From here on, the PXC Operator documentation at https://www.percona.com/doc/kubernetes-operator-for-pxc/backups.html can be followed: once the Google Cloud Storage destination is configured, taking and restoring backups works exactly as it does with Amazon S3.
As you can see, using Google Cloud Storage together with the Percona Kubernetes Operator for Percona XtraDB Cluster is not difficult at all, but a few details are slightly different from Amazon S3.
Be sure to get in touch with Percona’s Training Department to schedule a hands-on tutorial session with our K8S Operator. Our instructors will guide you and your team through the setup process and show you how to take backups, handle recovery, scale the cluster, and manage high availability with ProxySQL.
Percona XtraDB Cluster is a cost-effective and robust clustering solution created to support your business-critical data. It gives you the benefits and features of MySQL along with the added enterprise features of Percona Server for MySQL.