This post was originally published in 2021 and was updated in 2025.

Kubernetes adoption keeps climbing, and databases are often one of the last workloads teams try to move. The reasons are clear: PostgreSQL is critical, downtime isn’t an option, and migration can feel risky. But with the right approach, you can modernize without bringing your applications to a halt.

In this post, I’ll show you how to migrate a PostgreSQL database to Kubernetes with minimal downtime using Percona Operator for PostgreSQL. You’ll see how backups and Write-Ahead Logs (WAL) flow through pgBackRest to keep the target cluster in sync until you’re ready to cut over.

This walkthrough focuses on the technical steps, but if you’re evaluating PostgreSQL on Kubernetes more broadly (automation, scaling, or day-2 operations), make sure to check out our in-depth resources at the end.

Goal

To perform the migration, I’m going to use the following setup:

Figure: Migrating PostgreSQL to Kubernetes

  1. PostgreSQL database deployed on-prem or in the cloud (the Source).
  2. Google Kubernetes Engine (GKE) cluster where Percona Operator deploys and manages a PostgreSQL cluster (the Target) and a pgBackRest Pod.
  3. PostgreSQL backups and Write-Ahead Logs are uploaded to an Object Storage bucket (GCS in my case).
  4. The pgBackRest Pod reads the data from the bucket.
  5. The pgBackRest Pod restores the data continuously to the PostgreSQL cluster in Kubernetes.

The data is continuously synchronized. In the end, I want to shut down PostgreSQL running on-prem and only keep the cluster in GKE.

Migration

Prerequisites

To replicate the setup, you will need the following:

  • PostgreSQL (version 12 or 13) running on-prem or in the cloud
  • pgBackRest installed
  • Google Cloud Storage or any S3 bucket (examples here use GCS)
  • Kubernetes cluster

Configure the source

I have Percona Distribution for PostgreSQL version 13 running on some Linux machines.

1. Configure pgBackRest (a sample configuration follows this list):

  • pg1-path should point to the PostgreSQL data directory

  • repo1-type is set to GCS as we want our backups to go there

  • The key is in the /tmp/gcs.key file. The service account key can be obtained through the Google Cloud console.

  • The backups are going to be stored in the on-prem-pg folder of the sp-test-1 bucket
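
Here is a minimal sketch of the resulting pgbackrest.conf; the stanza name (db) and the data directory path are assumptions and should match your environment:

```ini
# /etc/pgbackrest.conf -- a minimal sketch; stanza name and paths are examples
[global]
repo1-type=gcs
repo1-gcs-bucket=sp-test-1
repo1-gcs-key=/tmp/gcs.key
repo1-path=/on-prem-pg

[db]
pg1-path=/var/lib/postgresql/13/main
```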

2. Edit postgresql.conf to enable archival through pgBackRest:
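
The relevant settings look like this; the stanza name is the same assumption as above:

```ini
# postgresql.conf -- enable WAL archiving via pgBackRest
archive_mode = on
archive_command = 'pgbackrest --stanza=db archive-push %p'
```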

A restart is required after changing the configuration.

3. The Operator requires a postgresql.conf file to be present in the data directory. An empty file is enough:
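
For example, assuming the same data directory path as above:

```sh
touch /var/lib/postgresql/13/main/postgresql.conf
```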

4. Create primaryuser on the Source:
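
A sketch of the statement; the password is a placeholder and must match the Secret you will later create on the Target:

```sql
CREATE USER primaryuser WITH REPLICATION ENCRYPTED PASSWORD '<YOURPASSWORD>';
```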

Configure the target

1. Deploy Percona Operator for PostgreSQL on Kubernetes. Read more about it in the documentation.
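
One way to do it, assuming you have cloned the Operator repository; the manifest name is an assumption here, so follow the documentation for your Operator version:

```sh
kubectl apply -f deploy/operator.yaml
```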

2. Edit the main custom resource manifest, deploy/cr.yaml.

Keep the cluster name as cluster1. The cluster will run in Standby mode, syncing data from the GCS bucket.

Example of the relevant spec settings (see the sketch after this list):

  • Also, set spec.pgReplicas.hotStandby.size to 1 so the cluster runs at least one replica.
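
A minimal sketch of the pieces of deploy/cr.yaml this migration relies on; the storage name is an example, and the exact field layout may differ between Operator versions:

```yaml
spec:
  standby: true              # replay WAL from the bucket instead of accepting writes
  backup:
    repoPath: "/on-prem-pg"  # the folder pgBackRest writes to on the Source
    storages:
      my-gcs:                # storage name is an example
        type: gcs
        bucket: sp-test-1
  pgReplicas:
    hotStandby:
      size: 1                # run at least one replica
```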

3. The Operator should be able to authenticate with GCS.

To do that, we need to create a Secret object called <CLUSTERNAME>-backrest-repo-config with gcs-key in data. It should be the same key we used on the Source.
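
A sketch, with cluster1 as the cluster name; the value of gcs-key must be the base64-encoded contents of the same key file used on the Source:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cluster1-backrest-repo-config
type: Opaque
data:
  gcs-key: <base64-encoded contents of /tmp/gcs.key>
```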

4. Create users by creating Secret objects: postgres and primaryuser (the one we created on the Source). The passwords must be the same as on the Source; an example Secret is shown below.
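
A sketch for primaryuser; the Secret name format (<CLUSTERNAME>-<USERNAME>-secret) is an assumption here, so verify it against the documentation for your Operator version:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cluster1-primaryuser-secret
type: Opaque
stringData:
  username: primaryuser
  password: <same password as on the Source>
```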

5. Now let’s deploy our cluster on Kubernetes by applying the cr.yaml:
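
This assumes you run the command from the root of the Operator repository:

```sh
kubectl apply -f deploy/cr.yaml
```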

Verify and troubleshoot

If everything is configured correctly, the Primary Pod logs should show the standby continuously restoring WAL segments from the bucket.
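
To follow the logs (the Pod name is a placeholder; look it up first):

```sh
kubectl get pods                     # find the Primary Pod of cluster1
kubectl logs -f <primary-pod-name>
```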

Make a change on the Source and confirm that it is synchronized to the Target cluster.
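
A quick smoke test, as a sketch; the table name is an example, and pg_switch_wal() forces the current WAL segment to be archived so the change reaches the bucket without waiting:

```sql
-- On the Source:
CREATE TABLE migration_smoke_test (id int);
SELECT pg_switch_wal();

-- On the Target, once the segment has been replayed:
SELECT * FROM migration_smoke_test;
```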

Common issues

  • Forgot to create postgresql.conf in the data directory?

  • Forgot to create primaryuser?

  • Wrong or missing object store credentials?

In each case, the Primary Pod logs are the first place to look: fix the issue on the Source or in the corresponding Secret, then check the logs again.

Cutover

Once you’re confident everything is working, it’s time to complete the migration.

1. Stop the source PostgreSQL cluster to ensure no new data is written.
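
For example, on a systemd-based host (the service name varies by distribution):

```sh
sudo systemctl stop postgresql
```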

2. Promote the Target cluster to primary.

To do that, remove spec.backup.repoPath, change spec.standby to false in deploy/cr.yaml, and apply the changes:
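
A sketch of the edit and the apply step:

```sh
# In deploy/cr.yaml: delete the spec.backup.repoPath line and set
#   spec:
#     standby: false
kubectl apply -f deploy/cr.yaml
```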

PostgreSQL will restart, and the Primary Pod logs should confirm the promotion.

Wrapping up

Migrating PostgreSQL to Kubernetes doesn’t have to be overwhelming. With the Percona Operator for PostgreSQL and pgBackRest, you can keep downtime minimal while gaining the flexibility and consistency of a Kubernetes-native deployment.

If you’re considering running PostgreSQL on Kubernetes at scale, the migration itself is only the first step. Day-2 operations—like scaling, monitoring, and cost management—can quickly become the real challenge. That’s why we’ve put together a dedicated resource to show you how to simplify PostgreSQL in the cloud with automation, scalability, and zero lock-in.

 

Explore how Percona makes PostgreSQL on Kubernetes easier

 
