Open Source Isn’t What It Used to Be
The landscape of open source has undergone significant changes in recent years, and selecting the right operator and tooling for PostgreSQL clusters in Kubernetes has never been more crucial.
MinIO, for example, was a widely used open source S3-compatible storage backend. Over the past few years, it has:
- Switched to AGPLv3 with commercial-only extras
- Entered “maintenance mode,” closing community issues, limiting support to paid subscriptions, and stopping acceptance of community PRs
Similarly, Bitnami Docker images, long a staple for databases (including Postgres), middleware, and developer tooling, now carry stricter usage terms. VMware’s changes to Bitnami image licensing disrupted many Kubernetes Helm charts that depended on them.
Crunchy Data illustrates how licensing and distribution changes can affect open source operators directly. For years, Crunchy offered fully open source PostgreSQL Docker images. Between 2022 and 2024, several key shifts occurred:
- Redistribution restrictions: While the PostgreSQL code is open source, Crunchy’s Docker images include branding and enterprise features that cannot be freely redistributed.
- Developer Program limits: Crunchy Data software made available through the Developer Program is intended for internal or personal use only. Production use by larger organizations typically requires an active support subscription.
- Service restrictions: The terms explicitly prohibit using Crunchy’s images to deliver support or consulting services to others without an authorized agreement. The source code itself remains open source, but the official images are not fully redistributable, which limits practical use in production and commercial settings. In other words, open source in theory, but not freely usable or shareable like truly open source software.
- Registry move: Most images were moved to registry.developers.crunchydata.com, requiring authentication and acceptance of terms, marking a clear line between open-source code and proprietary builds.
What These Restrictions Really Mean for Kubernetes Users
When container images and operators come with redistribution limits, authentication requirements, or “internal-use-only” clauses, the impact on Kubernetes environments is immediate and painful. Teams can no longer:
- Build air-gapped clusters because images cannot be mirrored to private registries
- Rely on GitOps workflows that expect publicly accessible OCI images
- Fork or customize operators freely, since official images cannot be redistributed with modifications
- Use the software in commercial or customer-facing products without additional licensing
- Run multi-cluster or multi-tenant Postgres without violating usage terms
For database operators, where everything depends on container images, these restrictions effectively turn a project into a “source-available but not operationally open” solution.
As a result, many teams are switching to fully open-source alternatives like Percona Operator for PostgreSQL, StackGres, Zalando Postgres Operator, and CloudNativePG.
The bigger picture? Open source today often exists more in theory than in practice. It has become increasingly important to investigate what an author’s “open source” claim actually means: licensing restrictions, redistribution limits, and other usage constraints can make a product far less open than advertised. The boundaries of open source are being tested on multiple levels, so even projects officially licensed as open source may not deliver the freedom, transparency, and usability the term implies. Code might be available, but usable images, updates, and community collaboration can be limited.
Kubernetes users must be strategic: choose projects with open images, transparent governance, and sustainable community support. And because the landscape can shift quickly, migration strategies are critical.
For the Percona Operator for PostgreSQL and the Crunchy Data PostgreSQL Operator, migration is surprisingly straightforward: Percona’s operator is a hard fork of Crunchy’s. Data can be moved in several ways, some with near-zero downtime, others faster at the cost of a brief outage, depending on your use case.
Migrate to Freedom
In this guide, we’ll show you how to migrate from Crunchy Data PostgreSQL Operator to Percona PostgreSQL Operator, a truly open source alternative.
Versions Used in This Guide
For clarity and reproducibility, all migration examples in this blog post were created using the following versions:
- Crunchy Data PostgreSQL Kubernetes Operator: v5.8.6
- Percona PostgreSQL Kubernetes Operator: v2.8.0
- PostgreSQL: 17
Different versions may have slight differences in CR fields or behavior. Always consult the official documentation for your specific operator and Postgres version.
Because the Percona PostgreSQL Operator is a fork of the Crunchy Data PostgreSQL Operator, both operators cannot manage the same namespaces simultaneously. The Crunchy Operator is cluster-wide by default, which can lead to resource conflicts if both operators watch overlapping namespaces.
To avoid this:
- Set PGO_TARGET_NAMESPACES for the Crunchy Data Operator so it watches only the namespaces where existing Crunchy clusters are deployed.
- Deploy the Percona PostgreSQL Operator in a separate namespace (e.g., percona-postgres-operator) to ensure clean separation and avoid controller ownership conflicts.
These precautions ensure the migration environment remains safe and predictable, particularly when running both operators concurrently during the transition. If your operator was deployed using the single-namespace kustomize/install/namespace manifests, the PGO_TARGET_NAMESPACES environment variable should already be set.
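If you need to set it manually, here is a minimal sketch, assuming the Crunchy operator Deployment is named pgo and lives in the postgres-operator namespace (verify both in your installation):

```shell
# Restrict the Crunchy operator to the namespace(s) hosting existing clusters.
# Deployment name and namespace are assumptions; check your install manifests.
kubectl set env deployment/pgo -n postgres-operator PGO_TARGET_NAMESPACES=cpgo
```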
1. Migration Using a Standby Cluster
(pgBackRest repo-based standby + streaming replication)
One of the simplest and safest ways to migrate from the Crunchy Data PostgreSQL Operator to the Percona PG Operator is by deploying a standby cluster. You can do this using either of the following options or both together:
- pgBackRest repo–based standby
- Streaming replication
In our example, we’ll use both methods together for maximum safety and data integrity. For a more in-depth exploration of each approach, refer to our official documentation.
Step 1: Start with Your Existing Crunchy Data Cluster
Before anything else, you need an operational Crunchy Data PostgreSQL cluster (referred to as the source-cluster). Let’s assume it was deployed in the cpgo namespace.
For this example, we assume the source-cluster was deployed using a Custom Resource similar to the one below, and that it uses AWS S3 as a pgBackRest backup repository.
The example source-cluster can be deployed from the GitHub repo:
```shell
kubectl apply -f https://raw.githubusercontent.com/percona/percona-postgresql-operator/refs/heads/migration/deploy/source-cluster-cr.yaml -n cpgo
```
Or using the following command:
```shell
echo 'apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: source-cluster
spec:
  service:
    type: LoadBalancer
  postgresVersion: 17
  instances:
    - name: instance1
      replicas: 3
      dataVolumeClaimSpec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
  proxy:
    pgBouncer:
      service:
        type: LoadBalancer
      replicas: 3
      config:
        global:
          pool_mode: transaction
  backups:
    pgbackrest:
      configuration:
        - secret:
            name: pgo-s3-creds
      global:
        repo1-path: /pgo-migration-testing/crunchydata
      repos:
        - name: repo1
          s3:
            bucket: pgo-migration-testing
            endpoint: s3.amazonaws.com
            region: us-east-1
          schedules:
            full: "0 0 * * 0"' | kubectl apply -n cpgo -f -
```
Important Settings for the Migration
pgBackRest configuration
These fields are required because the Percona PG Operator will use the same repository:
```yaml
backups:
  pgbackrest:
    configuration:
      - secret:
          name: pgo-s3-creds
    global:
      repo1-path: /pgo-migration-testing/crunchydata
    repos:
      - name: repo1
        s3:
          bucket: pgo-migration-testing
          endpoint: s3.amazonaws.com
          region: us-east-1
```
LoadBalancer service
The Percona standby cluster must have network access to the source cluster. In our example, this is done using a public IP (Service type: LoadBalancer), but you can use any Service type that achieves the same result. The key requirement is that the source-cluster is reachable from the target-cluster.
```yaml
spec:
  service:
    type: LoadBalancer
```
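Before deploying the standby, you can optionally confirm that the endpoint is reachable from the Kubernetes cluster that will host the target. A minimal sketch using a throwaway busybox pod (the pod name, image, and IP below are illustrative):

```shell
# Simple TCP reachability check against the source cluster's LoadBalancer
# endpoint; replace the IP with the one you collect in the next step.
kubectl run net-check --rm -it --restart=Never --image=busybox -- \
  nc -zv 34.27.90.225 5432
```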
Collect Information Required for Streaming Replication
If you plan to use streaming replication (recommended for minimal data lag), the target Percona cluster will need authenticated network connectivity to the primary source instance.
Get the LoadBalancer IP
Example output:
```shell
kubectl get service source-cluster-ha -o jsonpath='{.status.loadBalancer.ingress[0].ip}:{.spec.ports[0].port}{"\n"}' -n cpgo
34.27.90.225:5432
```
Export replication and TLS certificates
Example output:
```shell
kubectl get secret source-cluster-cluster-cert source-cluster-replication-cert -n cpgo
NAME                              TYPE     DATA   AGE
source-cluster-cluster-cert       Opaque   3      24h
source-cluster-replication-cert   Opaque   12     24h
```
Then export them:
```shell
kubectl get secret source-cluster-cluster-cert -o json -n cpgo | \
  yq '{"apiVersion": .apiVersion, "kind": .kind, "data": .data, "metadata": {"name": .metadata.name}, "type": .type}' -o yaml \
  > ~/source-cluster-cluster-cert.yaml

kubectl get secret source-cluster-replication-cert -o json -n cpgo | \
  yq '{"apiVersion": .apiVersion, "kind": .kind, "data": .data, "metadata": {"name": .metadata.name}, "type": .type}' -o yaml \
  > ~/source-cluster-replication-cert.yaml
```
Step 2: Deploy the Percona PG Operator and the Standby Cluster (target-cluster)
Install the Percona PG Operator
```shell
kubectl create ns percona-postgres-operator
kubectl apply --server-side -f https://raw.githubusercontent.com/percona/percona-postgresql-operator/v2.8.0/deploy/bundle.yaml -n percona-postgres-operator
```
Create an AWS credentials secret (shared repository)
```shell
echo "apiVersion: v1
kind: Secret
metadata:
  name: pgo-s3-creds
stringData:
  S3.conf: |
    [global]
    repo1-s3-key=XXXXXXXXXXXXXXXXXXXX
    repo1-s3-key-secret=XXXXXXXXXXXXXXXXXXXX" | kubectl apply -n percona-postgres-operator -f -
```
Import the certificates (required only for streaming replication)
```shell
kubectl apply -f ~/source-cluster-cluster-cert.yaml -n percona-postgres-operator
kubectl apply -f ~/source-cluster-replication-cert.yaml -n percona-postgres-operator
```
Step 3: Standby Cluster Setup for Migration from Crunchy Data PostgreSQL Operator
The cluster can be deployed from the GitHub repo:
```shell
kubectl apply -f https://raw.githubusercontent.com/percona/percona-postgresql-operator/refs/heads/migration/deploy/target-cluster-cr.yaml -n percona-postgres-operator
```
Or using the following command:
```shell
echo 'apiVersion: pgv2.percona.com/v2
kind: PerconaPGCluster
metadata:
  name: target-cluster-percona
  annotations:
    pgv2.percona.com/patroni-version: "4"
spec:
  crVersion: 2.8.0
  # The custom certificates obtained from the source-cluster are required for
  # streaming replication. They can be omitted if you use only the pgBackRest
  # repo-based standby.
  secrets:
    customReplicationTLSSecret:
      name: source-cluster-replication-cert
    customTLSSecret:
      name: source-cluster-cluster-cert
  standby:
    enabled: true
    # Public IP of the source-cluster-ha service, used for streaming replication
    host: 34.27.90.225
    # PostgreSQL port of the source-cluster-ha service, used for streaming replication
    port: 5432
    # AWS pgBackRest repo used by the source-cluster
    repoName: repo1
  image: docker.io/percona/percona-distribution-postgresql:17.6-1
  imagePullPolicy: Always
  postgresVersion: 17
  instances:
    - name: instance1
      replicas: 3
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    postgres-operator.crunchydata.com/data: postgres
                topologyKey: kubernetes.io/hostname
      dataVolumeClaimSpec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
  proxy:
    pgBouncer:
      replicas: 3
      image: docker.io/percona/percona-pgbouncer:1.24.1-1
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    postgres-operator.crunchydata.com/role: pgbouncer
                topologyKey: kubernetes.io/hostname
  backups:
    pgbackrest:
      repos:
        # AWS pgBackRest repo used by the source-cluster
        - name: repo1
          s3:
            bucket: pgo-migration-testing
            endpoint: s3.amazonaws.com
            region: us-east-1
      image: docker.io/percona/percona-pgbackrest:2.56.0-1
      configuration:
        - secret:
            name: pgo-s3-creds
      global:
        repo1-path: /pgo-migration-testing/crunchydata' | kubectl apply -n percona-postgres-operator -f -
```
Step 4: Wait for the Cluster to Start Syncing
Example:
```shell
kubectl get pg -n percona-postgres-operator -w
NAME                     ENDPOINT                                       STATUS   POSTGRES   PGBOUNCER   AGE
target-cluster-percona   target-cluster-percona-pgbouncer.default.svc   ready    3          3           17h
```
Check replication status.
Example output:
```shell
kubectl exec source-cluster-instance1-xg46-0 -n cpgo -it -- bash
# inside the pod, open a psql session
psql

postgres=# SELECT application_name, client_addr, state,
       sent_offset - (replay_offset - (sent_lsn - replay_lsn) * 255 * 16 ^ 6) AS byte_lag,
       write_lag, flush_lag, replay_lag
FROM (
    SELECT application_name, client_addr, client_hostname, state,
           ('x' || lpad(split_part(sent_lsn::TEXT, '/', 1), 8, '0'))::bit(32)::bigint   AS sent_lsn,
           ('x' || lpad(split_part(replay_lsn::TEXT, '/', 1), 8, '0'))::bit(32)::bigint AS replay_lsn,
           ('x' || lpad(split_part(sent_lsn::TEXT, '/', 2), 8, '0'))::bit(32)::bigint   AS sent_offset,
           ('x' || lpad(split_part(replay_lsn::TEXT, '/', 2), 8, '0'))::bit(32)::bigint AS replay_offset,
           write_lag, flush_lag, replay_lag
    FROM pg_stat_replication
) AS s;

            application_name             | client_addr  |   state   | byte_lag |    write_lag    |    flush_lag    |   replay_lag
-----------------------------------------+--------------+-----------+----------+-----------------+-----------------+-----------------
 source-cluster-instance1-bs4k-0         | 10.16.1.7    | streaming |        0 | 00:00:00.000971 | 00:00:00.001979 | 00:00:00.002072
 source-cluster-instance1-9jp5-0         | 10.16.0.14   | streaming |        0 | 00:00:00.000903 | 00:00:00.002108 | 00:00:00.002164
 target-cluster-percona-instance1-thn5-0 | 10.128.0.103 | streaming |        0 | 00:00:00.000957 | 00:00:00.00201  | 00:00:00.002038
(3 rows)
```
At this point, the Percona cluster is fully caught up and functioning as a read-only standby. You can already switch read-only traffic to it for testing.
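To confirm the standby is serving reads, you can check the recovery status on one of the Percona instance pods. A minimal sketch, reusing the instance pod name from the replication output above (in Crunchy-lineage operators the Postgres container is named database; verify the pod name in your cluster):

```shell
# On a standby, pg_is_in_recovery() should return 't'.
kubectl exec -it target-cluster-percona-instance1-thn5-0 -n percona-postgres-operator \
  -c database -- psql -c 'SELECT pg_is_in_recovery();'
```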
Step 5: Perform the Final Cutover
1. Convert the source cluster to standby mode.
```shell
kubectl patch postgrescluster source-cluster --type=merge -n cpgo -p '
{
  "spec": {
    "standby": {
      "enabled": true
    }
  }
}'
```
Wait for replication to fully catch up.
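One way to verify the catch-up is to compare sent and replayed LSNs on the source; a minimal sketch, assuming the same source instance pod as above (a zero diff means the target has replayed everything it has been sent):

```shell
# Run on the source instance; replay_lag_bytes should reach 0 for the
# target-cluster-percona replica before the cutover.
kubectl exec -it source-cluster-instance1-xg46-0 -n cpgo -c database -- \
  psql -c "SELECT application_name, pg_wal_lsn_diff(sent_lsn, replay_lsn) AS replay_lag_bytes FROM pg_stat_replication;"
```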
2. (Optional) Shut down the source cluster
This prevents accidental writes and split-brain scenarios.
```shell
kubectl patch postgrescluster source-cluster -n cpgo --type merge --patch '{"spec":{"shutdown": true}}'
```
3. Promote the Percona standby cluster
```shell
kubectl patch perconapgcluster target-cluster-percona -n percona-postgres-operator --type=merge -p '
{
  "spec": {
    "standby": {
      "enabled": false
    }
  }
}'
```
4. Verify that the cluster is now writable and healthy
```shell
kubectl get pg target-cluster-percona -n percona-postgres-operator
NAME                     ENDPOINT                                       STATUS   POSTGRES   PGBOUNCER   AGE
target-cluster-percona   target-cluster-percona-pgbouncer.default.svc   ready    3          3           23h
```
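You can also double-check the recovery status directly; after promotion, the same query from earlier should now return f. A minimal sketch with the same assumed pod name:

```shell
# After promotion, pg_is_in_recovery() should return 'f' (writes are accepted).
kubectl exec -it target-cluster-percona-instance1-thn5-0 -n percona-postgres-operator \
  -c database -- psql -c 'SELECT pg_is_in_recovery();'
```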
As you can see, this migration path works almost entirely out of the box. For users coming from the Crunchy Data PostgreSQL Operator, this method feels natural because it leverages the same native standby/replica mechanisms used for HA and disaster recovery. The key difference is that now you can also use this familiar mechanism to migrate safely to the Percona PostgreSQL Operator, a truly open-source alternative.
2. Migrate Data Using Backup and Restore
The second migration option is restoring your Percona cluster directly from a backup created by the Crunchy Data PostgreSQL Operator. This is often the fastest and simplest way to migrate, especially when you don’t require a live standby or continuous replication.
Step 1: Start with Your Existing Crunchy Data Cluster (source-cluster)
Below is the example Crunchy Data cluster we used earlier. It performs pgBackRest backups to AWS S3, and we will restore from the most recent full backup created by this cluster.
```yaml
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: source-cluster
spec:
  postgresVersion: 17
  instances:
    - name: instance1
      replicas: 3
      dataVolumeClaimSpec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
  proxy:
    pgBouncer:
      replicas: 3
      config:
        global:
          pool_mode: transaction
  backups:
    pgbackrest:
      configuration:
        - secret:
            name: pgo-s3-creds
      global:
        repo1-path: /pgo-migration-testing/crunchydata
      repos:
        - name: repo1
          s3:
            bucket: pgo-migration-testing
            endpoint: s3.amazonaws.com
            region: us-east-1
          schedules:
            full: "0 0 * * 0"
```
Step 2: Deploy the Percona PG Operator and the Cluster (target-cluster)
Install the Percona PG Operator
```shell
kubectl create ns percona-postgres-operator
kubectl apply --server-side -f https://raw.githubusercontent.com/percona/percona-postgresql-operator/v2.8.0/deploy/bundle.yaml -n percona-postgres-operator
```
Create an AWS credentials secret (shared repository)
```shell
echo "apiVersion: v1
kind: Secret
metadata:
  name: pgo-s3-creds
stringData:
  S3.conf: |
    [global]
    repo1-s3-key=XXXXXXXXXXXXXXXXXXXX
    repo1-s3-key-secret=XXXXXXXXXXXXXXXXXXXX" | kubectl apply -n percona-postgres-operator -f -
```
Step 3: Deploy the Percona PostgreSQL Cluster
This cluster boots directly from the Crunchy backup.
Apply the CR:
```shell
echo 'apiVersion: pgv2.percona.com/v2
kind: PerconaPGCluster
metadata:
  name: target-cluster
  annotations:
    pgv2.percona.com/patroni-version: "4"
spec:
  crVersion: 2.8.0
  image: docker.io/percona/percona-distribution-postgresql:17.6-1
  imagePullPolicy: Always
  postgresVersion: 17
  instances:
    - name: instance1
      replicas: 3
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    postgres-operator.crunchydata.com/data: postgres
                topologyKey: kubernetes.io/hostname
      dataVolumeClaimSpec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
  proxy:
    pgBouncer:
      replicas: 3
      image: docker.io/percona/percona-pgbouncer:1.24.1-1
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    postgres-operator.crunchydata.com/role: pgbouncer
                topologyKey: kubernetes.io/hostname
  dataSource:
    pgbackrest:
      stanza: db
      configuration:
        - secret:
            name: pgo-s3-creds
      global:
        repo1-path: /pgo-migration-testing/crunchydata
      repo:
        name: repo1
        s3:
          bucket: pgo-migration-testing
          endpoint: s3.amazonaws.com
          region: us-east-1
  backups:
    pgbackrest:
      repos:
        - name: repo1
          s3:
            bucket: pg-operator-testing
            endpoint: s3.amazonaws.com
            region: us-east-1
      image: docker.io/percona/percona-pgbackrest:2.56.0-1
      configuration:
        - secret:
            name: pgo-s3-creds
      global:
        repo1-path: /pg-operator-testing/percona' | kubectl apply -n percona-postgres-operator -f -
```
Two important sections to understand
1. dataSource: Bootstrapping the Cluster from Crunchy Backups
This section is responsible for restoring the database from Crunchy’s backup. It tells the Percona Operator:
- which backup repo to read from
- which S3 bucket/path stores the Crunchy backups
- which credentials to use
```yaml
dataSource:
  pgbackrest:
    stanza: db
    configuration:
      - secret:
          name: pgo-s3-creds
    global:
      repo1-path: /pgo-migration-testing/crunchydata
    repo:
      name: repo1
      s3:
        bucket: pgo-migration-testing
        endpoint: s3.amazonaws.com
        region: us-east-1
```
2. backups: Target Cluster Backup Configuration
This defines new backup storage for the Percona cluster. It must be separate from Crunchy’s backup storage to avoid conflicts.
```yaml
backups:
  pgbackrest:
    repos:
      - name: repo1
        s3:
          bucket: pg-operator-testing
          endpoint: s3.amazonaws.com
          region: us-east-1
    image: docker.io/percona/percona-pgbackrest:2.56.0-1
    configuration:
      - secret:
          name: pgo-s3-creds
    global:
      repo1-path: /pg-operator-testing/percona
```
As soon as the Custom Resource is applied, the cluster is bootstrapped from the backup repository defined in the dataSource section and then started. Once the cluster becomes ready, you can immediately create new backups; in that case, repo1 from the backups section is used as the target repository.
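For example, an on-demand backup can be requested through a PerconaPGBackup resource; a minimal sketch (the backup name is illustrative):

```shell
echo 'apiVersion: pgv2.percona.com/v2
kind: PerconaPGBackup
metadata:
  name: target-cluster-backup1
spec:
  pgCluster: target-cluster
  repoName: repo1' | kubectl apply -n percona-postgres-operator -f -
```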
Step 4: Wait for the Cluster
Example:
```shell
kubectl get pg -n percona-postgres-operator
NAME             ENDPOINT                               STATUS   POSTGRES   PGBOUNCER   AGE
target-cluster   target-cluster-pgbouncer.default.svc   ready    3
```
As you can see, the cluster (target-cluster) was successfully restored from the latest full backup created by the source-cluster.
3. Migrate Data Using the PV of the Crunchy Data PostgreSQL Cluster
This migration option uses the existing Persistent Volume from the Crunchy cluster, even after the cluster is deleted.
It is useful when:
- you want to avoid a full backup/restore
- your storage is very large
- you must preserve the original data directory exactly
- you removed the cluster but kept the PV
Step 1: Configure the Source Cluster to Retain PVs
Modify Persistent Volume Retention
If you want to delete your source-cluster but keep the persistent volumes (PVs) it used, there is only one way to do it: the PV reclaim policy must be changed. For dynamically provisioned PersistentVolumes, the default reclaim policy is “Delete”, which removes the data on a persistent volume once there are no more PersistentVolumeClaims (PVCs) associated with it.
To retain a persistent volume you will need to set the reclaim policy to Retain.
Let’s check the list of PVs associated with the PVCs used by the source-cluster:
```shell
kubectl get pvc -n cpgo --selector=postgres-operator.crunchydata.com/cluster=source-cluster,postgres-operator.crunchydata.com/data=postgres
NAME                                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
source-cluster-instance1-5vxr-pgdata   Bound    pvc-d842c205-bbd1-4a0a-8fd0-301398a61e6f   10Gi       RWO            standard-rwo   <unset>                 164m
source-cluster-instance1-hm99-pgdata   Bound    pvc-a9891ba9-d2f7-4d12-a6ef-a3051e0f89db   10Gi       RWO            standard-rwo   <unset>                 164m
source-cluster-instance1-zdkd-pgdata   Bound    pvc-1b10bf46-56e2-4d25-868b-81e12a1fe120   10Gi       RWO            standard-rwo   <unset>
```
We suggest using the PV of the primary pod. You can get it using the following command:
```shell
kubectl get pvc -n cpgo $(kubectl get pod -n cpgo \
  -l postgres-operator.crunchydata.com/role=primary \
  -o jsonpath='{.items[0].spec.volumes[?(@.name=="postgres-data")].persistentVolumeClaim.claimName}') \
  -o jsonpath='{.spec.volumeName}'
pvc-a9891ba9-d2f7-4d12-a6ef-a3051e0f89db
```
Finally, we can change the reclaim policy of the PV to Retain:
```shell
kubectl patch pv pvc-a9891ba9-d2f7-4d12-a6ef-a3051e0f89db -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```
Verify the change:
```shell
kubectl get pv pvc-a9891ba9-d2f7-4d12-a6ef-a3051e0f89db
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                       STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-a9891ba9-d2f7-4d12-a6ef-a3051e0f89db   10Gi       RWO            Retain           Bound    cpgo/source-cluster-instance1-hm99-pgdata   standard-rwo   <unset>                          166m
```
Step 2: Delete your existing Crunchy Data Cluster (source-cluster) and operator if needed:
```shell
kubectl delete postgrescluster source-cluster -n cpgo
kubectl delete -k kustomize/install/default
```
Step 3: Deploy the Percona PG Operator and Create the Percona PostgreSQL Cluster With the Retained Volume (target-cluster)
Install the Percona PG Operator
```shell
kubectl create ns percona-postgres-operator
kubectl apply --server-side -f https://raw.githubusercontent.com/percona/percona-postgresql-operator/v2.8.0/deploy/bundle.yaml -n percona-postgres-operator
```
Create an AWS credentials secret (shared repository)
```shell
echo "apiVersion: v1
kind: Secret
metadata:
  name: pgo-s3-creds
stringData:
  S3.conf: |
    [global]
    repo1-s3-key=XXXXXXXXXXXXXXXXXXXX
    repo1-s3-key-secret=XXXXXXXXXXXXXXXXXXXX" | kubectl apply -n percona-postgres-operator -f -
```
You can now create the target-cluster using the retained volume. To do this, you need a label that uniquely identifies your persistent volume. Let’s add it to the PV first.
```shell
kubectl label pv pvc-a9891ba9-d2f7-4d12-a6ef-a3051e0f89db pgo-postgres-cluster=percona-postgres-operator-cluster
```
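Keep in mind that once the old PVC is deleted, a retained PV usually stays in the Released state with a stale claimRef, and a new PVC cannot bind it until that reference is cleared. A minimal sketch of this cleanup (check the PV status first; this step is an assumption about dynamically provisioned volumes):

```shell
# "Released" status means a stale claimRef is still set on the PV.
kubectl get pv pvc-a9891ba9-d2f7-4d12-a6ef-a3051e0f89db

# Clear the claimRef so the PV becomes "Available" and can bind the new PVC.
kubectl patch pv pvc-a9891ba9-d2f7-4d12-a6ef-a3051e0f89db --type json \
  -p '[{"op": "remove", "path": "/spec/claimRef"}]'
```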
Next, you need to refer to this label in your CR.
Example:
```yaml
dataVolumeClaimSpec:
  accessModes:
    - ReadWriteOnce
  selector:
    matchLabels:
      pgo-postgres-cluster: percona-postgres-operator-cluster
  resources:
    requests:
      storage: 10Gi
```
Now we are ready to create the target-cluster using the PV from the source-cluster:
```shell
echo 'apiVersion: pgv2.percona.com/v2
kind: PerconaPGCluster
metadata:
  name: target-cluster
  annotations:
    pgv2.percona.com/patroni-version: "4"
spec:
  crVersion: 2.8.0
  image: docker.io/percona/percona-distribution-postgresql:17.6-1
  imagePullPolicy: Always
  postgresVersion: 17
  instances:
    - name: instance1
      replicas: 1
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    postgres-operator.crunchydata.com/data: postgres
                topologyKey: kubernetes.io/hostname
      dataVolumeClaimSpec:
        accessModes:
          - ReadWriteOnce
        selector:
          matchLabels:
            pgo-postgres-cluster: percona-postgres-operator-cluster
        resources:
          requests:
            storage: 10Gi
  proxy:
    pgBouncer:
      replicas: 3
      image: docker.io/percona/percona-pgbouncer:1.24.1-1
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    postgres-operator.crunchydata.com/role: pgbouncer
                topologyKey: kubernetes.io/hostname
  backups:
    pgbackrest:
      repos:
        - name: repo1
          s3:
            bucket: pg-operator-testing
            endpoint: s3.amazonaws.com
            region: us-east-1
      image: docker.io/percona/percona-pgbackrest:2.56.0-1
      configuration:
        - secret:
            name: pgo-s3-creds
      global:
        repo1-path: /pg-operator-testing/percona' | kubectl apply -n percona-postgres-operator -f -
```
Step 4: Wait for the Cluster
Example:
```shell
kubectl get pg -n percona-postgres-operator
NAME             ENDPOINT                               STATUS   POSTGRES   PGBOUNCER   AGE
target-cluster   target-cluster-pgbouncer.default.svc   ready    1
```
The cluster (target-cluster) was successfully started.
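To confirm the original data directory survived the move, you can list the databases on the new primary. A minimal sketch, selecting the primary pod via the same role label used earlier (the container name database is an assumption of Crunchy-lineage operators):

```shell
# List databases on the new primary to verify the migrated data is present.
kubectl exec -it $(kubectl get pod -n percona-postgres-operator \
  -l postgres-operator.crunchydata.com/role=primary \
  -o jsonpath='{.items[0].metadata.name}') \
  -n percona-postgres-operator -c database -- psql -c '\l'
```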
Conclusion
This blog post demonstrated three ways to migrate from the Crunchy Data PostgreSQL Operator to the fully open-source Percona PostgreSQL Operator:
- Standby Cluster Migration – Almost zero downtime using streaming replication or pgBackRest standby.
- Migration Using Backup and Restore – Fast and simple; restore directly from Crunchy’s S3 backups.
- Migration Using Existing Persistent Volumes – Ideal when you want to reuse storage without copying data.
All three approaches provide safe, predictable, and reversible migration paths.
And since Percona’s operator, images, and tooling are 100% open source, you always retain full control, including the option to migrate back to Crunchy Data PostgreSQL Operator if needed. The same approaches can be adapted for migrating to other open-source operators (Zalando, StackGres, CloudNativePG), but that’s a topic for a future article.
P.S. This blog post covers only basic deployment patterns and simplified configuration examples. If your environment is more complex, uses custom images, includes Crunchy’s TDE or other enterprise features, or requires tailored migration steps, don’t hesitate to contact Percona. Our team is happy to help you plan and execute a smooth, reliable migration.