Open Source Isn’t What It Used to Be

The landscape of open source has undergone significant changes in recent years, and selecting the right operator and tooling for PostgreSQL clusters in Kubernetes has never been more crucial.

MinIO, for example, was a widely used open source S3-compatible storage backend. Over the past few years, it has:

  • Switched to AGPLv3 with commercial-only extras
  • Entered “maintenance mode,” closing community issues, limiting support to paid subscriptions, and stopping acceptance of community PRs

Similarly, Bitnami Docker images, long a staple for databases (including Postgres), middleware, and developer tooling, now carry stricter usage terms. VMware’s changes to Bitnami image licensing disrupted many Kubernetes Helm charts that depended on them.

Crunchy Data illustrates how licensing and distribution changes can affect open source operators directly. For years, Crunchy offered fully open source PostgreSQL Docker images. Between 2022 and 2024, several key shifts occurred:

  1. Redistribution restrictions: While the PostgreSQL code is open source, Crunchy’s Docker images include branding and enterprise features that cannot be freely redistributed.
  2. Developer Program limits: Crunchy Data software made available through the Developer Program is intended for internal or personal use only. Production use by larger organizations typically requires an active support subscription.
  3. Service restrictions: The terms explicitly prohibit using Crunchy’s images to deliver support or consulting services to others unless you have an authorized agreement. While the source code itself remains open source, these restrictions mean the official images are not fully redistributable, which limits the project’s practical use in production and commercial settings. In other words, it is open source in theory, but cannot be freely used or shared like truly open source software.
  4. Registry move: Most images were moved to registry.developers.crunchydata.com, requiring authentication and acceptance of terms, marking a clear line between open-source code and proprietary builds.


What These Restrictions Really Mean for Kubernetes Users

When container images and operators come with redistribution limits, authentication requirements, or “internal-use-only” clauses, the impact on Kubernetes environments is immediate and painful. Teams can no longer:

  • Build air-gapped clusters because images cannot be mirrored to private registries
  • Rely on GitOps workflows that expect publicly accessible OCI images
  • Fork or customize operators freely, since official images cannot be redistributed with modifications
  • Use the software in commercial or customer-facing products without additional licensing
  • Run multi-cluster or multi-tenant Postgres without violating usage terms

For database operators, where everything depends on container images, these restrictions effectively turn a project into a “source-available but not operationally open” solution.
As a result, many teams are switching to fully open-source alternatives like Percona Operator for PostgreSQL, StackGres, Zalando Postgres Operator, and CloudNativePG.

The bigger picture? Open source today often exists more in theory than in practice. It has become increasingly important to investigate what an author’s “open source” claims actually mean: licensing restrictions, redistribution limits, and other usage constraints can make a product far less open than expected. The boundaries of open source are being tested on multiple levels, so even projects officially licensed as open source may not provide the freedom, transparency, and usability the term implies. Code might be available, but usable images, updates, and community collaboration can be limited.

Kubernetes users must be strategic: choose projects with open images, transparent governance, and sustainable community support. And because the landscape can shift quickly, migration strategies are critical.

For Percona Operator for PostgreSQL and Crunchy Data PostgreSQL Operator, migration is surprisingly straightforward: Percona’s operator is a hard fork of Crunchy’s. Data can be moved in several ways, some with nearly zero downtime, others faster but requiring a short outage, depending on your use case.

Migrate to Freedom

In this guide, we’ll show you how to migrate from Crunchy Data PostgreSQL Operator to Percona PostgreSQL Operator, a truly open source alternative.

Versions Used in This Guide

For clarity and reproducibility, all migration examples in this blog post were created using the following versions:

  • Crunchy Data PostgreSQL Kubernetes Operator: v5.8.6
  • Percona PostgreSQL Kubernetes Operator: v2.8.0
  • PostgreSQL: 17

Different versions may have slight differences in CR fields or behavior. Always consult the official documentation for your specific operator and Postgres version.
Because the Percona PostgreSQL Operator is a fork of the Crunchy Data PostgreSQL Operator, both operators cannot manage the same namespaces simultaneously. The Crunchy Operator is cluster-wide by default, which can lead to resource conflicts if both operators watch overlapping namespaces.

To avoid this:

  • Set PGO_TARGET_NAMESPACES for the Crunchy Data Operator so it watches only the namespaces where existing Crunchy clusters are deployed.
  • Deploy the Percona PostgreSQL Operator in a separate namespace (e.g., percona-postgres-operator) to ensure clean separation and avoid controller ownership conflicts.

These precautions ensure the migration environment remains safe and predictable, particularly when running both operators concurrently during the transition. If your operator was deployed using the single-namespace installer (kustomize/install/namespace), the PGO_TARGET_NAMESPACES environment variable should already be set.
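As a rough sketch, the variable can be set on the Crunchy operator Deployment with kubectl; the Deployment name (pgo) and namespace (postgres-operator) below are assumptions based on the default kustomize install, so adjust them to your environment:

kubectl set env deployment/pgo -n postgres-operator PGO_TARGET_NAMESPACES="cpgo"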

1. Migration Using a Standby Cluster

(pgBackRest repo-based standby + streaming replication)

One of the simplest and safest ways to migrate from the Crunchy Data PostgreSQL Operator to the Percona PG Operator is by deploying a standby cluster. You can do this using either of the following options or both together:

  • pgBackRest repo–based standby
  • Streaming replication

In our example, we’ll use both methods to provide maximum safety and data integrity. For a more in-depth exploration of each approach, refer to our official documentation.


Step 1: Start with Your Existing Crunchy Data Cluster

Before anything else, you need an operational Crunchy Data PostgreSQL cluster (referred to as the source-cluster). Let’s assume it was deployed in the cpgo namespace.
For this example, we assume the source-cluster was deployed using a Custom Resource similar to the one below, and that it uses AWS S3 as a pgBackRest backup repository.
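A minimal sketch of such a Custom Resource is shown here; it is illustrative rather than the exact manifest from the repo. The bucket, repo path, and storage sizes are placeholders, and the fields follow the Crunchy v5.x PostgresCluster CRD:

apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: source-cluster
  namespace: cpgo
spec:
  postgresVersion: 17
  service:
    type: LoadBalancer
  instances:
    - name: instance1
      replicas: 1
      dataVolumeClaimSpec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
  backups:
    pgbackrest:
      configuration:
        - secret:
            name: source-cluster-pgbackrest-secrets
      global:
        repo1-path: /pgbackrest/cpgo/source-cluster/repo1
      repos:
        - name: repo1
          s3:
            bucket: <SOURCE_BACKUP_BUCKET>
            endpoint: s3.amazonaws.com
            region: us-east-1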

The example source-cluster can be deployed from the GitHub repo:

Or using the following command:


Important Settings for the Migration

pgBackRest configuration

These fields are required because the Percona PG Operator will use the same repository:
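In the sketch above, these are the backups.pgbackrest fields; the values are placeholders, but the repo name, path, bucket, endpoint, and region must match exactly what the target Percona cluster will later point at:

backups:
  pgbackrest:
    global:
      repo1-path: /pgbackrest/cpgo/source-cluster/repo1
    repos:
      - name: repo1
        s3:
          bucket: <SOURCE_BACKUP_BUCKET>
          endpoint: s3.amazonaws.com
          region: us-east-1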


LoadBalancer service

The Percona standby cluster must have network access to the source cluster. In our example, this is done using a public IP (Service type: LoadBalancer), but you can use any Service type that achieves the same result. The key requirement is that the source-cluster is reachable from the target-cluster.
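In the CR sketch above, this corresponds to the spec.service section:

spec:
  service:
    type: LoadBalancer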


Collect Information Required for Streaming Replication

If you plan to use streaming replication (recommended for minimal data lag), the target Percona cluster will need authenticated network connectivity to the primary source instance.

Get the LoadBalancer IP
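A minimal sketch, assuming the source cluster’s primary Service follows Crunchy’s default <cluster>-ha naming; on AWS the address may appear as a hostname rather than an IP:

kubectl get svc source-cluster-ha -n cpgo \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'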

Example output:

Export replication and TLS certificates

Example output:

Then export them:
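A hedged sketch of the export; Crunchy typically keeps the server and replication TLS material in the <cluster>-cluster-cert and <cluster>-replication-cert secrets, but verify the secret names in your namespace first:

kubectl get secret source-cluster-cluster-cert -n cpgo -o yaml > cluster-cert.yaml
kubectl get secret source-cluster-replication-cert -n cpgo -o yaml > replication-cert.yaml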

Step 2: Deploy the Percona PG Operator and the Standby Cluster (target-cluster)

Install the Percona PG Operator 
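A minimal sketch using the bundle manifest from the percona-postgresql-operator repository; the exact URL and tag are assumptions, so check the Percona documentation for the current install command:

kubectl create namespace percona-postgres-operator
kubectl apply --server-side -n percona-postgres-operator \
  -f https://raw.githubusercontent.com/percona/percona-postgresql-operator/v2.8.0/deploy/bundle.yaml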


Create an AWS credentials secret (shared repository)
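A hedged sketch; the secret and file names are placeholders, while the keys follow pgBackRest’s repoN-s3-key / repoN-s3-key-secret convention. We assume the target cluster will live in a namespace called ppgo:

kubectl create namespace ppgo

cat <<EOF > s3.conf
[global]
repo1-s3-key=<AWS_ACCESS_KEY_ID>
repo1-s3-key-secret=<AWS_SECRET_ACCESS_KEY>
EOF

kubectl create secret generic cluster-pgbackrest-secrets -n ppgo --from-file=s3.conf=s3.conf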

Import the certificates (required only for streaming replication)
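A rough sketch, assuming the target namespace ppgo. Strip the namespace, resourceVersion, uid, and ownerReferences fields from the exported YAML (or re-create the secrets from the extracted ca.crt/tls.crt/tls.key files) before applying:

kubectl apply -n ppgo -f cluster-cert.yaml
kubectl apply -n ppgo -f replication-cert.yaml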


Step 3: Standby Cluster Setup for Migration from Crunchy Data PostgreSQL Operator

The cluster can be deployed from the GitHub repo:

Or using the following command:
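The exact manifest lives in the repo; the following is a hedged sketch of the key parts of such a standby PerconaPGCluster CR. Field names follow the v2.8.0 CRD as best we can state them here, and the repo path, bucket, secret names, and namespace (ppgo) are assumptions that must match your source cluster:

apiVersion: pgv2.percona.com/v2
kind: PerconaPGCluster
metadata:
  name: target-cluster
  namespace: ppgo
spec:
  crVersion: 2.8.0
  postgresVersion: 17
  standby:
    enabled: true
    repoName: repo1                 # pgBackRest repo-based standby
    host: <SOURCE_LOADBALANCER_IP>  # streaming replication from the source primary
    port: 5432
  secrets:
    customTLSSecret:
      name: source-cluster-cluster-cert
    customReplicationTLSSecret:
      name: source-cluster-replication-cert
  instances:
    - name: instance1
      replicas: 1
      dataVolumeClaimSpec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
  backups:
    pgbackrest:
      configuration:
        - secret:
            name: cluster-pgbackrest-secrets
      global:
        repo1-path: /pgbackrest/cpgo/source-cluster/repo1   # must match the source repo path
      repos:
        - name: repo1
          s3:
            bucket: <SOURCE_BACKUP_BUCKET>
            endpoint: s3.amazonaws.com
            region: us-east-1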

Step 4: Wait for the Cluster to Start Syncing

Example:
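For instance, watch the cluster and its pods until everything reports ready (ppgo is our assumed target namespace):

kubectl get perconapgcluster target-cluster -n ppgo
kubectl get pods -n ppgo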

Check the replication status.
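One way to check is from the source primary, where pg_stat_replication should show the standby streaming; the label selector below assumes Crunchy’s default role label:

PRIMARY=$(kubectl get pod -n cpgo \
  -l postgres-operator.crunchydata.com/cluster=source-cluster,postgres-operator.crunchydata.com/role=master \
  -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it "$PRIMARY" -n cpgo -- \
  psql -c "SELECT client_addr, state, sync_state FROM pg_stat_replication;"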

Example output:

At this point, the Percona cluster is fully caught up and functional as a read-only standby. You can now switch read-only traffic to the new cluster for testing.


Step 5: Perform the Final Cutover

1. Convert the source cluster to standby mode.
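A hedged sketch using a merge patch on the Crunchy CR (the standby block is part of the PostgresCluster spec):

kubectl patch postgrescluster source-cluster -n cpgo --type merge \
  -p '{"spec":{"standby":{"enabled":true,"repoName":"repo1"}}}'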

Wait for replication to fully catch up.


2. (Optional) Shut down the source cluster

Prevents accidental writes or split-brain scenarios.
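For example, by setting the shutdown flag in the Crunchy CR:

kubectl patch postgrescluster source-cluster -n cpgo --type merge \
  -p '{"spec":{"shutdown":true}}'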

3. Promote the Percona standby cluster
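Disabling standby mode in the Percona CR promotes the cluster; a sketch, again assuming the ppgo namespace:

kubectl patch perconapgcluster target-cluster -n ppgo --type merge \
  -p '{"spec":{"standby":{"enabled":false}}}'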

4. Verify that the cluster is now writable and healthy
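A quick check is to run pg_is_in_recovery() on the new primary; the pod name below is a placeholder:

kubectl exec -it <target-primary-pod> -n ppgo -- psql -c "SELECT pg_is_in_recovery();"
# "f" (false) means the cluster has left recovery and accepts writes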


As you can see, this migration path works almost entirely out of the box.
For users coming from the Crunchy Data PostgreSQL Operator, this method feels natural because it leverages the same native standby/replica mechanisms used for HA and disaster recovery. The key difference is that now you can also use this familiar mechanism to migrate safely to the Percona PostgreSQL Operator, a truly open-source alternative.

 

2. Migrate Data Using Backup and Restore

The second migration option is restoring your Percona cluster directly from a backup created by the Crunchy Data PostgreSQL Operator. This is often the fastest and simplest way to migrate, especially when you don’t require a live standby or continuous replication.

Step 1: Start with Your Existing Crunchy Data Cluster (source-cluster)

Below is the example Crunchy Data cluster we used earlier. It performs pgBackRest backups to AWS S3, and we will restore from the most recent full backup created by this cluster.

Step 2: Deploy the Percona PG Operator and the Cluster (target-cluster)

Install the Percona PG Operator

Create an AWS credentials secret (shared repository)

Step 3: Deploy the Percona PostgreSQL Cluster

This cluster boots directly from the Crunchy backup.

Apply the CR:

Two important sections to understand
1. dataSource: Bootstrapping the Cluster from Crunchy Backups

This section is responsible for restoring the database from Crunchy’s backup. It tells the Percona Operator (see the sketch after this list):

  • which backup repo to read from
  • which S3 bucket/path stores the Crunchy backups
  • which credentials to use
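As a hedged sketch, the section might look like the following; the bucket, path, and secret names are placeholders that must match where the Crunchy cluster actually wrote its backups:

spec:
  dataSource:
    pgbackrest:
      stanza: db
      configuration:
        - secret:
            name: cluster-pgbackrest-secrets
      global:
        repo1-path: /pgbackrest/cpgo/source-cluster/repo1
      repo:
        name: repo1
        s3:
          bucket: <SOURCE_BACKUP_BUCKET>
          endpoint: s3.amazonaws.com
          region: us-east-1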

2. backups: Target Cluster Backup Configuration

This defines the new backup storage for the Percona cluster. It must be separate from Crunchy’s backup storage to avoid conflicts.
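A rough sketch of such a section, with a new bucket and path as placeholders:

spec:
  backups:
    pgbackrest:
      configuration:
        - secret:
            name: cluster-pgbackrest-secrets
      global:
        repo1-path: /pgbackrest/ppgo/target-cluster/repo1
      repos:
        - name: repo1
          s3:
            bucket: <TARGET_BACKUP_BUCKET>
            endpoint: s3.amazonaws.com
            region: us-east-1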

As soon as the Custom Resource is applied, the cluster is bootstrapped from the backup storage defined in the dataSource section and then started. Once the cluster becomes ready, you can immediately create new backups; in this case, repo1 from the backups section will be used as the target repository.

Step 4: Wait for the Cluster

Example:

As you can see, the cluster (target-cluster) was successfully restored from the latest full backup taken on the source-cluster.

3. Migrate Data Using a PV of the Crunchy Data PostgreSQL Cluster

This migration option uses the existing Persistent Volume from the Crunchy cluster, even after the cluster is deleted.
It is useful when:

  • you want to avoid a full backup/restore
  • your storage is very large
  • you must preserve the original data directory exactly
  • you removed the cluster but kept the PV


Step 1: Configure the Source Cluster to Retain PVs

Modify Persistent Volume Retention

If you want to delete your source-cluster but keep the persistent volumes (PVs) it used, you must change the PV reclaim policy. For dynamically provisioned PersistentVolumes, the default reclaim policy is “Delete”, which removes the data on a persistent volume once there are no more PersistentVolumeClaims (PVCs) associated with it.
To retain a persistent volume, set its reclaim policy to Retain.

Let’s check the list of PVs associated with the PVCs used by the source-cluster:
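For example, listing the claims in the cpgo namespace shows the bound PV in the VOLUME column:

kubectl get pvc -n cpgo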


We suggest using the PV of the primary pod. You can get it using the following command:
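A hedged sketch, assuming Crunchy’s role=master label and a data volume named postgres-data in the instance pod spec (verify both for your version):

PRIMARY=$(kubectl get pod -n cpgo \
  -l postgres-operator.crunchydata.com/cluster=source-cluster,postgres-operator.crunchydata.com/role=master \
  -o jsonpath='{.items[0].metadata.name}')
PVC=$(kubectl get pod "$PRIMARY" -n cpgo \
  -o jsonpath='{.spec.volumes[?(@.name=="postgres-data")].persistentVolumeClaim.claimName}')
kubectl get pvc "$PVC" -n cpgo -o jsonpath='{.spec.volumeName}'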

Finally, we can change the reclaim policy of the PV to Retain:
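For example (replace <pv-name> with the volume found above):

kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'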


Verify the change:
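The RECLAIM POLICY column should now show Retain:

kubectl get pv <pv-name>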


Step 2: Delete Your Existing Crunchy Data Cluster (source-cluster) and, if Needed, the Operator:
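A minimal sketch; remove the Crunchy operator itself (using the same manifests it was installed from) only once no other clusters depend on it:

kubectl delete postgrescluster source-cluster -n cpgo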


Step 3: Deploy the Percona PG Operator and Create the Percona PostgreSQL Cluster With the Retained Volume (target-cluster)

 Install the Percona PG Operator 

Create an AWS credentials secret (shared repository)

You can now create the target-cluster using the retained volume. To do this, you need to add a label that uniquely identifies your persistent volume. Let’s add it to the PV first.
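The label key and value below are arbitrary placeholders; they only need to uniquely match this PV:

kubectl label pv <pv-name> migration-source=source-cluster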


Next, you need to refer to this label in your CR.

Example:
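A hedged excerpt of the instance definition; dataVolumeClaimSpec is a standard PersistentVolumeClaim spec, so a selector with the label added above makes the claim bind to the retained volume (the requested size must not exceed the PV’s capacity):

spec:
  instances:
    - name: instance1
      replicas: 1
      dataVolumeClaimSpec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
        selector:
          matchLabels:
            migration-source: source-cluster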

Now we are ready to create the target-cluster using the PV from the source-cluster:


Step 4: Wait for the Cluster

Example:

The cluster (target-cluster) was successfully started.

Conclusion

This blog post demonstrated three ways to migrate from the Crunchy Data PostgreSQL Operator to the fully open-source Percona PostgreSQL Operator:

  1. Standby Cluster Migration – Almost zero downtime, using streaming replication or a pgBackRest repo-based standby.
  2. Migration Using Backup and Restore – Fast and simple; restore directly from Crunchy’s S3 backups.
  3. Migration Using Existing Persistent Volumes – Ideal when you want to reuse storage without copying data.


All three approaches provide safe, predictable, and reversible migration paths.

And since Percona’s operator, images, and tooling are 100% open source, you always retain full control, including the option to migrate back to Crunchy Data PostgreSQL Operator if needed. The same approaches can be adapted for migrating to other open-source operators (Zalando, StackGres, CloudNativePG), but that’s a topic for a future article.

P.S. This blog post covers only basic deployment patterns and simplified configuration examples. If your environment is more complex, uses custom images, includes Crunchy’s TDE or other enterprise features, or requires tailored migration steps, don’t hesitate to contact Percona. Our team is happy to help you plan and execute a smooth, reliable migration.
