Deploying databases on Kubernetes is getting easier every year. The part that still hurts is making deployments repeatable and predictable across clusters and environments, especially from a Continuous Integration (CI) perspective. This is where PR-based automation helps: you can review a plan, validate changes, and apply only after approval, before anything touches your cluster.
If you’ve ever installed an operator by hand, applied a few YAMLs, changed a script “just a bit”, and then watched the same setup behave differently in another environment, this post is for you.
In this tutorial, we’ll deploy Percona Operator for MySQL and a sample three-node MySQL cluster using OpenTofu – a fully open-source Terraform fork. Then we’ll take the exact same deployment and run it through CI using OpenTaco (formerly known as Digger), so that infrastructure changes can be validated and applied from Pull Requests.
We’ll use this demo repository throughout the guide: GitHub Demo Percona Operator MySQL OpenTaco.

OpenTofu and OpenTaco shine when infrastructure and databases must be reviewed, validated, and approved before they ever touch the cluster.
Databases are central to most stacks, and changes should be handled with more care. We want a workflow where updates are reviewed and validated before they are ever deployed to a cluster. That’s exactly what OpenTofu + OpenTaco enables: a PR shows the plan output for review, and apply happens only when you approve it.
OpenTofu (and Terraform) already gives us the “Infrastructure as Code” part: plan what will change, apply it, and store state. The remaining problem is operational, especially in a team: who runs “apply”, when do they run it, and how do we avoid collisions?
OpenTaco sits on top of your existing CI system (in our case, GitHub Actions). Instead of someone manually running tofu plan and tofu apply, you can run those steps through a Pull Request workflow, where:
By the end of this blog post, we will have:
You need a Kubernetes cluster you can reach using kubectl. That can be local (kind/minikube) or managed (GKE/EKS/AKS). Before going further, make sure these work:
```shell
kubectl cluster-info

# Output
Kubernetes control plane is running at https://34.57.102.230

kubectl get nodes

# Output
NAME                                                  STATUS   ROLES    AGE   VERSION
gke-k8s-testing-auto-k8s-testing-auto-31c0c085-16vr   Ready    <none>   13h   v1.32.9-gke.1675000
gke-k8s-testing-auto-k8s-testing-auto-4cd48431-vd43   Ready    <none>   13h   v1.32.9-gke.1675000
gke-k8s-testing-auto-k8s-testing-auto-b5c8ecb0-scp4   Ready    <none>   13h   v1.32.9-gke.1675000
```
You’ll also need:
This project automates deploying the Percona Operator for MySQL to Kubernetes using OpenTofu (an open-source Terraform fork) and OpenTaco (Digger) for CI/CD.
```text
demo-percona-operator-mysql-opentaco/
├── .github/
│   └── workflows/
│       └── digger_workflow.yml   # GitHub Actions CI/CD workflow
├── opentofu/                     # OpenTofu infrastructure code
│   ├── main.tf                   # Main infrastructure definitions
│   ├── variables.tf              # Input variables
│   └── versions.tf               # Version constraints & backend config
├── digger.yml                    # Digger CI/CD configuration
├── README.md                     # Project documentation
└── .gitignore                    # Git ignore patterns
```
What each part does:
Before we bring OpenTaco into the picture, it’s worth running the deployment once locally with OpenTofu. This is not a separate approach; it’s the same OpenTofu project that OpenTaco will run later in CI. Doing it locally first helps you confirm your Kubernetes access and Helm chart behaviour without also debugging CI credentials.

Image01: Local workflow overview.
```shell
git clone https://github.com/gkech/demo-percona-operator-mysql-opentaco.git
cd demo-percona-operator-mysql-opentaco
cd opentofu
```
OpenTofu uses a state file to remember what it deployed, so it can plan changes and later destroy the same resources cleanly.
Option A: local state (quickest to start)
Comment out the backend block in versions.tf and run:
```shell
tofu init

# Example Output
Initializing the backend...
Initializing provider plugins...
Providers are signed by their developers.

OpenTofu has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that OpenTofu can guarantee to make the same selections by default when
you run "tofu init" in the future.

OpenTofu has been successfully initialized!

You may now begin working with OpenTofu. Try running "tofu plan" to see
any changes that are required for your infrastructure.
```
Option B: remote state on S3 (recommended for CI)
```hcl
backend "s3" {
  bucket = "s3-k8s-testing-automation-edithturn"
  key    = "percona-opentaco/terraform.tfstate"
  region = "us-east-1"
}
```
Option C: remote state on GCS (also great for CI)
```hcl
backend "gcs" {
  bucket = "percona-demo-opentaco"
  prefix = "terraform/state"
}
```
3. Run tofu plan and tofu apply
tofu plan is a dry run: it shows exactly what OpenTofu would create, change, or destroy, without touching your cluster. In our case, it plans to create one namespace and two Helm releases (the Percona Operator chart and the MySQL cluster chart).
```shell
tofu plan

# Example Output
OpenTofu used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  + create

OpenTofu will perform the following actions:

  # helm_release.percona_db will be created
  + resource "helm_release" "percona_db" {
      + atomic        = false
      + chart         = "ps-db"
      + name          = "percona-ps-db"
      + namespace     = "opentaco-mysql"
      + repository    = "https://percona.github.io/percona-helm-charts/"
      + status        = "deployed"
      + timeout       = 300
      + values        = [
          + <<-EOT
                "mysql":
                  "annotations":
                    "open": "taco-taco"
                  "resources":
                    "limits":
                      "memory": "5G"
                    "requests":
                      "memory": "2G"
            EOT,
        ]
      + verify        = false
      + version       = "1.0.0"
      + wait          = true
      + wait_for_jobs = false
    }

  # helm_release.percona_operator will be created
  + resource "helm_release" "percona_operator" {
      + atomic     = false
      + chart      = "ps-operator"
      + name       = "percona-ps-operator"
      + namespace  = "opentaco-mysql"
      + repository = "https://percona.github.io/percona-helm-charts/"
      + status     = "deployed"
    }

  # kubernetes_namespace.percona will be created
  + resource "kubernetes_namespace" "percona" {
      + id                               = (known after apply)
      + wait_for_default_service_account = false
    }

Plan: 3 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + namespace              = "opentaco-mysql"
  + note                   = "Percona Operator and MySQL cluster deployed via Helm; see kubectl -n opentaco-mysql get pods"
  + operator_chart_version = (known after apply)
```
When you run tofu apply, OpenTofu executes this plan and actually installs those Helm charts into the cluster. In this run, OpenTofu created the opentaco-mysql namespace first, then installed two Helm releases: percona-ps-operator (the Percona Operator) and percona-ps-db (the demo MySQL cluster). The final “Outputs” section confirms what was deployed, including the chart versions and the namespace.
```shell
tofu apply -auto-approve

# Example Output
OpenTofu used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  + create

OpenTofu will perform the following actions:

  # helm_release.percona_db will be created
  # helm_release.percona_operator will be created
  # kubernetes_namespace.percona will be created
  + resource "kubernetes_namespace" "percona" {
      + metadata {
          + name = "opentaco-mysql"
        }
    }

Plan: 3 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + namespace              = "opentaco-mysql"
  + note                   = "Percona Operator and MySQL cluster deployed via Helm; see kubectl -n opentaco-mysql get pods"
  + operator_chart_version = (known after apply)

kubernetes_namespace.percona: Creating...
kubernetes_namespace.percona: Creation complete after 1s [id=opentaco-mysql]
helm_release.percona_operator: Creating...
helm_release.percona_operator: Still creating... [10s elapsed]
helm_release.percona_operator: Creation complete after 13s [id=percona-ps-operator]
helm_release.percona_db: Creating...
helm_release.percona_db: Creation complete after 4s [id=percona-ps-db]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Outputs:

database_chart_version = "1.0.0"
namespace = "opentaco-mysql"
note = "Percona Operator and MySQL cluster deployed via Helm; see kubectl -n opentaco-mysql get pods"
operator_chart_version = "1.0.0"
```
In this demo, OpenTofu is the “orchestrator”: it describes what should be installed, and then uses the Helm provider to install it into your Kubernetes cluster.
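As a rough sketch of what that orchestration looks like in HCL (illustrative only — the actual main.tf in the demo repo is the source of truth; resource names and values here mirror the plan output shown earlier, and var.namespace is an assumed variable):

```hcl
resource "kubernetes_namespace" "percona" {
  metadata {
    name = var.namespace # "opentaco-mysql" in this demo
  }
}

resource "helm_release" "percona_operator" {
  name       = "percona-ps-operator"
  repository = "https://percona.github.io/percona-helm-charts/"
  chart      = "ps-operator"
  namespace  = kubernetes_namespace.percona.metadata[0].name
}

resource "helm_release" "percona_db" {
  name       = "percona-ps-db"
  repository = "https://percona.github.io/percona-helm-charts/"
  chart      = "ps-db"
  namespace  = kubernetes_namespace.percona.metadata[0].name

  # Values passed to the chart; the operator reconciles them into the cluster.
  values = [yamlencode({
    mysql = {
      annotations = { open = "taco-taco" }
      resources = {
        limits   = { memory = "5G" }
        requests = { memory = "2G" }
      }
    }
  })]

  # Install the database CR only after the operator is running.
  depends_on = [helm_release.percona_operator]
}
```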
When you run tofu destroy, OpenTofu uninstalls the Helm releases, which removes the operator and the demo MySQL cluster (and whatever the charts are configured to clean up).
Now let’s confirm everything is running, starting with the pods:
```shell
kubectl -n opentaco-mysql get pods

# Output
NAME                                   READY   STATUS    RESTARTS   AGE
percona-ps-db-haproxy-0                2/2     Running   0          10m
percona-ps-db-haproxy-1                2/2     Running   0          9m55s
percona-ps-db-haproxy-2                2/2     Running   0          9m29s
percona-ps-db-mysql-0                  2/2     Running   0          12m
percona-ps-db-mysql-1                  2/2     Running   0          10m
percona-ps-db-mysql-2                  2/2     Running   0          8m39s
percona-ps-operator-77bc4755c5-pv5rz   1/1     Running   0          12m
```
You should see the operator pod, as well as the MySQL and HAProxy pods created by the operator.
Check the custom resource:
```shell
kubectl -n opentaco-mysql get perconaservermysql

# Example Output
NAME            REPLICATION         ENDPOINT                               STATE   MYSQL   HAPROXY
percona-ps-db   group-replication   percona-ps-db-haproxy.opentaco-mysql   ready   3       3
```
```shell
kubectl -n opentaco-mysql describe perconaservermysql percona-ps-db

# Output
Name:         percona-ps-db
Namespace:    opentaco-mysql
Labels:       app.kubernetes.io/instance=percona-ps-db
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ps-db
              app.kubernetes.io/version=1.0.0
              helm.sh/chart=ps-db-1.0.0
Annotations:  meta.helm.sh/release-name: percona-ps-db
              meta.helm.sh/release-namespace: opentaco-mysql
API Version:  ps.percona.com/v1
Kind:         PerconaServerMySQL
Metadata:
  Finalizers:
    percona.com/delete-mysql-pods-in-order
Spec:
  Backup:
    Enabled:            true
    Image:              percona/percona-xtrabackup:8.4.0-4.1
    Image Pull Policy:  Always
  Cr Version:           1.0.0
  Mysql:
    Affinity:
      Anti Affinity Topology Key:  kubernetes.io/hostname
    Annotations:
      Open:              taco-taco
    Auto Recovery:       true
    Cluster Type:        group-replication
    Expose Primary:
      Enabled:           true
    Grace Period:        600
    Image:               percona/percona-server:8.4.6-6.1
    Image Pull Policy:   Always
...
Events:
  Type     Reason               Age   From           Message
  ----     ------               ----  ----           -------
  Warning  ClusterStateChanged  18m   ps-controller  -> Initializing
  Warning  ClusterStateChanged  12m   ps-controller  Initializing -> Ready
```
The events show the operator reconciling the cluster until it becomes ready.
Now, let’s check Services:
```shell
kubectl -n opentaco-mysql get svc

# Example Output
NAME                          TYPE        CLUSTER-IP       PORT(S)
percona-ps-db-haproxy         ClusterIP   34.118.234.180   3306/TCP,3307/TCP,3309/TCP...
percona-ps-db-mysql           ClusterIP   None             3306/TCP,33062/TCP...
percona-ps-db-mysql-primary   ClusterIP   34.118.227.76    3306/TCP,33062/TCP...
percona-ps-db-mysql-proxy     ClusterIP   None             3306/TCP,33062/TCP,33060/TCP...
percona-ps-db-mysql-unready   ClusterIP   None             3306/TCP,33062/TCP...
```
We can see the services for the primary and HAProxy.
Extract the MySQL root password:
```shell
kubectl -n opentaco-mysql get secret percona-ps-db-secrets -o jsonpath='{.data.root}' | base64 -d && echo

# Example Output
r]#s.KM~uu4XT<WO
```
Port-forward to the primary service:
```shell
kubectl -n opentaco-mysql port-forward svc/percona-ps-db-mysql-primary 3306:3306
Forwarding from 127.0.0.1:3306 -> 3306
Forwarding from [::1]:3306 -> 3306
```
Then connect using a MySQL client:
```shell
mysql -h 127.0.0.1 -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4544
Server version: 8.4.6-6 Percona Server (GPL), Release 6, Revision dbba4396

Copyright (c) 2000, 2025, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
```
Once connected, let’s try:
```shell
mysql> SHOW DATABASES;
+-------------------------------+
| Database                      |
+-------------------------------+
| information_schema            |
| mysql                         |
| mysql_innodb_cluster_metadata |
| performance_schema            |
| sys                           |
| sys_operator                  |
+-------------------------------+
6 rows in set (0.10 sec)
```
If this works, our MySQL cluster is running correctly. Woohoo!
After we are done, we can run: tofu destroy. This will uninstall both Helm releases (the operator and the demo MySQL cluster) and then delete the opentaco-mysql namespace, leaving your Kubernetes cluster itself untouched.
```shell
cd opentofu
tofu destroy -auto-approve

# Example Output
OpenTofu will perform the following actions:

  # helm_release.percona_db will be destroyed
  # helm_release.percona_operator will be destroyed
  # kubernetes_namespace.percona will be destroyed

Plan: 0 to add, 0 to change, 3 to destroy.
```
This is the part that makes the demo useful for teams.
Once your repo is connected to OpenTaco Cloud (via the GitHub App), OpenTaco uses GitHub Actions to run your OpenTofu project and report results back to the Pull Request.
So you don’t need someone to run tofu manually on their laptop; your PR becomes the workflow.

Image02: PR-based workflow overview.
1. Connect your repository to OpenTaco Cloud
After that, OpenTaco can react to PRs and run workflows.
OpenTaco (Digger) reads digger.yml in your repo to find:
```yaml
projects:
  - name: percona-opentaco
    dir: opentofu
    workspace: default
    tool: opentofu
    workflow: default

workflows:
  default:
    plan:
      steps:
        - init
        - plan
    apply:
      steps:
        - init
        - apply
```
Let’s configure GitHub Actions secrets first, because the runner needs access to:
You need these secrets in GitHub:
For GKE access:
For S3 backend (if you use it):
Note: When your state backend is S3, OpenTofu needs AWS credentials during init to read/write the state file. So in CI, you must authenticate to AWS (in addition to GKE/GCP). In this example, we use S3, so our GitHub Actions workflow includes an AWS credentials step before Digger runs.
```yaml
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v4
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: ${{ secrets.AWS_REGION }}
```
Now that we have the credentials, we create a branch and change something under opentofu/ (for example: bump the chart version, adjust MySQL memory limits, change values passed to the chart). In this example, we are changing the name of the namespace to opentaco-mysql-test in the variables.tf file.
The next step is to open a Pull Request and add a comment with:
```shell
digger plan
```
OpenTaco will run tofu init + tofu plan in GitHub Actions and post the plan output back to the PR.
Nothing is deployed yet; this is a dry run.
When you’re ready, run:
```shell
digger apply
```
OpenTaco will run tofu init + tofu apply. This installs or updates the same Helm releases you tested locally.
After digger apply completes successfully, OpenTaco posts the apply output back to the PR.
Besides PR comments, otaco.app gives you a quick view of:
You should be able to verify that the cluster resources exist. Let’s explore the pods.
```shell
kubectl -n opentaco-mysql-test get pods

# Example Output
NAME                                   READY   STATUS    RESTARTS   AGE
percona-ps-db-haproxy-0                2/2     Running   0          8m59s
percona-ps-db-haproxy-1                2/2     Running   0          8m38s
percona-ps-db-haproxy-2                2/2     Running   0          8m18s
percona-ps-db-mysql-0                  2/2     Running   0          9m49s
percona-ps-db-mysql-1                  2/2     Running   0          9m2s
percona-ps-db-mysql-2                  2/2     Running   0          8m14s
percona-ps-operator-676bf7c664-d2hdp   1/1     Running   0          9m56s

kubectl -n opentaco-mysql-test get svc

# Example Output
NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                           AGE
percona-ps-db-haproxy         ClusterIP   34.118.229.10   <none>        3306/TCP,3307/TCP,3309/TCP,33060/TCP,33062/TCP    10m
percona-ps-db-mysql           ClusterIP   None            <none>        3306/TCP,33062/TCP,33060/TCP,6450/TCP,33061/TCP   10m
percona-ps-db-mysql-primary   ClusterIP   34.118.228.29   <none>        3306/TCP,33062/TCP,33060/TCP,6450/TCP,33061/TCP   10m
percona-ps-db-mysql-proxy     ClusterIP   None            <none>        3306/TCP,33062/TCP,33060/TCP,6450/TCP,33061/TCP   10m
percona-ps-db-mysql-unready   ClusterIP   None            <none>        3306/TCP,33062/TCP,33060/TCP,6450/TCP,33061/TCP   10m
```
You should also see the state object created in your backend.
For S3:
```shell
aws s3 ls s3://s3-k8s-testing-automation-edithturn/percona-opentaco/ --recursive

# Example Output
2025-12-21 20:20:02       1316 percona-opentaco/terraform.tfstate
```
You’ll see JSON describing the OpenTofu resources (with type “helm_release”) and the outputs like namespace and note. That’s expected: OpenTofu tracks the Helm releases it manages, not each individual Kubernetes object.
```shell
aws s3 cp s3://s3-k8s-testing-automation-edithturn/percona-opentaco/terraform.tfstate - | jq

# Example Output
{
  "version": 4,
  "terraform_version": "1.6.6",
  "serial": 1,
  "lineage": "9826ce3b-e10b-ed80-a19e-d433c5731b92",
  "outputs": {
    "database_chart_version": {
      "value": "1.0.0",
      "type": "string"
    },
    "namespace": {
      "value": "opentaco-mysql-test",
      "type": "string"
    },
    "note": {
      "value": "Percona Operator and MySQL cluster deployed via Helm; see kubectl -n opentaco-mysql-test get pods",
      "type": "string"
    },
    "operator_chart_version": {
      "value": "1.0.0",
      "type": "string"
    }
  },
  "resources": [
    {
      "mode": "managed",
      "type": "helm_release",
      "name": "percona_db",
      "provider": "provider[\"registry.terraform.io/hashicorp/helm\"]",
      "instances": [
        ...
```
If you prefer to remove the database cluster and operator directly with Kubernetes/Helm, follow the official docs: Percona Operator for MySQL uninstall/delete cluster steps.
At this point, you have a repeatable method for deploying the Percona Operator for MySQL and a demo MySQL cluster on Kubernetes. You can now run the same workflow locally or from a CI/CD pipeline. Your deployment becomes documented, reproducible, and team-friendly.
If you’d like to explore further, check out the demo-percona-operator-mysql-opentaco repo and try a few changes of your own. We’re happy to help if you run into issues with OpenTaco or the Percona Operator for MySQL. And if you do play with it, tell us how it went, share your findings, ideas, or improvements with us!