Deploying databases on Kubernetes is getting easier every year. The part that still hurts is making deployments repeatable and predictable across clusters and environments, especially from a Continuous Integration (CI) perspective. This is where PR-based automation helps: you can review a plan, validate changes, and apply only after approval, before anything touches your cluster.

If you’ve ever installed an operator by hand, applied a few YAMLs, changed a script “just a bit”, and then watched the same setup behave differently in another environment, this post is for you.

In this tutorial, we’ll deploy Percona Operator for MySQL and a sample three-node MySQL cluster using OpenTofu – a fully open-source Terraform fork. Then we’ll take the exact same deployment and run it through CI using OpenTaco (formerly known as Digger), so that infrastructure changes can be validated and applied from Pull Requests.

We’ll use this demo repository throughout the guide: demo-percona-operator-mysql-opentaco on GitHub.

What OpenTaco adds to OpenTofu

Databases are central to most stacks, so changes to them deserve extra care. We want a workflow where updates are reviewed, validated, and approved before they ever touch a cluster. That’s exactly what OpenTofu + OpenTaco enables: a pull request shows the plan output for review, and apply happens only when you approve it.

OpenTofu (and Terraform) already gives us the “Infrastructure as Code” part: plan what will change, apply it, and store state. The remaining problem is operational, especially in a team: who runs “apply”, when do they run it, and how do we avoid collisions?

OpenTaco sits on top of your existing CI system (in our case, GitHub Actions). Instead of someone manually running tofu plan and tofu apply, you can run those steps through a Pull Request workflow, where:

  • A pull request can trigger a plan and show results in the PR
  • Apply happens in a controlled way (for example, after approval/merge, or when someone explicitly requests it)
  • Concurrent changes are prevented via locking
  • The same steps are repeatable in every environment

By the end of this blog post, we will have:

  • Percona Operator for MySQL running in your Kubernetes cluster
  • A sample PerconaServerMySQL custom resource deployed
  • A three-node MySQL cluster (Group Replication) and HAProxy pods created by the operator
  • OpenTofu state stored remotely (GCS or S3), which matters for CI
  • OpenTaco for IaC PR automation

Prerequisites

You need a Kubernetes cluster you can reach using kubectl. That can be local (kind/minikube) or managed (GKE/EKS/AKS). Before going further, make sure these work:
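For example, these basic checks should succeed:

```shell
# Confirm kubectl can reach the cluster and that nodes are Ready
kubectl cluster-info
kubectl get nodes
```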

You’ll also need:

  • OpenTofu (tofu)
  • Git
  • Optional: a MySQL client to test connectivity

Demo repository structure

This project automates deploying the Percona Operator for MySQL to Kubernetes using OpenTofu (a Terraform fork) and OpenTaco (formerly Digger) for CI/CD.
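A simplified sketch of the layout (the workflow file is assumed to live under .github/workflows/, as GitHub Actions requires; see the repo for the full tree):

```
.
├── opentofu/                     # OpenTofu project: namespace + Helm releases
├── digger.yml                    # OpenTaco/Digger project definition
└── .github/workflows/
    └── digger_workflow.yml       # GitHub Actions workflow that runs OpenTaco
```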

What each part does:

  • opentofu/ doesn’t manage individual Kubernetes objects directly. Instead, it manages the namespace and the Helm releases in a repeatable way; the charts create the underlying objects.
  • digger.yml tells OpenTaco what “project” to run and which steps to execute for plan/apply/destroy.
  • digger_workflow.yml contains the GitHub Actions workflow(s) that run OpenTaco

Run it locally with OpenTofu (based on Helm)

Before we bring OpenTaco into the picture, it’s worth running the deployment once locally with OpenTofu. This is not a separate approach; it’s the same OpenTofu project that OpenTaco will run later in CI. Doing it locally first helps you confirm your Kubernetes access and Helm chart behaviour without also debugging CI credentials.

Image01: Local workflow overview.

  1. Clone the repo and move into the OpenTofu project (see the commands just below)
  2. Choose where the OpenTofu state will live (local vs remote)
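For step 1, the commands look like this (the GitHub org is a placeholder; use the URL of the demo repository linked above):

```shell
# Clone the demo repo and enter the OpenTofu project
git clone https://github.com/<your-org>/demo-percona-operator-mysql-opentaco.git
cd demo-percona-operator-mysql-opentaco/opentofu
```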

OpenTofu uses a state file to remember what it deployed, so it can plan changes and later destroy the same resources cleanly.

  • For local learning, the local state is fine.
  • For CI and team usage, prefer a remote state (shared and consistent across runs, and can support locking).

Option A: local state (quickest to start)

Comment out the backend block in versions.tf and run:
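```shell
tofu init
```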

Option B: remote state on S3 (recommended for CI)
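A minimal sketch of the backend block in versions.tf, with placeholder bucket/key/region values:

```hcl
terraform {
  backend "s3" {
    bucket = "my-opentofu-state"               # placeholder: your bucket
    key    = "percona-mysql/terraform.tfstate" # placeholder: state object key
    region = "us-east-1"                       # placeholder: bucket region
  }
}
```

Then run tofu init again (tofu init -reconfigure if you are switching backends).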

Option C: remote state on GCS (also great for CI)
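The GCS equivalent, again with placeholder values:

```hcl
terraform {
  backend "gcs" {
    bucket = "my-opentofu-state"   # placeholder: your GCS bucket
    prefix = "percona-mysql"       # placeholder: state path prefix
  }
}
```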

 

3. Run tofu plan and tofu apply

tofu plan is a dry run: it shows exactly what OpenTofu would create, change, or destroy, without touching your cluster. In our case, it plans to create one namespace and two Helm releases (the Percona Operator chart and the MySQL cluster chart).
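```shell
tofu plan
```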

When you run tofu apply, OpenTofu executes this plan and actually installs those Helm charts into the cluster. In this run, OpenTofu created the opentaco-mysql namespace first, then installed two Helm releases: percona-ps-operator (the Percona Operator) and percona-ps-db (the demo MySQL cluster). The final “Outputs” section confirms what was deployed, including the chart versions and the namespace.
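To reproduce that run:

```shell
tofu apply
```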

In this demo, OpenTofu is the “orchestrator”: it describes what should be installed, and then uses the Helm provider to install it into your Kubernetes cluster.

When you run tofu destroy, OpenTofu uninstalls the Helm releases, which removes the operator and the demo MySQL cluster (and whatever the charts are configured to clean up).

Verify that the operator and cluster are running

Now, let’s confirm everything is running. Let’s check the pods:
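```shell
kubectl get pods -n opentaco-mysql
```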

You should see the operator pod, as well as the MySQL and HAProxy pods created by the operator.

Check the custom resource:
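The Percona Operator registers ps as the short name for the PerconaServerMySQL resource, so a command along these lines should work:

```shell
kubectl get ps -n opentaco-mysql
```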

This should show the operator reconciling the resource until the cluster reaches the ready state.

Now, let’s check Services:
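```shell
kubectl get svc -n opentaco-mysql
```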

We can see the services for the primary and HAProxy.

Quick connectivity test

Extract the MySQL root password:
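The secret name below assumes the <cluster-name>-secrets convention; adjust it to match the secret you actually see in the namespace:

```shell
kubectl get secret percona-ps-db-secrets -n opentaco-mysql \
  -o jsonpath='{.data.root}' | base64 --decode
```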

Port-forward to the primary service
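The service name here is an assumption; use the primary service from the kubectl get svc output above:

```shell
kubectl port-forward svc/percona-ps-db-mysql-primary -n opentaco-mysql 3306:3306
```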

Then connect using a MySQL client:
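```shell
mysql -h 127.0.0.1 -P 3306 -u root -p
```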

Once connected, let’s try:
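For example, check the Group Replication membership; all three nodes should report ONLINE:

```sql
SELECT MEMBER_HOST, MEMBER_STATE
FROM performance_schema.replication_group_members;
```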

If this works, our MySQL cluster is running correctly. Woohoo!

Clean Up (Destroy Everything)

After we are done, we can run: tofu destroy. This will uninstall both Helm releases (the operator and the demo MySQL cluster) and then delete the opentaco-mysql namespace, leaving your Kubernetes cluster itself untouched.

Time for OpenTaco: PR-based flow

This is the part that makes the demo useful for teams.

Once your repo is connected to OpenTaco Cloud (via the GitHub App), OpenTaco uses GitHub Actions to run your OpenTofu project and report results back to the Pull Request.

So you don’t need someone to run tofu manually on their laptop; your PR becomes the workflow.

Image02: PR-based workflow overview.

1. Connect your repository to OpenTaco Cloud

Install the GitHub App:

  • Go to: otaco.app, create an account
  • Select your repo
  • Approve permissions

After that, OpenTaco can react to PRs and run workflows.

2. How OpenTaco knows what to run: digger.yml

OpenTaco (Digger) reads digger.yml in your repo to find:

  • where your OpenTofu project lives (opentofu/)
  • which tool to use (opentofu)
  • which steps to run for plan and apply
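A minimal sketch of what that can look like (the opentofu flag is my assumption; check the repo’s digger.yml for the exact fields):

```yaml
projects:
  - name: percona-mysql   # project name referenced in PR comments
    dir: opentofu         # where the OpenTofu code lives
    opentofu: true        # assumed flag selecting tofu instead of terraform

workflows:
  default:
    plan:
      steps:
        - init
        - plan
    apply:
      steps:
        - init
        - apply
```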

3. Register Actions Secrets

Let’s first configure GitHub Actions secrets, because the runner needs access to:

  • your Kubernetes cluster (GKE in our example), and
  • your remote state backend (GCS or S3)

You need these secrets in GitHub:

For GKE access:

  • GOOGLE_CLOUD_CREDENTIALS (the full service account JSON)
  • GCP_PROJECT_ID
  • GKE_CLUSTER_NAME
  • GKE_CLUSTER_REGION

For S3 backend (if you use it):

  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
  • AWS_REGION
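You can add these in the repository settings UI or with the GitHub CLI, for example:

```shell
# Values are placeholders
gh secret set AWS_REGION --body "us-east-1"
gh secret set GOOGLE_CLOUD_CREDENTIALS < service-account.json
```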

Note: When your state backend is S3, OpenTofu needs AWS credentials during init to read/write the state file. So in CI, you must authenticate to AWS (in addition to GKE/GCP). In this example, we use S3, so our GitHub Actions workflow includes an AWS credentials step before Digger runs.
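A sketch of that credentials step (placed before the Digger step in digger_workflow.yml; the action version may differ from the repo’s):

```yaml
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v4
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: ${{ secrets.AWS_REGION }}
```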

4. Testing the PR-based workflow with OpenTaco

Now that we have the credentials, we create a branch and change something under opentofu/ (for example: bump the chart version, adjust MySQL memory limits, change values passed to the chart). In this example, we are changing the name of the namespace to opentaco-mysql-test in the variables.tf file.
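For example (branch name and commit message are placeholders):

```shell
git checkout -b rename-namespace
# edit opentofu/variables.tf: set the namespace default to opentaco-mysql-test
git commit -am "Rename namespace to opentaco-mysql-test"
git push -u origin rename-namespace
```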

Next step is to open a Pull Request and add a comment, with:
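```
digger plan
```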

OpenTaco will run tofu init + tofu plan in GitHub Actions and post the plan output back to the PR.

Nothing is deployed yet; this is a dry run.

When you’re ready, run:
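```
digger apply
```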

OpenTaco will run tofu init + tofu apply. This installs/updates the same Helm releases you tested locally:

  • the Percona Operator chart
  • the MySQL cluster chart

After digger apply completes successfully, we can check the output, which looks like this:

5. OpenTaco UI (otaco.app): what it’s for

Besides PR comments, otaco.app gives you a quick view of:

  • which repos are connected
  • recent plan/apply jobs
  • timestamps and status (succeeded/failed)
  • outputs captured from runs

 

6. Confirming it worked (CI and state)

You should be able to verify that the cluster resources exist. Let’s check the pods:
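```shell
kubectl get pods -n opentaco-mysql-test
```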

You should also see the state object created in your backend.

For S3:
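```shell
# Bucket and key are placeholders; use the values from your backend block
aws s3 ls s3://my-opentofu-state/percona-mysql/
```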

You’ll see JSON describing the OpenTofu resources (type “helm_release”) and outputs like namespace and note. That’s expected: OpenTofu tracks the Helm releases it manages, not each Kubernetes object the charts create.

7. Clean up 

If you prefer to remove the database cluster and operator directly with Kubernetes/Helm, follow the official docs: Percona Operator for MySQL uninstall/delete cluster steps. You can also tear everything down with tofu destroy, just as in the local run; digger.yml defines destroy steps as well.

Closing

At this point, you have a repeatable method for deploying the Percona Operator for MySQL and a demo MySQL cluster on Kubernetes. You can now run the same workflow locally or from a CI/CD pipeline. Your deployment becomes documented, reproducible, and team-friendly.

If you’d like to explore further, check out the demo-percona-operator-mysql-opentaco repo and try a few changes of your own. We’re happy to help if you run into issues with OpenTaco or the Percona Operator for MySQL. And if you do play with it, tell us how it went and share your findings, ideas, or improvements!
