The latest 1.7.0 release of Percona Operator for MongoDB came out just recently and enables users to:

- scale horizontally by adding multiple sharding ReplicaSets;
- run sidecar containers alongside the database containers;
- automatically delete Persistent Volume Claims when a cluster is removed.
Today we will look into these new features and their use cases, and highlight some architectural and technical decisions we made when implementing them.
The 1.6.0 release of our Operator introduced single-shard support, which we highlighted in this blog post along with the reasons why it makes sense. But horizontal scaling is not possible without support for multiple shards.
A new shard is just a new ReplicaSet which can be added under spec.replsets in cr.yaml:
```yaml
spec:
  ...
  replsets:
  - name: rs0
    size: 3
    ...
  - name: rs1
    size: 3
    ...
```
Read more on how to configure sharding.
In the Kubernetes world, a MongoDB ReplicaSet is a StatefulSet with the number of pods specified in the spec.replsets.[].size variable.
Once the pods are up and running, the Operator registers each new ReplicaSet as a shard in the cluster by running the addShard command through mongos.
Then the output of db.adminCommand({ listShards:1 }) will look like this:
```json
"shards" : [
  {
    "_id" : "replicaset-1",
    "host" : "replicaset-1/percona-cluster-replicaset-1-0.percona-cluster-replicaset-1.default.svc.cluster.local:27017,percona-cluster-replicaset-1-1.percona-cluster-replicaset-1.default.svc.cluster.local:27017,percona-cluster-replicaset-1-2.percona-cluster-replicaset-1.default.svc.cluster.local:27017",
    "state" : 1
  },
  {
    "_id" : "replicaset-2",
    "host" : "replicaset-2/percona-cluster-replicaset-2-0.percona-cluster-replicaset-2.default.svc.cluster.local:27017,percona-cluster-replicaset-2-1.percona-cluster-replicaset-2.default.svc.cluster.local:27017,percona-cluster-replicaset-2-2.percona-cluster-replicaset-2.default.svc.cluster.local:27017",
    "state" : 1
  }
],
```
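If you need to consume this reply programmatically, for example in tooling built around the Operator, you can unmarshal the shard list with the standard library. The sketch below is illustrative only: shardIDs is a hypothetical helper, and it assumes the reply has been rendered as plain JSON (the mongo shell prints extended JSON, which may need normalization first).

```go
package main

import (
	"encoding/json"
	"fmt"
)

// shard mirrors one entry of the listShards reply.
type shard struct {
	ID    string `json:"_id"`
	Host  string `json:"host"`
	State int    `json:"state"`
}

// shardIDs extracts the _id of every shard from a listShards reply.
func shardIDs(listShardsJSON []byte) ([]string, error) {
	var reply struct {
		Shards []shard `json:"shards"`
	}
	if err := json.Unmarshal(listShardsJSON, &reply); err != nil {
		return nil, err
	}
	ids := make([]string, 0, len(reply.Shards))
	for _, s := range reply.Shards {
		ids = append(ids, s.ID)
	}
	return ids, nil
}

func main() {
	doc := []byte(`{"shards":[{"_id":"replicaset-1","host":"h1","state":1},{"_id":"replicaset-2","host":"h2","state":1}]}`)
	ids, _ := shardIDs(doc)
	fmt.Println(ids) // prints [replicaset-1 replicaset-2]
}
```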
Percona Operators are built to simplify the deployment and management of databases on Kubernetes. Our goal is to provide resilient infrastructure, but the Operator does not manage the data itself. Deleting a shard requires moving its data to another shard first, and automating that step comes with a couple of caveats around data safety and migration time.
There are a few ways to handle this. For now, we decided on the most conservative one: the Operator won't touch the data. In future releases, we would like to work with the community to introduce fully automated shard removal.
When the user wants to remove a shard, we first check whether any non-system databases are present on the ReplicaSet. If there are none, the shard can be removed:
```go
func (r *ReconcilePerconaServerMongoDB) checkIfPossibleToRemove(cr *api.PerconaServerMongoDB, usersSecret *corev1.Secret, rsName string) error {
	systemDBs := map[string]struct{}{
		"local":  {},
		"admin":  {},
		"config": {},
	}
```
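The rest of the check boils down to comparing the shard's databases against that allow-list. A self-contained sketch of the same idea is below; userDatabases is a hypothetical helper for illustration, while the real Operator obtains the database names from the shard via the listDatabases command over the mongo driver.

```go
package main

import "fmt"

// systemDBs mirrors the Operator's list of databases that exist on
// every shard and are safe to ignore when deciding on removal.
var systemDBs = map[string]struct{}{
	"local":  {},
	"admin":  {},
	"config": {},
}

// userDatabases returns the non-system databases found on a shard.
// If the result is empty, the shard holds no user data and can be
// removed safely.
func userDatabases(dbs []string) []string {
	var user []string
	for _, name := range dbs {
		if _, ok := systemDBs[name]; !ok {
			user = append(user, name)
		}
	}
	return user
}

func main() {
	fmt.Println(userDatabases([]string{"admin", "config", "local", "shop"})) // prints [shop]
}
```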

The sidecar container pattern allows users to extend the application without changing the main container image. It leverages the fact that all containers in a pod share storage and network resources.
Percona Operators have built-in support for Percona Monitoring and Management to gain monitoring insights for databases on Kubernetes, but sometimes users may want to expose metrics to other monitoring systems. Let's see how mongodb_exporter can expose metrics when running as a sidecar alongside the ReplicaSet containers.
1. Create the monitoring user that the exporter will use to connect to MongoDB. Connect to mongod in the container and create the user:
```
> db.getSiblingDB("admin").createUser({
    user: "mongodb_exporter",
    pwd: "mysupErpassword!123",
    roles: [
      { role: "clusterMonitor", db: "admin" },
      { role: "read", db: "local" }
    ]
  })
```
2. Create a Kubernetes Secret holding this username and password. Encode both with base64:
```shell
$ echo -n mongodb_exporter | base64
bW9uZ29kYl9leHBvcnRlcg==
$ echo -n 'mysupErpassword!123' | base64
bXlzdXBFcnBhc3N3b3JkITEyMw==
```
Put these into the secret and apply:
```shell
$ cat mongoexp_secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mongoexp-secret
data:
  username: bW9uZ29kYl9leHBvcnRlcg==
  password: bXlzdXBFcnBhc3N3b3JkITEyMw==

$ kubectl apply -f mongoexp_secret.yaml
```
3. Add a sidecar for mongodb_exporter into cr.yaml and apply:
```yaml
replsets:
- name: rs0
  ...
  sidecars:
  - image: bitnami/mongodb-exporter:latest
    name: mongodb-exporter
    env:
    - name: EXPORTER_USER
      valueFrom:
        secretKeyRef:
          name: mongoexp-secret
          key: username
    - name: EXPORTER_PASS
      valueFrom:
        secretKeyRef:
          name: mongoexp-secret
          key: password
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: MONGODB_URI
      value: "mongodb://$(EXPORTER_USER):$(EXPORTER_PASS)@$(POD_IP):27017"
    args: ["--web.listen-address=$(POD_IP):9216"]
```

```shell
$ kubectl apply -f deploy/cr.yaml
```
All it takes now is to configure the monitoring system to fetch the metrics from each mongod Pod. For example, the Prometheus Operator will start fetching metrics once these annotations are added to the ReplicaSet pods:
```yaml
replsets:
- name: rs0
  ...
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '9216'
```
Running CI/CD pipelines that deploy MongoDB clusters on Kubernetes is a common thing. Once these clusters are terminated, the Persistent Volume Claims (PVCs) they used are not removed along with them. We have now added automation that removes PVCs after cluster deletion. We rely on Kubernetes Finalizers, which act as asynchronous pre-delete hooks. In our case, we attach the finalizer to the Custom Resource (CR) object that is created for the MongoDB cluster.

A user can enable the finalizer through cr.yaml in the metadata section:
```yaml
metadata:
  name: my-cluster-name
  finalizers:
  - delete-psmdb-pvc
```
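Conceptually, the reconcile loop sees the finalizer on a CR that is being deleted, performs the PVC cleanup, and then removes the finalizer so that Kubernetes can finish deleting the object. A simplified sketch of that flow, with hasFinalizer and removeFinalizer as hypothetical helpers standing in for the Operator's real reconcile logic:

```go
package main

import "fmt"

const finalizerDeletePVC = "delete-psmdb-pvc"

// hasFinalizer reports whether the CR still carries the
// delete-psmdb-pvc finalizer.
func hasFinalizer(finalizers []string) bool {
	for _, f := range finalizers {
		if f == finalizerDeletePVC {
			return true
		}
	}
	return false
}

// removeFinalizer returns a copy of the finalizer list without
// delete-psmdb-pvc. Once the CR's finalizer list is empty,
// Kubernetes completes the deletion of the object.
func removeFinalizer(finalizers []string) []string {
	out := make([]string, 0, len(finalizers))
	for _, f := range finalizers {
		if f != finalizerDeletePVC {
			out = append(out, f)
		}
	}
	return out
}

func main() {
	finalizers := []string{finalizerDeletePVC}
	if hasFinalizer(finalizers) {
		// In the real Operator, this is where the PVCs belonging to
		// the cluster's StatefulSets are deleted.
		finalizers = removeFinalizer(finalizers)
	}
	fmt.Println(len(finalizers)) // prints 0: the CR can now be garbage-collected
}
```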
Percona is committed to providing production-grade database deployments on Kubernetes. Our Percona Operator for MongoDB is a feature-rich tool to deploy and manage your MongoDB clusters with ease. Our Operator is free and open source. Try it out by following the documentation here, or help us make it better by contributing your code and ideas to our GitHub repository.