This is the second part of the series of blog posts unmasking the complexity of MongoDB cluster exposure in Kubernetes with Percona Operator for MongoDB. In the first part, we focused heavily on split horizons and a single replica set. 

In this part, we will expose a sharded cluster and a single replica set with Istio, a service mesh utility that simplifies exposure, security, and observability in Kubernetes. We will focus on its ingress capability. 

Prerequisites

  1. Kubernetes cluster. We will use Google Kubernetes Engine, but any other distribution will work as well. Some examples use a LoadBalancer service, which should be supported by the cluster of your choice.
  2. Percona Operator for MongoDB deployed. Follow our quick start installation guide.

Prepare

Deploy Istio

First, we need to deploy Istio and its components. 

Custom Resource Definitions (CRD)

CRDs extend the Kubernetes API and enable users to create Istio Custom Resources as regular k8s primitives. 

Istiod

Istiod provides service discovery, configuration, and certificate management. Make sure that Istiod is up and running before you proceed with Ingress installation.

Ingress gateway

This component manages the exposure, similar to other ingress controllers. It operates at layer 4 (L4) or layer 7 (L7).

The Istio gateway is exposed with a LoadBalancer service, which we will edit later on to expose the database. The EXTERNAL-IP address of this service is the one that will be used to connect to the database:
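You can look it up with a command like the following; the service name and namespace are assumptions that depend on how you deployed Istio:

```shell
kubectl get service -n istio-system istio-ingress
```

Wait until the EXTERNAL-IP column shows a real address rather than `<pending>`.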

Expose

Two resources should be created to expose anything with Istio – a Gateway and a VirtualService. The Gateway configures the load balancer itself, whereas the VirtualService defines the routing rules. 

Sharded cluster

Skip ahead to the next section if you are only interested in exposing a single replica set.

The sharded cluster has a single point of entry – mongos. We will expose it through the Istio ingress controller.

  • Expose the port on the controller itself. To do so, edit the Service resource and add the port you need to the spec.ports section:
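A minimal sketch of that edit, assuming the gateway service is called istio-ingress and lives in the istio-system namespace (adjust to your installation):

```yaml
# kubectl edit service -n istio-system istio-ingress
spec:
  ports:
  # ...existing ports stay as they are...
  - name: mongo
    port: 27017
    protocol: TCP
    targetPort: 27017
```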

We created a gateway that uses istio-ingress and handles port 27017.
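Such a Gateway, together with the VirtualService that routes TCP traffic to mongos, might look like the following sketch; the resource names, the selector label, and the mongos service name (my-cluster-name-mongos) are assumptions to adapt to your deployment:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: mongo-gateway
spec:
  selector:
    istio: ingress          # label of the istio-ingress gateway pods
  servers:
  - port:
      number: 27017
      name: mongo
      protocol: TCP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: mongos-vs
spec:
  hosts:
  - "*"
  gateways:
  - mongo-gateway
  tcp:
  - match:
    - port: 27017
    route:
    - destination:
        host: my-cluster-name-mongos   # mongos Service created by the Operator
        port:
          number: 27017
```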

Mongos is now exposed, and you can connect to it using the IP address of an istio-ingress service.

Get the user and password from the Secret as described in our documentation.

Your connection string will look like this:
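For example, with the default databaseAdmin user it might look like this; the user, password, and address are placeholders:

```shell
mongosh "mongodb://databaseAdmin:<PASSWORD>@<EXTERNAL_IP>:27017/admin"
```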

Single replica set

It might be counterintuitive, but exposing a single replica set is more complicated than a sharded cluster. It involves additional configuration for split horizon and TLS certificate generation. We strongly encourage you to read the previous blog post in the series that covers split horizons.

We will execute the following steps:

  1. Create a domain name that will be used to connect to the replica set
  2. Ensure proper splitHorizons configuration
  3. Generate TLS certificates for Transport Encryption
  4. Deploy the cluster
  5. Expose with Istio

Creating domain names

There are two ways to approach this:

  1. Have a domain per replica set node (see 08-a-rs-expose-istio.yaml)
  2. Have a single domain for all nodes (see 08-b-rs-expose-istio.yaml). This is possible because each replica set node will be exposed on a different TCP port.

For both options, we need to point the domain names to the EXTERNAL-IP address that we obtained for the Istio ingress above. 

Going forward, we will use the single-domain option in our examples.

Configure split horizons

We went deep into the details of split horizons in the previous blog post – have a look. For a single domain, the splitHorizons section looks like this:
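A sketch of such a section, assuming the cluster is called cluster1, the replica set rs0, and mongo.example.com is the single domain (replace with your own values):

```yaml
# each node shares one domain but gets its own TCP port
splitHorizons:
  cluster1-rs0-0:
    external: mongo.example.com:27017
  cluster1-rs0-1:
    external: mongo.example.com:27018
  cluster1-rs0-2:
    external: mongo.example.com:27019
```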

It is important to note that, as we use Istio, replica set nodes will be exposed with ClusterIP services only.

Also, please note that splitHorizons supports setting ports starting with version 1.16 of the Operator. You can find more information in K8SPSMDB-1004.

Generate certificates

We will use the script that we created in the previous blog post, which mimics the behavior described in the documentation. It generates a Certificate Authority and a server certificate with a key; we will need these to connect to the cluster with mongosh. Split horizons will not work without TLS certificates in place.
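If you reuse that script, running it looks like this; the script and output file names match the ones referenced later in this post:

```shell
./generate-tls.sh
# produces ca.pem and server-cert.pem, used later to connect with mongosh
```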

Deploy Percona Server for MongoDB

Deploy 08-b-rs-expose-istio.yaml manifest:
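Assuming the manifest is in your working directory:

```shell
kubectl apply -f 08-b-rs-expose-istio.yaml
```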

Expose with Istio

Edit istio-ingress service

For each replica set node, we will have a separate TCP port, so we need to edit the istio-ingress service to expose them. You can also pre-configure the ports in values.yaml when deploying with Helm.
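For a three-node replica set the edit might look like this sketch, with one TCP port per node matching the ports from your splitHorizons configuration (service name and namespace are assumptions):

```yaml
# kubectl edit service -n istio-system istio-ingress
spec:
  ports:
  # ...existing ports stay as they are...
  - name: mongo-rs0-0
    port: 27017
    protocol: TCP
    targetPort: 27017
  - name: mongo-rs0-1
    port: 27018
    protocol: TCP
    targetPort: 27018
  - name: mongo-rs0-2
    port: 27019
    protocol: TCP
    targetPort: 27019
```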

Istio gateway and virtual service

Create a Gateway resource in the database’s namespace (09-rs-istio-gateway.yaml):
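The Gateway might look like the following sketch; the resource name, selector label, and port numbers are assumptions matching the rest of our examples:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: rs0-gateway
spec:
  selector:
    istio: ingress          # label of the istio-ingress gateway pods
  servers:
  - port:
      number: 27017
      name: mongo-rs0-0
      protocol: TCP
    hosts:
    - "*"
  - port:
      number: 27018
      name: mongo-rs0-1
      protocol: TCP
    hosts:
    - "*"
  - port:
      number: 27019
      name: mongo-rs0-2
      protocol: TCP
    hosts:
    - "*"
```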

Create a VirtualService that points to the replica set nodes (10-rs-istio-virtual-svc.yaml):
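A sketch of such a VirtualService, mapping each external port to the per-node ClusterIP service (the cluster1-rs0-* service names are assumptions based on the Operator's naming):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: rs0-vs
spec:
  hosts:
  - "*"
  gateways:
  - rs0-gateway
  tcp:
  - match:
    - port: 27017
    route:
    - destination:
        host: cluster1-rs0-0   # per-node Service created by the Operator
        port:
          number: 27017
  - match:
    - port: 27018
    route:
    - destination:
        host: cluster1-rs0-1
        port:
          number: 27017
  - match:
    - port: 27019
    route:
    - destination:
        host: cluster1-rs0-2
        port:
          number: 27017
```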

Connect to the database

Let’s try to connect:
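With the single-domain setup, the command might look like this; the user, password, domain, and replica set name are placeholders to adapt:

```shell
mongosh "mongodb://databaseAdmin:<PASSWORD>@mongo.example.com:27017,mongo.example.com:27018,mongo.example.com:27019/admin?replicaSet=rs0&tls=true" \
  --tlsCAFile ca.pem \
  --tlsCertificateKeyFile server-cert.pem
```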

Explanation:

  • As we use a single domain with multiple ports, our seedlist looks as follows:
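For example, with a placeholder domain like mongo.example.com, it is the same host repeated with a different port per node:

```
mongo.example.com:27017,mongo.example.com:27018,mongo.example.com:27019
```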

  • ca.pem and server-cert.pem are generated with the generate-tls.sh script. If you did not use it, supply your own certificates. 

TLS SNI support

It is also possible to expose MongoDB using TLS SNI (Server Name Indication), where all connections go through the regular HTTPS port 443. We are leaving it out of this blog post for two reasons:

  1. Its configuration is almost identical to what we described above.
  2. TLS passthrough is currently in an experimental phase in Istio.

Conclusion

In this second part of our series, we’ve taken the next step in mastering MongoDB cluster exposure in Kubernetes by leveraging Istio. We demonstrated how Istio simplifies the exposure process for both sharded clusters and single replica sets, while also enhancing security and observability. By combining Istio’s powerful features with the Percona Operator for MongoDB, you can achieve a more robust and manageable MongoDB deployment in your Kubernetes environment.

 

Try Percona Operator for MongoDB now
