Install Percona Server for MongoDB on Google Kubernetes Engine (GKE)
This quickstart shows you how to configure the Percona Server for MongoDB Operator on Google Kubernetes Engine (GKE). The document assumes some experience with GKE. For more information, see the Kubernetes Engine Quickstart.
All commands from this quickstart can be run either in the Google Cloud shell or in your local shell.
To use Google Cloud shell, you need nothing but a modern web browser.
If you would like to use your local shell, install the following:
kubectl, the Kubernetes command-line tool you will use to manage and deploy applications. To install it, run the following commands:
$ gcloud auth login
$ gcloud components install kubectl
Configuring default settings for the cluster
You can configure the settings using the gcloud tool. You can run it either in the Cloud Shell or in your local shell (if you have installed the Google Cloud SDK locally in the previous step). The following command will create a cluster named my-cluster-name:
$ gcloud container clusters create my-cluster-name --project <project name> --zone us-central1-a --cluster-version 1.15 --machine-type n1-standard-4 --num-nodes=3
You must edit the above command and other command-line statements to replace the <project name> placeholder with your project name. You may also need to edit the zone location, which is set to us-central1-a in the above example. The other parameters specify that we are creating a cluster with 3 nodes and with a machine type of 4 vCPUs and 15 GB of memory.
It may take a few minutes for the cluster to be created. Once it is ready, you will see it listed in the Google Cloud console (select Kubernetes Engine → Clusters in the left menu panel):
Now you should configure command-line access to your newly created cluster so that kubectl can use it.
In the Google Cloud console, select your cluster and then click Connect. You will see a statement that configures command-line access. After you have edited the statement, run the command in your local shell:
$ gcloud container clusters get-credentials my-cluster-name --zone us-central1-a --project <project name>
Installing the Operator
First of all, use your Cloud Identity and Access Management (Cloud IAM) to control access to the cluster. The following command will give you the ability to create Roles and RoleBindings:
$ kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user $(gcloud config get-value core/account)
The returned message confirms the creation:
Create a namespace and set the context for it. Resource names must be unique within a namespace; namespaces provide a way to divide cluster resources between users spread across multiple projects.
Create the namespace and set it as the context for subsequent commands as follows (replace the <namespace name> placeholder with some descriptive name):
$ kubectl create namespace <namespace name>
$ kubectl config set-context $(kubectl config current-context) --namespace=<namespace name>
On success, you will see a message that namespace/<namespace name> was created, and the context (gke_<project name>_<zone location>_<cluster name>) was modified.
Use the following git clone command to download the correct branch of the percona-server-mongodb-operator repository:
$ git clone -b v1.6.0 https://github.com/percona/percona-server-mongodb-operator
After the repository is downloaded, change to its directory to run the rest of the commands in this document:
$ cd percona-server-mongodb-operator
Deploy the Operator with the following command:
$ kubectl apply -f deploy/bundle.yaml
The following confirmation is returned:
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbs.psmdb.percona.com created
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbbackups.psmdb.percona.com created
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbrestores.psmdb.percona.com created
role.rbac.authorization.k8s.io/percona-server-mongodb-operator created
serviceaccount/percona-server-mongodb-operator created
rolebinding.rbac.authorization.k8s.io/service-account-percona-server-mongodb-operator created
deployment.apps/percona-server-mongodb-operator created
The Operator has been started, and you can now create the Percona Server for MongoDB cluster:
$ kubectl apply -f deploy/cr.yaml
The process may take some time. The returned message confirms the creation:
During the previous steps, the Operator has generated several secrets, including the password for the root user, which you will need to access the cluster.
Use the kubectl get secrets command to see the list of Secrets objects (by default, the Secrets object you are interested in is named my-cluster-secrets). Then the kubectl get secret my-cluster-secrets -o yaml command will return the YAML file with the generated secrets, including the MONGODB_USER_ADMIN_USER and MONGODB_USER_ADMIN_PASSWORD strings, which should look as follows:
...
data:
  ...
  MONGODB_USER_ADMIN_PASSWORD: aDAzQ0pCY3NSWEZ2ZUIzS1I=
  MONGODB_USER_ADMIN_USER: dXNlckFkbWlu
Here the actual password is base64-encoded; echo 'aDAzQ0pCY3NSWEZ2ZUIzS1I=' | base64 --decode will bring it back to a human-readable form.
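As a quick sketch, both credentials can be decoded directly in the shell (the encoded values below are the examples shown above; yours will differ):

```shell
# Decode the base64-encoded credentials from the Secret shown above.
# On a live cluster you could fetch an encoded value directly, e.g.:
#   kubectl get secret my-cluster-secrets \
#     -o jsonpath='{.data.MONGODB_USER_ADMIN_PASSWORD}'
ADMIN_USER=$(echo 'dXNlckFkbWlu' | base64 --decode)
ADMIN_PASS=$(echo 'aDAzQ0pCY3NSWEZ2ZUIzS1I=' | base64 --decode)
echo "$ADMIN_USER"   # userAdmin
echo "$ADMIN_PASS"
```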
Verifying the cluster operation
It may take up to ten minutes for the cluster to start. You can verify its creation with the kubectl get pods command:
$ kubectl get pods
The result should look as follows:
NAME                                               READY   STATUS    RESTARTS   AGE
my-cluster-name-cfg-0                              2/2     Running   0          11m
my-cluster-name-cfg-1                              2/2     Running   1          10m
my-cluster-name-cfg-2                              2/2     Running   1          9m
my-cluster-name-mongos-55659468f7-2kvc2            1/1     Running   0          11m
my-cluster-name-mongos-55659468f7-7jfqc            1/1     Running   0          11m
my-cluster-name-mongos-55659468f7-dfwcj            1/1     Running   0          11m
my-cluster-name-rs0-0                              2/2     Running   0          11m
my-cluster-name-rs0-1                              2/2     Running   0          10m
my-cluster-name-rs0-2                              2/2     Running   0          9m
percona-server-mongodb-operator-6fc78d686d-26hdz   1/1     Running   0          37m
You can also see the same information when browsing the Pods of your cluster in the Google Cloud console via the Object Browser:
If all nodes are up and running, you can try to connect to the cluster.
First of all, run a container with a MongoDB client and connect its console output to your terminal. The following command will do this, naming the new Pod percona-client:
$ kubectl run -i --rm --tty percona-client --image=percona/percona-server-mongodb:4.4.2-4 --restart=Never -- bash -il
Executing it may take some time while the corresponding Pod is deployed. Now run the mongo tool in the percona-client command shell, using the login (which is userAdmin) and the password obtained from the secret:
mongo "mongodb://userAdmin:userAdminPassword@my-cluster-name-mongos.<namespace name>.svc.cluster.local/admin?ssl=false"
If the kubectl get pods command shows some errors, you can examine the problematic Pod with the kubectl describe pod <pod name> command. For example, this command returns information for the selected Pod:
$ kubectl describe pod my-cluster-name-rs0-2
Review the detailed information for
Warning statements and then correct the configuration. An example of a warning is as follows:
Warning  FailedScheduling  68s (x4 over 2m22s)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't satisfy existing pods anti-affinity rules.
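A common cause of this particular warning is a cluster with fewer schedulable nodes than the Pod anti-affinity rules require. As a sketch (the affinity.antiAffinityTopologyKey field is part of the Operator's deploy/cr.yaml; check the field names against your Operator version), you could relax the constraint for the affected replica set and re-apply the custom resource:

```yaml
# deploy/cr.yaml (fragment): relax Pod anti-affinity for the rs0 replica set.
# The default "kubernetes.io/hostname" forces each member onto a different
# node; "none" disables the constraint entirely.
replsets:
  - name: rs0
    size: 3
    affinity:
      antiAffinityTopologyKey: "none"
```

Note that disabling anti-affinity trades availability for schedulability: multiple replica set members may then land on the same node.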
Alternatively, you can examine your Pods via the object browser. Errors will look as follows:
Clicking the problematic Pod will bring you to the details page with the same warning:
Removing the GKE cluster
There are several ways that you can delete the cluster.
You can clean up the cluster with the gcloud command as follows:
$ gcloud container clusters delete <cluster name>
The return statement requests your confirmation of the deletion. Type
y to confirm.
Also, you can delete your cluster via the GKE console. Just click the
Delete popup menu item in the clusters list:
The cluster deletion may take time.