This blog discusses installing Percona Monitoring and Management on Google Container Engine.
I am working with a client that is on Google Cloud Platform (GCP) and wants to use Percona Monitoring and Management (PMM). They liked the idea of using Google Container Engine (GKE) to manage the Docker container that pmm-server runs in.
The regular install instructions are here: https://www.percona.com/doc/percona-monitoring-and-management/install.html
Since Google Container Engine runs on Kubernetes, we had to make some interesting changes to the standard server install instructions.
First, open the Google Cloud Shell. This is done by clicking the shell button at the top right of your screen when logged into your GCP project.

Once you are in the shell, you just need to run some commands to get up and running.
Let’s set our compute zone:
```
manjot_singh@googleproject:~$ gcloud config set compute/zone asia-east1-c

Updated property [compute/zone].
```
Then let’s set up our auth:
```
manjot_singh@googleproject:~$ gcloud auth application-default login
...
These credentials will be used by any library that requests
Application Default Credentials.
```
Now we are ready to go.
Normally, we create a persistent container called pmm-data to hold the data the server collects, so it survives container deletions and upgrades. On Google Cloud, we will instead create persistent disks, using the minimum size Google recommends for each.
```
manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-prom-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-prom-data-pv].
NAME                 ZONE          SIZE_GB  TYPE         STATUS
pmm-prom-data-pv     asia-east1-c  200      pd-standard  READY

manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-consul-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-consul-data-pv].
NAME                 ZONE          SIZE_GB  TYPE         STATUS
pmm-consul-data-pv   asia-east1-c  200      pd-standard  READY

manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-mysql-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-mysql-data-pv].
NAME                 ZONE          SIZE_GB  TYPE         STATUS
pmm-mysql-data-pv    asia-east1-c  200      pd-standard  READY

manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-grafana-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-grafana-data-pv].
NAME                 ZONE          SIZE_GB  TYPE         STATUS
pmm-grafana-data-pv  asia-east1-c  200      pd-standard  READY
```
Ignoring messages about disk formatting, we are ready to create our Kubernetes cluster:
```
manjot_singh@googleproject:~$ gcloud container clusters create pmm-server --num-nodes 1 --machine-type n1-standard-2

Creating cluster pmm-server...done.
Created [https://container.googleapis.com/v1/projects/googleproject/zones/asia-east1-c/clusters/pmm-server].
kubeconfig entry generated for pmm-server.
NAME        ZONE          MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
pmm-server  asia-east1-c  1.4.6           999.911.999.91  n1-standard-2  1.4.6         1          RUNNING
```
You should now see something like:
```
manjot_singh@googleproject:~$ gcloud compute instances list
NAME                                       ZONE          MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
gke-pmm-server-default-pool-73b3f656-20t0  asia-east1-c  n1-standard-2               10.14.10.14  911.119.999.11  RUNNING
```
Now that our container manager is up, we need to create two configurations for the “pod” that will run our container. The first is used only to initialize the server and move the container’s data directories onto the persistent disks; the second is the actual running server.
```
manjot_singh@googleproject:~$ vi pmm-server-init.json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "pmm-server",
    "labels": {
      "name": "pmm-server"
    }
  },
  "spec": {
    "containers": [{
      "name": "pmm-server",
      "image": "percona/pmm-server:1.0.6",
      "env": [{
        "name": "SERVER_USER",
        "value": "http_user"
      },{
        "name": "SERVER_PASSWORD",
        "value": "http_password"
      },{
        "name": "ORCHESTRATOR_USER",
        "value": "orchestrator"
      },{
        "name": "ORCHESTRATOR_PASSWORD",
        "value": "orch_pass"
      }],
      "ports": [{
        "containerPort": 80
      }],
      "volumeMounts": [{
        "mountPath": "/opt/prometheus/d",
        "name": "pmm-prom-data"
      },{
        "mountPath": "/opt/c",
        "name": "pmm-consul-data"
      },{
        "mountPath": "/var/lib/m",
        "name": "pmm-mysql-data"
      },{
        "mountPath": "/var/lib/g",
        "name": "pmm-grafana-data"
      }]
    }],
    "restartPolicy": "Always",
    "volumes": [{
      "name": "pmm-prom-data",
      "gcePersistentDisk": {
        "pdName": "pmm-prom-data-pv",
        "fsType": "ext4"
      }
    },{
      "name": "pmm-consul-data",
      "gcePersistentDisk": {
        "pdName": "pmm-consul-data-pv",
        "fsType": "ext4"
      }
    },{
      "name": "pmm-mysql-data",
      "gcePersistentDisk": {
        "pdName": "pmm-mysql-data-pv",
        "fsType": "ext4"
      }
    },{
      "name": "pmm-grafana-data",
      "gcePersistentDisk": {
        "pdName": "pmm-grafana-data-pv",
        "fsType": "ext4"
      }
    }]
  }
}
```
And the actual server configuration, which mounts the persistent disks at the real data paths:

```
manjot_singh@googleproject:~$ vi pmm-server.json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "pmm-server",
    "labels": {
      "name": "pmm-server"
    }
  },
  "spec": {
    "containers": [{
      "name": "pmm-server",
      "image": "percona/pmm-server:1.0.6",
      "env": [{
        "name": "SERVER_USER",
        "value": "http_user"
      },{
        "name": "SERVER_PASSWORD",
        "value": "http_password"
      },{
        "name": "ORCHESTRATOR_USER",
        "value": "orchestrator"
      },{
        "name": "ORCHESTRATOR_PASSWORD",
        "value": "orch_pass"
      }],
      "ports": [{
        "containerPort": 80
      }],
      "volumeMounts": [{
        "mountPath": "/opt/prometheus/data",
        "name": "pmm-prom-data"
      },{
        "mountPath": "/opt/consul-data",
        "name": "pmm-consul-data"
      },{
        "mountPath": "/var/lib/mysql",
        "name": "pmm-mysql-data"
      },{
        "mountPath": "/var/lib/grafana",
        "name": "pmm-grafana-data"
      }]
    }],
    "restartPolicy": "Always",
    "volumes": [{
      "name": "pmm-prom-data",
      "gcePersistentDisk": {
        "pdName": "pmm-prom-data-pv",
        "fsType": "ext4"
      }
    },{
      "name": "pmm-consul-data",
      "gcePersistentDisk": {
        "pdName": "pmm-consul-data-pv",
        "fsType": "ext4"
      }
    },{
      "name": "pmm-mysql-data",
      "gcePersistentDisk": {
        "pdName": "pmm-mysql-data-pv",
        "fsType": "ext4"
      }
    },{
      "name": "pmm-grafana-data",
      "gcePersistentDisk": {
        "pdName": "pmm-grafana-data-pv",
        "fsType": "ext4"
      }
    }]
  }
}
```
Then create the init pod:
```
manjot_singh@googleproject:~$ kubectl create -f pmm-server-init.json
pod "pmm-server" created
```
Now we need to move data to persistent disks:
```
manjot_singh@googleproject:~$ kubectl exec -it pmm-server bash

root@pmm-server:/opt# supervisorctl stop grafana
grafana: stopped
root@pmm-server:/opt# supervisorctl stop prometheus
prometheus: stopped
root@pmm-server:/opt# supervisorctl stop consul
consul: stopped
root@pmm-server:/opt# supervisorctl stop mysql
mysql: stopped

root@pmm-server:/opt# mv consul-data/* c/
root@pmm-server:/opt# chown pmm.pmm c

root@pmm-server:/opt# cd prometheus/
root@pmm-server:/opt/prometheus# mv data/* d/
root@pmm-server:/opt/prometheus# chown pmm.pmm d

root@pmm-server:/opt/prometheus# cd /var/lib
root@pmm-server:/var/lib# mv mysql/* m/
root@pmm-server:/var/lib# chown mysql.mysql m

root@pmm-server:/var/lib# mv grafana/* g/
root@pmm-server:/var/lib# chown grafana.grafana g

root@pmm-server:/var/lib# exit

manjot_singh@googleproject:~$ kubectl delete pods pmm-server
pod "pmm-server" deleted
```
Now recreate the pmm-server container with the actual configuration:
```
manjot_singh@googleproject:~$ kubectl create -f pmm-server.json
pod "pmm-server" created
```
It’s up!
Now let’s get access to it by exposing it to the internet:
```
manjot_singh@googleproject:~$ kubectl expose pod pmm-server --port=80 --type=LoadBalancer

service "pmm-server" exposed
```
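If you prefer to keep everything in version-controlled manifests, the expose step can also be written as a Service definition in the same style as the pod files above. This is a sketch rather than something taken from the setup described here; the file name is illustrative, and the selector assumes the `name: pmm-server` label from the pod metadata:

```
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "name": "pmm-server"
  },
  "spec": {
    "type": "LoadBalancer",
    "selector": {
      "name": "pmm-server"
    },
    "ports": [{
      "port": 80,
      "targetPort": 80
    }]
  }
}
```

Saved as, say, pmm-server-svc.json, it would be applied with `kubectl create -f pmm-server-svc.json`, producing the same load-balanced service as the expose command.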
You can get more information on the service by running:
```
manjot_singh@googleproject:~$ kubectl describe services pmm-server
Name:              pmm-server
Namespace:         default
Labels:            name=pmm-server
Selector:          name=pmm-server
Type:              LoadBalancer
IP:                10.3.10.3
Port:              <unset>  80/TCP
NodePort:          <unset>  31757/TCP
Endpoints:         10.0.0.8:80
Session Affinity:  None
Events:
  FirstSeen  LastSeen  Count  From                  SubobjectPath  Type    Reason                Message
  ---------  --------  -----  ----                  -------------  ------  ------                -------
  22s        22s       1      {service-controller }                Normal  CreatingLoadBalancer  Creating load balancer
```
To find the public IP of your PMM server, look under “EXTERNAL-IP”:
```
manjot_singh@googleproject:~$ kubectl get services
NAME         CLUSTER-IP   EXTERNAL-IP      PORT(S)  AGE
kubernetes   10.3.10.3    <none>           443/TCP  7m
pmm-server   10.3.10.99   999.911.991.91   80/TCP   1m
```
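If you want to script against the external IP rather than read it off the table, kubectl's jsonpath output format can pull it out once the load balancer has been provisioned. A small sketch (the `EXTERNAL_IP` variable name is just for illustration, and the field is empty until provisioning completes):

```
# Capture the load balancer's external IP from the service status.
EXTERNAL_IP=$(kubectl get service pmm-server \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "PMM should be reachable at http://${EXTERNAL_IP}/"
```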
That’s it, just visit the external IP in your browser and you should see the PMM landing page!
One of the things we didn’t resolve was accessing the pmm-server container from within the VPC. The client had to go out over the open internet and hit PMM via the public IP. I hope to work on this some more and resolve it in the future.
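One avenue worth testing for the VPC problem is an internal load balancer, which later GKE versions support through a service annotation. This is an untested sketch, not something we verified on this cluster, and it assumes the `cloud.google.com/load-balancer-type` annotation is available on your GKE version:

```
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "name": "pmm-server-internal",
    "annotations": {
      "cloud.google.com/load-balancer-type": "Internal"
    }
  },
  "spec": {
    "type": "LoadBalancer",
    "selector": {
      "name": "pmm-server"
    },
    "ports": [{
      "port": 80,
      "targetPort": 80
    }]
  }
}
```

With this in place, the service would get an address on the VPC's internal range instead of a public IP, so clients inside the network would not have to round-trip over the internet.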
I have also talked to our team about making mounts for persistent disks easier, so that we can use fewer mounts and simplify the configuration and setup.
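On that mount question, one direction to explore is letting Kubernetes provision the disk itself through a PersistentVolumeClaim instead of pre-creating each disk with gcloud. A sketch under the assumption that your cluster runs a Kubernetes version with dynamic provisioning and a default storage class (the claim name and size are illustrative):

```
{
  "apiVersion": "v1",
  "kind": "PersistentVolumeClaim",
  "metadata": {
    "name": "pmm-data"
  },
  "spec": {
    "accessModes": ["ReadWriteOnce"],
    "resources": {
      "requests": {
        "storage": "200Gi"
      }
    }
  }
}
```

The pod spec would then reference a single `persistentVolumeClaim` volume rather than four separate `gcePersistentDisk` entries, and GKE would create and attach the backing disk automatically.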