In this blog post, I’d like to share some experiences setting up a Vitess environment for local testing and development on OSX/macOS. I previously presented How To Test and Deploy Kubernetes Operator for MySQL (PXC) in OSX/macOS; this time I will show how to run Vitess on Kubernetes.
Since running Kubernetes on a laptop is only experimental, I faced several issues when following the straightforward installation steps and had to apply a few workarounds along the way. This setup involves only minimal customization.
For a high-level overview of Vitess, please visit Part I of this series, Introduction to Vitess on Kubernetes for MySQL.
Housekeeping items needed during installation:
One of the main challenges I faced was that the latest Kubernetes version wasn’t compatible with the existing Vitess deployment. The issue is filed here on GitHub, hence we start with a previous version rather than the default.
```shell
$ minikube start -p vitess --memory=4096 --kubernetes-version=1.15.2
? [vitess] minikube v1.5.2 on Darwin 10.14.6
✨ Automatically selected the 'virtualbox' driver
? Creating virtualbox VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
? Preparing Kubernetes v1.15.2 on Docker '18.09.9' ...
? Downloading kubelet v1.15.2
? Downloading kubeadm v1.15.2
? Pulling images ...
? Launching Kubernetes ...
⌛ Waiting for: apiserver
? Done! kubectl is now configured to use "vitess"
E1127 15:56:46.076308 30453 start.go:389] kubectl info: exec: exit status 1
```
Verify that minikube is initialized and running:
```shell
$ kubectl -n kube-system get pods
NAME                               READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-2zwsf           1/1     Running   0          1m
coredns-5c98db65d4-qmslc           1/1     Running   0          1m
etcd-minikube                      1/1     Running   0          34s
kube-addon-manager-minikube        1/1     Running   0          43s
kube-apiserver-minikube            1/1     Running   0          41s
kube-controller-manager-minikube   1/1     Running   0          28s
kube-proxy-wrc5k                   1/1     Running   0          1m
kube-scheduler-minikube            1/1     Running   0          45s
storage-provisioner                1/1     Running   0          1m
```
The next item on the list is to get the etcd operator running. At this point, we still need to clone the etcd-operator repository to a local directory to have access to its files.
```shell
$ git clone https://github.com/coreos/etcd-operator.git
```
The issue reported here has a workaround: replacing the deployment.yaml file. Once that’s done, we can proceed with the installation.
Under /Users/[username]/Kubernetes/etcd-operator, run:
```shell
$ ./example/rbac/create_role.sh
Creating role with ROLE_NAME=etcd-operator, NAMESPACE=default
clusterrole "etcd-operator" created
Creating role binding with ROLE_NAME=etcd-operator, ROLE_BINDING_NAME=etcd-operator, NAMESPACE=default
clusterrolebinding "etcd-operator" created

$ kubectl create -f example/deployment.yaml
deployment "etcd-operator" created

$ kubectl get customresourcedefinitions
NAME                                    KIND
etcdclusters.etcd.database.coreos.com   CustomResourceDefinition.v1beta1.apiextensions.k8s.io
```
If the above steps don’t work, you may alternatively install etcd-operator via Helm.
```shell
$ kubectl get customresourcedefinitions
NAME                                    KIND
etcdbackups.etcd.database.coreos.com    CustomResourceDefinition.v1beta1.apiextensions.k8s.io
etcdclusters.etcd.database.coreos.com   CustomResourceDefinition.v1beta1.apiextensions.k8s.io
etcdrestores.etcd.database.coreos.com   CustomResourceDefinition.v1beta1.apiextensions.k8s.io
```
```shell
$ brew install helm
```
Another issue I faced with Helm is that it can’t find Tiller; initializing Helm installs it:
```shell
$ helm init
$HELM_HOME has been configured at /Users/askdba/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
```
Install the Vitess client using Go:
```shell
$ go get vitess.io/vitess/go/cmd/vtctlclient
go: finding vitess.io/vitess v2.1.1+incompatible
go: downloading vitess.io/vitess v2.1.1+incompatible
go: extracting vitess.io/vitess v2.1.1+incompatible
go: downloading golang.org/x/net v0.0.0-20191028085509-fe3aa8a45271
go: downloading github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b
go: finding github.com/youtube/vitess v2.1.1+incompatible
go: extracting github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b
go: downloading github.com/youtube/vitess v2.1.1+incompatible
go: extracting golang.org/x/net v0.0.0-20191028085509-fe3aa8a45271
go: extracting github.com/youtube/vitess v2.1.1+incompatible
go: downloading github.com/golang/protobuf v1.3.2
go: downloading google.golang.org/grpc v1.24.0
go: extracting github.com/golang/protobuf v1.3.2
go: extracting google.golang.org/grpc v1.24.0
go: downloading google.golang.org/genproto v0.0.0-20191028173616-919d9bdd9fe6
go: downloading golang.org/x/text v0.3.2
go: downloading golang.org/x/sys v0.0.0-20191028164358-195ce5e7f934
go: extracting golang.org/x/sys v0.0.0-20191028164358-195ce5e7f934
go: extracting golang.org/x/text v0.3.2
go: extracting google.golang.org/genproto v0.0.0-20191028173616-919d9bdd9fe6
go: finding github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b
go: finding golang.org/x/net v0.0.0-20191028085509-fe3aa8a45271
go: finding github.com/golang/protobuf v1.3.2
go: finding google.golang.org/grpc v1.24.0
go: finding google.golang.org/genproto v0.0.0-20191028173616-919d9bdd9fe6
go: finding golang.org/x/text v0.3.2
```
Now we’re ready to launch our test cluster in Vitess, which includes the sample database schema. In the next blog post, we will go over creating a single keyspace and sharding it across instances using Vitess.
```shell
$ helm install ../../helm/vitess -f 101_initial_cluster.yaml
NAME: steely-owl
LAST DEPLOYED: Wed Nov 27 16:46:00 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME       AGE
vitess-cm  2s

==> v1/Job
NAME                                AGE
commerce-apply-schema-initial       2s
commerce-apply-vschema-initial      2s
zone1-commerce-0-init-shard-master  2s

==> v1/Pod(related)
NAME                                      AGE
commerce-apply-schema-initial-fzm66       2s
commerce-apply-vschema-initial-j9hzb      2s
vtctld-757df48d4-vbv5z                    2s
vtgate-zone1-5cb4fcddcb-fx8xd             2s
zone1-commerce-0-init-shard-master-zd7vs  2s
zone1-commerce-0-rdonly-0                 2s
zone1-commerce-0-replica-0                1s
zone1-commerce-0-replica-1                1s

==> v1/Service
NAME          AGE
vtctld        2s
vtgate-zone1  2s
vttablet      2s

==> v1beta1/Deployment
NAME          AGE
vtctld        2s
vtgate-zone1  2s

==> v1beta1/PodDisruptionBudget
NAME                      AGE
vtgate-zone1              2s
zone1-commerce-0-rdonly   2s
zone1-commerce-0-replica  2s

==> v1beta1/StatefulSet
NAME                      AGE
zone1-commerce-0-rdonly   2s
zone1-commerce-0-replica  2s

==> v1beta2/EtcdCluster
NAME         AGE
etcd-global  2s
etcd-zone1   2s

NOTES:
Release name: steely-owl
To access administrative web pages, start a proxy with:
  kubectl proxy --port=8001
Then use the following URLs:
  vtctld: http://localhost:8001/api/v1/namespaces/default/services/vtctld:web/proxy/app/
  vtgate: http://localhost:8001/api/v1/namespaces/default/services/vtgate-zone1:web/proxy/

$ kubectl describe service vtgate-zone1
Name:              vtgate-zone1
Namespace:         default
Labels:            app=vitess
                   cell=zone1
                   component=vtgate
Selector:          app=vitess,cell=zone1,component=vtgate
Type:              NodePort
IP:                10.109.14.82
Port:              web  15001/TCP
NodePort:          web  30433/TCP
Endpoints:         172.17.0.7:15001
Port:              grpc  15991/TCP
NodePort:          grpc  30772/TCP
Endpoints:         172.17.0.7:15991
Port:              mysql  3306/TCP
NodePort:          mysql  31090/TCP
Endpoints:         172.17.0.7:3306
Session Affinity:  None
No events.
```
Here we face another issue, even though our cluster is up and running: we can’t access this environment from the laptop. Under /Users/askdba/go/src/vitess.io/vitess/examples/helm there’s a script called kmysql.sh that figures out the hostname and port number for this cluster, but it fails due to the above-mentioned issue.
This script returns an error as follows:
```shell
$ ./kmysql.sh
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (61)
```
To fix this, we’ll need to create a pod to access Kubernetes running inside minikube:
```shell
$ kubectl run -i --rm --tty percona-client --image=percona:5.7 --restart=Never -- bash -il
Waiting for pod default/percona-client to be running, status is Pending, pod ready: false
If you don't see a command prompt, try pressing enter.
```
This gives us the required access to the cluster.
```shell
$ mysql -h vtgate-zone1 -P 3306
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.5.10-Vitess Percona Server (GPL), Release 23, Revision 500fcf5

Copyright (c) 2009-2019 Percona LLC and/or its affiliates
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> \s
--------------
mysql  Ver 14.14 Distrib 5.7.26-29, for Linux (x86_64) using  6.2

Connection id:          1
Current database:       commerce
Current user:           vt_app@localhost
SSL:                    Not in use
Current pager:          stdout
Using outfile:          ''
Using delimiter:        ;
Server version:         5.5.10-Vitess Percona Server (GPL), Release 23, Revision 500fcf5
Protocol version:       10
Connection:             vtgate-zone1 via TCP/IP
Server characterset:    utf8
Db     characterset:    utf8
Client characterset:    utf8
Conn.  characterset:    utf8
TCP port:               3306
--------------

mysql> show tables;
+--------------------+
| Tables_in_commerce |
+--------------------+
| corder             |
| customer           |
| product            |
+--------------------+
3 rows in set (0.01 sec)
```
Summary of issues:
Read Part I of this series: Introduction to Vitess on Kubernetes for MySQL – Part I
Read Part III of this series: Setup and Deploy Vitess on Kubernetes (Minikube) for MySQL – Part III