
Don’t Let a Leap Second Leap on Your Database!

December 27, 2016 - 1:00pm

This blog discusses how to prepare your database for the new leap second coming in the new year.

At the end of this year, on December 31, 2016, a new leap second gets added. Many of us remember the huge problems this caused back in 2012. Some of our customers asked how they should prepare for this year’s event to avoid any unexpected problems.

It’s a little late, but I thought discussing the issue might still be useful.

The first thing is to make sure your systems avoid the issue of abnormally high CPU usage. This was a problem in 2012 due to a Linux kernel bug: after the leap second was added, CPU utilization skyrocketed on many systems, taking down many popular sites. The issue was addressed back in 2012, and thanks to those fixes similar global problems did not occur in 2015. So it is important to make sure you are running an up-to-date Linux kernel.

It’s worth knowing that in the case of any unpredicted system misbehavior from the leap second, the quick remedy for the runaway CPU usage in 2012 was restarting services, or rebooting servers in the worst case.

(Please do not reboot a server without being absolutely sure that your problems started exactly when the leap second was added.)
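
For reference, the quick workaround that circulated widely in 2012 (besides restarting services) was simply re-setting the system clock, which cleared the kernel timer state behind the runaway CPU usage:

# Widely reported 2012 workaround: re-set the clock to its current value
# (stop ntpd first so it doesn't interfere)
date -s "$(date)"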


The second thing is to add proper support for the upcoming event. Leap second insertions are announced only some months before they happen, as it isn’t known far in advance exactly when the next one will be needed.

Therefore, you should upgrade your OS’s tzdata package to prepare the system for the upcoming leap second. You can check whether your OS is already “leap second aware” with:

zdump -v right/America/Los_Angeles | grep Sat.Dec.31.*2016

A non-updated system returns an empty output. On an updated OS, you should receive something like this:

right/America/Los_Angeles  Sat Dec 31 23:59:60 2016 UTC = Sat Dec 31 15:59:60 2016 PST isdst=0 gmtoff=-28800
right/America/Los_Angeles  Sun Jan  1 00:00:00 2017 UTC = Sat Dec 31 16:00:00 2016 PST isdst=0 gmtoff=-28800

If your systems use the NTP service, though, the above is not necessary (as stated in https://access.redhat.com/solutions/2441291). Still, you should make sure that the NTP services you use are also up-to-date.
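
As a quick sanity check, most NTP daemons can tell you whether they have been notified of the pending leap second. A minimal sketch, assuming ntpd with the ntpq utility installed (chrony users can check the “Leap status” line of chronyc tracking):

# Query the leap indicator from the local ntpd;
# leap=01 means a second will be inserted at the end of the current day
ntpq -c "rv 0 leap"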

With regard to leap second support in MySQL, there is nothing to do, regardless of the version. MySQL doesn’t allow a :60 value in the seconds part of its temporal datatypes, so when the additional second is inserted you should expect rows stamped with :59 instead of :60, as described here: https://dev.mysql.com/doc/refman/5.7/en/time-zone-leap-seconds.html
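
To illustrate the documented behavior (a hypothetical session, not real output): a timestamp taken during the inserted second is reported as :59:59 again rather than :59:60:

# Hypothetical illustration of MySQL's documented leap second handling
mysql -e "SELECT NOW();"   # 2016-12-31 23:59:59  (the "real" last second)
# ...the leap second 23:59:60 UTC elapses...
mysql -e "SELECT NOW();"   # 2016-12-31 23:59:59  (repeated, never :59:60)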

Similarly, no serious problems are expected for MongoDB.

Let’s “smear” the second

Many big Internet properties, however, have adopted a technique called Leap Smear (or Slew) to handle the leap second change more gracefully and smoothly. Instead of inserting the additional leap second all at once, the clock is slowed down slightly so that it gradually gets back in sync with the new time. This way there is no abnormal :60 second notation and no sudden step in time.

This solution is used by Google, Amazon, Microsoft, and others. You can find a comprehensive document about Google’s use here: https://developers.google.com/time/smear
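
To put a number on it: in the 24-hour linear smear described in Google’s document, the extra second is spread evenly over 86,400 clock seconds, so each smeared second runs only about 11.6 microseconds long:

# Back-of-the-envelope: stretch per second in a 24-hour linear smear
awk 'BEGIN { printf "%.1f microseconds per second\n", 1e6 / (24 * 3600) }'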

You can easily introduce this technique with ntpd’s -x option or chronyd’s slew options, which are nicely explained in this document: https://developers.redhat.com/blog/2015/06/01/five-different-ways-handle-leap-seconds-ntp/
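
As a sketch of what this looks like in practice (directive names are taken from that article; the values are illustrative, not recommendations), for chronyd you would add something like the following to /etc/chrony.conf, while for ntpd you simply add -x to its startup options:

# /etc/chrony.conf (excerpt)
leapsecmode slew                 # slew the clock instead of stepping it
maxslewrate 1000                 # cap the correction rate, in ppm
smoothtime 400 0.001 leaponly    # serve smeared time to NTP clients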

Summary

Make sure your kernel is up-to-date and your NTP service is properly configured, and consider using the Slew/Smear technique to make the change smoother. After the kernel patches in 2012, no major problems happened in 2015. We expect none this year either (especially if you take the time to prepare properly).

Percona Server for MongoDB 3.4 Beta is now available

December 23, 2016 - 6:43am

Percona is pleased to announce the release of Percona Server for MongoDB 3.4.0-1.0beta on December 23, 2016. Download the latest version from the Percona web site or the Percona Software Repositories.

NOTE: Beta packages are available from the testing repository.
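
For example, on a yum-based system that already has the percona-release package installed, enabling the testing repository is a one-line switch. A sketch only: the repository id below is an assumption, so verify it in /etc/yum.repos.d/percona-release.repo first:

# Enable Percona's testing repository (repo id assumed; check the repo file)
sudo yum-config-manager --enable percona-testing-x86_64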

Percona Server for MongoDB is an enhanced, open source, fully compatible, highly scalable, zero-maintenance downtime database supporting the MongoDB v3.4 protocol and drivers. It extends MongoDB with Percona Memory Engine and MongoRocks storage engine, as well as adding features like external authentication, audit logging, and profiling rate limiting. Percona Server for MongoDB requires no changes to MongoDB applications or code.
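
Selecting one of the alternative engines is a mongod startup option. A minimal sketch (the data directory paths are assumptions, and each engine needs its own empty dbpath):

# Start mongod with the MongoRocks engine
mongod --dbpath /var/lib/mongodb-rocks --storageEngine rocksdb
# ...or with the Percona Memory Engine
mongod --dbpath /var/lib/mongodb-inmem --storageEngine inMemory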

This beta release is based on MongoDB 3.4.0 and includes the following additional changes:

  • Red Hat Enterprise Linux 5 and derivatives (including CentOS 5) are no longer supported.
  • MongoRocks is now based on RocksDB 4.11.
  • PerconaFT and TokuBackup were removed.
    As alternatives, we recommend using MongoRocks for write-heavy workloads and Hot Backup for physical data backups on a running server.

Percona Server for MongoDB 3.4.0-1.0beta release notes are available in the official documentation.


Percona Blog Poll: What Programming Languages are You Using for Backend Development?

December 21, 2016 - 10:53am

Take Percona’s blog poll on what programming languages you’re using for backend development.

While customers and users focus on and interact with applications and websites, these are really just the tip of the iceberg of the end-to-end system that allows applications to run. The backend is what makes a website or application work, and it has three parts: server, application, and database. A typical backend operation is a web application communicating with the server to make a change in a database stored on that server. Technologies like PHP, Ruby, Python and others are what backend programmers use to make this communication work smoothly, allowing a customer to, say, purchase a ticket with ease.

Backend programmers might not get a lot of credit, but they are the ones who design, maintain, and repair the machinery that powers a system.

Please take a few seconds and answer the following poll on backend programming languages. Which are you using? Help the community learn what languages help solve critical database issues. Please select from one to six languages as they apply to your environment.

If you’re using other languages, or have specific issues, feel free to comment below. We’ll post a follow-up blog with the results!


Percona Poll Results: What Database Technologies Are You Using?

December 21, 2016 - 10:46am

This blog shows the results from Percona’s poll on what database technologies our readers use in their environment.

We design different databases for different scenarios. Using one database technology for every situation doesn’t make sense, and can lead to non-optimal solutions for common issues. Big data and IoT applications, high availability, secure backups, security, cloud vs. on-premises deployment: each has a set of requirements that might need a special technology. Relational, document-based, key-value, graph, column family: there are many options for many problems. More and more, database environments combine more than one solution to address the various needs of an enterprise or application (an approach known as polyglot persistence).

The results of our poll on database technologies are discussed below.

We’ve concluded our database technology poll that looks at the technologies our readers are running in 2016. Thank you to the more than 1500 people who responded! Let’s look at what the poll results tell us, and how they compare to the similar poll we did in 2013.

Since the wording of the two poll questions is slightly different, the results won’t be directly comparable.  

First, let’s set the record straight: this poll does not try to be an unbiased, open source database technology poll. We understand our audience likely has many more MySQL and MongoDB users than other technologies. So we should look at the poll results as “how MySQL and MongoDB users look at open source database technology.”

It’s interesting to examine which technologies we chose to include in our 2016 poll compared to the 2013 poll. The most drastic change can be seen in full-text search technologies: this time, we decided not to include Lucene and Sphinx, while ElasticSearch, which wasn’t included back in 2013, is now the leading full-text search technology. This corresponds to what we see among our customers.

The shift between Memcached and Redis is also interesting. Back in 2013, Memcached was the clear winner among supporting technologies; in 2016, Redis is well ahead.

We didn’t ask about PostgreSQL back in 2013 (few people probably ran PostgreSQL alongside MySQL then). Today our poll demonstrates its very strong showing.

We are also excited to see MongoDB’s strong ranking in the poll, which we interpret both as a result of the technology’s huge popularity and as recognition of our success as a MongoDB support and services provider. We’ve been in the MongoDB solutions business for less than two years, and already seem to have a significant audience among MongoDB users.

In looking at other technologies mentioned, it is interesting to see that Couchbase and Riak were mentioned by fewer people than in 2013, while Cassandra came in about the same. I don’t necessarily see it as diminishing popularity for these technologies, but as potentially separate communities forming that don’t extensively cross-pollinate.

Kafka also deserves special recognition: with its initial release in January 2011, it barely registered in our 2013 poll, while our current poll shows it at 7%. This is a much larger number than might be expected, as Kafka is typically used in complicated, large-scale applications.

Thank you for participating!

Installing Percona Monitoring and Management on Google Container Engine (Kubernetes)

December 21, 2016 - 10:19am

This blog post discusses installing Percona Monitoring and Management (PMM) on Google Container Engine (Kubernetes).

I am working with a client that is on Google Cloud Services (GCS) and wants to use Percona Monitoring and Management (PMM). They liked the idea of using Google Container Engine (GKE) to manage the docker container that pmm-server uses.

The regular install instructions are here: https://www.percona.com/doc/percona-monitoring-and-management/install.html

Since Google Container Engine runs on Kubernetes, we had to make some interesting changes to the server install instructions.

First, you will want to get the gcloud shell. This is done by clicking the gcloud shell button at the top right of your screen when logged into your GCS project.

Once you are in the shell, you just need to run some commands to get up and running.

Let’s set our availability zone and region:

manjot_singh@googleproject:~$ gcloud config set compute/zone asia-east1-c
Updated property [compute/zone].

Then let’s set up our auth:

manjot_singh@googleproject:~$ gcloud auth application-default login
...
These credentials will be used by any library that requests Application Default Credentials.

Now we are ready to go.

Normally, we create a persistent container called pmm-data to hold the data the server collects, so that it survives container deletions and upgrades. For GCS, we will create persistent disks instead, using the minimum (Google) recommended size for each.

manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-prom-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-prom-data-pv].
NAME              ZONE          SIZE_GB  TYPE         STATUS
pmm-prom-data-pv  asia-east1-c  200      pd-standard  READY
manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-consul-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-consul-data-pv].
NAME                ZONE          SIZE_GB  TYPE         STATUS
pmm-consul-data-pv  asia-east1-c  200      pd-standard  READY
manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-mysql-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-mysql-data-pv].
NAME               ZONE          SIZE_GB  TYPE         STATUS
pmm-mysql-data-pv  asia-east1-c  200      pd-standard  READY
manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-grafana-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-grafana-data-pv].
NAME                 ZONE          SIZE_GB  TYPE         STATUS
pmm-grafana-data-pv  asia-east1-c  200      pd-standard  READY

Ignoring messages about disk formatting, we are ready to create our Kubernetes cluster:

manjot_singh@googleproject:~$ gcloud container clusters create pmm-server --num-nodes 1 --machine-type n1-standard-2
Creating cluster pmm-server...done.
Created [https://container.googleapis.com/v1/projects/googleproject/zones/asia-east1-c/clusters/pmm-server].
kubeconfig entry generated for pmm-server.
NAME        ZONE          MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
pmm-server  asia-east1-c  1.4.6           999.911.999.91  n1-standard-2  1.4.6         1          RUNNING

You should now see something like:

manjot_singh@googleproject:~$ gcloud compute instances list
NAME                                       ZONE          MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
gke-pmm-server-default-pool-73b3f656-20t0  asia-east1-c  n1-standard-2               10.14.10.14  911.119.999.11  RUNNING

Now that our container manager is up, we need to create two configs for the “pod” that will run our container. The first is used only to initialize the server and move the container’s data directories onto the persistent disks; the second is the actual running server.

manjot_singh@googleproject:~$ vi pmm-server-init.json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "pmm-server",
    "labels": {
      "name": "pmm-server"
    }
  },
  "spec": {
    "containers": [{
      "name": "pmm-server",
      "image": "percona/pmm-server:1.0.6",
      "env": [{
        "name": "SERVER_USER",
        "value": "http_user"
      },{
        "name": "SERVER_PASSWORD",
        "value": "http_password"
      },{
        "name": "ORCHESTRATOR_USER",
        "value": "orchestrator"
      },{
        "name": "ORCHESTRATOR_PASSWORD",
        "value": "orch_pass"
      }],
      "ports": [{
        "containerPort": 80
      }],
      "volumeMounts": [{
        "mountPath": "/opt/prometheus/d",
        "name": "pmm-prom-data"
      },{
        "mountPath": "/opt/c",
        "name": "pmm-consul-data"
      },{
        "mountPath": "/var/lib/m",
        "name": "pmm-mysql-data"
      },{
        "mountPath": "/var/lib/g",
        "name": "pmm-grafana-data"
      }]
    }],
    "restartPolicy": "Always",
    "volumes": [{
      "name": "pmm-prom-data",
      "gcePersistentDisk": {
        "pdName": "pmm-prom-data-pv",
        "fsType": "ext4"
      }
    },{
      "name": "pmm-consul-data",
      "gcePersistentDisk": {
        "pdName": "pmm-consul-data-pv",
        "fsType": "ext4"
      }
    },{
      "name": "pmm-mysql-data",
      "gcePersistentDisk": {
        "pdName": "pmm-mysql-data-pv",
        "fsType": "ext4"
      }
    },{
      "name": "pmm-grafana-data",
      "gcePersistentDisk": {
        "pdName": "pmm-grafana-data-pv",
        "fsType": "ext4"
      }
    }]
  }
}

manjot_singh@googleproject:~$ vi pmm-server.json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "pmm-server",
    "labels": {
      "name": "pmm-server"
    }
  },
  "spec": {
    "containers": [{
      "name": "pmm-server",
      "image": "percona/pmm-server:1.0.6",
      "env": [{
        "name": "SERVER_USER",
        "value": "http_user"
      },{
        "name": "SERVER_PASSWORD",
        "value": "http_password"
      },{
        "name": "ORCHESTRATOR_USER",
        "value": "orchestrator"
      },{
        "name": "ORCHESTRATOR_PASSWORD",
        "value": "orch_pass"
      }],
      "ports": [{
        "containerPort": 80
      }],
      "volumeMounts": [{
        "mountPath": "/opt/prometheus/data",
        "name": "pmm-prom-data"
      },{
        "mountPath": "/opt/consul-data",
        "name": "pmm-consul-data"
      },{
        "mountPath": "/var/lib/mysql",
        "name": "pmm-mysql-data"
      },{
        "mountPath": "/var/lib/grafana",
        "name": "pmm-grafana-data"
      }]
    }],
    "restartPolicy": "Always",
    "volumes": [{
      "name": "pmm-prom-data",
      "gcePersistentDisk": {
        "pdName": "pmm-prom-data-pv",
        "fsType": "ext4"
      }
    },{
      "name": "pmm-consul-data",
      "gcePersistentDisk": {
        "pdName": "pmm-consul-data-pv",
        "fsType": "ext4"
      }
    },{
      "name": "pmm-mysql-data",
      "gcePersistentDisk": {
        "pdName": "pmm-mysql-data-pv",
        "fsType": "ext4"
      }
    },{
      "name": "pmm-grafana-data",
      "gcePersistentDisk": {
        "pdName": "pmm-grafana-data-pv",
        "fsType": "ext4"
      }
    }]
  }
}

Then create it:

manjot_singh@googleproject:~$ kubectl create -f pmm-server-init.json
pod "pmm-server" created

Now we need to move data to persistent disks:

manjot_singh@googleproject:~$ kubectl exec -it pmm-server bash
root@pmm-server:/opt# supervisorctl stop grafana
grafana: stopped
root@pmm-server:/opt# supervisorctl stop prometheus
prometheus: stopped
root@pmm-server:/opt# supervisorctl stop consul
consul: stopped
root@pmm-server:/opt# supervisorctl stop mysql
mysql: stopped
root@pmm-server:/opt# mv consul-data/* c/
root@pmm-server:/opt# chown pmm.pmm c
root@pmm-server:/opt# cd prometheus/
root@pmm-server:/opt/prometheus# mv data/* d/
root@pmm-server:/opt/prometheus# chown pmm.pmm d
root@pmm-server:/opt/prometheus# cd /var/lib
root@pmm-server:/var/lib# mv mysql/* m/
root@pmm-server:/var/lib# chown mysql.mysql m
root@pmm-server:/var/lib# mv grafana/* g/
root@pmm-server:/var/lib# chown grafana.grafana g
root@pmm-server:/var/lib# exit
manjot_singh@googleproject:~$ kubectl delete pods pmm-server
pod "pmm-server" deleted

Now recreate the pmm-server container with the actual configuration:

manjot_singh@googleproject:~$ kubectl create -f pmm-server.json
pod "pmm-server" created

It’s up!

Now let’s get access to it by exposing it to the internet:

manjot_singh@googleproject:~$ kubectl expose deployment pmm-server --type=LoadBalancer
service "pmm-server" exposed

You can get more information on this by running:

manjot_singh@googleproject:~$ kubectl describe services pmm-server
Name:                   pmm-server
Namespace:              default
Labels:                 run=pmm-server
Selector:               run=pmm-server
Type:                   LoadBalancer
IP:                     10.3.10.3
Port:                   <unset>  80/TCP
NodePort:               <unset>  31757/TCP
Endpoints:              10.0.0.8:80
Session Affinity:       None
Events:
  FirstSeen  LastSeen  Count  From                   SubobjectPath  Type    Reason                Message
  ---------  --------  -----  ----                   -------------  ------  ------                -------
  22s        22s       1      {service-controller }                 Normal  CreatingLoadBalancer  Creating load balancer

To find the public IP of your PMM server, look under “EXTERNAL-IP”:

manjot_singh@googleproject:~$ kubectl get services
NAME         CLUSTER-IP   EXTERNAL-IP      PORT(S)   AGE
kubernetes   10.3.10.3    <none>           443/TCP   7m
pmm-server   10.3.10.99   999.911.991.91   80/TCP    1m

That’s it! Visit the external IP in your browser and you should see the PMM landing page.

One of the things we didn’t resolve was accessing the pmm-server container from within the VPC: the client had to go out over the open internet and hit PMM via the public IP. I hope to work on this some more and resolve it in the future.
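
One partial mitigation we could have used (untested in this setup) is restricting who can reach the public endpoint via the service’s loadBalancerSourceRanges field, so that only the client’s trusted networks can hit PMM. The CIDR below is a placeholder:

# Limit the exposed service to a trusted network range (example CIDR)
manjot_singh@googleproject:~$ kubectl patch service pmm-server -p '{"spec":{"loadBalancerSourceRanges":["203.0.113.0/24"]}}'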

I have also talked to our team about making mounts for persistent disks easier, so that we can use fewer mounts and simplify the configuration and setup.
