
Webinar Thursday December 29: JSON in MySQL 5.7

Latest MySQL Performance Blog posts - December 27, 2016 - 2:44pm

Please join Percona’s Consultant David Ducos on Thursday, December 29, 2016 at 10:00 am PST / 1:00 pm EST (UTC-8) as he presents JSON in MySQL 5.7.

Since it was implemented in MySQL 5.7, we can use JSON as a data type. In this webinar, we will review some of the useful functions that have been added to work with JSON.
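
For instance, a quick sketch of what this looks like in practice (the table and data here are made up for illustration):

CREATE TABLE events (
  id  INT AUTO_INCREMENT PRIMARY KEY,
  doc JSON
);

INSERT INTO events (doc) VALUES ('{"user": "alice", "tags": ["mysql", "json"]}');

-- extract a value with JSON_EXTRACT(), or its -> shorthand
SELECT doc->'$.user' FROM events;

-- modify a document in place with JSON_SET()
UPDATE events SET doc = JSON_SET(doc, '$.seen', TRUE);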

We will examine and analyze how JSON works internally, and take into account some of the costs related to employing this new technology. 

At the end of the webinar, you will know the answers to the following questions: 

  • What is JSON?
  • Why don’t we keep using VARCHAR?
  • How does it work? 
  • What are the costs?
  • What limitations should we take into account?
  • What are the benefits of using MySQL JSON support?

Register for the webinar here.

David Ducos, Percona Consultant

David studied Computer Science at the National University of La Plata, and has worked as a database consultant since 2008. He worked for three years at a worldwide free-classifieds platform, until he started working for Percona in November 2014 as part of the Consulting team.

Don’t Let a Leap Second Leap on Your Database!

Latest MySQL Performance Blog posts - December 27, 2016 - 1:00pm

This blog discusses how to prepare your database for the new leap second coming in the new year.

At the end of this year, on December 31, 2016, a new leap second gets added. Many of us remember the huge problems this caused back in 2012. Some of our customers asked how they should prepare for this year’s event to avoid any unexpected problems.

It’s a little late, but I thought discussing the issue might still be useful.

The first thing is to make sure your systems avoid abnormally high CPU usage. This was a problem in 2012, due to a Linux kernel bug. After the leap second was added, CPU utilization skyrocketed on many systems, taking down many popular sites. This issue was addressed back in 2012, and similar global problems did not occur in 2015 thanks to those fixes. So it is important to make sure you have an up-to-date Linux kernel version.

It’s worth knowing that in the case of any unpredicted system misbehavior from the leap second, the quick remedy for the CPU spike was restarting services or, in the worst case, rebooting servers.

(Please do not reboot the server without being absolutely sure that your serious problems started exactly when the leap second was added.)


The second thing is to add proper support for the upcoming event. Leap second additions are announced only some months before they take effect, since it isn’t known far in advance exactly when the next one will be needed.

Therefore, you should upgrade your OS tzdata package to prepare your system for the upcoming leap second. You can check whether your OS is already “leap second aware” like this:

zdump -v right/America/Los_Angeles | grep Sat.Dec.31.*2016

A non-updated system returns an empty output. On an updated OS, you should receive something like this:

right/America/Los_Angeles  Sat Dec 31 23:59:60 2016 UTC = Sat Dec 31 15:59:60 2016 PST isdst=0 gmtoff=-28800
right/America/Los_Angeles  Sun Jan  1 00:00:00 2017 UTC = Sat Dec 31 16:00:00 2016 PST isdst=0 gmtoff=-28800

If your systems use the NTP service, though, the above is not necessary (as stated in https://access.redhat.com/solutions/2441291). Still, you should make sure that the NTP services you use are also up-to-date.

With regards to leap second support in MySQL, there is nothing to do, regardless of the version. MySQL doesn’t allow a 60th second in the seconds part of its temporal data types, so when the additional second is added you should expect rows stamped with 59 seconds instead of 60, as described here: https://dev.mysql.com/doc/refman/5.7/en/time-zone-leap-seconds.html
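
A trivial way to see this for yourself (works on any MySQL version):

SELECT CAST('2016-12-31 23:59:60' AS DATETIME);
-- returns NULL with a truncation warning: MySQL temporal types have no 60th second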

Similarly, no serious problems are expected with MongoDB.

Let’s “smear” the second

Many big Internet properties, however, introduced a technique to adapt to the leap second change more gracefully and smoothly, called Leap Smear or Slew. Instead of inserting the additional leap second all at once, the clock slows down slightly, allowing the system to gradually absorb the extra second. This way, the abnormal :60 second notation never appears.

This solution is used by Google, Amazon, Microsoft, and others. You can find a comprehensive document about Google’s use here: https://developers.google.com/time/smear

You can easily introduce this technique with the ntpd -x option or the chronyd slew settings, which are nicely explained in this document: https://developers.redhat.com/blog/2015/06/01/five-different-ways-handle-leap-seconds-ntp/
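
For example, with chrony the slew behavior can be requested in /etc/chrony.conf (a sketch based on the directives discussed in that article; check the defaults of your chrony version):

# slew the clock over the leap second instead of stepping it
leapsecmode slew
maxslewrate 1000
smoothtime 400 0.001 leaponly

With ntpd, starting the daemon with the -x flag gives similar slewing behavior.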

Summary

Make sure your kernel is up-to-date and your NTP service is properly configured, and consider using the Slew/Smear technique to make the change smoother. After the kernel patches in 2012, no major problems occurred in 2015. We expect none this year either (especially if you take the time to prepare properly).

SQL cluster issue, need help please

Latest Forum Posts - December 27, 2016 - 6:43am
Currently we have been running three SQL servers bootstrapped with Percona, with HAProxy as the handler between our app and SQL. Our cluster had a failure where SQL2 and SQL3 stopped handling requests and talking to SQL1. We were able to restart and recover SQL2, but SQL3 is giving us the error below. We are looking for assistance in restoring our cluster to full functionality. We appreciate any help.

● mysql.service - LSB: Start and stop the mysql (Percona XtraDB Cluster) daemon
Loaded: loaded (/etc/init.d/mysql)
Active: failed (Result: exit-code) since Fri 2016-12-23 10:24:27 UTC; 4 days ago
Process: 7108 ExecStart=/etc/init.d/mysql start (code=exited, status=1/FAILURE)

Dec 23 10:24:27 nj-sql3 mysql[7108]: Stale sst_in_progress file in datadir: mysqldStarting MySQL (Percona XtraDB Cluster) database server: mysqldState transfer in progress, setting sleep higher: mysqld . . .The server quit without updating PID file (/var/run/mysqld/mysqld.pid). ... failed!
Dec 23 10:24:27 nj-sql3 mysql[7108]: failed!
Dec 23 10:24:27 nj-sql3 systemd[1]: mysql.service: control process exited, code=exited status=1
Dec 23 10:24:27 nj-sql3 systemd[1]: Failed to start LSB: Start and stop the mysql (Percona XtraDB Cluster) daemon.
Dec 23 10:24:27 nj-sql3 systemd[1]: Unit mysql.service entered failed state.
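
For what it’s worth, the “Stale sst_in_progress file in datadir” message usually indicates an earlier state transfer was interrupted. A commonly suggested first step is sketched below; the datadir path is an assumption, and you should confirm the other two nodes are healthy donors before restarting:

# on the failed node (nj-sql3); /var/lib/mysql is an assumed datadir
sudo rm /var/lib/mysql/sst_in_progress
sudo service mysql start   # the node should then request a fresh SST from a donor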

TokuDB on Percona MySQL 5.5

Latest Forum Posts - December 27, 2016 - 4:07am
Hello,

I have a Percona MySQL 5.5 server. I want to install a compatible TokuDB plugin and then use it on the same server.
Does MySQL 5.5 support TokuDB? I ask because I cannot find it on the Downloads page:
https://www.percona.com/downloads/Pe...er-5.5/LATEST/

At the same time, TokuDB is available for MySQL 5.6 here:
https://www.percona.com/downloads/Pe...er-5.6/LATEST/

I cannot find any statement that MySQL 5.5 does not support it.
Please help and clarify how I can install and use TokuDB on MySQL 5.5.
Thank you.

Client Doesn't Show Up in Grafana

Latest Forum Posts - December 26, 2016 - 9:20am
My client doesn't show up in Grafana. The PMM server is on an AWS machine and has ports 80 and 443 open.
Does it need any more ports open for incoming connections from the PMM client?
[root@XYZ sbin]# pmm-admin ping
OK, PMM server is alive.

PMM Server | XX.YY.ZZ.AA
Client Name | ABC
Client Address | AA.BB.CC.DD
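
For diagnosing this kind of issue, two read-only PMM 1.x commands are useful (a sketch, not a guaranteed fix; note that in PMM 1.x the server's Prometheus also connects back to the exporter ports on the client, so those must be reachable from the server, not just ports 80/443 on the server):

pmm-admin list            # shows which services are registered on this client
pmm-admin check-network   # tests connectivity in both directions and reports unreachable endpoints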

Percona Server for MongoDB 3.4 Beta is now available

Latest MySQL Performance Blog posts - December 23, 2016 - 6:43am

Percona is pleased to announce the release of Percona Server for MongoDB 3.4.0-1.0beta on December 23, 2016. Download the latest version from the Percona web site or the Percona Software Repositories.

NOTE: Beta packages are available from the testing repository.

Percona Server for MongoDB is an enhanced, open source, fully compatible, highly scalable, zero-maintenance downtime database supporting the MongoDB v3.4 protocol and drivers. It extends MongoDB with Percona Memory Engine and MongoRocks storage engine, as well as adding features like external authentication, audit logging, and profiling rate limiting. Percona Server for MongoDB requires no changes to MongoDB applications or code.

This beta release is based on MongoDB 3.4.0 and includes the following additional changes:

  • Red Hat Enterprise Linux 5 and derivatives (including CentOS 5) are no longer supported.
  • MongoRocks is now based on RocksDB 4.11.
  • PerconaFT and TokuBackup were removed.
    As alternatives, we recommend using MongoRocks for write-heavy workloads and Hot Backup for physical data backups on a running server.

Percona Server for MongoDB 3.4.0-1.0beta release notes are available in the official documentation.

pt-online-schema-change stops with EXPLAIN statement error in Percona Toolkit 2.2.17

Latest Forum Posts - December 22, 2016 - 12:25pm
I want to alter the primary key of a table using the pt-online-schema-change tool on a MySQL 5.5 master DB. I tested two different versions of Percona Toolkit. Version 2.1.0 worked correctly and changed the schema of the table. However, 2.2.17 does not work, and gives me this error:

2016-12-22T20:06:33 Error copying rows from `management`.`table1` to `management`.`_table1_new`: Error executing EXPLAIN SELECT /*!40001 SQL_NO_CACHE */ `column1`, `column1`, `column2`, `column1`, `column2`, `column3` FROM `management`.`table1` FORCE INDEX(`PRIMARY`) WHERE ((`column1` > ?) OR (`column1` = ? AND `column2` > ?) OR (`column1` = ? AND `column2` = ? AND `column3` >= ?)) ORDER BY `column1`, `column2`, `column3` LIMIT ?, 2 /*next chunk boundary*/: DBD::mysql::st execute failed: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''999', 2 /*next chunk boundary*/' at line 1 [for Statement "EXPLAIN SELECT /*!40001 SQL_NO_CACHE */ `column1`, `column1`, `column2`, `column1`, `column2`, `column3` FROM `management`.`table1` FORCE INDEX(`PRIMARY`) WHERE ((`column1` > ?) OR (`column1` = ? AND `column2` > ?) OR (`column1` = ? AND `column2` = ? AND `column3` >= ?)) ORDER BY `column1`, `column2`, `column3` LIMIT ?, 2 /*next chunk boundary*/"] at PerconaToolkit2.2.17/bin/pt-online-schema-change line 10883, <STDIN> line 1.

The command I am using to test is:

PerconaToolkit2.2.17/bin/pt-online-schema-change --ask-pass --host=localhost --port=8895 --socket=/Database1/mysql/state/mysql.sock --user=root --print --progress=percentage,1 --execute --nocheck-alter --alter "DROP PRIMARY KEY, ADD PRIMARY KEY (column1, column2, column3, column4)" D=management,t=table1

Can you help me figure out what the issue is here?

toku_recover_fassociate: Assertion `r==0' failed (errno=2)

Latest Forum Posts - December 22, 2016 - 8:58am
We're attempting to transfer our DB from one server to another using the TokuDB hot backup plugin. Our database is primarily TokuDB, with the system tables left as InnoDB. I'm able to start up the new server with TokuDB and Toku hot backup in place. When I change the datadir to use the data from the old server, I get this error:

/mnt/workspace/percona-server-5.7-debian-binary/label_exp/debian-jessie-64bit/percona-server-5.7-5.7.16-10/storage/tokudb/PerconaFT/ft/logger/recover.cc:471 toku_recover_fassociate: Assertion `r==0' failed (errno=2): No such file or directory

The backup process we have is this:
Log into server 1
Set toku_backup_dir to our cloud storage drive
Once that's finished, unmount our CSD

Go to server 2
Mount the CSD
Stop MySQL
rsync -avP from the CSD to the new MySQL dir
Update the my.cnf file to use the new MySQL dir, which contains the old server's data
Start MySQL


Any ideas what could be causing the error?


The output looks like this:


mysqld_safe Transparent huge pages are already set to: never.
mysqld_safe Starting mysqld daemon with databases from /var/local/mysql
[Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
[Warning] 'NO_AUTO_CREATE_USER' sql mode was not set.
[Note] /usr/sbin/mysqld (mysqld 5.7.16-10) starting as process 5204 ...
[Note] InnoDB: PUNCH HOLE support available
[Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
[Note] InnoDB: Uses event mutexes
[Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
[Note] InnoDB: Compressed tables use zlib 1.2.8
[Note] InnoDB: Using Linux native AIO
[Note] InnoDB: Number of pools: 1
[Note] InnoDB: Using CPU crc32 instructions
[Note] InnoDB: Initializing buffer pool, total size = 8G, instances = 8, chunk size = 128M
[Note] InnoDB: Completed initialization of buffer pool
[Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
[Note] InnoDB: Recovering partial pages from the parallel doublewrite buffer at /var/local/mysql/xb_doublewrite
[Note] InnoDB: Highest supported file format is Barracuda.
[Note] InnoDB: Log scan progressed past the checkpoint lsn 39748453726411
[Note] InnoDB: Doing recovery: scanned up to log sequence number 39748453726420
[Note] InnoDB: Doing recovery: scanned up to log sequence number 39748453726420
[Note] InnoDB: Database was not shutdown normally!
[Note] InnoDB: Starting crash recovery.
[Note] InnoDB: Created parallel doublewrite buffer at /var/local/mysql/xb_doublewrite, size 31457280 bytes
[Note] InnoDB: Last MySQL binlog file position 0 29185505, file name mysql-bin.011878
[Note] InnoDB: Removed temporary tablespace data file: "ibtmp1"
[Note] InnoDB: Creating shared tablespace for temporary tables
[Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
[Note] InnoDB: File './ibtmp1' size is now 12 MB.
[Note] InnoDB: 96 redo rollback segment(s) found. 96 redo rollback segment(s) are active.
[Note] InnoDB: 32 non-redo rollback segment(s) are active.
[Note] InnoDB: Waiting for purge to start
[Note] InnoDB: Percona XtraDB (http://www.percona.com) 5.7.16-10 started; log sequence number 39748453726420
[Note] InnoDB: Loading buffer pool(s) from /var/local/mysql/ib_buffer_pool
[Note] Plugin 'FEDERATED' is disabled.
[Note] InnoDB: Buffer pool(s) load completed at 161222 16:42:04
PerconaFT recovery starting in env /var/local/mysql/
PerconaFT recovery scanning backward from 315924479769
PerconaFT recovery bw_begin_checkpoint at 315924370568 timestamp 1482366079344546 (bw_newer)
PerconaFT recovery bw_end_checkpoint at 315924043645 timestamp 1482366064849133 xid 315923076503 (bw_newer)
PerconaFT recovery bw_begin_checkpoint at 315923076503 timestamp 1482366019344284 (bw_between)
PerconaFT recovery turning around at begin checkpoint 315923076503 time 45504849
PerconaFT recovery starts scanning forward to 315924479769 from 315923076503 left 1403266 (fw_between)
/mnt/workspace/percona-server-5.7-debian-binary/label_exp/debian-jessie-64bit/percona-server-5.7-5.7.16-10/storage/tokudb/PerconaFT/ft/logger/recover.cc:471 toku_recover_fassociate: Assertion `r==0' failed (errno=2): No such file or directory
Backtrace: (Note: toku_do_assert=0x0x7f3fa0d45630)
/usr/lib/mysql/plugin/ha_tokudb.so(_Z19db_env_do_backtraceP8_IO_FILE+0x1b)[0x7f3fa0d6ffbb]
/usr/lib/mysql/plugin/ha_tokudb.so(+0x13d0e3)[0x7f3fa0d700e3]
/usr/lib/mysql/plugin/ha_tokudb.so(+0x11262e)[0x7f3fa0d4562e]
/usr/lib/mysql/plugin/ha_tokudb.so(_Z14tokuft_recoverP13__toku_db_envPFvS0_P7tokutxnEPFvS0_P10cachetableEP10tokuloggerPKcSC_PFiP9__toku_dbPK10__toku_dbtSH_EPFiSE_SH_SH_SH_PFvSH_PvESK_EPFiSE_SE_P9DBT_ARRAYSQ_SH_SH_EPFiSE_SE_SQ_SH_SH_Em+0x267b)[0x7f3fa0d6441b]
/usr/lib/mysql/plugin/ha_tokudb.so(+0x88c57)[0x7f3fa0cbbc57]
/usr/lib/mysql/plugin/ha_tokudb.so(+0x63d61)[0x7f3fa0c96d61]
/usr/sbin/mysqld(_Z24ha_initialize_handlertonP13st_plugin_int+0x51)[0x7f67c1]
/usr/sbin/mysqld[0xc7fbb6]
/usr/sbin/mysqld[0xc8552f]
/usr/sbin/mysqld(_Z11plugin_initPiPPci+0x7b7)[0xc874b7]
/usr/sbin/mysqld[0x78fa4c]
/usr/sbin/mysqld(_Z11mysqld_mainiPPc+0x7f7)[0x791027]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5)[0x7f41eb38ab45]
/usr/sbin/mysqld[0x787664]
Engine status function not available
Memory usage:
Arena 0:
system bytes = 0
in use bytes = 0
Total (incl. mmap):
system bytes = 0
in use bytes = 0
max mmap regions = 0
max mmap bytes = 0
16:42:10 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
Attempting to collect some information that could help diagnose the problem.
As this is a crash and something is definitely wrong, the information
collection process might fail.
Please help us make Percona Server better by reporting any
bugs at http://bugs.percona.com/

key_buffer_size=67108864
read_buffer_size=131072
max_used_connections=0
max_threads=129
thread_count=0
connection_count=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 116499 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x30000
/usr/sbin/mysqld(my_print_stacktrace+0x2c)[0xe832dc]
/usr/sbin/mysqld(handle_fatal_signal+0x479)[0x797679]
/lib/x86_64-linux-gnu/libpthread.so.0(+0xf8d0)[0x7f41ed4188d0]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0x37)[0x7f41eb39e067]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x148)[0x7f41eb39f448]
/usr/lib/mysql/plugin/ha_tokudb.so(+0x13d0e8)[0x7f3fa0d700e8]
/usr/lib/mysql/plugin/ha_tokudb.so(+0x11262e)[0x7f3fa0d4562e]
/usr/lib/mysql/plugin/ha_tokudb.so(_Z14tokuft_recoverP13__toku_db_envPFvS0_P7tokutxnEPFvS0_P10cachetableEP10tokuloggerPKcSC_PFiP9__toku_dbPK10__toku_dbtSH_EPFiSE_SH_SH_SH_PFvSH_PvESK_EPFiSE_SE_P9DBT_ARRAYSQ_SH_SH_EPFiSE_SE_SQ_SH_SH_Em+0x267b)[0x7f3fa0d6441b]
/usr/lib/mysql/plugin/ha_tokudb.so(+0x88c57)[0x7f3fa0cbbc57]
/usr/lib/mysql/plugin/ha_tokudb.so(+0x63d61)[0x7f3fa0c96d61]
/usr/sbin/mysqld(_Z24ha_initialize_handlertonP13st_plugin_int+0x51)[0x7f67c1]
/usr/sbin/mysqld[0xc7fbb6]
/usr/sbin/mysqld[0xc8552f]
/usr/sbin/mysqld(_Z11plugin_initPiPPci+0x7b7)[0xc874b7]
/usr/sbin/mysqld[0x78fa4c]
/usr/sbin/mysqld(_Z11mysqld_mainiPPc+0x7f7)[0x791027]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5)[0x7f41eb38ab45]
/usr/sbin/mysqld[0x787664]
You may download the Percona Server operations manual by visiting
http://www.percona.com/software/percona-server/. You may find information
in the manual which will help you identify the cause of the crash.

pt-table-checksum - SQL Permissions

Latest Forum Posts - December 22, 2016 - 6:03am
Hi,

I'm using pt-table-checksum in my environment, with the same SQL user running the tool, via this command:

pt-table-checksum --ignore-databases mysql h=$masterHost,P=$masterPort,u=$slaveUser,p=$slavePass

What is the minimum set of permissions the SQL user needs to run it?

Thanks!
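
For reference, the grant set commonly cited for pt-table-checksum looks roughly like the sketch below (the user/host and the default percona checksum database are placeholders; verify against the Toolkit docs for your version):

GRANT SELECT, PROCESS, SUPER, REPLICATION SLAVE ON *.* TO 'checksum_user'@'%';
-- the tool also writes its results table (percona.checksums by default)
GRANT CREATE, INSERT, UPDATE, DELETE ON percona.* TO 'checksum_user'@'%';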

Backup for pmm-server and data container

Latest Forum Posts - December 21, 2016 - 8:25pm
Hello, do you have any suggested process, method, or scripts for backing up the pmm-server and data containers?
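
One generic Docker approach (a sketch only, assuming the standard pmm-data volume layout from the PMM 1.x install docs) is to archive the data volumes while the server container is stopped:

docker stop pmm-server
# archive the volumes owned by the pmm-data container
docker run --rm --volumes-from pmm-data -v $(pwd):/backup busybox \
  tar czf /backup/pmm-data-backup.tar.gz \
  /opt/prometheus/data /opt/consul-data /var/lib/mysql /var/lib/grafana
docker start pmm-server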

Percona Blog Poll: What Programming Languages are You Using for Backend Development?

Latest MySQL Performance Blog posts - December 21, 2016 - 10:53am

Take Percona’s blog poll on what programming languages you’re using for backend development.

While customers and users focus on and interact with applications and websites, these are really just the tip of the iceberg of the end-to-end system that allows applications to run. The backend is what makes a website or application work, and it has three parts: server, application, and database. A backend operation can be a web application communicating with the server to make a change in a database stored on that server. Technologies like PHP, Ruby, and Python are what backend programmers use to make this communication work smoothly, allowing a customer to, for example, purchase a ticket with ease.

Backend programmers might not get a lot of credit, but they are the ones that design, maintain and repair the machinery that powers a system.

Please take a few seconds and answer the following poll on backend programming languages. Which are you using? Help the community learn what languages help solve critical database issues. Please select from one to six languages as they apply to your environment.

If you’re using other languages, or have specific issues, feel free to comment below. We’ll post a follow-up blog with the results!

Note: There is a poll embedded within this post, please visit the site to participate in this post's poll.

Percona Poll Results: What Database Technologies Are You Using?

Latest MySQL Performance Blog posts - December 21, 2016 - 10:46am

This blog shows the results from Percona’s poll on what database technologies our readers use in their environment.

We design different databases for different scenarios. Using one database technology for every situation doesn’t make sense, and can lead to non-optimal solutions for common issues. Big data and IoT applications, high availability, secure backups, security, cloud vs. on-premises deployment: each have a set of requirements that might need a special technology. Relational, document-based, key-value, graphical, column family – there are many options for many problems. More and more, database environments combine more than one solution to address the various needs of an enterprise or application (known as polyglot persistence).

The following are the results of our poll on database technologies:

Note: There is a poll embedded within this post, please visit the site to participate in this post's poll.

We’ve concluded our database technology poll that looks at the technologies our readers are running in 2016. Thank you to the more than 1500 people who responded! Let’s look at what the poll results tell us, and how they compare to the similar poll we did in 2013.

Since the wording of the two poll questions is slightly different, the results won’t be directly comparable.  

First, let’s set the record straight: this poll does not try to be an unbiased, open source database technology poll. We understand our audience likely has many more MySQL and MongoDB users than other technologies. So we should look at the poll results as “how MySQL and MongoDB users look at open source database technology.”

It’s interesting to examine which technologies we chose to include in our 2016 poll, compared to the 2013 poll. The most drastic change can be seen in the full-text search technologies: this time, we decided not to include Lucene and Sphinx. ElasticSearch, which wasn’t included back in 2013, is now the leading full-text search technology. This corresponds to what we see among our customers.

The shift between Redis and Memcached is also interesting. Back in 2013, Memcached was the clear winner among supporting technologies. In 2016, Redis is well ahead.

We didn’t ask about PostgreSQL back in 2013 (few people probably ran PostgreSQL alongside MySQL then). Today our poll demonstrates its very strong showing.

We are also excited to see MongoDB’s strong ranking in the poll, which we interpret both as a result of the huge popularity of this technology and as recognition of our success as MongoDB support and services provider. We’ve been in the MongoDB solutions business for less than two years, and already seem to have a significant audience among MongoDB users.

In looking at other technologies mentioned, it is interesting to see that Couchbase and Riak were mentioned by fewer people than in 2013, while Cassandra came in about the same. I don’t necessarily see it as diminishing popularity for these technologies, but as potentially separate communities forming that don’t extensively cross-pollinate.

Kafka also deserves special recognition: first released in January 2011, it already got a mention back in our 2013 poll. Our current poll shows it at 7%, which is a much larger number than might be expected, as Kafka is typically used in complicated, large-scale applications.

Thank you for participating!

Installing Percona Monitoring and Management on Google Container Engine (Kubernetes)

Latest MySQL Performance Blog posts - December 21, 2016 - 10:19am

This blog post discusses installing Percona Monitoring and Management (PMM) on Google Container Engine (Kubernetes).

I am working with a client that is on Google Cloud Services (GCS) and wants to use Percona Monitoring and Management (PMM). They liked the idea of using Google Container Engine (GKE) to manage the docker container that pmm-server uses.

The regular install instructions are here: https://www.percona.com/doc/percona-monitoring-and-management/install.html

Since Google Container Engine runs on Kubernetes, we had to make some interesting changes to the usual server install instructions.

First, you will want to get the gcloud shell. This is done by clicking the gcloud shell button at the top right of your screen when logged into your GCS project.

Once you are in the shell, you just need to run some commands to get up and running.

Let’s set our availability zone and region:

manjot_singh@googleproject:~$ gcloud config set compute/zone asia-east1-c
Updated property [compute/zone].

Then let’s set up our auth:

manjot_singh@googleproject:~$ gcloud auth application-default login
...
These credentials will be used by any library that requests Application Default Credentials.

Now we are ready to go.

Normally, we create a persistent container called pmm-data to hold the data the server collects and survive container deletions and upgrades. For GCS, we will create persistent disks, and use the minimum (Google) recommended size for each.

manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-prom-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-prom-data-pv].
NAME              ZONE          SIZE_GB  TYPE         STATUS
pmm-prom-data-pv  asia-east1-c  200      pd-standard  READY
manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-consul-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-consul-data-pv].
NAME                ZONE          SIZE_GB  TYPE         STATUS
pmm-consul-data-pv  asia-east1-c  200      pd-standard  READY
manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-mysql-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-mysql-data-pv].
NAME               ZONE          SIZE_GB  TYPE         STATUS
pmm-mysql-data-pv  asia-east1-c  200      pd-standard  READY
manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-grafana-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-grafana-data-pv].
NAME                 ZONE          SIZE_GB  TYPE         STATUS
pmm-grafana-data-pv  asia-east1-c  200      pd-standard  READY

Ignoring messages about disk formatting, we are ready to create our Kubernetes cluster:

manjot_singh@googleproject:~$ gcloud container clusters create pmm-server --num-nodes 1 --machine-type n1-standard-2
Creating cluster pmm-server...done.
Created [https://container.googleapis.com/v1/projects/googleproject/zones/asia-east1-c/clusters/pmm-server].
kubeconfig entry generated for pmm-server.
NAME        ZONE          MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
pmm-server  asia-east1-c  1.4.6           999.911.999.91  n1-standard-2  1.4.6         1          RUNNING

You should now see something like:

manjot_singh@googleproject:~$ gcloud compute instances list
NAME                                       ZONE          MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
gke-pmm-server-default-pool-73b3f656-20t0  asia-east1-c  n1-standard-2               10.14.10.14  911.119.999.11  RUNNING

Now that our container manager is up, we need to create two configs for the “pod” we are creating to run our container. One will be used only to initialize the server and move the container drives to the persistent disks, and the second will be the actual running server.

manjot_singh@googleproject:~$ vi pmm-server-init.json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "pmm-server",
    "labels": {
      "name": "pmm-server"
    }
  },
  "spec": {
    "containers": [{
      "name": "pmm-server",
      "image": "percona/pmm-server:1.0.6",
      "env": [{
        "name": "SERVER_USER",
        "value": "http_user"
      },{
        "name": "SERVER_PASSWORD",
        "value": "http_password"
      },{
        "name": "ORCHESTRATOR_USER",
        "value": "orchestrator"
      },{
        "name": "ORCHESTRATOR_PASSWORD",
        "value": "orch_pass"
      }],
      "ports": [{
        "containerPort": 80
      }],
      "volumeMounts": [{
        "mountPath": "/opt/prometheus/d",
        "name": "pmm-prom-data"
      },{
        "mountPath": "/opt/c",
        "name": "pmm-consul-data"
      },{
        "mountPath": "/var/lib/m",
        "name": "pmm-mysql-data"
      },{
        "mountPath": "/var/lib/g",
        "name": "pmm-grafana-data"
      }]
    }],
    "restartPolicy": "Always",
    "volumes": [{
      "name": "pmm-prom-data",
      "gcePersistentDisk": {
        "pdName": "pmm-prom-data-pv",
        "fsType": "ext4"
      }
    },{
      "name": "pmm-consul-data",
      "gcePersistentDisk": {
        "pdName": "pmm-consul-data-pv",
        "fsType": "ext4"
      }
    },{
      "name": "pmm-mysql-data",
      "gcePersistentDisk": {
        "pdName": "pmm-mysql-data-pv",
        "fsType": "ext4"
      }
    },{
      "name": "pmm-grafana-data",
      "gcePersistentDisk": {
        "pdName": "pmm-grafana-data-pv",
        "fsType": "ext4"
      }
    }]
  }
}

manjot_singh@googleproject:~$ vi pmm-server.json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "pmm-server",
    "labels": {
      "name": "pmm-server"
    }
  },
  "spec": {
    "containers": [{
      "name": "pmm-server",
      "image": "percona/pmm-server:1.0.6",
      "env": [{
        "name": "SERVER_USER",
        "value": "http_user"
      },{
        "name": "SERVER_PASSWORD",
        "value": "http_password"
      },{
        "name": "ORCHESTRATOR_USER",
        "value": "orchestrator"
      },{
        "name": "ORCHESTRATOR_PASSWORD",
        "value": "orch_pass"
      }],
      "ports": [{
        "containerPort": 80
      }],
      "volumeMounts": [{
        "mountPath": "/opt/prometheus/data",
        "name": "pmm-prom-data"
      },{
        "mountPath": "/opt/consul-data",
        "name": "pmm-consul-data"
      },{
        "mountPath": "/var/lib/mysql",
        "name": "pmm-mysql-data"
      },{
        "mountPath": "/var/lib/grafana",
        "name": "pmm-grafana-data"
      }]
    }],
    "restartPolicy": "Always",
    "volumes": [{
      "name": "pmm-prom-data",
      "gcePersistentDisk": {
        "pdName": "pmm-prom-data-pv",
        "fsType": "ext4"
      }
    },{
      "name": "pmm-consul-data",
      "gcePersistentDisk": {
        "pdName": "pmm-consul-data-pv",
        "fsType": "ext4"
      }
    },{
      "name": "pmm-mysql-data",
      "gcePersistentDisk": {
        "pdName": "pmm-mysql-data-pv",
        "fsType": "ext4"
      }
    },{
      "name": "pmm-grafana-data",
      "gcePersistentDisk": {
        "pdName": "pmm-grafana-data-pv",
        "fsType": "ext4"
      }
    }]
  }
}

Then create it:

manjot_singh@googleproject:~$ kubectl create -f pmm-server-init.json
pod "pmm-server" created

Now we need to move data to persistent disks:

manjot_singh@googleproject:~$ kubectl exec -it pmm-server bash
root@pmm-server:/opt# supervisorctl stop grafana
grafana: stopped
root@pmm-server:/opt# supervisorctl stop prometheus
prometheus: stopped
root@pmm-server:/opt# supervisorctl stop consul
consul: stopped
root@pmm-server:/opt# supervisorctl stop mysql
mysql: stopped
root@pmm-server:/opt# mv consul-data/* c/
root@pmm-server:/opt# chown pmm.pmm c
root@pmm-server:/opt# cd prometheus/
root@pmm-server:/opt/prometheus# mv data/* d/
root@pmm-server:/opt/prometheus# chown pmm.pmm d
root@pmm-server:/opt/prometheus# cd /var/lib
root@pmm-server:/var/lib# mv mysql/* m/
root@pmm-server:/var/lib# chown mysql.mysql m
root@pmm-server:/var/lib# mv grafana/* g/
root@pmm-server:/var/lib# chown grafana.grafana g
root@pmm-server:/var/lib# exit
manjot_singh@googleproject:~$ kubectl delete pods pmm-server
pod "pmm-server" deleted

Now recreate the pmm-server container with the actual configuration:

manjot_singh@googleproject:~$ kubectl create -f pmm-server.json
pod "pmm-server" created

It’s up!

Now let’s get access to it by exposing it to the internet:

manjot_singh@googleproject:~$ kubectl expose deployment pmm-server --type=LoadBalancer
service "pmm-server" exposed

You can get more information on this by running:

manjot_singh@googleproject:~$ kubectl describe services pmm-server
Name:             pmm-server
Namespace:        default
Labels:           run=pmm-server
Selector:         run=pmm-server
Type:             LoadBalancer
IP:               10.3.10.3
Port:             <unset>  80/TCP
NodePort:         <unset>  31757/TCP
Endpoints:        10.0.0.8:80
Session Affinity: None
Events:
  FirstSeen  LastSeen  Count  From                   SubobjectPath  Type    Reason                Message
  ---------  --------  -----  ----                   -------------  ------  ------                -------
  22s        22s       1      {service-controller }                 Normal  CreatingLoadBalancer  Creating load balancer

To find the public IP of your PMM server, look under “EXTERNAL-IP”:

manjot_singh@googleproject:~$ kubectl get services
NAME         CLUSTER-IP   EXTERNAL-IP      PORT(S)  AGE
kubernetes   10.3.10.3    <none>           443/TCP  7m
pmm-server   10.3.10.99   999.911.991.91   80/TCP   1m

That’s it, just visit the external IP in your browser and you should see the PMM landing page!

One of the things we didn’t resolve was accessing the pmm-server container from within the VPC. The client had to go out over the open internet and hit PMM via the public IP. I hope to work on this some more and resolve it in the future.

I have also talked to our team about making mounts for persistent disks easier, so that we can use fewer mounts and make the configuration and setup simpler.

How to get slow query logs for MySQL and analyze them with pt-query-digest?

Latest Forum Posts - December 21, 2016 - 4:44am
I have enabled the slow query log in the config file. When I execute pt-query-digest filename, it just shows the query size. It does not show the complete output like it does for the tcpdump option. How do I analyze the slow query logs?
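
For reference, a minimal setup sketch (the file paths and the one-second threshold are assumptions, not values from the post):

# in my.cnf, then restart MySQL
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/lib/mysql/slow.log
long_query_time     = 1

# once the log has collected some queries:
pt-query-digest /var/lib/mysql/slow.log > digest-report.txt

If the report shows little beyond query size, the log may simply contain no qualifying queries yet.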

Percona XtraDB Cluster 5.7.16-27.19 is now available

Latest Forum Posts - December 21, 2016 - 2:04am
Percona announces the release of Percona XtraDB Cluster 5.7.16-27.19. Binaries are available from the downloads section or our software repositories.

Percona XtraDB Cluster 5.7.16-27.19 is now the current release. All Percona software is open-source and free.

Percona XtraDB Cluster 5.6.34-26.19 is now available

Latest Forum Posts - December 21, 2016 - 2:02am
Percona announces the release of Percona XtraDB Cluster 5.6.34-26.19. Binaries are available from the downloads section or our software repositories.

Percona XtraDB Cluster 5.6.34-26.19 is now the current release. All Percona software is open-source and free. Details of this release can be found in the 5.6.34-26.19 milestone on Launchpad.

how to enable tablestats=OFF to ON

Latest Forum Posts - December 20, 2016 - 9:44pm
Hello weber,

I want to know how to force tablestats from OFF to ON.
Currently I have over 1000 tables.



about aurora db

Latest Forum Posts - December 20, 2016 - 6:45pm
Hello,
Is it possible to collect Aurora v5.6 RDS data without the performance_schema option enabled, on PMM v1.0.7?


