
Update the Signing Key for Percona Debian and Ubuntu Packages

Latest MySQL Performance Blog posts - October 13, 2016 - 11:06am

In this blog post, we’ll explain how to update the signing key for Percona Debian and Ubuntu packages.

Some users might have already noticed the following warning when running apt-get update on Ubuntu 16.04 (Xenial Xerus):

W: Signature by key 430BDF5C56E7C94E848EE60C1C4CBDCDCD2EFD2A uses weak digest algorithm (SHA1)

Percona .deb packages are signed with a key that uses an algorithm now considered weak (SHA-1). Starting with the next release, Debian and Ubuntu packages are signed with a new key that uses the much stronger SHA-512 algorithm, and all future package releases will be signed with it as well.

You’ll need to do one of the following in order to use the new key:

  • If you installed the Percona repository package as described here, it is automatically updated to a new version (percona-release_0.1-4). This package currently contains both the old and the new keys, which makes the transition easier (until all packages are signed with the new key).
  • Install the new Percona repository package as described in the installation guide.
  • Manually download and add the key from a public keyserver (keys.gnupg.net or keyserver.ubuntu.com, for example) by running:
    apt-key adv --keyserver keys.gnupg.net --recv-keys 8507EFA5
    or
    apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 8507EFA5
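To confirm the key was added, something like the following should work (a minimal sketch; 8507EFA5 is the short ID of the new key, matching the NO_PUBKEY warning shown below):

$ apt-key list | grep -i -A1 percona
$ apt-key adv --list-keys 8507EFA5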

It’s important that you add the new key before the next release. Otherwise you’ll see the following warning:

W: GPG error: xenial InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 9334A25F8507EFA5

Leave any questions about updating the signing key for Percona Debian and Ubuntu packages in the comments below.

Validating backup & restore procedure

Latest Forum Posts - October 13, 2016 - 9:11am
We are looking to validate our restore procedure from an xbstream-based backup, because it doesn't seem to be well documented, and we're having some corruption issues during our test restores.

The backup configuration is:

[xtrabackup]
decompress-threads=4

The backup is taken with:

xtrabackup --backup --stream=xbstream > /data/backups/2016-09-26_01-54-10.xbstream
The backup is then restored with these commands:
$ sudo service mysql stop

# Extract the backup to /tmp/restore
$ mkdir -p /tmp/restore
$ xbstream -x -v -C /tmp/restore < /data/backups/2016-09-26_01-54-10.xbstream

# Decompress the backup files, remove compressed versions
$ pushd /tmp/restore
$ for i in `find . -iname "*\.qp"`; do qpress -do $i > $(dirname $i)/$(basename $i .qp) && rm -f $i; done
$ popd

# Move the backup files back to the datadir
$ sudo rsync -avzh /tmp/restore/ /data/mysql/
$ sudo chown -R mysql: /data/mysql
$ sudo service mysql start

# if all goes well, delete /tmp/restore
$ rm -rf /tmp/restore/
However, we're getting some ibdata1/log file position mismatch errors and MySQL is not starting (restoring to the same server):

Oct 13 14:25:27 test mysqld: 2016-10-13 14:25:27 140163571337152 [Note] InnoDB: The log sequence numbers 770055386153 and 770055386153 in ibdata files do not match the log sequence number 1144662140864 in the ib_logfiles!
Oct 13 14:25:27 test mysqld: 2016-10-13 14:25:27 140163571337152 [Note] InnoDB: Database was not shutdown normally!
Oct 13 14:25:27 test mysqld: 2016-10-13 14:25:27 140163571337152 [Note] InnoDB: Starting crash recovery.
Oct 13 14:25:27 test mysqld: 2016-10-13 14:25:27 140163571337152 [Note] InnoDB: Reading tablespace information from the .ibd files...
Oct 13 14:25:27 test mysqld: 2016-10-13 14:25:27 140163571337152 [Note] InnoDB: Restoring possible half-written data pages
Oct 13 14:25:27 test mysqld: 2016-10-13 14:25:27 140163571337152 [Note] InnoDB: from the doublewrite buffer...
Oct 13 14:25:27 test mysqld: InnoDB: Error: trying to access page number 3636950279 in space 0,
Oct 13 14:25:27 test mysqld: InnoDB: space name ./ibdata1,
Oct 13 14:25:27 test mysqld: InnoDB: which is outside the tablespace bounds.
Oct 13 14:25:27 test mysqld: InnoDB: Byte offset 0, len 16384, i/o type 10.
Oct 13 14:25:27 test mysqld: InnoDB: If you get this error at mysqld startup, please check that
Oct 13 14:25:27 test mysqld: InnoDB: your my.cnf matches the ibdata files that you have in the
Oct 13 14:25:27 test mysqld: InnoDB: MySQL server.

Does anyone have any ideas on whether the restore procedure we're using is valid? Thanks, Matt

MySQL 5.7 Performance Tuning Immediately After Installation

Latest MySQL Performance Blog posts - October 12, 2016 - 2:52pm

This blog updates Stephane Combaudon’s blog on MySQL performance tuning, and covers MySQL 5.7 performance tuning immediately after installation.

A few years ago, Stephane Combaudon wrote a blog post on Ten MySQL performance tuning settings after installation that covers the (now) older versions of MySQL: 5.1, 5.5 and 5.6. In this post, I will look into what to tune in MySQL 5.7 (with a focus on InnoDB).

The good news is that MySQL 5.7 has significantly better default values. Morgan Tocker created a page with a complete list of features in MySQL 5.7, and it is a great reference point.

In MySQL 5.7, there are only four really important variables that need to be changed. However, there are other InnoDB and global MySQL variables that might need to be tuned for a specific workload and hardware.

To start, add the following settings to my.cnf under the [mysqld] section. You will need to restart MySQL:

[mysqld]
# other variables here
innodb_buffer_pool_size = 1G # (adjust value here, 50%-70% of total RAM)
innodb_log_file_size = 256M
innodb_flush_log_at_trx_commit = 1 # may change to 2 or 0
innodb_flush_method = O_DIRECT


Variable                        Value
innodb_buffer_pool_size         Start with 50%-70% of total RAM; does not need to be larger than the database size
innodb_flush_log_at_trx_commit  1 (default); 0/2 (more performance, less reliability)
innodb_log_file_size            128M - 2G (does not need to be larger than the buffer pool)
innodb_flush_method             O_DIRECT (avoid double buffering)


What is next?

These settings are a good starting point for any new installation. There are a number of other variables that can increase MySQL performance for some workloads. Usually, I would set up a MySQL monitoring/graphing tool (for example, the Percona Monitoring and Management platform) and then check the MySQL dashboard to perform further tuning.

What can we tune further based on the graphs?

InnoDB buffer pool size. Look at the graphs:

As we can see, we can probably benefit from increasing the InnoDB buffer pool size a bit to ~10G, as we have RAM available and the number of free pages is small compared to the total buffer pool.

InnoDB log file size. Look at the graph:

As we can see here, InnoDB usually writes 2.26 GB of data per hour, which exceeds the total size of the log files (2G). We can now increase the innodb_log_file_size variable and restart MySQL. Alternatively, use “show engine InnoDB status” to calculate a good InnoDB log file size.
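For example, here is a rough way to measure the hourly redo write volume from the log sequence number (LSN); this is a generic sketch to run against your own server, not output from the system above:

$ mysql -e "SHOW ENGINE INNODB STATUS\G" | grep 'Log sequence number'
$ sleep 60
$ mysql -e "SHOW ENGINE INNODB STATUS\G" | grep 'Log sequence number'
# (second LSN - first LSN) * 60 = redo bytes written per hour; compare
# this to the combined redo log size (innodb_log_file_size *
# innodb_log_files_in_group, which is 2 files by default)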

Other variables

There are a number of other InnoDB variables that can be further tuned:


Setting innodb_autoinc_lock_mode=2 (interleaved mode) can remove the need for the table-level AUTO-INC lock (and can increase performance when multi-row insert statements are used to insert values into tables with an auto_increment primary key). This requires binlog_format=ROW or MIXED (and ROW is the default in MySQL 5.7).
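As a minimal my.cnf sketch of the settings just described (confirm binlog_format against your replication setup before changing it):

[mysqld]
# interleaved mode: no table-level AUTO-INC lock for multi-row inserts
innodb_autoinc_lock_mode = 2
# required for interleaved mode; ROW is already the default in 5.7
binlog_format = ROW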

innodb_io_capacity and innodb_io_capacity_max

This is more advanced tuning, and it only makes sense when you are performing a lot of writes all the time (it does not apply to reads, i.e. SELECTs). If you really need to tune it, the best method is to know how many IOPS the system can do. For example, if the server has one SSD drive, we can set innodb_io_capacity_max=6000 and innodb_io_capacity=3000 (50% of the max). It is a good idea to run sysbench or another benchmark tool to measure the disk throughput first.
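A hedged sketch of that workflow, using sysbench's generic fileio test to measure random-write IOPS (sysbench 1.0 syntax; the 8G file size and 60-second runtime are arbitrary examples):

$ sysbench fileio --file-total-size=8G prepare
$ sysbench fileio --file-total-size=8G --file-test-mode=rndwr --time=60 run
$ sysbench fileio --file-total-size=8G cleanup
# then, in my.cnf, derive the settings from the measured IOPS, e.g.:
#   innodb_io_capacity_max = <measured IOPS>
#   innodb_io_capacity     = <half the measured IOPS>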

But do we need to worry about this setting? Look at the graph of buffer pool’s “dirty pages“:

In this case, the total amount of dirty pages is high, and it looks like InnoDB can’t keep up with flushing them. If we have a fast disk subsystem (i.e., SSD), we might benefit from increasing innodb_io_capacity and innodb_io_capacity_max.

Conclusion or TL;DR version

The new MySQL 5.7 defaults are much better for general-purpose workloads. At the same time, we still need to configure InnoDB variables to take advantage of the amount of RAM on the box. After installation, follow these steps:

  1. Add InnoDB variables to my.cnf (as described above) and restart MySQL
  2. Install a monitoring system (e.g., the Percona Monitoring and Management platform)
  3. Look at the graphs and determine if MySQL needs to be tuned further

Encrypt your --defaults-file

Latest MySQL Performance Blog posts - October 12, 2016 - 10:22am
Encrypt your credentials using GPG

This blog post will look at how to use encryption to secure your database credentials.

In the recent blog post Use MySQL Shell Securely from Bash, there are some good examples of how you might avoid using a ~/.my.cnf, but you still need to put the password down on disk in the script. MySQL 5.6.6 and later introduced the --login-path option, which is a handy way to store per-connection entries and keep the credentials in an encrypted format. This is a great improvement, but as shown in Get MySQL Passwords in Plain Text from .mylogin.cnf, it is pretty easy to get that information back out.
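For reference, creating a login path looks roughly like this (a sketch using the stock mysql_config_editor tool; the host, user, and path name are placeholders):

$ mysql_config_editor set --login-path=local --host=localhost --user=ceri --password
$ mysql --login-path=local   # connect using the stored entry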

Let’s fix this with gpg-agent, mkfifo and a few servings of Bash foo…

If you want to keep prying eyes away from your super secret database credentials, then you really need to encrypt them. Nowadays most people are familiar with GPG (GNU Privacy Guard), but for those of you who aren't, it is a free implementation of the OpenPGP standard that allows you to encrypt and sign your data and communication.

First steps…

Before we can go on to use GPG to encrypt our credentials, we need to get it working. GnuPG comes with almost every *nix operating system, but for this post we’ll be using Ubuntu 16.04 LTS and we’ll presume that it isn’t yet installed.

$ sudo apt-get install gnupg gnupg-agent pinentry-curses

Once the packages are installed, there is a little configuration required to make things simpler. We’ll go with some minimal settings just to get you going. First of all, we’ll create our main key:

$ gpg --gen-key
gpg (GnuPG) 1.4.12; Copyright (C) 2012 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (4096)
Requested keysize is 4096 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (5y)
Key expires at Tue 05 Oct 2021 23:59:00 BST
Is this correct? (y/N) y

You need a user ID to identify your key; the software constructs the user ID
from the Real Name, Comment and Email Address in this form:
    "Heinrich Heine (Der Dichter) <>"

Real name: Ceri Williams
Email address:
Comment: Encrypted credentials for MySQL
You selected this USER-ID:
    "Ceri Williams (Encrypted credentials for MySQL) <>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.

After typing a password and gaining sufficient entropy you will have your first key! You can show your private keys as follows:

$ gpg --list-secret-keys
/home/ceri/.gnupg/secring.gpg
-----------------------------
sec   4096R/C38C02B0 2016-10-06 [expires: 2021-10-05]
uid                  Ceri Williams (Encrypted credentials for MySQL) <>

We’ll now create our “gpg.conf” in which to keep a few settings. This sets the key that is used by default when encrypting, enables the gpg-agent and removes the copyright message.

$ cat <<EOF > ~/.gnupg/gpg.conf
default-key C38C02B0
use-agent
no-greeting
EOF

Now we’ll add a few settings for “gpg-agent” and allow the key to be saved for one day to reduce the number of times you need to enter a password. Also, as this post concentrates on command line programs, we’ve enabled the ncurses pinentry to specify the password when requested.

$ cat <<EOF > ~/.gnupg/gpg-agent.conf
pinentry-program /usr/bin/pinentry-curses
default-cache-ttl 86400
max-cache-ttl 86400
EOF

You can find more information about setting up and using GPG in the GNU Privacy Handbook.

Encrypt your credentials

If all has gone well so far, you should be able to encrypt your first message. Here is a simple example to create armored (ASCII) output for a recipient with key “C38C02B0”:

$ echo hello | gpg -e --armor -r C38C02B0
-----BEGIN PGP MESSAGE-----
Version: GnuPG v1

hQIMA/T3pqGixN5nAQ/+IxmmgoHNVY2IXp7OAQUZZtCw0ayZu/rFotsJBiQcNG4W
J9JZmG78fgPfyF2FD4oVsXDBW7yDzfDSxCcX7LL9z4p33bzUAYOwofRP9+8qJGq/
qob1SclNN4fdFc/PtI7XKYBFYcHlfFeTIH44w9GEGdZlyfDfej+qGTJX+UHrKTo3
DaE2qpb7GvohEnDPX5WM0Pts3cATi3PcH4C9OZ5dgYizmlPB58R2DZl1ioERy2jE
WSIhkZ8ZPW9ezWYDCtFbgFSpgynzYeFRVv1rel8cxZCSYgHOHrUgQM6WdtVFmEjL
ONaRiEA9IcXZXDXaeFezKr2F8PJyaVfmheZDdRTdw54e4R6kPunDeWtD2aCJE4EF
ztyWLgQZ0wNE8UY0PepSu5p0FAENk08xd9xNMCSiCuwmBAorafaO9Q8EnJjHS/w5
aKLJzNzad+8zKq3zgBxHGj1liHmx873Epz5izsH/lK9Jwy6H5qGVB71XuNuRMzNr
ghgHFWNX7Wy8wnBnV6MrenASgtCUY6cGdT7YpPe6pLr8Qj/3QRLdzHDlMi9gGxoS
26emhTi8sIUzQRtQxFKKXyZ43sldtRewHE/k4/ZRXz5N6ST2cSFAcsMyjScS4p2a
JvPvHt4xhn8uRhgiauqd7IqCCSWFrAR4J50AdARmVeucWsbRzIJIEnKW4G/XikvS
QQFOvcdalGWKMpH+mRBkHRjbOgGpB0GeRbuKzhdDvVT+EhhIOG8DphumgI0yDyTo
Ote5sANgTRpr0KunJPgz5pER
=HsSu
-----END PGP MESSAGE-----

Now that we have GPG working, we can secure our credentials and encrypt them to use later on. One of the default files MySQL reads is “~/.my.cnf”, which is where you can store your user credentials for easy command line access.

$ cat <<EOF | gpg --encrypt --armor -r C38C02B0 -o ~/.my.cnf.asc
[client]
user = ceri
password = mysecretpassword
[mysql]
skip-auto-rehash
prompt = "smysql \d> "
EOF

There you go, everything is nice and secure! But wait, how can anything use this?

Bash foo brings MySQL data to you

Most MySQL and Percona tools will accept the "--defaults-file" argument, which tells the program where to look to find what configuration to run. This will allow us to use our encrypted config.

The script below carries out the following actions:

  1. Creates a temporary file on disk and then removes it
  2. Creates a FIFO (a socket-like communication channel that requires both ends to be connected)
  3. Decrypts the config to the FIFO in the background
  4. Launches the “mysql” client and reads from the FIFO

#!/bin/bash

set -e

declare -ra ARGS=( "${@}" )
declare -ri ARGV=${#ARGS[@]}
declare -r SEC_MYCNF=$(test -f ${1:-undef} && echo $_ || echo '.my.cnf.asc')
declare -r SEC_FIFO=$(mktemp)
declare -a PASSTHRU=( "${ARGS[@]}" )
test ${ARGV} -gt 0 && test -f "${ARGS[0]}" && PASSTHRU=( "${ARGS[@]:1}" )

set -u

function cleanup {
    test -e ${SEC_FIFO} && rm -f $_
    return $?
}

function decrypt {
    set +e
    $(which gpg) --batch --yes -o ${SEC_FIFO} -d ${SEC_MYCNF} >debug.log 2>&1
    test $? -eq 0 || $(which gpg) --yes -o ${SEC_FIFO} -d ${SEC_MYCNF} >debug.log 2>&1
    set -e
}

function exec_cmd {
    local -r cmd=${1}
    set +u
    ${cmd} --defaults-file=${SEC_FIFO} "${PASSTHRU[@]}"
    set -u
}

trap cleanup EXIT

test -e ${SEC_MYCNF} || exit 1

cleanup && mkfifo ${SEC_FIFO} && decrypt &
exec_cmd /usr/bin/mysql

You can use this script as you would normally with the “mysql” client, and pass your desired arguments. You can also optionally pass a specific encrypted config as the first argument:

$ ./ .my.test.asc
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 56
Server version: 5.7.14-8 Percona Server (GPL), Release '8', Revision '1f84ccd'

Copyright (c) 2009-2016 Percona LLC and/or its affiliates
Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

smysql (none)>

There we go, MySQL access via an encrypted "--defaults-file", and as long as your key is unlocked in the agent you do not need to enter the password.

But wait . . . what about all of the other tools that you might want to use? Well, with a slight tweak you can make the script a little fancier and get other tools to use the config, too (tools such as mysqladmin, mysqldump, pt-show-grants, pt-table-checksum, etc.). The key part of the next script is the specification of accepted commands (“ALIASES”) and the use of symbolic links to alias the script:

#! /bin/bash

set -e

declare -ra ARGS=( "${@}" )
declare -ri ARGV=${#ARGS[@]}
declare -rA ALIASES=(
    [smysql]=mysql
    [smysqldump]=mysqldump
    [smysqladmin]=mysqladmin
    [spt-show-grants]=pt-show-grants
    [spt-table-checksum]=pt-table-checksum
    [spt-table-sync]=pt-table-sync
    [spt-query-digest]=pt-query-digest
)
declare -r PROGNAME=$(basename ${0})
declare -r SEC_MYCNF=$(test -f ${1:-undef} && echo $_ || echo '.my.gpg')
declare -r SEC_FIFO=$(mktemp)
declare -a PASSTHRU=( "${ARGS[@]}" )
test ${ARGV} -gt 0 && test -f "${ARGS[0]}" && PASSTHRU=( "${ARGS[@]:1}" )

set -u

function cleanup {
    test -e ${SEC_FIFO} && rm -f $_
    return $?
}

function decrypt {
    set +e
    $(which gpg) --batch --yes -o ${SEC_FIFO} -d ${SEC_MYCNF} >debug.log 2>&1
    test $? -eq 0 || $(which gpg) --yes -o ${SEC_FIFO} -d ${SEC_MYCNF} >debug.log 2>&1
    set -e
}

function check_cmd {
    local k
    local cmd=${1}
    for k in "${!ALIASES[@]}"; do
        test "${cmd}" = ${k} && test -x "$(which ${ALIASES[${k}]})" && echo $_ && return 0
    done
    return 1
}

function exec_cmd {
    local -r cmd=${1}
    set +u
    ${cmd} --defaults-file=${SEC_FIFO} "${PASSTHRU[@]}"
    set -u
}

function usage {
    local realfn=$(realpath ${0})
    cat <<EOS
USAGE: $(basename ${0}) enc_file.gpg [--arg=val]
    use a GPG-encrypted my.cnf (default: ${SEC_MYCNF})

    currently supports: ${ALIASES[@]}

    create a symlink to match the alias (real app prefixed with 's')
    e.g.
        sudo ln -s ${realfn} /usr/local/bin/smysql
        sudo ln -s ${realfn} /usr/local/bin/spt-show-grants
EOS
}

trap cleanup EXIT

test -e ${SEC_MYCNF} || { usage; exit 1; }
cmd=$(check_cmd ${PROGNAME})
test $? -eq 0 || { echo ${ALIASES[${PROGNAME}]} is not available; exit 3; }
cleanup && mkfifo ${SEC_FIFO} && decrypt &
exec_cmd ${cmd}

Now we can set up some symlinks so that the script can be called in a way that the correct application is chosen:

$ mkdir -p ~/bin
$ mv ~/bin
$ ln -s ~/bin/ ~/bin/smysql
$ ln -s ~/bin/ ~/bin/smysqladmin
$ ln -s ~/bin/ ~/bin/spt-show-grants


With some symlinks now in place we can try out some of the tools that we have enabled:

$ ~/bin/smysql -Bsse 'select 1'
1
$ ~/bin/smysqladmin proc
+----+------+-----------+----+---------+------+----------+------------------+-----------+---------------+
| Id | User | Host      | db | Command | Time | State    | Info             | Rows_sent | Rows_examined |
+----+------+-----------+----+---------+------+----------+------------------+-----------+---------------+
| 58 | ceri | localhost |    | Query   |    0 | starting | show processlist |         0 |             0 |
+----+------+-----------+----+---------+------+----------+------------------+-----------+---------------+
$ ~/bin/spt-show-grants --only root@localhost | head -n3
-- Grants dumped by pt-show-grants
-- Dumped from server Localhost via UNIX socket, MySQL 5.7.14-8 at 2016-10-07 01:01:55
-- Grants for 'root'@'localhost'

Enjoy some added security in your database environment, on your laptop and even on your Raspberry Pi!

Anonymous login

Latest Forum Posts - October 12, 2016 - 3:51am
I have set up a PMM environment to monitor our DB infrastructure. The setup is entirely internal and only accessible from within our network. I'd like to enable it so that visiting the Metrics page presents the user with the metrics without a requirement to log in, similar to the demo page hosted by Percona.

Using the loose_ option prefix in my.cnf

Latest MySQL Performance Blog posts - October 11, 2016 - 2:41pm

In this blog post, I’ll look at how to use the loose_ option prefix in my.cnf in MySQL.

mysqld throws errors at startup, and refuses to start, if non-existent options are defined in the my.cnf file.

For example:

2016-10-05 15:56:07 23864 [ERROR] /usr/sbin/mysqld: unknown variable 'bogus_option=1'

The MySQL manual has a solution: use the loose_ prefix for the option in the my.cnf file.

In the prior example, if we specify loose_bogus_option=1 in the my.cnf file, instead of bogus_option=1, mysqld starts successfully and ignores that option.

This is useful in three situations:

  1. Using Percona Server-specific options in a config file that might be used for MySQL Community instances.
  2. Including options from a future version of MySQL.
  3. Setting options for plugins before they’re loaded.
Use Case 1:

We can use this feature to make a my.cnf file that will work on both MySQL Community and Percona Server, but contains options that aren’t implemented in MySQL Community. For example:

slow_query_log = ON
long_query_time = 0
loose_log_slow_rate_limit = 100

Without the loose_ prefix on log_slow_rate_limit, MySQL Community will throw a fatal error (as log_slow_rate_limit is not implemented in MySQL Community).

Use Case 2:

If you push out the same my.cnf file to multiple versions of MySQL, you can still use features that only exist on newer versions:

loose_super_read_only = ON

Use Case 3:

Another use case is installing the Percona audit log plugin. If you specify the audit_log options in my.cnf and then restart the server before running INSTALL PLUGIN, mysqld will fail. If you use the loose_ prefix, mysqld will start up successfully, and the options will be read once you run INSTALL PLUGIN.

loose_audit_log_file = /audit/audit.log
loose_audit_log_rotate_on_size = 1073741824
loose_audit_log_rotations = 5
loose_audit_log_format = JSON

This trick also works for options given on the command line, e.g. mysqld --loose-bogus-option.

MySQL 8.0: The end of MyISAM

Latest MySQL Performance Blog posts - October 11, 2016 - 10:54am

This blog discusses the gradual end of MyISAM in MySQL.

The story that started 20 years ago is coming to its end. I'm talking about the old MyISAM storage engine, which was the only storage engine MySQL provided in 1995, and which has been available in MySQL for 20+ years. In fact, part of my job as a MySQL consultant for 10+ years was to discover MyISAM tables and advise customers on how to convert those to InnoDB.

(Check your MySQL installation, you may still have MyISAM tables).

MySQL 5.7 still used MyISAM storage for the system tables in the MySQL schema.

In MySQL 8.0 (a DMR version as of this writing), the MyISAM storage engine is still available, but only in a very limited scope:

  • After the introduction of the new data dictionary, the MyISAM tables are gone from the system schema (the "mysql" db).
  • Working with MyISAM is harder now (and discouraged): you can't just copy MyISAM tables into a running MySQL server; they will not be discovered (unlike InnoDB, where you can use "ALTER TABLE ... IMPORT TABLESPACE", as sketched after this list)
  • However, you can still create a table with engine=MyISAM, and it will work as before
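For contrast with copying MyISAM files around, here is roughly what the InnoDB transportable tablespace flow looks like (a minimal sketch with a hypothetical table t1; see the MySQL manual for the full procedure):

-- On the source server: quiesce the table and export its tablespace
FLUSH TABLES t1 FOR EXPORT;
-- copy t1.ibd and t1.cfg out of the source datadir, then:
UNLOCK TABLES;

-- On the destination server, with a matching empty table already created:
ALTER TABLE t1 DISCARD TABLESPACE;
-- place the copied t1.ibd and t1.cfg into the destination datadir, then:
ALTER TABLE t1 IMPORT TABLESPACE;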

InnoDB has implemented all the features that were previously missing compared to MyISAM:

Feature                        MyISAM   InnoDB
Full Text Indexes              yes      Since MySQL 5.6
Portable tables (tablespaces)  yes      Since MySQL 5.6
Spatial Indexes/RTREE (GIS)    yes      Since MySQL 5.7
Last update for table          yes      Since MySQL 5.7
Suitable for temp tables       yes      Since MySQL 5.7 (complex selects also use InnoDB on-disk temp tables)
Faster count(*)                yes      Faster in MySQL 5.7, but does not store the counter


So the only MyISAM advantages left are:

  1. Tables will be smaller on disk compared to uncompressed InnoDB tables.
  2. The count(*) is still much faster in MyISAM:

mysql> select count(*) from a_myisam;
+----------+
| count(*) |
+----------+
|  6291456 |
+----------+
1 row in set (0.00 sec)

mysql> select count(*) from a_innodb;
+----------+
| count(*) |
+----------+
|  6291456 |
+----------+
1 row in set (2.16 sec)

I would not use MyISAM unless there is a specific case for it, and for well-known reasons (MyISAM is non-transactional, uses table-level locks, and has no crash recovery, etc.).

My colleague Laurynas Biveinis also suggested converting MyISAM to an optional storage engine plugin.

intermittent stored function "does not exist" problem

Latest Forum Posts - October 11, 2016 - 9:40am
We use a database running on percona server 5.6.28 (under ubuntu), which has a large number of stored procedures and functions (around 2000 procedures and 600 functions). A few weeks ago (after significantly increasing the number of stored functions) the calling application started getting messages saying that functions did not exist (we verified that they did actually exist.) There were no updates to, or reconfigurations of, either percona or the underlying ubuntu system at the same time that the problem started happening.

Restarting percona made the problem go away for a while. Eventually we tried increasing stored_program_cache to 600, and this seems to have cured the problem (at least for the time being.)

I searched for a long time on the web, and had great difficulty finding anything relevant. The only things I could find (apart from obvious user errors) were related to case-sensitivity issues, and we verified that (to the best of our belief) these are not applicable in our case. Eventually, I decided to look at the code, and found something that seemed strange: I noticed that there were some calls to sp_find_routine() which apparently told it not to check mysql.proc if the requested stored procedure or function was not found in the cache, and just to return a "does not exist" error. I know I'm clutching at straws (particularly as no one else seems to be experiencing this), but this does seem to match what we have been seeing (and there don't seem to be any other obvious possible causes).

I'd be grateful for any thoughts on this.

Bug 1617586 release date?

Latest Forum Posts - October 11, 2016 - 9:36am

I can't find where bugs are tracked for some reason. Does anyone know when this patch will be released?



apt-get update problem

Latest Forum Posts - October 11, 2016 - 4:20am
Hi folks, I noticed this error today:

Reading package lists... Done
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: jessie InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 9334A25F8507EFA5

W: Failed to fetch

W: Some index files failed to download. They have been ignored, or old ones used instead.
W: Duplicate sources.list entry jessie/main amd64 Packages (/var/lib/apt/lists/repo.percona.com_apt_dists_jessie_main_binary-amd64_Packages)

Take Percona’s One-Click 2017 Top Database Concerns Poll

Latest MySQL Performance Blog posts - October 10, 2016 - 4:23pm

Take Percona’s One-Click 2017 Top Database Concerns Poll.

With 2017 coming quickly around the corner, it's time to start thinking about what next year is going to bring to the open source database community. We just finished Percona Live Europe 2016, and at the conference we looked at new technologies, new techniques, and new ideas. With all this change comes some uncertainty and concern: how is a changing ecosystem going to affect your database environment? Are you up to speed on the latest technologies, and how they can impact your world?

Are your biggest concerns for 2017 scalability, performance, security, monitoring, updates and bugs, or staffing issues? What do you think is going to be your biggest issue in the coming year?


Please take a few seconds and answer the following poll. It will help the community get an idea of how the new year could impact their critical database environments.

If you’ve faced specific issues, feel free to comment below. We’ll post a follow-up blog with the results!

Note: There is a poll embedded within this post, please visit the site to participate in this post's poll.

You can see the results of our last blog poll on security here.

Percona Database Security Poll Results

Latest MySQL Performance Blog posts - October 10, 2016 - 4:21pm

This blog reports the results of Percona’s database security poll.

As Peter Zaitsev mentioned recently in his blog post on database support, data breach costs can hit both your business reputation and your bottom line. Costs vary depending on company size and market, but recent studies estimate direct costs ranging on average from $1.6M to $7.01M. Everyone agrees that leaving rising security risks and costs unchecked is a recipe for disaster.

Reducing security-based outages doesn’t have a simple answer, but can be a combination of internal and external monitoring, support contracts, enhanced security systems, and a better understanding of security configuration settings.

The results from Percona’s database security poll responses are in:

Note: There is a poll embedded within this post, please visit the site to participate in this post's poll.

With over 600 unique participants, "None" was the clear front runner, with "up to ten hours" a distant second. It seems that, generally speaking for our respondents, security is not a large problem for their open source databases. When it does affect them, it causes minimal downtime.


Check out the latest Percona one-click poll on 2017 top database concerns here.

Consul Architecture

Latest MySQL Performance Blog posts - October 10, 2016 - 3:13pm

In this blog post, I’ll provide my thoughts about Consul for ProxySQL service discovery and automation.

I approached Consul recently while looking for a service discovery and configuration automation solution for ProxySQL. My colleague Nik Vyzas wrote a great post on this topic, and I suggest you read it. I wrote this article to share my first impressions of Consul (for whomever it might interest).

Consul is a complete service discovery solution. In this respect it differs from its alternative, etcd, which only provides a foundation on which to build such solutions.

Consul consists of a single, small binary (the Linux binary is 24MB). You just download it, edit the configuration file and start the program. It doesn’t need a package. The Consul binary does it all. You can start it as a server or as a client. It also provides a set of administrative tasks, usable via the command-line or the HTTP API.

But what is Consul about?

I mentioned service discovery, which is the primary purpose of Consul. But what is it?

Suppose that you have a Percona XtraDB Cluster. Applications query this cluster via ProxySQL (or another proxy), which distributes the workload among the running servers. But the applications still need to know ProxySQL's address and port, and what if that ProxySQL instance becomes unreachable? This is where service discovery comes in: a service discovery server tells applications the IP address and port of a running service they need, allowing them to reach a working ProxySQL server. It can also store information about service configuration.

Let’s continue with our Percona XtraDB Cluster and ProxySQL example. Here is what Consul can do for us:

  • When a node is added, automatically discover other nodes in the cluster.
  • When a proxy is added, automatically discover all cluster nodes.
  • Automatically configure the proxy with users and other settings.
  • Even some basic monitoring, thanks to Consul health checks.

Now, let’s see how it does these things.


If you only want to test Consul's interfaces from a developer's point of view, you can start a stand-alone Consul instance in developer mode. This means that Consul will run in-memory, and will not write anything to disk.
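For example (using the stock consul binary; nothing is persisted in this mode):

$ consul agent -dev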

Applications can query Consul in two ways. It can be queried like a DNS server, which is the most lightweight option. For example, an application can send a request for mysql.service.dc1.consul, which means “please find a running MySQL service, in the datacenter called dc1.” Consul will reply with an A or SRV record, with the IP and possibly the port of a running server.

You can make the same request via a REST API. The API can register or unregister services, add health checks, and so on.
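As a rough sketch of both interfaces against a local agent (default ports: 8600 for DNS, 8500 for HTTP; the mysql service name is taken from the example above):

# DNS interface: find a running MySQL service (SRV records carry the port)
$ dig @127.0.0.1 -p 8600 mysql.service.dc1.consul SRV

# HTTP API: list the healthy instances of the same service
$ curl http://127.0.0.1:8500/v1/health/service/mysql?passing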

Consul performs health checks to find out which services are running. Consul expects to receive an integer exit code representing success or error, just like Nagios. In fact, you can use Nagios plugins with Consul. You can even use Consul as the basis for a distributed monitoring system.
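For instance, a service can be registered together with a health check via the HTTP API (a sketch; the service name, port, and check interval are example values):

$ curl -X PUT http://127.0.0.1:8500/v1/agent/service/register \
    -d '{"Name": "mysql", "Port": 3306, "Check": {"TCP": "localhost:3306", "Interval": "10s"}}'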

The HTTP API also includes endpoints for a KV store. Under the hood, Consul includes BoltDB. This means you can use Consul for configuration automation. Endpoints are also provided to implement distributed semaphores and leader election.
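A quick sketch of the KV endpoints (the key name here is arbitrary; values come back base64-encoded in the JSON response):

$ curl -X PUT -d 'ON' http://127.0.0.1:8500/v1/kv/proxysql/read_only
$ curl http://127.0.0.1:8500/v1/kv/proxysql/read_only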

The Consul binary also provides an easy command-line interface, mainly used for administrative tasks: registering or unregistering services, adding new nodes to Consul, and so on. It also provides good diagnostic commands.


In production, Consul runs as a cluster. As mentioned above, each instance can be a server or a client. Clients have fewer responsibilities: when they receive queries (reads) or transactions (writes), they act as a proxy and forward them to a server. Each client also executes health checks against some services, and informs the servers about their health status.

Servers are one of two types: an elected leader, and its followers. The leader can change at any moment. When a follower receives a request from a client, it forwards it to the leader. If it is a transaction, the leader logs it locally and replicates it to the followers. When more than half of them accept the changes, the transaction gets committed. The term “transaction” is a bit confusing: since version 0.7, think of a “transaction” as something that changes the state of the cluster.

Reads can have three different consistency levels, where stricter levels are slower. Followers forward queries to the leader by default, which in turn contacts other followers to check if it is still the leader. This mechanism guarantees that the applications (the users) never receive stale data. However, it requires a considerable amount of work. For this reason, less reliable but faster consistency levels are supported (depending on the use case).

Therefore, we can say that having more servers improves reliability in case some nodes crash, but lowers performance, because it implies more network communication. The recommended number of servers is five. Having a high number of clients makes the system more scalable, because the health check and request forwarding work is distributed over all the clients.

Multi-cluster configurations are natively supported, for geographically distributed environments. Each cluster serves data about different services. Applications, however, can query any cluster. If necessary, Consul will forward the request to the proper cluster to retrieve the required information.


Currently, most Linux distributions do not include Consul. However, the package is present in some versions that are not yet stable (like Debian Testing and Ubuntu 16.10).

Some community packages also exist. Before using them, you should test them to be sure that they are production-ready.

Consul in Docker

Consul’s official Docker image is based on Alpine Linux, which makes it very small. Alpine Linux is a distribution designed for embedded environments, and has recently become quite popular in the Docker world. It is based on Busybox, a tiny re-implementation of GNU basic tools.

The image is also very secure. Normally containers run a daemon as root; Consul runs as the consul user, via a sudo alternative called gosu.

A Good Use Case

When we start a new container in a "dockerized" environment, we cannot predict its IP. This is a major pain when setting up a cluster: all nodes must be configured with the other nodes' addresses, and optionally a proxy (like ProxySQL) must know the nodes' addresses. The problem reappears every time we add a new node, a new slave, or a new proxy. Consul is a great way to solve this problem. We will see this in depth in a future post.

Keeping the slow query log in check

Latest Forum Posts - October 10, 2016 - 4:30am
Will PMM keep the Slow Query log from growing to a crazy value?

I have tried using the Percona option for keeping the slow query log rotated ("max_slowlog_size", available in PS 5.5), but PMM can't then read the log when this is used.

Query Analytics: Interval out of sequence

Latest Forum Posts - October 10, 2016 - 3:05am
I cannot get query analytics to be processed from a node:

2016-10-10T09:59:00.001140141Z qan-analyzer-3fba67e0 warning Skipping interval '194 2016-10-10 09:58:00 UTC to 2016-10-10 09:59:00 UTC (0-0)' because interval '193 2016-10-10 09:57:00 UTC to 2016-10-10 09:58:00 UTC (0-0)' is still being parsed
2016-10-10T09:58:38.017831004Z data-sender info at 2016-10-10 09:58:38 UTC: 0 files, 0, 2ms, 0.00 Mbps net util, 0.00 Mbps net speed
2016-10-10T09:58:14.991883367Z agent info Cmd ok: Reply[Cmd:Version Error:'' Id:b8a20c79-905d-4b35-66d0-1aaf8e20f5f7]
2016-10-10T09:58:14.986035177Z agent info Cmd begin: Cmd[Service:agent Cmd:Version Ts:'2016-10-10 09:58:14.986477595 +0000 UTC' User:api AgentUUID: Id:b8a20c79-905d-4b35-66d0-1aaf8e20f5f7]
2016-10-10T09:58:00.001775856Z qan-analyzer-3fba67e0-worker warning Interval out of sequence: got 193, expected 192
2016-10-10T09:57:35.017027378Z data-sender info at 2016-10-10 09:57:35 UTC: 0 files, 0, 2ms, 0.00 Mbps net util, 0.00 Mbps net speed
2016-10-10T09:57:14.992363352Z agent info Cmd ok: Reply[Cmd:Version Error:'' Id:2f93b7fd-1369-456c-553b-d88127ab9659]
2016-10-10T09:57:14.986084264Z agent info Cmd begin: Cmd[Service:agent Cmd:Version Ts:'2016-10-10 09:57:14.986502808 +0000 UTC' User:api AgentUUID: Id:2f93b7fd-1369-456c-553b-d88127ab9659]

Any ideas?

pmm already exists and can't change server

Latest Forum Posts - October 9, 2016 - 9:28pm
My PMM server system crashed, and I reinstalled the PMM server without first stopping the pmm-client on the other servers.

After changing the clients over to the new PMM server, pmm-client can't start: it reports that the instance already exists.

Why Percona didn’t participate in the 2016 Gartner Operational DBMS MQ

Latest MySQL Performance Blog posts - October 7, 2016 - 1:14pm

Around this time last year, Gartner named Percona a Challenger in the 2015 Magic Quadrant for Operational Database Management Systems (Gartner Operational DBMS MQ). For those not familiar with these, Magic Quadrants (MQ) are a set of market research reports published by Gartner that provide qualitative analysis of a market by highlighting its participants, relevant trends, and overall depth.

While we were excited by our first placement in the Gartner MQ, Percona elected to be excluded from the 2016 Gartner Operational DBMS MQ due to changes in the inclusion criteria this year, one of which was an emphasis on venture capital funding.

While many of the companies featured in the MQ provide open source software, they also sell proprietary or open core versions of their software. Open source companies are formed as an alternative to commercial software companies. To a company backed by investors, and a board of directors looking to maximize revenue and market capitalization, open core or business source licenses (BSL) are an attractive prospect. However, the move to a monetized-software model misses the point of open source companies.

At Percona we ship all our software as open source, and make it easily accessible to everyone. We have been doing it the “hard” way for the last 10 years, by avoiding the entanglements of venture capital. This gives us the freedom to plan reasonable and achievable growth rates without sacrificing our open source ideals.

Percona will continue to focus on what we think is important to our customers, our users, and the open source database market as a whole.

Our model at Percona is keeping our enterprise-grade software, such as Percona Server for MySQL, Percona Server for MongoDB, Percona XtraDB Cluster and Percona XtraBackup, completely open source and free. Our commitment to open source remains attractive to our customers because it prevents vendor lock-in. Being a successful open source company means prioritizing the value of customer solutions over selling software. We pair our products with great services, such as Support for MySQL and MongoDB, Remote DBA for MySQL and MongoDB, and Percona Care.

We enjoyed working with Gartner last year, and look forward to working with them on future MQs as the criteria continue to evolve.

In the meantime, we will always strive to put the customer first. At every phase of Percona’s journey, we remain very grateful to the customers who trust us with their database environments. We will continue to earn and honor that trust.

From Percona Live Europe 2016: Thank You, Attendees!

Latest MySQL Performance Blog posts - October 7, 2016 - 7:42am

From everyone at Percona, we’d like to send a big thank you to all our sponsors, exhibitors, and attendees at this year’s Percona Live Europe Open Source Database Conference.

The conference was an outstanding success! The sold-out event brought the European open source database community together, with a technical emphasis on the core topics of MySQL, MongoDB, PostgreSQL and other open source databases.

We are posting tutorial and session presentation slides at the Percona Live Europe site, and all of them should be available shortly. To get slides for a talk, browse the program schedule, find a topic that interests you and click that topic. The slides are at the bottom of the pop-up window containing the talk speaker and description. Click the “download slides” link to download them.

Highlights This Year:

Thanks to Our Sponsors!

We would like to thank all of our valuable event sponsors – your participation really makes the show happen.

As valued Percona partners, we invite you to be a sponsor of the Percona Live Open Source Database Conferences. We have developed multiple sponsorship options to allow participation at a level that best meets your partnering needs. Our goal is to create a significant opportunity for our partners to interact with Percona customers, other partners and community members. Sponsorship opportunities are available for both our 2017 North America and European Conferences.

Percona Live Europe 2017: Dublin, Ireland!

Next year’s Percona Live Europe will take place September 25th-27th, 2017, in Dublin, Ireland. Put it on your calendar now! Information on speakers, talks, sponsorship and registration will be available in the coming months.

We look forward to seeing you there!

Percona Live Open Source Database Conference 2017: Santa Clara, CA

The Percona Live Open Source Database Conference 2017 is the premier event for the diverse and active open source database community, as well as organizations that develop and use open source database software. 

The conference will feature one day of tutorials and three days of keynote talks and breakout sessions related to open source databases and software. Attendees get briefed on the hottest topics, learn about building and maintaining high-performing deployments and hear from top industry leaders.

The Percona Live Open Source Database Conference 2017 is April 24th – 27th, at the Hyatt Regency Santa Clara & The Santa Clara Convention Center.

The Percona Live 2017 Call for Papers is open until October 31, 2016. Do you have a MySQL, MongoDB, PostgreSQL or open source database use case to share, a skill to teach, or a big idea to discuss? We invite you to submit your speaking proposal for either breakout or tutorial sessions. This conference provides an opportunity to network with peers and technology professionals by bringing together accomplished DBAs, system architects and developers from around the world to share their knowledge and experience.

Submit your proposal here.

Percona Live 2017 sponsorship opportunities are available now. Click here to find out how to sponsor.

Register for Percona Live 2017 now! Super Saver registration lasts until Nov 13. Click here to register.

MongoDB: no WiredTiger stats

Latest Forum Posts - October 7, 2016 - 7:20am
Hi guys,

I'm currently giving a try to PMM for MongoDB and that's pretty good : fast & easy to install, nice UI => great job!

The metrics monitoring for our MongoDB standalone test server works well but I can't see anything on the WiredTiger part and nothing related in help.

I added mongodb:metrics using:

sudo pmm-admin add mongodb:metrics --uri mongodb://admin:****@localhost:27017/admin

I'm using version 1.0.4.

Thanks for your help!

How to uninstall percona toolkit on redhat

Latest Forum Posts - October 6, 2016 - 6:19pm
I am new to Linux, could anyone help me?
