
Deadlocks (ERROR 1213 (40001)) when running an ALTER TABLE command

Latest Forum Posts - January 15, 2016 - 2:55pm
When I'm running the following SQL, the application (which is still live) hits a deadlock: ERROR 1213 (40001) at line 2: Deadlock found when trying to get lock; try restarting transaction

ALTER TABLE `comm` DROP FOREIGN KEY `communication_conversation_fk_1`,
ADD COLUMN `thumb_url` varchar(256) NULL DEFAULT NULL COMMENT '' AFTER `search_index_version`,
ADD COLUMN `thumb_src_url` varchar(256) NULL DEFAULT NULL COMMENT '';

When I read the MySQL 5.6 documentation, it suggests this shouldn't lock. Or am I wrong?
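Not part of the original post, but one way to test that assumption is to ask the server explicitly for an in-place, non-locking ALTER; MySQL 5.6 then refuses with an error, instead of silently taking a lock, if it can't comply:

```sql
-- Sketch (table/key names taken from the statement above): request online DDL
-- explicitly; the statement fails rather than blocking the live application
-- if ALGORITHM=INPLACE or LOCK=NONE cannot be honored for this change.
ALTER TABLE `comm`
  ALGORITHM=INPLACE, LOCK=NONE,
  DROP FOREIGN KEY `communication_conversation_fk_1`;
```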

Version: 5.6.27-76.0-56-log Percona XtraDB Cluster (GPL), Release rel76.0, Revision 2a6d4e5, WSREP version 25.13, wsrep_25.13

Any pointers here?

Thanks in advance for your assistance!

Making Apache Spark Four Times Faster

Latest MySQL Performance Blog posts - January 15, 2016 - 2:52pm

This is a follow-up to my previous post, Apache Spark with Air ontime performance data.

To recap an interesting point in that post: when using 48 cores with the server, the result was worse than with 12 cores. I wanted to understand why that was, so I started digging. My primary suspicion was that Java (I never trust Java) was not good at dealing with 100GB of memory.

There are a few links pointing to potential issues with a huge heap:


Following the last article’s advice, I ran four instances of Spark’s slaves. This is an old technique to better utilize resources, as often (as is well known from old MySQL times) one instance doesn’t scale well.

I added the following to the config:

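The config snippet did not survive in this copy. For Spark standalone mode, running multiple worker instances per host is typically set in conf/spark-env.sh; the values below are assumptions matching the four instances and 48 cores described above:

```
# conf/spark-env.sh (hypothetical values)
export SPARK_WORKER_INSTANCES=4   # four worker daemons per host
export SPARK_WORKER_CORES=12      # 48 cores split across the four workers
export SPARK_WORKER_MEMORY=25g    # ~100GB of RAM split into four smaller heaps
```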

The full description of the test can be found in my previous post, Apache Spark with Air ontime performance data.

The results:

Although the results for four instances still don’t scale much after using 12 cores, at least there is no extra penalty for using more.

It could be that the dataset is just not big enough to show the setup’s full potential.

I think there is a clear indication that with a 25GB heap, Java performs much better than with 100GB – at least with Oracle’s JDK (there are comments that a third-party commercial JDK may handle this better).

This is something to keep in mind when working with Java-based servers (like Apache Spark) on high-end hardware.

Getting TokuBackup to Work

Latest Forum Posts - January 15, 2016 - 1:40pm
1. We downloaded and successfully installed Percona-Server-tokudb-56-5.6.28-rel76.1.el6.x86_64.rpm and the other associated packages, which is supposed to include TokuBackup.

-rw-r--r-- 1 root root 125880320 Jan 12 03:41 Percona-Server-5.6.28-76.1-r5759e76-el6-x86_64-bundle.tar
-rw-rw-r-- 1 root root 71754356 Jan 8 14:41 Percona-Server-56-debuginfo-5.6.28-rel76.1.el6.x86_64.rpm
-rw-rw-r-- 1 root root 6776280 Jan 8 14:41 Percona-Server-client-56-5.6.28-rel76.1.el6.x86_64.rpm
-rw-rw-r-- 1 root root 1078544 Jan 8 14:41 Percona-Server-devel-56-5.6.28-rel76.1.el6.x86_64.rpm
-rw-rw-r-- 1 root root 20550228 Jan 8 14:41 Percona-Server-server-56-5.6.28-rel76.1.el6.x86_64.rpm
-rw-rw-r-- 1 root root 743104 Jan 8 14:41 Percona-Server-shared-56-5.6.28-rel76.1.el6.x86_64.rpm
-rw-rw-r-- 1 root root 23329528 Jan 8 14:41 Percona-Server-test-56-5.6.28-rel76.1.el6.x86_64.rpm
-rw-rw-r-- 1 root root 1640196 Jan 8 14:41 Percona-Server-tokudb-56-5.6.28-rel76.1.el6.x86_64.rpm

2. We successfully ran ps_tokudb_admin...

Checking SELinux status...
INFO: SELinux is disabled.

Checking if Percona Server is running with jemalloc enabled...
INFO: Percona Server is running with jemalloc enabled.

Checking transparent huge pages status on the system...
INFO: Transparent huge pages are currently disabled on the system.

Checking if thp-setting=never option is already set in config file...
INFO: Option thp-setting=never is set in the config file.

Checking TokuDB engine plugin status...
INFO: TokuDB engine plugin is installed.

3. We checked for the existence and permissions on /usr/lib64/mysql/plugin/tokudb_backup.so

-rwxr-xr-x 1 root root 21128 Jan 8 14:40 /usr/lib64/mysql/plugin/tokudb_backup.so

4. We tried to install the plugin... no joy...

install plugin tokudb_backup soname 'tokudb_backup.so';

ERROR 1126 (HY000): Can't open shared library '/usr/lib64/mysql/plugin/tokudb_backup.so' (errno: 2 /usr/lib64/mysql/plugin/tokudb_backup.so: undefined symbol: tokubackup_version_string)

5. Plugin does not show as installed or active...

| plugin_name | plugin_status |
| TokuDB | ACTIVE |
| TokuDB_file_map | ACTIVE |
| TokuDB_fractal_tree_info | ACTIVE |
| TokuDB_fractal_tree_block_map | ACTIVE |
| TokuDB_trx | ACTIVE |
| TokuDB_locks | ACTIVE |
| TokuDB_lock_waits | ACTIVE |

What am I doing wrong?

How do I pt-table-sync all tables from one localhost database to another?

Latest Forum Posts - January 15, 2016 - 11:18am
I have two databases on localhost: db1 and db2. I want pt-table-sync to make all tables on db2 exactly like their equivalents on db1. I tried the following command line:

Code:
$ pt-table-sync --dry-run h=localhost,u=user,p=pass,d=db1 h=localhost,u=user,p=pass,d=db2

This failed with the following error:

Code:
you specified a database but not a table in h=localhost,u=user,p=pass,d=db1. are you trying to sync only tables in the 'db1' database? if so, use '--databases db1' instead.

I also tried dropping the D part of both DSNs and instead specifying

Code:
--databases=db1,db2

This shows zero differences, even though differences do in fact exist. I'm guessing it compares each database to itself.

Is there a way to do this with pt-table-sync?
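Not an answer from the thread, but one possible workaround sketch: pt-table-sync can sync one table onto a differently named database when both DSNs specify the D and t parts, so the tables can be looped over. The credentials are placeholders, and this is a sketch, not a tested recipe; start with --dry-run before switching to --execute:

```shell
# Hypothetical per-table sync from db1 to db2 on the same host
for TBL in $(mysql -N -u user -ppass -e 'SHOW TABLES FROM db1'); do
  pt-table-sync --dry-run \
    h=localhost,u=user,p=pass,D=db1,t="$TBL" \
    h=localhost,u=user,p=pass,D=db2,t="$TBL"
done
```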

I can't get any values

Latest Forum Posts - January 15, 2016 - 2:36am
When I execute the script /var/lib/zabbix/percona/scripts/get_mysql_stats_wrapper.sh with the argument gg, it returns nothing.

my PHP version is php-5.1.6-45.el5_11

[root@localhost scripts]# ./get_mysql_stats_wrapper.sh gg
[root@localhost scripts]#
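One debugging step (not shown in the post): the wrapper script just calls the PHP collector, so invoking it directly usually surfaces the underlying error. The --host value here is an assumption:

```shell
# Run the Percona collector directly instead of through the Zabbix wrapper
php -q /var/lib/zabbix/percona/scripts/ss_get_mysql_stats.php \
    --host localhost --items gg
```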

===================================================
[root@localhost scripts]# cat ~zabbix/.my.cnf
user = zabbix
password = zabbix

===================================================

[root@localhost scripts]# ll
total 76
-rwxr-xr-x 1 root root 1277 Jan 15 14:27 get_mysql_stats_wrapper.sh
-rwxr-xr-x 1 root root 999 Jan 13 16:17 nginx_status.sh
-rwxr-xr-x 1 root root 59634 Jan 15 11:50 ss_get_mysql_stats.php
-rwxr-xr-x 1 root root 798 Dec 29 15:43 tcp_connections.sh

===================================================

[root@localhost scripts]# egrep "mysql_user|mysql_pass" ss_get_mysql_stats.php
$mysql_user = 'zabbix';
$mysql_pass = 'zabbix';

===================================================
mysql> show grants for zabbix@'localhost';
| Grants for zabbix@localhost
| GRANT SUPER, REPLICATION CLIENT ON *.* TO 'zabbix'@'localhost'
1 row in set (0.00 sec)

===================================================

[root@localhost ~]# /usr/local/zabbix/bin/zabbix_get -s -k MySQL.running-slave
[root@localhost ~]# /usr/local/zabbix/bin/zabbix_get -s -k MySQL.Open-tables

[root@localhost ~]#

===================================================
zabbix-server log

[root@localhost ~]# tail -f /tmp/zabbix_server.log
3732:20160115:175324.148 error reason for "ZHDX_18:MySQL.thread-hash-memory" changed: Received value [] is not suitable for value type [Numeric (float)]
3732:20160115:175324.148 error reason for "ZHDX_18:MySQL.Threads-cached" changed: Received value [] is not suitable for value type [Numeric (float)]
3732:20160115:175324.148 error reason for "ZHDX_18:MySQL.Threads-connected" changed: Received value [] is not suitable for value type [Numeric (float)]
3732:20160115:175324.148 error reason for "ZHDX_18:MySQL.Threads-created" changed: Received value [] is not suitable for value type [Numeric (float)]
3732:20160115:175324.148 error reason for "ZHDX_18:MySQL.Threads-running" changed: Received value [] is not suitable for value type [Numeric (float)]
3730:20160115:175329.156 error reason for "ZHDX_18:MySQL.total-mem-alloc" changed: Received value [] is not suitable for value type [Numeric (float)]
3730:20160115:175329.156 error reason for "ZHDX_18:MySQL.uncheckpointed-bytes" changed: Received value [] is not suitable for value type [Numeric (float)]
3730:20160115:175329.156 error reason for "ZHDX_18:MySQL.unflushed-log" changed: Received value [] is not suitable for value type [Numeric (float)]

OpenSSH CVE-2016-0777: Details and Mitigation

Latest MySQL Performance Blog posts - January 14, 2016 - 12:04pm

Earlier today, advisories were sent out regarding OpenSSH versions 5.4 through 7.1, informing users about a security bug in the software. In essence, the advisory instructed people to add the UseRoaming no option to their ssh_config file, with a promise of further information to be made available shortly.


Add undocumented "UseRoaming no" to ssh_config or use "-oUseRoaming=no" to prevent upcoming #openssh client bug CVE-2016-0777. More later.

— markus (@msfriedl) January 14, 2016

The post on the security issue at OpenBSD Journal can be seen here: http://undeadly.org/cgi?action=article&sid=20160114142733

This information was then later released detailing the issue and the implications here: http://www.openssh.com/txt/release-7.1p2

The statement below summarizes the main issue:

“The matching server code has never been shipped, but the client code was enabled by default and could be tricked by a malicious server into leaking client memory to the server, including private client user keys.”

So what does this all mean? Simply speaking, it means a malicious or compromised server could potentially retrieve the user's private SSH keys from memory. The stolen keys could then be used to authenticate against other servers.

(2FA helps protect servers against the use of stolen keys; however, it is not as widely used as it should be.)

In lieu of a software update, you can use the following mitigation options to protect yourself in the meantime:

  1. In your ~/.ssh/config:
     Host *
         UseRoaming no
  2. In your system-wide ssh_config:
     Linux: /etc/ssh/ssh_config
     OS X: /private/etc/ssh/ssh_config
  3. On each CLI execution:
     ssh -oUseRoaming=no <hostname>

Personally, I’ve used a combination of 1 and 2, as a ~/.ssh/config cleanup is often required anyway. Make sure to check that your OpenSSH is correctly configured, and keep watching for updates.

Prometheus as an Engine for MySQL Monitoring

Latest MySQL Performance Blog posts - January 14, 2016 - 7:23am

When I first discovered Graphite years ago, I was very impressed with its monitoring capabilities.  Compared to many RRD-based tools that were popular at the time (like Cacti), Graphite separated the captured data and graphs, allowing you to do all kinds of math and transformations while visualizing data. For example, I could plot the relationship between system queries and disk IO, and capture how the system was getting more IO bound over time. It also had reasonably high performance, allowing me to capture high-resolution data for medium-sized systems.

Just last year I discovered Prometheus, and it also impressed me. I think it has the potential to take Graphite’s flexibility to the next level. Though I am in no way a Prometheus expert, I  want to share my understanding and thoughts on it so far.

Data Model

The data model is perhaps what attracted me to Prometheus the most. While it’s not obvious at first, once you figure it out it offers fantastic flexibility.

In the data model used by Whisper and Carbon in Graphite, you will use something like this to store MySQL data:

myapp.store.mysql.db01.status.questions = 5000

You can set up any hierarchy you like, but it has to have a hierarchy.

What Prometheus does instead is allow you to use a set of key-value pairs. The same data shown above could be presented like this:

questions_total{app="myapp",subsystem="store",engine="mysql",host="db01",source="status"} = 5000

(You most likely wouldn’t use this exact structure, but it’s good for illustration.)

The difference between these approaches is that Prometheus provides multiple dimensions on which you can filter and aggregate, and you can add those dimensions later as you need them (without redesigning a tree hierarchy).

These labels are very dynamic, and I can change them in a second. For example, a MySQL server reporting as a “Master” might start reporting as a “Slave” in the very next second, and its data will be aggregated differently.

This is especially important in the modern, often cloud-based and virtualized world. For example, using Prometheus it is very easy to tag servers by their region or availability zones. I can also do things like compute MySQL space usage by both the database and storage engine. The possibilities are endless.
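For illustration only (the metric and label names below are hypothetical), such an aggregation is a one-liner in Prometheus’ query language:

```
# Total MySQL space usage broken down by database and storage engine
sum by (db, engine) (table_size_bytes{app="myapp"})
```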

Data Capture

Unlike Graphite – where the main model is push, and the hosts themselves choose what information to push to the monitoring system and at what intervals – with Prometheus you set up “Exporters” that make the data available. It is up to the Prometheus server configuration to choose what data to sample and how frequently.

The clear benefit of Prometheus’ approach is that you can have as many servers as you like pulling the data, so it is very easy to create a development version of your system and play around with it – without affecting production. It also provides a simple pathway to high availability.

(Both the push and pull approaches have their benefits and drawbacks. Brian Brazil wrote an excellent article advertising the pull model of monitoring.)

Prometheus does create a few challenges for me. Unless I want to set up Service Discovery, it is a hassle to monitor any development/test VMs I might spin up (that would otherwise not be open to external access at all). While this isn’t the main use case for Prometheus, it is helpful for me to test the dashboard’s behavior with different operating systems, workloads, etc.

A more significant issue I discovered is dealing with some data that can’t be captured to multiple locations, because the data capture causes the data to change.

Here is a specific example: if I look at the events_statements_summary_by_digest table in PERFORMANCE_SCHEMA, there is a MAX_TIMER_WAIT field that shows me the maximum query execution time for each query pattern. If I want the maximum query execution time for every minute, for example, I need to “truncate” the table to reset the statistics and let the maximum be computed again. If I don’t perform that operation, the data becomes meaningless. But if I make the exporter reset the statistics during the poll, I can’t pull from two Prometheus servers.

This is one instance where the PERFORMANCE_SCHEMA design could be better. I could set up a cron job or an Event to clear out the statistics regularly and get a proper maximum value for every five minutes, but that isn’t an overly convenient solution.
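The Event-based workaround mentioned above might look like this (the event name is made up; the table is the PERFORMANCE_SCHEMA table discussed earlier):

```sql
-- Reset statement digest statistics every five minutes so that
-- MAX_TIMER_WAIT is recomputed per five-minute window.
CREATE EVENT reset_statement_digests
  ON SCHEDULE EVERY 5 MINUTE
  DO TRUNCATE TABLE performance_schema.events_statements_summary_by_digest;
```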

Another issue I discovered is that Prometheus doesn’t have any protection from bad (long) samples, or a very good method of detecting them. Let’s imagine that I have a MySQL server and I’m sampling status data every second. For some reason, the call to SHOW GLOBAL STATUS took five seconds to execute. We don’t really know where in those five seconds the SHOW GLOBAL STATUS output corresponds – it might be at the very start, or at the very end. As such, you don’t really know how to process the counters; whatever you do, you’re likely to be wrong. My preference in this case is to simply discard such samples, because even missing one percent of the samples is unlikely to change the whole picture. Constantly questioning whether you really had a couple of seconds where the QPS spiked to ten times the normal rate, or whether it was an invalid sample, is not something on which I want to waste a lot of time!

My preferred approach is to configure the SHOW GLOBAL STATUS capture so that if it takes more than ten percent of the capture interval, it is discarded. For example, with a one-second capture I would allow 100ms for the capture. If the system can’t keep up with that, it is better not to fool myself and instead reduce the capture resolution to around five seconds.
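The discard rule described above is trivial to express; this sketch (the names are mine, not from any exporter) keeps a sample only when capturing it took at most ten percent of the interval:

```python
def keep_sample(capture_seconds, interval_seconds, max_fraction=0.10):
    """Keep a sample only if capturing it took <= max_fraction of the interval."""
    return capture_seconds <= interval_seconds * max_fraction

print(keep_sample(0.05, 1.0))  # True: a 50ms capture fits a 1s interval
print(keep_sample(5.0, 1.0))   # False: a 5s SHOW GLOBAL STATUS gets discarded
```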

The only protection Prometheus offers is the scrape_timeout setting, but unfortunately it is limited to one-second resolution at this point. This is too coarse for any high-resolution capture.

Finally, it is also inconvenient to specify different resolutions for different data. In MySQL there is often a lot of data that I want to capture, but the resolution needed varies. For example, SHOW GLOBAL STATUS at one-second resolution is a must. At the same time, capturing table size information from INFORMATION_SCHEMA at one-second resolution would put too much load on MySQL, especially with many tables, and that level of resolution isn’t really needed there.

An attractive thing about Prometheus is that the Prometheus development team uses it a lot for MySQL monitoring, so the MySQL Exporter is really good. Most MySQL monitoring plugins I find report just a few basic statistics, which is not nearly enough for advanced diagnostics. The Prometheus MySQL exporter captures tons of data, and has been adding more with every version.

I also very much like that the Prometheus Exporters are built on the HTTP protocol. This makes it very easy to debug them or see what kind of data they capture: they present it simply in a web browser:

Computational Model

I think the basic operations in Prometheus are pretty intuitive, but if you look at some of the advanced behaviors you’re going to find some inconveniences and some things that are likely to surprise you.

One inconvenience is that Prometheus is mainly designed for working with high-resolution data. If there are holes of more than five minutes (by default) in a time series, it can disappear from the graphs. As I mentioned, for MySQL there is quite a lot of information that makes sense to capture at a resolution lower than every five minutes.

Prometheus functions look into the “past,” and are designed so that the value of a function at any time (T) at which it could be computed does not change later. It all looks clean and logical, but it causes issues when there are holes in the data capture.

As an example, let’s imagine the following five seconds, during which the cumulative number of questions was successfully scraped on some seconds but not others (due to a network issue, overload, etc.):

1 - 10
2 - 20
3 - X
4 - X
5 - 200

When we capture data of “counter” type, the most important value is not the actual counter value at a given time but the rate of change of the counter over different time intervals. If, for example, the query rate was ten QPS for the one-to-two-second interval, this can be computed directly. But what was the query rate in the three-to-four-second interval? We don’t have exact data, but that is fine: we know there were 180 queries during the two-to-five-second interval, giving us 60 QPS (which we can use for the three-to-four-second interval).
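The arithmetic above can be sketched in a few lines (the function name and widening strategy are mine, for illustration only):

```python
# Cumulative "questions" counter sampled once per second; seconds 3 and 4
# were lost (network issue, overload, etc.).
samples = {1: 10, 2: 20, 5: 200}

def rate_between(samples, t_start, t_end):
    """Average counter rate over [t_start, t_end], widening to the nearest
    successfully scraped samples when the exact endpoints are missing."""
    times = sorted(samples)
    lo = max(t for t in times if t <= t_start)   # nearest sample at/before start
    hi = min(t for t in times if t >= t_end)     # nearest sample at/after end
    return (samples[hi] - samples[lo]) / (hi - lo)

print(rate_between(samples, 1, 2))  # 10.0 QPS for the 1-2s interval
print(rate_between(samples, 3, 4))  # 60.0 QPS, from the surrounding 2-5s interval
```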

This is NOT, however, how Prometheus computes it if you use the irate() function (which is supposed to give you the highest resolution possible). When you evaluate irate() at T=4, it doesn’t have access to the T=5 value, even if it is in the database. Instead, it looks back, finds the matching previous interval (one to two), and uses the corresponding value of ten QPS.

I find this pretty puzzling and inconvenient.

There is also the rate() function, which can be used to get the average rate for a period. Unfortunately, it can’t estimate the rate for a smaller period based on the available data for a longer period. So, for example, if I ask the rate() function for the query rate at T=4, looking one second back, it returns no data. This isn’t a big deal when you’re working with data manually, but if you’re building zoomable dashboards it means you can zoom in to the point where the data disappears (rather than stopping at the best available value).


Storage

Prometheus has its own high-performance storage system, based in part on LevelDB. It is highly optimized for time series and achieves a very high level of compression. Be ready, though: every label combination creates a separate time series at the low level, and will require a lot of files. This isn’t really a problem with SSD drives and modern file systems, but it’s something to look out for.

The capture engine and storage systems are rather efficient. Even though Prometheus does not have built-in clustering for “scaling out,” you can reportedly capture more than 300K metrics per second on a single node. You can also run multiple Prometheus servers as needed.

The problem I found with Prometheus’ storage is that it is very self-contained: you can only use it from Prometheus or access it via the HTTP API. There are no tools at this point to export it for advanced analysis with R, or to dump the whole database into something like JSON so it can be loaded into a different database engine. Some of these features might already be on the roadmap.

Purging and Aggregation

Retention configuration in Prometheus is pretty spartan. You can set storage.local.retention to how long you want to store the data, but that’s it. You can’t configure it to purge different data at different times. You can run multiple Prometheus instances to achieve this, but it’s quite a hassle. It’s also not possible to instruct Prometheus to automatically build summaries to make low-resolution queries faster.

For example, if I have MySQL’s query rate captured every second, but I want to view the data over a long period (e.g., how it changed over the last three months to estimate growth trends), data aggregated at one-hour intervals would be enough for that purpose.

There is support for recording rules to help achieve some of this, but it is not explicit or convenient in my opinion.
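A recording rule for the query-rate example above might look like this in the rule-file syntax of the Prometheus versions discussed here (the rule name is arbitrary):

```
# Precompute an hourly query rate so long-range graphs stay cheap
job:questions:rate1h = rate(questions_total[1h])
```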

Looking at the Prometheus roadmap, some of these issues might not be fixed in Prometheus but achieved through integrating other systems such as InfluxDB (where experimental support already exists).


A lot of these limitations make sense if you look at the purpose for which Prometheus was created: getting high-resolution data and providing as much troubleshooting information as possible to its alerting engine. It is not really designed for storing extensive history. Too bad! I would very much like to get both of those properties in a single system!


Prometheus comes with a built-in Expression Browser, which is great for debugging and interactive analysis. It also lets you see what data you actually have in the database. It will disappoint you, however, if you’re looking for beautiful graphs!

This shows I have the information about MySQL query rate from two servers, as well as the available and configured labels.

If I want to pick one server and look at the average rate of queries per five minutes, I can do this:

There are some tools in the graph view to choose the time range and resolution.

You should be aware that visualizing data with the rate() function often shows you things that do not exist. In this case, it looks like the number of queries was gradually creeping up. In reality, I had just started the benchmark, so the number of queries jumped almost immediately. This is what the real situation looks like (using irate()):

As I explained before, irate() does not handle missing data points very well, and it behaves somewhat bizarrely when you “zoom out” – providing instant-rate information at sparse intervals (e.g., the instant rate over one second, computed every 60 seconds) rather than smoothing things into averages.

There is also the PromDash tool available for Prometheus, which gives you nicer looking dashboards and supports a lot of Prometheus’ features. Now that Grafana has official support for Prometheus, it is my preferred tool to build dashboards – especially since it supports multiple data sources besides Prometheus.


I’m very excited about Prometheus. It allows me to easily gather a lot of information and use it for performance analysis in benchmarking or troubleshooting. It would be great if it also had a simple integrated solution for long-term storage and trending. I am also looking forward to better integration with Grafana and better documentation on how to create Prometheus-based dashboards – especially with some Prometheus-based examples!

Note: all of the above was written about Prometheus 0.16.1. Prometheus is rapidly evolving, and this may change with newer versions.


I get 403 Forbidden when I try to download Percona Monitoring Plugins

Latest Forum Posts - January 13, 2016 - 3:24pm
I use this link: https://www.percona.com/downloads/pe...s-1.1.6.tar.gz . I get a "403 Forbidden" error. I'd like to test this plugin and compare it to others.

Play the Percona Powerball Pool!!

Latest MySQL Performance Blog posts - January 12, 2016 - 10:25am
The Only Sure Thing is Percona Powerball Pool

Everyone is talking about the upcoming Powerball lottery draw. 1.4 BILLION dollars!! And growing! Millions of people are imagining what they would do IF they win. It’s the stuff of dreams.

That is literally true. The chances of winning the Powerball lottery are 1 in 292.2 million. Roughly speaking, picking the right combination of numbers is like flipping a coin and getting heads 28 times in a row. You’re more likely to get struck by lightning (twice) or bitten by a shark.


You know what is a sure thing? Percona’s ability to optimize your database performance and increase application performance. Our Support and Percona Care consultants will give you a 1 in 1 chance of making your database run better, solving your data performance issues, and improving the performance of your applications.

However, in the spirit of the moment, Percona has bought 10 sets of Powerball numbers and posted them on Facebook, Twitter and LinkedIn. It’s the Percona Powerball Pool! Like one of the posts and share it, and you qualify for one (1) equal share of the winnings! Use #perconapowerball when you share.

Here are the numbers:

We at Percona can’t promise a huge Powerball windfall (in fact, as data experts we’re pretty sure you won’t win!), but we can promise that our consultants are experts at helping you with your full LAMP stack environments. Anything affecting your data performance – on that we can guarantee you a win!

Full rules are here.

Getting lots of deadlocks on multimaster cluster

Latest Forum Posts - January 12, 2016 - 9:39am
Hi all


Single 7 node Percona cluster

Datacenter1: (In use applications)
Percona node1
Percona node2
Percona node3
Percona node4

Datacenter2: (Disaster recovery only)
Percona node5
Percona node6
Percona node7


We have a 4-node, load-balanced web application that connects to the Percona nodes as shown below.
We are getting lots of deadlock errors: out of 500 requests, only about 80-90 succeed.

webapp1 -> Percona node1
webapp2 -> Percona node2
webapp3 -> Percona node3
webapp4 -> Percona node4


If we point all webapps at a single Percona node, as below, we get very few deadlocks:

webapp1 -> Percona node1
webapp2 -> Percona node1
webapp3 -> Percona node1
webapp4 -> Percona node1

Can anyone please advise how we can fix this issue? We would like to use a true multi-master Percona cluster.


import table partition procedure

Latest Forum Posts - January 12, 2016 - 7:52am
Following the link below:

If we want to attach partition P2 to table T1, the procedure to import a partition previously created with xtrabackup is:

1- create a new table T2.
2- attach the partition P2 to the T2 table.
3- swap partitions between T2 and T1.
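A sketch of those three steps in SQL (table and file names are hypothetical; step 2 relies on MySQL 5.6 transportable tablespaces, step 3 on EXCHANGE PARTITION):

```sql
-- 1. Create T2 with the same structure as T1, but without partitioning
CREATE TABLE t2 LIKE t1;
ALTER TABLE t2 REMOVE PARTITIONING;

-- 2. Attach the partition data restored by xtrabackup to T2
ALTER TABLE t2 DISCARD TABLESPACE;
-- copy the backed-up t2.ibd into the database directory, then:
ALTER TABLE t2 IMPORT TABLESPACE;

-- 3. Swap T2 in as partition P2 of T1
ALTER TABLE t1 EXCHANGE PARTITION p2 WITH TABLE t2;
```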

For what reason can't we use the following?

Code:
ALTER TABLE t1 IMPORT PARTITION p2

Thanks in advance, regards

max_allowed_packet reset

Latest Forum Posts - January 12, 2016 - 6:37am
I am having some trouble with my global max_allowed_packet being reset. The variable is set in the my.cnf file, but some process (the server itself, Galera, or xtrabackup) periodically resets the value. I have not been able to determine from the logs where this is coming from. Does anyone have a clue?

Percona Server 5.6.28-76.1 is now available

Latest MySQL Performance Blog posts - January 12, 2016 - 6:30am

Percona is glad to announce the release of Percona Server 5.6.28-76.1 on January 12, 2016. Download the latest version from the Percona web site or from the Percona Software Repositories.

Based on MySQL 5.6.28, including all the bug fixes in it, Percona Server 5.6.28-76.1 is the current GA release in the Percona Server 5.6 series. Percona Server is open-source and free – and this is the latest release of our enhanced, drop-in replacement for MySQL. Complete details of this release can be found in the 5.6.28-76.1 milestone on Launchpad.

Bugs Fixed:

  • A clustering secondary index could not be created on a partitioned TokuDB table. Bug fixed #1527730 (DB-720).
  • When enabled, the super-read-only option could break statement-based replication while executing a multi-table update statement on a slave. Bug fixed #1441259.
  • Running OPTIMIZE TABLE or ALTER TABLE without the ENGINE clause would silently change table engine if enforce_storage_engine variable was active. This could also result in system tables being changed to incompatible storage engines, breaking server operation. Bug fixed #1488055.
  • Setting the innodb_sched_priority_purge variable (available only in debug builds) while purge threads were stopped would cause a server crash. Bug fixed #1368552.
  • Small buffer pool size could cause XtraDB buffer flush thread to spin at 100% CPU. Bug fixed #1433432.
  • Enabling TokuDB with the ps_tokudb_admin script inside a Docker container would cause an error due to insufficient privileges, even when running as root. So that the script can be used inside Docker containers, this error has been changed to a warning that the check is impossible. Bug fixed #1520890.
  • InnoDB status would print negative values for spin rounds per wait if the wait count, although accounted as a signed 64-bit integer, did not fit into a signed 32-bit integer. Bug fixed #1527160 (upstream #79703).

Other bugs fixed: #1384595 (upstream #74579), #1384658 (upstream #74619), #1471141 (upstream #77705), #1179451, #1524763 and #1530102.

Release notes for Percona Server 5.6.28-76.1 are available in the online documentation. Please report any bugs on the Launchpad bug tracker.

Percona Server 5.5.47-37.7 is now available

Latest MySQL Performance Blog posts - January 12, 2016 - 6:10am

Percona is glad to announce the release of Percona Server 5.5.47-37.7 on January 12, 2016. Based on MySQL 5.5.47, including all the bug fixes in it, Percona Server 5.5.47-37.7 is now the current stable release in the 5.5 series.

Percona Server is open-source and free. Details of the release can be found in the 5.5.47-37.7 milestone on Launchpad. Downloads are available here and from the Percona Software Repositories.

Bugs Fixed:

  • Running OPTIMIZE TABLE or ALTER TABLE without the ENGINE clause would silently change table engine if enforce_storage_engine variable was active. This could also result in system tables being changed to incompatible storage engines, breaking server operation. Bug fixed #1488055.

Other bugs fixed: #1179451, #1524763, and #1530102.

Release notes for Percona Server 5.5.47-37.7 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

Update failed: garbd pre-removal script on Ubuntu

Latest Forum Posts - January 12, 2016 - 4:05am
I recently updated my whole Ubuntu 14.04 System. There were also some updates for Percona.

But the update of garbd failed because of these messages:

Preparing to unpack .../percona-xtradb-cluster-garbd-3.x_3.13-1.trusty_amd64.deb ...
invoke-rc.d: initscript garbd, action "stop" failed.
dpkg: warning: subprocess old pre-removal script returned error exit status 3
dpkg: trying script from the new package instead ...
invoke-rc.d: initscript garbd, action "stop" failed.
dpkg: error processing archive /var/cache/apt/archives/percona-xtradb-cluster-garbd-3.x_3.13-1.trusty_amd64.deb (--unpack):
subprocess new pre-removal script returned error exit status 3
* Garbd config /etc/default/garbd is not configured yet
Errors were encountered while processing:
E: Sub-process /usr/bin/dpkg returned an error code (1)

I don't use garbd on that server (so it's not running and also not configured in any way). I have installed "percona-xtradb-cluster-full-56".

Any ideas on how to resolve this? This situation is blocking my whole package manager.
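One common way to unblock dpkg when an old maintainer script fails is to neuter that script so the failing "stop" action can no longer abort the upgrade. This is a hedged sketch, not an official fix; the package name and prerm path are inferred from the dpkg error output above, and since garbd is neither configured nor running here, short-circuiting its stop step should be safe:

```shell
# Make the old pre-removal script exit successfully (insert "exit 0"
# right after the shebang line), then let apt retry the upgrade.
PRERM=/var/lib/dpkg/info/percona-xtradb-cluster-garbd-3.x.prerm
if [ -e "$PRERM" ]; then
    sudo sed -i '1a exit 0' "$PRERM"
    sudo apt-get -f install
fi
```

The new package's own maintainer scripts replace the neutered one once the unpack succeeds.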


Latest Forum Posts - January 12, 2016 - 2:57am
Is there a reason why TokuDB tables don't appear in INFORMATION_SCHEMA.INDEX_STATISTICS and INFORMATION_SCHEMA.TABLE_STATISTICS, while MyISAM and InnoDB tables do?

Is it possible to enable that somehow?

Percona Monitoring Plugins 1.1.6 release

Latest MySQL Performance Blog posts - January 11, 2016 - 10:01pm

Percona is glad to announce the release of Percona Monitoring Plugins 1.1.6.


  • Added new RDS instance classes to RDS scripts.
  • Added boto profile support to RDS scripts.
  • Added AWS region support and ability to specify all regions to RDS scripts.
  • Added ability to set AWS region and boto profile on data source level in Cacti.
  • Added period, average time and debug options to pmp-check-aws-rds.py.
  • Added ability to override Nginx server status URL path on data source level in Cacti.
  • Made Memcached and Redis host configurable for Cacti script.
  • Added the ability to lookup the master’s server_id when using pt-heartbeat with pmp-check-mysql-replication-delay.
  • Changed how memory stats are collected by the Cacti script and pmp-check-unix-memory: /proc/meminfo is now parsed instead of running the free command. This also fixes pmp-check-unix-memory for EL7.
  • Set default MySQL connect timeout to 5s for Cacti script. Can be overridden in the config.
  • Fixed innodb transactions count on the Cacti graph for MySQL 5.6 and higher.
  • Fixed the --login-path option in Nagios scripts when used along with other credential options.
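The /proc/meminfo approach mentioned in the list above can be sketched in a few lines of Python. This is a simplified illustration, not the actual plugin code:

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style 'Key: value kB' lines into a dict of kB values."""
    stats = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        fields = rest.split()
        if fields:
            stats[key] = int(fields[0])  # values are reported in kB
    return stats

# Sample lines in the format /proc/meminfo uses on Linux
sample = """MemTotal:       16326248 kB
MemFree:         8125040 kB
Cached:          3334056 kB"""
print(parse_meminfo(sample)["MemFree"])  # 8125040
```

Reading the file directly avoids depending on the output format of the free binary, which changed in EL7 and broke the old collection method.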

Thanks to contributors: David Andruczyk, Denis Baklikov, Mischa ter Smitten, Mitch Hagstrand.

The project is now fully hosted on GitHub, including issues; the Launchpad project has been discontinued.

A new tarball is available from downloads area or in packages from our software repositories. The plugins are fully supported for customers with a Percona Support contract and free installation services are provided as part of some contracts. You can find links to the documentation, forums and more at the project homepage.

About Percona Monitoring Plugins
Percona Monitoring Plugins are monitoring and graphing components designed to integrate seamlessly with widely deployed solutions such as Nagios, Cacti and Zabbix.

Bare-metal servers for button-push database-as-a-service

Latest MySQL Performance Blog posts - January 11, 2016 - 9:50am

Enterprises demand flexibility, scalability and efficiency to keep up with the demands of their customers while maintaining the bottom line. To achieve this, they're turning to cloud infrastructure services both to cut costs and to take advantage of cutting-edge technology innovations. Clouds have brought simplicity and ease of use to infrastructure management. However, this ease of use often comes with a sacrifice: namely, performance.

Performance degradation often stems from the introduction of virtualization and a hypervisor layer. While the hypervisor enables the flexibility and management capabilities needed to orchestrate multiple virtual machines on a single box, it also creates additional processing overhead.

Nevertheless, cloud servers have huge advantages: they deploy at lightning speed and enable hassle-free private networking without the need for a private VLAN from the datacenter. They also give customers near-instantaneous scalability without the burden of risky capital expenditures.

Bare-metal servers are one solution to this trade-off. A bare-metal server is plain hardware: a single-tenant physical server completely dedicated to a single data-intensive workload, prioritizing performance and reliability. It provides a way to deliver cloud services that eliminates the overhead of virtualization while retaining the flexibility, scalability and efficiency.

On certain CPU-bound workloads, bare metal servers can outperform a cloud server of the same configuration by four times. Database management systems, being very sensitive to both CPU performance and IO speed, can obviously benefit from access to a bare metal environment.

Combine a bare metal server accessible via a cloud service with a high performance MySQL solution and you get all benefits of the cloud without sacrificing performance. This is an ideal solution for startups, side projects or even production applications.

In fact this is just what we’ve done with a partnership between Percona and Servers.com, where you can automatically provision Percona Server for MySQL on one of their bare metal servers. You can learn more about this service here.

MongoDB revs you up: What storage engine is right for you? (Part 2)

Latest MySQL Performance Blog posts - January 11, 2016 - 9:48am
Differentiating Between MongoDB Storage Engines: WiredTiger

In our last post, we discussed what a storage engine is, and how you can determine the characteristics of one versus the other. From that post:

“A database storage engine is the underlying software that a DBMS uses to create, read, update and delete data from a database. The storage engine should be thought of as a “bolt on” to the database (server daemon), which controls the database’s interaction with memory and storage subsystems.”

Check out the full post here.

Generally speaking, it’s important to understand what type of work environment the database is going to interact with, and to select a storage engine that is tailored to that environment.

The last post looked at MMAPv1, the original default engine for MongoDB (through release 3.0). This post will examine the new default MongoDB engine: WiredTiger.


Find it in: MongoDB or Percona builds

MongoDB, Inc. introduced WiredTiger in MongoDB 3.0 to provide document-level concurrency control for write operations. As a result, multiple clients can now modify different documents of a collection at the same time. The WiredTiger library supports both B-tree and LSM-tree data structures, but the MongoDB integration of the engine currently implements only B-trees.

WiredTiger has a few interesting features, most notably compression, document-level locking, and index prefix compression. B-trees, due to their rigidity in disk interaction and chattiness with storage, are not typically known for their performance when used with compression. However, WiredTiger has done an excellent job of maintaining good performance with compression and gives a decent performance/compression ratio with the “snappy” compression algorithm. Be that as it may, if deeper compression is necessary, you may want to evaluate another storage engine. Index prefix compression is a unique feature that should improve the usefulness of the cache by decreasing the size of indexes in memory (especially very repetitive indexes).
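As a concrete example, the block compressor is selected in the storage section of mongod.conf. A hedged sketch, using the option names from MongoDB 3.0's WiredTiger configuration (defaults may differ between builds):

```yaml
# mongod.conf fragment: run WiredTiger and trade CPU for deeper
# compression by swapping the default "snappy" for "zlib".
storage:
  engine: wiredTiger
  wiredTiger:
    collectionConfig:
      blockCompressor: zlib
```

The compressor applies to collections created after the setting takes effect; existing collections keep the compressor they were created with.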

WiredTiger’s ideal use cases include data sets that are likely to stay within a few multiples of the cache size. One can also expect good performance from TTL-like workloads, especially when the data stays within that limit.


Most people don’t know that they have a choice when it comes to storage engines, and that the choice should be based on what the database workload will look like. Percona’s Vadim Tkachenko performed an excellent benchmark test comparing the performances of RocksDB, PerconaFT and WiredTiger to help specifically differentiate between these engines.

In the next post, we’ll take a closer look at Percona’s MongoDB storage engine: PerconaFT.

