
Alerts on PMM

Latest Forum Posts - April 12, 2017 - 3:36am
Hi all.

I am working to create a new dashboard with alerts for all the servers I am using, and I am really spending a lot of time on it.
There are a few things that are general and probably needed by everyone, like: replication monitoring / disk space / CPU usage / disk utilization / server up or down / connections / locks / etc.
Do we plan to add something like that to PMM? If so, when (to save me the time of creating everything from nothing)?
If not, does someone have a dashboard with all the general checks that they can share?

Thank you,
Alon.

pmm-client docker container

Latest Forum Posts - April 12, 2017 - 12:33am
Do you have a plan to make a pmm-client Docker container like pmm-server?
I think it would make deployment easy.



PMM Server is password protected?

Latest Forum Posts - April 11, 2017 - 1:50pm
I launched a new PMM Server in US East with ami-ebdf17fd. I've installed the pmm client using percona's yum repository. I can ping the new PMM server.

When I run pmm-admin config, it says the server is password protected. What password is it referring to?

# pmm-admin config --server 10.5.18.223
Unable to connect to PMM server by address: 10.5.18.223

Looks like the server is password protected.
Use 'pmm-admin config' to define server user and password.

Correct Index Choices for Equality + LIKE Query Optimization

Latest MySQL Performance Blog posts - April 11, 2017 - 12:51pm

As part of our support services, we do a lot of query optimization. This is where most performance gains come from. Here’s an example of the work we do.

Some days ago a customer arrived with the following table:

CREATE TABLE `infamous_table` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `member_id` int(11) NOT NULL DEFAULT '0',
  `email` varchar(200) NOT NULL DEFAULT '',
  `msg_type` varchar(255) NOT NULL DEFAULT '',
  `t2send` int(11) NOT NULL DEFAULT '0',
  `flag` char(1) NOT NULL DEFAULT '',
  `sent` varchar(100) NOT NULL DEFAULT '',
  PRIMARY KEY (`id`),
  KEY `f` (`flag`),
  KEY `email` (`email`),
  KEY `msg_type` (`msg_type`(5)),
  KEY `t_msg` (`t2send`,`msg_type`(5))
) ENGINE=InnoDB DEFAULT CHARSET=latin1

And a query that looked like this:

SELECT COUNT(*) FROM `infamous_table` WHERE `t2send` > 1234 AND `msg_type` LIKE 'prefix%';

The table had an index t_msg that wasn’t helping at all: the EXPLAIN for our 1,000,000-row test table looked like this:

           id: 1
  select_type: SIMPLE
        table: infamous_table
         type: range
possible_keys: t_msg
          key: t_msg
      key_len: 4
          ref: NULL
         rows: 107478
        Extra: Using where

You can see the index used is the one that was expected: “t_msg”. But the key_len is 4. This indicates that only the INT part was used, and that the msg_type(5) part was ignored. This resulted in examining 100k+ rows. If you have MySQL 5.6, you can see it more clearly with EXPLAIN FORMAT=JSON under used_key_parts:

EXPLAIN: {
  "query_block": {
    "select_id": 1,
    "table": {
      "table_name": "infamous_table",
      "access_type": "range",
      "possible_keys": [
        "t_msg"
      ],
      "key": "t_msg",
      "used_key_parts": [
        "t2send"
      ],
      "key_length": "4",
      "rows": 107478,
      "filtered": 100,
      "index_condition": "(`test`.`infamous_table`.`t2send` > 1234)",
      "attached_condition": "(`test`.`infamous_table`.`msg_type` like 'prefix%')"
    }
  }
}

The customer had multi-valued strings like “PREFIX:INT:OTHER-STRING” stored in the column msg_type, and that made it impossible to convert it to an ENUM or similar field type that would have allowed changing the LIKE to an equality.

So the solution was rather simple: just like for point and range queries over numeric values, you must define the index with the ranged field as the rightmost part. This means the correct index would have looked like msg_type(5),t2send. The EXPLAIN for the new index provided the customer with some happiness:

           id: 1
  select_type: SIMPLE
        table: infamous_table
         type: range
possible_keys: t_msg,better_multicolumn_index
          key: better_multicolumn_index
      key_len: 11
          ref: NULL
         rows: 4716
        Extra: Using where

You can see the key_len is now what we would have expected: four bytes for the INT and another seven bytes for the VARCHAR (five for our chosen prefix + two for the prefix length). More importantly, you can see that the rows count decreased by approximately 22 times.
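For reference, a rough sketch of the ALTER that creates such an index (the exact DDL wasn’t shown above, so this is a reconstruction that reuses the index name from the EXPLAIN):

ALTER TABLE `infamous_table`
  ADD KEY `better_multicolumn_index` (`msg_type`(5), `t2send`);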

We used pt-online-schema-change in the customer’s environment to apply the ALTER and avoid downtime. This made it an easy and painless solution, and the query effectively executed in under 1/20 of the original time! So, all fine and dandy? Well, almost. We did a further test, and the query looked like this:

SELECT COUNT(*) FROM `infamous_table` WHERE `t2send` > 1234 AND `msg_type` LIKE 'abc%';

So where’s the difference? The length of the string used for the LIKE condition is shorter than the prefix length we chose for the VARCHAR part of the index (the customer intended to look up strings with only three chars, so we needed to check this). This query also scanned 100k rows, and EXPLAIN showed the key_len was 4, meaning the VARCHAR part was being ignored once again.

This means the index prefix needed to be shorter. We ALTERed the table and made the prefix four characters long, counting on the fact that the multi-valued strings used “:” to separate the values, so we suggested the customer include the colon in the look-up string for the shortest strings. In this case, 'abc%' would become 'abc:%' (which is also four characters long).
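A sketch of what that second adjustment could look like (again a reconstruction, reusing the same index name), together with the adjusted look-up:

ALTER TABLE `infamous_table`
  DROP KEY `better_multicolumn_index`,
  ADD KEY `better_multicolumn_index` (`msg_type`(4), `t2send`);

SELECT COUNT(*) FROM `infamous_table` WHERE `t2send` > 1234 AND `msg_type` LIKE 'abc:%';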

As a final optimization, we suggested dropping old indexes that were covered by the new better_multicolumn_index, and that were most likely created by the customer while testing optimization.

Conclusion

Just like in point-and-range queries, the right order for a multi-column index is to put the ranged part last. Equally important, the length of the string prefix needs to match the length of the shortest string you intend to look up. Just remember that you can’t make this prefix too short, or you’ll lose specificity and the query will end up scanning rows unnecessarily.

Webinar Wednesday 4/12: Tuning MongoDB Consistency

Latest MySQL Performance Blog posts - April 11, 2017 - 9:54am

Please join Percona’s Senior Technical Operations Architect Tim Vaillancourt as he presents Tuning MongoDB Consistency on April 12, 2017 at 10:00 am PDT / 1:00 pm EDT (UTC-7).

Register Now

Welcome to part two of Percona’s tuning series. In our previous webinar, we mentioned some of the best practices for MongoDB tuning. What if you still need better performance after following the tuning advice in the first webinar? Part two takes a closer look at some of the other options to consider when tuning queries.

In this webinar, we will cover:

  • Consistency, atomicity and isolation in MongoDB
  • Replica set rollbacks, and the risks to your data
  • Integrity vs. scalability tradeoffs to consider during development
  • Using read concerns and write concerns to tune your application data consistency
  • When to use Read Preference, and the tradeoffs of doing so
  • Tuning your MongoDB deployment and server configuration for data integrity/consistency
  • Performing cluster-wide consistent backups

By the end of the webinar you will have a better understanding of how to use MongoDB’s features to achieve a required balance of consistency and scalability.

Register for the webinar here.

Timothy Vaillancourt, Senior Technical Operations Architect

Tim joined Percona in 2016 as Sr. Technical Operations Architect for MongoDB with a goal to make the operations of MongoDB as smooth as possible. With experience operating infrastructures in industries such as government, online marketing/publishing, SaaS and gaming, combined with experience tuning systems from the hard disk all the way up to the end-user, Tim has spent time in nearly every area of the modern IT stack with many lessons learned.

Tim is based in Amsterdam, NL and enjoys traveling, coding and music. Before Percona Tim was the Lead MySQL DBA of Electronic Arts’ DICE studios, helping some of the largest games in the world (“Battlefield” series, “Mirrors Edge” series, “Star Wars: Battlefront”) launch and operate smoothly while also leading the automation of MongoDB deployments for EA systems. Before the role of DBA at EA’s DICE studio, Tim served as a subject matter expert in NoSQL databases, queues and search on the Online Operations team at EA SPORTS. Before moving to the gaming industry, Tim served as a Database/Systems Admin operating a large MySQL-based SaaS infrastructure at AbeBooks/Amazon Inc.

Percona Server for MySQL 5.7.17-13 is Now Available

Latest Forum Posts - April 11, 2017 - 1:14am
Percona announces the GA release of Percona Server for MySQL 5.7.17-13 on April 5, 2017. Download the latest version from the Percona web site or the Percona Software Repositories. You can also run Docker containers from the images in the Docker Hub repository.

Based on MySQL 5.7.17, including all the bug fixes in it, Percona Server for MySQL 5.7.17-13 is the current GA release in the Percona Server for MySQL 5.7 series. Percona provides completely open-source and free software. Find release details in the 5.7.17-13 milestone at Launchpad.

Bugs Fixed:

  • MyRocks storage engine detection implemented in mysqldump in Percona Server 5.6.17-12 was using the deprecated INFORMATION_SCHEMA.SESSION_VARIABLES table, causing mysqldump failures on servers running with the show_compatibility_56 variable set to OFF. Bug fixed #1676401.
The release notes for Percona Server for MySQL 5.7.17-13 are available in the online documentation. Please report any bugs on the launchpad bug tracker.

Percona Server for MongoDB 3.4.3-1.3 is Now Available

Latest Forum Posts - April 11, 2017 - 1:12am
Percona announces the release of Percona Server for MongoDB 3.4.3-1.3 on April 6, 2017. Download the latest version from the Percona web site or the Percona Software Repositories.

Percona Server for MongoDB 3.4.3-1.3

Percona Server for MongoDB 3.4.3-1.3 is an enhanced, open source, fully compatible, highly-scalable, zero-maintenance downtime database supporting the MongoDB v3.4 protocol and drivers. It extends MongoDB with the Percona Memory Engine and MongoRocks storage engine, as well as several enterprise-grade features. Percona Server for MongoDB requires no changes to MongoDB applications or code.

This release candidate is based on MongoDB 3.4.3 and includes the following additional changes:
  • #PSMDB-123: Fixed Hot Backup to create proper subdirectories in the destination directory.
  • #PSMDB-126: Added index and collection names to duplicate key error message.
Percona Server for MongoDB 3.4.3-1.3 release notes are available in the official documentation.

ProxySQL Rules: Do I Have Too Many?

Latest MySQL Performance Blog posts - April 10, 2017 - 4:52pm

In this blog post we are going to take a closer look at ProxySQL rules. How do they work, and how big is the performance impact of having many rules?

I would like to say thank you to Renè, who was willing to answer all my questions during my tests.

Overview

ProxySQL is heavily based on query rules. We can set up ProxySQL without rules, based only on host groups, but if we want read/write splitting or sharding (or anything else), we need rules.

ProxySQL knows the SQL protocol and language, so we can easily create rules based on username, schema name and even on the query itself. We can write regular expressions that match the query digest. Let me show you an example:

insert into mysql_query_rules (username,destination_hostgroup,active,retries,match_digest) values('Testuser',601,1,3,'^SELECT');

This rule matches all the queries starting with “SELECT”, and sends them to host group 601.
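For context, a typical read/write split is built from a pair of rules like the following sketch (the hostgroup numbers and the writer/reader layout here are assumptions for illustration, not part of this test):

INSERT INTO mysql_query_rules (rule_id, active, username, match_digest, destination_hostgroup, apply)
VALUES (1, 1, 'Testuser', '^SELECT.*FOR UPDATE', 600, 1);
INSERT INTO mysql_query_rules (rule_id, active, username, match_digest, destination_hostgroup, apply)
VALUES (2, 1, 'Testuser', '^SELECT', 601, 1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;

Anything that matches neither rule falls through to the user’s default hostgroup (the writer).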

As of version 1.3.1, the default regex engine is RE2. Starting with version 1.4, the default regex engine will be PCRE.

I would like to highlight three options that can have a bigger impact on your rules than you think: flagIN, flagOUT, apply.

According to the manual:

. . .these allow us to create “chains of rules” that get applied one after the other. An input flag value is set to 0, and only rules with flagIN=0 are considered at the beginning. When a matching rule is found for a specific query, flagOUT is evaluated and if NOT NULL the query will be flagged with the specified flag in flagOUT. If flagOUT differs from flagIN, the query will exit the current chain and enters a new chain of rules having flagIN as the new input flag. If flagOUT matches flagIN, the query will be re-evaluated again against the first rule with said flagIN. This happens until there are no more matching rules, or apply is set to 1 (which means this is the last rule to be applied)

You might not be sure what this means, but I will show you later.
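Until then, here is a minimal two-rule sketch of the chaining mechanics quoted above (the values are made up purely for illustration):

-- matched with the initial flagIN=0; flagOUT=1 moves the query into the flagIN=1 chain
INSERT INTO mysql_query_rules (rule_id, active, match_digest, flagOUT, apply)
VALUES (10, 1, '^SELECT', 1, 0);
-- evaluated only for queries carrying flagIN=1; routes them and stops (apply=1)
INSERT INTO mysql_query_rules (rule_id, active, flagIN, match_digest, destination_hostgroup, apply)
VALUES (11, 1, 1, 'sbtest1b', 601, 1);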

As you can see, adding a rule is easy, and we can add hundreds of rules. But is there any performance impact?

Test Case

We can write rules based on any part of the query (for example, “userid” or some “sharding key”). In these tests I wrote the rules based on table names because I can easily generate tables with “sysbench”, and run queries against these tables.

I created 1000 tables using sysbench, and I am going to test them with a direct MySQL connection, ProxySQL without rules, with ten rules and with 100 rules.

Time to do some tests and see whether adding 100 or more rules has any effect on performance.

I used two c4.4xlarge instances with SSDs, and I am going to share the steps so anybody can repeat my test and share/compare the results. NodeA is running the MySQL 5.7.17 server, and NodeB is running ProxySQL 1.3.4 and sysbench. During the test I increased the sysbench threads in the following steps: 1, 2, 4, 8, 12, 16, 20, 24.

I tried to use as simple a ProxySQL configuration as possible:

INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_replication_lag) VALUES ('10.10.10.243',600,3306,1000,0);
INSERT INTO mysql_replication_hostgroups VALUES (600,'','');
LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;
insert into mysql_users (username,password,active,default_hostgroup,default_schema) values ('testuser_rw','Testpass1.',1,600,'test');
LOAD MYSQL USERS TO RUNTIME; SAVE MYSQL USERS TO DISK;

Only one server, one host group. I tried to measure the impact the rules had, so in all the tests I sent the queries to the same host group. I only changed the rules (and some ProxySQL settings, as I will explain later).

As I mentioned, I am going to filter based on table names. Here are the 100 rules that I used:

insert into mysql_query_rules (username,destination_hostgroup,active,retries,match_digest) values('testuser_rw',600,1,3,'(from|into|update|into table) sbtest1b');
insert into mysql_query_rules (username,destination_hostgroup,active,retries,match_digest) values('testuser_rw',600,1,3,'(from|into|update|into table) sbtest2b');
...
insert into mysql_query_rules (username,destination_hostgroup,active,retries,match_digest) values('testuser_rw',600,1,3,'(from|into|update|into table) sbtest100b');

First Test

First I ran tests with a direct MySQL connection, ProxySQL without rules, ProxySQL with ten rules and ProxySQL with 100 rules.

ProxySQL itself has an impact on the performance, but there is a big difference between 10 and 100 rules. So adding more and more rules can have a negative effect on the performance.

That’s all? Can we do anything to speed things up? I used the default ProxySQL settings. Let’s have a look at what we can tune.

Increasing the Number of Threads

Let’s go step by step. First, we can increase the number of threads inside ProxySQL (the default is 4). We will increase it to 8:

UPDATE global_variables SET variable_value='8' WHERE variable_name='mysql-threads';
SAVE MYSQL VARIABLES TO DISK;

ProxySQL has to be restarted after this change.

With this simple change, we can improve the performance. As we can see, the difference gets larger and larger as we increase the number of sysbench threads.

Compiling

By compiling our own package, we can gain some extra performance. It is not clear why, so we opened a ticket for further investigation:

I removed some of the columns because the graph got too busy.

ProxySQL 1.4

In ProxySQL 1.4 (which is not GA yet), we can switch between regex engines. However, even using the same engine (RE2), 1.4 is faster:

Apply

As I mentioned, ProxySQL has a few important parameters, like “apply”. With apply, if a query matches a rule, the remaining rules are not checked. In an ideal world, if you have 100 rules and 100 queries in random order, each matching exactly one rule, you only have to check 50 rules on average.

The new rules:

insert into mysql_query_rules (username,destination_hostgroup,active,retries,match_digest,apply) values('testuser_rw',600,1,3,'(from|into|update|into table) sbtest1b',1);

As you can see, it didn’t help at all. But why? Because in this test we have 1000 tables, and we are running queries against all of them. This means 90% of the queries have to check all the rules anyway. Let’s run a test with 100 tables to see whether “apply” helps or not:

As we can see, with 100 tables we get much better performance. But of course this is not a valid solution, because we can’t just drop tables, “userids” or “sharding keys”. In the next post I will show you how to use “apply” in a more effective way.

Conclusion

So far, ProxySQL 1.4 with the PCRE engine and eight threads gives us the best performance with 100 rules and 1000 tables. As we can see, both the number of rules and the query distribution matter; both impact performance. In my next blog post, I will show you how you can add some logic to your rules so that, even if you have more rules, you will get better performance.

Intermittent connection issues to DB server

Latest Forum Posts - April 10, 2017 - 8:59am
Hello,

Our application team is intermittently facing the error below. Initially the MySQL setup was on a Windows server and we used to face this error there as well. Later we moved to a Linux server and we are still facing the same error. There is no specific pattern; it just occurs randomly.

The app is still hosted on a Windows setup and uses the .NET connector for the DB connection. Could you guys please help with this?

ERROR:
Authentication to host IP for user dbuser using method 'mysql_native_password' failed with message: Reading from the stream has failed
---> MySql.Data.MySqlClient.MySqlException (0x80004005): Reading from the stream has failed. ---> System.IO.IOException: Unable to read data from the transport connection: An established connection was aborted by the software in your host machine. ---> System.Net.Sockets.SocketException: An established connection was aborted by the software in your host machine
at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
--- End of inner exception stack trace ---
at MySql.Data.Common.MyNetworkStream.HandleOrRethrowException(Exception e)
at MySql.Data.Common.MyNetworkStream.Read(Byte[] buffer, Int32 offset, Int32 count)
at MySql.Data.MySqlClient.TimedStream.Read(Byte[] buffer, Int32 offset, Int32 count)
at System.IO.BufferedStream.Read(Byte[] array, Int32 offset, Int32 count)
at MySql.Data.MySqlClient.MySqlStream.ReadFully(Stream stream, Byte[] buffer, Int32 offset, Int32 count)
at MySql.Data.MySqlClient.MySqlStream.LoadPacket()
at MySql.Data.MySqlClient.MySqlStream.LoadPacket()
at MySql.Data.MySqlClient.MySqlStream.ReadPacket()
at MySql.Data.MySqlClient.Authentication.MySqlAuthenticationPlugin.ReadPacket()
at MySql.Data.MySqlClient.Authentication.MySqlAuthenticationPlugin.AuthenticationFailed(Exception ex)
at MySql.Data.MySqlClient.Authentication.MySqlAuthenticationPlugin.ReadPacket()
at MySql.Data.MySqlClient.Authentication.MySqlAuthenticationPlugin.Authenticate(Boolean reset)
at MySql.Data.MySqlClient.NativeDriver.Authenticate(String authMethod, Boolean reset)
at MySql.Data.MySqlClient.NativeDriver.Open()
at MySql.Data.MySqlClient.Driver.Open()
at MySql.Data.MySqlClient.Driver.Create(MySqlConnectionStringBuilder settings)
at MySql.Data.MySqlClient.MySqlPool.GetPooledConnection()
at MySql.Data.MySqlClient.MySqlPool.TryToGetDriver()
at MySql.Data.MySqlClient.MySqlPool.GetConnection()
at MySql.Data.MySqlClient.MySqlConnection.Open()
at MM.Backend.Common.Repository.BaseRepository`1.GetRecords(MySqlCommand command)
at MM.Backend.Common.Repository.StockDetailsRepository.GetAllActiveStocks()
at MM.Backend.Scorecard.Valuation.ValuationInitialiser.Main(String[] args)

Upgrade-path from mysql-community-57 to Percona, rpm dependency error

Latest Forum Posts - April 10, 2017 - 4:59am
Should it not be possible to directly switch from MySQL-Community release to Percona?

See attached

Ultimately the problem is that Percona-Server-shared-57-5.7.17-13.1.el6.x86_64 does not correctly replace mysql-community-libs-5.7.17-1.el6.x86_64

Code:
file /usr/lib64/mysql/libmysqlclient.so.20.3.4 from install of Percona-Server-shared-57-5.7.17-13.1.el6.x86_64 conflicts with file from package mysql-community-libs-5.7.17-1.el6.x86_64

If I remove mysql-community-libs-5.7.17-1.el6.x86_64 with --nodeps, then Percona installs correctly (with Percona-Server-shared-compat).

16MB Document Size Limitation

Latest Forum Posts - April 9, 2017 - 8:12pm
Do we still have the 16 MB document size limitation? Or is there any workaround to store a document larger than 16 MB?

Cannot connect to MySQL pmm:***@unix(/var/lib/mysql/mysql.sock): Error 1045

Latest Forum Posts - April 7, 2017 - 9:14pm
I'm adding monitor services according to https://www.percona.com/doc/percona-...mysql-instance, but I got errors in the log:

> Cannot connect to MySQL pmm:***@unix(/var/lib/mysql/mysql.sock): Error 1045: Access denied for user 'pmm'@'localhost' (using password: YES)

What can I do to fix it? Thanks.

Non-Deterministic Order for SELECT with LIMIT

Latest MySQL Performance Blog posts - April 7, 2017 - 12:26pm

In this blog, we’ll look at how queries in systems with parallel processing can return rows in a non-deterministic order (and how to fix it).

Short story:

Do not rely on the order of your rows if your query does not use ORDER BY. Even with ORDER BY, rows with the same values can be sorted differently. To fix this issue, always add ORDER BY ... ID when you have LIMIT N.

Long story:

While playing with MariaDB ColumnStore and Yandex ClickHouse, I came across a very simple case: in both systems, the simple query (which I used for testing) select * from <table> where ... limit 10 returns results in a non-deterministic order.

This is totally expected. SELECT * from <table> WHERE ... LIMIT 10 means “give me any ten rows, and as there is no order they can be anything that matches the WHERE condition.” What we used to get in vanilla MySQL + InnoDB, however, is different: SELECT * from <table> WHERE ... LIMIT 10 gives us the rows sorted by primary key. Even with MyISAM in MySQL, if the data doesn’t change, the results are repeatable:

mysql> select * from City where CountryCode = 'USA' limit 10;
+------+--------------+-------------+--------------+------------+
| ID   | Name         | CountryCode | District     | Population |
+------+--------------+-------------+--------------+------------+
| 3793 | New York     | USA         | New York     |    8008278 |
| 3794 | Los Angeles  | USA         | California   |    3694820 |
| 3795 | Chicago      | USA         | Illinois     |    2896016 |
| 3796 | Houston      | USA         | Texas        |    1953631 |
| 3797 | Philadelphia | USA         | Pennsylvania |    1517550 |
| 3798 | Phoenix      | USA         | Arizona      |    1321045 |
| 3799 | San Diego    | USA         | California   |    1223400 |
| 3800 | Dallas       | USA         | Texas        |    1188580 |
| 3801 | San Antonio  | USA         | Texas        |    1144646 |
| 3802 | Detroit      | USA         | Michigan     |     951270 |
+------+--------------+-------------+--------------+------------+
10 rows in set (0.01 sec)

mysql> select * from City where CountryCode = 'USA' limit 10;
+------+--------------+-------------+--------------+------------+
| ID   | Name         | CountryCode | District     | Population |
+------+--------------+-------------+--------------+------------+
| 3793 | New York     | USA         | New York     |    8008278 |
| 3794 | Los Angeles  | USA         | California   |    3694820 |
| 3795 | Chicago      | USA         | Illinois     |    2896016 |
| 3796 | Houston      | USA         | Texas        |    1953631 |
| 3797 | Philadelphia | USA         | Pennsylvania |    1517550 |
| 3798 | Phoenix      | USA         | Arizona      |    1321045 |
| 3799 | San Diego    | USA         | California   |    1223400 |
| 3800 | Dallas       | USA         | Texas        |    1188580 |
| 3801 | San Antonio  | USA         | Texas        |    1144646 |
| 3802 | Detroit      | USA         | Michigan     |     951270 |
+------+--------------+-------------+--------------+------------+
10 rows in set (0.00 sec)

The results are ordered by ID here. In most cases, when the data doesn’t change and the query is the same, the order of results will be deterministic: open the file, read ten lines from the beginning, close the file. (When using indexes it can be different if different indexes are selected. For the same query, the database will probably select the same index if the data is static.)

But this is still not guaranteed. Here’s why: imagine we now introduce parallelism, split our table into ten pieces and run ten threads. Each will work on its own piece. Then, unless we specifically wait on each thread to finish and order the results, it will give us a random order of results. Let’s simulate this in a bash script:

for y in {2000..2010}
do
  sql="select YearD, count(*), sum(ArrDelayMinutes) from ontime where yeard=$y and carrier='DL' limit 1"
  mysql -Nb ontime -e "$sql" &
done
wait

The script’s purpose is to perform aggregation faster by taking advantage of multiple CPU cores on the server in parallel. It opens ten connections to MySQL and returns results as they arrive:

$ ./parallel_test.sh
2009 428007 5003632
2007 475889 5915443
2008 451931 5839658
2006 506086 6219275
2003 660617 5917398
2004 687638 8384465
2002 728758 7381821
2005 658302 8143431
2010 732973 9169167
2001 835236 8339276
2000 908029 11105058
$ ./parallel_test.sh
2009 428007 5003632
2008 451931 5839658
2007 475889 5915443
2006 506086 6219275
2005 658302 8143431
2003 660617 5917398
2004 687638 8384465
2002 728758 7381821
2010 732973 9169167
2001 835236 8339276
2000 908029 11105058

In this case, the faster queries arrive first and are on top, with the slower on the bottom. If the network was involved (think about different nodes in a cluster connected via a network), then the response time from each node can be much more random due to non-deterministic network latency.

In the case of MariaDB ColumnStore or Yandex Clickhouse, where scans are performed in parallel, the order of the results can also be non-deterministic. An example for ClickHouse:

:) select * from wikistat where project = 'en' limit 1;

SELECT * FROM wikistat WHERE project = 'en' LIMIT 1

┌───────date─┬────────────────time─┬─project─┬─subproject─┬─path─────┬─hits─┬──size─┐
│ 2008-07-11 │ 2008-07-11 14:00:00 │ en      │            │ Retainer │   14 │ 96857 │
└────────────┴─────────────────────┴─────────┴────────────┴──────────┴──────┴───────┘

1 rows in set. Elapsed: 0.031 sec. Processed 2.03 million rows, 41.40 MB (65.44 million rows/s., 1.33 GB/s.)

:) select * from wikistat where project = 'en' limit 1;

SELECT * FROM wikistat WHERE project = 'en' LIMIT 1

┌───────date─┬────────────────time─┬─project─┬─subproject─┬─path─────────┬─hits─┬───size─┐
│ 2008-12-15 │ 2008-12-15 14:00:00 │ en      │            │ Graeme_Obree │   18 │ 354504 │
└────────────┴─────────────────────┴─────────┴────────────┴──────────────┴──────┴────────┘

1 rows in set. Elapsed: 0.023 sec. Processed 1.90 million rows, 68.19 MB (84.22 million rows/s., 3.02 GB/s.)

An example for ColumnStore:

MariaDB [wikistat]> select * from wikistat limit 1
       date: 2008-01-18
       time: 2008-01-18 06:00:00
    project: en
 subproject: NULL
       path: Doctor_Who:_Original_Television_Soundtrack
       hits: 2
       size: 2
1 row in set (1.63 sec)

MariaDB [wikistat]> select * from wikistat limit 1
       date: 2008-01-31
       time: 2008-01-31 10:00:00
    project: de
 subproject: NULL
       path: Haramaki
       hits: 1
       size: 1
1 row in set (1.58 sec)

In another case (bug#72076) we use ORDER BY, but the rows being sorted are the same. MySQL 5.7 contains the “ORDER BY” + LIMIT optimization:

If multiple rows have identical values in the ORDER BY columns, the server is free to return those rows in any order, and may do so differently depending on the overall execution plan. In other words, the sort order of those rows is nondeterministic with respect to the nonordered columns.

Conclusion
In systems that involve parallel processing, queries like select * from table where ... limit N can return rows in a random order (even if the data doesn’t change between the calls). This is due to the asynchronous nature of the parallel calls: whoever serves results faster wins. In MySQL, you can run select * from table limit 1 three times and get the same data in the same order (especially if the table data doesn’t change), but the response time will be slightly different each run. In a massively parallel system, the difference in response times can cause the rows to be ordered differently.

To fix: always add ORDER BY ... ID  when you have LIMIT N.
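Using the City table from the example above, a minimal sketch of that fix, with ID acting as the tie-breaker so repeated runs return the rows in the same order:

SELECT * FROM City WHERE CountryCode = 'USA' ORDER BY Population DESC, ID LIMIT 10;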

Percona Server for MongoDB 3.4.3-1.3 is Now Available

Latest MySQL Performance Blog posts - April 7, 2017 - 9:57am

Percona announces the release of Percona Server for MongoDB 3.4.3-1.3 on April 6, 2017. Download the latest version from the Percona web site or the Percona Software Repositories.

Percona Server for MongoDB 3.4.3-1.3

Percona Server for MongoDB 3.4.3-1.3 is an enhanced, open source, fully compatible, highly-scalable, zero-maintenance downtime database supporting the MongoDB v3.4 protocol and drivers. It extends MongoDB with Percona Memory Engine and MongoRocks storage engine, as well as several enterprise-grade features:

Percona Server for MongoDB requires no changes to MongoDB applications or code.

This release candidate is based on MongoDB 3.4.3 and includes the following additional changes:

  • #PSMDB-123: Fixed Hot Backup to create proper subdirectories in the destination directory.
  • #PSMDB-126: Added index and collection names to duplicate key error message.

Percona Server for MongoDB 3.4.3-1.3 release notes are available in the official documentation.

SST XtraDB Backup - Use a different .sst directory for JOINER

Latest Forum Posts - April 7, 2017 - 2:31am

Hi there !

I'm trying to restore one node of my Percona cluster. Usually, everything goes well.
Unfortunately, I'm running out of space in my datadir: XtraBackup creates a ".sst" directory under the datadir that fills up with backup files.

See the log output:

-----------------------------------------------------------------------------------------------------------------
WSREP_SST: [INFO] WARNING: Stale temporary SST directory: /datas/mysql//.sst from previous state transfer. Removing
WSREP_SST: [INFO] Proceeding with SST (20170404 10:40:00.090)
WSREP_SST: [INFO] Evaluating socat -u TCP-LISTEN:4444,reuseaddr stdio | xbstream -x; RC=( ${PIPESTATUS[@]} ) (20170404 10:40:00.091)
WSREP_SST: [INFO] Cleaning the existing datadir and innodb-data/log directories (20170404 10:40:00.093)
WSREP_SST: [INFO] Waiting for SST streaming to complete! (20170404 10:40:00.119)
2017-04-04 10:40:00 16965 [Note] WSREP: (485f6f57, 'tcp://0.0.0.0:4567') turning message relay requesting off
2017-04-04 10:54:16 16965 [Note] WSREP: 0.0 (node_cluster_56): State transfer to 1.0 (node_cluster_56) complete.
2017-04-04 10:54:16 16965 [Note] WSREP: Member 0.0 (node_cluster_56) synced with group.
WSREP_SST: [INFO] Compressed qpress files found (20170404 10:54:16.800)
WSREP_SST: [INFO] Decompression with 32 threads (20170404 10:54:16.806)
WSREP_SST: [INFO] Evaluating find /datas/mysql//.sst -type f -name '*.qp' -printf '%p\n%h\n' | xargs -n 2 qpress -T32d (20170404 10:54:16.810)
qpress: Disk full while writing destination file
xargs: qpress: exited with status 255; aborting.
WSREP_SST: [ERROR] Cleanup after exit with status:124 (20170404 11:13:27.825)
2017-04-04 11:13:27 16965 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup-v2 --role 'joiner' --address '10.1.1.8' --datadir '/datas/mysql/' --defaults-file '/etc/mysql/my.cnf' --defaults-group-suffix '' --parent '16965' '' : 124 (Wrong medium type)
2017-04-04 11:13:27 16965 [ERROR] WSREP: Failed to read uuid:seqno from joiner script.
2017-04-04 11:13:27 16965 [ERROR] WSREP: SST script aborted with error 124 (Wrong index given to function)
2017-04-04 11:13:27 16965 [ERROR] WSREP: SST failed: 124 (Wrong medium type)
2017-04-04 11:13:27 16965 [ERROR] Aborting
-------------------------------------------------------------------------------------------------------------------

Unfortunately (again), you can't simply change the .sst directory for the joiner in the wsrep_sst_xtrabackup-v2 script.
So I tried to "hack" it by creating my own .sst directory, which is a symbolic link to another partition.

--------------------------------------------
rm -rf /datas/mysql/*
rm -rf /datas/mysql/.sst
ln -s /var/tmp/xtrabackup /datas/mysql/.sst
chown -R mysql:mysql /datas/mysql
chown -R mysql:mysql /var/tmp/xtrabackup
---------------------------------------------

It works until the innobackupex stage:

------------------------------------------------------------------------------------------------
WSREP_SST: [INFO] Preparing the backup at /datas/mysql//.sst (20170405 10:52:46.812)
WSREP_SST: [INFO] Evaluating innobackupex --no-version-check --apply-log $rebuildcmd ${DATA} &>${DATA}/innobackup.prepare.log (20170405 10:52:46.816)
WSREP_SST: [ERROR] Cleanup after exit with status:1 (20170405 10:52:46.861)
2017-04-05 10:52:46 30211 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup-v2 --role 'joiner' --address '192.168.118.8' --datadir '/datas/mysql/' --defaults-file '/etc/mysql/my.cnf' --defaults-group-suffix '' --parent '30211' '' : 1 (Operation
not permitted)
2017-04-05 10:52:46 30211 [ERROR] WSREP: Failed to read uuid:seqno from joiner script.
2017-04-05 10:52:46 30211 [ERROR] WSREP: SST script aborted with error 1 (Operation not permitted)
2017-04-05 10:52:46 30211 [ERROR] WSREP: SST failed: 1 (Operation not permitted)
2017-04-05 10:52:46 30211 [ERROR] Aborting
-------------------------------------------------------------------------------------------------

More details in innobackup.prepare.log :

--------------------------------------------------------------------------------------------------
170405 11:32:21 innobackupex: Starting the apply-log operation

IMPORTANT: Please check that the apply-log run completes successfully.
At the end of a successful apply-log run innobackupex
prints "completed OK!".

innobackupex version 2.3.6 based on MySQL server 5.6.24 Linux (x86_64) (revision id: 7686bfc)
xtrabackup: cd to /datas/mysql//.sst
xtrabackup: This target seems to be not prepared yet.
2017-04-05 11:32:21 7f74d64907e0 InnoDB: Operating system error number 2 in a file operation.
InnoDB: The error means the system cannot find the path specified.
xtrabackup: Warning: cannot open ./xtrabackup_logfile. will try to find.
2017-04-05 11:32:21 7f74d64907e0 InnoDB: Operating system error number 2 in a file operation.
InnoDB: The error means the system cannot find the path specified.
xtrabackup: Fatal error: cannot find ./xtrabackup_logfile.
xtrabackup: Error: xtrabackup_init_temp_log() failed.
--------------------------------------------------------------------------------------------------

Another way is to avoid SST :
https://severalnines.com/blog/how-av...galera-cluster

It doesn't seem too complex, but in a production environment I'm not comfortable with it.

Has anyone faced this issue? Do you have other suggestions?

Regards,

check docker logs

Latest Forum Posts - April 6, 2017 - 7:53pm
pmm-server version: 1.1.2
docker logs <id>:

2017-04-06 14:47:10,962 CRIT Supervisor running as root (no user in config file)
2017-04-06 14:47:10,962 WARN Included extra file "/etc/supervisord.d/pmm.ini" during parsing


How can I fix this?

Percona Live Featured Session with Ilya Kosmodemiansky: Linux IO internals for Database Administrators

Latest MySQL Performance Blog posts - April 6, 2017 - 1:57pm

Welcome to another post in the series of Percona Live featured session blogs! In these blogs, we’ll highlight some of the session speakers that will be at this year’s Percona Live conference. We’ll also discuss how these sessions can help you improve your database environment. Make sure to read to the end to get a special Percona Live 2017 registration bonus!

In this Percona Live featured session, we’ll meet Ilya Kosmodemiansky, CEO and Consultant of Data Egret. His session is Linux IO Internals for Database Administrators. Input/output performance problems have been an everyday agenda item for DBAs since the beginning of databases. Data volume grows rapidly, and you need to get your data quickly from the disk – and, more importantly, to the disk. This talk covers how Linux IO works, how database pages travel from the disk level to the database shared memory and back, and what mechanisms exist to help control the exchange.

NOTE: As of this interview, Ilya’s company has rebranded from PostgreSQL Consulting to Data Egret. You can read about this change here.
I had a chance to speak with Ilya about Linux IO Internals for Database Administrators:

Percona: How did you get into database technology? What do you love about it?

Ilya: I am actually a biologist by training, so my first programming experience was in basic bioinformatics rather than general programming. That was a long time ago. Then I started trying myself in pure informatics and found it really exciting. I then worked as an Oracle, DB2 and Postgres DBA. Today, my main focus is Postgres.

Dealing with open source technology is very different, and I enjoy the sense of community it brings.

Databases are the bread and butter of any business, and are crucial to its success. I find it exciting to be able to support the different types of businesses we are working with, and really enjoy solving problems and troubleshooting complex issues. This is what makes my day-to-day really enjoyable and keeps me on my toes.

Percona: Your talk is called Linux IO internals for Database Administrators. Why are Linux IO internals important for DBAs?

Ilya: Databases are really a part of a larger ecosystem; they heavily rely on the operating system’s internal mechanisms, hardware, etc. To be an expert DBA and have full control over your database, you should have a deeper understanding of how this system works. This knowledge also helps you tackle different situations and avoid problems where possible.

After you reach a certain level of database optimization, you need to scrutinize your system on a deeper level. This is the only way you can ensure its optimal function.

Percona: What value does understanding how the IO internals work add to their ability to do their jobs?

Ilya: I would say it’s similar to driving vs. troubleshooting a car. To drive a car, you only need to know how you change gears, adjust the mirrors, add fuel and have a good driving technique. But if your car breaks down you need to understand what happens under the hood, and how its different components work on a deeper level. It not only makes you a better and more confident driver, but it also helps you get the best out of your car.

The same thing is true with databases. If you know how they really work, you will be able to better optimize their performance. As a DBA, you are in a sense an F1 mechanic. You need to know how they work to be able to do a good job, efficiently and fast.

Percona: What do you want attendees to take away from your session? Why should they attend?

Ilya: I would like my audience to really start to see the bigger picture, while at the same time not forgetting the importance of detail. It’s always easy to rely on a checklist, and an average DBA always looks for one. My talk is going to disappoint them: there is no ultimate checklist that finds all the possible failures and fixes. You really need to have a deeper understanding of database internals. Only that knowledge allows you to quickly make the right decision in critical situations. Having this knowledge will also allow you to be the judge of what to optimize and what to improve, so that you can get the most out of your system.

Percona: What are you most looking forward to at Percona Live 2017?

Ilya: This is going to be my third conference, and it always attracts fantastic speakers and a great audience. I am looking forward to learning a lot about broader technologies I would normally have no chance to look at, making new friends and hanging out with old ones. Being part of the open source community is like having an extended family, and I find events such as Percona Live contribute to its strength by bringing together different communities.

For the first time, we will have a PostgreSQL community booth at Percona Live this year. I think it’s a fantastic opportunity that will allow the two communities to get together and provide a fertile ground for new discussions, collaborations and mutual technology improvements.

For more information on Linux IO internals, or PostgreSQL in general, see Ilya and Data Egret’s various social handles:

Register for Percona Live Data Performance Conference 2017, and see Ilya present his session on Linux IO Internals for Database Administrators. Use the code FeaturedTalk and receive $100 off the current registration price!

Percona Live Data Performance Conference 2017 is the premier open source event for the data performance ecosystem. It is the place to be for the open source community, as well as businesses that thrive in the MySQL, NoSQL, cloud, big data and Internet of Things (IoT) marketplaces. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Data Performance Conference will be April 24-27, 2017 at the Hyatt Regency Santa Clara & The Santa Clara Convention Center.

Dealing with MySQL Error Code 1215: “Cannot add foreign key constraint”

Latest MySQL Performance Blog posts - April 6, 2017 - 11:11am

In this blog, we’ll look at how to resolve MySQL error code 1215: “Cannot add foreign key constraint”.

Our Support customers often come to us with things like “My database deployment fails with error 1215”, “Am trying to create a foreign key and can’t get it working” or “Why am I unable to create a constraint?” To be honest, the error message doesn’t help much. You just get the following line:

ERROR 1215 (HY000): Cannot add foreign key constraint

But MySQL never tells you exactly WHY it failed. There’s actually a multitude of reasons this can happen. This blog post is a compendium of the most common reasons why you can get ERROR 1215, how to diagnose your case to find which one is affecting you and potential solutions for adding the foreign key.

(Note: be careful when applying the proposed solutions, as many involve ALTERing the parent table, and that can block the table for a long time depending on your table size, MySQL version and the specific ALTER operation being applied. In many cases, using pt-online-schema-change will likely be a good idea.)

So, onto the list:

1) The table or index the constraint refers to does not exist yet (usual when loading dumps).

How to diagnose: Run SHOW TABLES or SHOW CREATE TABLE for each of the parent tables. If you get error 1146 for any of them, it means the tables are being created in the wrong order.
How to fix: Run the missing CREATE TABLE and try again, or temporarily disable foreign key checks. This is especially needed during backup restores where circular references might exist. Simply run:

SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS;
SET FOREIGN_KEY_CHECKS=0;
SOURCE /backups/mydump.sql; -- restore your backup within THIS session
SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS;

Example:

mysql> CREATE TABLE child (
    ->   id INT(10) NOT NULL PRIMARY KEY,
    ->   parent_id INT(10),
    ->   FOREIGN KEY (parent_id) REFERENCES `parent`(`id`)
    -> ) ENGINE INNODB;
ERROR 1215 (HY000): Cannot add foreign key constraint

# We check for the parent table and it is not there.
mysql> SHOW TABLES LIKE 'par%';
Empty set (0.00 sec)

# We go ahead and create the parent table (we’ll use the same parent table structure for all other examples in this blogpost):
mysql> CREATE TABLE parent (
    ->   id INT(10) NOT NULL PRIMARY KEY,
    ->   column_1 INT(10) NOT NULL,
    ->   column_2 INT(10) NOT NULL,
    ->   column_3 INT(10) NOT NULL,
    ->   column_4 CHAR(10) CHARACTER SET utf8 COLLATE utf8_bin,
    ->   KEY column_2_column_3_idx (column_2, column_3),
    ->   KEY column_4_idx (column_4)
    -> ) ENGINE INNODB;
Query OK, 0 rows affected (0.00 sec)

# And now we re-attempt to create the child table
mysql> CREATE TABLE child (
    ->   id INT(10) NOT NULL PRIMARY KEY,
    ->   parent_id INT(10),
    ->   FOREIGN KEY (parent_id) REFERENCES `parent`(`id`)
    -> ) ENGINE INNODB;
Query OK, 0 rows affected (0.01 sec)

2) The table or index named in the constraint’s REFERENCES clause misuses quotes.

How to diagnose: Inspect each FOREIGN KEY declaration and make sure you either have no quotes around object qualifiers, or that you have quotes around the table and a SEPARATE pair of quotes around the column name.
How to fix: Either don’t quote anything, or quote the table and the column separately.
Example:

# wrong; single pair of backticks wraps both table and column
ALTER TABLE child ADD FOREIGN KEY (parent_id) REFERENCES `parent(id)`;

# correct; one pair for each part
ALTER TABLE child ADD FOREIGN KEY (parent_id) REFERENCES `parent`(`id`);

# also correct; no backticks anywhere
ALTER TABLE child ADD FOREIGN KEY (parent_id) REFERENCES parent(id);

# also correct; backticks on either object (in case it’s a keyword)
ALTER TABLE child ADD FOREIGN KEY (parent_id) REFERENCES parent(`id`);

3) The local key, foreign table or column in the constraint’s REFERENCES has a typo:

How to diagnose: Run SHOW TABLES and SHOW COLUMNS and compare strings with those in your REFERENCES declaration.
How to fix: Fix the typo once you find it.
Example:

# wrong; Parent table name is ‘parent’
ALTER TABLE child ADD FOREIGN KEY (parent_id) REFERENCES pariente(id);

# correct
ALTER TABLE child ADD FOREIGN KEY (parent_id) REFERENCES parent(id);

4) The column the constraint refers to is not of the same type or width as the foreign column:

How to diagnose: Use SHOW CREATE TABLE parent to check that the local column and the referenced column both have the same data type and width.
How to fix: Edit your DDL statement such that the column definition in the child table matches that of the parent table.
Example:

# wrong; id column in parent is INT(10)
CREATE TABLE child (
  id INT(10) NOT NULL PRIMARY KEY,
  parent_id BIGINT(10) NOT NULL,
  FOREIGN KEY (parent_id) REFERENCES `parent`(`id`)
) ENGINE INNODB;

# correct; id column matches definition of parent table
CREATE TABLE child (
  id INT(10) NOT NULL PRIMARY KEY,
  parent_id INT(10) NOT NULL,
  FOREIGN KEY (parent_id) REFERENCES `parent`(`id`)
) ENGINE INNODB;

5) The foreign object is not a KEY of any kind

How to diagnose: Use SHOW CREATE TABLE parent to check whether the column that the REFERENCES part points to is indexed; if it is not indexed in any way, that is the problem.
How to fix: Make the column a KEY, UNIQUE KEY or PRIMARY KEY on the parent.
Example:

# wrong; column_1 is not indexed in our example table
CREATE TABLE child (
  id INT(10) NOT NULL PRIMARY KEY,
  parent_column_1 INT(10),
  FOREIGN KEY (parent_column_1) REFERENCES `parent`(`column_1`)
) ENGINE INNODB;

# correct; we first add an index on the parent
ALTER TABLE parent ADD INDEX column_1_idx(column_1);

# and then re-attempt creation of the child table
CREATE TABLE child (
  id INT(10) NOT NULL PRIMARY KEY,
  parent_column_1 INT(10),
  FOREIGN KEY (parent_column_1) REFERENCES `parent`(`column_1`)
) ENGINE INNODB;

6) The foreign key is a multi-column PK or UK, where the referenced column is not the leftmost one

How to diagnose: Do a SHOW CREATE TABLE parent to check if the REFERENCES part points to a column that is present in some multi-column index(es), but is not the leftmost one in its definition.
How to fix: Add an index on the parent table where the referenced column is the leftmost (or only) column.
Example:

# wrong; column_3 only appears as the second part of an index on the parent table
CREATE TABLE child (
  id INT(10) NOT NULL PRIMARY KEY,
  parent_column_3 INT(10),
  FOREIGN KEY (parent_column_3) REFERENCES `parent`(`column_3`)
) ENGINE INNODB;

# correct; create a new index for the referenced column
ALTER TABLE parent ADD INDEX column_3_idx (column_3);

# then re-attempt creation of child
CREATE TABLE child (
  id INT(10) NOT NULL PRIMARY KEY,
  parent_column_3 INT(10),
  FOREIGN KEY (parent_column_3) REFERENCES `parent`(`column_3`)
) ENGINE INNODB;

7) Different charsets/collations among the two table/columns

How to diagnose: Run SHOW CREATE TABLE parent and compare that the child column (and table) CHARACTER SET and COLLATE parts match those of the parent table.
How to fix: Modify the child table DDL so that it matches the character set and collation of the parent table/column (or ALTER the parent table to match the child's wanted definition).
Example:

# wrong; the parent table uses utf8/utf8_bin for charset/collation
CREATE TABLE child (
  id INT(10) NOT NULL PRIMARY KEY,
  parent_column_4 CHAR(10) CHARACTER SET utf8 COLLATE utf8_unicode_ci,
  FOREIGN KEY (parent_column_4) REFERENCES `parent`(`column_4`)
) ENGINE INNODB;

# correct; edited DDL so COLLATE matches parent definition
CREATE TABLE child (
  id INT(10) NOT NULL PRIMARY KEY,
  parent_column_4 CHAR(10) CHARACTER SET utf8 COLLATE utf8_bin,
  FOREIGN KEY (parent_column_4) REFERENCES `parent`(`column_4`)
) ENGINE INNODB;

8) The parent table is not using InnoDB

How to diagnose: Run SHOW CREATE TABLE parent and verify if ENGINE=INNODB or not.
How to fix: ALTER the parent table to change the engine to InnoDB.
Example:

# wrong; the parent table in this example is MyISAM:
CREATE TABLE parent (
  id INT(10) NOT NULL PRIMARY KEY
) ENGINE MyISAM;

# correct: we modify the parent’s engine
ALTER TABLE parent ENGINE=INNODB;

9) Using syntax shorthands to reference the foreign key

How to diagnose: Check if the REFERENCES part only mentions the table name. As explained by ex-colleague Bill Karwin in http://stackoverflow.com/questions/41045234/mysql-error-1215-cannot-add-foreign-key-constraint, MySQL doesn’t support this shortcut (even though this is valid SQL).
How to fix: Edit the child table DDL so that it specifies both the table and the column.
Example:

# wrong; only the parent table name is specified in REFERENCES
CREATE TABLE child (
  id INT(10) NOT NULL PRIMARY KEY,
  column_2 INT(10) NOT NULL,
  FOREIGN KEY (column_2) REFERENCES parent
) ENGINE INNODB;

# correct; both the table and column are in the REFERENCES definition
CREATE TABLE child (
  id INT(10) NOT NULL PRIMARY KEY,
  column_2 INT(10) NOT NULL,
  FOREIGN KEY (column_2) REFERENCES parent(column_2)
) ENGINE INNODB;

10) The parent table is partitioned

How to diagnose: Run SHOW CREATE TABLE parent and find out if it’s partitioned or not.
How to fix: Removing the partitioning (i.e., merging all partitions back into a single table) is the only way to get it working.
Example:

# wrong: the parent table we see below is using PARTITIONs
CREATE TABLE parent (
  id INT(10) NOT NULL PRIMARY KEY
) ENGINE INNODB
PARTITION BY HASH(id)
PARTITIONS 6;

# correct: ALTER parent table to remove partitioning
ALTER TABLE parent REMOVE PARTITIONING;

11) Referenced column is a generated virtual column (this is only possible with 5.7 and newer)

How to diagnose: Run SHOW CREATE TABLE parent and verify that the referenced column is not a virtual column.
How to fix: CREATE or ALTER the parent table so that the column will be stored and not generated.
Example:

# wrong; this parent table has a generated virtual column
CREATE TABLE parent (
  id INT(10) NOT NULL PRIMARY KEY,
  column_1 INT(10) NOT NULL,
  column_2 INT(10) NOT NULL,
  column_virt INT(10) AS (column_1 + column_2) NOT NULL,
  KEY column_virt_idx (column_virt)
) ENGINE INNODB;

# correct: make the column STORED so it can be used as a foreign key
ALTER TABLE parent
  DROP COLUMN column_virt,
  ADD COLUMN column_virt INT(10) AS (column_1 + column_2) STORED NOT NULL;

# And now the child table can be created pointing to column_virt
CREATE TABLE child (
  id INT(10) NOT NULL PRIMARY KEY,
  parent_virt INT(10) NOT NULL,
  FOREIGN KEY (parent_virt) REFERENCES parent(column_virt)
) ENGINE INNODB;

12) Using SET DEFAULT for a constraint action

How to diagnose: Check your child table DDL and see if any of your constraint actions (ON DELETE, ON UPDATE) try to use SET DEFAULT
How to fix: Remove or modify actions that use SET DEFAULT from the child table CREATE or ALTER statement.
Example:

# wrong; the constraint action uses SET DEFAULT
CREATE TABLE child (
  id INT(10) NOT NULL PRIMARY KEY,
  parent_id INT(10) NOT NULL,
  FOREIGN KEY (parent_id) REFERENCES parent(id) ON UPDATE SET DEFAULT
) ENGINE INNODB;

# correct; there's no alternative to SET DEFAULT, so removing it or picking another action is the corrective measure
CREATE TABLE child (
  id INT(10) NOT NULL PRIMARY KEY,
  parent_id INT(10) NOT NULL,
  FOREIGN KEY (parent_id) REFERENCES parent(id)
) ENGINE INNODB;

I realize many of the solutions are not what you might desire, but these are limitations in MySQL that must be overcome on the application side for the time being. I do hope the list above gets shorter by the time 8.0 is released!

If you know other ways MySQL can fail with ERROR 1215, let us know in the comments!

More information regarding Foreign Key restrictions can be found here: https://dev.mysql.com/doc/refman/5.7/en/innodb-foreign-key-constraints.html.

Evaluation of PMP Profiling Tools

Latest MySQL Performance Blog posts - April 5, 2017 - 12:12pm

In this blog post, we’ll look at some of the available PMP profiling tools.

While debugging or analyzing issues with Percona Server for MySQL, we often need a quick understanding of what’s happening on the server. Percona experts frequently use the pt-pmp tool from Percona Toolkit (inspired by http://poormansprofiler.org).

The pt-pmp tool collects application stack traces using GDB and then post-processes them. From this you get a condensed, ordered list of the stack traces. The list helps you understand where the application spent most of its time: either running something or waiting for something.

Getting a profile with pt-pmp is handy, but it has a cost: it’s quite intrusive. In order to get stack traces, GDB has to attach to each thread of your application, which results in interruptions. Under high loads, these stops can be quite significant (up to 15, 30 or even 60 seconds). This means that the pt-pmp approach is not really usable in production.

Below I’ll describe how to reduce GDB overhead, and also what other tools can be used instead of GDB to get stack traces.

  • GDB
    By default, the symbol resolution process in GDB is very slow. As a result, getting stack traces with GDB is quite intrusive (especially under high loads). There are two options available that can help notably reduce GDB tracing overhead:

      1. Use the readnever patch. RHEL and other distros based on it include GDB with the readnever patch applied. This patch allows you to avoid unnecessary symbol resolving with the --readnever option. As a result you get up to 10 times better speed.
      2. Use gdb_index. This feature was added to address the symbol resolving issue by creating and embedding a special index into the binaries. This index is quite compact: I’ve created and embedded a gdb_index for the Percona Server binary (it increases the size by around 7-8 MB). The addition of the gdb_index speeds up obtaining stack traces/resolving symbols two to three times.

    # to check if index already exists:
    readelf -S | grep gdb_index
    # to generate index:
    gdb -batch mysqld -ex "save gdb-index /tmp" -ex "quit"
    # to embed index:
    objcopy --add-section .gdb_index=tmp/mysqld.gdb-index --set-section-flags .gdb_index=readonly mysqld mysqld

  • eu-stack (elfutils)
    The eu-stack tool from the elfutils package prints the stack for each thread in a process or core file. Symbol resolving is also not very optimized in eu-stack; by default, if you run it under load, it will take even more time than GDB. But eu-stack allows you to skip resolving completely, so it can get stack frames quickly and then resolve them later without any impact on the workload.
  • Quickstack
    Quickstack is a tool from Facebook that gets stack traces with minimal overhead.

Now let’s compare all the above profilers. We will measure the amount of time needed to take all the stack traces from Percona Server for MySQL under a high load (sysbench OLTP_RW with 512 threads).

The results show that eu-stack (without resolving) got all stack traces in less than a second, and that Quickstack and GDB (with the readnever patch) got very close results. For other profilers, the time was around two to five times higher. This is quite unacceptable for profiling (especially in production).

There is one more note regarding the pt-pmp tool. The current version only supports GDB as the profiler. However, there is a development version of this tool that supports GDB, Quickstack, eu-stack and eu-stack with offline symbol resolving. It also allows you to look at stack traces for specific threads (tids). So for instance, in the case of Percona Server for MySQL, we can analyze just the purge, cleaner or IO threads.

Below are the command lines used in testing:

# gdb & gdb+gdb_index
time gdb -ex "set pagination 0" -ex "thread apply all bt" -batch -p `pidof mysqld` > /dev/null
# gdb+readnever
time gdb --readnever -ex "set pagination 0" -ex "thread apply all bt" -batch -p `pidof mysqld` > /dev/null
# eu-stack
time eu-stack -s -m -p `pidof mysqld` > /dev/null
# eu-stack without resolving
time eu-stack -q -p `pidof mysqld` > /dev/null
# quickstack - 1 sample
time quickstack -c 1 -p `pidof mysqld` > /dev/null
# quickstack - 1000 samples
time quickstack -c 1000 -p `pidof mysqld` > /dev/null

Percona Server for MySQL 5.7.17-13 is Now Available

Latest MySQL Performance Blog posts - April 5, 2017 - 8:53am

Percona announces the GA release of Percona Server for MySQL 5.7.17-13 on April 5, 2017. Download the latest version from the Percona web site or the Percona Software Repositories. You can also run Docker containers from the images in the Docker Hub repository.

Based on MySQL 5.7.17, including all the bug fixes in it, Percona Server for MySQL 5.7.17-13 is the current GA release in the Percona Server for MySQL 5.7 series. Percona provides completely open-source and free software. Find release details in the 5.7.17-13 milestone at Launchpad.

Bugs Fixed:
  • MyRocks storage engine detection implemented in mysqldump in Percona Server 5.6.17-12 was using a deprecated INFORMATION_SCHEMA.SESSION_VARIABLES table, causing mysqldump failures on servers running with the show_compatibility_56 variable set to OFF. Bug fixed #1676401.

The release notes for Percona Server for MySQL 5.7.17-13 are available in the online documentation. Please report any bugs on the launchpad bug tracker.
