

Feed aggregator

MySQL Mixed Replication uses different method for same query

Latest Forum Posts - 9 hours 27 min ago

mysql> select * from table_in_question;
+---------+-----------+---------------+------------+-----------------+---------------------+---------+--------+--------+
| id      | parent_id | key           | array_name | value           | time                | ip      | remark | p_id   |
+---------+-----------+---------------+------------+-----------------+---------------------+---------+--------+--------+
| 2128363 |         0 | fname         | o          | Sue             | 2015-03-02 08:47:48 | 0.0.0.0 |        | 166817 |
| 2128365 |   2128363 | lname         | o          | Susanson        | 2015-03-02 08:47:48 |         |        | 166817 |
| 2128367 |   2128363 | address       | o          | 123 Main        | 2015-03-02 08:47:48 |         |        | 166817 |
| 2128373 |   2128363 | city          | o          | Nowhere         | 2015-03-02 08:47:48 |         |        | 166817 |
| 2128379 |   2128363 | state         | o          | LA              | 2015-03-02 08:47:48 |         |        | 166817 |
| 2128385 |   2128363 | postcode      | o          | 11111           | 2015-03-02 08:47:48 |         |        | 166817 |
| 2128389 |         0 | fname         | o          | Amy             | 2015-03-02 08:47:58 | 2.2.2.2 |        | 0      |
| 2128391 |   2128363 | telephone     | o          | (800)555-1212   | 2015-03-02 08:47:48 |         |        | 166817 |
| 2128397 |   2128363 | email_address | o          | sue@nowhere.com | 2015-03-02 08:47:48 |         |        | 166817 |
| 2128403 |   2128363 | click         | o          | 369490878       | 2015-03-02 08:47:48 |         |        | 166817 |
| 2128409 |   2128363 | country       | o          | 223             | 2015-03-02 08:47:48 |         |        | 166817 |
| 2128415 |   2128363 | product       | c          | 7               | 2015-03-02 08:47:48 |         |        | 166817 |
| 2128421 |   2128363 | g_id          | c          | 1               | 2015-03-02 08:47:48 |         |        | 166817 |
| 2128427 |   2128363 | c_id          | c          | 141             | 2015-03-02 08:47:48 |         |        | 166817 |
+---------+-----------+---------------+------------+-----------------+---------------------+---------+--------+--------+

id is the Primary Key of this table.

Last SQL error: Error 'Duplicate entry '2128391' for key 'PRIMARY'' on query. Default database: 'XXX'. Query: 'INSERT INTO table_in_question (key,value,array_name,parent_id) VALUES ('lname','Amyson','o',2128389), ('address','444 Main','o',2128389), ('city','Nowhere','o',2128389), ('state','NH','o',2128389), ('postcode','11111','o',2128389), ('telephone','800555-1212','o',2128389), ('email_address','Amy@nowhere.com','o',2128389), ('country','223','o',2128389), ('product','7','c',2128389), ('g_id','1','c',2128389), ('c_id','141','c',2128389)'

The issue is: we are upgrading to a PXC cluster with 6 nodes, which causes the auto-increment increment to be set to 6 on the cluster. We are keeping the data current via the binlog from a Master/Slave pair where the auto-increment increment is set to 2. The Master/Slave pair uses MIXED replication. The row with no parent_id is a simple insert; the remaining rows are inserted as above.
From what I can see, the first row and the subsequent 2 rows are populated with row-based replication, and the remaining rows are switched to statement-based. When the 2nd no-parent_id row is inserted it takes the ID from the master via row-based replication. When the first row referencing the parent is inserted, it too wants row-based replication, causing the collision.
The bulk insert is especially puzzling: MySQL apparently breaks it up and treats the first 2 rows as row-based replication and the others as statement-based. The question is: given that these are all simple inserts and none specifically addresses the time column (defaulted to the current timestamp), the only potentially non-deterministic column, what would cause the asynchronous replication method to differ?
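For reference, the increment settings being compared can be checked on each server with SHOW VARIABLES (the values shown here are illustrative, not taken from the post):

mysql> SHOW VARIABLES LIKE 'auto_increment%';
+--------------------------+-------+
| Variable_name            | Value |
+--------------------------+-------+
| auto_increment_increment | 2     |
| auto_increment_offset    | 1     |
+--------------------------+-------+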

How to test if CVE-2015-0204 FREAK SSL security flaw affects you

Latest MySQL Performance Blog posts - 13 hours 16 min ago

The CVE-2015-0204 FREAK SSL vulnerability abuses intentionally weak “EXPORT” ciphers, which could be used to perform a transparent Man In The Middle attack. (We seem to be continually bombarded not only with SSL vulnerabilities but with the need to give vulnerabilities increasingly odd names.)

Is your server vulnerable?

This can be tested using the following GIST
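The GIST itself is not reproduced in this aggregated copy; a minimal equivalent check, with example.com standing in as a placeholder for your host and port, might look like this:

echo | openssl s_client -connect example.com:443 -cipher EXPORT 2>/dev/null | grep -c "Cipher is EXP"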

If the result is 0, the server is not offering the EXPORT cipher, and as such is not vulnerable.

Is your client vulnerable?

Point your client to https://oneiroi.co.uk:4443/test. If this returns “Vulnerable” then the client is vulnerable; if you instead get a connection error, your client should not be vulnerable. For example:

root@host:/tmp$ openssl version
OpenSSL 1.0.1e 11 Feb 2013
root@host:/tmp$ curl https://oneiroi.co.uk:4443/test -k
Vulnerable

root@host:/tmp$ openssl s_client -connect oneiroi.co.uk:4443
CONNECTED(00000003)
depth=0 C = XX, L = Default City, O = Default Company Ltd
verify error:num=18:self signed certificate
verify return:1
depth=0 C = XX, L = Default City, O = Default Company Ltd
verify return:1

Certificate chain
0 s:/C=XX/L=Default City/O=Default Company Ltd
i:/C=XX/L=Default City/O=Default Company Ltd

Server certificate
-----BEGIN CERTIFICATE-----
MIIDVzCCAj+gAwIBAgIJANvTn7jl

[root@3654e4df1cc2 bin]# curl https://oneiroi.co.uk:4443/test -k
curl: (35) Cannot communicate securely with peer: no common encryption algorithm(s).
[root@3654e4df1cc2 bin]# openssl s_client -connect oneiroi.co.uk:4443
CONNECTED(00000003)
139942442694560:error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure:s23_clnt.c:744:

In short, a vulnerable client will complete the connection, and a non-vulnerable client should present an SSL handshake failure error.

DIY

You can recreate this setup yourself:


openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout mycert.pem -out mycert.pem;
openssl s_server -cipher EXPORT -accept 4443 -cert mycert.pem -HTTP;
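Once the server is running, you can point any client you want to test at port 4443. For instance (localhost here is an assumption; the original post used a public test host), a client that completes the handshake against this EXPORT-only server is accepting EXPORT-grade ciphers:

curl -kv https://localhost:4443/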

Is MySQL affected?

Some of the code from the POODLE blog post can be repurposed here.


mysql -Bse "SHOW STATUS LIKE 'Ssl_cipher_list'" | sed 's/:/\n/g' | grep EXP | wc -l

A result of 0 means the MySQL instance does not support any of the EXPORT ciphers, and thus should not be vulnerable to this attack.
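As a supplementary check (not from the original post), you can also list which EXPORT-grade ciphers your local OpenSSL build knows about; a vulnerable client library will typically report one or more:

openssl ciphers -v 'EXPORT' | wc -l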

How about other clients?

Most clients link to another library for SSL purposes; however, there are examples where this is not the case. Take, for example, golang's http://golang.org/pkg/crypto/tls/, which partially implements the TLS 1.2 RFC.

The following test code, however, shows that golang does not appear to be affected.


package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	tr := &http.Transport{
		TLSClientConfig:    &tls.Config{},
		DisableCompression: true,
	}
	client := &http.Client{Transport: tr}
	resp, err := client.Get("https://oneiroi.co.uk:4443/test")
	fmt.Println(err)
	fmt.Println(resp)
}

Get https://oneiroi.co.uk:4443/test: remote error: handshake failure


The post How to test if CVE-2015-0204 FREAK SSL security flaw affects you appeared first on MySQL Performance Blog.

Steve ‘Woz’ Wozniak to speak at Percona Live MySQL Conference & Expo 2015

Apple’s Steve “Woz” Wozniak will talk at Percona Live 2015

I am thrilled to announce that Apple co-founder and Silicon Valley icon and philanthropist Steve Wozniak will participate in a moderated Q&A on creativity and innovation April 14 during this year’s Percona Live MySQL Conference and Expo in Santa Clara, California.

In addition to “The Woz,” as Steve is nicknamed, we have a great lineup of esteemed industry luminaries, with a broad range of talks and tutorials along with fun and informative networking events during the four-day conference (April 13-16).

Harrison Fisk of Facebook’s Database Performance Team will deliver a keynote titled “Polyglot Persistence @Facebook” exploring why Facebook has so many different data solutions deployed at scale and how the company decides to deploy a new one. He’ll also talk about how Facebook is able to manage all these solutions and what types of optimizations are possible across all their data systems.

‘MySQL 101’ program announced
I wrote about the new “MySQL 101” program earlier this week. Largely driven by the broad demand for (and scarcity of) MySQL expertise across many industries, Percona Live is adding dual tracks for both developers and system administrators that will provide a 2-day course on MySQL fundamentals.

The two days of practical training will include everything needed to handle day-to-day MySQL DBA tasks. The motto of this new program is, “You send us developers and admins, and we will send you back MySQL DBAs.” You can check out the full schedule for MySQL 101 here.

Lightning Talks and Birds of a Feather Sessions (BOFs)
In related news, today we also announced the schedules for the popular “Lightning Talks” and “Birds of a Feather Sessions” (BOFs).

Lightning Talks provide an opportunity for attendees to propose, explain, exhort, and rant on any MySQL-related topic for five minutes. Lightning Talks will take place Wednesday night (April 15) during the MySQL Community Networking Reception, which begins immediately following the breakout sessions.

This year’s topics include:

  • “Everything About MySQL Bugs” presented by Sveta Smirnova, senior principal technical support engineer, and Valerii Kravchuk, principal support engineer at Percona
  • “MySQL Galera Cluster, Percona XtraDB Cluster, and MariaDB Galera Cluster” presented by Jay Janssen, managing consultant at Percona
  • “Data Security – Emerging Legal Trends” presented by S. Keith Moulsdale, partner at Whiteford, Taylor & Preston
  • “WebScaleSQL Meeting/Hacking” presented by Steaphan Greene, software engineer at WebScaleSQL

BOF sessions enable attendees with interests in the same project or topic to enjoy some quality face time. The BOF sessions will take place Tuesday night, April 14, from 6:00 p.m. to 7:00 p.m. This year’s topics include:

  • “Working From Home – Fun, Facts and Scares” presented by Raghavendra Prabhu, product lead, Percona XtraDB Cluster at Percona
  • “The Future of MySQL Quality Assurance: Introducing pquery” presented by Roel Van de Paar, senior QA lead, and Ramesh Sivaraman, junior QA engineer at Percona

Community Reception
One of the most fun and compelling aspects of Percona Live is the networking, providing the opportunity to make connections that can help enhance a career, facilitate a current project, or inspire new ideas. This year’s conference features the Community Networking Reception in the Expo Hall on Wednesday, April 15, from 5:30 p.m. to 7:00 p.m. The event will include the MySQL Community Awards and the Lightning Talks.

Conference Registration Promotions
Advanced pricing discounts are still available but will end on March 8. Register soon to take advantage of the best conference rates available. A Percona Live MySQL Conference & Expo ticket provides you with access to all OpenStack Live sessions. In addition, in the spirit of the new MySQL 101 track, the first 101 people who sign up for MySQL 101 using the “101” discount code will get their tickets for $101, a $299 savings! Attendees of MySQL 101 will have full access to Percona Live keynotes and the Expo Hall.

See you next month!

The post Steve ‘Woz’ Wozniak to speak at Percona Live MySQL Conference & Expo 2015 appeared first on MySQL Performance Blog.

Percona XtraDB Cluster 5.6.22-25.8 is now available

Latest MySQL Performance Blog posts - 17 hours 32 min ago

Percona is glad to announce the new release of Percona XtraDB Cluster 5.6 on March 5th, 2015. Binaries are available from the downloads area or from our software repositories.

Based on Percona Server 5.6.22-72.0, including all the bug fixes in it, Galera Replicator 3.9, and Codership wsrep API 25.8, Percona XtraDB Cluster 5.6.22-25.8 is now the current General Availability release. All of Percona’s software is open-source and free, and all the details of the release can be found in the 5.6.22-25.8 milestone at Launchpad.

Bugs Fixed:

  • XtraBackup SST wouldn’t stop when MySQL was SIGKILLed. This would prevent MySQL from initiating a new transfer as port 4444 was already in use. Bug fixed #1380697.
  • wsrep_sst_xtrabackup-v2 script was causing innobackupex to print a false positive stack trace into the log. Bug fixed #1407599.
  • MyISAM DDL (CREATE/DROP) is no longer replicated when wsrep_replicate_myisam is OFF. Note that for older nodes in the cluster, wsrep_replicate_myisam should still work, since the TOI decision (for MyISAM DDL) is made on the origin node. Mixing MyISAM and non-MyISAM tables in the same DDL statement is not recommended when wsrep_replicate_myisam is OFF, since if any table in the list is MyISAM, the whole DDL statement is not put under TOI (total order isolation) and hence is not replicated. Bug fixed #1402338.
  • The gcache.mem_size option has been deprecated. A warning will now be generated if the variable has a value other than 0. Bug fixed #1392408.
  • stderr of SST/innobackupex is logged to syslog with appropriate tags if sst-syslog is set in [sst] or [mysqld_safe] has syslog set in my.cnf. This can be overridden by setting sst-syslog to -1 in [sst]. Bug fixed #1399134.
  • clustercheck can now check whether the node is PRIMARY or not, so that synced nodes which drop out of the PRIMARY component do not take any writes/reads. Bug fixed #1403566.
  • SST will now fail early if the xtrabackup_checkpoints file is missing on the joiner side. Bug fixed #1405985.
  • socat utility was not properly terminated after a timeout. Bug fixed #1409710.
  • When started (without bootstrap), the node would hang if it couldn’t find a primary node. Bug fixed #1413258.
  • The 10-second timeout in the XtraBackup SST script was not enough for the joiner to delete existing files before it started the socat receiver on systems with a big datadir. Bug fixed #1413879.
  • A non-bootstrap node could crash while attempting to perform table cache operations, with the BF applier failing with an open_and_lock_tables warning. Bug fixed #1414635.
  • Percona XtraDB Cluster 5.6 would crash on ALTER TABLE / CREATE INDEX with Failing assertion: table->n_rec_locks == 0 error. Bug fixed #1282707.
  • Variable length arrays in WSREP code were causing debug builds (-DWITH_DEBUG=ON) to fail. Bug fixed #1409042.
  • Race condition between donor and joiner in Xtrabackup SST configuration has been fixed. This caused XtraBackup SST to fail when joiner took longer to spawn the second listener for SST. Bug fixed #1405668.
  • Signal handling in mysqld has been fixed for SST processes. Bug fixed #1399175.
  • SST processes are now spawned with fork/exec instead of posix_spawn to allow for better cleanup of child processes in the event of non-graceful termination (SIGKILL, a crash, etc.). Bug fixed #1382797.
  • wsrep_local_cached_downto would underflow when the node on which it is queried had no writesets in gcache. Bug fixed #1262179.
  • A typo in wsrep_provider_options could cause an unhandled exception. Bug fixed #215.
  • Interrupted IST would result in HA_ERR_KEY_NOT_FOUND error in subsequent IST. Bug fixed #210.

Other bugs fixed: #1275814.

Known Issue:

  • For those affected by crashes on the donor during SST due to backup locks (#1401133), please add the following to your my.cnf configuration file:

    [sst]
    inno-backup-opts='--no-backup-locks'

    This option is a workaround to force FLUSH TABLES WITH READ LOCK (NOTE: this workaround is available only if you’re using Percona XtraBackup 2.2.9 or newer). Alternatively, you can set the environment variable FORCE_FTWRL to 1 (for passing environment variables, see the description of bug #1381492 in the previous release notes).

Release notes for Percona XtraDB Cluster 5.6.22-25.8 are available in our online documentation along with the installation and upgrade instructions.

Percona XtraDB Cluster code hosting has been moved to Github. The Bazaar branches will not be updated further.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

The post Percona XtraDB Cluster 5.6.22-25.8 is now available appeared first on MySQL Performance Blog.

optimize sql query

Latest Forum Posts - 19 hours 45 min ago
I have the following query that takes almost 30s to execute.

The table techno_tire_hjul has around 1700 entries.

I am using inner join extensively.

Any tips on how to modify the sql to improve the execution time?

select
    `techno_tire_hjul`.`id`, `techno_tire_hjul`.`datum`, `techno_tire_lager`.`namn` as Lager,
    `techno_tire_hjul`.`lagerplats`, `techno_tire_hjul`.`vinter_sommar_dack`, `techno_tire_kategori`.`namn` as Kategori,
    `techno_tire_dacktyp`.`namn` as Dacktyp1, `techno_tire_dacktyp_1`.`namn` as Dacktyp2,
    `techno_tire_dacktyp_2`.`namn` as Dacktyp3, `techno_tire_dacktyp_3`.`namn` as Dacktyp4,
    `techno_tire_fabrikat`.`namn` as Fabrikat1, `techno_tire_fabrikat_1`.`namn` as Fabrikat2,
    `techno_tire_fabrikat_2`.`namn` as Fabrikat3, `techno_tire_fabrikat_3`.`namn` as Fabrikat4,
    `techno_tire_hjul`.`dimension_diameter1` as Diameter1, `techno_tire_hjul`.`dimension_diameter2` as Diameter2,
    `techno_tire_hjul`.`dimension_diameter3` as Diameter3, `techno_tire_hjul`.`dimension_diameter4` as Diameter4,
    `techno_tire_hjul`.`dimension_profil1` as Profil1, `techno_tire_hjul`.`dimension_profil2` as Profil2,
    `techno_tire_hjul`.`dimension_profil3` as Profil3, `techno_tire_hjul`.`dimension_profil4` as Profil4,
    `techno_tire_hjul`.`dimension_bredd1` as Bredd1, `techno_tire_hjul`.`dimension_bredd2` as Bredd2,
    `techno_tire_hjul`.`dimension_bredd3` as Bredd3, `techno_tire_hjul`.`dimension_bredd4` as Bredd4,
    `techno_tire_hjul`.`lo1` as LoadIndex1, `techno_tire_hjul`.`lo2` as LoadIndex2,
    `techno_tire_hjul`.`lo3` as LoadIndex3, `techno_tire_hjul`.`lo4` as LoadIndex4,
    `techno_tire_si`.`namn` as SpeedIndex1, `techno_tire_si_2`.`namn` as SpeedIndex2,
    `techno_tire_si_3`.`namn` as SpeedIndex3, `techno_tire_si_4`.`namn` as SpeedIndex4,
    `techno_tire_hjul`.`rof1` as RunOnFlat1, `techno_tire_hjul`.`rof2` as RunOnFlat2,
    `techno_tire_hjul`.`rof3` as RunOnFlat3, `techno_tire_hjul`.`rof4` as RunOnFlat4,
    `techno_tire_hjul`.`monsterdjup1`, `techno_tire_hjul`.`monsterdjup2`,
    `techno_tire_hjul`.`monsterdjup3`, `techno_tire_hjul`.`monsterdjup4`,
    `techno_tire_hjul`.`regnr`, `techno_tire_hjul`.`kommentar1`,
    `techno_tire_hjul`.`brukare_fornamn`, `techno_tire_hjul`.`brukare_efteramn`,
    `techno_tire_hjul`.`brukare_telefon`, `techno_tire_hjul`.`brukare_epost`,
    `techno_tire_falgtyp`.`namn` as Falgtyp1, `techno_tire_falgtyp_1`.`namn` as Falgtyp2,
    `techno_tire_falgtyp_2`.`namn` as Falgtyp3, `techno_tire_falgtyp_3`.`namn` as Falgtyp4,
    `techno_tire_service_paket`.`namn` as Service_paket,
    `techno_tire_hjul`.`lagerplats_datum`, `techno_tire_hjul`.`lagerplats_status`,
    `techno_tire_hjul`.`orderno`, `techno_tire_hjul`.`kund_nr`, `techno_tire_hjul`.`namn`,
    `techno_tire_hjul`.`utleverans_datum_forslag`, `techno_tire_hjul`.`utleverans_datum_forslag_status`,
    `techno_tire_hjul`.`utleverans_datum`, `techno_tire_hjul`.`transportsparning`
from `techno_tire_hjul` `techno_tire_hjul`
inner join `techno_tire_dacktyp` `techno_tire_dacktyp_1` on `techno_tire_hjul`.`dacktyp1` = `techno_tire_dacktyp_1`.`id`
inner join `techno_tire_kategori` `techno_tire_kategori` on `techno_tire_hjul`.`kategori_id` = `techno_tire_kategori`.`id`
inner join `techno_tire_falgtyp` `techno_tire_falgtyp_2` on `techno_tire_hjul`.`falgtyp3` = `techno_tire_falgtyp_2`.`id`
inner join `techno_tire_dacktyp` `techno_tire_dacktyp_3` on `techno_tire_hjul`.`dacktyp4` = `techno_tire_dacktyp_3`.`id`
inner join `techno_tire_falgtyp` `techno_tire_falgtyp_1` on `techno_tire_hjul`.`falgtyp2` = `techno_tire_falgtyp_1`.`id`
inner join `techno_tire_fabrikat` `techno_tire_fabrikat` on `techno_tire_hjul`.`fabrikat1` = `techno_tire_fabrikat`.`id`
inner join `techno_tire_lager` `techno_tire_lager` on `techno_tire_hjul`.`lager_id` = `techno_tire_lager`.`id`
inner join `techno_tire_falgtyp` `techno_tire_falgtyp` on `techno_tire_hjul`.`falgtyp1` = `techno_tire_falgtyp`.`id`
inner join `techno_tire_fabrikat` `techno_tire_fabrikat_1` on `techno_tire_hjul`.`fabrikat2` = `techno_tire_fabrikat_1`.`id`
inner join `techno_tire_falgtyp` `techno_tire_falgtyp_3` on `techno_tire_hjul`.`falgtyp4` = `techno_tire_falgtyp_3`.`id`
inner join `techno_tire_dacktyp` `techno_tire_dacktyp_2` on `techno_tire_hjul`.`dacktyp3` = `techno_tire_dacktyp_2`.`id`
inner join `techno_tire_fabrikat` `techno_tire_fabrikat_2` on `techno_tire_hjul`.`fabrikat3` = `techno_tire_fabrikat_2`.`id`
inner join `techno_tire_fabrikat` `techno_tire_fabrikat_3` on `techno_tire_hjul`.`fabrikat4` = `techno_tire_fabrikat_3`.`id`
inner join `techno_tire_service_paket` `techno_tire_service_paket` on `techno_tire_hjul`.`service_paket_id` = `techno_tire_service_paket`.`id`
inner join `techno_tire_dacktyp` `techno_tire_dacktyp` on `techno_tire_hjul`.`dacktyp2` = `techno_tire_dacktyp`.`id`
inner join `techno_tire_si` `techno_tire_si` on `techno_tire_hjul`.`si1` = `techno_tire_si`.`id`
inner join `techno_tire_si` `techno_tire_si_2` on `techno_tire_hjul`.`si2` = `techno_tire_si_2`.`id`
inner join `techno_tire_si` `techno_tire_si_3` on `techno_tire_hjul`.`si3` = `techno_tire_si_3`.`id`
inner join `techno_tire_si` `techno_tire_si_4` on `techno_tire_hjul`.`si4` = `techno_tire_si_4`.`id`
where `techno_tire_hjul`.`hjul_status` = 1
  and `techno_tire_hjul`.`aterforsaljare_id` = 1000
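One generic first experiment for a query like this: the WHERE clause filters on hjul_status and aterforsaljare_id, so a composite index on those two columns (column names taken from the query above; whether it helps depends on the actual schema) is worth testing and verifying with EXPLAIN:

ALTER TABLE techno_tire_hjul ADD INDEX idx_status_aterforsaljare (hjul_status, aterforsaljare_id);
-- then prepend EXPLAIN to the full query above to confirm the index is used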

Percona Server 5.6.23-72.1 is now available

Latest MySQL Performance Blog posts - March 4, 2015 - 9:29am

Percona is glad to announce the release of Percona Server 5.6.23-72.1 on March 4, 2015. Download the latest version from the Percona web site or from the Percona Software Repositories.

Based on MySQL 5.6.23, including all the bug fixes in it, Percona Server 5.6.23-72.1 is the current GA release in the Percona Server 5.6 series. Percona Server is open-source and free – and this is the latest release of our enhanced, drop-in replacement for MySQL. Complete details of this release can be found in the 5.6.23-72.1 milestone on Launchpad.

New Features:

  • TokuDB storage engine package has been updated to version 7.5.6.

Bugs Fixed:

  • The RPM pre-install script assumed that the PID file was always located in the datadir. If it was not, the installation could make a wrong assumption about whether the server was running. Bug fixed #1201896.
  • SHOW GRANTS displayed only the privileges granted explicitly to the named account. Other effectively available privileges were not displayed. Fixed by implementing Extended SHOW GRANTS feature. Bug fixed #1354988 (upstream #53645).
  • InnoDB lock monitor output was printed even if it was not requested. Bug fixed #1418996.
  • The stored procedure key was made consistent with other keys in the Slow Query Log by replacing space with an underscore. Bug fixed #1419230.
  • Some --big-test MTR tests were failing for Percona Server because they weren’t updated. Bug fixed #1419827.

Other bugs fixed: #1408232, and #1420303.

Release notes for Percona Server 5.6.23-72.1 are available in the online documentation. Please report any bugs on the launchpad bug tracker.

The post Percona Server 5.6.23-72.1 is now available appeared first on MySQL Performance Blog.

Percona Server 5.5.42-37.1 is now available

Latest MySQL Performance Blog posts - March 4, 2015 - 7:22am


Percona is glad to announce the release of Percona Server 5.5.42-37.1 on March 4, 2015. Based on MySQL 5.5.42, including all the bug fixes in it, Percona Server 5.5.42-37.1 is now the current stable release in the 5.5 series.

Percona Server is open-source and free. Details of the release can be found in the 5.5.42-37.1 milestone on Launchpad. Downloads are available here and from the Percona Software Repositories.

Bugs Fixed:

  • The RPM pre-install script assumed that the PID file was always located in the datadir. If it was not, the installation could make a wrong assumption about whether the server was running. Bug fixed #1201896.
  • SHOW GRANTS displayed only the privileges granted explicitly to the named account. Other effectively available privileges were not displayed. Fixed by implementing Extended SHOW GRANTS feature. Bug fixed #1354988 (upstream #53645).
  • InnoDB lock monitor output was printed even if it was not requested. Bug fixed #1418996.
  • The stored procedure key was made consistent with other keys in the Slow Query Log by replacing space with an underscore. Bug fixed #1419230.

Other bugs fixed: #1408232, #1415843 (upstream #75642), #1407941, and #1424568 (upstream #75868).

(Please also note that Percona Server 5.6 series is the latest General Availability series and current GA release is 5.6.23-72.1.)

Release notes for Percona Server 5.5.42-37.1 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

The post Percona Server 5.5.42-37.1 is now available appeared first on MySQL Performance Blog.

Introducing ‘MySQL 101,’ a 2-day intensive educational track at Percona Live this April 15-16

Latest MySQL Performance Blog posts - March 3, 2015 - 9:18am

Talking with Percona Live attendees last year I heard a couple of common themes. First, people told me that there is a lot of great advanced content at Percona Live but there is not much for people just starting to learn the ropes with MySQL. Second, they would like us to find a way to make such basic content less expensive.

I’m pleased to say we’re able to accommodate both of these wishes this year at Percona Live! We have created a two-day intensive track called “MySQL 101” that runs April 15-16. MySQL 101 is designed for developers, system administrators and DBAs familiar with other databases but not with MySQL. And of course it’s ideal for anyone else who would like to expand their professional experience to include MySQL. The sessions are designed to lay a solid foundation on many aspects of MySQL development, design and operations.

As for the price: Just $101 for both full days, but only if you are among the first 101 people to register using the promo code “101” at checkout.  After that the price returns to $400 (still a great price!).

The MySQL 101 registration pass includes full access to the Percona Live expo hall (and all the fun stuff happening out there) as well as keynotes, which will inform you about the most significant achievements in the MySQL ecosystem.

As there is so much information to cover in the MySQL 101 track, we’re running two sessions in parallel: one geared more toward developers using MySQL and the other toward sysadmins and MySQL DBAs, focusing more on database operations. I want to point out that you do not have to choose one track to attend exclusively; you can mix and match sessions depending on what is most relevant to your specific circumstances.

I will be leading a couple of tracks myself, alongside many other Percona experts who are joining me for those two days!

Here’s a peek at just some of the many classes on the MySQL 101 agenda:

You can see the full MySQL 101 agenda here. Don’t forget the promo code “101” and please feel free to ask any questions below. I hope to see you in Santa Clara at Percona Live! The conference runs April 13-16 in sunny Santa Clara, California.

The post Introducing ‘MySQL 101,’ a 2-day intensive educational track at Percona Live this April 15-16 appeared first on MySQL Performance Blog.

UPDATE with LIMIT 1

Lastest Forum Posts - March 3, 2015 - 9:17am
Given a table with a primary key, if I do an UPDATE with WHERE `primaryKey` = "unique value", would LIMIT 1 give any speed optimization? I was told that LIMIT 1 would speed up the query under certain circumstances but if so, under which circumstances?
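For concreteness, the statement in question has this shape (table and column names are hypothetical):

UPDATE customers SET email = 'new@example.com' WHERE customer_id = 42 LIMIT 1;

Since customer_id is the primary key, at most one row can match, with or without the LIMIT clause.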

Improving Percona XtraDB Cluster SST startup with Google Compute Engine snapshots

Latest MySQL Performance Blog posts - March 3, 2015 - 6:36am

As the need for information grows, so does the size of the data we need to keep in our databases. SST is unavoidable for spinning up new nodes in a PXC cluster, and when datasets reach the terabyte range this becomes ever more cumbersome, requiring many hours for a new node to synchronize.

More often than not, it is necessary to implement custom “wsrep_sst” scripts or resort to manual synchronization processes. Luckily, cloud providers offer convenient methods to leverage disk snapshots that can be used to quickly transfer data between nodes.

This article deals with the actions needed to perform a snapshot on Google’s Compute Engine (GCE) infrastructure. A similar method can be used on AWS EC2 instances using EBS snapshots, or with any other form of snapshot such as LVM, ZFS or SAN. The steps described can be used to add a new node to a PXC cluster or to avoid SST. The same procedure can also be used to take advantage of the performance benefit of GCE snapshots, or for adding a regular slave, provided the binary log co-ordinates have been captured. This article assumes your “datadir” is on a separate disk from your operating system partition, using the “ext4” filesystem:

  1. Select a suitable “donor” node; we will use “node1” for this example.
  2. Stop the MySQL service on “node1”, or perform a FTWRL with the MySQL service running on a node which is in “desync/donor” mode:

    # Take the snapshot from a stopped instance
    [root@node1 /] service mysql stop & tail -f /var/log/mysql/error.log

    # OR alternatively take the snapshot from a 'desynced' node
    ### desync from cluster replication
    mysql> set global wsrep_desync=ON;
    ### get FTWRL
    mysql> flush tables with read lock;
  3. While the MySQL service is down on “node1” or the FTWRL is held, create a snapshot of the disk in the Google Developer Console or using the GCE API (this assumes that the datadir is located on a separate standalone disk). This part of the process takes around 15 minutes for a 3.5 TB disk.
    gcloud compute disks snapshot node1-datadir-disk --snapshot-name node1-datadir-disk-snapshot-1
  4. As soon as the snapshot has completed, start the MySQL service on “node1” (verifying the node has successfully joined the cluster) or release the FTWRL:
    # Depending on the steps followed in step 2, either start MySQL on node1
    [root@node1 /] service mysql start & tail -f /var/log/mysql/error.log

    # OR alternatively release the FTWRL and "sync" the node
    ### release FTWRL
    mysql> unlock tables;
    ### if there is high load on the cluster monitor wsrep_local_recv_queue
    ### until it reaches 0 before running the following command to rejoin
    ### the cluster replication (otherwise it can be run immediately after
    ### releasing the FTWRL):
    mysql> set global wsrep_desync=OFF;
    ***** IMPORTANT NOTE: In case “node1” is unable to rejoin the cluster or requires an SST, you will need to re-create the snapshot from another node or after SST completes.
  5. Now connect to the “joiner” node; we will use “node2” for this example.
  6. Unmount the existing disk from “node2” (assuming the MySQL service is not running; else stop the MySQL service first):

    [root@node2 /] umount /var/lib/mysql
  7. Detach and delete the disk containing the MySQL datadir from the “node2” instance in the Google Developer Console or using the GCE API:

    gcloud compute instances detach-disk node2 --disk node2-datadir-disk
    gcloud compute disks delete node2-datadir-disk
  8. Create and attach a new disk to the “node2” instance in the Google Developer Console, or using the GCE API with the snapshot you created in step 3. This part of the process takes around 10 minutes for a 3.5 TB disk:

    gcloud compute disks create node2-datadir-disk --source-snapshot node1-datadir-disk-snapshot-1
    gcloud compute instances attach-disk node2 --disk node2-datadir-disk
  9. [ *** LVM only step *** ]: If you are using LVM, the device will not show up until you have activated the volume group (“vg_mysql_data” in this example):

    # this command will report the available volume groups
    [root@node2 /] vgscan
      Reading all physical volumes.  This may take a while...
      Found volume group "vg_mysql_data" using metadata type lvm2

    # this command will report the available logical volumes, you should see the LV INACTIVE now
    [root@node2 /] lvscan
      INACTIVE            '/dev/vg_mysql_data/lv_mysql' [20.00 TiB] inherit

    # this command will activate all logical volumes within the volume group
    [root@node2 /] vgchange -ay vg_mysql_data

    # this command will report the available logical volumes, you should see the LV ACTIVE now
    [root@node2 /] lvscan
      ACTIVE            '/dev/vg_mysql_data/lv_mysql' [20.00 TiB]
  10. After the device has been added it should show up on the “node2” operating system. You can retrieve the new UUID using the following command (in case you have mounted using “/dev/disk/by-name” and the name of the new disk is the same as the previous one, you do not need to update “/etc/fstab”; e.g. this holds true for VM instances created using the Percona XtraDB click-to-deploy installer):

    [root@node2 /] ls -l /dev/disk/by-uuid/
    total 0
    lrwxrwxrwx 1 root root 10 Feb 14 15:56 4ad2d22b-500a-4ad2-b929-12f38347659c -> ../../sda1
    lrwxrwxrwx 1 root root 10 Feb 19 03:12 9e48fefc-960c-456f-95c9-9d893bcafc62 -> ../../dm-0   # This is the 'new' disk
  11. You can now proceed to adding the new UUID you retrieved in step 10 to “/etc/fstab” (unless you are using “/dev/disk/by-name” with the same disk name) and mount the new disk:

    [root@node2 /] vi /etc/fstab
    ...
    UUID=9e48fefc-960c-456f-95c9-9d893bcafc62 /var/lib/mysql ext4 defaults,noatime 0 0
    ...

    [root@node2 /] mount -a
  12. Verify the data is mounted correctly and that the ownership of the data directory and its contents uses the correct UID / GID for the MySQL user on the destination system (this is usually OK, but it is good to do a quick check):

    [root@node2 /] ls -lhtR /var/lib/mysql/
  13. You are now ready to start MySQL and verify that the node has in fact initialised with IST (provided you have sufficient “gcache” available, there shouldn't be any other issues):

    [root@node2 /] service mysql start & tail -f /var/log/mysql/error.log

The Percona XtraDB Click-to-deploy tool can be used for automated deployments and further details on creating a cluster on Google Compute Engine using this method can be found in Jay Janssen’s post, “Google Compute Engine adds Percona XtraDB Cluster to click-to-deploy process.”

 

The post Improving Percona XtraDB Cluster SST startup with Google Compute Engine snapshots appeared first on MySQL Performance Blog.

Replicate PXC from MySQL 5.5 Master?

Latest Forum Posts - March 3, 2015 - 2:56am
Hi, we are currently investigating moving our current MySQL 5.5 Master > 3 x Slaves setup (with statement-based replication, approx. 250GB of data and growing) to an HA clustering option. We have ruled out MySQL NDB Cluster as it would require too many changes to our current database structure (a large number of BLOB fields, amongst others).

We would want to start off by setting the cluster up as a Slave to the main Master (for testing etc.), before moving over to the cluster as our primary system in the future. Is this possible with Percona XtraDB Cluster, and is it a relatively straightforward process?

log rotation oddly broken

Latest Forum Posts - March 2, 2015 - 12:39pm
I have log rotation setup in /etc/logrotate.d/mysql:

/var/lib/mysql/mysql*.log {
    create 600 mysql mysql
    notifempty
    daily
    dateext
    rotate 7
    missingok
    compress
    postrotate
        # just if mysqld is really running
        if test -x /usr/bin/mysqladmin && \
           /usr/bin/mysqladmin ping &>/dev/null
        then
            /usr/bin/mysqladmin flush-logs
        fi
    endscript
}

However, this does not appear to be working correctly. On one system I have log rotation occurring as desired (new log file daily), but I have a month of log files (not 7). On another system log rotation is occurring, but writes are happening to a file labeled mysql-slow.log-20150204. Not quite sure where the breakdown is at this point; the blog has some good guidance but maybe it's out of date?

Any help (or logrotate.d examples) would be much appreciated.
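One generic way to debug a configuration like this (a suggested approach, not from the original post) is logrotate's debug mode, which prints what would be done without actually rotating anything:

logrotate -d /etc/logrotate.d/mysql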

Emulating MySQL roles with the Percona PAM plugin and proxy users

Latest MySQL Performance Blog posts - March 2, 2015 - 8:50am

From time to time people wonder how to implement roles in MySQL. This can be useful for companies having to deal with many user accounts or for companies with tight security requirements (PCI or HIPAA, for instance). Roles do not exist in regular MySQL, but here is an example of how to emulate them using Percona Server, the PAM plugin and proxy users.

The goal

Say we have 2 databases: db1 and db2, and we want to be able to create 3 roles:

  • db1_dev: can read and write on db1 only.
  • db2_dev: can read and write on db2 only.
  • stats: can read on db1 and db2.

For each role, we will create one user: joe (db1_dev), mike (db2_dev) and tom (stats).

Setting up the Percona PAM plugin

The Percona PAM plugin is distributed with Percona Server 5.5 and 5.6. I will be using Percona Server 5.6 in this post and I will authenticate users with /etc/shadow. As explained here, the setup is easy:

  • Make sure /etc/shadow can be read by the mysql user:
    # chgrp mysql /etc/shadow
    # chmod g+r /etc/shadow
  • Install the plugin:
    mysql> INSTALL PLUGIN auth_pam SONAME 'auth_pam.so';
  • Create a /etc/pam.d/mysqld file containing:
    auth required pam_warn.so
    auth required pam_unix.so audit
    account required pam_unix.so audit

Tinkering with the permissions of /etc/shadow may be a security concern. Authenticating users against an LDAP server may be a better option. The configuration of the PAM plugin is similar (replace pam_unix.so with pam_ldap.so and skip the part about /etc/shadow).

Testing authentication with the PAM plugin

Now let’s create a user:

# adduser test_pam
# passwd test_pam

mysql> GRANT ALL PRIVILEGES ON db1.* TO test_pam@localhost IDENTIFIED WITH auth_pam;

And let’s check that the authentication is working as we expect:

mysql -utest_pam -p
Enter password:
mysql> show grants;
+-----------------------------------------------------------+
| Grants for test_pam@localhost                             |
+-----------------------------------------------------------+
| GRANT USAGE ON *.* TO 'test_pam'@'localhost'              |
| GRANT ALL PRIVILEGES ON `db1`.* TO 'test_pam'@'localhost' |
+-----------------------------------------------------------+

That works! We can delete the user and go to the next step.

Creating proxy user

The key to emulating a role is to create a MySQL account for which nobody knows the password (so nobody can use it directly). Instead we will use the PROXY privilege to make sure an anonymous account that matches any incoming user maps that user to the right MySQL user.

So the first step is to create an anonymous user:

mysql> CREATE USER ''@'' IDENTIFIED WITH auth_pam AS 'mysqld, pam_db1=db1_dev, pam_db2=db2_dev, pam_stats=stats';

The goal of this user is simply to map Unix users in the pam_db1 group to the db1_dev MySQL user, Unix users in the pam_db2 group to the db2_dev MySQL user and Unix users in the pam_stats group to the stats MySQL user.

Creating the proxied accounts

Now we can create the MySQL users corresponding to each of the roles we want to create:

mysql> GRANT SELECT, INSERT ON db1.* TO 'db1_dev'@localhost IDENTIFIED BY 'XXXXX';
mysql> GRANT PROXY ON 'db1_dev'@'localhost' TO ''@'';
mysql> GRANT SELECT, INSERT ON db2.* TO 'db2_dev'@localhost IDENTIFIED BY 'YYYYY';
mysql> GRANT PROXY ON 'db2_dev'@'localhost' TO ''@'';
mysql> GRANT SELECT ON db1.* TO 'stats'@localhost IDENTIFIED BY 'ZZZZZ';
mysql> GRANT SELECT ON db2.* TO 'stats'@localhost;
mysql> GRANT PROXY ON 'stats'@'localhost' TO ''@'';

Creating the Unix user accounts

The last step is to create the Unix users joe, mike and tom and assign them the correct group:

# useradd joe
# passwd joe
# groupadd pam_db1
# usermod -g pam_db1 joe

# useradd mike
# passwd mike
# groupadd pam_db2
# usermod -g pam_db2 mike

# useradd tom
# passwd tom
# groupadd pam_stats
# usermod -g pam_stats tom

Again you may prefer using an LDAP server to avoid creating the users at the OS level.

Testing it out!

Let’s try to connect as mike:

# mysql -umike -p
Enter password:
mysql> show grants;
+----------------------------------------------------------------------------------------------------------------+
| Grants for db2_dev@localhost                                                                                    |
+----------------------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'db2_dev'@'localhost' IDENTIFIED BY PASSWORD '*C1DDB6E980040762275B29A316FD993B4A19C108'  |
| GRANT SELECT, INSERT ON `db2`.* TO 'db2_dev'@'localhost'                                                        |
+----------------------------------------------------------------------------------------------------------------+

Not bad!

Alternatives

The Percona PAM plugin is not the only option to use roles:

  • MariaDB 10 supports roles from version 10.0.5
  • Oracle distributes a PAM plugin for MySQL 5.5 and MySQL 5.6 as part of the MySQL Enterprise subscription
  • Securich is a set of stored procedures that has many features regarding user management
  • Google has been offering support for roles through its google-mysql-tools for a long time.
Conclusion

Even if they are not officially supported, roles can be emulated with an authentication plugin and a proxy user. Let’s hope that roles will be added in MySQL 5.7!

The post Emulating MySQL roles with the Percona PAM plugin and proxy users appeared first on MySQL Performance Blog.

Weird LSN is in the Future Error after Updating

Latest Forum Posts - March 2, 2015 - 1:47am
Hi!
I am running a 5-node multi-instance Galera Cluster for 2 years now.
SST is encrypted xtrabackup. The donor is doing nothing but backup and SST.

I did a minor update to 5.5.39-36.0-55-log / wsrep_25.11.r4023.
The biggest change was the update from xtrabackup 2.1 to xtrabackup 2.2.

Normally I let the node re-sync the content after an update, so I delete the grastate.dat.
After starting, and a successful SST, the log started to fill up with:

---
InnoDB: The log sequence number in ibdata files does not match
InnoDB: the log sequence number in the ib_logfiles!
150210 12:57:21 InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
150210 12:57:21 InnoDB: Error: page 2 log sequence number 44208161331
---
Endless scrolling until hard-killing it.

After stopping the node, cleaning the data dir and restarting it, the SST completes fine and the error is gone.
This is happening only once per instance per node, so debugging is really hard!

The same happened on the second node. First I deleted the ib_* files. The node recreated them and crashed again.
Deleting the table .ibd files did not help either.
Deleting the xtrabackup_* files on the joiner fixes the error!

My current impression is that the new xtrabackup is not deleting all essential files. The old version wiped all content.
Does somebody know something about this?

Greetz

alter view privileges

Latest Forum Posts - February 28, 2015 - 3:09pm
I would like to allow a user to create/alter/drop views in a database, but not have any ability to create/alter/drop tables. I have given the user the CREATE VIEW, SHOW VIEW, and SELECT privileges. Creating a view works fine, but when the user tries to alter the view, he gets an error that the DROP command is denied. From my reading of the manual, if I give the user the DROP privilege, he'll be able to drop any table in the database, which is not wanted.

Is there a way that I can allow this user only to drop views and not tables? Or be able to edit views in some other method?

One idea I had was to restrict the DROP privilege to a list of views based on a wildcard, assuming all views he creates start with that wildcard, but it seems that requires an exact list of views upfront. I tried "GRANT DROP ON test.'vw_%' TO user@localhost", but that gives a syntax error. I could list out the individual views, but that means I would have to run a new GRANT query each time he creates a view, which is not really ideal.

Thanks for your help.

Ryan

Disable binary logs while purging the records

Latest Forum Posts - February 27, 2015 - 1:18pm
Hi Team,

We have the parameter "--binlog-ignore-db" to disable binary logging for a particular database. If the parameter is not already present in the my.cnf file, we have to add it to the configuration file and restart MySQL for the change to take effect. Below are the variables that are currently enabled on the DB:

+------------------------------------------+-------+
| Variable_name                            | Value |
+------------------------------------------+-------+
| binlog_cache_size                        | 32768 |
| binlog_direct_non_transactional_updates  | OFF   |
| binlog_format                            | ROW   |
| binlog_stmt_cache_size                   | 32768 |
+------------------------------------------+-------+
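For reference, the static form of the setting in my.cnf looks like this (the database name here is a placeholder):

[mysqld]
binlog-ignore-db=mydb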

Is there a way I can add the parameter dynamically? We will not be able to restart the MySQL instance, as there are other DBs running which cannot be down for a second. Please suggest...!

Best Regards,
Krishna

Cluster crash and not restart

Latest Forum Posts - February 27, 2015 - 7:55am
Hello,
I have XtraDB Cluster 5.6 on 3 nodes with Ubuntu Server 12.04.2 LTS.
All worked fine.
Today one node crashed, and when I tried to restart the service, the other 2 nodes went down.
Now, I have restarted the first node with "bootstrap-pxc" and this node is OK.
When I try to add the second node, the join fails and the first node crashes :|

On the first node, in the error log, I find this:


2015-02-27 16:02:02 7f87a406c700 InnoDB: Error: page 7595 log sequence number 138747407668
InnoDB: is in the future! Current system log sequence number 119269942957.
InnoDB: Your database may be corrupt or you may have copied the InnoDB
InnoDB: tablespace but not the InnoDB log files. See
InnoDB: http://dev.mysql.com/doc/refman/5.6/...-recovery.html
InnoDB: for more information.
2015-02-27 16:02:04 15322 [Warning] Client failed to provide its character set. 'latin1' will be used as client character set.
2015-02-27 16:02:04 15322 [Warning] Client failed to provide its character set. 'latin1' will be used as client character set.
15:02:04 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona XtraDB Cluster better by reporting any
bugs at https://bugs.launchpad.net/percona-xtradb-cluster

key_buffer_size=25165824
read_buffer_size=131072
max_used_connections=33
max_threads=202
thread_count=35
connection_count=33
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 105196 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x7735e50
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
WSREP_SST: [ERROR] innobackupex finished with error: 9. Check /var/lib/mysql//innobackup.backup.log (20150227 16:02:04.738)
WSREP_SST: [ERROR] Cleanup after exit with status:22 (20150227 16:02:04.743)
WSREP_SST: [INFO] Cleaning up temporary directories (20150227 16:02:04.751)
Segmentation fault
150227 16:02:04 mysqld_safe Number of processes running now: 0
150227 16:02:04 mysqld_safe WSREP: not restarting wsrep node automatically
150227 16:02:04 mysqld_safe mysqld from pid file /var/lib/mysql/cls-mysql1-db1.pid ended



On the second node, in the error log, I find this:

WSREP_SST: [INFO] Evaluating socat -u TCP-LISTEN:4444,reuseaddr stdio | xbstream -x; RC=( ${PIPESTATUS[@]} ) (20150227 16:01:03.021)
2015-02-27 16:01:03 26922 [Note] WSREP: (711054fc, 'tcp://0.0.0.0:4567') turning message relay requesting off
grep: /var/lib/mysql//xtrabackup_checkpoints: No such file or directory
WSREP_SST: [INFO] Preparing the backup at /var/lib/mysql/ (20150227 16:02:04.742)
WSREP_SST: [INFO] Evaluating innobackupex --no-version-check --apply-log $rebuildcmd ${DATA} &>${DATA}/innobackup.prepare.log (20150227 16:02:04.744)
WSREP_SST: [ERROR] Cleanup after exit with status:1 (20150227 16:02:04.965)
2015-02-27 16:02:04 26922 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup-v2 --role 'joiner' --address '10.10.20.80' --auth 'sstuser:Arght64dGyTR32P' --datadir '/var/lib/mysql/' --defaults-file '/etc/mysql/my.cnf' --parent '26922' '' : 1 (Operation not permitted)
2015-02-27 16:02:04 26922 [ERROR] WSREP: Failed to read uuid:seqno from joiner script.
2015-02-27 16:02:04 26922 [ERROR] WSREP: SST failed: 1 (Operation not permitted)
2015-02-27 16:02:04 26922 [ERROR] Aborting

2015-02-27 16:02:05 26922 [Note] WSREP: (711054fc, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://10.10.20.79:4567
2015-02-27 16:02:06 26922 [Note] WSREP: (711054fc, 'tcp://0.0.0.0:4567') reconnecting to 805bc8ba (tcp://10.10.20.79:4567), attempt 0
2015-02-27 16:02:06 26922 [Note] WSREP: Closing send monitor...
2015-02-27 16:02:06 26922 [Note] WSREP: Closed send monitor.
2015-02-27 16:02:06 26922 [Note] WSREP: gcomm: terminating thread
2015-02-27 16:02:06 26922 [Note] WSREP: gcomm: joining thread
2015-02-27 16:02:06 26922 [Note] WSREP: gcomm: closing backend
2015-02-27 16:02:09 26922 [Note] WSREP: evs::proto(711054fc, LEAVING, view_id(REG,711054fc,5)) suspecting node: 805bc8ba
2015-02-27 16:02:09 26922 [Note] WSREP: evs::proto(711054fc, LEAVING, view_id(REG,711054fc,5)) suspected node without join message, declaring inactive
2015-02-27 16:02:09 26922 [Note] WSREP: view(view_id(NON_PRIM,711054fc,5) memb {
711054fc,0
} joined {
} left {
} partitioned {
805bc8ba,0
})
2015-02-27 16:02:09 26922 [Note] WSREP: view((empty))
2015-02-27 16:02:09 26922 [Note] WSREP: New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 1
2015-02-27 16:02:09 26922 [Note] WSREP: gcomm: closed
2015-02-27 16:02:09 26922 [Note] WSREP: Flow-control interval: [16, 16]
2015-02-27 16:02:09 26922 [Note] WSREP: Received NON-PRIMARY.
2015-02-27 16:02:09 26922 [Note] WSREP: Shifting JOINER -> OPEN (TO: 2268)
2015-02-27 16:02:09 26922 [Note] WSREP: Received self-leave message.
2015-02-27 16:02:09 26922 [Note] WSREP: Flow-control interval: [0, 0]
2015-02-27 16:02:09 26922 [Note] WSREP: Received SELF-LEAVE. Closing connection.
2015-02-27 16:02:09 26922 [Note] WSREP: Shifting OPEN -> CLOSED (TO: 2268)
2015-02-27 16:02:09 26922 [Note] WSREP: RECV thread exiting 0: Success
2015-02-27 16:02:09 26922 [Note] WSREP: recv_thread() joined.
2015-02-27 16:02:09 26922 [Note] WSREP: Closing replication queue.
2015-02-27 16:02:09 26922 [Note] WSREP: Closing slave action queue.
2015-02-27 16:02:09 26922 [Note] WSREP: Service disconnected.
2015-02-27 16:02:09 26922 [Note] WSREP: rollbacker thread exiting
2015-02-27 16:02:10 26922 [Note] WSREP: Some threads may fail to exit.
2015-02-27 16:02:10 26922 [Note] Binlog end
2015-02-27 16:02:10 26922 [Note] /usr/sbin/mysqld: Shutdown complete

Error in my_thread_global_end(): 1 threads didn't exit
150227 16:02:16 mysqld_safe mysqld from pid file /var/lib/mysql/cls-mysql1-db2.pid ended

Confusion about Xtrabackup Versions

Latest Forum Posts - February 27, 2015 - 7:21am
Hi!

I am running a 5-node Percona XtraDB Cluster 5.5.39-36.0-55-log.

The xtrabackup version installed is
percona-xtrabackup-2.2.8

I found in my logs that xtrabackup is using the wrong version of InnoDB:
xtrabackup version 2.2.8 based on MySQL server 5.6.22

I remember that it is important to use the "right" binary of xtrabackup for the server. In the percona-xtrabackup-2.1.9 release there were 3 binaries, and the log said:
xtrabackup_55 version 2.1.9 for Percona Server 5.5.35

I am a little bit concerned that the 5.6 InnoDB is possibly writing datafiles that are unreadable for 5.5 InnoDB.

Can somebody explain this to me?

Greetz
Julian

Explanation of the entries in grastate.dat file

Latest Forum Posts - February 27, 2015 - 12:49am
Hello.

I'm trying to understand the entries in the grastate.dat file, particularly the seqno value. If I understand correctly, the seqno value increases as the node gets written to. If all nodes are shut down, I can use seqno to figure out which node was last written to: it is the one which has the highest seqno value.
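For reference, a grastate.dat file typically looks like this (the uuid and seqno values here are made up for illustration):

# GALERA saved state
version: 2.1
uuid:    44bb1d39-1a2b-3c4d-5e6f-70819fa2b3c4
seqno:   1234
cert_index: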

Now, am I correct in assuming that in a working (not degraded) cluster all the seqno values should be the same? That is, if I stop the writes completely, then shut down all the units, the seqno values across the cluster should be the same? (Which, in turn, means that I can safely bootstrap from any PXC node?)

What also confuses me are the values of '0' and '-1' for the seqno. I imagine that -1 corresponds to an error of some kind. But when I bootstrap off the node which has '-1' as a seqno, and then shut it down, the seqno won't change; how am I to determine, then, if the node has failed?
What does a seqno value of '0' represent? I was under the impression that '0' means that there were no writes to the cluster since the bootstrap, but that seems to be in contradiction with '-1' described above.

Lastly, when is the grastate.dat file created, when does it get written to, and is it ever removed by PXC?
Mario

3 handy tools to remove problematic MySQL processes

Latest MySQL Performance Blog posts - February 27, 2015 - 12:00am

DBAs often encounter situations where they need to kill queries to ensure there are no long-running queries on a MySQL server that would impact performance. Long-running queries can be the result of many factors. Fortunately, Percona Server contains some handy tools to remove problematic MySQL processes. I will highlight all of the tools via some examples in this post.

pt-kill:
There have been some good posts on this blog about the pt-kill tool, like this one by Arunjith Aravindan titled “How a set of queries can be killed in MySQL using Percona Toolkit’s pt-kill.” Let’s dive into pt-kill a bit further with a few more examples. What does pt-kill do? It kills MySQL connections. Say you wanted to run pt-kill from a cronjob and then get an email on every killed process/query. Here is a typical example of that.

$ pt-kill --interval 1 --run-time 1 --busy-time 5 --log /path/to/kill_long_running_thread.log --match-info "^(select|SELECT|Select)" --kill --print --user=xxxxxx --password=xxxxxxxxxx

Assuming this is running from a cronjob, when pt-kill executes it will kill queries running longer than 5 seconds. By default, pt-kill runs forever; the --run-time option tells pt-kill how long to run before exiting. If the --interval and --busy-time parameters are used together, the explicit --interval value is used; otherwise the default interval is 30 seconds. Note: this will only kill read queries, as per the --match-info parameter.
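A crontab entry for this (the pt-kill path and the once-per-minute schedule are assumptions for illustration) could look like:

* * * * * /usr/bin/pt-kill --interval 1 --run-time 1 --busy-time 5 --log /path/to/kill_long_running_thread.log --match-info "^(select|SELECT|Select)" --kill --print --user=xxxxxx --password=xxxxxxxxxx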

The above command will log all killed queries in the file referenced by the --log option. If you need to be notified via email for every killed query, the command below will do it. Of course, you need to have the system configured to send e-mail.

#!/bin/bash
tail -n 0 -F /path/to/kill_long_running_thread.log | while read LOG
do
    echo "$LOG" | mail -s "pt-kill alert" sample@test.com
done

You can run this shell script continuously in the background, within screen or with the nohup utility, to monitor the log file. It will send an email whenever a newly killed query arrives in the log file referenced in the pt-kill command. Unfortunately, there is no notify-by-email option in pt-kill at the moment, so this is sort of a workaround.

In order to log all killed queries into a database table, you will need to use the --log-dsn option as per the example below.

$ pt-kill --interval 1 --busy-time 1 --create-log-table --log-dsn=h=localhost,D=percona,t=kill_log --daemonize --match-info "^(select|SELECT|Select)" --kill

All killed queries will be logged into the percona.kill_log table. The --daemonize option will run this command in the background forever and will kill all SELECT queries running longer than 1 second (--busy-time 1). The --interval option instructs pt-kill to scan processes every 1 second (--interval 1).

mysql> select * from kill_log;
+---------+-----------+---------------------+-------------------------+------------+-------+------+-----------+------+---------+------+------------+-----------------+---------+
| kill_id | server_id | timestamp           | reason                  | kill_error | Id    | User | Host      | db   | Command | Time | State      | Info            | Time_ms |
+---------+-----------+---------------------+-------------------------+------------+-------+------+-----------+------+---------+------+------------+-----------------+---------+
|      17 |         1 | 2015-01-10 08:38:33 | Query matches Info spec |            | 35146 | root | localhost | NULL | Query   |    0 | User sleep | SELECT SLEEP(5) | NULL    |
|      20 |         1 | 2015-01-10 08:38:34 | Query matches Info spec |            | 35223 | root | localhost | NULL | Query   |    0 | User sleep | SELECT SLEEP(5) | NULL    |
+---------+-----------+---------------------+-------------------------+------------+-------+------+-----------+------+---------+------+------------+-----------------+---------+

By logging killed queries into a database table, you can easily get trends and statistics on killed queries via SQL.

By default the tool kills the oldest of the queries that have been running for more than a given --busy-time. If you need to kill all the threads that have been busy for more than a specified --busy-time, then this will do it:

$ pt-kill --victims=all --busy-time=60

Statement Timeout in Percona Server:
The max-statement-time feature is ported from the Twitter patches. This feature can be used to limit the query execution time by specifying the timeout value in the max_statement_time variable. When the specified number of milliseconds is reached, the server aborts the statement and returns the error below to the client.

ERROR 1877 (70101): Query execution was interrupted, max_statement_time exceeded

Let me demonstrate this through another example:

mysql [localhost] {msandbox} (world) > SET max_statement_time=1;
Query OK, 0 rows affected (0.00 sec)

mysql [localhost] {msandbox} (world) > show variables like 'max_statement_time';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| max_statement_time | 1     |
+--------------------+-------+
1 row in set (0.00 sec)

mysql [localhost] {msandbox} (world) > SELECT * FROM City WHERE District = 'abcd';
ERROR 1885 (70101): Query execution was interrupted, max_statement_time exceeded

mysql [localhost] {msandbox} (world) > UPDATE City SET District='abcd' WHERE ID = 2001;
ERROR 1885 (70101): Query execution was interrupted, max_statement_time exceeded

mysql [localhost] {msandbox} (world) > ALTER TABLE City ADD INDEX district_idx (district);
ERROR 1885 (70101): Query execution was interrupted, max_statement_time exceeded

As you can see from this example statement, the timeout feature works for all statements including SELECT/DML/DDL queries.

mysql [localhost] {msandbox} (world) > show status like 'Max_statement%';
+-------------------------------+-------+
| Variable_name                 | Value |
+-------------------------------+-------+
| Max_statement_time_exceeded   | 3     |
| Max_statement_time_set        | 19    |
| Max_statement_time_set_failed | 0     |
+-------------------------------+-------+
3 rows in set (0.00 sec)

The status variables above are the stats for the statement timeout feature. Max_statement_time_exceeded reports the total number of statements that exceeded the defined timeout. Max_statement_time_set reports the number of statements for which an execution time limit was set. You can find more details in this documentation. The statement timeout feature was introduced in Percona Server 5.6. You can check whether your specific version of Percona Server supports this feature via the have_statement_timeout variable.

mysql [localhost] {msandbox} (world) > show global variables like 'have_statement_timeout';
+------------------------+-------+
| Variable_name          | Value |
+------------------------+-------+
| have_statement_timeout | YES   |
+------------------------+-------+
1 row in set (0.00 sec)

Bugs you should be aware of:

https://bugs.launchpad.net/percona-server/+bug/1388533 -> This affects how the feature interacts with stored procedures. If you use stored procedures, max_statement_time might not behave as you expect.
https://bugs.launchpad.net/percona-server/+bug/1307432 -> This is a documentation bug. Percona Server timeouts might not be safe for some statements like DDL and should not be used with such queries, but the documentation does not reflect this. You should be very careful if you set a global statement timeout, as it affects data-changing queries as well. For best results, set the max_statement_time variable in a session before running queries that you want killed if they execute too long, instead of using a global variable.
https://bugs.launchpad.net/percona-server/+bug/1376934 -> This affects the statement timeout feature at the query level. You must set max_statement_time in a session or globally instead; however, this bug is fixed in the latest version, i.e. Percona Server 5.6.22-72.0.

InnoDB Kill Idle Transactions:
This feature was introduced in Percona Server 5.5. It limits the age of idle XtraDB transactions, and will kill idle transactions older than the threshold specified by innodb_kill_idle_transaction. This feature is useful when autocommit is disabled on the server side and you are relying on the application to commit transactions, and want to avoid long-running transactions that are uncommitted. Application logic errors sometimes leave transactions uncommitted. Let me demonstrate it quickly through one example:

mysql [localhost] {msandbox} (world) > show variables like 'autocommit';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| autocommit    | OFF   |
+---------------+-------+

mysql [localhost] {msandbox} (world) > show global variables like 'innodb_kill_idle_transaction';
+------------------------------+-------+
| Variable_name                | Value |
+------------------------------+-------+
| innodb_kill_idle_transaction | 10    |
+------------------------------+-------+

mysql [localhost] {msandbox} (world) > START TRANSACTION; SELECT NOW(); INSERT INTO City_backup (Name,CountryCode,District,Population) VALUES ('Karachi','PK','Sindh','1000000');
Query OK, 0 rows affected (0.00 sec)

+---------------------+
| NOW()               |
+---------------------+
| 2015-01-31 07:11:39 |
+---------------------+
1 row in set (0.00 sec)

Query OK, 1 row affected (0.01 sec)

mysql [localhost] {msandbox} (world) > SHOW ENGINE INNODB STATUS\G
*************************** 1. row ***************************
------------
TRANSACTIONS
------------
---TRANSACTION 173076, ACTIVE 10 sec
1 lock struct(s), heap size 360, 0 row lock(s), undo log entries 1
MySQL thread id 15, OS thread handle 0x7f862e9bf700, query id 176 localhost msandbox init
SHOW ENGINE InnoDB STATUS
TABLE LOCK table `world`.`City_backup` trx id 173076 lock mode IX
----------------------------
END OF INNODB MONITOR OUTPUT
============================

mysql [localhost] {msandbox} (world) > SELECT NOW(); SELECT * FROM City_backup;
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id:    16
Current database: world

+---------------------+
| NOW()               |
+---------------------+
| 2015-01-31 07:12:06 |
+---------------------+
1 row in set (0.01 sec)

Empty set (0.00 sec)

 Conclusion:
In this post, I shared some tools that can help you get rid of long-running transactions to help ensure that you don’t run into performance issues. This is one of the many good reasons to use Percona Server, which has some extra features on top of vanilla MySQL server.

 

The post 3 handy tools to remove problematic MySQL processes appeared first on MySQL Performance Blog.
