How to pipe mydumper output to ssh?

Latest Forum Posts - July 24, 2015 - 3:54am
From https://www.percona.com/blog/2015/07...l-environment/, mydumper output can be piped, but how do you pipe mydumper through ssh?

pt-online-schema-change exited with error "...uninitialized value $mode..."

Latest Forum Posts - July 24, 2015 - 1:48am

I used pt-online-schema-change to alter a table to ROW_FORMAT=COMPRESSED. Ultimately it ran successfully, but it did exit with an error, perhaps because it could not connect to the slave server. (Note: the ALTER TABLE was replicated to the slave without any problems.)

I will probably use pt-online-schema-change in the future so I would like to eliminate this potential bug or, if the fault is mine, get some confirmation so I can RTFM again and do it correctly.

Here's what I ran:

ryant@wgcptdb3:~$ time pt-online-schema-change --recursion-method processlist --alter-foreign-keys-method auto --execute --alter "ROW_FORMAT=COMPRESSED" D=wfs,t=gateway_audit_trail,h=localhost,u=xxx,p=xxx
Cannot connect to D=wfs,h=SLAVEIP,p=...,u=xxx
No slaves found. See --recursion-method if host YYYYY has slaves.
Not checking slave lag because no slaves were found and --check-slave-lag was not specified.
Operation, tries, wait:
copy_rows, 10, 0.25
create_triggers, 10, 1
drop_triggers, 10, 1
swap_tables, 10, 1
update_foreign_keys, 10, 1
Child tables:
`wfs`.`auth_audit_trail` (approx. 428865 rows)
`wfs`.`product_audit_trail` (approx. 1268930 rows)
Will automatically choose the method to update foreign keys.
Altering `wfs`.`gateway_audit_trail`...
Creating new table...
Created new table wfs._gateway_audit_trail_new OK.
Altering new table...
Altered `wfs`.`_gateway_audit_trail_new` OK.
2015-07-22T22:53:16 Creating triggers...
2015-07-22T22:53:17 Created triggers OK.
2015-07-22T22:53:17 Copying approximately 9862776 rows...
Copying `wfs`.`gateway_audit_trail`: 0% 01:35:09 remain
Copying `wfs`.`gateway_audit_trail`: 0% 01:42:34 remain
Copying `wfs`.`gateway_audit_trail`: 1% 01:48:35 remain
Copying `wfs`.`gateway_audit_trail`: 1% 01:51:39 remain
Copying `wfs`.`gateway_audit_trail`: 2% 01:57:06 remain
Copying `wfs`.`gateway_audit_trail`: 2% 01:58:26 remain
Copying `wfs`.`gateway_audit_trail`: 99% 00:37 remain
Copying `wfs`.`gateway_audit_trail`: 99% 00:06 remain
2015-07-23T04:11:35 Copied rows OK.
2015-07-23T04:11:35 Max rows for the rebuild_constraints method: 1316
Determining the method to update foreign keys...
2015-07-23T04:11:35 `wfs`.`auth_audit_trail`: too many rows: 428945; must use drop_swap
2015-07-23T04:11:35 Drop-swapping tables...
2015-07-23T04:12:09 Dropped and swapped tables OK.
Not dropping old table because --no-drop-old-table was specified.
2015-07-23T04:12:09 Dropping triggers...
2015-07-23T04:12:09 Dropped triggers OK.
Successfully altered `wfs`.`gateway_audit_trail`.
# Exiting on SIGPIPE.
(in cleanup) Use of uninitialized value $mode in numeric eq (==) at /usr/bin/pt-online-schema-change line 7845.

real 318m57.881s
user 0m57.836s
sys 0m6.304s


ryant@wgcptdb3:~$ pt-online-schema-change --version
pt-online-schema-change 2.2.14

ryant@wgcptdb3:~$ mysqladmin version
mysqladmin Ver 8.42 Distrib 5.5.34, for debian-linux-gnu on x86_64
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective owners.

Server version 5.5.34-0ubuntu0.12.04.1-log
Protocol version 10
Connection Localhost via UNIX socket
UNIX socket /var/run/mysqld/mysqld.sock
Uptime: 637 days 3 hours 54 min 4 sec
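If the failed slave connection during discovery is what left $mode uninitialized at cleanup, a hedged variant of the same run that skips slave discovery altogether may avoid it (--recursion-method=none is a documented value in pt-online-schema-change 2.2; credentials below remain placeholders):

```shell
# Sketch only: "none" disables slave discovery (and therefore lag checks),
# so the tool never attempts the failing connection to the slave.
pt-online-schema-change \
  --recursion-method=none \
  --alter "ROW_FORMAT=COMPRESSED" \
  --execute \
  D=wfs,t=gateway_audit_trail,h=localhost,u=xxx,p=xxx
```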


Bootstrapping behavior with pc.wait_prim=false?

Latest Forum Posts - July 23, 2015 - 9:30am
I have a quick question with pc.wait_prim=false behavior.

According to:

There's a verified bug that you can't use pc.wait_prim=false, as it just hangs.

If I'm using this option, mysqld should fire up and let me connect so that I can run 'SET GLOBAL wsrep_provider_options="pc.bootstrap=1";', assuming I'm using the latest code, correct?

I ask because what's currently in the Percona repos doesn't allow it for 5.6, though it does at least let you kill the daemon to make it shut down gracefully, at least on RHEL 6.5.7.

Any help appreciated..
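For reference, the manual bootstrap described above would look something like this (a sketch; it assumes mysqld comes up non-Primary but still accepts local client connections, which is exactly the behavior the poster is questioning):

```shell
# On the node chosen as the new Primary Component seed:
mysql -u root -p -e "SET GLOBAL wsrep_provider_options='pc.bootstrap=1';"
# Confirm the component switched to Primary:
mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_status';"
```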

Steps I used to test:

sudo yum install http://pkgs.repoforge.org/socat/soca....rf.x86_64.rpm
sudo yum install http://www.percona.com/downloads/per...1-3.noarch.rpm
sudo yum install Percona-XtraDB-Cluster-56.x86_64

Config file - the bare minimum to get this to start (DNS is valid and resolves in my env):

# Disabling symbolic-links is recommended to prevent assorted security risks


# Galera
# Automatically adjust auto_increment_increment and auto_increment_offset system variables when cluster membership changes.
wsrep_auto_increment_control = on

# This variable is used to control if Foreign Key checking is done for applier threads.
# wsrep_slave_FK_checks

# Enable debug log output
wsrep_debug = off

# An option to explicitly specify the network address of the node if autoguessing for some reason does not produce
# desirable results (multiple network interfaces, NAT, etc.). If not explicitly overridden by
# wsrep_sst_receive_address, the <address> part will be used to listen for SST (the port is still SST method-
# dependent). And the whole <address>[:port] string will be passed to the wsrep provider to be used as a base address
# in its communications.
# wsrep_node_address

# A name (given in wsrep_node_name) of the server that should be used as a source for state transfer.
# If not specified, Galera will choose the most appropriate one.
# wsrep_sst_donor

# A string of provider options passed directly to provider.
# See http://www.codership.com/wiki/doku.p...parameters_0.8
wsrep_provider_options = gmcast.listen_addr=tcp://;evs.keepalive_period=PT3S;evs.inactive_check_period=PT10S;evs.suspect_timeout=PT30S;evs.inactive_timeout=PT1M;evs.install_timeout=PT1M;ist.recv_addr=tcp://springsing.test:13307

# This variable can be used to reject queries for that node.
wsrep_reject_queries = NONE

# This variable is used to control if Unique Key checking is done for applier threads.
# wsrep_slave_UK_checks

# Logical node name - for convenience. (Defaults to hostname)
# wsrep_node_name

# Maximum allowed writeset size. Currently it limits supported size of transaction and LOAD DATA statement.
wsrep_max_ws_size = 1073741824

# A string with authentication information for state snapshot transfer. It depends on the state transfer method.
# For mysqldump state transfer it should be <user>:<password> where user has root privileges on this server.
# For rsync method it is ignored.
wsrep_sst_auth = sstauth:sstauth

# When enabled this option will use new, transparent handling of preordered replication events (like replication from traditional master).
wsrep_preordered = off

# Logical cluster name. If a node tries to connect to a cluster with a different name, connection fails
wsrep_cluster_name = tester

# A path to wsrep provider to load. If not specified, all calls to wsrep provider will be bypassed.
wsrep_provider = /usr/lib64/libgalera_smm.so

# This variable exists for the sole purpose of notifying joining node about state transfer completion.
# See http://www.codership.com/wiki/doku.p...pshot_transfer
# wsrep_start_position

# If an autocommit query fails due to cluster-wide conflict we can retry it without returning error to client.
# This sets how many times to retry.
wsrep_retry_autocommit = 1

# This variable is used to send the DBUG option to the wsrep provider.
wsrep_dbug_option = off

# This variable controls how many replication events will be grouped together. This implementation is still experimental.
wsrep_mysql_replication_bundle = 0

# The address at which this node expects to receive state transfer. Depends on state transfer method. E.g. for
# mysqldump state transfer it is the address and the port on which this server listens.
wsrep_sst_receive_address = springsing.test:13308

# In some cases master may apply event faster than a slave, which can cause master and slave being out-of-sync for a brief moment.
# When this variable is set to ON slave will wait till that event is applied before doing any other queries.
wsrep_causal_reads = off

# Address to connect to cluster. Provider specific. Galera takes addresses in URL format:
# <backend schema>://<cluster address>[?option1=value1[&option2=value2]]
# e.g. gcomm://
wsrep_cluster_address = gcomm://springsing.test:13306,toldcontrolled.test:13306?pc.wait_prim=false

# Maximum number of rows allowed in the writeset. Currently it limits supported size of transaction and LOAD DATA statement.
wsrep_max_ws_rows = 131072

# How many threads to use for applying slave writesets.
wsrep_slave_threads = 4

# Generate primary keys for rows without ones for the purpose of certification. This is required for
# parallel applying. We don't recommend using tables without primary keys.
wsrep_certify_nonPK = on

# A command to run when cluster membership or state of this node changes.
# wsrep_notify_cmd

# This variable is used to control whether cluster conflicts should be logged. When enabled, details of the conflicting InnoDB lock will be logged.
wsrep_log_conflicts = off

# Online schema upgrade method (MySQL >= 5.5.17).
# See http://www.codership.com/wiki/doku.p...schema_upgrade
wsrep_OSU_method = TOI

# Use wsrep replication. When turned off, no changes made in this session will be replicated.
wsrep_on = on

# When this variable is enabled SST donor node will not accept incoming queries, instead it will reject queries with UNKNOWN COMMAND error code.
# This can be used to signal load-balancer that the node isn’t available.
wsrep_sst_donor_rejects_queries = off

# Address at which server expects client connections. Intended for integration with load balancers.
# Not used for now.
# wsrep_node_incoming_address

# A method to use for state snapshot transfer. wsrep_sst_<wsrep_sst_method> command will be called with
# arguments from http://www.codership.com/wiki/doku.p...pshot_transfer
wsrep_sst_method = xtrabackup-v2


The Q&A: Creating best-in-class backup solutions for your MySQL environment

Latest MySQL Performance Blog posts - July 23, 2015 - 6:55am

Thank you for attending my July 15 webinar, “Creating Best in Class Backup Solutions for Your MySQL Environment.” Due to the amount of content we discussed and some minor technical difficulties faced near the end of the webinar, we have decided to cover the final two slides of the presentation, along with the questions asked by attendees, in this blog post.

The slides are available for download, and you can watch the webinar in its entirety here.

The final two slides were about our tips for a good backup and recovery strategy. Let's go through the bullet points along with the explanation they would have been given during the webinar:

  • Use the three types of backups
    • Binary for full restores and new slaves
      • Binary backups are easy to restore and take the least amount of time to restore. The mean time to recover is mostly bound by the time to transfer the backup to the appropriate target server.
    • Logical for partial restores
      • Logical backups, especially when done table-wise, come in handy when you want to restore one or a few smaller tables.
    • Binlogs for point-in-time recovery
      • Very often the need is for point-in-time recovery. A full backup of any type (logical or binary) is only half the story; we still need the DML statements processed on the server in order to bring it to the latest state, and that's where binary log (binlog) backups come into the picture.
  • Store on more than one server and off-site
    • Store your backups in more than one location; what if the backup server goes down? Offsite storage like Amazon S3 and Glacier with weekly or monthly backup retention can be a cheaper option.
  • Test your backups!!!!
    • Testing your backups is very important; it's always great to know backups are recoverable and not corrupted. Spin up an EC2 instance if you want, copy and restore the backup there, and roll forward a day's worth of binlogs just to be sure.
  • Document restore procedures, script them and test them!!!
    • When you test your backups, make sure to document the steps to restore the backup, to avoid a last-minute hassle over which commands to use.
  • If taking backups from a slave, run pt-table-checksum
    • Backups are mostly taken from slaves, so make sure to checksum them regularly; you don't want to back up inconsistent data.
  • Configuration files, scripts
    • Data is not the only thing you should be backing up: back up your config files, scripts and user access at a secure location.
  • Do you need to back up everything every day?
    • For very large instances a logical backup is a toughie. In such cases evaluate your backup needs: do you want to back up all the tables? Most of the time the smaller tables are the more important ones and need partial restores, so back up only those.
  • Hardlinking backups can save a lot of disk space in some circumstances
    • Some schemas contain only a few high-activity tables; the rest are updated perhaps once a week or by an archiver job that runs monthly. Make sure to hardlink those files against the previous backup; it can save a good amount of space in such scenarios.
  • Monitor your backups
    • Lastly, monitor your backups. You do not want to find out that your backups have been failing the whole time. Even a simple email notification from your backup scripts can help reduce the chance of failure.
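The hardlinking tip above can be illustrated in a few lines of shell (throwaway paths; real backup scripts typically do this per directory with rsync --link-dest or cp -al):

```shell
base=$(mktemp -d)
mkdir -p "$base/backup.mon" "$base/backup.tue"
echo "-- dump of a rarely-changing table --" > "$base/backup.mon/orders.sql"
# Tuesday's backup hardlinks Monday's unchanged file instead of storing
# a second copy: both names now share one inode and one set of blocks.
ln "$base/backup.mon/orders.sql" "$base/backup.tue/orders.sql"
```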

Now let's try to answer some of the questions asked during the webinar:

Q: --use-memory=2G: is that pretty standard? If we have more memory, should we use a higher value?
Usually we would evaluate the value based on the size of xtrabackup_logfile (the amount of transactions to apply). If you have more free memory, feel free to give it to --use-memory; you don't want memory to be a bottleneck in the restore process.

Q: Which is the best backup option for an 8TB DB?
It depends on the type of data you have and the business requirements for the backups. For example, a full xtrabackup with incrementals on the weekdays would be a good idea. The time required for backups plays an important role here: backing up to a slow NAS share can be time consuming, and it will make xtrabackup record a lot of transactions, which will further increase your restore time. Also look into backing up very important small-to-medium-sized tables via logical backups.
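The weekday-incremental idea sketched in that answer maps onto innobackupex like this (paths and the timestamped directory name are hypothetical; --incremental and --incremental-basedir are documented Percona XtraBackup options):

```shell
# Sunday: full base backup.
innobackupex /backups/full
# Weekdays: incremental deltas against the most recent backup directory.
innobackupex --incremental /backups/inc \
             --incremental-basedir=/backups/full/2015-07-19_00-00-00
```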

Q: I'm not sure if this will be covered, but if you have a 3-node master-master-master cluster using HAProxy, is it recommended to run the backup from the HAProxy server or directly on a specific server? Would it be wise to have a 4th server that is part of the cluster, but not read from, to perform the backups?
I am assuming this is a Galera cluster setup, in which case you can do backups locally on any of the nodes using tools like Percona XtraBackup. However, the best solution would be spinning off a slave from one of the nodes and running backups there.

Q: With Mydumper, can we stream the data over SSH or netcat to another server? Or would one have to use something like NFS? I've used mysqldump piped over netcat before; curious if we can do that with Mydumper?
Mydumper is similar in nature to other MySQL client tools: it can be run remotely (the --host option), which means you can run mydumper from another server to back up the master or a slave. Mydumper's output can be piped as well.
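Two hedged sketches of what that can look like in practice (host names and paths are hypothetical; mydumper's --host and --outputdir options are real, while the tar-over-ssh step is simply the common workaround for mydumper writing per-table files to a directory rather than a single stream):

```shell
# Option 1: run mydumper on the backup host and pull over the network.
mydumper --host db1.example.com --user backup --password secret \
         --outputdir /backups/db1

# Option 2: dump locally, then stream the output directory through ssh.
mydumper --outputdir /tmp/dump \
  && tar -C /tmp -czf - dump | ssh backuphost 'cat > /backups/dump.tar.gz'
```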

Q: Is Mydumper still maintained? It hasn't had a release since March of last year.
Indeed, Max Bubenick from Percona is currently maintaining the project. He has added new features that make the tool more comprehensive and feature-rich, and he is planning the next release soon; stay tuned for the blog post.

Q: Is MyDumper open source? Are prepare and restore the same?
Absolutely. Right now you need to download the source and compile it, but very soon we will have packages built for it too. Prepare and restore are common terms in backup lingo. In the webinar, restore means copying the backup files back from their storage location to the destination location, whereas prepare means applying the transactions to the backup and making it ready to restore.

Q: Is binlog mirroring needed on Galera (PXC)?
It is a good idea to keep binlog mirroring. Even though IST and SST will do their job of joining a node, the binlogs can play a role if you want to roll forward a particular schema on a slave or QA instance.

Q: We know that Percona XtraBackup takes full and incremental backups. Does MyDumper help in taking incremental backups too?
At this moment we do not have the ability to take incremental backups with mydumper or with any other logical backup tool. However, weekly full (logical) backups plus daily binlog backups can serve the same purpose as other incremental backup solutions, and they are easy to restore.
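The daily-binlog half of that strategy can be scripted with mysqlbinlog's live-backup mode, available since MySQL 5.6 (host and credentials below are placeholders):

```shell
# Continuously copy the master's binlogs to local files, in binary form.
mysqlbinlog --read-from-remote-server --host=db1.example.com \
            --user=backup --password=secret \
            --raw --stop-never mysql-bin.000001
```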

Q: Is it possible to encrypt the output file? What would be the best methodology to back up data with a database size of 7 to 8GB that increases 25% each day? What is the difference between innobackupex and mydumper?
Indeed it's possible to encrypt the backup files; as a matter of fact, we encrypt backup files with GPG keys before uploading them to offsite storage. The best method to back up a 7 to 8GB instance would be implementing all three types of backups we discussed in the webinar. Your scenario requires planning for the future, so it's always best to have different solutions available as the data grows. Innobackupex is part of the Percona XtraBackup toolkit and is a script that takes binary backups of databases; MyDumper, on the other hand, is a logical backup tool that creates backups as text files.

Q: How can I optimize a MySQL dump of a large database? The main bottleneck when taking a mysqldump backup of a large database is that if any table is corrupted, the dump never gets past it by temporarily skipping the corrupted table. Can we take a backup of a large database without a locking mechanism, i.e. does someone know how to make the backup without locking the tables? Is there any tool that is faster at backup and restoration, or how can we use mysqldump to work around this kind of issue during crash recovery?
Mysqldump is a logical backup tool, and as such it executes full table scans to back up the tables and write them to the output file, so it is very difficult to improve the performance of mysqldump (query-wise). Assuming the corruption you refer to is in MyISAM tables, it is highly recommended you repair them before backing up; also, to make sure mysqldump doesn't fail with an error on such a corrupt table, try its --force option. If you're using MyISAM tables, the first recommendation is to switch to InnoDB: with most tables on InnoDB, locking can be greatly reduced, to the point where it is negligible; look into --single-transaction. Faster backup and recovery can be achieved with binary backups; look into the Percona XtraBackup tool, for which we have comprehensive documentation to get you started.
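Put together, the options mentioned in that answer look like this (a sketch; --single-transaction only gives a consistent, lock-free dump for InnoDB tables, and credentials and paths are placeholders):

```shell
mysqldump --single-transaction --force \
          --user=backup --password=secret \
          --all-databases > /backups/full-$(date +%F).sql
```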

Hope this was a good webinar and we have answered most of your questions. Stay tuned for more such webinars from Percona.

The post The Q&A: Creating best-in-class backup solutions for your MySQL environment appeared first on MySQL Performance Blog.

Missing Percona-Server-devel for version 5.6 in Debian apt

Latest Forum Posts - July 23, 2015 - 4:47am

I have been struggling to install the devel package, but no luck. Kindly advise.


root@instance-5:/usr/lib/x86_64-linux-gnu# apt-get install Percona-Server-devel*
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package Percona-Server-devel*
E: Couldn't find any package by regex 'Percona-Server-devel*'
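For what it's worth, Debian/Ubuntu package names are lower-case, so the capitalized RPM-style name will never match. A hedged starting point (the exact -dev package name varies by Percona Server release, so search first):

```shell
# List candidate development packages in the configured Percona repo:
apt-cache search percona | grep -i dev
# Then install the matching client-library headers package, for example:
# sudo apt-get install libperconaserverclient18.1-dev
```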

MySQL QA Episode 8: Reducing Testcases for Engineers: tuning reducer.sh

Latest MySQL Performance Blog posts - July 23, 2015 - 3:00am

Welcome to MySQL QA Episode 8: Reducing Testcases for Engineers: tuning reducer.sh

  1. Advanced configurable variables & their default/vanilla reducer.sh settings
  2. Some examples

Full-screen viewing @ 720p resolution recommended.

The post MySQL QA Episode 8: Reducing Testcases for Engineers: tuning reducer.sh appeared first on MySQL Performance Blog.

Index not working for date field used in between query

Latest Forum Posts - July 22, 2015 - 10:51pm

We have MySQL 5.6.21 enterprise edition.

For the query below we added an index on the createDate field, but even after adding the index the query plan shows that it is not being used.

Query: select iName from db1.User where iName like '0012%' and createDate between '2000-05-11 00:00:00' and '2015-07-01 23:59:59';

Added index on: createDate

Table structure:
`uuid_` varchar(75) DEFAULT NULL,
`userId` bigint(20) NOT NULL,
`companyId` bigint(20) DEFAULT NULL,
`createDate` datetime DEFAULT NULL,
`modifiedDate` datetime DEFAULT NULL,
`defaultUser` tinyint(4) DEFAULT NULL,
`contactId` bigint(20) DEFAULT NULL,
`password_` varchar(75) DEFAULT NULL,
`passwordEncrypted` tinyint(4) DEFAULT NULL,
`passwordReset` tinyint(4) DEFAULT NULL,
`passwordModifiedDate` datetime DEFAULT NULL,
`digest` varchar(255) DEFAULT NULL,
`reminderQueryQuestion` varchar(75) DEFAULT NULL,
`reminderQueryAnswer` varchar(75) DEFAULT NULL,
`graceLoginCount` int(11) DEFAULT NULL,
`iName` varchar(75) DEFAULT NULL,
`emailAddress` varchar(75) DEFAULT NULL,
`facebookId` bigint(20) DEFAULT NULL,
`openId` varchar(1024) DEFAULT NULL,
`portraitId` bigint(20) DEFAULT NULL,
`languageId` varchar(75) DEFAULT NULL,
`timeZoneId` varchar(75) DEFAULT NULL,
`greeting` varchar(255) DEFAULT NULL,
`comments` longtext,
`firstName` varchar(75) DEFAULT NULL,
`middleName` varchar(75) DEFAULT NULL,
`lastName` varchar(75) DEFAULT NULL,
`jobTitle` varchar(100) DEFAULT NULL,
`loginDate` datetime DEFAULT NULL,
`loginIP` varchar(75) DEFAULT NULL,
`lastLoginDate` datetime DEFAULT NULL,
`lastLoginIP` varchar(75) DEFAULT NULL,
`lastFailedLoginDate` datetime DEFAULT NULL,
`failedLoginAttempts` int(11) DEFAULT NULL,
`lockout` tinyint(4) DEFAULT NULL,
`lockoutDate` datetime DEFAULT NULL,
`agreedToTermsOfUse` tinyint(4) DEFAULT NULL,
`active_` tinyint(4) DEFAULT NULL,
PRIMARY KEY (`userId`),
UNIQUE KEY `IX_C5806019` (`companyId`,`iName`),
UNIQUE KEY `IX_9782AD88` (`companyId`,`userId`),
UNIQUE KEY `IX_5ADBE171` (`contactId`),
KEY `IX_3A1E834E` (`companyId`),
KEY `IX_5204C37B` (`companyId`,`active_`),
KEY `IX_6EF03E4E` (`companyId`,`defaultUser`),
KEY `IX_1D731F03` (`companyId`,`facebookId`),
KEY `IX_89509087` (`companyId`,`openId`(767)),
KEY `IX_762F63C6` (`emailAddress`),
KEY `IX_A18034A4` (`portraitId`),
KEY `IX_E0422BDA` (`uuid_`),
KEY `IX_CDATE` (`createDate`)

QUERY PLAN after adding index:

*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: User_
type: ALL
possible_keys: IX_CDATE
key: NULL
key_len: NULL
ref: NULL
rows: 1118483
Extra: Using where

I believe this was a bug earlier and was fixed in 5.5 and above. Please suggest.
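One hedged way to check whether the optimizer is skipping IX_CDATE by cost (a 15-year BETWEEN range likely matches most of the table's ~1.1M rows, making a full scan cheaper) rather than because of a bug is to force the index and compare the EXPLAIN output; FORCE INDEX is standard MySQL syntax, and the query below just restates the poster's own:

```shell
mysql db1 -e "EXPLAIN SELECT iName FROM User FORCE INDEX (IX_CDATE)
              WHERE iName LIKE '0012%'
                AND createDate BETWEEN '2000-05-11 00:00:00'
                                   AND '2015-07-01 23:59:59'\G"
```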

Master-master async replication between two PXC clusters and replicate-ignore-table

Latest Forum Posts - July 22, 2015 - 1:39pm

We have set up master-master async replication between two 3 node 5.6.24 PXC clusters.
One table contains environment specific data and we need to exclude this table from the bi-directional replication.
However, if we set replicate-ignore-table on the async slave node, the table is out of sync between the nodes in the same cluster.

It seems replicate-ignore-table affects both async MySQL replication between the clusters and Galera replication within the cluster.

Is there any way we can work around this problem?


SELinux and the MySQL init script

Latest MySQL Performance Blog posts - July 22, 2015 - 10:09am

I recently worked with a customer who had a weird issue: when their MySQL server (Percona Server 5.5) was started, if they tried to run service mysql start a second time, the init script was not able to detect that an instance was already running. As a result, it tried to start a second instance with the same settings as the first one. Of course this fails and creates a mess. What was the issue? A missing rule in SELinux. At least that's what it looks like.


If SELinux is set to enforcing and you are using Percona Server on CentOS/RHEL 6 (other versions could be affected), service mysql start doesn't work properly, and the fix is simple:

# grep mysqld_safe /var/log/audit/audit.log | audit2allow -M mysqld_safe
# semodule -i mysqld_safe.pp
# service mysql restart

Other options are:

  • Set SELinux to permissive
  • Use the CentOS/RHEL standard MySQL init script (note I didn’t extensively check if that could trigger other errors)
How did we see the issue?

That was pretty easy: if an instance is already running and if you run service mysql start again, you should see something like this in the MySQL error log:

150717 08:47:58 mysqld_safe A mysqld process already exists

But if instead you see tons of error messages like:

2015-07-17 08:47:05 27065 [ERROR] InnoDB: Unable to lock ./ibdata1, error: 11
2015-07-17 08:47:05 27065 [Note] InnoDB: Check that you do not already have another mysqld process using the same InnoDB data or log files.

it means that the init script is broken somewhere.


When the issue was brought to my attention, I tried to reproduce it on my local box, but with no luck. What was so special in the configuration used by the customer?

The only thing that was slightly out of the ordinary was SELinux which was set to enforcing. Then we set SELinux to permissive, and guess what? service mysql start was now working properly and it didn’t allow 2 concurrent instances to be run!

Next step was to look at the SELinux logs to find any error related to MySQL, and we discovered messages like:

type=SYSCALL msg=audit(1437121845.464:739): arch=c000003e syscall=62 success=no exit=-13 a0=475 a1=0 a2=0 a3=7fff0e954130 items=0 ppid=1 pid=5732 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=5 comm="mysqld_safe" exe="/bin/bash" subj=unconfined_u:system_r:mysqld_safe_t:s0 key=(null)

At this point, we knew that a rule was missing for mysqld_safe and that we needed to add a new one.

Deeper investigation

Actually, what happens is that SELinux prevents mysqld_safe from executing this check:

if kill -0 $PID > /dev/null 2> /dev/null

When the check fails, the script assumes that the mysqld process is not running. That's why a second mysqld is started.
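In isolation, the liveness check works like this (a stand-in sketch using a throwaway sleep process instead of mysqld): kill -0 delivers no signal, it only asks the kernel whether the PID exists and may be signaled, and that is exactly the operation SELinux denied for mysqld_safe:

```shell
sleep 30 &                # stand-in for the running mysqld
PID=$!
if kill -0 $PID > /dev/null 2> /dev/null; then
  status=running          # PID exists and is signalable
else
  status=dead             # on the customer's box, SELinux forced this branch
fi
kill $PID 2> /dev/null
```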

However users of Oracle MySQL will probably never experience that issue, simply because the init script is a bit different: before calling mysqld_safe, the init script tries to ping a potential mysqld instance and if it gets a positive reply or an Access denied error, it concludes that mysqld is already running and it doesn’t invoke mysqld_safe.

The fix

Fortunately, this is quite simple. You can generate the corresponding rule with audit2allow:

grep mysqld_safe /var/log/audit/audit.log | audit2allow -M mysqld_safe

And after checking the corresponding .te file, we were able to load that new module:

semodule -i mysqld_safe.pp

After stopping MySQL, you can now use service mysql start normally.


This issue was quite interesting to work on because finding the culprit was not that easy. Also it only triggers when SELinux is enabled and Percona Server is used. Now should the init script of Percona Server be fixed? I’m not sure of the potential problems that could occur if we did so, but of course feel free to leave your feedback in the comments.

The post SELinux and the MySQL init script appeared first on MySQL Performance Blog.

Percona now offering 24/7 support for MongoDB and TokuMX

Latest MySQL Performance Blog posts - July 21, 2015 - 12:22pm

Today Percona announced the immediate availability of 24/7, enterprise-class support for MongoDB and TokuMX. The new support service helps organizations achieve maximum application performance without database bloat. Customers have round-the-clock access (365 days a year) to the most trusted team of database experts in the open source community.

The news means that Percona now offers support across the entire open-source database ecosystem, including the entire LAMP stack (Linux, Apache, MySQL, and PHP/Python/Perl), providing a single, expert, proven service provider for companies to turn to in good times (always best to be proactive) – and during emergencies, too.

Today’s support announcement follows Percona’s acquisition of Tokutek, which included the Tokutek distribution of MongoDB – making Percona the first vendor to offer both MySQL and MongoDB software and solutions.

Like Percona’s other support services, support for MongoDB and TokuMX enables organizations to talk directly with Percona’s support experts at any time, day or night.

The Percona Support team is always ready to help resolve database and server instability, initiate data recovery, optimize performance, deal with response and outage issues – and ensure proactive system monitoring and alert responses. Percona also offers support across on-premises, cloud, and hybrid deployments.

The post Percona now offering 24/7 support for MongoDB and TokuMX appeared first on MySQL Performance Blog.

MySQL QA Episode 7: Reducing Testcases for Beginners – single-threaded reducer.sh!

Latest MySQL Performance Blog posts - July 21, 2015 - 3:00am

Welcome to MySQL QA Episode #7 – Reducing Testcases for Beginners: single-threaded reducer.sh!

In this episode we’ll learn how to use reducer.sh. Topics discussed:

  1. reducer.sh introduction/concepts
  2. Basic configurable variables & their default reducer.sh settings
    1. INPUTFILE options
    2. MODE=x
    3. TEXT=”text”
    5. MYEXTRA
    6. MYBASE
    9. How to learn more about each of the settings
  3. Manual example
  4. Introduction to the script’s self-recursion concept – subreducer
  5. Quick setup re-cap, details of an already executed QA run
  6. Examples from pquery-prep-red.sh (including some issue reviews)
  7. Gotchas

Full-screen viewing @ 720p resolution recommended.

If the speed is too slow for you, consider setting YouTube to 1.25 playback speed.

The post MySQL QA Episode 7: Reducing Testcases for Beginners – single-threaded reducer.sh! appeared first on MySQL Performance Blog.

Percona Live Amsterdam discounted pricing ends July 26!

Latest MySQL Performance Blog posts - July 20, 2015 - 2:32pm

The Percona Live Data Performance Conference in Amsterdam is just two months away and it’s going to be an incredible event. With a new expanded focus on MySQL, NoSQL, and Data in the Cloud, this conference will be jam-packed with talks from some of the industry’s leading experts from MongoDB, VMware, Oracle, MariaDB, Facebook, Booking.com, Pythian, Google, Rackspace, Yelp (and many more, including of course Percona).

Early Bird pricing ends this Sunday (July 26)! So if you want to save €25, then you’d better register now. And for all of my readers, you can take an additional 10% off the entire registration price by using the promo code “10off” at checkout.

It’s also important to book your room at the Mövenpick Hotel for a special rate – but hurry because that deal ends July 27 and the rooms are disappearing fast due to some other big events going on in Amsterdam that week.

Sponsorship opportunities are also still available for Percona Live Amsterdam. Event sponsors become part of a dynamic and fast-growing ecosystem and interact with hundreds of DBAs, sysadmins, developers, CTOs, CEOs, business managers, technology evangelists, solution vendors and entrepreneurs who typically attend the event. This year’s conference will feature expanded accommodations and turnkey kiosks. Current sponsors include:

We’ve got a fantastic conference schedule on tap. Sessions, which will follow each morning’s keynote addresses, feature a variety of topics related to MySQL and NoSQL, High Availability, DevOps, Programming, Performance Optimization, Replication and Backup, MySQL in the Cloud, MySQL Case Studies, Security, and What’s New in MySQL and MongoDB.

Sessions Include:
  • “InnoDB: A Journey to the Core,” Jeremy Cole, Sr. Systems Engineer, Google, Inc. and Davi Arnaut, Software Engineer, LinkedIn
  • “MongoDB Patterns and Antipatterns for Dev and Ops,” Steffan Mejia, Principal Consulting Engineer, MongoDB, Inc.
  • “NoSQL’s Biggest Lie: SQL Never Went Away,” Matthew Revell, Lead Developer Advocate, Couchbase
  • “The Future of Replication is Today: New Features in Practice,” Giuseppe Maxia, Quality Assurance Architect, VMware
  • “What’s New in MySQL 5.7,” Geir Høydalsvik, Senior Software Development Director, Oracle
Tutorials include:
  • “Best Practices for MySQL High Availability,” Colin Charles, Chief Evangelist, MariaDB
  • “Mongo Sharding from the Trench: A Veterans Field Guide,” David Murphy, Lead DBA, Rackspace Data Stores
  • “Advanced Percona XtraDB Cluster in a Nutshell, La Suite: Hands on Tutorial Not for Beginners!,” Frederic Descamps, Senior Architect, Percona

The conference’s evening events will be a perfect way to network, relax and have FUN while seeing the beautiful city of Amsterdam!

Monday night, September 21, after the tutorial sessions conclude, attendees are invited to the Delirium Cafe located across the street from the conference venue. With more than 500 beers on tap and great food, this will be the perfect way to kick off the Conference.

Tuesday night, September 22, Booking.com will be hosting the Community dinner of the year at their very own headquarters located in historic Rembrandt Square in the heart of the city. Hop on one of the sponsored canal boats that will pick you up right outside of the Mövenpick for your chance to see the city from the water on the way to the community dinner! You’ll be dropped off right next to Booking.com’s offices!

Wednesday night, September 23, there will be a closing reception taking place at the Mövenpick for your last chance to visit with our exhibitors and to wrap up what promises to be an amazing conference!

See you in Amsterdam!

The post Percona Live Amsterdam discounted pricing ends July 26! appeared first on MySQL Performance Blog.

Fractal Tree library as a Key-Value store

Latest MySQL Performance Blog posts - July 20, 2015 - 6:50am

As you may know, Tokutek is now part of Percona and I would like to explain some internals of TokuDB and TokuMX – what performance benefits they bring, along with further optimizations we are working on.

However, before going into deep detail, I feel it is necessary to explain the fundamentals of a Key-Value store and how Fractal Tree handles it.

Before that, allow me to say that I have heard opinions that the “Fractal Tree” name does not reflect the internal structure and looks more like a marketing term than a technical one. I will not go into this discussion and will keep using the name “Fractal Tree” out of respect for its inventors. I think they are in a position to give their invention any name they want.

So with that said, the Fractal Tree library implements a new data structure for more efficient handling of a Key-Value store (with a main focus on insertion, but more on this later).

You may question how Key-Value relates to SQL transactional databases – this seems to come more from the NoSQL world. That is partially true, and the Fractal Tree Key-Value library is successfully used in the Percona TokuMX (based on MongoDB 2.4) and Percona TokuMXse (storage engine for MongoDB 3.0) products.

But if we look at a Key-Value store in general, it may actually be a good fit for relational databases as well. To explain this, let’s take a look at Key-Value details.

So what is Key-Value data structure?

We will use the notation (k,v), or key=>val, which basically means we associate some value “v” with a key “k”. For software developers, the following analogies may be familiar:
key-value access is implemented as a dictionary in Python, an associative array in PHP, or a map in C++.
(More details in Wikipedia)

I will define key-value structure as a list of pairs (k,v).

It is important to note that the key and the value do not have to be scalars (single values); each can be compound.
That is, "k1, k2, k3 => v1, v2", which we can read as "give me two values by a 3-part key".
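To make the compound-key idea concrete, here is a minimal Python sketch (purely illustrative, using the dictionary analogy above; tuples stand in for compound keys and values, this is not engine code):

```python
# A compound 3-part key mapped to a compound 2-part value.
# Python tuples are hashable, so they work directly as dict keys.
store = {}
store[("k1", "k2", "k3")] = ("v1", "v2")

# "Give me two values by a 3-part key":
v1, v2 = store[("k1", "k2", "k3")]
```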

This brings us closer to a database table structure.
If we add the requirement that all keys (k) in the list of (k,v) pairs must be unique, this represents
a PRIMARY KEY for a traditional database table.
To understand this better, let’s take a look at the following table:
CREATE TABLE metrics (
  ts timestamp,
  device_id int,
  metric_id int,
  cnt int,
  val double,
  PRIMARY KEY (ts, device_id, metric_id),
  KEY metric_id (metric_id, ts),
  KEY device_id (device_id, ts)
);

We can state that the Key-Value structure (ts, device_id, metric_id => cnt, val), with the requirement
that "ts, device_id, metric_id" be unique, represents the PRIMARY KEY for this table; in fact, this is how InnoDB (and TokuDB, for that matter) stores data internally.

Secondary indexes can also be represented in key=>value notation, for example, again, as used in TokuDB and InnoDB:
(secondary_index_key => primary_key), where the key for a secondary index points to a primary key (so later we can get values by looking up the primary key). Please note that secondary_index_key may not be unique (unless we add a UNIQUE constraint to the secondary index).

Or, if we take our table again, the secondary keys are defined as
(metric_id, ts => ts, device_id, metric_id)
(device_id, ts => ts, device_id, metric_id)
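The two-step nature of a secondary-index lookup can be sketched in Python (an illustrative toy based on the metrics table above; the dicts and the lookup_by_metric helper are my own, not engine code):

```python
# Primary "table": (ts, device_id, metric_id) => (cnt, val)
primary = {
    (1000, 1, 7): (3, 0.5),
    (1001, 2, 7): (1, 1.2),
}

# Secondary index on (metric_id, ts): its value is the primary key,
# mirroring (metric_id, ts => ts, device_id, metric_id) above.
secondary = {(m, ts): (ts, d, m) for (ts, d, m) in primary}

def lookup_by_metric(metric_id, ts):
    pk = secondary[(metric_id, ts)]  # step 1: secondary key => primary key
    return primary[pk]               # step 2: primary key => values

print(lookup_by_metric(7, 1000))  # (3, 0.5)
```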

It is expected that a Key-Value store supports basic data manipulation and extraction operations, such as:

  • Add or Insert: add a (key => value) pair to the collection
  • Update: change (key => value) to (key => value2), that is, update the value assigned to a key
  • Delete: remove(key), deleting the (key => value) pair from the collection
  • Lookup (select): return the value assigned to a given key

and I want to add a fifth operation:

  • Range lookup: return all values for keys defined by a range, such as "key > 5" or "key >= 10 and key < 15"
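These five operations can be sketched in a few lines of Python (an in-memory toy of my own, nothing like the on-disk structures discussed below; a sorted key list gives us cheap range lookups):

```python
import bisect

class KVStore:
    """Toy Key-Value store keeping keys sorted to support range lookups."""
    def __init__(self):
        self._keys = []   # sorted list of keys
        self._data = {}   # key => value

    def insert(self, key, value):        # Add: put (key => value) in the collection
        if key not in self._data:
            bisect.insort(self._keys, key)
        self._data[key] = value          # doubles as Update for an existing key

    def delete(self, key):               # Delete: remove the (key => value) pair
        del self._data[key]
        self._keys.remove(key)

    def lookup(self, key):               # Lookup: return the value assigned to key
        return self._data[key]

    def range_lookup(self, low, high):   # Range lookup: values for low <= key < high
        lo = bisect.bisect_left(self._keys, low)
        hi = bisect.bisect_left(self._keys, high)
        return [self._data[k] for k in self._keys[lo:hi]]

kv = KVStore()
for k in (5, 10, 12, 20):
    kv.insert(k, f"v{k}")
kv.insert(10, "v10-updated")       # Update
kv.delete(20)                      # Delete
print(kv.lookup(12))               # v12
print(kv.range_lookup(10, 15))     # ['v10-updated', 'v12']
```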

The way software implements the internal structure of a Key-Value store defines the performance of the mentioned operations, especially once the data size of the store grows beyond memory capacity.

For decades, the most popular data structure for representing a Key-Value store on disk has been the B-Tree, and with good reason. I won’t go into B-Tree details (see for example https://en.wikipedia.org/wiki/B-tree), but it provides probably the best possible time for Lookup operations. However, it has challenges when it comes to Insert operations.

And this is an area where newer designs such as the Fractal Tree and the LSM-tree (https://en.wikipedia.org/wiki/Log-structured_merge-tree) propose structures that provide better performance for Insert operations (often at the expense of Lookup/Select operations, which may become slower).

To get familiar with the LSM-tree (the structure used by RocksDB) I recommend http://www.benstopford.com/2015/02/14/log-structured-merge-trees/. As for the Fractal Tree, I am going to cover its details in following posts.

The post Fractal Tree library as a Key-Value store appeared first on MySQL Performance Blog.

Creating sort index takes 2.39990 seconds

Latest Forum Posts - July 20, 2015 - 12:37am
select this_.WORKLIST_LOG_PK as WORKLIST1_164_0_, this_.APPLICATION_STATUS_LOG as APPLICAT2_164_0_, this_.test_CHECKER1_REMARKS_LOG_COL as test3_164_0_, this_.test_CHECKER2_REMARKS_LOG_COL as test4_164_0_, this_.TRANSACTION_NUMBER_LOG as TRANSACT5_164_0_, this_.HASERRORS_REMARKS_LOG as HASERRORS6_164_0_, this_.NEW_AR_FLAG_LOG_COL as NEW7_164_0_, this_.OWNER_ID_LOG as OWNER8_164_0_, this_.POS_CHECKER_REMARKS as POS9_164_0_, this_.PROPOSER_EIA_NUMBER_LOG as PROPOSER10_164_0_, this_.PROPOSER_ID_LOG_FK as PROPOSER11_164_0_, this_.REJECTED_REMARKS_ID_LOG as REJECTED12_164_0_, this_.REMARKS_LOG as REMARKS13_164_0_, this_.TRANSACTION_TYPE_LOG as TRANSAC14_164_0_, this_.WFACTION_DATE as WFACTION15_164_0_, this_.WFACTION_USER as WFACTION16_164_0_, this_.WF_DISPACTION_LOG as WF17_164_0_, this_.WF_STAGE_LOG as WF18_164_0_ from INSURANCE_WORKLIST_LOG this_ where this_.PROPOSER_EIA_NUMBER_LOG='2000545565555' and this_.TRANSACTION_TYPE_LOG='01' order by this_.WFACTION_DATE desc\G

query profile:

| Status | Duration |
| starting | 0.000066 |
| checking permissions | 0.000007 |
| Opening tables | 0.000011 |
| init | 0.000029 |
| System lock | 0.000009 |
| optimizing | 0.000011 |
| statistics | 0.000012 |
| preparing | 0.000013 |
| Sorting result | 0.000006 |
| executing | 0.000004 |
| Sending data | 0.000013 |
| Creating sort index | 2.393860 |
| end | 0.000018 |
| query end | 0.000010 |
| closing tables | 0.000015 |
| freeing items | 0.000023 |
| cleaning up | 0.000046 |

It takes 2.393 seconds for "Creating sort index". How can I rectify this?

Error checksumming table Incorrect decimal value

Latest Forum Posts - July 18, 2015 - 12:58am
Hello, during a checksum after a server crash I can't checksum one table; I get this error:

Error checksumming table knox.prestazioni_dettagli: Error executing checksum query: Checksum query for table knox.prestazioni_dettagli caused MySQL error 1366:
Level: Error
Code: 1366
Message: Incorrect decimal value: '' for column '' at row -1
Query: REPLACE INTO `heartbeat`.`checksums` (db, tbl, chunk, chunk_index, lower_boundary, upper_boundary, this_cnt, this_crc) SELECT ?, ?, ?, ?, ?, ?, COUNT(*) AS cnt, COALESCE(LOWER(CONV(BIT_XOR(CAST(CRC32(CONCAT_WS(' #', `id`, `idprestazione`, `idtariffa`, `iddescrizionetariffa`, `moltiplicatoretariffa`, `costofornitore`, `costocliente`, `costoclientefinale`, `userdefined`, `minimofornitore`, `minimocliente`)) AS UNSIGNED)), 10, 16)), 0) AS crc FROM `knox`.`prestazioni_dettagli` FORCE INDEX(`PRIMARY`) WHERE ((`id` >= ?)) AND ((`id` <= ?)) /*checksum chunk*/

The table was checked and repaired with mysql and now it's ok.

Thanks for your help


Load spike issue in PXC

Latest Forum Posts - July 16, 2015 - 9:52am
I have a PXC setup with 3 nodes in each of two cluster groups, with cross-data-center replication between the two groups. My cluster setup is like below.
Here Group1 is the primary datacenter and we have replication from B3 to A3 (A3 is a slave).
We have created a new DB (LBC1) in Group1 without replicating it into Group2 (DB LBC1 is ignored in Group2).
The problem is that when we run a load test on the LBC1 database in GROUP1, we see high load on A3 while load is normal on A1 and A2.
The configuration is the same on all nodes. Please help me figure out how to fix this load issue on node A3.
log-slave-updates is enabled on A3 & B1.

Percona XtraDB Cluster 2nd node unable to join the 1st node

Latest Forum Posts - July 16, 2015 - 9:10am
I have 3 instances on Amazon EC2 running CentOS6. I want to set up a 3-node PXC.

My first node works fine and my second node also runs successfully. However, the second node is not able to join the first node.

Could anybody please help me with this issue?

Node 1:


Node 2:


Bypassing SST in Percona XtraDB Cluster with binary logs

Latest MySQL Performance Blog posts - July 16, 2015 - 3:00am

In my previous post, I used incremental backups in Percona XtraBackup as a method for rebuilding a Percona XtraDB Cluster (PXC) node without triggering an actual SST. Practically speaking this reproduces the SST steps, but it can be handy if you already have backups available to use.

In this post, I want to present another methodology for this that also uses a full backup, but instead of incrementals uses any binary logs that the cluster may be producing.

Binary logs on PXC

Binary logs are not strictly needed in PXC for replication, but you may be using them for backups or for asynchronous slaves of the cluster.  To set them up properly, we need the following settings added to our config:

server-id=1
log-bin
log-slave-updates

As I stated, none of these are strictly needed for PXC.

  • server-id=1 — We recommend PXC nodes share the same server-id.
  • log-bin — actually enables the binary log
  • log-slave-updates — logs ALL updates to the cluster to this server’s binary log

This doesn’t need to be set on every node, but likely you would set these on at least two nodes in the cluster for redundancy.

Note that this strategy should work with or without 5.6 asynchronous GTIDs.

Recovering data with backups and binary logs

This methodology is conventional point-in-time backup recovery for MySQL.  We have a full backup that was taken at a specific binary log position:

... backup created in the past ...
# innobackupex --no-timestamp /backups/full
# cat /backups/full/xtrabackup_binlog_info
node3-bin.000002	735622700

We have this binary log and all binary logs since:

-rw-r-----. 1 root root 1.1G Jul 14 18:53 node3-bin.000002
-rw-r-----. 1 root root 1.1G Jul 14 18:53 node3-bin.000003
-rw-r-----. 1 root root 321M Jul 14 18:53 node3-bin.000004

Recover the full backup

We start by preparing the backup with –apply-log:

# innobackupex --apply-log --use-memory=1G /backups/full
...
xtrabackup: Recovered WSREP position: 1663c027-2a29-11e5-85da-aa5ca45f600f:60072936
...
InnoDB: Last MySQL binlog file position 0 735622700, file name node3-bin.000002
...
# innobackupex --copy-back /backups/full
# chown -R mysql.mysql /var/lib/mysql

The output confirms the same binary log file and position that we knew from before.

Start MySQL without Galera

We need to start mysql, but without Galera so we can apply the binary log changes before trying to join the cluster. We can do this simply by commenting out all the wsrep settings in the MySQL config.

# grep wsrep /etc/my.cnf
#wsrep_cluster_address = gcomm://pxc.service.consul
#wsrep_cluster_name = mycluster
#wsrep_node_name = node3
#wsrep_node_address =
#wsrep_provider = /usr/lib64/libgalera_smm.so
#wsrep_provider_options = "gcache.size=8G; gcs.fc_limit=1024"
#wsrep_slave_threads = 4
#wsrep_sst_method = xtrabackup-v2
#wsrep_sst_auth = sst:secret
# systemctl start mysql

Apply the binary logs

We now check our binary log starting position:

# mysqlbinlog -j 735622700 node3-bin.000002 | grep Xid | head -n 1
#150714 18:38:36 server id 1 end_log_pos 735623273 CRC32 0x8426c6bc 	Xid = 60072937

We can compare the Xid on this binary log position to that of the backup. The Xid in a binary log produced by PXC will be the seqno of the GTID of that transaction. The starting position in the binary log shows us the next Xid is one increment higher, so this makes sense: we can start at this position in the binary log and apply all changes as high as we can go to get the datadir up to a more current position.

# mysqlbinlog -j 735622700 node3-bin.000002 | mysql
# mysqlbinlog node3-bin.000003 | mysql
# mysqlbinlog node3-bin.000004 | mysql

This action isn’t particularly fast as binlog events are being applied by a single connection thread. Remember that if the cluster is taking writes while this is happening, the amount of time you have is limited by the size of gcache and the rate at which it is being filled up.
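As a rough back-of-the-envelope sketch (the numbers here are my own assumptions, not measurements from this cluster), the window you have is simply the gcache size divided by the rate at which writes fill it:

```python
# Assumed workload figures -- substitute your own measurements.
gcache_size_bytes = 8 * 1024**3          # gcache.size = 8G, as in the config shown earlier
write_rate_bytes_per_sec = 2 * 1024**2   # cluster fills the gcache at ~2 MB/s

window_sec = gcache_size_bytes / write_rate_bytes_per_sec
print(f"time before gcache overflow: ~{window_sec / 3600:.1f} hours")  # ~1.1 hours
```

If applying the binary logs takes longer than this window, the node will fall back to a full SST despite all the preparation.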

Prime the grastate

Once the binary logs are applied, we can check the final log’s last position to get the seqno we need:

[root@node3 backups]# mysqlbinlog node3-bin.000004 | tail -n 500
...
#150714 18:52:52 server id 1 end_log_pos 335782932 CRC32 0xb983e3b3 	Xid = 63105191
...

This is the seqno we will put in our grastate.dat. Like in the last post, we can copy a grastate.dat from another node to get the proper format. However, this time we must put the proper seqno into place:

# cat grastate.dat
# GALERA saved state
version: 2.1
uuid: 1663c027-2a29-11e5-85da-aa5ca45f600f
seqno: 63105191
cert_index:

Be sure the grastate.dat has the proper permissions, uncomment the wsrep settings and restart mysql on the node:

# chown mysql.mysql /var/lib/mysql/grastate.dat
# grep wsrep /etc/my.cnf
wsrep_cluster_address = gcomm://pxc.service.consul
wsrep_cluster_name = mycluster
wsrep_node_name = node3
wsrep_node_address =
wsrep_provider = /usr/lib64/libgalera_smm.so
wsrep_provider_options = "gcache.size=8G; gcs.fc_limit=1024"
wsrep_slave_threads = 4
wsrep_sst_method = xtrabackup-v2
wsrep_sst_auth = sst:secret
# systemctl restart mysql

The node should now attempt to join the cluster with the proper GTID:

2015-07-14 19:28:50 4234 [Note] WSREP: Found saved state: 1663c027-2a29-11e5-85da-aa5ca45f600f:63105191

This, of course, still does not guarantee an IST. See my previous post for more details on the conditions needed for that to happen.

The post Bypassing SST in Percona XtraDB Cluster with binary logs appeared first on MySQL Performance Blog.

Bypassing SST in Percona XtraDB Cluster with incremental backups

Latest MySQL Performance Blog posts - July 16, 2015 - 2:00am
Beware the SST

In Percona XtraDB Cluster (PXC) I often run across users who are fearful of SSTs on their clusters. I’ve always maintained that if you can’t cope with an SST, PXC may not be right for you, but that doesn’t change the fact that SSTs with multiple terabytes of data can be quite costly.

SST, by current definition, is a full backup transferred from a Donor to a Joiner.  The most popular method is Percona XtraBackup, so we’re talking about a donor node that must:

  1. Run a full XtraBackup that reads its entire datadir
  2. Keep up with Galera replication to it as much as possible (though laggy donors don’t send flow control)
  3. Possibly still be serving application traffic if you don’t remove Donors from rotation.

So, I’ve been interested in alternative ways to work around state transfers and I want to present one way I’ve found that may be useful to someone out there.

Percona XtraBackup and Incrementals

It is possible to use Percona XtraBackup full and incremental backups to build a datadir that might possibly IST.  First we’ll focus on the mechanics of the backups, preparing them and getting the Galera GTID, and then later discuss when it may be viable for IST.

Suppose I have a fairly recent full Xtrabackup backup and one or more incremental backups that I can apply on top of it to get VERY close to realtime on my cluster (more on that ‘VERY’ later).

# innobackupex --no-timestamp /backups/full
... sometime later ...
# innobackupex --incremental /backups/inc1 --no-timestamp --incremental-basedir /backups/full
... sometime later ...
# innobackupex --incremental /backups/inc2 --no-timestamp --incremental-basedir /backups/inc1

In my proof of concept test, I now have a full and two incrementals:

# du -shc /backups/*
909M	full
665M	inc1
812M	inc2
2.4G	total

To recover this data, I follow the normal Xtrabackup incremental apply process:

# cp -av /backups/full /backups/restore
# innobackupex --apply-log --redo-only --use-memory=1G /backups/restore
...
xtrabackup: Recovered WSREP position: 1663c027-2a29-11e5-85da-aa5ca45f600f:35694784
...
# innobackupex --apply-log --redo-only /backups/restore --incremental-dir /backups/inc1 --use-memory=1G
# innobackupex --apply-log --redo-only /backups/restore --incremental-dir /backups/inc2 --use-memory=1G
...
xtrabackup: Recovered WSREP position: 1663c027-2a29-11e5-85da-aa5ca45f600f:46469942
...
# innobackupex --apply-log /backups/restore --use-memory=1G

I can see that as I roll forward on my incrementals, I get a higher and higher GTID. Galera’s GTID is stored in the Innodb recovery information, so Xtrabackup extracts it after every batch it applies to the datadir we’re restoring.

We now have a datadir that is ready to go; we need to copy it into the datadir of our joiner node and set up a grastate.dat. Without a grastate, starting the node would force an SST no matter what.

# innobackupex --copy-back /backups/restore
# ... copy a grastate.dat from another running node ...
# cat /var/lib/mysql/grastate.dat
# GALERA saved state
version: 2.1
uuid: 1663c027-2a29-11e5-85da-aa5ca45f600f
seqno: -1
cert_index:
# chown -R mysql.mysql /var/lib/mysql/

If I start the node now, it should see the grastate.dat with the -1 seqno and run --wsrep_recover to extract the GTID from Innodb (I could have also just put that directly into my grastate.dat).

This will allow the node to startup from merged Xtrabackup incrementals with a known Galera GTID.

But will it IST?

That’s the question.  IST happens when the selected donor has all the transactions the joiner needs to get it fully caught up inside of the donor’s gcache.  There are several implications of this:

  • A gcache is mmap allocated and does not persist across restarts on the donor.  A restart essentially purges the mmap.
  • You can query the oldest GTID seqno on a donor by checking the status variable ‘wsrep_local_cached_downto’.  This variable is not available on 5.5, so you are forced to guess if you can IST or not.
  • Most PXC 5.6 versions will auto-select a donor based on IST candidacy.  Prior to that (i.e., 5.5), donor selection was not based on IST candidacy at all, meaning you had to be much more careful and do donor selection manually.
  • There’s no direct mapping from the earliest GTID in a gcache to a specific time, so knowing at a glance if a given incremental will be enough to IST is difficult.
  • It’s also difficult to know how big to make your gcache (set in MB/GB/etc.)  with respect to your backups (which are scheduled by the day/hour/etc.)

All that being said, we’re still talking about backups here.  The above method will work if and only if:

  • You do frequent incremental backups
  • You have a large gcache (hopefully more on this in a future blog post)
  • You can restore a backup faster than it takes for your gcache to overflow

The post Bypassing SST in Percona XtraDB Cluster with incremental backups appeared first on MySQL Performance Blog.

--log-bin-trust-function-creators=1 percona master-master cluster -> safe?

Latest Forum Posts - July 16, 2015 - 1:17am

one of our web applications (I-DOIT) wants to create a function:

Database error : Query error: '
SET len = CHAR_LENGTH( str );
SET c = MID( str, i, 1 );
IF c REGEXP '[[:alpha:]]' THEN
SET ret=CONCAT(ret,c);
SET i = i + 1;
You do not have the SUPER privilege and binary logging is enabled (you *might* want to use the less safe log_bin_trust_function_creators variable)

is it safe to enable the variable on all nodes? The application connects to the cluster via HAproxy (roundrobin).

===============================

port = 3306
user = mysql
default-storage-engine = InnoDB
socket = /var/lib/mysql/mysql.sock
pid-file = /var/lib/mysql/mysql.pid
collation-server = utf8_unicode_ci
character-set-server = utf8
init_connect = 'SET NAMES utf8, collation_connection=utf8_unicode_ci'

# MyISAM #
key-buffer-size = 32M
myisam-recover = FORCE,BACKUP

max-allowed-packet = 25M
max-connect-errors = 1000000
# deactivated, because of i-doit
innodb-strict-mode = 0

datadir = /var/lib/mysql/

binlog_format = ROW
log-bin = /var/lib/mysql/mysql-bin
expire-logs-days = 2
max_binlog_size = 100M
sync-binlog = 1

tmp-table-size = 32M
max-heap-table-size = 32M
query-cache-type = 0
query-cache-size = 0
max-connections = 500
thread-cache-size = 50
open-files-limit = 65535
table-definition-cache = 4096
table-open-cache = 4096

innodb-flush-method = O_DIRECT
innodb-log-files-in-group = 2
innodb-log-file-size = 256M
innodb-flush-log-at-trx-commit = 1
innodb-file-per-table = 1
innodb-buffer-pool-size = 6G
innodb_autoinc_lock_mode = 2

log-error = /var/lib/mysql/mysql-error.log
log-queries-not-using-indexes = 1
slow-query-log = 1
slow-query-log-file = /var/lib/mysql/mysql-slow.log

# Cluster Configuration #
wsrep_provider = /usr/lib/galera2/libgalera_smm.so
wsrep_provider_options = "pc.ignore_sb = yes; gcache.size=2G; gcache.page_size=1G"
wsrep_cluster_name = "XtraDB_Cluster"
wsrep_sst_method = xtrabackup-v2
wsrep_slave_threads = 16
wsrep_sst_auth = "sstuser:secret"
wsrep_cluster_address = "gcomm://......"

streamfmt = xbstream

!includedir /etc/mysql/conf.d/

cu denny