xtrabackup/innobackupex does not populate PERCONA_SCHEMA.xtrabackup_history

Latest Forum Posts - November 19, 2015 - 12:59am

I tried with versions 2.2 and 2.3 of xtrabackup and cannot get it working properly. The backup completes, and PERCONA_SCHEMA.xtrabackup_history is created (if it does not exist), but it is never populated... Maybe I'm missing some syntax? I also tried "innobackupex --history" on version 2.2.

$ cat xtrabackup_info
uuid = 194268dc-8e98-11e5-9598-080027c81577
name =
tool_name = xtrabackup
tool_command = --defaults-file=/usr/my.cnf --user=root --history=crbv --backup --target-dir=/mysqlbackup/mysqltest/xtrabkp_full
tool_version = 2.3.2
ibbackup_version = 2.3.2
server_version = 5.6.26-74.0-log
start_time = 2015-11-19 10:32:37
end_time = 2015-11-19 10:32:41
lock_time = 0
binlog_pos = (null)
innodb_from_lsn = 0
innodb_to_lsn = 8101997975

Cristi B.
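
(For anyone comparing notes: a quick way to check what, if anything, made it into the history table is a query like the sketch below. It assumes the default PERCONA_SCHEMA location and a backup user that can both read and, during the backup, insert into that table.)

$ mysql -uroot -p -e "SELECT uuid, name, tool_name, start_time, end_time FROM PERCONA_SCHEMA.xtrabackup_history ORDER BY start_time DESC LIMIT 5;"

If the table exists but stays empty, it is worth confirming the backup user has INSERT privileges on PERCONA_SCHEMA.xtrabackup_history, since xtrabackup writes the history row through its MySQL connection at the end of the backup.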

Innobackupex script ignores innodb_data_home_dir and/or innodb_log_group_home_dir

Latest Forum Posts - November 19, 2015 - 12:35am

Version 5.6.74 from Ubuntu. I got an error about a new node failing to start.

Commenting out innodb_data_home_dir and/or innodb_log_group_home_dir restores standard functionality.

Does innobackupex really ignore these parameters?

Alter table locks all tables in database

Latest Forum Posts - November 18, 2015 - 7:56am

I'm having an issue where running an ALTER TABLE or OPTIMIZE TABLE on any table in a database locks all the other tables until the operation is complete. Data isn't lost; it fills in after the operation is done, but this still concerns me and I'd like to figure out a way around it.

I only write to one node at a time, the other two are failover.

Note this is a three-node XtraDB Cluster with the following (CentOS 7) packages installed:

Code:
Percona-XtraDB-Cluster-56.x86_64 1:5.6.24-25.11.1.el7
Percona-XtraDB-Cluster-client-56.x86_64
Percona-XtraDB-Cluster-galera-3.x86_64
Percona-XtraDB-Cluster-server-56.x86_64
Percona-XtraDB-Cluster-shared-56.x86_64
percona-xtrabackup.x86_64 2.2.12-1.el7

Here is my MySQL configuration:

Code:
[mysqld]
binlog_format = ROW
datadir = /percona
default_storage_engine = InnoDB
log_bin
log_error = /var/log/mysqld.log
transaction-isolation = READ-COMMITTED
sort_buffer_size = 512K
read_rnd_buffer_size = 512K
innodb_autoinc_lock_mode = 2
innodb_buffer_pool_size = 8G
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit = 0
innodb_flush_method = O_DIRECT
innodb_log_files_in_group = 2
innodb_log_file_size = 20M
innodb_file_format = Barracuda
innodb_file_format_max = Barracuda
wsrep_cluster_address = gcomm://xxxxx,xxxxx,xxxxx
wsrep_cluster_name = xxxxx
wsrep_node_address = xxxxx
wsrep_node_name = xxxxx
wsrep_provider = /usr/lib64/galera3/libgalera_smm.so
wsrep_slave_threads = 4
wsrep_sst_auth = xxxxx:xxxxx
wsrep_sst_method = xtrabackup-v2

[mysqld_safe]
pid-file = /run/mysqld/mysql.pid
syslog

!includedir /etc/my.cnf.d

And here is an example table (every table is identical):

Code:
mysql> show table status like "al_20151118" \G
*************************** 1. row ***************************
           Name: al_20151118
         Engine: InnoDB
        Version: 10
     Row_format: Compressed
           Rows: 7487416
 Avg_row_length: 95
    Data_length: 716177408
Max_data_length: 0
   Index_length: 260849664
      Data_free: 5767168
 Auto_increment: 23423276
    Create_time: 2015-11-18 10:44:02
    Update_time: NULL
     Check_time: NULL
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options: row_format=COMPRESSED
        Comment:
1 row in set (0.00 sec)

mysql> describe al_20151118;
+-----------+----------------------+------+-----+---------+----------------+
| Field     | Type                 | Null | Key | Default | Extra          |
+-----------+----------------------+------+-----+---------+----------------+
| id        | int(11) unsigned     | NO   | PRI | NULL    | auto_increment |
| time      | datetime             | NO   | MUL | NULL    |                |
| source    | varchar(10)          | NO   |     | NULL    |                |
| hostname  | varchar(50)          | NO   | MUL | NULL    |                |
| clientip  | varchar(15)          | NO   | MUL | NULL    |                |
| method    | varchar(10)          | NO   |     | NULL    |                |
| request   | varchar(255)         | NO   |     | NULL    |                |
| response  | smallint(5) unsigned | YES  |     | 0       |                |
| size      | int(10) unsigned     | YES  |     | 0       |                |
| referrer  | varchar(255)         | YES  |     | NULL    |                |
| useragent | varchar(255)         | YES  |     | NULL    |                |
| extra1    | varchar(20)          | YES  |     | NULL    |                |
+-----------+----------------------+------+-----+---------+----------------+
12 rows in set (0.01 sec)

mysql> show create table al_20151118 \G
*************************** 1. row ***************************
       Table: al_20151118
Create Table: CREATE TABLE `al_20151118` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `time` datetime NOT NULL,
  `source` varchar(10) NOT NULL,
  `hostname` varchar(50) NOT NULL,
  `clientip` varchar(15) NOT NULL,
  `method` varchar(10) NOT NULL,
  `request` varchar(255) NOT NULL,
  `response` smallint(5) unsigned DEFAULT '0',
  `size` int(10) unsigned DEFAULT '0',
  `referrer` varchar(255) DEFAULT NULL,
  `useragent` varchar(255) DEFAULT NULL,
  `extra1` varchar(20) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `al_20151118_time_idx` (`time`),
  KEY `al_20151118_hostname_idx` (`hostname`),
  KEY `al_20151118_clientip_idx` (`clientip`)
) ENGINE=InnoDB AUTO_INCREMENT=23436428 DEFAULT CHARSET=latin1 ROW_FORMAT=COMPRESSED
1 row in set (0.00 sec)

Any advice for the locking problem, my configuration, or my table schema would be *greatly* appreciated. Right now I'm running about 200 inserts/second; reads are very rare since this is for data archival with the occasional report.
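
(A note on the behavior described above: in Percona XtraDB Cluster, DDL such as ALTER TABLE or OPTIMIZE TABLE runs under Total Order Isolation (TOI) by default, which blocks the whole cluster until the statement finishes on every node. One rolling alternative is RSU mode, where the DDL is applied locally on one node at a time. The sketch below assumes the schema change is backward compatible with in-flight writes; the column name is made up:)

# Run on each node in turn; RSU desyncs the node, applies the DDL locally,
# and does NOT replicate it, so it must be repeated on every node.
mysql -e "SET SESSION wsrep_OSU_method='RSU';
          ALTER TABLE al_20151118 ADD COLUMN extra2 VARCHAR(20);
          SET SESSION wsrep_OSU_method='TOI';"

pt-online-schema-change is another common workaround for large tables, since it does the copy work in small replicated transactions instead of one long blocking statement.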

MySQL on slave won't start until the network is connected

Latest Forum Posts - November 18, 2015 - 5:52am

Is there any way to override this? Our use case is a bit different, and the network may not exist for some time after a reboot!

Error log:
-----
2015-11-18 18:41:40 24283 [ERROR] WSREP: gcs connect failed: Connection timed out
2015-11-18 18:41:40 24283 [ERROR] WSREP: wsrep::connect(gcomm://<master_ip>) failed: 7
2015-11-18 18:41:40 24283 [ERROR] Aborting

2015-11-18 18:41:40 24283 [Note] WSREP: Service disconnected.
2015-11-18 18:41:41 24283 [Note] WSREP: Some threads may fail to exit.
2015-11-18 18:41:41 24283 [Note] Binlog end
2015-11-18 18:41:41 24283 [Note] /usr/sbin/mysqld: Shutdown complete

151118 18:41:41 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended

Once connected to the network, it starts without any problem!

[Incremental Backup] I can't restore the first incremental backup

Latest Forum Posts - November 18, 2015 - 3:03am
Good Morning everyone,

I'm working with innobackupex to generate incremental backups of my database. The problem is:
I can generate the incremental backups from a full base backup, but after preparing I can't restore the first incremental backup. I followed the manual at "https://www.percona.com/doc/percona-xtrabackup/2.3/index.html".

The commands that I've used:

Creating the base backup:

Code:
$ sudo innobackupex --user=root --password=XXXX /home/beto/xtrabackup/inc2-sense-compact

Now the BASE DIR: 2015-11-18_08-32-18

Code:
$ sudo innobackupex --user=root --password=XXXX --incremental /home/beto/xtrabackup/inc2-sense-compact --incremental-basedir=2015-11-18_08-32-18

The first incremental backup: 2015-11-18_08-43-53

Code:
$ sudo innobackupex --user=root --password=XXXX --incremental /home/beto/xtrabackup/inc2-sense-compact --incremental-basedir=2015-11-18_08-43-53

The second incremental backup: 2015-11-18_09-25-27

Prepare it:

Code:
$ sudo innobackupex --user=root --password=XXXX --apply-log --redo-only 2015-11-18_08-32-18

Up to this point all is OK, but when I apply the first incremental backup... it doesn't work!

Code:
$ sudo innobackupex --user=root --password=XXXX --apply-log --redo-only 2015-11-18_08-32-18 --incremental-dir=2015-11-18_08-43-53

151118 09:35:57 innobackupex: Starting the apply-log operation

IMPORTANT: Please check that the apply-log run completes successfully.
At the end of a successful apply-log run innobackupex prints "completed OK!".

innobackupex version 2.3.2 based on MySQL server 5.6.24 Linux (i686) (revision id: 306a2e0)
incremental backup from 3228084 is enabled.
xtrabackup: cd to /home/beto/xtrabackup/inc2-sense-compact/2015-11-18_08-32-18
xtrabackup: This target seems to be already prepared with --apply-log-only.
2015-11-18 09:35:57 b735e700 InnoDB: Operating system error number 2 in a file operation.
InnoDB: The error means the system cannot find the path specified.
xtrabackup: Warning: cannot open 2015-11-18_08-43-53/xtrabackup_logfile. will try to find.
2015-11-18 09:35:57 b735e700 InnoDB: Operating system error number 2 in a file operation.
InnoDB: The error means the system cannot find the path specified.
xtrabackup: Fatal error: cannot find 2015-11-18_08-43-53/xtrabackup_logfile.
xtrabackup: Error: xtrabackup_init_temp_log() failed.

What's the matter? Any suggestions?

Best regards,
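
(The log above gives a hint: xtrabackup first does "cd to .../2015-11-18_08-32-18" and then looks for the relative path 2015-11-18_08-43-53 from inside that directory, where it does not exist. Passing absolute paths to --incremental-dir avoids this. A sketch using the layout from the post, with --redo-only on every step except the last increment, per the incremental-backup documentation:)

BASE=/home/beto/xtrabackup/inc2-sense-compact
$ sudo innobackupex --user=root --password=XXXX --apply-log --redo-only $BASE/2015-11-18_08-32-18
$ sudo innobackupex --user=root --password=XXXX --apply-log --redo-only $BASE/2015-11-18_08-32-18 --incremental-dir=$BASE/2015-11-18_08-43-53
$ sudo innobackupex --user=root --password=XXXX --apply-log $BASE/2015-11-18_08-32-18 --incremental-dir=$BASE/2015-11-18_09-25-27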

Authentication plugin with ldap and proxy user

Latest Forum Posts - November 17, 2015 - 5:24am

I'm setting up the authentication plugin for LDAP. Authentication itself works, as I can log in with LDAP credentials, but the account is not using the proxy user, which means the grants are not working.
Server version: 5.6.27-75.0-log Percona Server (GPL), Release 75.0, Revision 8bb53b6
Created from the database admin account:

mysql> CREATE USER ''@'' IDENTIFIED WITH auth_pam AS 'mysqld, mysqldba=dbauser';
Query OK, 0 rows affected (0.00 sec)

mysql> grant all on *.* to 'dbauser'@'localhost' identified by 'xxx' with grant option;
Query OK, 0 rows affected (0.00 sec)

mysql> grant proxy on 'dbauser'@'localhost' to ''@'';
Query OK, 0 rows affected (0.00 sec)

mysql> select * from mysql.proxies_priv;
| Host | User | Proxied_host | Proxied_user | With_grant | Grantor | Timestamp |
| localhost | root | | | 1 | | 2015-11-16 16:32:41 |
| localhost | dba_admin | | | 1 | root@localhost | 0000-00-00 00:00:00 |
| | | localhost | dbauser | 0 | dba_admin@localhost | 0000-00-00 00:00:00 |

Question #1: why is ''@'' / dbauser@localhost not showing With_grant?


Logged in as the LDAP user:

mysql> show grants;
| Grants for @ |
| GRANT USAGE ON *.* TO ''@'' |
| GRANT PROXY ON 'dbauser'@'localhost' TO ''@'' |
2 rows in set (0.00 sec)

| joep@localhost | @ |

Question #2: why does it not proxy to dbauser@localhost?

cat /etc/pam.d/mysqld
auth required pam_warn.so
auth required pam_ldap.so
account required pam_ldap.so

mysql> show grants for ''@'';
| Grants for @ |
| GRANT USAGE ON *.* TO ''@'' |
| GRANT PROXY ON 'dbauser'@'localhost' TO ''@'' |
2 rows in set (0.00 sec)

mysql> show grants for 'dbauser'@'localhost';
| Grants for dbauser@localhost |
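
(A debugging sketch that may help here: MySQL exposes the effective account and the proxy mapping through session variables, so logging in as the LDAP user and running the query below shows whether the proxy fired at all. Only the user name is taken from the post:)

$ mysql -u joep -p -e "SELECT USER(), CURRENT_USER(), @@proxy_user, @@external_user;"

If proxying works, CURRENT_USER() should come back as dbauser@localhost and @@proxy_user as the anonymous account; if CURRENT_USER() is still ''@'', the PAM group mapping in the auth string ('mysqld, mysqldba=dbauser') is the first place to look.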

XtraDB 5.5: some MySQL commands are replicated but others are limited to one node

Latest Forum Posts - November 17, 2015 - 2:50am
Hello everybody

We noticed strange behavior on “Percona XtraDB Cluster” with commands (DROP, INSERT, UPDATE, DELETE, CREATE, etc.)

*******************************************************
Server version: 5.5.41-37.0-55 Percona XtraDB Cluster (GPL),
Release rel37.0,
Revision 853,
WSREP version 25.11,
Cluster size: 3 nodes
System: RHEL 6.6
*******************************************************

Some MySQL commands executed on one node are replicated, but other commands are applied only on that node, without replication.
When we check, the cluster still shows as synced in either case (command replicated or not).

Below is an example:
Cluster of 3 nodes, table: user, with a primary key.
“CREATE USER” on one node is replicated to all other nodes.
“DELETE USER” on one node is limited to that node; there is no replication to the other nodes.
“DROP USER” on one node is replicated to all other nodes.

1) Is it normal that some commands are replicated and others are limited to one node?

2) Is it possible to have a list of MySQL commands that are replicated in the cluster and commands that are limited to one node?

Could you please help us? Thanks in advance for your support.
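
(A likely explanation of the pattern, assuming “DELETE USER” above means a direct DELETE FROM mysql.user: Galera replicates DDL statements such as CREATE USER, DROP USER, and GRANT through total order isolation, but it only replicates row changes for InnoDB tables. mysql.user is a MyISAM table, so DML against it stays on the local node. A sketch of the difference, with a made-up account name:)

mysql -e "DROP USER 'someuser'@'%';"                                          # DDL: replicated to all nodes
mysql -e "DELETE FROM mysql.user WHERE User='someuser'; FLUSH PRIVILEGES;"    # MyISAM DML: local node only

Sticking to the account-management statements (CREATE/DROP USER, GRANT/REVOKE) keeps the nodes consistent.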

Insert into Select, table locking

Latest Forum Posts - November 16, 2015 - 11:48pm

My organization has a master/slave setup; the replication mode is MIXED. We have read queries/stored procedures running on the slave that contain INSERT INTO ... SELECT. Will all the source tables get locked on the slave? If yes, how can I avoid it? The slave is a read-only slave, not a relay slave.

SSL on Percona Mysql

Latest Forum Posts - November 16, 2015 - 10:16pm
Hello Team,

I'm having an issue connecting to MySQL over SSL and am getting the error below.
Can someone assist?

[root@crm-app mysql-ssl]# mysql --ssl-ca=/etc/mysql-ssl/ca-cert.pem --ssl-cert=/etc/mysql-ssl/client-cert.pem --ssl-key=/etc/mysql-ssl/client-key.pem -hEC-VMA.modeldns.com.au -uecuser_appusr -p
Enter password:
ERROR 2026 (HY000): SSL connection error: error:00000001:lib(0):func(0):reason(1)
[root@crm-app mysql-ssl]#

ec-vma.modeldns.com.au is the master and crm-app is a client.

[root@crm-app ~]# cat /etc/my.cnf
You have new mail in /var/spool/mail/root
[root@crm-app ~]#
[root@crm-app ~]#
[root@crm-app ~]#
[root@crm-app ~]# cd /etc/mysql-ssl/
[root@crm-app mysql-ssl]# ll -sh
total 16K
4.0K -rw-r--r-- 1 root root 1.2K Nov 4 23:33 ca-cert.pem
4.0K -rw-r--r-- 1 root root 1.3K Nov 4 23:23 ca-cert.pem.s
4.0K -rw-r--r-- 1 root root 1.2K Nov 4 23:23 client-cert.pem
4.0K -rw-r--r-- 1 root root 1.7K Nov 4 23:24 client-key.pem
0 -rw-r--r-- 1 root root 0 Nov 4 23:20 server-cert.pem
[root@crm-app mysql-ssl]#


Kunal Modi
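
(Two checks worth running before digging deeper, sketched below: whether the client certificate actually verifies against the CA, and whether the key and certificate are a matching pair. Note also that server-cert.pem in the listing above is zero bytes, which would matter if this same directory was copied to the server:)

[root@crm-app mysql-ssl]# openssl verify -CAfile /etc/mysql-ssl/ca-cert.pem /etc/mysql-ssl/client-cert.pem
[root@crm-app mysql-ssl]# openssl x509 -noout -modulus -in /etc/mysql-ssl/client-cert.pem | openssl md5
[root@crm-app mysql-ssl]# openssl rsa -noout -modulus -in /etc/mysql-ssl/client-key.pem | openssl md5

The two modulus digests should be identical; a mismatched pair or a failed verify can produce exactly this kind of generic SSL connection error.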

ss_get_mysql_stats.php is not working with MySQL 5.6

Latest Forum Posts - November 16, 2015 - 2:33pm

We recently moved from MySQL 5.5 to MySQL 5.6. We monitor MySQL using Zenoss Core 3.2, which uses the script ss_get_mysql_stats.php.

After the move to MySQL 5.6, Zenoss stopped showing the InnoDB-related graphs.

1) We went through the report at "https://bugs.launchpad.net/percona-m...s/+bug/1124292" and made the changes accordingly, but we are not sure what else it might break.

2) We went through the doc for ZenPacks.zenoss.MySqlMonitor-3.0.5 (http://wiki.zenoss.org/ZenPack:MySQL...Monitor_(Core)), but it seems it is not compatible with Zenoss Core 3.2 (rather, it is for Zenoss 4.x, which we don't have), and it is in Python.

3) We went through the latest Percona monitoring plugins, percona-monitoring-plugins-1.1.5, which use a different notation than we were using earlier (a0 became gg):

'Key_read_requests' => 'a0',
'Key_read_requests' => 'gg',

4) We normally use the following to monitor MySQL:

[agarwala@zenoss000las ~]$ ./mysql_stats.php --host semaboutdbm001iad.io.askjeeves.info --items a8,a9,ae,af,ag,al,am,an,ao,ap,aq,ar,as,at,ax,ay,az,bd,be,bf,bi,bl,bn,bo,bp,bq,br,bs,bv,bw,bx,by,bz,c2,c3,c4,c5,c6,c7,c8,c9,ca,cb,cc,cd,ce,cf,cg,ch,ci,cj,ck,cl,cr,cs,ct,cu,d1,d2,d3,d4,d5,d6,d7,d8,d9,da,db,dc,dd,de,df,dg,dh,dy,xy
Success | a8=-1 a9=-1 ae=-1 af=-1 ag=-1 al=-1 am=-1 an=-1 ao=-1 ap=-1 aq=-1 ar=-1 as=-1 at=-1 ax=-1 ay=-1 az=-1 bd=5751 be=953 bf=81 bi=16 bl=1000 bn=121208 bo=0 bp=0 bq=0 br=0 bs=0 bv=0 bw=0 bx=0 by=8878906 bz=0 c2=444373526 c3=391525167 c4=22585150 c5=8879477 c6=1157568 c7=0 c8=5219 c9=58 ca=388 cb=92 cc=0 cd=16156 ce=464 cf=3135067 cg=0 ch=5245031 ci=633 cj=292943 ck=5488671 cl=3909 cr=8388608 cs=-1 ct=-1 cu=-1 d1=-1 d2=-1 d3=-1 d4=-1 d5=-1 d6=1 d7=-1 d8=-1 d9=-1 da=-1 db=-1 dc=-1 dd=-1 de=-1 df=-1 dg=12 dh=3 dy=-1 xy=2
[agarwala@zenoss000las ~]$
[agarwala@zenoss000las ~]$
[agarwala@zenoss000las ~]$ ./mysql_stats.php --host semaqdbm001iad.io.askjeeves.info --items a8,a9,ae,af,ag,al,am,an,ao,ap,aq,ar,as,at,ax,ay,az,bd,be,bf,bi,bl,bn,bo,bp,bq,br,bs,bv,bw,bx,by,bz,c2,c3,c4,c5,c6,c7,c8,c9,ca,cb,cc,cd,ce,cf,cg,ch,ci,cj,ck,cl,cr,cs,ct,cu,d1,d2,d3,d4,d5,d6,d7,d8,d9,da,db,dc,dd,de,df,dg,dh,dy,xy
Success | a8=-1 a9=-1 ae=-1 af=-1 ag=-1 al=-1 am=-1 an=-1 ao=-1 ap=-1 aq=-1 ar=-1 as=-1 at=-1 ax=-1 ay=-1 az=-1 bd=3595 be=5 bf=30 bi=13 bl=512 bn=65771 bo=0 bp=0 bq=0 br=0 bs=0 bv=0 bw=0 bx=0 by=1205563 bz=0 c2=58414899 c3=42054440 c4=6605561 c5=1209488 c6=6601 c7=0 c8=979 c9=44 ca=585 cb=0 cc=0 cd=11168 ce=32 cf=318209 cg=0 ch=2719756 ci=148 cj=19 ck=1196302 cl=6732 cr=8388608 cs=-1 ct=-1 cu=-1 d1=-1 d2=-1 d3=-1 d4=-1 d5=-1 d6=1 d7=-1 d8=-1 d9=-1 da=-1 db=-1 dc=-1 dd=-1 de=-1 df=-1 dg=9 dh=3 dy=-1 xy=3

Can you please help us deploy the right ss_get_mysql_stats.php for MySQL 5.6 that works with Zenoss 3.2 and shows all the graphs without any issues?

Procedure to restore a single database from a full innobackupex backup

Latest Forum Posts - November 16, 2015 - 1:35pm
After working with innobackupex I've become familiar with what it takes to do backups and restores. But I have a question about restoring a single database from a full backup. According to the docs, here is the procedure:

1. Apply the log to the backup with the --export option to create the .exp (for Percona) and .cfg (for Community) InnoDB tables.
2. Drop the target database and recreate its DDL only using the DDL that was in existence when the backup was taken (we create this DDL script programmatically during the backup process).
3. Discard the tablespaces for the InnoDB tables in the target database.
4. Copy the .ibd and .exp files from the backup to the target folder (as stated here: https://www.percona.com/doc/percona-...ables_ibk.html)
5. Import the tablespaces for the InnoDB tables.

My question is about #4 above. First of all, if we only copied the .ibd and .exp files, that would miss all the MyISAM tables, if any existed. Second, why not just copy EVERYTHING from the backup folder (for that database) to the target database folder instead? That would make the restore a lot simpler. I've tried this and it seems to work fine. Here are the file types in the backup folder:

.exp (for restoring to Percona XtraDB Servers)
.cfg (for restoring to Community Servers)
.ibd (InnoDB table)
db.opt (the database options file)
.frm (table format file)
.MYD (MyISAM table)
.MYI (MyISAM index)
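
(For the InnoDB part of steps #3 to #5, the per-table flow from the docs boils down to something like the sketch below, shown for one hypothetical table mydb.t1 on a Percona Server target; the .exp file must sit next to the .ibd when the server imports it:)

mysql -e "ALTER TABLE mydb.t1 DISCARD TABLESPACE;"
cp /backup/mydb/t1.ibd /backup/mydb/t1.exp /var/lib/mysql/mydb/
chown mysql:mysql /var/lib/mysql/mydb/t1.ibd /var/lib/mysql/mydb/t1.exp
mysql -e "ALTER TABLE mydb.t1 IMPORT TABLESPACE;"

Copying everything, as suggested above, covers the MyISAM side because .frm/.MYD/.MYI files are self-contained, but the InnoDB .ibd files still need the discard/import step when restoring into a running server.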

Amazon Aurora – Looking Deeper

Latest MySQL Performance Blog posts - November 16, 2015 - 1:26pm

Recently my colleague (at Percona) Yves Trudeau and my colleague (in the industry) Marco Tusa published their materials on Amazon Aurora. Indeed, Amazon Aurora is a hot topic these days, and we have a stream of customer inquiries regarding this technology. I've decided to form my own opinion, and nothing is better than personal, hands-on experience, which I am going to share.

The materials I will refer to:

Presentation [1] gives a good initial overview. However, there is one statement the presenter made that I'd like to highlight: “Amazon Aurora is a MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases.”

This does not claim that Amazon Aurora is an open source database, but the goal is certainly to make Aurora look comparable to open source.

I would like to make clear that Amazon Aurora is not open source. Amazon Aurora is based on the Open Source MySQL Edition, but is not open source by itself; it is a proprietary, closed-source database.

By choosing Amazon Aurora you are fully dependent on Amazon for bug fixes and upgrades. In this regard you are locked in to Amazon RDS, though it is not the same lock-in as with a commercial database like Oracle or MS-SQL: you should still be able to migrate your application to a community MySQL relatively easily.

Amazon uses a GPLv2 loophole that allows it not to publish source code in a cloud model. How is Amazon Aurora different from Percona Server or MariaDB, both of which are required to publish their sources? It comes down to the distribution model. GPLv2 places a requirement on the traditional distribution model: if you download software or receive a hard copy of software binaries, you also have the right to request the corresponding source code. This is not the case with cloud computing: there you do not download anything, you just launch an instance, and GPLv2 places no requirements on that case, so Amazon is in compliance with GPLv2.

Bug fixes

Speaking of bug fixes, Amazon Aurora identifies itself as “version: 5.6.10, version_comment: MySQL Community Server (GPL)”. MySQL 5.6.10 was released on 2013-02-05; that was 2.5 years ago. It is not clear whether Aurora includes 2.5 years' worth of bug fixes and just did not update the version, or whether these really are binaries based on a 2.5-year-old code base.

For example, let’s take the MySQL bug http://bugs.mysql.com/bug.php?id=70577, which was fixed in 5.6.15. I do not see the bug fix present in Amazon Aurora 5.6.10.

Another bug, http://bugs.mysql.com/bug.php?id=68041, was fixed in MySQL 5.6.13 but is still present in Amazon Aurora.

What about InnoDB’s code base? The bug http://bugs.mysql.com/bug.php?id=72548, with InnoDB fulltext search, was fixed in MySQL 5.6.20 (released more than a year ago, on 2014-07-31) and is still present in Amazon Aurora.
This leaves me with the impression that the general Aurora codebase has not been updated recently.

Although it seems Amazon has changed the innodb_version: right now it is 1.2.10; a couple of weeks ago it was 1.2.8.

My question here: does Amazon have the ability to keep up with MySQL bug fixes and regularly update its software? So far it does not seem so.

Amazon Aurora architecture:

My understanding of Amazon Aurora in simplified form is the following:

That is, all Aurora instances share the same storage, which makes it very easy to start new “Reader” instances over the same data.

Communication between the Writer (only one Writer is allowed) and the Readers is done by transferring records similar to InnoDB redo log records, and this is really what limits how many Writers you can have (one): I do not believe it is possible to implement a proper transaction coordinator between two Writers based on redo records.

A similar mechanism is used to update the data on the shared storage: Aurora just applies redo log records to the data.

So, updating data this way, Aurora is able to:

  • Avoid data writes to storage
  • Avoid binary logs
  • Avoid InnoDB transactional logs
  • Disable doublewrites
  • Disable InnoDB checksums

Aurora makes claims of significant performance improvements, but we need to keep in mind that EACH WRITE goes directly to storage and has to be acknowledged by 4 out of 6 copies (synchronous writes). The Aurora Writer works in a kind of “write-through” mode; this is needed, as I understand it, to make the Readers see changes immediately. I expect this also comes with a performance penalty, so whether the performance gain is bigger than the penalty will depend on the workload.

Now, I should give credit to the Amazon engineering team for a proper implementation of shipping and applying transactional redo logs. It must have required a lot of work to change the InnoDB engine, and as we can see, it probably took a year (from announcement to general availability) for Amazon to stabilize Aurora. Too bad Amazon keeps its changes closed, even when the main codebase is an open source database.

Work with transactional isolation

Distributed computing is especially complicated from a transaction-processing point of view (see, for example, this story), so I also wanted to check how Amazon Aurora handles transaction isolation levels on different instances.
Fortunately for Aurora, there is an easy way out: allowing only read statements on the Readers. But we can still check isolation in some cases.

It seems the only TRANSACTION ISOLATION level supported is REPEATABLE-READ. When I try to change to SERIALIZABLE or READ-COMMITTED, Aurora accepts it without an error but silently ignores it; tx_isolation stays REPEATABLE-READ.

mysql> SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT @@GLOBAL.tx_isolation, @@tx_isolation;
+-----------------------+-----------------+
| @@GLOBAL.tx_isolation | @@tx_isolation  |
+-----------------------+-----------------+
| REPEATABLE-READ       | REPEATABLE-READ |
+-----------------------+-----------------+
1 row in set (0.00 sec)

mysql> SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT @@GLOBAL.tx_isolation, @@tx_isolation;
+-----------------------+-----------------+
| @@GLOBAL.tx_isolation | @@tx_isolation  |
+-----------------------+-----------------+
| REPEATABLE-READ       | REPEATABLE-READ |
+-----------------------+-----------------+
1 row in set (0.00 sec)

Actually, here I encountered another worrisome behavior: silent changes in Aurora, without notification.

I am pretty sure that when I tried to use the SERIALIZABLE level a couple of weeks ago, it failed with an error: “SERIALIZABLE is not supported”. Now it is just silently ignored, so I assume Amazon continues to make changes. I guess this is one of the changes from innodb_version 1.2.8 to 1.2.10. Is there a full changelog we can see?

The lack of the SERIALIZABLE level is not a big problem, in my opinion; after all, Percona XtraDB Cluster does not support it either.

But not being able to use READ-COMMITTED might be an issue for some applications; you need to check whether your application works properly with READ-COMMITTED silently replaced by REPEATABLE-READ.

I found another unusual behavior between the Reader and Writer when I tried to execute an ALTER TABLE statement on the Writer (this is another hard area for clusters: keeping the data dictionary synchronized):

  • Execute a long SELECT col1 FROM tab1 on the Reader.
  • While the SELECT is running, execute ALTER TABLE tab1 ADD COLUMN col2 on the Writer.

Effect: the SELECT on the Reader fails immediately with the error: “ERROR 1866 (HY000): Query execution was interrupted on a read-only database because of a metadata change on the master”

So there again, I think Aurora does its best given the architectural limitations and one-directional communication: it just chooses to kill read statements on the Readers.

Query cache

I should highlight the improvements to the query cache as a good enhancement. The query cache is enabled by default, and Amazon fixed the major issue with the MySQL query cache: update queries may stall for a long time waiting on the invalidation of query cache entries. This problem does not exist in Amazon Aurora. Amazon also adjusted the query cache to work properly on a Writer-Reader pair: the query cache on the Reader is invalidated when data is changed on the Writer.

Config review

Let’s make a quick review of the MySQL configuration that Aurora proposes.

These are variables that are set by default and that you can't change:

| innodb_change_buffering        | none |
| innodb_checksum_algorithm      | none |
| innodb_checksums               | OFF  |
| innodb_doublewrite             | OFF  |
| innodb_flush_log_at_trx_commit | 1    |

The disabled doublewrite and checksums are not a surprise; I mentioned this above.
Also, innodb_flush_log_at_trx_commit is pinned to 1; I think this is also related to how Aurora deals with InnoDB redo log records.

Disabled innodb_change_buffering is also interesting, and it can’t be good for performance. I guess Amazon had to disable any buffering of updates so changes are immediately written to the shared storage.


Traditionally, RDS does not provide good access to system metrics like vmstat and iostat, which makes troubleshooting quite challenging. The MySQL slow log is also not available, which leaves us with only PERFORMANCE_SCHEMA (OFF by default).

One good thing is that we can access SHOW GLOBAL STATUS, which can also be used by monitoring software.
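
For example, pulling a set of counters for a monitoring system can be as simple as this (the endpoint name here is made up):

$ mysql -h mycluster.cluster-xxxx.us-east-1.rds.amazonaws.com -u admin -p -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages%';"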

Interestingly, after a heavy write load, these counters always stay at 0:

| Innodb_buffer_pool_pages_dirty   | 0 |
| Innodb_buffer_pool_bytes_dirty   | 0 |
| Innodb_buffer_pool_pages_flushed | 0 |
| Innodb_pages_written             | 0 |

I think this supports my guess about the architecture: Aurora does not keep changed pages in memory and just writes everything directly to storage.

Final thoughts

I will follow up this post with my benchmarks, as I expect the proposed architecture comes with serious performance penalties, although removing binary logs and eliminating doublewrites should show a positive improvement.

In general, I think Amazon Aurora is a quite advanced proprietary version of MySQL. It is not revolutionary, however, and hardly the “reimagined relational database” Amazon presents it as. This technology does not address the problem of scaling writes or sharding, and it does not handle cross-node transactions.

As for other shortcomings, I see that there is no public bug database and no detailed user documentation, and Aurora's code is based on an old MySQL code base.

The post Amazon Aurora – Looking Deeper appeared first on MySQL Performance Blog.

Deadlock logging not working

Latest Forum Posts - November 16, 2015 - 10:46am

Hi, I'm getting deadlocks, I guess due to simultaneous writes. This is fine; I understand this is part of running synchronous clustering. But try as I might, I cannot get PXC to log the deadlocks as it should, so that I can start to analyse and resolve them. I've added:
innodb_print_all_deadlocks = on
also tried
innodb_print_all_deadlocks = 1
to /etc/my.cnf

But if I try and verify, I get:
mysql -e 'show global status' |grep deadlock
Innodb_deadlocks 0
i.e. no innodb_print_all_deadlocks variable.

Waddup with that? Is this setting not supported in PXC?
I'm running Percona-XtraDB-Cluster-server-56-5.6.26-25.12.1.el7.x86_64
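
(One detail that may explain the confusion: innodb_print_all_deadlocks is a server variable, not a status counter, so it will never appear in SHOW GLOBAL STATUS; the deadlock reports themselves go to the error log. A sketch of checking and enabling it at runtime, with a made-up error log path:)

mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_print_all_deadlocks';"
mysql -e "SET GLOBAL innodb_print_all_deadlocks = ON;"   # dynamic; no restart needed
grep -i deadlock /var/log/mysqld.log                     # the reports land in the error log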

Xtrabackup 2.3.x Creating Log Files With Incorrect Size

Latest Forum Posts - November 16, 2015 - 9:21am

I've been using innobackupex weekly for years now without any issues on a fairly large database (1.6 TB).

With recent versions, my backup fails to create a launchable database after the --apply-log step. Yes, the command definitely reaches the "completed OK!" stage. I do not see anything unusual in the output.

The backup-my.cnf file has the correct parameters:

Code:
# This MySQL options file was generated by innobackupex.
# The MySQL server
[mysqld]
innodb_checksum_algorithm=innodb
innodb_log_checksum_algorithm=innodb
innodb_data_file_path=ibdata1:1024M:autoextend
innodb_log_files_in_group=2
innodb_log_file_size=268435456
innodb_fast_checksum=false
innodb_page_size=16384
innodb_log_block_size=512
innodb_undo_directory=.
innodb_undo_tablespaces=0

However, after the process completes, I'm left with:

Code:
-rw-r--r-- 1 mysql mysql 50331648 Nov 15 16:24 ib_logfile0
-rw-r--r-- 1 mysql mysql 50331648 Nov 15 16:24 ib_logfile1

At this point, MySQL fails to start:

Code:
InnoDB: Error: log file ./ib_logfile0 is of different size 0 50331648 bytes
InnoDB: than specified in the .cnf file 0 268435456 bytes!
151115 16:24:54 [ERROR] Plugin 'InnoDB' init function returned error.
151115 16:24:54 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
151115 16:24:54 [ERROR] Unknown/unsupported storage engine: InnoDB
151115 16:24:54 [ERROR] Aborting

Deleting the files allows the database to start, albeit with some potentially serious warnings:

Code:
151116 11:13:05 InnoDB: Log file ./ib_logfile0 did not exist: new to be created
InnoDB: Setting log file ./ib_logfile0 size to 256 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200
151116 11:13:06 InnoDB: Log file ./ib_logfile1 did not exist: new to be created
InnoDB: Setting log file ./ib_logfile1 size to 256 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200
151116 11:13:08 InnoDB: highest supported file format is Barracuda.
InnoDB: The log sequence number in ibdata files does not match
InnoDB: the log sequence number in the ib_logfiles!
151116 11:13:08 InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...

My analysis:

It occurs to me that 50331648 ÷ 1024 ÷ 1024 = 48, so the log file size is getting a bad value somewhere along the way (48M happens to be the server's built-in default for innodb_log_file_size in 5.6). This value does not show up anywhere in my.cnf. But it is worth noting that the 256M log file size is specified under a mysqld_multi configuration, in the [mysqld1] section. Perhaps it is not getting parsed somewhere.

Thanks for looking.
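
(One thing worth ruling out, sketched below: which configuration the prepare step actually reads. innobackupex --apply-log is supposed to read backup-my.cnf from the backup directory itself, and since 50331648 bytes is exactly the built-in 48M default, the prepare step appears to be falling back to defaults rather than honoring the 256M value. Re-running the prepare with the config made explicit is one way to test that theory; the path is made up:)

$ innobackupex --defaults-file=/path/to/backup/backup-my.cnf --apply-log /path/to/backup

If that still produces 48M files, it would be worth searching the Percona bug tracker for the exact 2.3.x version in use, since prepare generating default-size log files has been reported as a bug before. For the mysqld_multi angle, innobackupex also supports --defaults-group (e.g. --defaults-group=mysqld1) so the backup reads the right section in the first place.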

JOINER node not syncing

Latest Forum Posts - November 16, 2015 - 3:40am

I work with a cluster of 3 nodes (Percona XtraDB Cluster 5.6). I have a problem with adding more nodes to the cluster. The error occurs when innobackupex is executed.
When I execute the command on the donor directly with innobackupex --user=sstuser --password=sstuser /tmp/, it works fine, but when starting mysql on a JOINER node, innobackup.backup.log on the donor shows "Failed to connect to MySQL server: Access denied for user 'sstuser'@'localhost' (using password: YES)".

Has anybody else had this problem?
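
(The usual cause of this one is a mismatch between wsrep_sst_auth and the account that actually exists on the donor; the SST script connects as that user on the donor, over localhost. A sketch of what has to line up, with credentials per the post and grants per the PXC installation guide:)

# /etc/my.cnf on every node:
#   wsrep_sst_auth = sstuser:sstuser
mysql -e "GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost' IDENTIFIED BY 'sstuser';"
mysql -e "FLUSH PRIVILEGES;"

Since the manual innobackupex run works, comparing the password in wsrep_sst_auth character for character against the one used manually (watch for quoting in my.cnf) is a good first step.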


TokuMX process dies without warning

Latest Forum Posts - November 15, 2015 - 4:27pm
Hi all.

We run a 3-member replica set of TokuMX 2.0.2 servers, each with 60 GB of RAM (r3.2xlarge instances).

Ever since upgrading to version 2.0.2, we get this random issue where the mongod process dies.
The last messages in tokumx.log are normal inserts/updates, etc.

I see this in dmesg :

init: tokumx main process (4312) killed by SEGV signal

Please advise

pt-online-schema-change and innodb_stats_persistent on MySQL 5.6

Latest MySQL Performance Blog posts - November 13, 2015 - 8:46am

Recently we released an updated version of Percona Toolkit with an important update for pt-online-schema-change if you use MySQL 5.6. A new parameter has been added: --analyze-before-swap. What does it do? Let me explain in more detail.

A bug not a bug

A customer contacted us because pt-online-schema-change caused hundreds of queries to get stuck after the ALTER was done. All those queries were trying to read from the altered table, but for some reason they never finished. Of course, this caused downtime. The reason behind it is this “bug not a bug”:


In summary, if you are running 5.6 with persistent stats enabled (which is the default), the optimizer could in some cases choose a bad execution plan because it does not have correct statistics to make a good decision. Even a simple query with a WHERE condition looking up a value on the PK could switch to a full table scan, because the optimizer has no idea there is a PK.

There are two ways to force the statistics calculation:

  • Wait until the background thread recalculates the statistics of the new table. This could take longer than expected if the load on the server is really high.
  • Run ANALYZE TABLE.

So, --analyze-before-swap does exactly that: it runs ANALYZE TABLE on the new table before the table swap is done. That means the ANALYZE on the new table (before the rename) does NOT affect queries against the table we are altering, but it could affect the triggers' operations on the new table.

ANALYZE TABLE is not the best solution either. Check out this blog post from Peter: https://www.percona.com/blog/2008/09/02/beware-of-running-analyze-in-production/

So, in case you want to disable this feature, you need to add --no-analyze-before-swap. If pt-online-schema-change finds that the server version is 5.6 and persistent stats are enabled, the option is enabled by default.
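
A usage sketch (the database, table, and alter are hypothetical; on a 5.6 server with persistent stats the option shown is already the default, so spelling it out is purely illustrative):

$ pt-online-schema-change --alter "ADD COLUMN note VARCHAR(64)" --analyze-before-swap --execute D=mydb,t=mytable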

The post pt-online-schema-change and innodb_stats_persistent on MySQL 5.6 appeared first on MySQL Performance Blog.

Cacti Monitoring Plugins on CentOS/RedHat 7 issues with memory graph.

Latest Forum Posts - November 13, 2015 - 4:22am

There is an issue with the Cacti template on CentOS/RedHat 7 systems.

Problem is "free" binary version.

The Percona Cacti template uses the command "free -ob" to retrieve the memory state and parse it.

The /usr/bin/free binary comes from the procps package, and there is a version difference between CentOS 6 and CentOS 7:

CentOS 6 comes with version 3.2.x
CentOS 7 comes with procps-ng version 3.3.x

and the CentOS 7 free does not accept the -o flag.

So on these systems the memory graph is always empty, and all values returned from the script are -1.
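
(A quick way to confirm this on an affected host, and the obvious fallback, sketched below; the fix in the monitoring script amounts to dropping the -o flag when the procps-ng free is detected:)

$ free -V      # shows procps-ng 3.3.x on CentOS 7
$ free -ob     # rejected by procps-ng free
$ free -b      # accepted by both versions; the output layout differs, so the parser may need adjusting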

Xtrabackup restore from Swift fails with: wrong chunk magic at offset 0x0

Latest Forum Posts - November 12, 2015 - 10:03am

I am having an issue trying to retrieve a backup from our Swift cluster; I receive the error: xb_stream_read_chunk(): wrong chunk magic at offset 0x0.

Here is the information for our setup:
Xtrabackup version -> xtrabackup version 2.3.2 based on MySQL server 5.6.24 Linux (x86_64) (revision id: 306a2e0)
MariaDB Galera Cluster Version -> Server version: 10.0.21-MariaDB-wsrep MariaDB Server, wsrep_25.10.r4144
Swift Version -> swift 2.4.0

Here is the command I run for backups (I get the "completed OK!" message at the end of the backup; see the snippet below):
innobackupex --user=$USER --password=$PASSWORD \
--socket=/var/lib/mysql/mysql.sock --stream=xbstream \
--extra-lsndir=/tmp/swift-lsn /tmp/swift-lsn \
| xbcloud put --storage=Swift --swift-container=$SWIFT_BACKUP_CONTAINER \
--swift-user=$SWIFT_BACKUP_USER \
--swift-auth-url=$SWIFT_BACKUP_AUTH_ADDRESS \
--swift-key=$SWIFT_BACKUP_KEY \
--parallel=4 full_backup

Here is my log saying my backup completed successfully:
151111 21:35:27 Executing UNLOCK TABLES
151111 21:35:27 All tables unlocked
151111 21:35:27 Backup created in directory '/tmp/swift-lsn/2015-11-11_21-35-21'
151111 21:35:27 [00] Streaming backup-my.cnf
151111 21:35:27 [00] ...done
151111 21:35:27 [00] Streaming xtrabackup_info
151111 21:35:27 [00] ...done
xtrabackup: Transaction log of lsn (1620087) to (1620087) was copied.
151111 21:35:27 completed OK!
acked chunk a3e9a9389924cb9e6229ad75e92618e7
a3e9a9389924cb9e6229ad75e92618e7 is done

Here is the command I run for the restore (this is where I receive the error):

xbcloud get --storage=Swift --swift-container=$SWIFT_BACKUP_CONTAINER \
--swift-user=$SWIFT_BACKUP_USER --swift-auth-url=$SWIFT_BACKUP_AUTH_ADDRESS \
--swift-key=$SWIFT_BACKUP_KEY full_backup | xbstream -xv -C /tmp/

Has anyone run across this issue?
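
(One way to narrow this down, sketched below, is to separate the download from the extraction, so you can see whether the stream coming back from Swift is a valid xbstream archive at all; a healthy stream should begin with the XBSTCK01 chunk magic:)

xbcloud get --storage=Swift --swift-container=$SWIFT_BACKUP_CONTAINER \
  --swift-user=$SWIFT_BACKUP_USER --swift-auth-url=$SWIFT_BACKUP_AUTH_ADDRESS \
  --swift-key=$SWIFT_BACKUP_KEY full_backup > /tmp/full_backup.xbstream

head -c 8 /tmp/full_backup.xbstream              # expect the literal bytes XBSTCK01
xbstream -xv -C /tmp/restore < /tmp/full_backup.xbstream

If the first eight bytes are not the chunk magic, the problem is on the download/storage side (an error page or partial object) rather than in xbstream itself.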