

Feed aggregator

XtraDB Cluster behind NAT

Latest Forum Posts - February 20, 2015 - 1:55am
Hi,

I want to start an XtraDB Cluster node behind NAT with changed ports.

For example, I have a HOST with IP 5.5.5.5 and an XtraDB Cluster node behind NAT with IP 10.10.10.10 and these port forwardings:

5.5.5.5:31082 -> 10.10.10.10:3306
5.5.5.5:31083 -> 10.10.10.10:4444
5.5.5.5:31084 -> 10.10.10.10:4567
5.5.5.5:31085 -> 10.10.10.10:4568

What must I add to my my.cnf config if I want to connect to the existing cluster cluster.com (the cluster can connect to 10.10.10.10 without port forwarding)? Is there anything else besides wsrep_sst_receive_address and ist.recv_addr?

[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0

[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
innodb_log_file_size=48M
binlog_format=ROW
wsrep_cluster_address=gcomm://cluster.com
wsrep_sst_auth=sst:xxx
wsrep_provider=/usr/lib/galera3/libgalera_smm.so
# default 4444
wsrep_sst_receive_address = 5.5.5.5:31083
# ist.recv_addr - default 4568 - This variable specifies the address on which the node listens for Incremental State Transfer (IST).
wsrep_provider_options = "ist.recv_addr=tcp://5.5.5.5:31085;"
bind-address = *

How to setup a PXC cluster with GTIDs (and have async slaves replicating from it!)

Latest MySQL Performance Blog posts - February 20, 2015 - 12:00am

This past week was marked by a series of personal findings related to the use of Global Transaction IDs (GTIDs) on Galera-based clusters such as Percona XtraDB Cluster (PXC). The main one being the fact that transactions touching MyISAM tables (and FLUSH PRIVILEGES!) issued on a given node of the cluster are recorded in a GTID set bearing the node’s server_uuid as “source id” and added to the binary log (if the node has binlog enabled), thus being replicated to any async replicas connected to it. However, they won’t be replicated across the cluster (all of this is by design, as long as wsrep_replicate_myisam is disabled, which it is by default).

My colleague Stéphane covered this story in one of his recent blog posts titled, “Percona XtraDB Cluster 5.6: a tale of 2 GTIDs,” explaining how those local (in reference to the node) transactions are saved in a different GTID set to the cluster’s main one and the impact this may cause when re-pointing an async slave to replicate from a different node.

GTIDs are a feature introduced in MySQL 5.6 that made replication management much easier, and considering there is a series of advantages in having an async replica attached to a PXC cluster, why hasn’t this popped up earlier for either of us? I guess there aren’t that many people using GTIDs with Galera-based clusters around yet, so here’s a post to show you how to do it.

Initializing a PXC cluster configured with GTIDs

My testing environment for a 3-node cluster is composed of node1 (192.168.70.2), node2 (.3) and node3 (.4). All of them have the same PXC binaries installed:

$ rpm -qa |grep -i percona-xtradb-cluster
Percona-XtraDB-Cluster-56-5.6.21-25.8.938.el6.x86_64
Percona-XtraDB-Cluster-galera-3-3.8-1.3390.rhel6.x86_64
Percona-XtraDB-Cluster-server-56-5.6.21-25.8.938.el6.x86_64
Percona-XtraDB-Cluster-client-56-5.6.21-25.8.938.el6.x86_64
Percona-XtraDB-Cluster-shared-56-5.6.21-25.8.938.el6.x86_64

and are configured with an almost identical /etc/my.cnf (apart from wsrep_node_address and the prompt line):

[mysql]
prompt='mysql {node1} > '

[mysqld]
datadir = /var/lib/mysql
log_warnings=2
server_id=1
log_bin=percona-bin
log_slave_updates
binlog_format = ROW
enforce_gtid_consistency=1
gtid_mode=on
wsrep_cluster_name = my-three-node-cluster
wsrep_cluster_address = gcomm://192.168.70.2,192.168.70.3,192.168.70.4
wsrep_node_address = 192.168.70.2
wsrep_provider = /usr/lib64/libgalera_smm.so
wsrep_sst_method = xtrabackup-v2
wsrep_sst_auth = sst:secret
innodb_locks_unsafe_for_binlog = 1
innodb_autoinc_lock_mode = 2

server_id, log_bin, log_slave_updates and binlog_format are not needed for the cluster to operate, but they are required to set up regular replication later on, so I’ve added them to leave the nodes ready to operate as masters.

We start with an empty, non-existent datadir on all nodes, so I’ll use the mysql_install_db script to create a base datadir with all that is needed for MySQL to work on node1, which will be the reference node of the cluster:

[node1]$ mysql_install_db --user=mysql

We’re now ready to bootstrap the cluster from this reference node:

[node1]$ service mysql bootstrap-pxc

With that, we have an operational reference node:

mysql [node1] > select @@server_uuid;
+--------------------------------------+
| @@server_uuid                        |
+--------------------------------------+
| 834bca7f-b45e-11e4-a7b5-0800272e951c |
+--------------------------------------+
1 row in set (0.00 sec)
Note from the my.cnf settings above that I’ve chosen xtrabackup-v2 as the State Snapshot Transfer (SST) method, which requires authentication (wsrep_sst_auth). For this reason, if we now try to start MySQL on node2 it will fail, with its error log showing:
2015-02-14 16:58:26 24149 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup-v2 --role 'joiner' --address '192.168.70.3' --auth 'sst:secret' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --parent '24149' --binlog 'percona-bin' : 1 (Operation not permitted)
2015-02-14 16:58:26 24149 [ERROR] WSREP: Failed to read uuid:seqno from joiner script.
2015-02-14 16:58:26 24149 [ERROR] WSREP: SST failed: 1 (Operation not permitted)
2015-02-14 16:58:26 24149 [ERROR] Aborting
The problem here is that the SST method requires MySQL authentication: even though the credentials are shared in the wsrep_sst_auth variable of every node’s my.cnf configuration file, this only tells xtrabackup-v2 which credentials to use; it doesn’t actually configure them in MySQL. That’s a step left to us:
mysql [node1] > GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sst'@'localhost' IDENTIFIED BY 'secret';
Query OK, 0 rows affected (0.02 sec)
And this constitutes our very first committed transaction, which goes into the cluster’s GTID set:
mysql [node1] > select @@global.gtid_executed;
+----------------------------------------+
| @@global.gtid_executed                 |
+----------------------------------------+
| 7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1 |
+----------------------------------------+
1 row in set (0.00 sec)
This GRANT only needs to be issued once, in the reference node. Now you start MySQL on node2 and node3: they will use XtraBackup to make a backup of node1’s dataset, which will contain this GRANT, and restore it in their own datadir:
[node2]$ service mysql start
and:
[node3]$ service mysql start

OK, that’s done. But how do I attach an async replica to the cluster?

Just to make it clear, you cannot attach an async replica to the cluster: you need to choose a node that is member of the cluster and make it the master for the async replica. Considering all nodes should have the same data replicated it should be easy to change the async replica’s configuration and make it replicate from any other node from the cluster – and it is, though you may need to do some adjustments (more on this later).

The initial procedure is exactly the same one used with regular replication: you start by taking a backup of the master and restoring it on the replica. We’ll use XtraBackup again to perform the backup and we’ll start by having node2 as the master of our async replica (192.168.70.7). We could stream the backup from node2 directly to the async replica and later “prepare” it (by applying the logs, which needs to be done using the same version of Percona XtraBackup that you’ve used to take the backup), but to make things simple we’ll first take the backup on node2:

[node2]$ innobackupex /tmp
(...)
innobackupex: Using server version 5.6.21-70.1-56-log
innobackupex: Created backup directory /tmp/2015-02-14_17-53-22
(...)
150214 17:53:26 innobackupex: completed OK!

then “prepare” it:

[node2]$ innobackupex --apply-log /tmp/2015-02-14_17-53-22
(...)
150214 17:56:10 innobackupex: Starting the apply-log operation
(...)
150214 17:56:14 innobackupex: completed OK!

and from our async replica we’ll copy the backup from node2 using rsync over ssh (you could use scp or any other method to copy the files from one server to the other):

[replica]$ rsync -av -e 'ssh -l root' 192.168.70.3:/tmp/2015-02-14_17-53-22/ /var/lib/mysql

We now need to change the ownership of those files to the ‘mysql‘ user:

[replica]$ chown mysql:mysql -R /var/lib/mysql

and take note of the “replication coordinates” – but related to GTID. We have those in the xtrabackup_info file:

[replica]$ cat /var/lib/mysql/xtrabackup_info
(...)
binlog_pos = GTID of the last change '7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1'
(...)

In our case we took the backup very early in this database’s lifecycle (as you can see from the GTID # in there), but in practice that’s seldom the case. We can now start MySQL:

[replica]$ service mysql start

Before we can point our async replica to replicate from node2 we need to create a replication user with the right set of privileges there:

mysql [node2] > GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO repl@'192.168.70.%' IDENTIFIED BY 'p4ssword';
Query OK, 0 rows affected (0.06 sec)

Now we can use CHANGE MASTER TO on our async replica to point it to node2 using the above credentials:

mysql [replica] > CHANGE MASTER TO MASTER_HOST='192.168.70.3', MASTER_USER='repl', MASTER_PASSWORD='p4ssword', MASTER_AUTO_POSITION = 1;
Query OK, 0 rows affected, 2 warnings (0.05 sec)

If we START SLAVE now we’ll run into error 1236: we first need to set the replication coordinates from when the backup was taken. With GTID replication, this is done in a different manner: instead of providing replication coordinates in the CHANGE MASTER TO command (where we’ve used MASTER_AUTO_POSITION=1 instead), we do it by redefining the gtid_purged global variable with the GTID sets we got from the xtrabackup_info file (in this example there’s only one set):

mysql [replica] > SET @@global.gtid_purged='7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1';
Query OK, 0 rows affected (0.02 sec)

We can now START SLAVE and check its status:

mysql [replica] > START SLAVE; SHOW SLAVE STATUS\G
Query OK, 0 rows affected (0.00 sec)
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.70.3
                  Master_User: repl
(...)
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
(...)
             Master_Server_Id: 2
                  Master_UUID: 8a157d9c-b465-11e4-aafa-0800272e951c
(...)
           Retrieved_Gtid_Set: 7cf02e4b-4ba1-ee1b-4599-0c821ea92393:2
            Executed_Gtid_Set: 7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1-2
                Auto_Position: 1
1 row in set (0.00 sec)

Note that Executed_Gtid_Set above is showing a second transaction in the cluster’s GTID set: this is related to the GRANT statement we issued on node2 to set up the replication account, and it means it is now configured on all nodes that are members of the cluster (and on this async replica as well). With that in place we can easily point our async replica to replicate from a different node, such as node3:

mysql [replica] > STOP SLAVE; CHANGE MASTER TO MASTER_HOST='192.168.70.4'; START SLAVE;
Query OK, 0 rows affected (0.01 sec)
Query OK, 0 rows affected (0.01 sec)
Query OK, 0 rows affected (0.01 sec)

Let’s verify this is indeed the case:

mysql [replica] > SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.70.4
                  Master_User: repl
(...)
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
(...)
             Master_Server_Id: 3
                  Master_UUID: aa9acb85-b465-11e4-ab09-0800272e951c
(...)
            Executed_Gtid_Set: 7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1-2
                Auto_Position: 1

Nice! What about the caveats you were talking about in the other blog post?

The problem arises when you start to have other GTID sets on the gtid_executed variable of the nodes. This happens when a DML transaction involving a MyISAM table is issued on the node while having wsrep_replicate_myisam disabled: instead of having that transaction going to the cluster’s main GTID set it will go to a new one, bearing the node’s server_uuid as source id. In fact, that’s the behavior you find on regular MySQL replication configured with GTIDs. Here’s an example:

mysql [node1] > CREATE TABLE test.fernando1 (id int) ENGINE=MYISAM;
Query OK, 0 rows affected (0.03 sec)

Creating a MyISAM table per se is not an issue, because this is a DDL, so the transaction will go to the cluster’s GTID set and be replicated to the other nodes:

mysql [node1] > select @@global.gtid_executed\G
*************************** 1. row ***************************
@@global.gtid_executed: 7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1-3
1 row in set (0.00 sec)

But doing an INSERT in such a table is a DML so the transaction will go to a different GTID set:
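For example, issuing a simple INSERT on node1 (the exact value used here is arbitrary and only for illustration):

mysql [node1] > INSERT INTO test.fernando1 VALUES (1);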

mysql [node1] > select @@global.gtid_executed\G
*************************** 1. row ***************************
@@global.gtid_executed: 7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1-3,
834bca7f-b45e-11e4-a7b5-0800272e951c:1
1 row in set (0.00 sec)

Now if we move our async replica to node1 it might just work:

mysql [replica] > STOP SLAVE; CHANGE MASTER TO MASTER_HOST='192.168.70.2'; START SLAVE;
Query OK, 0 rows affected (0.01 sec)
Query OK, 0 rows affected (0.02 sec)
Query OK, 0 rows affected (0.01 sec)

mysql [replica] > show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.70.2
(...)
           Retrieved_Gtid_Set: 834bca7f-b45e-11e4-a7b5-0800272e951c:1
            Executed_Gtid_Set: 7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1-3,
834bca7f-b45e-11e4-a7b5-0800272e951c:1
                Auto_Position: 1
1 row in set (0.00 sec)

and that is because transaction ‘834bca7f-b45e-11e4-a7b5-0800272e951c:1‘ could still be found on the binary logs of node1. Let’s do something slightly different now, but on node2:

mysql [node2] > INSERT INTO test.fernando1 values (2);
Query OK, 1 row affected (0.01 sec)

mysql [node2] > select @@global.gtid_executed\G
*************************** 1. row ***************************
@@global.gtid_executed: 7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1-3,
8a157d9c-b465-11e4-aafa-0800272e951c:1
1 row in set (0.00 sec)

mysql [node2] > SHOW MASTER STATUS\G
*************************** 1. row ***************************
             File: percona-bin.000008
         Position: 923
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set: 7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1-3,
8a157d9c-b465-11e4-aafa-0800272e951c:1
1 row in set (0.00 sec)

mysql [node2] > FLUSH LOGS;
Query OK, 0 rows affected (0.01 sec)

mysql [node2] > SHOW MASTER STATUS\G
*************************** 1. row ***************************
             File: percona-bin.000009
         Position: 231
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set: 7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1-3,
8a157d9c-b465-11e4-aafa-0800272e951c:1
1 row in set (0.00 sec)

mysql [node2] > PURGE BINARY LOGS TO 'percona-bin.000009';
Query OK, 0 rows affected (0.01 sec)

Transaction ‘8a157d9c-b465-11e4-aafa-0800272e951c:1‘ that contains the INSERT statement we issued from node2 was recorded into binary log file percona-bin.000008 but the following PURGE BINARY LOGS TO command deleted that file. Now, if we point our async replica back to node2 we’ll run into a problem:

mysql [replica] > STOP SLAVE; CHANGE MASTER TO MASTER_HOST='192.168.70.3'; START SLAVE;
Query OK, 0 rows affected (0.01 sec)
Query OK, 0 rows affected (0.01 sec)
Query OK, 0 rows affected (0.01 sec)

mysql [replica] > show slave status\G
*************************** 1. row ***************************
               Slave_IO_State:
                  Master_Host: 192.168.70.3
(...)
                Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.'
(...)
            Executed_Gtid_Set: 7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1-3,
834bca7f-b45e-11e4-a7b5-0800272e951c:1
                Auto_Position: 1
1 row in set (0.00 sec)

The master can no longer provide transaction ‘8a157d9c-b465-11e4-aafa-0800272e951c:1’ to the replica as requested, thus breaking replication.
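If you want to confirm this on the master side, you can check which binary logs node2 still has and which GTIDs it has already purged from them (a quick check, output omitted here):

mysql [node2] > SHOW BINARY LOGS;
mysql [node2] > SELECT @@global.gtid_purged;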

Ouch! Is there a fix for this?

Well, kind of. You can trick MySQL into believing it has processed this transaction by injecting an empty transaction bearing the same GTID of the missing one:

mysql [replica] > SELECT GTID_SUBTRACT('7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1-3,8a157d9c-b465-11e4-aafa-0800272e951c:1', '7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1-3,834bca7f-b45e-11e4-a7b5-0800272e951c:1')\G
*************************** 1. row ***************************
GTID_SUBTRACT('7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1-3,8a157d9c-b465-11e4-aafa-0800272e951c:1', '7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1-3,834bca7f-b45e-11e4-a7b5-0800272e951c:1'): 8a157d9c-b465-11e4-aafa-0800272e951c:1
1 row in set (0.00 sec)

mysql [replica] > SET GTID_NEXT='8a157d9c-b465-11e4-aafa-0800272e951c:1';
Query OK, 0 rows affected (0.00 sec)

mysql [replica] > BEGIN;COMMIT;
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)

mysql [replica] > SET gtid_next = 'AUTOMATIC';
Query OK, 0 rows affected (0.00 sec)

mysql [replica] > show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.70.3
(...)
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
(...)
            Executed_Gtid_Set: 7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1-3,
834bca7f-b45e-11e4-a7b5-0800272e951c:1,
8a157d9c-b465-11e4-aafa-0800272e951c:1
                Auto_Position: 1
1 row in set (0.00 sec)

The problem is that you’ll end up with an inconsistent dataset – injecting an empty transaction is the same as using SQL_SLAVE_SKIP_COUNTER on regular MySQL replication.

Take-home lesson

It is not complicated to configure a PXC cluster to use GTIDs; in fact, you may prefer to do so once you get used to it. And it makes it much easier to have an asynchronous slave replicating from one of the nodes, and even to move it around to replicate from a different node. It should all go well while the nodes record their transactions in the cluster’s main GTID set, which is shared by all nodes: this implies all of them will have the same transactions in their binlogs, as designed. The problem appears when we start seeing a deviation from this pattern, with a few nodes recording local transactions in their “own” GTID set. This won’t bother the cluster operation per se, as these other GTID sets are simply ignored and won’t be replicated to other nodes, but they may complicate things for attached async replicas if you need to point them to a different node.

In such environments, strive to maintain a unified GTID set around the cluster’s main one, and if you find out that one of the nodes has started adding transactions to a different set, investigate it. You shouldn’t be using MyISAM tables on an XtraDB Cluster as those aren’t officially supported, but if you must use them you should probably consider enabling wsrep_replicate_myisam. Operations on mysql system tables should be done through DDLs instead of DMLs (like GRANT instead of INSERT) and, above all, keep your distance from this bug.
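For reference, wsrep_replicate_myisam can be inspected and toggled at runtime on a node; a quick illustration only (weigh the implications for your workload before enabling it):

mysql [node1] > SHOW GLOBAL VARIABLES LIKE 'wsrep_replicate_myisam';
mysql [node1] > SET GLOBAL wsrep_replicate_myisam = ON;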

The post How to setup a PXC cluster with GTIDs (and have async slaves replicating from it!) appeared first on MySQL Performance Blog.

Errors after starting new slave server replication using Percona XtraBackup's backup

Latest Forum Posts - February 19, 2015 - 10:45pm
Hey!

We have one master MySQL server and one slave MySQL server, and everything works fine. Now we need to start a second slave server. So I made a backup with Percona XtraBackup from the slave server using Percona's manual. I used this command:
sudo innobackupex --defaults-file=/etc/my.cnf --parallel=4 --password='mypass' --slave-info --safe-slave-backup --compress --compress-threads=4 --rsync --throttle=30 /backups/test/
Then I do the decompress and apply-log steps, copy the files to the new slave, change the directory permissions, and start MySQL; it works fine. Then I run RESET SLAVE ALL; on the new slave server, copy my.cnf from the first slave server, and add server-id = 3 and the skip-slave-start option there. Then I do

CHANGE MASTER TO
MASTER_HOST='master_ip',
MASTER_USER='replicationuser',
MASTER_PASSWORD='replicationpass',
MASTER_LOG_FILE='(from above)',
MASTER_LOG_POS= (from above);

with correct and fresh settings taken from the master server using the SHOW SLAVE STATUS\G command on it. Then, about 2 hours after the backup creation, I start the new slave. For the first 30 seconds everything in SHOW SLAVE STATUS\G works fine and LOG_POS is changing, but then I see this error:
http://s15.postimg.org/twk7u6ybv/image.jpg
I tried to ignore the 1062 error using my.cnf, but in that case I see:
http://s4.postimg.org/mvlugcg6l/image.jpg

So I think something is wrong with this replication or with the new slave server. The other slave server is working perfectly. Do you have any ideas what I should check or do?

More on MySQL 5.6 multi-threaded replication and GTIDs (and Feb. 25 webinar)

Latest MySQL Performance Blog posts - February 19, 2015 - 12:00am

In a previous post, titled “Multi-threaded replication with MySQL 5.6: Use GTIDs,” I explained that using GTID replication is almost a requirement when using MySQL 5.6 MTS. Let’s see now how to perform the day-to-day operations when MTS and GTIDs are both enabled. (I’ll also be presenting a related webinar next week titled “Multi-threaded Replication in MySQL 5.6 and 5.7”).

Seeing the execution gaps

If you have a look at SHOW SLAVE STATUS while the slave is running, you may not be expecting such an output:

[...] Executed_Gtid_Set: 1381aa44-9a60-11e4-b6d8-94dbc999324d:1-2520:2522:2524:2526-2528:2531-2533:2536-2538:2540-2541:2544:2546-2547:2550-2551:2555:2565-2566:2569:2575-2577:2579-2581:2584-2586:2588:2590-2591:2595-2597:2599:2602:2604-2605:2607-2610:2613:2615-2620:2622-2624:2626-2627:2629:2631:2634:2636-2639:2641-2642:2644:2646-2647:2649:2651-2653:2657-2658:2661-2662:2666-2672:2676-2678:2680:2683-2684:2686-2693:2695:2701:2704:2706-2707:2709:2711:2713-2714:2717:2720-2722:2729-2730:2735:2744:2746:2749:2751-2752:2762:2764-2765:2768-2769:2771:2774:2776:2780-2782:2784:2786-2787:2789:2791:2793:2800:2803:2805-2807:2809:2811-2814:2816-2817:2819-2820:2822-2826:2828-2834:2837-2840:2842:2844-2845:2847:2850-2851:2853:2855:2857-2859:2861-2863:2865-2868:2870-2871:2873-2874:2878:2880-2884:2886-2888:2891:2893:2895-2896:2899:2903:2906-2907:2910:2912:2915-2918:2921-2923:2925-2926:2930:2932:2934:2936:2939-2940:2943-2944:2946:2948-2957:2966:2969-2970:2974:2976:2979-2980:2982:2985:2987:2989:2994:2996-2997:2999:3001:3003:3006:3008:3011-3013 [...]

Ouch! What does that insane list of GTIDs mean?

It is actually easy to understand as long as you know that the GTIDs of all executed transactions are tracked in Executed_Gtid_Set and that execution gaps are allowed with MTS.

Then 1-2520:2522:2524 simply means that transactions #1 to #2520 have been executed, as well as transactions #2522 and #2524, but not #2521 and #2523.
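If you prefer to let the server do the set arithmetic, GTID_SUBTRACT() can list the gaps for you. A quick illustration using the numbers above (the upper bound 2524 is just picked for the example):

mysql> SELECT GTID_SUBTRACT('1381aa44-9a60-11e4-b6d8-94dbc999324d:1-2524', @@global.gtid_executed) AS gaps;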

You can also see that a gap at a specific position will not last for long. If you run SHOW SLAVE STATUS an instant later, you will see:

[...] Executed_Gtid_Set: 1381aa44-9a60-11e4-b6d8-94dbc999324d:1-4095:4098:4100:4103-4105[...]

This time the first execution gap is for transaction #4096.

If you stop the writes on the master, all gaps will be filled once replication has caught up and you will simply see:

[...] Executed_Gtid_Set: 1381aa44-9a60-11e4-b6d8-94dbc999324d:1-20599 [...]

Dealing with replication errors

Say replication has stopped with an error and you want to skip the offending event. The only option with GTID replication is to inject an empty transaction, which in turn means you must know the GTID of the transaction you want to skip.

Let’s look at SHOW SLAVE STATUS:

Executed_Gtid_Set: 1381aa44-9a60-11e4-b6d8-94dbc999324d:1-1052769:1052805-1052806:1052808:1052810-1052811:1052814:1052816:1052819:1052823:1052825:1052827-1052828:1052830-1052831:1052835:1052838:1052840:1052842:1052844-1052846:1052848-1052849:1052851-1052852:1052855-1052857:1052859:1052862-1052863:1052867-1052868:1052870:1052873-1052875:1052878-1052879:1052882-1052883:1052885:1052887:1052890-1052892:1052896:1052901:1052905:1052908-1052909:1052911:1052915:1052917-1052918:1052922-1052923:1052927-1052929:1052931-1052933:1052937-1052938:1052940:1052943:1052946:1052948-1052949:1052953:1052955-1052956:1052958:1052962-1052964:1052967-1052969:1052972:1052975-1052977:1052979:1052981-1052983:1052985:1052987:1052989:1052991:1052993-1052995:1052999:1053001:1053003:1053005-1053016:1053018:1053020:1053024-1053026:1053029:1053032-1053034:1053037-1053038:1053040:1053043:1053045-1053046

So which transaction should we skip: probably 1381aa44-9a60-11e4-b6d8-94dbc999324d:1052770, right? This is the first transaction that could not be executed.

This is confirmed by looking at the Last_SQL_Error field:

Last_SQL_Error: Worker 0 failed executing transaction '1381aa44-9a60-11e4-b6d8-94dbc999324d:1052770' [...]

Once you know the GTID to skip, it is easy to restart replication (and fix the inconsistency later):

mysql> SET gtid_next='1381aa44-9a60-11e4-b6d8-94dbc999324d:1052770';
mysql> BEGIN;COMMIT;
mysql> SET gtid_next='AUTOMATIC';
mysql> START SLAVE;

Taking backups

When using GTID replication, taking a backup from a multi-threaded slave is not difficult at all.

With Percona XtraBackup, simply add the --slave-info option as usual and you will get the list of executed GTIDs in the xtrabackup_slave_info file:

$ less xtrabackup_slave_info SET GLOBAL gtid_purged='1381aa44-9a60-11e4-b6d8-94dbc999324d:1-1095246:1095248-1095249:1095253-1095254:1095257-1095266:1095270-1095271:1095273:1095275:1095284-1095285:1095296:1095305:1095311-1095312:1095315-1095316:1095318:1095321:1095323-1095324:1095326-1095332:1095335:1095337-1095338:1095348-1095351:1095353-1095354:1095356:1095358-1095359:1095363-1095364:1095366:1095368-1095369:1095372:1095375-1095376:1095378:1095380-1095385:1095388:1095390-1095392:1095394-1095411:1095419:1095424-1095425:1095428:1095430-1095433:1095437-1095439:1095442:1095444-1095445:1095447:1095449:1095461-1095464:1095468:1095473:1095482-1095484:1095488-1095490:1095494:1095496-1095497:1095499-1095500:1095502:1095505:1095507:1095509:1095511-1095516:1095521:1095524-1095525:1095527:1095530-1095531:1095534-1095552:1095554:1095557:1095559-1095560:1095563:1095566:1095569:1095572-1095573:1095582:1095594-1095595:1095597:1095601-1095605:1095607-1095610:1095612-1095614:1095618:1095621-1095623:1095625-1095628:1095630:1095632:1095634:1095636:1095639-1095642:1095645:1095649:1095659:1095661:1095664-1095665:1095667-1095669:1095671:1095674'; CHANGE MASTER TO MASTER_AUTO_POSITION=1

Then starting replication on a new instance bootstrapped from this backup is easy:

mysql> SET GLOBAL gtid_purged='...';
mysql> RESET SLAVE ALL;
mysql> CHANGE MASTER TO [...], MASTER_AUTO_POSITION=1;
mysql> START SLAVE;

With mysqldump, simply discard the --dump-slave option as the list of executed GTIDs will be automatically added at the top of the dump:

-- MySQL dump 10.13 Distrib 5.6.22, for linux-glibc2.5 (x86_64) [...] -- -- GTID state at the beginning of the backup -- SET @@GLOBAL.GTID_PURGED='1381aa44-9a60-11e4-b6d8-94dbc999324d:1-1095246:1095248-1095249:1095253-1095254:1095257-1095266:1095270-1095271:1095273:1095275:1095284-1095285:1095296:1095305:1095311-1095312:1095315-1095316:1095318:1095321:1095323-1095324:1095326-1095332:1095335:1095337-1095338:1095348-1095351:1095353-1095354:1095356:1095358-1095359:1095363-1095364:1095366:1095368-1095369:1095372:1095375-1095376:1095378:1095380-1095385:1095388:1095390-1095392:1095394-1095411:1095419:1095424-1095425:1095428:1095430-1095433:1095437-1095439:1095442:1095444-1095445:1095447:1095449:1095461-1095464:1095468:1095473:1095482-1095484:1095488-1095490:1095494:1095496-1095497:1095499-1095500:1095502:1095505:1095507:1095509:1095511-1095516:1095521:1095524-1095525:1095527:1095530-1095531:1095534-1095552:1095554:1095557:1095559-1095560:1095563:1095566:1095569:1095572-1095573:1095582:1095594-1095595:1095597:1095601-1095605:1095607-1095610:1095612-1095614:1095618:1095621-1095623:1095625-1095628:1095630:1095632:1095634:1095636:1095639-1095642:1095645:1095649:1095659:1095661:1095664-1095665:1095667-1095669:1095671:1095674';

And then replication can be started like stated before.

Conclusion

Seeing the execution gaps in the output of SHOW SLAVE STATUS can be disturbing at first sight, and of course you may have to change a few habits, but overall there is no specific issue when using GTIDs and MTS.

Next week I will be presenting a free webinar on multi-threaded replication (Wednesday February 25th at 10 a.m. PST). If you are interested in learning more on the topic, feel free to register. It will also be recorded – you’ll be able to use that same link to watch the presentation and download my slides.

The post More on MySQL 5.6 multi-threaded replication and GTIDs (and Feb. 25 webinar) appeared first on MySQL Performance Blog.

can not use innobackupex

Latest Forum Posts - February 18, 2015 - 8:28pm
Hi everyone,
I'm using xtrabackup 2.2.9 and mysql 5.1.73.
I tried to use the innobackupex command:
innobackupex --ibbackup=xtrabackup --user=root --password=abc /home/backupInno/base/
and I got this error. I tried to read some advice, but nothing helped me fix this bug. Please help me.
innobackupex: got a fatal error with the following stacktrace: at /usr/bin/innobackupex line 4754
main::check_server_version() called at /usr/bin/innobackupex line 1541
innobackupex: Error: Built-in InnoDB in MySQL 5.1 is not supported in this release. You can either use Percona XtraBackup 2.0, or upgrade to InnoDB plugin.

XtraDB Cluster and Docker

Latest Forum Posts - February 18, 2015 - 7:43am
Hi,

I want to run XtraDB Cluster in Docker (under Marathon/Mesos). But I have a problem when I want to connect from the Docker container to an existing cluster. If I connect to the Docker container from a native node, everything is OK. Do you have any idea?

For example, I have a native node BDB and a Docker node DOCK. I want to connect DOCK to the existing cluster on BDB.

Error message from BDB's /var/lib/mysql/innobackup.backup.log after starting mysql in DOCK:
...
innobackupex: Created backup directory /tmp/tmp.srM5t2EAga
xbstream: Error writing file 'UNOPENED' (Errcode: 32 - Broken pipe)
innobackupex: 'xbstream -c' returned with exit code 1.
innobackupex: got a fatal error with the following stacktrace: at /usr//bin/innobackupex line 4882
main::backup_file_via_stream('/tmp', 'backup-my.cnf') called at /usr//bin/innobackupex line 4931
main::backup_file('/tmp', 'backup-my.cnf', '/tmp/backup-my.cnf') called at /usr//bin/innobackupex line 4955
main::write_to_backup_file('/tmp/backup-my.cnf', '# This MySQL options file was generated by innobackupex.\x{a}\x{a}# T...') called at /usr//bin/innobackupex line 3762
main::write_backup_config_file('/tmp/backup-my.cnf') called at /usr//bin/innobackupex line 3689
main::init() called at /usr//bin/innobackupex line 1566
innobackupex: Error: Failed to stream '/tmp/backup-my.cnf': 1 at /usr//bin/innobackupex line 4882.

If I try adding wsrep_sst_method = xtrabackup to the configuration, then innobackupex reports "completed OK!", but I get this message in DOCK:

...
2015-02-18 16:28:58 1043 [Note] WSREP: Shifting PRIMARY -> JOINER (TO: 4)
2015-02-18 16:28:58 1043 [Note] WSREP: Requesting state transfer: success, donor: 0
2015-02-18 16:29:00 1043 [Note] WSREP: (daf27d21, 'tcp://0.0.0.0:4567') turning message relay requesting off
2015-02-18 16:29:02 1043 [Note] WSREP: 0.0 (bdb1): State transfer to 1.0 (f73fff52892b) complete.
2015-02-18 16:29:02 1043 [Note] WSREP: Member 0.0 (bdb1) synced with group.
WSREP_SST: [ERROR] xtrabackup process ended without creating '/var/lib/mysql//xtrabackup_galera_info' (20150218 16:29:02.388)
WSREP_SST: [INFO] Contents of datadir (20150218 16:29:02.393)
WSREP_SST: [INFO] -rw-rw---- 1 mysql mysql 8820 Feb 18 15:14 /var/lib/mysql//mysql/columns_priv.frm
-rw-rw---- 1 mysql mysql 0 Feb 18 15:14 /var/lib/mysql//mysql/columns_priv.MYD
-rw-rw---- 1 mysql mysql 4096 Feb 18 15:14 /var/lib/mysql//mysql/columns_priv.MYI
...
-rw-rw---- 1 mysql mysql 8676 Feb 18 15:14 /var/lib/mysql//performance_schema/users.frm
-rw-rw---- 1 mysql mysql 0 Feb 18 15:14 /var/lib/mysql//test/db.opt (20150218 16:29:02.424)
WSREP_SST: [ERROR] Cleanup after exit with status:32 (20150218 16:29:02.429)
WSREP_SST: [INFO] Removing the sst_in_progress file (20150218 16:29:02.434)
2015-02-18 16:29:02 1043 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup --role 'joiner' --address '10.22.1.29' --auth 'sst:xxx' --datadir '/var/lib/mysql/' --defaults-file '/etc/mysql/my.cnf' --parent '1043' '' : 32 (Broken pipe)
2015-02-18 16:29:02 1043 [ERROR] WSREP: Failed to read uuid:seqno from joiner script.
2015-02-18 16:29:02 1043 [ERROR] WSREP: SST failed: 32 (Broken pipe)
2015-02-18 16:29:02 1043 [ERROR] Aborting
2015-02-18 16:29:04 1043 [Note] WSREP: Closing send monitor...
2015-02-18 16:29:04 1043 [Note] WSREP: Closed send monitor.
2015-02-18 16:29:04 1043 [Note] WSREP: gcomm: terminating thread
2015-02-18 16:29:04 1043 [Note] WSREP: gcomm: joining thread
2015-02-18 16:29:04 1043 [Note] WSREP: gcomm: closing backend
2015-02-18 16:29:05 1043 [Note] WSREP: view(view_id(NON_PRIM,864bbb23,6) memb { daf27d21,0 } joined { } left { } partitioned { 864bbb23,0 })
2015-02-18 16:29:05 1043 [Note] WSREP: view((empty))
2015-02-18 16:29:05 1043 [Note] WSREP: New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 1
2015-02-18 16:29:05 1043 [Note] WSREP: gcomm: closed
2015-02-18 16:29:05 1043 [Note] WSREP: Flow-control interval: [16, 16]
2015-02-18 16:29:05 1043 [Note] WSREP: Received NON-PRIMARY.
2015-02-18 16:29:05 1043 [Note] WSREP: Shifting JOINER -> OPEN (TO: 4)
2015-02-18 16:29:05 1043 [Note] WSREP: Received self-leave message.
2015-02-18 16:29:05 1043 [Note] WSREP: Flow-control interval: [0, 0]
2015-02-18 16:29:05 1043 [Note] WSREP: Received SELF-LEAVE. Closing connection.
2015-02-18 16:29:05 1043 [Note] WSREP: Shifting OPEN -> CLOSED (TO: 4)
2015-02-18 16:29:05 1043 [Note] WSREP: RECV thread exiting 0: Success
2015-02-18 16:29:05 1043 [Note] WSREP: recv_thread() joined.
2015-02-18 16:29:05 1043 [Note] WSREP: Closing replication queue.
2015-02-18 16:29:05 1043 [Note] WSREP: Closing slave action queue.
2015-02-18 16:29:05 1043 [Note] WSREP: Service disconnected.
2015-02-18 16:29:05 1043 [Note] WSREP: rollbacker thread exiting
2015-02-18 16:29:06 1043 [Note] WSREP: Some threads may fail to exit.
2015-02-18 16:29:06 1043 [Note] Binlog end
2015-02-18 16:29:06 1043 [Note] /usr/sbin/mysqld: Shutdown complete
Error in my_thread_global_end(): 1 threads didn't exit
150218 16:29:11 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended

BDB my.cnf:
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock

[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0

[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
binlog_format=ROW
wsrep_cluster_address=gcomm://
wsrep_sst_auth=sst:xxx
wsrep_provider=/usr/lib/galera3/libgalera_smm.so
bind-address = *

DOCK my.cnf:
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock

[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0

[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
binlog_format=ROW
wsrep_cluster_address=gcomm://bdb1
wsrep_sst_auth=sst:xxx
wsrep_provider=/usr/lib/galera3/libgalera_smm.so
wsrep_sst_receive_address=10.22.1.29
bind-address = *

The docker run command (HOSTIP is what I use for wsrep_sst_receive_address in the container):
docker run --name galera-1 -i -t -e HOSTIP=`hostname -i` -p 4568:4568 -p 3306:3306 -p 4444:4444 -p 4567:4567 -d szn-percona-cluster

Percona submits 7 talks for Vancouver OpenStack Summit (voting ends Feb. 23)

Latest MySQL Performance Blog posts - February 18, 2015 - 6:43am

Percona has submitted seven talks for the next OpenStack Summit in Vancouver this May. And as with all OpenStack Summit events, the community decides the content. Voting ends February 23, and if you aren’t already an OpenStack Foundation member (required to vote), you can join now for free here.

Percona’s Vancouver OpenStack Summit proposals

Percona’s proposals include collaborations with top contributors across a variety of OpenStack services including Trove and Swift. You can vote for our talks by clicking the titles below that interest you.

MySQL and OpenStack Deep Dive
Speakers: Peter Boros, Jay Pipes (Mirantis)

Deep Dive into MySQL Replication with OpenStack Trove, and Kilo
Speakers: George Lorch, Amrith Kumar (Tesora)

MySQL on Ceph Storage: Exploring Design, Challenges and Benefits
Speakers: Yves Trudeau, Kyle Bader (Red Hat)

Core Services MySQL Database Backup & Recovery to Swift
Speakers: Kenny Gryp, Chris Nelson (SwiftStack)

Smart MySQL Log Management with Swift
Speakers: Matt Griffin, Chris Nelson (SwiftStack)

Discovering Better Database Architectures For Core Services In OpenStack
Speakers: Kenny Gryp, Matt Griffin

Upgrading your OpenStack Core Services On The Database Side: Nova, Neutron, Cinder…
Speakers: Kenny Gryp, Matt Griffin

Other interesting proposals

Here are a few proposals from other organizations that look particularly interesting. Please consider them as well.

Exploration into OpenStack Trove, customer use cases, and the future of Trove for the community
Speakers: Amrith Kumar (Tesora), Brad Topol (IBM), Mariam John (IBM)

The Entrepreneur’s Challenge: The Realities of Starting an OpenStack Company
Speakers: Simon Anderson (Dreamhost, Inktank), Ken Rugg (Tesora), Josh McKenty (Pivotal, Piston Cloud), Jesse Proudman (BlueBox), Joe Arnold (SwiftStack)

Making a Case for Your OpenStack Deployment: How Vendor’s Can Help
Speaker: Ryan Floyd (Storm Ventures)

Real World Experiences with Upgrading OpenStack at Time Warner Cable
Speakers: Clayton ONeill (Time Warner Cable), Matt Fischer (Time Warner Cable)

Percona & OpenStack

According to the most recent OpenStack User Survey in November 2014, Percona’s database software is a popular choice for OpenStack operators needing high availability.

OpenStack User Survey results from November 2014 show Percona XtraDB Cluster as the top Galera-based choice for production clouds.

Percona XtraDB Cluster, the top Galera-based MySQL cluster solution for production OpenStack deployments, incorporates the latest version of MySQL 5.6, Percona Server 5.6, Percona XtraBackup, and Galera. This combination delivers top performance, high availability, and critical security coverage with the latest features and fixes. Additionally, Percona Server is a popular guest database option with unique features designed for cloud operators offering DBaaS.

In addition to sharing our open source software with the OpenStack community, Percona is sharing our expertise in services like Trove, projects like the HA Guide update, extensive benchmark testing activities, and upcoming events like OpenStack Live 2015. The inaugural OpenStack Live Conference, April 13-14 in Santa Clara, California, will be a user-focused event. The program will cover database-related topics like Trove as well as other OpenStack services, and will feature multiple 3-hour hands-on tutorials.

Percona is a proud supporter of OpenStack and we hope to see you in both Santa Clara in April and Vancouver in May. And in the meantime, don’t forget to vote!

The post Percona submits 7 talks for Vancouver OpenStack Summit (voting ends Feb. 23) appeared first on MySQL Performance Blog.

innobackupex fails on apply-log step

Latest Forum Posts - February 18, 2015 - 3:23am
The innobackupex script fails during the apply-log step with the following error:
innobackupex --apply-log --rebuild-indexes 2015-02-17_15-03-01/
InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
and Percona LLC and/or its affiliates 2009-2013. All Rights Reserved.
This software is published under
the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.
Get the latest version of Percona XtraBackup, documentation, and help resources:
http://www.percona.com/xb/p
IMPORTANT: Please check that the apply-log run completes successfully.
At the end of a successful apply-log run innobackupex
prints "completed OK!".
150218 09:08:02 innobackupex: Starting ibbackup with command: xtrabackup --defaults-file="/tmp/2015-02-17_15-03-01/backup-my.cnf" --defaults-group="mysqld" --prepare --target-dir=/tmp/2015-02-17_15-03-01 --rebuild-indexes
xtrabackup version 2.3.0alpha1 based on MySQL server 5.6.21 Linux (x86_64) (revision id: )
xtrabackup: cd to /tmp/2015-02-17_15-03-01
xtrabackup: This target seems to be not prepared yet.
xtrabackup: xtrabackup_logfile detected: size=2097152, start_lsn=(48539303)
xtrabackup: using the following InnoDB configuration for recovery:
xtrabackup: innodb_data_home_dir = ./
xtrabackup: innodb_data_file_path = ibdata1:12M:autoextend
xtrabackup: innodb_log_group_home_dir = ./
xtrabackup: innodb_log_files_in_group = 1
xtrabackup: innodb_log_file_size = 2097152
Starting to expand compacted .ibd files.
Expanding ./database2/log.ibd
Expanding ./mysql/slave_master_info.ibd
Expanding ./mysql/innodb_table_stats.ibd
Expanding ./mysql/slave_relay_log_info.ibd
Expanding ./mysql/slave_worker_info.ibd
Expanding ./mysql/innodb_index_stats.ibd
Expanding ./perconatest/heartbeat.ibd
Expanding ./perconatest/example.ibd
Expanding ./database1/XXXXXX.ibd
Expanding ./database1/XXXXXX2.ibd
Expanding ./database1/XXXXXX3.ibd
xtrabackup: using the following InnoDB configuration for recovery:
xtrabackup: innodb_data_home_dir = ./
xtrabackup: innodb_data_file_path = ibdata1:12M:autoextend
xtrabackup: innodb_log_group_home_dir = ./
xtrabackup: innodb_log_files_in_group = 1
xtrabackup: innodb_log_file_size = 2097152
xtrabackup: Starting InnoDB instance for recovery.
xtrabackup: Using 104857600 bytes for buffer pool (set by --use-memory parameter)
InnoDB: Using atomics to ref count buffer pool pages
InnoDB: The InnoDB memory heap is disabled
InnoDB: Mutexes and rw_locks use GCC atomic builtins
InnoDB: Memory barrier is not used
InnoDB: Compressed tables use zlib 1.2.3
InnoDB: Using CPU crc32 instructions
InnoDB: Initializing buffer pool, size = 100.0M
InnoDB: Completed initialization of buffer pool
InnoDB: Highest supported file format is Barracuda.
InnoDB: The log sequence numbers 30578769 and 30578769 in ibdata files do not match the log sequence number 48539303 in the ib_logfiles!
InnoDB: Database was not shutdown normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages
InnoDB: from the doublewrite buffer...
InnoDB: 128 rollback segment(s) are active.
InnoDB: Waiting for purge to start
[01] Checking if there are indexes to rebuild in table mysql/innodb_index_stats (space id: 2)
[01] Checking if there are indexes to rebuild in table mysql/innodb_table_stats (space id: 3)
[01] Checking if there are indexes to rebuild in table mysql/slave_master_info (space id: 4)
[01] Checking if there are indexes to rebuild in table mysql/slave_relay_log_info (space id: 5)
[01] Checking if there are indexes to rebuild in table mysql/slave_worker_info (space id: 6)
[01] Checking if there are indexes to rebuild in table perconatest/example (space id: 163)
[01] Checking if there are indexes to rebuild in table perconatest/heartbeat (space id: 161)
[01] Checking if there are indexes to rebuild in table database1/XXXXX (space id: 321)
[01] Found index Name_XXXXXX
[01] Found index fk_table4_table51_idx
[01] Rebuilding 2 index(es).
[01] Checking if there are indexes to rebuild in table database1/XXXXXX (space id: 322)
[01] Checking if there are indexes to rebuild in table database1/XXXXXX (space id: 323)
[01] Found index XXXXXXX
[01] Found index XXXXXX_cns_1
[01] Found index XXXXXX_idx
[01] Rebuilding 3 index(es).
[01] Checking if there are indexes to rebuild in table database1/XXXXXX (space id: 324)
[01] Found index Level
[01] Found index Name
[01] Found index XXXXXX_idx
[01] Found index XXXXXX_idx
[01] Rebuilding 4 index(es).
[01] Checking if there are indexes to rebuild in table database1/XXXXXX (space id: 325)
[01] Found index XXXXXX
[01] Found index XXXXXX_idx
[01] Found index XXXXXX_idx
[01] Rebuilding 3 index(es).
[01] Checking if there are indexes to rebuild in table database1/XXXXXX (space id: 327)
[01] Found index XXXXXX_cns_1
[01] Found index XXXXXX_cns_1
[01] Found index XXXXXX_idx
[01] Found index XXXXXX_idx
[01] Found index XXXXXX_idx
[01] Found index XXXXXX_idx
[01] Rebuilding 6 index(es).
[01] Checking if there are indexes to rebuild in table database1/FTS_0000000000000181_00000000000007b3_INDEX_1 (space id: 377)
[01] Checking if there are indexes to rebuild in table database1/FTS_0000000000000181_00000000000007b3_INDEX_2 (space id: 378)
[01] Checking if there are indexes to rebuild in table database1/FTS_0000000000000181_00000000000007b3_INDEX_3 (space id: 379)
[01] Checking if there are indexes to rebuild in table database1/FTS_0000000000000181_00000000000007b3_INDEX_4 (space id: 380)
[01] Checking if there are indexes to rebuild in table database1/FTS_0000000000000181_00000000000007b3_INDEX_5 (space id: 381)
[01] Checking if there are indexes to rebuild in table database1/FTS_0000000000000181_00000000000007b3_INDEX_6 (space id: 382)
[01] Checking if there are indexes to rebuild in table database1/FTS_0000000000000181_XXXXXX (space id: 374)
[01] Checking if there are indexes to rebuild in table database1/FTS_0000000000000181_XXXXXX (space id: 375)
[01] Checking if there are indexes to rebuild in table database1/FTS_0000000000000181_XXXXXX (space id: 376)
[01] Checking if there are indexes to rebuild in table database1/FTS_0000000000000181_XXXXXX (space id: 372)
[01] Checking if there are indexes to rebuild in table database1/FTS_0000000000000181_XXXXXX (space id: 373)
[01] Checking if there are indexes to rebuild in table database1/XXXXXX (space id: 351)
[01] Found index XXXXXX_cns_1
[01] Found index XXXXXX_idx
[01] Found index XXXXXX_idx
[01] Rebuilding 3 index(es).
[01] Checking if there are indexes to rebuild in table database1/XXXXXX (space id: 352)
[01] Found index XXXXXX_cns_1
[01] Found index XXXXXX_idx
[01] Found index XXXXXX_idx
[01] Found index XXXXXX_idx
[01] Found index XXXXXX_idx
[01] Found index fk_XXXXXX_table51_idx
[01] Rebuilding 6 index(es).
[01] Checking if there are indexes to rebuild in table database1/XXXXXX (space id: 353)
[01] Found index XXXXXX_cns_1
[01] Found index XXXXXX_cns_2
[01] Found index XXXXXX_idx
[01] Found index XXXXXX_idx
[01] Found index XXXXXX_idx
[01] Rebuilding 5 index(es).
[01] Checking if there are indexes to rebuild in table database1/XXXXXX (space id: 371)
[01] Found index XXXXXX
[01] Found index XXXXXX_cns_1
[01] Found index XXXXXX_cns_2
[01] Found index XXXXXX_cns_3
[01] Found index XXXXXX_cns_4
[01] Found index XXXXXX_cns_8
[01] Found index XXXXXX_cns_9
[01] Found index XXXXXX_cns_5
[01] Found index XXXXXX_cns_6
[01] Found index XXXXXX_cns_7
[01] Found index XXXXXX_cns_10
[01] Found index XXXXXX_idx
[01] Found index XXXXXX_idx
[01] Found index XXXXXX_idx
[01] Found index XXXXXX_idx
[01] Found index XXXXXX_idx
[01] Found index XXXXXX_idx
[01] Found index XXXXXX_idx
[01] Found index XXXXXX_idx
[01] Found index XXXXXX_idx
[01] Found index XXXXXX_idx
[01] Found index XXXXXX_idx
[01] Found index XXXXXX_idx
[01] Found index XXXXXX_idx
[01] Found index XXXXXX_idx
[01] Found index XXXXXX_ibfk_7_idx
[01] Found index XXXXXX_idx
[01] Found index XXXXXX_table51_idx
[01] Found index XXXXXX_idx
[01] Rebuilding 29 index(es).
09:08:05 UTC - xtrabackup got signal 11 ;
This could be because you hit a bug or data is corrupted.
This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x10000
xtrabackup(my_print_stacktrace+0x2e) [0x912a0e]
xtrabackup(handle_fatal_signal+0x235) [0x819545]
/lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0) [0x7fce72140cb0]
Please report a bug at https://bugs.launchpad.net/percona-xtrabackup
innobackupex: got a fatal error with the following stacktrace: at /usr/bin/innobackupex line 2616
main::apply_log() called at /usr/bin/innobackupex line 1559
innobackupex: Error:
innobackupex: ibbackup failed at /usr/bin/innobackupex line 2616.
The command to generate the backup is:
innobackupex --encrypt=AES256 --encrypt-key-file=/etc/innobackupex.key --compact --compress --user=XXXX --password=XXXXX /tmp
And, after the apply-log step, the decrypt step works perfectly; the command is:
innobackupex --decrypt=AES256 --encrypt-key-file=/etc/innobackupex.key --decompress 2015-02-17_15-03-01
The apply-log command is:
innobackupex --apply-log --rebuild-indexes 2015-02-17_15-03-01
We have tested several percona-xtrabackup versions (2.2.8, 2.2.9 and 2.3-alpha) with the same result.
The info about the system is:
percona xtradb cluster: 5.6.21-25.8-938.precise
OS: ubuntu server 12.04.5
percona-xtrabackup: 2.2.8, 2.2.9 and 2.3.0-alpha
I've attached the report for the 2.2.9 version.

Percona XtraBackup 2.2.9 is now available

Latest MySQL Performance Blog posts - February 17, 2015 - 10:22am

Percona is glad to announce the release of Percona XtraBackup 2.2.9 on February 17, 2015. Downloads are available from our download site or Percona Software Repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backups.

Bugs Fixed:

  • Percona XtraBackup was vulnerable to a MITM attack which could allow exfiltration of MySQL configuration information via the --version-check option. This vulnerability was logged as CVE-2015-1027. Bug fixed #1408375.
  • xtrabackup_galera_info file isn’t overwritten during the Galera auto-recovery. Bug fixed #1418584.
  • Percona XtraBackup man pages are now included with binary packages. Bug fixed #1156209.
  • Percona XtraBackup now sets the maximum supported session value for lock_wait_timeout variable to prevent unnecessary timeouts when the global value is changed from the default. Bug fixed #1410339.
  • A new option, --backup-locks, enabled by default, has been implemented to control whether backup locks will be used even if they are supported by the server. To disable backup locks, innobackupex should be run with the --no-backup-locks option. Bug fixed #1418820.

Release notes with all the bugfixes for Percona XtraBackup 2.2.9 are available in our online documentation. Bugs can be reported on the launchpad bug tracker. Percona XtraBackup is an open source, free MySQL hot backup software that performs non-blocking backups for InnoDB and XtraDB databases.

The post Percona XtraBackup 2.2.9 is now available appeared first on MySQL Performance Blog.

‘Indexing’ JSON documents for efficient MySQL queries over JSON data

Latest MySQL Performance Blog posts - February 17, 2015 - 8:45am
MySQL meets NoSQL with JSON UDF

I recently got back from FOSDEM, in Brussels, Belgium. While I was there I got to see a great talk by Sveta Smirnova about her MySQL 5.7 Labs release JSON UDF functions. It is important to note that while the UDFs come in a 5.7 release, it is absolutely possible to compile and use them with earlier versions of MySQL because the UDF interface has not changed for a long time. However, the UDFs should still be considered alpha/preview level of quality and should not be used in production yet! For this example I am using Percona Server 5.6 with the UDFs.

That being said, the proof-of-concept that I’m about to present here uses only one JSON function (JSON_EXTRACT) and it has worked well enough in my testing to present my idea here. The JSON functions will probably be GA sometime soon anyway, and this is a useful test of the JSON_EXTRACT function.

The UDFs let you parse, search and manipulate JSON data inside of MySQL, bringing MySQL closer to the capabilities of a document store.

Since I am using Percona Server 5.6, I needed to compile and install the UDFs. Here are the steps I took to compile the plugin (once installed, the functions still need to be registered; see the note after this list):

  1. $ cd mysql-json-udfs-0.3.3-labs-json-udfs-src
  2. $ cmake -DMYSQL_DIR=/usr/local/mysql .
  3. $ sudo make install
  4. $ sudo cp *.so /usr/local/mysql/lib/plugin
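Registering a UDF is done with CREATE FUNCTION ... SONAME. A hedged example for JSON_EXTRACT only; the shared-library name below is a placeholder, so use whatever file name the build actually produced in the plugin directory:

mysql> CREATE FUNCTION json_extract RETURNS STRING SONAME 'libmy_json_udf.so';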
JSON UDFs are great, but what’s the problem?

The JSON functions work very well for manipulating individual JSON objects, but like all other functions, using JSON_EXTRACT in the WHERE clause will result in a full table scan. This means the functions are virtually useless for searching through large volumes of JSON data.  If you want to use MySQL as a document store, this is going to limit the usefulness in the extreme as the ability to extract key/value pairs from JSON documents is powerful, but without indexing it can’t scale well.
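To make the problem concrete, a query along these lines (against the same ssb_json.json table used in the demo below) has no index it can use, so MySQL has to evaluate JSON_EXTRACT against every row:

mysql> SELECT doc FROM ssb_json.json WHERE json_extract(doc, 'LO_OrderKey') = 35620;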

What can be done to index JSON in MySQL for efficient access?

The JSON UDF provides a JSON_EXTRACT function which can pull data out of a JSON document. There are two ways we can use this function to “index” the JSON data.

  1. Add extra columns to the table (or use a separate table, or tables) containing the JSON and populate the columns using JSON_EXTRACT in a trigger. The downside is that this slows down inserts and modifications of the documents significantly.
  2. Use Flexviews materialized views to maintain an index table separately and asynchronously. The upside is that insertion/modification speed is not affected, but there is slight delay before index is populated. This is similar to eventual consistency in a document store.

Writing triggers is an exercise I’ll leave up to the user. The rest of this post will discuss using Flexviews materialized views to create a JSON index.
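Still, for anyone who wants a starting point for option 1, here is a minimal sketch, assuming the same ssb_json.json table (columns id and doc) used in the demo below; the side table and trigger names are made up, and only one key is extracted to keep it short:

CREATE TABLE ssb_json.json_keys (
  id BIGINT NOT NULL PRIMARY KEY,
  LO_OrderKey BIGINT UNSIGNED,
  KEY LO_OrderKey_idx (LO_OrderKey)
) ENGINE=InnoDB;

DELIMITER //
CREATE TRIGGER json_ai AFTER INSERT ON ssb_json.json
FOR EACH ROW
BEGIN
  -- extract the key from the new document and store it in the indexed side table
  INSERT INTO ssb_json.json_keys (id, LO_OrderKey)
  VALUES (NEW.id, CAST(json_extract(NEW.doc, 'LO_OrderKey') AS UNSIGNED));
END//
DELIMITER ;

A complete implementation would also need UPDATE and DELETE triggers to keep the side table in sync, which is exactly the insert/modification overhead mentioned in option 1.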

What is Flexviews?

Flexviews can create ‘incrementally refreshable’ materialized views. This means that the views are able to be refreshed efficiently using changes captured by FlexCDC, the change data capture tool that ships with Flexviews. Since the view can be refreshed fast, it is possible to refresh it frequently and have a low latency index, but not one perfectly in sync with the base table at all times.

The materialized view is a real table that is indexed to provide fast access. Flexviews includes a SQL_API, or a set of stored procedures for defining and maintaining materialized views.

See this set of slides for an overview of Flexviews: http://www.slideshare.net/MySQLGeek/flexviews-materialized-views-for-my-sql

Demo/POC using materialized view as an index

The first step to creating an incrementally refreshable materialized view with Flexviews is to create a materialized view change log on all of the tables used in the view. The CREATE_MVLOG($schema, $table) function creates the log and FlexCDC will immediately begin to collect changes into it.

mysql> call flexviews.create_mvlog('ssb_json','json');
Query OK, 1 row affected (0.01 sec)

Next, the materialized view name, and refresh type must be registered with the CREATE($schema, $mvname, $refreshtype) function:

mysql> call flexviews.create('ssb_json','json_idx','INCREMENTAL');
Query OK, 0 rows affected (0.00 sec)

-- save the just generated identifier for the view. You can use GET_ID($schema,$mvname) later.
mysql> set @mvid := last_insert_id();
Query OK, 0 rows affected (0.00 sec)

Now one or more tables have to be added to the view using the ADD_TABLE($mvid, $schema, $table, $alias,$joinclause) function. This example will use only one table, but Flexviews supports joins too.

mysql> call flexviews.add_table(@mvid, 'ssb_json','json','json',null);
Query OK, 1 row affected (0.00 sec)

Expressions must be added to the view next. Since aggregation is not used in this example, the expressions should be ‘COLUMN’ type expressions. The function ADD_EXPR($mvid, $expression_type, $expression, $alias) is used to add expressions. Note that JSON_EXTRACT returns a TEXT column, so I’ve CAST the function to integer so that it can be indexed. Flexviews does not currently have a way to define prefix indexes.

mysql> call flexviews.add_expr(@mvid, 'COLUMN', "cast(json_extract(doc,'D_DateKey') as date)", 'D_DateKey');
Query OK, 1 row affected (0.00 sec)
mysql> call flexviews.add_expr(@mvid, 'COLUMN',"cast(json_extract(doc,'C_CustomerKey') as unsigned)", 'C_CustomerKey');
Query OK, 1 row affected (0.00 sec)
mysql> call flexviews.add_expr(@mvid, 'COLUMN',"cast(json_extract(doc,'S_SuppKey') as unsigned)", 'S_SuppKey');
Query OK, 1 row affected (0.01 sec)
mysql> call flexviews.add_expr(@mvid, 'COLUMN',"cast(json_extract(doc,'P_PartKey') as unsigned)", 'P_PartKey');
Query OK, 1 row affected (0.00 sec)
mysql> call flexviews.add_expr(@mvid, 'COLUMN',"cast(json_extract(doc,'LO_OrderKey') as unsigned)", 'LO_OrderKey');
Query OK, 1 row affected (0.00 sec)
mysql> call flexviews.add_expr(@mvid, 'COLUMN',"cast(json_extract(doc,'LO_LineNumber') as unsigned)", 'LO_LineNumber');
Query OK, 1 row affected (0.00 sec)

I’ve also projected out the ‘id’ column from the table, which is the primary key. This ties the index entries to the original row, so that the original document can be retrieved.

mysql> call flexviews.add_expr(@mvid, 'COLUMN', 'id', 'id');
Query OK, 1 row affected (0.00 sec)

Since we want to use the materialized view as an index, we need to index the columns we’ve added to it.

mysql> call flexviews.add_expr(@mvid, 'KEY', "LO_LineNumber", 'LO_LineNumber_Idx');
Query OK, 1 row affected (0.00 sec)

mysql> call flexviews.add_expr(@mvid, 'KEY', "LO_OrderKey", 'LO_OrderKey_Idx');
Query OK, 1 row affected (0.00 sec)

mysql> call flexviews.add_expr(@mvid, 'KEY', "P_PartKey", 'P_PartKey_Idx');
Query OK, 1 row affected (0.00 sec)

mysql> call flexviews.add_expr(@mvid, 'KEY', "S_SuppKey", 'S_SuppKey_Idx');
Query OK, 1 row affected (0.00 sec)

mysql> call flexviews.add_expr(@mvid, 'KEY', "D_DateKey", 'D_DateKey_Idx');
Query OK, 1 row affected (0.00 sec)

mysql> call flexviews.add_expr(@mvid, 'KEY', "C_CustomerKey", 'C_CustomerKey_Idx');
Query OK, 1 row affected (0.00 sec)

Finally, the view has to be created. There are 6 million rows in my table, the JSON functions are UDFs so they are not as fast as built-in functions, and I indexed a lot of things (six different indexes are being populated at once), so it takes some time to build the index:

mysql> call flexviews.enable(@mvid);
Query OK, 2 rows affected (35 min 53.17 sec)

After the materialized view is built, you can see it in the schema. Note there is also a delta table, which I will explain a bit later.

mysql> show tables;
+--------------------+
| Tables_in_ssb_json |
+--------------------+
| json               |
| json_idx           |
| json_idx_delta     |
+--------------------+
3 rows in set (0.00 sec)

Here is the table definition of json_idx, our materialized view. You can see it is indexed:

CREATE TABLE `json_idx` (
  `mview$pk` bigint(20) NOT NULL AUTO_INCREMENT,
  `D_DateKey` date DEFAULT NULL,
  `C_CustomerKey` bigint(21) unsigned DEFAULT NULL,
  `S_SuppKey` bigint(21) unsigned DEFAULT NULL,
  `P_PartKey` bigint(21) unsigned DEFAULT NULL,
  `LO_OrderKey` bigint(21) unsigned DEFAULT NULL,
  `LO_LineNumber` bigint(21) unsigned DEFAULT NULL,
  `id` bigint(20) NOT NULL DEFAULT '0',
  `mview$hash` int(10) unsigned DEFAULT NULL,
  PRIMARY KEY (`mview$pk`),
  KEY `LO_LineNumber_Idx` (`LO_LineNumber`),
  KEY `LO_OrderKey_Idx` (`LO_OrderKey`),
  KEY `P_PartKey_Idx` (`P_PartKey`),
  KEY `S_SuppKey_Idx` (`S_SuppKey`),
  KEY `D_DateKey_Idx` (`D_DateKey`),
  KEY `C_CustomerKey_Idx` (`C_CustomerKey`),
  KEY `mview$hash_key` (`mview$hash`)
) ENGINE=InnoDB AUTO_INCREMENT=6029221 DEFAULT CHARSET=latin1;

Here are some sample contents. You can see the integer values extracted out of the JSON:

mysql> select * from json_idx limit 10; +----------+------------+---------------+-----------+-----------+-------------+---------------+----+------------+ | mview$pk | D_DateKey | C_CustomerKey | S_SuppKey | P_PartKey | LO_OrderKey | LO_LineNumber | id | mview$hash | +----------+------------+---------------+-----------+-----------+-------------+---------------+----+------------+ | 1 | 1996-08-08 | 6748 | 1 | 178778 | 35620 | 2 | 1 | 3151656687 | | 2 | 1994-05-20 | 5272 | 1 | 52362 | 102790 | 4 | 2 | 2181615425 | | 3 | 1995-05-04 | 22870 | 1 | 136407 | 146757 | 3 | 3 | 544130577 | | 4 | 1996-06-16 | 12217 | 1 | 129103 | 151200 | 1 | 4 | 2969697839 | | 5 | 1992-07-20 | 21757 | 1 | 35243 | 151745 | 1 | 5 | 1438921571 | | 6 | 1997-08-16 | 18760 | 1 | 150766 | 159232 | 6 | 6 | 3941775529 | | 7 | 1994-03-04 | 757 | 1 | 15750 | 188902 | 3 | 7 | 2142628088 | | 8 | 1993-11-04 | 17830 | 1 | 192023 | 253828 | 5 | 8 | 3480147565 | | 9 | 1993-07-12 | 16933 | 1 | 59997 | 269062 | 5 | 9 | 3572286145 | | 10 | 1998-06-16 | 26660 | 1 | 30023 | 298272 | 3 | 10 | 1971966244 | +----------+------------+---------------+-----------+-----------+-------------+---------------+----+------------+ 10 rows in set (0.00 sec)

Now, there needs to be an easy way to use this index in a select statement. Since a JOIN is needed between the materialized view and the base table, a regular VIEW makes sense to access the data. We’ll call this the index view:

mysql> create view json_idx_v as select * from json natural join json_idx;
Query OK, 0 rows affected (0.00 sec)

And just for completeness, here is the contents of a row from our new index view:

mysql> select * from json_idx_v limit 1G *************************** 1. row *************************** id: 1 doc: {"LO_OrderKey":"35620","LO_LineNumber":"2","LO_CustKey":"6748","LO_PartKey":"178778","LO_SuppKey":"1","LO_OrderDateKey":"19960808","LO_OrderPriority":"3-MEDIUM","LO_ShipPriority":"0","LO_Quantity":"38","LO_ExtendedPrice":"7055726","LO_OrdTotalPrice":"14691804","LO_Discount":"8","LO_Revenue":"6491267","LO_SupplyCost":"111406","LO_Tax":"1","LO_CommitDateKey":"19960909","LO_ShipMode":"REG AIR","C_CustomerKey":"6748","C_Name":"Customer#000006748","C_Address":"RSPYBRlR7RX6 f7J8","C_City":"PERU 5","C_Nation":"PERU","C_Region":"AMERICA","C_Phone":"27-580-967-4556","C_MktSegment":"AUTOMOBILE","S_SuppKey":"1","S_Name":"Supplier#000000001","S_Address":"sdrGnXCDRcfriBvY0KL,i","S_City":"PERU 0","S_Nation":"PERU","S_Region":"AMERICA","S_Phone":"27-989-741-2988","D_DateKey":"19960808","D_Date":"Augest 8, 1996","D_DayOfWeek":"Friday","D_Month":"Augest","D_Year":"1996","D_YearMonthNum":"199608","D_YearMonth":"Aug1996","D_DayNumInWeek":"6","D_DayNumInMonth":"8","D_DayNumInYear":"221","D_MonthNumInYear":"8","D_WeekNumInYear":"32","D_SellingSeason":"Summer","D_LastDayInWeekFl":"0","D_LastDayInMonthFl":"1","D_HolidayFl":"0","D_WeekDayFl":"1","P_PartKey":"178778","P_Name":"turquoise powder","P_MFGR":"MFGR#1","P_Category":"MFGR#11","P_Brand":"MFGR#1125","P_Colour":"beige","P_Type":"STANDARD POLISHED NICKEL","P_Size":"25","P_Container":"JUMBO BAG"} mview$pk: 1 D_DateKey: 1996-08-08 C_CustomerKey: 6748 S_SuppKey: 1 P_PartKey: 178778 LO_OrderKey: 35620 LO_LineNumber: 2 mview$hash: 3151656687 1 row in set (0.00 sec)

Using the UDF to find a document

The UDF does a full table scan, parsing all six million documents (TWICE!) as it goes along. Unsurprisingly, this is slow:

mysql> select * from json where json_extract(doc,'LO_OrderKey') = 35620 and json_extract(doc,'LO_LineNumber') = 2G *************************** 1. row *************************** id: 1 doc: {"LO_OrderKey":"35620","LO_LineNumber":"2","LO_CustKey":"6748","LO_PartKey":"178778","LO_SuppKey":"1","LO_OrderDateKey":"19960808","LO_OrderPriority":"3-MEDIUM","LO_ShipPriority":"0","LO_Quantity":"38","LO_ExtendedPrice":"7055726","LO_OrdTotalPrice":"14691804","LO_Discount":"8","LO_Revenue":"6491267","LO_SupplyCost":"111406","LO_Tax":"1","LO_CommitDateKey":"19960909","LO_ShipMode":"REG AIR","C_CustomerKey":"6748","C_Name":"Customer#000006748","C_Address":"RSPYBRlR7RX6 f7J8","C_City":"PERU 5","C_Nation":"PERU","C_Region":"AMERICA","C_Phone":"27-580-967-4556","C_MktSegment":"AUTOMOBILE","S_SuppKey":"1","S_Name":"Supplier#000000001","S_Address":"sdrGnXCDRcfriBvY0KL,i","S_City":"PERU 0","S_Nation":"PERU","S_Region":"AMERICA","S_Phone":"27-989-741-2988","D_DateKey":"19960808","D_Date":"Augest 8, 1996","D_DayOfWeek":"Friday","D_Month":"Augest","D_Year":"1996","D_YearMonthNum":"199608","D_YearMonth":"Aug1996","D_DayNumInWeek":"6","D_DayNumInMonth":"8","D_DayNumInYear":"221","D_MonthNumInYear":"8","D_WeekNumInYear":"32","D_SellingSeason":"Summer","D_LastDayInWeekFl":"0","D_LastDayInMonthFl":"1","D_HolidayFl":"0","D_WeekDayFl":"1","P_PartKey":"178778","P_Name":"turquoise powder","P_MFGR":"MFGR#1","P_Category":"MFGR#11","P_Brand":"MFGR#1125","P_Colour":"beige","P_Type":"STANDARD POLISHED NICKEL","P_Size":"25","P_Container":"JUMBO BAG"} 1 row in set (54.49 sec) mysql> explain select * from json where json_extract(doc,'LO_OrderKey') = 35620 and json_extract(doc,'LO_LineNumber') = 2G *************************** 1. row *************************** id: 1 select_type: SIMPLE table: json type: ALL possible_keys: NULL key: NULL key_len: NULL ref: NULL rows: 5233236 Extra: Using where 1 row in set (0.00 sec)

Using the index view to find a document

mysql> select id, doc from json_idx_v where LO_OrderKey = 35620 and LO_LineNumber = 2G *************************** 1. row *************************** id: 1 doc: {"LO_OrderKey":"35620","LO_LineNumber":"2","LO_CustKey":"6748","LO_PartKey":"178778","LO_SuppKey":"1","LO_OrderDateKey":"19960808","LO_OrderPriority":"3-MEDIUM","LO_ShipPriority":"0","LO_Quantity":"38","LO_ExtendedPrice":"7055726","LO_OrdTotalPrice":"14691804","LO_Discount":"8","LO_Revenue":"6491267","LO_SupplyCost":"111406","LO_Tax":"1","LO_CommitDateKey":"19960909","LO_ShipMode":"REG AIR","C_CustomerKey":"6748","C_Name":"Customer#000006748","C_Address":"RSPYBRlR7RX6 f7J8","C_City":"PERU 5","C_Nation":"PERU","C_Region":"AMERICA","C_Phone":"27-580-967-4556","C_MktSegment":"AUTOMOBILE","S_SuppKey":"1","S_Name":"Supplier#000000001","S_Address":"sdrGnXCDRcfriBvY0KL,i","S_City":"PERU 0","S_Nation":"PERU","S_Region":"AMERICA","S_Phone":"27-989-741-2988","D_DateKey":"19960808","D_Date":"Augest 8, 1996","D_DayOfWeek":"Friday","D_Month":"Augest","D_Year":"1996","D_YearMonthNum":"199608","D_YearMonth":"Aug1996","D_DayNumInWeek":"6","D_DayNumInMonth":"8","D_DayNumInYear":"221","D_MonthNumInYear":"8","D_WeekNumInYear":"32","D_SellingSeason":"Summer","D_LastDayInWeekFl":"0","D_LastDayInMonthFl":"1","D_HolidayFl":"0","D_WeekDayFl":"1","P_PartKey":"178778","P_Name":"turquoise powder","P_MFGR":"MFGR#1","P_Category":"MFGR#11","P_Brand":"MFGR#1125","P_Colour":"beige","P_Type":"STANDARD POLISHED NICKEL","P_Size":"25","P_Container":"JUMBO BAG"} 1 row in set (0.00 sec) mysql> explain select id, doc from json_idx_v where LO_OrderKey = 35620 and LO_LineNumber = 2G *************************** 1. row *************************** id: 1 select_type: SIMPLE table: json_idx type: ref possible_keys: LO_LineNumber_Idx,LO_OrderKey_Idx key: LO_OrderKey_Idx key_len: 9 ref: const rows: 4 Extra: Using index condition; Using where *************************** 2. row *************************** id: 1 select_type: SIMPLE table: json type: eq_ref possible_keys: PRIMARY key: PRIMARY key_len: 8 ref: ssb_json.json_idx.id rows: 1 Extra: NULL 2 rows in set (0.00 sec)

Keeping the index in sync

Flexviews materialized views need to be refreshed when the underlying table changes. Flexviews includes a REFRESH($mvid, $mode, $transaction_id) function.
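
In the common case you would compute and apply all pending changes in one call, along these lines (a sketch using the view registered earlier; passing NULL as the transaction id refreshes up to the latest captured change):

mysql> call flexviews.refresh(flexviews.get_id('ssb_json','json_idx'), 'BOTH', NULL);

The COMPUTE and APPLY modes used below simply split that work into two separate steps.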

I am going to remove one document from the table:

mysql> delete from json where id = 10000;
Query OK, 1 row affected (0.01 sec)

Note there is now one row in the materialized view change log. dml_type is -1 because it is a delete:

mysql> select * from flexviews.mvlog_f1673fac9814a93508a1c917566ecd4dG *************************** 1. row *************************** dml_type: -1 uow_id: 113 fv$server_id: 33 fv$gsn: 1083 id: 10000 doc: {"LO_OrderKey":"3359521","LO_LineNumber":"2","LO_CustKey":"10306","LO_PartKey":"77997","LO_SuppKey":"4","LO_OrderDateKey":"19951010","LO_OrderPriority":"2-HIGH","LO_ShipPriority":"0","LO_Quantity":"43","LO_ExtendedPrice":"8492457","LO_OrdTotalPrice":"27032802","LO_Discount":"2","LO_Revenue":"8322607","LO_SupplyCost":"118499","LO_Tax":"4","LO_CommitDateKey":"19951228","LO_ShipMode":"FOB","C_CustomerKey":"10306","C_Name":"Customer#000010306","C_Address":"4UR9tz8","C_City":"ROMANIA 5","C_Nation":"ROMANIA","C_Region":"EUROPE","C_Phone":"29-361-986-3513","C_MktSegment":"BUILDING","S_SuppKey":"4","S_Name":"Supplier#000000004","S_Address":"qGTQJXogS83a7MB","S_City":"MOROCCO 4","S_Nation":"MOROCCO","S_Region":"AFRICA","S_Phone":"25-128-190-5944","D_DateKey":"19951010","D_Date":"Octorber 10, 1995","D_DayOfWeek":"Wednesday","D_Month":"Octorber","D_Year":"1995","D_YearMonthNum":"199510","D_YearMonth":"Oct1995","D_DayNumInWeek":"4","D_DayNumInMonth":"10","D_DayNumInYear":"283","D_MonthNumInYear":"10","D_WeekNumInYear":"41","D_SellingSeason":"Fall","D_LastDayInWeekFl":"0","D_LastDayInMonthFl":"1","D_HolidayFl":"0","D_WeekDayFl":"1","P_PartKey":"77997","P_Name":"burnished olive","P_MFGR":"MFGR#2","P_Category":"MFGR#24","P_Brand":"MFGR#2426","P_Colour":"orchid","P_Type":"MEDIUM PLATED TIN","P_Size":"16","P_Container":"WRAP PKG"} 1 row in set (0.01 sec)

Now we can verify the materialized view is out of date:

mysql> select * from json_idx where id = 10000; +----------+------------+---------------+-----------+-----------+-------------+---------------+-------+------------+ | mview$pk | D_DateKey | C_CustomerKey | S_SuppKey | P_PartKey | LO_OrderKey | LO_LineNumber | id | mview$hash | +----------+------------+---------------+-----------+-----------+-------------+---------------+-------+------------+ | 10000 | 1995-10-10 | 10306 | 4 | 77997 | 3359521 | 2 | 10000 | 2937185172 | +----------+------------+---------------+-----------+-----------+-------------+---------------+-------+------------+ 1 row in set (2.60 sec)

To bring the index up to date we must refresh it. Usually you will use the ‘BOTH’ mode to ‘COMPUTE’ and ‘APPLY’ the changes at the same time, but I am going to use COMPUTE mode to show you what ends up in the delta table:
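
(The COMPUTE call itself is not captured in the output below; as a sketch it would be something like the following, which reads the pending changes from the change log into the delta table without applying them to json_idx yet.)

mysql> call flexviews.refresh(flexviews.get_id('ssb_json','json_idx'), 'COMPUTE', NULL);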

mysql> select * from json_idx_deltaG *************************** 1. row *************************** dml_type: -1 uow_id: 113 fv$gsn: 1083 D_DateKey: 1995-10-10 C_CustomerKey: 10306 S_SuppKey: 4 P_PartKey: 77997 LO_OrderKey: 3359521 LO_LineNumber: 2 id: 10000 mview$hash: 2937185172 1 row in set (0.00 sec)

Delta tables are similar to materialized view change log tables, except they contain insertions and deletions to the view contents. In this case, you can see dml_type is -1 and id = 10000, so the row from the view corresponding to the row we deleted, will be deleted when the change is applied.

Finally the change can be applied:

mysql> call flexviews.refresh(flexviews.get_id('ssb_json','json_idx'), 'APPLY',NULL);
Query OK, 2 rows affected (0.47 sec)

mysql> select * from json_idx where id = 10000; -- note, we didn't index id in the MV
Empty set (2.61 sec)

Finally, it makes sense to try to keep the index in sync as quickly as possible using a MySQL event:

DELIMITER ;;
CREATE EVENT IF NOT EXISTS flexviews.refresh_json_idx
ON SCHEDULE EVERY 1 SECOND
DO
BEGIN
  DECLARE v_got_lock tinyint default 0;
  SELECT GET_LOCK('JSON_IDX_LOCK', 0) INTO v_got_lock;
  IF v_got_lock = 1 THEN
    CALL flexviews.refresh(flexviews.get_id('ssb_json','json_idx'),'BOTH',NULL);
    SELECT RELEASE_LOCK('JSON_IDX_LOCK') INTO @discard;
  END IF;
END;;
DELIMITER ;

So there you have it. A way to index and quickly search through JSON documents and keep the index in sync automatically.

The post ‘Indexing’ JSON documents for efficient MySQL queries over JSON data appeared first on MySQL Performance Blog.

Can't create logfiles after successful backup

Lastest Forum Posts - February 17, 2015 - 3:18am
Hey!
Thank you for the great software, first of all. I am new to Linux servers and am trying to use Percona XtraBackup to create backups of MySQL with InnoDB tables.
First I start creating a backup from the slave server:

sudo innobackupex --defaults-file=/etc/my.cnf --parallel=4 --password='mypass' --slave-info --safe-slave-backup --compress --compress-threads=4 --rsync --throttle=30 /backups/test/

Some InnoDB info:
xtrabackup: using the following InnoDB configuration:
xtrabackup: innodb_data_home_dir = ./
xtrabackup: innodb_data_file_path = ibdata1:10M:autoextend
xtrabackup: innodb_log_group_home_dir = ./
xtrabackup: innodb_log_files_in_group = 2
xtrabackup: innodb_log_file_size = 1073741824


After 15 minutes I receive the message:
innobackupex: completed OK!

I see files:
backup-my.cnf
mysql
xtrabackup_checkpoints
zabbix
ibdata1
performance_schema
xtrabackup_info
ibdata1.qp
test
xtrabackup_slave_info

Then, if I understood everything correctly, I must re-create the log files. So I run:
sudo innobackupex --apply-log --redo-only --defaults-file=/etc/my.cnf --parallel=4 --password='mypass' --no-timestamp --slave-info --safe-slave-backup --compress --compress-threads=4 --rsync --throttle=30 /backups/test/2015-02-17_09-26-12

And I see:
xtrabackup version 2.2.6 based on MySQL server 5.6.21 Linux (x86_64) (revision id: )
xtrabackup: cd to /backups/test/2015-02-17_09-26-12
xtrabackup: This target seems to be not prepared yet.
2015-02-17 12:56:25 7ff58bdc8720 InnoDB: Operating system error number 2 in a file operation.
InnoDB: The error means the system cannot find the path specified.
xtrabackup: Warning: cannot open ./xtrabackup_logfile. will try to find.
xtrabackup: Fatal error: cannot find ./xtrabackup_logfile.
xtrabackup: Error: xtrabackup_init_temp_log() failed.
innobackupex: got a fatal error with the following stacktrace: at /usr/bin/innobackupex line 2632
main::apply_log() called at /usr/bin/innobackupex line 1561
innobackupex: Error:
innobackupex: ibbackup failed at /usr/bin/innobackupex line 2632.

For some reason I do not have "xtrabackup_logfile", although the backup creation completed successfully.

Do you have any ideas about what I am doing wrong?

Thanks in advance, Viktor.

Error: Reading from the stream has failed on Percona release 0.1-3

Lastest Forum Posts - February 17, 2015 - 12:02am
Hi,
I am using a .NET service that connects to a MySQL 5.6 server and moves
data between two databases on the server. The versions of all the elements
are:
- MySQL server 5.6 (Percona XtraBackup)
- Percona release 0.1-3
- Percona-xtrabackup-2.2.7
- Windows 6.1 7601
- .NET Framework version 4.5.51209
- MySQL.Data 6.9.5.0 (Connector/Net)

The problem is that, randomly, this process doesn't connect and throws the
following error:

"Reading from the stream has failed"

I have searched for whether this could be a MySQL bug, but I have only found bugs that appear
to be solved already: an exceeded net_write_timeout should no longer throw this
exception. Another suggested solution is to check whether the last connection is open and, if it is not, open a new connection, but I can't open another connection by
checking the last one because it should be the same connection. Nothing more.

Do you have knowledge of any bug or problem associated with Percona in this case?

Thank you very much.

slow create table on Percona Server 5.5 and 5.6

Lastest Forum Posts - February 16, 2015 - 9:28am
I have a Percona server with around 50 databases, and each database has around 600 tables.
We create a lot of these databases for testing automation.
Table creation takes a lot of time, and I can't find the cause of this.
We use InnoDB and file-per-table is enabled.
Here is some more data:
SHOW PROFILE FOR QUERY x;
+----------------------+----------+
| Status | Duration |
+----------------------+----------+
| starting | 0.000138 |
| checking permissions | 0.000008 |
| Opening tables | 0.000094 |
| creating table | 3.104143 |
| After create | 0.000050 |
| query end | 0.056317 |
| closing tables | 0.000013 |
| freeing items | 0.000093 |
| cleaning up | 0.000061 |
+----------------------+----------+



BUFFER POOL AND MEMORY
----------------------
Total memory allocated 549453824; in additional pool allocated 0
Total memory allocated by read views 1312
Internal hash tables (constant factor + variable factor)
Adaptive hash index 13801856 (8853472 + 4948384)
Page hash 553976 (buffer pool 0 only)
Dictionary cache 12334921 (2214224 + 10120697)
File system 7685104 (812272 + 6872832)
Lock system 1333496 (1329176 + 4320)
Recovery system 0 (0 + 0)
Dictionary memory allocated 10120697
Buffer pool size 32767
Buffer pool size, bytes 536854528
Free buffers 8265
Database pages 22999
Old database pages 8469
Modified db pages 0
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 41998, not young 226275
0.00 youngs/s, 0.00 non-youngs/s
Pages read 17188, created 5811, written 69152
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 22999, unzip_LRU len: 2450
I/O sum[0]:cur[0], unzip sum[0]:cur[0]

Right now it's not possible to give more RAM to the server (which is a virtual machine).
The thing is that this started happening 3 months ago when we had Percona Server 5.5.
We upgraded to 5.6 and the problem persists.
Any ideas?

Conversion from InnoDB engine to MyISAM engine

Lastest Forum Posts - February 16, 2015 - 1:56am
I am getting the following error while converting from the InnoDB engine to the MyISAM engine; please let me know how to resolve it:
Cannot delete or update a parent row: a foreign key constraint fails

MySQL Create Table with Partition Very Slow on server machine

Lastest Forum Posts - February 16, 2015 - 12:45am

MySQL version 5.6 InnoDB engine

The following CREATE TABLE statement with partitioning
runs in 10 seconds on my development machine (Windows 7, i5, 8 GB RAM, single disk),
runs in 16 seconds on a virtual machine (Windows Server 2012, Xeon 2 GHz, 615 MB RAM, single disk),
and takes 100 seconds on my server machine (Windows Server 2008 R2 Standard, Xeon 2.4 GHz, 2 processors, 32 GB RAM, ATA SCSI mirrored drives).
I am not able to understand why there is so much performance difference.
Related question on MySql forum http://forums.mysql.com/read.php?24,392502,392502

CREATE TABLE IF NOT EXISTS `TestSpeed`.`CTM` (
  `ServerID` BIGINT NOT NULL,
  `ClientID` BIGINT NOT NULL,
  `CommunityID` INT NOT NULL,
  PRIMARY KEY (`ClientID`, `CommunityID`),
  UNIQUE INDEX `IX_CTM` (`ServerID` ASC, `CommunityID`),
  INDEX `IX_CTM_ClientID` (`ClientID` ASC)
) PARTITION BY HASH (CommunityID) PARTITIONS 300;

MyISAM tables

Lastest Forum Posts - February 14, 2015 - 11:35am
I'm new to XtraBackup. Just tried my first backup. I'm trying to understand the results. The database I backed up was on a remote host and all the tables in the db are in MyISAM format. Yet all I see are the InnoDB files in the backup results directory. Can someone explain how this works and confirm I got a valid result? I'd like to use this utility but the results aren't really transparent to me (at least not yet).

Percona XtraDB Cluster 5.6: a tale of 2 GTIDs

Latest MySQL Performance Blog posts - February 13, 2015 - 6:27am

Say you have a cluster with 3 nodes using Percona XtraDB Cluster (PXC) 5.6 and one asynchronous replica connected to node1. If asynchronous replication is using GTIDs, moving the replica so that it is connected to node2 is trivial, right? Actually replication can easily break for reasons that may not be obvious at first sight.

Summary

Let’s assume we have the following setup with 3 PXC nodes and one asynchronous replica:


Regarding MySQL GTIDs, a Galera cluster behaves like a distributed master: transactions coming from any node will use the same auto-generated uuid. This auto-generated uuid is related to the Galera cluster uuid, and it is none of the nodes' own server uuids (ABC, DEF and GHI stand for the server uuids of node1, node2 and node3 in this setup).

Transactions executed for instance on node1 but not replicated to all nodes with Galera replication will generate a GTID using the uuid of the node (ABC). This can happen for writes on MyISAM tables if wsrep_replicate_myisam is not enabled.
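
A quick way to check whether a node replicates MyISAM writes (the setting is OFF by default) is:

mysql> show global variables like 'wsrep_replicate_myisam';

If it is OFF, a write hitting a MyISAM table stays local to the node it was executed on, and only that node's binary log will contain the corresponding GTID.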

Such local transactions bring the same potential issues as errant transactions do for a regular master-slave setup using GTID-based replication: if node3 has a local transaction, when you connect replica1 to it, replication may break instantly.

So do not assume that moving replica1 from node2 to node3 is a safe operation if you don’t check errant transactions on node3 first.
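
One way to check for errant transactions before re-pointing the replica is to compare executed GTID sets with GTID_SUBTRACT(), for example (a sketch; substitute the gtid_executed values of the target node and of replica1):

mysql> select gtid_subtract('<gtid_executed from node3>', '<gtid_executed from replica1>');

Anything left over that does not belong to the shared cluster uuid is a candidate local (errant) transaction.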

And if you find errant transactions that you don’t want to get replicated to replica1, there is only one good fix: insert a corresponding empty transaction on replica1.
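
Injecting such an empty transaction on replica1 would look roughly like this (a sketch; the uuid:N pair is the errant transaction found on the node, for instance the local MyISAM write shown later in this post):

replica1> set gtid_next = '03c236a0-f860-11e3-9b80-9cebe8067a3f:1';
replica1> begin; commit;
replica1> set gtid_next = 'AUTOMATIC';

The replica then records that GTID as executed and will not try to fetch the corresponding transaction from the new master's binary logs.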

Galera GTID vs MySQL GTID

Both kinds of GTIDs use the same format: <source_id>:<trx_number>.

For Galera, <source_id> is generated when the cluster is bootstrapped. This <source_id> is shared by all nodes.

For MySQL, <source_id> is the server uuid. So it is easy to identify from which server a transaction originates.

Knowing the Galera GTID of a transaction will give you no clue about the corresponding MySQL GTID of the same transaction, and vice versa. You should simply consider them as separate identifiers.

MySQL GTID generation when writing to the cluster

What can be surprising is that writing to node1 will generate a MySQL GTID where <source_id> is not the server uuid:

node1> select @@server_uuid;
+--------------------------------------+
| @@server_uuid                        |
+--------------------------------------+
| 03c236a0-f860-11e3-9b80-9cebe8067a3f |
+--------------------------------------+

node1> select @@global.gtid_executed;
+------------------------------------------+
| @@global.gtid_executed                   |
+------------------------------------------+
| b1f3f33a-0789-ee1c-43f3-f8373e12f1ea:1   |
+------------------------------------------+

Even more surprising is that if you write to node2, you will see a single GTID set as if both transactions had been executed on the same server:

node2> select @@global.gtid_executed;
+------------------------------------------+
| @@global.gtid_executed                   |
+------------------------------------------+
| b1f3f33a-0789-ee1c-43f3-f8373e12f1ea:2   |
+------------------------------------------+

Actually this is reasonable: the cluster acts as a distributed master regarding MySQL replication, so it makes sense that all nodes share the same <source_id>.

And by the way, if you are puzzled about how this ‘anonymous’ <source_id> is generated, look at this:

mysql> show global status like 'wsrep_local_state_uuid';
+------------------------+--------------------------------------+
| Variable_name          | Value                                |
+------------------------+--------------------------------------+
| wsrep_local_state_uuid | 4e0c0cc5-f876-11e3-bc0c-07c8c1ed0e15 |
+------------------------+--------------------------------------+

node1> select @@global.gtid_executed;
+------------------------------------------+
| @@global.gtid_executed                   |
+------------------------------------------+
| b1f3f33a-0789-ee1c-43f3-f8373e12f1ea:1   |
+------------------------------------------+

If you ‘sum’ both <source_id>, you will get ffffffff-ffff-ffff-ffff-ffffffffffff.

How can local transactions show up?

Now the question is: given that any transaction executed on any node of the cluster is replicated to all nodes, how can a local transaction (a transaction only found on one node) appear?

The most common reason is probably a write on a MyISAM table if wsrep_replicate_myisam is not enabled, simply because writes on MyISAM tables are not replicated by Galera by default:

# Node1
mysql> insert into myisam_table (id) values (1);
mysql> select @@global.gtid_executed;
+----------------------------------------------------------------------------------+
| @@global.gtid_executed                                                            |
+----------------------------------------------------------------------------------+
| 03c236a0-f860-11e3-9b80-9cebe8067a3f:1, b1f3f33a-0789-ee1c-43f3-f8373e12f1ea:1-8 |
+----------------------------------------------------------------------------------+

# Node2
mysql> select @@global.gtid_executed;
+------------------------------------------+
| @@global.gtid_executed                   |
+------------------------------------------+
| b1f3f33a-0789-ee1c-43f3-f8373e12f1ea:1-8 |
+------------------------------------------+

As you can see the GTID of the local transaction on node1 uses the uuid of node1, which makes it easy to spot.

Are there other statements that can create a local transaction? Actually we found that this is also true for FLUSH PRIVILEGES, which is tracked in the binlogs but not replicated by Galera. See the bug report.

As you probably know, these local transactions can hurt when you connect an async replica on a node with a local transaction: the replication protocol used by GTID-based replication will make sure that the local transaction will be fetched from the binlogs and executed on the async slave. But of course, if the transaction is no longer in the binlogs, that triggers a replication error!

Conclusion

I will repeat here what I always say about MySQL GTIDs: it is a great feature but replication works a bit differently from regular position-based replication. So make sure you understand the main differences otherwise it is quite easy to be confused.

The post Percona XtraDB Cluster 5.6: a tale of 2 GTIDs appeared first on MySQL Performance Blog.

mysql.innodb_index_stats "Duplicate entry" error on restore

Lastest Forum Posts - February 13, 2015 - 4:49am
Hej there,

since MySQL 5.6 there are at least two new tables located in the mysql database:

- innodb_index_stats
- innodb_table_stats

Now my problem. I tried to recover from a full database dump, including the mysql database. All databases were created and rows inserted successfully, except the mysql database.

At this point I always get:

ERROR 1062 (23000) at line 9476752: Duplicate entry 'XXXXXXXXXXXX-PRIMARY-n_diff_pfx01' for key

I think I know why this is happening, and I fixed the issue by editing the dump file and changing "insert into" to "replace into". But I don't know what the right way is to avoid further problems like this.

I’d think the following way is a good solution. What do you think? Is there an alternative?

on dump, ignore `mysql.innodb_index_stats` and `mysql.innodb_table_stats`
and on replication, also ignore these tables

What is best practice for this matter?

Regards,
Christian

How parallelize the SST network transfer

Lastest Forum Posts - February 13, 2015 - 2:25am
I know the xtrabackup option to parallelize the xtrabackup dump, but I would also like to parallelize the network transfer during the SST from the donor to the joiner.

The reason is that the joiner is an AWS instance that has a bandwidth cap of 8 Mbit/s per process (the donor is not an AWS instance). A known workaround is to spread the replication across multiple processes, but I can't find a way to achieve this with XtraDB Cluster.

Any ideas on how to solve this problem?
