

Feed aggregator

Reports: Warriors' Stephen Curry to be named 2014-15 NBA MVP

Latest Forum Posts - May 5, 2015 - 2:05am


The best player on the best team of the NBA regular season looks set to claim the league's top individual award. Golden State Warriors point guard Stephen Curry will soon be announced as the 2014-15 NBA Most Valuable Player, according to a report from Monte Poole of CSNBayArea.com confirmed by Yahoo Sports' Marc Spears.

Warriors guard Stephen Curry will be named the NBA's Most Valuable Player, according to the reports. Curry was considered the frontrunner for the award for the bulk of the regular season and outlasted plenty of deserving competition, including Houston Rockets guard James Harden.


Poole reported the news following the Warriors' comfortable victory over the Memphis Grizzlies in Game 1 of the Western Conference Semifinals. He also provided a few details on the announcement, including its potential date:

Official announcement will come this week, sources said, likely on Monday between Games 1 and 2 of the Western Conference semifinals series between the Warriors and the Memphis Grizzlies. The timing of the announcement, one source said Sunday morning, remains flexible and could come between Games 2 and 3 of the Warriors-Grizzlies series.
Curry, who on Sunday scored a game-high 22 points in a 101-86 Warriors victory over Memphis, will become the first Warrior in the franchise's 53-year California history to win the league's highest individual honor. The 27-year-old in February started the All-Star Game for the second consecutive season, this time leading all players in number of votes. [...]

The runner-up in the voting, according to sources, was Houston guard James Harden, who was considered the co-leader in the two-man race with Curry.
Curry had been anxiously awaiting news of the award. A week ago, he told Yahoo's Marc Spears that he got nervous after receiving a call on his phone that said it was from "NBA." It was actually NBA vice president Rod Thorn asking what the referees told him after he was fouled on a game-tying 3-pointer at the end of regulation during Game 3 of the Warriors' first-round series against the New Orleans Pelicans.

Curry worked his way into the NBA conversation early as the Warriors established themselves as the best team of the regular season. He was the top player on a roster that won a franchise-record 67 games, averaging 23.8 ppg (48.7 percent from the field, 44.3 percent from deep) and 7.7 apg while breaking his own 2-year-old record for 3-pointers made in a single season. Curry also improved his previously poor defense and played with plenty of style to make himself the most consistently fun superstar in the NBA.

Harden led a number of challengers for the award, including Oklahoma City Thunder dynamo Russell Westbrook (whose candidacy died as soon as his team missed out on the playoffs), New Orleans Pelicans forward Anthony Davis, Los Angeles Clippers point guard Chris Paul, and some guy on the Cleveland Cavaliers named LeBron James. Harden's candidacy depended on his brutally efficient scoring and role as the offensive linchpin for a Rockets squad that lost Dwight Howard for several months due to knee soreness. He would have been a perfectly acceptable winner of the award, but Curry had the credentials and broad popularity to become the clear frontrunner. He then won even more support with several fantastic performances after the Warriors had already clinched the West's No. 1 seed. Curry also has a strong argument for his level of importance to the Warriors relative to that of other players, because first-year head coach Steve Kerr reformulated the team's elite offense around his skills when he took over last summer.

Wilt Chamberlain is the only player in Warriors history to win MVP, but he did so in 1959-60 when the franchise was located in Philadelphia.

help interpreting output for log_slow_verbosity = full

Latest Forum Posts - May 4, 2015 - 9:21am
I have a slow query I'd like to optimize.

When I run the query in question, I can see this output in the slow query log:

# Time: 150504 15:42:27
# User@Host: root[root] @ localhost []  Id: 37
# Query_time: 6.167118  Lock_time: 0.000311  Rows_sent: 490  Rows_examined: 2792  Rows_affected: 0
# Bytes_sent: 9664  Tmp_tables: 1  Tmp_disk_tables: 0  Tmp_table_sizes: 126992
# InnoDB_trx_id: 2F6CC547
# QC_Hit: No  Full_scan: No  Full_join: No  Tmp_table: Yes  Tmp_table_on_disk: No
# Filesort: Yes  Filesort_on_disk: No  Merge_passes: 0
# InnoDB_IO_r_ops: 1623  InnoDB_IO_r_bytes: 26591232  InnoDB_IO_r_wait: 6.007482
# InnoDB_rec_lock_wait: 0.000000  InnoDB_queue_wait: 0.000000
# InnoDB_pages_distinct: 2016

What I can see is that the value of InnoDB_IO_r_wait is too high: above 6 seconds.

What does this mean? How can I optimize it?

The query in question is a join of two tables, one with about 6 million rows and the other with about 65 million.

Server version: 5.6.22-72.0-log Percona Server (GPL), Release 72.0, Revision 738
Machine: virtual box with 10 cores and 16 GB RAM.
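Not an answer to the optimization question, but the I/O share implied by the log entry can be computed mechanically. A small shell sketch, with the two relevant log lines hard-coded from the entry above:

```shell
# Compute what fraction of Query_time was spent waiting on InnoDB read I/O,
# using values hard-coded from the slow-log entry in this post.
sample='# Query_time: 6.167118 Lock_time: 0.000311 Rows_sent: 490 Rows_examined: 2792
# InnoDB_IO_r_ops: 1623 InnoDB_IO_r_bytes: 26591232 InnoDB_IO_r_wait: 6.007482'

share=$(printf '%s\n' "$sample" | awk '
  /# Query_time:/ { qt = $3 }
  { for (i = 1; i <= NF; i++) if ($i == "InnoDB_IO_r_wait:") rw = $(i + 1) }
  END { printf "%.1f", 100 * rw / qt }')

echo "InnoDB_IO_r_wait share of Query_time: ${share}%"
```

Roughly 97% of the query time here is read I/O wait, which hints that the ~2000 distinct pages touched were read from disk rather than served from the buffer pool; comparing the buffer pool size with the working set would be a reasonable next step.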

Keep your MySQL data in sync when using Tungsten Replicator

Latest MySQL Performance Blog posts - May 4, 2015 - 3:00am

MySQL replication isn’t perfect and sometimes our data gets out of sync, either by a failure in replication or human intervention. We are all familiar with Percona Toolkit’s pt-table-checksum and pt-table-sync to help us check and fix data inconsistencies – but imagine the following scenario where we mix regular replication with the Tungsten Replicator:

We have regular replication going from a master (db1) to 4 slaves (db2, db3, db4 and db5), but we also find that db3 is itself a master for db4 and db5, using Tungsten replication for one database called test. This setup works this way because it was deployed some time ago, when multi-source replication was not possible with regular MySQL replication. Multi-source replication is now a working feature in MariaDB 10 and is also coming in the new MySQL 5.7 (not yet released)… in our case, it is what it is.

So how do we checksum and sync data in this scenario? Well, we can still achieve it with these tools, but we need to take some extra actions:

pt-table-checksum  

First of all we need to understand that this tool was designed to checksum tables in a regular MySQL replication environment, so we need to take special care to avoid checksum errors caused by replication lag (yes, Tungsten replication may still suffer replication lag). We also need to instruct the tool to discover slaves via a DSN table, because by default it discovers replicas using regular replication. Lag checking is handled with the --plugin option.

My colleague Kenny already wrote an article about this some time ago, but let's revisit it and put some graphics around our case. In order to make pt-table-checksum work properly within a Tungsten Replicator environment we need to:
- Configure the --plugin flag using this plugin to check replication lag.
- Use --recursion-method=dsn to avoid auto-discovery of slaves.
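For reference, the DSN table that --recursion-method=dsn points at (h=db1,D=percona,t=dsns in the command below) has to exist beforehand. A sketch of its layout, following the column structure documented for pt-table-checksum; the slave host names db4/db5 are an assumption based on the scenario above. The SQL is only printed here; in practice you would pipe it into mysql on db1:

```shell
# Print the SQL that creates and populates the DSN table used by
# --recursion-method=dsn (dry run; host names db4/db5 are assumptions).
dsn_sql="CREATE TABLE IF NOT EXISTS percona.dsns (
  id        int NOT NULL AUTO_INCREMENT,
  parent_id int DEFAULT NULL,
  dsn       varchar(255) NOT NULL,
  PRIMARY KEY (id)
);
INSERT INTO percona.dsns (dsn) VALUES ('h=db4'), ('h=db5');"
printf '%s\n' "$dsn_sql"
```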

[root@db3]$ pt-table-checksum --replicate=percona.cksums --create-replicate-table --no-check-replication-filters --no-check-binlog-format --recursion-method=dsn=h=db1,D=percona,t=dsns --plugin=/home/mysql/bin/pt-plugin-tungsten_replicator.pl --check-interval=5 --max-lag=10 -d test
Created plugin from /home/mysql/bin/pt-plugin-tungsten_replicator.pl.
PLUGIN get_slave_lag: Using Tungsten Replicator to check replication lag
Checksumming test.table1:   2% 18:14 remain
Checksumming test.table1:   5% 16:25 remain
Checksumming test.table1:   9% 15:06 remain
Checksumming test.table1:  12% 14:25 remain
Replica lag is 2823 seconds on db5. Waiting.
Checksumming test.table1:  99% 14:25 remain
            TS ERRORS  DIFFS      ROWS  CHUNKS SKIPPED    TIME TABLE
04-28T14:17:19      0     13 279560873    4178       0 9604.892 test.table1

So far so good. We have implemented a good plugin that allows us to perform checksums considering replication lag, and we found differences that we need to take care of. Let's see how to do it.

pt-table-sync

pt-table-sync is the tool we need to fix data differences, but in this case we have two problems:
1- pt-table-sync doesn't support --recursion-method=dsn, so we need to pass the hostnames to be synced as parameters. A feature request to add this recursion method can be found here (hopefully it will be added soon). This means we will need to sync each slave separately.
2- Because of 1, we can't use the --replicate flag, so pt-table-sync will need to re-run the checksums to find and fix differences. If the checksum found differences in more than one table, I'd recommend running the sync in separate steps, because pt-table-sync modifies data. We don't want to blindly ask it to fix our servers, right?
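Since each slave has to be handled separately (problem 1 above), the per-slave invocations can be generated with a simple loop. This is a dry-run sketch that only prints the commands; db4 and db5 are the Tungsten slaves from the scenario above:

```shell
# Dry run: generate one pt-table-sync invocation per Tungsten slave, since
# pt-table-sync has no --recursion-method=dsn. Commands are printed, not run.
master=db3
cmds=""
for slave in db4 db5; do
  cmds="$cmds
pt-table-sync --print --verbose --databases test -t table1 --no-foreign-key-checks h=$master h=$slave"
done
printf '%s\n' "$cmds"
```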

That being said, I'd recommend running pt-table-sync with the --print flag first, just to make sure the sync process is going to do what we want it to do, as follows:

[root@db3]$ pt-table-sync --print --verbose --databases test -t table1 --no-foreign-key-checks h=db3 h=db4
# Syncing h=db4
# DELETE REPLACE INSERT UPDATE ALGORITHM START END EXIT DATABASE.TABLE
....
UPDATE `test`.`table1` SET `id`='2677', `status`='open', `created`='2015-04-27 02:22:33', `created_by`='8', `updated`='2015-04-27 02:22:33' WHERE `ix_id`='9585' LIMIT 1 /*percona-toolkit src_db:test src_tbl:table1 src_dsn:h=db3 dst_db:test dst_tbl:table1 dst_dsn:h=db4 lock:0 transaction:1 changing_src:0 replicate:0 bidirectional:0 pid:16135 user:mysql host:db3*/;
UPDATE `test`.`table1` SET `id`='10528', `status`='open', `created`='2015-04-27 08:22:21', `created_by`='8', `updated`='2015-04-28 10:22:55' WHERE `ix_id`='9586' LIMIT 1 /*percona-toolkit src_db:test src_tbl:table1 src_dsn:h=db3 dst_db:test dst_tbl:table1 dst_dsn:h=db4 lock:0 transaction:1 changing_src:0 replicate:0 bidirectional:0 pid:16135 user:mysql host:db3*/;
UPDATE `test`.`table1` SET `id`='8118', `status`='open', `created`='2015-04-27 18:22:20', `created_by`='8', `updated`='2015-04-28 10:22:55' WHERE `ix_id`='9587' LIMIT 1 /*percona-toolkit src_db:test src_tbl:table1 src_dsn:h=db3 dst_db:test dst_tbl:table1 dst_dsn:h=db4 lock:0 transaction:1 changing_src:0 replicate:0 bidirectional:0 pid:16135 user:mysql host:db3*/;
UPDATE `test`.`table1` SET `id`='1279', `status`='open', `created`='2015-04-28 06:22:16', `created_by`='8', `updated`='2015-04-28 10:22:55' WHERE `ix_id`='9588' LIMIT 1 /*percona-toolkit src_db:test src_tbl:table1 src_dsn:h=db3 dst_db:test dst_tbl:table1 dst_dsn:h=db4 lock:0 transaction:1 changing_src:0 replicate:0 bidirectional:0 pid:16135 user:mysql host:db3*/;
....
# 0 0 0 31195 Chunk 11:11:11 11:11:12 2 test.table1

Now that we are good to go, we will switch --print to --execute:

[root@db3]$ pt-table-sync --execute --verbose --databases test -t table1 --no-foreign-key-checks h=db3 h=db4
# Syncing h=db4
# DELETE REPLACE INSERT UPDATE ALGORITHM START END EXIT DATABASE.TABLE
# 0 0 0 31195 Nibble 13:26:19 14:48:54 2 test.table1

And voila: data is in sync now.

Conclusions

Tungsten Replicator is a useful tool for deploying this kind of scenario with no need to upgrade or change MySQL versions, but it still takes some care to avoid data inconsistencies. General recommendations on good replication practices still apply here, i.e. not allowing users to run write commands on slaves, and so on.

With this in mind, we can still have issues with our data, but now, with a small extra effort, we can keep things in good health without much pain.

The post Keep your MySQL data in sync when using Tungsten Replicator appeared first on MySQL Performance Blog.

about database comparison

Latest Forum Posts - May 2, 2015 - 1:10am
I want to compare three kinds of databases (performance, security and cost) to measure which one is better, by using queries to read CPU, I/O, storage and memory usage. Can anyone help me? I did not get on well with sysbench... I need someone to teach me in detail, step by step.

Compiling Percona Server-5.6.23-72.1 test hp_test2 fails on OS X Yosemite

Latest Forum Posts - May 1, 2015 - 1:22pm
Hi folks,
I am attempting to compile Percona Server 5.6.23-72.1 from source on OS X Yosemite. Everything compiles OK, but make test gives:

The following tests FAILED:
2 - hp_test2 (SEGFAULT)
Errors while running CTest
make: *** [test] Error 8

Can anyone give me some pointers? Thanks

First incremental backup shows no increment on live Slave server?

Latest Forum Posts - May 1, 2015 - 9:48am
I completed my first full backup of the server, which went flawlessly, but I did my first incremental and it seems to have backed up the entire database again (not unexpected, as all the tables are MyISAM). However, the concern here is what xtrabackup_checkpoints is reporting:
backup_type = incremental
from_lsn = 1377199387
to_lsn = 1377199387
last_lsn = 1377199387

Shouldn't there be at least a little bit of incrementing here when backing up a live slave? I want to make sure I'm interpreting this right: according to the documentation, I should see larger numbers in to_lsn and last_lsn than the starting number, right?
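For what it's worth, the from_lsn/to_lsn comparison can be scripted. A sketch below, with the checkpoints content hard-coded from the output above. Note that the LSN tracks only InnoDB redo activity, so MyISAM-only changes would leave it unchanged, which appears to match what is shown:

```shell
# Parse xtrabackup_checkpoints and flag the "no increment" case: equal
# from_lsn and to_lsn mean no InnoDB redo activity between backups.
cat > xtrabackup_checkpoints <<'EOF'
backup_type = incremental
from_lsn = 1377199387
to_lsn = 1377199387
last_lsn = 1377199387
EOF
from_lsn=$(awk '$1 == "from_lsn" { print $3 }' xtrabackup_checkpoints)
to_lsn=$(awk '$1 == "to_lsn" { print $3 }' xtrabackup_checkpoints)
if [ "$from_lsn" = "$to_lsn" ]; then
  verdict="no InnoDB changes since the base backup"
else
  verdict="InnoDB LSN advanced by $((to_lsn - from_lsn))"
fi
echo "$verdict"
rm -f xtrabackup_checkpoints
```

If I read the documentation right, incremental backups only track InnoDB page changes via the LSN; MyISAM tables are copied in full every time, which would explain both the unchanged LSNs and the full-size "incremental".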

Thanks in advance.

Jim Yarrow

Problems setting up TokuDB with Percona Server

Latest Forum Posts - May 1, 2015 - 8:10am
Not sure if this is the right place to post it, but I've been setting up some virtual machines to test the TokuDB storage engine and have been getting an issue when trying to set up TokuDB from the YUM repo. The issue is not with the Percona repo but with EPEL, as the TokuDB setup has jemalloc as a dependency.

[vagrant@node01 vagrant]$ sudo yum -y install jemalloc
Loaded plugins: fastestmirror, versionlock
Loading mirror speeds from cached hostfile
 * base: centos.ufes.br
 * epel: mirror.globo.com
 * extras: centos.ufes.br
 * updates: centos.ufes.br
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package jemalloc.x86_64 0:3.4.0-1.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================
 Package    Arch    Version      Repository  Size
================================================================
Installing:
 jemalloc   x86_64  3.4.0-1.el6  epel        97 k

Transaction Summary
================================================================
Install 1 Package(s)

Total download size: 97 k
Installed size: 298 k
Downloading Packages:
http://mirror.globo.com/epel/6/x86_64/jemalloc-3.4.0-1.el6.x86_64.rpm: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"
Trying other mirror.
http://mirror.uta.edu.ec/fedora-epel/6/x86_64/jemalloc-3.4.0-1.el6.x86_64.rpm: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"
Trying other mirror.
http://mirror.cedia.org.ec/fedora-epel/6/x86_64/jemalloc-3.4.0-1.el6.x86_64.rpm: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"
Trying other mirror.
http://mirror.ci.ifes.edu.br/epel/6/x86_64/jemalloc-3.4.0-1.el6.x86_64.rpm: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"
Trying other mirror.

Error Downloading Packages:
  jemalloc-3.4.0-1.el6.x86_64: failure: jemalloc-3.4.0-1.el6.x86_64.rpm from epel: [Errno 256] No more mirrors to try.

Looking for jemalloc-3.4.0-1.el6.x86_64.rpm on the EPEL repo, I found that this package version is no longer there; the one in the repo now is jemalloc-3.6.0-1.el6.x86_64.rpm.
After installing the jemalloc package that does exist on the EPEL repo, I was able to set up the TokuDB one.

[vagrant@node01 vagrant]$ wget http://mirror.uta.edu.ec/fedora-epel/6/x86_64/jemalloc-3.6.0-1.el6.x86_64.rpm
--2015-05-01 15:04:51-- http://mirror.uta.edu.ec/fedora-epel/6/x86_64/jemalloc-3.6.0-1.el6.x86_64.rpm
Resolving mirror.uta.edu.ec... 200.93.227.165
Connecting to mirror.uta.edu.ec|200.93.227.165|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 102624 (100K) [application/x-rpm]
Saving to: “jemalloc-3.6.0-1.el6.x86_64.rpm”

100%[===================================================>] 102,624 130K/s in 0.8s

2015-05-01 15:04:52 (130 KB/s) - “jemalloc-3.6.0-1.el6.x86_64.rpm” saved [102624/102624]

[vagrant@node01 vagrant]$ sudo rpm -ivh jemalloc-3.6.0-1.el6.x86_64.rpm
Preparing...                ########################################### [100%]
   1:jemalloc               ########################################### [100%]

[vagrant@node01 vagrant]$ sudo yum -y install Percona-Server-tokudb-56.x86_64
Loaded plugins: fastestmirror, versionlock
Loading mirror speeds from cached hostfile
 * base: centos.ufes.br
 * epel: mirror.globo.com
 * extras: centos.ufes.br
 * updates: centos.ufes.br
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package Percona-Server-tokudb-56.x86_64 0:5.6.23-rel72.1.el6 will be installed
--> Finished Dependency Resolution
Warning: RPMDB altered outside of yum.
  Installing : Percona-Server-tokudb-56-5.6.23-rel72.1.el6.x86_64   1/1

 * This release of Percona Server is distributed with the TokuDB storage engine.
 * Run the following script to enable the TokuDB storage engine in Percona Server:
   ps_tokudb_admin --enable -u <mysql_admin_user> -p[mysql_admin_pass] [-S <socket>] [-h <host> -P <port>]
 * See https://www.percona.com/doc/percona-server/5.6/tokudb/tokudb_installation.html for more installation details
 * See https://www.percona.com/doc/percona-server/5.6/tokudb/tokudb_intro.html for an introduction to TokuDB

  Verifying : Percona-Server-tokudb-56-5.6.23-rel72.1.el6.x86_64   1/1

Installed:
  Percona-Server-tokudb-56.x86_64 0:5.6.23-rel72.1.el6

Not sure if you can point the dependency to the latest existing version of jemalloc on the EPEL repo; just some feedback.

Cheers,

When I use a Percona command I get the error below:

Latest Forum Posts - May 1, 2015 - 6:00am
When I use the Percona command below, I get an error. Can you please tell me what is wrong?
pt-online-schema-change D= dmbdemo_dmbdemo, t=test --execute --alter ‘add column x int’
Errors in command-line arguments: * Specify only one DSN on the command line
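A guess at the cause, illustrated with a runnable sketch: the spaces after "D=" and after the comma make the shell split the DSN into several words, so the tool sees more than one DSN argument. Counting the words (values copied from the command above) shows the difference:

```shell
# Hypothesis (not verified against the tool itself): internal spaces split
# the DSN into multiple shell words, producing "Specify only one DSN".
broken='D= dmbdemo_dmbdemo, t=test'
fixed='D=dmbdemo_dmbdemo,t=test'
set -- $broken; broken_words=$#
set -- $fixed;  fixed_words=$#
echo "broken: $broken_words words, fixed: $fixed_words word"
```

If that is the cause, writing the DSN without internal spaces (D=dmbdemo_dmbdemo,t=test) should make the error go away; the curly quotes around the --alter argument may also need to be plain ASCII quotes.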

LinkBenchX: benchmark based on arrival request rate

Latest MySQL Performance Blog posts - May 1, 2015 - 12:00am

An idea for a benchmark based on the "arrival request" rate, which I wrote about in a post headlined "Introducing new type of benchmark" back in 2012, was implemented in Sysbench. However, Sysbench provides only a simple workload, so to be able to compare InnoDB with TokuDB, and later MongoDB with Percona TokuMX, I wanted to use more complicated scenarios. (Both TokuDB and TokuMX are part of Percona's product line, in case you missed that Tokutek is now part of the Percona family.)

Thanks go to Facebook for providing LinkBench, a benchmark that emulates the social graph database workload. I made modifications to LinkBench, which are available here: https://github.com/vadimtk/linkbenchX. The summary of the modifications:

  • Instead of generating events in a loop, we generate events at requestrate and send each event for execution to one of the available Requester threads.
  • At the start, we establish N (requesters) connections to the database, which are idle by default and just wait for an incoming event to execute.
  • The main output of the benchmark is the 99% response time for ADD_LINK (INSERT + UPDATE request) and GET_LINKS_LIST (range SELECT request) operations.
  • A related output is Concurrency, that is, how many Requester threads are active during the time period.
  • The ability to report stats frequently (at a 5-10 sec interval), so we can see a trend and the stability of the result.

Also, I provide a Java package, ready to execute, so you do not need to compile from source code. It is available on the release page at https://github.com/vadimtk/linkbenchX/releases.

So the main focus of the benchmark is the response time and its stability over time.
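As a rough illustration of how a 99th-percentile figure like the one the benchmark reports can be computed from raw samples, a shell sketch over hypothetical latency values (1..100 ms stand-in data, not benchmark output):

```shell
# Compute the 99th percentile of a list of latencies (ms) by sorting and
# picking the ceil(0.99 * n)-th value. The data here is a stand-in.
latencies=$(seq 1 100)
n=$(printf '%s\n' $latencies | wc -l)
idx=$(( (99 * n + 99) / 100 ))   # ceil(0.99 * n) in integer arithmetic
p99=$(printf '%s\n' $latencies | sort -n | sed -n "${idx}p")
echo "p99 = ${p99} ms"
```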

As an example, let's see how TokuDB performs under different request rates (this was a quick run to demonstrate the benchmark's abilities, not to provide numbers for TokuDB).

The first graph shows the 99% response time (in milliseconds), measured every 10 sec, for arrival rates of 5000, 10000 and 15000 operations/sec:

Or, to smooth out the spikes, the same graph with a log10 scale on the Y axis:

So there are two observations: the response time increases with the arrival rate (as it is supposed to), and there are periodic spikes in the response time.

And now we can graph Concurrency (how many threads are busy working on requests)…

…with the explainable observation that more threads are needed to handle bigger arrival rates, and that during spikes all 200 available threads (the count is configurable) become busy.

I am looking to adapt LinkBenchX to run an identical workload against MongoDB.
The current schema is simple:

CREATE TABLE `linktable` (
  `id1` bigint(20) unsigned NOT NULL DEFAULT '0',
  `id2` bigint(20) unsigned NOT NULL DEFAULT '0',
  `link_type` bigint(20) unsigned NOT NULL DEFAULT '0',
  `visibility` tinyint(3) NOT NULL DEFAULT '0',
  `data` varchar(255) NOT NULL DEFAULT '',
  `time` bigint(20) unsigned NOT NULL DEFAULT '0',
  `version` int(11) unsigned NOT NULL DEFAULT '0',
  PRIMARY KEY (`link_type`,`id1`,`id2`),
  KEY `id1_type` (`id1`,`link_type`,`visibility`,`time`,`id2`,`version`,`data`)
) ENGINE=TokuDB DEFAULT CHARSET=latin1;

CREATE TABLE `counttable` (
  `id` bigint(20) unsigned NOT NULL DEFAULT '0',
  `link_type` bigint(20) unsigned NOT NULL DEFAULT '0',
  `count` int(10) unsigned NOT NULL DEFAULT '0',
  `time` bigint(20) unsigned NOT NULL DEFAULT '0',
  `version` bigint(20) unsigned NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`,`link_type`)
) ENGINE=TokuDB DEFAULT CHARSET=latin1;

CREATE TABLE `nodetable` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `type` int(10) unsigned NOT NULL,
  `version` bigint(20) unsigned NOT NULL,
  `time` int(10) unsigned NOT NULL,
  `data` mediumtext NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=TokuDB DEFAULT CHARSET=latin1;

I am open to suggestions as to the proper design of documents for MongoDB; please leave your recommendations in the comments.

The post LinkBenchX: benchmark based on arrival request rate appeared first on MySQL Performance Blog.

innobackupex cannot recover the whole database

Latest Forum Posts - April 30, 2015 - 5:49am
Hi all, thank you for providing these great tools to hot-backup MySQL. I have run into a problem.

I am using the innobackupex tool to hot-backup a MySQL database. It works very well in the test environment, but it fails in the prod environment. The story: I stopped the MySQL master server to make some changes to the my.cnf file, but replication was doing something at the same time, and it gave me this warning:

[Warning] Unsafe statement written to the binary log using statement format since BINLOG_FORMAT = STATEMENT. Statements writing to a table with an auto-increment column after selecting from another table are unsafe because the order in which rows are retrieved determines what (if any) rows will be written. This order cannot be predicted and may differ on master and the slave. Statement: update xxxx(table name) ....(sql command)

I didn't worry about it because I could do the hot backup again afterwards. To restart the master safely, I ran 'flush tables with read lock', then stopped the server, changed the cnf file and started it again.

Then I did the hot backup with:

innobackupex --user=root --password=xxxx --defaults-file=/etc/mysql/my.cnf /home/user/xtratest --no-timestamp

It ran successfully. Then I restored it to another server to set up a MySQL slave, with:

innobackupex --user=root --password=xxxx --defaults-file=/etc/mysql/my.cnf --apply-log /home/user/xtratest
innobackupex --user=root --password=xxxx --defaults-file=/etc/mysql/my.cnf --copy-back /home/user/xtratest

Then: start server; change master; start slave. All of this went through without any error messages. But when I checked the slave server, it was missing: a database which just has an empty table; a database which just has some views; two users; and 10 views. There are also two table pairs (master.A1 vs slave.A1 and master.B1 vs slave.B1) that differ by 10,000+ records, out of only about 100,000 records per table, and they cannot be synced by replication. What happened?

Please help me with this! Thanks a lot!

The Cycle: Incremental Backup > Preparation > Restoration > Incremental Backup

Latest Forum Posts - April 30, 2015 - 2:28am
Hello,

I have been trying to test Percona innobackupex to achieve incremental backups.

Full backup and restore:
  1. Take full backup - let's call it FB1
  2. Prepare FB1
  3. Restore FB1 into the MySQL data directory.
That worked like a charm.

Then I tried the incremental backup - below are the steps:
  1. Take full backup - let's call it FB2
  2. Add a few records in tables and then take incremental backup - let's call it IB1
  3. Add a few more records, then take another incremental backup on top of IB1 - let's call this IB2
  4. Add last set of records, take another incremental backup IB3.
  5. Now prepare the backup - starting with FB2 (--apply-log --redo-only), then IB1 (--apply-log --redo-only), followed by IB2 (--apply-log --redo-only) and finally IB3 (--apply-log).
  6. The optional step of preparing the composite backup [let's call it CB1 (which is FB2+IB1+IB2+IB3)] using --apply-log failed - should this be of some concern?
  7. However, I went ahead and restored the composite backup, and it seems the restoration was successful as I can see all the records that I've created even though I dropped the corresponding table before doing the restoration.
  8. Now here comes the problem: when I try to add new records to the table after the restoration and then take an incremental backup on top of CB1, it throws an error. I can provide the content of that error if you need to have a look, but before we dig into that, I just wanted to know whether this is disallowed by design, or whether I am doing something wrong process-wise?
Could you kindly let me know your thoughts? Thanks very much in advance!

Nikhil
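For reference, the prepare sequence from step 5 above can be written down as a dry-run script (echo instead of execute; the /backups/* paths are placeholders). Every incremental except the last is applied with --redo-only:

```shell
# Dry-run sketch of the incremental prepare chain: base first, then each
# incremental with --redo-only, then the final incremental without it.
BASE=/backups/FB2
plan=$(
  echo "innobackupex --apply-log --redo-only $BASE"
  for inc in /backups/IB1 /backups/IB2; do
    echo "innobackupex --apply-log --redo-only $BASE --incremental-dir=$inc"
  done
  echo "innobackupex --apply-log $BASE --incremental-dir=/backups/IB3"
)
printf '%s\n' "$plan"
```

One thought on step 8, offered tentatively: the prepared base directory (FB2) is what gets restored, and as far as I understand a new incremental taken after a restore has to be based on a fresh full backup of the now-running server, since the restored datadir starts a new LSN history. That could explain the error.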


Optimizer hints in MySQL 5.7.7 – The missed manual

Latest MySQL Performance Blog posts - April 30, 2015 - 12:00am

In MySQL 5.7.7 Oracle presented a promising new feature: optimizer hints. However, it did not publish any documentation for the hints. The only note I found in the user manual about them is:

  • It is now possible to provide hints to the optimizer by including /*+ ... */ comments following the SELECT, INSERT, REPLACE, UPDATE, or DELETE keyword of SQL statements. Such statements can also be used with EXPLAIN. Examples:
    SELECT /*+ NO_RANGE_OPTIMIZATION(t3 PRIMARY, f2_idx) */ f1 FROM t3 WHERE f1 > 30 AND f1 < 33;
    SELECT /*+ BKA(t1, t2) */ * FROM t1 INNER JOIN t2 WHERE ...;
    SELECT /*+ NO_ICP(t1) */ * FROM t1 WHERE ...;

There are also three worklogs: WL #3996, WL #8016 and WL #8017. But they describe the general concept and do not have much information about which optimizations can be used and how. More light on this is provided by slide 59 from Øystein Grøvlen's session at Percona Live. But that's all: no "official" full list of possible optimizations, no use cases… nothing.

I tried to sort it out myself.

My first finding is that slide 59 really lists six of the seven possible hints. Confirmation of this exists in one of the two new files under the sql directory of the MySQL source tree, created for this new feature.

$ cat sql/opt_hints.h
...
/**
  Hint types, MAX_HINT_ENUM should be always last.
  This enum should be synchronized with opt_hint_info
  array(see opt_hints.cc).
*/
enum opt_hints_enum
{
  BKA_HINT_ENUM= 0,
  BNL_HINT_ENUM,
  ICP_HINT_ENUM,
  MRR_HINT_ENUM,
  NO_RANGE_HINT_ENUM,
  MAX_EXEC_TIME_HINT_ENUM,
  QB_NAME_HINT_ENUM,
  MAX_HINT_ENUM
};

Looking into the file sql/opt_hints.cc, we can find out that these optimizations do not give much choice: each can only be enabled or disabled.

$ cat sql/opt_hints.cc
...
struct st_opt_hint_info opt_hint_info[]=
{
  {"BKA", true, true},
  {"BNL", true, true},
  {"ICP", true, true},
  {"MRR", true, true},
  {"NO_RANGE_OPTIMIZATION", true, true},
  {"MAX_EXECUTION_TIME", false, false},
  {"QB_NAME", false, false},
  {0, 0, 0}
};

The chosen way to include hints in SQL statements, inside comments with a "+" sign (/*+ NO_RANGE_OPTIMIZATION(t3 PRIMARY, f2_idx) */), is compatible with the style of optimizer hints that Oracle Database uses.

We actually had access to some of these optimizations before: BKA, BNL, ICP and MRR were accessible via the variable optimizer_switch. But with the new syntax we can not only toggle them globally or per session, but also turn a particular optimization on or off for a particular table or index in a single query. I can demonstrate this with a quite artificial, but always accessible, example:

mysql> use mysql
Database changed
mysql> explain select * from user where host in ('%', '127.0.0.1');
+----+-------------+-------+------------+-------+---------------+---------+---------+------+------+----------+-----------------------+
| id | select_type | table | partitions | type  | possible_keys | key     | key_len | ref  | rows | filtered | Extra                 |
+----+-------------+-------+------------+-------+---------------+---------+---------+------+------+----------+-----------------------+
|  1 | SIMPLE      | user  | NULL       | range | PRIMARY       | PRIMARY | 180     | NULL |    2 |   100.00 | Using index condition |
+----+-------------+-------+------------+-------+---------------+---------+---------+------+------+----------+-----------------------+
1 row in set, 1 warning (0.01 sec)

mysql> explain select /*+ NO_RANGE_OPTIMIZATION(user PRIMARY) */ * from user where host in ('%', '127.0.0.1');
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+-------------+
| id | select_type | table | partitions | type | possible_keys | key  | key_len | ref  | rows | filtered | Extra       |
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+-------------+
|  1 | SIMPLE      | user  | NULL       | ALL  | PRIMARY       | NULL | NULL    | NULL |    5 |    40.00 | Using where |
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+-------------+
1 row in set, 1 warning (0.00 sec)

I used one more hint, which we could not turn on or off directly earlier: range optimization.

One more “intuitively” documented feature is the ability to turn on or off a particular optimization. This works only for BKA, BNL, ICP and MRR: you can specify NO_BKA(table[[, table]…]), NO_BNL(table[[, table]…]), NO_ICP(table indexes[[, table indexes]…]) and NO_MRR(table indexes[[, table indexes]…]) to avoid using these algorithms for particular table or index in the JOIN.

MAX_EXECUTION_TIME does not require any table or key name inside. Instead you need to specify maximum time in milliseconds which query should run:

mysql> select /*+ MAX_EXECUTION_TIME(1000) */ sleep(1) from user; ERROR 3024 (HY000): Query execution was interrupted, max_statement_time exceeded mysql> select /*+ MAX_EXECUTION_TIME(10000) */ sleep(1) from user; +----------+ | sleep(1) | +----------+ | 0 | | 0 | | 0 | | 0 | | 0 | +----------+ 5 rows in set (5.00 sec)

QB_NAME is more complicated. WL #8017 tells us this is custom context. But what is this? The answer is in the MySQL test suite! Tests for optimizer hints exist in file t/opt_hints.test For QB_NAME very first entry is query:

EXPLAIN SELECT /*+ NO_ICP(t3@qb1 f3_idx) */ f2 FROM (SELECT /*+ QB_NAME(QB1) */ f2, f3, f1 FROM t3 WHERE f1 > 2 AND f3 = 'poiu') AS TD WHERE TD.f1 > 2 AND TD.f3 = 'poiu';

So we can specify custom QB_NAME for any subquery and specify optimizer hint only for this context.

To conclude this quick overview I want to show a practical example of when query hints are really needed. Last week I worked on an issue where a customer upgraded from MySQL version 5.5 to 5.6 and found some of their queries started to work slower than before. I wrote an answer which could sound funny, but still remains correct: “One of the reasons for such behavior is optimizer  improvements. While they all are made for better performance, some queries – optimized for older versions – can start working slower than before.”

To demonstrate a public example of such a query I will use my favorite source of information: MySQL Community Bugs Database. In a search for Optimizer regression bugs that are still not fixed we can find bug #68919 demonstrating regression in case the MRR algorithm is used for queries with LIMIT. In run queries, shown in the bug report, we will see a huge difference:

mysql> SELECT * FROM t1 WHERE i1>=42 AND i2<=42 LIMIT 1; +----+----+----+----+ | pk | i1 | i2 | i3 | +----+----+----+----+ | 42 | 42 | 42 | 42 | +----+----+----+----+ 1 row in set (6.88 sec) mysql> explain SELECT * FROM t1 WHERE i1>=42 AND i2<=42 LIMIT 1; +----+-------------+-------+------------+-------+---------------+------+---------+------+---------+----------+----------------------------------+ | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+-------------+-------+------------+-------+---------------+------+---------+------+---------+----------+----------------------------------+ | 1 | SIMPLE | t1 | NULL | range | idx | idx | 4 | NULL | 9999958 | 33.33 | Using index condition; Using MRR | +----+-------------+-------+------------+-------+---------------+------+---------+------+---------+----------+----------------------------------+ 1 row in set, 1 warning (0.00 sec) mysql> SELECT /*+ NO_MRR(t1) */ * FROM t1 WHERE i1>=42 AND i2<=42 LIMIT 1; +----+----+----+----+ | pk | i1 | i2 | i3 | +----+----+----+----+ | 42 | 42 | 42 | 42 | +----+----+----+----+ 1 row in set (0.00 sec)

With MRR query execution takes 6.88 seconds and 0 if MRR is not used! But the bug report itself suggests usingoptimizer_switch="mrr=off";as a workaround. And this will work perfectly well if you are OK to runSET optimizer_switch="mrr=off";every time you are running a query which will take advantage of having it OFF. With optimizer hints you can have one or another algorithm to be ON for particular table in the query and OFF for another one. I, again, took quite an artificial example, but it demonstrates the method:

mysql> explain select /*+ MRR(dept_emp) */ * from dept_emp where to_date in (select /*+ NO_MRR(salaries)*/ to_date from salaries where salary >40000 and salary <45000) and emp_no >10100 and emp_no < 30200 and dept_no in ('d005', 'd006','d007'); +----+--------------+-------------+------------+--------+------------------------+------------+---------+----------------------------+---------+----------+-----------------------------------------------+ | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+--------------+-------------+------------+--------+------------------------+------------+---------+----------------------------+---------+----------+-----------------------------------------------+ | 1 | SIMPLE | dept_emp | NULL | range | PRIMARY,emp_no,dept_no | dept_no | 8 | NULL | 10578 | 100.00 | Using index condition; Using where; Using MRR | | 1 | SIMPLE | <subquery2> | NULL | eq_ref | <auto_key> | <auto_key> | 3 | employees.dept_emp.to_date | 1 | 100.00 | NULL | | 2 | MATERIALIZED | salaries | NULL | ALL | salary | NULL | NULL | NULL | 2838533 | 17.88 | Using where | +----+--------------+-------------+------------+--------+------------------------+------------+---------+----------------------------+---------+----------+-----------------------------------------------+ 3 rows in set, 1 warning (0.00 sec)

 

The post Optimizer hints in MySQL 5.7.7 – The missed manual appeared first on MySQL Performance Blog.

Optimizer hints in MySQL 5.7.7 – The missed manual

Latest MySQL Performance Blog posts - April 30, 2015 - 12:00am

In MySQL 5.7.7 Oracle presented a promising new feature: optimizer hints. However, it did not publish any documentation about them. The only note I found in the user manual about the hints is:

  • It is now possible to provide hints to the optimizer by including /*+ ... */ comments following the SELECT, INSERT, REPLACE, UPDATE, or DELETE keyword of SQL statements. Such statements can also be used with EXPLAIN. Examples:
    SELECT /*+ NO_RANGE_OPTIMIZATION(t3 PRIMARY, f2_idx) */ f1 FROM t3 WHERE f1 > 30 AND f1 < 33;
    SELECT /*+ BKA(t1, t2) */ * FROM t1 INNER JOIN t2 WHERE ...;
    SELECT /*+ NO_ICP(t1) */ * FROM t1 WHERE ...;

There are also three worklogs: WL #3996, WL #8016 and WL #8017. But they describe the general concept and do not have much information about which optimizations can be used and how. More light on this is shed by slide 59 from Øystein Grøvlen’s session at Percona Live. But that’s all: no “official” full list of possible optimizations, no use cases… nothing.

I tried to sort it out myself.

My first finding is that slide #59 really lists six of the seven possible hints. Confirmation of this exists in one of two new files under the sql directory of the MySQL source tree, created for this new feature.

$ cat sql/opt_hints.h
...
/**
  Hint types, MAX_HINT_ENUM should be always last.
  This enum should be synchronized with opt_hint_info
  array(see opt_hints.cc).
*/
enum opt_hints_enum
{
  BKA_HINT_ENUM= 0,
  BNL_HINT_ENUM,
  ICP_HINT_ENUM,
  MRR_HINT_ENUM,
  NO_RANGE_HINT_ENUM,
  MAX_EXEC_TIME_HINT_ENUM,
  QB_NAME_HINT_ENUM,
  MAX_HINT_ENUM
};

Looking into the file sql/opt_hints.cc we can see that these optimizations do not give much choice: either enable or disable.

$ cat sql/opt_hints.cc
...
struct st_opt_hint_info opt_hint_info[]=
{
  {"BKA", true, true},
  {"BNL", true, true},
  {"ICP", true, true},
  {"MRR", true, true},
  {"NO_RANGE_OPTIMIZATION", true, true},
  {"MAX_EXECUTION_TIME", false, false},
  {"QB_NAME", false, false},
  {0, 0, 0}
};

The chosen way to include hints in SQL statements – inside comments with the “+” sign, such as /*+ NO_RANGE_OPTIMIZATION(t3 PRIMARY, f2_idx) */ – is compatible with the optimizer hint style that Oracle Database uses.

We actually had access to some of these hints before: they were accessible via the optimizer_switch variable, at least optimizations such as BKA, BNL, ICP and MRR. But with the new syntax we can not only modify this behavior globally or per session, but also turn a particular optimization on or off for a single table or index in a query. I can demonstrate it with this quite artificial but always accessible example:

mysql> use mysql
Database changed
mysql> explain select * from user where host in ('%', '127.0.0.1');
+----+-------------+-------+------------+-------+---------------+---------+---------+------+------+----------+-----------------------+
| id | select_type | table | partitions | type  | possible_keys | key     | key_len | ref  | rows | filtered | Extra                 |
+----+-------------+-------+------------+-------+---------------+---------+---------+------+------+----------+-----------------------+
|  1 | SIMPLE      | user  | NULL       | range | PRIMARY       | PRIMARY | 180     | NULL |    2 |   100.00 | Using index condition |
+----+-------------+-------+------------+-------+---------------+---------+---------+------+------+----------+-----------------------+
1 row in set, 1 warning (0.01 sec)

mysql> explain select /*+ NO_RANGE_OPTIMIZATION(user PRIMARY) */ * from user where host in ('%', '127.0.0.1');
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+-------------+
| id | select_type | table | partitions | type | possible_keys | key  | key_len | ref  | rows | filtered | Extra       |
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+-------------+
|  1 | SIMPLE      | user  | NULL       | ALL  | PRIMARY       | NULL | NULL    | NULL |    5 |    40.00 | Using where |
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+-------------+
1 row in set, 1 warning (0.00 sec)

Here I used one more hint that we could not turn on or off directly before: range optimization.

One more “intuitively” documented feature is the ability to turn a particular optimization off for specific tables. This works only for BKA, BNL, ICP and MRR: you can specify NO_BKA(table[[, table]…]), NO_BNL(table[[, table]…]), NO_ICP(table indexes[[, table indexes]…]) and NO_MRR(table indexes[[, table indexes]…]) to avoid using these algorithms for a particular table or index in the JOIN.

MAX_EXECUTION_TIME does not require any table or key name inside. Instead you specify the maximum time, in milliseconds, that the query is allowed to run:

mysql> select /*+ MAX_EXECUTION_TIME(1000) */ sleep(1) from user;
ERROR 3024 (HY000): Query execution was interrupted, max_statement_time exceeded
mysql> select /*+ MAX_EXECUTION_TIME(10000) */ sleep(1) from user;
+----------+
| sleep(1) |
+----------+
|        0 |
|        0 |
|        0 |
|        0 |
|        0 |
+----------+
5 rows in set (5.00 sec)
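Because these hints are plain comment text, an application can attach them without any server-side setup. Here is a minimal Python sketch of that idea; the helper name add_max_execution_time is hypothetical and not part of any MySQL connector API:

```python
import re

def add_max_execution_time(query: str, ms: int) -> str:
    # Hypothetical helper: inject the MAX_EXECUTION_TIME hint, in the
    # /*+ ... */ comment syntax shown above, right after the leading
    # SELECT keyword (case preserved, first match only).
    return re.sub(r"(?i)^(\s*select)\b",
                  r"\1 /*+ MAX_EXECUTION_TIME({}) */".format(ms),
                  query, count=1)

print(add_max_execution_time("SELECT sleep(1) FROM user", 1000))
# -> SELECT /*+ MAX_EXECUTION_TIME(1000) */ sleep(1) FROM user
```

The same string-rewriting approach would work for the other hints, since, per the manual note quoted above, the hint comment simply follows the first SELECT/INSERT/REPLACE/UPDATE/DELETE keyword.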

QB_NAME is more complicated. WL #8017 tells us it is a custom context. But what is that? The answer is in the MySQL test suite! Tests for optimizer hints live in the file t/opt_hints.test. The very first entry for QB_NAME is this query:

EXPLAIN SELECT /*+ NO_ICP(t3@qb1 f3_idx) */ f2
FROM (SELECT /*+ QB_NAME(QB1) */ f2, f3, f1 FROM t3 WHERE f1 > 2 AND f3 = 'poiu') AS TD
WHERE TD.f1 > 2 AND TD.f3 = 'poiu';

So we can specify a custom QB_NAME for any subquery and apply an optimizer hint only to that context.

To conclude this quick overview I want to show a practical example of when query hints are really needed. Last week I worked on an issue where a customer upgraded from MySQL 5.5 to 5.6 and found that some of their queries started to run slower than before. I wrote an answer which may sound funny, but is still correct: “One of the reasons for such behavior is optimizer improvements. While they are all made for better performance, some queries – optimized for older versions – can start working slower than before.”

To demonstrate a public example of such a query I will use my favorite source of information: the MySQL Community Bugs Database. Searching for optimizer regression bugs that are still not fixed, we can find bug #68919, which demonstrates a regression when the MRR algorithm is used for queries with LIMIT. Running the queries shown in the bug report, we see a huge difference:

mysql> SELECT * FROM t1 WHERE i1>=42 AND i2<=42 LIMIT 1;
+----+----+----+----+
| pk | i1 | i2 | i3 |
+----+----+----+----+
| 42 | 42 | 42 | 42 |
+----+----+----+----+
1 row in set (6.88 sec)

mysql> explain SELECT * FROM t1 WHERE i1>=42 AND i2<=42 LIMIT 1;
+----+-------------+-------+------------+-------+---------------+------+---------+------+---------+----------+----------------------------------+
| id | select_type | table | partitions | type  | possible_keys | key  | key_len | ref  | rows    | filtered | Extra                            |
+----+-------------+-------+------------+-------+---------------+------+---------+------+---------+----------+----------------------------------+
|  1 | SIMPLE      | t1    | NULL       | range | idx           | idx  | 4       | NULL | 9999958 |    33.33 | Using index condition; Using MRR |
+----+-------------+-------+------------+-------+---------------+------+---------+------+---------+----------+----------------------------------+
1 row in set, 1 warning (0.00 sec)

mysql> SELECT /*+ NO_MRR(t1) */ * FROM t1 WHERE i1>=42 AND i2<=42 LIMIT 1;
+----+----+----+----+
| pk | i1 | i2 | i3 |
+----+----+----+----+
| 42 | 42 | 42 | 42 |
+----+----+----+----+
1 row in set (0.00 sec)

Query execution takes 6.88 seconds with MRR and 0.00 seconds without it! The bug report itself suggests SET optimizer_switch="mrr=off"; as a workaround. And this works perfectly well, if you are OK with running SET optimizer_switch="mrr=off"; every time you run a query that benefits from having it OFF. With optimizer hints you can have an algorithm ON for one table in the query and OFF for another. Again, I took quite an artificial example, but it demonstrates the method:

mysql> explain select /*+ MRR(dept_emp) */ * from dept_emp where to_date in (select /*+ NO_MRR(salaries)*/ to_date from salaries where salary >40000 and salary <45000) and emp_no >10100 and emp_no < 30200 and dept_no in ('d005', 'd006','d007');
+----+--------------+-------------+------------+--------+------------------------+------------+---------+----------------------------+---------+----------+-----------------------------------------------+
| id | select_type  | table       | partitions | type   | possible_keys          | key        | key_len | ref                        | rows    | filtered | Extra                                         |
+----+--------------+-------------+------------+--------+------------------------+------------+---------+----------------------------+---------+----------+-----------------------------------------------+
|  1 | SIMPLE       | dept_emp    | NULL       | range  | PRIMARY,emp_no,dept_no | dept_no    | 8       | NULL                       |   10578 |   100.00 | Using index condition; Using where; Using MRR |
|  1 | SIMPLE       | <subquery2> | NULL       | eq_ref | <auto_key>             | <auto_key> | 3       | employees.dept_emp.to_date |       1 |   100.00 | NULL                                          |
|  2 | MATERIALIZED | salaries    | NULL       | ALL    | salary                 | NULL       | NULL    | NULL                       | 2838533 |    17.88 | Using where                                   |
+----+--------------+-------------+------------+--------+------------------------+------------+---------+----------------------------+---------+----------+-----------------------------------------------+
3 rows in set, 1 warning (0.00 sec)

 

The post Optimizer hints in MySQL 5.7.7 – The missed manual appeared first on MySQL Performance Blog.

Generated (Virtual) Columns in MySQL 5.7 (labs)

Latest MySQL Performance Blog posts - April 29, 2015 - 3:00am

About 2 weeks ago Oracle published the MySQL 5.7.7-labs-json version, which includes a very interesting feature called “Generated columns” (also known as virtual or computed columns). MariaDB has a similar feature as well: Virtual (Computed) Columns.

The idea is very simple: if we store a column

`FlightDate` date

in our table, we may want to filter or group by year(FlightDate), month(FlightDate) or even dayofweek(FlightDate). The “brute-force” approach is to use the above Date and Time MySQL functions in the query; however, that prevents MySQL from using an index (see below). Generated columns allow you to declare a “virtual”, non-stored column that is computed from an existing field; you can then add an index on that virtual column, so the query will use that index.

Here is the original example:

CREATE TABLE `ontime` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `FlightDate` date DEFAULT NULL,
  `Carrier` char(2) DEFAULT NULL,
  `OriginAirportID` int(11) DEFAULT NULL,
  `OriginCityName` varchar(100) DEFAULT NULL,
  `OriginState` char(2) DEFAULT NULL,
  `DestAirportID` int(11) DEFAULT NULL,
  `DestCityName` varchar(100) DEFAULT NULL,
  `DestState` char(2) DEFAULT NULL,
  `DepDelayMinutes` int(11) DEFAULT NULL,
  `ArrDelayMinutes` int(11) DEFAULT NULL,
  `Cancelled` tinyint(4) DEFAULT NULL,
  `CancellationCode` char(1) DEFAULT NULL,
  `Diverted` tinyint(4) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `FlightDate` (`FlightDate`)
) ENGINE=InnoDB

Now I want to find all flights on Sundays (in 2013) and group by airline.

mysql> EXPLAIN SELECT carrier, count(*) FROM ontime_sm WHERE dayofweek(FlightDate) = 7 group by carrier
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: ontime_sm
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 151253427
        Extra: Using where; Using temporary; Using filesort
Results:
32 rows in set (1 min 57.93 sec)

The problem here is that MySQL will not be able to use an index when you use a function which “extracts” something from the column. The standard approach is to “materialize” the column:

ALTER TABLE ontime_sm ADD Flight_dayofweek tinyint NOT NULL;

Then we need to load data into it by running “UPDATE ontime_sm SET Flight_dayofweek = dayofweek(flight_date)”. After that we also need to change the application to maintain the additional column, or use a trigger to update it. Here is the trigger example:

CREATE DEFINER = CURRENT_USER
TRIGGER ontime_insert BEFORE INSERT ON ontime_sm_triggers
FOR EACH ROW
SET NEW.Flight_dayofweek = dayofweek(NEW.FlightDate);

One problem with the trigger is that it is slow. In my simple test, “copying” the table with “insert into ontime_sm_copy select * from ontime_sm” took almost 2x longer with the trigger enabled.
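To see why materializing the derived value helps at query time, here is a toy, pure-Python model of the same idea (an illustration only, not how MySQL implements indexes): compute dayofweek once per row and index the result, instead of re-evaluating the function against every row on every query.

```python
from datetime import date
from collections import defaultdict

# A handful of sample rows; the dates are real 2013 dates, where
# Jan 6 and Jan 13 fall on Sundays (isoweekday() == 7).
rows = [{"FlightDate": date(2013, 1, d), "Carrier": c}
        for d, c in [(5, "AA"), (6, "DL"), (7, "AA"), (13, "UA")]]

# Full-scan version: the function runs for every row on every query.
sundays_scan = [r for r in rows if r["FlightDate"].isoweekday() == 7]

# "Materialized" version: compute once up front, index by derived value.
by_dow = defaultdict(list)
for r in rows:
    by_dow[r["FlightDate"].isoweekday()].append(r)
sundays_indexed = by_dow[7]

print(len(sundays_indexed))  # both approaches find the same Sunday rows
```

The materialized column (or the generated column below) plays the role of by_dow: the per-row function cost is paid once at write time, so lookups become a direct index probe.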

The Generated Columns feature in the MySQL 5.7.7-labs-json version (the only version that supports it at the time of writing) solves this problem. Here is an example which demonstrates its use:

CREATE TABLE `ontime_sm_virtual` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `FlightDate` date DEFAULT NULL,
  `Carrier` char(2) DEFAULT NULL,
  `OriginAirportID` int(11) DEFAULT NULL,
  `OriginCityName` varchar(100) DEFAULT NULL,
  `OriginState` char(2) DEFAULT NULL,
  `DestAirportID` int(11) DEFAULT NULL,
  `DestCityName` varchar(100) DEFAULT NULL,
  `DestState` char(2) DEFAULT NULL,
  `DepDelayMinutes` int(11) DEFAULT NULL,
  `ArrDelayMinutes` int(11) DEFAULT NULL,
  `Cancelled` tinyint(4) DEFAULT NULL,
  `CancellationCode` char(1) DEFAULT NULL,
  `Diverted` tinyint(4) DEFAULT NULL,
  `CRSElapsedTime` int(11) DEFAULT NULL,
  `ActualElapsedTime` int(11) DEFAULT NULL,
  `AirTime` int(11) DEFAULT NULL,
  `Flights` int(11) DEFAULT NULL,
  `Distance` int(11) DEFAULT NULL,
  `Flight_dayofweek` tinyint(4) GENERATED ALWAYS AS (dayofweek(FlightDate)) VIRTUAL,
  PRIMARY KEY (`id`),
  KEY `Flight_dayofweek` (`Flight_dayofweek`)
) ENGINE=InnoDB

Here we add Flight_dayofweek tinyint(4) GENERATED ALWAYS AS (dayofweek(FlightDate)) VIRTUAL column and index it.

Now MySQL can use this index:

mysql> EXPLAIN SELECT carrier, count(*) FROM ontime_sm_virtual WHERE Flight_dayofweek = 7 group by carrier
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: ontime_sm_virtual
   partitions: NULL
         type: ref
possible_keys: Flight_dayofweek
          key: Flight_dayofweek
      key_len: 2
          ref: const
         rows: 165409
     filtered: 100.00
        Extra: Using where; Using temporary; Using filesort

To further increase performance of this query we want to add a combined index on (Flight_dayofweek, carrier) so MySQL will avoid creating a temporary table. However, that is not currently supported:

mysql> alter table ontime_sm_virtual add key comb(Flight_dayofweek, carrier);
ERROR 3105 (HY000): 'Virtual generated column combines with other columns to be indexed together' is not supported for generated columns.

We can, though, add an index on two generated columns, which is good. So the trick here is to create a “dummy” virtual column on “carrier” and index the two virtual columns together:

mysql> alter table ontime_sm_virtual add Carrier_virtual char(2) GENERATED ALWAYS AS (Carrier) VIRTUAL;
Query OK, 0 rows affected (0.43 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> alter table ontime_sm_virtual add key comb(Flight_dayofweek, Carrier_virtual);
Query OK, 999999 rows affected (36.79 sec)
Records: 999999  Duplicates: 0  Warnings: 0

mysql> EXPLAIN SELECT Carrier_virtual, count(*) FROM ontime_sm_virtual WHERE Flight_dayofweek = 7 group by Carrier_virtual
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: ontime_sm_virtual
   partitions: NULL
         type: ref
possible_keys: Flight_dayofweek,comb
          key: comb
      key_len: 2
          ref: const
         rows: 141223
     filtered: 100.00
        Extra: Using where; Using index

Now MySQL will use an index and completely avoid the filesort.

Last but not least: loading data into a table with generated columns is significantly faster than loading it into the same table with triggers:

mysql> insert into ontime_sm_triggers (id, YearD, FlightDate, Carrier, OriginAirportID, OriginCityName, OriginState, DestAirportID, DestCityName, DestState, DepDelayMinutes, ArrDelayMinutes, Cancelled, CancellationCode, Diverted, CRSElapsedTime, ActualElapsedTime, AirTime, Flights, Distance) select * from ontime_sm;
Query OK, 999999 rows affected (27.86 sec)
Records: 999999  Duplicates: 0  Warnings: 0

mysql> insert into ontime_sm_virtual (id, YearD, FlightDate, Carrier, OriginAirportID, OriginCityName, OriginState, DestAirportID, DestCityName, DestState, DepDelayMinutes, ArrDelayMinutes, Cancelled, CancellationCode, Diverted, CRSElapsedTime, ActualElapsedTime, AirTime, Flights, Distance) select * from ontime_sm;
Query OK, 999999 rows affected (16.29 sec)
Records: 999999  Duplicates: 0  Warnings: 0

Now the big disappointment: none of the operations on generated columns are online right now.

mysql> alter table ontime_sm_virtual add Flight_year year GENERATED ALWAYS AS (year(FlightDate)) VIRTUAL, add key (Flight_year), lock=NONE;
ERROR 1846 (0A000): LOCK=NONE is not supported. Reason: '%s' is not supported for generated columns.. Try LOCK=SHARED.
mysql> alter table ontime_sm_virtual add key (Flight_year), lock=NONE;
ERROR 1846 (0A000): LOCK=NONE is not supported. Reason: '%s' is not supported for generated columns.. Try LOCK=SHARED.

I hope this will be fixed in future releases.

Conclusion

The generated columns feature is very useful. Imagine the ability to add a column + index for any “logical” piece of data without actually duplicating the data. And it can be any function: date/time/calendar, text (extract(), reverse(), metaphone()) or anything else. I hope this feature will be available in MySQL 5.7 GA. Finally, I wish adding a generated column and an index could be done online (it cannot right now).

More information:

The post Generated (Virtual) Columns in MySQL 5.7 (labs) appeared first on MySQL Performance Blog.

tar: -: Cannot write: Broken pipe error

Latest Forum Posts - April 28, 2015 - 1:17pm
My third galera mariadb node (out of 3) will not start and I get this:

150428 15:11:27 innobackupex: Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_file=/etc/my.cnf;mysql_read_default_group=xtrabackup;mysql_socket=/var/lib/mysql/mysql.sock' as 'root' (using password: YES).
150428 15:11:27 innobackupex: Connected to MySQL server
150428 15:11:27 innobackupex: Executing a version check against the server...
150428 15:11:27 innobackupex: Done.
150428 15:11:27 innobackupex: Starting the backup operation

IMPORTANT: Please check that the backup run completes successfully.
At the end of a successful backup run innobackupex
prints "completed OK!".

innobackupex: Using server version 10.0.17-MariaDB-wsrep

innobackupex: Created backup directory /tmp
tar: -: Cannot write: Broken pipe
tar: Error is not recoverable: exiting now
innobackupex: 'tar chf -' returned with exit code 2.
innobackupex: got a fatal error with the following stacktrace: at /usr//bin/innobackupex line 4894.
main::backup_file_via_stream('/tmp', 'backup-my.cnf') called at /usr//bin/innobackupex line 4943
main::backup_file('/tmp', 'backup-my.cnf', '/tmp/backup-my.cnf') called at /usr//bin/innobackupex line 4967
main::write_to_backup_file('/tmp/backup-my.cnf', '# This MySQL options file was generated by innobackupex.\x{a}\x{a}# T...') called at /usr//bin/innobackupex line 3774
main::write_backup_config_file('/tmp/backup-my.cnf') called at /usr//bin/innobackupex line 3701
main::init() called at /usr//bin/innobackupex line 1566
innobackupex: Error: Failed to stream '/tmp/backup-my.cnf': 2 at /usr//bin/innobackupex line 4894.


Any ideas? Thanks

Percona cluster - one server crashed - memory bug?

Latest Forum Posts - April 28, 2015 - 3:49am
I have three servers in a cluster, running Ubuntu 12.04: Xeon systems with ECC RAM and RAID 5 arrays, 32GB RAM each. Unlikely to be a hardware issue. This is what the error log said:

18:19:44 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed, something is
definitely wrong and this may fail.
Please help us make Percona XtraDB Cluster better by reporting any bugs at https://bugs.launchpad.net/percona-xtradb-cluster

key_buffer_size=8388608
read_buffer_size=131072
max_used_connections=37
max_threads=153
thread_count=19
connection_count=2
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 69252 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x7f2d18000990
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...

stack_bottom = 7f30005e0a70 thread_stack 0x40000
/usr/sbin/mysqld(my_print_stacktrace+0x2e)[0x8e811e]
/usr/sbin/mysqld(handle_fatal_signal+0x392)[0x65ffa2]
/lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f303d3e5cb0]
/usr/lib/libgalera_smm.so(_ZN6galera13Certification16purge_for_trx_v3EPNS_9TrxHandleE+0xa0)[0x7f302225a0f0]
/usr/lib/libgalera_smm.so(_ZN6galera13Certification16purge_trxs_upto_Elb+0x158)[0x7f302225b8c8]
/usr/lib/libgalera_smm.so(_ZN6galera13ReplicatorSMM18process_commit_cutEll+0x85)[0x7f3022288215]
/usr/lib/libgalera_smm.so(_ZN6galera15GcsActionSource8dispatchEPvRK10gcs_actionRb+0x405)[0x7f3022269d75]
/usr/lib/libgalera_smm.so(_ZN6galera15GcsActionSource7processEPvRb+0x5e)[0x7f302226a8ee]
/usr/lib/libgalera_smm.so(_ZN6galera13ReplicatorSMM10async_recvEPv+0x78)[0x7f302228f958]
/usr/lib/libgalera_smm.so(galera_recv+0x1e)[0x7f30222a4c8e]
/usr/sbin/mysqld[0x5a491c]
/usr/sbin/mysqld(start_wsrep_THD+0x287)[0x58d247]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f303d3dde9a]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f303c8f88bd]

Trying to get some variables. Some pointers may be invalid and cause the dump to abort.
Query (0): is an invalid pointer
Connection ID (thread ID): 12
Status: NOT_KILLED
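Side note on the "could use up to ... 69252 K bytes" line above: it is just the printed formula evaluated with the listed variables. As a sanity-check sketch (sort_buffer_size is not shown in the log, so it is read from the running server here rather than guessed):

```sql
-- key_buffer_size + (read_buffer_size + sort_buffer_size) * max_threads,
-- in KiB; the other values are taken from the log above
mysql> SELECT (8388608 + (131072 + @@sort_buffer_size) * 153) / 1024
    ->        AS worst_case_kbytes;
```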

Test your knowledge: Percona XtraDB Cluster (PXC) quiz

Latest MySQL Performance Blog posts - April 28, 2015 - 3:00am

I often talk with people who are very interested in the features of Percona XtraDB Cluster (PXC), such as synchronous and parallel replication, multi-node writing and high availability. However, some get confused when operating a real PXC cluster because they do not fully realize the implications of these features. So here is a fun way to test your PXC knowledge: try to solve these 12 questions related to PXC! (You will find the answers at the end of the post.)

Workload

1. With Galera 3.x, support for MyISAM is experimental. When can we expect to have full MyISAM support?
a. This will never happen as Galera is designed for transactional storage engines.
b. This is planned for Galera 4.0.

2. Why aren’t all workloads a good fit for PXC?
a. Execution plans can change compared to a regular MySQL server, so performance is sometimes not as good as with a regular MySQL server.
b. Large transactions and write hotspots can create performance issues with Galera.

3. For workloads with a write hot spot, writing on all nodes to distribute the load is a good way to solve the issue.
a. True
b. False

4. Optimistic locking is used in a PXC cluster. What does it mean?
a. When a transaction starts on a node, locks are only set on this node but never on the remote nodes.
b. When a transaction starts on a node, locks are only set on the remote nodes but never on the local node.
c. Write conflict detection is built-in, so there is no need to set locks at all.

Replication

5. Galera implements virtually synchronous replication. What does it mean?
a. A transaction is first committed locally, and then it is committed on all remote nodes at the same exact point in time.
b. Transactions are replicated synchronously, but they are applied asynchronously on remote nodes.
c. Replication is actually asynchronous, but since it is faster than MySQL replication, marketing decided to name it ‘virtually synchronous’.

6. When the receive queue of a node exceeds a threshold, the node sends flow control messages. What is the goal of these flow control messages?
a. They instruct the other nodes that they must pause processing writes for some time, to allow the slow node to catch up.
b. The other nodes trigger an election and if they have quorum they will evict the slow node.
c. The messages can be used by monitoring systems to detect a slow node, but they have no effect.

7. When you change the state of a node to Donor/Desynced, what happens?
a. The node stops receiving writes from the other nodes.
b. The node intentionally replicates writes at a slower pace; this is roughly equivalent to a delayed replica when using MySQL replication.
c. The node keeps working as usual, but it will not send flow control messages if its receive queue becomes large.

High Availability

8. You should always use an odd number of nodes, because with an even number (say 4 or 6), the failure of one node will create a split-brain situation.
a. True
b. False

9. With a 3-node cluster, what happens if you gracefully stop 2 nodes?
a. The remaining node can process queries normally.
b. The remaining node is up but it stops processing queries as it does not have quorum.

Operations

10. If a node has been stopped for less than 5 minutes, it will always perform an IST.
a. True: SST is only performed after a node crash, never after a regular shutdown.
b. False: it depends on the gcache size.

11. Even with datasets under 5GB, the preferred SST method is xtrabackup-v2 not mysqldump.
a. True
b. False

12. Migration from a master-slave setup to a PXC cluster always involves a downtime to dump and reload the database.
a. True, because MySQL replication and Galera replication are incompatible.
b. False, one node of the PXC cluster can be set up as an asynchronous replica of the old master.

Solutions

1. a      2. b      3. b
4. a      5. b      6. a
7. c      8. b      9. a
10. b    11. a    12. b
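As a follow-up to questions 6 and 7, both behaviors can be observed directly from a client session; a quick sketch using standard Galera status and wsrep variables:

```sql
-- Q6: flow control activity is visible in status counters on any node
mysql> SHOW GLOBAL STATUS LIKE 'wsrep_flow_control%';

-- Q7: manually moving a node to Donor/Desynced; it keeps applying
-- writes but no longer sends flow control messages
mysql> SET GLOBAL wsrep_desync = ON;
mysql> SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';
mysql> SET GLOBAL wsrep_desync = OFF;
```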

The post Test your knowledge: Percona XtraDB Cluster (PXC) quiz appeared first on MySQL Performance Blog.
