
Building Percona Server on OS X

Latest Forum Posts - January 29, 2016 - 7:26am
I'm trying to build Percona Server on OS X using 'cmake' (something I'm not familiar with), and I'm having issues.

Using the command line specified in the documentation:

cmake . -DCMAKE_BUILD_TYPE=RelWithDebInfo -DBUILD_CONFIG=mysql_release -DFEATURE_SET=community

I get the following errors from the cmake phase:

-- Performing Test HAVE_C_-Wno-error=tautological-constant-out-of-range-compare - Failed
-- Performing Test HAVE_CXX_-Wno-error=tautological-constant-out-of-range-compare - Failed
-- Performing Test HAVE_C_-Wno-error=extern-c-compat - Failed
-- Performing Test HAVE_CXX_-Wno-error=extern-c-compat - Failed


and I get the following errors when attempting to build (there are more, but I'm just posting
a few since they're similar):

/Users/bobmeyer/Devel/tools/percona/percona-server-5.6.28-76.1/storage/tokudb/tokudb_thread.h:211:13: error: use of
undeclared identifier 'pthread_mutex_timedlock'
int r = pthread_mutex_timedlock(&_mutex, &waittime);

/Users/bobmeyer/Devel/tools/percona/percona-server-5.6.28-76.1/storage/tokudb/tokudb_thread.h:260:17: error: use of
undeclared identifier 'pthread_rwlock_timedrdlock'
while ((r = pthread_rwlock_timedrdlock(&_rwlock, &waittime)) != 0) {

In file included from /Users/bobmeyer/Devel/tools/percona/percona-server-5.6.28-76.1/storage/tokudb/ha_tokudb.cc:31:
In file included from /Users/bobmeyer/Devel/tools/percona/percona-server-5.6.28-76.1/storage/tokudb/ha_tokudb.h:31:
/Users/bobmeyer/Devel/tools/percona/percona-server-5.6.28-76.1/storage/tokudb/tokudb_background.h:30:10: fatal error:
'atomic' file not found

#include <atomic>
^
7 warnings and 6 errors generated.
make[2]: *** [storage/tokudb/CMakeFiles/tokudb.dir/ha_tokudb.cc.o] Error 1
make[1]: *** [storage/tokudb/CMakeFiles/tokudb.dir/all] Error 2
make: *** [all] Error 2

I'm running OS X 10.7.5, with Xcode 4.6.3 and GCC 4.9.1.
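One workaround that may be worth trying — an untested sketch, and it assumes the generic WITHOUT_<PLUGIN> cmake switch applies to Percona Server's TokuDB plugin — is to leave TokuDB out of the build, since every error above comes from storage/tokudb, which relies on Linux-only calls such as pthread_mutex_timedlock:

# hypothetical: configure without the TokuDB plugin
cmake . -DCMAKE_BUILD_TYPE=RelWithDebInfo -DBUILD_CONFIG=mysql_release -DFEATURE_SET=community -DWITHOUT_TOKUDB=1

The rest of the server should then configure and build without touching the TokuDB sources.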

thanks in advance for any assistance.

rm.

Searching forums?

Latest Forum Posts - January 29, 2016 - 7:09am
I'm new to this forum, so sorry if this is a dumb question, but I can't find
any way to search the forums for specific topics...

Percona XtraDB Cluster 5.6.28-25.14 is now available

Latest MySQL Performance Blog posts - January 29, 2016 - 5:34am

Percona is glad to announce the new release of Percona XtraDB Cluster 5.6 on January 29, 2016. Binaries are available from the downloads area or from our software repositories.

Percona XtraDB Cluster 5.6.28-25.14 is now the current release, based on the following:

All Percona software is open source and free, and all the details of the release can be found in the 5.6.28-25.14 milestone at Launchpad.

For more information about relevant Codership releases, see this announcement.

Bugs Fixed:

  • 1494399: Fixed issue caused by replication of events on certain system tables (for example, mysql.slave_master_info, mysql.slave_relay_log_info). Replication in the Galera ecosystem is now avoided when bin-logging is disabled for said tables.
    NOTE: As part of this fix, when bin-logging is enabled, replication in the Galera ecosystem will happen only if BINLOG_FORMAT is set to either ROW or STATEMENT. The recommended format is ROW, while STATEMENT is required only for the pt-table-checksum tool to operate correctly. If BINLOG_FORMAT is set to MIXED, replication of events in the Galera ecosystem tables will not happen even with bin-logging enabled for those tables.
  • 1522385: Fixed GTID holes caused by skipped replication. A slave might ignore an event replicated from the master if the same event has already been executed on the slave. Such events are now propagated in the form of special GTID events to maintain consistency.
  • 1532857: The installer now creates a /var/lib/galera/ directory (assigned to user nobody), which can be used by garbd in the event it is started from a directory that garbd cannot write to.

Known Issues:

  • 1531842: Two instances of garbd cannot be started from the same working directory. This happens because each instance creates a state file (gvwstate.dat) in the current working directory by default. Although garbd is configured to use the base_dir variable, it was not registered due to a bug. Until garbd is fixed, you should start each instance from a separate working directory.
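A minimal illustration of that workaround (the cluster address and group name below are placeholders):

# run each garbd instance from its own working directory so the gvwstate.dat files do not collide
mkdir -p /var/lib/garbd1 /var/lib/garbd2
(cd /var/lib/garbd1 && garbd --address gcomm://10.0.0.1:4567 --group my_cluster --daemon)
(cd /var/lib/garbd2 && garbd --address gcomm://10.0.0.1:4567 --group my_cluster --daemon)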

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

XtraBackup for single database

Latest Forum Posts - January 29, 2016 - 2:22am
We have a production MySQL installation with about 20 databases containing 100+ tables each. The sizes of the databases vary between 5 and 30 GB of tablespace. We have to copy individual production databases to test servers regularly, which we have been doing with mysqldump so far, but that takes several days in some cases.

I read a lot of documentation on XtraBackup, innobackupex, transportable tablespaces, etc., but still haven't found a suitable solution. Our main concern is restore time, not backup time. The production system runs a dedicated backup slave that can be taken off replication and even powered off if needed.

I read in the manual (https://www.percona.com/doc/percona-...obackupex.html) that XtraBackup has the --databases option, which we could use, but do I understand correctly that to restore the database(s), you have to do it individually for each table instead of using --copy-back? With the number of tables in our databases, I don't think that's feasible, particularly if we have to manually create tables on the target systems before restoring them from the backup. But on the other hand, transferring full backups of the entire set of databases to the test systems is not what is desired either; it has to happen independently. Any ideas?
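One approach that may fit (a sketch only — database, table and path names are placeholders, and the per-table DISCARD/IMPORT loop can be generated from INFORMATION_SCHEMA.TABLES rather than typed by hand): take a partial backup of just the database you need, prepare it with --export, and swap the tablespaces into a schema-only copy on the test server.

# on the backup slave: partial backup of one database, then prepare it for export
innobackupex --databases="db1" --no-timestamp /backups/db1
innobackupex --apply-log --export /backups/db1

# on the test server: recreate the schema first (e.g. from mysqldump --no-data), then per table:
mysql -e "ALTER TABLE db1.t1 DISCARD TABLESPACE"
cp /backups/db1/db1/t1.ibd /backups/db1/db1/t1.cfg /var/lib/mysql/db1/
chown mysql:mysql /var/lib/mysql/db1/t1.*
mysql -e "ALTER TABLE db1.t1 IMPORT TABLESPACE"

This still touches every table, but the loop can be scripted, and the import runs at file-copy speed instead of replaying SQL, which should help with the restore-time concern.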

Percona command returns an error when I use it

Latest Forum Posts - January 28, 2016 - 8:00pm
When I use the Percona command below, I get an error. Can anyone tell me the reason? Thanks.



Vote Percona Server in LinuxQuestions.org Members Choice Awards

Latest MySQL Performance Blog posts - January 28, 2016 - 1:13pm

Percona is calling on you! Vote Percona for Database of the Year in LinuxQuestions.org Members Choice Awards 2015. Help our Percona Server get recognized as one of the best database options for data performance. Percona Server is a free, fully compatible, enhanced, open source drop-in replacement for MySQL® that provides superior performance, scalability and instrumentation.

LinuxQuestions.org, or LQ for short, is a community-driven, self-help web site for Linux users. Each year, LinuxQuestions.org holds a competition to recognize the year’s best-in-breed technologies. The winners of each category are determined by the online Linux community!

You can vote now for your favorite products of 2015 (Percona, of course!). This is your chance to be heard!

Voting ends on February 10th, 2016. You must be a registered member of LinuxQuestions.org with at least one post on their forums to vote.

Set up a MongoDB replica/sharding set in seconds

Latest MySQL Performance Blog posts - January 28, 2016 - 11:09am

In the MySQL world, we’re used to playing in the MySQL Sandbox. It allows us to deploy a testing replication environment in seconds, without a great deal of effort or navigating multiple virtual machines. It is a tool that we couldn’t live without in Support.

In this post I am going to walk through the different ways we have to deploy a MongoDB replica/sharding set test in a similar way. It is important to mention that this is not intended for production, but to be used for troubleshooting, learning or just playing around with replication.

Replica Set regression test’s diagnostic commands

MongoDB includes a .js library that allows us to deploy a replica set from the MongoDB shell. Just run the following:

# mongo --nodb
> var rstest = new ReplSetTest( { name: 'replicaSetTest', nodes: 3 } )
> rstest.startSet()
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31000, 31001, 31002 ] 31000 number
{
 "useHostName" : true,
 "oplogSize" : 40,
 "keyFile" : undefined,
 "port" : 31000,
 "noprealloc" : "",
 "smallfiles" : "",
 "rest" : "",
 "replSet" : "replicaSetTest",
 "dbpath" : "$set-$node",
 "restart" : undefined,
 "pathOpts" : { "node" : 0, "set" : "replicaSetTest" }
}
ReplSetTest Starting.... [...]

At some point our mongod daemons will be running, each with its own data directory and port:

2133 pts/0 Sl+ 0:01 mongod --oplogSize 40 --port 31000 --noprealloc --smallfiles --rest --replSet replicaSetTest --dbpath /data/db/replicaSetTest-0 --setParameter enableTestCommands=1
2174 pts/0 Sl+ 0:01 mongod --oplogSize 40 --port 31001 --noprealloc --smallfiles --rest --replSet replicaSetTest --dbpath /data/db/replicaSetTest-1 --setParameter enableTestCommands=1
2213 pts/0 Sl+ 0:01 mongod --oplogSize 40 --port 31002 --noprealloc --smallfiles --rest --replSet replicaSetTest --dbpath /data/db/replicaSetTest-2 --setParameter enableTestCommands=1

Perfect. Now we need to initialize the replica set:

> rstest.initiate()
{
 "replSetInitiate" : {
  "_id" : "replicaSetTest",
  "members" : [
   { "_id" : 0, "host" : "debian:31000" },
   { "_id" : 1, "host" : "debian:31001" },
   { "_id" : 2, "host" : "debian:31002" }
  ]
 }
}
m31000| 2016-01-24T10:42:36.639+0100 I REPL [ReplicationExecutor] Member debian:31001 is now in state SECONDARY
m31000| 2016-01-24T10:42:36.639+0100 I REPL [ReplicationExecutor] Member debian:31002 is now in state SECONDARY
[...]

and it is done!

> rstest.status() { "set" : "replicaSetTest", "date" : ISODate("2016-01-24T09:43:41.261Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "debian:31000", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 329, "optime" : Timestamp(1453628552, 1), "optimeDate" : ISODate("2016-01-24T09:42:32Z"), "electionTime" : Timestamp(1453628554, 1), "electionDate" : ISODate("2016-01-24T09:42:34Z"), "configVersion" : 1, "self" : true }, { "_id" : 1, "name" : "debian:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 68, "optime" : Timestamp(1453628552, 1), "optimeDate" : ISODate("2016-01-24T09:42:32Z"), "lastHeartbeat" : ISODate("2016-01-24T09:43:40.671Z"), "lastHeartbeatRecv" : ISODate("2016-01-24T09:43:40.677Z"), "pingMs" : 0, "configVersion" : 1 }, { "_id" : 2, "name" : "debian:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 68, "optime" : Timestamp(1453628552, 1), "optimeDate" : ISODate("2016-01-24T09:42:32Z"), "lastHeartbeat" : ISODate("2016-01-24T09:43:40.672Z"), "lastHeartbeatRecv" : ISODate("2016-01-24T09:43:40.690Z"), "pingMs" : 0, "configVersion" : 1 } ], "ok" : 1 }

There are many more commands you can run; just type rstest. and then press Tab twice to get the list. Follow this link if you need more info:

http://api.mongodb.org/js/current/symbols/_global_.html#ReplSetTest

What about sharding? Pretty similar:

> var shtest = new ShardingTest({ shards: 2, mongos: 1 })

This is the documentation link if you need more info:

http://api.mongodb.org/js/current/symbols/_global_.html#ShardingTest

It is important to mention that if you close the mongo shell where you ran these commands, all the spawned mongod processes will also shut down.
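If you prefer to shut things down explicitly, or to grab a connection to the test cluster, the same helpers expose a few handy calls — a quick sketch, best double-checked against the API docs linked above:

> var db = shtest.getDB("test")   // a database handle that goes through the test mongos
> db.demo.insert({ "a" : 1 })
> rstest.stopSet()                // stop the replica set started earlier
> shtest.stop()                   // stop the sharded test cluster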

mtools

mtools is a collection of tools and scripts that make MongoDB DBAs’ lives much easier. It includes mlaunch, which can be used to start replica sets and sharded systems for testing.

https://github.com/rueckstiess/mtools

The mlaunch tool requires pymongo, so you need to install it:

# pip install pymongo

You can also use pip to install mtools:

# pip install mtools

Then, we can just start our replica set. In this case, with two nodes and one arbiter:

# mlaunch --replicaset --nodes 2 --arbiter --name "replicaSetTest" --port 3000
launching: mongod on port 3000
launching: mongod on port 3001
launching: mongod on port 3002
replica set 'replicaSetTest' initialized.

# ps -x | grep mongod
10246 ?  Sl  0:03 mongod --replSet replicaSetTest --dbpath /root/data/replicaSetTest/rs1/db --logpath /root/data/replicaSetTest/rs1/mongod.log --port 3000 --logappend --fork
10257 ?  Sl  0:03 mongod --replSet replicaSetTest --dbpath /root/data/replicaSetTest/rs2/db --logpath /root/data/replicaSetTest/rs2/mongod.log --port 3001 --logappend --fork
10274 ?  Sl  0:03 mongod --replSet replicaSetTest --dbpath /root/data/replicaSetTest/arb/db --logpath /root/data/replicaSetTest/arb/mongod.log --port 3002 --logappend --fork

Done. You can also deploy a sharded cluster, or a sharded replica set. More information is available at the following link:

https://github.com/rueckstiess/mtools/wiki/mlaunch
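mlaunch also comes with companion subcommands for managing what it started (see the wiki link above for the full list):

# mlaunch list    # show the started processes and their ports
# mlaunch stop    # shut the whole test environment down
# mlaunch start   # bring it back up from the same data directory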

Ognom Toolkit

“It is a set of utilities, functions and tests with the goal of making the life of MongoDB/TokuMX administrators easier.”

This toolkit was created by Fernando Ipar and Sveta Smirnova, and includes a set of scripts that allow us to deploy a testing environment for both sharding and replication configurations. The main difference is that you can specify which storage engine will be the default, something you cannot do with the other two methods.

https://github.com/Percona-Lab/ognom-toolkit

We have the tools we need under the “lab” directory. Most of the names are pretty self-explanatory:

~/ognom-toolkit/lab# ls
README.md  start_multi_dc_simulation  start_sharded_test  stop_all_mongo    stop_sharded_test
common.sh  start_replica_set          start_single        stop_replica_set  stop_single

So, let’s say we want a replication cluster with four nodes that will use the PerconaFT storage engine. We have to do the following:

Set a variable with the storage engine we want to use:

# export MONGODB_ENGINE=PerconaFT

Specify where our mongod binary is:

# export MONGOD=/usr/bin/mongod

Start our four-node replica set:

# ./start_replica_set Starting 4 mongod instances 2016-01-25T12:36:04.812+0100 I STORAGE Compression: snappy 2016-01-25T12:36:04.812+0100 I STORAGE MaxWriteMBPerSec: 1024 2016-01-25T12:36:04.813+0100 I STORAGE Crash safe counters: 0 about to fork child process, waiting until server is ready for connections. forked process: 1086 child process started successfully, parent exiting [...] MongoDB shell version: 3.0.8 connecting to: 127.0.0.1:27001/test { "set" : "rsTest", "date" : ISODate("2016-01-25T11:36:09.039Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "debian:27001", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 5, "optime" : Timestamp(1453721767, 5), "optimeDate" : ISODate("2016-01-25T11:36:07Z"), "electionTime" : Timestamp(1453721767, 2), "electionDate" : ISODate("2016-01-25T11:36:07Z"), "configVersion" : 4, "self" : true }, { "_id" : 1, "name" : "debian:27002", "health" : 1, "state" : 5, "stateStr" : "STARTUP2", "uptime" : 1, "optime" : Timestamp(0, 0), "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2016-01-25T11:36:07.991Z"), "lastHeartbeatRecv" : ISODate("2016-01-25T11:36:08.093Z"), "pingMs" : 0, "configVersion" : 2 }, { "_id" : 2, "name" : "debian:27003", "health" : 1, "state" : 0, "stateStr" : "STARTUP", "uptime" : 1, "optime" : Timestamp(0, 0), "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2016-01-25T11:36:07.991Z"), "lastHeartbeatRecv" : ISODate("2016-01-25T11:36:08.110Z"), "pingMs" : 2, "configVersion" : -2 }, { "_id" : 3, "name" : "debian:27004", "health" : 1, "state" : 0, "stateStr" : "STARTUP", "uptime" : 1, "optime" : Timestamp(0, 0), "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2016-01-25T11:36:08.010Z"), "lastHeartbeatRecv" : ISODate("2016-01-25T11:36:08.060Z"), "pingMs" : 18, "configVersion" : -2 } ], "ok" : 1 }

Now, just start using it:

rsTest:PRIMARY> db.names.insert({ "a" : "Miguel"})
rsTest:PRIMARY> db.names.stats()
{
 "ns" : "mydb.names",
 "count" : 1,
 "size" : 36,
 "avgObjSize" : 36,
 "storageSize" : 16384,
 "capped" : false,
 "PerconaFT" : {
[...]

Conclusion

When dealing with bugs, troubleshooting, or testing an application that needs a complex MongoDB infrastructure, these processes can save us a lot of time. There is no need to set up multiple virtual machines, deal with networking, or risk human mistakes. Just say “I want a sharded cluster, do it for me.”

Cluster crashing quite often

Latest Forum Posts - January 28, 2016 - 6:30am
In preparation for migrating our TokuMX prod cluster to a MongoDB 3 cluster, and before our Percona visit in a couple of weeks, we decided to stand up a dev and a QA cluster. This was done this week, and it is quite unstable, crashing several times a day.

Both environments have 3 machines. Generally the same two will crash (the primary and one secondary) and the third will remain in a secondary state.

The machines are Red Hat Enterprise Linux VMs with 8 CPUs and 16 GB RAM; the data directory is on a SAN.

This is the log entry during the crash. I looked on both machines and it appears to be the exact same trace dump:

2016-01-27T18:36:10.526-0600 F - Got signal: 6 (Aborted).

0x10b6cd2 0x10b6583 0x10b694a 0x7f22377286a0 0x7f2237728625 0x7f2237729e05 0x15c0a23 0x161a714 0x15ddf3f 0x15df70d 0x161fe60 0x1644f36 0x7f2238c8ca51 0x7f22377de93d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"400000","o":"CB6CD2"},{"b":"400000","o":"CB6 583"},{"b":"400000","o":"CB694A"},{"b":"7F22376F60 00","o":"326A0"},{"b":"7F22376F6000","o":"32625"}, {"b":"7F22376F6000","o":"33E05"},{"b":"400000","o" :"11C0A23"},{"b":"400000","o":"121A714"},{"b":"400 000","o":"11DDF3F"},{"b":"400000","o":"11DF70D"},{ "b":"400000","o":"121FE60"},{"b":"400000","o":"124 4F36"},{"b":"7F2238C85000","o":"7A51"},{"b":"7F223 76F6000","o":"E893D"}],"processInfo":{ "mongodbVersion" : "3.0.8", "gitVersion" : "nogitversion", "uname" : { "sysname" : "Linux", "release" : "2.6.32-573.7.1.el6.x86_64", "version" : "#1 SMP Thu Sep 10 13:42:16 EDT 2015", "machine" : "x86_64" }, "somap" : [ { "elfType" : 2, "b" : "400000", "buildId" : "7E06EF067281BA0E4AB5A7FDD89C759DFE5CEB71" }, { "b" : "7FFD648EF000", "elfType" : 3, "buildId" : "2426D85978796C7ED259CDC601A7C310C339A21C" }, { "b" : "7F22392C9000", "path" : "/usr/lib64/libsasl2.so.2", "elfType" : 3, "buildId" : "E0AEE889D5BF1373F2F9EE0D448DBF3F5B5113F0" }, { "b" : "7F22390B3000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "D053BB4FF0C2FC983842F81598813B9B931AD0D1" }, { "b" : "7F2238EA2000", "path" : "/lib64/libbz2.so.1", "elfType" : 3, "buildId" : "1250B1D041DD7552F0C870BB188DC3A34DF2651D" }, { "b" : "7F2238C85000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "D467973C46E563CDCF64B5F12B2D6A50C7A25BA1" }, { "b" : "7F2238A19000", "path" : "/usr/lib64/libssl.so.10", "elfType" : 3, "buildId" : "93610457BCF424BEBBF1F3FB44E51B51B50F2B55" }, { "b" : "7F2238636000", "path" : "/usr/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "06DDBB192AF74F99DB58F2150BFB83F42F5EBAD3" }, { "b" : "7F223842E000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "58C5A5FF5C82D7BE3113BE36DD87C7004E3C4DB1" }, { "b" : "7F223822A000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "B5AE05CEDC0CE917F50A3A468CFA2ACD8592E8F6" }, { "b" : "7F2237F24000", "path" : "/usr/lib64/libstdc++.so.6", "elfType" : 3, "buildId" : "28AF9321EBEA9D172CA43E11A60E02D0F7014870" }, { "b" : "7F2237CA0000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "989FE3A42CA8CEBDCC185A743896F23A0CF537ED" }, { "b" : "7F2237A8A000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "2AC15B051D1B8B53937E3341EA931D0E96F745D9" }, { "b" : "7F22376F6000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "A6D15926E61580E250ED91F84FF7517F3970CD83" }, { "b" : "7F22394E3000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "04202A4A8BE624D2193E812A25589E2DD02D5B5C" }, { "b" : "7F22374DC000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "F704FA7D21D05EF31E90FB4890FCA7F3D91DA138" }, { "b" : "7F22372A5000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "128802B73016BE233837EA9F2DCBC2153ACC2D6A" }, { "b" : "7F2237061000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "0C72521270790A1BD52C8F6B989EEA5A575085BF" }, { "b" : "7F2236D7A000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "DC11D5D89BDC77FF242481122D51E5A08DB60DA8" }, { "b" : "7F2236B76000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "13FFCD68952B7715DDF34C9321D82E3041EA9006" }, { "b" : "7F223694A000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "15782495E3AF093E67DDAE9A86436FFC6B3CC4D3" }, { "b" : "7F2236747000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "58BAC04A1DB3964A8F594EFFBE4838AD01214EDC" }, { "b" : "7F223653C000", "path" : "/lib64/libkrb5support.so.0", 
"elfType" : 3, "buildId" : "44A3A1C1891B4C8170C3DB80E7117A022E5EECD0" }, { "b" : "7F2236339000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "3BCCABE75DC61BBA81AAE45D164E26EF4F9F55DB" }, { "b" : "7F223611A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "2D0F26E648D9661ABD83ED8B4BBE8F2CFA50393B" } ] }}
mongod(_ZN5mongo15printStackTraceERSo+0x32) [0x10b6cd2]
mongod(+0xCB6583) [0x10b6583]
mongod(+0xCB694A) [0x10b694a]
libc.so.6(+0x326A0) [0x7f22377286a0]
libc.so.6(gsignal+0x35) [0x7f2237728625]
libc.so.6(abort+0x175) [0x7f2237729e05]
mongod(_Z23toku_ftnode_pf_callbackPvS_S_iP11pair_attr_s+0xAC3) [0x15c0a23]
mongod(_Z30toku_cachetable_pf_pinned_pairPvPFiS_S_S_iP11pair_attr_sES_P9cachefile10blocknum_sj+0x104) [0x161a714]
mongod(_Z24toku_ft_flush_some_childP2ftP6ftnodeP14flusher_advice+0x23F) [0x15ddf3f]
mongod(_Z28toku_ftnode_cleaner_callbackPv10blocknum_sjS_+0x1DD) [0x15df70d]
mongod(_ZN7cleaner11run_cleanerEv+0x270) [0x161fe60]
mongod(+0x1244F36) [0x1644f36]
libpthread.so.0(+0x7A51) [0x7f2238c8ca51]
libc.so.6(clone+0x6D) [0x7f22377de93d]
----- END BACKTRACE -----

Can't access cloud.percona.com

Latest Forum Posts - January 28, 2016 - 1:32am
I have tried many times to access cloud.percona.com, but I still cannot reach the website. Is there any other way to access it? Does anyone else have the same problem?




Percona TokuDB cannot build on OS X El Capitan

Latest Forum Posts - January 28, 2016 - 1:20am
The Percona TokuDB plugin cannot be built on OS X El Capitan; it shows this info:

Scanning dependencies of target tokudb_static_conv
[ 61%] Building CXX object storage/tokudb/PerconaFT/src/CMakeFiles/tokudb_static_conv.dir/ydb.cc.o
In file included from /Users/Eric/Downloads/percona-server-5.6.28-76.1/storage/tokudb/PerconaFT/src/ydb.cc:52:
In file included from /Users/Eric/Downloads/percona-server-5.6.28-76.1/storage/tokudb/PerconaFT/ft/ft-flusher.h:41:
In file included from /Users/Eric/Downloads/percona-server-5.6.28-76.1/storage/tokudb/PerconaFT/ft/ft-internal.h:49:
In file included from /Users/Eric/Downloads/percona-server-5.6.28-76.1/storage/tokudb/PerconaFT/ft/node.h:40:
In file included from /Users/Eric/Downloads/percona-server-5.6.28-76.1/storage/tokudb/PerconaFT/ft/bndata.h:40:
In file included from /Users/Eric/Downloads/percona-server-5.6.28-76.1/storage/tokudb/PerconaFT/util/dmt.h:679:
/Users/Eric/Downloads/percona-server-5.6.28-76.1/storage/tokudb/PerconaFT/util/dmt.cc:873:9: error:
nonnull parameter 'outlen' will evaluate to 'true' on first encounter
[-Werror,-Wpointer-bool-conversion]
if (outlen) {
~~ ^~~~~~
/Users/Eric/Downloads/percona-server-5.6.28-76.1/storage/tokudb/PerconaFT/util/dmt.cc:883:9: error:
nonnull parameter 'outlen' will evaluate to 'true' on first encounter
[-Werror,-Wpointer-bool-conversion]
if (outlen) {
~~ ^~~~~~
/Users/Eric/Downloads/percona-server-5.6.28-76.1/storage/tokudb/PerconaFT/util/dmt.cc:893:9: error:
nonnull parameter 'outlen' will evaluate to 'true' on first encounter
[-Werror,-Wpointer-bool-conversion]
if (outlen) {
~~ ^~~~~~
/Users/Eric/Downloads/percona-server-5.6.28-76.1/storage/tokudb/PerconaFT/util/dmt.cc:903:9: error:
nonnull parameter 'outlen' will evaluate to 'true' on first encounter
[-Werror,-Wpointer-bool-conversion]
if (outlen) {
~~ ^~~~~~
4 errors generated.
make[2]: *** [storage/tokudb/PerconaFT/src/CMakeFiles/tokudb_static_conv.dir/ydb.cc.o] Error 1
make[1]: *** [storage/tokudb/PerconaFT/src/CMakeFiles/tokudb_static_conv.dir/all] Error 2
make: *** [all] Error 2
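These are clang warnings promoted to errors by -Werror. Two untested sketches of possible workarounds (the WITHOUT_TOKUDB switch in particular is an assumption about the generic cmake plugin options): demote that particular warning, or leave TokuDB out of the build entirely:

# keep TokuDB, but stop treating the pointer-bool warning as an error
cmake . -DCMAKE_BUILD_TYPE=RelWithDebInfo -DBUILD_CONFIG=mysql_release -DFEATURE_SET=community -DCMAKE_C_FLAGS="-Wno-error=pointer-bool-conversion" -DCMAKE_CXX_FLAGS="-Wno-error=pointer-bool-conversion"

# or skip the TokuDB plugin altogether
cmake . -DCMAKE_BUILD_TYPE=RelWithDebInfo -DBUILD_CONFIG=mysql_release -DFEATURE_SET=community -DWITHOUT_TOKUDB=1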



Can't add Google account linked users

Latest Forum Posts - January 27, 2016 - 5:40pm

I have some users signed up with their Google accounts, and I wanted to add them to a Percona Cloud organization. But I can't add the users; the site shows the error "A user does not exist for (email)". Are there any other ways to add the users?

Thanks



MongoDB revs you up: What storage engine is right for you? (Part 4)

Latest MySQL Performance Blog posts - January 27, 2016 - 12:13pm
Differentiating Between MongoDB Storage Engines: PerconaFT

In this series of posts, we discussed what a storage engine is, and how you can determine the characteristics of one versus the other:

“A database storage engine is the underlying software that a DBMS uses to create, read, update and delete data from a database. The storage engine should be thought of as a “bolt on” to the database (server daemon), which controls the database’s interaction with memory and storage subsystems.”

Generally speaking, it’s important to understand what type of work environment the database is going to interact with, and to select a storage engine that is tailored to that environment.

The first post looked at MMAPv1, the original default engine for MongoDB (through release 3.0). The second post examined WiredTiger, the new default MongoDB engine. The third post reviewed RocksDB, an engine developed for the Facebook environment.

This post will cover PerconaFT. PerconaFT was developed out of Percona’s acquisition of Tokutek, from their TokuDB product.

PerconaFT

Find it in: Percona Builds

PerconaFT is the newest version of the Fractal Tree storage engine that was designed and implemented by Tokutek, which was acquired by Percona in April of 2015. Designed at MIT, SUNY Stony Brook and Rutgers, the Fractal Tree is a data structure that aimed to remove disk bottlenecks from databases that were using the B-tree with datasets several times larger than cache.

PerconaFT is arguably the most “mature” storage engine for MongoDB, with support for document level concurrency and compression. The Fractal Tree was first commercially implemented in June of 2013 in TokuMX, a fork of MongoDB, with an advanced feature set.

As described previously, the Fractal Tree (which is available for MongoDB in the PerconaFT storage engine) is a write-optimized data structure utilizing many log-like “queues” called message buffers, but has an arrangement like that of a read-optimized data structure. With the combination of these properties, PerconaFT can provide high performance for applications with high insert rates, while providing very efficient lookups for update/query-based applications. This will theoretically provide very predictable and consistent performance as the database grows. Furthermore, PerconaFT typically provides, comparatively, the deepest compression rates of any of the engines we’ve discussed in this series.

An ideal fit for the PerconaFT storage engine is a system with varied workloads, where predictable vertical scaling is required in addition to the horizontal scaling provided by MongoDB. Furthermore, the ability of PerconaFT to maintain performance while compressing – along with support for multiple compression algorithms (snappy, quicklz, zlib and lzma) – makes it one of the best options for users looking to optimize their data footprint.

Conclusion

Most people don’t know that they have a choice when it comes to storage engines, and that the choice should be based on what the database workload will look like. Percona’s Vadim Tkachenko performed an excellent benchmark test comparing the performance of PerconaFT and WiredTiger to help specifically differentiate between these engines.

Part 1: Intro and the MMAPv1 storage engine.

Part 2: WiredTiger storage engine.

Part 3: RocksDB storage engine.

Percona CEO Peter Zaitsev discusses working remotely with Fortune Magazine

Latest MySQL Performance Blog posts - January 27, 2016 - 11:33am

As a company that believes in and supports the open source community, embracing innovation and change is par for the course at Percona. We wouldn’t be the company we are today without fostering a culture that rewards creative thinking and rapid evolution.

Part of this culture is making sure that Percona is a place where people love to work, and can transmit their passion for technology into tangible rewards – both personally and financially. One of the interesting facts about Percona’s culture is that almost 95 percent of its employees are working remotely. Engineers, support, marketing, even executive staff – most of these people interact daily via electronic medium rather than in person. Percona’s staff is worldwide across 29 countries and 19 U.S. states. How does that work? How do you make sure that the staff is happy, committed, and engaged enough to stay on? How do you attract prospective employees with this unusual model?

It turns out that not only does it work, but it works very well. It can be challenging to manage the needs of such a geographically diverse group, but the rewards (and the results) outweigh the effort.

The secret is, of course, good communication, an environment of respect and personal empowerment.

Percona’s CEO Peter Zaitsev recently provided some of his thoughts to Fortune magazine about how our business model helps not only to foster incredible dedication and innovation, but also to create a work environment that encourages passion, commitment and teamwork.

Read about his ideas on Percona’s work model here.

Oh, and by the way, Percona is currently hiring! Perhaps a career here might fit in with your plans . . .

innobackupex --apply-log fails with The transaction log file is corrupted

Latest Forum Posts - January 27, 2016 - 8:48am
Dear support,

I am running Percona Server 5.6.24-72.2-log with innobackupex 2.3.3 Linux (x86_64).

A full backup is created using the following command:

innobackupex --defaults-file=/path/to/my/defaults/file.cnf --user=myuser --password=mypasswd --no-timestamp /my/path/to/backup/timestamp | tee -a /my/path/to/log/file.log

(...)
160125 19:34:45 Finished backing up non-InnoDB tables and files
160125 19:34:45 Executing LOCK BINLOG FOR BACKUP...
160125 19:34:45 [00] Writing xtrabackup_binlog_info
160125 19:34:45 [00] ...done
160125 19:34:45 Executing FLUSH NO_WRITE_TO_BINLOG ENGINE LOGS...
xtrabackup: The latest check point (for incremental): '2980804563434'
xtrabackup: Stopping log copying thread.
.160125 19:34:45 >> log scanned up to (2980804563434)
(...)
160125 19:34:45 Executing UNLOCK BINLOG
160125 19:34:45 Executing UNLOCK TABLES
160125 19:34:45 All tables unlocked
160125 19:34:45 Backup created in directory '/data2//20160125_181155'
MySQL binlog position: filename 'mysql-bin-log.000016', position '120'
160125 19:34:45 [00] Writing backup-my.cnf
160125 19:34:45 [00] ...done
160125 19:34:45 [00] Writing xtrabackup_info
160125 19:34:45 [00] ...done
xtrabackup: Transaction log of lsn (2980804563434) to (2980804563434) was copied.
160125 19:34:45 completed OK!

Takes 82 minutes.

Then I prepare this backup using:

$ /home/products/xtrabackup/bin/innobackupex --use-memory=1G --apply-log /my/path/to/backup/timestamp | tee -a /my/path/to/prepare/log/file.log

160126 10:57:47 innobackupex: Starting the apply-log operation

IMPORTANT: Please check that the apply-log run completes successfully.
At the end of a successful apply-log run innobackupex
prints "completed OK!".

/home/products/xtrabackup/bin/innobackupex version 2.3.3 based on MySQL server 5.6.24 Linux (x86_64) (revision id: 525ca7d)
xtrabackup: cd to /data2/20160125_181155
xtrabackup: This target seems to be not prepared yet.
xtrabackup: xtrabackup_logfile detected: size=2097152, start_lsn=(2980804563434)
xtrabackup: using the following InnoDB configuration for recovery:
xtrabackup: innodb_data_home_dir = ./
xtrabackup: innodb_data_file_path = ibdata1:10M:autoextend
xtrabackup: innodb_log_group_home_dir = ./
xtrabackup: innodb_log_files_in_group = 1
xtrabackup: innodb_log_file_size = 2097152
xtrabackup: using the following InnoDB configuration for recovery:
xtrabackup: innodb_data_home_dir = ./
xtrabackup: innodb_data_file_path = ibdata1:10M:autoextend
xtrabackup: innodb_log_group_home_dir = ./
xtrabackup: innodb_log_files_in_group = 1
xtrabackup: innodb_log_file_size = 2097152
xtrabackup: Starting InnoDB instance for recovery.
xtrabackup: Using 1073741824 bytes for buffer pool (set by --use-memory parameter)
InnoDB: Using atomics to ref count buffer pool pages
InnoDB: The InnoDB memory heap is disabled
InnoDB: Mutexes and rw_locks use GCC atomic builtins
InnoDB: Memory barrier is not used
InnoDB: Compressed tables use zlib 1.2.3
InnoDB: Using CPU crc32 instructions
InnoDB: Initializing buffer pool, size = 1.0G
InnoDB: Completed initialization of buffer pool
InnoDB: The first specified data file ./ibdata1 did not exist: a new database to be created!
InnoDB: Setting file ./ibdata1 size to 10 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Setting log file ./ib_logfile101 size to 2 MB
InnoDB: Renaming log file ./ib_logfile101 to ./ib_logfile0
InnoDB: New log files created, LSN=45781
InnoDB: Doublewrite buffer not found: creating new
InnoDB: Doublewrite buffer created
InnoDB: 1 rollback segment(s) are active.
InnoDB: Waiting for purge to start
InnoDB: 5.6.24 started; log sequence number 0

xtrabackup: error: The transaction log file is corrupted.
xtrabackup: error: The log was not applied to the intended LSN!
xtrabackup: Log applied to lsn 0
xtrabackup: The intended lsn is 2980804563434


Interestingly, the xtrabackup_logfile file has been deleted from the backup path. If I save xtrabackup_logfile before running the prepare and restore it afterwards, the first prepare still fails with the corrupted transaction log error, but the next attempt to prepare works:

# Restore the xtrabackup_logfile from the copy saved after the full backup
$ cp /tmp/xtrabackup_logfile /my/path/to/backup/timestamp/

# Run the prepare again:
$ /home/products/xtrabackup/bin/innobackupex --use-memory=1G --apply-log /my/path/to/backup/timestamp/ | tee -a /my/path/to/prepare/log/file.log
160126 10:58:21 innobackupex: Starting the apply-log operation

IMPORTANT: Please check that the apply-log run completes successfully.
At the end of a successful apply-log run innobackupex
prints "completed OK!".

/home/products/xtrabackup/bin/innobackupex version 2.3.3 based on MySQL server 5.6.24 Linux (x86_64) (revision id: 525ca7d)
xtrabackup: cd to /data2/20160125_181155
xtrabackup: This target seems to be not prepared yet.
xtrabackup: xtrabackup_logfile detected: size=2097152, start_lsn=(2980804563434)
xtrabackup: using the following InnoDB configuration for recovery:
xtrabackup: innodb_data_home_dir = ./
xtrabackup: innodb_data_file_path = ibdata1:10M:autoextend
xtrabackup: innodb_log_group_home_dir = ./
xtrabackup: innodb_log_files_in_group = 1
xtrabackup: innodb_log_file_size = 2097152
xtrabackup: using the following InnoDB configuration for recovery:
xtrabackup: innodb_data_home_dir = ./
xtrabackup: innodb_data_file_path = ibdata1:10M:autoextend
xtrabackup: innodb_log_group_home_dir = ./
xtrabackup: innodb_log_files_in_group = 1
xtrabackup: innodb_log_file_size = 2097152
xtrabackup: Starting InnoDB instance for recovery.
xtrabackup: Using 1073741824 bytes for buffer pool (set by --use-memory parameter)
InnoDB: Using atomics to ref count buffer pool pages
InnoDB: The InnoDB memory heap is disabled
InnoDB: Mutexes and rw_locks use GCC atomic builtins
InnoDB: Memory barrier is not used
InnoDB: Compressed tables use zlib 1.2.3
InnoDB: Using CPU crc32 instructions
InnoDB: Initializing buffer pool, size = 1.0G
InnoDB: Completed initialization of buffer pool
InnoDB: Highest supported file format is Barracuda.
InnoDB: The log sequence numbers 0 and 0 in ibdata files do not match the log sequence number 2980804563434 in the ib_logfiles!
InnoDB: Database was not shutdown normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages
InnoDB: from the doublewrite buffer...
InnoDB: 1 rollback segment(s) are active.
InnoDB: Waiting for purge to start
InnoDB: 5.6.24 started; log sequence number 2980804563434

xtrabackup: starting shutdown with innodb_fast_shutdown = 1
InnoDB: FTS optimize thread exiting.
InnoDB: Starting shutdown...
InnoDB: Shutdown completed; log sequence number 2980804563444
xtrabackup: using the following InnoDB configuration for recovery:
xtrabackup: innodb_data_home_dir = ./
xtrabackup: innodb_data_file_path = ibdata1:10M:autoextend
xtrabackup: innodb_log_group_home_dir = ./
xtrabackup: innodb_log_files_in_group = 2
xtrabackup: innodb_log_file_size = 50331648
InnoDB: Using atomics to ref count buffer pool pages
InnoDB: The InnoDB memory heap is disabled
InnoDB: Mutexes and rw_locks use GCC atomic builtins
InnoDB: Memory barrier is not used
InnoDB: Compressed tables use zlib 1.2.3
InnoDB: Using CPU crc32 instructions
InnoDB: Initializing buffer pool, size = 1.0G
InnoDB: Completed initialization of buffer pool
InnoDB: Setting log file ./ib_logfile101 size to 48 MB
InnoDB: Setting log file ./ib_logfile1 size to 48 MB
InnoDB: Renaming log file ./ib_logfile101 to ./ib_logfile0
InnoDB: New log files created, LSN=2980804563444
InnoDB: Highest supported file format is Barracuda.
InnoDB: 1 rollback segment(s) are active.
InnoDB: Waiting for purge to start
InnoDB: 5.6.24 started; log sequence number 2980804563468
xtrabackup: starting shutdown with innodb_fast_shutdown = 1
InnoDB: FTS optimize thread exiting.
InnoDB: Starting shutdown...
InnoDB: Shutdown completed; log sequence number 2980804563478
160126 10:58:41 completed OK!

I have no problem with 2.1.8, and I am going to test with 2.2. I looked through the forum posts and didn't find a match. Could it be a bug in 2.3.3?

Thanks,

Slave Replication breaks

Latest Forum Posts - January 27, 2016 - 7:50am
Hi,

We recently migrated our application from MySQL to Percona Server (Percona-Server-server-56-5.6.27-rel75.0.el7.x86_64) on CentOS 7 (CentOS Linux release 7.1.1503 (Core)).

The parameters are tuned per the guidelines provided by the Percona team during their audit of our prod environment.

Our application was working very well for more than a month, but now we are facing replication issues on the slaves.

The architecture is a single master with 3 slaves (all identical to the master), plus a fourth slave (running on Ubuntu with the Ubuntu Percona binaries and default my.cnf parameters).

Replication on all 3 slaves fails with "Last_SQL_Error: Error 'Duplicate entry '*********' for key 'PRIMARY'' on query." error.

Interestingly, replication does not break on the fourth server. The error is noticed randomly for insert/update queries and is not tied to a particular one.

Restoring from a complete dump of the master causes the same issue.

We tried the following steps (a sketch of the corresponding commands follows the list):

1. Stop the master for any writes
2. Create a full DB dump using the mysqldump command
3. Restore it on the slave
4. Set the binlog file and position
5. Start the slave
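A sketch of what those steps look like as commands (the host name and coordinates are placeholders; with --master-data=2 the correct CHANGE MASTER TO coordinates are written as a comment near the top of the dump):

# on the master, with writes stopped (step 1)
mysqldump --single-transaction --master-data=2 --all-databases --routines > full.sql

# on the broken slave
mysql < full.sql
grep -m1 "CHANGE MASTER TO" full.sql    # shows the binlog file/position recorded in the dump
mysql -e "STOP SLAVE; CHANGE MASTER TO MASTER_HOST='master-host', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4; START SLAVE;"
mysql -e "SHOW SLAVE STATUS\G"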

It is observed that the entry is not duplicated in the master's binlogs. We tried the skip-error option, but then the record value actually differs from the master, giving stale data for reads.

Also, we host these servers on AWS with provisioned IOPS enabled.

Below are some MySQL variables we would like to bring to your notice:
innodb_flush_log_at_trx_commit = 2
innodb_read_io_threads = 8
innodb_write_io_threads = 8
innodb_doublewrite = OFF

We have a similar setup in our staging environment, except without provisioned IOPS. We tried reproducing the issue with multiple load tests/updates without any luck.

Any pointers in this regard will be appreciated.

Is it possible to use different configurations on the servers of an XtraDB cluster?

Latest Forum Posts - January 27, 2016 - 6:49am


Hi percona masters :-)


I'm currently updating the configuration of an XtraDB cluster, and I wonder if it is safe to use different values for innodb_buffer_pool_size on different servers of the same XtraDB cluster?

My guess is that it has little to no impact on the global performance, but I'd like confirmation on this point.

A little bit of context: I have a three-node XtraDB cluster, where each node is an identical machine with 32 GB RAM. Two servers out of the three are dedicated to serving client requests, and they also run an Apache Tomcat server. The third one is dedicated to creating hourly incremental encrypted backups with xtrabackup.

I'm considering increasing innodb_buffer_pool_size, as its current value (4 GB) is inadequate. I'd like to set it to 20 GB (about 60% of the RAM) on the first and second servers, and to something like 10 GB on the third server, as building backups is a resource-intensive task in itself.
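For what it's worth, a minimal sketch of the per-node [mysqld] settings being considered (note that in 5.6 innodb_buffer_pool_size is not dynamic, so each change needs a restart — in a cluster, roll it through one node at a time):

# node1 and node2 (client-facing), my.cnf
[mysqld]
innodb_buffer_pool_size = 20G

# node3 (backup node), my.cnf
[mysqld]
innodb_buffer_pool_size = 10G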

JB.

TokuFT file system space is low

Latest Forum Posts - January 26, 2016 - 6:34pm
I am getting an error while loading an SQL file:

ERROR 1021 (HY000) at line 94222: Disk full (IFT_OBUMESSAGE_DTL); waiting for someone to free some space...

Space Left : /dev/sda3 1.7T 1.5T 83G 95% /

In the error log:
Mon Sep 21 14:23:10 2015 TokuFT file system space is low
Mon Oct 12 13:08:21 2015 TokuFT file system space is really low and access is restricted
Mon Jan 18 20:21:54 2016 TokuFT file system space is low
Sat Jan 23 22:22:12 2016 TokuFT file system space is really low and access is restricted
Wed Jan 27 04:06:37 2016 TokuFT file system space is really low and access is restricted

There are 83 GB of space left; why am I still getting the error?
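A plausible explanation, assuming the warning is driven by TokuDB's filesystem reserve threshold: tokudb_fs_reserve_percent defaults to 5%, and 83 GB free on a 1.7 TB filesystem is just under 5%, which is enough to trigger the "really low" restriction. You can check the threshold with:

mysql> SHOW GLOBAL VARIABLES LIKE 'tokudb_fs_reserve_percent';

Freeing space (or moving the TokuDB data to a less full filesystem) is the safer fix; lowering the reserve only postpones the problem.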

Finding MySQL Table Size on Disk

Latest MySQL Performance Blog posts - January 26, 2016 - 2:16pm

So you want to know how much space a given MySQL table takes on disk. Looks trivial, right? Shouldn’t this information be readily available in the INFORMATION_SCHEMA.TABLES? Not so fast!

This simple question actually is quite complicated in MySQL. MySQL supports many storage engines (some of which don’t store data on disk at all) and these storage engines often each store data in different layouts. For example, there are three “basic” layouts that the InnoDB storage engine supports for MySQL 5.7, with multiple variations for row_formats and two types of available compression.

So let’s simplify the situation: instead of a general question, let’s ask how to find the table size on disk for an InnoDB table stored in its own tablespace (as the parameter innodb_file_per_table=1 provides).

Before we get to the answer, let me show you the table size graph that I get by running sysbench prepare (basically populating tables with multi-value inserts):

This graph shows the table size defined by data_length plus index_length, captured from INFORMATION_SCHEMA.TABLES. You would expect gradual table growth as data is inserted, rather than a flat table size followed by jumps (sometimes of 10 GB or more).
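For reference, the query behind such a graph is the obvious one (a sketch; the schema and table names match the sysbench example used later in this post):

mysql> SELECT (data_length + index_length) / 1024 / 1024 / 1024 AS size_gb
    ->   FROM INFORMATION_SCHEMA.TABLES
    ->  WHERE table_schema = 'sbinnodb' AND table_name = 'sbtest1';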

The graph does not match how data is changing on disk, where it is growing gradually (as expected):

-rw-r----- 1 mysql mysql 220293234688 Jan 25 17:03 sbtest1.ibd
-rw-r----- 1 mysql mysql 220310011904 Jan 25 17:03 sbtest1.ibd
-rw-r----- 1 mysql mysql 222499438592 Jan 25 17:07 sbtest1.ibd

As we see from this experiment, MySQL does not really maintain live data_length and index_length values, but rather refreshes them periodically – and rather irregularly. The later part of the graph is especially surprising, where we see a couple of data refreshes becoming more regular. This is different from the first part of the graph, which seems to be in line with statistics being updated when 10 percent of the rows are changed (see the manual).

What makes it especially confusing is that other values, such as table_rows, data_free or update_time, are updated in real time (even though I can't imagine why table-size-related values would be any more difficult to maintain in real time!).

Is there a way to get real-time data_length and index_length updates as we query information_schema? There is, but it is costly.

To get information_schema to provide accurate information in MySQL 5.7, you need to do two things: disable innodb_stats_persistent and enable innodb_stats_on_metadata – both of which come with significant side effects.

Disabling persistent statistics means InnoDB has to refresh the statistics each time the server starts, which is expensive and can produce volatile query plans between restarts. Enabling innodb_stats_on_metadata makes access to information_schema slower – much slower – as I wrote a few years ago.
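For completeness, the two toggles themselves are plain global settings (shown only to make the trade-off concrete – weigh the side effects above before enabling them on a production server):

mysql> SET GLOBAL innodb_stats_persistent = OFF;
mysql> SET GLOBAL innodb_stats_on_metadata = ON;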

Is there a better way? It turns out there is. You can look into the tablespaces information table using INNODB_SYS_TABLESPACES to see the actual file size. Unlike index_length and data_length, INNODB_SYS_TABLESPACES is updated in real time with no special configuration required:

mysql> select * from INFORMATION_SCHEMA.INNODB_SYS_TABLESPACES where name='sbinnodb/sbtest1'\G
*************************** 1. row ***************************
         SPACE: 42
          NAME: sbinnodb/sbtest1
          FLAG: 33
   FILE_FORMAT: Barracuda
    ROW_FORMAT: Dynamic
     PAGE_SIZE: 16384
 ZIP_PAGE_SIZE: 0
    SPACE_TYPE: Single
 FS_BLOCK_SIZE: 4096
     FILE_SIZE: 245937209344
ALLOCATED_SIZE: 245937266688
1 row in set (0.00 sec)

The great thing about using this table is that it also handles the new “InnoDB Page Compression” properly, showing the difference between file_size (the logical file size on disk) and allocated_size (the space allocated for this file, which can be significantly smaller):

mysql> select * from INFORMATION_SCHEMA.INNODB_SYS_TABLESPACES where name='sbinnodb/testcomp'\G
*************************** 1. row ***************************
         SPACE: 48
          NAME: sbinnodb/testcomp
          FLAG: 33
   FILE_FORMAT: Barracuda
    ROW_FORMAT: Dynamic
     PAGE_SIZE: 16384
 ZIP_PAGE_SIZE: 0
    SPACE_TYPE: Single
 FS_BLOCK_SIZE: 4096
     FILE_SIZE: 285212672
ALLOCATED_SIZE: 113004544
1 row in set (0.00 sec)

Finally, let’s look into how different InnoDB compression variants impact the information provided in information_schema.   

If you use the old InnoDB compression (InnoDB Table Compression), you will see the compressed data size shown in data_length and index_length as a result. For example, avg_row_length will be much lower than you would expect.

If you use the new InnoDB compression in MySQL 5.7 (InnoDB Page Compression), you will see the values corresponding to the file size, not the allocated size, shown in information_schema.

Conclusion
Answering the trivial question “How much space does this table take on disk?” is really not so simple in MySQL – look in the obvious place and you’re likely to get the wrong value. Look at INFORMATION_SCHEMA.INNODB_SYS_TABLESPACES to get the actual file size value for InnoDB tables.

EXPLAIN FORMAT=JSON has details for subqueries in HAVING, nested selects and subqueries that update values

Latest MySQL Performance Blog posts - January 25, 2016 - 2:18pm

Over several previous blog posts, we’ve already discussed what information the EXPLAIN FORMAT=JSON output provides for some subqueries. You can review those discussions here, here and here. EXPLAIN FORMAT=JSON shows many details that you can’t get with other commands. Let’s now finish this topic and discuss the output for the rest of the subquery types.

First, let’s look at the subquery in the HAVING clause, such as in the following example:

select count(emp_no), salary from salaries group by salary having salary > ALL (select avg(s) from (select dept_no, sum(salary) as s from salaries join dept_emp using (emp_no) group by dept_no) t )

This example prints the number of employees and their salaries, if their salary is greater than the average salary in their department. EXPLAIN FORMAT=JSON provides a lot of details on how this subquery is optimized:

mysql> explain format=json select count(emp_no), salary from salaries group by salary having salary > ALL (select avg(s) from (select dept_no, sum(salary) as s from salaries join dept_emp using (emp_no) group by dept_no) t)G *************************** 1. row *************************** EXPLAIN: { "query_block": { "select_id": 1, "cost_info": { "query_cost": "3073970.40" }, "grouping_operation": { "using_temporary_table": true, "using_filesort": true, "cost_info": { "sort_cost": "2557022.00" }, "table": { "table_name": "salaries", "access_type": "ALL", "rows_examined_per_scan": 2557022, "rows_produced_per_join": 2557022, "filtered": "100.00", "cost_info": { "read_cost": "5544.00", "eval_cost": "511404.40", "prefix_cost": "516948.40", "data_read_per_join": "39M" }, "used_columns": [ "emp_no", "salary", "from_date" ] }, "having_subqueries": [ { "dependent": false, "cacheable": true, "query_block": { "select_id": 2, "cost_info": { "query_cost": "771970.25" }, "table": { "table_name": "t", "access_type": "ALL", "rows_examined_per_scan": 3087841, "rows_produced_per_join": 3087841, "filtered": "100.00", "cost_info": { "read_cost": "154402.05", "eval_cost": "617568.20", "prefix_cost": "771970.25", "data_read_per_join": "94M" }, "used_columns": [ "dept_no", "s" ], "materialized_from_subquery": { "using_temporary_table": true, "dependent": false, "cacheable": true, "query_block": { "select_id": 3, "cost_info": { "query_cost": "1019140.27" }, "grouping_operation": { "using_filesort": false, "nested_loop": [ { "table": { "table_name": "dept_emp", "access_type": "index", "possible_keys": [ "PRIMARY", "emp_no", "dept_no" ], "key": "dept_no", "used_key_parts": [ "dept_no" ], "key_length": "4", "rows_examined_per_scan": 331570, "rows_produced_per_join": 331570, "filtered": "100.00", "using_index": true, "cost_info": { "read_cost": "737.00", "eval_cost": "66314.00", "prefix_cost": "67051.00", "data_read_per_join": "5M" }, "used_columns": [ "emp_no", "dept_no" ] } }, { "table": { "table_name": "salaries", "access_type": "ref", "possible_keys": [ "PRIMARY", "emp_no" ], "key": "PRIMARY", "used_key_parts": [ "emp_no" ], "key_length": "4", "ref": [ "employees.dept_emp.emp_no" ], "rows_examined_per_scan": 9, "rows_produced_per_join": 3087841, "filtered": "100.00", "cost_info": { "read_cost": "334520.92", "eval_cost": "617568.35", "prefix_cost": "1019140.27", "data_read_per_join": "47M" }, "used_columns": [ "emp_no", "salary", "from_date" ] } } ] } } } } } } ] } } } 1 row in set, 1 warning (0.00 sec) Note (Code 1003): /* select#1 */ select count(`employees`.`salaries`.`emp_no`) AS `count(emp_no)`,`employees`.`salaries`.`salary` AS `salary` from `employees`.`salaries` group by `employees`.`salaries`.`salary` having <not>((`employees`.`salaries`.`salary` <= <max>(/* select#2 */ select avg(`t`.`s`) from (/* select#3 */ select `employees`.`dept_emp`.`dept_no` AS `dept_no`,sum(`employees`.`salaries`.`salary`) AS `s` from `employees`.`salaries` join `employees`.`dept_emp` where (`employees`.`salaries`.`emp_no` = `employees`.`dept_emp`.`emp_no`) group by `employees`.`dept_emp`.`dept_no`) `t`)))

We see that the subquery in the HAVING clause is not dependent, but cacheable:

"having_subqueries": [ { "dependent": false, "cacheable": true,

It has its own query block:

"query_block": { "select_id": 2,

Which accesses table “t”:

"table": { "table_name": "t", "access_type": "ALL", "rows_examined_per_scan": 3087841, "rows_produced_per_join": 3087841, "filtered": "100.00", "cost_info": { "read_cost": "154402.05", "eval_cost": "617568.20", "prefix_cost": "771970.25", "data_read_per_join": "94M" }, "used_columns": [ "dept_no", "s" ],

Table “t” was also materialized from the subquery:

], "materialized_from_subquery": { "using_temporary_table": true, "dependent": false, "cacheable": true, "query_block": { "select_id": 3,

Another kind of subquery is one in the SELECT list. If we want to compare the salary of an employee with the average salary in the company, for example, we can use the query select emp_no, salary, (select avg(salary) from salaries) from salaries. Let’s examine the EXPLAIN output:

mysql> explain format=json select emp_no, salary, (select avg(salary) from salaries) from salariesG *************************** 1. row *************************** EXPLAIN: { "query_block": { "select_id": 1, "cost_info": { "query_cost": "516948.40" }, "table": { "table_name": "salaries", "access_type": "ALL", "rows_examined_per_scan": 2557022, "rows_produced_per_join": 2557022, "filtered": "100.00", "cost_info": { "read_cost": "5544.00", "eval_cost": "511404.40", "prefix_cost": "516948.40", "data_read_per_join": "39M" }, "used_columns": [ "emp_no", "salary" ] }, "select_list_subqueries": [ { "dependent": false, "cacheable": true, "query_block": { "select_id": 2, "cost_info": { "query_cost": "516948.40" }, "table": { "table_name": "salaries", "access_type": "ALL", "rows_examined_per_scan": 2557022, "rows_produced_per_join": 2557022, "filtered": "100.00", "cost_info": { "read_cost": "5544.00", "eval_cost": "511404.40", "prefix_cost": "516948.40", "data_read_per_join": "39M" }, "used_columns": [ "salary" ] } } } ] } } 1 row in set, 1 warning (0.00 sec) Note (Code 1003): /* select#1 */ select `employees`.`salaries`.`emp_no` AS `emp_no`,`employees`.`salaries`.`salary` AS `salary`,(/* select#2 */ select avg(`employees`.`salaries`.`salary`) from `employees`.`salaries`) AS `(select avg(salary) from salaries)` from `employees`.`salaries`

EXPLAIN FORMAT=JSON in this case shows that the subquery is part of the first query_block, not dependent and cacheable.

The last type of subquery I want to discuss is the subquery updating values. For example, I added a new column to the titles table from the standard employees database:

mysql> alter table titles add column full_title varchar(100); Query OK, 0 rows affected (24.42 sec) Records: 0 Duplicates: 0 Warnings: 0

Now I want full_title to contain both the department’s name and title, separated by a space. I can use UPDATE with the subquery to achieve this:

update titles set full_title=concat((select dept_name from departments join dept_emp using(dept_no) where dept_emp.emp_no=titles.emp_no and dept_emp.to_date='9999-01-01') ,' ', title) where to_date = '9999-01-01';

To find out how it is optimized, we can use EXPLAIN FORMAT=JSON:

mysql> explain format=json update titles set full_title=concat((select dept_name from departments join dept_emp using(dept_no) where dept_emp.emp_no=titles.emp_no and dept_emp.to_date='9999-01-01') ,' ', title) where to_date = '9999-01-01'G *************************** 1. row *************************** EXPLAIN: { "query_block": { "select_id": 1, "table": { "update": true, "table_name": "titles", "access_type": "index", "key": "PRIMARY", "used_key_parts": [ "emp_no", "title", "from_date" ], "key_length": "59", "rows_examined_per_scan": 442843, "filtered": "100.00", "using_temporary_table": "for update", "attached_condition": "(`employees`.`titles`.`to_date` = '9999-01-01')" }, "update_value_subqueries": [ { "dependent": true, "cacheable": false, "query_block": { "select_id": 2, "cost_info": { "query_cost": "1.35" }, "nested_loop": [ { "table": { "table_name": "dept_emp", "access_type": "ref", "possible_keys": [ "PRIMARY", "emp_no", "dept_no" ], "key": "PRIMARY", "used_key_parts": [ "emp_no" ], "key_length": "4", "ref": [ "employees.titles.emp_no" ], "rows_examined_per_scan": 1, "rows_produced_per_join": 0, "filtered": "10.00", "cost_info": { "read_cost": "1.00", "eval_cost": "0.02", "prefix_cost": "1.22", "data_read_per_join": "1" }, "used_columns": [ "emp_no", "dept_no", "to_date" ], "attached_condition": "(`employees`.`dept_emp`.`to_date` = '9999-01-01')" } }, { "table": { "table_name": "departments", "access_type": "eq_ref", "possible_keys": [ "PRIMARY" ], "key": "PRIMARY", "used_key_parts": [ "dept_no" ], "key_length": "4", "ref": [ "employees.dept_emp.dept_no" ], "rows_examined_per_scan": 1, "rows_produced_per_join": 0, "filtered": "100.00", "cost_info": { "read_cost": "0.11", "eval_cost": "0.02", "prefix_cost": "1.35", "data_read_per_join": "5" }, "used_columns": [ "dept_no", "dept_name" ] } } ] } } ] } } 1 row in set, 1 warning (0.00 sec) Note (Code 1276): Field or reference 'employees.titles.emp_no' of SELECT #2 was resolved in SELECT #1

We can see in this output that the subquery is dependent, not cacheable, and will be executed for each row that needs to be updated.

Conclusion: EXPLAIN FORMAT=JSON provides various information about all kinds of subqueries.

MySQL 5.5 bad query plan

Latest Forum Posts - January 25, 2016 - 11:58am
I am using MySQL 5.5 with InnoDB. I ran into an issue where one particular query suddenly started using a suboptimal query plan. Prior to this, the query was working fine with a response time of 1 second or so. But suddenly MySQL started using a different query plan with a different index, which was the least selective, resulting in a response time of 200+ seconds. Can somebody explain why this would happen? When I ran the same query with an index hint for the correct index, all the subsequent runs of the query started using the new query plan.
Can somebody please provide some explanation of the root cause of this issue? One possibility I can think of is that the table statistics were out of date, but I don't think the underlying table data changed so drastically that it would completely change the query plan and use a new index.
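The usual first checks for this class of problem (the table and index names below are hypothetical) are to refresh the statistics and to compare the plan with and without a hint:

mysql> ANALYZE TABLE orders;
mysql> EXPLAIN SELECT * FROM orders WHERE customer_id = 42 AND order_date > '2016-01-01'\G
mysql> EXPLAIN SELECT * FROM orders FORCE INDEX (idx_customer_date) WHERE customer_id = 42 AND order_date > '2016-01-01'\G

In 5.5, InnoDB recalculates statistics from a handful of random index dives (for example on table open or after significant row changes), so estimates – and therefore plans – can flip; ANALYZE TABLE forces a refresh, and FORCE INDEX pins the plan while you investigate.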


