

Feed aggregator

Fresh installation on Vagrant fails

Latest Forum Posts - March 29, 2015 - 11:49am
Hi, I did a clean installation on Ubuntu 64-bit via Vagrant, and every time I start it, it fails. I'm not a pro, so I would appreciate some noob-friendly guidance. I followed the guide here: https://www.digitalocean.com/communi...-replace-mysql
vagrant@precise64:/$ sudo service mysql stop
* Stopping MySQL (Percona Server) mysqld [ OK ]
vagrant@precise64:/$ sudo service mysql start
* Starting MySQL (Percona Server) database server mysqld [fail]
vagrant@precise64:/$
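The "[fail]" line alone doesn't say why the server won't start; the reason lands in the server error log. A minimal sketch for finding it (the paths below are common Ubuntu defaults and are assumptions; adjust to the log-error setting in your my.cnf, and you may need sudo to read them):

```shell
# Look for the most recent startup error in the usual log locations.
checked=0
for f in /var/log/mysql/error.log /var/log/mysqld.log; do
    checked=$((checked + 1))
    if [ -r "$f" ]; then
        tail -n 30 "$f"          # last lines usually contain the [ERROR] reason
    else
        echo "no readable log at $f"
    fi
done
```

Whatever [ERROR] lines appear there (bad config option, leftover data files from the old MySQL install, permissions) will narrow the problem down considerably.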


OPTIMIZE, CHECK or REPAIR TABLE crashes tables and frequently server

Latest Forum Posts - March 28, 2015 - 5:50am
We are using Percona Server 5.6.23-72.1 on CentOS 7.0.1406 64-bit (with the TokuDB plugin) and migrated the tablespaces from MySQL 5.5.27.


( 1 )

We frequently see tables marked as crashed after OPTIMIZE, CHECK, or REPAIR commands (MyISAM).

Log entry:

2015-03-28 05:03:28 123161 [ERROR] Got an error from thread_id=7373, /mnt/workspace/percona-server-5.6-redhat-binary/label_exp/centos7-64/rpmbuild/BUILD/percona-server-5.6.23-72.1/storage/myisam/ha_myisam.cc:910
2015-03-28 05:03:29 123161 [ERROR] MySQL thread id 7373, OS thread handle 0x7f4f663bd700, query id 653385 192.168.0.*. * Checking table *** multiple tables ***
2015-03-28 05:03:30 123161 [ERROR] /usr/sbin/mysqld: Table '***' is marked as crashed and should be repaired

Fixing the table with REPAIR TABLE works in most cases; when REPAIR TABLE fails, we use myisamchk, which works well.


( 2 )

In some cases we don't see the behaviour in (1), but instead a server crash (only when CHECK TABLE or REPAIR TABLE is run on multiple tables). Error log:

05:06:41 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona Server better by reporting any
bugs at http://bugs.percona.com/

key_buffer_size=8589934592
read_buffer_size=2097152
max_used_connections=261
max_threads=5002
thread_count=135
connection_count=135
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 2641195005 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
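The crash report's own memory formula can be checked by hand. A quick sketch of that arithmetic (sort_buffer_size is not printed in the log above, so the 512M value below is an assumption; the result is therefore close to, but not identical to, the logged 2641195005 K). The point it makes: with max_threads=5002, the per-thread buffers dominate and the theoretical worst case is around 2.5 TB, far beyond any realistic amount of RAM:

```shell
# Variables taken from the crash report above.
key_buffer_size=8589934592
read_buffer_size=2097152
sort_buffer_size=536870912   # assumed 512M; this one is NOT shown in the log
max_threads=5002

# Same formula mysqld prints: key_buffer + (read_buffer + sort_buffer) * max_threads
kib=$(( (key_buffer_size + (read_buffer_size + sort_buffer_size) * max_threads) / 1024 ))
echo "$kib K bytes"
```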

Thread pointer: 0x7f4edb468000
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 7f4f23dcad40 thread_stack 0x40000
/usr/sbin/mysqld(my_print_stacktrace+0x3b)[0x8d9b8b]
/usr/sbin/mysqld(handle_fatal_signal+0x471)[0x658ab1]
/lib64/libpthread.so.0(+0xf130)[0x7f524e348130]
/usr/sbin/mysqld(_ZNK7handler22ha_statistic_incrementEM17system_status_vary+0xc)[0x59a6ac]
/usr/sbin/mysqld(_ZN7handler16ha_external_lockEP3THDi+0x33)[0x5a02b3]
/usr/sbin/mysqld(_Z17mysql_lock_tablesP3THDPP5TABLEjj+0x75a)[0x7d5cea]
/usr/sbin/mysqld(_Z11lock_tablesP3THDP10TABLE_LISTjj+0x530)[0x690180]
/usr/sbin/mysqld(_Z20open_and_lock_tablesP3THDP10TABLE_LISTbjP19Prelocking_strategy+0xa2)[0x696e52]
/usr/sbin/mysqld[0x812853]
/usr/sbin/mysqld(_ZN20Sql_cmd_repair_table7executeEP3THD+0xc8)[0x8140b8]
/usr/sbin/mysqld(_Z21mysql_execute_commandP3THD+0x16f0)[0x6dbd40]
/usr/sbin/mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x5e8)[0x6e1618]
/usr/sbin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0xfc8)[0x6e2d78]
/usr/sbin/mysqld(_Z24do_handle_one_connectionP3THD+0x172)[0x6afdf2]
/usr/sbin/mysqld(handle_one_connection+0x40)[0x6afef0]
/usr/sbin/mysqld(pfs_spawn_thread+0x143)[0x9119b3]
/lib64/libpthread.so.0(+0x7df3)[0x7f524e340df3]
/lib64/libc.so.6(clone+0x6d)[0x7f524c9b91ad]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (7f4ede41d010): is an invalid pointer
Connection ID (thread ID): 7367
Status: NOT_KILLED

You may download the Percona Server operations manual by visiting
http://www.percona.com/software/percona-server/. You may find information
in the manual which will help you identify the cause of the crash.
150328 06:06:42 mysqld_safe Transparent huge pages are already set to: never.
150328 06:06:42 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
2015-03-28 06:06:43 0 [Warning] The syntax 'pre-4.1 password hash' is deprecated and will be removed in a future release. Please use post-4.1 password hash instead.
2015-03-28 06:06:43 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2015-03-28 06:06:44 124453 [Note] Plugin 'FEDERATED' is disabled.
2015-03-28 06:06:44 124453 [Note] InnoDB: Using atomics to ref count buffer pool pages
2015-03-28 06:06:44 124453 [Note] InnoDB: The InnoDB memory heap is disabled
2015-03-28 06:06:44 124453 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2015-03-28 06:06:44 124453 [Note] InnoDB: Memory barrier is not used
2015-03-28 06:06:44 124453 [Note] InnoDB: Compressed tables use zlib 1.2.3
2015-03-28 06:06:44 124453 [Note] InnoDB: Using Linux native AIO
2015-03-28 06:06:44 124453 [Note] InnoDB: Using CPU crc32 instructions
2015-03-28 06:06:44 124453 [Note] InnoDB: Initializing buffer pool, size = 128.0M
2015-03-28 06:06:44 124453 [Note] InnoDB: Completed initialization of buffer pool
2015-03-28 06:06:44 124453 [Note] InnoDB: Highest supported file format is Barracuda.
2015-03-28 06:06:44 124453 [Note] InnoDB: The log sequence numbers 1626265 and 1626265 in ibdata files do not match the log sequence number 1626285 in the ib_logfiles!
2015-03-28 06:06:44 124453 [Note] InnoDB: Database was not shutdown normally!
2015-03-28 06:06:44 124453 [Note] InnoDB: Starting crash recovery.
2015-03-28 06:06:44 124453 [Note] InnoDB: Reading tablespace information from the .ibd files...
2015-03-28 06:06:51 124453 [Note] InnoDB: Restoring possible half-written data pages
2015-03-28 06:06:51 124453 [Note] InnoDB: from the doublewrite buffer...
2015-03-28 06:06:51 124453 [Note] InnoDB: 128 rollback segment(s) are active.
2015-03-28 06:06:51 124453 [Note] InnoDB: Waiting for purge to start
2015-03-28 06:06:51 124453 [Note] InnoDB: Percona XtraDB (http://www.percona.com) 5.6.23-72.1 started; log sequence number 1626285
Sat Mar 28 06:06:51 2015 TokuFT recovery starting in env /var/lib/mysql/
Sat Mar 28 06:06:51 2015 TokuFT recovery scanning backward from 140611
Sat Mar 28 06:06:51 2015 TokuFT recovery bw_end_checkpoint at 140611 timestamp 1427519174964095 xid 140607 (bw_newer)
Sat Mar 28 06:06:51 2015 TokuFT recovery bw_begin_checkpoint at 140607 timestamp 1427519174964044 (bw_between)
Sat Mar 28 06:06:51 2015 TokuFT recovery turning around at begin checkpoint 140607 time 51
Sat Mar 28 06:06:51 2015 TokuFT recovery starts scanning forward to 140611 from 140607 left 4 (fw_between)
Sat Mar 28 06:06:51 2015 TokuFT recovery closing 2 dictionaries
Sat Mar 28 06:06:51 2015 TokuFT recovery making a checkpoint
Sat Mar 28 06:06:51 2015 TokuFT recovery done
2015-03-28 06:06:51 124453 [Note] Recovering after a crash using mysql-bin
2015-03-28 06:06:52 124453 [Note] Starting crash recovery...
2015-03-28 06:06:52 124453 [Note] Crash recovery finished.
2015-03-28 06:06:52 124453 [Note] RSA private key file not found: /var/lib/mysql//private_key.pem. Some authentication plugins will not work.
2015-03-28 06:06:52 124453 [Note] RSA public key file not found: /var/lib/mysql//public_key.pem. Some authentication plugins will not work.
2015-03-28 06:06:52 124453 [Note] Server hostname (bind-address): '0.0.0.0'; port: 3306
2015-03-28 06:06:52 124453 [Note] - '0.0.0.0' resolves to '0.0.0.0';
2015-03-28 06:06:52 124453 [Note] Server socket created on IP: '0.0.0.0'.
2015-03-28 06:06:52 124453 [Note] Event Scheduler: Loaded 0 events
2015-03-28 06:06:52 124453 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.6.23-72.1-log' socket: '/var/lib/mysql/mysql.sock' port: 3306 Percona Server (GPL), Release 72.1, Revision 0503478



Any suggestions?

Thanks,

Ralf

‘Woz on your mind?’ Share your questions for Steve Wozniak during his Percona Live keynote!

Latest MySQL Performance Blog posts - March 27, 2015 - 2:34pm

Here’s your chance to get on stage with Woz! Sort of. Apple co-founder and Silicon Valley icon and philanthropist Steve Wozniak will participate in a moderated Q&A on creativity and innovation April 14 during the Percona Live MySQL Conference and Expo in Santa Clara, California.

Woz once said that he never intended to change the world. That was the other Steve, Steve Jobs.

“I didn’t want to start this company,” Woz told the Seattle Times of Apple’s beginnings in a 2006 interview. “My goal wasn’t to make a ton of money. It was to build good computers. I only started the company when I realized I could be an engineer forever.”

What would you ask Woz if given the opportunity?

“Woz, what first sparked your interest in engineering?”
“Hey Woz, how did you come up with the design for the first Apple?”
“Woz, what do you see as the next big thing in personal computers?”
“Hi Woz, what’s the deal with your giant vacuum tube watch?”

Now it’s your turn! Ask a question in the comments below and be sure to include your Twitter handle – or your Facebook page or LinkedIn profile. If we use your question, then your profile and question will be displayed on the giant screen behind Woz on stage as it’s being asked during his big keynote! How cool is that?

Want to be there in person? See Woz speak for just $5! That’s $70 off the regular admission price! Just use the promo code “KEY” at registration under the “Expo Hall and Keynote Pass” selection. Following Woz’s keynote, be sure to stop by the Percona booth, say “hello, Tom,” and I’ll give you a limited-edition Percona t-shirt.

In the meantime, help spread the word! Please share this tweet:

“Woz on your mind?” Tweet @Percona your questions for Apple’s Steve Wozniak who speaks April 14 at #PerconaLive! http://ow.ly/KTmES

Do that, then follow @Percona and I’ll send a DM for your address and will ship a t-shirt right to your door. See you at the conference!

The post ‘Woz on your mind?’ Share your questions for Steve Wozniak during his Percona Live keynote! appeared first on MySQL Performance Blog.

XtraBackup dependencies for CentOS 6.2 and 5.5

Latest Forum Posts - March 27, 2015 - 11:31am
I want to install XtraBackup 2.2.9 as the backup tool for my environment, which is a mix of CentOS 6.2 and 5.5.
I was able to get it working on 6.2 without an issue, but it seems Perl 5.10 is the minimum required. Can somebody give me the minimum dependencies for CentOS 5.5 and 6.2? I know I need DBI, DBD, IO-Socket-SSL, and Time-HiRes. What else is needed, and if I can't install 2.2.9 on CentOS 5.5, what version can I use? I'm not in a position to upgrade all the 5.5 machines to 6.2 either.

Innobackupex - MySQL server has gone away when SET SESSION lock_wait_timeout=31536000

Latest Forum Posts - March 27, 2015 - 9:39am
Hi,

I have a MySQL 5.5 server with > 300GB of data.
I tried to create a slave from this master server.

When I run innobackupex, it dies at SET SESSION lock_wait_timeout=31536000.

This is the error:

DBD::mysql::db do failed: MySQL server has gone away at /usr/bin/innobackupex line 3045.
innobackupex: got a fatal error with the following stacktrace: at /usr/bin/innobackupex line 3048
main::mysql_query('HASH(0x19b4fed0)', 'SET SESSION lock_wait_timeout=31536000') called at /usr/bin/innobackupex line 3456
main::mysql_lock_tables('HASH(0x19b4fed0)') called at /usr/bin/innobackupex line 1991
main::backup() called at /usr/bin/innobackupex line 1601
innobackupex: Error: Error executing 'SET SESSION lock_wait_timeout=31536000': DBD::mysql::db do failed: MySQL server has gone away at /usr/bin/innobackupex line 3045.
150322 21:25:24 innobackupex: Waiting for ibbackup (pid=30331) to finish
I tried repeating it twice, but got the same errors.

I read the tutorial at http://www.percona.com/doc/percona-x...ved_ftwrl.html
and set the options as described there, but without success.

Sorry, my English is very poor.

Thanks!
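For context on the statement that failed (a sketch, not a diagnosis): the value innobackupex sets is simply one year in seconds, so the statement itself cannot be timing out; "MySQL server has gone away" at this point usually means the connection to the server was dropped or the server restarted mid-backup, which makes the server error log and the wait_timeout / max_allowed_packet settings worth checking first:

```shell
# 31536000 seconds is one year; innobackupex sets lock_wait_timeout this
# high so its own lock requests effectively never time out.
seconds=31536000
days=$(( seconds / 86400 ))
echo "$days days"   # 365 days
```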

xtrabackup datadir must be empty - options?

Latest Forum Posts - March 27, 2015 - 6:55am
In the Documentation for XtraBackup it says that

The datadir must be empty; Percona XtraBackup innobackupex --copy-back option will not copy
over existing files.

Is there any way round this?

My datadir is /var/lib/mysql. I created my backup in /var/lib/mysql/backups/ as I don't have enough space anywhere else on the server. I removed everything from /var/lib/mysql except the `backups` directory, but it still won't copy back, giving this error:

Error: Original data directory '/var/lib/mysql' is not empty! at /usr/bin/innobackupex line 2194.
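One way around it that needs no extra space: a mv within the same filesystem is just a rename, so the backup can be moved out of the datadir before running --copy-back. A minimal sketch with throwaway paths (the real innobackupex and chown steps are left commented; timestamp directory name is hypothetical):

```shell
# Demonstrate the "datadir must be empty" requirement with temp dirs.
datadir=$(mktemp -d)
backup_home=$(mktemp -d)
mkdir -p "$datadir/backups/2015-03-27_06-00-00"

# Move the backup OUT of the datadir; same-filesystem mv is instant.
mv "$datadir/backups" "$backup_home/"

remaining=$(ls -A "$datadir")
[ -z "$remaining" ] && echo "datadir is empty; --copy-back can proceed"

# On the real server the next steps would be:
# innobackupex --copy-back "$backup_home/backups/2015-03-27_06-00-00"
# chown -R mysql:mysql /var/lib/mysql
```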

FoundationDB is acquired by Apple: My thoughts

Latest MySQL Performance Blog posts - March 27, 2015 - 6:00am

TechCrunch reported yesterday that Apple has acquired FoundationDB. And while I didn’t see any mention of this news on the FoundationDB website, they do have an announcement saying: “We have made the decision to evolve our company mission and, as of today, we will no longer offer downloads.”

This is an unfortunate development – I have been watching FoundationDB technology for years and was always impressed by its performance and features. I was particularly impressed by their demo at last year’s Percona Live MySQL Conference and Expo. Using their Intel NUC-based cluster, I remember Ori Herrnstadt showing me how FoundationDB handles single-node failure as well as recovery from complete power-down – very quickly and seamlessly. We have borrowed a lot of ideas from this setup for our Percona XtraDB Cluster demos.

I think it was a great design to build a distributed, shared-nothing transaction aware key value store, and then have an SQL Layer built on top of it. I did not have a chance to test it hands-on, though. Such a test would have revealed the capabilities of the SQL optimizer – the biggest challenge for distributed relational database systems.

My hope was to see, over time, this technology becoming available as open source (fully or partially), which would have dramatically increased adoption by the masses. It will be interesting to see Apple’s long-term plans for this technology.

In any case it looks like FoundationDB software is off limits. If you are an existing FoundationDB customer looking for alternatives, we here at Percona would be happy to help evaluate options and develop a migration strategy if necessary.

The post FoundationDB is acquired by Apple: My thoughts appeared first on MySQL Performance Blog.

Freshly installed mysql 5.5 unresponsive

Latest Forum Posts - March 27, 2015 - 12:18am
Hi Guys,

I installed MySQL 5.5 via apt-get and created the following custom config, which I placed in /etc/mysql/conf.d/custom.cnf:

[mysqld]
#bind-address = 0.0.0.0
innodb_buffer_pool_size = 10G
query_cache_limit = 80M
query_cache_size = 64M
tmp_table_size = 256M
max_heap_table_size = 256M
table_open_cache = 2000
join_buffer_size = 64M
max_allowed_packet = 512M
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time = 2
#log-queries-not-using-indexes
max_connections = 250

An Apache web server in the same network connects to this server (roughly 70 max connections), and a couple of times the database has become unresponsive.
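One thing worth checking against a config like this (a rough back-of-envelope sketch, not a diagnosis): join_buffer_size is allocated per join, potentially several times per query, so at 64M it can stack up across max_connections on top of the 10G buffer pool:

```shell
# Rough worst case for the settings above, in MiB.
pool_mib=$(( 10 * 1024 ))   # innodb_buffer_pool_size = 10G
join_mib=64                 # join_buffer_size, allocated PER JOIN
max_connections=250

# One join buffer per connection is already a conservative lower bound.
worst_mib=$(( pool_mib + max_connections * join_mib ))
echo "$worst_mib MiB"
```

26240 MiB is close to the machine's 32GB before the OS, filesystem cache, and other per-session buffers (sort, read, tmp tables) get anything, so memory pressure is one plausible cause of the unresponsiveness.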

Now my fear is that something is not right with my config above.
The server has an SSD, 32GB of RAM, and 4 cores. All tables are InnoDB.

Any input would be greatly appreciated.

Thanks in advance!

False negatives from pt-table-checksum

Latest Forum Posts - March 26, 2015 - 1:59pm
I have come across a problem in which pt-table-checksum is reporting false negatives, i.e. failing to report differences in even very small tables.

The situation:

Customer has an existing MySQL cluster consisting of two MySQL 5.5.32 master-slave pairs, with master-master replication between the two masters. If we call the pairs A,B and C,D, then A and C are the masters and their replication topology is: B<-A<=>C->D
There is also a new five-node XtraDB 5.6.22-72 cluster, with a single asynchronous replication slave for backups. Node 1 of the cluster, for now, replicates asynchronously from node C in the above topology, and will do so until migration is accomplished. To hopefully ensure compatibility with the cluster, replication on A-B-C-D has been set to ROW, since the cluster's internal replication is and must remain ROW. Due to the sheer volume of traffic the customer is processing, replication between A and C is by now routinely falling behind by as much as an hour during the day, with obvious impacts on the cleanliness of the data.

To validate that data on the cluster matches that on the production servers prior to attempting migration to the new cluster, the customer is running pt-table-checksum on node C. pt-table-checksum is of course setting SESSION BINLOG_FORMAT to STATEMENT; equally obviously, this is not propagating past node A, so checksums reported from B cannot be trusted. That's OK. We don't actually care about checksums from B. What we care about is that the data on C, which has been declared the authoritative copy of the data, and the cluster match. And that should be fine for pt-table-checksum, because there is only a single replication link between node C and cluster node 1, so checksums between C and cluster node 1 should be accurate.

Unfortunately, they are not. pt-table-checksum is reporting tables as having zero diffs and matching checksums between C and cluster node 1, when we can look at the two tables side by side and immediately see at a glance that they are different. This is alarming, because if pt-table-checksum is lying to us and failing to report diffs that we know exist, we cannot trust what it tells us about any of the other data. And we cannot manually compare almost a terabyte of DB data, and the production environment cannot be taken offline to check all of the data. (Nor can it be taken offline to update it.)

Can anyone shed any light on why pt-table-checksum, in this configuration, is throwing false negatives?

xtrabackup does not have access rights

Latest Forum Posts - March 26, 2015 - 9:43am
I've just installed percona-xtrabackup, and so for my first test I created a directory (testdata) in my home directory:
`mkdir testdata`

changed permissions on it
`sudo chmod 777 testdata`

and ran innobackupex
`innobackupex --user=root --password=xxxx /home/user/testdata/`

But I then get:
`2015-03-26 16:37:54 7fee54ebd740 InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.`


Is this referring to the `testdata` directory, my MySQL '/var/lib/mysql' directory, or somewhere else?
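One way to narrow this down (a sketch, not a definitive answer): OS error numbers in InnoDB messages are plain Unix errno values, and 13 is EACCES. Since innobackupex runs as the Unix user who invoked it and has to read the server's datadir, the complaint is usually about /var/lib/mysql rather than the target directory, and running the backup under sudo is the usual first thing to try:

```shell
# Decode the OS error number from the InnoDB message (13 = EACCES).
msg=$(python3 -c 'import os; print(os.strerror(13))')
echo "$msg"   # Permission denied

# On the real server, compare who may read/write each directory:
# ls -ld /home/user/testdata /var/lib/mysql
# sudo innobackupex --user=root --password=xxxx /home/user/testdata/
```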



Trying to install innobackupex

Latest Forum Posts - March 26, 2015 - 5:07am
Hi, I tried installing Percona Toolkit using:

dpkg -i percona-toolkit.deb
apt-get install --fix-missing -f

Eventually I got that to work, but looking in /usr/bin there is no sign of innobackupex.
I can see all the pt-* files in there, though.

I tried reinstalling using:

apt-key adv --keyserver keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A

which gives me:

. . . gpgkeys: key 1C4CBDCDCD2EFD2A not found on keyserver
gpg: no valid OpenPGP data found.
gpg: Total number processed: 0

Then I try:

apt-get install xtrabackup

which gives me:

Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package xtrabackup

Am I doing something wrong?

I've also tried innobackupex, innobackupex_55, percona-toolkit, and they all have the same result.
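Part of the confusion is packaging: percona-toolkit only ships the pt-* tools, while innobackupex is part of the separate percona-xtrabackup package (in the 2.x repositories the bare "xtrabackup" package name is gone, which explains the "Unable to locate package" error). A quick sketch:

```shell
# With the Percona apt repository configured, the install step would be:
# sudo apt-get install percona-xtrabackup

# Check whether the binary is on PATH at all:
found=$(command -v innobackupex || echo "innobackupex not installed")
echo "$found"
```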


binlog positions "off" a bit after restore

Latest Forum Posts - March 25, 2015 - 8:52am
I have a few shards in an application that are approaching 600-800G on-disk, but aren't heavily used. I'm spinning up new off-site backup & reporting copies of all shards and let four streaming xtrabackup runs go last night. I have a script that I use frequently to clone out new slaves. Two of the shards, with active customer bases, started right up as normal (150-200G). The two shards with larger data sizes appear to have slightly wrong (behind) master coordinates in xtrabackup_slave_info.

So, when I do a streaming backup from an existing slave, at the end I get:
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.002238', MASTER_LOG_POS=263779110

overnight, the master moved through to
| mysql-bin.002239 | 30261034 |

So it passes a quick sanity check. I built a CHANGE MASTER statement with the correct IP/user/etc., fired it off, and started replication. Duplicate key error. Oops!

I verified from the error log that the expected slave statement was issued:

Slave SQL thread initialized, starting replication in log 'mysql-bin.002238' at position 263779110, relay log '/mysql/binlog/mysqld-relay-bin.000001' position: 4.

I get:

Last_SQL_Error: Error 'Duplicate entry '68407820' for key 'PRIMARY'' on query.... This is against a Rails sessions table.
mysql> select id, created_at from sessions where id = '68407821';

+----------+---------------------+
| id | created_at |
+----------+---------------------+
| 68407821 | 2015-03-24 03:54:15 |
+----------+---------------------+

Using mysqlbinlog, I found that insert in the original binlogs. It appears to be halfway through a transaction for thread 841292:


#150325 2:37:43 server id 130161118 end_log_pos 263780113 CRC32 0xa869770f Query thread_id=841292 exec_time=0 error_code=0
SET TIMESTAMP=1427265463/*!*/;
INSERT INTO `sessions` (--- redacted --- )
/*!*/;

Thus, the xtrabackup_slave_info position should have been *at least* the next one:

#150325 2:37:43 server id 130161118 end_log_pos 263780693 CRC32 0x5b59698f Query thread_id=841292 exec_time=0 error_code=0

but as noted this appears to be splitting a transaction. So, the entire transaction for thread 841292 was committed to disk (I verified on the restore that the data is correct for the entire transaction) AND data from the next few transactions is present.

Info:

Source Slave:
:~$ dpkg -l|grep percona
ii libperconaserverclient18.1 5.6.19-67.0-618.wheezy amd64 Percona Server database client library
ii libperconaserverclient18.1-dev 5.6.19-67.0-618.wheezy amd64 Percona Server database development files
ii percona-server-client-5.6 5.6.19-67.0-618.wheezy amd64 Percona Server database client binaries
ii percona-server-common-5.6 5.6.19-67.0-618.wheezy amd64 Percona Server database common files (e.g. /etc/mysql/my.cnf)
ii percona-server-server 5.6.19-67.0-618.wheezy amd64 Percona Server database server
ii percona-server-server-5.6 5.6.19-67.0-618.wheezy amd64 Percona Server database server binaries
ii percona-xtrabackup 2.2.9-5067-1.wheezy amd64 Open source backup tool for InnoDB and XtraDB

Destination:
ii libperconaserverclient18.1 5.6.22-71.0-726.wheezy amd64 Percona Server database client library
ii libperconaserverclient18.1-dev 5.6.22-71.0-726.wheezy amd64 Percona Server database development files
ii percona-server-client-5.6 5.6.22-71.0-726.wheezy amd64 Percona Server database client binaries
ii percona-server-common-5.6 5.6.22-71.0-726.wheezy amd64 Percona Server database common files (e.g. /etc/mysql/my.cnf)
ii percona-server-server 5.6.22-71.0-726.wheezy amd64 Percona Server database server
ii percona-server-server-5.6 5.6.22-71.0-726.wheezy amd64 Percona Server database server binaries
ii percona-xtrabackup 2.2.9-5067-1.wheezy amd64 Open source backup tool for InnoDB and XtraDB


I feel like I'm missing something obvious here, like a failed roll-back or something. If it hadn't happened on 2/4 of the servers overnight, I probably wouldn't bother posting. What have I done wrong/misunderstood?

Thanks!

Wes

Yelp IT! A talk with 3 Yelp MySQL DBAs on Percona Live & more

Latest MySQL Performance Blog posts - March 25, 2015 - 3:00am

Founded in 2004 to help people find great local businesses, Yelp has some 135 million monthly unique visitors. With those traffic volumes Yelp’s 300+ engineers are constantly working to keep things moving smoothly – and when you move that fast you learn many things.

Fortunately for the global MySQL community, three Yelp DBAs will be sharing what they’ve learned at the annual Percona Live MySQL Conference and Expo this April 13-16 in Santa Clara, California.

Say “hello” to Susanne Lehmann, Jenni Snyder and Josh Snyder! I chatted with them over email about their presentations, on how MySQL is used at Yelp, and about the shortage of women in MySQL.

***

Tom: Jenni, you and Josh will be co-presenting “Next generation monitoring: moving beyond Nagios” on April 14.

You mentioned that Yelp’s databases scale dynamically, and so does your monitoring of those databases. And to minimize human intervention, you’ve created a Puppet and Sensu monitoring ensemble… because “if it’s not monitored, it’s not in production.” Talk to me more about Yelp’s philosophy of “opt-out monitoring.” What does that entail? How does that help Yelp?

Jenni: Before we moved to Sensu, our Nagios dashboards were a sea of red, muted, acknowledged, or disabled service checks. In fact, we even had a cluster check to make sure that we never accidentally put a host into use that was muted or marked for downtime. It was possible for a well-meaning operator to acknowledge checks on a host and forget about it, and I certainly perpetrated a couple of instances of disks filling up after acknowledging a 3am “warning” page that I’d rather forget about. With Sensu, hosts and services come out of the downtime/acknowledgement state automatically after a number of days, ensuring that we’re kept honest and stay on top of issues that need to be addressed.

Also, monitoring is deployed with a node, not separate monitoring configuration. Outside of a grace period we employ when a host is first provisioned or rebooted, if a host is up, it’s being monitored and alerting. Also, alerting doesn’t always mean paging. We also use IRC and file tickets directly into our tracking system when we don’t need eyes on a problem right away.

Tom: Susanne, in your presentation, titled “insert cassandra into prod where use_case=?;” you’ll discuss the situations you’ve encountered where MySQL just wasn’t the right tool for the job.

What led up to that discovery and how did you come up with finding the right tools (and what were they) to run alongside and support MySQL?

Susanne: Our main force behind exploring other datastores alongside MySQL was that Yelp is growing outside the US market a lot. Therefore we wanted the data to be nearer to the customer and needed multi-master writes.

Also, we saw use cases where our application data was organized very key-value like and not relational, which made them a better fit for a NoSQL solution.

We decided to use Cassandra as a datastore and I plan to go more into detail why during my talk. Now we offer developers more choices on how to store our application data, but we also believe in the “right tool for the job” philosophy and might add more solutions to the mix in the future.

Tom: Jenni, you’ll also be presenting “Schema changes multiple times a day? OK!” I know that you and your fellow MySQL DBAs are always improving and also finding better ways of supporting new and existing features for Yelp users like me. Delivering on such a scale must entail some unique processes and tools. Does this involve a particular mindset among your fellow DBAs? Also, what are some of those key tools – and processes and how are they used?

Jenni: Yelp prizes the productivity of our developers and our ability to iterate and develop new features quickly. In order to do that, we need to be able to not only create new database tables, but also modify existing ones, many of which are larger than MySQL can alter without causing considerable replication delay. The first step is to foster a culture of automated testing, monitoring, code reviews, and partnership between developers and DBAs to ensure that we can quickly & safely roll out schema changes. In my talk, I’ll be describing tools that we’ve talked about before, like our Gross Query Checker, as well as the way the DBA team works with developers while still getting the rest of our work done. The second, easy part is using a tool like pt-online-schema-change to run schema changes online without causing replication delay or degrading performance.

Tom: Josh, you’ll also be speaking on “Bootstrapping databases in a single command: elastic provisioning for the win.” What is “elastic provisioning” and how are you using it for Yelp’s tooling?

Josh: When I say that we use elastic provisioning, I mean that we can reliably and consistently build a database server from scratch, with minimal human involvement. The goal is to encompass every aspect of the provisioning task, including configuration, monitoring, and even load balancing, in a single thoroughly automated process. With this process in place, we’ve found ourselves able to quickly allocate and reallocate resources, both in our datacenters and in the cloud. Our tools for implementing the above goals give us greater confidence in our infrastructure, while avoiding single-points of failure and achieving the maximum possible level of performance. We had a lot of fun building this system, and we think that many of the components involved are relevant to others in the field.

Tom: Susanne and Jenni, last year at Percona Live there was a BoF session titled “MySQL and Women (or where are all the women?).” The idea was to discuss why there are “just not enough women working on the technology side of tech.” In a nutshell, the conversation focused on why there are not more women in MySQL and why so relatively few attend MySQL conferences like Percona Live.

The relative scarcity of women in technical roles was also the subject of an article published in the August 2014 issue of Forbes, citing a recent industry report.

Why, in your (respective) views, do you (or don’t) think that there are so few women in MySQL? And how can this trend be reversed?

Susanne: I think there are few women in MySQL and the reasons are manifold. Of course there is the pipeline problem. Then there is the problem, widely discussed right now, that women who are entering STEM jobs are less likely staying in there. These are reasons not specific for MySQL jobs, but rather for STEM in general. What is more specific for database/MySQL jobs is, in my opinion, that often times DBAs need to be on call, they need to stay in the office if things go sideways. Database problems tend often to be problems that can’t wait till the next morning. That makes it more demanding when you have a family for example (which is true for men as well of course, but seems still to be more of a problem for women).

As for how to reverse the trend, I liked this Guardian article because it covers a lot of important points. There is no easy solution.

I like that more industry leaders and technology companies are discussing what they can do to improve diversity these days. In general, it really helps to have a great professional (female) support system. At Yelp, we have AWE, the Awesome Women in Engineering group, in which Jenni and I are both active. We participate in welcoming women to Yelp engineering, speaking at external events and workshops to help other women present their work, mentoring, and a book club.

Jenni: I’m sorry that I missed Percona Live and this BoF last year; I was out on maternity leave. I believe that tech/startup culture is a huge reason that fewer women are entering and staying these days, but a quick web search will lead you to any number of articles debating the subject. I run into quite a few women working with MySQL; its large, open community and generally collaborative and supportive nature are very welcoming. As the article you linked to suggests, MySQL has a broad audience. It’s easy to get started with and pull into any project, and as a result, most software professionals have worked with it at some time or another.

On another note, I’m happy to see that Percona Live has a Code of Conduct. I hope that Percona and/or MySQL will consider adopting a Community Code of Conduct like Python, Puppet, and Ubuntu. Doing so raises the bar for all participants, without hampering collaboration and creativity!

* * *

Thanks very much, Susanne, Jenni and Josh! I look forward to seeing you next month at the conference. And readers, if you’d like to attend Percona Live, use the promo code Yelp15 for 15% off! Just enter that during registration. If you’re already attending, be sure to tweet about your favorite sessions using the hashtag #PerconaLive. And if you need to find a great place to eat while attending Percona Live, click here for excellent Yelp recommendations.

The post Yelp IT! A talk with 3 Yelp MySQL DBAs on Percona Live & more appeared first on MySQL Performance Blog.

HAProxy configuration to prevent multi-node writing with Percona Cluster

Latest Forum Posts - March 24, 2015 - 2:16pm
So, as recommended with Percona Cluster, we are (trying) to write to only one cluster node at a time. The application is using JDBC connection pooling. Whenever there is a flap in the service, it seems we end up writing to multiple nodes, followed by cluster deadlocks / local certification errors. We've improved this a little by changing our configuration to 'stick on dst' instead of 'stick on src'.
Below is the configuration. Any suggestions? Should we not be using sticky sessions?
global
log 127.0.0.1 local0
maxconn 4960
#debug
#quiet
user haproxy
group haproxy
stats socket /var/run/haproxy-stats uid haproxy mode 770
stats maxconn 10
noepoll

defaults
log global
option dontlognull
retries 2
option redispatch
maxconn 2000
timeout connect 4s
timeout client 1800s
timeout server 1800s

peers hapeers
peer xxxxxxx yyyyyy:1024
peer aaaaaa bbbbbb:1024

frontend percona_cluster
bind 0.0.0.0:3306
default_backend percona_cluster

backend percona_cluster
mode tcp
option tcpka
option tcplog
option mysql-check
stick-table type ip size 1 peers hapeers nopurge
stick on dst
server ec2-xxxxxxxxx.compute-1.amazonaws.com xxxxxxx:3306 maxconn 2500 check port 9200 inter 12000 rise 3 fall 3
server ec2-xxxxxxxxx.compute-1.amazonaws.com xxxxxxx:3306 maxconn 2500 check port 9200 inter 12000 rise 3 fall 3
server ec2-xxxxxxxxx.compute-1.amazonaws.com xxxxxxx:3306 maxconn 2500 check port 9200 inter 12000 rise 3 fall 3
option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www

# set up application listeners configured via json
listen ssl_cert
bind 0.0.0.0:443 ssl crt /etc/haproxy/haproxy.pem no-sslv3
balance roundrobin
stick-table type ip size 200k peers hapeers expire 30m
mode http
stats enable
stats scope .
stats uri /haproxy-hp?stats
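
For a strict single-writer setup, a common alternative to stick tables is to list one node as the active server and mark the others with HAProxy's `backup` keyword, so writes only move when the primary fails its health check. A hedged sketch, reusing the placeholder hostnames and the port-9200 clustercheck from the configuration above:

```
backend percona_cluster_single_writer
    mode tcp
    option tcpka
    option tcplog
    option mysql-check
    # node1 takes all writes; node2/node3 only serve if node1 fails its check
    server node1 xxxxxxx:3306 maxconn 2500 check port 9200 inter 12000 rise 3 fall 3
    server node2 xxxxxxx:3306 maxconn 2500 check port 9200 inter 12000 rise 3 fall 3 backup
    server node3 xxxxxxx:3306 maxconn 2500 check port 9200 inter 12000 rise 3 fall 3 backup
```

With this layout, sticky sessions are unnecessary for correctness: at any moment only one node accepts new connections, and established connections to a failed node are simply dropped rather than redistributed across nodes.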


Restoring XtraBackup files onto a Windows located Database

Latest Forum Posts - March 24, 2015 - 9:36am
I currently work in a MySQL on Windows based environment (mainly 5.5), but we are gradually moving more and more to MySQL on Linux (in fact we now have more Linux servers than Windows). At present we use mysqldump for our backups, but this is getting increasingly time-consuming and unwieldy, so I am looking at alternatives.

If we use XtraBackup to take backups from one of our Linux DB's can we restore that file onto a Windows machine if required?

Second node won't join cluster/SST fails

Latest Forum Posts - March 24, 2015 - 5:25am
Hello,

I've had a 3-node cluster online for a few days, and I tried to take the second node down to change the tmpdir in my.cnf (disk was getting full). When I start MySQL now, the node will not receive an SST and fails with some frustrating error messages.

Packages (same on both nodes)
ii percona-xtradb-cluster-server-5.6 5.6.21-25.8-938.trusty
ii percona-xtrabackup 2.2.9-5067-1.trusty

my.cnf

# Path to Galera library
wsrep_provider=/usr/lib/libgalera_smm.so
# Cluster connection URL
wsrep_cluster_address=gcomm://1.1.1.1,2.2.2.2,3.3.3.3
#wsrep_cluster_address=gcomm://
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Authentication for SST method
wsrep_sst_auth="sstuser:passwordhere"
# Node #1 address
wsrep_node_address=2.2.2.2
# SST method
wsrep_sst_method=xtrabackup-v2
# Cluster name
wsrep_cluster_name=brp
#innodb_buffer_pool_size=145774M
innodb_flush_log_at_trx_commit=2
innodb_file_per_table=1
innodb_data_file_path = ibdata1:100M:autoextend
## You may want to tune the below depending on number of cores and disk sub
innodb_read_io_threads=4
innodb_write_io_threads=4
innodb_io_capacity=200
innodb_doublewrite=1
innodb_log_file_size=1024M
innodb_log_buffer_size=96M
innodb_buffer_pool_instances=8
innodb_log_files_in_group=2
innodb_thread_concurrency=64
#innodb_file_format=barracuda
innodb_flush_method = O_DIRECT
innodb_autoinc_lock_mode=2
## avoid statistics update when doing e.g show tables
innodb_stats_on_metadata=0
innodb_data_home_dir=/var/lib/mysql
innodb_log_group_home_dir=/var/lib/mysql
innobackup.prepare.log (JOINER)
InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
and Percona LLC and/or its affiliates 2009-2013. All Rights Reserved.

This software is published under the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.

Get the latest version of Percona XtraBackup, documentation, and help resources:
http://www.percona.com/xb/p

150324 12:01:13 innobackupex: Starting the apply-log operation

IMPORTANT: Please check that the apply-log run completes successfully.
           At the end of a successful apply-log run innobackupex
           prints "completed OK!".

150324 12:01:13 innobackupex: Starting ibbackup with command: xtrabackup --defaults-file="/var/lib/mysql/.sst/backup-my.cnf" --defaults-group="mysqld" --prepare --target-dir=/var/lib/mysql/.sst

xtrabackup version 2.2.9 based on MySQL server 5.6.22 Linux (x86_64) (revision id: )
xtrabackup: cd to /var/lib/mysql/.sst
xtrabackup: Error: cannot open ./xtrabackup_checkpoints
xtrabackup: error: xtrabackup_read_metadata()
xtrabackup: This target seems not to have correct metadata...
2015-03-24 12:01:13 7fc3e1ed3780 InnoDB: Operating system error number 2 in a file operation.
InnoDB: The error means the system cannot find the path specified.
xtrabackup: Warning: cannot open ./xtrabackup_logfile. will try to find.
2015-03-24 12:01:13 7fc3e1ed3780 InnoDB: Operating system error number 2 in a file operation.
InnoDB: The error means the system cannot find the path specified.
xtrabackup: Fatal error: cannot find ./xtrabackup_logfile.
xtrabackup: Error: xtrabackup_init_temp_log() failed.
innobackupex: got a fatal error with the following stacktrace: at /usr//bin/innobackupex line 2642.
	main::apply_log() called at /usr//bin/innobackupex line 1570
innobackupex: Error: innobackupex: ibbackup failed at /usr//bin/innobackupex line 2642.

innobackup.backup.log (DONOR)
InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
and Percona LLC and/or its affiliates 2009-2013. All Rights Reserved.

This software is published under the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.

Get the latest version of Percona XtraBackup, documentation, and help resources:
http://www.percona.com/xb/p

150324 11:58:24 innobackupex: Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_file=/etc/mysql/my.cnf;mysql_read_default_group=xtrabackup;mysql_socket=/var/run/mysqld/mysqld.sock' as 'sstuser' (using password: YES).
150324 11:58:24 innobackupex: Connected to MySQL server
150324 11:58:24 innobackupex: Starting the backup operation

IMPORTANT: Please check that the backup run completes successfully.
           At the end of a successful backup run innobackupex
           prints "completed OK!".

innobackupex: Using server version 5.6.21-70.1-56
innobackupex: Created backup directory /tmp/tmp.EZdqYbIZoL

150324 11:58:24 innobackupex: Starting ibbackup with command: xtrabackup --defaults-file="/etc/mysql/my.cnf" --defaults-group="mysqld" --backup --suspend-at-end --target-dir=/tmp --tmpdir=/tmp --extra-lsndir='/tmp' --stream=xbstream
innobackupex: Waiting for ibbackup (pid=20892) to suspend
innobackupex: Suspend file '/tmp/xtrabackup_suspended_2'

xtrabackup version 2.2.9 based on MySQL server 5.6.22 Linux (x86_64) (revision id: )
xtrabackup: uses posix_fadvise().
xtrabackup: cd to /var/lib/mysql
xtrabackup: open files limit requested 1024000, set to 1024000
xtrabackup: using the following InnoDB configuration:
xtrabackup:   innodb_data_home_dir = /var/lib/mysql
xtrabackup:   innodb_data_file_path = ibdata1:100M:autoextend
xtrabackup:   innodb_log_group_home_dir = /var/lib/mysql
xtrabackup:   innodb_log_files_in_group = 2
xtrabackup:   innodb_log_file_size = 1073741824
xtrabackup: using O_DIRECT
>> log scanned up to (1374935811)
xtrabackup: Generating a list of tablespaces
2015-03-24 11:58:24 7f9248add780 InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
innobackupex: got a fatal error with the following stacktrace: at /usr//bin/innobackupex line 2704.
	main::wait_for_ibbackup_file_create('/tmp/xtrabackup_suspended_2') called at /usr//bin/innobackupex line 2724
	main::wait_for_ibbackup_suspend('/tmp/xtrabackup_suspended_2') called at /usr//bin/innobackupex line 1977
	main::backup() called at /usr//bin/innobackupex line 1601
innobackupex: Error: The xtrabackup child process has died at /usr//bin/innobackupex line 2704.

Now, there are two errors that look like permissions/OS errors: OS error 13 (Permission denied) and OS error 2 (file not found). The file-not-found error relates to the missing xtrabackup_checkpoints file, I think, but I have no idea whether I have to fix that or not.

The Permission Denied errors, make no sense to me, here is some permissions from my setup.

(JOINER)
-rw-r--r-- 1 mysql root 5164 Mar 24 11:56 /etc/mysql/my.cnf <--- my.cnf
drwxrwxrwt 5 root root 4096 Mar 24 12:20 tmp/ <----tmpdir
drwxr-xr-x 3 mysql mysql 12288 Mar 24 12:08 mysql/ <--- datadir

(DONOR)
-rw-r--r-- 1 mysql root 5164 Mar 24 11:56 my.cnf <--- my.cnf
drwxrwxrwt 4 root root 4096 Mar 24 12:20 tmp/ <-- tmpdir
drwxr-xr-x 20 mysql mysql 12288 Mar 24 12:09 mysql/ <----datadir



Any help is appreciated. I've gone round in circles for hours and hours checking the basic config and it all seems OK.
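
The donor-side OS error 13 occurs while xtrabackup stages the stream under --tmpdir=/tmp (note the backup directory /tmp/tmp.EZdqYbIZoL in the log above). One thing worth trying, assuming the xtrabackup-v2 SST script in this release series reads its staging directory from an [sst] section of my.cnf, is to stage SST files in a dedicated directory owned by mysql instead of /tmp:

```
# my.cnf fragment on the donor. The path is only an example; it must
# exist and be owned by mysql:mysql before the next SST attempt.
[sst]
tmpdir=/var/lib/mysql-ssttmp
```

AppArmor profiles for mysqld on Ubuntu are another common source of EACCES in this situation, so checking /etc/apparmor.d/ for a mysqld profile is also worth doing.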

New Percona cluster with existing data

Latest Forum Posts - March 24, 2015 - 2:43am
Hi everyone,

I am building up a staging Percona cluster with three servers from a test database taken from xtrabackup (including grastate files, etc.). The relevant database is about 30G, and what I did was copy the data files on all servers to the /var/lib/mysql/ directory and change ownership to mysql.

I bootstrap the first server, and then when I start the second server it starts the SST process.

How can I avoid the SST process?
Also, since the data are exactly the same, why does the joiner start SST?

Thanks
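
Galera decides between SST and a lighter join by comparing the joiner's saved state in grastate.dat against the cluster's state. If the copied grastate.dat is missing, has a zeroed uuid, or has seqno: -1, the joiner requests a full SST even when the data files are identical. A hedged sketch of what a usable grastate.dat looks like (the uuid and seqno below are placeholders; the real values come from the backup, and the file must be identical on every seeded node):

```
# GALERA saved state
version: 2.1
uuid:    9f8a1e3c-d123-11e4-ab63-0000000000aa
seqno:   12345
cert_index:
```

When the uuid matches the bootstrapped cluster's state UUID and the seqno is a valid (non-negative) position, the joiner can come up without transferring the full 30G again.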
