

Feed aggregator

Restore fails on fresh Ubuntu/Debian-installation, "mysqld"-group missing

Latest Forum Posts - January 26, 2015 - 2:59am
Admins - Please delete this thread.

Sorry about the double post - the forum software appears to be buggy (error message "An unexpected error was returned: 'Subscriptions'", but there was no indication that the post had gone through successfully).

Can't find an option to delete the post as OP either?!

Restore fails on fresh Ubuntu/Debian-installation - missing "mysqld"-group

Latest Forum Posts - January 26, 2015 - 2:58am
While testing XtraBackup, I didn't manage to get it to restore on Debian 7.8 "wheezy" or Ubuntu 14.04 "Trusty".

It complains about a missing "mysqld" group in my config file, even though there is a [mysqld] group in that file.

innobackupex --apply-log --defaults-file=/etc/mysql/my.conf --defaults-group=mysqld --ibbackup=xtrabackup_51 .

InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
and Percona LLC and/or its affiliates 2009-2013. All Rights Reserved.

This software is published under
the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.

Get the latest version of Percona XtraBackup, documentation, and help resources:
http://www.percona.com/xb/p

150126 11:42:19 innobackupex: Starting the apply-log operation

IMPORTANT: Please check that the apply-log run completes successfully.
At the end of a successful apply-log run innobackupex
prints "completed OK!".

sh: 1: xtrabackup_51: not found
innobackupex: got a fatal error with the following stacktrace: at /usr/bin/innobackupex line 4482.
main::get_option('innodb_data_file_path') called at /usr/bin/innobackupex line 2615
main::apply_log() called at /usr/bin/innobackupex line 1561
innobackupex: Error: no 'mysqld' group in server configuration file '/etc/mysql/my.conf' at /usr/bin/innobackupex line 4482.

Here's /etc/mysql/my.conf (which is an untouched standard Debian/Ubuntu installation from the official repo):

[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock

[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0

[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
bind-address = 127.0.0.1
key_buffer = 16M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover = BACKUP
query_cache_limit = 1M
query_cache_size = 16M
log_error = /var/log/mysql/error.log
expire_logs_days = 10
max_binlog_size = 100M

[mysqldump]
quick
quote-names
max_allowed_packet = 16M

[mysql]

[isamchk]
key_buffer = 16M

!includedir /etc/mysql/conf.d/
Since I couldn't find this error online, I hope someone in this forum can help me.
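One quick way to double-check how that exact file is parsed is my_print_defaults, which reads option files the same way the MySQL tools do (a minimal diagnostic sketch, using the paths from the post above):

my_print_defaults --defaults-file=/etc/mysql/my.conf mysqld
# If this prints nothing, innobackupex will not see a [mysqld] group either;
# if it prints the options shown above, the parsing is fine and the problem is
# elsewhere (note the earlier "sh: 1: xtrabackup_51: not found" line in the output).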

Harbaugh’s stance might have fueled the reported coverup

Latest Forum Posts - January 26, 2015 - 1:41am
As we wait (and wait . . . and wait) for the Ravens to handle the alleged errors, inaccuracies, false assumptions and, maybe, misunderstandings in the ESPN report alleging that Ravens director of safety Darren Sanders knew the contents of the notorious elevator video in February and Ravens president Dick Cass knew in early April Martavis Bryant Jersey, coach John Harbaugh has addressed the report that he lobbied for the group to reduce Rice in February.
“Every single football decision we make, we perform collectively, Harbaugh advised reporters following Sundays win at Cleveland, as Josh Alper has pointed out. Just like each football decision. You get together, you hash it out. [G.M.] Ozzie [Newsome] employs the term scrimmaging. You scrimmage it out, everybody’s received their opinions. It’s not black and white.”
Asked by Peter King of TheMMQB.com regardless of whether Harbaugh desired to reduce Rice in February, Harbaugh didnt offer an unequivocal no.
“That is such an unfair characterization, Harbaugh explained. It is not fair to the organization. We said all along that the information would establish the consequences, and that was my stance from the start of this.”
Reading through people comments with each other in light of the ESPN report https://www.steelersboutique.com/110...eCastro_Jersey, its a honest characterization to say that Harbaugh at least raised the likelihood of cutting Rice in February https://www.steelersboutique.com/457..._Foster_Jersey, and that Newsomes scrimmaging method resulted in a consensus that the Ravens would preserve Rice but that ultimately the information would establish the consequences.
The details, once they ultimately came to light through TMZ, determined the ultimate consequence for Rice. If Harbaugh indeed raised for the duration of the scrimmaging approach that the group must lower Rice, its affordable to feel that Harbaughs agreement to maintain Rice hinged on the facts displaying that Rice didnt punch his then-fiancée.
As soon as the information showed he did, end of story.
So if, as the ESPN report contends, Sanders, Cass, and possibly other people in the organization knew the true contents of the elevator video ahead of the elevator video came out https://www.steelersboutique.com/434...itchell_Jersey, probably they concealed the reality not only to safe a quick suspension from the league office, but also to maintain Harbaugh from winning the internal scrimmage as to whether or not a player who had been paid $25 million among July 2012 and December 2013 need to be dumped from the roster.
Either way, ESPNs contention that the Ravens knew the contents of the video long just before seeing it has not but been rebutted by Sanders or Cass. The only man or woman who has spoken is Harbaugh, whose remarks really support show why a coverup happened, if a coverup in truth did occur.

NFL threatens Marshawn Lynch with six-figure fine

Latest Forum Posts - January 26, 2015 - 1:40am
Jets coach Rex Ryan was fined $100,000 final weekend for utilizing profanity. Seahawks running back Marshawn Lynch faces an identical punishment for not employing profanity. Or any other words.
Per a league supply, the NFL has threatened to fine Lynch $100,000 if he fails to speak to the media right after Sundays game towards the Chiefs https://www.steelersboutique.com/131..._Dawson_Jersey.
The sum comes from the $50,000 fine that was imposed then suspended final yr for failing to speak to the media. When the NFL lifted the fine, it also warned Lynch that future failure to cooperate would consequence in reinstatement of the fine, plus one more $50,000.
Although the necessity that players talk to the media is aimed at allowing the media to serve as the conduit to the fans, supporters usually side with players who are fined for refusing to communicate to the media https://www.steelersboutique.com/89-..._Carter_Jersey. Thats very likely what will take place in this case, too, specially because the NFL has spent significantly of the season unable to get out of its own way on matters this kind of as the Ray Rice and Adrian Peterson situations Dermontti Dawson Jersey.
If an individual is going to be fined $100,000 https://www.steelersboutique.com/310...eenwood_Jersey for bungling the Rice investigation or reneging on the agreement to reinstate Peterson, then go ahead and fine Lynch for not talking to the media. If the door isnt going to swing both ways, then let Lynch keep silent if he chooses to do so.

Darrelle Revis: 10 Motives the Jets Should Get a Deal Accomplished This Week

Latest Forum Posts - January 25, 2015 - 11:06pm
Based on who you feel, Darrelle Revis could be back in uniform significantly sooner than originally anticipated.Tim Cowlishaw of the Dallas Morning Information is reporting on Twitter that Revis is expected to signal a deal sometime this week Akeem Ayers Jersey. ESPN's Adam Schefter has refuted the report and explained his sources have not indicated the talks have progressed to a level where a deal looks imminent.Despite the conflicting reviews Monday, there is no question that both sides would be wise to get a deal wrapped up this week. Here's a look at ten reasons why Darrelle Revis and the Jets want to reconcile their variations.

New England Patriots 2014 NFL Draft Reality or Fiction

Latest Forum Posts - January 25, 2015 - 11:05pm
The structure of the NFL offseason has left supporters in a bit of a holding pattern at the moment Vince Wilfork Jersey. However totally free-agent bargains are even now offered, the bulk of the money in that department has previously been invested. And nevertheless, with about a month to go before the draft, it's still a bit early to make any definitive conclusions about a team's draft plans.Of program, that will not halt the frequently fruitless draft speculation train. Now is the time period when myriad rumors make likely connections, fostering a exciting (but also exhausting) workout about nearly every single conceivable draft chance for every team Jonathan Casillas Jersey.The New England Patriots are as tight-lipped an organization as any, but even the steel curtain shrouding Foxboro sometimes springs a leak. Consequently, Pats supporters at least have an concept of who their staff may well pick on draft day, even if New England is as likely as any crew to throw a curveball.Not all rumors are real of course Brandon Bolden Jersey, but it's also real that we need to not readily dismiss all individuals whispers. With the caveat that prospect evaluations are not ultimate at this stage, here is one viewpoint on the validity of some recent Patriots' draft rumors.

Error 1193: Unknown system variable 'port' on testing agent's MySQL connection

Latest Forum Posts - January 24, 2015 - 5:05am
I get this error every time I try to add a MySQL server to my dashboard:
Error 1193: Unknown system variable 'port'
I've found nothing useful about fixing it in this forum or on other websites,
so any help with troubleshooting this issue is highly appreciated.

Kind Regards

Using Percona Cloud Tools to solve real-world MySQL problems

Latest MySQL Performance Blog posts - January 23, 2015 - 11:49am

For months when speaking with customers I have been positioning Percona Cloud Tools (PCT) as a valuable tool for the DBA/Developer/SysAdmin but only recently have I truly been able to harness the data and make a technical recommendation to a customer that I feel would have been very difficult to accomplish otherwise.

Let me provide some background: I was tasked with performing a Performance Audit for one of our customers (Performance Audits are extremely popular as they allow you to have a MySQL Expert confirm or reveal challenges within your MySQL environment and make your database run faster!) and as part of our conversation we discussed and agreed to install Percona Cloud Tools. We let the site run for a few days, and then I started my audit. What I noticed was that at regular intervals there was often a CPU spike, along with a corresponding drop in Queries Per Second (QPS), but that lasted only for a few seconds. We decided that further investigation was warranted as the customer was concerned the spikes impacted their users’ experience with the application.

Here are the tasks that Percona Cloud Tools made easy while I worked to identify the source of the CPU spike and QPS drop:

  1. The per-second granularity of PCT's data capture allowed me to identify how significant the spike and QPS drop actually were. If I had been looking at 1-minute or higher average values (such as Cacti would provide) I probably wouldn't have been able to detect the spike or stall as clearly in the first place; it would have been lost in the average. PCT's current graphs group at the 1-minute range, but you can view the min and max values within each 1-minute group, since they are the true highest and lowest observed 1-second intervals during that minute.
  2. The ability for all graphs to maintain the same time resolution allowed me to zero in on the problematic time period and then quickly look across all graphs for corresponding deflections. This analysis led me to discover a significant spike in InnoDB disk reads.
  3. The Query Analytics functionality let me zoom in again on the problematic queries. Adjusting Query Analytics to the appropriate time period narrowed down the range of unique queries that could be considered the cause. This, in my opinion, is the best part of using PCT.
  4. Query Analytics allowed me to view the total Rows Examined for each query based on just this shortened interval. I then tagged those that had more than 10k Rows Examined (an arbitrary cutoff, but most queries for this customer fell below it) so that I could review them in real time with the customer before deciding what to do next. This sort of information can only be obtained by leveraging the slow query log; it is not available via Performance_Schema or via network sniffing.

Once we had identified the problematic queries, the rest was routine query optimization: 10 minutes of work using Percona Cloud Tools for what might have been an hour using traditional methods!

For those of you wondering how else this can be done: assuming you detected the CPU spike / QPS drop (perhaps you are using Graphite or another tool that can deliver per-second resolution), you'd also need to be capturing the slow query log at a fine enough resolution (I prefer long_query_time=0 to just get it all), and then be adept at leveraging pt-query-digest with the --since and --until options to narrow down your range of queries. The significant drawback to this approach is that each time you want to tweak your time range you probably need to stream through a fairly large slow log file multiple times, which can be both CPU- and disk-intensive, so it can take some time (probably minutes, maybe hours) depending on the size of your log file. Certainly a workable approach, but nowhere near as quick as reloading a page in your browser.
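For reference, that traditional workflow would look roughly like the following (a hedged sketch; the slow log path and time window are illustrative):

# log every query to the slow log for the period of interest
mysql -e "SET GLOBAL slow_query_log = 1; SET GLOBAL long_query_time = 0"

# later, digest only the window around the observed spike
pt-query-digest --since '2015-01-20 10:00:00' --until '2015-01-20 10:05:00' /var/lib/mysql/slow.log > spike-digest.txt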

So what are you waiting for? Start using Percona Cloud Tools today, it’s free! Register for the free beta here.

The post Using Percona Cloud Tools to solve real-world MySQL problems appeared first on MySQL Performance Blog.

Full graceful restart without bootstrap, pc.recovery=true hangs: no gvwstate.dat!?

Latest Forum Posts - January 23, 2015 - 3:23am
Hi,

I tried to do a full graceful cluster restart without bootstrapping, relying on the pc.recovery=true feature plus gvwstate.dat.

"This feature can be used for automatic recovery from full cluster crashes, such as in the case of a data center power outage and graceful full cluster restarts without the need for explicitly bootstrapping a new Primary Component."


However, gvwstate.dat was deleted on graceful shutdown, so all the daemons hung on reboot.

systemctl stop mysql
ls -l /var/lib/mysql/gv*
No such file or directory


How is full graceful restart supposed to work?

Version: Percona-XtraDB-Cluster-56-5.6.21-25.8.940.el7.x86_64
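For what it is worth, when gvwstate.dat is missing the usual fallback is a manual bootstrap of the most advanced node (a hedged sketch; the datadir and the bootstrap command vary between PXC packages and init systems):

# on every node, check the last committed seqno
cat /var/lib/mysql/grastate.dat

# on the node with the highest seqno, bootstrap a new Primary Component
service mysql bootstrap-pxc

# then start the remaining nodes normally so they rejoin via IST/SST
service mysql start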

Oracle MySQL Security Patches January 2015

Latest Forum Posts - January 22, 2015 - 6:57am
Hi,

I'm new to the forum. In light of the new Oracle MySQL security patches for January 2015 (covering MySQL 5.5.40 and earlier and 5.6.21 and earlier), how will Percona incorporate these patches, or will the current Percona servers need to be patched?

The reason I'm asking is that we have a host of Percona servers (on a range of versions) and we want to know how these patches will be handled. Will a new release be brought out, or do we need to apply these patches to our servers ourselves?

Any help will be greatly appreciated.

Many Thanks,
Sunny

Nodes terminated when adding a new one

Latest Forum Posts - January 22, 2015 - 12:59am
Hello!

I have a cluster of 2 nodes (master-master). One of them is a master for a slave.
When I tried to add a new master node to the cluster, the donor node and the joining node both went down. On the donor node, in innobackup.backup.log, I see the following:
150121 14:49:11 innobackupex: Finished backing up non-InnoDB tables and files

150121 14:49:11 innobackupex: Executing LOCK BINLOG FOR BACKUP...
DBD::mysql::db do failed: Deadlock found when trying to get lock; try restarting transaction at /usr//bin/innobackupex line 3036.
innobackupex: got a fatal error with the following stacktrace: at /usr//bin/innobackupex line 3039
main::mysql_query('HASH(0x10a9720)', 'LOCK BINLOG FOR BACKUP') called at /usr//bin/innobackupex line 3501
main::mysql_lock_binlog('HASH(0x10a9720)') called at /usr//bin/innobackupex line 2000
main::backup() called at /usr//bin/innobackupex line 1592
innobackupex: Error:
Error executing 'LOCK BINLOG FOR BACKUP': DBD::mysql::db do failed: Deadlock found when trying to get lock; try restarting transaction at /usr//bin/innobackupex line 3036.
150121 14:49:11 innobackupex: Waiting for ibbackup (pid=44712) to finish


Versions:
mysqld Ver 5.6.21-70.1-56 for Linux on x86_64 (Percona XtraDB Cluster (GPL), Release rel70.1, Revision 938, WSREP version 25.8, wsrep_25.8.r4150)

xtrabackup version 2.2.8 based on MySQL server 5.6.22 Linux (x86_64) (revision id: )



I tried to set up the new node again and got the same error.

CPU Usage

Latest Forum Posts - January 21, 2015 - 3:52pm
I'm looking at the graphs for CPU usage and I'm getting values with 'u' and 'm'. Can you explain what these mean?

Also, I'm looking at the Daily (5 min average) graph and the other graphs. The values for the "Daily (5 min average)" graph on the 'y' axis exceed 500. However, the values for all other graphs only reach 400 and reflect an increase of 100 with each added virtual CPU (it's a four-core processor).

Below are examples of the Daily and Weekly graphs. Thanks.

innobackupex Incremental backup fails on LOCK BINLOG FOR BACKUP...

Latest Forum Posts - January 21, 2015 - 11:36am
I was executing an incremental backup of my database and it failed with the following trace. Any clues as to what is wrong?

Backup Version is: InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
and Percona LLC and/or its affiliates 2009-2013. All Rights Reserved.

This software is published under
the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.

Get the latest version of Percona XtraBackup, documentation, and help resources:
http://www.percona.com/xb/p

Server Version is: Server version: 5.6.21-70.0 Percona Server (GPL), Release 70.0, Revision 688

Command is: innobackupex --compress --incremental /usb/ariel/innobackupex/mysql --incremental-basedir=/usb/ariel/innobackupex/mysql/2015-01-14_05-51-30

Log Trace is:
innobackupex: Backing up files '/srv/mysql//annotation_service/*.{frm,isl,MYD,MYI,MAD,MAI,MRG,TRG,TRN,ARM,ARZ,CSM ,CSV,opt,par}' (10 files)
innobackupex: Backing up file '/srv/mysql//entrez/db.opt'
innobackupex: Backing up file '/srv/mysql//entrez/gene.frm'
>> log scanned up to (8048683632701)
innobackupex: Backing up file '/srv/mysql//entrez/synonym.frm'
150121 10:38:01 innobackupex: Finished backing up non-InnoDB tables and files

150121 10:38:01 innobackupex: Executing LOCK BINLOG FOR BACKUP...
DBD::mysql::db do failed: MySQL server has gone away at /usr/bin/innobackupex line 3036.
innobackupex: got a fatal error with the following stacktrace: at /usr/bin/innobackupex line 3039
main::mysql_query('HASH(0x1df0d20)', 'LOCK BINLOG FOR BACKUP') called at /usr/bin/innobackupex line 3501
main::mysql_lock_binlog('HASH(0x1df0d20)') called at /usr/bin/innobackupex line 2000
main::backup() called at /usr/bin/innobackupex line 1592
innobackupex: Error:
Error executing 'LOCK BINLOG FOR BACKUP': DBD::mysql::db do failed: MySQL server has gone away at /usr/bin/innobackupex line 3036.
150121 10:38:01 innobackupex: Waiting for ibbackup (pid=4074) to finish


Encrypted and incremental backups.

Latest Forum Posts - January 21, 2015 - 1:50am
Hello,


I'm currently writing backup and restoration scripts based on innobackupex. I need the backups to be encrypted and incremental.

Therefore, in order to enable incremental backups on top of an encrypted base directory, I'm using the '--extra-lsndir' option to save an additional cleartext 'xtrabackup_checkpoints' file. I think it is a good solution (it comes from this blog post).

My question is: can I safely set the '--extra-lsndir' value to the same value as the backup directory, and then delete xtrabackup_checkpoints.xbcrypt?
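For context, the setup described above looks roughly like this (a hedged sketch; the key file and directory names are illustrative, and it does not answer the deletion question):

# full backup, encrypted, keeping a cleartext xtrabackup_checkpoints aside
innobackupex --encrypt=AES256 --encrypt-key-file=/etc/mysql/backup.key --extra-lsndir=/backups/full-lsn /backups/full

# incremental backup driven by the cleartext checkpoints from the previous run
innobackupex --encrypt=AES256 --encrypt-key-file=/etc/mysql/backup.key --incremental --incremental-basedir=/backups/full-lsn --extra-lsndir=/backups/inc1-lsn /backups/inc1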

Identifying useful info from MySQL row-based binary logs

Latest MySQL Performance Blog posts - January 20, 2015 - 7:34am

As a MySQL DBA/consultant, it is part of my job to decode the MySQL binary logs – and there are a number of reasons for doing that. In this post, I’ll explain how you can get the important information about your write workload using MySQL row-based binary logs and a simple awk script.

First, it is important to understand that row-based binary logs contain the actual changes done by a query. For example, if I run a delete query against a table, the binary log will contain the rows that were deleted. MySQL provides the mysqlbinlog utility to decode the events stored in MySQL binary logs. You can read more about mysqlbinlog in detail in the reference manual here.

The following example illustrates how mysqlbinlog displays row events that specify data modifications. These correspond to events with the WRITE_ROWS_EVENT, UPDATE_ROWS_EVENT, and DELETE_ROWS_EVENT type codes.

We will use the following options of mysqlbinlog:
--base64-output=decode-rows
--verbose, -v
--start-datetime="datetime"
--stop-datetime="datetime"

We have a server running with row-based binary logging.

mysql> show variables like '%binlog_format%';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| binlog_format | ROW   |
+---------------+-------+
1 row in set (0.00 sec)
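If a server is not yet configured this way, ROW format can be enabled dynamically or in the option file (a minimal sketch; a config-file change requires a restart to take effect):

mysql -e "SET GLOBAL binlog_format = 'ROW'"
# or persistently in my.cnf, under [mysqld]:
#   binlog_format = ROW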

We created a test table and executed the following sequence of statements under a transaction.

use test;

CREATE TABLE t (
  id INT NOT NULL,
  name VARCHAR(20) NOT NULL,
  date DATE NULL
) ENGINE = InnoDB;

START TRANSACTION;
INSERT INTO t VALUES(1, 'apple', NULL);
UPDATE t SET name = 'pear', date = '2009-01-01' WHERE id = 1;
DELETE FROM t WHERE id = 1;
COMMIT;

Now let’s see how it is represented in binary logs.

# mysqlbinlog --base64-output=decode-rows -vv --start-datetime="2015-01-12 21:40:00" --stop-datetime="2015-01-12 21:45:00" mysqld-bin.000023
/*!*/;
# at 295
#150112 21:40:14 server id 1  end_log_pos 367 CRC32 0x19ab4f0f  Query   thread_id=108   exec_time=0     error_code=0
SET TIMESTAMP=1421079014/*!*/;
BEGIN
/*!*/;
# at 367
#150112 21:40:14 server id 1  end_log_pos 415 CRC32 0x6b1f2240  Table_map: `test`.`t` mapped to number 251
# at 415
#150112 21:40:14 server id 1  end_log_pos 461 CRC32 0x7725d174  Write_rows: table id 251 flags: STMT_END_F
### INSERT INTO `test`.`t`
### SET
###   @1=1 /* INT meta=0 nullable=0 is_null=0 */
###   @2='apple' /* VARSTRING(20) meta=20 nullable=0 is_null=0 */
###   @3=NULL /* VARSTRING(20) meta=0 nullable=1 is_null=1 */
# at 461
#150112 21:40:14 server id 1  end_log_pos 509 CRC32 0x7e44d741  Table_map: `test`.`t` mapped to number 251
# at 509
#150112 21:40:14 server id 1  end_log_pos 569 CRC32 0x0cd1363a  Update_rows: table id 251 flags: STMT_END_F
### UPDATE `test`.`t`
### WHERE
###   @1=1 /* INT meta=0 nullable=0 is_null=0 */
###   @2='apple' /* VARSTRING(20) meta=20 nullable=0 is_null=0 */
###   @3=NULL /* VARSTRING(20) meta=0 nullable=1 is_null=1 */
### SET
###   @1=1 /* INT meta=0 nullable=0 is_null=0 */
###   @2='pear' /* VARSTRING(20) meta=20 nullable=0 is_null=0 */
###   @3='2009:01:01' /* DATE meta=0 nullable=1 is_null=0 */
# at 569
#150112 21:40:14 server id 1  end_log_pos 617 CRC32 0xf134ad89  Table_map: `test`.`t` mapped to number 251
# at 617
#150112 21:40:14 server id 1  end_log_pos 665 CRC32 0x87047106  Delete_rows: table id 251 flags: STMT_END_F
### DELETE FROM `test`.`t`
### WHERE
###   @1=1 /* INT meta=0 nullable=0 is_null=0 */
###   @2='pear' /* VARSTRING(20) meta=20 nullable=0 is_null=0 */
###   @3='2009:01:01' /* DATE meta=0 nullable=1 is_null=0 */
# at 665
#150112 21:40:15 server id 1  end_log_pos 696 CRC32 0x85ffc9ff  Xid = 465
COMMIT/*!*/;
DELIMITER ;
# End of log file
ROLLBACK /* added by mysqlbinlog */;
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;

  • The row with "Table_map: `test`.`t`" gives the name of the table used by the query.
  • The "Write_rows/Update_rows/Delete_rows" rows give the event type.
  • Lines that start with "###" show the actual row data that was changed.
  • Columns are represented as @1, @2, and so on.

Now have a look at our simple awk script that takes the mysqlbinlog output and prints a beautiful summary of the INSERT/UPDATE/DELETE events from row-based binary logs. Just replace "mysqld-bin.000023" with your binary log file. The string "#15" in the third line is for the year 2015; if you are decoding a binary log file from 2014, just change it to "#14". It is also recommended to use the "--start-datetime" and "--stop-datetime" options to decode binary logs of a specific time range instead of decoding a large binary log file in full.

Script :

mysqlbinlog --base64-output=decode-rows -vv --start-datetime="2015-01-12 21:40:00" --stop-datetime="2015-01-12 21:45:00" mysqld-bin.000023 | awk 'BEGIN {s_type=""; s_count=0;count=0;insert_count=0;update_count=0;delete_count=0;flag=0;} {if(match($0, /#15.*Table_map:.*mapped to number/)) {printf "Timestamp : " $1 " " $2 " Table : " $(NF-4); flag=1} else if (match($0, /(### INSERT INTO .*..*)/)) {count=count+1;insert_count=insert_count+1;s_type="INSERT"; s_count=s_count+1;} else if (match($0, /(### UPDATE .*..*)/)) {count=count+1;update_count=update_count+1;s_type="UPDATE"; s_count=s_count+1;} else if (match($0, /(### DELETE FROM .*..*)/)) {count=count+1;delete_count=delete_count+1;s_type="DELETE"; s_count=s_count+1;} else if (match($0, /^(# at) /) && flag==1 && s_count>0) {print " Query Type : "s_type " " s_count " row(s) affected" ;s_type=""; s_count=0; } else if (match($0, /^(COMMIT)/)) {print "[Transaction total : " count " Insert(s) : " insert_count " Update(s) : " update_count " Delete(s) : " delete_count "] n+----------------------+----------------------+----------------------+----------------------+"; count=0;insert_count=0;update_count=0; delete_count=0;s_type=""; s_count=0; flag=0} } '

Output :

Timestamp : #150112 21:40:14 Table : `test`.`t` Query Type : INSERT 1 row(s) affected
Timestamp : #150112 21:40:14 Table : `test`.`t` Query Type : UPDATE 1 row(s) affected
Timestamp : #150112 21:40:14 Table : `test`.`t` Query Type : DELETE 1 row(s) affected
[Transaction total : 3 Insert(s) : 1 Update(s) : 1 Delete(s) : 1]
+----------------------+----------------------+----------------------+----------------------+

This awk script will return the following columns as output for every transaction in the binary logs:

Timestamp : timestamp of the event as logged in the binary log.
Table : database.tablename
Query Type : type of query executed on the table and the number of rows affected by it.

[Transaction total : 3 Insert(s) : 1 Update(s) : 1 Delete(s) : 1]
The line above is the summary of the transaction: it displays the total number of rows affected by the transaction and the number of rows affected by each type of query within it.

Let's execute some more queries in the sakila database and then summarize them with our script. We assume these queries were executed between "2015-01-16 13:30:00" and "2015-01-16 14:00:00".

use sakila;
mysql> update city set city="Acua++" where city_id=4;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0
mysql> delete from country where country_id=4;
Query OK, 1 row affected (0.04 sec)
mysql> insert into country (country_id,country,last_update) values ('4','Angola',now());
Query OK, 1 row affected (0.03 sec)
mysql> create table payment_tmp select * from payment;
Query OK, 16049 rows affected (0.80 sec)
Records: 16049  Duplicates: 0  Warnings: 0
mysql> delete from payment_tmp where payment_id between 10000 and 15000;
Query OK, 5001 rows affected (0.08 sec)
mysql> update payment_tmp set amount=9.99 where payment_id between 1 and 7000;
Query OK, 6890 rows affected (0.08 sec)
Rows matched: 7000  Changed: 6890  Warnings: 0
mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)
mysql> delete from country where country_id=5;
Query OK, 1 row affected (0.00 sec)
mysql> delete from city where country_id=5;
Query OK, 1 row affected (0.00 sec)
mysql> delete from address where city_id=300;
Query OK, 2 rows affected (0.00 sec)
mysql> commit;
Query OK, 0 rows affected (0.01 sec)

We will save our script in a file named “summarize_binlogs.sh” and summarize the queries we just executed.

# vim summarize_binlogs.sh #!/bin/bash BINLOG_FILE="mysqld-bin.000035" START_TIME="2015-01-16 13:30:00" STOP_TIME="2015-01-16 14:00:00" mysqlbinlog --base64-output=decode-rows -vv --start-datetime="${START_TIME}" --stop-datetime="${STOP_TIME}" ${BINLOG_FILE} | awk 'BEGIN {s_type=""; s_count=0;count=0;insert_count=0;update_count=0;delete_count=0;flag=0;} {if(match($0, /#15.*Table_map:.*mapped to number/)) {printf "Timestamp : " $1 " " $2 " Table : " $(NF-4); flag=1} else if (match($0, /(### INSERT INTO .*..*)/)) {count=count+1;insert_count=insert_count+1;s_type="INSERT"; s_count=s_count+1;} else if (match($0, /(### UPDATE .*..*)/)) {count=count+1;update_count=update_count+1;s_type="UPDATE"; s_count=s_count+1;} else if (match($0, /(### DELETE FROM .*..*)/)) {count=count+1;delete_count=delete_count+1;s_type="DELETE"; s_count=s_count+1;} else if (match($0, /^(# at) /) && flag==1 && s_count>0) {print " Query Type : "s_type " " s_count " row(s) affected" ;s_type=""; s_count=0; } else if (match($0, /^(COMMIT)/)) {print "[Transaction total : " count " Insert(s) : " insert_count " Update(s) : " update_count " Delete(s) : " delete_count "] n+----------------------+----------------------+----------------------+----------------------+"; count=0;insert_count=0;update_count=0; delete_count=0;s_type=""; s_count=0; flag=0} } ' :wq # chmod u+x summarize_binlogs.sh

Now we run our script to get the summary of the information logged in the binary log.

# ./summarize_binlogs.sh
Timestamp : #150116 13:41:09 Table : `sakila`.`city` Query Type : UPDATE 1 row(s) affected
[Transaction total : 1 Insert(s) : 0 Update(s) : 1 Delete(s) : 0]
+----------------------+----------------------+----------------------+----------------------+
Timestamp : #150116 13:41:59 Table : `sakila`.`country` Query Type : DELETE 1 row(s) affected
[Transaction total : 1 Insert(s) : 0 Update(s) : 0 Delete(s) : 1]
+----------------------+----------------------+----------------------+----------------------+
Timestamp : #150116 13:42:07 Table : `sakila`.`country` Query Type : INSERT 1 row(s) affected
[Transaction total : 1 Insert(s) : 1 Update(s) : 0 Delete(s) : 0]
+----------------------+----------------------+----------------------+----------------------+
Timestamp : #150116 13:42:13 Table : `sakila`.`payment_tmp` Query Type : INSERT 16049 row(s) affected
[Transaction total : 16049 Insert(s) : 16049 Update(s) : 0 Delete(s) : 0]
+----------------------+----------------------+----------------------+----------------------+
Timestamp : #150116 13:42:20 Table : `sakila`.`payment_tmp` Query Type : DELETE 5001 row(s) affected
[Transaction total : 5001 Insert(s) : 0 Update(s) : 0 Delete(s) : 5001]
+----------------------+----------------------+----------------------+----------------------+
Timestamp : #150116 13:42:28 Table : `sakila`.`payment_tmp` Query Type : UPDATE 6890 row(s) affected
[Transaction total : 6890 Insert(s) : 0 Update(s) : 6890 Delete(s) : 0]
+----------------------+----------------------+----------------------+----------------------+
Timestamp : #150116 13:42:42 Table : `sakila`.`country` Query Type : DELETE 1 row(s) affected
Timestamp : #150116 13:42:48 Table : `sakila`.`city` Query Type : DELETE 1 row(s) affected
Timestamp : #150116 13:42:53 Table : `sakila`.`address` Query Type : DELETE 2 row(s) affected
[Transaction total : 4 Insert(s) : 0 Update(s) : 0 Delete(s) : 4]
+----------------------+----------------------+----------------------+----------------------+

See how nicely our script has summarized the information logged in the binary log. For the last transaction it shows three rows, one for each statement executed within the transaction, followed by the final summary of the transaction: it affected 4 rows in total, all of which were deleted.

That is how we can summarize MySQL row-based binary logs in a more readable format using a simple awk script, which will save you time whenever you need to decode the binary logs for troubleshooting. Now I'll show you how this summarized information can help answer some basic but important questions.

Q1 : Which tables received the highest number of insert/update/delete statements?

./summarize_binlogs.sh | grep Table |cut -d':' -f5| cut -d' ' -f2 | sort | uniq -c | sort -nr
      3 `sakila`.`payment_tmp`
      3 `sakila`.`country`
      2 `sakila`.`city`
      1 `sakila`.`address`

Q2 : Which table received the highest number of DELETE queries?

./summarize_binlogs.sh | grep -E 'DELETE' |cut -d':' -f5| cut -d' ' -f2 | sort | uniq -c | sort -nr
      2 `sakila`.`country`
      1 `sakila`.`payment_tmp`
      1 `sakila`.`city`
      1 `sakila`.`address`

Q3: How many insert/update/delete queries were executed against the sakila.country table?

./summarize_binlogs.sh | grep -i '`sakila`.`country`' | awk '{print $7 " " $11}' | sort -k1,2 | uniq -c
      2 `sakila`.`country` DELETE
      1 `sakila`.`country` INSERT

Q4: Give me the top 3 statements which affected the maximum number of rows.

./summarize_binlogs.sh | grep Table | sort -nr -k 12 | head -n 3
Timestamp : #150116 13:42:13 Table : `sakila`.`payment_tmp` Query Type : INSERT 16049 row(s) affected
Timestamp : #150116 13:42:28 Table : `sakila`.`payment_tmp` Query Type : UPDATE 6890 row(s) affected
Timestamp : #150116 13:42:20 Table : `sakila`.`payment_tmp` Query Type : DELETE 5001 row(s) affected

See how easy it is to identify the large transactions.

Q5 : Find DELETE queries that affected more than 1000 rows.

./summarize_binlogs.sh | grep -E 'DELETE' | awk '{if($12>1000) print $0}'
Timestamp : #150116 13:42:20 Table : `sakila`.`payment_tmp` Query Type : DELETE 5001 row(s) affected

To get all queries that affected more than 1000 rows:

./summarize_binlogs.sh | grep -E 'Table' | awk '{if($12>1000) print $0}'
Timestamp : #150116 13:42:13 Table : `sakila`.`payment_tmp` Query Type : INSERT 16049 row(s) affected
Timestamp : #150116 13:42:20 Table : `sakila`.`payment_tmp` Query Type : DELETE 5001 row(s) affected
Timestamp : #150116 13:42:28 Table : `sakila`.`payment_tmp` Query Type : UPDATE 6890 row(s) affected

Conclusion

The information logged in binary logs is really helpful for understanding our write workload. We used mysqlbinlog + awk to present the binary log contents in a more readable format, and then used the summarized information to answer some important questions.

The post Identifying useful info from MySQL row-based binary logs appeared first on MySQL Performance Blog.

Importing big tables with large indexes with Myloader

Latest MySQL Performance Blog posts - January 20, 2015 - 3:56am

Mydumper is known as the faster mysqldump, much faster. So, if you take logical backups you will choose Mydumper instead of mysqldump. But what about the restore? Well, who needs to restore a logical backup? It takes ages! Even with Myloader. But this could change a bit if we are able to take advantage of Fast Index Creation.

As you probably know, Mydumper and mysqldump export the structure of a table, with all the indexes and the constraints, and of course the data. Then Myloader and mysql import the structure of the table and then the data. The most important difference is that you can configure Myloader to import the data using a certain number of threads. So the import steps are:

  1. Create the complete struct of the table
  2. Import the data

When you execute Myloader, it first creates the tables by executing the "-schema.sql" files, then takes all the filenames without "schema.sql" and puts them in a task queue. Every thread takes a filename from the queue (which is actually a chunk of the table) and executes it; when it finishes it takes another chunk from the queue, and when the queue is empty it simply ends.

This import procedure works fast for small tables, but with big tables with large indexes the inserts get slower and slower because of the overhead of inserting the new values into the secondary indexes. Another way to import the data (sketched in rough SQL terms after the list below) is:

  1. Split the table structure into table creation with primary key, indexes creation and constraint creation
  2. Create tables with primary key
  3. Per table do:
    1. Load the data
    2. Create index
  4. Create constraints
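In plain SQL terms, the deferred-index strategy boils down to something like the following (a rough sketch; the table, index, and chunk-file names are made up for illustration):

# 1. create the table with only its primary key
mysql test -e "CREATE TABLE t (id INT NOT NULL PRIMARY KEY, a INT, b INT) ENGINE=InnoDB"

# 2. load the data (Myloader would replay the per-chunk dump files here, in parallel)
mysql test < t.00000.sql
mysql test < t.00001.sql

# 3. build the secondary indexes in one pass, taking advantage of Fast Index Creation
mysql test -e "ALTER TABLE t ADD INDEX idx_a (a), ADD INDEX idx_a_b (a, b)"

# 4. finally, add any constraints that were split out of the original CREATE TABLE
mysql test -e "ALTER TABLE t ADD CONSTRAINT fk_a FOREIGN KEY (a) REFERENCES parent (id)"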

This import procedure is implemented in a branch of Myloader that can be downloaded from here, or directly by executing bzr against the repository:

bzr branch lp:~david-ducos/mydumper/mydumper

The tool reads the schema files and splits them into three separate statements which create the table with the primary key, the indexes, and the constraints. The primary key is kept in the table creation in order to avoid recreating the table when a primary key is added, while the "KEY" and "CONSTRAINT" lines are removed. These lines are added to the index and constraint statements, respectively.

It processes tables according to their size, starting with the largest, because creating the indexes of a big table can take hours and is single-threaded. While we cannot build that table's other indexes at the same time, we can use the remaining threads to create other tables.

It has a new thread (monitor_process) that decides which chunk of data will be put in the task queue and a communication queue which is used by the task processes to tell the monitor_process which chunk has been completed.

I ran multiple imports on an AWS m1.xlarge machine with one table, comparing Myloader and this branch, and I found that with large indexes the times were:

As you can see, when you have fewer than 150M rows, importing the data and then creating the indexes takes longer than importing the table with the indexes all at once. But everything changes after 150M rows: importing 200M takes 64 minutes more for Myloader but just 24 minutes for the new branch.

On a table of 200M rows with an integer primary key and 9 integer columns, you will see how the time increases as the indexes get larger:

Where:

2-2-0: two 1-column and two 2-column indexes
2-2-1: two 1-column, two 2-column and one 3-column index
2-3-1: two 1-column, three 2-column and one 3-column index
2-3-2: two 1-column, three 2-column and two 3-column indexes

 

Conclusion

This branch can only import all the tables with this same strategy, but with this new logic in Myloader, a future version could import each table with the best strategy, reducing the restore time considerably.

The post Importing big tables with large indexes with Myloader appeared first on MySQL Performance Blog.

query_cache behaviour in multi-master mode for 5.6.21-70.1-56

Latest Forum Posts - January 19, 2015 - 7:09am
The query_cache is now supported in 5.6, but only if it is enabled after startup.

But I cannot find any reference to the node and cluster behaviour in multi-master mode,
i.e. is the query_cache node-specific or cluster-wide, and is there any cache invalidation across the nodes?

For 5.5 I can find information about the above, but not for 5.6.

Any help would be appreciated, as my app currently depends heavily on the query_cache.

marcel
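For reference, the restriction mentioned at the top of this post (query cache only after startup) is usually handled like this (a hedged sketch; the size value is illustrative):

# my.cnf at startup: cache type enabled but sized to zero
#   [mysqld]
#   query_cache_type = 1
#   query_cache_size = 0

# once the node is up, give the cache some memory
mysql -e "SET GLOBAL query_cache_size = 67108864"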

Looking deeper into InnoDB’s problem with many row versions

Latest MySQL Performance Blog posts - January 19, 2015 - 7:06am

A few days ago I wrote about MySQL performance implications of InnoDB isolation modes and I touched briefly upon the bizarre performance regression I found with InnoDB handling a large number of versions for a single row. Today I wanted to look a bit deeper into the problem, which I also filed as a bug.

First I validated under which conditions the problem happens. It seems to happen only in REPEATABLE-READ isolation mode and only when there are some hot rows which get many row versions during a benchmark run. For example, the problem does NOT happen if I run sysbench with a "uniform" distribution.

In terms of concurrent selects it also seems to require some very special conditions: you need a connection that lets some history accumulate by holding a read snapshot open, and then does it again once the history is high. The exact queries used to do that do not seem to be that important.

Contrary to what I expected this problem also does not require multiple connections – I can repeat it with only one read and one write connection, which means it can happen in a lot more workloads.

Here is the simplified case to repeat it:

sysbench --num-threads=1 --report-interval=10 --max-time=0 --max-requests=0 --rand-type=uniform --oltp-table-size=1 --mysql-user=root --mysql-password= --mysql-db=sbinnodb --test=/usr/share/doc/sysbench/tests/db/update_index.lua run

It is repeatable on a table with just 1 row, where this row is very hot!

Select query sequence:

begin;
select c from sbtest1 limit 1;
select sleep(20);
commit;
begin;
select c from sbtest1 limit 1;
select sleep(300);
commit;

It is interesting, though, that in this case it does not look like it is enough to just open a consistent snapshot: the issue only happens when I run the query on the same table the update workload is running against (and, I assume, touch the same data).

Let’s look at some graphs:

The transaction rate indeed suffers dramatically – in this case going down more than 100x from close to 15K to around 60 in the lowest point.

It is interesting that CPU consumption is reduced but not as much as the throughput.

This is something I find the most enlightening about what seems to happen. The number of buffer pool read requests increases dramatically where the number of transactions goes down. If we do the math we have 940K buffer pool reads per 14.5K updates in the normal case, giving us 65 buffer reads per update, which goes up to 37000 per update when the regression is happening. That is a dramatic change and it points to something going badly wrong.

History size is of interest. We can see it grows quickly at first (this is when we have the first transaction and sleep(20)), then grows very slowly while the second transaction is open, as the update rate is low. Finally it goes to zero very quickly once the transaction is committed. What is worth noting is that the history length here peaks at just 280K, which is really quite a trivial size.

The InnoDB contention graph is interesting from two viewpoints. First, it shows that the contention picture changes at the time the transaction is held open and also right afterward, when the history is being worked off.

If you look at this graph focused only on the os_wait metrics, we can see that mutex_os_waits goes down during the problem while x-lock os waits increase significantly. This means there are some very long exclusive-lock os-waits happening, which is indicative of the long waits. We can't actually get lock times from the status variables; we would need data from performance schema for that, which unfortunately does not cover key mutexes, and which I can't graph easily yet.

Let's look at some more data.

From the great processlist from sys-schema we can get:

*************************** 4. row ***************************
                thd_id: 536
               conn_id: 516
                  user: root@localhost
                    db: sbinnodb
               command: Query
                 state: updating
                  time: 0
     current_statement: UPDATE sbtest1 SET k=k+1 WHERE id=1
          lock_latency: 36.00 us
         rows_examined: 1
             rows_sent: 0
         rows_affected: 0
            tmp_tables: 0
       tmp_disk_tables: 0
             full_scan: NO
        last_statement: NULL
last_statement_latency: NULL
             last_wait: wait/io/table/sql/handler
     last_wait_latency: Still Waiting
                source: handler.cc:7692

This is unfortunately not very helpful, as it does not show the actual lock being waited on; it just says that the wait is somewhere inside a storage engine call, which is obvious.

We can get some good summary information in the other table:

mysql> select * from waits_global_by_latency; +------------------------------------------------------+-----------+---------------+-------------+-------------+ | events | total | total_latency | avg_latency | max_latency | +------------------------------------------------------+-----------+---------------+-------------+-------------+ | wait/synch/cond/sql/Item_func_sleep::cond | 384 | 00:31:27.18 | 4.91 s | 5.00 s | | wait/synch/rwlock/innodb/index_tree_rw_lock | 239460870 | 00:12:13.82 | 3.06 us | 495.27 ms | | wait/synch/mutex/sql/THD::LOCK_thd_data | 601160083 | 36.48 s | 60.41 ns | 643.18 us | | wait/io/table/sql/handler | 1838 | 29.12 s | 15.84 ms | 211.21 ms | | wait/synch/mutex/mysys/THR_LOCK::mutex | 240456674 | 16.57 s | 68.86 ns | 655.16 us | | wait/synch/cond/sql/MDL_context::COND_wait_status | 15 | 14.11 s | 940.37 ms | 999.36 ms | | wait/synch/mutex/innodb/trx_mutex | 360685304 | 10.52 s | 29.11 ns | 4.01 ms | | wait/synch/mutex/innodb/trx_undo_mutex | 120228229 | 3.87 s | 31.93 ns | 426.16 us | | wait/io/file/innodb/innodb_data_file | 399 | 1.46 s | 3.66 ms | 11.21 ms | | wait/synch/mutex/innodb/buf_pool_flush_state_mutex | 32317 | 587.10 ms | 18.17 us | 11.27 ms | | wait/synch/rwlock/innodb/fil_space_latch | 4154178 | 386.96 ms | 92.96 ns | 74.57 us | | wait/io/file/innodb/innodb_log_file | 987 | 271.76 ms | 275.34 us | 14.23 ms | | wait/synch/rwlock/innodb/hash_table_locks | 8509138 | 255.76 ms | 30.05 ns | 53.41 us | | wait/synch/mutex/innodb/srv_threads_mutex | 11938747 | 249.49 ms | 20.66 ns | 15.06 us | | wait/synch/rwlock/innodb/trx_purge_latch | 8488041 | 181.01 ms | 21.28 ns | 23.51 us | | wait/synch/rwlock/innodb/dict_operation_lock | 232 | 100.48 ms | 433.09 us | 61.88 ms | | wait/synch/rwlock/innodb/checkpoint_lock | 10 | 32.34 ms | 3.23 ms | 8.38 ms | | wait/io/socket/sql/client_connection | 2171 | 30.87 ms | 14.22 us | 3.95 ms | | wait/synch/mutex/innodb/log_sys_mutex | 260485 | 6.57 ms | 25.04 ns | 52.00 us | | wait/synch/rwlock/innodb/btr_search_latch | 4982 | 1.71 ms | 343.05 ns | 17.83 us | | wait/io/file/myisam/kfile | 647 | 1.70 ms | 2.63 us | 39.28 us | | wait/synch/rwlock/innodb/dict_table_stats | 46323 | 1.63 ms | 35.06 ns | 6.55 us | | wait/synch/mutex/innodb/os_mutex | 11410 | 904.63 us | 79.19 ns | 26.67 us | | wait/lock/table/sql/handler | 1838 | 794.21 us | 431.94 ns | 9.70 us | | wait/synch/rwlock/sql/MDL_lock::rwlock | 7056 | 672.34 us | 95.15 ns | 53.29 us | | wait/io/file/myisam/dfile | 139 | 518.50 us | 3.73 us | 31.61 us | | wait/synch/mutex/sql/LOCK_global_system_variables | 11692 | 462.58 us | 39.44 ns | 363.39 ns | | wait/synch/mutex/innodb/rseg_mutex | 7238 | 348.25 us | 47.89 ns | 10.38 us | | wait/synch/mutex/innodb/lock_mutex | 5747 | 222.13 us | 38.50 ns | 9.84 us | | wait/synch/mutex/innodb/trx_sys_mutex | 4601 | 187.43 us | 40.69 ns | 326.15 ns | | wait/synch/mutex/sql/LOCK_table_cache | 2173 | 185.14 us | 85.14 ns | 10.02 us | | wait/synch/mutex/innodb/fil_system_mutex | 3481 | 144.60 us | 41.32 ns | 375.91 ns | | wait/synch/rwlock/sql/LOCK_grant | 1217 | 121.86 us | 99.85 ns | 12.07 us | | wait/synch/mutex/innodb/flush_list_mutex | 3191 | 108.82 us | 33.80 ns | 8.81 us | | wait/synch/mutex/innodb/purge_sys_bh_mutex | 3600 | 83.52 us | 23.16 ns | 123.95 ns | | wait/synch/mutex/sql/MDL_map::mutex | 1456 | 75.02 us | 51.33 ns | 8.78 us | | wait/synch/mutex/myisam/MYISAM_SHARE::intern_lock | 1357 | 55.55 us | 40.69 ns | 320.51 ns | | wait/synch/mutex/innodb/log_flush_order_mutex | 204 | 40.84 us | 200.01 ns | 9.42 us | | 
wait/io/file/sql/FRM | 16 | 31.16 us | 1.95 us | 16.40 us | | wait/synch/mutex/mysys/BITMAP::mutex | 378 | 30.52 us | 80.44 ns | 9.16 us | | wait/synch/mutex/innodb/dict_sys_mutex | 463 | 24.15 us | 51.96 ns | 146.17 ns | | wait/synch/mutex/innodb/buf_pool_LRU_list_mutex | 293 | 23.37 us | 79.50 ns | 313.94 ns | | wait/synch/mutex/innodb/buf_dblwr_mutex | 398 | 22.60 us | 56.65 ns | 380.61 ns | | wait/synch/rwlock/myisam/MYISAM_SHARE::key_root_lock | 388 | 14.86 us | 38.19 ns | 254.16 ns | | wait/synch/mutex/sql/LOCK_plugin | 250 | 14.30 us | 56.97 ns | 137.41 ns | | wait/io/socket/sql/server_unix_socket | 2 | 9.35 us | 4.67 us | 4.72 us | | wait/synch/mutex/sql/THD::LOCK_temporary_tables | 216 | 8.97 us | 41.32 ns | 465.74 ns | | wait/synch/mutex/sql/hash_filo::lock | 151 | 8.35 us | 55.09 ns | 150.87 ns | | wait/synch/mutex/mysys/THR_LOCK_open | 196 | 7.84 us | 39.75 ns | 110.80 ns | | wait/synch/mutex/sql/TABLE_SHARE::LOCK_ha_data | 159 | 6.23 us | 39.13 ns | 108.92 ns | | wait/synch/mutex/innodb/lock_wait_mutex | 29 | 5.38 us | 185.30 ns | 240.38 ns | | wait/synch/mutex/innodb/ibuf_mutex | 29 | 5.26 us | 181.23 ns | 249.46 ns | | wait/synch/mutex/sql/LOCK_status | 62 | 4.12 us | 66.36 ns | 127.39 ns | | wait/synch/mutex/sql/LOCK_open | 109 | 4.05 us | 36.93 ns | 105.79 ns | | wait/io/file/sql/dbopt | 3 | 3.34 us | 1.11 us | 1.99 us | | wait/synch/mutex/innodb/buf_pool_free_list_mutex | 30 | 1.58 us | 52.58 ns | 106.73 ns | | wait/synch/mutex/innodb/srv_innodb_monitor_mutex | 31 | 1.38 us | 44.45 ns | 89.52 ns | | wait/synch/mutex/sql/LOCK_thread_count | 33 | 1.10 us | 33.18 ns | 50.71 ns | | wait/synch/mutex/sql/LOCK_prepared_stmt_count | 32 | 1.10 us | 34.12 ns | 67.92 ns | | wait/synch/mutex/innodb/srv_sys_mutex | 42 | 1.07 us | 25.35 ns | 49.14 ns | | wait/synch/mutex/mysys/KEY_CACHE::cache_lock | 24 | 1.02 us | 42.26 ns | 103.60 ns | | wait/synch/mutex/sql/MDL_wait::LOCK_wait_status | 18 | 544.62 ns | 30.05 ns | 42.57 ns | | wait/synch/rwlock/sql/CRYPTO_dynlock_value::lock | 2 | 509.25 ns | 254.47 ns | 287.02 ns | | wait/synch/rwlock/sql/LOCK_dboptions | 6 | 262.92 ns | 43.82 ns | 84.82 ns | | wait/synch/rwlock/sql/MDL_context::LOCK_waiting_for | 4 | 262.92 ns | 65.73 ns | 103.60 ns | | wait/synch/mutex/sql/LOCK_connection_count | 4 | 238.51 ns | 59.47 ns | 123.01 ns | | wait/synch/mutex/sql/LOCK_sql_rand | 2 | 172.15 ns | 86.08 ns | 94.84 ns | | wait/synch/rwlock/sql/LOCK_system_variables_hash | 2 | 141.16 ns | 70.43 ns | 78.25 ns | | wait/synch/rwlock/sql/gtid_commit_rollback | 2 | 95.47 ns | 47.58 ns | 51.65 ns | | wait/synch/mutex/innodb/autoinc_mutex | 2 | 90.14 ns | 45.07 ns | 45.07 ns | | wait/synch/mutex/sql/LOCK_thd_remove | 2 | 85.14 ns | 42.57 ns | 52.58 ns | | wait/synch/mutex/innodb/ibuf_bitmap_mutex | 1 | 76.37 ns | 76.37 ns | 76.37 ns | | wait/synch/mutex/sql/LOCK_xid_cache | 2 | 62.91 ns | 31.30 ns | 34.43 ns | +------------------------------------------------------+-----------+---------------+-------------+-------------+ 73 rows in set (0.00 sec)

This shows a couple of hotspots: the well-known index_tree_rw_lock hotspot, which should be gone in MySQL 5.7, as well as trx_mutex and trx_undo_mutex. It is especially worth noting the almost 500ms maximum wait on this index tree rw_lock (this is data for only a couple of minutes, so it is unlikely to be a very rare outlier). This hints that in the case of many versions our index_tree_rw_lock might be held for a prolonged period of time.

What is interesting is that the reliable “SHOW ENGINE INNODB STATUS” remains quite handy:

---------- SEMAPHORES ---------- OS WAIT ARRAY INFO: reservation count 58847289 --Thread 140386357991168 has waited at btr0cur.cc line 304 for 0.0000 seconds the semaphore: X-lock on RW-latch at 0x7fb0bea2ca40 '&block->lock' a writer (thread id 140386454755072) has reserved it in mode exclusive number of readers 0, waiters flag 1, lock_word: 0 Last time read locked in file buf0flu.cc line 1082 Last time write locked in file /mnt/workspace/percona-server-5.6-debian-binary/label_exp/ubuntu-trusty-64bit/percona-server-5.6-5.6.21-70.1/storage/innobase/row /row0row.cc line 815 --Thread 140386374776576 has waited at btr0cur.cc line 577 for 0.0000 seconds the semaphore: X-lock on RW-latch at 0x7fade8077f98 '&new_index->lock' a writer (thread id 140386357991168) has reserved it in mode exclusive number of readers 0, waiters flag 1, lock_word: 0 Last time read locked in file btr0cur.cc line 586 Last time write locked in file /mnt/workspace/percona-server-5.6-debian-binary/label_exp/ubuntu-trusty-64bit/percona-server-5.6-5.6.21-70.1/storage/innobase/btr /btr0cur.cc line 577 --Thread 140386324420352 has waited at btr0cur.cc line 577 for 0.0000 seconds the semaphore: X-lock on RW-latch at 0x7fade8077f98 '&new_index->lock' a writer (thread id 140386357991168) has reserved it in mode exclusive number of readers 0, waiters flag 1, lock_word: 0 Last time read locked in file btr0cur.cc line 586 Last time write locked in file /mnt/workspace/percona-server-5.6-debian-binary/label_exp/ubuntu-trusty-64bit/percona-server-5.6-5.6.21-70.1/storage/innobase/btr /btr0cur.cc line 577 --Thread 140386332813056 has waited at btr0cur.cc line 577 for 0.0000 seconds the semaphore: X-lock on RW-latch at 0x7fade8077f98 '&new_index->lock' a writer (thread id 140386357991168) has reserved it in mode exclusive number of readers 0, waiters flag 1, lock_word: 0 Last time read locked in file btr0cur.cc line 586 Last time write locked in file /mnt/workspace/percona-server-5.6-debian-binary/label_exp/ubuntu-trusty-64bit/percona-server-5.6-5.6.21-70.1/storage/innobase/btr /btr0cur.cc line 577 --Thread 140386445960960 has waited at btr0cur.cc line 586 for 0.0000 seconds the semaphore: S-lock on RW-latch at 0x7fade8077f98 '&new_index->lock' a writer (thread id 140386357991168) has reserved it in mode exclusive number of readers 0, waiters flag 1, lock_word: 0 Last time read locked in file btr0cur.cc line 586 Last time write locked in file /mnt/workspace/percona-server-5.6-debian-binary/label_exp/ubuntu-trusty-64bit/percona-server-5.6-5.6.21-70.1/storage/innobase/btr /btr0cur.cc line 577 --Thread 140386366383872 has waited at btr0cur.cc line 577 for 0.0000 seconds the semaphore: X-lock on RW-latch at 0x7fade8077f98 '&new_index->lock' a writer (thread id 140386357991168) has reserved it in mode exclusive number of readers 0, waiters flag 1, lock_word: 0 Last time read locked in file btr0cur.cc line 586 Last time write locked in file /mnt/workspace/percona-server-5.6-debian-binary/label_exp/ubuntu-trusty-64bit/percona-server-5.6-5.6.21-70.1/storage/innobase/btr /btr0cur.cc line 577 --Thread 140386341205760 has waited at btr0cur.cc line 577 for 0.0000 seconds the semaphore: X-lock on RW-latch at 0x7fade8077f98 '&new_index->lock' a writer (thread id 140386357991168) has reserved it in mode exclusive number of readers 0, waiters flag 1, lock_word: 0 Last time read locked in file btr0cur.cc line 586 Last time write locked in file 
/mnt/workspace/percona-server-5.6-debian-binary/label_exp/ubuntu-trusty-64bit/percona-server-5.6-5.6.21-70.1/storage/innobase/btr /btr0cur.cc line 577 --Thread 140386349598464 has waited at btr0cur.cc line 577 for 0.0000 seconds the semaphore: X-lock on RW-latch at 0x7fade8077f98 '&new_index->lock' a writer (thread id 140386357991168) has reserved it in mode exclusive number of readers 0, waiters flag 1, lock_word: 0 Last time read locked in file btr0cur.cc line 586 Last time write locked in file /mnt/workspace/percona-server-5.6-debian-binary/label_exp/ubuntu-trusty-64bit/percona-server-5.6-5.6.21-70.1/storage/innobase/btr /btr0cur.cc line 577

Another sample….

---------- SEMAPHORES ---------- OS WAIT ARRAY INFO: reservation count 61753307 --Thread 140386332813056 has waited at row0row.cc line 815 for 0.0000 seconds the semaphore: X-lock on RW-latch at 0x7fb0bea2cd40 '&block->lock' a writer (thread id 140386454755072) has reserved it in mode exclusive number of readers 0, waiters flag 1, lock_word: 0 Last time read locked in file buf0flu.cc line 1082 Last time write locked in file /mnt/workspace/percona-server-5.6-debian-binary/label_exp/ubuntu-trusty-64bit/percona-server-5.6-5.6.21-70.1/storage/innobase/row/row0row.cc line 815 --Thread 140386366383872 has waited at row0row.cc line 815 for 0.0000 seconds the semaphore: X-lock on RW-latch at 0x7fb0bea2cd40 '&block->lock' a writer (thread id 140386454755072) has reserved it in mode exclusive number of readers 0, waiters flag 1, lock_word: 0 Last time read locked in file buf0flu.cc line 1082 Last time write locked in file /mnt/workspace/percona-server-5.6-debian-binary/label_exp/ubuntu-trusty-64bit/percona-server-5.6-5.6.21-70.1/storage/innobase/row/row0row.cc line 815 --Thread 140386341205760 has waited at row0row.cc line 815 for 0.0000 seconds the semaphore: X-lock on RW-latch at 0x7fb0bea2cd40 '&block->lock' a writer (thread id 140386454755072) has reserved it in mode exclusive number of readers 0, waiters flag 1, lock_word: 0 Last time read locked in file buf0flu.cc line 1082 Last time write locked in file /mnt/workspace/percona-server-5.6-debian-binary/label_exp/ubuntu-trusty-64bit/percona-server-5.6-5.6.21-70.1/storage/innobase/row/row0row.cc line 815 --Thread 140386374776576 has waited at row0row.cc line 815 for 0.0000 seconds the semaphore: X-lock on RW-latch at 0x7fb0bea2cd40 '&block->lock' a writer (thread id 140386454755072) has reserved it in mode exclusive number of readers 0, waiters flag 1, lock_word: 0 Last time read locked in file buf0flu.cc line 1082 Last time write locked in file /mnt/workspace/percona-server-5.6-debian-binary/label_exp/ubuntu-trusty-64bit/percona-server-5.6-5.6.21-70.1/storage/innobase/row/row0row.cc line 815 --Thread 140386445960960 has waited at btr0cur.cc line 257 for 0.0000 seconds the semaphore: X-lock (wait_ex) on RW-latch at 0x7fb0bea2f2c0 '&block->lock' a writer (thread id 140386445960960) has reserved it in mode wait exclusive number of readers 1, waiters flag 0, lock_word: ffffffffffffffff Last time read locked in file btr0cur.cc line 257 Last time write locked in file /mnt/workspace/percona-server-5.6-debian-binary/label_exp/ubuntu-trusty-64bit/percona-server-5.6-5.6.21-70.1/storage/innobase/btr/btr0cur.cc line 257 OS WAIT ARRAY INFO: signal count 45260762 Mutex spin waits 1524575466, rounds 5092199292, OS waits 43431355 RW-shared spins 14500456, rounds 258217222, OS waits 3721069 RW-excl spins 37764438, rounds 730059390, OS waits 12592324 Spin rounds per wait: 3.34 mutex, 17.81 RW-shared, 19.33 RW-excl

The first sample mostly shows index lock contention, which we have already seen. The second, though, shows a lot of contention on the block lock, which is not covered by Performance Schema instrumentation in MySQL 5.6.
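You can confirm what Performance Schema does cover by looking at its rwlock wait summary. The following is only a sketch, assuming Performance Schema is enabled with the default 5.6 wait/synch instruments; the index tree lock is instrumented there, while the per-block latch typically is not (it is excluded from instrumentation by default for performance reasons), which is why the block lock contention above stays invisible:

# Sketch: list the busiest InnoDB rwlock instruments Performance Schema knows about
mysql -e "SELECT EVENT_NAME, COUNT_STAR, SUM_TIMER_WAIT
FROM performance_schema.events_waits_summary_global_by_event_name
WHERE EVENT_NAME LIKE 'wait/synch/rwlock/innodb/%'
ORDER BY SUM_TIMER_WAIT DESC LIMIT 10;"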

Now, to tell the truth, this second test case is not your typical case of contention – unlike the case with 64 threads we do not see a drastic CPU drop, and CPU usage still stays above the 25% that a single fully saturated thread would show on this system. This is, however, where DBA-level profiling gets tough with MySQL – as I mentioned, the only indication of excessive activity going on is the high number of buffer pool requests; there is no instrumentation I’m aware of that tells us anything about scanning many old row versions in undo space. Perhaps some more metrics around this subsystem need to be added to INNODB_METRICS?
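In the meantime, the closest thing available is the purge- and history-related counters in INNODB_METRICS. A minimal sketch, assuming a 5.6 or later server (many of these counters are disabled by default, and the exact counter names vary between versions):

# Sketch: enable and read the purge/undo related INNODB_METRICS counters
mysql -e "SET GLOBAL innodb_monitor_enable = 'module_purge';"
mysql -e "SELECT NAME, COUNT FROM information_schema.INNODB_METRICS
WHERE NAME LIKE 'purge%' OR NAME LIKE 'trx_rseg%';"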

A good tool to see CPU consumption, which both developers and DBAs can use, is oprofile, which has low enough overhead to use in production. Note that to get any meaningful information about the CPU usage inside the program you need to have debug symbols installed. For Percona Server 5.6 on Ubuntu you can do apt-get install percona-server-5.6-dbg
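If oprofile itself is not installed yet, the whole setup on Ubuntu 14.04 might look like this (a sketch; the package names assume the stock Ubuntu and Percona apt repositories):

# Sketch: install the profiler plus the mysqld debug symbols
apt-get install oprofile percona-server-5.6-dbg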

To profile just the mysqld process you can do:

root@ts140i:~# operf --pid 8970
operf: Press Ctl-c or 'kill -SIGINT 9113' to stop profiling
operf: Profiler started
^C
Profiling done.
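If you do not know the server’s PID offhand (8970 above is simply the PID on the test machine), something like this works:

# Sketch: find the mysqld process id to pass to operf --pid
pidof mysqld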

To get top symbols (functions) called in mysqld I can do:

root@ts140i:~# opreport -l | more
Using /root/oprofile_data/samples/ for samples directory.
warning: /no-vmlinux could not be found.
warning: [vdso] (tgid:8970 range:0x7fffba5fe000-0x7fffba5fffff) could not be found.
CPU: Intel Haswell microarchitecture, speed 3.201e+06 MHz (estimated)
Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (No unit mask) count 100000
samples  %        image name    symbol name
212162   12.4324  mysqld        buf_page_get_gen(unsigned long, unsigned long, unsigned long, unsigned long, buf_block_t*, unsigned long, char const*, unsigned long, mtr_t*)
204230   11.9676  mysqld        dict_table_copy_types(dtuple_t*, dict_table_t const*)
179783   10.5351  no-vmlinux    /no-vmlinux
144335    8.4579  mysqld        trx_undo_update_rec_get_update(unsigned char*, dict_index_t*, unsigned long, unsigned long, unsigned long, unsigned long, trx_t*, mem_block_info_t*, upd_t**)
95984     5.6245  mysqld        trx_undo_prev_version_build(unsigned char const*, mtr_t*, unsigned char const*, dict_index_t*, unsigned long*, mem_block_info_t*, unsigned char**)
78785     4.6167  mysqld        row_build(unsigned long, dict_index_t const*, unsigned char const*, unsigned long const*, dict_table_t const*, dtuple_t const*, unsigned long const*, row_ext_t**, mem_block_info_t*)
76656     4.4919  libc-2.19.so  _int_free
63040     3.6941  mysqld        rec_get_offsets_func(unsigned char const*, dict_index_t const*, unsigned long*, unsigned long, mem_block_info_t**)
52794     3.0937  mysqld        mtr_commit(mtr_t*)

This points to some hot functions – quite as expected, we see several functions working with undo space here. In some cases it is also helpful to look at the call graph to understand where these functions are being called from, if that is not clear.
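A sketch of how that might look with the operf/opreport pair (call-graph data has to be requested at collection time, and kernel support for it varies):

# Sketch: collect call-graph samples for mysqld, then report them
operf --callgraph --pid $(pidof mysqld)
opreport --callgraph | less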

Now, in many cases it is hard to figure out from an oprofile report what is “wrong” without having a baseline. In this case we can look at the oprofile run for the sysbench workload alone:

root@ts140i:~# opreport -l | more
Using /root/oprofile_data/samples/ for samples directory.
warning: /no-vmlinux could not be found.
warning: [vdso] (tgid:8970 range:0x7fffba5fe000-0x7fffba5fffff) could not be found.
CPU: Intel Haswell microarchitecture, speed 3.201e+06 MHz (estimated)
Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (No unit mask) count 100000
samples  %        image name          symbol name
1331386  28.4544  no-vmlinux          /no-vmlinux
603873   12.9060  mysqld              ut_delay(unsigned long)
158428    3.3859  mysqld              buf_page_get_gen(unsigned long, unsigned long, unsigned long, unsigned long, buf_block_t*, unsigned long, char const*, unsigned long, mtr_t*)
88172     1.8844  mysqld              my_timer_cycles
75948     1.6232  mysqld              srv_worker_thread
74350     1.5890  mysqld              MYSQLparse(THD*)
68075     1.4549  mysqld              mutex_spin_wait(void*, bool, char const*, unsigned long)
59321     1.2678  mysqld              que_run_threads(que_thr_t*)
59152     1.2642  mysqld              start_mutex_wait_v1
57495     1.2288  mysqld              rw_lock_x_lock_func(rw_lock_t*, unsigned long, char const*, unsigned long, bool, bool)
53133     1.1356  mysqld              mtr_commit(mtr_t*)
44404     0.9490  mysqld              end_mutex_wait_v1
40226     0.8597  libpthread-2.19.so  pthread_mutex_lock
38889     0.8311  mysqld              log_block_calc_checksum_innodb(unsigned char const*)

As you can see, this report is very different in terms of the top functions.
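One practical way to keep the two runs separate so they can be compared side by side (a sketch, assuming an oprofile version recent enough to provide the operf interface and its --session-dir option):

# Sketch: store each profiling run in its own session directory, then diff the reports
operf --session-dir=/root/op_baseline --pid $(pidof mysqld)   # during the clean sysbench run
operf --session-dir=/root/op_problem --pid $(pidof mysqld)    # while reproducing the problem
opreport --session-dir=/root/op_baseline -l > baseline.txt
opreport --session-dir=/root/op_problem -l > problem.txt
diff -u baseline.txt problem.txt | less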

Final thoughts: I had two purposes with this blog post. First, I hope it contains some helpful information for developers so this bug can be fixed at least in MySQL 5.7 – it is quite a nasty one, as it can come out of nowhere if the application unexpectedly develops hot rows. Second, I hope it shows some of the methods you can use to capture and analyze data about MySQL performance.

The post Looking deeper into InnoDB’s problem with many row versions appeared first on MySQL Performance Blog.

pmp-check-aws-rds.py Issues

Lastest Forum Posts - January 18, 2015 - 5:14pm
Hey All

I am trying to configure the percona-nagios-plugins as I would like to monitor some RDS instances that I have, but I am running into issues.

I have followed the documentation and I am getting a little frustrated, as I keep getting errors or nothing gets returned.

For example, if I run ./pmp-check-aws-rds.py -l it returns "List of All DB Instnaces []", and if I run ./pmp-check-aws-rds.py -i "Instance-name" -p it returns: No DB instance "Instance-name" found on your AWS account.

I have got the correct AWS keys in the /etc/boto.cfg file and python-boto is installed. Can someone help, please?
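For reference, a minimal /etc/boto.cfg usually only needs the credentials section (the keys below are placeholders). It is also worth double-checking that the instance actually lives in the AWS region boto connects to by default, since an instance in another region can also come back as "not found":

# Sketch: write a minimal boto config with placeholder credentials
cat > /etc/boto.cfg <<'EOF'
[Credentials]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
EOF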
