
Feed aggregator

weird percona problem with custom my.cnf

Latest Forum Posts - 3 hours 6 min ago
Hey,

I've been experiencing this issue for the past couple of months; I got some time today, so I decided to troubleshoot it and have been pulling my hair out since.

I use Atomic Secure Linux (ASL) on my servers, which run high-traffic CMS-based websites for clients. The CMS requires a custom my.cnf to work, but as soon as I set up the custom my.cnf, ASL goes haywire. While trying to isolate the config line causing ASL to go haywire, I ran into a brick wall: once the custom my.cnf has been loaded into MySQL (Percona), going back to the default config doesn't completely flush the custom settings, and I still get the error below.

I get the following error while trying to import the db:

ERROR 1146 (42S02) at line 100: Table 'tortix.alert' doesn't exist

Note: this table does exist and has some 21k rows.

Even when I go back to the default config, it still throws this error. I've gone to the extent of rebooting the server to see if that would make MySQL release the old config.

I've set up a small VPS, installed Percona, and imported the db there without touching the default config, and it went through without any problems.

Any ideas?

Server config: MySQL (Percona) 5.6, CentOS 6.6 64-bit
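One way to isolate the offending directive is to bisect: diff the custom my.cnf against the default and re-test with one extra directive at a time. A minimal sketch of the diff step, with made-up file names and contents (worth noting, hedged: identifier-related settings such as lower_case_table_names are a known cause of ERROR 1146 on import even when the table files exist, so directives like that are worth testing first):

```shell
# Bisection sketch: list the directives that only the custom file sets,
# then re-add them to the default one at a time and re-test.
# File names and contents here are made up for illustration.
cat > /tmp/default.cnf <<'EOF'
[mysqld]
key_buffer = 16M
EOF
cat > /tmp/custom.cnf <<'EOF'
[mysqld]
key_buffer = 16M
lower_case_table_names = 1
EOF
# Lines present only in the custom file are the candidates to test in isolation
diff /tmp/default.cnf /tmp/custom.cnf | grep '^>'
```

Each candidate line can then be added back to a copy of the default config and the import re-run to see which one triggers the error.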

Percona Toolkit 2.2.13 is now available

Percona is pleased to announce the availability of Percona Toolkit 2.2.13.  Released January 26, 2015. Percona Toolkit is a collection of advanced command-line tools to perform a variety of MySQL server and system tasks that are too difficult or complex for DBAs to perform manually. Percona Toolkit, like all Percona software, is free and open source.

This release is the current GA (Generally Available) stable release in the 2.2 series. It includes multiple bug fixes for pt-table-checksum with better support for Percona XtraDB Cluster, various other fixes, as well as continued preparation for MySQL 5.7 compatibility. Full details are below. Downloads are available here and from the Percona Software Repositories.

New Features:

  • pt-kill now supports a new --query-id option. This option prints a query fingerprint hash after killing a query, to enable cross-referencing with pt-query-digest output. It can be used along with the --print option as well.

Bugs Fixed:

  • Fixed bug 1408375: Percona Toolkit was vulnerable to a MITM attack which could allow exfiltration of MySQL configuration information via the --version-check option. This vulnerability was logged as CVE-2015-1027.
  • Fixed bug 1019479: pt-table-checksum now works with ONLY_FULL_GROUP_BY SQL mode.
  • Fixed bug 1394934: running pt-table-checksum in debug mode would cause an error.
  • Fixed bug 1396868: regression introduced in Percona Toolkit 2.2.12 caused pt-online-schema-change not to honor --ask-pass option.
  • Fixed bug 1399789: pt-table-checksum would fail to find Percona XtraDB Cluster nodes when variable wsrep_node_incoming_address was set to AUTO.
  • Fixed bug 1321297: pt-table-checksum was reporting differences on timestamp columns with replication from 5.5 to 5.6 server version, although the data was identical.
  • Fixed bug 1388870: pt-table-checksum was showing differences if the master and slave were in different time zones.
  • Fixed bug 1402668: pt-mysql-summary would exit if Percona XtraDB Cluster was in Donor/Desynced state.
  • Fixed bug 1266869: pt-stalk would fail to start if $HOME environment variable was not set.

Details of the release can be found in the release notes and the 2.2.13 milestone at Launchpad. Bugs can be reported on the Percona Toolkit launchpad bug tracker.

The post Percona Toolkit 2.2.13 is now available appeared first on MySQL Performance Blog.

Parent table of FTS auxiliary table not found.

Latest Forum Posts - 4 hours 6 min ago
I'm seeing some strange behaviour and hope someone can help me. I'm trying to back up a 350GB MySQL 5.6 database, but preparing the backup fails with an error.

This is my backup command:
/usr/bin/innobackupex --user=USER --password=PASS /root/backup_innobackupex/

This is my prepare command:
innobackupex --apply-log --use-memory=24G /root/backup_innobackupex/2015-01-17_21-00-43/

My backup-my.cnf:

[mysqld]
innodb_checksum_algorithm=innodb
innodb_log_checksum_algorithm=innodb
innodb_data_file_path=ibdata1:10M:autoextend
innodb_log_files_in_group=2
innodb_log_file_size=50331648
innodb_page_size=16384
innodb_log_block_size=512
innodb_undo_directory=.
innodb_undo_tablespaces=0


This is the "--apply-log" error:
....skipped some lines...
InnoDB: Apply batch completed
InnoDB: Last MySQL binlog file position 0 425132256, file name mysql-bin.001061
InnoDB: Parent table of FTS auxiliary table DB/FTS_0000000000003f8d_BEING_DELETED not found.
InnoDB: Parent table of FTS auxiliary table DB/FTS_0000000000003f8d_DELETED_CACHE not found.
InnoDB: Parent table of FTS auxiliary table DB/FTS_0000000000003f8d_BEING_DELETED_CACHE not found.
InnoDB: Parent table of FTS auxiliary table DB/FTS_0000000000003f8d_DELETED not found.
InnoDB: Parent table of FTS auxiliary table DB/FTS_0000000000003f8d_CONFIG not found.
InnoDB: 128 rollback segment(s) are active.
InnoDB: Waiting for purge to start
InnoDB: 5.6.21 started; log sequence number 1220893746972
2015-01-21 15:20:42 7fbe91975720 InnoDB: Operating system error number 2 in a file operation.
InnoDB: The error means the system cannot find the path specified.
2015-01-21 15:20:42 7fbe91975720InnoDB: InnoDB: Error: cannot open ./DB/FTS_0000000000003f8d_BEING_DELETED.ibd
. InnoDB: Have you deleted .ibd files under a running mysqld server?

InnoDB: Trying to do i/o to a tablespace which exists without .ibd data file. i/o type 10, space id 16252, page no 0, i/o length 16384 bytes
2015-01-21 15:20:42 7fbe91975720 InnoDB: Error: trying to access tablespace 16252 page no. 0,
InnoDB: but the tablespace does not exist or is just being dropped.
2015-01-21 15:20:42 7fbe91975720 InnoDB: Operating system error number 2 in a file operation.
InnoDB: The error means the system cannot find the path specified.
2015-01-21 15:20:42 7fbe91975720InnoDB: InnoDB: Error: cannot open ./DB/FTS_0000000000003f8d_BEING_DELETED.ibd
. InnoDB: Have you deleted .ibd files under a running mysqld server?
...skipped some lines...
InnoDB: Trying to do i/o to a tablespace which exists without .ibd data file. i/o type 10, space id 16252, page no 0, i/o length 16384 bytes
2015-01-21 15:20:42 7fbe91975720 InnoDB: Error: trying to access tablespace 16252 page no. 0,
InnoDB: but the tablespace does not exist or is just being dropped.
InnoDB: Error: Unable to read tablespace 16252 page no 0 into the buffer pool after 100 attempts
InnoDB: The most probable cause of this error may be that the table has been corrupted.
InnoDB: You can try to fix this problem by using innodb_force_recovery.
InnoDB: Please see reference manual for more details.
InnoDB: Aborting...
2015-01-21 15:20:42 7fbe91975720 InnoDB: Assertion failure in thread 140456463128352 in file buf0buf.cc line 2660
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.6/...-recovery.html
InnoDB: about forcing recovery.
14:20:42 UTC - xtrabackup got signal 6 ;
This could be because you hit a bug or data is corrupted.
This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.

Thread pointer: 0x167deb0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x10000
xtrabackup(my_print_stacktrace+0x2b) [0x8eb44b]
xtrabackup(handle_fatal_signal+0x252) [0x7973d2]
/lib/x86_64-linux-gnu/libpthread.so.0(+0xf0a0) [0x7fbe9155b0a0]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35) [0x7fbe8fb75165]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x180) [0x7fbe8fb783e0]
xtrabackup() [0x67dcac]
xtrabackup(main+0x1d91) [0x5c06c1]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd) [0x7fbe8fb61ead]
xtrabackup() [0x5d162d]

Please report a bug at https://bugs.launchpad.net/percona-xtrabackup
innobackupex: got a fatal error with the following stacktrace: at /usr/bin/innobackupex line 2633
main::apply_log() called at /usr/bin/innobackupex line 1561
innobackupex: Error:
innobackupex: ibbackup failed at /usr/bin/innobackupex line 2633.

----

Before I run apply-log I can see the FTS* files in the backup directory:
-rw-r----- 1 root root 98304 Jan 24 21:08 FTS_0000000000003f8d_BEING_DELETED.ibd
-rw-r----- 1 root root 98304 Jan 24 21:41 FTS_0000000000003f8d_BEING_DELETED_CACHE.ibd
-rw-r----- 1 root root 98304 Jan 24 21:45 FTS_0000000000003f8d_CONFIG.ibd
-rw-r----- 1 root root 98304 Jan 24 21:42 FTS_0000000000003f8d_DELETED.ibd
-rw-r----- 1 root root 98304 Jan 24 21:31 FTS_0000000000003f8d_DELETED_CACHE.ibd

According to these error messages, a parent table is missing. But why, and what can I do about it?
InnoDB: Parent table of FTS auxiliary table DB/FTS_0000000000003f8d_BEING_DELETED not found.
InnoDB: Parent table of FTS auxiliary table DB/FTS_0000000000003f8d_DELETED_CACHE not found.
InnoDB: Parent table of FTS auxiliary table DB/FTS_0000000000003f8d_BEING_DELETED_CACHE not found.
InnoDB: Parent table of FTS auxiliary table DB/FTS_0000000000003f8d_DELETED not found.
InnoDB: Parent table of FTS auxiliary table DB/FTS_0000000000003f8d_CONFIG not found.

The strange behaviour is this: apply-log somehow deletes these 5 FTS files from the backup directory and then complains that they are missing:

InnoDB: Trying to do i/o to a tablespace which exists without .ibd data file. i/o type 10, space id 16252, page no 0, i/o length 16384 bytes
2015-01-21 15:20:42 7fbe91975720 InnoDB: Error: trying to access tablespace 16252 page no. 0,
InnoDB: but the tablespace does not exist or is just being dropped.
2015-01-21 15:20:42 7fbe91975720 InnoDB: Operating system error number 2 in a file operation.
InnoDB: The error means the system cannot find the path specified.
2015-01-21 15:20:42 7fbe91975720InnoDB: InnoDB: Error: cannot open ./DB/FTS_0000000000003f8d_BEING_DELETED.ibd
. InnoDB: Have you deleted .ibd files under a running mysqld server?


When I run apply-log a second time it seems to work. At least, it was filling up my disk with 700GB before I had to abort the apply-log process because of an almost full filesystem on my test server.

Now my questions: What can I do about the missing parent table? Why is apply-log deleting these FTS tables? Will apply-log work when I run it a second time, and how much free space do I need to prepare a 350GB backup (700GB was not enough)?

Any help would be greatly appreciated!
thanks in advance,
markus
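One hedged pointer on the FTS_* file names above: the hex string between FTS_ and the suffix should be the parent table's internal InnoDB table id, so you can map an orphaned auxiliary file back to its parent (or confirm the parent is gone) via INFORMATION_SCHEMA.INNODB_SYS_TABLES on the source server. A small sketch using a throwaway directory (all paths are made up):

```shell
# Create a throwaway directory standing in for the backup dir (path is made up)
DEMO=/tmp/fts-demo
mkdir -p "$DEMO/DB"
touch "$DEMO/DB/FTS_0000000000003f8d_CONFIG.ibd"

# The hex string between FTS_ and the suffix is the parent's InnoDB table id
ls "$DEMO"/DB/FTS_*.ibd | sed 's/.*FTS_\([0-9a-f]*\)_.*/\1/' | sort -u

# Convert it to decimal to look the parent up in INNODB_SYS_TABLES (TABLE_ID column)
echo $((0x3f8d))
```

If no row with that TABLE_ID exists on the source, the auxiliary files are orphans left behind by a dropped FULLTEXT index or table, which would be consistent with the "parent table not found" messages.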

Innodb Mutex Error??

Latest Forum Posts - 5 hours 7 min ago
Hi Guys,

Today I ran into a weird issue on Percona 5.6 after it had been running successfully for the past 6 months. All my SELECT, COMMIT, and INSERT statements were in a hung state, and they kept piling up until the server crashed. I fortunately got the SHOW ENGINE INNODB STATUS output and saw some mutex errors.


OS WAIT ARRAY INFO: reservation count 4487167
--Thread 47584657541440 has waited at dict0boot.ic line 36 for 48.000 seconds the semaphore:
Mutex at 0x2b41067fbae8 '&dict_sys->mutex', lock var 1
waiters flag 1
--Thread 47585826445632 has waited at buf0buf.cc line 2560 for 34.000 seconds the semaphore:
S-lock on RW-latch at 0x2b44b2f23e40 '&block->lock'
a writer (thread id 47584652749120) has reserved it in mode exclusive
number of readers 0, waiters flag 1, lock_word: 0
Last time read locked in file row0sel.cc line 3051
Last time write locked in file /mnt/workspace/percona-server-5.6-redhat-binary/label_exp/centos5-64/rpmbuild/BUILD/percona-server-5.6.21-70.1/storage/innobase/buf/buf0buf.cc line 3728
OS WAIT ARRAY INFO: signal count 5808535
Mutex spin waits 17056534, rounds 52170104, OS waits 839521
RW-shared spins 4714980, rounds 109885979, OS waits 3459657
RW-excl spins 1394826, rounds 17190451, OS waits 162118
Spin rounds per wait: 3.06 mutex, 23.31 RW-shared, 12.32 RW-excl

Please help me understand what went wrong, since neither the MySQL error log nor the system log showed any clue.


Thanks
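When a status dump like the one above is large, the long semaphore waiters can be pulled out with a small filter. A sketch, using two of the sample lines from the output above (the 40-second threshold is arbitrary):

```shell
# Filter the SEMAPHORES section for threads that waited longer than a threshold.
# The two sample lines are copied from the status output in this post.
cat > /tmp/innodb-status.txt <<'EOF'
--Thread 47584657541440 has waited at dict0boot.ic line 36 for 48.000 seconds the semaphore:
--Thread 47585826445632 has waited at buf0buf.cc line 2560 for 34.000 seconds the semaphore:
EOF
# Field 10 is the wait time in seconds, field 6 the source file, field 8 the line
awk '/has waited at/ && $10+0 > 40 {print $2, "waited", $10, "s at", $6, "line", $8}' /tmp/innodb-status.txt
```

Here the thread stuck on dict_sys->mutex for 48 seconds is the one worth investigating first; long dict_sys->mutex waits usually point at DDL or table-open contention.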

MySQL benchmarks on eXFlash DIMMs

In this blog post, we will discuss MySQL performance on eXFlash DIMMs. Earlier we measured the IO performance of these storage devices with sysbench fileio.

Environment

The benchmarking environment was the same as the one we used for the sysbench fileio tests:

CPU: 2x Intel Xeon E5-2690 (hyper threading enabled)
FusionIO driver version: 3.2.6 build 1212
Operating system: CentOS 6.5
Kernel version: 2.6.32-431.el6.x86_64

In this case, we used a separate client machine, connected to this server over 10G Ethernet, to execute sysbench. The client was not the bottleneck. The environment is described in greater detail at the end of the blog post.

Sysbench OLTP write workload

The graph shows throughput for sysbench OLTP. We will examine properties only for the dark areas of this graph, which correspond to the read/write case at high concurrency.

Each table in the following sections has the following columns:

column    explanation
storage   The device that was used for the measurement.
threads   The number of sysbench client threads used in the benchmark.
ro_rw     Read-only or read-write. In the whitepaper you can find detailed information about read-only data as well.
sd        The standard deviation of the metric in question.
mean      The mean of the metric in question.
95thpct   The 95th percentile of the metric in question (the maximum after discarding the highest 5 percent of samples).
max       The maximum of the metric in question.

Sysbench OLTP throughput

storage         threads  ro_rw  sd         mean       95thpct    max
eXFlash DIMM_4  128      rw     714.09605  5996.5105  7172.0725  7674.87
eXFlash DIMM_4  256      rw     470.95410  6162.4271  6673.0205  7467.99
eXFlash DIMM_8  128      rw     195.57857  7140.5038  7493.4780  7723.13
eXFlash DIMM_8  256      rw     173.51373  6498.1460  6736.1710  7490.95
fio             128      rw     588.14282  1855.4304  2280.2780  7179.95
fio             256      rw     599.88510  2187.5271  2584.1995  7467.13

Going from 4 to 8 eXFlash DIMMs mostly means more consistent throughput. The mean throughput is significantly higher with 8 DIMMs, but the 95th percentile and maximum values are not much different (the difference in standard deviation also shows this). The reason they are not much different is that these benchmarks are CPU bound (check the CPU idle table later in this post or the graphs in the whitepaper). The PCI-E flash drive, on the other hand, can do less than half the throughput of the eXFlash DIMMs (the most relevant comparison is the 95th percentile value).

Sysbench OLTP response time

storage         threads  ro_rw  sd          mean        95thpct   max
eXFlash DIMM_4  128      rw     4.4187784   37.931489   44.2600   64.54
eXFlash DIMM_4  256      rw     9.6642741   90.789317   109.0450  176.45
eXFlash DIMM_8  128      rw     2.1004085   28.796017   32.1600   67.10
eXFlash DIMM_8  256      rw     5.5932572   94.060628   101.6300  121.92
fio             128      rw     51.2343587  138.052150  203.1160  766.11
fio             256      rw     72.9901355  304.851844  392.7660  862.00

The 95th percentile response times in the eXFlash DIMM cases are less than a quarter of those for the PCI-E flash device.

CPU idle percentage

storage         threads  ro_rw  sd          mean        95thpct  max
eXFlash DIMM_4  128      rw     1.62846674  3.3683857   6.2600   22.18
eXFlash DIMM_4  256      rw     1.06980095  2.2930634   3.9170   26.37
eXFlash DIMM_8  128      rw     0.42987637  0.8553543   1.2900   15.28
eXFlash DIMM_8  256      rw     1.32328435  4.4861795   6.7100   9.40
fio             128      rw     4.21156996  26.1278994  31.5020  55.49
fio             256      rw     5.49489852  19.3123639  27.6715  47.34

The percentage of CPU being idle shows that the performance bottleneck in this benchmark was the CPU in the case of the eXFlash DIMMs (with both 4 and 8 DIMMs; this is why we didn’t see a substantial throughput difference between the 4 and 8 DIMM setups). However, for the PCI-E flash, the storage device itself was the bottleneck.

If you are interested in more details, download the free white paper which contains the full analysis of sysbench OLTP and linkbench benchmarks.

The post MySQL benchmarks on eXFlash DIMMs appeared first on MySQL Performance Blog.

Restore fails on fresh Ubuntu/Debian-installation, "mysqld"-group missing

Latest Forum Posts - 9 hours 28 min ago
Admins - Please delete this thread.

Sorry about the double post; the forum software appears to be buggy (error message "An unexpected error was returned: 'Subscriptions'", but there was no indication that the post went through successfully).

Can't find an option to delete the post as OP either?!

Restore fails on fresh Ubuntu/Debian-installation - missing "mysqld"-group

Latest Forum Posts - 9 hours 29 min ago
While testing XtraBackup, I didn't manage to get it to restore on Debian 7.8 "wheezy" or Ubuntu 14.04 "Trusty".

It complains about a missing "mysqld" group in my config file, even though the config file does contain a [mysqld] group.

innobackupex --apply-log --defaults-file=/etc/mysql/my.conf --defaults-group=mysqld --ibbackup=xtrabackup_51 .

InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
and Percona LLC and/or its affiliates 2009-2013.  All Rights Reserved.
This software is published under the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.
Get the latest version of Percona XtraBackup, documentation, and help resources:
http://www.percona.com/xb/p

150126 11:42:19 innobackupex: Starting the apply-log operation

IMPORTANT: Please check that the apply-log run completes successfully.
           At the end of a successful apply-log run innobackupex
           prints "completed OK!".

sh: 1: xtrabackup_51: not found
innobackupex: got a fatal error with the following stacktrace: at /usr/bin/innobackupex line 4482.
	main::get_option('innodb_data_file_path') called at /usr/bin/innobackupex line 2615
	main::apply_log() called at /usr/bin/innobackupex line 1561
innobackupex: Error: no 'mysqld' group in server configuration file '/etc/mysql/my.conf' at /usr/bin/innobackupex line 4482.

Here's /etc/mysql/my.conf (which is an untouched standard Debian/Ubuntu installation from the official repo):

[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock

[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0

[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
bind-address = 127.0.0.1
key_buffer = 16M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover = BACKUP
query_cache_limit = 1M
query_cache_size = 16M
log_error = /var/log/mysql/error.log
expire_logs_days = 10
max_binlog_size = 100M

[mysqldump]
quick
quote-names
max_allowed_packet = 16M

[mysql]

[isamchk]
key_buffer = 16M

!includedir /etc/mysql/conf.d/
Since I couldn't find this error online, I hope someone in this forum can help me.
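Two quick sanity checks seem worth running here (hedged observations, not a confirmed diagnosis): the error references /etc/mysql/my.conf while the stock Debian/Ubuntu path is /etc/mysql/my.cnf, so a path typo is worth ruling out; and you can confirm mechanically that the exact file passed to --defaults-file contains a [mysqld] section. A sketch with an illustrative stand-in file:

```shell
# Write a stand-in option file (path and contents are illustrative only)
cat > /tmp/my-check.cnf <<'EOF'
[client]
port = 3306

[mysqld]
datadir = /var/lib/mysql
EOF
# A [mysqld] line must be present for innobackupex's --defaults-group=mysqld to match
grep -n '^\[mysqld\]' /tmp/my-check.cnf
```

Note also the "sh: 1: xtrabackup_51: not found" line in the output: the binary named by --ibbackup is not on the PATH, which is a separate problem from the missing group.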

Harbaugh’s stance might have fueled the reported coverup

Latest Forum Posts - 10 hours 46 min ago
As we wait (and wait . . . and wait) for the Ravens to handle the alleged errors, inaccuracies, false assumptions and, maybe, misunderstandings in the ESPN report alleging that Ravens director of safety Darren Sanders knew the contents of the notorious elevator video in February and Ravens president Dick Cass knew in early April Martavis Bryant Jersey, coach John Harbaugh has addressed the report that he lobbied for the group to reduce Rice in February.
“Every single football decision we make, we perform collectively,” Harbaugh advised reporters following Sundays win at Cleveland, as Josh Alper has pointed out. “Just like each football decision. You get together, you hash it out. [G.M.] Ozzie [Newsome] employs the term scrimmaging. You scrimmage it out, everybody’s received their opinions. It’s not black and white.”
Asked by Peter King of TheMMQB.com regardless of whether Harbaugh desired to reduce Rice in February, Harbaugh didn’t offer an unequivocal no.
“That is such an unfair characterization,” Harbaugh explained. “It is not fair to the organization. We said all along that the information would establish the consequences, and that was my stance from the start of this.”
Reading through people comments with each other in light of the ESPN report https://www.steelersboutique.com/110...eCastro_Jersey, its a honest characterization to say that Harbaugh at least raised the likelihood of cutting Rice in February https://www.steelersboutique.com/457..._Foster_Jersey, and that Newsomes scrimmaging method resulted in a consensus that the Ravens would preserve Rice but that ultimately the information would establish the consequences.
The details, once they ultimately came to light through TMZ, determined the ultimate consequence for Rice. If Harbaugh indeed raised for the duration of the scrimmaging approach that the group must lower Rice, its affordable to feel that Harbaughs agreement to maintain Rice hinged on the facts displaying that Rice didnt punch his then-fiancée.
As soon as the information showed he did, end of story.
So if, as the ESPN report contends, Sanders, Cass, and possibly other people in the organization knew the true contents of the elevator video ahead of the elevator video came out https://www.steelersboutique.com/434...itchell_Jersey, probably they concealed the reality not only to safe a quick suspension from the league office, but also to maintain Harbaugh from winning the internal scrimmage as to whether or not a player who had been paid $25 million among July 2012 and December 2013 need to be dumped from the roster.
Either way, ESPNs contention that the Ravens knew the contents of the video long just before seeing it has not but been rebutted by Sanders or Cass. The only man or woman who has spoken is Harbaugh, whose remarks really support show why a coverup happened, if a coverup in truth did occur.

NFL threatens Marshawn Lynch with six-figure fine

Latest Forum Posts - 10 hours 47 min ago
Jets coach Rex Ryan was fined $100,000 last weekend for utilizing profanity. Seahawks running back Marshawn Lynch faces an identical punishment for not employing profanity. Or any other words.
Per a league supply, the NFL has threatened to fine Lynch $100,000 if he fails to speak to the media right after Sundays game towards the Chiefs https://www.steelersboutique.com/131..._Dawson_Jersey.
The sum comes from the $50,000 fine that was imposed, then suspended, last year for failing to speak to the media. When the NFL lifted the fine, it also warned Lynch that future failure to cooperate would result in reinstatement of the fine, plus another $50,000.
Although the necessity that players talk to the media is aimed at allowing the media to serve as the conduit to the fans, supporters usually side with players who are fined for refusing to communicate to the media https://www.steelersboutique.com/89-..._Carter_Jersey. Thats very likely what will take place in this case, too, specially because the NFL has spent significantly of the season unable to get out of its own way on matters this kind of as the Ray Rice and Adrian Peterson situations Dermontti Dawson Jersey.
If an individual is going to be fined $100,000 https://www.steelersboutique.com/310...eenwood_Jersey for bungling the Rice investigation or reneging on the agreement to reinstate Peterson, then go ahead and fine Lynch for not talking to the media. If the door isnt going to swing both ways, then let Lynch keep silent if he chooses to do so.

Darrelle Revis: 10 Motives the Jets Should Get a Deal Accomplished This Week

Latest Forum Posts - January 25, 2015 - 11:06pm
Based on who you feel, Darrelle Revis could be back in uniform significantly sooner than originally anticipated.Tim Cowlishaw of the Dallas Morning Information is reporting on Twitter that Revis is expected to signal a deal sometime this week Akeem Ayers Jersey. ESPN's Adam Schefter has refuted the report and explained his sources have not indicated the talks have progressed to a level where a deal looks imminent.Despite the conflicting reviews Monday, there is no question that both sides would be wise to get a deal wrapped up this week. Here's a look at ten reasons why Darrelle Revis and the Jets want to reconcile their variations.

New England Patriots 2014 NFL Draft Reality or Fiction

Latest Forum Posts - January 25, 2015 - 11:05pm
The structure of the NFL offseason has left supporters in a bit of a holding pattern at the moment Vince Wilfork Jersey. However totally free-agent bargains are even now offered, the bulk of the money in that department has previously been invested. And nevertheless, with about a month to go before the draft, it's still a bit early to make any definitive conclusions about a team's draft plans.Of program, that will not halt the frequently fruitless draft speculation train. Now is the time period when myriad rumors make likely connections, fostering a exciting (but also exhausting) workout about nearly every single conceivable draft chance for every team Jonathan Casillas Jersey.The New England Patriots are as tight-lipped an organization as any, but even the steel curtain shrouding Foxboro sometimes springs a leak. Consequently, Pats supporters at least have an concept of who their staff may well pick on draft day, even if New England is as likely as any crew to throw a curveball.Not all rumors are real of course Brandon Bolden Jersey, but it's also real that we need to not readily dismiss all individuals whispers. With the caveat that prospect evaluations are not ultimate at this stage, here is one viewpoint on the validity of some recent Patriots' draft rumors.

Error 1193: Unknown system variable 'port' on testing agent's MySQL connection

Latest Forum Posts - January 24, 2015 - 5:05am
I get this error every time I try to add a MySQL server to my dashboard:
Error 1193: Unknown system variable 'port'
I've found nothing useful to fix it in this forum or on other websites,
so any help troubleshooting this issue is highly appreciated.

Kind Regards


Using Percona Cloud Tools to solve real-world MySQL problems

Latest MySQL Performance Blog posts - January 23, 2015 - 11:49am

For months, when speaking with customers, I have been positioning Percona Cloud Tools (PCT) as a valuable tool for the DBA/Developer/SysAdmin, but only recently have I truly been able to harness the data and make a technical recommendation to a customer that I feel would have been very difficult to accomplish otherwise.

Let me provide some background: I was tasked with performing a Performance Audit for one of our customers (Performance Audits are extremely popular as they allow you to have a MySQL Expert confirm or reveal challenges within your MySQL environment and make your database run faster!) and as part of our conversation we discussed and agreed to install Percona Cloud Tools. We let the site run for a few days, and then I started my audit. What I noticed was that at regular intervals there was often a CPU spike, along with a corresponding drop in Queries Per Second (QPS), but that lasted only for a few seconds. We decided that further investigation was warranted as the customer was concerned the spikes impacted their users’ experience with the application.

Here are the tasks that Percona Cloud Tools made easy while I worked to identify the source of the CPU spike and QPS drop:

  1. PCT's per-second granularity data capture allowed me to identify how significant the spike and QPS drop actually were – if I had been looking at 1-minute or higher average values (such as Cacti would provide), I probably wouldn’t have been able to detect the spike or stall as clearly in the first place; it would have been lost in the average. PCT's current graphs group at the 1-minute range, but you can view the min and max values within each minute, since they are the true highest and lowest observed 1s intervals during that minute.
  2. Ability for all graphs to maintain the same resolution time allowed me to zero-in on the problematic time period and then quickly look across all graphs for corresponding deflections. This analysis led me to discover a significant spike in InnoDB disk reads.
  3. Ability to use the Query Analytics functionality to zoom in again on the problematic query. Adjusting Query Analytics to the appropriate time period narrowed down the range of unique queries that could be considered the cause. This task, in my opinion, is the best part of using PCT.
  4. Query Analytics allowed me to view the Rows Examined in Total for each query based on just this shortened interval. I then tagged those that had higher than 10k Rows Examined (arbitrary but most queries for this customer seemed to fall below this) so that I could then review in real-time with the customer before making a decision on what to do next. We can only view this sort of information by leveraging the slow query log – this data is not available via Performance_Schema or via network sniffing.

Once we were able to identify the problematic queries then the rest was routine query optimization – 10 minutes work using Percona Cloud Tools for what might have been an hour using traditional methods!

For those of you wondering how else this can be done: assuming you detected the CPU spike / QPS drop (perhaps you are using Graphite or another tool that can deliver per-second resolution), you'd also need to be capturing the slow query log at a fine enough resolution (I prefer long_query_time=0 to just get it all), and then be adept at leveraging pt-query-digest with the --since and --until options to narrow down your range of queries. The significant drawback to this approach is that each time you want to tweak your time range you probably need to stream through a fairly large slow log file multiple times, which can be both CPU- and disk-intensive, so it can take some time (probably minutes, maybe hours) depending on the size of your log file. Certainly a workable approach, but nowhere near as quick as reloading a page in your browser.
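For reference, the capture side of the slow-log approach described above amounts to a few lines of my.cnf (the log file path here is illustrative):

```ini
[mysqld]
# Log every statement; long_query_time = 0 captures them all
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 0
```

pt-query-digest can then be pointed at the file with --since and --until to isolate the window of interest. Be aware that long_query_time=0 logs every statement, so the file grows quickly on a busy server.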

So what are you waiting for? Start using Percona Cloud Tools today, it’s free! Register for the free beta here.

The post Using Percona Cloud Tools to solve real-world MySQL problems appeared first on MySQL Performance Blog.

Full graceful restart without bootstrap, pc.recovery=true hangs: no gvwstate.dat!?

Latest Forum Posts - January 23, 2015 - 3:23am
Hi,

I tried to do a full cluster graceful restart without bootstrap, and relying on the pc.recovery=true feature + gvwstate.dat.

"This feature can be used for automatic recovery from full cluster crashes, such as in the case of a data center power outage and graceful full cluster restarts without the need for explicitly bootstrapping a new Primary Component."


However, gvwstate.dat was deleted on graceful shutdown, so all the daemons hung on reboot.

systemctl stop mysql
ls -l /var/lib/mysql/gv*
No such file or directory


How is full graceful restart supposed to work?

Version: Percona-XtraDB-Cluster-56-5.6.21-25.8.940.el7.x86_64
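As a sketch of the decision an operator faces on a full restart (the real datadir would normally be /var/lib/mysql; a throwaway directory stands in for it here): when no gvwstate.dat survived the shutdown, pc.recovery=true has nothing to restore, and the first node has to be bootstrapped explicitly.

```shell
# Stand-in datadir so the check can be demonstrated anywhere
DATADIR=$(mktemp -d)

if [ -s "$DATADIR/gvwstate.dat" ]; then
  echo "saved primary-component state found: pc.recovery=true can restore it"
else
  echo "no gvwstate.dat: bootstrap the first node explicitly"
fi
```

This is only a decision sketch, not a recovery procedure; whether a gracefully stopped node is supposed to retain gvwstate.dat in this version is exactly the question the post raises.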

Oracle MySQL Security Patches January 2015

Latest Forum Posts - January 22, 2015 - 6:57am
Hi,

I'm new to the forum, but in light of the new Oracle MySQL security patches for January 2015 (for MySQL 5.5.40 and earlier and 5.6.21 and earlier), I wanted to know how Percona will incorporate these patches, or whether the current Percona servers will need to be patched.

The reason I'm asking is that we have a host of Percona servers (across different versions) and want to know how these patches will be handled. Will a new release be brought out, or do we need to apply these patches to our servers ourselves?

Any help will be greatly appreciated.

Many Thanks,
Sunny

Nodes terminated when adding a new one

Lastest Forum Posts - January 22, 2015 - 12:59am
Hello!

I have a cluster of 2 nodes (master-master). One of them is also the master for a slave.
When I tried to add a new node to the cluster, both the donor node and the joining node went down. On the donor node, innobackup.backup.log shows the following:
150121 14:49:11 innobackupex: Finished backing up non-InnoDB tables and files

150121 14:49:11 innobackupex: Executing LOCK BINLOG FOR BACKUP...
DBD::mysql::db do failed: Deadlock found when trying to get lock; try restarting transaction at /usr//bin/innobackupex line 3036.
innobackupex: got a fatal error with the following stacktrace: at /usr//bin/innobackupex line 3039
main::mysql_query('HASH(0x10a9720)', 'LOCK BINLOG FOR BACKUP') called at /usr//bin/innobackupex line 3501
main::mysql_lock_binlog('HASH(0x10a9720)') called at /usr//bin/innobackupex line 2000
main::backup() called at /usr//bin/innobackupex line 1592
innobackupex: Error:
Error executing 'LOCK BINLOG FOR BACKUP': DBD::mysql::db do failed: Deadlock found when trying to get lock; try restarting transaction at /usr//bin/innobackupex line 3036.
150121 14:49:11 innobackupex: Waiting for ibbackup (pid=44712) to finish


Versions:
mysqld Ver 5.6.21-70.1-56 for Linux on x86_64 (Percona XtraDB Cluster (GPL), Release rel70.1, Revision 938, WSREP version 25.8, wsrep_25.8.r4150)

xtrabackup version 2.2.8 based on MySQL server 5.6.22 Linux (x86_64) (revision id: )



I tried to set up the new node again and got the same error.

CPU Usage

Lastest Forum Posts - January 21, 2015 - 3:52pm
I'm looking at the graphs for CPU usage and I'm getting values with 'u' and 'm'. Can you explain to me what these mean?

Also, I'm comparing the "Daily (5 min average)" graph with the other graphs. The values for the "Daily (5 min average)" graph on the 'y' axis exceed 500. However, the values for all the other graphs only reach 400, reflecting an increase of 100 with each added virtual CPU (it's a four-core processor).

Below are examples of the Daily and Weekly graphs. Thanks.

innobackupex Incremental backup fails on LOCK BINLOG FOR BACKUP...

Lastest Forum Posts - January 21, 2015 - 11:36am
I was executing an incremental backup on my database and it failed with the following trace. Any clues as to what is wrong?

Backup Version is: InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
and Percona LLC and/or its affiliates 2009-2013. All Rights Reserved.

This software is published under
the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.

Get the latest version of Percona XtraBackup, documentation, and help resources:
http://www.percona.com/xb/p

Server Version is: Server version: 5.6.21-70.0 Percona Server (GPL), Release 70.0, Revision 688

Command is: innobackupex --compress --incremental /usb/ariel/innobackupex/mysql --incremental-basedir=/usb/ariel/innobackupex/mysql/2015-01-14_05-51-30

Log Trace is:
innobackupex: Backing up files '/srv/mysql//annotation_service/*.{frm,isl,MYD,MYI,MAD,MAI,MRG,TRG,TRN,ARM,ARZ,CSM,CSV,opt,par}' (10 files)
innobackupex: Backing up file '/srv/mysql//entrez/db.opt'
innobackupex: Backing up file '/srv/mysql//entrez/gene.frm'
>> log scanned up to (8048683632701)
innobackupex: Backing up file '/srv/mysql//entrez/synonym.frm'
150121 10:38:01 innobackupex: Finished backing up non-InnoDB tables and files

150121 10:38:01 innobackupex: Executing LOCK BINLOG FOR BACKUP...
DBD::mysql::db do failed: MySQL server has gone away at /usr/bin/innobackupex line 3036.
innobackupex: got a fatal error with the following stacktrace: at /usr/bin/innobackupex line 3039
main::mysql_query('HASH(0x1df0d20)', 'LOCK BINLOG FOR BACKUP') called at /usr/bin/innobackupex line 3501
main::mysql_lock_binlog('HASH(0x1df0d20)') called at /usr/bin/innobackupex line 2000
main::backup() called at /usr/bin/innobackupex line 1592
innobackupex: Error:
Error executing 'LOCK BINLOG FOR BACKUP': DBD::mysql::db do failed: MySQL server has gone away at /usr/bin/innobackupex line 3036.
150121 10:38:01 innobackupex: Waiting for ibbackup (pid=4074) to finish


Encrypted and incremental backups.

Lastest Forum Posts - January 21, 2015 - 1:50am
Hello,


I'm currently writing backup and restoration scripts based on innobackupex. I need the backups to be both encrypted and incremental.

Therefore, in order to enable incremental backups on top of an encrypted base directory, I'm using the '--extra-lsndir' option to save an alternative cleartext 'xtrabackup_checkpoints' file. I think it is a good solution (it comes from this blog post).

My question is: can I safely set the '--extra-lsndir' value to the same value as the backupDir, and then delete xtrabackup_checkpoints.xbcrypt?
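
For context, this is roughly the command shape being discussed; it is a sketch, not an authoritative answer. All paths and the key file are hypothetical placeholders, and the script only prints the two commands rather than running them:

```shell
#!/bin/sh
# Sketch of an encrypted full + incremental backup pair using --extra-lsndir,
# so a cleartext xtrabackup_checkpoints stays readable for the next increment.
# BACKUP_DIR, LSN_DIR, and KEY_FILE are made-up placeholder paths.
BACKUP_DIR="/backups/mysql"
LSN_DIR="/backups/mysql/lsn"         # cleartext checkpoints live here
KEY_FILE="/etc/mysql/backup.key"

FULL="innobackupex --encrypt=AES256 --encrypt-key-file=$KEY_FILE \
  --extra-lsndir=$LSN_DIR $BACKUP_DIR/full"
INCR="innobackupex --encrypt=AES256 --encrypt-key-file=$KEY_FILE \
  --incremental --incremental-basedir=$LSN_DIR \
  --extra-lsndir=$LSN_DIR $BACKUP_DIR/incr"

# Print for review instead of executing.
echo "$FULL"
echo "$INCR"
```

The idea is that --incremental-basedir only needs the directory holding the previous backup's xtrabackup_checkpoints, which is exactly what --extra-lsndir keeps in cleartext.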

]]>