Feed aggregator

MySQL QA Episode 5: Preparing Your QA Run with pquery

Latest MySQL Performance Blog posts - July 13, 2015 - 12:00am

Welcome to MySQL QA Episode #5! In this episode we’ll be setting up and running pquery for the first time, and I’ll trigger some actual bugs along the way (fun guaranteed)! I’ll also introduce you to mtr_to_sql.sh and pquery-run.sh.

pquery-run.sh (the main wrapper around pquery) is capable of generating 80-120 MySQL Server crashes – per hour! See how it all works in this episode…

Full-screen viewing @ 720p resolution recommended

The post MySQL QA Episode 5: Preparing Your QA Run with pquery appeared first on MySQL Performance Blog.

How to create a rock-solid MySQL database backup & recovery strategy

Latest MySQL Performance Blog posts - July 10, 2015 - 8:05am

Have you ever wondered what could happen if your MySQL database goes down?

Although it’s evident such a crash will cause downtime – and surely some business impact in terms of revenue – can you do something to reduce this impact?

The simple answer is “yes” by doing regular backups (of course) but are you 100% sure that your current backup strategy will really come through when an outage occurs? And how much precious time will pass (and how much revenue will be lost) before you get your business back online?

I usually think of backups as the step after HA fails. Let’s say we’re in M<>M (master-master) replication and something occurs that kills the database, but HA can’t save the day. Let’s pretend the UPS fails and those servers are completely out. You can’t fail over; you have to restore data. Backups are a key piece of “Business Continuity.” Also factor in the frequent need to restore data that’s been altered by mistake: a DELETE with no ‘WHERE’ clause, or a DROP TABLE run in prod instead of DEV. These instances are where backups are invaluable.

Let’s take some time to discuss the possible backup strategies with MySQL: how to make backups efficiently, and also examine the different tools that are available. We’ll cover these topics and more during my July 15 webinar: “Creating a Best-in-Class Backup and Recovery System for Your MySQL Environment” starting at 10 a.m. Pacific time.

On a related note, did you know that most online backups are possible with mysqldump and you can save some space on backups by using simple Linux tools? I’ll also cover this so be sure to join me next Wednesday. Oh, and it’s a free webinar, too!
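To make that concrete, here is a minimal sketch of the idea (the database name mydb and the /backups path are placeholders): --single-transaction keeps the dump online for InnoDB tables, and piping through gzip means the uncompressed copy never touches disk.

$ mysqldump --single-transaction mydb | gzip > /backups/mydb.sql.gz
$ gunzip < /backups/mydb.sql.gz | mysql mydb   # restore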

Stay tuned!

The post How to create a rock-solid MySQL database backup & recovery strategy appeared first on MySQL Performance Blog.

Is it advisable to take full backup from slave after switchover?

Latest Forum Posts - July 10, 2015 - 4:08am
Hello

We have two MySQL servers, 1 as master and 2 as slave (using GTID replication, MySQL 5.6.21).
Currently we take a full backup from the master server with MySQL Enterprise Backup.
We will be performing an activity to make the slave the master and the master the slave.

I wanted to know: is there any harm if we take the backup from the newly made slave (which was the master earlier) after the switchover?
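Before trusting such a backup, one quick sanity check (a sketch; it just verifies the new slave is caught up and replicating cleanly) would be:

$ mysql -e "SHOW SLAVE STATUS\G" | grep -E "Slave_(IO|SQL)_Running|Seconds_Behind_Master"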

mysqldump: unknown option '--END OF FILE

Latest Forum Posts - July 10, 2015 - 2:12am
While taking a dump of a database with mysqldump, I get the error below:

mysqldump -u root -p db_test > /home/test.sql

mysqldump: unknown option '--END OF FILE

Could anyone please help?
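One hedged guess: the stray '--END OF FILE' text points at a garbage line in one of the option files mysqldump reads. Two quick checks (the option-file paths below are the usual defaults; adjust for your setup):

$ mysqldump --print-defaults
$ grep -n "END OF FILE" /etc/my.cnf /etc/mysql/my.cnf ~/.my.cnf 2>/dev/null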

PXC 5.6.24 - BF applier failed to open_and_lock_tables

Latest Forum Posts - July 9, 2015 - 10:29am
I am running a 3 node cluster of PXC and I keep getting random crashes on all 3 nodes.

Setup:
Ubuntu 14.04.2 LTS
- 60 GB RAM
- SSD RAID 10 (130GB)
PXC 5.6.24-72.2-56-log - Percona XtraDB Cluster (GPL), Release rel72.2, Revision 43abf03, WSREP version 25.11, wsrep_25.11
>>
[mysqld]

# GENERAL #
bind-address = 0.0.0.0
character-set-server = utf8
collation-server = utf8_general_ci
default_storage_engine = InnoDB
event-scheduler = ON
pid-file = /var/run/mysqld/mysqld.pid
port = 3306
server-id = 1
socket = /var/run/mysqld/mysqld.sock
user = mysql

# MyISAM #
key-buffer-size = 32M
myisam-recover-options = FORCE,BACKUP

# SAFETY #
innodb = FORCE
innodb-strict-mode = 1
max-allowed-packet = 64M
max-connect-errors = 1000000
skip-external-locking
skip-host-cache
skip-name-resolve
sql-mode = STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_AUTO_VALUE_ON_ZERO,NO_ENGINE_SUBSTITUTION
sysdate-is-now = 1

# DATA STORAGE #
datadir = /var/lib/mysql

# BINARY LOGGING #
expire-logs-days = 14
log-bin = /var/lib/mysql/mysql-bin
log-slave-updates
sync-binlog = 1

# CACHES AND LIMITS #
back-log = 1000
connect-timeout = 20
interactive-timeout = 30
join-buffer-size = 8M
max-binlog-size = 100M
max-connections = 2000
max-heap-table-size = 32M
open-files-limit = 65535
preload-buffer-size = 65536
query-cache-size = 0
query-cache-type = 0
sort-buffer-size = 2M
read-buffer-size = 4M
read-rnd-buffer-size = 4M
table-definition-cache = 4096
table-open-cache = 5000
thread-cache-size = 100
thread-stack = 256K
tmp-table-size = 32M
wait-timeout = 30

# INNODB #
innodb-buffer-pool-instances = 8
innodb-buffer-pool-size = 40G
innodb-file-per-table = 1
innodb-flush-log-at-trx-commit = 1
innodb-flush-method = O_DIRECT
innodb-lock-wait-timeout = 15
innodb-log-files-in-group = 2
innodb-log-file-size = 512M

# LOGGING #
log-error = /var/log/mysql/mysql-error.log
log-queries-not-using-indexes = 0
slow-query-log = 0

# WSREP #
wsrep_provider = /usr/lib/galera3/libgalera_smm.so
wsrep_cluster_address = gcomm://<redacted>,<redacted>,<redacted>
binlog_format = ROW
innodb_autoinc_lock_mode = 2
wsrep_node_address = <redacted>
wsrep_node_name = "db01"
wsrep_sst_method = xtrabackup-v2
wsrep_cluster_name = <redacted>
wsrep_sst_auth = "<redacted>"
wsrep_slave_threads = 8
wsrep_notify_cmd = /etc/mysql/wsrep_notify
<<

I was getting crashes every 1-3 days on all 3 nodes until the release of PXC 5.6.24; now I get them about once a week. At first I thought it was a specific cron job, because the crash timestamps had similar minutes, so I looked at the job, but I couldn't replicate the error manually. Then the crash timestamps started to differ, so I haven't been able to find any pattern. When the nodes crash, I get the same error across the board:

2015-07-09 00:44:36 19638 [Warning] WSREP: BF applier failed to open_and_lock_tables: 1615, fatal: 0 wsrep = (exec_mode: 1 conflict_state: 5 seqno: 46677454)
2015-07-09 00:44:36 19638 [Warning] WSREP: RBR event 3 Write_rows apply warning: 1615, 46677454
2015-07-09 00:44:36 19638 [Warning] WSREP: Failed to apply app buffer: seqno: 46677454, status: 1
at galera/src/trx_handle.cpp:apply():351
Retrying 2th time
2015-07-09 00:44:36 19638 [Warning] WSREP: BF applier failed to open_and_lock_tables: 1615, fatal: 0 wsrep = (exec_mode: 1 conflict_state: 5 seqno: 46677454)
2015-07-09 00:44:36 19638 [Warning] WSREP: RBR event 3 Write_rows apply warning: 1615, 46677454
2015-07-09 00:44:36 19638 [Warning] WSREP: Failed to apply app buffer: seqno: 46677454, status: 1
at galera/src/trx_handle.cpp:apply():351
Retrying 3th time
2015-07-09 00:44:36 19638 [Warning] WSREP: BF applier failed to open_and_lock_tables: 1615, fatal: 0 wsrep = (exec_mode: 1 conflict_state: 5 seqno: 46677454)
2015-07-09 00:44:36 19638 [Warning] WSREP: RBR event 3 Write_rows apply warning: 1615, 46677454
2015-07-09 00:44:36 19638 [Warning] WSREP: Failed to apply app buffer: seqno: 46677454, status: 1
at galera/src/trx_handle.cpp:apply():351
Retrying 4th time
2015-07-09 00:44:36 19638 [Warning] WSREP: BF applier failed to open_and_lock_tables: 1615, fatal: 0 wsrep = (exec_mode: 1 conflict_state: 5 seqno: 46677454)
2015-07-09 00:44:36 19638 [Warning] WSREP: RBR event 3 Write_rows apply warning: 1615, 46677454
2015-07-09 00:44:36 19638 [Warning] WSREP: failed to replay trx: source: f3b46697-1ff6-11e5-af61-0b245f7246eb version: 3 local: 1 state: REPLAYING flags: 129 conn_id: 6286358 trx_id: 366113962 seqnos (l: 6786080, g: 46677454, s: 46677452, d: 46677453, ts: 2721954221674858)
2015-07-09 00:44:36 19638 [Warning] WSREP: Failed to apply trx 46677454 4 times
2015-07-09 00:44:36 19638 [ERROR] WSREP: trx_replay failed for: 6, query: void
2015-07-09 00:44:36 19638 [ERROR] Aborting

After the node fails, it ALWAYS has to do an SST (instead of an IST). I have tried Googling around and have found several people with the same issue, but no resolutions. Is this a configuration problem? A bug that needs to be reported?
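A side note on the SST-instead-of-IST part (an assumption, not a confirmed diagnosis): after an unclean shutdown Galera usually cannot prove the node's position, which shows up as seqno: -1 in grastate.dat and forces a full SST. Worth inspecting after a crash (assuming the default datadir):

$ cat /var/lib/mysql/grastate.dat   # seqno: -1 means no IST is possible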

Any help would be appreciated. Thanks in advance!

How to get a performance upgrade to this complicated Select...

Latest Forum Posts - July 9, 2015 - 8:20am
Hey
First of all, I'm really new here in this forum, so the first thing I have to say is: HELLO.
OK, I have the following problem with a SELECT.
I create temporary tables because I have to concatenate some data. The second query has 4 UNION ALL statements.
I can't change the database tables; all I can do is adjust the SELECT statements and/or create indexes.
In parallel I have an MS SQL Server (not in production; the MySQL server is the production server). There the statement runs, all in all, within half a second. On the MySQL server the same statement needs about 4 seconds (same hardware).
I hope somebody can help me with tuning this. (Please don't doubt the statement; it is put together by my software, where, as in this case, some values are the same. So it looks like the same statement repeats over and over, but this depends on the customer's needs!)
So, here is the statement:

DROP TEMPORARY TABLE IF EXISTS tmpGlaeser;
DROP TEMPORARY TABLE IF EXISTS tmpSchicht;

CREATE TEMPORARY TABLE tmpGlaeser (
PRIMARY KEY (ID),
INDEX (hst_code_grundglas),
INDEX (lieferbar_ab),
INDEX (lieferbar_bis)
) ENGINE = MEMORY
SELECT DISTINCT
iprolenstype.*
FROM iprolensrange iprolensrange
INNER JOIN iprolenstype iprolenstype
ON iprolensrange.hst_code_grundglas = iprolenstype.hst_code_grundglas
WHERE (1.5 BETWEEN iprolensrange.shs_von / 100 AND iprolensrange.shs_bis / 100)
AND (0 BETWEEN iprolensrange.cyl_von / 100 AND iprolensrange.cyl_bis / 100)
AND iprolenstype.hst_code_grundglas IN (SELECT
hst_code_grundglas
FROM iprolensrange
WHERE (1.5 BETWEEN iprolensrange.shs_von / 100 AND iprolensrange.shs_bis / 100)
AND (0 BETWEEN iprolensrange.cyl_von / 100 AND iprolensrange.cyl_bis / 100))
AND (CURDATE() BETWEEN IFNULL(iprolenstype.lieferbar_ab, CURDATE()) AND IFNULL(iprolenstype.lieferbar_bis, CURDATE()))
AND (0 <= iprolensrange.prisma_bis)
AND (0 <= iprolensrange.prisma_bis
);
CREATE TEMPORARY TABLE tmpSchicht(
INDEX (grundglas),
INDEX (photo),
INDEX (schicht))
ENGINE = MEMORY
SELECT
grundglas,
photo,
schicht
FROM (SELECT DISTINCT
i1.hst_code_grundglas AS grundglas,
iprooptions.Phototrop AS photo,
iprooptions.hst_code_schicht AS schicht
FROM iprooptions
LEFT JOIN iprocombination i1
ON (iprooptions.hst_code_schicht = i1.hst_code_schicht1
AND i1.hst_code_grundglas IN (SELECT DISTINCT
iprolenstype.hst_code_grundglas
FROM iprolensrange iprolensrange
INNER JOIN iprolenstype iprolenstype
ON iprolensrange.hst_code_grundglas = iprolenstype.hst_code_grundglas
WHERE (1.5 BETWEEN iprolensrange.shs_von / 100 AND iprolensrange.shs_bis / 100)
AND (0 BETWEEN iprolensrange.cyl_von / 100 AND iprolensrange.cyl_bis / 100)
AND iprolenstype.hst_code_grundglas IN (SELECT
hst_code_grundglas
FROM iprolensrange
WHERE (1.5 BETWEEN iprolensrange.shs_von / 100 AND iprolensrange.shs_bis / 100)
AND (0 BETWEEN iprolensrange.cyl_von / 100 AND iprolensrange.cyl_bis / 100))
AND (CURDATE() BETWEEN IFNULL(iprolenstype.lieferbar_ab, CURDATE()) AND IFNULL(iprolenstype.lieferbar_bis, CURDATE()))
AND (0 <= iprolensrange.prisma_bis)
AND (0 <= iprolensrange.prisma_bis))
AND i1.hst_code_schicht1 <> '******'
AND i1.lieferbarkeit = 2
AND iprooptions.manufacturer_code = i1.manufacturer_code)
WHERE iprooptions.Farbe > 0
UNION ALL
SELECT DISTINCT
i2.hst_code_grundglas AS grundglas,
iprooptions.Phototrop AS photo,
iprooptions.hst_code_schicht AS schicht
FROM iprooptions
LEFT JOIN iprocombination i2
ON (iprooptions.hst_code_schicht = i2.hst_code_schicht2
AND i2.hst_code_grundglas IN (SELECT DISTINCT
iprolenstype.hst_code_grundglas
FROM iprolensrange iprolensrange
INNER JOIN iprolenstype iprolenstype
ON iprolensrange.hst_code_grundglas = iprolenstype.hst_code_grundglas
WHERE (1.5 BETWEEN iprolensrange.shs_von / 100 AND iprolensrange.shs_bis / 100)
AND (0 BETWEEN iprolensrange.cyl_von / 100 AND iprolensrange.cyl_bis / 100)
AND iprolenstype.hst_code_grundglas IN (SELECT
hst_code_grundglas
FROM iprolensrange
WHERE (1.5 BETWEEN iprolensrange.shs_von / 100 AND iprolensrange.shs_bis / 100)
AND (0 BETWEEN iprolensrange.cyl_von / 100 AND iprolensrange.cyl_bis / 100))
AND (CURDATE() BETWEEN IFNULL(iprolenstype.lieferbar_ab, CURDATE()) AND IFNULL(iprolenstype.lieferbar_bis, CURDATE()))
AND (0 <= iprolensrange.prisma_bis)
AND (0 <= iprolensrange.prisma_bis))
AND i2.hst_code_schicht1 <> '******'
AND i2.lieferbarkeit = 2
AND iprooptions.manufacturer_code = i2.manufacturer_code)
WHERE iprooptions.Farbe > 0
UNION ALL
SELECT DISTINCT
i3.hst_code_grundglas AS grundglas,
iprooptions.Phototrop AS photo,
iprooptions.hst_code_schicht AS schicht
FROM iprooptions
LEFT JOIN iprocombination i3
ON (iprooptions.hst_code_schicht = i3.hst_code_schicht1
AND i3.hst_code_grundglas IN (SELECT DISTINCT
iprolenstype.hst_code_grundglas
FROM iprolensrange iprolensrange
INNER JOIN iprolenstype iprolenstype
ON iprolensrange.hst_code_grundglas = iprolenstype.hst_code_grundglas
WHERE (1.5 BETWEEN iprolensrange.shs_von / 100 AND iprolensrange.shs_bis / 100)
AND (0 BETWEEN iprolensrange.cyl_von / 100 AND iprolensrange.cyl_bis / 100)
AND iprolenstype.hst_code_grundglas IN (SELECT
hst_code_grundglas
FROM iprolensrange
WHERE (1.5 BETWEEN iprolensrange.shs_von / 100 AND iprolensrange.shs_bis / 100)
AND (0 BETWEEN iprolensrange.cyl_von / 100 AND iprolensrange.cyl_bis / 100))
AND (CURDATE() BETWEEN IFNULL(iprolenstype.lieferbar_ab, CURDATE()) AND IFNULL(iprolenstype.lieferbar_bis, CURDATE()))
AND (0 <= iprolensrange.prisma_bis)
AND (0 <= iprolensrange.prisma_bis))
AND i3.hst_code_schicht1 <> '******'
AND i3.lieferbarkeit = 2
AND iprooptions.manufacturer_code = i3.manufacturer_code)
WHERE iprooptions.Farbe > 0) AS newTable
WHERE grundglas IS NOT NULL;
UPDATE tmpGlaeser g, tmpSchicht s
SET g.Phototrop = s.photo
WHERE g.hst_code_grundglas = s.grundglas
AND g.Phototrop < s.photo;
SELECT
*
FROM tmpGlaeser ORDER BY manufacturer_code, hst_code_grundglas;
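As a first diagnostic step, EXPLAIN on the core join shows whether the range conditions can use any index (a sketch; mydb is a placeholder for the real schema name). Note that comparing 1.5 against shs_von / 100 computes on the column, which tends to block index range access; comparing 150 directly against shs_von and shs_bis is often the first win.

$ mysql mydb -e "EXPLAIN SELECT DISTINCT t.*
  FROM iprolensrange r
  JOIN iprolenstype t ON r.hst_code_grundglas = t.hst_code_grundglas
  WHERE 1.5 BETWEEN r.shs_von / 100 AND r.shs_bis / 100
    AND 0 BETWEEN r.cyl_von / 100 AND r.cyl_bis / 100 \G"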

Percona Live Europe 2015 conference, tutorials schedule now available

Latest MySQL Performance Blog posts - July 9, 2015 - 8:10am

The conference and tutorial schedule for Percona Live Europe 2015, September 21-23 in Amsterdam, was published this morning and this year’s event will focus on MySQL, NoSQL and Data in the Cloud.

Conference sessions, which will follow each morning’s keynote addresses, feature a variety of formal tracks and sessions. Topic areas include: high availability (HA), DevOps, programming, performance optimization, replication and backup, MySQL in the cloud, and MySQL and NoSQL. There will also be MySQL case studies, sessions on security and talks about “What’s new in MySQL.”

Technology experts from the world’s leading MySQL and NoSQL vendors and users – including Oracle, MariaDB, Percona, Facebook, Google, LinkedIn and Yelp – will deliver the sessions. Sessions will include:

  • “InnoDB: A Journey to the Core,” Jeremy Cole, Systems Engineer, Google and Davi Arnaut, Software Engineer, LinkedIn
  • “MongoDB Patterns and Antipatterns for Dev and Ops,” Steffan Mejia, Principal Consulting Engineer, MongoDB, Inc.
  • “NoSQL’s Biggest Lie: SQL Never Went Away,” Matthew Revell, Lead Developer Advocate, Couchbase
  • “The Future of Replication is Today: New Features in Practice,” Giuseppe Maxia, Quality Assurance Architect, VMware
  • “What’s New in MySQL 5.7,” Geir Høydalsvik, Senior Software Development Director, Oracle
Tutorial Schedule

Tutorials provide practical, in-depth knowledge of critical MySQL issues. Topics will include:

  • “Best Practices for MySQL High Availability,” Colin Charles, Chief Evangelist, MariaDB
  • “Mongo Sharding from the Trench: A Veterans Field Guide,” David Murphy, Lead DBA, Rackspace Data Stores
  • “Advanced Percona XtraDB Cluster in a Nutshell, La Suite: Hands on Tutorial Not for Beginners!,” Frederic Descamps, Senior Architect, Percona
Featured Events
  • On Monday, September 21 at 5 p.m., Percona will host an opening reception at the Delirium Café in Amsterdam.
  • On Tuesday, September 22 at 7 p.m., the Community Dinner will take place at the offices of Booking.com.
  • On Wednesday September 23 at 6 p.m., the closing reception will be held at the Mövenpick Hotel, giving attendees one last chance to visit the sponsor kiosks.
Sponsorships

Sponsorship opportunities for Percona Live Europe 2015 are still available but they are selling out fast. Event sponsors become part of a dynamic and fast-growing ecosystem and interact with hundreds of DBAs, sysadmins, developers, CTOs, CEOs, business managers, technology evangelists, solution vendors and entrepreneurs who typically attend the event. This year’s conference will feature expanded accommodations and turnkey kiosks. Current sponsors include:

  • Diamond: VMware
  • Exhibitors: MariaDB, Severalnines
  • Media: Business Cloud News, Computerworld UK, TechWorld
Planning to Attend?

Early Bird registration discounts for Percona Live Europe 2015 are available through July 26, 2015 at 11:30 p.m. CEST.

The post Percona Live Europe 2015 conference, tutorials schedule now available appeared first on MySQL Performance Blog.

pt-table-checksum shows diffs, but pmp-check-pt-table-checksum says "OK"

Latest Forum Posts - July 9, 2015 - 7:17am
I'm new to the Percona Toolkit and Nagios plugins. When I execute pt-table-checksum, I see diffs on some tables, e.g.

TS ERRORS DIFFS ROWS CHUNKS SKIPPED TIME TABLE
07-09T16:10:10 0 1 87 1 0 0.054 LDS.DATABASECHANGELOG

and the exit code is 16, telling me that there are differences. But in the checksums table I see the following line:

| db | tbl | chunk | chunk_time | chunk_index | lower_boundary | upper_boundary | this_crc | this_cnt | master_crc | master_cnt | ts |
| LDS | DATABASECHANGELOG | 1 | 0.011309 | NULL | NULL | NULL | 4090c657 | 87 | 4090c657 | 87 | 2015-07-09 16:10:10 |

showing me no differences. And pmp-check-pt-table-checksum shows

OK pt-table-checksum found no out-of-sync tables

What is wrong here? Do we have differences or not?
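For cross-checking by hand, a query along these lines (assuming pt-table-checksum's default percona.checksums table) lists chunks that differ from the master. One thing to watch: it only means something when run on a replica, since on the master this_crc always equals master_crc.

$ mysql -e "SELECT db, tbl, chunk FROM percona.checksums
  WHERE master_cnt <> this_cnt OR master_crc <> this_crc
     OR ISNULL(master_crc) <> ISNULL(this_crc)"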

Thanks in advance!

Regards
Burkhard

error install percona-xtradb-cluster-server-5.6

Latest Forum Posts - July 9, 2015 - 5:58am
$ apt-get install percona-xtradb-cluster-56
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have requested
an impossible situation or if you are using the unstable distribution that
some required packages have not yet been created or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 percona-xtradb-cluster-56 : Depends: percona-xtradb-cluster-server-5.6 (>= 5.6.15-25.5-759.raring) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 12.04.5 LTS
Release: 12.04
Codename: precise

$ cat /etc/apt/sources.list
deb http://archive.ubuntu.com/ubuntu precise main restricted universe
deb http://archive.ubuntu.com/ubuntu precise-updates main restricted universe
deb http://security.ubuntu.com/ubuntu precise-security main restricted universe multiverse
deb http://archive.canonical.com/ubuntu precise partner
##################
######### Percona
deb http://repo.percona.com/apt raring main
deb-src http://repo.percona.com/apt raring main
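One likely culprit (an assumption read off the output above): the Percona repository entries use the raring codename while this system is precise. Pointing them at the system's own codename should let the dependencies resolve:

$ sudo sed -i 's/raring/precise/' /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install percona-xtradb-cluster-56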

percona replication error after shutting down the slave

Latest Forum Posts - July 9, 2015 - 5:49am
We had a shutdown on the slave server (at 13:47) and after that the slave does not follow the master. I have all the data up to the shutdown on the slave server.
Here is the slave:

mysql> show slave status \G
*************************** 1. row ***************************
Master_Host: 192.168.0.56
Master_Log_File: mysql-bin.000226
Read_Master_Log_Pos: 695831819
Relay_Log_File: mysql-relay-bin.000001
Relay_Log_Pos: 4
Relay_Master_Log_File: mysql-bin.000226
Slave_IO_Running: No
Slave_SQL_Running: Yes
Exec_Master_Log_Pos: 695831819
Relay_Log_Space: 120
Last_IO_Errno: 1236
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file'
Last_IO_Error_Timestamp: 150709 14:26:07

It seems the slave receives the logs.
Actually we have "Master with Relay Slave" replication, and the slave itself acts as master for another slave.
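Given that error 1236 complains about the binlog index, a quick check (a sketch) is whether mysql-bin.000226 still exists on the master and is listed in its index:

mysql> SHOW BINARY LOGS;   -- run on the master (192.168.0.56)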

Percona Server 5.6.25-73.1 is now available

Latest MySQL Performance Blog posts - July 9, 2015 - 4:48am

Percona is glad to announce the release of Percona Server 5.6.25-73.1 on July 9, 2015. Download the latest version from the Percona web site or from the Percona Software Repositories.

Based on MySQL 5.6.25, including all the bug fixes in it, Percona Server 5.6.25-73.1 is the current GA release in the Percona Server 5.6 series. Percona Server is open-source and free – and this is the latest release of our enhanced, drop-in replacement for MySQL. Complete details of this release can be found in the 5.6.25-73.1 milestone on Launchpad.

New Features:

  • TokuDB storage engine package has been updated to version 7.5.8

New TokuDB Features:

  • Exposed the ft-index fanout as the TokuDB option tokudb_fanout (default=16, range=2-16384).
  • Tokuftdump can now provide summary info with the new --summary option.
  • Fanout has been serialized in the ft-header, and the ft_header.fanout value has been exposed in tokuftdump.
  • New checkpoint status variables have been implemented:
    • CP_END_TIME – checkpoint end time, time spent in the checkpoint end operation in seconds,
    • CP_LONG_END_COUNT – long checkpoint end count, count of end_checkpoint operations that exceeded 1 minute,
    • CP_LONG_END_TIME – long checkpoint end time, total time of long checkpoints in seconds.
  • “Engine” status variables are now visible as “GLOBAL” status variables.
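A quick way to eyeball those (assuming the usual Tokudb_ prefix Percona Server uses for the engine's status variables):

$ mysql -e "SHOW GLOBAL STATUS LIKE 'Tokudb%'"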

TokuDB Bugs Fixed:

  • Fixed an assertion with big transactions in toku_txn_complete_txn.
  • Fixed an assertion that was caused when a transaction had rollback log nodes orphaned in the blocktable.
  • Fixed ftcxx test failures that were happening when tests were run in parallel.
  • Fixed multiple test failures for Debian/Ubuntu caused by an assertion on setlocale().
  • Status has been refactored into its own file/subsystem within the ft-index code to make it more accessible.

Release notes for Percona Server 5.6.25-73.1 are available in the online documentation. Please report any bugs on the Launchpad bug tracker.

The post Percona Server 5.6.25-73.1 is now available appeared first on MySQL Performance Blog.

How to obtain the MySQL version from an FRM file

Latest MySQL Performance Blog posts - July 9, 2015 - 12:00am

I recently helped a customer figure out why a minor version MySQL upgrade was indicating that some tables needed to be rebuilt. The mysql_upgrade program should be run for every upgrade, no matter how big or small the version difference is, but when only the minor version changes, I would normally not expect it to require tables to be rebuilt.

Turns out some of their tables were still marked with an older MySQL version, which could mean a few things… most likely that something went wrong with a previous upgrade, or that the tables were copied from a server with an older version.

In cases like this, did you know there is a fast, safe and simple way to check the version associated with a table? You can do this by reading the FRM file, following the format specification found here.

If you look at that page, you’ll see that the version is 4 bytes long and starts at offset 0x33. Since it is stored in little endian format, you can get the version just by reading the first two bytes.

This means you can use hexdump to read 2 bytes, starting at offset 0x33, and get their decimal representation to obtain the MySQL version, like so:


telecaster:test fernandoipar$ hexdump -s 0x33 -n 2 -v -d 55_test.frm
0000033 50532
0000035
telecaster:test fernandoipar$ hexdump -s 0x33 -n 2 -v -d 51_test.frm
0000033 50173
0000035

The first example corresponds to a table created on MySQL version 5.5.32, while the second one corresponds to 5.1.73.
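To turn those numbers into dotted version strings, you can lean on the MYSQL_VERSION_ID convention (major*10000 + minor*100 + patch); a small sketch:

$ v=50532; echo "$(( v / 10000 )).$(( v % 10000 / 100 )).$(( v % 100 ))"
5.5.32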

Does that mean the 51_test table was created on 5.1.73? Not necessarily, as MySQL will update the version on the FRM whenever the table is rebuilt or altered.

The manual page says the details can change with the transition to the new text-based format, but I was able to get the version using this command up to MySQL version 5.7.7.

Hope you found that useful!

The post How to obtain the MySQL version from an FRM file appeared first on MySQL Performance Blog.

Master-master asynchronous replication issue between two 5.6.24 PXC clusters

Latest Forum Posts - July 8, 2015 - 9:27am
To meet the DR requirements for production, we have set up bidirectional master-master asynchronous MySQL replication between two 3-node PXC clusters. binlog_format=ROW and log-slave-updates are set on each node.

The Server version: 5.6.24-72.2-56-log Percona XtraDB Cluster (GPL), Release rel72.2, Revision 1, WSREP version 25.11, wsrep_25.11

The asynchronous slave runs on node 3 in each cluster.

For instance:

cluster a is the slave of cluster b. The slave runs on node a3.
cluster b is the slave of cluster a. The slave runs on node b3.

On cluster a, if a transaction is executed on node a3, where the asynchronous slave is running, it is replicated to a1 and a2 and to cluster b without error. However, if a transaction is executed on a1 or a2, where no asynchronous slave is running, the transaction is replicated to the rest of the nodes in cluster a and to cluster b, but the asynchronous slave on a3 then stops, trying to replicate the same transaction again.

The same behavior is observed on cluster b: if a transaction is executed on b1 or b2, the slave on b3 will stop, failing to apply the duplicate transaction.

Is master-master asynchronous MySQL replication supported between two PXC clusters?

What can we do to prevent the recursive replication?

Here are the examples of the slave errors:

2015-07-07 20:22:12 11098 [ERROR] Slave SQL: Error 'Table 'test' already exists' on query. Default database: 'spxna'. Query: 'create table test ( i int unsigned not null auto_increment primary key, j char(32))', Error_code: 1050
2015-07-07 20:22:12 11098 [Warning] Slave: Table 'test' already exists Error_code: 1050
2015-07-07 20:22:12 11098 [ERROR] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'spxus-slcprdmagdb03-master-bin.000002' position 527

2015-07-07 20:39:10 12272 [ERROR] Slave SQL: Could not execute Write_rows event on table spxna.test; Duplicate entry '1' for key 'PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log spxus-slcprdmagdb03-master-bin.000003, end_log_pos 569, Error_code: 1062
2015-07-07 20:39:10 12272 [Warning] Slave: Duplicate entry '1' for key 'PRIMARY' Error_code: 1062
2015-07-07 20:39:10 12272 [ERROR] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'spxus-slcprdmagdb03-master-bin.000003' position 408
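One guard worth checking (a hedged guess, not a confirmed diagnosis): a MySQL slave silently skips binlog events that carry its own server_id, which is the standard protection against exactly this kind of circular replication. If nodes share server_id values, or the ids get rewritten along the way, that protection misfires. A sketch of distinct ids:

# my.cnf on each node - every node across both clusters gets a unique value,
# e.g. a1=101, a2=102, a3=103, b1=201, b2=202, b3=203
[mysqld]
server-id = 101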

Thanks.

Bug in ss_get_by_ssh.php --type=redis

Latest Forum Posts - July 8, 2015 - 4:38am
I've found that my Cacti graph of Redis commands executed was jumping around like crazy. After a lot of debugging, it turns out that the redis_get function usually returns only about half of the INFO response from Redis. The crazy thing is that it would stop at the second-to-last digit of the total_commands_processed value.

To illustrate: if the actual Redis INFO response was:

(...snip...)
# Stats
total_connections_received:7378492
total_commands_processed:2724014579
instantaneous_ops_per_sec:23
(...snip...)

the $data variable in redis_get would end up with:

(...snip...)
# Stats
total_connections_received:7378492
total_commands_processed:272401457

Every 10th poll or so the implementation would work as intended, causing a huge delta because of the extra digit, and a big spike in the Cacti graph.

I've changed the script at line 1307 (in redis_get) to send a PING after the INFO, and to keep reading the response until PONG is received:

   $res = fwrite($sock, "INFO\r\nPING\r\n");
   if (!$res) {
      echo("Can't write to socket");
      return;
   }

   $data = '';
   while (($line = fgets($sock)) && trim($line) != '+PONG') {
      $data .= $line;
   } 
Which turned a b0rked graph [graph image] into something more reasonable [graph image] (fix applied at 11:31; the second graph is zoomed in to avoid the earlier extreme values).

Hope it can help someone. Cheers.

mysql vs percona first test

Latest Forum Posts - July 8, 2015 - 3:43am
Hello. Let me describe my scenario: I have a VMware virtual machine installed on an SSD disk. The installation is clean because I installed it yesterday. First I installed MySQL Server 5.5 via apt-get, restored my DB, and ran the first tests. The first query returned in 2.8 seconds over a total of 3,500,000 rows in a table with 3 fields in MySQL. On another VM with the same features, I installed Percona Server. The same query returns in 6.47 seconds with Percona Server. In both cases there wasn't any optimization. Why does MySQL 5.5 return a better result than Percona?

Regards

How to debug establishing connection failures?

Latest Forum Posts - July 8, 2015 - 2:40am
Hi,
I've noticed that a correlation exists between Aborted_connects and Innodb_rows_inserted. How can I debug this?

Red Hat Enterprise Linux Server release 6.5 x86_64
Percona Server 5.5.16
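A minimal starting point (a sketch, assuming Percona Server 5.5 and a default error-log path): raise log_warnings so aborted connections are written to the error log, then correlate the logged timestamps with the insert activity.

$ mysql -e "SET GLOBAL log_warnings = 2"
$ mysql -e "SHOW GLOBAL STATUS LIKE 'Aborted_connects'"
$ tail -f /var/lib/mysql/$(hostname).err | grep -i aborted   # error-log path varies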

Same host, cross-database transactions possible? Percona/MySQL, InnoDB/XtraDB

Latest Forum Posts - July 8, 2015 - 12:42am
Hi,

Are cross-database transactions supported for InnoDB and/or XtraDB with Percona Server and/or MySQL?

I am not asking about cross-server or cluster setups, just a simple single host with multiple databases. I would expect transactions to be supported across databases, since the metadata files are shared. When I test, it seems to be OK with MySQL and InnoDB at least, i.e. locks are held until the end of the transaction and rollback seems to work.
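A minimal version of such a test (db1, db2 and table t are placeholders; both tables are InnoDB with an integer column i):

$ mysql <<'SQL'
START TRANSACTION;
UPDATE db1.t SET i = i + 1;
UPDATE db2.t SET i = i + 1;
ROLLBACK; -- both updates are undone together, so the transaction spans the two databases
SQL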

But can anyone give me definitive answers?

I cannot find anything in the documentation, nor any authoritative responses on Stack Overflow etc.

Many Thanks,

gw

MySQL QA Episode 4: QA Framework Setup Time!

Latest MySQL Performance Blog posts - July 8, 2015 - 12:00am

Welcome to MySQL QA Episode 4! In this episode we’ll look into setting up our QA Framework: percona-qa, pquery, reducer & more.

1. All about percona-qa
2. pquery

$ cd ~; bzr branch lp:percona-qa

3. reducer.sh

$ cd ~; bzr branch lp:randgen
$ vi ~/randgen/util/reducer/reducer.sh

4. Short introduction to pquery framework tools

The tools introduced in this episode will be covered further in the next two episodes.

Full-screen viewing @ 720p resolution recommended

The post MySQL QA Episode 4: QA Framework Setup Time! appeared first on MySQL Performance Blog.

MySQL QA Episode 3: How to use the debugging tool GDB

Latest MySQL Performance Blog posts - July 7, 2015 - 12:00am

Welcome to MySQL QA Episode 3: “Debugging: GDB, Backtraces, Frames and Library Dependencies”

In this episode you’ll learn how to use the debugging tool GDB. The following debugging topics are covered:

1. GDB Introduction
2. Backtrace, Stack trace
3. Frames
4. Commands & Logging
5. Variables
6. Library dependencies
7. c++filt
8. Handy references
– GDB Cheat sheet (page #2): https://goo.gl/rrmB9i
– From Crash to testcase: https://goo.gl/1o5MzM

The episode also expands on live debugging and more. In HD quality (set your player to 720p!)
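As a quick taste, pulling a backtrace out of a core file looks like this (a minimal sketch; the binary and core file paths are placeholders):

$ gdb /usr/sbin/mysqld /var/crash/core.mysqld --batch -ex "bt"                    # crashing thread
$ gdb /usr/sbin/mysqld /var/crash/core.mysqld --batch -ex "thread apply all bt"   # all threads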

The post MySQL QA Episode 3: How to use the debugging tool GDB appeared first on MySQL Performance Blog.
