
Feed aggregator

Percona Live Europe 2015 conference, tutorials schedule now available

Latest MySQL Performance Blog posts - July 9, 2015 - 8:10am

The conference and tutorial schedule for Percona Live Europe 2015, September 21-23 in Amsterdam, was published this morning and this year’s event will focus on MySQL, NoSQL and Data in the Cloud.

Conference sessions, which will follow each morning’s keynote addresses, are organized into a variety of formal tracks. Topic areas include: high availability (HA), DevOps, programming, performance optimization, replication and backup, MySQL in the cloud, and MySQL and NoSQL. There will also be MySQL case studies, sessions on security and talks about “What’s new in MySQL.”

Technology experts from the world’s leading MySQL and NoSQL vendors and users – including Oracle, MariaDB, Percona, Facebook, Google, LinkedIn and Yelp – will deliver the sessions. Sessions will include:

  • “InnoDB: A Journey to the Core,” Jeremy Cole, Systems Engineer, Google and Davi Arnaut, Software Engineer, LinkedIn
  • “MongoDB Patterns and Antipatterns for Dev and Ops,” Steffan Mejia, Principal Consulting Engineer, MongoDB, Inc.
  • “NoSQL’s Biggest Lie: SQL Never Went Away,” Matthew Revell, Lead Developer Advocate, Couchbase
  • “The Future of Replication is Today: New Features in Practice,” Giuseppe Maxia, Quality Assurance Architect, VMware
  • “What’s New in MySQL 5.7,” Geir Høydalsvik, Senior Software Development Director, Oracle
Tutorial Schedule

Tutorials provide practical, in-depth knowledge of critical MySQL issues. Topics will include:

  • “Best Practices for MySQL High Availability,” Colin Charles, Chief Evangelist, MariaDB
  • “Mongo Sharding from the Trench: A Veterans Field Guide,” David Murphy, Lead DBA, Rackspace Data Stores
  • “Advanced Percona XtraDB Cluster in a Nutshell, La Suite: Hands on Tutorial Not for Beginners!,” Frederic Descamps, Senior Architect, Percona
Featured Events
  • On Monday, September 21 at 5 p.m., Percona will host an opening reception at the Delirium Café in Amsterdam.
  • On Tuesday, September 22 at 7 p.m., the Community Dinner will take place at the offices of Booking.com.
  • On Wednesday September 23 at 6 p.m., the closing reception will be held at the Mövenpick Hotel, giving attendees one last chance to visit the sponsor kiosks.
Sponsorships

Sponsorship opportunities for Percona Live Europe 2015 are still available but they are selling out fast. Event sponsors become part of a dynamic and fast-growing ecosystem and interact with hundreds of DBAs, sysadmins, developers, CTOs, CEOs, business managers, technology evangelists, solution vendors and entrepreneurs who typically attend the event. This year’s conference will feature expanded accommodations and turnkey kiosks. Current sponsors include:

  • Diamond: VMware
  • Exhibitors: MariaDB, Severalnines
  • Media: Business Cloud News, Computerworld UK, TechWorld
Planning to Attend?

Early Bird registration discounts for Percona Live Europe 2015 are available through July 26, 2015 at 11:30 p.m. CEST.

The post Percona Live Europe 2015 conference, tutorials schedule now available appeared first on MySQL Performance Blog.

pt-table-checksum shows diffs, but pmp-check-pt-table-checksum says "OK"

Latest Forum Posts - July 9, 2015 - 7:17am
I'm new to Percona Toolkit and the Nagios plugins. When I execute pt-table-checksum, I see diffs for some tables, e.g.

TS ERRORS DIFFS ROWS CHUNKS SKIPPED TIME TABLE
07-09T16:10:10 0 1 87 1 0 0.054 LDS.DATABASECHANGELOG

and the exit code is 16, telling me that there are differences. But in the checksums table I see the following line:

| db | tbl | chunk | chunk_time | chunk_index | lower_boundary | upper_boundary | this_crc | this_cnt | master_crc | master_cnt | ts |
| LDS | DATABASECHANGELOG | 1 | 0.011309 | NULL | NULL | NULL | 4090c657 | 87 | 4090c657 | 87 | 2015-07-09 16:10:10 |

showing me no differences. And pmp-check-pt-table-checksum shows

OK pt-table-checksum found no out-of-sync tables

What is wrong here? Do we have differences or not?
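
For reference, one way to double-check (assuming pt-table-checksum wrote to the default percona.checksums table, which is an assumption on my part) is to ask the slave directly for chunks where its checksum or row count differs from the master's:

mysql -e "SELECT db, tbl, chunk, this_cnt, master_cnt, this_crc, master_crc
          FROM percona.checksums
          WHERE master_crc <> this_crc OR master_cnt <> this_cnt
             OR ISNULL(master_crc) <> ISNULL(this_crc);"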

Thanks in advance!

Regards
Burkhard

error install percona-xtradb-cluster-server-5.6

Latest Forum Posts - July 9, 2015 - 5:58am
: apt-get install percona-xtradb-cluster-56
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:
The following packages have unmet dependencies:
 percona-xtradb-cluster-56 : Depends: percona-xtradb-cluster-server-5.6 (>= 5.6.15-25.5-759.raring) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.

: lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 12.04.5 LTS
Release: 12.04
Codename: precise

: cat /etc/apt/sources.list
deb http://archive.ubuntu.com/ubuntu precise main restricted universe
deb http://archive.ubuntu.com/ubuntu precise-updates main restricted universe
deb http://security.ubuntu.com/ubuntu precise-security main restricted universe multiverse
deb http://archive.canonical.com/ubuntu precise partner
################## ######### Percona
deb http://repo.percona.com/apt raring main
deb-src http://repo.percona.com/apt raring main

percona replication error after shutting down the slave

Latest Forum Posts - July 9, 2015 - 5:49am
We had a shutdown on the slave server (at 13:47) and after that the slave does not follow the master. I have all the data up to the shutdown on the slave server.
Here is the slave:
: mysql> show slave status \G
*************************** 1. row ***************************
Master_Host: 192.168.0.56
Master_Log_File: mysql-bin.000226
Read_Master_Log_Pos: 695831819
Relay_Log_File: mysql-relay-bin.000001
Relay_Log_Pos: 4
Relay_Master_Log_File: mysql-bin.000226
Slave_IO_Running: No
Slave_SQL_Running: Yes
Exec_Master_Log_Pos: 695831819
Relay_Log_Space: 120
Last_IO_Errno: 1236
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file'
Last_IO_Error_Timestamp: 150709 14:26:07

It seems the slave receives the logs.
Actually we have "Master with Relay Slave" replication and the slave itself acts as a master for another slave.

Percona Server 5.6.25-73.1 is now available

Latest MySQL Performance Blog posts - July 9, 2015 - 4:48am

Percona is glad to announce the release of Percona Server 5.6.25-73.1 on July 9, 2015. Download the latest version from the Percona web site or from the Percona Software Repositories.

Based on MySQL 5.6.25, including all the bug fixes in it, Percona Server 5.6.25-73.1 is the current GA release in the Percona Server 5.6 series. Percona Server is open-source and free – and this is the latest release of our enhanced, drop-in replacement for MySQL. Complete details of this release can be found in the 5.6.25-73.1 milestone on Launchpad.

New Features:

  • TokuDB storage engine package has been updated to version 7.5.8

New TokuDB Features:

  • Exposed the ft-index fanout as the TokuDB option tokudb_fanout (default=16, range=2-16384).
  • Tokuftdump can now provide summary info with the new --summary option.
  • Fanout is now serialized in the ft-header, and the ft_header.fanout value is exposed in tokuftdump.
  • New checkpoint status variables have been implemented:
    • CP_END_TIME – checkpoint end time, time spent in the checkpoint end operation, in seconds,
    • CP_LONG_END_COUNT – long checkpoint end count, count of end_checkpoint operations that exceeded 1 minute,
    • CP_LONG_END_TIME – long checkpoint end time, total time of long checkpoints in seconds.
  • “Engine” status variables are now visible as “GLOBAL” status variables.
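
As a quick illustration of that last point, one way to check is to look for the new checkpoint counters in the global status output (the exact variable names and casing exposed by the server are an assumption here):

mysql -e "SHOW GLOBAL STATUS LIKE '%checkpoint%';"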

TokuDB Bugs Fixed:

  • Fixed assertion with big transaction in toku_txn_complete_txn.
  • Fixed assertion that was caused when a transaction had rollback log nodes orphaned in the blocktable.
  • Fixed ftcxx test failures that were happening when the tests were run in parallel.
  • Fixed multiple test failures for Debian/Ubuntu caused by assertion on setlocale().
  • Status has been refactored into its own file/subsystem within the ft-index code to make it more accessible.

Release notes for Percona Server 5.6.25-73.1 are available in the online documentation. Please report any bugs on the launchpad bug tracker.

The post Percona Server 5.6.25-73.1 is now available appeared first on MySQL Performance Blog.

How to obtain the MySQL version from an FRM file

Latest MySQL Performance Blog posts - July 9, 2015 - 12:00am

I recently helped a customer figure out why a minor version MySQL upgrade was indicating that some tables needed to be rebuilt. The mysql_upgrade program should be run for every upgrade, no matter how big or small the version difference is, but when only the minor version changes, I would normally not expect it to require tables to be rebuilt.

Turns out some of their tables were still marked with an older MySQL version, which could mean a few things… most likely that something went wrong with a previous upgrade, or that the tables were copied from a server with an older version.

In cases like this, did you know there is a fast, safe and simple way to check the version associated with a table? You can do this by reading the FRM file, following the format specification found here.

If you look at that page, you’ll see that the version is 4 bytes long and starts at offset 0x33. Since it is stored in little-endian format (and current version numbers fit in two bytes), you can get the version just by reading the first two bytes.

This means you can use hexdump to read 2 bytes, starting at offset 0x33, and get their decimal representation to obtain the MySQL version, like so:


telecaster:test fernandoipar$ hexdump -s 0x33 -n 2 -v -d 55_test.frm
0000033 50532
0000035
telecaster:test fernandoipar$ hexdump -s 0x33 -n 2 -v -d 51_test.frm
0000033 50173
0000035

The first example corresponds to a table created on MySQL version 5.5.32, while the second one corresponds to 5.1.73.
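
If you would rather not do the arithmetic in your head, a small helper can wrap the same hexdump call and print a dotted version string. This is just a sketch: the function name read_frm_version is mine, and it assumes a little-endian host (as in the examples above) and the usual encoding of the version ID as major*10000 + minor*100 + patch:

read_frm_version() {
  # read the 2 low bytes of the version field at offset 0x33 as an unsigned decimal
  local id
  id=$(hexdump -s 0x33 -n 2 -v -d "$1" | awk 'NR==1 {print $2 + 0}')
  # split the version ID back into major.minor.patch, e.g. 50532 -> 5.5.32
  printf '%d.%d.%d\n' $((id / 10000)) $((id % 10000 / 100)) $((id % 100))
}
read_frm_version 55_test.frm   # should print 5.5.32 for the first example above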

Does that mean the 51_test table was created on 5.1.73? Not necessarily, as MySQL will update the version on the FRM whenever the table is rebuilt or altered.

The manual page says the details can change with the transition to the new text-based format, but I was able to get the version using this command up to MySQL version 5.7.7.

Hope you found that useful!

The post How to obtain the MySQL version from an FRM file appeared first on MySQL Performance Blog.

Master-master asynchronous replication issue between two 5.6.24 PXC clusters

Latest Forum Posts - July 8, 2015 - 9:27am
To meet the DR requirements for production, we have set up bidirectional master-master asynchronous MySQL replication between two 3-node PXC clusters. binlog_format=ROW and log-slave-updates are set on each node.

The Server version: 5.6.24-72.2-56-log Percona XtraDB Cluster (GPL), Release rel72.2, Revision 1, WSREP version 25.11, wsrep_25.11

The asynchronous slave runs on node 3 in each cluster.

For instance:

cluster a is the slave of cluster b. The slave runs on node a3.
cluster b is the slave of cluster a. The slave runs on node b3.

On cluster a, if a transaction is executed on node a3, where the asynchronous slave is running, it is replicated to a1 and a2 and to cluster b without error. However, if a transaction is executed on a1 or a2, where no asynchronous slave is running, the transaction is replicated to the rest of the nodes in cluster a and to cluster b, but the asynchronous slave on a3 then stops, trying to apply the same transaction again.

The same behavior is observed on cluster b. If a transaction is executed on b1 or b2, the slave on b3 stops, failing to apply the duplicate transaction.

Is master-master asynchronous MySQL replication supported between two PXC clusters?

What can we do to prevent the recursive replication?

Here are the examples of the slave errors:

2015-07-07 20:22:12 11098 [ERROR] Slave SQL: Error 'Table 'test' already exists' on query. Default database: 'spxna'. Query: 'create table test ( i int unsigned not null auto_increment primary key, j char(32))', Error_code: 1050
2015-07-07 20:22:12 11098 [Warning] Slave: Table 'test' already exists Error_code: 1050
2015-07-07 20:22:12 11098 [ERROR] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'spxus-slcprdmagdb03-master-bin.000002' position 527

2015-07-07 20:39:10 12272 [ERROR] Slave SQL: Could not execute Write_rows event on table spxna.test; Duplicate entry '1' for key 'PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log spxus-slcprdmagdb03-master-bin.000003, end_log_pos 569, Error_code: 1062
2015-07-07 20:39:10 12272 [Warning] Slave: Duplicate entry '1' for key 'PRIMARY' Error_code: 1062
2015-07-07 20:39:10 12272 [ERROR] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'spxus-slcprdmagdb03-master-bin.000003' position 408

Thanks.

Bug in ss_get_by_ssh.php --type=redis

Latest Forum Posts - July 8, 2015 - 4:38am
I've found that my Cacti graph of Redis commands executed was jumping around like crazy. After a lot of debugging it turns out that the redis_get function usually returns only about half of the INFO response from Redis. The crazy thing is that it would stop at the second-to-last digit of the total_commands_processed value.

To illustrate: if the actual Redis INFO response was

: (...snip...)
# Stats
total_connections_received:7378492
total_commands_processed:2724014579
instantaneous_ops_per_sec:23
(...snip...)

the $data variable in redis_get would end up with

: (...snip...)
# Stats
total_connections_received:7378492
total_commands_processed:272401457

Every 10th poll or so the implementation would work as intended, causing a huge delta because of the extra digit, and a big spike in the Cacti graph.
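
As a side note, one way to sanity-check the real counter outside of Cacti (assuming redis-cli is installed and Redis is listening on the default port) is to read it directly:

redis-cli INFO | grep total_commands_processed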

I've changed the script at line 1307 (in redis_get) to send a PING after the INFO, and to keep reading the response until PONG is received:

   // send INFO followed by PING in a single write
   $res = fwrite($sock, "INFO\r\nPING\r\n");
   if (!$res) {
      echo("Can't write to socket");
      return;
   }

   // keep reading until the +PONG reply arrives, so the full INFO response is captured
   $data = '';
   while (($line = fgets($sock)) && trim($line) != '+PONG') {
      $data .= $line;
   }
Which caused a b0rked graph like this:
[graph attachment]

...to become more reasonable (fix applied at 11:31; the graph is zoomed in to avoid the earlier extreme values):
[graph attachment]

Hope it can help someone. Cheers.

mysql vs percona first test

Latest Forum Posts - July 8, 2015 - 3:43am
Hello, let me describe my scenario. I have a VMware virtual machine installed on an SSD disk. The installation is clean because I installed it yesterday. First I installed MySQL Server 5.5 via apt-get, restored my DB and ran the first tests. The first query returned in 2.8 seconds for a total of 3,500,000 records in a table with 3 fields in MySQL. On another VM with the same features, I installed Percona Server. The same query takes 6.47 seconds with Percona Server. In both cases there wasn't any optimization. Why does MySQL 5.5 return better results than Percona?

Regards

How to debug establishing connection failures?

Latest Forum Posts - July 8, 2015 - 2:40am
Hi,
I have noticed that a correlation exists between Aborted_connects and Innodb_rows_inserted. How can I debug it?

Red Hat Enterprise Linux Server release 6.5 x86_64
Percona Server 5.5.16

Same host, cross database transactions possible? Percona/MySQL, Innodb/ExtraDb

Latest Forum Posts - July 8, 2015 - 12:42am
Hi,

Are cross-database transactions supported for InnoDB and/or XtraDB with Percona and/or MySQL?

I am not asking about cross-server or cluster setups, just a simple single host with multiple databases. I would expect transactions to be supported across databases since the metadata files are shared. When I test, it seems to be OK with MySQL and InnoDB at least, i.e. locks are held until the end of the transaction and rollback seems to work.
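
To make that concrete, here is a minimal sketch of such a test (db_a and db_b are hypothetical databases used only for illustration); if the transaction really spans both databases, both row counts should be 0 after the ROLLBACK:

mysql -e "
  CREATE DATABASE IF NOT EXISTS db_a; CREATE DATABASE IF NOT EXISTS db_b;
  CREATE TABLE IF NOT EXISTS db_a.t (id INT PRIMARY KEY) ENGINE=InnoDB;
  CREATE TABLE IF NOT EXISTS db_b.t (id INT PRIMARY KEY) ENGINE=InnoDB;
  START TRANSACTION;
  INSERT INTO db_a.t VALUES (1);
  INSERT INTO db_b.t VALUES (1);
  ROLLBACK;
  SELECT (SELECT COUNT(*) FROM db_a.t) AS a_rows,
         (SELECT COUNT(*) FROM db_b.t) AS b_rows;"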

But can anyone give me definitive answers?

I cannot find anything in any documentation or any authoritative responses on StackOverflow etc.

Many Thanks,

gw

MySQL QA Episode 4: QA Framework Setup Time!

Latest MySQL Performance Blog posts - July 8, 2015 - 12:00am

Welcome to MySQL QA Episode 4! In this episode we’ll look into setting up our QA Framework: percona-qa, pquery, reducer & more.

1. All about percona-qa
2. pquery

$ cd ~; bzr branch lp:percona-qa

3. reducer.sh

$ cd ~; bzr branch lp:randgen
$ vi ~/randgen/util/reducer/reducer.sh

4. Short introduction to pquery framework tools

The tools introduced in this episode will be covered further in the next two episodes.

Full-screen viewing @ 720p resolution recommended

The post MySQL QA Episode 4: QA Framework Setup Time! appeared first on MySQL Performance Blog.

MySQL QA Episode 3: How to use the debugging tool GDB

Latest MySQL Performance Blog posts - July 7, 2015 - 12:00am

Welcome to MySQL QA Episode 3: “Debugging: GDB, Backtraces, Frames and Library Dependencies”

In this episode you’ll learn how to use the debugging tool GDB. The following debugging topics are covered:

 

1. GDB Introduction
2. Backtrace, Stack trace
3. Frames
4. Commands & Logging
5. Variables
6. Library dependencies
7. c++filt
8. Handy references
– GDB Cheat sheet (page #2): https://goo.gl/rrmB9i
– From Crash to testcase: https://goo.gl/1o5MzM

The video also expands on live debugging & more. In HD quality (set your player to 720p!)
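
As a taster for the backtrace part, here is a one-line sketch (assuming gdb is installed and debug symbols are available for your mysqld build) that dumps backtraces of all threads from a running server:

gdb --batch --pid "$(pidof mysqld)" -ex "thread apply all bt" > /tmp/mysqld_backtraces.txt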

The post MySQL QA Episode 3: How to use the debugging tool GDB appeared first on MySQL Performance Blog.

TOI wsrep_RSU_method in PXC 5.6.24 and up

Latest MySQL Performance Blog posts - July 6, 2015 - 8:28am

I noticed that in the latest release of Percona XtraDB Cluster (PXC), the behavior of wsrep_RSU_method changed somewhat.  Prior to this release, the variable was GLOBAL only, meaning to use it you would:

mysql> set GLOBAL wsrep_RSU_method='RSU';
mysql> ALTER TABLE ...
mysql> set GLOBAL wsrep_RSU_method='TOI';

This had the (possibly negative) side-effect that ALL DDLs issued on this node would be affected by the setting while in RSU mode.

So, in this latest release, this variable was made to also have a SESSION value, while retaining GLOBAL as well. This has a couple of side-effects that are common to MySQL variables that are both GLOBAL and SESSION:

  • The SESSION copy is made from whatever the GLOBAL’s value is when a new connection (session) is established.
  • SET GLOBAL does not affect existing connection’s SESSION values.

Therefore, our above workflow would only set the GLOBAL value to RSU and not the SESSION value for the local connection, so our ALTER TABLE will be TOI and NOT RSU!
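
You can see the second point above in action from a single connection: set the GLOBAL value and then look at both scopes (a quick sketch; the SESSION copy keeps whatever it had when the connection was opened, typically the default TOI, while GLOBAL now shows RSU):

mysql -e "SET GLOBAL wsrep_RSU_method='RSU';
          SELECT @@GLOBAL.wsrep_RSU_method, @@SESSION.wsrep_RSU_method;"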

So, for those using RSU, the proper workflow would be to make your connection, set the SESSION copy of the variable and then issue your DDL:

mysql> set SESSION wsrep_RSU_method='RSU';
mysql> ALTER TABLE ...
... disconnect ...

The advantage here is that ONLY your session’s DDLs will be affected by RSU (handy if you do DDLs automatically from your application).

The post TOI wsrep_RSU_method in PXC 5.6.24 and up appeared first on MySQL Performance Blog.

Deadlock found when trying to get lock with Magento

Latest Forum Posts - July 3, 2015 - 6:13am
Hello!

We have some deadlock problems with our new XtraDB Cluster. We have a 3 nodes cluster (2 Percona XtraDB Cluster in version 5.6.24-72.2-56 and a third arbitrer with garbd) hosting a Magento website.

The old platform used standard asynchronous MySQL master/slave replication and we decided to migrate to a more robust solution with XtraDB Cluster. Since the migration, we notice some deadlock errors in Magento:

a:5:{i:0;s:111:"SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction";i:1;s:7362:"#0 /var/[...]Varien/Db/Statement/Pdo/Mysql.php(110): Zend_Db_Statement_Pdo->_execute(Array)

Those errors were not present before the migration. We tried to tweak the wsrep configuration but without success. We have HAProxy on each frontend server (3 servers hosting the Magento source code) to redirect ALL requests (reads and writes) to a single node of the cluster.

Our HAProxy configuration is simple:

frontend FE-BDD-RW
bind 127.0.0.1:3306
mode tcp
default_backend BE-BDD-RW

backend BE-BDD-RW
mode tcp
option httpchk
server BDD01 192.168.10.1:3306 check port 9200 inter 12000 rise 3 fall 3
server BDD02 192.168.10.2:3306 check port 9200 inter 12000 rise 3 fall 3 backup

The wsrep configuration has been tweaked following several sources to help with the deadlock issue, but without success. Right now, our wsrep configuration looks like this:

wsrep_provider = /usr/lib/libgalera_smm.so
wsrep_cluster_address = gcomm://192.168.10.1,192.168.10.2,192.168.10.3
wsrep_cluster_name = CLUSTER01
wsrep_sst_method = xtrabackup-v2
wsrep_sst_auth = xtrabackup:mypassword
wsrep_provider_options = "gcs.fc_limit = 256; gcs.fc_factor = 0.99; gcs.fc_master_slave = yes"
wsrep_retry_autocommit = 4
binlog_format = ROW
default_storage_engine = InnoDB
innodb_locks_unsafe_for_binlog = 1
innodb_autoinc_lock_mode = 2

Is it possible for Percona to understand that we are in a master/slave situation, to avoid the deadlock issues that are related to Galera? Right now, we don't know how to tweak the Percona configuration any further.

Thanks!

Bootstrapping PXC (Percona XtraDB Cluster)MySQL (Percona...

Latest Forum Posts - July 3, 2015 - 1:37am
Hi,

I have a problem with my Percona Cluster.
I try to run it with this command and the terminal shows me:

# /etc/init.d/mysql bootstrap-pxc
Bootstrapping PXC (Percona XtraDB Cluster)MySQL (Percona Xt[FALLÓ]uster) is not running, but lock file (/var/lock/subsys/mysql) exists
Starting MySQL (Percona XtraDB Cluster)........The server q[FALLÓ]hout updating PID file (/DATAMYSQL/data1/mysql.pid).
MySQL (Percona XtraDB Cluster) server startup failed! [FALLÓ]


I attached a txt with the log info.
Any help please?

Best regards.


Percona Server 5.5.44-37.3 is now available

Latest MySQL Performance Blog posts - July 1, 2015 - 6:43am


Percona is glad to announce the release of Percona Server 5.5.44-37.3 on July 1, 2015. Based on MySQL 5.5.44, including all the bug fixes in it, Percona Server 5.5.44-37.3 is now the current stable release in the 5.5 series.

Percona Server is open-source and free. Details of the release can be found in the 5.5.44-37.3 milestone on Launchpad. Downloads are available here and from the Percona Software Repositories.

Bugs Fixed:

  • Symlinks to libmysqlclient libraries were missing on CentOS 6. Bug fixed #1408500.
  • RHEL/CentOS 6.6 OpenSSL package (1.0.1e-30.el6_6.9), containing a fix for CVE-2015-4000, changed the DH key sizes to a minimum of 768 bits. This caused an issue for MySQL as it uses 512 bit keys. Fixed by backporting an upstream 5.7 fix that increases the key size to 2048 bits. Bug fixed #1462856 (upstream #77275).
  • innochecksum would fail to check tablespaces in compressed format. The fix for this bug has been ported from Facebook MySQL 5.1 patch. Bug fixed #1100652 (upstream #66779).
  • Issuing SHOW BINLOG EVENTS with an invalid starting binlog position would cause a potentially misleading message in the server error log. Bug fixed #1409652 (upstream #75480).
  • While using max_slowlog_size, the slow query log was rotated every time the slow query log was enabled, without checking whether the current slow log was actually bigger than max_slowlog_size. Bug fixed #1416582.
  • If query_response_time_range_base variable was set as a command line option or in a configuration file, its value would not take effect until the first flush was made. Bug fixed #1453277 (Preston Bennes).
  • Prepared XA transactions with update undo logs were not properly recovered. Bug fixed #1468301.
  • Variable log_slow_sp_statements now supports skipping the logging of stored procedures into the slow log entirely with the new OFF_NO_CALLS option. Bug fixed #1432846.

Other bugs fixed: #1380895 (upstream #72322).

(Please also note that the Percona Server 5.6 series is the latest General Availability series and the current GA release is 5.6.25-73.0.)

Release notes for Percona Server 5.5.44-37.3 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

The post Percona Server 5.5.44-37.3 is now available appeared first on MySQL Performance Blog.

Percona Server 5.6.25-73.0 is now available

Latest MySQL Performance Blog posts - July 1, 2015 - 6:24am

Percona is glad to announce the release of Percona Server 5.6.25-73.0 on July 1, 2015. Download the latest version from the Percona web site or from the Percona Software Repositories.

Based on MySQL 5.6.25, including all the bug fixes in it, Percona Server 5.6.25-73.0 is the current GA release in the Percona Server 5.6 series. Percona Server is open-source and free – and this is the latest release of our enhanced, drop-in replacement for MySQL. Complete details of this release can be found in the 5.6.25-73.0 milestone on Launchpad.

New Features:

Bugs Fixed:

  • Symlinks to libmysqlclient libraries were missing on CentOS 6. Bug fixed #1408500.
  • RHEL/CentOS 6.6 OpenSSL package (1.0.1e-30.el6_6.9), containing a fix for CVE-2015-4000, changed the DH key sizes to a minimum of 768 bits. This caused an issue for MySQL as it uses 512 bit keys. Fixed by backporting an upstream 5.7 fix that increases the key size to 2048 bits. Bug fixed #1462856 (upstream #77275).
  • Some compressed InnoDB data pages could be mistakenly considered corrupted, crashing the server. Bug fixed #1467760 (upstream #73689) Justin Tolmer.
  • innochecksum would fail to check tablespaces in compressed format. The fix for this bug has been ported from Facebook MySQL 5.6 patch. Bug fixed #1100652 (upstream #66779).
  • Using concurrent REPLACE, LOAD DATA REPLACE or INSERT ON DUPLICATE KEY UPDATE statements in the READ COMMITTED isolation level or with the innodb_locks_unsafe_for_binlog option enabled could lead to a unique-key constraint violation. Bug fixed #1308016 (upstream #76927).
  • Issuing SHOW BINLOG EVENTS with an invalid starting binlog position would cause a potentially misleading message in the server error log. Bug fixed #1409652 (upstream #75480).
  • While using max_slowlog_size, the slow query log was rotated every time the slow query log was enabled, without checking whether the current slow log was actually bigger than max_slowlog_size. Bug fixed #1416582.
  • Fixed possible server assertions when Backup Locks are used. Bug fixed #1432494.
  • If query_response_time_range_base variable was set as a command line option or in a configuration file, its value would not take effect until the first flush was made. Bug fixed #1453277 (Preston Bennes).
  • The mysqld_safe script now also searches for the libjemalloc.so.1 library, needed by TokuDB, in the basedir directory. Bug fixed #1462338.
  • Prepared XA transactions could cause a debug assertion failure during the shutdown. Bug fixed #1468326.
  • Variable log_slow_sp_statements now supports skipping the logging of stored procedures into the slow log entirely with the new OFF_NO_CALLS option. Bug fixed #1432846.
  • The TokuDB HotBackup library is now automatically loaded by the mysqld_safe script. Bug fixed #1467443.

Other bugs fixed: #1457113, #1380895, and #1413836.

Release notes for Percona Server 5.6.25-73.0 are available in the online documentation. Please report any bugs on the launchpad bug tracker.

The post Percona Server 5.6.25-73.0 is now available appeared first on MySQL Performance Blog.

Using Cgroups to Limit MySQL and MongoDB memory usage

Latest MySQL Performance Blog posts - July 1, 2015 - 5:00am

Quite often, especially for benchmarks, I am trying to limit available memory for a database server (usually for MySQL, but recently for MongoDB also). This is usually needed to test database performance in scenarios with different memory limits. I have physical servers with the usual high amount of memory (128GB or more), but I am interested in seeing how a database server will perform, say, if only 16GB of memory is available.

And while InnoDB usually respects the setting of innodb_buffer_pool_size in O_DIRECT mode (the OS cache is not being used in this case), other engines (TokuDB for MySQL; MMAP, WiredTiger and RocksDB for MongoDB) usually benefit from the OS cache, and the Linux kernel by default is generous enough to allocate as much memory as is available. Here I should note that while TokuDB (and TokuMX for MongoDB) supports a DIRECT mode (that is, bypassing the OS cache), we found there is a performance gain if the OS cache is used for compressed pages.

Well, an obvious recommendation on how to restrict available memory would be to use a virtual machine, but I do not like this because virtualization does not come cheap and usually there are both CPU and IO penalties.

Other popular options I hear are:

  • to use the "mem=" option on the kernel boot line. Besides the fact that it requires a server reboot (so you can’t really script this and leave it for automatic iterations through different memory options), I also suspect it does not work well in a multi-node NUMA environment – it seems that the kernel limits memory only from some nodes and not from all of them proportionally
  • use an auxiliary program that allocates as much memory as you want to make unavailable and executes an mlock call. This option may work, but again I have the impression that the Linux kernel does not always make good choices when there is a huge amount of locked memory that it can’t move around. For example, I have seen Linux start swapping in this case (instead of decreasing cached pages) even if vm.swappiness is set to 0.

Another option, on the rising wave of Docker and containers (like LXC), is, well, to use Docker or another container… put the database server inside a container and limit resources that way. This, in fact, should work, but if you are as lazy as I am and do not want to deal with containers, we can just use cgroups (https://en.wikipedia.org/wiki/Cgroups), which in fact are used extensively by the aforementioned Docker and LXC.

Using cgroups, our task can be accomplished in a few easy steps.

1. Create a control group: cgcreate -g memory:DBLimitedGroup (make sure the cgroups binaries are installed on your system; consult your favorite Linux distribution's manual for how to do that)
2. Specify how much memory will be available for this group:
echo 16G > /sys/fs/cgroup/memory/DBLimitedGroup/memory.limit_in_bytes
This command limits memory to 16G (the good thing is that this limits memory for both malloc allocations and the OS cache)
3. Now it is a good idea to drop the pages that are already in the OS cache:
sync; echo 3 > /proc/sys/vm/drop_caches
4. And finally, assign the server to the newly created control group:

cgclassify -g memory:DBLimitedGroup `pidof mongod`

This will assign the running mongod process to the group, limiting it to only 16GB of memory.
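
Put together, the same four steps for a MySQL server would look like this (a sketch reusing the group name and the 16G limit from above, run as root):

cgcreate -g memory:DBLimitedGroup                                        # step 1: create the control group
echo 16G > /sys/fs/cgroup/memory/DBLimitedGroup/memory.limit_in_bytes   # step 2: cap the group at 16G
sync; echo 3 > /proc/sys/vm/drop_caches                                  # step 3: drop already-cached pages
cgclassify -g memory:DBLimitedGroup `pidof mysqld`                       # step 4: move mysqld into the group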

With this, our task is accomplished… but there is one more thing to keep in mind.

That is, dirty pages in the OS cache. As long as we rely on the OS cache, Linux controls writing from the OS cache to disk via two variables:
/proc/sys/vm/dirty_background_ratio and /proc/sys/vm/dirty_ratio.

These variables are percentages of memory that the Linux kernel takes as thresholds for flushing dirty pages.

Let’s talk about them a little more. In simple terms:

/proc/sys/vm/dirty_background_ratio, which by default is 10 on my Ubuntu, means that the Linux kernel will start background flushing of dirty pages from the OS cache when the amount of dirty pages reaches 10% of available memory.

/proc/sys/vm/dirty_ratio, which by default is 20 on my Ubuntu, means that the Linux kernel will start foreground flushing of dirty pages from the OS cache when the amount of dirty pages reaches 20% of available memory. Foreground means that user threads executing IO might be blocked… and this is what causes IO stalls for the user (which we want to avoid at all costs).

Why is this important to keep in mind? Let’s consider 20% of 256GB (which is what I have on my servers): that is 51.2GB, which a database can make dirty VERY fast in a write-intensive workload, and if the server happens to have slow storage (an HDD RAID or a slow SATA SSD), it may take the Linux kernel a long time to flush all those pages, stalling the user's IO activity in the meantime.

So it is worth considering changing these values (or the corresponding /proc/sys/vm/dirty_background_bytes and /proc/sys/vm/dirty_bytes if you prefer to operate in bytes rather than percentages).
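
For example, here is a sketch of switching to byte-based limits (the absolute numbers are purely illustrative, not a recommendation):

sysctl -w vm.dirty_background_bytes=$((256 * 1024 * 1024))   # start background flushing at 256MB of dirty pages
sysctl -w vm.dirty_bytes=$((1024 * 1024 * 1024))             # start blocking (foreground) flushing at 1GB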

Again, this was not important for our traditional usage of InnoDB in O_DIRECT mode (that’s why we did not pay much attention to Linux OS cache tuning before), but as soon as we start to rely on the OS cache, it is something to keep in mind.

Finally, it’s worth remembering that dirty_bytes and dirty_background_bytes relate to ALL memory and are not controlled by cgroups. This also applies to containers: if you are running several Docker or LXC containers on the same box, dirty pages among ALL of them are controlled globally by a single pair of dirty_bytes and dirty_background_bytes.

This may change in future Linux kernels, as I have seen patches to apply dirty_bytes and dirty_background_bytes to cgroups, but it is not available in current kernels.

The post Using Cgroups to Limit MySQL and MongoDB memory usage appeared first on MySQL Performance Blog.

Cacti per-minute graphs

Latest Forum Posts - July 1, 2015 - 3:29am
Thank you for the MySQL monitoring templates for Cacti!

I have configured Cacti to poll every minute and my graph looks like a hedgehog (sample attached). Is this because my MySQL data sources update less often than once a minute, causing every nth sample to have a delta of 0? Or is it something else, and maybe something I could fix?

Thanks for your help!
