
Percona Server 5.5.44-37.3 is now available


Percona is glad to announce the release of Percona Server 5.5.44-37.3 on July 1, 2015. Based on MySQL 5.5.44, including all the bug fixes in it, Percona Server 5.5.44-37.3 is now the current stable release in the 5.5 series.

Percona Server is open-source and free. Details of the release can be found in the 5.5.44-37.3 milestone on Launchpad. Downloads are available here and from the Percona Software Repositories.

Bugs Fixed:

  • Symlinks to libmysqlclient libraries were missing on CentOS 6. Bug fixed #1408500.
  • RHEL/CentOS 6.6 OpenSSL package (1.0.1e-30.el6_6.9), containing a fix for CVE-2015-4000, changed the DH key sizes to a minimum of 768 bits. This caused an issue for MySQL as it uses 512 bit keys. Fixed by backporting an upstream 5.7 fix that increases the key size to 2048 bits. Bug fixed #1462856 (upstream #77275).
  • innochecksum would fail to check tablespaces in compressed format. The fix for this bug has been ported from Facebook MySQL 5.1 patch. Bug fixed #1100652 (upstream #66779).
  • Issuing SHOW BINLOG EVENTS with an invalid starting binlog position would cause a potentially misleading message in the server error log. Bug fixed #1409652 (upstream #75480).
  • While using max_slowlog_size, the slow query log was rotated every time the slow query log was enabled, without checking whether the current slow log was actually bigger than max_slowlog_size. Bug fixed #1416582.
  • If query_response_time_range_base variable was set as a command line option or in a configuration file, its value would not take effect until the first flush was made. Bug fixed #1453277 (Preston Bennes).
  • Prepared XA transactions with update undo logs were not properly recovered. Bug fixed #1468301.
  • Variable log_slow_sp_statements now supports skipping the logging of stored procedures into the slow log entirely with the new OFF_NO_CALLS option. Bug fixed #1432846.

Other bugs fixed: #1380895 (upstream #72322).

(Please also note that Percona Server 5.6 series is the latest General Availability series and current GA release is 5.6.25-73.0.)

Release notes for Percona Server 5.5.44-37.3 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

The post Percona Server 5.5.44-37.3 is now available appeared first on MySQL Performance Blog.

Percona Server 5.6.25-73.0 is now available

Percona is glad to announce the release of Percona Server 5.6.25-73.0 on July 1, 2015. Download the latest version from the Percona web site or from the Percona Software Repositories.

Based on MySQL 5.6.25, including all the bug fixes in it, Percona Server 5.6.25-73.0 is the current GA release in the Percona Server 5.6 series. Percona Server is open-source and free – and this is the latest release of our enhanced, drop-in replacement for MySQL. Complete details of this release can be found in the 5.6.25-73.0 milestone on Launchpad.

New Features:

Bugs Fixed:

  • Symlinks to libmysqlclient libraries were missing on CentOS 6. Bug fixed #1408500.
  • RHEL/CentOS 6.6 OpenSSL package (1.0.1e-30.el6_6.9), containing a fix for CVE-2015-4000, changed the DH key sizes to a minimum of 768 bits. This caused an issue for MySQL as it uses 512 bit keys. Fixed by backporting an upstream 5.7 fix that increases the key size to 2048 bits. Bug fixed #1462856 (upstream #77275).
  • Some compressed InnoDB data pages could be mistakenly considered corrupted, crashing the server. Bug fixed #1467760 (upstream #73689) (Justin Tolmer).
  • innochecksum would fail to check tablespaces in compressed format. The fix for this bug has been ported from Facebook MySQL 5.6 patch. Bug fixed #1100652 (upstream #66779).
  • Using concurrent REPLACE, LOAD DATA REPLACE or INSERT ON DUPLICATE KEY UPDATE statements in the READ COMMITTED isolation level or with the innodb_locks_unsafe_for_binlog option enabled could lead to a unique-key constraint violation. Bug fixed #1308016 (upstream #76927).
  • Issuing SHOW BINLOG EVENTS with an invalid starting binlog position would cause a potentially misleading message in the server error log. Bug fixed #1409652 (upstream #75480).
  • While using max_slowlog_size, the slow query log was rotated every time the slow query log was enabled, without checking whether the current slow log was actually bigger than max_slowlog_size. Bug fixed #1416582.
  • Fixed possible server assertions when Backup Locks are used. Bug fixed #1432494.
  • If query_response_time_range_base variable was set as a command line option or in a configuration file, its value would not take effect until the first flush was made. Bug fixed #1453277 (Preston Bennes).
  • The mysqld_safe script now also searches for the libjemalloc.so.1 library, needed by TokuDB, in the basedir directory. Bug fixed #1462338.
  • Prepared XA transactions could cause a debug assertion failure during the shutdown. Bug fixed #1468326.
  • Variable log_slow_sp_statements now supports skipping the logging of stored procedures into the slow log entirely with the new OFF_NO_CALLS option. Bug fixed #1432846.
  • TokuDB HotBackup library is now automatically loaded with mysqld_safe script. Bug fixed #1467443.

Other bugs fixed: #1457113, #1380895, and #1413836.

Release notes for Percona Server 5.6.25-73.0 are available in the online documentation. Please report any bugs on the launchpad bug tracker.

The post Percona Server 5.6.25-73.0 is now available appeared first on MySQL Performance Blog.

Using Cgroups to Limit MySQL and MongoDB memory usage

Quite often, especially for benchmarks, I try to limit the memory available to a database server (usually MySQL, but recently MongoDB as well). This is usually needed to test database performance under different memory limits. I have physical servers with the usual high amount of memory (128GB or more), but I am interested in seeing how a database server will perform if, say, only 16GB of memory is available.

And while InnoDB usually respects the setting of innodb_buffer_pool_size in O_DIRECT mode (the OS cache is not used in this case), other engines (TokuDB for MySQL; MMAP, WiredTiger and RocksDB for MongoDB) usually benefit from the OS cache, and the Linux kernel by default is generous enough to allocate as much memory as is available. Here I should note that while TokuDB (and TokuMX for MongoDB) supports a DIRECT mode (that is, bypassing the OS cache), we found there is a performance gain when the OS cache is used for compressed pages.

Well, an obvious recommendation for restricting available memory would be to use a virtual machine, but I do not like this approach because virtualization does not come cheap: usually there are both CPU and IO penalties.

Other popular options I hear are:

  • to use the "mem=" option on the kernel boot line. Besides the fact that it requires a server reboot (so you can't really script it and leave it to iterate automatically through different memory options), I also suspect it does not work well in a multi-node NUMA environment: it seems that the kernel limits memory only from some nodes, not from all of them proportionally
  • use an auxiliary program that allocates as much memory as you want to make unavailable and executes an mlock call. This option may work, but again I have the impression that the Linux kernel does not always make good choices when there is a huge amount of locked memory it can't move around. For example, I have seen Linux start swapping in this case (instead of decreasing cached pages), even with vm.swappiness set to 0.

Another option, riding the rising wave of Docker and containers (like LXC), is, well, to use Docker or another container: put the database server inside a container and limit its resources that way. This should in fact work, but if you are as lazy as I am and do not want to deal with containers, we can just use cgroups (https://en.wikipedia.org/wiki/Cgroups), which are in fact used extensively by the aforementioned Docker and LXC.

Using cgroups, our task can be accomplished in a few easy steps.

1. Create a control group: cgcreate -g memory:DBLimitedGroup (make sure the cgroups binaries are installed on your system; consult your favorite Linux distribution's manual for how to do that)
2. Specify how much memory will be available for this group:
echo 16G > /sys/fs/cgroup/memory/DBLimitedGroup/memory.limit_in_bytes
This command limits memory to 16G (conveniently, this limits the memory used both by malloc allocations and by the OS cache)
3. Now it is a good idea to drop the pages already sitting in the cache:
sync; echo 3 > /proc/sys/vm/drop_caches
4. And finally, assign the server to the created control group:

cgclassify -g memory:DBLimitedGroup `pidof mongod`

This will assign the running mongod process to a group limited to only 16GB of memory.

With this, our task is accomplished… but there is one more thing to keep in mind.

That is dirty pages in the OS cache. As long as we rely on the OS cache, Linux controls writing from the OS cache to disk with two variables:
/proc/sys/vm/dirty_background_ratio and /proc/sys/vm/dirty_ratio.

These variables are percentages of memory that the Linux kernel takes as thresholds for flushing dirty pages.

Let’s talk about them a little more. In simple terms:
/proc/sys/vm/dirty_background_ratio, which by default is 10 on my Ubuntu, means that the Linux kernel will start background flushing of dirty pages from the OS cache when the amount of dirty pages reaches 10% of available memory.

/proc/sys/vm/dirty_ratio, which by default is 20 on my Ubuntu, means that the Linux kernel will start foreground flushing of dirty pages from the OS cache when the amount of dirty pages reaches 20% of available memory. Foreground means that user threads executing IO might be blocked… and this is what causes IO stalls for users (which we want to avoid at all costs).

Why is this important to keep in mind? Consider 20% of 256GB (which is what I have on my servers): that is 51.2GB, which a database can make dirty VERY fast under a write-intensive workload, and if the server happens to have slow storage (an HDD RAID or a slow SATA SSD), it may take the Linux kernel a long time to flush all these pages, stalling the user's IO activity in the meantime.

So it is worth considering changing these values (or the corresponding /proc/sys/vm/dirty_background_bytes and /proc/sys/vm/dirty_bytes if you prefer to operate in bytes rather than percentages).
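Switching to the byte-based knobs could look like the sketch below. The 1GB/4GB thresholds are my own illustrative values, not a recommendation from this post (run as root):

```shell
#!/bin/sh
# Sketch: cap dirty pages in absolute bytes instead of percentages of total memory.
# The 1GB/4GB thresholds are illustrative values chosen for this example only.
DIRTY_BG_BYTES=$((1 * 1024 * 1024 * 1024))  # background flushing starts at 1GB of dirty pages
DIRTY_BYTES=$((4 * 1024 * 1024 * 1024))     # foreground (blocking) flushing starts at 4GB

echo $DIRTY_BG_BYTES > /proc/sys/vm/dirty_background_bytes
echo $DIRTY_BYTES    > /proc/sys/vm/dirty_bytes
# Note: once the *_bytes files are set, the kernel reports the *_ratio knobs as 0;
# only one form (bytes or ratio) is in effect at a time.
```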

Again, this was not important for our traditional usage of InnoDB in O_DIRECT mode, which is why we did not pay much attention to Linux OS cache tuning before, but as soon as we start to rely on the OS cache, it is something to keep in mind.

Finally, it’s worth remembering that dirty_bytes and dirty_background_bytes are related to ALL memory, not controlled by cgroups. It applies also to containers, if you are running several Docker or LXC containers on the same box, dirty pages among ALL of them are controlled globally by a single pair of dirty_bytes and dirty_background_bytes.

This may change in future Linux kernels, as I have seen patches that apply dirty_bytes and dirty_background_bytes per cgroup, but they are not available in current kernels.

The post Using Cgroups to Limit MySQL and MongoDB memory usage appeared first on MySQL Performance Blog.

Cacti per-minute graphs

Latest Forum Posts - 9 hours 18 min ago
Thank you for the Mysql monitoring templates for Cacti!

I have configured cacti to run per-minute and my graph looks like a hedgehog (sample attached). Is this actually because my mysql data sources update less often than 1 minute, causing every nth sample to have a delta of 0? ...or is it something else, and maybe something I could fix?

Thanks for your help!

Why xtrabackup doesn't need "flush tables with read lock"?

Latest Forum Posts - June 30, 2015 - 2:12am
From the manual I knew that xtrabackup needs FTWRL to copy MyISAM tables and record binlog position information, but I did an experiment and it seems xtrabackup doesn't do a FTWRL:
Session 1: > lock table test.test;
Session 2: # innobackupex --user=root /tmpdir/
But session 2 wasn't blocked by session 1; if xtrabackup did a FTWRL, session 2 should have been blocked by session 1.

Following is xtrabackup's output:
root@li741-164 ~# innobackupex --user=root /tmpdir/
InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
and Percona LLC and/or its affiliates 2009-2013. All Rights Reserved.
This software is published under the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.
150630 08:48:25 innobackupex: Executing a version check against the server...
150630 08:48:25 innobackupex: Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_group=xtrabackup' as 'root' (using password: NO).
150630 08:48:25 innobackupex: Connected to MySQL server
150630 08:48:25 innobackupex: Done.
150630 08:48:25 innobackupex: Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_group=xtrabackup' as 'root' (using password: NO).
150630 08:48:25 innobackupex: Connected to MySQL server
150630 08:48:25 innobackupex: Starting the backup operation
IMPORTANT: Please check that the backup run completes successfully. At the end of a successful backup run innobackupex prints "completed OK!".
innobackupex: Using server version 5.6.23-72.1-log
innobackupex: Created backup directory /tmpdir/2015-06-30_08-48-25
150630 08:48:25 innobackupex: Starting ibbackup with command: xtrabackup --defaults-group="mysqld" --backup --suspend-at-end --target-dir=/tmpdir/2015-06-30_08-48-25 --datadir="/var/lib/mysql/" --innodb_log_file_size="50331648" --innodb_data_file_path="ibdata1:12M:autoextend" --tmpdir=/tmp --extra-lsndir='/tmp'
innobackupex: Waiting for ibbackup (pid=24893) to suspend
innobackupex: Suspend file '/tmpdir/2015-06-30_08-48-25/xtrabackup_suspended_2'
xtrabackup version 2.2.11 based on MySQL server 5.6.24 Linux (x86_64) (revision id: )
xtrabackup: uses posix_fadvise().
xtrabackup: cd to /var/lib/mysql/
xtrabackup: open files limit requested 0, set to 51200
xtrabackup: using the following InnoDB configuration:
xtrabackup: innodb_data_home_dir = ./
xtrabackup: innodb_data_file_path = ibdata1:12M:autoextend
xtrabackup: innodb_log_group_home_dir = ./
xtrabackup: innodb_log_files_in_group = 2
xtrabackup: innodb_log_file_size = 50331648
>> log scanned up to (26294577)
xtrabackup: Generating a list of tablespaces
[01] Copying ./ibdata1 to /tmpdir/2015-06-30_08-48-25/ibdata1
[01] ...done
[01] Copying ./percona/checksums.ibd to /tmpdir/2015-06-30_08-48-25/percona/checksums.ibd
[01] ...done
[01] Copying ./user01/account07.ibd to /tmpdir/2015-06-30_08-48-25/user01/account07.ibd
[01] ...done
[01] Copying ./user01/account02.ibd to /tmpdir/2015-06-30_08-48-25/user01/account02.ibd
[01] ...done
[01] Copying ./user01/account05.ibd to /tmpdir/2015-06-30_08-48-25/user01/account05.ibd
[01] ...done
.................................................................. ignored output ..................................................................
>> log scanned up to (26294577)
xtrabackup: Creating suspend file '/tmpdir/2015-06-30_08-48-25/xtrabackup_suspended_2' with pid '24893'
150630 08:48:27 innobackupex: Continuing after ibbackup has suspended
150630 08:48:27 innobackupex: Executing LOCK TABLES FOR BACKUP...
150630 08:48:27 innobackupex: Backup tables lock acquired
150630 08:48:27 innobackupex: Starting to backup non-InnoDB tables and files
innobackupex: in subdirectories of '/var/lib/mysql/'
innobackupex: Backing up file '/var/lib/mysql//percona/db.opt'
>> log scanned up to (26294577)
innobackupex: Backing up file '/var/lib/mysql//percona/checksums.frm'
innobackupex: Backing up files '/var/lib/mysql//user01/*.{frm,isl,MYD,MYI,MAD,MAI,MRG,TRG,TRN,ARM,ARZ,CSM,CSV,opt,par}' (11 files)
innobackupex: Backing up files '/var/lib/mysql//c2c_lottery/*.{frm,isl,MYD,MYI,MAD,MAI,MRG,TRG,TRN,ARM,ARZ,CSM,CSV,opt,par}' (16 files)
innobackupex: Backing up file '/var/lib/mysql//test/db.opt'
innobackupex: Backing up file '/var/lib/mysql//test/test1.frm'
innobackupex: Backing up file '/var/lib/mysql//test/test.frm'
innobackupex: Backing up file '/var/lib/mysql//test/test2.frm'
innobackupex: Backing up file '/var/lib/mysql//test/gf.frm'
innobackupex: Backing up files '/var/lib/mysql//performance_schema/*.{frm,isl,MYD,MYI,MAD,MAI,MRG,TRG,TRN,ARM,ARZ,CSM,CSV,opt,par}' (53 files)
>> log scanned up to (26294577)
innobackupex: Backing up files '/var/lib/mysql//mysql/*.{frm,isl,MYD,MYI,MAD,MAI,MRG,TRG,TRN,ARM,ARZ,CSM,CSV,opt,par}' (74 files)
innobackupex: Backing up files '/var/lib/mysql//sakila/*.{frm,isl,MYD,MYI,MAD,MAI,MRG,TRG,TRN,ARM,ARZ,CSM,CSV,opt,par}' (19 files)
>> log scanned up to (26294577)
150630 08:48:29 innobackupex: Finished backing up non-InnoDB tables and files
150630 08:48:29 innobackupex: Executing LOCK BINLOG FOR BACKUP...
150630 08:48:29 innobackupex: Executing FLUSH NO_WRITE_TO_BINLOG ENGINE LOGS...
150630 08:48:29 innobackupex: Waiting for log copying to finish
xtrabackup: The latest check point (for incremental): '26294577'
xtrabackup: Stopping log copying thread.
.>> log scanned up to (26294577)
xtrabackup: Creating suspend file '/tmpdir/2015-06-30_08-48-25/xtrabackup_log_copied' with pid '24893'
xtrabackup: Transaction log of lsn (26294577) to (26294577) was copied.
150630 08:48:30 innobackupex: Executing UNLOCK BINLOG
150630 08:48:30 innobackupex: Executing UNLOCK TABLES
150630 08:48:30 innobackupex: All tables unlocked
innobackupex: Backup created in directory '/tmpdir/2015-06-30_08-48-25'
innobackupex: MySQL binlog position: filename 'mysqld-bin.000004', position 3782345
150630 08:48:30 innobackupex: Connection to database server closed
150630 08:48:30 innobackupex: completed OK!

Playing with Percona XtraDB Cluster in Docker

Latest MySQL Performance Blog posts - June 30, 2015 - 12:00am

Like any good (thus lazy) engineer, I don't like to start things manually. Creating directories and configuration files, and specifying paths and ports via the command line, is too boring. I already wrote about how I get by when I need to start a MySQL server (here). There is also MySQL Sandbox, which can be used for the same purpose.

But what do you do if you want to start Percona XtraDB Cluster this way? Fortunately we at Percona have engineers who created an automation solution for starting PXC. This solution uses Docker. To explore it you need to:

  1. Clone the pxc-docker repository:
    git clone https://github.com/percona/pxc-docker
  2. Install Docker Compose as described here
  3. cd pxc-docker/docker-bld
  4. Follow instructions from the README file:

    a) ./docker-gen.sh 5.6    (docker-gen.sh takes a PXC branch as an argument; 5.6 is the default, and it looks for it on github.com/percona/percona-xtradb-cluster)

    b) Optional: docker-compose build (if you see it is not updating with changes).

    c) docker-compose scale bootstrap=1 members=2 for a 3-node cluster

  5. Check which ports are assigned to the containers:
    $ docker port dockerbld_bootstrap_1 3306
    0.0.0.0:32768
    $ docker port dockerbld_members_1 4567
    0.0.0.0:32772
    $ docker port dockerbld_members_2 4568
    0.0.0.0:32776
    Now you can connect with the MySQL client as usual:
    $ mysql -h 0.0.0.0 -P 32768 -uroot
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 10
    Server version: 5.6.21-70.1 MySQL Community Server (GPL), wsrep_25.8.rXXXX
    Copyright (c) 2009-2015 Percona LLC and/or its affiliates
    Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.
    Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    mysql>
  6. To change MySQL options, either pass them as a mount at runtime with something like volume: /tmp/my.cnf:/etc/my.cnf in docker-compose.yml, or connect to the container's bash (docker exec -i -t container_name /bin/bash), change my.cnf, and run docker restart container_name

Notes.

  • If you don’t want to build the images, use the ready-to-use ones
  • If you don’t want to run Docker Compose as the root user, add yourself to the docker group

The post Playing with Percona XtraDB Cluster in Docker appeared first on MySQL Performance Blog.

Practical MySQL Performance Optimization

Latest MySQL Performance Blog posts - June 29, 2015 - 7:24am

Applications are often impacted by MySQL performance. Optimization is the obvious solution, but where do you start? Join me on July 2 at 11 a.m. Pacific and I’ll show you how to optimize MySQL in a practical way, spending time on what gives you the best return. Many of you have heard me talk on this topic before, and this updated presentation will get you up to speed on the latest areas to consider when improving MySQL performance.

I’ll be pointing out the most important things you need to get right for each of the following:

  • Hardware
  • MySQL Configuration
  • Schema and Queries
  • Application Architecture

I’ll also share some tools and techniques to focus on the queries most important for your application. At the end of this webinar, aptly titled “Practical MySQL Performance Optimization,” you will know how to optimize MySQL performance in a practical way to ensure your system is “good enough” in terms of responsiveness, efficiency and scalability.

I hope to see you there! Register now to reserve your spot.

The post Practical MySQL Performance Optimization appeared first on MySQL Performance Blog.

Build on Windows

Latest Forum Posts - June 28, 2015 - 1:50am
Anyone have build instructions for building on Windows?

Percona XtraDB Cluster Node is not synced. in remote server

Latest Forum Posts - June 27, 2015 - 11:35pm
Hello dear friends,
I have a problem with a new node. I created a new node with all of the configs the same as the other nodes, and it synced with the other nodes.
I use haproxy and the mysqlchk script in xinetd for checking the cluster from a remote server.
When I run /usr/bin/clustercheck on the local machine I get:
-----------------------------------------------------------------------
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 40

Percona XtraDB Cluster Node is synced.
-----------------------------------------------------------------------
Up to here everything is good, but when I test from the remote machine or run telnet 127.0.0.1 9200 I get:
-----------------------------------------------------------------------
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
HTTP/1.1 503 Service Unavailable
Content-Type: text/plain
Connection: close
Content-Length: 44

Percona XtraDB Cluster Node is not synced.
Connection closed by foreign host.
-----------------------------------------------------------------------

Locally it works well, but the response on the remote machine is incorrect.
The log of the clustercheck script shows:
-----------------------------------------------------------------------
Warning: Using a password on the command line interface can be insecure.
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (13)
-----------------------------------------------------------------------

how can I solve it ?

galera Master-Master cluster with cross sites replication

Latest Forum Posts - June 27, 2015 - 7:10pm
hello
I am running a galera Master-Master cluster on one site with 3 master nodes. Its version is 5.5 and it works fine.
Now I would like to replicate the data to another site. The latency is around 20ms, and I would not want to extend the existing cluster to the remote site.
Any suggestion?
thanks a lot

Tips for avoiding malware from a lesson learned

Latest MySQL Performance Blog posts - June 26, 2015 - 9:28am

In a recent article on the Percona blog, I recommended to readers a tool called CamStudio for making technical screen recordings. The blog post was very popular and got 300+ Facebook likes in a short time. Providentially though, a reader commented that the installer (as downloaded from the project website) installed “pretty annoying adware on [his] PC.”

As I had been using a slightly dated installer, which did not show this issue, I started looking into the reader’s claims. Besides finding that the reader was correct about the project website’s installer, I found that even the installer from sourceforge.net (a well known open source download site) had a significant amount of adware in it.

However, the worst had yet to come. Reading through the CamStudio forum on SourceForge, I found out that the CamStudio binaries had apparently been plagued by adware and possibly also viruses and malware. I am however always somewhat suspicious of such reports; consider for example that CamStudio’s competitor TechSmith sells a very similar product (originally based on the same source code I believe) at $299 US per license. Not saying it happened, but one can easily see why competing companies may try to eliminate the open source/free competition.

Still, being cautious, I ran my older original installer (which did not have the adware issues) through virustotal.com, a Google service I learned about during this ‘adventure’. “Guess what,” my daughter would say. It had a malware Trojan (Trojan.Siggen6.33552) in it, which had only been discovered by an anti-virus software company last April, and only one of 56 virus scanners picked it up according to https://virustotal.com! Once the situation was clear, I immediately removed the blog post!

Clearly this was turning out not to be my day. Reading up on this Trojan showed that it was ‘designed for installation of another malware’. Given that Trojan.Siggen6.33552 had only been discovered in April, and given that it may have been installing other malware according to the anti-virus company that discovered it, I quickly decided to reinitialize my Windows PC. Better safe than sorry.

As I mentioned to my colleague David Busby, when you have something happen like this, you become much more security conscious! Thus, I did a review of my network security and was quite amazed at what I found, especially when compared with online security reports.

For example, we have uPnP (universal plug and play) enabled on our routers, Skype automatically installs a (quite wide) hole in the Windows Firewall (seemingly even where it is not necessary), and we allow all third-party cookies in all our browsers. One would think this is all fine, but it makes things much easier for attackers!

     Besides virustotal.com, David showed me https://malwr.com – another great resource for analysing potential malware.

Did you know that with the standard Skype settings, someone can easily work out your IP address? Don’t believe it? If you’re on Windows, go to Skype > Tools > Options > Advanced > Connection and hover your mouse over the blue/white question mark after ‘Allow direct connections to your contacts only’. You’ll see that it says “When you call someone who isn’t a contact, we’ll keep your IP address hidden. This may delay your call setup time.“ And apparently on Linux this option is not even directly available (more info here).

So, for example, to make Skype more secure I did the following:

  1. Unticked ‘use port 80 and 443 for additional incoming connections’.
  2. Set up a fixed port and punched a hole in the Windows firewall just for that port, for a specific program, a specific user, and a specific IP range (much more restricted than the wide hole that was already there).
  3. Removed the “Skype rule” which was seemingly placed there by the Skype installer.
  4. Disabled uPnP on my router.
  5. Disabled Skype from using uPnP.
  6. Ticked ‘Allow direct connections to your contacts only’.

Phewy. (Note that disabling uPnP, being a convenience protocol, can lead to some issues with smart TVs, consoles and mobile phone apps.)

     All our networking & software setup these days is mostly about convenience.

Further reviewing the Windows firewall rules, I saw many rules that could be either removed or locked down significantly. It was like doing QA on my own network (phun intended :). The same went for the router settings. I also did a router firmware upgrade and installed the latest Windows security patches. All of a sudden that previously-annoying ‘we’ll just shut down your PC to install updates, even if you had work open’ feature in Windows seemed a lot more acceptable (I now have a one-week timeout for automatic restarts).

In the future, when I download third-party utilities (open source or not), I will almost surely run them through virustotal.com – a fantastic service by Google. It’s quite quick and easy to do: download, upload, check. I also plan to periodically review the Windows firewall rules, program security settings (especially for browsers and tools like Skype), and pending Windows updates.

The most surprising thing of all? Making all these security restrictions has cost me no functionality whatsoever thus far.

Maybe it is indeed time we wake up about security.

The post Tips for avoiding malware from a lesson learned appeared first on MySQL Performance Blog.

Using Percona XtraBackup as a way to migrate a MySQL instance to a newer server?

Latest Forum Posts - June 25, 2015 - 11:17pm
It was recommended to me that Percona XtraBackup was a better way to migrate a MySQL 5.5 instance to a new server running MySQL 5.5. Is this correct? Does Percona XtraBackup allow me to migrate the database while live, and then, once the initial sync to the new DB server happens, repoint my data ingestion endpoints at the new DB server and just run a follow-up pass to catch any data that may have been missed during the initial sync?

Oracle license revenue and the MySQL ecosystem

Latest MySQL Performance Blog posts - June 25, 2015 - 2:30pm

Oracle was in the news recently with the story of its license revenue declining as much as 17% in the recent quarter. This is blamed on transitioning to the cloud in some publications, but others, such as Bloomberg and TechRepublic, look deeper, seeing open source software responsible for the bulk of it.

Things are especially interesting in the MySQL ecosystem, as Oracle owns both its traditional “Enterprise” Oracle database and MySQL – a more modern open source database.

At Percona we see the same story repeating among many of our enterprise customers:

  1. MySQL proves itself. This generally happens one of two ways. One is for the enterprise using traditional enterprise databases, such as Oracle or DB2, to acquire a company which has been built on MySQL. After the dust settles the CFO or CIO discovers that the acquired company has been successfully running business-critical operations with MySQL and spending hundreds of thousands of dollars on database support instead of tens of millions. At this point it’s been shown that it can be done, so it should continue.

The other way is for MySQL to rise through the ranks in an organization. Typically it starts with some small MySQL use, such as running a bug tracking application in the IT department. Then it moves to MySQL being used with Drupal to power the main corporate website and an e-commerce function with Magento or something similar. Over time, MySQL proves itself and is trusted to handle more and more “core” enterprise databases that are absolutely critical for the business.

Interestingly enough, contrary to what some people have said, MySQL ownership by Oracle helps it to gain trust with many enterprise accounts. Enterprises may not like Oracle’s license and maintenance fees, but they like Oracle’s quality engineering, attention to security and predictable releases.

  2. New applications are built using MySQL. As the enterprise is ready to embrace MySQL it is added to the approved database list and now internal teams can use it to develop applications. In many cases the mandate goes even further with MySQL than with other open source technologies, as it is given preference, and teams need to really justify to management when they want to use Oracle or other proprietary database technologies. There are some cases when that may be warranted, but in most cases MySQL is good enough.
  3. Moving existing applications from Oracle to MySQL. Depending on the organization and applications it can happen a couple of different ways. One is that equivalent applications are built from scratch on the new open source technology stack and the old application is retired. The other is that only the database is migrated from Oracle to MySQL. Moving the database from Oracle to MySQL might be easy, or might be close to a full application rewrite. For example, we see Java applications which often use the database as a simple data store through the ORM framework, which can be moved to MySQL easily; on the other hand, applications built with extensive use of advanced stored procedures and Oracle-specific SQL extensions are much harder to move.

The wave of moving to open source database technologies will continue and we’re not alone in thinking that – Gartner believes that by 2018, 70% of new in-house applications will be built on open source database systems.

What are we currently seeing in the MySQL ecosystem? First, many customers tell us that they are looking at hefty price increases for MySQL support subscriptions. Some customers who had previously signed 5-year agreements with Sun (at the time it was acquired by Oracle) and are now exploring renewal see price increases of as much as 5x for a comparable environment. This is very understandable considering the pressures Oracle faces in the market right now.

The issues, however, go deeper than price. Many customers are not comfortable trusting Oracle to give them the best possible advice for moving from the expensive Oracle database to the much less expensive Oracle MySQL database. The conflict is obvious: Oracle’s highest financial reward comes from proving that applications can’t be moved to MySQL or any other open source database.

If you’re choosing MySQL, Oracle is financially interested in having you use the Enterprise Edition, which brings back many of the vendor lock-in issues enterprises are trying to avoid by moving to open source databases. Customers believe Oracle will ensure enterprise-only features are put in use in the applications, making it difficult to avoid renewing at escalating prices.

So what do our customers see in Percona which makes them prefer our support and other services to those of Oracle?

  • We are a great partner if you’re considering moving from the Oracle database to MySQL as we both have years of experience and no conflict of interest.
  • Percona Server, Percona XtraDB Cluster, Percona XtraBackup and our other software for the MySQL ecosystem are 100% open source, which means we’re not trying to lock you into an “enterprise version” as we work together. Furthermore, many of the features which are only available in MySQL Enterprise Edition are available in the fully open source Percona Server, including audit, backup and authentication.
  • We are focused on solutions for your business, not pushing Percona-branded technology. If you choose to use Percona Server, great! If you are using MySQL, MariaDB, Amazon RDS, etc., that’s great too.

With the continuing trend of moving to open source database management systems, the cost pressures on people running proprietary databases will continue to increase, and the only real solution is to accelerate moving to the open source stack. As you do that, you’re better off moving to completely open source technology, such as what is available from Percona, to avoid vendor lock-in. If you’re looking for a partner to help you assess the migration strategy and execute the move successfully, check for conflicts of interest and ensure that your interests and your provider’s are completely aligned.

The post Oracle license revenue and the MySQL ecosystem appeared first on MySQL Performance Blog.

Percona XtraDB Cluster

Latest Forum Posts - June 25, 2015 - 5:43am
Hi All ,

I am using a Galera cluster with 3 nodes. While I am able to get good write performance on Galera, I am still facing an issue with read performance. Is there any parameter related to read performance, or do we only need to tune on the MySQL side?

Cluster performance is low in comparison to standalone MySQL.

Any help will be appreciated.

Xtrabackup ignores configuration options in my.cnf

Latest Forum Posts - June 25, 2015 - 3:46am
There is a three-node Percona cluster. In the "[xtrabackup]" section of my.cnf, the option "databases = db_name" is specified so that changes are replicated for only one database. However, when you create a new database with a different name, it appears on the other nodes. Why is the "databases" option ignored?


Which version for MySQL 5.5.x

Latest Forum Posts - June 24, 2015 - 1:17am
I've started looking at the potential option of using Percona XtraBackup. The version I have on my test system is 2.2.9 (per `xtrabackup --version`).

However, my databases are all MySQL 5.5.x

Will this version work ok (for both backup and restoring to MySQL 5.5) or should I use a different version?

If so, is there an easy way to upgrade / downgrade?

2 of 3 nodes crash

Latest Forum Posts - June 23, 2015 - 3:54pm
Hi,

We have a 3-node cluster in a LAN. We run a group of tests (create DB, create tables, run a lot of transactions in sequence, drop DB). All write/read requests go to the first node. Sometimes the other two nodes simultaneously crash, with errors such as these on both:
2015-06-23 21:36:30 30464 [ERROR] Slave SQL: Could not execute Write_rows event on table test-rain-17165.machine_services; Cannot add or update a child row: a foreign key constraint fails (`test-rain-17165`.`machine_services`, CONSTRAINT `machine_registration_fk` FOREIGN KEY (`machine_id`) REFERENCES `machines` (`id `) ON DELETE NO ACTION ON UPDATE NO ACTION), Error_code: 1452; handler error HA_ERR_NO_REFERENCED_ROW; the event's master log FIRST, end_log_pos 209, Error_code: 1452
2015-06-23 21:36:30 30464 [Warning] WSREP: RBR event 3 Write_rows apply warning: 151, 2948798
2015-06-23 21:36:30 30464 [ERROR] Slave SQL: Could not execute Write_rows event on table test-rain-17165.machine_services; Cannot add or update a child row: a foreign key constraint fails (`test-rain-17165`.`machine_services`, CONSTRAINT `machine_registration_fk` FOREIGN KEY (`machine_id`) REFERENCES `machines` (`id `) ON DELETE NO ACTION ON UPDATE NO ACTION), Error_code: 1452; handler error HA_ERR_NO_REFERENCED_ROW; the event's master log FIRST, end_log_pos 209, Error_code: 1452
2015-06-23 21:36:30 30464 [Warning] WSREP: RBR event 3 Write_rows apply warning: 151, 2948796
2015-06-23 21:36:30 30464 [Warning] WSREP: Failed to apply app buffer: seqno: 2948798, status: 1 at galera/src/trx_handle.cpp:apply():351 Retrying 3th time
2015-06-23 21:36:30 30464 [Warning] WSREP: Failed to apply app buffer: seqno: 2948796, status: 1 at galera/src/trx_handle.cpp:apply():351 Retrying 3th time
2015-06-23 21:36:30 30464 [ERROR] Slave SQL: Could not execute Write_rows event on table test-rain-17165.machine_services; Cannot add or update a child row: a foreign key constraint fails (`test-rain-17165`.`machine_services`, CONSTRAINT `machine_registration_fk` FOREIGN KEY (`machine_id`) REFERENCES `machines` (`id `) ON DELETE NO ACTION ON UPDATE NO ACTION), Error_code: 1452; handler error HA_ERR_NO_REFERENCED_ROW; the event's master log FIRST, end_log_pos 209, Error_code: 1452
2015-06-23 21:36:30 30464 [Warning] WSREP: RBR event 3 Write_rows apply warning: 151, 2948798
2015-06-23 21:36:30 30464 [ERROR] Slave SQL: Could not execute Write_rows event on table test-rain-17165.machine_services; Cannot add or update a child row: a foreign key constraint fails (`test-rain-17165`.`machine_services`, CONSTRAINT `machine_registration_fk` FOREIGN KEY (`machine_id`) REFERENCES `machines` (`id `) ON DELETE NO ACTION ON UPDATE NO ACTION), Error_code: 1452; handler error HA_ERR_NO_REFERENCED_ROW; the event's master log FIRST, end_log_pos 209, Error_code: 1452
2015-06-23 21:36:30 30464 [Warning] WSREP: RBR event 3 Write_rows apply warning: 151, 2948796
2015-06-23 21:36:30 30464 [Warning] WSREP: Failed to apply app buffer: seqno: 2948798, status: 1 at galera/src/trx_handle.cpp:apply():351 Retrying 4th time
2015-06-23 21:36:30 30464 [ERROR] Slave SQL: Could not execute Write_rows event on table test-rain-17165.machine_services; Cannot add or update a child row: a foreign key constraint fails (`test-rain-17165`.`machine_services`, CONSTRAINT `machine_registration_fk` FOREIGN KEY (`machine_id`) REFERENCES `machines` (`id `) ON DELETE NO ACTION ON UPDATE NO ACTION), Error_code: 1452; handler error HA_ERR_NO_REFERENCED_ROW; the event's master log FIRST, end_log_pos 209, Error_code: 1452
2015-06-23 21:36:30 30464 [Warning] WSREP: RBR event 3 Write_rows apply warning: 151, 2948798

But it doesn't reproduce 100%: we can run a similar test two or three times (each time with a new DB) and only after the fourth time do 2 (or 1) nodes crash.
Also, there are no errors on the first node (where we run the test) and all tests finish successfully.
I tried to look into the audit logs and global logs, but there is nothing. What else can I do? And what do these errors mean?

Percona XtraDB Cluster (PXC): How many nodes do you need?

Latest MySQL Performance Blog posts - June 23, 2015 - 8:12am

A question I often hear when customers want to set up a production PXC cluster is: “How many nodes should we use?”

Three nodes is the most common deployment, but when are more nodes needed? They also ask: “Do we always need to use an even number of nodes?”

This is what we’ll clarify in this post.

This is all about quorum

I explained in a previous post that a quorum vote is held each time one node becomes unreachable. With this vote, the remaining nodes will estimate whether it is safe to keep on serving queries. If quorum is not reached, all remaining nodes will set themselves in a state where they cannot process any query (even reads).

To get the right size for your cluster, the only question you need to answer is: how many nodes can fail simultaneously while leaving the cluster operational?

  • If the answer is 1 node, then you need 3 nodes: when 1 node fails, the two remaining nodes have quorum.
  • If the answer is 2 nodes, then you need 5 nodes.
  • If the answer is 3 nodes, then you need 7 nodes.
  • And so on and so forth.

Remember that group communication is not free, so the more nodes in the cluster, the more expensive group communication will be. That’s why it would be a bad idea to have a cluster with 15 nodes for instance. In general we recommend that you talk to us if you think you need more than 10 nodes.
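The sizing rule above is just arithmetic: to survive f simultaneous failures you need 2f + 1 nodes, because the survivors must be a strict majority. A small Python sketch (the function names are my own, not part of any Percona tool, and this uses simple majority voting, ignoring Galera's weighted votes):

```python
def nodes_needed(max_failures):
    """Minimum cluster size that keeps quorum after max_failures nodes drop."""
    return 2 * max_failures + 1

def has_quorum(alive, total):
    """Simple majority rule: strictly more than half of the nodes must be alive."""
    return alive > total / 2.0

for f in (1, 2, 3):
    n = nodes_needed(f)
    assert has_quorum(n - f, n)          # losing f nodes is survivable
    assert not has_quorum(n - f - 1, n)  # losing one more node is not
    print(f, "failure(s) tolerated ->", n, "nodes")
```

Note that `has_quorum(2, 4)` is False: exactly 50% alive is not a strict majority, which is why a 4th node buys no extra availability.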

What about an even number of nodes?

The recommendations above always specify an odd number of nodes, so is there anything wrong with an even number of nodes? Let’s take a 4-node cluster and see what happens when nodes fail:

  • If 1 node fails, 3 nodes are remaining: they have quorum.
  • If 2 nodes fail, 2 nodes are remaining: they no longer have quorum (remember 50% is NOT quorum).

Conclusion: the availability of a 4-node cluster is no better than the availability of a 3-node cluster, so why bother with a 4th node?

The next question is: is a 4-node cluster less available than a 3-node cluster? Many people think so, specifically after reading this sentence from the manual:

Clusters that have an even number of nodes risk split-brain conditions.

Many people read this as “as soon as one node fails, this is a split-brain condition and the whole cluster stops working”. This is not correct! In a 4-node cluster, you can lose 1 node without any problem, exactly like in a 3-node cluster. It is no better, but no worse.

By the way, the manual is not wrong: the sentence makes sense in its context.

There are actually reasons why you might want an even number of nodes, but we will discuss that topic in the next section.

Quorum with multiple data centers

To provide more availability, spreading nodes in several datacenters is a common practice: if power fails in one DC, nodes are available elsewhere. The typical implementation is 3 nodes in 2 DCs:

Notice that while this setup can handle any single-node failure, it cannot handle all single-DC failures: if we lose DC1, 2 nodes leave the cluster and the remaining node does not have quorum. You can try with 4, 5 or any number of nodes, and it is easy to convince yourself that in all cases, losing one DC can make the whole cluster stop operating.
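You can also convince yourself of this mechanically. A hedged Python sketch (again using simple majority voting, ignoring Galera's weighted segments; the function name is my own) that checks whether losing any one datacenter still leaves quorum:

```python
def survives_any_dc_loss(nodes_per_dc):
    """True if losing any single datacenter leaves a strict majority of nodes."""
    total = sum(nodes_per_dc)
    return all(total - lost > total / 2.0 for lost in nodes_per_dc)

print(survives_any_dc_loss([2, 1]))     # 3 nodes in 2 DCs -> False
print(survives_any_dc_loss([2, 2]))    # 4 nodes in 2 DCs -> False
print(survives_any_dc_loss([2, 2, 1]))  # 5 nodes in 3 DCs -> True
```

With 2 DCs, one of them necessarily holds at least half of the nodes, so its loss always breaks quorum; with 3 DCs no single DC holds a majority.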

If you want to be resilient to a single DC failure, you must have 3 DCs, for instance like this:

Other considerations

Sometimes other factors will make you choose a higher number of nodes. For instance, look at these requirements:

  • All traffic is directed to a single node.
  • The application should be able to fail over to another node in the same datacenter if possible.
  • The cluster must keep operating even if one datacenter fails.

The following architecture is an option (and yes, it has an even number of nodes!):

Conclusion

Regarding availability, it is easy to estimate the number of nodes you need for your PXC cluster. But node failures are not the only aspect to consider: resilience to a datacenter failure can, for instance, also influence the number of nodes you will be using.

The post Percona XtraDB Cluster (PXC): How many nodes do you need? appeared first on MySQL Performance Blog.
