
QAN does not remove or stop individual servers

Latest Forum Posts - 2 hours 5 min ago
Monitoring queries is such a great feature and I really like it. That said, it can be clunky to administer. I am on RDS, which means it is easy to add instances, but deleting QAN for instances about to be decommissioned is challenging. I took notes, tried many things, and eventually found the UUID associated with the QAN instance and removed it. Here are the details:

pmm-admin Info
pmm-admin 1.2.0 (both server & client)

I did the following:
1. pmm-admin stop mysql:queries myrds2
2. pmm-admin list shows ALL mysql:queries had stopped, including my main RDS.
3. pmm-admin start mysql:queries myrds1
4. pmm-admin list shows ALL mysql:queries had started, including myrds2.
5. I again stopped queries: pmm-admin stop mysql:queries
6. ran pmm-admin remove mysql:queries myrds2
7. ran pmm-admin start mysql:queries myrds1, and mysql:queries for myrds2 was STILL listed.
8. systemctl stop pmm-mysql-queries-0
9. systemctl start pmm-mysql-queries-0
10. pmm-admin list showed ALL mysql:queries had started, including myrds2.
11. went into PMM QAN & the drop-down showed me the UUID of myrds2.
12. pmm-admin stop mysql:queries
13. systemctl stop pmm-mysql-queries-0
14. sudo rm /usr/local/percona/qan-agent/instance/<myrds2 UUID>.json
15. ran systemctl start pmm-mysql-queries-0
16. pmm-admin start mysql:queries
17. pmm-admin list showed that the myrds2 was no longer in the list.

I had to remove the <UUID>.json file from the qan-agent instance directory to get that RDS instance to stop collecting queries. I found it by running a find for the QAN UUID of the instance I wanted gone. It seems like remove should delete that UUID file, or at least rename it so it won't be reused by QAN.

I'm putting this here in case someone else is having similar problems.

What is MySQL Partitioning?

In this blog, we’ll quickly look at MySQL partitioning.

Partitioning is a way in which a database (MySQL in this case) splits its actual data into separate tables that are still treated as a single table by the SQL layer.

When partitioning, it’s a good idea to find a natural partition key. You want to ensure that table lookups go to the correct partition or group of partitions. This means that all SELECT, UPDATE and DELETE statements should include that column in the WHERE clause. Otherwise, the storage engine does a scatter-gather and queries ALL partitions in a UNION that is not concurrent.

Generally, you must add the partition key to the primary key along with the auto-increment column, e.g., PRIMARY KEY (part_id, id). If you don’t have well-designed, small columns for this composite primary key, it can enlarge all of your secondary indexes.

You can partition by range or hash. Range is great because you have groups of known IDs in each table, and it helps when querying across partition IDs. This still can create hotspots in the newest partition, as all new inserts go there. Partitioning by hash “load balances” the table, and allows you to write to partitions more concurrently. This makes range queries on the partition key a bad idea.
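
As a rough illustration (the table, column and partition names here are made up), a range-partitioned table with the composite primary key described above might look like this:

CREATE TABLE orders (
  id       BIGINT NOT NULL AUTO_INCREMENT,
  part_id  INT NOT NULL,      -- natural partition key, e.g. a region or tenant id
  created  DATETIME NOT NULL,
  PRIMARY KEY (part_id, id),  -- the partition key must be part of every unique key
  KEY (id)                    -- InnoDB requires the auto-increment column to lead some index
) ENGINE=InnoDB
PARTITION BY RANGE (part_id) (
  PARTITION p0   VALUES LESS THAN (100),
  PARTITION p1   VALUES LESS THAN (200),
  PARTITION pmax VALUES LESS THAN MAXVALUE
);

-- Includes the partition key, so the optimizer prunes to a single partition:
SELECT * FROM orders WHERE part_id = 42 AND id = 1001;

-- Omits the partition key, so every partition is scanned:
SELECT * FROM orders WHERE id = 1001;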

In MySQL 5.7, partitioning became native to the storage engine and deprecated the old method, where MySQL itself had to handle the partitions. This means InnoDB partitions (and a larger number of partitions) are a better choice than in the past.

As with all features and recommendations, this only makes sense if it helps your data and workload!

Percona Server for MongoDB 3.2.15-3.5 is Now Available

Latest MySQL Performance Blog posts - July 26, 2017 - 9:42am

Percona announces the release of Percona Server for MongoDB 3.2.15-3.5 on July 26, 2017. Download the latest version from the Percona web site or the Percona Software Repositories.

Percona Server for MongoDB is an enhanced, open-source, fully compatible, highly scalable, zero-maintenance downtime database that supports the MongoDB v3.2 protocol and drivers. It extends MongoDB with MongoRocks, Percona Memory Engine, and PerconaFT storage engine, as well as enterprise-grade features like External Authentication, Audit Logging, Profiling Rate Limiting, and Hot Backup at no extra cost. The software requires no changes to MongoDB applications or code.

NOTE: We deprecated the PerconaFT storage engine. It will not be available in future releases.

This release is based on MongoDB 3.2.15 and does not include any additional changes.

Percona Server for MongoDB 3.2.15-3.5 release notes are available in the official documentation.

Percona audit log logging root events nonstop in XtraDB cluster 5.6.36-82.0-56-log

Latest Forum Posts - July 26, 2017 - 8:51am
Hi, I am having an issue after installing the Percona audit plugin in XtraDB Cluster 5.6.36-82.0-56-log: right after installing it, audit.log starts logging root events every second and the file size quickly becomes very large. If I set the variable to "exclude" root@localhost then it stops, but I do not want to do that; I want to keep monitoring root as well. Is there a reason why it does that (any inter-cluster communication happening)? Please help.
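
For context, a sketch of the setup described above, assuming the Percona Server audit_log plugin and its runtime variables (the exclusion is the workaround the poster wants to avoid):

INSTALL PLUGIN audit_log SONAME 'audit_log.so';              -- logging of root events starts immediately after this
SET GLOBAL audit_log_exclude_accounts = 'root@localhost';    -- the workaround: stops logging root, which is not desired here
SHOW GLOBAL VARIABLES LIKE 'audit_log%';                     -- review the current audit plugin settings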

New cluster install, first cluster node doesn't show as

Latest Forum Posts - July 26, 2017 - 8:15am
I've been trying to install Percona onto a 3-node cluster. I can't even get the first node to show a cluster size of 1:



Code:
mysql> show status like 'wsrep%';
+--------------------------+----------------------+
| Variable_name            | Value                |
+--------------------------+----------------------+
| wsrep_cluster_conf_id    | 18446744073709551615 |
| wsrep_cluster_size       | 0                    |
| wsrep_cluster_state_uuid |                      |
| wsrep_cluster_status     | Disconnected         |
| wsrep_connected          | OFF                  |
| wsrep_local_bf_aborts    | 0                    |
| wsrep_local_index        | 18446744073709551615 |
| wsrep_provider_name      |                      |
| wsrep_provider_vendor    |                      |
| wsrep_provider_version   |                      |
| wsrep_ready              | ON                   |
+--------------------------+----------------------+
11 rows in set (0.00 sec)
running:
sudo /etc/init.d/mysql bootstrap-pxc

produces this:
[ ok ] Bootstrapping Percona XtraDB Cluster database server: mysqld already running.

So I have no idea why the cluster size is not 1.



What is innodb_autoinc_lock_mode and why should I care?

Latest MySQL Performance Blog posts - July 26, 2017 - 8:15am

In this blog post, we’ll look at what innodb_autoinc_lock_mode is and how it works.

I was recently discussing innodb_autoinc_lock_mode with some colleagues to address issues at a company I was working with. This variable defines the lock mode to use for generating auto-increment values. The permissible values are 0, 1 or 2 (for “traditional”, “consecutive” or “interleaved” lock mode, respectively). In most cases, this variable is set to the default of 1.

We recommend setting it to 2 when BINLOG_FORMAT=ROW. With interleaved mode, INSERT statements don’t use the table-level AUTO-INC lock and multiple statements can execute at the same time. Setting it to 0 or 1 can cause a huge hit in concurrency for certain workloads.

Interleaved (or 2) is the fastest and most scalable lock mode, but it is not safe if you use STATEMENT-based replication or recovery scenarios where SQL statements are replayed from the binary log. Another consideration (which you shouldn’t rely on anyway) is that IDs might not be consecutive with a lock mode of 2. That means you could do three inserts and expect IDs 100, 101 and 102, but end up with 100, 102 and 104. For most people, this isn’t a huge deal.

If you are only doing simple inserts, this might not help you. I did a sysbench test on MySQL 5.7 in Amazon RDS with 100 threads and found no difference in performance or throughput between lock modes 1 and 2. It helps the most when the number of rows to be inserted can’t be determined in advance, such as with INSERT INTO…SELECT statements.

You can find a longer form article in the manual, but I highly recommend setting this value to 2 if you are not using STATEMENT-based replication.
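
As a quick reference (a sketch, not a one-size-fits-all recommendation): innodb_autoinc_lock_mode is not a dynamic variable, so from SQL you can only verify it; changing it means editing the server configuration and restarting.

-- Check the current settings
SHOW GLOBAL VARIABLES LIKE 'innodb_autoinc_lock_mode';
SHOW GLOBAL VARIABLES LIKE 'binlog_format';

-- innodb_autoinc_lock_mode is read-only at runtime. To change it, set the
-- following under [mysqld] in my.cnf and restart the server:
--   innodb_autoinc_lock_mode = 2
--   binlog_format            = ROW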

Webinar Thursday July 27, 2017: Database Backup and Recovery Best Practices (with a Focus on MySQL)

Latest MySQL Performance Blog posts - July 25, 2017 - 12:46pm

Join Percona Architect Manjot Singh as he presents Database Backup and Recovery Best Practices (with a Focus on MySQL) on Thursday, July 27, 2017 at 11:00 am PDT / 2:00 pm EDT (UTC-7).

Register Now

In the case of a failure, do you know how long it will take to restore your database? Do you know how old the backup will be? In this presentation, we will cover the basics of best practices for backup, restoration and business continuity. Don’t put your company on the line due to bad data retention and backup policies.

Register for the webinar here.

Manjot Singh, Architect

Manjot Singh is an Architect with Percona in California. He loves to learn about new technologies and apply them to real-world problems. Manjot is a veteran of startup and Fortune 500 enterprise companies alike, with a few years spent in government, education and hospital IT. Now he consults for Percona with companies around the world on many interesting problems.

QAN not picking up change to long_query_time

Latest Forum Posts - July 25, 2017 - 6:21am
I have decreased long_query_time from the default of 10 to 1 second. I have confirmed that queries > 1 second are in the slow log. QAN does not show queries between 1 and 10 seconds, but it will still display queries > 10 seconds. What do I need to do for QAN to pick up these queries?

I am using PMM 1.2.0. The database is MariaDB 10.1.21.

Thanks - Chris

Percona XtraBackup 2.4.8 is Now Available

Latest Forum Posts - July 25, 2017 - 12:48am
Percona announces the GA release of Percona XtraBackup 2.4.8 on July 24, 2017. You can download it from our download site and apt and yum repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backups.

New Features:

Bugs Fixed:

  • xtrabackup would hang with a Waiting for master thread to be suspended message when a backup was being prepared. Bug fixed #1671437.
  • xtrabackup would fail to prepare the backup with a 6th page is not initialized message in case the server didn’t properly initialize the page. Bug fixed #1671722.
  • xbstream could run out of file descriptors while extracting a backup that contains many tables. Bug fixed #1690823.
  • When a table was created with the DATA DIRECTORY option, xtrabackup would back up the .frm and .isl files, but not the .ibd file. Due to the missing .ibd files, the backup could not be restored. Bug fixed #1701736.
  • Percona XtraBackup incorrectly determined use of master_auto_position on a slave, and thus generated an invalid xtrabackup_slave_info file. Bug fixed #1705193.
  • Percona XtraBackup will now print a warning if it encounters an unsupported storage engine. Bug fixed #1394493.
  • Percona XtraBackup would crash while backing up MariaDB 10.2.x with --ftwrl-* options. Bug fixed #1704636.
  • xtrabackup --slave-info didn’t write the correct information into the xtrabackup_slave_info file when multi-source replication was used. Bug fixed #1551634.
  • Along with the xtrabackup_checkpoints file, xtrabackup now copies the xtrabackup_info file into the directory specified by the --extra-lsndir option. Bug fixed #1600656.
  • GTID position was not recorded when the --binlog-info option was set to AUTO. Bug fixed #1651505.
Release notes with all the bugfixes for Percona XtraBackup 2.4.8 are available in our online documentation. Please report any bugs to the launchpad bug tracker.

Percona XtraBackup 2.3.9 is Now Available

Latest Forum Posts - July 25, 2017 - 12:47am
Percona announces the release of Percona XtraBackup 2.3.9 on July 24, 2017. Downloads are available from our download site or Percona Software Repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backups.

This release is the current GA (Generally Available) stable release in the 2.3 series.

New Features:

Bugs Fixed:

  • Percona XtraBackup would crash when being prepared if index compaction was enabled. Bug fixed #1192834.
  • Fixed build failure on Debian Stretch by adding support for building with OpenSSL 1.1. Bug fixed #1678947.
  • xbstream could run out of file descriptors while extracting a backup that contains many tables. Bug fixed #1690823.
  • Percona XtraBackup incorrectly determined use of master_auto_position on a slave, and thus generated an invalid xtrabackup_slave_info file. Bug fixed #1705193.
  • Percona XtraBackup would crash while backing up MariaDB 10.2.x with --ftwrl-* options. Bug fixed #1704636.
  • Along with the xtrabackup_checkpoints file, xtrabackup now copies the xtrabackup_info file into the directory specified by the --extra-lsndir option. Bug fixed #1600656.
  • GTID position was not recorded when the --binlog-info option was set to AUTO. Bug fixed #1651505.
Release notes with all the bugfixes for Percona XtraBackup 2.3.9 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

Percona XtraBackup 2.4.8 is Now Available

Latest MySQL Performance Blog posts - July 24, 2017 - 10:59am

Percona announces the GA release of Percona XtraBackup 2.4.8 on July 24, 2017. You can download it from our download site and apt and yum repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backups.

New Features:

Bugs Fixed:
  • xtrabackup would hang with a Waiting for master thread to be suspended message when a backup was being prepared. Bug fixed #1671437.
  • xtrabackup would fail to prepare the backup with a 6th page is not initialized message in case the server didn’t properly initialize the page. Bug fixed #1671722.
  • xbstream could run out of file descriptors while extracting a backup that contains many tables. Bug fixed #1690823.
  • When a table was created with the DATA DIRECTORY option, xtrabackup would back up the .frm and .isl files, but not the .ibd file. Due to the missing .ibd files, the backup could not be restored. Bug fixed #1701736.
  • Percona XtraBackup incorrectly determined use of master_auto_position on a slave, and thus generated an invalid xtrabackup_slave_info file. Bug fixed #1705193.
  • Percona XtraBackup will now print a warning if it encounters an unsupported storage engine. Bug fixed #1394493.
  • Percona XtraBackup would crash while backing up MariaDB 10.2.x with --ftwrl-* options. Bug fixed #1704636.
  • xtrabackup --slave-info didn’t write the correct information into the xtrabackup_slave_info file when multi-source replication was used. Bug fixed #1551634.
  • Along with the xtrabackup_checkpoints file, xtrabackup now copies the xtrabackup_info file into the directory specified by the --extra-lsndir option. Bug fixed #1600656.
  • GTID position was not recorded when the --binlog-info option was set to AUTO. Bug fixed #1651505.

Release notes with all the bugfixes for Percona XtraBackup 2.4.8 are available in our online documentation. Please report any bugs to the launchpad bug tracker.

Percona XtraBackup 2.3.9 is Now Available

Latest MySQL Performance Blog posts - July 24, 2017 - 10:35am

Percona announces the release of Percona XtraBackup 2.3.9 on July 24, 2017. Downloads are available from our download site or Percona Software Repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backups.

This release is the current GA (Generally Available) stable release in the 2.3 series.

New Features:

Bugs Fixed:
  • Percona XtraBackup would crash when being prepared if index compaction was enabled. Bug fixed #1192834.
  • Fixed build failure on Debian Stretch by adding support for building with OpenSSL 1.1. Bug fixed #1678947.
  • xbstream could run out of file descriptors while extracting a backup that contains many tables. Bug fixed #1690823.
  • Percona XtraBackup incorrectly determined use of master_auto_position on a slave, and thus generated an invalid xtrabackup_slave_info file. Bug fixed #1705193.
  • Percona XtraBackup would crash while backing up MariaDB 10.2.x with --ftwrl-* options. Bug fixed #1704636.
  • Along with the xtrabackup_checkpoints file, xtrabackup now copies the xtrabackup_info file into the directory specified by the --extra-lsndir option. Bug fixed #1600656.
  • GTID position was not recorded when the --binlog-info option was set to AUTO. Bug fixed #1651505.

Release notes with all the bugfixes for Percona XtraBackup 2.3.9 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

unable to restore due to wrong from_lsn reference

Latest Forum Posts - July 23, 2017 - 12:37pm
Unable to restore due to a wrong from_lsn reference when an incremental backup is scheduled directly following a full backup / differential backup.

The incremental backup is referring to the LSN of the incremental backup taken before the full backup. Therefore the chain is broken and we cannot restore to a point in time after the full backup:


STEP 1
======
So I'm trying to restore a full backup with 1 incremental backup


in this case : restore
| 2,735 | F | 1,769 | 3,976,126,388 | 2017-07-22 22:05:03 | XXXXX-mysql-full-0552 |
| 2,736 | I | 2 | 13,599,028 | 2017-07-22 22:29:43 | XXXXX-mysql-inc-0238 |


*restore
Automatically selected Catalog: XXXXXtestCatalog
Using Catalog "XXXXXtestCatalog"

First you select one or more JobIds that contain files
to be restored. You will be presented several methods
of specifying the JobIds. Then you will be allowed to
select which files from those JobIds are to be restored.

To select the JobIds, you have the following choices:
1: List last 20 Jobs run
2: List Jobs where a given File is saved
3: Enter list of comma separated JobIds to select
4: Enter SQL list command
5: Select the most recent backup for a client
6: Select backup for a client before a specified time
7: Enter a list of files to restore
8: Enter a list of files to restore before a specified time
9: Find the JobIds of the most recent backup for a client
10: Find the JobIds for a backup for a client before a specified time
11: Enter a list of directories to restore for found JobIds
12: Select full restore to a specified Job date
13: Cancel
Select item: (1-13): 5
Defined Clients:
1: XXXXXdbscluster01
2: XXXXXtestbardir01
3: XXXXXtestbarsd01
4: XXXXXtestdc01
5: XXXXXtestsrv01
6: XXXXXtestvmon01
7: XXXXXtestws10
Select the Client (1-7): 1
The defined FileSet resources are:
1: XXXXX-mysql-fileset
2: ubuntu-standard-fileset
Select FileSet resource (1-2): 1
+-------+-------+----------+---------------+---------------------+----------------------+
| JobId | Level | JobFiles | JobBytes | StartTime | VolumeName |
+-------+-------+----------+---------------+---------------------+----------------------+
| 2,735 | F | 1,769 | 3,976,126,388 | 2017-07-22 22:05:03 | XXXXX-mysql-full-0552 |
| 2,736 | I | 2 | 13,599,028 | 2017-07-22 22:29:43 | XXXXX-mysql-inc-0238 |
| 2,738 | I | 2 | 14,546,614 | 2017-07-22 23:20:03 | XXXXX-mysql-inc-0239 |
| 2,746 | I | 2 | 48,484,089 | 2017-07-23 00:20:03 | XXXXX-mysql-inc-0240 |

...
+-------+-------+----------+---------------+---------------------+----------------------+
You have selected the following JobIds: 2735,2736,2738,2746,2747,2748,2749,2750,2751,2752, 2753,2754,2755,2756,2757,2759,2760,2761,2762,2763

Building directory tree for JobId(s) 2735,2736,2738,2746,2747,2748,2749,2750,2751,2752, 2753,2754,2755,2756,2757,2759,2760,2761,2762,2763 ... +++++++++++++++++++++++++++++++++++++++++++++
1,644 files inserted into the tree.

You are now entering file selection mode where you add (mark) and
remove (unmark) files to be restored. No files are initially added, unless
you used the "all" keyword on the command line.
Enter "done" to leave this mode.

cwd is: /
$ cd _percona
cwd is: /_percona/
$ ls
xbstream.0000002735
xbstream.0000002736
xbstream.0000002738
...
$ mark xbstream.0000002735
1 file marked.
$ mark xbstream.0000002736
1 file marked.
$ done
Bootstrap records written to /var/lib/bareos/XXXXXtestbardir01-dir.restore.3.bsr




--> restore was successful

STEP 2: prepare and recover
=======================
--> the error occurs when trying to recover the incremental backup taken after the full backup



### restore the full backup

xtrabackup --prepare --apply-log-only --target-dir=/mnt/sharedbackup/mysql/restore/_percona/2764/00000000000000000000_00000003805644266034_0000002735

xtrabackup version 2.4.7 based on MySQL server 5.7.13 Linux (x86_64) (revision id: 6f7a799)
xtrabackup: cd to /mnt/sharedbackup/mysql/restore/_percona/2764/00000000000000000000_00000003805644266034_0000002735/
xtrabackup: This target seems to be not prepared yet.
InnoDB: Number of pools: 1
xtrabackup: xtrabackup_logfile detected: size=8388608, start_lsn=(3805643872752)
xtrabackup: using the following InnoDB configuration for recovery:
xtrabackup: innodb_data_home_dir = .
xtrabackup: innodb_data_file_path = ibdata1:10M:autoextend
xtrabackup: innodb_log_group_home_dir = .
xtrabackup: innodb_log_files_in_group = 1
xtrabackup: innodb_log_file_size = 8388608
xtrabackup: using the following InnoDB configuration for recovery:
xtrabackup: innodb_data_home_dir = .
xtrabackup: innodb_data_file_path = ibdata1:10M:autoextend
xtrabackup: innodb_log_group_home_dir = .
xtrabackup: innodb_log_files_in_group = 1
xtrabackup: innodb_log_file_size = 8388608
xtrabackup: Starting InnoDB instance for recovery.
xtrabackup: Using 104857600 bytes for buffer pool (set by --use-memory parameter)
InnoDB: PUNCH HOLE support available
InnoDB: Mutexes and rw_locks use GCC atomic builtins
InnoDB: Uses event mutexes
InnoDB: GCC builtin __sync_synchronize() is used for memory barrier
InnoDB: Compressed tables use zlib 1.2.3.4
InnoDB: Number of pools: 1
InnoDB: Not using CPU crc32 instructions
InnoDB: Initializing buffer pool, total size = 100M, instances = 1, chunk size = 100M
InnoDB: Completed initialization of buffer pool
InnoDB: page_cleaner coordinator priority: -20
InnoDB: Highest supported file format is Barracuda.
InnoDB: Log scan progressed past the checkpoint lsn 3805643872752
InnoDB: Doing recovery: scanned up to log sequence number 3805644266034 (5%)
InnoDB: Doing recovery: scanned up to log sequence number 3805644266034 (5%)
InnoDB: Database was not shutdown normally!
InnoDB: Starting crash recovery.
InnoDB: Doing recovery: scanned up to log sequence number 3805644266034 (5%)
InnoDB: Starting an apply batch of log records to the database...
InnoDB: Progress in percent: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
InnoDB: Apply batch completed
InnoDB: xtrabackup: Last MySQL binlog file position 226516, file name /var/mysql_binlogs/XXXXXdbs06_mysql_bin_log.028078
InnoDB: page_cleaner: 1000ms intended loop took 49886ms. The settings might not be optimal. (flushed=0 and evicted=0, during the time.)
InnoDB: xtrabackup: Last MySQL binlog file position 226516, file name /var/mysql_binlogs/XXXXXdbs06_mysql_bin_log.028078

xtrabackup: starting shutdown with innodb_fast_shutdown = 1
InnoDB: Starting shutdown...
InnoDB: Shutdown completed; log sequence number 3805644266043
InnoDB: Number of pools: 1
170723 17:37:14 completed OK!


### restore the first incremental after the full backup
xtrabackup --prepare --target-dir=/mnt/sharedbackup/mysql/restore/_percona/2764/00000000000000000000_00000003805644266034_0000002735 --incremental-dir=/mnt/sharedbackup/mysql/restore/_percona/2764/00000003805643369005_00000003805644499741_0000002736


root@XXXXXdbs06:/mnt/sharedbackup/mysql/restore/_percona/2764# xtrabackup --prepare --target-dir=/mnt/sharedbackup/mysql/restore/_percona/2764/00000000000000000000_00000003805644266034_0000002735 --incremental-dir=/mnt/sharedbackup/mysql/restore/_percona/2764/00000003805643369005_00000003805644499741_0000002736
xtrabackup version 2.4.7 based on MySQL server 5.7.13 Linux (x86_64) (revision id: 6f7a799)
incremental backup from 3805643369005 is enabled.
xtrabackup: cd to /mnt/sharedbackup/mysql/restore/_percona/2764/00000000000000000000_00000003805644266034_0000002735/
xtrabackup: This target seems to be already prepared with --apply-log-only.
xtrabackup: error: This incremental backup seems not to be proper for the target.
xtrabackup: Check 'to_lsn' of the target and 'from_lsn' of the incremental.


--> this restore of the incremental backup should refer to the LSN of the full backup of job 2735, but according to the bareos logs the incremental backup job 2736 is referring to the LSN of job 2734 (which means the job before the full backup)


--> in the bareos logs I do see that the incremental backup launched after the full backup is referring to the LSN of the incremental backup taken just before the full backup:


-> see attachment: bareos.log

Can I use innobackupex options for xtrabackup

Latest Forum Posts - July 21, 2017 - 12:04pm
Hi,

I am testing MySQL backups using Percona XtraBackup. In the documentation, I have found two tools: innobackupex and xtrabackup. Compared to xtrabackup, innobackupex has more command-line options. Can I use some of the innobackupex options with xtrabackup to make backups more flexible? If yes, what can I use and what can't I use?

Below are the examples I would like to use with xtrabackup:

--port
--host
--history
--incremental-history-name

Thanks,
Vishnu

Faster Node Rejoins with Improved IST Performance

Latest MySQL Performance Blog posts - July 21, 2017 - 9:01am

In this blog, we’ll look at how improvements to Percona XtraDB Cluster improved IST performance.

Introduction

Starting with version 5.7.17-29.20, Percona XtraDB Cluster significantly improved performance. Depending on the workload, the increase in throughput is in the range of 3-10x (more details here). These optimization fixes also helped improve IST (Incremental State Transfer) performance. This blog is aimed at studying the IST impact.

IST

IST stands for Incremental State Transfer. When a node leaves the cluster for a short period of time and then rejoins, it needs to catch up with the cluster state. As part of this sync process, an existing node of the cluster (the DONOR) donates the missing write-sets to the rejoining node (the JOINER). In short, the flow involves applying the missing write-sets on the JOINER, just as happens during active workload replication.

Percona XtraDB Cluster / Galera can already apply write-sets in parallel using multiple applier threads. Unfortunately, due to commit contention, the commit action was serialized. This was fixed in the above Percona XtraDB Cluster release, allowing commits to proceed in parallel.

IST uses the same path for applying write-sets, except that it is more like a batch operation.

IST Performance

Let’s look at IST performance before and now.

Setup

  1. A two-node cluster (node-1 and node-2), with gcache configured large enough to avoid purging, since we need IST
  2. Start a workload against node-1 for 30 seconds
  3. Shut down node-2
  4. Start a workload that performs 4M requests against node-1. The workload produces ~3.5M write-sets that are cached in gcache and used later for IST
  5. Start node-2 with N applier threads
  6. Wait until IST is done
  7. Repeat steps 3-6 with different values of N

Observations:

  • IST is 4x faster with PXC 5.7.17 (compared to previous releases)
  • Improved performance means a quicker node rejoin, and an overall increase in cluster productivity, as the joiner node is available to process the workload sooner
Conclusion

Percona XtraDB Cluster 5.7.17 significantly improved IST performance. A faster rejoin of the node effectively means better cluster productivity and more flexibility in planning maintenance windows. So what are you waiting for? Upgrade to Percona XtraDB Cluster 5.7.17 or the latest Percona XtraDB Cluster 5.7 release and experience the power!

pmp-check-mysql-status vs. percona-xtradb-cluster-57 = ERROR 1682

Latest Forum Posts - July 20, 2017 - 8:18pm
Hi, I am not able to get results from monitoring. How can I fix it?

# /usr/lib64/nagios/plugins/pmp-check-mysql-status -x wsrep_cluster_size -C '<=' -w 2 -c 1
ERROR 1682 (HY000) at line 1: Native table 'performance_schema'.'global_variables' has the wrong structure
UNK could not get MySQL status/variables.

# dpkg -l|grep perc
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
ii percona-nagios-plugins 1.1.7-2.xenial all Percona Monitoring Plugins for Nagios
ii percona-release 0.1-4.xenial all Package to install Percona gpg key and APT repo
ii percona-xtrabackup-24 2.4.7-2.xenial amd64 Open source backup tool for InnoDB and XtraDB
hi percona-xtradb-cluster-57 5.7.18-29.20-1.xenial amd64 Percona XtraDB Cluster with Galera
hi percona-xtradb-cluster-client-5.7 5.7.18-29.20-1.xenial amd64 Percona XtraDB Cluster database client binaries
hi percona-xtradb-cluster-common-5.7 5.7.18-29.20-1.xenial amd64 Percona XtraDB Cluster database common files (e.g. /etc/mysql/my.cnf)
hi percona-xtradb-cluster-server-5.7 5.7.18-29.20-1.xenial amd64 Percona XtraDB Cluster database server binaries

Yes, I had run mysql_upgrade.

CPU Usage display unusual

Latest Forum Posts - July 20, 2017 - 6:39pm
Display load is slow and IO wait is not normal

Where Do I Put ProxySQL?

Latest MySQL Performance Blog posts - July 20, 2017 - 11:57am

In this blog post, we’ll look at how to deploy ProxySQL.

ProxySQL is a high-performance proxy, currently for MySQL and its forks (like Percona Server for MySQL and MariaDB). It acts as an intermediary for client requests seeking resources from the database. It was created for DBAs by René Cannaò, as a means of solving complex replication topology issues. When bringing up ProxySQL with my clients, I always get questions about where it fits into the architecture. This post should clarify that.

Before continuing, you might want to know why you should use this software. The features that are of interest include:

  • MySQL firewall
  • Connection pooling
  • Shard lookup and automated routing
  • Ability to read/write split
  • Automatically switch to another master in case of active master failure
  • Query cache
  • Performance metrics
  • Other neat features!
Initial Configuration

In general, you install it on nodes that do not have a running MySQL database. You manage it via the MySQL command line on another port, usually 6032. Once it is started, the configuration in /etc is not used, and you do everything within the CLI. The backend database is actually SQLite, and the db file is stored in /var/lib/proxysql.

There are many guides out there on initializing and installing it, so I won’t cover those details here. It can be as simple as:

apt-get install proxysql
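
As a rough sketch of what the initial setup looks like (the admin credentials shown are ProxySQL's defaults, while the hostname, user and hostgroup number are made up), you connect to the admin interface with a MySQL client on port 6032, edit the configuration tables, then load and persist them:

-- mysql -u admin -padmin -h 127.0.0.1 -P 6032

-- Register a backend MySQL server in hostgroup 10
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (10, '10.0.0.5', 3306);

-- Add an application user that ProxySQL will accept and forward to hostgroup 10
INSERT INTO mysql_users (username, password, default_hostgroup) VALUES ('appuser', 'secret', 10);

-- Activate the changes and persist them to the SQLite db in /var/lib/proxysql
LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL USERS TO RUNTIME;   SAVE MYSQL USERS TO DISK;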

ProxySQL Architecture

While most first think to install ProxySQL on a standalone node between the application and database, this has the potential to affect query performance due to the additional latency from network hops.

 

To have minimal impact on performance (and avoid the additional network hop), many recommend installing ProxySQL on the application servers. The application then connects to ProxySQL (acting as a MySQL server) on localhost using a Unix domain socket, avoiding extra latency. ProxySQL then uses its routing rules to reach out and talk to the actual MySQL servers, with its own connection pooling. The application doesn’t have any idea what happens beyond its connection to ProxySQL.

Reducing Your Network Attack Surface

Another consideration is reducing your network attack surface. This means attempting to control all of the possible vulnerabilities in your network’s hardware and software that are accessible to unauthenticated users.

Percona generally suggests that you put a ProxySQL instance on each application host, like in the second image above. This suggestion is certainly valid for reducing latency in your database environment (by limiting network jumps). But while this is good for performance, it can be bad for security.

Every instance must be able to talk to:

  • Every master
  • Every slave

As you can imagine, this is a security nightmare. With every instance, you have x many more connections spanning your network. That’s x many more connections an attacker might exploit.

Instead, it can be better to have one or more ProxySQL instances that are between your application and MySQL servers (like the first image above). This provides a reasonable DMZ-type setup that prevents opening too many connections across the network.

That said, both architectures are valid production configurations – depending on your requirements.

problem replicating my PXC 5.7 on an AWS instance

Latest Forum Posts - July 20, 2017 - 7:51am
Good morning guys

I am having a problem replicating my PXC 5.7 on an AWS instance.

I've tried replicating over the internet and over the VPC; the problems continue.

I cannot get past these timeout errors.

Does anyone have any ideas ?

Follow the logs.


Packets Pings
Loss% Snt Last Avg Best Wrst StDev
0.5% 3662 3.8 3.4 3.1 266.1 6.1

Transfer is at 15Mb.

Servers without firewall

### joiner

/var/lib/mysql# du -h
1.5G .


### joiner log

# service mysql start
2017-07-20T13:39:01.620686Z 2 [Note] WSREP: State gap can't be serviced using IST. Switching to SST
2017-07-20T13:39:01.627005Z 0 [Note] WSREP: Member 0.0 (ip-x.x.x.x) requested state transfer from '*any*'. Selected 1.0 (teste-mha-01)(SYNCED) as donor.
2017-07-20T13:39:01.627024Z 0 [Note] WSREP: Shifting PRIMARY -> JOINER (TO: 55690)
2017-07-20T13:39:01.627046Z 2 [Note] WSREP: Requesting state transfer: success, donor: 1
2017-07-20T13:39:01.627070Z 2 [Note] WSREP: GCache history reset: old(00000000-0000-0000-0000-000000000000:0) -> new(a8d37d7e-5c3b-11e7-9fbf-36d50cc17d69:55690)
2017-07-20T13:39:04.034731Z 0 [Note] WSREP: (c9e807ea, 'tcp://0.0.0.0:4567') connection to peer c9e807ea with addr tcp://x.x.x.x:4567 timed out, no messages seen in PT3S
2017-07-20T13:39:04.534737Z 0 [Note] WSREP: (c9e807ea, 'tcp://0.0.0.0:4567') turning message relay requesting off
Job for mysql.service failed because a timeout was exceeded. See "systemctl status mysql.service" and "journalctl -xe" for details.
root@ip-x.x.x.x:/var/lib/mysql# systemctl status mysql.service
● mysql.service - LSB: Start and stop the mysql (Percona XtraDB Cluster) daemon
Loaded: loaded (/etc/init.d/mysql; bad; vendor preset: enabled)
Active: failed (Result: timeout) since Thu 2017-07-20 13:44:00 UTC; 8min ago
Docs: man:systemd-sysv-generator(8)
Process: 1211 ExecStart=/etc/init.d/mysql start (code=killed, signal=TERM)
Tasks: 15
Memory: 1.5G
CPU: 26.864s
CGroup: /system.slice/mysql.service
├─1253 /bin/sh /usr/bin/mysqld_safe
├─2243 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --user=mysql --wsrep-provider=/usr/lib/libgalera_smm.so --log-error=/var/log/mysql/mysql-error.log --open-files-limit=10
├─2254 sh -c wsrep_sst_rsync --role 'joiner' --address 'x.x.x.x' --datadir '/var/lib/mysql/' --defaults-file '/etc/mysql/my.cnf' --defaults-group-suffix '' --parent '2243' ''
├─2255 /bin/bash -ue /usr/bin/wsrep_sst_rsync --role joiner --address x.x.x.x --datadir /var/lib/mysql/ --defaults-file /etc/mysql/my.cnf --defaults-group-suffix --parent 2243
├─2339 rsync --daemon --no-detach --port 4444 --config /var/lib/mysql//rsync_sst.conf
├─4403 rsync --daemon --no-detach --port 4444 --config /var/lib/mysql//rsync_sst.conf
├─4409 rsync --daemon --no-detach --port 4444 --config /var/lib/mysql//rsync_sst.conf
└─6973 sleep 1

Jul 20 13:45:07 ip-x.x.x.x rsyncd[4398]: receiving file list
Jul 20 13:45:07 ip-x.x.x.x rsyncd[4402]: receiving file list
Jul 20 13:45:07 ip-x.x.x.x rsyncd[4403]: receiving file list
Jul 20 13:45:07 ip-x.x.x.x rsyncd[4401]: receiving file list
Jul 20 13:45:08 ip-x.x.x.x rsyncd[4404]: receiving file list
Jul 20 13:45:10 ip-x.x.x.x rsyncd[4431]: connect from ip-10-10-10-101.sa-east-1.compute.internal (10.10.10.101)
Jul 20 13:45:10 ip-x.x.x.x rsyncd[4431]: rsync to rsync_sst/./performance_schema from ip-10-10-10-101.sa-east-1.compute.internal (10.10.10.101)
Jul 20 13:45:10 ip-x.x.x.x rsyncd[4431]: receiving file list
Jul 20 13:45:37 ip-x.x.x.x rsyncd[4401]: rsync: connection unexpectedly closed (10869 bytes received so far) [generator]
Jul 20 13:45:37 ip-x.x.x.x rsyncd[4401]: rsync error: error in rsync protocol data stream (code 12) at io.c(226) [generator=3.1.1]

#### joiner conf

# cat /etc/mysql/my.cnf
[mysql]

# CLIENT #
port = 3306
socket = /var/run/mysqld/mysqld.sock

[mysqld]
federated
performance_schema

innodb_force_recovery=0

pxc_strict_mode = PERMISSIVE

# GENERAL #
user = mysql
default_storage_engine = InnoDB
socket = /var/run/mysqld/mysqld.sock
pid_file = /var/run/mysqld/mysqld.pid
bind_address = 0.0.0.0
skip-name-resolve

# DATA STORAGE #
datadir = /var/lib/mysql/
tmpdir = /var/lib/mysql/tmp

# MyISAM #
key_buffer_size = 32M
myisam_recover_options = FORCE,BACKUP

# Path to Galera library
wsrep_provider = /usr/lib/libgalera_smm.so

# Cluster connection URL contains the IPs of node#1, node#2 and node#3

wsrep_cluster_address = gcomm://10.10.10.101,x.x.x.x

# In order for Galera to work correctly binlog format should be ROW
binlog_format = ROW

# MyISAM storage engine has only experimental support
default_storage_engine = InnoDB

# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode = 2

# Node #1 address
wsrep_node_address = x.x.x.x

# SST method
wsrep_sst_method = rsync

# INNODB #
innodb_flush_method = O_DIRECT
innodb_log_files_in_group = 2
innodb_log_file_size = 265Mb
innodb_flush_log_at_trx_commit = 0
innodb_file_per_table = 1
innodb_buffer_pool_size = 4G
thread_pool_size = 36
innodb_doublewrite = 1
innodb_thread_concurrency = 0
innodb_max_dirty_pages_pct = 80
innodb_thread_concurrency = 0
innodb_buffer_pool_instances = 20
innodb_log_buffer_size = 8M
innodb_lock_wait_timeout = 120
thread_pool_idle_timeout = 3600


binlog_row_event_max_size = 16M
binlog_cache_size = 16M

# Gcache
#wsrep_provider_options = "gcache.size=300M; gcache.page_size=300M; gmcast.segment=2; evs.send_window=512; evs.user_send_window=256; gcs.fc_limit=128; evs.inactive_timeout = PT90S; evs.suspect_timeout = PT30S"
wsrep_provider_options = " gcs.max_packet_size=1048576; evs.send_window=512; evs.user_send_window=256; evs.inactive_timeout = PT100S; evs.suspect_timeout = PT60S"

#myisam
#wsrep_replicate_myisam = 1

# Cluster name
wsrep_cluster_name = ev-cluster-teste

# Node Name
wsrep_node_name = ip-x-x-x-x

# Set to number of cpu cores.

wsrep_slave_threads = 8

# to enable debug level logging, set this to 1

wsrep_debug = 0

#innodb_print_all_deadlock = 1

# how many times to retry deadlocked autocommits
wsrep_retry_autocommit = 1

# convert locking sessions into transactions
wsrep_convert_LOCK_to_trx = 1

# Generate fake primary keys for non-PK tables (required for multi-master and parallel applying operation)
wsrep_certify_nonPK = 1

#### Required for Galera
innodb_locks_unsafe_for_binlog = 1

# BINARY LOGGING #
log_bin = /var/lib/mysql/mysql-bin
expire_logs_days = 5
sync_binlog = 1
server_id = 1

# SAFETY #
max_allowed_packet = 16M
max_connect_errors = 1000000
skip_name_resolve
sql-mode = STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_AUTO_VALUE_ON_ZERO,NO_ENGINE_SUBSTITUTION,NO_ZERO_DATE,NO_ZERO_IN_DATE #,ONLY_FULL_GROUP_BY
sysdate_is_now = 1
innodb = FORCE
innodb_strict_mode = 1

# CACHES AND LIMITS #
tmp_table_size = 256M
max_heap_table_size = 256M
query_cache_type = 0
query_cache_size = 0
max_connections = 100
thread_cache_size = 120
open_files_limit = 100000
table_definition_cache = 4096
table_open_cache = 4096
read_buffer_size = 8M
query_cache_limit = 1M
join_buffer_size = 6M
#table_cache = 2000

# LOGGING #
log_error = /var/log/mysql/mysql-error.log
#log_queries_not_using_indexes = 1
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log

general_log_file = /var/log/mysql/mysql-general.log
general_log = 1


#### donor

/var/lib/mysql# du -h
64G .


#### donor log

#
2017-07-20T14:07:53.707628Z 2 [Note] WSREP: DONOR thread signaled with 0
2017-07-20T14:07:53.787297Z 0 [Note] WSREP: Flushing tables for SST...
2017-07-20T14:07:53.794344Z 0 [Note] WSREP: Provider paused at a8d37d7e-5c3b-11e7-9fbf-36d50cc17d69:55690 (5)
2017-07-20T14:07:53.797108Z 0 [Note] WSREP: Table flushing completed.
2017-07-20T14:07:54.004707Z WSREP_SST: [INFO] Starting rsync of data-dir............
2017-07-20T14:07:55.810592Z 0 [Note] WSREP: (c4b642ee, 'tcp://0.0.0.0:4567') turning message relay requesting off
rsync: read error: Connection timed out (110)
rsync error: error in socket IO (code 10) at io.c(802) [sender=3.1.0]
rsync: read error: Connection timed out (110)
rsync error: error in socket IO (code 10) at io.c(802) [sender=3.1.0]
rsync: read error: Connection timed out (110)
rsync error: error in socket IO (code 10) at io.c(802) [sender=3.1.0]

### network

net.core.somaxconn = 1000
net.core.netdev_max_backlog = 5000
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_wmem = 4096 12582912 16777216
net.ipv4.tcp_rmem = 4096 12582912 16777216
net.ipv4.tcp_max_syn_backlog = 8096
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 10240 65535

fs.file-max=200000
kernel.sem=250 32000 100 1024
kernel.shmmax=4294967295
######
net.ipv4.tcp_retries2 = 2
net.ipv4.tcp_keepalive_time = 12
net.ipv4.tcp_keepalive_intvl = 2
net.ipv4.tcp_keepalive_probes = 9

vm.swappiness = 0
vm.dirty_ratio = 80
vm.dirty_background_ratio = 5
vm.dirty_expire_centisecs = 12000

event scheduler in replication master - slave

Latest Forum Posts - July 20, 2017 - 5:08am
Hi there! I have a problem with Percona MySQL replication (master-slave) and events. When I create any event on the master:

CREATE EVENT `test_event`
ON SCHEDULE
EVERY 1 MINUTE
ON COMPLETION
PRESERVE
DO
insert into test1 (unix_time) values (NOW() + 0);

On the master I see STATUS: ENABLED for this event and on the slave STATUS: SLAVESIDE_DISABLED. This is OK, everything works correctly. But if I switch roles between the master and the slave, the event stays SLAVESIDE_DISABLED on the new master and ENABLED on the new slave. Why? Why doesn't the event status change when the roles change? How can I do it automatically?
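
For reference, the SLAVESIDE_DISABLED status does not flip on its own when roles change; a minimal manual sketch of what has to happen after a switchover (using the event name from the example above) looks like this:

-- On the newly promoted master: let the event fire
ALTER EVENT `test_event` ENABLE;

-- On the newly demoted slave: keep the definition but stop it from firing
ALTER EVENT `test_event` DISABLE ON SLAVE;

-- The event scheduler itself must also be running where events should execute
SET GLOBAL event_scheduler = ON;

Automating this usually means wrapping statements like these into the failover procedure or tooling.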