
PMM Docker, QAN not starting after container restart

Latest Forum Posts - February 16, 2018 - 6:27am
Recently I've encountered an issue while deploying PMM on AWS ECS, however this issue will also apply to Docker deployments.
As per the Percona documentation, the following paths have to be persistent in order to make the configuration persistent:
-v /opt/prometheus/data \
-v /opt/consul-data \
-v /var/lib/mysql \
-v /var/lib/grafana \

It looks like the QAN agent configuration is currently stored in:
Because of that, to make sure that all agents are up and running after a container crash etc., those paths currently have to be made persistent as well.
If those paths are not persistent, the QAN agent does not collect query stats from Percona Server, resulting in only a partial startup of PMM in the Docker container after a "crash" etc.
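For reference, the PMM documentation's pattern keeps these paths on a dedicated data container, so a re-created pmm-server container picks the data back up; a sketch (the image tag is illustrative), to which any additional qan-agent paths would be added as further -v flags:

```shell
# Create a data-only container owning the documented persistent paths
docker create \
  -v /opt/prometheus/data \
  -v /opt/consul-data \
  -v /var/lib/mysql \
  -v /var/lib/grafana \
  --name pmm-data \
  percona/pmm-server:1.7.0 /bin/true

# Run (or later re-create) the server container against those volumes
docker run -d \
  -p 80:80 \
  --volumes-from pmm-data \
  --name pmm-server \
  --restart always \
  percona/pmm-server:1.7.0
```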

I think that the documentation should be updated, and/or the qan-agent paths modified.


Added 4 MySQL servers as clients but the PMM Metrics Monitor only shows one

Latest Forum Posts - February 16, 2018 - 2:04am
Hi all, I have 5 servers in total:

1. Used for pmm-server, installed with Docker
2. MySQL master server for DB1
3. MySQL slave server for DB1 (MariaDB)
4. MySQL slave server for DB1 (MariaDB)
5. MySQL master server for DB2

I installed the pmm-client on all 4 servers (2-5) but on the graph/dashboard/db/cross-server-graphs?refresh=1m&orgId=1 url, I can only see the metrics for the pmm-server and the server number 2 hosting the DB1 master DB.

Any idea what I am doing wrong? Should I open some ports on the other servers (firewall)? Is there something to do in the MySQL settings, or on the PMM server side?
Note that the port used is 8077 for the master.
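One way to narrow this down: pmm-admin includes a connectivity self-test, and the metric exporters listen on ports in the 42000 range that must be reachable from the PMM server. A sketch (the firewalld commands assume CentOS/RHEL; adjust the port range to what `pmm-admin list` shows):

```shell
# On each client, verify both directions of client <-> server connectivity
pmm-admin check-network

# If firewalld is blocking the exporter ports, open the range PMM uses
firewall-cmd --permanent --add-port=42000-42010/tcp
firewall-cmd --reload
```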

On all 4 MySQL servers, I can see the following while executing sudo pmm-admin list:

2. pmm-admin 1.7.0

PMM Server |
Client Name | MySQL
Client Address | bbb.bbb.bbb.bbb
Service Manager | linux-upstart

-------------- ------ ----------- -------- ------------------------------------------- ------------------------------------------
-------------- ------ ----------- -------- ------------------------------------------- ------------------------------------------
mysql:queries MySQL - YES root:***@unix(/var/run/mysqld/mysqld.sock) query_source=slowlog, query_examples=true
linux:metrics MySQL 42000 YES -
mysql:metrics MySQL 42002 YES root:***@unix(/var/run/mysqld/mysqld.sock) tablestats=OFF

3. pmm-admin 1.7.0

PMM Server |
Client Name | mysql3
Client Address | ccc.ccc.ccc.ccc
Service Manager | linux-systemd

-------------- ------- ----------- -------- ------------------------------------------- ------------------------------------------
-------------- ------- ----------- -------- ------------------------------------------- ------------------------------------------
mysql:queries mysql3 - YES root:***@unix(/var/run/mysqld/mysqld.sock) query_source=slowlog, query_examples=true
linux:metrics mysql3 42000 YES -
mysql:metrics mysql3 42002 YES root:***@unix(/var/run/mysqld/mysqld.sock) tablestats=OFF

4. pmm-admin 1.7.0

PMM Server |
Client Name | mysql2
Client Address | ddd.ddd.ddd.ddd
Service Manager | linux-systemd

-------------- ------- ----------- -------- ------------------------------------------- ------------------------------------------
-------------- ------- ----------- -------- ------------------------------------------- ------------------------------------------
mysql:queries mysql2 - YES root:***@unix(/var/run/mysqld/mysqld.sock) query_source=slowlog, query_examples=true
linux:metrics mysql2 42000 YES -
mysql:metrics mysql2 42002 YES root:***@unix(/var/run/mysqld/mysqld.sock) tablestats=OFF

5. pmm-admin 1.7.0

PMM Server |
Client Name | chbslds018
Client Address | eee.eee.eee.eee
Service Manager | linux-upstart

-------------- ----------- ----------- -------- ------------------------------------------- ------------------------------------------
-------------- ----------- ----------- -------- ------------------------------------------- ------------------------------------------
mysql:queries chbslds018 - YES root:***@unix(/var/run/mysqld/mysqld.sock) query_source=slowlog, query_examples=true
linux:metrics chbslds018 42000 YES -
mysql:metrics chbslds018 42002 YES root:***@unix(/var/run/mysqld/mysqld.sock)

Thank you!

[Failed] How to set up a slave for replication ... please help

Latest Forum Posts - February 15, 2018 - 3:16am
Hello all

I have tried to go step by step through the Percona XtraBackup 2.4 documentation,
"How to setup a slave for replication in 6 simple steps with Percona XtraBackup".

I have completed steps 1, 2 and 3
(both the master and the slave have the same my.cnf (slave server-id = 2), binary logging is enabled, the slave can connect to the master via mysql, and it has a fully prepared backup).

Once I go to step 4 and start mysqld on the slave, I get:
Code: user@pc-1:~$ sudo service mysql start
 * Starting MariaDB database server mysqld    [fail]
This is how my datadir looks:
Code: user@pc-1:~$ ll /var/lib/mysql
total 1138896
drwxr-xr-x  6 mysql mysql      4096 Feb 15 02:59 ./
drwxr-xr-x 45 root  root       4096 Feb 15 00:48 ../
drwxrwxr-x  2 mysql mysql     90112 Feb 15 02:19 1bd3e0294da19198/
-rwxrwxr-x  1 mysql mysql     16384 Feb 15 02:59 aria_log.00000001*
-rwxrwxr-x  1 mysql mysql        52 Feb 15 02:59 aria_log_control*
-rwxrwxr-x  1 mysql mysql  79691776 Feb 15 02:59 ibdata1*
-rwxrwxr-x  1 mysql mysql 536870912 Feb 15 02:59 ib_logfile0*
-rwxrwxr-x  1 mysql mysql 536870912 Feb 15 02:18 ib_logfile1*
-rwxrwxr-x  1 mysql mysql  12582912 Feb 15 02:19 ibtmp1*
drwxrwxr-x  2 mysql mysql      4096 Feb 15 02:19 mysql/
-rw-rw----  1 mysql mysql       404 Feb 15 02:59 mysql-slow.log
drwxrwxr-x  2 mysql mysql      4096 Feb 15 02:19 performance_schema/
-rwxrwxr-x  1 mysql mysql        38 Feb 15 02:18 xtrabackup_binlog_pos_innodb*
-rwxrwxr-x  1 mysql mysql       558 Feb 15 02:19 xtrabackup_info*
I faced a similar issue when I followed "The Backup Cycle - Full Backups";
I was able to fix it by (manually) copying the binary log file mysql-bin.00000x that xtrabackup_binlog_pos_innodb points to from /tmp/mysql/ to /var/lib/mysql.
I found that xtrabackup doesn't copy the binary logs itself. Should I copy them too when I use replication, or have I missed something that makes xtrabackup copy the binary log files automatically?
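For what it's worth, the usual workflow does not copy the master's binary logs to the slave at all: the backup records the master's binlog coordinates (in xtrabackup_binlog_info, or xtrabackup_binlog_pos_innodb for InnoDB-only coordinates), and the slave simply points CHANGE MASTER at them. A small helper sketch (the file path and the two-field format are assumptions to verify against your restored datadir):

```shell
# change_master_sql: read the binlog file name and position recorded by
# XtraBackup (whitespace-separated) and print the matching CHANGE MASTER
# statement to run on the new slave.
change_master_sql() {
  local file pos
  read -r file pos < "$1"
  printf "CHANGE MASTER TO MASTER_LOG_FILE='%s', MASTER_LOG_POS=%s;\n" "$file" "$pos"
}

# Example usage on the slave after the restore (path is illustrative):
#   change_master_sql /var/lib/mysql/xtrabackup_binlog_info
```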

My xtrabackup version is:
Code: user@pc-1:~$ xtrabackup --version
xtrabackup version 2.3.10 based on MySQL server 5.6.24 Linux (i686) (revision id: bd0d4403f36)
Please help!

Percona Server for MySQL 5.6.39-83.1 Is Now Available

Latest Forum Posts - February 15, 2018 - 2:01am
Percona announces the release of Percona Server for MySQL 5.6.39-83.1 on February 13, 2018. Based on MySQL 5.6.39, including all the bug fixes in it, Percona Server for MySQL 5.6.39-83.1 is now the current stable release in the 5.6 series.

Percona Server for MySQL is open-source and free. Downloads are available here and from the Percona Software Repositories.

Bugs Fixed
  • With innodb_large_prefix set to 1, the Blackhole storage engine was incompatible with InnoDB table definitions, so adding new indexes would cause replication errors on the slave. Bug fixed #1126 (upstream #53588).
  • An intermediary slave with the Blackhole storage engine couldn’t record updates from the master to its own binary log when the master had the binlog_rows_query_log_events option enabled. Bug fixed #1119 (upstream #88057).
  • A build error on FreeBSD, caused by the fix for bug #255, was present. Bug fixed #2284.
  • Server queries that contained JSON special characters and were logged by the audit_log plugin in JSON format caused invalid output due to a lack of escaping. Bug fixed #1115.
  • Compilation warnings were fixed in a module. Bug fixed #3632 (upstream #77637).
  • A memory leak was fixed in a PFS unit test. Bug fixed #1806 (upstream #89384).
  • A GCC 7 warning fix introduced a regression in Percona Server for MySQL 5.6.38-83.0 that led to a wrong SQL query being built to access the remote server when the Federated storage engine was used. Bug fixed #1134.
  • Percona Server for MySQL now uses Travis CI for additional tests. Bug fixed #3777.
Other bugs fixed: #257, #1090 (upstream #78048), #1127, and #2415.

This release also contains fixes for the following CVE issues: CVE-2018-2562,
CVE-2018-2573, CVE-2018-2583, CVE-2018-2590, CVE-2018-2591, CVE-2018-2612,
CVE-2018-2622, CVE-2018-2640, CVE-2018-2645, CVE-2018-2647, CVE-2018-2665,
CVE-2018-2668, CVE-2018-2696, CVE-2018-2703, CVE-2017-3737.

TokuDB Changes
  • Percona fixed a memory leak in the PerconaFT library. Bug fixed #TDB-98.
  • A clang-format configuration was added to PerconaFT and TokuDB (bug fixed #TDB-104).
Other bugs fixed: #TDB-48, #TDB-78, #TDB-93, and #TDB-99.

Find the release notes for Percona Server for MySQL 5.6.39-83.1 in our online documentation. Report bugs in the Jira bug tracker.

MariaDB Galera cluster with xtrabackup issue

Latest Forum Posts - February 14, 2018 - 6:09am

I have recently installed a Galera cluster, but I cannot get MariaDB up and running on the second node.

NODE1 /etc/my.cnf.d/server.cnf:






optimizer_switch = 'index_condition_pushdown=off'
query_cache_size = 0
query_cache_type = 0
innodb_flush_log_at_trx_commit = 2
innodb_buffer_pool_size = 10G
innodb_flush_method = O_DIRECT
innodb_log_buffer_size = 128M
innodb_lock_wait_timeout = 500
innodb_log_file_size = 100M


NODE2 /etc/my.cnf.d/server.cnf:





optimizer_switch = 'index_condition_pushdown=off'
query_cache_size = 0
query_cache_type = 0
innodb_flush_log_at_trx_commit = 2
innodb_buffer_pool_size = 10G
innodb_flush_method = O_DIRECT
innodb_log_buffer_size = 128M
innodb_lock_wait_timeout = 500
innodb_log_file_size = 100M


1. I start galera_new_cluster on NODE1. Everything seems to be OK:

[root@zabbix-db-node1 ~]# systemctl status mariadb
● mariadb.service - MariaDB 10.1.31 database server
Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/mariadb.service.d
Active: active (running) since Wed 2018-02-14 05:31:09 EST; 8min ago
Docs: man:mysqld(8)
Process: 10744 ExecStartPost=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)
Process: 10694 ExecStartPre=/bin/sh -c [ ! -e /usr/bin/galera_recovery ] && VAR= || VAR=`/usr/bin/galera_recovery`; [ $? -eq 0 ] && systemctl set-environment _WSREP_START_POSITION=$VAR || exit 1 (code=exited, status=0/SUCCESS)
Process: 10691 ExecStartPre=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)
Main PID: 10711 (mysqld)
Status: "Taking your SQL requests now..."
CGroup: /system.slice/mariadb.service
└─10711 /usr/sbin/mysqld --wsrep-new-cluster

Feb 14 05:31:19 zabbix-db-node1 mysqld[10711]: 2018-02-14 5:31:19 139659971651328 [Note] WSREP: Flow-control interval: [16, 16]
Feb 14 05:31:19 zabbix-db-node1 mysqld[10711]: 2018-02-14 5:31:19 139659971651328 [Note] WSREP: Trying to continue unpaused monitor
Feb 14 05:31:19 zabbix-db-node1 mysqld[10711]: 2018-02-14 5:31:19 139660283120384 [Note] WSREP: New cluster view: global state: ad8e3ac3-116d-11e8-859f-eadc403c6207:1780, view# 3: P...ol version 3
Feb 14 05:31:19 zabbix-db-node1 mysqld[10711]: 2018-02-14 5:31:19 139660283120384 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
Feb 14 05:31:19 zabbix-db-node1 mysqld[10711]: 2018-02-14 5:31:19 139660283120384 [Note] WSREP: REPL Protocols: 7 (3, 2)
Feb 14 05:31:19 zabbix-db-node1 mysqld[10711]: 2018-02-14 5:31:19 139660283120384 [Note] WSREP: Assign initial position for certification: 1780, protocol version: 3
Feb 14 05:31:19 zabbix-db-node1 mysqld[10711]: 2018-02-14 5:31:19 139660030375680 [Note] WSREP: Service thread queue flushed.
Feb 14 05:31:23 zabbix-db-node1 mysqld[10711]: 2018-02-14 5:31:23 139659980044032 [Note] WSREP: (2b3ca384, 'tcp://') connection established to 3324f771 tcp://
Feb 14 05:31:23 zabbix-db-node1 mysqld[10711]: 2018-02-14 5:31:23 139659980044032 [Warning] WSREP: discarding established (time wait) 3324f771 (tcp://
Feb 14 05:31:24 zabbix-db-node1 mysqld[10711]: 2018-02-14 5:31:24 139659980044032 [Note] WSREP: cleaning up 3324f771 (tcp://
Hint: Some lines were ellipsized, use -l to show in full.

2. I run systemctl start mariadb on NODE2, wait some 30 seconds, and get:

Job for mariadb.service failed because the control process exited with error code. See "systemctl status mariadb.service" and "journalctl -xe" for details.

3. Then I look into the journal:

Feb 14 12:48:03 zabbix-db-node2 systemd[1]: Starting MariaDB 10.1.31 database server...
Feb 14 12:48:03 zabbix-db-node2 mysqld[3150]: 2018-02-14 12:48:03 139646100506880 [Note] /usr/sbin/mysqld (mysqld 10.1.31-MariaDB) starting as process 3150 ...
Feb 14 12:48:03 zabbix-db-node2 mysqld[3150]: 2018-02-14 12:48:03 139646100506880 [Note] WSREP: Read nil XID from storage engines, skipping position init
Feb 14 12:48:03 zabbix-db-node2 mysqld[3150]: 2018-02-14 12:48:03 139646100506880 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib64/galera/'
Feb 14 12:48:03 zabbix-db-node2 mysqld[3150]: 2018-02-14 12:48:03 139646100506880 [Note] WSREP: wsrep_load(): Galera 25.3.22(r3764) by Codership Oy <> loaded successfully.
Feb 14 12:48:03 zabbix-db-node2 mysqld[3150]: 2018-02-14 12:48:03 139646100506880 [Note] WSREP: gcomm thread scheduling priority set to other:0
Feb 14 12:48:03 zabbix-db-node2 mysqld[3150]: 2018-02-14 12:48:03 139646100506880 [Warning] WSREP: access file(/var/lib/mysql//gvwstate.dat) failed(No such file or directory)
Feb 14 12:48:03 zabbix-db-node2 mysqld[3150]: 2018-02-14 12:48:03 139646100506880 [Note] WSREP: restore pc from disk failed
Feb 14 12:48:03 zabbix-db-node2 mysqld[3150]: 2018-02-14 12:48:03 139646100506880 [Note] WSREP: GMCast version 0
Feb 14 12:48:03 zabbix-db-node2 mysqld[3150]: 2018-02-14 12:48:03 139646100506880 [Note] WSREP: (887aad3d, 'tcp://') listening at tcp://
Feb 14 12:48:03 zabbix-db-node2 mysqld[3150]: 2018-02-14 12:48:03 139646100506880 [Note] WSREP: (887aad3d, 'tcp://') multicast: , ttl: 1
Feb 14 12:48:03 zabbix-db-node2 mysqld[3150]: 2018-02-14 12:48:03 139646100506880 [Note] WSREP: EVS version 0
Feb 14 12:48:03 zabbix-db-node2 mysqld[3150]: 2018-02-14 12:48:03 139646100506880 [Note] WSREP: gcomm: connecting to group 'zabbix-cluster', peer ','
Feb 14 12:48:03 zabbix-db-node2 mysqld[3150]: 2018-02-14 12:48:03 139646100506880 [Note] WSREP: (887aad3d, 'tcp://') connection established to 887aad3d tcp://
Feb 14 12:48:03 zabbix-db-node2 mysqld[3150]: 2018-02-14 12:48:03 139646100506880 [Warning] WSREP: (887aad3d, 'tcp://') address 'tcp://' points to own listening address,
Feb 14 12:48:04 zabbix-db-node2 mysqld[3150]: 2018-02-14 12:48:04 139645799098112 [Note] WSREP: Flow-control interval: [23, 23]
Feb 14 12:48:04 zabbix-db-node2 mysqld[3150]: 2018-02-14 12:48:04 139645799098112 [Note] WSREP: Trying to continue unpaused monitor
Feb 14 12:48:04 zabbix-db-node2 mysqld[3150]: 2018-02-14 12:48:04 139645799098112 [Note] WSREP: Shifting OPEN -> PRIMARY (TO: 2321)
Feb 14 12:48:04 zabbix-db-node2 mysqld[3150]: 2018-02-14 12:48:04 139646100187904 [Note] WSREP: State transfer required:
Feb 14 12:48:04 zabbix-db-node2 mysqld[3150]: Group state: ad8e3ac3-116d-11e8-859f-eadc403c6207:2321
Feb 14 12:48:04 zabbix-db-node2 mysqld[3150]: Local state: ad8e3ac3-116d-11e8-859f-eadc403c6207:-1
Feb 14 12:48:04 zabbix-db-node2 mysqld[3150]: 2018-02-14 12:48:04 139646100187904 [Note] WSREP: New cluster view: global state: ad8e3ac3-116d-11e8-859f-eadc403c6207:2321, view# 4: Primary, number o
Feb 14 12:48:04 zabbix-db-node2 mysqld[3150]: 2018-02-14 12:48:04 139646100187904 [Warning] WSREP: Gap in state sequence. Need state transfer.
Feb 14 12:48:04 zabbix-db-node2 mysqld[3150]: 2018-02-14 12:48:04 139645769742080 [Note] WSREP: Running: 'wsrep_sst_xtrabackup --role 'joiner' --address '' --datadir '/var/lib/mysql/' -
Feb 14 12:48:04 zabbix-db-node2 mysqld[3150]: /usr//bin/wsrep_sst_xtrabackup: line 397: setup_ports: command not found
Feb 14 12:48:04 zabbix-db-node2 mysqld[3150]: 2018-02-14 12:48:04 139645769742080 [ERROR] WSREP: Failed to read 'ready <addr>' from: wsrep_sst_xtrabackup --role 'joiner' --address '' --da
Feb 14 12:48:04 zabbix-db-node2 mysqld[3150]: Read: '(null)'
Feb 14 12:48:04 zabbix-db-node2 mysqld[3150]: 2018-02-14 12:48:04 139645769742080 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup --role 'joiner' --address '' --datadir
Feb 14 12:48:04 zabbix-db-node2 mysqld[3150]: 2018-02-14 12:48:04 139646100187904 [ERROR] WSREP: Failed to prepare for 'xtrabackup' SST. Unrecoverable.
Feb 14 12:48:04 zabbix-db-node2 mysqld[3150]: 2018-02-14 12:48:04 139646100187904 [ERROR] Aborting
Feb 14 12:48:18 zabbix-db-node2 mysqld[3150]: Error in my_thread_global_end(): 1 threads didn't exit
Feb 14 12:48:18 zabbix-db-node2 systemd[1]: mariadb.service: main process exited, code=exited, status=1/FAILURE
Feb 14 12:48:18 zabbix-db-node2 systemd[1]: Failed to start MariaDB 10.1.31 database server.
Feb 14 12:48:18 zabbix-db-node2 systemd[1]: Unit mariadb.service entered failed state.
Feb 14 12:48:18 zabbix-db-node2 systemd[1]: mariadb.service failed.

What is wrong?
Thank you!
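For reference, the telling line in the journal is wsrep_sst_xtrabackup reporting "setup_ports: command not found"; with a current percona-xtrabackup installed, the v2 SST script is normally used instead. A sketch of the relevant [mysqld] settings (the SST credentials are placeholders, and that user must exist with the appropriate grants on the donor):

```shell
# Append to the Galera section of server.cnf on every node (sketch only;
# merge with your existing [mysqld] section rather than duplicating it)
cat >> /etc/my.cnf.d/server.cnf <<'EOF'
[mysqld]
wsrep_sst_method = xtrabackup-v2
wsrep_sst_auth   = "sstuser:sstpassword"
EOF
```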

MySQL replication lag is increasing continuously

Latest Forum Posts - February 13, 2018 - 10:01am
Hi Team,
Could anyone please help me get out of this situation? When I ran
pt-online-schema-change -u root --ask-pass --alter "MODIFY COLUMN id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT"
on production, the changes were applied. But it has a slave, where I see the replication delay continuously increasing. Unfortunately, I don't see any long-running query on the slave; the queries keep changing.
Master_Log_File: mysql-bin.001797
Read_Master_Log_Pos: 289879640
Relay_Log_File: mysqld-relay-bin.005384
Relay_Log_Pos: 217855411
Relay_Master_Log_File: mysql-bin.001788
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 66980
The table we are updating is around 30G. Please ask if you have any questions. Thanks for your time in advance.
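For reference, pt-online-schema-change can throttle its row copy on replica lag rather than letting Seconds_Behind_Master grow unbounded. A hedged sketch of such an invocation (database/table names and the thresholds are placeholders):

```shell
# Sketch: re-run the alter with lag-based throttling. pt-osc pauses copying
# chunks whenever any discovered replica lags more than --max-lag seconds,
# re-checking every --check-interval seconds.
pt-online-schema-change \
  --alter "MODIFY COLUMN id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT" \
  --max-lag 10 \
  --check-interval 5 \
  --chunk-size 1000 \
  --ask-pass \
  --execute \
  D=mydb,t=mytable,u=root
```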

Receiving error "select() error: Bad address" during backup

Latest Forum Posts - February 13, 2018 - 5:53am
We have 6 databases in Amazon's US East region. Our backup script connects over SSH to the 6 database servers and executes the following command:

sudo innobackupex --defaults-extra-file=/home/dbcontrol/ --socket=/tmp/mysql.sock --slave-info --safe-slave-backup --safe-slave-backup-timeout=1200 --use-memory=4G --stream=xbstream --parallel=6 --compress --compress-threads=6 --encrypt=AES256 --encrypt-key=blah --encrypt-threads=3 /home/dbcontrol/backup

The xbstream file is getting streamed to the central server we use to back up and finally gets copied over to an S3 bucket for long-term storage.

If I run each backup serially I don't see the problem, but when we run all six in parallel I get the error:

select() error: Bad address

This gets written to the log file millions of times, generating log files well over 16 GB in size; it doesn't stop until I kill the process.

Since it doesn't happen when I run it serially (at least so far) I thought maybe I was hitting some kind of open file handle limit or something but I've checked all of the servers and everything is configured per the documentation's recommendations so I'm at a loss. I'm not even sure if the error is coming from XtraBackup or SSH.
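On the open-file-handle theory, a quick diagnostic is to compare the limits in effect for the backup user against six parallel streams' worth of compress/encrypt threads. This is only a sketch of the checks, not a fix:

```shell
# Per-process soft limit on open file descriptors for the current shell/user
ulimit -n

# Hard limit, which the soft limit can be raised up to
ulimit -Hn

# System-wide file handle ceiling (Linux only; silently skipped elsewhere)
cat /proc/sys/fs/file-max 2>/dev/null || true
```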

Has anyone else run into this problem or have any knowledge of it?



OVA server change IP

Latest Forum Posts - February 13, 2018 - 4:51am
Hi, I've installed pmm-server from the OVA file.

After the deployment operation, I've changed the hostname and the IP in the file /etc/sysconfig/network-scripts/ifcfg-eth0 but those settings are lost after rebooting.

I've been researching a bit and it seems that those settings are controlled by cloud-init.

Could you provide guidance on changing the server IP address using cloud-init, please?

In my opinion, being able to change the IP address should be a basic operation available in a production solution like PMM.

Kind regards,

Configure Alerts for MongoDB on PMM

Latest Forum Posts - February 12, 2018 - 8:39pm
I am new to PMM.
I have set up a PMM environment on my VM, but it seems complicated to set up alerts, especially when you want to build them by modifying the default queries. I need some basic alerts for:
1. MongoDB service uptime.
2. MongoDB state change (when the primary automatically switches to a different state).
3. Max connections threshold.
4. Number of delete/update and read queries.

Can someone please provide the queries which I can use to set up alerts in PMM?
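Until someone posts tuned expressions, here is a minimal sketch in the Prometheus 1.x rule syntax (the version PMM 1.x ships). The metric names assume Percona's mongodb_exporter, and the job label, thresholds, and file location are assumptions to adjust; the file is wired in via Prometheus's rule_files setting plus an Alertmanager for delivery:

```shell
# Write a small rules file covering items 1-3 above (sketch)
cat > /tmp/mongodb.rules <<'EOF'
ALERT MongoDBExporterDown
  IF up{job="mongodb"} == 0
  FOR 5m
  ANNOTATIONS { summary = "MongoDB exporter on {{ $labels.instance }} is down" }

ALERT MongoDBReplSetStateChanged
  IF changes(mongodb_mongod_replset_my_state[5m]) > 0
  ANNOTATIONS { summary = "Replica set member state changed on {{ $labels.instance }}" }

ALERT MongoDBHighConnections
  IF mongodb_connections{state="current"} > 1000
  FOR 10m
  ANNOTATIONS { summary = "Current connections above threshold on {{ $labels.instance }}" }
EOF
```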

When could we use the Prometheus 2.0 version?

Latest Forum Posts - February 12, 2018 - 5:15pm

I have a question about the Prometheus 2.0 version.

As far as I know, PMM uses Prometheus 1.8.2.

But LevelDB is very heavy,

and sometimes Prometheus crashed because of LevelDB.

As you can see on this web page, LevelDB is no longer used in Prometheus 2.0.

So please consider upgrading the Prometheus version.


CentOS 7.4-1708 install problem

Latest Forum Posts - February 12, 2018 - 12:03pm
I am having trouble installing Percona Server on a freshly installed CentOS 7.4-1708. I am following the instructions on this page -

I started with the easy install via yum, installing the percona-release package:
Loaded plugins: fastestmirror, langpacks
percona-release-0.1-4.noarch.rpm | 6.4 kB 00:00:00
Examining /var/tmp/yum-root-tQnoaa/percona-release-0.1-4.noarch.rpm: percona-release-0.1-4.noarch
Marking /var/tmp/yum-root-tQnoaa/percona-release-0.1-4.noarch.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package percona-release.noarch 0:0.1-4 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================== ================================================== ================================================== ================================================== =============
Package Arch Version Repository Size
================================================== ================================================== ================================================== ================================================== =============
percona-release noarch 0.1-4 /percona-release-0.1-4.noarch 5.8 k

Transaction Summary
================================================== ================================================== ================================================== ================================================== =============
Install 1 Package

Total size: 5.8 k
Installed size: 5.8 k
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : percona-release-0.1-4.noarch 1/1
Verifying : percona-release-0.1-4.noarch 1/1

percona-release.noarch 0:0.1-4

This worked quite well; however, the next step failed:
yum install Percona-Server-server-57
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base:
* epel:
* extras:
* updates:
* webtatic:
Resolving Dependencies
--> Running transaction check
---> Package Percona-Server-server-57.x86_64 0:5.7.20-19.1.el7 will be installed
--> Processing Dependency: Percona-Server-shared-57 for package: Percona-Server-server-57-5.7.20-19.1.el7.x86_64
--> Processing Dependency: Percona-Server-client-57 for package: Percona-Server-server-57-5.7.20-19.1.el7.x86_64
--> Running transaction check
---> Package Percona-Server-client-57.x86_64 0:5.7.20-19.1.el7 will be installed
---> Package Percona-Server-shared-57.x86_64 0:5.7.20-19.1.el7 will be installed
--> Processing Dependency: Percona-Server-shared-compat-57 for package: Percona-Server-shared-57-5.7.20-19.1.el7.x86_64
--> Running transaction check
---> Package Percona-Server-shared-compat-57.x86_64 0:5.7.20-19.1.el7 will be obsoleting
---> Package mariadb-libs.x86_64 1:5.5.56-2.el7 will be obsoleted
--> Processing Dependency: mariadb-libs(x86-64) = 1:5.5.56-2.el7 for package: 1:mariadb-5.5.56-2.el7.x86_64
--> Processing Dependency: mariadb-libs(x86-64) = 1:5.5.56-2.el7 for package: 1:mariadb-server-5.5.56-2.el7.x86_64
--> Finished Dependency Resolution
Error: Package: 1:mariadb-5.5.56-2.el7.x86_64 (@anaconda)
Requires: mariadb-libs(x86-64) = 1:5.5.56-2.el7
Removing: 1:mariadb-libs-5.5.56-2.el7.x86_64 (@anaconda)
mariadb-libs(x86-64) = 1:5.5.56-2.el7
Obsoleted By: Percona-Server-shared-compat-57-5.7.20-19.1.el7.x86_64 (percona-release-x86_64)
Not found
Error: Package: 1:mariadb-server-5.5.56-2.el7.x86_64 (@anaconda)
Requires: mariadb-libs(x86-64) = 1:5.5.56-2.el7
Removing: 1:mariadb-libs-5.5.56-2.el7.x86_64 (@anaconda)
mariadb-libs(x86-64) = 1:5.5.56-2.el7
Obsoleted By: Percona-Server-shared-compat-57-5.7.20-19.1.el7.x86_64 (percona-release-x86_64)
Not found
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
I attempted both 'suggestions' (--skip-broken and the rpm command) but they did not help.

So I decided to run the rpm install. I first removed Percona with yum remove percona* and then downloaded the .tar file, expanded it, and ran:
rpm -ivh Percona-Server-server-57-5.7.10-3.1.el7.x86_64.rpm Percona-Server-client-57-5.7.10-3.1.el7.x86_64.rpm Percona-Server-shared-57-5.7.10-3.1.el7.x86_64.rpm
error: Failed dependencies:
Percona-Server-shared-compat-57 is needed by Percona-Server-shared-57-5.7.10-3.1.el7.x86_64
Any ideas? What am I doing wrong?
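For reference, one common way past this kind of conflict is to remove the Anaconda-installed MariaDB packages that pin mariadb-libs before installing Percona Server. This is a sketch, not a recommendation for every host: it uninstalls mariadb-server, so only do it on a machine meant to run Percona Server, after backing up any data:

```shell
# Remove the stock MariaDB packages that require mariadb-libs, then retry;
# Percona-Server-shared-compat-57 provides the client library replacement
# for tools that linked against mariadb-libs.
yum remove mariadb mariadb-server mariadb-libs
yum install Percona-Server-server-57
```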

In-place upgrade to 5.7: after the upgrade MySQL won't start because of missing binaries

Latest Forum Posts - February 12, 2018 - 11:36am

I'm following the RPM-based section of that guide, using CentOS. I have tried multiple times: if I install 5.6 it works; if I install 5.7 it won't start, because the binaries that mysql.service points to do not exist on my system, despite 5.7 being installed properly:

# Systemd service file for Percona Server
# # This service file is to start PS just with mysqld_safe

Description=MySQL Percona Server


# Needed to create system tables etc.
ExecStartPre=/usr/bin/mysql-systemd pre

# Start main service

# Don't signal startup success before a ping works
ExecStartPost=/usr/bin/mysql-systemd post

# Give up if ping don't get an answer


this is the error in the journalctl:
Failed at step EXEC spawning /usr/bin/mysql-systemd: No such file or directory

I have run a search and this file is not on my system; neither is mysqld_safe. I have looked online for an updated mysql.service file and cannot find one. I have tried running /usr/sbin/mysqld directly, since that actually is on my system, but I'm not sure which parameters to pass to it or even which user to start it as (it won't let me start it as root to test).
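Some checks that may show where the stale unit file comes from (a diagnostic sketch; the unit name differs between 5.6 and 5.7 packaging, hence both are queried):

```shell
# Show the unit file(s) systemd actually loads, including any drop-ins
systemctl cat mysql.service 2>/dev/null
systemctl cat mysqld.service 2>/dev/null

# Which package, if any, owns the loaded unit file, and what service
# files the installed Percona Server package actually ships
rpm -qf /usr/lib/systemd/system/mysqld.service
rpm -ql Percona-Server-server-57 | grep -E 'systemd|\.service'
```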

If I remove the Percona RPMs and reinstall the 5.6 server, my data is still there and everything works.

I've spent about 3 hours on this; I'm not sure why 5.7 won't start after being installed, or why binaries are missing. I followed the guide to the letter.

Restore (Galera Cluster) does not delete Databases after restore

Latest Forum Posts - February 11, 2018 - 2:14pm
Hello All

We're using XtraBackup to back up our Galera cluster and have done some testing, including restores. We do a full backup every day at 00:30 and then incremental ones. Prepare and restore were working perfectly, but after the restore we see a lot of databases (one per incremental backup) on the restored cluster/server (#mysql50#Incremental-09022018_xxxxx [xxxx = time]). None of them have any tables and they are 0.0 bytes big. When I want to delete them it gives me 'Error Code: 1010. Error dropping database (can't rmdir './Incremental-09022018_011001/', errno: 17)'.

What are those databases, how can I delete them, and why are they there?
When I do a 'dir -l' on '/var/lib/mysql' I see those databases too, and they all belong to mysql:mysql like the other (normal) databases.

Thank you.

xtrabackup restore error (qpress error)

Latest Forum Posts - February 9, 2018 - 8:08pm
I am trying to validate a database backup here, and while running the "xtrabackup --decompress" command I hit this error:
$ xtrabackup --decompress --target-dir=/home/xxxxxxxxxx
xtrabackup version 2.4.9 based on MySQL server 5.7.13 Linux (x86_64) (revision id: a467167cdd4)
180210 01:42:32 [01] decompressing ./xtrabackup_logfile.qp
qpress: Unexpected end of source file -- try the -R flag to recover
Error: decrypt and decompress thread 0 failed.

Then I figured I'd use the "parallel" option, based on its usage in the backup step (if this makes sense...).
But still:
[[ lot of decompressing messages ]]
180210 01:56:07 [05] decompressing ./mysql/help_category.ibd.qp
[[ in the end ...]]
qpress: Unexpected end of source file -- try the -R flag to recover
qpress: Unexpected end of source file -- try the -R flag to recover
Error: decrypt and decompress thread 0 failed.
Error: decrypt and decompress thread 1 failed.
Error: decrypt and decompress thread 2 failed.
Error: decrypt and decompress thread 3 failed.
Error: decrypt and decompress thread 4 failed.

So, my question is: am I hitting some sort of bug, a file size limit, a qpress issue, a wrong xtrabackup setup, etc.?
I am not able to decompress all the files in order to perform the next step (--copy-back).
Thank you!

pt-online-schema-change table cannot already have triggers

Latest Forum Posts - February 9, 2018 - 2:22pm
I'm trying to run the pt-online-schema-change script and getting a "Table already has triggers" error; however, I ran SHOW TRIGGERS on the database and there are no triggers in the entire database. I'm running against a replication DB.
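One thing worth checking: SHOW TRIGGERS only covers the current database, while pt-online-schema-change inspects the target table's schema, and an earlier interrupted pt-osc run can leave its own triggers behind. A query sketch (the schema and table names are placeholders):

```shell
# List every trigger defined on the target table, regardless of the
# connection's current database
mysql -e "
SELECT TRIGGER_SCHEMA, TRIGGER_NAME, ACTION_TIMING, EVENT_MANIPULATION
FROM information_schema.TRIGGERS
WHERE EVENT_OBJECT_SCHEMA = 'mydb'
  AND EVENT_OBJECT_TABLE  = 'mytable';"
```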


This Week in Data with Colin Charles 27: Percona Live Tutorials Released and a Comprehensive Review of the FOSDEM MySQL DevRoom

Latest MySQL Performance Blog posts - February 9, 2018 - 5:19am

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

Percona Live Santa Clara 2018 update: tutorials have been announced. The committee rated over 300 talks, and easily 70% of the schedule should go live next week as well. In practice, then, you should see about 50 talks announced next week. There’s been great competition: we only have 70 slots in total, so about 1 in 5 talks get picked — talk about a competitive ratio.


FOSDEM was truly awesome last week. From a Percona standpoint, we had a lot of excellent booth traffic (being outside of the PostgreSQL room on Saturday, and not too far out from the MySQL room on Sunday). We gave away bottle openers — useful in Brussels with all the beer; we tried a new design with a magnet to attach it to your fridge — stickers, some brochures, but most of all we had plenty of great conversations. There was quite a crowd from Percona, and it was excellent to see the MySQL & Friends DevRoom almost constantly full! A few of us brave souls managed to stay there the whole day, barely with any breaks, so as to enjoy all the talks.

I find the quality of talks to be extremely high. And when it comes to a community run event, with all content picked by an independent program committee, FOSDEM really sets the bar high. There is plenty of competition to get a good talk in, and I enjoyed everything we picked (yes, I was on the committee too). We’ve had plenty of events in the ecosystem that sort of had “MySQL” or related days, but FOSDEM might be the only one that has really survived. I understand we will have a day of some sort at SCALE16x, but even that has been scaled down. So if you care about the MySQL ecosystem, you will really want to ensure that you are at FOSDEM next year.

This year, we started with the usual MySQL Day on Friday. I could not be present, as I was at the CentOS Dojo, giving a presentation. So, the highlight of Friday for me? The community dinner. Over 80 people showed up, I know there was a waiting list, and lots of people were trying to get tickets at the last minute. Many missed out too; sorry, better luck next year; and also, hopefully, we will get a larger venue going forward. I really thank the organizers for this — we affectionately refer to them as the Belconians (i.e. a handful of Perconians based in Belgium). The conversation, the food, the drink — they were all excellent. It’s good to see representation from all parts of the community: MySQL, Percona, MariaDB, Pythian, and others. So thank you again, Liz, Dimitri, Tom, and Kenny in absentia. I think Tjerk also deserves special mention for always helping (this year with the drinks).

As for FOSDEM itself, beyond the booth, I think the most interesting stuff was the talks. There are video recordings and slides of pretty much all talks, but I will also give you the “Cliff’s Notes” of them here.

MySQL DevRoom talk quick summaries
Beyond WHERE and GROUP BY – Sergei Golubchik
  • EXCEPT is in MariaDB Server 10.3
  • recursive CTEs are good for hierarchical data, graphs, data generation, Turing complete (you can use it to solve Sudoku even)
  • non-recursive CTEs can be an alternative syntax for subqueries in the FROM clause
  • Window functions:
    • Normal: one result per row, depend on that row only
    • Aggregate: one result per group, depending on the whole group
    • Window: one result per row, depending on the whole group
  • System versioned tables with AS OF
  • Aggregate stored functions
MySQL 8.0 Performance: InnoDB Re-Design – Dimitri Kravtchuk
  • Contention-Aware Transactions Scheduling (CATS), since 8.0.3. Not all transactions are equal; FIFO may not be optimal, so unblock the most blocking transactions first
  • CATS (VATS) had a few issues, and there were bugs (they thought everything worked since MariaDB Server had implemented it). They spent about 9 months before fixing everything.
  • Where does CATS help? Workloads hitting row lock contentions. You can monitor via SHOW ENGINE INNODB MUTEX.
  • the main problem comes from repeatable read versus read committed transaction isolation on the same workload. You really need to understand your workload when it comes to VATS.
MySQL 8.0 Roles – Giuseppe Maxia
  • Created like a user, granted like privileges. You need to activate them to use them.
  • Before roles, you created a user, then grant, grant, and more grants… Add another user? Same deal. Lots of repetitive work and a lot of chances to make mistakes.
  • Faster user administration – define a role, assign it many times. Centralized grant handling – grant and revoke privileges to roles, add/edit all user profiles.
  • You need to remember to set the default role.
  • A user can have many roles; default role can be a list of roles.
  • Roles are users without a login – roles are saved in user tables. This is useful from an account lock/unlock perspective.
  • You can grant a user to a user
  • SET ROLE is for session management; SET DEFAULT ROLE is a permanent assignment of a role for a user. SET ROLE DEFAULT means assign the default role for this user for this session
  • The role_edges table reports which roles are assigned to which users. default_roles keeps track of the current default roles assigned to users. A default role may not exist.
Histogram support in MySQL 8.0 – Øystein Grøvlen
  • You can now do ANALYZE TABLE table UPDATE HISTOGRAM ON column WITH n BUCKETS;
  • New storage engine API for sampling (default implementation is full table scan even when sampling)
  • Histogram is stored in a JSON column in the data dictionary. Grab this from the INFORMATION_SCHEMA.
  • Histograms are useful for columns that are not the first column of any index, and used in WHERE conditions of JOIN queries, queries with IN-subqueries, ORDER BY … LIMIT queries. Best fit: low cardinality columns (e.g. gender, orderStatus, dayOfWeek, enums), columns with uneven distribution (skew), stable distribution (do not change much over time)
  • How many buckets? equi-height, 100 buckets should be enough.
  • Histograms are stored in the data dictionary, so will persist over restarts of course.
Let’s talk database optimizers – Vicențiu Ciorbaru
TLS for MySQL at Large Scale – Jaime Crespo
  • Literally took 3 lines in the my.cnf to turn on TLS
  • They wanted to do a data centre failover and wanted to ensure replication would be encrypted.
  • They didn’t have proper orchestration in place (MySQL could have this too). Every time OpenSSL or MySQL had to be upgraded, the daemon needed restarting. If there was an incompatible change, you had to sync master/replicas too.
  • The automation and orchestration tool that Wikipedia uses is called Cumin.
  • Server support was poor – OpenSSL – so they had to deploy wmf-mysql and wmf-mariadb of their own
  • Currently using MariaDB 10.0, and looking to migrate to MariaDB 10.1
  • Client library pain they’ve had
  • TLSv1.2 from the beginning (2015).
  • 20-50x slower for actual connecting; the impact is less than 5% for the actual query performance. Just fix client libraries, make them use persistent connections. They are now very interested in ProxySQL for this purpose.
  • Monty asks, would a double certificate help? Jaime says sure. But he may not actually use double certificates; might not solve CA issues, and the goal is not to restart the server.
  • Monty wonders why not to upgrade to 10.2? “Let’s talk outside because it’s a much larger question.”
MySQL InnoDB Cluster – Miguel Araújo
  • group replication: update everywhere (multi-master), virtually synchronous replication, automatic server failover, distributed recovery, group reconfiguration, GCS (implementation of Paxos – group communication system). HA is a critical factor.
  • mysqlsh: interactive and batch operations. Document store (CRUD and relational access)
  • admin API in mysqlsh: create & manage clusters, hide complexity of configuration/provisioning/orchestration of the InnoDB clusters. Works with JavaScript and Python
  • Usability. HA out of the box.
  • It’s easy to join a new node; new node goes into recovery mode (and as long as you have all the binary logs, this is easy; otherwise start from a backup)
  • SET PERSIST – run a command remotely, and the configuration is persisted in the server
  • Network flapping? Group replication will just reject the node from the cluster if it’s flapping too often
Why we’re excited about MySQL 8 – Peter Zaitsev
  • Native data dictionary – atomic, crash-safe DDLs, no more MyISAM system table requirements
  • utf8mb4 as default character set
  • Security: roles, breakdown of SUPER privileges, password history, faster cached-SHA2 authentication (default), builds using OpenSSL (like Percona Server), skip grants blocks remote connections, logs now encrypted when tablespace encryption enabled
  • Persistent AUTO_INCREMENT
  • auto-managed undo tablespaces – do not use system table space for undo space. Automatically reclaim space on disks.
  • Self-tuning, limited to InnoDB (innodb_dedicated_server to auto-tune)
  • partial in-place update for JSON – update a field in a JSON object without a full rewrite. Good for counters/statuses/timestamps. Update/removal of an element is supported
  • Invisible indexes – test impact of dropping indexes before actually dropping them. Maintained but unused by the optimizer. If not needed or used, then drop away.
  • TmpTable Storage Engine – more efficient storage engine for internal temporary tables. Efficient storage for VARCHAR and VARBINARY columns. Good for GROUP BY queries. Doesn’t support BLOB/TEXT columns yet (this reverts to InnoDB temp table now)
  • Backup locks – prevent operations which may result in inconsistent backups. LOCK INSTANCE FOR BACKUP (something Percona Server has had before)
  • Optimizer histograms – detailed statistics on columns, not just indexes
  • improved cost model for the optimizer
  • Performance Schema – faster (via “fake” indexes), error instrumentation, response time histograms (global & per query), digest summaries
  • select * from sys.session – fast potential replacement for show processlist
  • RESTART (command)
  • SET PERSIST – e.g. change the buffer pool size, and this helps during a restart
  • assumes default storage is SSD now
  • binary log on by default, log_slave_updates enabled by default, and log expires after 30 days by default
  • query cache removed. Look at ProxySQL or some other caching solution
  • native partitioning only – remove partitions from MyISAM or convert to InnoDB
  • resource groups – isolation and better performance (map queries to specific CPU cores; can jail your costly queries, like analytical queries)
  • Feature Requests: better single thread performance, no parallel query support
MySQL Test Framework for Support and Bugs Work – Sveta Smirnova
  • MTR allows you to add multiple connections
  • has commands for flow control
ProxySQL – GTID Consistent Reads – René Cannaò, Nick Vyzas
  • threshold is configurable in increments of 1 second. Replication lag can be monitored with ProxySQL. Want to ensure you don’t have stale reads.
  • Why is GTID important? To guarantee consistency. Auto positioning for restructuring topologies.
  • –session-track-gtids is an important feature which allows sending the GTID for a transaction on the OK packet for a transaction. Not available in MariaDB.
  • There is a ProxySQL Binlog Reader now – it provides GTID information about a MySQL server to all connected ProxySQL instances. It is a lightweight process that runs on your MySQL server.
  • ProxySQL can be configured to enforce GTID consistency for reads on any hostgroup/replication hostgroup.
  • Live demo by René
Turbocharging MySQL with Vitess – Sugu Sougoumarane
  • trend for the cloud: container instances, short-lived containers, tolerate neighbors, discoverability. No good tools yet for Kubernetes.
  • non-ideal options: application sharding, NoSQL, paid solutions, NewSQL (CockroachDB, TiDB, Yugabyte)
  • Vitess: leverage MySQL at massive scale, opensource, 8+ years of work, and multiple production examples
  • Square uses Vitess for Square Cash application.
  • Can MySQL run on Docker? Absolutely, many of the companies do huge QPS on Docker.
  • YouTube does a major re-shard every 2-3 months. No one notices nowadays when that happens.
  • app server connects to vtgate, and only underneath it’s a bunch of smaller databases with vttablet + mysqld. The lockserver is what makes it run well in the cloud.
  • pluggable architecture with no compromise on performance: monitoring, health check, ACLs, tracing, more.
  • at most, it adds about 2ms overhead to connections
  • Go coding standards are enforced, unit tests with strict coverage requirements, end-to-end tests, Travis, CodeClimate and Netlify. Readability is king.
  • On February 5 2018, it will be a CNCF project. One year of due diligence. They said there was nothing to compare it with. Looked at maturity and contributors. It’s becoming a truly community-owned project! (CNCF to Host Vitess is already live as of now)
  • roadmap: full cross-shard queries, migration tools, simplify configurability, documentation.
  • full MySQL protocol, but a limited query set – they want to get it to a point where it accepts a full MySQL query.
Orchestrator on Raft – Shlomi Noach
  • Raft: guaranteed to be in-order replication log, an increasing index. This is how nodes choose a leader based on who has the higher index. Get periodic snapshots (node runs a full backup).
  • HashiCorp raft, a Golang raft implementation, used by Consul
  • orchestrator manages topology for HA topologies; also want orchestrator to be highly available. Now with orchestrator/raft, remove the MySQL backend dependency, and you can have data center fencing too. Now you get: better cross-DC deploys, DC-local KV control, and also Kubernetes friendly.
  • n-orchestrator nodes, each node still runs its own backend (either MySQL or SQLite). Orchestrator provides the communication for SQLite between the nodes. Only one (the Raft leader) will handle failovers
  • implementation & deployment @ Github – one node per DC (deployed at 3 different DCs). 1-second raft polling interval. 2 major DCs, one in the cloud. Step-down, raft-yield, SQLite-backed log store, and still a MySQL backend (SQLite backend use case is in the works)
  • They patched the HashiCorp raft library. The library doesn’t care about the identity of nodes, with Github they do want to control the identity of the leader. There is an “active” data center, and locality is important. This is what they mean by raft-yield (picking a candidate leader).
  • The ability for a leader to step down is also something they had to patch.
  • HashiCorp Raft only supports LMDB and another database, so the replication log is now kept in a relational SQLite backed log store. Another patch.
  • once orchestrator can’t run its own self-health check, it recognizes this. The application can tell raft now that it’s stepping down. Takes 5 seconds to step down, and raft then promotes another orchestrator node to be the leader. This is their patch.
  • can also grab leadership
  • DC fencing handles network partitioning.
  • orchestrator is Consul-aware. Upon failover, orchestrator updates Consul KV with the identity of the promoted master.
  • considerations to watch out for: what happens if, upon replay of the Raft log, you hit two failovers for the same cluster? NOW() and otherwise time-based assumptions. Reapplying snapshot/log upon startup
  • roadmap: use Kubernetes (cluster IP based configuration in progress, already container friendly via auto-re-provisioning of nodes via Raft)
MyRocks Roadmaps – Yoshinori Matsunobu
  • Facebook has a large User Database (UDB). Social graph, massively sharded, low latency, automated operations, pure flash storage (constrained by space, not CPU/IOPS)
  • They have a record cache in front of MySQL – Tao – for reads. If the cache misses, then it hits the database, and all write requests go through MySQL. UDB has to be fast to ensure a good user experience.
  • they also at Facebook run 2 instances of MySQL on the same machine, because CPU wasn’t huge, but the space savings were awesome.
  • design decisions: clustered index (same as InnoDB), slower for reads, faster for writes (bloom filters, column family), support for transactions including consistency between binlog and MyRocks. Faster data loading/deletes/replication, dynamic options (instead of having to restart mysqld), TTL (comparable to HBase TTL feature, specify the TTL, any data older than time, can be removed), online logical (for recovery purposes) & binary backup (for creating replicas)
  • Pros: smaller space, better cache hit rate, writes are faster so you get faster replication, much smaller bytes written
  • Cons: no statement based replication, GAP locks, foreign keys, full-text index, spatial index support. Need to use case sensitive collations for performance. Reads are slower, especially if the data fits in memory. Dependent on file system and OS; lack of solid direct I/O (uses buffered I/O). You need a newer than 4.6 kernel. Too many tuning options beyond buffer pool such as bloom filter, compactions, etc.
  • Completed InnoDB to MyRocks migration. Saved 50% space in UDB compared to compressed InnoDB.
  • Roadmaps: getting in MariaDB and Percona Server for MySQL. Read Mark’s blog for matching read performance vs InnoDB. Supporting mixed engines. Better replication and bigger instance sizes.
  • mixed engines: InnoDB and MyRocks on the same instance, though a single transaction does not overlap engines. Plan to extend xtrabackup to integrate myrocks_hotbackup. Backport gtid_pos_auto_engines from MariaDB?
  • Removing the engine log. Double writes come from having both a binlog and an engine log, which requires 2PC and ordered commits. Use one log? Either the binlog, a binlog-like service, or the RocksDB WAL? They rely on the binlog now (semi-sync, binlog consumers), and need to determine how much performance is gained by stopping writes to the WAL.
  • Parallel replication apply is important in MySQL 8
  • support bigger instance sizes: a shared-nothing database is not a general purpose database. Today you can get 256GB+ RAM and 10TB+ flash on commodity servers. Why not run one big instance and put everything there? Bigger instances may help general purpose small-mid applications. Then you don’t have to worry about sharding. Atomic transactions, joins and secondary keys will just work. Amazon Aurora today supports a 60TB instance!
  • today: you can start deploying slaves with consistency check. Many status counters for instance monitoring.
ProxySQL internals – René Cannaò
  • reduce latency, scale, maximize throughput. A single instance can handle hundreds of thousands of connections and thousands of backend servers.
  • threading models: one thread per connection (blocking I/O), thread pooling (non-blocking I/O, scalable).
  • ProxySQL thread pool implementation: known as “MySQL threads”, fixed number of worker threads (configurable), all threads listen on the same port(s), client connections are not shared between threads, all threads perform their own network I/O, and it uses poll() (does that scale? Yes, but there is a reason why poll() was chosen over epoll())
  • threads never share client connections – no need for synchronization, thread contention is reduced, each thread calls poll(). Possibly imbalanced load as a con (one thread that has way more connections than another). Is it really a problem? Most of the time, no; connections will automatically balance.
  • poll() is O(N), epoll() is O(1). poll() is faster than epoll() for fewer connections (around 1000). Performance degrades when there are a lot of connections, so by default it uses poll() instead of epoll(); around 50,000 connections performance degrades badly – so ProxySQL has auxiliary threads.
  • MySQL_Session() is implemented as a state machine. Stores metadata associated with the client session (running timers, default hostgroup, etc.)
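The per-thread poll() loop described above can be sketched with Python’s select module. This is an illustrative sketch of the pattern only (ProxySQL itself is C++; the pipe stands in for a client connection):

```python
import os
import select

# Each worker thread owns its client connections and runs its own
# poll() loop; nothing is shared, so no synchronization is needed.
r, w = os.pipe()                    # stand-in for one client connection

poller = select.poll()
poller.register(r, select.POLLIN)   # watch the fd for readable data

os.write(w, b"SELECT 1")            # the "client" sends a query
events = poller.poll(1000)          # O(N) scan over registered fds

for fd, mask in events:
    if fd == r and mask & select.POLLIN:
        print(os.read(fd, 64))      # hand the bytes to the session handler
```

On Linux, select.epoll offers the O(1) alternative mentioned in the talk, with the same register/poll shape.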
MySQL Point-in-time recovery like a rockstar – Frederic Descamps
Releases
  • Percona Monitoring and Management 1.7.0 (PMM) – This release features improved support for external services, which enables a PMM Server to store and display metrics for any available Prometheus exporter. For example, you could deploy the postgres_exporter and use PMM’s external services feature to store PostgreSQL metrics in PMM. Immediately, you’ll see these new metrics in the Advanced Data Exploration dashboard. Then you could leverage many of the pre-developed PostgreSQL dashboards available on, and with a minimal amount of edits have a working PostgreSQL dashboard in PMM!
  • MariaDB Server 10.1.31 – usual updates to storage engines, and a handful of bug fixes.
Link List
Upcoming appearances
  • SCALE16x – Pasadena, California, USA – March 8-11 2018

I look forward to feedback/tips via e-mail at or on Twitter @bytebot.

'Command Not Found' during installation process

Lastest Forum Posts - February 9, 2018 - 4:16am
I am new to Linux and generally use Windows, but this service is only for Linux, so perhaps someone can help me figure out what I am doing wrong in this scenario.

I have followed the guide and instructions for installing Percona XtraDB Cluster to the tee. However, I get hung up when I get to bootstrapping the first node. According to the documentation, it says to do this:

Instead of changing the configuration, start the first node using the following command:

[root@pxc1 ~]# /etc/init.d/mysql bootstrap-pxc

I input that information into my terminal and I just get: Command Not Found.

$ [root@pxc1 ~]# /etc/init.d/mysql bootstrap-pxc1
[root@pxc1: command not found

What am I doing wrong? I copied and pasted it, so it can't be a misspelling. Googling the issue, I didn't come up with any results.

Fsync Performance on Storage Devices

Latest MySQL Performance Blog posts - February 8, 2018 - 7:53am

While preparing a post on the design of ZFS based servers for use with MySQL, I stumbled on the topic of fsync call performance. The fsync call is very expensive, but it is essential to databases as it allows for durability (the “D” of the ACID acronym).

Let’s first review the type of disk IO operations executed by InnoDB in MySQL. I’ll assume the default InnoDB variable values.

The first and most obvious type of IO is page reads and writes in the tablespaces. The pages are most often read one at a time, as 16KB random read operations. Writes to the tablespaces are also typically 16KB random operations, but they are done in batches. After every batch, fsync is called on the tablespace file handle.

To avoid partially written pages in the tablespaces (a source of data corruption), InnoDB performs a doublewrite. During a doublewrite operation, a batch of dirty pages, from 1 to about 100 pages, is first written sequentially to the doublewrite buffer and fsynced. The doublewrite buffer is a fixed area of the ibdata1 file, or a specific file with the latest Percona Server for MySQL 5.7. Only then do the writes to the tablespaces of the previous paragraph occur.

That leaves us with the writes to the InnoDB log files. During those writes, the transaction information — a kind of binary diff of the affected pages — is written to the log files and then the log file is fsynced. The duration of the fsync call can be a major contributor to the COMMIT latency.

Because the fsync call takes time, it greatly affects the performance of MySQL, which is why you have probably noticed there are many status variables that relate to fsyncs. To overcome the inherent limitations of the storage devices, group commit allows multiple simultaneous transactions to fsync the log file once for all the transactions waiting for the fsync. There is no need for a transaction to call fsync for a write operation that another transaction already forced to disk. A series of write transactions sent over a single database connection, however, cannot benefit from group commit.

Fsync Results

In order to evaluate the fsync performance, I used the following Python script:

#!/usr/bin/python
import os, sys, mmap

# Open a file
fd = os.open("testfile", os.O_RDWR | os.O_CREAT | os.O_DIRECT)
m = mmap.mmap(-1, 512)
for i in range(1, 1000):
    os.lseek(fd, 0, os.SEEK_SET)
    m[1] = "1"
    os.write(fd, m)
    os.fsync(fd)
# Close opened file
os.close(fd)

The script opens a file with the O_DIRECT flag, writes and fsyncs it 1000 times and closes the file. I added O_DIRECT after an internal discussion with my colleagues, but it doesn’t change the results and it doesn’t remove the need for calling fsync. We’ll discuss the impacts of O_DIRECT in more detail after we review the results. The script is called with the time command like below:

root@lab:/tmp/testfsync# time python /root/
real    0m18.320s
user    0m0.060s
sys     0m0.096s

In the above example using a 7.2k rpm drive, the fsync rate is about 56/s for a latency of 18ms. A 7.2k RPM drive performs 120 rotations per second. On average, the fsyncs require a bit more than two rotations to complete. The filesystem appears to make very little difference: ext4 and XFS show similar results. That means if MySQL uses such storage devices for the InnoDB log files, the latency of each transaction is at least 18ms. If the application workload requires 200 write transactions per second, they’ll need to be executed using at least four database connections.
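The arithmetic in that paragraph is easy to sanity-check; a quick sketch (18 ms is the fsync latency measured above):

```python
import math

fsync_latency = 0.018                    # seconds per fsync, measured

fsync_rate = 1 / fsync_latency           # maximum fsyncs per second
rotations = fsync_latency * (7200 / 60)  # 7200 RPM = 120 rotations/s

# Each connection serializes its commits behind its own fsyncs, so
# sustaining 200 write transactions/s needs at least:
connections = math.ceil(200 * fsync_latency)

print(round(fsync_rate))    # 56
print(round(rotations, 1))  # 2.2 rotations per fsync
print(connections)          # 4
```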

So, let’s begin with rotational devices. These are becoming a bit less common now with databases, especially without a raid controller. I could only find a few.

Drive               RPM   Rate  Latency  Notes
WDC WD2500BJKT      5400  22/s  45 ms    Laptop SATA from 2009
ST2000LM003         5400  15/s  66 ms    USB-3 portable drive
ST3750528AS         7200  40/s  25 ms    Desktop grade SATA
WD2502ABYS-18B7A0   7200  56/s  18 ms    Desktop grade SATA
HUA723020ALA641     7200  50/s  20 ms    Enterprise grade SATA, md mirror
Dell SAS unknown    7200  58/s  17 ms    Behind Perc controller but no write cache
HDWE150             7200  43/s  23 ms    Recent desktop grade SATA, 5TB


I unfortunately didn’t have access to any 10k or 15k RPM drives that were not behind a raid controller with a write cache. If you have access to such drives, run the above script a few times and send me your results; that would help create a more complete picture! So, we can see a correlation between the rotational speed and the fsync rate, which makes sense. The faster a disk turns, the faster it can fsync. The fsync call saves the data and then updates the metadata; hence, the heads need to move. That’s probably the main explanation for the remaining disparity. A good point: all drives appear to be fully complying with the SATA flush command even though they all have an enabled write cache. Disabling the drives’ write caches made no difference.

With the above numbers, the possible transaction rates in fully ACID mode are pretty depressing. But those drives were rotating ones; what about SSD drives? SSDs are memory devices and are much faster for random IO operations. They are extremely fast for reads, and good for writes. But as you will see below, not that great for fsyncs.

Drive                  Rate    Latency  Notes
SAMSUNG MZ7LN512       160/s   6.3 ms   Consumer grade SATA
Crucial_CT480M500SSD1  108/s   9.3 ms   Consumer grade SATA
Intel 520              2031/s  0.49 ms  Consumer grade SATA
SAMSUNG MZVPV512HDGL   104/s   9.6 ms   Consumer grade NVMe
Samsung SSD 960 PRO    267/s   3.8 ms   High-end consumer grade NVMe
Intel PC-3100          1274/s  0.79 ms  Low-end consumer grade NVMe (cheat?)
Intel 750              2038/s  0.49 ms  High-end consumer grade NVMe
Intel PC-3700          7380/s  0.14 ms  High-end enterprise grade NVMe


Again, this is a small sample of the devices I have access to. All SSD/Flash devices have write caches, but only the high-end devices have capacitors to flush their write cache to the flash on a loss of power. The PC-3100 device is actually in my home server, and it is obviously cheating. If you look at the card specs on the Intel website, it doesn’t have the “Enhanced Power Loss Data Protection” and “End-to-End Data Protection” features. The much more expensive PC-3700 does. I use the PC-3100 as a ZFS L2ARC device, so I am good. In general, the performance of a flash device varies a bit more than that of rotational devices, since factors like the number of recent writes and the filling factor come into play.

Even when using a high-end NVMe device like the PC-3700, you can’t reach 10k fully ACID transactions per second at low thread concurrency. How do you reach the higher levels? The answer here is the good old raid controller with a protected write cache. The write cache is basically using DRAM memory protected from power loss by a battery. SAN controllers have similar caches. The writes to the InnoDB log files are sequential writes interleaved with fsyncs. The raid controller concatenates the sequential writes, eventually writing one big chunk on disk and… ignoring the fsyncs. Here’s the result from the only device I had access to:

Drive               Rate     Latency  Notes
Dell Perc with BBU  23000/s  0.04 ms  Array of 7.2k RPM drives


That’s extremely fast but, of course, it is memory. I modified the script to loop 10k times instead of 1k. In theory, a single slave thread doing simple transactions could reach a rate of 20k/s or more while being fully ACID.


We must always consider the results we got in the previous section in the context of a given application. For example, a server using an Intel PC-3700 NVMe card can do more than 7000 fully ACID transactions per second while being fully durable, provided those transactions are issued by a sufficient number of threads. Adding threads will not allow scaling infinitely, though; at some point, other bottlenecks like mutex contention or page flushing will dominate.

We often say that Galera-based cluster solutions like Percona XtraDB Cluster (PXC) add latency to the transactions, since they involve communication over the network. With the Galera protocol, a commit operation returns only when all the nodes have received the data. Thus, it is a good practice to relax the local durability and set innodb_flush_log_at_trx_commit to 0 or 2. On a local network, the ping time is always below 1ms and often below 0.1ms. As a result, the latency the network adds is often smaller than that of a local fsync.

About fdatasync

The fsync system call is not the only one that persists data to disk. There is also the fdatasync call. fdatasync persists the data to disk but does not update metadata information like the file size and last update time. In other words, it performs one write operation instead of two. In the Python script, if I replace os.fsync with os.fdatasync, here are the results for a subset of devices:

Drive                  RPM   Rate   Latency  Notes
ST2000LM003            5400  72/s   13 ms    USB-3 portable drive
WD2502ABYS-18B7A0      7200  118/s  8.5 ms   Desktop grade SATA
SAMSUNG MZ7LN512       N/A   333/s  3.0 ms   Consumer grade SATA
Crucial_CT480M500SSD1  N/A   213/s  4.7 ms   Consumer grade SATA
Samsung SSD 960 PRO    N/A   714/s  1.4 ms   High-end consumer grade NVMe
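The substitution is a one-line change in the loop; a minimal sketch (O_DIRECT is left out here for portability of the sketch, and “testfile” mirrors the original script):

```python
import os
import mmap

fd = os.open("testfile", os.O_RDWR | os.O_CREAT)
m = mmap.mmap(-1, 512)          # page-aligned 512-byte buffer
m[0:1] = b"1"
for i in range(1000):
    os.lseek(fd, 0, os.SEEK_SET)
    os.write(fd, m)
    os.fdatasync(fd)            # data only; metadata flushed only when needed
os.close(fd)
os.unlink("testfile")           # clean up the test file
```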


In all cases, the resulting rates have more than doubled. The fdatasync call has a troubled history, as there were issues with it many years ago. Because of those issues, InnoDB never uses fdatasync, only fsync. You can find the following comment in the InnoDB os/ source file:

/* We let O_SYNC only affect log files; note that we map O_DSYNC to O_SYNC because the datasync options seemed to corrupt files in 2001 in both Linux and Solaris */

2001 is a long time ago. Given the above results, maybe we should reconsider the use of fdatasync. From the Linux man page for fdatasync, you find:

fdatasync() is similar to fsync(), but does not flush modified metadata unless that metadata is needed in order to allow a subsequent data retrieval to be correctly handled. For example, changes to st_atime or st_mtime (respectively, time of last access and time of last modification; see stat(2)) do not require flushing because they are not necessary for a subsequent data read to be handled correctly. On the other hand, a change to the file size (st_size, as made by say ftruncate(2)), would require a metadata flush.

So, even with fdatasync, operations like extending an InnoDB tablespace will update the metadata correctly. This appears to be an interesting low-hanging fruit in terms of MySQL performance. In fact, WebScaleSQL already has fdatasync available.


Why do we need fsync or fdatasync with O_DIRECT? With O_DIRECT, the OS is not buffering anything along the way, so the data should be persisted, right? Actually, the OS is not buffering, but the device very likely is. Here are a few results to highlight the point, using a 7.2k rpm SATA drive:

Test                                     Rate    Latency
O_DIRECT, drive write cache enabled      4651/s  0.22 ms
O_DIRECT, drive write cache disabled     101/s   9.9 ms
ASYNC + fdatasync, write cache enabled   119/s   8.4 ms
ASYNC + fdatasync, write cache disabled  117/s   8.5 ms


The drive write cache was enabled/disabled using the hdparm command. Clearly, there’s no way the drive can persist 4651 writes per second. O_DIRECT doesn’t send the SATA flush command to the disk, so we are only writing to the drive write cache. If the drive write cache is disabled, the rate falls to a more reasonable value of 101/s. What is interesting — and I don’t really understand why — is that opening the file in async mode and performing fdatasync is significantly faster. As expected, the presence of the drive write cache has no impact on ASYNC + fdatasync. When the fdatasync call occurs, the data is still in the OS file cache.
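For reference, this is what toggling a SATA drive’s volatile write cache with hdparm looks like (requires root; /dev/sdX is a placeholder for the actual device):

```shell
hdparm -W /dev/sdX    # report the current write-cache setting
hdparm -W0 /dev/sdX   # disable the volatile write cache
hdparm -W1 /dev/sdX   # re-enable it
```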

If you want to use only O_DIRECT, you should make sure all the storage write caches are crash safe. That’s why MySQL adds a fsync call after a write to a file opened with O_DIRECT.


These days, I find it difficult to write a blog post without talking about ZFS. How does ZFS handle fsyncs and fdatasyncs? ZFS, like a database, performs write-ahead logging in the ZIL. That means calls like fsync and fdatasync return when the data has been persisted to the ZIL, and not to the actual filesystem. The real write operation is done a few seconds later by a background thread. That means the added write for the metadata does not impact performance right away. My home server uses ZFS over a pair of 7.2k RPM drives and doesn’t have a SLOG device. The ZIL is thus stored on the 7.2k RPM drives. The results are the following:

Test           RPM   Rate   Latency
ZFS fsync      7200  104/s  9.6 ms
ZFS fdatasync  7200  107/s  9.3 ms


Remember that with ZFS, you need to disable the O_DIRECT mode. The fdatasync rate appears to be slightly faster, but the difference is not really significant. With ZFS, the fsync/fdatasync performance relates to where the ZIL is stored. If there is no SLOG device, the ZIL is stored with the data and thus, the persistence performance of the devices used for the data matters. If there is a SLOG device, the persistence performance is governed by the specs of the device(s) on which the SLOG is located. That’s a very important aspect we have to consider when designing a MySQL server that will use ZFS. The design of such a server will be discussed in more detail in a future post.

pt-archiver does not honor original where clause when used with Index hint

Latest Forum Posts - February 8, 2018 - 7:20am
I'm purging a table using pt-archiver with a WHERE clause of "deleted=1 and last_activity <= date_sub(now(), interval 30 day)". This was very slow, as the SELECT statement was using the unique key index on the table. I tested the performance of the SELECT query by passing an index hint for idx_deleted, an index on the deleted column, and it was much faster. So I made use of the i parameter to use the index on the deleted column. The archiver job then ran much faster and stopped showing up in the slow query log. However, I noticed that the actual statement running was "DELETE from table where deleted=1 LIMIT 1", and on checking the data, I saw that it was deleting records without honoring the last_activity date! Is this a bug? The version I'm using is pt-archiver 2.2.5.
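For reference, a sketch of such a purge job (host, database, table, and index names here are assumptions). The index is passed via the source DSN's i option, and the full WHERE clause must still be given in --where; the index hint only influences how rows are found, not which rows qualify:

```shell
# Purge rows matching the full predicate, forcing the idx_deleted index
# via the DSN's i= option rather than relying on the optimizer's choice
pt-archiver \
  --source h=localhost,D=mydb,t=mytable,i=idx_deleted \
  --purge \
  --where "deleted=1 AND last_activity <= date_sub(now(), interval 30 day)" \
  --limit 1000 --commit-each
```

If pt-archiver's generated DELETE drops part of the predicate, running it first with --dry-run prints the statements it would execute, which makes this kind of problem visible before any rows are removed.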

Tutorial Schedule for Percona Live 2018 Is Live

Latest MySQL Performance Blog posts - February 8, 2018 - 4:58am

Percona has revealed the line-up of in-depth tutorials for the Percona Live 2018 Open Source Database Conference, taking place April 23-25, 2018 at the Santa Clara Convention Center in Santa Clara, Calif. Secure your spot now with Advanced Registration prices (available until March 4, 2018). Sponsorship opportunities for the conference are still available.

Percona Live 2018 Open Source Database Conference is the premier open source database event. The theme for the upcoming conference is “Championing Open Source Databases,” with a range of topics on MySQL, MongoDB and other open source databases, including time series databases, PostgreSQL and RocksDB. Session tracks include Developers, Operations and Business/Case Studies. 

Tutorials take place throughout the day on April 23, 2018, and provide practical, in-depth knowledge of critical open source database issues.

Hyatt Regency Santa Clara & The Santa Clara Convention Center

Percona Live 2018 Open Source Database Conference will be held at the Hyatt Regency Santa Clara & The Santa Clara Convention Center, at 5101 Great America Parkway Santa Clara, CA 95054.

The Hyatt Regency Santa Clara & The Santa Clara Convention Center is a prime location in the heart of Silicon Valley. Enjoy this spacious venue with complimentary Wi-Fi, on-site expert staff and three great restaurants. You can reserve a room by booking through the Hyatt's dedicated Percona Live reservation site.

Book your hotel using Percona’s special room block rate!


Sponsorship opportunities for Percona Live 2018 Open Source Database Conference are available and offer the opportunity to interact with the DBAs, sysadmins, developers, CTOs, CEOs, business managers, technology evangelists, solution vendors, and entrepreneurs who typically attend the event. Contact Percona for sponsorship details.

  • Diamond Sponsors – Continuent, VividCortex
  • Gold Sponsors – Facebook, Grafana
  • Bronze Sponsors – SolarWinds, TwinDB, Yelp
  • Media Sponsors – Datanami, EnterpriseTech, HPCWire,