
Cluster hangs during alter table command

Latest Forum Posts - June 14, 2018 - 5:37am
We have a three-node cluster on PXC 5.5. Writes happen on only one of the nodes. Whenever we alter a table (specifically a very large table), the entire cluster hangs. The process list shows many connections in state 'wsrep in pre-commit stage' on the main write node; these queries touch tables unrelated to the ALTER TABLE.

Is this simply a case of one of the nodes causing flow control? We have plans to replace our slowest node very soon, so I'm hoping it's that. If we were less concerned about a single node being slightly behind, would increasing fc_limit keep the cluster from hanging during alters?
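A quick way to check whether flow control is the culprit, and to raise the limit on a running node (a sketch: gcs.fc_limit=500 is just an example value, and a SET GLOBAL change is not persisted to my.cnf):

-- Fraction of time this node has been paused by flow control:
SHOW STATUS LIKE 'wsrep_flow_control_paused';
-- Temporarily raise the flow-control limit (the Galera default is 16):
SET GLOBAL wsrep_provider_options = 'gcs.fc_limit=500';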

Audit Log no longer works with MySQL 5.7?

Latest Forum Posts - June 13, 2018 - 8:05am
I have been using the Percona Audit Log Plugin on my MySQL 5.5 databases for some time now, and it works very well. I have recently started testing an upgrade to MySQL 5.7, but the audit log plugin no longer seems to work.

I have tried the versions shipped with Percona Server 5.7, 5.6, and 5.5, but when I try to install the plugin I get:

Error Code: 1126. Can't open shared library '/usr/lib/mysql/plugin/audit_log.so' (errno: 2 /usr/lib/mysql/plugin/audit_log.so: undefined symbol: _my_thread_var)

I have Debian servers and have installed the Oracle build of MySQL. Are there any known incompatibilities? If so, is there a version that does work?



Question about QAN and MongoDB

Latest Forum Posts - June 12, 2018 - 1:38pm
Hello,
Everything is functional on PMM for us, but one of our services spams MongoDB with the db.stats() command (for health-check purposes), and it will not be possible (for now...) to deactivate this.
Is it possible to filter out certain queries in QAN, or to tell QAN to ignore that query?
Thank you for your reply, and have a good day.

Miche

Unable to install XtraDBCluster on a server where Percona Server for MySQL is running

Latest Forum Posts - June 12, 2018 - 9:39am
I have Percona Server for MySQL 5.7 installed on my Linux server. When I install XtraDB Cluster 5.7 on the same server, I get the error shown below. Based on the error message, there are conflicting files between percona-xtradb-cluster-server and percona-server-common. I tried uninstalling the percona-server-common files and then installing XtraDB Cluster; the installation succeeds, but the Percona MySQL server doesn't come up when I try to bootstrap it. How do I resolve this issue?
OS: Ubuntu 16.04 LTS
Server: AWS EC2 instances


$ sudo apt-get install percona-xtradb-cluster-57
[sudo] password for indiadev:
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
libboost-program-options1.58.0 percona-server-common-5.7 percona-xtradb-cluster-garbd-5.7 percona-xtradb-cluster-garbd-debug-5.7
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
percona-xtradb-cluster-client-5.7 percona-xtradb-cluster-common-5.7 percona-xtradb-cluster-server-5.7
Suggested packages:
tinyca
The following packages will be REMOVED:
percona-server-client-5.7 percona-server-server-5.7
The following NEW packages will be installed:
percona-xtradb-cluster-57 percona-xtradb-cluster-client-5.7 percona-xtradb-cluster-common-5.7 percona-xtradb-cluster-server-5.7
0 upgraded, 4 newly installed, 2 to remove and 7 not upgraded.
Need to get 0 B/12.5 MB of archives.
After this operation, 71.3 MB disk space will be freed.
Do you want to continue? [Y/n] y
Preconfiguring packages ...
(Reading database ... 84071 files and directories currently installed.)
Removing percona-server-server-5.7 (5.7.22-22-1.xenial) ...
Removing percona-server-client-5.7 (5.7.22-22-1.xenial) ...
Processing triggers for man-db (2.7.5-1) ...
Selecting previously unselected package percona-xtradb-cluster-common-5.7.
(Reading database ... 83904 files and directories currently installed.)
Preparing to unpack .../percona-xtradb-cluster-common-5.7_5.7.21-29.26-1.xenial_amd64.deb ...
Unpacking percona-xtradb-cluster-common-5.7 (5.7.21-29.26-1.xenial) ...
Selecting previously unselected package percona-xtradb-cluster-client-5.7.
Preparing to unpack .../percona-xtradb-cluster-client-5.7_5.7.21-29.26-1.xenial_amd64.deb ...
Unpacking percona-xtradb-cluster-client-5.7 (5.7.21-29.26-1.xenial) ...
Setting up percona-xtradb-cluster-common-5.7 (5.7.21-29.26-1.xenial) ...
dpkg-query: no path found matching pattern /etc/mysql/my.cnf
-------------
* The suggested mysql options and settings are in /etc/mysql/percona-server.conf.d/mysqld.cnf
* If you want to use mysqld.cnf as default configuration file please make backup of /etc/my.cnf
* Once it is done please execute the following commands:
rm -rf /etc/mysql/my.cnf
update-alternatives --install /etc/mysql/my.cnf my.cnf "/etc/mysql/percona-xtradb-cluster.cnf" 200
-------------
Selecting previously unselected package percona-xtradb-cluster-server-5.7.
(Reading database ... 83933 files and directories currently installed.)
Preparing to unpack .../percona-xtradb-cluster-server-5.7_5.7.21-29.26-1.xenial_amd64.deb ...
Unpacking percona-xtradb-cluster-server-5.7 (5.7.21-29.26-1.xenial) ...
dpkg: error processing archive /var/cache/apt/archives/percona-xtradb-cluster-server-5.7_5.7.21-29.26-1.xenial_amd64.deb (--unpack):
trying to overwrite '/usr/share/mysql/danish/errmsg.sys', which is also in package percona-server-common-5.7 5.7.22-22-1.xenial
dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
Selecting previously unselected package percona-xtradb-cluster-57.
Preparing to unpack .../percona-xtradb-cluster-57_5.7.21-29.26-1.xenial_amd64.deb ...
Unpacking percona-xtradb-cluster-57 (5.7.21-29.26-1.xenial) ...
Processing triggers for man-db (2.7.5-1) ...
Errors were encountered while processing:
/var/cache/apt/archives/percona-xtradb-cluster-server-5.7_5.7.21-29.26-1.xenial_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
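One possible way out (a sketch, not verified on this exact system): the conflicting /usr/share/mysql files belong to percona-server-common-5.7, so the remaining standalone Percona Server packages need to go before the cluster packages are unpacked:

# Remove the leftover standalone Percona Server packages first
sudo apt-get remove --purge percona-server-common-5.7 percona-server-client-5.7 percona-server-server-5.7
# Then install the cluster packages cleanly
sudo apt-get install percona-xtradb-cluster-57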

Bootstrap Node

Latest Forum Posts - June 12, 2018 - 5:35am
Hi All,

I have a cluster setup with 5 nodes
node1 - bootstrap
node2, node3, node4, node5

Unfortunately, node1 was shut down, so I started it again as a normal node. Currently, my cluster doesn't have a bootstrap node. How do I start node1 as the bootstrap node? I know I could stop nodes 2-5 and then restart node1 as the bootstrap node.

Is there any other procedure to bootstrap a node without taking the whole cluster down?
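For reference, a bootstrap node is only needed when forming a new cluster: if nodes 2-5 still form a Primary Component, node1 can simply rejoin as a normal node and nothing needs to be "the bootstrap". A sketch of the checks, plus the Galera option that promotes a node in place if quorum is ever lost (no full-cluster restart required):

-- On any running node: if the cluster is Primary, no re-bootstrap is needed.
SHOW STATUS LIKE 'wsrep_cluster_status';  -- expect: Primary
SHOW STATUS LIKE 'wsrep_cluster_size';    -- expect: 5
-- Only if the whole cluster has become non-Primary, promote one node in place:
SET GLOBAL wsrep_provider_options = 'pc.bootstrap=true';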

Audit logging: exclude accounts by hostname

Latest Forum Posts - June 12, 2018 - 1:48am
Hi All,

Does anyone have experience setting a wildcard hostname for audit_log_exclude_accounts, such as user1@%?
Is it possible, and if so, what should the variable look like?

Thanks.

Daan
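For what it's worth: the Percona Server documentation describes audit_log_exclude_accounts as a comma-separated list of user@host accounts, and to my knowledge a % wildcard is not expanded, so each host has to be listed explicitly. A sketch (hostnames are placeholders):

-- The variable is dynamic, so it can be set at runtime:
SET GLOBAL audit_log_exclude_accounts = 'user1@host1.example.com,user1@host2.example.com';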

ProxySQL Experimental Feature: Native ProxySQL Clustering

Latest MySQL Performance Blog posts - June 11, 2018 - 5:18am

ProxySQL 1.4.2 introduced native clustering, allowing several ProxySQL instances to communicate with and share configuration updates with each other. In this blog post, I’ll review this new feature and how we can start working with 3 nodes.

Before I continue, let's review two common methods of installing ProxySQL.

ProxySQL as a centralized server

This is the most common installation, where ProxySQL sits between the application servers and the database. It is simple, but offers no high availability: if ProxySQL goes down, you lose all connectivity to the database.

ProxySQL on app instances

Another common setup is to install ProxySQL onto each application server. This is good because the loss of one ProxySQL/App server will not bring down the entire application.

For more information about these installation patterns, please see Where Do I Put ProxySQL?

Sometimes our applications and databases grow fast. Maybe you need to add a load balancer, for example, and at that moment you start thinking: "How can I configure and maintain all these ProxySQL nodes without mistakes?"

There are many tools for this, such as Ansible, Puppet, and Chef, but you will need to write and maintain scripts for those tasks, which is difficult for one person to administer.

Now, there is a native solution, built into ProxySQL, to create and administer a cluster in an easy way.

At the moment this feature is EXPERIMENTAL and subject to change. Think very carefully before using it in production; in fact, I strongly recommend you wait. However, if you would like to start testing it, you need ProxySQL 1.4.2 or later.

This clustering feature is really useful if you have installed one ProxySQL per application instance, because all the changes in one of the ProxySQL nodes will be propagated to all the other ProxySQL nodes. You can also configure a “master-slave” style setup with ProxySQL clustering.

There are only 4 tables where you can make changes and propagate the configuration:

  • mysql_query_rules
  • mysql_servers
  • mysql_users
  • proxysql_servers
How does it work?

It's simple. When you make a change (INSERT/DELETE/UPDATE) to any of these tables and then run LOAD … TO RUNTIME, ProxySQL computes a new checksum of the table's data and increments its version number in the table runtime_checksums_values. Below is an example.

admin ((none))>SELECT name, version, FROM_UNIXTIME(epoch), checksum FROM runtime_checksums_values ORDER BY name;
+-------------------+---------+----------------------+--------------------+
| name              | version | FROM_UNIXTIME(epoch) | checksum           |
+-------------------+---------+----------------------+--------------------+
| admin_variables   | 0       | 1970-01-01 00:00:00  |                    |
| mysql_query_rules | 1       | 2018-04-26 15:58:23  | 0x0000000000000000 |
| mysql_servers     | 1       | 2018-04-26 15:58:23  | 0x0000000000000000 |
| mysql_users       | 4       | 2018-04-26 18:36:12  | 0x2F35CAB62143AE41 |
| mysql_variables   | 0       | 1970-01-01 00:00:00  |                    |
| proxysql_servers  | 1       | 2018-04-26 15:58:23  | 0x0000000000000000 |
+-------------------+---------+----------------------+--------------------+

Internally, every node monitors and communicates with every other ProxySQL node. When a node detects a change in both the checksum and the version, it fetches a copy of the modified table, applies the same changes locally, loads the new config to RUNTIME to make it visible to connected applications, and automatically saves it to DISK for persistence.

The following setup creates a "synchronous cluster", so any change to these four tables on any ProxySQL server will be replicated to all other ProxySQL nodes. Be careful!

How can I start testing this new feature?

1) To start, we need at least two nodes. Download and install ProxySQL 1.4.2 or higher and start with a clean configuration.

2) On all nodes, we need to update the following global variables. These changes will set the username and password used by each node’s internal communication to cluster1/clusterpass. These must be the same on all nodes in this cluster.

UPDATE global_variables SET variable_value='admin:admin;cluster1:clusterpass' WHERE variable_name='admin-admin_credentials';
UPDATE global_variables SET variable_value='cluster1' WHERE variable_name='admin-cluster_username';
UPDATE global_variables SET variable_value='clusterpass' WHERE variable_name='admin-cluster_password';
UPDATE global_variables SET variable_value=200 WHERE variable_name='admin-cluster_check_interval_ms';
UPDATE global_variables SET variable_value=100 WHERE variable_name='admin-cluster_check_status_frequency';
UPDATE global_variables SET variable_value='true' WHERE variable_name='admin-cluster_mysql_query_rules_save_to_disk';
UPDATE global_variables SET variable_value='true' WHERE variable_name='admin-cluster_mysql_servers_save_to_disk';
UPDATE global_variables SET variable_value='true' WHERE variable_name='admin-cluster_mysql_users_save_to_disk';
UPDATE global_variables SET variable_value='true' WHERE variable_name='admin-cluster_proxysql_servers_save_to_disk';
UPDATE global_variables SET variable_value=3 WHERE variable_name='admin-cluster_mysql_query_rules_diffs_before_sync';
UPDATE global_variables SET variable_value=3 WHERE variable_name='admin-cluster_mysql_servers_diffs_before_sync';
UPDATE global_variables SET variable_value=3 WHERE variable_name='admin-cluster_mysql_users_diffs_before_sync';
UPDATE global_variables SET variable_value=3 WHERE variable_name='admin-cluster_proxysql_servers_diffs_before_sync';
LOAD ADMIN VARIABLES TO RUNTIME;
SAVE ADMIN VARIABLES TO DISK;

3) Add the IPs of all ProxySQL nodes to the proxysql_servers table on every node:

INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES ('10.138.180.183',6032,100,'PRIMARY');
INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES ('10.138.244.108',6032,99,'SECONDARY');
INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES ('10.138.244.244',6032,98,'SECONDARY');
LOAD PROXYSQL SERVERS TO RUNTIME;
SAVE PROXYSQL SERVERS TO DISK;

At this moment, we have all nodes synced.

In the next example from the log file, we can see when node1 detected node2.

[root@proxysql1 ~]# tail /var/lib/proxysql/proxysql.log
...
2018-05-10 11:19:51 [INFO] Cluster: Fetching ProxySQL Servers from peer 10.138.244.108:6032 started
2018-05-10 11:19:51 [INFO] Cluster: Fetching ProxySQL Servers from peer 10.138.244.108:6032 completed
2018-05-10 11:19:51 [INFO] Cluster: Loading to runtime ProxySQL Servers from peer 10.138.244.108:6032
2018-05-10 11:19:51 [INFO] Destroyed Cluster Node Entry for host 10.138.148.242:6032
2018-05-10 11:19:51 [INFO] Cluster: Saving to disk ProxySQL Servers from peer 10.138.244.108:6032
2018-05-10 11:19:52 [INFO] Cluster: detected a new checksum for proxysql_servers from peer 10.138.180.183:6032, version 6, epoch 1525951191, checksum 0x3D819A34C06EF4EA . Not syncing yet ...
2018-05-10 11:19:52 [INFO] Cluster: checksum for proxysql_servers from peer 10.138.180.183:6032 matches with local checksum 0x3D819A34C06EF4EA , we won't sync.
2018-05-10 11:19:52 [INFO] Cluster: closing thread for peer 10.138.148.242:6032
2018-05-10 11:19:52 [INFO] Cluster: detected a new checksum for proxysql_servers from peer 10.138.244.244:6032, version 4, epoch 1525951163, checksum 0x3D819A34C06EF4EA . Not syncing yet ...
2018-05-10 11:19:52 [INFO] Cluster: checksum for proxysql_servers from peer 10.138.244.244:6032 matches with local checksum 0x3D819A34C06EF4EA , we won't sync
...

Another example is adding users to the mysql_users table. Remember, these users enable MySQL connections between the application (frontend) and MySQL (backend).

We will add a new username and password on any server; in my test I’ll use node2:

admin proxysql2 ((none))>INSERT INTO mysql_users(username,password) VALUES ('user1','crazyPassword');
Query OK, 1 row affected (0.00 sec)

admin proxysql2 ((none))>LOAD MYSQL USERS TO RUNTIME;
Query OK, 0 rows affected (0.00 sec)

In the log file from node3, we can see the update immediately:

[root@proxysql3 ~]# tail /var/lib/proxysql/proxysql.log
...
2018-05-10 11:30:57 [INFO] Cluster: detected a new checksum for mysql_users from peer 10.138.244.108:6032, version 2, epoch 1525951873, checksum 0x2AF43564C9985EC7 . Not syncing yet ...
2018-05-10 11:30:57 [INFO] Cluster: detected a peer 10.138.244.108:6032 with mysql_users version 2, epoch 1525951873, diff_check 3. Own version: 1, epoch: 1525950968. Proceeding with remote sync
2018-05-10 11:30:57 [INFO] Cluster: detected a peer 10.138.244.108:6032 with mysql_users version 2, epoch 1525951873, diff_check 4. Own version: 1, epoch: 1525950968. Proceeding with remote sync
2018-05-10 11:30:57 [INFO] Cluster: detected peer 10.138.244.108:6032 with mysql_users version 2, epoch 1525951873
2018-05-10 11:30:57 [INFO] Cluster: Fetching MySQL Users from peer 10.138.244.108:6032 started
2018-05-10 11:30:57 [INFO] Cluster: Fetching MySQL Users from peer 10.138.244.108:6032 completed
2018-05-10 11:30:57 [INFO] Cluster: Loading to runtime MySQL Users from peer 10.138.244.108:6032
2018-05-10 11:30:57 [INFO] Cluster: Saving to disk MySQL Query Rules from peer 10.138.244.108:6032
2018-05-10 11:30:57 [INFO] Cluster: detected a new checksum for mysql_users from peer 10.138.244.244:6032, version 2, epoch 1525951857, checksum 0x2AF43564C9985EC7 . Not syncing yet ...
2018-05-10 11:30:57 [INFO] Cluster: checksum for mysql_users from peer 10.138.244.244:6032 matches with local checksum 0x2AF43564C9985EC7 , we won't sync.
2018-05-10 11:30:57 [INFO] Cluster: detected a new checksum for mysql_users from peer 10.138.180.183:6032, version 2, epoch 1525951886, checksum 0x2AF43564C9985EC7 . Not syncing yet ...
2018-05-10 11:30:57 [INFO] Cluster: checksum for mysql_users from peer 10.138.180.183:6032 matches with local checksum 0x2AF43564C9985EC7 , we won't sync.
...

What happens if some node is down?

In this example, we will find out what happens if one node is down, has a network glitch, or suffers some other issue. I'll stop ProxySQL on node3:

[root@proxysql3 ~]# service proxysql stop
Shutting down ProxySQL: DONE!

On ProxySQL node1, we can check that node3 is unreachable:

[root@proxysql1 ~]# tailf /var/lib/proxysql/proxysql.log
2018-05-10 11:57:33 ProxySQL_Cluster.cpp:180:ProxySQL_Cluster_Monitor_thread(): [WARNING] Cluster: unable to connect to peer 10.138.244.244:6032 . Error: Can't connect to MySQL server on '10.138.244.244' (107)
2018-05-10 11:57:33 ProxySQL_Cluster.cpp:180:ProxySQL_Cluster_Monitor_thread(): [WARNING] Cluster: unable to connect to peer 10.138.244.244:6032 . Error: Can't connect to MySQL server on '10.138.244.244' (107)
2018-05-10 11:57:33 ProxySQL_Cluster.cpp:180:ProxySQL_Cluster_Monitor_thread(): [WARNING] Cluster: unable to connect to peer 10.138.244.244:6032 . Error: Can't connect to MySQL server on '10.138.244.244' (107)

Another check can be run on any ProxySQL node, node2 for example:

admin proxysql2 ((none))>SELECT hostname, checksum, FROM_UNIXTIME(changed_at) changed_at, FROM_UNIXTIME(updated_at) updated_at FROM stats_proxysql_servers_checksums WHERE name='proxysql_servers' ORDER BY hostname;
+----------------+--------------------+---------------------+---------------------+
| hostname       | checksum           | changed_at          | updated_at          |
+----------------+--------------------+---------------------+---------------------+
| 10.138.180.183 | 0x3D819A34C06EF4EA | 2018-05-10 11:19:39 | 2018-05-10 12:01:59 |
| 10.138.244.108 | 0x3D819A34C06EF4EA | 2018-05-10 11:19:38 | 2018-05-10 12:01:59 |
| 10.138.244.244 | 0x3D819A34C06EF4EA | 2018-05-10 11:19:39 | 2018-05-10 11:56:59 |
+----------------+--------------------+---------------------+---------------------+
3 rows in set (0.00 sec)

In the previous result, we can see that node3 (10.138.244.244) is not being updated; its updated_at column should show a later datetime. This means node3 is not running (it is down, or there is a network glitch).

At this point, any change to any of the tables mysql_query_rules, mysql_servers, mysql_users, or proxysql_servers will be replicated between nodes 1 and 2.

In this next example, while node3 is offline, we will add another user to the mysql_users table.

admin proxysql2 ((none))>INSERT INTO mysql_users(username,password) VALUES ('user2','passwordCrazy');
Query OK, 1 row affected (0.00 sec)

admin proxysql2 ((none))>LOAD MYSQL USERS TO RUNTIME;
Query OK, 0 rows affected (0.00 sec)

That change was propagated to node1:

[root@proxysql1 ~]# tail /var/lib/proxysql/proxysql.log
...
2018-05-10 12:12:36 [INFO] Cluster: detected a peer 10.138.244.108:6032 with mysql_users version 3, epoch 1525954343, diff_check 4. Own version: 2, epoch: 1525951886. Proceeding with remote sync
2018-05-10 12:12:36 [INFO] Cluster: detected peer 10.138.244.108:6032 with mysql_users version 3, epoch 1525954343
2018-05-10 12:12:36 [INFO] Cluster: Fetching MySQL Users from peer 10.138.244.108:6032 started
2018-05-10 12:12:36 [INFO] Cluster: Fetching MySQL Users from peer 10.138.244.108:6032 completed
2018-05-10 12:12:36 [INFO] Cluster: Loading to runtime MySQL Users from peer 10.138.244.108:6032
2018-05-10 12:12:36 [INFO] Cluster: Saving to disk MySQL Query Rules from peer 10.138.244.108:6032
...

We can still see that node3 has been out of sync for about 25 minutes:

admin proxysql2 ((none))>SELECT hostname, checksum, FROM_UNIXTIME(changed_at) changed_at, FROM_UNIXTIME(updated_at) updated_at FROM stats_proxysql_servers_checksums WHERE name='mysql_users' ORDER BY hostname;
+----------------+--------------------+---------------------+---------------------+
| hostname       | checksum           | changed_at          | updated_at          |
+----------------+--------------------+---------------------+---------------------+
| 10.138.180.183 | 0x3D819A34C06EF4EA | 2018-05-10 11:19:39 | 2018-05-10 12:21:35 |
| 10.138.244.108 | 0x3D819A34C06EF4EA | 2018-05-10 11:19:38 | 2018-05-10 12:21:35 |
| 10.138.244.244 | 0x3D819A34C06EF4EA | 2018-05-10 11:19:39 | 2018-05-10 12:21:35 |
+----------------+--------------------+---------------------+---------------------+
3 rows in set (0.00 sec)

Let's start node3 and check whether the sync works. Node3 should connect to the other nodes and fetch the latest changes.

[root@proxysql3 ~]# tail /var/lib/proxysql/proxysql.log
...
2018-05-10 12:30:02 [INFO] Cluster: detected a peer 10.138.244.108:6032 with mysql_users version 3, epoch 1525954343, diff_check 3. Own version: 1, epoch: 1525955402. Proceeding with remote sync
2018-05-10 12:30:02 [INFO] Cluster: detected a peer 10.138.180.183:6032 with mysql_users version 3, epoch 1525954356, diff_check 3. Own version: 1, epoch: 1525955402. Proceeding with remote sync
…
2018-05-10 12:30:03 [INFO] Cluster: detected peer 10.138.180.183:6032 with mysql_users version 3, epoch 1525954356
2018-05-10 12:30:03 [INFO] Cluster: Fetching MySQL Users from peer 10.138.180.183:6032 started
2018-05-10 12:30:03 [INFO] Cluster: Fetching MySQL Users from peer 10.138.180.183:6032 completed
2018-05-10 12:30:03 [INFO] Cluster: Loading to runtime MySQL Users from peer 10.138.180.183:6032
2018-05-10 12:30:03 [INFO] Cluster: Saving to disk MySQL Query Rules from peer 10.138.180.183:6032

Looking at the status from the checksum table, we can see node3 is now up to date.

admin proxysql2 ((none))>SELECT hostname, checksum, FROM_UNIXTIME(changed_at) changed_at, FROM_UNIXTIME(updated_at) updated_at FROM stats_proxysql_servers_checksums WHERE name='mysql_users' ORDER BY hostname;
+----------------+--------------------+---------------------+---------------------+
| hostname       | checksum           | changed_at          | updated_at          |
+----------------+--------------------+---------------------+---------------------+
| 10.138.180.183 | 0x3D819A34C06EF4EA | 2018-05-10 11:19:39 | 2018-05-10 12:21:35 |
| 10.138.244.108 | 0x3D819A34C06EF4EA | 2018-05-10 11:19:38 | 2018-05-10 12:21:35 |
| 10.138.244.244 | 0x3D819A34C06EF4EA | 2018-05-10 11:19:39 | 2018-05-10 12:21:35 |
+----------------+--------------------+---------------------+---------------------+
3 rows in set (0.00 sec)

admin proxysql2 ((none))>SELECT hostname, checksum, FROM_UNIXTIME(changed_at) changed_at, FROM_UNIXTIME(updated_at) updated_at FROM stats_proxysql_servers_checksums WHERE name='mysql_users' ORDER BY hostname;
+----------------+--------------------+---------------------+---------------------+
| hostname       | checksum           | changed_at          | updated_at          |
+----------------+--------------------+---------------------+---------------------+
| 10.138.180.183 | 0x3928F574AFFF4C65 | 2018-05-10 12:12:24 | 2018-05-10 12:31:58 |
| 10.138.244.108 | 0x3928F574AFFF4C65 | 2018-05-10 12:12:23 | 2018-05-10 12:31:58 |
| 10.138.244.244 | 0x3928F574AFFF4C65 | 2018-05-10 12:30:19 | 2018-05-10 12:31:58 |
+----------------+--------------------+---------------------+---------------------+
3 rows in set (0.00 sec)

Now we have all three ProxySQL nodes up to date. This example didn't add any MySQL servers, hostgroups, etc., because the functionality is the same. This post is intended as an introduction to the new feature and to how you can create and test a ProxySQL cluster.

Just remember that this is still an experimental feature and is subject to change with newer versions of ProxySQL.

Summary

This feature is really useful if you run more than one ProxySQL instance for the same application. It is easy for a single person to configure and maintain, and it is easy to create and attach new nodes.

Hope you find this post helpful!

References

http://www.proxysql.com/blog/proxysql-cluster
http://www.proxysql.com/blog/proxysql-cluster-part2
http://www.proxysql.com/blog/proxysql-cluster-part3-mysql-servers
https://github.com/sysown/proxysql/wiki/ProxySQL-Cluster

The post ProxySQL Experimental Feature: Native ProxySQL Clustering appeared first on Percona Database Performance Blog.

InnoDB Failed to drop table causing crash.

Latest Forum Posts - June 11, 2018 - 3:18am
Hi,
I have recently hit an issue with our Percona instance and am wondering if it's an error anyone has seen before or could assist with. We are running version 5.7.21-20 and received the following error when a concrete5 job tried to perform some housekeeping on the database:

2018-06-08T00:01:22.497336Z 9 [Note] InnoDB: MySQL is trying to drop table `c5`.`FTS_0000000000000a42_00000000000013ad_INDEX_1` though there are still open handles to it. Adding the table to the background drop queue.
2018-06-08T00:01:22.993845Z 9 [Note] InnoDB: MySQL is trying to drop table `c5`.`FTS_0000000000000a42_00000000000013bb_INDEX_1` though there are still open handles to it. Adding the table to the background drop queue.
2018-06-08 00:01:23 0x7f81d1ffb700 InnoDB: Assertion failure in thread 140195550705408 in file pars0pars.cc line 822
InnoDB: Failing assertion: sym_node->table != NULL
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.7/...-recovery.html
InnoDB: about forcing recovery.
00:01:23 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
Attempting to collect some information that could help diagnose the problem.
As this is a crash and something is definitely wrong, the information
collection process might fail.
Please help us make Percona XtraDB Cluster better by reporting any
bugs at https://jira.percona.com/projects/PXC/issues

key_buffer_size=8388608
read_buffer_size=131072
max_used_connections=138
max_threads=513
thread_count=10
connection_count=1
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 212322 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x40000
/usr/sbin/mysqld(my_print_stacktrace+0x3b)[0xf2e19b]
/usr/sbin/mysqld(handle_fatal_signal+0x471)[0x7a7171]
/lib64/libpthread.so.0(+0xf680)[0x7f821468e680]
/lib64/libc.so.6(gsignal+0x37)[0x7f8212790207]
/lib64/libc.so.6(abort+0x148)[0x7f82127918f8]
/usr/sbin/mysqld[0x76fec6]
/usr/sbin/mysqld(_Z21pars_insert_statementP10sym_node_tPvP10sel_node_t+0x291)[0x10e6291]
/usr/sbin/mysqld(_Z7yyparsev+0x1e09)[0x1316aa9]
/usr/sbin/mysqld(_Z8pars_sqlP11pars_info_tPKc+0x9e)[0x10e779e]
/usr/sbin/mysqld(_Z13fts_parse_sqlP11fts_table_tP11pars_info_tPKc+0x18a)[0x12f700a]
/usr/sbin/mysqld(_Z14fts_write_nodeP5trx_tPP10que_fork_tP11fts_table_tP12fts_string_tP10fts_node_t+0x24a)[0x12d048a]
/usr/sbin/mysqld[0x12d0ae0]
/usr/sbin/mysqld(_Z14fts_sync_tableP12dict_table_tbbb+0x321)[0x12d6591]
/usr/sbin/mysqld(_Z23fts_optimize_sync_tablem+0x45)[0x12e96c5]
/usr/sbin/mysqld(_Z19fts_optimize_threadPv+0x39a)[0x12e9a8a]
/lib64/libpthread.so.0(+0x7dd5)[0x7f8214686dd5]
/lib64/libc.so.6(clone+0x6d)[0x7f8212858b3d]
You may download the Percona XtraDB Cluster operations manual by visiting
http://www.percona.com/software/percona-xtradb-cluster/. You may find information
in the manual which will help you identify the cause of the crash.
Log of wsrep recovery (--wsrep-recover):

I have had a search around on the web but can't seem to find anything that points me in the right direction. One thing to note is that prior to the housekeeping job running, the database was restored from another environment (the database was dropped and then recreated); I was wondering if this is the root cause of the issue.

Any assistance gratefully received.

Thanks in advance
Scott
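A first diagnostic step might be to look for leftover InnoDB full-text auxiliary tables, since the assertion fires while InnoDB is syncing an FTS index and the log names two FTS_... tables in the c5 schema. A read-only sketch ('c5' is taken from the log above):

-- information_schema.INNODB_SYS_TABLES lists the InnoDB dictionary in
-- db/table form; FTS auxiliary tables show up with an FTS_ prefix.
SELECT name FROM information_schema.INNODB_SYS_TABLES WHERE name LIKE 'c5/FTS\_%';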

pmm-server not collecting mysql metrics from client

Latest Forum Posts - June 10, 2018 - 11:44pm
Hello,

I have installed pmm-server on RHEL 7.4 through Docker, and pmm-client on two RHEL 6.6 servers.
I am able to collect OS metrics from the clients, but not MySQL metrics.
Telnet works from the pmm-server to the clients and from the clients to the pmm-server.

pmm-admin list output
01:11:20 # pmm-admin list
pmm-admin 1.11.0

PMM Server | [server_ip]
Client Name | [client]
Client Address | [client_ip]
Service Manager | linux-upstart

-------------- ------------ ----------- -------- ----------------------------- --------------------------------------------------------------------------------------
SERVICE TYPE NAME LOCAL PORT RUNNING DATA SOURCE OPTIONS
-------------- ------------ ----------- -------- ----------------------------- --------------------------------------------------------------------------------------
mysql:queries [server] - YES root:***@tcp(127.0.0.1:3306) query_source=slowlog, query_examples=true, slow_log_rotation=true, retain_slow_logs=1
linux:metrics [server] 42000 YES -
mysql:metrics [server] 42002 YES root:***@tcp(127.0.0.1:3306) tablestats=OFF

pmm-admin check-network output

01:14:24 # pmm-admin check-network
PMM Network Status

Server Address | [server]:80
Client Address | [server]

* System Time
PMM Server | 2018-06-11 06:16:23 +0000 GMT
PMM Client | 2018-06-11 01:16:29 -0500 CDT
PMM Client to PMM Server Time Drift | OK

* Connection: Client --> Server
-------------------- -------
SERVER SERVICE STATUS
-------------------- -------
Consul API OK
Prometheus API OK
Query Analytics API OK

Connection duration | 712.884µs
Request duration | 6.964205ms
Full round trip | 7.677089ms


* Connection: Client <-- Server
-------------- ------------ ----------------- ------- ---------- ---------
SERVICE TYPE NAME REMOTE ENDPOINT STATUS HTTPS/TLS PASSWORD
-------------- ------------ ----------------- ------- ---------- ---------
linux:metrics [server] [server]:42000 OK YES -
mysql:metrics [server] [server]:42002 DOWN YES -

When an endpoint is down it may indicate that the corresponding service is stopped (run 'pmm-admin list' to verify).
If it's running, check out the logs /var/log/pmm-*.log

When all endpoints are down but 'pmm-admin list' shows they are up and no errors in the logs,
check the firewall settings whether this system allows incoming connections from server to address:port in question.

Also you can check the endpoint status by the URL: http://[server]/prometheus/targets


output of /var/log/pmm-mysql-metrics-42002.log

time="2018-06-08T11:56:02-05:00" level=info msg="Starting mysqld_exporter (version=, branch=, revision=)" source="mysqld_exporter.go:292"
time="2018-06-08T11:56:02-05:00" level=info msg="Build context (go=go1.9.4, user=, date=)" source="mysqld_exporter.go:293"
time="2018-06-08T11:56:02-05:00" level=info msg="HTTPS/TLS is enabled" source="mysqld_exporter.go:362"
time="2018-06-08T11:56:02-05:00" level=info msg="Enabled High Resolution scrapers:" source="mysqld_exporter.go:376"
time="2018-06-08T11:56:02-05:00" level=info msg=" --collect.info_schema.innodb_metrics" source="mysqld_exporter.go:378"
time="2018-06-08T11:56:02-05:00" level=info msg=" --collect.global_status" source="mysqld_exporter.go:378"
time="2018-06-08T11:56:02-05:00" level=info msg="Enabled Medium Resolution scrapers:" source="mysqld_exporter.go:382"
time="2018-06-08T11:56:02-05:00" level=info msg=" --collect.info_schema.processlist" source="mysqld_exporter.go:384"
time="2018-06-08T11:56:02-05:00" level=info msg=" --collect.slave_status" source="mysqld_exporter.go:384"
time="2018-06-08T11:56:02-05:00" level=info msg=" --collect.perf_schema.eventswaits" source="mysqld_exporter.go:384"
time="2018-06-08T11:56:02-05:00" level=info msg=" --collect.perf_schema.file_events" source="mysqld_exporter.go:384"
time="2018-06-08T11:56:02-05:00" level=info msg=" --collect.info_schema.query_response_time" source="mysqld_exporter.go:384"
time="2018-06-08T11:56:02-05:00" level=info msg="Enabled Low Resolution scrapers:" source="mysqld_exporter.go:388"
time="2018-06-08T11:56:02-05:00" level=info msg=" --collect.global_variables" source="mysqld_exporter.go:390"
time="2018-06-08T11:56:02-05:00" level=info msg=" --collect.binlog_size" source="mysqld_exporter.go:390"
time="2018-06-08T11:56:02-05:00" level=info msg=" --collect.info_schema.userstats" source="mysqld_exporter.go:390"
time="2018-06-08T11:56:02-05:00" level=info msg="Listening on [server]:42002" source="mysqld_exporter.go:399"
time="2018-06-08T11:56:33-05:00" level=info msg="Starting mysqld_exporter (version=, branch=, revision=)" source="mysqld_exporter.go:292"
time="2018-06-08T11:56:33-05:00" level=info msg="Build context (go=go1.9.4, user=, date=)" source="mysqld_exporter.go:293"
time="2018-06-08T11:56:33-05:00" level=info msg="HTTPS/TLS is enabled" source="mysqld_exporter.go:362"
time="2018-06-08T11:56:33-05:00" level=info msg="Enabled High Resolution scrapers:" source="mysqld_exporter.go:376"
time="2018-06-08T11:56:33-05:00" level=info msg=" --collect.info_schema.innodb_metrics" source="mysqld_exporter.go:378"
time="2018-06-08T11:56:33-05:00" level=info msg=" --collect.global_status" source="mysqld_exporter.go:378"
time="2018-06-08T11:56:33-05:00" level=info msg="Enabled Medium Resolution scrapers:" source="mysqld_exporter.go:382"
time="2018-06-08T11:56:33-05:00" level=info msg=" --collect.info_schema.processlist" source="mysqld_exporter.go:384"
time="2018-06-08T11:56:33-05:00" level=info msg=" --collect.slave_status" source="mysqld_exporter.go:384"
time="2018-06-08T11:56:33-05:00" level=info msg=" --collect.perf_schema.eventswaits" source="mysqld_exporter.go:384"
time="2018-06-08T11:56:33-05:00" level=info msg=" --collect.perf_schema.file_events" source="mysqld_exporter.go:384"
time="2018-06-08T11:56:33-05:00" level=info msg=" --collect.info_schema.query_response_time" source="mysqld_exporter.go:384"
time="2018-06-08T11:56:33-05:00" level=info msg="Enabled Low Resolution scrapers:" source="mysqld_exporter.go:388"
time="2018-06-08T11:56:33-05:00" level=info msg=" --collect.global_variables" source="mysqld_exporter.go:390"
time="2018-06-08T11:56:33-05:00" level=info msg=" --collect.binlog_size" source="mysqld_exporter.go:390"
time="2018-06-08T11:56:33-05:00" level=info msg=" --collect.info_schema.userstats" source="mysqld_exporter.go:390"
time="2018-06-08T11:56:33-05:00" level=info msg="Listening on [server]:42002" source="mysqld_exporter.go:399"
2018/06/08 11:58:19 http: TLS handshake error from [server]:52196: tls: first record does not look like a TLS handshake
2018/06/08 12:20:33 http: Accept error: accept tcp [server]:42002: accept4: too many open files; retrying in 5ms
2018/06/08 12:20:33 http: Accept error: accept tcp [server]:42002: accept4: too many open files; retrying in 10ms
2018/06/08 12:20:33 http: Accept error: accept tcp [server]:42002: accept4: too many open files; retrying in 20ms
2018/06/08 12:20:33 http: Accept error: accept tcp [server]:42002: accept4: too many open files; retrying in 40ms
2018/06/08 12:20:33 http: Accept error: accept tcp [server]:42002: accept4: too many open files; retrying in 80ms
2018/06/08 12:20:33 http: Accept error: accept tcp [server]:42002: accept4: too many open files; retrying in 160ms
2018/06/08 12:20:33 http: Accept error: accept tcp [server]:42002: accept4: too many open files; retrying in 320ms

In the Prometheus dashboard, the status for mysql metrics is DOWN.
Telnet works fine and OS metrics are collected; only mysql metrics gives this error.

Please help me solve this.
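The repeated "accept4: too many open files" lines suggest mysqld_exporter is hitting its file-descriptor limit rather than a network problem. A sketch of how one might verify and raise it (the 65536 value and the upstart job filename are assumptions; the service manager above is linux-upstart):

# Check the exporter's current limit:
cat /proc/$(pgrep -f mysqld_exporter | head -1)/limits | grep 'open files'
# For an upstart-managed service, a "limit nofile" stanza in its job file
# (hypothetically /etc/init/pmm-mysql-metrics-42002.conf) raises it:
#   limit nofile 65536 65536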



crash of 2 nodes

Latest Forum Posts - June 10, 2018 - 3:45pm
Hello,

We are using a Percona cluster with 3 nodes, and today 2 nodes went down at the same time.

In the log I only see that there is an active transaction going into ROLLBACK, followed by a report about the crash.

I have attached the log.

Any ideas?

BR

Laurent



pt-online-schema-change error: Error creating new table ... duplicate key

Latest Forum Posts - June 6, 2018 - 12:14pm
When using pt-online-schema-change 3.0.10 on a table that has previously been altered using the same tool, we get the "errno: 121 Duplicate key on write or update" error.
Googling and your bug tracker led me to https://jira.percona.com/browse/PT-394, but that issue is marked as fixed while it appears it still exists.

I found an online blog-post detailing the issue and the fix, but I'm very uncomfortable with patching the software myself.

Is there any progress or news about the issue, or do I need to assume the tool works as intended and we need to find another workaround?
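For context, errno 121 on table creation generally means a constraint-name collision: InnoDB foreign key names must be unique per database, and pt-online-schema-change renames foreign keys by prefixing an underscore, which can collide on a second run against the same table. A diagnostic sketch ('your_db' is a placeholder):

-- Look for underscore-prefixed foreign key names left over from earlier runs:
SELECT constraint_name, table_name
  FROM information_schema.TABLE_CONSTRAINTS
 WHERE constraint_schema = 'your_db'
   AND constraint_type = 'FOREIGN KEY'
   AND constraint_name LIKE '\_%';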

Deadlock with "single-write" node!

Latest Forum Posts - June 6, 2018 - 10:31am

Cluster: 3 nodes on 1 server (no LAN/WAN lag)
OS: Debian Stretch
Mysql version: 5.7.21-20 (xtradb-cluster)
Average qps: 50q/s


Using this script for testing cluster(php7.0 + pdo plugin):
https://pastebin.com/05nvKtb9

It creates a table and then issues random INSERT, UPDATE, and DELETE queries:

Code:
CREATE TABLE IF NOT EXISTS test (
  id int(10) UNSIGNED NOT NULL AUTO_INCREMENT,
  field_int int(10) NOT NULL,
  field_float float NOT NULL,
  field_varchar varchar(100) NOT NULL,
  field_text longtext NOT NULL,
  field_datetime datetime NOT NULL,
  PRIMARY KEY (id)
) ENGINE = INNODB CHARACTER SET utf8 COLLATE utf8_unicode_ci;

When running one copy of the script against a single MySQL, Percona Server, or MariaDB instance - all is OK.
When running multiple copies of the script in parallel against a single MySQL, Percona Server, or MariaDB instance - all is OK.

When running one copy of the script against a single PXC node - all is OK.
When running multiple copies of the script in parallel against PXC, connecting to different nodes - deadlocks on INSERT or UPDATE (no problem, we can use "single-node write mode", but...).
When running multiple copies of the script against PXC, all connecting to one node(!) - deadlocks on INSERT or UPDATE.


Can't PXC even handle multiple parallel write queries on a single node?


Tested with different options:
wsrep_retry_autocommit - from 0 to 10
wsrep_sync_wait - from 0 to 7 (7 "helps", but speed is greatly reduced, so I think it just makes the deadlock harder to reproduce)

Cluster 3 nodes but wsrep_cluster_size = 1

Latest Forum Posts - June 5, 2018 - 7:43am
Dear community,

Over the past year I did maybe two installs of Percona, but since May I have not been able to perform an installation that works.
I have already tried everything I could, but I'm still blocked.

Maybe some of you can help me?

I have followed the official doc

Here is what I have and what I tried:
3 Ubuntu 16.04 freshly installed
SQL-NODE01 192.168.1.101/24
SQL-NODE02 192.168.1.102/24
SQL-NODE03 192.168.1.103/24
All servers are able to communicate with each other.

AppArmor service stopped (also tried with the service running, or removed entirely, with the same result).
Code:
iptables --append INPUT --in-interface ens18 --protocol tcp --match tcp --dport 3306 --source 192.168.1.0/24 --jump ACCEPT
iptables --append INPUT --in-interface ens18 --protocol tcp --match tcp --dport 4567 --source 192.168.1.0/24 --jump ACCEPT
iptables --append INPUT --in-interface ens18 --protocol tcp --match tcp --dport 4568 --source 192.168.1.0/24 --jump ACCEPT
iptables --append INPUT --in-interface ens18 --protocol tcp --match tcp --dport 4444 --source 192.168.1.0/24 --jump ACCEPT
iptables --append INPUT --in-interface ens18 --protocol udp --match udp --dport 4567 --source 192.168.1.0/24 --jump ACCEPT

Repo configured as described here: https://www.percona.com/doc/percona-...fig/index.html
Right after "apt-get install percona-xtradb-cluster-57"

The installation goes well; I stop the mysql service on the 3 nodes with "sudo service mysql stop".

Here is my my.cnf for the NODE01:

Code:
[mysqld]
wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_cluster_name=db-cluster
wsrep_cluster_address=gcomm://192.168.1.101,192.168.1.102,192.168.1.103
wsrep_node_name=SQL-NODE01
wsrep_node_address=192.168.1.101
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=sstuser:passw0rd
pxc_strict_mode=ENFORCING
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
#
# The Percona XtraDB Cluster 5.7 configuration file.
#
# * IMPORTANT: Additional settings that can override those from this file!
#   The files must end with '.cnf', otherwise they'll be ignored.
#   Please make any edits and changes to the appropriate sectional files
#   included below.
#
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/percona-xtradb-cluster.conf.d/

Time to bootstrap the first node: "/etc/init.d/mysql bootstrap-pxc"
Everything seems fine except for the PID error in the syslog:

Code:
Jun 5 15:33:03 SQL-NODE01 systemd[1]: Stopping LSB: AppArmor initialization...
Jun 5 15:33:03 SQL-NODE01 apparmor[2107]: * Clearing AppArmor profiles cache
Jun 5 15:33:03 SQL-NODE01 apparmor[2107]: ...done.
Jun 5 15:33:03 SQL-NODE01 apparmor[2107]: All profile caches have been cleared, but no profiles have been unloaded.
Jun 5 15:33:03 SQL-NODE01 apparmor[2107]: Unloading profiles will leave already running processes permanently
Jun 5 15:33:03 SQL-NODE01 apparmor[2107]: unconfined, which can lead to unexpected situations.
Jun 5 15:33:03 SQL-NODE01 apparmor[2107]: To set a process to complain mode, use the command line tool
Jun 5 15:33:03 SQL-NODE01 apparmor[2107]: 'aa-complain'. To really tear down all profiles, run the init script
Jun 5 15:33:03 SQL-NODE01 apparmor[2107]: with the 'teardown' option."
Jun 5 15:33:03 SQL-NODE01 systemd[1]: Stopped LSB: AppArmor initialization.
Jun 5 15:37:04 SQL-NODE01 systemd[1]: Starting Cleanup of Temporary Directories...
Jun 5 15:37:04 SQL-NODE01 systemd-tmpfiles[3052]: [/usr/lib/tmpfiles.d/var.conf:14] Duplicate line for path "/var/log", ignoring.
Jun 5 15:37:10 SQL-NODE01 systemd[1]: Started Cleanup of Temporary Directories.
Jun 5 15:38:54 SQL-NODE01 systemd[1]: Reloading.
Jun 5 15:38:54 SQL-NODE01 systemd[1]: Started ACPI event daemon.
Jun 5 15:42:37 SQL-NODE01 systemd[1]: Reloading.
Jun 5 15:42:37 SQL-NODE01 systemd[1]: Started ACPI event daemon.
Jun 5 15:42:37 SQL-NODE01 systemd[1]: Reloading.
Jun 5 15:42:37 SQL-NODE01 systemd[1]: Started ACPI event daemon.
Jun 5 15:42:37 SQL-NODE01 systemd[1]: Starting LSB: Start and stop the mysql (Percona XtraDB Cluster) daemon...
Jun 5 15:42:37 SQL-NODE01 mysql[4312]: * Starting MySQL (Percona XtraDB Cluster) database server mysqld
Jun 5 15:42:37 SQL-NODE01 /etc/init.d/mysql[4357]: MySQL PID not found, pid_file detected/guessed: /var/run/mysqld/mysqld.pid
Jun 5 15:42:43 SQL-NODE01 mysql[4312]: ...done.
Jun 5 15:42:43 SQL-NODE01 systemd[1]: Started LSB: Start and stop the mysql (Percona XtraDB Cluster) daemon.
Jun 5 15:42:50 SQL-NODE01 systemd[1]: Reloading.
Jun 5 15:42:50 SQL-NODE01 systemd[1]: Started ACPI event daemon.
Jun 5 15:49:37 SQL-NODE01 systemd[1]: Stopping LSB: Start and stop the mysql (Percona XtraDB Cluster) daemon...
Jun 5 15:49:37 SQL-NODE01 mysql[4859]: * Stopping MySQL (Percona XtraDB Cluster) mysqld
Jun 5 15:49:53 SQL-NODE01 /etc/init.d/mysql[4987]: MySQL PID not found, pid_file detected/guessed: /var/run/mysqld/mysqld.pid
Jun 5 15:49:53 SQL-NODE01 /etc/init.d/mysql[4991]: MySQL PID not found, pid_file detected/guessed: /var/run/mysqld/mysqld.pid
Jun 5 15:49:53 SQL-NODE01 mysql[4859]: ...done.
Jun 5 15:49:53 SQL-NODE01 systemd[1]: Stopped LSB: Start and stop the mysql (Percona XtraDB Cluster) daemon.
Jun 5 15:54:37 SQL-NODE01 /etc/init.d/mysql[5047]: ERROR: The partition with /var/lib/mysql is too full!
Jun 5 15:55:00 SQL-NODE01 /etc/init.d/mysql[5109]: MySQL PID not found, pid_file detected/guessed: /var/run/mysqld/mysqld.pid

Here is the result of "show status like 'wsrep%';" (the first lines already have some values because I took this copy after trying to add the second node):
+----------------------------------+------------------+
| Variable_name | Value |
+----------------------------------+------------------+
| wsrep_local_state_uuid | 508226c7-68c6-11e8-b4e3-2faae9ed4ea1 |
| wsrep_protocol_version | 8 |
| wsrep_last_applied | 3 |
| wsrep_last_committed | 3 |
| wsrep_replicated | 3 |
| wsrep_replicated_bytes | 728 |
| wsrep_repl_keys | 3 |
| wsrep_repl_keys_bytes | 96 |
| wsrep_repl_data_bytes | 425 |
| wsrep_repl_other_bytes | 0 |
| wsrep_received | 2 |
.....
| wsrep_ist_receive_seqno_current | 0 |
| wsrep_ist_receive_seqno_end | 0 |
| wsrep_incoming_addresses | 192.168.1.101:3306 |
| wsrep_desync_count | 0 |
| wsrep_evs_delayed | |
| wsrep_evs_evict_list | |
| wsrep_evs_repl_latency | 0/0/0/0/0 |
| wsrep_evs_state | OPERATIONAL |
| wsrep_gcomm_uuid | 0a96d502-68c8-11e8-87ae-12c3077b44a0 |
| wsrep_cluster_conf_id | 1 |
| wsrep_cluster_size | 1 |
| wsrep_cluster_state_uuid | 508226c7-68c6-11e8-b4e3-2faae9ed4ea1 |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| wsrep_local_bf_aborts | 0 |
| wsrep_local_index | 0 |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy <info@codership.com> |
| wsrep_provider_version | 3.26(rac090bc) |
| wsrep_ready | ON |
+----------------------------------+--------------------------------------+
68 rows in set (0.01 sec)

So for the moment everything seems to be fine. I'll create the SST user:
mysql@pxc1> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 'passw0rd';
mysql@pxc1> GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
mysql@pxc1> FLUSH PRIVILEGES;

Time to bring up another node via "/etc/init.d/mysql start".
The service starts properly, and here is what I got:
mysql> show status like 'wsrep%';
+----------------------------------+---------------------+
| Variable_name | Value |
+----------------------------------+---------------------+
| wsrep_local_state_uuid | 3bb3744c-68c6-11e8-8782-e32c0e432b82 |
| wsrep_protocol_version | 8 |
| wsrep_last_applied | 0 |
| wsrep_last_committed | 0 |
| wsrep_replicated | 0 |
| wsrep_replicated_bytes | 0 |
| wsrep_repl_keys | 0 |
| wsrep_repl_keys_bytes | 0 |
| wsrep_repl_data_bytes | 0 |
| wsrep_repl_other_bytes | 0 |
| wsrep_received | 2 |
| wsrep_received_bytes | 155 |
....
| wsrep_ist_receive_seqno_current | 0 |
| wsrep_ist_receive_seqno_end | 0 |
| wsrep_incoming_addresses | 192.168.1.102:3306 |
| wsrep_desync_count | 0 |
| wsrep_evs_delayed | |
| wsrep_evs_evict_list | |
| wsrep_evs_repl_latency | 4.391e-06/9.1114e-06/1.9822e-05/5.63194e-06/5 |
| wsrep_evs_state | OPERATIONAL |
| wsrep_gcomm_uuid | 440afb89-68c8-11e8-b82c-976d9c2e0aa3 |
| wsrep_cluster_conf_id | 1 |
| wsrep_cluster_size | 1 |
| wsrep_cluster_state_uuid | 3bb3744c-68c6-11e8-8782-e32c0e432b82 |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| wsrep_local_bf_aborts | 0 |
| wsrep_local_index | 0 |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy <info@codership.com>|
| wsrep_provider_version | 3.26(rac090bc) |
| wsrep_ready | ON |
+----------------------------------+-----------------------------------------------+
68 rows in set (0.01 sec)

Has someone already encountered this error? Did I do something wrong?
I'll wait for your answers.
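One tell-tale in the output above: the two nodes report different wsrep_local_state_uuid/wsrep_cluster_state_uuid values, which means each node has formed its own one-node cluster rather than the second node joining the first. The "partition with /var/lib/mysql is too full" error in the syslog is also worth resolving first, since a failed state transfer will prevent a clean join. A sketch of sanity checks to run on the second node after it starts:

-- The node must point at the existing cluster, not at an empty gcomm:// list:
SHOW VARIABLES LIKE 'wsrep_cluster_address';   -- expect gcomm://192.168.1.101,...
-- After a successful join these must match node1's values:
SHOW STATUS LIKE 'wsrep_cluster_state_uuid';
SHOW STATUS LIKE 'wsrep_incoming_addresses';   -- should list both nodes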

Does pmm-client work on Ubuntu 18.04?

Latest Forum Posts - June 5, 2018 - 2:04am
Hello.
Could you please confirm that pmm-client has been tested and works fine on Ubuntu 18.04?
Thank you.

MySQL master eats up swap space and doesn't release it ... why?

Latest Forum Posts - June 4, 2018 - 2:35pm
I am running a MySQL 5.5 master on a dedicated master host.
It has 26 GB of swap space, and an alert now shows it is 93% used.
It was 24 GB a week ago; I added a 2 GB swap file, and now that is eaten up as well.
swapoff/swapon doesn't work because there isn't enough free memory available.
What can I do? (Besides rebooting the master and waiting until it is all chewed up again.)
What causes this? A bug in 5.5? A memory leak?
Thanks


swapoff -a; sleep 2; swapon -a
swapoff: /dev/sda3: swapoff failed: Cannot allocate memory

| innodb_buffer_pool_size | 30064771072

cat /proc/sys/vm/swappiness
1

cat /proc/sys/vm/dirty_ratio
20

RAM_ Size HD_Size HD_Used HD_Free Used% Speed Cache HT MySQL OSbits Linux_Version Model
49408748 689G 307G 348G 47% 2.40GHz 12288 16 5.5.32 x86_64 CentOS6.9(Final) R610f

DB_Type SwapSize SwapUsed SwapFree Usage% Cpu_1m Cpu_5m Cpu15m
master 26869752 25055192 1814560 93.247 2.23 2.30 2.30

free -m
total used free shared buffers cached
Mem: 48250 47649 600 0 8 101
-/+ buffers/cache: 47539 710
Swap: 26239 24699 1540

./find-out-what-is-using-your-swap.sh
..............................................
Overall swap used: 25019700 kB
========================================
kB pid name
========================================
24816896 7087 mysqld
145968 4636 messaging_local
28228 4311 puppet
7016 3226 snmpd
2444 30865 rsyslogd
2420 7746 nscd
2308 7789 nslcd
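Two things worth checking (a sketch, not a diagnosis): with innodb_buffer_pool_size at 28 GB on a 48 GB host, per-connection buffers across 138 connections can push the working set past RAM, and on multi-socket machines NUMA imbalance is a classic cause of mysqld swapping even with swappiness=1:

# Heavily unbalanced free memory across NUMA nodes is a red flag:
numactl --hardware
# A common mitigation is to start mysqld with interleaved allocation
# (illustrative invocation; usually wired into mysqld_safe or the init script):
numactl --interleave=all /usr/sbin/mysqld ...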

Use of uninitialized value in join or string at /bin/pt-online-schema-change

Latest Forum Posts - June 4, 2018 - 6:07am
Hi everybody

We are trying to apply migrations to a small forum database but sadly get confusing error messages.

Database:
Percona XtraDB - 5.7.16-10-57-log on CentOS 7.4


The table:
+------------------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+------------------+--------------+------+-----+---------+-------+
| conversationID | int(10) | NO | MUL | NULL | |
| participantID | int(10) | YES | MUL | NULL | |
| username | varchar(255) | NO | | | |
| hideConversation | tinyint(1) | NO | | 0 | |
| isInvisible | tinyint(1) | NO | | 0 | |
| lastVisitTime | int(10) | NO | | 0 | |
| joinedAt | int(10) | NO | | 0 | |
| leftAt | int(10) | NO | | 0 | |
| lastMessageID | int(10) | YES | MUL | NULL | |
+------------------+--------------+------+-----+---------+-------+


The original query:
ALTER TABLE wcf1_conversation_to_user ADD COLUMN conversationToUserID INT(10) UNSIGNED PRIMARY KEY AUTO_INCREMENT;


This should work:
PTDEBUG=1 pt-online-schema-change --execute --alter "ADD COLUMN conversationToUserID INT(10) UNSIGNED PRIMARY KEY AUTO_INCREMENT" D=board_wcf,t=wcf1_conversation_to_user


But it doesn't:
<snip lots of seemingly unimportant stuff to get under the 10000 char limit>
# /usr/bin/perl 5.016003
# Linux exampledb01.not.real 3.10.0-693.21.1.el7.x86_64 #1 SMP Wed Mar 7 19:03:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
# Arguments: _[--execute]_ _[--alter]_ _[ADD COLUMN conversationToUserID INT(10) UNSIGNED PRIMARY KEY AUTO_INCREMENT]_ _[D=board_wcf,t=wcf1_conversation_to_user]_
...
# NibbleIterator:5971 11747 EXPLAIN SELECT * FROM `board_wcf`.`wcf1_conversation_to_user` WHERE 1=1
# NibbleIterator:5973 11747 $VAR1 = {
# extra => undef,
# filtered => '100.00',
# id => '1',
# key => undef,
# key_len => undef,
# partitions => undef,
# possible_keys => undef,
# ref => undef,
# rows => '6059',
# select_type => 'SIMPLE',
# table => 'wcf1_conversation_to_user',
# type => 'ALL'
# };
...
# NibbleIterator:5521 11747 Ascend params: $VAR1 = {
# boundaries => {
# '<' => '((((? IS NOT NULL AND `participantid` IS NULL) OR (`participantid` < ?))) OR (((? IS NULL AND `participantid` IS NULL) OR (`participantid` = ?)) AND `conversationid` < ?))',
# '<=' => '((((? IS NOT NULL AND `participantid` IS NULL) OR (`participantid` < ?))) OR (((? IS NULL AND `participantid` IS NULL) OR (`participantid` = ?)) AND `conversationid` <= ?))',
# '>' => '((((? IS NULL AND `participantid` IS NOT NULL) OR (`participantid` > ?))) OR (((? IS NULL AND `participantid` IS NULL) OR (`participantid` = ?)) AND `conversationid` > ?))',
# '>=' => '((((? IS NULL AND `participantid` IS NOT NULL) OR (`participantid` > ?))) OR (((? IS NULL AND `participantid` IS NULL) OR (`participantid` = ?)) AND `conversationid` >= ?))'
# },
# cols => [
# 'conversationid',
# 'participantid',
# 'username',
# 'hideconversation',
# 'isinvisible',
# 'lastvisittime',
# 'joinedat',
# 'leftat',
# 'lastmessageid'
# ],
# index => 'participantid',
# scols => [
# 'participantid',
# 'participantid',
# 'participantid',
# 'participantid',
# 'conversationid'
# ],
# slice => [
# 1,
# 1,
# 1,
# 1,
# 0
# ],
# where => '((((? IS NULL AND `participantid` IS NOT NULL) OR (`participantid` > ?))) OR (((? IS NULL AND `participantid` IS NULL) OR (`participantid` = ?)) AND `conversationid` > ?))'
# };
#
# NibbleIterator:5538 11747 First lower boundary statement: SELECT /*!40001 SQL_NO_CACHE */ `participantid`, `participantid`, `participantid`, `participantid`, `conversationid` FROM `board_wcf`.`wcf1_conversation_to_user` FORCE INDEX(`participantid`) ORDER BY `participantid`, `conversationid` LIMIT 1 /*first lower boundary*/
# NibbleIterator:5563 11747 Last upper boundary statement: SELECT /*!40001 SQL_NO_CACHE */ `participantid`, `participantid`, `participantid`, `participantid`, `conversationid` FROM `board_wcf`.`wcf1_conversation_to_user` FORCE INDEX(`participantid`) ORDER BY `participantid` DESC,`conversationid` DESC LIMIT 1 /*last upper boundary*/
# NibbleIterator:5574 11747 Upper boundary statement: SELECT /*!40001 SQL_NO_CACHE */ `participantid`, `participantid`, `participantid`, `participantid`, `conversationid` FROM `board_wcf`.`wcf1_conversation_to_user` FORCE INDEX(`participantid`) WHERE ((((? IS NULL AND `participantid` IS NOT NULL) OR (`participantid` > ?))) OR (((? IS NULL AND `participantid` IS NULL) OR (`participantid` = ?)) AND `conversationid` >= ?)) ORDER BY `participantid`, `conversationid` LIMIT ?, 2 /*next chunk boundary*/
...
# NibbleIterator:5603 11747 Initial chunk size (LIMIT): 999
# NibbleIterator:5983 11747 Preparing statement handles
# NibbleIterator:6012 11747 First lower boundary: $VAR1 = [
# undef,
# undef,
# undef,
# undef,
# '1'
# ];
#
# NibbleIterator:6028 11747 Next lower boundary: $VAR1 = [
# undef,
# undef,
# undef,
# undef,
# '1'
# ];
#
2018-06-04T13:07:38 Copying approximately 6059 rows...
# IndexLength:6678 11747 SELECT /*!40001 SQL_NO_CACHE */ `participantid`, `conversationid` FROM `board_wcf`.`wcf1_conversation_to_user` FORCE INDEX (`participantid`) WHERE `participantid` IS NOT NULL AND `conversationid` IS NOT NULL ORDER BY `participantid`, `conversationid` LIMIT 1 /*key_len*/
# IndexLength:6645 11747 EXPLAIN SELECT /*!40001 SQL_NO_CACHE */ * FROM `board_wcf`.`wcf1_conversation_to_user` FORCE INDEX (`participantid`) WHERE `participantid` = ? AND `conversationid` >= ? /*key_len*/ params: 1 1
# IndexLength:6649 11747 Range scan: $VAR1 = {
# extra => 'Using index condition',
# filtered => '100.00',
# id => '1',
# key => 'participantID',
# key_len => '9',
# partitions => undef,
# possible_keys => 'participantID',
# ref => undef,
# rows => '8',
# select_type => 'SIMPLE',
# table => 'wcf1_conversation_to_user',
# type => 'range'
# };
#
# NibbleIterator:5657 11747 init callback returned 1
# pt_online_schema_change:11489 11747 EXPLAIN SELECT /*!40001 SQL_NO_CACHE */ `participantid`, `participantid`, `participantid`, `participantid`, `conversationid` FROM `board_wcf`.`wcf1_conversation_to_user` FORCE INDEX(`participantid`) WHERE ((((? IS NULL AND `participantid` IS NOT NULL) OR (`participantid` > ?))) OR (((? IS NULL AND `participantid` IS NULL) OR (`participantid` = ?)) AND `conversationid` >= ?)) ORDER BY `participantid`, `conversationid` LIMIT ?, 2 /*next chunk boundary*/ params: undef undef undef undef 1 999
# pt_online_schema_change:11498 11747 EXPLAIN plan: $VAR1 = {
# extra => 'Using where; Using index',
# filtered => '100.00',
# id => '1',
# key => 'participantID',
# key_len => '9',
# partitions => undef,
# possible_keys => 'participantID',
# ref => undef,
# rows => '3236',
# select_type => 'SIMPLE',
# table => 'wcf1_conversation_to_user',
# type => 'range'
# };
#
# NibbleIterator:6083 11747 next_boundaries callback returned 1
# CleanupTask:6551 11747 Calling cleanup task CODE(0x3159090)
....
2018-06-04T13:07:38 Error copying rows from `board_wcf`.`wcf1_conversation_to_user` to `board_wcf`.`_wcf1_conversation_to_user_new`: Use of uninitialized value in join or string at /bin/pt-online-schema-change line 6092.

...
# Cxn:4096 11747 Destroying cxn
# Cxn:4105 11747 DBI::db=HASH(0x3091470) Disconnecting dbh on exampledb01.not.real

--------


It seems the script stumbles over the undefined (undef) results from the boundary queries:

Query:
SELECT /*!40001 SQL_NO_CACHE */ `participantid`, `participantid`, `participantid`, `participantid`, `conversationid` FROM `board_wcf`.`wcf1_conversation_to_user` FORCE INDEX(`participantid`) ORDER BY `participantid`, `conversationid` LIMIT 1
Result:
$VAR1 = [
  undef,
  undef,
  undef,
  undef,
  '1'
];

But we don't really know what to do with this information.
We tried with Percona toolkit 2.2.10 and 3.0.10, same problem.

Does anyone have an idea what's amiss?

Thanks
Tarwin
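
One hedged reading of the log quoted above: the chunking index column appears to contain NULL values, since the very first row returned by the lower-boundary query has undef in every `participantid` slot, and line 6092 of the tool then apparently joins those undefs into a string. A quick diagnostic, reusing the table and column names from the log (a sketch, not a confirmed fix):

-- Count rows whose chunking-index column is NULL; a non-zero count would
-- explain the undef boundary values, since ascending index order returns NULLs first.
SELECT COUNT(*) AS null_participants
FROM `board_wcf`.`wcf1_conversation_to_user`
WHERE `participantid` IS NULL;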



xtrabackup prepare is very slow

Lastest Forum Posts - June 4, 2018 - 3:05am
Hi team,
I am using xtrabackup 2.4 with MySQL 5.7 on CentOS 6. Initially the prepare step was very fast, but after exporting and importing some databases it has become very slow.
Can somebody explain why it is so slow?
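
A hedged note rather than a diagnosis: prepare time is usually dominated by how much redo log has to be applied and by how much memory xtrabackup's embedded InnoDB is given (the --use-memory default is only about 100MB). Raising it is a common first step; the target directory below is a placeholder:

# Give the prepare phase more memory for applying the redo log
# (tune 2G to what the host can spare; the path is a placeholder)
xtrabackup --prepare --use-memory=2G --target-dir=/data/backups/full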

Streaming backup

Lastest Forum Posts - June 4, 2018 - 1:23am
Hi all,

I'm designing my backup solution for Percona MySQL. Currently I have a (working) setup which does a full backup with xtrabackup to a remote location. These backups are then stored for four weeks and rotated.

Now comes the part I have some trouble with: during the week I would like to stream binlogs to a remote location in order to be able to roll forward the full backups and get point-in-time recovery (PITR) capability. I say streaming because I want to minimize the gap between binlogs being sent to a remote location and a possible disaster. I'm having a hard time finding a tool which manages receiving binlogs, stores them for backup purposes, and also handles rotation and such. Am I trying to do something here that doesn't make sense?

This setup is basically the same as what pgbarman does for PostgreSQL: take full backups and then supplement them by pretending to be part of the PostgreSQL replication topology and receiving WALs.
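
One option worth evaluating (a sketch with placeholder host, user, and binlog names): since MySQL 5.6, mysqlbinlog can connect as a pseudo-replica and stream binlogs to local files continuously, much like barman does for WALs; retention and rotation would still need external scripting:

# Continuously stream raw binlog files from the server into the current directory,
# starting from the named binlog; all connection details here are placeholders.
mysqlbinlog --read-from-remote-server --host=db01.example.com \
    --user=repl --password --raw --stop-never mysql-bin.000001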

xtrabackup has messed up my MySQL installation

Lastest Forum Posts - June 4, 2018 - 12:02am
I'm posting here at the recommendation of Lorraine@Percona, after a first post at StackExchange.

Last Thursday I experimented a bit with Percona xtrabackup on a MySQL 5.5.23 server (we're stuck on this very old version; that's another discussion). I installed the Percona repo as part of the task. I uninstalled the tool before the end of the day, but didn't uninstall the repo.

On Friday I noticed that when I ran "mysql -u root -p" I got a segmentation fault!

I investigated and found (using the rpm -qa --last command) that at 5:19am Friday morning, when we were all asleep, Percona-Server-Shared was somehow installed (by itself??), and /usr/lib64/libmysqlclient.so.16.0.0 is now part of that package, as I confirmed with the rpm -qf command. I suspect that's why the plain vanilla mysql client is dumping core: it can't find the libraries it expects.

When I attempt to uninstall Percona server, I see that it cascades to remove the mysql-server package itself (!), so I can't do it.

At StackExchange I was told that this is a known bug that plagues MySQL 5.5.23, and I received the recommendation to uninstall Percona-Server-Shared using the --nodeps option and then install mysql-libs (roughly as sketched below).
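
For reference, that recommendation would look roughly like this; the exact shared-package name is an assumption, so confirm it before removing anything:

# Find the exact name of the installed Percona shared-libraries package
rpm -qa | grep -i percona-server-shared

# Remove it without cascading to mysql-server (package name assumed; use the one found above)
rpm -e --nodeps Percona-Server-shared-55

# Reinstall the stock client libraries the mysql client expects
yum install mysql-libs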

I must fix this problem ASAP; the server is a staging server (no, I didn't do it on a production server :-) ). It's running Red Hat 6.9.

Thanks for your help,

George

After upgrading to MySQL 5.7 queries on Views not using underlying indexes of tables

Lastest Forum Posts - June 2, 2018 - 7:59pm


select final.grad_year_flag ,
final.registration_id ,
final.first_name ,
final.last_name ,
final.grad_year ,
final.photo_id ,
final.photo_type ,
final.photo_image_path
from
(
select B.grad_year_flag,
B.registration_id,
B.first_name,
B.last_name,
B.grad_year,
B.photo_id,
B.photo_type,
B.photo_image_path,
@rn1 := if(@grad_flag1=grad_year_flag, @rn1+1,1) as rn1,
@grad_flag1 := grad_year_flag
from
(
select A.*,
@rn := if(@grad_flag=grad_year_flag and @reg_flag=registration_id, @rn+1,1) as rn,
@grad_flag := grad_year_flag,
@reg_flag := registration_id
from
(
select distinct
CASE WHEN rk.grad_year = NAME_CONST('i_grad_year',1975) THEN 0
ELSE rk.grad_year - NAME_CONST('i_grad_year',1975) END grad_year_flag,
rpd.photo_type,
rpd.registration_id,
rpd.first_name,
rpd.last_name,
rk.grad_year,
rpd.photo_id,
rpd.photo_image_path
from db_cache.reg_k12_grad_year rk,
db_cache.reg_photo_details rpd
where rk.registration_id=rpd.registration_id
and rpd.photo_type in(2,3)
and rk.school_id = NAME_CONST('i_commId',9183) and rk.grad_year between (NAME_CONST('i_grad_year',1975)-3) and (NAME_CONST('i_grad_year',1975)+3)
and rk.registration_id != NAME_CONST('i_regId',7490338311) order by grad_year_flag, rpd.registration_id, rpd.photo_type
) A
) B
where rn=1
ORDER BY grad_year_flag, rand()
)final
WHERE case when grad_year_flag = 0
then rn1 <= 15
else rn1 <=5
end ;

Explain plan from 5.6

+----+-------------+---------------------+-------+-------------------------------------------------------+------------------------+---------+----------------------------------------------+------+----------+-----------------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+---------------------+-------+-------------------------------------------------------+------------------------+---------+----------------------------------------------+------+----------+-----------------------------------------------------------+
| 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 10 | 100.00 | Using where |
| 2 | DERIVED | <derived3> | ref | <auto_key0> | <auto_key0> | 9 | const | 10 | 100.00 | Using temporary; Using filesort |
| 3 | DERIVED | <derived4> | ALL | NULL | NULL | NULL | NULL | 1755 | 100.00 | NULL |
| 4 | DERIVED | reg_k12_grad_year_b | range | PRIMARY,REG_K12_GRAD_YEAR_B_N1,REG_K12_GRAD_YEAR_B_N2 | REG_K12_GRAD_YEAR_B_N1 | 10 | NULL | 1755 | 100.00 | Using where; Using index; Using temporary; Using filesort |
| 4 | DERIVED | reg_photo_details_k | ref | reg_photo_details_k_n1 | reg_photo_details_k_n1 | 8 | db_cache.reg_k12_grad_year_b.registration_id | 1 | 100.00 | Using index condition; Distinct |
+----+-------------+---------------------+-------+-------------------------------------------------------+------------------------+---------+----------------------------------------------+------+----------+-----------------------------------------------------------+

Explain plan from 5.7
+----+-------------+---------------------+------------+------+---------------+-------------+---------+--------------------+----------+----------+----------------------------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+---------------------+------------+------+---------------+-------------+---------+--------------------+----------+----------+----------------------------------------------+
| 1 | PRIMARY | <derived2> | NULL | ALL | NULL | NULL | NULL | NULL | 10 | 100.00 | Using where |
| 2 | DERIVED | <derived3> | NULL | ref | <auto_key0> | <auto_key0> | 9 | const | 10 | 100.00 | Using temporary; Using filesort |
| 3 | DERIVED | <derived4> | NULL | ALL | NULL | NULL | NULL | NULL | 217730 | 100.00 | NULL |
| 4 | DERIVED | <derived5> | NULL | ref | <auto_key0> | <auto_key0> | 8 | const | 907299 | 10.00 | Using where; Using temporary; Using filesort |
| 4 | DERIVED | <derived6> | NULL | ref | <auto_key1> | <auto_key1> | 8 | rk.registration_id | 12 | 20.00 | Using where |
| 6 | DERIVED | reg_photo_details_i | NULL | ALL | NULL | NULL | NULL | NULL | 79803766 | 100.00 | NULL |
| 5 | DERIVED | reg_k12_grad_year_b | NULL | ALL | NULL | NULL | NULL | NULL | 90729926 | 100.00 | NULL |
+----+-------------+---------------------+------------+------+---------------+-------------+---------+--------------------+----------+----------+----------------------------------------------+

The same query took less than a second on 5.6 but takes 24 minutes on 5.7. I tried recreating the views on the underlying tables with ALGORITHM=MERGE, but there was no change in the explain plan. Any help would be appreciated.
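
A hedged first check rather than a definitive fix: 5.7 added the derived_merge optimizer flag (on by default) and reworked how derived tables and view references are planned, so comparing EXPLAIN output with the flag toggled each way can show whether the extra materialization steps in the 5.7 plan are related:

-- Check whether derived-table/view merging is currently enabled
SELECT @@optimizer_switch LIKE '%derived_merge=on%' AS derived_merge_on;

-- Toggle the flag for this session only, re-run EXPLAIN on the query above,
-- then repeat with 'derived_merge=on' and compare the two plans
SET SESSION optimizer_switch = 'derived_merge=off';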