
Percona Live 2017 Call for Papers Extended Deadline: November 13th

Latest MySQL Performance Blog posts - November 8, 2016 - 10:01am

We’ve extended the Percona Live Open Source Database Conference 2017 call for papers deadline until November 13th!

If you haven’t submitted already, please consider doing so. Speaking at Percona Live is a great way to build your personal and company brands, and if selected you will receive a complimentary full conference pass! For Percona Live 2017, we’re not just looking for MySQL and MongoDB topics, but also talks on other open source databases. 

The Percona Live 2017 Call for Papers is open until November 13, 2016. Do you have a MySQL, MongoDB, PostgreSQL or open source database use case to share, a skill to teach, or a big idea to discuss? We invite you to submit your speaking proposal for either breakout or tutorial sessions. This conference provides an opportunity to network with peers and technology professionals. It brings together accomplished DBAs, system architects and developers from around the world to share their knowledge and experience.

Percona Live 2017 is looking for topics for Breakout Sessions, Tutorial Sessions, and Lightning Talks:

  • Breakout Session. Make submissions detailed, and clearly indicate the topic and content of your proposal for the Conference Committee. Sessions should be either 25 or 50 minutes in length, including Q&A.
  • Tutorial Session. Make submissions detailed, and include an agenda for review by the Conference Committee. Tutorial sessions should present immediate and practical applications of in-depth knowledge of MySQL, MongoDB and open source database technologies. They should be presented at a level between a training class and a conference breakout session. Attendees are expected to have their laptops to work through detailed and potentially hands-on presentations. Tutorials will be 3 hours in length including Q&A. If you would like to submit your proposal as a full day, 6-hour tutorial, please indicate this in your submission.
  • Lightning Talks. Lightning talks are five-minute presentations focusing on one key point that will be of interest to the community. Talks can be technical, lighthearted, fun or otherwise entertaining submissions. These can include new ideas, a successful project, a cautionary story, quick tip or demonstration. This session is an opportunity for ideas to get the attention they deserve. The rules for this session are easy: five minutes and only five minutes. Use this time wisely to present the pertinent message of the subject matter and have fun doing so!

Submit your topics as soon as you can; the call for papers closes on November 13, 2016!

Percona Live Open Source Database Conference 2017: Santa Clara, CA

The Percona Live Open Source Database Conference 2017 is the premier event for the diverse and active open source database community, as well as organizations that develop and use open source database software.

The conference will feature one day of tutorials and three days of keynote talks and breakout sessions related to open source databases and software. Learn about the hottest topics, building and maintaining high-performing deployments and what top industry leaders have to say.

The Percona Live Open Source Database Conference 2017 is April 24th – 27th, at the Hyatt Regency Santa Clara & The Santa Clara Convention Center.

Register for Percona Live 2017 now! Super Saver registration lasts until Nov 13. This year’s Percona Live Europe sold out, and we’re looking to do the same for Percona Live 2017. Don’t miss your chance to get your ticket at its most affordable price. Click here to register.

Percona Live 2017 sponsorship opportunities are available now. Click here to find out how to sponsor.

Configuration File

Latest Forum Posts - November 8, 2016 - 2:07am
I want to know which my.cnf file Percona Server is using.

So ps aux | grep mysql:

mysql 17819 70.6 73.3 121906760 96747724 ? Sl 724:05 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --user=mysql --log-error=/var/log/mysql/mysql_error.log --open-files-limit=64535 --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306

It doesn't tell me the conf file.

MySQL Workbench says: Configuration File: Unknown.

How can this be possible?

thanks.
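For reference, one standard way to answer this (not from the original post): when mysqld is started without an explicit --defaults-file, as in the ps output above, it reads the default option file locations, and it can list them itself. The exact paths vary by build:

$ mysqld --verbose --help 2>/dev/null | grep -A1 "Default options"
Default options are read from the following files in the given order:
/etc/my.cnf /etc/mysql/my.cnf /usr/etc/my.cnf ~/.my.cnf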

Is there a cleaner way to store fractional seconds in update times?

Latest Forum Posts - November 7, 2016 - 5:39pm
Percona 5.6.32:

mysql> create table test0(
    -> id int(2) not null auto_increment,
    -> modified timestamp not null default current_timestamp on update current_timestamp,
    -> primary key (id)
    -> );
Query OK, 0 rows affected (0.01 sec)

mysql> show create table test0\G
*************************** 1. row ***************************
       Table: test0
Create Table: CREATE TABLE `test0` (
  `id` int(2) NOT NULL AUTO_INCREMENT,
  `modified` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
so far so good. Now:
mysql> create table test1(
    -> id int(2) not null auto_increment,
    -> modified timestamp(2) not null default current_timestamp on update current_timestamp,
    -> primary key (id)
    -> );
ERROR 1067 (42000): Invalid default value for 'modified'
What's more, I can create the table with a timestamp(2) column as long as I skip the defaults, and I can create separate triggers to update that column on insert/update, but adding the current_timestamp default just fails...
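For reference, MySQL 5.6 requires that when a TIMESTAMP column has a fractional-seconds precision, the same precision appears on every CURRENT_TIMESTAMP reference in the column definition. A sketch of the failing statement with matching precision (same table and column names as above):

mysql> create table test1(
    -> id int(2) not null auto_increment,
    -> -- the (2) must be repeated on both CURRENT_TIMESTAMP references
    -> modified timestamp(2) not null default current_timestamp(2) on update current_timestamp(2),
    -> primary key (id)
    -> );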

Updating Percona XtraDB Cluster from 5.6.24-72.2 to 5.6.32-25.17

Latest MySQL Performance Blog posts - November 7, 2016 - 11:35am

This blog describes how to upgrade Percona XtraDB Cluster in place from 5.6.24-72.2 to 5.6.32-25.17.

This very hands-on blog post is the result of questions coming in, such as “Can I perform an in-place upgrade for Percona XtraDB Cluster?” We have done these minor upgrades for Percona Managed Services customers running Percona XtraDB Cluster with lots of nodes, and I think it’s feasible to do smoothly, provided we pay special attention to some specific points I’ll call out. The main concern is that if you have a big dataset, you should avoid SST, which consumes a lot of time if a node rebuild is needed.

Make sure you have all the steps very clear, in order to avoid spending too much time when updating packages. The crucial point is the Galera API’s GCache size. If you’re executing this while part of the cluster is online and writes cannot be avoided, check first whether the current GCache configuration lets nodes avoid SST while you shut down Percona Server on each node, update its packages, and bring Percona Server back online.

A blog post written by Miguel Angel Nieto provides instructions on how to check the GCache file’s size and make sure it covers all the transactions for the time you need to take the node out. After increasing the size of the GCache, if the rejoining node finds all the missing transactions in the donor’s GCache, it goes through IST. If not, it will need to use SST.
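For reference, the GCache is sized through wsrep_provider_options, and the setting is not dynamic, so it goes in my.cnf and takes effect on the next restart. A minimal sketch (the 4G figure is only an illustration; size it to cover the write volume of your maintenance window):

#: check the currently configured provider options
mysql -e "SHOW GLOBAL VARIABLES LIKE 'wsrep_provider_options'\G"

#: /etc/my.cnf, on each node, effective after restart
[mysqld]
wsrep_provider_options="gcache.size=4G"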

You can read more about the difference between IST and SST in the Galera API documentation.

Little less talk, little more action…

At this point, we need to update the packages one cluster node at a time; the cluster needs to stay up. I’m going to use a cluster with three nodes. Node 01 is dedicated to writes, while nodes 02 and 03 are dedicated to scaling the cluster’s reads (all are running 5.6.24-72.2). Just for reference, it’s running on CentOS 6.5 and I’m going to use yum, but you can convert that to any other package manager depending on the Linux distro you’re running. This is the list of nodes and the packages we need to update:

#: servers are like below
(writes) node01::192.168.50.11:3306, Server version: 5.6.24-72.2 Percona XtraDB Cluster (GPL)
(reads)  node02::192.168.50.12:3306, Server version: 5.6.24-72.2 Percona XtraDB Cluster (GPL)
(reads)  node03::192.168.50.13:3306, Server version: 5.6.24-72.2 Percona XtraDB Cluster (GPL)

#: packages currently installed
[vagrant@node02 ~]$ sudo rpm -qa | grep Percona
Percona-XtraDB-Cluster-client-56-5.6.24-72.2.el6.x86_64
Percona-XtraDB-Cluster-server-56-5.6.24-72.2.el6.x86_64
Percona-XtraDB-Cluster-galera-3-3.15-1.rhel6.x86_64
Percona-XtraDB-Cluster-shared-56-5.6.24-72.2.el6.x86_64
Percona-XtraDB-Cluster-devel-56-5.6.24-72.2.el6.x86_64
Before updating the packages above, make sure you update the XtraBackup package if you have configured the variable wsrep_sst_method as xtrabackup-v2; this avoids the error below:

WSREP_SST: [ERROR] FATAL: The innobackupex version is 2.3.4. Needs xtrabackup-2.3.5 or higher to perform SST (20161026 20:47:15.307)
2016-10-26 20:47:15 5227 [ERROR] WSREP: Failed to read 'ready <addr>' from: wsrep_sst_xtrabackup-v2 --role 'joiner' --address '192.168.50.12' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --parent '5227'  ''
So, on all three nodes, update percona-xtrabackup to make sure we’re running the latest version:

[root@node02 vagrant]# yum update percona-xtrabackup
Loaded plugins: fastestmirror, versionlock
Determining fastest mirrors
...
--> Running transaction check
---> Package percona-xtrabackup.x86_64 0:2.3.4-1.el6 will be updated
---> Package percona-xtrabackup.x86_64 0:2.3.5-1.el6 will be an update
With that done, take nodes out of the cluster one at a time, update all the old binaries using yum update, and start mysqld back up. You don’t need to run mysql_upgrade in this case. When you start mysqld with the newer binaries in place, depending on the size of the configured GCache, it’s going to perform either an IST or an SST.

As you’re going to take the node out of rotation and out of the cluster, you don’t need to worry about configuring it as read_only. The best scenario is to do this in a maintenance window, when no one is writing data to the main node: then you won’t need to worry about SST, which matters because in most cases the dataset is big (TB++) and an SST can take hours (an overnight stream, in my experience).

Let’s take out node02 and update the packages:

#: let's take out node02 to update packages
[vagrant@node02 ~]$ sudo /etc/init.d/mysql stop
Shutting down MySQL (Percona XtraDB Cluster).... SUCCESS!
[vagrant@node02 ~]$ sudo yum update Percona-XtraDB-Cluster-client-56-5.6.24-72.2.el6.x86_64 Percona-XtraDB-Cluster-server-56-5.6.24-72.2.el6.x86_64 Percona-XtraDB-Cluster-galera-3-3.15-1.rhel6.x86_64 Percona-XtraDB-Cluster-shared-56-5.6.24-72.2.el6.x86_64 Percona-XtraDB-Cluster-devel-56-5.6.24-72.2.el6.x86_64
...
Setting up Update Process
Resolving Dependencies
--> Running transaction check
---> Package Percona-XtraDB-Cluster-client-56.x86_64 1:5.6.24-72.2.el6 will be updated
---> Package Percona-XtraDB-Cluster-client-56.x86_64 1:5.6.32-25.17.1.el6 will be an update
---> Package Percona-XtraDB-Cluster-devel-56.x86_64 1:5.6.24-72.2.el6 will be updated
---> Package Percona-XtraDB-Cluster-devel-56.x86_64 1:5.6.32-25.17.1.el6 will be an update
---> Package Percona-XtraDB-Cluster-galera-3.x86_64 0:3.15-1.rhel6 will be updated
---> Package Percona-XtraDB-Cluster-galera-3.x86_64 0:3.17-1.rhel6 will be an update
---> Package Percona-XtraDB-Cluster-server-56.x86_64 1:5.6.24-72.2.el6 will be updated
---> Package Percona-XtraDB-Cluster-server-56.x86_64 1:5.6.32-25.17.1.el6 will be an update
---> Package Percona-XtraDB-Cluster-shared-56.x86_64 1:5.6.24-72.2.el6 will be updated
---> Package Percona-XtraDB-Cluster-shared-56.x86_64 1:5.6.32-25.17.1.el6 will be an update

#: new packages in place after yum update - here, make sure you run yum clean all before yum update
[root@node02 ~]# rpm -qa | grep Percona
Percona-XtraDB-Cluster-shared-56-5.6.32-25.17.1.el6.x86_64
Percona-XtraDB-Cluster-galera-3-3.17-1.rhel6.x86_64
Percona-XtraDB-Cluster-devel-56-5.6.32-25.17.1.el6.x86_64
Percona-XtraDB-Cluster-client-56-5.6.32-25.17.1.el6.x86_64
Percona-XtraDB-Cluster-server-56-5.6.32-25.17.1.el6.x86_64
Now start node02, knowing that it’s going to join the cluster, but with updated packages:

[root@node02 vagrant]# /etc/init.d/mysql start
Starting MySQL (Percona XtraDB Cluster)...State transfer in progress, setting sleep higher .. SUCCESS!

#: here you can see that the state transfer was required due to different states from cluster and current node
#: this is gonna test the wsrep_sst_method to make sure it's working well after updating percona-xtrabackup
#: to latest version available
2016-10-26 21:51:38 3426 [Note] WSREP: State transfer required:
  Group state: 63788863-1f8c-11e6-a8cc-12f338870ac3:52613
  Local state: 63788863-1f8c-11e6-a8cc-12f338870ac3:52611
2016-10-26 21:51:38 3426 [Note] WSREP: New cluster view: global state: 63788863-1f8c-11e6-a8cc-12f338870ac3:52613, view# 2: Primary, number of nodes: 2, my index: 0, protocol version 3
2016-10-26 21:51:38 3426 [Warning] WSREP: Gap in state sequence. Need state transfer.
2016-10-26 21:51:38 3426 [Note] WSREP: Running: 'wsrep_sst_xtrabackup-v2 --role 'joiner' --address '192.168.50.12' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --parent '3426'  '' '
WSREP_SST: [INFO] Streaming with xbstream (20161026 21:51:39.023)
WSREP_SST: [INFO] Using socat as streamer (20161026 21:51:39.025)
WSREP_SST: [INFO] Evaluating timeout -s9 100 socat -u TCP-LISTEN:4444,reuseaddr stdio | xbstream -x; RC=( ${PIPESTATUS[@]} ) (20161026 21:51:39.100)
2016-10-26 21:51:39 3426 [Note] WSREP: Prepared SST request: xtrabackup-v2|192.168.50.12:4444/xtrabackup_sst//1
...
2016-10-26 21:51:39 3426 [Note] WSREP: Shifting PRIMARY -> JOINER (TO: 52613)
2016-10-26 21:51:39 3426 [Note] WSREP: Requesting state transfer: success, donor: 1
WSREP_SST: [INFO] Proceeding with SST (20161026 21:51:39.871)
WSREP_SST: [INFO] Evaluating socat -u TCP-LISTEN:4444,reuseaddr stdio | xbstream -x; RC=( ${PIPESTATUS[@]} ) (20161026 21:51:39.873)
WSREP_SST: [INFO] Cleaning the existing datadir and innodb-data/log directories (20161026 21:51:39.876)
...
WSREP_SST: [INFO] Moving the backup to /var/lib/mysql/ (20161026 21:51:55.826)
WSREP_SST: [INFO] Evaluating innobackupex --defaults-file=/etc/my.cnf  --defaults-group=mysqld --no-version-check  --datadir=/var/lib/mysql/ --move-back --force-non-empty-directories ${DATA} &>${DATA}/innobackup.move.log (20161026 21:51:55.829)
WSREP_SST: [INFO] Move successful, removing /var/lib/mysql//.sst (20161026 21:51:55.859)
...
Version: '5.6.32-78.1-56'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Percona XtraDB Cluster (GPL), Release rel78.1, Revision 979409a, WSREP version 25.17, wsrep_25.17
2016-10-26 21:51:56 3426 [Note] WSREP: 0.0 (pxc01): State transfer from 1.0 (pxc01) complete.
2016-10-26 21:51:56 3426 [Note] WSREP: Shifting JOINER -> JOINED (TO: 52613)
2016-10-26 21:51:56 3426 [Note] WSREP: Member 0.0 (pxc01) synced with group.
2016-10-26 21:51:56 3426 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 52613)
2016-10-26 21:51:56 3426 [Note] WSREP: Synchronized with group, ready for connections
2016-10-26 21:51:56 3426 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
As you can see above, node02 is back in the cluster. Additionally, it’s important to see that both the Percona Server packages and the Galera API packages were updated. When the node is up and part of the cluster, you should see a new API version in the output of a SHOW GLOBAL STATUS LIKE 'wsrep%' command:

#: node02, the one we just updated
[root@node02 mysql]# mysql -e "show global status like 'wsrep_provider_version'\G"
*************************** 1. row ***************************
Variable_name: wsrep_provider_version
        Value: 3.17(r447d194)

#: node01, not updated yet
[root@node01 mysql]# mysql -e "show global status like 'wsrep_provider_version'\G"
*************************** 1. row ***************************
Variable_name: wsrep_provider_version
        Value: 3.15(r5c765eb)

Summarizing the procedure so far, the cluster package update plan is:
  1. Take nodes out of rotation one at a time
  2. Shut down mysqld on each node in order
  3. Update the below packages (or the ones corresponding to what you’re running):

[vagrant@node02 ~]$ sudo rpm -qa | grep Percona
Percona-XtraDB-Cluster-client-56-5.6.24-72.2.el6.x86_64
Percona-XtraDB-Cluster-server-56-5.6.24-72.2.el6.x86_64
Percona-XtraDB-Cluster-galera-3-3.15-1.rhel6.x86_64
Percona-XtraDB-Cluster-shared-56-5.6.24-72.2.el6.x86_64
Percona-XtraDB-Cluster-devel-56-5.6.24-72.2.el6.x86_64

  4. Update percona-xtrabackup on all the cluster’s nodes to avoid issues (as explained above):

WSREP_SST: [ERROR] FATAL: The innobackupex version is 2.3.4. Needs xtrabackup-2.3.5 or higher to perform SST (20161026 20:47:15.307)
...
[root@node01 ~]# yum update percona-xtrabackup
...
[root@node02 ~]# xtrabackup --version
xtrabackup version 2.3.5 based on MySQL server 5.6.24 Linux (x86_64) (revision id: 45cda89)

  5. Start mysqld back up online to grab the cluster’s current state

After finishing each node’s package update, check from the main node that all nodes have rejoined the cluster. On node01, you can enter the query below to return the main status variables; it shows the current status of node01 and the cluster size:

mysql> SELECT @@HOSTNAME AS HOST, NOW() AS `DATE`, VARIABLE_NAME, VARIABLE_VALUE
    -> FROM INFORMATION_SCHEMA.GLOBAL_STATUS
    -> WHERE VARIABLE_NAME IN ('wsrep_cluster_state_uuid','wsrep_cluster_conf_id','wsrep_cluster_size','wsrep_cluster_status','wsrep_local_state_comment')\G
*************************** 1. row ***************************
          HOST: node01
          DATE: 2016-10-27 18:14:42
 VARIABLE_NAME: WSREP_LOCAL_STATE_COMMENT
VARIABLE_VALUE: Synced
*************************** 2. row ***************************
          HOST: node01
          DATE: 2016-10-27 18:14:42
 VARIABLE_NAME: WSREP_CLUSTER_CONF_ID
VARIABLE_VALUE: 10
*************************** 3. row ***************************
          HOST: node01
          DATE: 2016-10-27 18:14:42
 VARIABLE_NAME: WSREP_CLUSTER_SIZE
VARIABLE_VALUE: 3
*************************** 4. row ***************************
          HOST: node01
          DATE: 2016-10-27 18:14:42
 VARIABLE_NAME: WSREP_CLUSTER_STATE_UUID
VARIABLE_VALUE: 1e0b9725-9c5e-11e6-886d-7708872d6aa5
*************************** 5. row ***************************
          HOST: node01
          DATE: 2016-10-27 18:14:42
 VARIABLE_NAME: WSREP_CLUSTER_STATUS
VARIABLE_VALUE: Primary
5 rows in set (0.00 sec)

Check the other nodes as well:

#: node02
[root@node02 mysql]# mysql -e "show global status like 'wsrep_local_state%'\G"
*************************** 1. row ***************************
Variable_name: wsrep_local_state_uuid
        Value: 1e0b9725-9c5e-11e6-886d-7708872d6aa5
*************************** 2. row ***************************
Variable_name: wsrep_local_state
        Value: 4
*************************** 3. row ***************************
Variable_name: wsrep_local_state_comment
        Value: Synced

#: node03
[root@node03 ~]# mysql -e "show global status like 'wsrep_local_state%'\G"
*************************** 1. row ***************************
Variable_name: wsrep_local_state_uuid
        Value: 1e0b9725-9c5e-11e6-886d-7708872d6aa5
*************************** 2. row ***************************
Variable_name: wsrep_local_state
        Value: 4
*************************** 3. row ***************************
Variable_name: wsrep_local_state_comment
        Value: Synced

Cheers!

Query to a partitioned table doing full partition scan

Latest Forum Posts - November 6, 2016 - 10:26pm
Hello,

I have a history table that keeps data for the last month and is then purged. This table has about 10 million records. I partitioned this table using range partitioning on a datetime column. The table and partition definition are below.

CREATE TABLE `messages_history` (
`HistoryId` BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT,
`EventID` BIGINT(20) UNSIGNED NOT NULL,
`SourceDevice` VARCHAR(20) NOT NULL,
`DestinationDevice` VARCHAR(20) NOT NULL,
`EventTime` DATETIME NOT NULL,
`Event` VARCHAR(15) NOT NULL,
`Message` VARCHAR(255) NOT NULL,
PRIMARY KEY (`HistoryId`,`EventTime`),
KEY `indx_messages_new_message` (`Message`),
KEY `idx_eventtime` (`EventTime`)
) ENGINE=MYISAM DEFAULT CHARSET=latin1
/*!50100 PARTITION BY RANGE (DAY(EventTime))
(PARTITION p1 VALUES LESS THAN (2) ENGINE = MyISAM,
PARTITION p2 VALUES LESS THAN (3) ENGINE = MyISAM,
PARTITION p3 VALUES LESS THAN (4) ENGINE = MyISAM,
PARTITION p4 VALUES LESS THAN (5) ENGINE = MyISAM,
PARTITION p5 VALUES LESS THAN (6) ENGINE = MyISAM,
PARTITION p6 VALUES LESS THAN (7) ENGINE = MyISAM,
PARTITION p7 VALUES LESS THAN (8) ENGINE = MyISAM,
PARTITION p8 VALUES LESS THAN (9) ENGINE = MyISAM,
PARTITION p9 VALUES LESS THAN (10) ENGINE = MyISAM,
PARTITION p10 VALUES LESS THAN (11) ENGINE = MyISAM,
PARTITION p11 VALUES LESS THAN (12) ENGINE = MyISAM,
PARTITION p12 VALUES LESS THAN (13) ENGINE = MyISAM,
PARTITION p13 VALUES LESS THAN (14) ENGINE = MyISAM,
PARTITION p14 VALUES LESS THAN (15) ENGINE = MyISAM,
PARTITION p15 VALUES LESS THAN (16) ENGINE = MyISAM,
PARTITION p16 VALUES LESS THAN (17) ENGINE = MyISAM,
PARTITION p17 VALUES LESS THAN (18) ENGINE = MyISAM,
PARTITION p18 VALUES LESS THAN (19) ENGINE = MyISAM,
PARTITION p19 VALUES LESS THAN (20) ENGINE = MyISAM,
PARTITION p20 VALUES LESS THAN (21) ENGINE = MyISAM,
PARTITION p21 VALUES LESS THAN (22) ENGINE = MyISAM,
PARTITION p22 VALUES LESS THAN (23) ENGINE = MyISAM,
PARTITION p23 VALUES LESS THAN (24) ENGINE = MyISAM,
PARTITION p24 VALUES LESS THAN (25) ENGINE = MyISAM,
PARTITION p25 VALUES LESS THAN (26) ENGINE = MyISAM,
PARTITION p26 VALUES LESS THAN (27) ENGINE = MyISAM,
PARTITION p27 VALUES LESS THAN (28) ENGINE = MyISAM,
PARTITION p28 VALUES LESS THAN (29) ENGINE = MyISAM,
PARTITION p29 VALUES LESS THAN (30) ENGINE = MyISAM,
PARTITION p30 VALUES LESS THAN (31) ENGINE = MyISAM,
PARTITION p31 VALUES LESS THAN (32) ENGINE = MyISAM) */

Now, when I query this table for records between two dates, the explain plan shows that it is looking into all partitions even though I am searching for only a couple of days. The query and explain plan are below.

EXPLAIN PARTITIONS
SELECT * FROM messages_history WHERE eventTime >= '2016-10-24 00:00:00' AND eventTime < '2016-10-26 00:00:00';

"id" "select_type" "table" "partitions" "type" "possible_keys" "key" "key_len" "ref" "rows" "filtered" "Extra"
"1" "SIMPLE" "messages_history" "p1,p2,p3,p4,p5,p6,p7,p8,p9,p10,p11,p12,p13,p14,p1 5,p16,p17,p18,p19,p20,p21,p22,p23,p24,p25,p26,p27, p28,p29,p30,p31" "ALL" "idx_eventtime" \N \N \N "2547217" "100.00" "Using where"


If I query like below, it does use partition pruning.

EXPLAIN PARTITIONS
SELECT * FROM messages_history WHERE eventTime = '2016-10-24 00:00:00'

"id" "select_type" "table" "partitions" "type" "possible_keys" "key" "key_len" "ref" "rows" "filtered" "Extra"
"1" "SIMPLE" "messages_history" "p24" "ref" "idx_eventtime" "idx_eventtime" "5" "const" "2" "100.00" \N


So, what is wrong with the first query? Wasn't it supposed to use just two partitions instead of looking for data in all of them? Am I doing anything wrong? Please help me with this.

With Regards
Raghupradeep
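For reference, this behavior is expected rather than a bug: DAY() is not a monotonically increasing function, so the optimizer can prune partitions only for equality predicates (it evaluates DAY('2016-10-24') = 24 and picks p24, as in the second EXPLAIN above), not for ranges. Range pruning requires a monotonic partitioning function such as TO_DAYS(), YEAR() or UNIX_TIMESTAMP(). A sketch of a pruning-friendly definition, with illustrative partition boundaries (a rotation job would have to add and drop daily partitions as part of the monthly purge):

ALTER TABLE messages_history
PARTITION BY RANGE (TO_DAYS(EventTime)) (
  PARTITION p20161024 VALUES LESS THAN (TO_DAYS('2016-10-25')),
  PARTITION p20161025 VALUES LESS THAN (TO_DAYS('2016-10-26')),
  PARTITION p20161026 VALUES LESS THAN (TO_DAYS('2016-10-27')),
  PARTITION pmax VALUES LESS THAN MAXVALUE
);

With this definition, EXPLAIN PARTITIONS for the two-day range query above should list only p20161024 and p20161025 instead of all 31 partitions.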

Performance issue with wsrep: in pre-commit stage

Latest Forum Posts - November 6, 2016 - 11:00am
Hi,
I have the following problem with PXC.

1) I have the following configuration:
2 x DB + Arbitrator
PXC version 5.7.14

2) I'm writing data to the database on 50 concurrent threads.
Writes are batched, and every 10k rows there is a commit.

3) The problem is that the commit is super slow. It can take up to 60 seconds,
and during the commit the thread is in the "wsrep: in pre-commit stage" state.

4) my.cnf

[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock

[mysqld]
log_error = /var/log/mysql/error.log
server-id=1
log-bin
expire_logs_days=2
binlog_format=ROW

default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2

wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_cluster_address=gcomm://list_of_nodes
wsrep_node_address=address
wsrep_sst_method=xtrabackup-v2
wsrep_cluster_name=bi-percona
wsrep_sst_auth="loginassword"
wsrep_slave_threads=15
wsrep_provider_options="gcache.size=128G;gcs.fc_li mit=128;gsc.fc_master_slave=YES;repl.commit_order= 0"

skip-external-locking
sql_mode="STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_B Y_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
pxc_strict_mode = PERMISSIVE

bind-address = 0.0.0.0
ssl = on

max_allowed_packet = 1G
group_concat_max_len = 1073741824
key_buffer_size = 8589934592
max_heap_table_size = 10737418240
max_connect_errors = 1000000
thread_stack = 192K
thread_cache_size = 8

query_cache_type = 0

max_connections = 2000
thread_cache_size = 100
open_files_limit = 10440

query_cache_size = 0

net_read_timeout = 360
net_write_timeout = 360
default-storage-engine = InnoDB
innodb_flush_method = O_DIRECT
innodb_log_files_in_group = 4
innodb_log_file_size = 1G
innodb_flush_log_at_trx_commit = 0
innodb_file_per_table = 1
innodb_buffer_pool_size = 128G
innodb_buffer_pool_instances = 64
character-set-client-handshake = FALSE
character-set-server = utf8mb4
collation-server = utf8mb4_unicode_ci
range_optimizer_max_mem_size = 100M


What can I do to optimize writes?
I already enabled wsrep_slave_threads and was experimenting with wsrep_provider_options. What else can I do?

Regards,
Maciek
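For reference, the "wsrep: in pre-commit stage" state indicates the commit is waiting in Galera's replication/certification path, and with 10k-row transactions on 50 threads that often points at flow control: a slower node pauses the cluster when its receive queue grows too large. A first diagnostic step (standard Galera status counters, not from the original post):

mysql> SHOW GLOBAL STATUS LIKE 'wsrep_flow_control_paused';
mysql> SHOW GLOBAL STATUS LIKE 'wsrep_local_recv_queue_avg';

A wsrep_flow_control_paused value close to 1.0, or a steadily growing receive queue average, means the other node cannot keep up applying the write sets; in that case much smaller commits (for example, 1k rows) usually replicate and certify far more smoothly than 10k-row batches.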



