Feed aggregator

Impossible to join primary node

Latest Forum Posts - June 22, 2015 - 9:58am
2-node Percona cluster 5.6.24-72.2 + garbd,
direct cable between eth1 of the two nodes,
iptables disabled.

Node 2 is running perfectly.
I installed the new release 5.6.24-72.2.

Node 1 went down one night.
I used the downtime to shrink the system partition and install the new release.

What I have been trying:
a. Cleaning the contents of the /var/lib/mysql folder and starting the MySQL server:
node 1 stop
service mysql start
********************************
2015-06-22 18:05:54 154050 [Note] WSREP: State transfer required:
Group state: db960768-171f-11e5-847c-3349410895e7:4357233
Local state: 00000000-0000-0000-0000-000000000000:-1
2015-06-22 18:05:54 154050 [Note] WSREP: New cluster view: global state: db960768-171f-11e5-847c-3349410895e7:4357233, view# 4: Primary, number of nodes: 2, my index: 0, protocol version 3
2015-06-22 18:05:54 154050 [Warning] WSREP: Gap in state sequence. Need state transfer.
2015-06-22 18:05:54 154050 [Note] WSREP: Running: 'wsrep_sst_xtrabackup-v2 --role 'joiner' --address '172.18.172.145' --auth 'sst_user:umi_tss_20131205' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --parent '154050' '' '
WSREP_SST: [INFO] Streaming with xbstream (20150622 18:05:55.528)
WSREP_SST: [INFO] Using socat as streamer (20150622 18:05:55.532)
WSREP_SST: [INFO] Xtrabackup based encryption enabled in my.cnf - Supported only from Xtrabackup 2.1.4 (20150622 18:05:55.572)
WSREP_SST: [INFO] Evaluating timeout -s9 100 socat -u TCP-LISTEN:4444,reuseaddr stdio | pv -f -i 10 -N joiner 2>>/var/log/mysql-sst-progress | xbcrypt --encrypt-algo=AES256 --encrypt-key=uo1zoo2ALoothaingookow7sho4eot4a -d | xbstream -x; RC=( ${PIPESTATUS[@]} ) (20150622 18:05:55.582)
2015-06-22 18:05:55 154050 [Note] WSREP: Prepared SST request: xtrabackup-v2|172.18.172.145:4444/xtrabackup_sst//1
Warning: Using a password on the command line interface can be insecure.
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
2015-06-22 18:05:55 154050 [Note] WSREP: REPL Protocols: 7 (3, 2)
2015-06-22 18:05:55 154050 [Note] WSREP: Service thread queue flushed.
2015-06-22 18:05:55 154050 [Note] WSREP: Assign initial position for certification: 4357233, protocol version: 3
2015-06-22 18:05:55 154050 [Note] WSREP: Service thread queue flushed.
2015-06-22 18:05:55 154050 [Warning] WSREP: Failed to prepare for incremental state transfer: Local state UUID (00000000-0000-0000-0000-000000000000) does not match group state UUID (db960768-171f-11e5-847c-3349410895e7): 1 (Operation not permitted)
at galera/src/replicator_str.cpp:prepare_for_IST():463. IST will be unavailable.
2015-06-22 18:05:55 154050 [Warning] WSREP: Member 0.0 (icts-zabbix01) requested state transfer from 'icts-zabbix02', but it is impossible to select State Transfer donor: No route to host
2015-06-22 18:05:55 154050 [ERROR] WSREP: Requesting state transfer failed: -113(No route to host)
2015-06-22 18:05:55 154050 [ERROR] WSREP: State transfer request failed unrecoverably: 113 (No route to host). Most likely it is due to inability to communicate with the cluster primary component. Restart required.
2015-06-22 18:05:55 154050 [Note] WSREP: Closing send monitor...
2015-06-22 18:05:55 154050 [Note] WSREP: Closed send monitor.
2015-06-22 18:05:55 154050 [Note] WSREP: gcomm: terminating thread
2015-06-22 18:05:55 154050 [Note] WSREP: gcomm: joining thread
2015-06-22 18:05:55 154050 [Note] WSREP: gcomm: closing backend

************************************
Node 2 does not log anything in its error log
************************************
tcpdump traces on both nodes show around 140 packets exchanged on TCP port 4567;
nothing is exchanged on UDP port 4567, nor on TCP ports 4568 or 4444
************************************

b. I have reinstalled the server ... and rebooted ... but nothing changed.
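
Since the joiner log above ends in "No route to host" and tcpdump sees nothing on ports 4568 or 4444, it may be worth testing those ports independently of MySQL. A minimal sketch with netcat, using the joiner address from the log (in an xtrabackup SST the donor connects back to the joiner on port 4444; depending on your netcat flavor the listener may need "nc -l -p 4444"):

node1# nc -l 4444
node2# echo test | nc 172.18.172.145 4444

If "test" does not arrive on node 1, the problem is network/routing rather than Galera.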

Does anyone have an idea what is happening?

Xibu

Information required for plugin pmp-check-mysql-pidfile

Latest Forum Posts - June 22, 2015 - 9:56am
When I configure the plugin for the MySQL PID check using the command below:

define command{
command_name check_mysql_pidfile
command_line $USER2$/pmp-check-mysql-pidfile -H hostname -l user -p password -w
}
I am getting the warning "WARN missing /var/run/mysqld/mysqld.pid" even though that file is present there.
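
One possibility worth ruling out (an assumption on my part, not something the post confirms): the plugin checks for the PID file on the machine where the script itself runs, so if Nagios executes it on the monitoring server, the file will appear missing even though it exists on the database host. A sketch of running the check on the database host via NRPE instead (command name and plugin path are hypothetical):

define command{
command_name check_mysql_pidfile_nrpe
command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c check_mysql_pidfile
}

# on the database host, in nrpe.cfg:
command[check_mysql_pidfile]=/usr/lib64/nagios/plugins/pmp-check-mysql-pidfile -l user -p password -w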

Any solution for this?

Thanks in advance

difference between Percona XtraDB cluster and Galera Cluster

Latest Forum Posts - June 22, 2015 - 2:47am
Dear gurus,

We're moving from standard MySQL Cluster (Oracle) to other products for our backend system. The candidates are Percona XtraDB Cluster (PXC) and Galera Cluster, and I am still unclear about the difference between the two products.

Are PXC and Galera Cluster the same product?

If not, does PXC use the Galera library for clustering?

What is the difference between Percona XtraDB Cluster and Galera Cluster in terms of performance and HA, and which one should be chosen?


Thanks and best regards

Percona Monitoring Plugins 1.1.5 release

Latest MySQL Performance Blog posts - June 22, 2015 - 12:00am

Percona is glad to announce the release of Percona Monitoring Plugins 1.1.5.

Changelog:

  • Added more DB instance classes to pmp-check-aws-rds.py (issue 1398911)
  • Added configurable query period and average time to pmp-check-aws-rds.py (issue 1436943)
  • Added region support to pmp-check-aws-rds.py (issue 1442980)
  • Added an option to alert when server is not configured as replica to pmp-check-mysql-replication-delay (issue 1357017)
  • Improved usage of lock-free SHOW SLAVE STATUS query (issue 1380690)
  • Fixed reporting of slave lag in ss_get_mysql_stats.php (issue 1389769)

We have also moved the code to GitHub (https://github.com/percona/percona-monitoring-plugins), but the bug tracker is still on Launchpad (https://bugs.launchpad.net/percona-monitoring-plugins).

A new tarball is available from the downloads area, or in packages from our software repositories. The plugins are fully supported for customers with a Percona Support contract, and free installation services are provided as part of some contracts. In addition, installation and setup of these tools are included with Percona’s Remote DBA service. You can find links to the documentation, forums and more at the project homepage.

About Percona Monitoring Plugins
Percona Monitoring Plugins are high-quality components to add enterprise-grade MySQL monitoring and graphing capabilities to your existing in-house, on-premises monitoring solutions. The components are designed to integrate seamlessly with widely deployed solutions such as Nagios, Cacti and Zabbix.

The post Percona Monitoring Plugins 1.1.5 release appeared first on MySQL Performance Blog.

contribute code to percona toolkit

Latest Forum Posts - June 19, 2015 - 12:15pm
Hi,

So if I want to contribute to Percona Toolkit, what's the process? Do I just open a PR against the GitHub repo? Are there any agreements I have to sign?

How to use percona plugins with Nagios

Latest Forum Posts - June 19, 2015 - 9:48am
Can anyone please list examples of how to install and configure the Percona plugins in Nagios? I have not been able to find a good example on any website so far. I am new to Percona and Nagios, so detailed steps for configuring a Percona plugin in Nagios would be very helpful.
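
As a starting point, a minimal sketch modeled on the command definition quoted in the pmp-check-mysql-pidfile post above — the $USER2$ plugin path, host name, credentials and thresholds are all assumptions:

define command{
command_name check_mysql_running_threads
command_line $USER2$/pmp-check-mysql-status -H $HOSTADDRESS$ -l nagios -p secret -x Threads_running -w 20 -c 40
}

define service{
use generic-service
host_name db1
service_description MySQL Threads Running
check_command check_mysql_running_threads
}

The plugins themselves install from the percona-nagios-plugins package in Percona's software repositories.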

Percona Cluster ERROR 2013 (HY000): Lost connection to MySQL server during query

Latest Forum Posts - June 19, 2015 - 9:35am
I have 3 nodes in a cluster with 24GB of RAM, 8 cores and 300GB SSD on each node.

I have a partitioned database with 30 million entries. When I try to insert large amounts of data (about 10 million rows) from one table to another, I get the following error:

ERROR 2013 (HY000): Lost connection to MySQL server during query

The mysql-error log shows the following:

150619 16:38:35 mysqld_safe Number of processes running now: 0
150619 16:38:35 mysqld_safe WSREP: not restarting wsrep node automatically
150619 16:38:35 mysqld_safe mysqld from pid file /var/lib/mysql/mysql.pid ended

The PXC version is 5.6.24-72.2 and the config file:

# Generated by Percona Configuration Wizard (http://tools.percona.com/) version REL5-20120208
# Configuration name server-2026 generated for ximop_22@hotmail.com at 2015-01-14 08:51:46

[mysql]

# CLIENT #
port = 3306
socket = /var/lib/mysql/mysql.sock

[mysqld]

# GENERAL #
user = mysql
default-storage-engine = InnoDB
socket = /var/lib/mysql/mysql.sock
pid-file = /var/lib/mysql/mysql.pid
tmpdir = /dev/shm/mysqltmp/
server-id = 1
expire_logs_days = 7
delayed_insert_timeout = 3600
skip-host-cache
skip-name-resolve

# Cluster Config #
#wsrep_cluster_address=gcomm://
wsrep_cluster_address=gcomm://10.240.28.7,10.240.46.7,10.240.208.7
binlog_format=ROW
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_name=AirePXC
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth="sstuser:s3cret"
wsrep_node_name=PXC1
wsrep_node_address=10.240.28.7
#innodb_locks_unsafe_for_binlog=1
transaction-isolation=READ-COMMITTED
innodb_autoinc_lock_mode=2

# MyISAM #
key-buffer-size = 32M
myisam-recover-options = FORCE,BACKUP

# SAFETY #
max-allowed-packet = 16M
max-connect-errors = 1000000

# DATA STORAGE #
datadir = /var/lib/mysql/

# BINARY LOGGING #
log-bin = /var/lib/mysql/mysql-bin
expire-logs-days = 2
sync-binlog = 0
log-slave-updates = 1

# CACHES AND LIMITS #
tmp-table-size = 32M
max-heap-table-size = 32M
query-cache-type = 0
query-cache-size = 0
max-connections = 500
thread-cache-size = 50
open-files-limit = 65535
table-definition-cache = 1024
table-open-cache = 2048

# INNODB #
innodb-flush-method = O_DIRECT
innodb-log-files-in-group = 2
innodb-log-file-size = 512M
innodb-flush-log-at-trx-commit = 1
innodb-file-per-table = 1
innodb-buffer-pool-size = 20G

# LOGGING #
log-error = /var/lib/mysql/mysql-error.log
log-queries-not-using-indexes = 1
slow-query-log = 1
slow-query-log-file = /var/lib/mysql/mysql-slow.log

Is this a bug?

Thanks!

Q&A: High availability when using MySQL in the cloud

Latest MySQL Performance Blog posts - June 19, 2015 - 6:00am

Last week I hosted a webinar on using MySQL in the cloud for High Availability (HA) alongside 451 Research analyst Jason Stamper. You can watch the recording and also download the slides (free) here. Just click the “Register” button at the end of that page.

We had several excellent questions and couldn’t get to all of them in the allotted time. I’m posting them here along with the answers. Feel free to ask follow-up questions in the comments below.

Q: Can the TokuDB engine be used in a PXC environment?

A: No, TokuDB cannot currently be used in a PXC environment; the only supported engine in Percona XtraDB Cluster 5.6 is InnoDB.

Q: With Galera replication (PXC), is the load balanced across the nodes?

A: No, you need to implement your own load balancing and HA layer between your clients and the Percona XtraDB Cluster server.  Examples mentioned in the webinar include HAProxy and F5 BigIP.

Q: What’s the best version of Percona XtraDB Cluster regarding InnoDB performance?

A: In general for best performance you should be using the latest release of Percona XtraDB Cluster 5.6, which is currently 5.6.24, released on June 3rd, 2015.

Q: Can I redirect my writes in Percona XtraDB Cluster to multiple nodes using HAProxy? When trying this with SysBench, I can see writes go only to the first node in PXC while reads do go to multiple nodes.

A: Yes, you can configure HAProxy to distribute both reads and writes across all of your nodes in a Percona XtraDB Cluster environment. Perhaps SysBench created only one database connection for all writes, and so HAProxy kept those confined to only one host. You may want to experiment with parallel_prepare.lua.
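
For illustration, a minimal haproxy.cfg listener that spreads connections across three PXC nodes (IPs and port are assumptions; production setups usually replace the plain TCP check with an httpchk against clustercheck):

listen pxc-front
    bind *:3307
    mode tcp
    balance roundrobin
    option tcpka
    server node1 10.0.0.1:3306 check
    server node2 10.0.0.2:3306 check
    server node3 10.0.0.3:3306 check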

Q: What’s the optimal HA for small datasets (db is less than 10gb)?

A: The optimal HA deployment for small datasets depends on the level of recovery required (tolerance for loss of transactions) and the time you can tolerate being in an unavailable state (seconds, minutes, hours?). Unfortunately there isn’t a single answer to your question; however, if you are interested in further discussion on this point, Percona would be happy to coordinate a time to speak. Please feel free to contact me directly at michael.coburn@percona.com and we can continue the conversation.

Q: Is there a concept of local master vs. remote master with PXC?

A: No, there is no concept of local vs. remote master. All nodes in a Percona XtraDB Cluster can be classified as masters, regardless of their proximity to the clients.

Q: Are there any concerns when considering AWS RDS or AURORA DB for MySQL HA in the Cloud?

A: Regarding AWS RDS: yes, this is a good option for MySQL HA in the cloud. Unfortunately I haven’t worked with Aurora DB much yet, so I don’t have an opinion on its suitability for HA in the cloud.

Q: We tried out PXC a while back and it used to lock everything whenever any DDL was done. Has that changed?

A: We would have to look at the specifics of your environment; however, there have been numerous improvements in the 1½ years of development since Percona XtraDB Cluster went Generally Available (GA) on January 30th, 2014 in version 5.6.15.

Q: Is using the arbitrator a must?

A: No. The arbitrator role, via the garbd daemon, is generally only used when operating in a minimal environment of two nodes that contain the data and you need a third node for quorum but don’t want to store the data a third time.
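
For reference, a minimal sketch of starting the arbitrator daemon (the group name and node addresses are assumptions and must match your cluster's wsrep_cluster_name and members):

garbd --group my_centos_cluster --address "gcomm://10.0.0.1:4567,10.0.0.2:4567" --daemon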

Q: Can we do a cluster across different zones?

A: Yes, you can. However, be aware that the latency incurred for all cluster certification operations will be impacted by the round-trip time between nodes.

Q: Does PXC also support the MyISAM database?

A: No, Percona XtraDB Cluster does not support any storage engine other than InnoDB as of PXC 5.6.

Q: How do load balancers affect the throughput in a Galera-based setup given that the write would be limited by the slowest node?

A: Load balancers will introduce some measure of additional latency, in the form of CPU time in the load-balancer layer as it evaluates its own ruleset, and in network time due to the additional hop via the load balancer. Otherwise there should be no perceptible difference in the write throughput of a Percona XtraDB Cluster with and without a load balancer as it relates to the “slowest node” factor.

Q: Have you used MaxScale yet? If so, what are your thoughts?

A: Unfortunately I haven’t used MaxScale however Yves Trudeau, Percona Principal Architect, has recently written about MaxScale in this blog post.

Q: How do you configure timeout and maintain persistent connection to HAProxy?

A: I would encourage you to refer to the HAProxy Documentation.

The post Q&A: High availability when using MySQL in the cloud appeared first on MySQL Performance Blog.

Getting EXPLAIN information from already running queries in MySQL 5.7

Latest MySQL Performance Blog posts - June 18, 2015 - 12:00am

When a new version of MySQL is about to be released we read a lot of blog posts about the performance and scalability improvements. That’s good but sometimes we miss some small features that can help us a lot in our day-to-day tasks. One good example is the blog post that Aurimas wrote about a new small feature in MySQL 5.6 that I didn’t know about until I read it: the Automatic InnoDB transaction log file size change. How cool is that?

I plan to write a series of blog posts that will show some of those small new features in MySQL 5.7 that are going to be really useful. I’m going to start with EXPLAIN FOR CONNECTION.

This feature allows us to run an EXPLAIN on an already running statement. Let’s say you find a query that has been running for a long time and you want to check why that could be happening. In 5.7 you can just ask MySQL to EXPLAIN the query that a particular connection is running and get the execution path. You can use it if the query is a SELECT, DELETE, INSERT, REPLACE or UPDATE; it won’t work if the query is a prepared statement, though.

Let me show you an example of how it works.

We have a long-running join:

mysql [localhost] {msandbox} ((none)) > show processlist\G
*************************** 1. row ***************************
     Id: 9
   User: msandbox
   Host: localhost
     db: employees
Command: Query
   Time: 49
  State: Sending data
   Info: select count(*) from employees, salaries where employees.emp_no = salaries.emp_no

Let’s see the execution plan for the query:

mysql [localhost] {msandbox} ((none)) > explain for connection 9\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: employees
   partitions: NULL
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 299540
     filtered: 100.00
        Extra: NULL
*************************** 2. row ***************************
           id: 1
  select_type: SIMPLE
        table: salaries
   partitions: NULL
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 2803840
     filtered: 100.00
        Extra: Using where; Using join buffer (Block Nested Loop)

The join between those tables is not using any index at all, so there is some room for improvement here.

Conclusion

You can use this feature to see why a query is running for too long and, based on that information, decide how to fix it and how to proceed. This is going to be a very useful feature for DBAs who want to diagnose performance problems and slow queries.

The post Getting EXPLAIN information from already running queries in MySQL 5.7 appeared first on MySQL Performance Blog.

Update on the InnoDB double-write buffer and EXT4 transactions

Latest MySQL Performance Blog posts - June 17, 2015 - 7:15am

In a post written a few months ago, I found that using EXT4 transactions with the “data=journal” mount option improves write performance significantly, by 55%, without putting data at risk. Many people commented on the post saying they were not able to reproduce the results, so I decided to investigate further in order to find out why my results were different.

So, I ran sysbench benchmarks on a few servers and found when the InnoDB double-write buffer limitations occur and when they don’t. I also made sure some of my colleagues were able to reproduce the results. Basically, in order to reproduce the results you need the following conditions:

  • Spinning disk (no SSD)
  • Enough CPU power
  • A dataset that fits in the InnoDB buffer pool
  • A continuous high write load with many ops waiting for disk

Using the InnoDB double-write buffer on an SSD disk somewhat prevents us from seeing the issue, which is good performance-wise. That comes from the fact that the latency of each write operation is much lower. That makes sense: the double-write buffer is an area of 128 pages on disk that is used by the write threads. When a write thread needs to write a bunch of dirty pages to disk, it first writes them sequentially to free slots in the double-write buffer in a single I/O operation and then spends time writing the pages to their actual locations on disk, typically using one I/O operation per page. Once done writing, it releases the double-write buffer slots it was holding and another thread can do its work. The presence of a RAID controller with a write cache certainly helps, at least until the write cache is full. Since I didn’t test with a RAID controller, I suspect its write cache would delay the appearance of the symptoms, but if the write load is sustained over a long period of time, the issue with the InnoDB double-write buffer will appear.

So, to recapitulate: on a spinning disk, a write thread needs to hold a lock on some of the double-write buffer slots for at least a few milliseconds per page it needs to write, while on an SSD the slots are released very quickly because of the low latency of the storage. To actually stress the InnoDB double-write buffer on an SSD, one must push many more writes.
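
To put rough, purely illustrative numbers on that: a 7200 rpm spinning disk services a random write in about 8 ms, so flushing a batch of 64 pages to their final locations keeps double-write slots busy for roughly 64 x 8 ms = 512 ms, while an SSD at around 0.1 ms per write holds them for only about 6 ms. That two-orders-of-magnitude gap is why the contention is so much harder to trigger on flash.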

That leads us to the second point: the amount of CPU resources available. At first, one of my colleagues tried to reproduce the results on a small EC2 instance and failed. It appeared that, by default, the sysbench oltp.lua script does quite a lot of reads, and those reads saturate the CPU, throttling the writes. By lowering the number of reads in the script, he was then able to reproduce the results.

For my benchmarks, I used the following command:

sysbench --num-threads=16 --mysql-socket=/var/lib/mysql/mysql.sock
--mysql-database=sbtest --mysql-user=root
--test=/usr/share/doc/sysbench/tests/db/oltp.lua --oltp-table-size=50000000
--oltp-test-mode=complex --mysql-engine=innodb --db-driver=mysql
--report-interval=60 --max-requests=0 --max-time=3600 run

Both servers used were metal boxes with 12 physical cores (24 with hyper-threading). With less CPU power, I suggest adding the following parameters:

--oltp-point-selects=1
--oltp-range-size=1
--oltp-index-updates=10

These parameters make sure the CPU is not wasted on reads and enough writes are generated. Remember, we are not doing a generic benchmark; we are just stressing the InnoDB double-write buffer.

In order to make sure something else isn’t involved, I verified the following:

  • Server independence: tried on 2 physical servers and one EC2 instance, on CentOS 6 and Ubuntu 14.04
  • MySQL distribution: tried MySQL Community and Percona Server
  • MySQL version: tried 5.5.37 and 5.6.23 (Percona Server)
  • Varied the InnoDB log file size from 32MB to 512MB
  • The impact of the number of InnoDB write threads (1, 2, 4, 8, 16, 32)
  • The use of Linux native asynchronous I/O
  • Spinning and SSD storage

So, with all those verifications done, I can maintain that if you are using a server with spinning disks and a high write load, using EXT4 transactions instead of the InnoDB double-write buffer yields an increase in throughput of more than 50%. In an upcoming post, I’ll show how performance stability is affected by the InnoDB double-write buffer under a high write load.

Appendix: the relevant part of the my.cnf

innodb_buffer_pool_size = 12G
innodb_write_io_threads = 8 # or else in {1,2,4,8,16,32}
innodb_read_io_threads = 8
innodb_flush_log_at_trx_commit = 0 # must be 0 or 2 to really stress the double write buffer
innodb_log_file_size = 512M # or 32M, 64M
innodb_log_files_in_group = 2
innodb_file_per_table
innodb_flush_method=O_DIRECT # or O_DSYNC
innodb_buffer_pool_restore_at_startup=300 # On 5.5.x, important to warm up the buffer pool
#innodb_buffer_pool_load_at_startup=ON # on 5.6, important to warm up the buffer pool
#innodb_buffer_pool_dump_at_shutdown=ON # on 5.6, important to warm up the buffer pool,
skip-innodb_doublewrite # or commented out
innodb_flush_neighbor_pages=none # or area for spinning

The post Update on the InnoDB double-write buffer and EXT4 transactions appeared first on MySQL Performance Blog.

Performance inputs for PXC

Latest Forum Posts - June 17, 2015 - 5:41am
Please provide some input on read/write performance on PXC.

H/W -- 3 nodes with 1 HA Proxy

64 GB RAM
12-core CPU / Dell
Flash drive 2.1 TB
PXC -- 5.6.24 PXC (GPL)



Sample benchmarking:
# mysqlslap --concurrency=40 --auto-generate-sql-load-type=read --host=127.0.0.1 --port=3307 --user=****** --password=********* --auto-generate-sql --number-of-queries=100000 --number-char-cols=9 --number-int-cols=13 --auto-generate-sql-secondary-indexes=10 --engine=innodb --verbose
Warning: Using a password on the command line interface can be insecure.
Benchmark
Running for engine innodb
Average number of seconds to run all queries: 19.978 seconds
Minimum number of seconds to run all queries: 19.978 seconds
Maximum number of seconds to run all queries: 19.978 seconds
Number of clients running queries: 40
Average number of queries per client: 2500
# mysqlslap --concurrency=140 --auto-generate-sql-load-type=write --host=127.0.0.1 --port=3307 --user=***** --password=****** --auto-generate-sql --number-of-queries=100000 --number-char-cols=9 --number-int-cols=13 --auto-generate-sql-secondary-indexes=10 --engine=innodb --verbose
Warning: Using a password on the command line interface can be insecure.
Benchmark
Running for engine innodb
Average number of seconds to run all queries: 15.062 seconds
Minimum number of seconds to run all queries: 15.062 seconds
Maximum number of seconds to run all queries: 15.062 seconds
Number of clients running queries: 140
Average number of queries per client: 714



Sample config on PXC:

# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://xx,xx,xx
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node #1 address
wsrep_node_address=xxx
# SST method
wsrep_sst_method=xtrabackup-v2
# Cluster name
wsrep_cluster_name=my_centos_cluster
# Authentication for SST method
wsrep_sst_auth="xxxxxx"

wsrep_max_ws_size=2G
wsrep_slave_threads=4

wsrep_provider_options="gcache.size=128M;gcache.pa ge_size =128M;"




Speed up GROUP BY queries with subselects in MySQL

Latest MySQL Performance Blog posts - June 15, 2015 - 11:32am

We usually try to avoid subselects because sometimes they force the use of a temporary table and limit the use of indexes. But when is it good to use a subselect?

This example was tested on table a (1310723 rows) and tables b, c and d (5 rows each), with MySQL versions 5.5 and 5.6.

Let’s suppose we have a query like this:

select a.name,sum(a.count) aSum,avg(a.position) aAVG,b.col1,c.col2,d.col3 from a join b on (a.bid = b.id) join c on (a.cid = c.id) join d on (a.did = d.id) group by a.name,b.id,c.id,d.id

What will MySQL do? First it will take the entire data set – this means it will go through each row, scanning the values of “bid,” “cid” and “did,” and then apply the joins to each table. At that point it has the complete data set, and then it will start to group it, executing the SUM and AVG functions.

Let’s analyze it step by step:

  1. Scan each row of table a, which has 1310720 rows.
  2. Join each row of table a with b, c and d – this means that each of the 1310720 rows will be joined, making the temporary table bigger.
  3. Execute the GROUP BY, which will scan the 1310720 rows again and create the result data set.

What can we do to optimize this query? We can’t avoid the GROUP BY over the 1.3M rows, but we can avoid the join over 1.3M rows. How? We need all the information from table a for the GROUP BY, but we don’t need to execute all the joins before grouping. Let’s rewrite the query:

select a.name,aSum,aAVG,b.col1,c.col2,d.col3 from ( select name,sum(count) aSum ,avg(position) aAVG,bid,cid,did from a group by name,bid,cid,did) a join b on (a.bid = b.id) join c on (a.cid = c.id) join d on (a.did = d.id)

We see from the above query that we now do the GROUP BY only over table a; the result data set of that subquery is just 20 rows. But what about the query response time? The first query took 2.3 sec on average and the optimized query took 1.8 sec on average, half a second faster.

What about adding a covering index? The index that we can add will be:

alter table a add index (name,bid,cid,did,count,position);

The explain plan of both queries shows that it is using just the index to resolve the query.

Now the response time of the original query is 1.9 sec, which is near the time of the optimized query. However, the response time of the optimized query is now 0.7 sec, nearly 3x faster. The con of adding this index is that we are indexing the whole table: the index length is near 80% of the data length.
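
If you want to check that index-to-data ratio yourself, a quick query against the data dictionary (a sketch; the schema name is an assumption):

SELECT table_name,
       data_length / 1024 / 1024 AS data_mb,
       index_length / 1024 / 1024 AS index_mb
FROM information_schema.TABLES
WHERE table_schema = 'test' AND table_name = 'a';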

If the original query had WHERE conditions, the rewrite depends on which field they filter. Let’s suppose we add c.col2=3:

select a.name,sum(a.count) aSum,avg(a.position) aAVG,b.col1,c.col2,d.col3 from a join b on (a.bid = b.id) join c on (a.cid = c.id) join d on (a.did = d.id) where c.col2=3 group by a.name,b.id,c.id,d.id

Now, in the new query, the subquery changes: table c and the WHERE clause must be added to the subquery:

select a.name,aSum,aAVG,b.col1,a.col2,d.col3 from ( select a.name,sum(count) aSum ,avg(position) aAVG,bid,cid,did,c.col2 from a join c on (a.cid = c.id) where c.col2=3 group by name,bid,cid,did) a join b on (a.bid = b.id) join d on (a.did = d.id)

But the difference in times is not as big (original query 1.1 sec and new query 0.9 sec). Why? Because the original query has less data to group. Adding c.col2=3 to the original query reduces the amount of data to group from 1.3M to 262K rows. Indeed, if you add more WHERE conditions on different tables, the data set to sort will be smaller and the speed-up will decrease.

Conclusion: We usually add the GROUP BY at the end of queries, and that is OK because the syntax forces us to do it. However, we can use a subquery to group only the data we need and then perform the joins over the other tables. This can speed up some of our GROUP BY queries.

The post Speed up GROUP BY queries with subselects in MySQL appeared first on MySQL Performance Blog.

Cannot add some Google account linked users to Percona Cloud organization

Latest Forum Posts - June 15, 2015 - 10:41am
Error: "A user does not exist for (email)"

I have a user who signed up with their Google account that I was able to add, but two other users who signed up with their Google accounts cannot be added. Any suggestions?

Thanks

Incremental streaming backups and restore

Latest Forum Posts - June 15, 2015 - 9:31am
Hi,

I'm new to Percona and have just been getting my head around the features it has. My ultimate goal is to be able to stream incremental backups to a remote server and to restore/rebuild the database on the remote host.

So far I've got to the point of streaming the increments according to this guide (at the very end):
https://www.percona.com/doc/percona-...obackupex.html
# innobackupex --incremental --incremental-lsn=LSN-number --stream=xbstream ./ | ssh user@hostname " cat - | xbstream -x -C > /backup-dir/"

My questions are:
1. How can I send the incremental backups without needing to refer to the LSN number?
2. How can I rebuild the increments, or rather prepare the backups, on the remote host?
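
For question 2, a minimal sketch of the usual prepare sequence on the remote host, assuming the full backup was unpacked to /backup-dir/full and the increment to /backup-dir/inc1 (both paths hypothetical). For question 1, note that innobackupex can read the starting LSN from the previous backup's xtrabackup_checkpoints file if you pass --incremental-basedir instead of --incremental-lsn:

# innobackupex --apply-log --redo-only /backup-dir/full
# innobackupex --apply-log --redo-only /backup-dir/full --incremental-dir=/backup-dir/inc1
# innobackupex --apply-log /backup-dir/full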

Suggestions are most welcome!

My cluster crashes randomly

Latest Forum Posts - June 15, 2015 - 12:41am
Hi, I run a cluster with 3 nodes. It was created and ran normally for about 1 day before one node crashed randomly.

My information
1. Server version: 5.6.21-70.1-56-log Percona XtraDB Cluster (GPL), Release rel70.1, Revision 938, WSREP version 25.8, wsrep_25.8.r4150

2. Ram 32G, CPU 24 core, 4 HDD - raid 10

3. datafile: 4G

4. file: /etc/my.cnf

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0

## REPLICATE ##
# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so

wsrep_provider_options="gcache.size = 1G; gcache.page_size = 512M; gcs.fc_limit = 512"

wsrep_slave_threads=24

wsrep_restart_slave=1

wsrep_forced_binlog_format=ROW

# Cluster connection URL contains IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://192.168.1.83,192.168.1.84,192.168.1.85
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node #2 address
wsrep_node_address=192.168.1.xx
# Cluster name
wsrep_cluster_name=my_centos_cluster
# SST method
wsrep_sst_method=xtrabackup-v2
#Authentication for SST method
wsrep_sst_auth="xxx:xxx"

# Maximum number of rows in write set
wsrep_max_ws_rows=262144
# Maximum size of write set
wsrep_max_ws_size=2147483648

#################### TUNNING ########################

###### Slow query log

slow_query_log=1

slow_query_log_file =/var/log/mysql/slow_queries.log

long_query_time=4

connect_timeout=300

skip_name_resolve

innodb_flush_log_at_trx_commit=2

innodb_file_per_table=1

max_allowed_packet=1G

max_connect_errors=1000000

innodb_buffer_pool_size=4G

read_buffer_size=4M

read_rnd_buffer_size=4M

join_buffer_size=8M

sort_buffer_size=4M

innodb_log_buffer_size=16M

thread_cache_size=256

innodb_additional_mem_pool_size=32M

innodb_flush_method=O_DIRECT

log_queries_not_using_indexes=1

innodb_thread_concurrency=0

wait_timeout=300

interactive_timeout=300

max_connections=800

innodb_fast_shutdown=0

open_files_limit=10000

table_open_cache=3000

tmp_table_size=32M

max_heap_table_size=32M

##### Set Ramdisk #####

tmpdir = /usr/mysqltmp

#######################


5. Error message


05:05:00 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona XtraDB Cluster better by reporting any
bugs at https://bugs.launchpad.net/percona-xtradb-cluster

key_buffer_size=8388608
read_buffer_size=4194304
max_used_connections=45
max_threads=802
thread_count=27
connection_count=2
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 6590372 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0xa09eba0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 7fa617fedd38 thread_stack 0x40000
/usr/sbin/mysqld(my_print_stacktrace+0x35)[0x8f97d5]
/usr/sbin/mysqld(handle_fatal_signal+0x4b4)[0x6655c4]
/lib64/libpthread.so.0(+0xf710)[0x7fa78c98e710]
/usr/sbin/mysqld(_Z11ull_get_keyPKhPmc+0x14)[0x5fb2b4]
/usr/sbin/mysqld(my_hash_first_from_hash_value+0x6b)[0x8e2c4b]
/usr/sbin/mysqld(my_hash_search+0x11)[0x8e2e31]
/usr/sbin/mysqld(_ZN22Item_func_release_lock7val_intEv+0x10f)[0x60068f]
/usr/sbin/mysqld(_ZN4Item4sendEP8ProtocolP6String+0x1c4)[0x5b06d4]
/usr/sbin/mysqld(_ZN8Protocol19send_result_set_rowEP4ListI4ItemE+0xc7)[0x65ef47]
/usr/sbin/mysqld(_ZN11select_send9send_dataER4ListI4ItemE+0x67)[0x6ae287]
/usr/sbin/mysqld(_ZN4JOIN4execEv+0x521)[0x6c9e81]
/usr/sbin/mysqld(_Z12mysql_selectP3THDP10TABLE_LISTjR4ListI4ItemEPS4_P10SQL_I_ListI8st_orderESB_S7_yP13select_resultP18st_select_lex_unitP13st_select_lex+0x250)[0x7120e0]
/usr/sbin/mysqld(_Z13handle_selectP3THDP13select_resultm+0x187)[0x712967]
/usr/sbin/mysqld[0x6e836d]
/usr/sbin/mysqld(_Z21mysql_execute_commandP3THD+0x3cdb)[0x6ed50b]
/usr/sbin/mysqld(_ZN18Prepared_statement7executeEP6Stringb+0x40e)[0x7002ae]
/usr/sbin/mysqld(_ZN18Prepared_statement12execute_loopEP6StringbPhS2_+0xde)[0x7044ae]
/usr/sbin/mysqld(_Z22mysql_sql_stmt_executeP3THD+0xbe)[0x70500e]
/usr/sbin/mysqld(_Z21mysql_execute_commandP3THD+0x1324)[0x6eab54]
/usr/sbin/mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x658)[0x6f0338]
/usr/sbin/mysqld[0x6f0491]
/usr/sbin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x19d5)[0x6f2675]
/usr/sbin/mysqld(_Z10do_commandP3THD+0x22b)[0x6f3b5b]
/usr/sbin/mysqld(_Z24do_handle_one_connectionP3THD+0x17f)[0x6bc30f]
/usr/sbin/mysqld(handle_one_connection+0x47)[0x6bc4f7]
/usr/sbin/mysqld(pfs_spawn_thread+0x12a)[0xaf38ba]
/lib64/libpthread.so.0(+0x79d1)[0x7fa78c9869d1]
/lib64/libc.so.6(clone+0x6d)[0x7fa78ae8a8fd]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (7fa4dc004178): is an invalid pointer
Connection ID (thread ID): 6222895
Status: NOT_KILLED

You may download the Percona XtraDB Cluster operations manual by visiting
http://www.percona.com/software/percona-xtradb-cluster/. You may find information
in the manual which will help you identify the cause of the crash.
150608 12:05:00 mysqld_safe Number of processes running now: 0
150608 12:05:00 mysqld_safe WSREP: not restarting wsrep node automatically
150608 12:05:00 mysqld_safe mysqld from pid file /var/lib/mysql/RR-Cluster-DB2.pid ended


I have dug through the internet for a month, but nothing fixes my error. I hope someone here on the Percona forum can help me.
Thanks a million.

Third Node cannot connect / 2 Nodes in AWS / 1 node in-house with NAT

Latest Forum Posts - June 12, 2015 - 10:53pm
Hi guys,

I always read all the posts here and normally they solve my issues.
Please bear with me while I take the time to explain my situation.
All nodes are running Ubuntu 12.04.
The AWS nodes run Percona Server version 5.5.34-31.1 Percona XtraDB Cluster (GPL), Release 31.1, wsrep_25.9.r3928.
Both AWS nodes have elastic public IPs.
The in-house node runs Percona 5.5.41-25.11-853.

The AWS nodes are, of course, configured with their internal IPs 10.xxx.xxx.xxx, and the in-house node is 172.21.12.11 behind NAT on a public IP 199.xxx.xxx.xxx.

Both AWS nodes have the public IP of the in-house server in gcomm, and the in-house server has the two elastic IPs and the internal 172.21.12.11.

And here is the dump of:

tail -f /var/lib/mysql/novo.err


150612 23:25:52 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
150612 23:25:52 mysqld_safe WSREP: Running position recovery with --log_error='/var/lib/mysql/wsrep_recovery.UoqEvJ' --pid-file='/var/lib/mysql/noomedici-recover.pid'
150612 23:25:55 mysqld_safe WSREP: Recovered position 00000000-0000-0000-0000-000000000000:-1
150612 23:25:55 [Note] WSREP: wsrep_start_position var submitted: '00000000-0000-0000-0000-000000000000:-1'
150612 23:25:55 [Note] WSREP: Read nil XID from storage engines, skipping position init
150612 23:25:55 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib/libgalera_smm.so'
150612 23:25:55 [Note] WSREP: wsrep_load(): Galera 2.12(r318911d) by Codership Oy <info@codership.com> loaded successfully.
150612 23:25:55 [Note] WSREP: Found saved state: 00000000-0000-0000-0000-000000000000:-1
150612 23:25:55 [Note] WSREP: Reusing existing '/var/lib/mysql//galera.cache'.
150612 23:25:55 [Note] WSREP: Passing config to GCS: base_host = 172.21.12.11; base_port = 4567; cert.log_conflicts = no; debug = no; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.join_retrans_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 4; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.user_send_window = 2; evs.view_forget_timeout = PT24H; gcache.dir = /var/lib/mysql/; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /var/lib/mysql//galera.cache; gcache.page_size = 128M; gcache.size = 128M; gcs.fc_debug = 0; gcs.fc_factor = 1.0; gcs.fc_limit = 16; gcs.fc_master_slave = no; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = no; gmcast.version = 0; pc.announce_timeout = PT3S; pc.checksum = false; pc.ignore_quorum = false; pc.ignore_sb = false; pc.npvo = false; pc.version = 0; pc.wait_prim = true; pc.wait_prim_timeout = PT30S; pc.weight = 1; protonet.backend = asio; pr
150612 23:25:55 [Note] WSREP: Assign initial position for certification: -1, protocol version: -1
150612 23:25:55 [Note] WSREP: wsrep_sst_grab()
150612 23:25:55 [Note] WSREP: Start replication
150612 23:25:55 [Note] WSREP: Setting initial position to 00000000-0000-0000-0000-000000000000:-1
150612 23:25:55 [Note] WSREP: protonet asio version 0
150612 23:25:55 [Note] WSREP: backend: asio
150612 23:25:55 [Note] WSREP: GMCast version 0
150612 23:25:55 [Note] WSREP: (aa293f06, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
150612 23:25:55 [Note] WSREP: (aa293f06, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
150612 23:25:55 [Note] WSREP: EVS version 0
150612 23:25:55 [Note] WSREP: PC version 0
150612 23:25:55 [Note] WSREP: gcomm: connecting to group 'Percona-XtraDB-Cluster', peer '107.xx.xxx.xx:,54.xxx.xx.xxx:,172.21.12.11:'
150612 23:25:55 [Warning] WSREP: (aa293f06, 'tcp://0.0.0.0:4567') address 'tcp://172.21.12.11:4567' points to own listening address, blacklisting
150612 23:25:55 [Note] WSREP: (aa293f06, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://10.147.174.188:4567
150612 23:25:55 [Note] WSREP: declaring 919c045d at tcp://10.147.174.188:4567 stable
150612 23:25:55 [Note] WSREP: declaring af980804 at tcp://10.154.176.220:4567 stable
150612 23:25:56 [Note] WSREP: Node 919c045d state prim
150612 23:25:57 [Note] WSREP: gcomm: connected
150612 23:25:57 [Note] WSREP: Changing maximum packet size to 64500, resulting msg size: 32636
150612 23:25:57 [Note] WSREP: Shifting CLOSED -> OPEN (TO: 0)
150612 23:25:57 [Note] WSREP: Opened channel 'Percona-XtraDB-Cluster'
150612 23:25:57 [Note] WSREP: Waiting for SST to complete.
150612 23:25:57 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 1, memb_num = 3
150612 23:25:57 [Note] WSREP: STATE EXCHANGE: Waiting for state UUID.
150612 23:25:57 [Note] WSREP: STATE EXCHANGE: sent state msg: ab12c5d2-118c-11e5-a9f3-cf551b1e5734
150612 23:25:57 [Note] WSREP: STATE EXCHANGE: got state msg: ab12c5d2-118c-11e5-a9f3-cf551b1e5734 from 0 (db2)
150612 23:25:57 [Note] WSREP: STATE EXCHANGE: got state msg: ab12c5d2-118c-11e5-a9f3-cf551b1e5734 from 2 (db1)
150612 23:25:57 [Note] WSREP: STATE EXCHANGE: got state msg: ab12c5d2-118c-11e5-a9f3-cf551b1e5734 from 1 (db3)
150612 23:25:57 [Note] WSREP: Quorum results:
version = 2,
component = PRIMARY,
conf_id = 2,
members = 2/3 (joined/total),
act_id = 35909744,
last_appl. = -1,
protocols = 0/4/2 (gcs/repl/appl),
group UUID = d3538475-b069-11e3-92aa-c323b5f792a4
150612 23:25:57 [Note] WSREP: Flow-control interval: [28, 28]
150612 23:25:57 [Note] WSREP: Shifting OPEN -> PRIMARY (TO: 35909744)
150612 23:25:57 [Note] WSREP: State transfer required:
Group state: d3538475-b069-11e3-92aa-c323b5f792a4:35909744
Local state: 00000000-0000-0000-0000-000000000000:-1
150612 23:25:57 [Note] WSREP: New cluster view: global state: d3538475-b069-11e3-92aa-c323b5f792a4:35909744, view# 3: Primary, number of nodes: 3, my index: 1, protocol version 2
150612 23:25:57 [Note] WSREP: closing client connections for protocol change 3 -> 2
150612 23:25:58 [Note] WSREP: (aa293f06, 'tcp://0.0.0.0:4567') turning message relay requesting off
150612 23:25:59 [Warning] WSREP: Gap in state sequence. Need state transfer.
150612 23:25:59 [Note] WSREP: Running: 'wsrep_sst_xtrabackup --role 'joiner' --address '172.21.12.11' --auth 'sstuser:zASDF@7890@yhn' --datadir '/var/lib/mysql/' --defaults-file '/etc/mysql/my.cnf' --parent '25936''
WSREP_SST: [INFO] Streaming with tar (20150612 23:25:59.618)
WSREP_SST: [INFO] Using socat as streamer (20150612 23:25:59.624)
WSREP_SST: [INFO] Stale sst_in_progress file: /var/lib/mysql//sst_in_progress (20150612 23:25:59.636)
WSREP_SST: [INFO] Evaluating socat -u TCP-LISTEN:4444,reuseaddr stdio | tar xfi - --recursive-unlink -h; RC=( ${PIPESTATUS[@]} ) (20150612 23:25:59.664)
150612 23:25:59 [Note] WSREP: Prepared SST request: xtrabackup|172.21.12.11:4444/xtrabackup_sst
150612 23:25:59 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
150612 23:25:59 [Note] WSREP: Assign initial position for certification: 35909744, protocol version: 2
150612 23:25:59 [Warning] WSREP: Failed to prepare for incremental state transfer: Local state UUID (00000000-0000-0000-0000-000000000000) does not match group state UUID (d3538475-b069-11e3-92aa-c323b5f792a4): 1 (Operation not permitted)
at galera/src/replicator_str.cpp:prepare_for_IST():447. IST will be unavailable.
150612 23:25:59 [Note] WSREP: Node 1 (db3) requested state transfer from '*any*'. Selected 0 (db2)(SYNCED) as donor.
150612 23:25:59 [Note] WSREP: Shifting PRIMARY -> JOINER (TO: 35909744)
150612 23:25:59 [Note] WSREP: Requesting state transfer: success, donor: 0





150612 23:27:04 [Warning] WSREP: 0 (db2): State transfer to 1 (db3) failed: -1 (Operation not permitted)
150612 23:27:04 [ERROR] WSREP: gcs/src/gcs_group.cpp:long int gcs_group_handle_join_msg(gcs_group_t*, const gcs_recv_msg_t*)():717: Will never receive state. Need to abort.
150612 23:27:04 [Note] WSREP: gcomm: terminating thread
150612 23:27:04 [Note] WSREP: gcomm: joining thread
150612 23:27:04 [Note] WSREP: gcomm: closing backend
150612 23:27:04 [Note] WSREP: gcomm: closed
150612 23:27:04 [Note] WSREP: /usr/sbin/mysqld: Terminated.
Aborted (core dumped)
150612 23:27:04 mysqld_safe mysqld from pid file /var/lib/mysql/novomedici.pid ended


FYI, the Galera version used on the AWS servers is:

| wsrep_provider_version | 2.8(r165) |

and on the in-house server, as you can see above, Galera 2.12(r318911d).

Could this be the problem?

Thank you everyone, and sorry if I posted in the wrong place. I hope not.

Ed

pt-query-digest and prepared statements

Latest Forum Posts - June 12, 2015 - 9:11pm
I am using pt-query-digest from Percona Toolkit and I found that the tool does not examine the prepare or execute statements from the MySQL query log. Could you please tell me whether it is possible to analyze prepared statements from the log at all?
