Feed aggregator

Making HAProxy 1.5 replication lag aware in MySQL

HAProxy is frequently used as a software load balancer in the MySQL world. Peter Boros, in a past post, explained how to set it up with Percona XtraDB Cluster (PXC) so that it only sends queries to available nodes. The same approach can be used in a regular master-slave setup to spread the read load across multiple slaves. However, with MySQL replication another factor comes into play: replication lag. In this case the approach mentioned for Percona XtraDB Cluster does not work that well, as the check we presented only returns ‘up’ or ‘down’. We would like to be able to tune the weight of a replica inside HAProxy depending on its replication lag. This is what we will do in this post using HAProxy 1.5.

Agent checks in HAProxy

HAProxy 1.5 allows us to run an agent check, which is a check that can be added to a regular health check. The benefit of an agent check is that the return value can be not only ‘up’ or ‘down’, but also a weight.

What is an agent? It is simply a program that can be accessed from a TCP connection on a given port. So if we want to run an agent on a MySQL server that will:

  • Mark the server as down in HAProxy if replication is not working
  • Set the weight to 100% if the replication lag is < 10s
  • Set the weight to 50% if the replication lag is >= 10s and < 60s
  • Set the weight to 5% in all other situations

We can use a script like this:

$ less agent.php
<?php
$port = $argv[1];
$mysql_port = $argv[2];
$mysql = "/usr/bin/mysql";
$user = "haproxy";
$password = "haproxy_pwd";
$query = "SHOW SLAVE STATUS";

function set_weight($lag) {
    // replication broken: Seconds_Behind_Master is NULL, mark the server as down
    if ($lag == 'NULL') {
        return "down";
    }
    if ($lag < 10) {
        return "up 100%";
    } else if ($lag >= 10 && $lag < 60) {
        return "up 50%";
    } else
        return "up 5%";
}

set_time_limit(0);
$socket = stream_socket_server("tcp://127.0.0.1:$port", $errno, $errstr);
if (!$socket) {
    echo "$errstr ($errno)\n";
} else {
    while ($conn = stream_socket_accept($socket, 9999999999999)) {
        $cmd = "$mysql -h127.0.0.1 -u$user -p$password -P$mysql_port -Ee \"$query\" | grep Seconds_Behind_Master | cut -d ':' -f2 | tr -d ' '";
        exec("$cmd", $lag);
        $weight = set_weight($lag[0]);
        unset($lag);
        fputs($conn, $weight);
        fclose($conn);
    }
    fclose($socket);
}
?>

If you want the script to be accessible from port 6789 and connect to a MySQL instance running on port 3306, run:

$ php agent.php 6789 3306

You will also need a dedicated MySQL user:

mysql> GRANT REPLICATION CLIENT ON *.* TO 'haproxy'@'127.0.0.1' IDENTIFIED BY 'haproxy_pwd';

When the agent is started, you can check that it is working properly:

# telnet 127.0.0.1 6789
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
up 100%
Connection closed by foreign host.

Assuming HAProxy runs locally on the app server, that two replicas are available (192.168.10.2 and 192.168.10.3) and that the application sends all reads to port 3307, you will define a frontend and a backend in your HAProxy configuration like this:

frontend read_only-front
    bind *:3307
    mode tcp
    option tcplog
    log global
    default_backend read_only-back

backend read_only-back
    mode tcp
    balance leastconn
    server slave1 192.168.10.2 weight 100 check agent-check agent-port 6789 inter 1000 rise 1 fall 1 on-marked-down shutdown-sessions
    server slave2 192.168.10.3 weight 100 check agent-check agent-port 6789 inter 1000 rise 1 fall 1 on-marked-down shutdown-sessions

Demo

Now that everything is set up, let’s see how HAProxy can dynamically change the weight of the servers depending on the replication lag.

No lag

# Slave1
$ mysql -Ee "show slave status" | grep Seconds_Behind_Master
Seconds_Behind_Master: 0

# Slave2
$ mysql -Ee "show slave status" | grep Seconds_Behind_Master
Seconds_Behind_Master: 0

# HAProxy
$ echo "show stat" | socat stdio /run/haproxy/admin.sock | cut -d ',' -f1,2,18,19
# pxname,svname,status,weight
read_only-front,FRONTEND,OPEN,
read_only-back,slave1,UP,100
read_only-back,slave2,UP,100
read_only-back,BACKEND,UP,200

Slave1 lagging

# Slave1
$ mysql -Ee "show slave status" | grep Seconds_Behind_Master
Seconds_Behind_Master: 25

# Slave2
$ mysql -Ee "show slave status" | grep Seconds_Behind_Master
Seconds_Behind_Master: 0

# HAProxy
$ echo "show stat" | socat stdio /run/haproxy/admin.sock | cut -d ',' -f1,2,18,19
# pxname,svname,status,weight
read_only-front,FRONTEND,OPEN,
read_only-back,slave1,UP,50
read_only-back,slave2,UP,100
read_only-back,BACKEND,UP,150

Slave2 down

# Slave1
$ mysql -Ee "show slave status" | grep Seconds_Behind_Master
Seconds_Behind_Master: 0

# Slave2
$ mysql -Ee "show slave status" | grep Seconds_Behind_Master
Seconds_Behind_Master: NULL

# HAProxy
$ echo "show stat" | socat stdio /run/haproxy/admin.sock | cut -d ',' -f1,2,18,19
# pxname,svname,status,weight
read_only-front,FRONTEND,OPEN,
read_only-back,slave1,UP,100
read_only-back,slave2,DOWN (agent),100
read_only-back,BACKEND,UP,100

Conclusion

Agent checks are a nice addition in HAProxy 1.5. The setup presented above is a bit simplistic though: for instance, if HAProxy fails to connect to the agent, it will not mark the corresponding server as down. It is therefore recommended to keep a regular health check along with the agent check.

Astute readers will also notice that in this configuration, if replication is broken on all nodes, HAProxy will stop sending reads. This may not be the best solution. Possible options are to stop the agent and mark the servers as UP using the stats socket, or to add the master as a backup server.
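To illustrate both suggestions, here is a hedged sketch of the backend combining a regular MySQL health check with the agent check, and keeping the master as a backup server (the master IP 192.168.10.1, the explicit :3306 ports and the haproxy_check user are assumptions for this example, not part of the original setup):

backend read_only-back
    mode tcp
    balance leastconn
    # regular health check; needs a passwordless MySQL user, e.g.
    # CREATE USER 'haproxy_check'@'<haproxy_ip>';
    option mysql-check user haproxy_check
    server slave1 192.168.10.2:3306 weight 100 check agent-check agent-port 6789 inter 1000 rise 1 fall 1 on-marked-down shutdown-sessions
    server slave2 192.168.10.3:3306 weight 100 check agent-check agent-port 6789 inter 1000 rise 1 fall 1 on-marked-down shutdown-sessions
    # only receives reads if both replicas are marked down
    server master1 192.168.10.1:3306 check backup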

And as a final note, you can edit the code of the agent so that replication lag is measured with Percona Toolkit’s pt-heartbeat instead of Seconds_Behind_Master.
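For reference, a minimal sketch of what the agent's lag query could look like in that case, assuming pt-heartbeat --update writes to a percona.heartbeat table (filter on the master's server_id if several servers write heartbeats):

SELECT TIMESTAMPDIFF(SECOND, MAX(ts), NOW()) AS lag_seconds
FROM percona.heartbeat;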

The post Making HAProxy 1.5 replication lag aware in MySQL appeared first on MySQL Performance Blog.

replication error

Latest Forum Posts - December 17, 2014 - 11:14pm
Hi all.

My replication stopped with the error Last_SQL_Errno: 1032 or Last_SQL_Errno: 1062.
Example error:
Last_SQL_Error: Could not execute Delete_rows event on table cportal.b_search_content_stem; Can't find record in 'b_search_content_stem', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mk-db1-master-bin.000488, end_log_pos 74862564

I found this issue http://www.percona.com/blog/2013/09/...ith-mysql-5-6/ but it is not working for me.
My version is Percona-Server-server-56-5.6.21-rel69.0.el6.x86_64.

My slave cnf: http://pastebin.com/wTdcbTwU
My master cnf: http://pastebin.com/q2uJAynU

Thanks for the help.

Why does pt-align discard the first line automatically?

Latest Forum Posts - December 17, 2014 - 11:10pm
When I type the following command:
: gf@li741-164 ~> mysql -uroot -h 127.0.0.1 -Be "select * from mysql.user limit 2"
the result contains one column-name title line and two host lines:
: Host User Password Select_priv Insert_priv Update_priv Delete_priv Create_priv Drop_priv Reload_priv Shutdown_priv Process_priv File_priv Grant_priv References_priv Index_priv Alter_priv Show_db_priv Super_priv Create_tmp_table_priv Lock_tables_priv Execute_priv Repl_slave_priv Repl_client_priv Create_view_priv Show_view_priv Create_routine_priv Alter_routine_priv Create_user_priv Event_priv Trigger_priv Create_tablespace_priv ssl_type ssl_cipher x509_issuer x509_subject max_questions max_updates max_connections max_user_connections plugin authentication_string password_expired
localhost root Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y 0 0 0 0 N
li741-164.members.linode.com root Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y 0 0 0 0 N

But after using pt-align:
: gf@li741-164 ~> mysql -uroot -h 127.0.0.1 -Be "select * from mysql.user limit 2" | pt-align
the first column-name title line is ignored; only the two host lines are left:
: localhost root Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y 0 0 0 0 N
li741-164.members.linode.com root Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y 0 0 0 0 N

Why does this happen?

pt-table-checksum details

Latest Forum Posts - December 17, 2014 - 12:51pm
Hi Team,

We have run pt-table-checksum on a specific server and understand that a database named percona gets created, under which a checksums table is added once we run pt-table-checksum.

There are different fields in the checksums table. Could you please clarify the actual meaning of the fields below:
this_crc,
this_cnt,
master_crc,
master_cnt,
chunk,
chunk_index,
lower_boundary,
upper_boundary.
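(For context: this_crc and this_cnt are the checksum and row count computed for a chunk on the local server, master_crc and master_cnt are the values computed on the master, and chunk, chunk_index, lower_boundary and upper_boundary describe how the table was split into chunks. A query along these lines – adapted from the pt-table-checksum documentation and assuming the default percona.checksums table – uses those columns to list tables that differ on a replica:)

SELECT db, tbl, SUM(this_cnt) AS total_rows, COUNT(*) AS chunks
FROM percona.checksums
WHERE (master_cnt <> this_cnt
    OR master_crc <> this_crc
    OR ISNULL(master_crc) <> ISNULL(this_crc))
GROUP BY db, tbl;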

Best Regards,
Krishna

InnoDB’s multi-versioning handling can be Achilles’ heel

Latest MySQL Performance Blog posts - December 17, 2014 - 8:05am

I believe the InnoDB storage engine architecture is great for a lot of online workloads; however, there are no silver bullets in technology and all design choices have their trade-offs. In this blog post I’m going to talk about one important InnoDB limitation that you should consider.

InnoDB is a multiversion concurrency control (MVCC) storage engine, which means many versions of a single row can exist at the same time – in fact, there can be a huge number of such row versions. Depending on the isolation mode you have chosen, InnoDB might have to keep all row versions going back to the earliest active read view, but at the very least it will have to keep all versions going back to the start of any SELECT query that is currently running.

In most cases this is not a big deal – if you have many short transactions happening you will have only a few row versions to deal with. If you just use the system for reporting queries but do not modify data aggressively at the same time you also will not have many row versions. However, if you mix heavy updates with slow reporting queries going at the same time you can get into a lot of trouble.

Consider for example an application with a hot row (something like an actively updated counter) which gets 1000 updates per second, together with some heavy batch job that takes 1000 seconds to run. In such a case we will have 1M row versions of that row to deal with.
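As a rough sketch of the mechanics (hypothetical table names, not the benchmark schema used below):

-- session 1: a long "reporting" read keeps an old read view open
START TRANSACTION WITH CONSISTENT SNAPSHOT;
SELECT SUM(amount) FROM orders;   -- runs for ~1000 seconds

-- session 2: a hot counter row updated ~1000 times per second
UPDATE counters SET hits = hits + 1 WHERE id = 1;

-- while session 1 is open, purge cannot remove the old versions of the
-- counter row, so roughly 1000/s * 1000 s = 1M versions accumulate in the
-- undo space and may have to be walked by readers that need older data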

Let’s now talk about how those old row versions are stored in InnoDB – they are stored in the undo space as essentially a linked list, where each row version points to the previous row version, together with transaction visibility information that helps decide which version will be visible to a given query. Such a design favors short new queries that will typically need to see one of the newer rows, so they do not have to go too far into this linked list. This might not be the case with reporting queries that might need to read rather old row versions corresponding to the time when the query was started, or logical backups that use consistent reads (think mysqldump or mydumper), which often need to access such very old row versions.

So going through the linked list of versions is expensive, but how expensive can it get? A lot depends on whether the UNDO space fits in memory, in which case the list can be traversed efficiently – or it does not, in which case you might be looking at massive disk IO. Keep in mind the undo space is not clustered by PRIMARY key the way normal data in InnoDB tables is, so if you’re updating multiple rows at the same time (the typical case) the row-version chains will be spread across many pages, often with as little as one row version per page, requiring either massive IO or a large number of UNDO space pages to be present in the InnoDB buffer pool.

Where it can get even worse is an index scan. This is because indexes in InnoDB are structured to include all row versions corresponding to a key value, current and past. This means, for example, that the index entry for KEY=5 will contain pointers to all rows that either have the value 5 now or had the value 5 some time in the past and have not been purged yet. Where it can really bite is the following: InnoDB needs to know which of the values stored for the key are visible to the current transaction – and that might mean going through long version chains for each of the keys.

This is all theory, so let’s see how we can simulate such workloads and see how bad things really can get in practice.

I created a 1 billion row “sysbench” table which takes some 270GB of space, and I will use a small buffer pool – 6GB. I will run sysbench with 64 threads and a pareto distribution (hot rows) while concurrently running a full table scan query: select avg(k) from sbtest1. Here is the exact sysbench run, done after the prepare step:

sysbench --num-threads=64 --report-interval=10 --max-time=0 --max-requests=0 --rand-type=pareto --oltp-table-size=1000000000 --mysql-user=root --mysql-password=password --test=/usr/share/doc/sysbench/tests/db/oltp.lua run

Here is the EXPLAIN for the “reporting” query, which you would expect to be a rather efficient index scan. With just 4 bytes per value, 1 billion values would be only about 4GB (more in practice because of InnoDB overhead) – not a big deal for modern systems:

mysql> explain select avg(k) from sbtest1\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: sbtest1
         type: index
possible_keys: NULL
          key: k_1
      key_len: 4
          ref: NULL
         rows: 953860873
        Extra: Using index
1 row in set (0.00 sec)

2 days have passed and the “reporting” query is still running… furthermore the foreground workload started to look absolutely bizarre:

[207850s] threads: 64, tps: 0.20, reads: 7.40, writes: 0.80, response time: 222481.28ms (95%), errors: 0.70, reconnects: 0.00
[207860s] threads: 64, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects: 0.00
[207870s] threads: 64, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects: 0.00
[207880s] threads: 64, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects: 0.00
[207890s] threads: 64, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects: 0.00
[207900s] threads: 64, tps: 2.70, reads: 47.60, writes: 11.60, response time: 268815.49ms (95%), errors: 0.00, reconnects: 0.00
[207910s] threads: 64, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects: 0.00
[207920s] threads: 64, tps: 2.30, reads: 31.60, writes: 9.50, response time: 294954.28ms (95%), errors: 0.00, reconnects: 0.00
[207930s] threads: 64, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects: 0.00
[207940s] threads: 64, tps: 2.90, reads: 42.00, writes: 12.20, response time: 309332.04ms (95%), errors: 0.00, reconnects: 0.00
[207950s] threads: 64, tps: 0.20, reads: 4.60, writes: 1.00, response time: 318922.41ms (95%), errors: 0.40, reconnects: 0.00
[207960s] threads: 64, tps: 0.20, reads: 1.90, writes: 0.50, response time: 335170.09ms (95%), errors: 0.00, reconnects: 0.00
[207970s] threads: 64, tps: 0.60, reads: 13.20, writes: 2.60, response time: 292842.88ms (95%), errors: 0.60, reconnects: 0.00
[207980s] threads: 64, tps: 2.60, reads: 37.60, writes: 10.20, response time: 351613.43ms (95%), errors: 0.00, reconnects: 0.00
[207990s] threads: 64, tps: 5.60, reads: 78.70, writes: 22.10, response time: 186407.21ms (95%), errors: 0.00, reconnects: 0.00
[208000s] threads: 64, tps: 8.10, reads: 120.20, writes: 32.60, response time: 99179.05ms (95%), errors: 0.00, reconnects: 0.00
[208010s] threads: 64, tps: 19.50, reads: 280.50, writes: 78.90, response time: 27559.69ms (95%), errors: 0.00, reconnects: 0.00
[208020s] threads: 64, tps: 50.70, reads: 691.28, writes: 200.70, response time: 5214.43ms (95%), errors: 0.00, reconnects: 0.00
[208030s] threads: 64, tps: 77.40, reads: 1099.72, writes: 311.31, response time: 2600.66ms (95%), errors: 0.00, reconnects: 0.00
[208040s] threads: 64, tps: 328.20, reads: 4595.40, writes: 1313.40, response time: 911.36ms (95%), errors: 0.00, reconnects: 0.00
[208050s] threads: 64, tps: 538.20, reads: 7531.90, writes: 2152.10, response time: 484.46ms (95%), errors: 0.00, reconnects: 0.00
[208060s] threads: 64, tps: 350.70, reads: 4913.45, writes: 1404.09, response time: 619.42ms (95%), errors: 0.00, reconnects: 0.00
[208070s] threads: 64, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects: 0.00
[208080s] threads: 64, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects: 0.00
[208090s] threads: 64, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects: 0.00
[208100s] threads: 64, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects: 0.00
[208110s] threads: 64, tps: 1.60, reads: 24.20, writes: 6.80, response time: 42385.40ms (95%), errors: 0.10, reconnects: 0.00
[208120s] threads: 64, tps: 0.80, reads: 28.20, writes: 3.40, response time: 51381.54ms (95%), errors: 2.80, reconnects: 0.00
[208130s] threads: 64, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects: 0.00
[208140s] threads: 64, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects: 0.00
[208150s] threads: 64, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects: 0.00
[208160s] threads: 64, tps: 0.60, reads: 14.20, writes: 2.40, response time: 93248.04ms (95%), errors: 0.80, reconnects: 0.00

As you can see we have long stretches of time when no queries are completed at all… followed by some spikes of higher performance. This is how it looks on the graph:

Corresponding CPU usage:

This shows that we are not only observing something we would expect given the InnoDB design, but there also seems to be some severe starvation happening in this case, which we can confirm:

Last time write locked in file /mnt/workspace/percona-server-5.6-debian-binary/label_exp/ubuntu-trusty-64bit/percona-server-5.6-5.6.21-70.0/storage/innobase/row/row0ins.cc line 2357
--Thread 139790809552640 has waited at btr0cur.cc line 3852 for 194.00 seconds the semaphore:
S-lock on RW-latch at 0x7f24d9c01bc0 '&block->lock'
a writer (thread id 139790814770944) has reserved it in mode wait exclusive
number of readers 1, waiters flag 1, lock_word: ffffffffffffffff
Last time read locked in file row0sel.cc line 4125
Last time write locked in file /mnt/workspace/percona-server-5.6-debian-binary/label_exp/ubuntu-trusty-64bit/percona-server-5.6-5.6.21-70.0/storage/innobase/row/row0ins.cc line 2357
--Thread 139790804735744 has waited at row0sel.cc line 3506 for 194.00 seconds the semaphore:
S-lock on RW-latch at 0x7f24d9c01bc0 '&block->lock'
a writer (thread id 139790814770944) has reserved it in mode wait exclusive
number of readers 1, waiters flag 1, lock_word: ffffffffffffffff
Last time read locked in file row0sel.cc line 4125
Last time write locked in file /mnt/workspace/percona-server-5.6-debian-binary/label_exp/ubuntu-trusty-64bit/percona-server-5.6-5.6.21-70.0/storage/innobase/row/row0ins.cc line 2357
--Thread 139790810756864 has waited at row0sel.cc line 4125 for 194.00 seconds the semaphore:
S-lock on RW-latch at 0x7f24d9c01bc0 '&block->lock'
a writer (thread id 139790814770944) has reserved it in mode wait exclusive
number of readers 1, waiters flag 1, lock_word: ffffffffffffffff
Last time read locked in file row0sel.cc line 4125
Last time write locked in file /mnt/workspace/percona-server-5.6-debian-binary/label_exp/ubuntu-trusty-64bit/percona-server-5.6-5.6.21-70.0/storage/innobase/row/row0ins.cc line 2357
--Thread 139790811158272 has waited at btr0cur.cc line 3852 for 194.00 seconds the semaphore:
S-lock on RW-latch at 0x7f24d9c01bc0 '&block->lock'
a writer (thread id 139790814770944) has reserved it in mode wait exclusive

Waiting more than 3 minutes for a given buffer pool block to become available is a big issue – this lock should never be held for more than a few microseconds.
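(A quick way to spot such stalls, in the same mysql-plus-grep style used elsewhere in this post, is to pull the semaphore waits out of the InnoDB status output:)

# list threads currently stuck on InnoDB semaphores and for how long
$ mysql -e "SHOW ENGINE INNODB STATUS\G" | grep "seconds the semaphore"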

SHOW PROCESSLIST confirms that even the most basic selects by primary key can be stalled for a long time:

| 5499 | root | localhost | sbtest | Query |     14 | statistics   | SELECT c FROM sbtest1 WHERE id=1 | 0 | 0 |
| 5500 | root | localhost | sbtest | Query |     14 | statistics   | SELECT c FROM sbtest1 WHERE id=1 | 0 | 0 |
| 5501 | root | localhost | sbtest | Query |    185 | statistics   | SELECT c FROM sbtest1 WHERE id=1 | 0 | 0 |
| 5502 | root | localhost | sbtest | Query |     14 | statistics   | SELECT c FROM sbtest1 WHERE id=1 | 0 | 0 |
| 5503 | root | localhost | sbtest | Query |     14 | statistics   | SELECT DISTINCT c FROM sbtest1 WHERE id BETWEEN 1 AND 1+99 ORDER BY c | 0 | 0 |
| 5504 | root | localhost | sbtest | Query |     14 | statistics   | SELECT c FROM sbtest1 WHERE id=1 | 0 | 0 |
| 5505 | root | localhost | sbtest | Query |     14 | updating     | UPDATE sbtest1 SET k=k+1 WHERE id=1 | 0 | 0 |
| 5506 | root | localhost | sbtest | Query |    236 | updating     | DELETE FROM sbtest1 WHERE id=1 | 0 | 0 |
| 5507 | root | localhost | sbtest | Query |     14 | statistics   | SELECT c FROM sbtest1 WHERE id=1 | 0 | 0 |
| 5508 | root | localhost | sbtest | Query |     14 | statistics   | SELECT c FROM sbtest1 WHERE id BETWEEN 1 AND 1+99 | 0 | 0 |
| 5509 | root | localhost | sbtest | Query |     14 | statistics   | SELECT c FROM sbtest1 WHERE id=1 | 0 | 0 |
| 5510 | root | localhost | sbtest | Query |     14 | updating     | UPDATE sbtest1 SET c='80873149502-45132831358-41942500598-17481512835-07042367094-39557981480-593123 | 0 | 0 |
| 5511 | root | localhost | sbtest | Query |    236 | updating     | UPDATE sbtest1 SET k=k+1 WHERE id=18 | 0 | 1 |
| 5512 | root | localhost | sbtest | Query |     14 | statistics   | SELECT c FROM sbtest1 WHERE id=7 | 0 | 0 |
| 6009 | root | localhost | sbtest | Query | 195527 | Sending data | select avg(k) from sbtest1 | 0 | 0 |

How do I know it is an UNDO-space-related issue in this case? Because the undo log ends up taking the majority of the buffer pool:

mysql> select page_type,count(*) from INNODB_BUFFER_PAGE group by page_type;
+-------------------+----------+
| page_type         | count(*) |
+-------------------+----------+
| EXTENT_DESCRIPTOR |        1 |
| FILE_SPACE_HEADER |        1 |
| IBUF_BITMAP       |      559 |
| IBUF_INDEX        |      855 |
| INDEX             |     2186 |
| INODE             |        1 |
| SYSTEM            |      128 |
| UNDO_LOG          |   382969 |
| UNKNOWN           |     6508 |
+-------------------+----------+
9 rows in set (1.04 sec)

And it does so in a very crazy way – when there is almost no work being done, the UNDO_LOG content of the buffer pool grows very rapidly, while when we’re getting some work done the INDEX-type pages take a lot more space. To me this suggests that as the index scan progresses it touches some hot rows and some not-so-hot ones, which contain fewer row versions and so do not put much pressure on the undo space.
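(A related signal worth watching – sketched here, not part of the original test – is the purge lag, i.e. how many old row versions are still waiting to be purged:)

-- "History list length" in the TRANSACTIONS section is the purge backlog
SHOW ENGINE INNODB STATUS\G

-- or, on 5.6+, via INFORMATION_SCHEMA if this counter is enabled
SELECT NAME, COUNT FROM information_schema.INNODB_METRICS
WHERE NAME = 'trx_rseg_history_len';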

Take away: Now you might argue that this workload and situation are rather artificial and rather narrow. They well might be. My main point here is: what if you’re looking at just part of your workload, such as your main short application queries, and not taking reporting or backups into account “because their performance is not important”? In this case you might be in for a big surprise. Those background activities might take much longer than you would expect, and in addition they might have a much more severe impact on your main application workload, as in the case above.

The post InnoDB’s multi-versioning handling can be Achilles’ heel appeared first on MySQL Performance Blog.

Replacing a MySQL server in a non-standard location

Latest Forum Posts - December 17, 2014 - 7:17am
Good afternoon,

Is it possible to install Percona in a non-standard location? In this instance /opt/bitnami/mysql/bin?

Thank you for your help.

Query Optimization and tunning

Latest Forum Posts - December 16, 2014 - 11:01am
Can anyone help me optimize this query?
SELECT CONCAT(CCD.CONTACT_FIRST_NAME, ' ', CCD.CONTACT_LAST_NAME) AS NAME,
       A.EMAIL_IDS,
       CCD.UNSUBSCRIBE,
       IFNULL(CSL.LOG_DATE, '') AS LOG_DATE,
       CSL.IP_ADDRESS,
       IF(CSL.BROWSER IS NULL, '', CSL.BROWSER) AS BROWSER,
       (CASE WHEN (UNSUBSCRIBE = 0 OR UNSUBSCRIBE IS NULL) THEN 'Opted In'
             ELSE (CASE WHEN (UNSUBSCRIBE = 1) THEN 'Opted Out'
                        ELSE (CASE WHEN ((UNSUBSCRIBE = 2 OR UNSUBSCRIBE = 3)
                                         AND CCD.CONTACT_ID NOT IN (SELECT CONTACT_ID FROM CM_SUBSCRIPTION_MAIL_DATA WHERE IS_MAIL_SENT = 'Y'))
                                   THEN 'Opt-In Request not Sent'
                                   ELSE 'Opt-In Pending' END) END) END) AS CURR_SUB,
       (CASE WHEN (SUBSCRIBE_FROM = 0) THEN 'Opted In'
             ELSE (CASE WHEN (SUBSCRIBE_FROM = 1) THEN 'Opted Out'
                        ELSE (CASE WHEN (SUBSCRIBE_FROM = 2 OR SUBSCRIBE_FROM = 3) THEN 'Opt-In Pending'
                                   ELSE '' END) END) END) AS SUB_FROM,
       SUBSCRIBE_FROM,
       (CASE WHEN (SUBSCRIBE_TO = 0) THEN 'Opted In'
             ELSE (CASE WHEN (SUBSCRIBE_TO = 1) THEN 'Opted Out'
                        ELSE (CASE WHEN (SUBSCRIBE_TO = 2 OR SUBSCRIBE_TO = 3) THEN 'Opt-In Pending'
                                   ELSE '' END) END) END) AS SUB_TO,
       SUBSCRIBE_TO
FROM CM_CONTACT_DETAILS CCD
LEFT JOIN ADDRESS A ON CCD.CONTACT_ID = A.FOREIGN_ID
LEFT JOIN CM_SUBSCRIPTION_LOGS CSL ON CSL.CONTACT_ID = CCD.CONTACT_ID
WHERE 1 = 1
GROUP BY CSL.LOG_DATE, CCD.CONTACT_ID
ORDER BY NAME DESC, LOG_DATE DESC
LIMIT 0, 20

OpenStack Live tutorials & sessions to bring OpenStack users up to speed

Latest MySQL Performance Blog posts - December 16, 2014 - 9:28am

I attended the OpenStack Paris summit last month (Percona had a booth there). It was my first opportunity to meet face-to-face with this thriving community of developers and users. I’m proud that Percona is part of this open source family and look forward to reconnecting with many of the developers and users I met in Paris – as well as meeting new faces – at OpenStack Live in Silicon Valley April 13-14.

OpenStack summits, generally held twice a year, are the place where (for the most part) developers meet and design “in the open,” as the OpenStack organization says. OpenStack Live 2015, held in parallel with the annual Percona Live MySQL Conference and Expo, will be a unique opportunity for users and enthusiasts to learn from leading OpenStack experts in the field about top cloud strategies, improving overall cloud performance, and operational best practices for managing and optimizing OpenStack and its MySQL database core.

OpenStack Live will also provide some serious classroom-style learning. Percona announced the OpenStack Live tutorial sessions a couple of days ago. Most sessions are three hours long and, because they really are “hands-on,” require that you bring your laptop – and a power cord (not to be confused with a “power chord,” though those are also welcome).

Let’s take a closer look at the OpenStack Live tutorial sessions.

“Barbican: Securing Your Secrets.” Join Rackspace gurus Douglas Mendizábal, Chelsea Winfree and Steve Heyman on a tour through the magical world of Barbican (yes, they are dedicated members of the Barbican project).

Don’t be intimidated if you don’t have any previous experience with Barbican (and if you’ve never heard of it, all the more reason to attend!). A basic understanding of security components (such as keys and certificates) and of ReST is helpful, but not required.

By the end of the class you will know:
1)   Importance of secret storage
2)   How to store & retrieve secrets with Barbican
3)   How to submit an order with Barbican
4)   How to create a container
5)   Use cases for Barbican / Examples
6)   The future of Barbican –Ordering SSL Certs

Deploying, Configuring and Operating OpenStack Trove.” As the title suggests, these three hours focus squarely on Trove. The tutorial – led by Tesora founder & CTO Amrith Kumar, along with Doug Shelley, the company’s vice president of product development – will begin with a quick overview of OpenStack and the various services.

If you attend this tutorial you’ll actually deploy your own OpenStack environment – and create and manage a Nova (compute) instance using a command line and a graphical user interface (Horizon). And the fun continues! You’ll then install and configure Trove, and create and manage a single MySQL instance. Finally, pupils will create and operate a simple replicated MySQL instance pair and ensure that data is being properly replicated from master to slave.

“Essential DevStack.” DevStack is an opinionated script to quickly create an OpenStack development environment. It can also be used to demonstrate starting/running OpenStack services and provide examples of using them from a command line. The power of DevStack lies in a few small tricks that, once understood, can hugely improve contribution effectiveness, quality and turnaround time. This three-hour tutorial will be led by Red Hat senior software engineer Swapnil Kulkarni.

“OpenStack Networking Introduction,” with PLUMgrid’s Valentina Alaria and Brendan Howes. Buckle your seat belts! Designed for IT professionals looking to expand their OpenStack “networking” (no, not the LinkedIn sort of networking) knowledge, OpenStack Networking Fundamentals will be a comprehensive and fast-paced course.

This half-day training provides an overview of OpenStack and its components, then dives deep into OpenStack Networking – its features, plugin model and role in building an OpenStack cloud. The training concludes with a hands-on lab to bring all the concepts together.

OpenStack Networking (Neutron) Introduction [1 hour]
– Goal of Neutron
– Architecture of Neutron
– Plugin Architecture
– Use cases for Neutron
– What’s new in Juno & what’s planned for Kilo

OpenStack Networking (Neutron) Advanced [1 hour]
– Interaction with other OpenStack components (Compute & Storage)
– Designing Neutron for HA
– Installing Neutron
– Troubleshooting Neutron

Hands-on Lab [1 hour]
– Creation of tenant networks
– Configuration of external connectivity
– Advanced Features Configuration

Percona’s director of conferences, Kortney Runyan, offered a sneak peek at the OpenStack sessions last week. Attendees of the Percona Live MySQL Conference and Expo 2015 (April 13-16, 2015) with full-pass access are also welcome to attend OpenStack Live sessions. The two conferences are running in parallel, which is very exciting since there will be crossover opportunities between them.

I hope to see you next April! And be sure to take advantage of Early Bird pricing for OpenStack Live (register here).
And if you are an organizer of an OpenStack (or MySQL) Meetup and need some financial help, Percona is happy to chip in as a sponsor. Just let me know and I’ll work with you to set that up! You can drop me a note in the comments and I’ll contact you offline.

The post OpenStack Live tutorials & sessions to bring OpenStack users up to speed appeared first on MySQL Performance Blog.

Which vision is to merge art and poetr

Latest Forum Posts - December 16, 2014 - 9:15am
http://www.invisionpower.ro/topic/49...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/
http://www.invisionpower.ro/topic/50...li-ne-hd1080p/

Backup fails with InnoDB: Operating system error number 0

Latest Forum Posts - December 16, 2014 - 3:19am
Hi,

I was just trying to take a hot backup with innobackupex but I get a read error while innobackupex is scanning the logfiles – strangely though, the failure is an Error 0, which is reported as a success:

: xtrabackup_55 version 1.6.3 for Percona Server 5.5.9 Linux (x86_64) (revision id: undefined)
xtrabackup: uses posix_fadvise().
xtrabackup: cd to /var/lib/mysql
xtrabackup: Target instance is assumed as followings.
xtrabackup: innodb_data_home_dir = ./
xtrabackup: innodb_data_file_path = ibdata1:10M:autoextend
xtrabackup: innodb_log_group_home_dir = ./
xtrabackup: innodb_log_files_in_group = 2
xtrabackup: innodb_log_file_size = 5242880
141216 10:59:08 InnoDB: Using Linux native AIO
141216 10:59:08 InnoDB: Warning: allocated tablespace 53, old maximum was 9
>> log scanned up to (1221137553871)
[01] Copying ./ibdata1 to /home/nick/2014-12-16_10-59-04/ibdata1
>> log scanned up to (1221137615178)
>> log scanned up to (1221137624351)
>> log scanned up to (1221137650292)
InnoDB: Error: tried to read 1048576 bytes at offset 0 534773760.
InnoDB: Was only able to read 40960.
141216 11:00:15 InnoDB: Operating system error number 0 in a file operation.
InnoDB: Error number 0 means 'Success'.
InnoDB: Some operating system error numbers are described at
InnoDB: http://dev.mysql.com/doc/refman/5.5/en/operating-system-error-codes.html
InnoDB: File operation call: 'read'.
InnoDB: Cannot continue operation.
innobackupex: Error: ibbackup child process has died at /usr/bin/innobackupex line 336.

Can anyone shed any light on what causes this? MySQL is using this InnoDB normally without any apparent errors, and starts up and shuts down normally when required, so I was assuming that the integrity of the data files + logs was ok, but perhaps not? Is this a known issue with xtrabackup? Is there a workaround or some way to get more information on this issue?

Thanks,

Nick

Recovering / Restoring a single database with Xtrabackup

Latest Forum Posts - December 15, 2014 - 11:23pm
Hi ,

Is it possible to recover / restore a single database using xtrabackup?

Thanks & Regards
Raghupradeep

Can not build cluster

Latest Forum Posts - December 15, 2014 - 10:49am
Hi
I have two nodes on Ubuntu 12.04 Server 64 bit

When I bootstrap the first node, I can see this in the error log:

[Warning] WSREP: access file(gvwstate.dat) failed(No such file or directory)

but it does start.

When I start the second node

I can see these errors in the log:
[Warning] WSREP: access file(gvwstate.dat) failed(No such file or directory)
[Warning] WSREP: (5b571936, 'tcp://0.0.0.0:4567') address 'tcp://10.1.1.13:4567' points to own listening address, blacklisting
[Warning] WSREP: Gap in state sequence. Need state transfer.
[Warning] WSREP: Failed to prepare for incremental state transfer: Local state UUID (00000000-0000-0000-0000-000000000000) does not match group state UUID (2f13e6f3-845d-11e4-a92a-1353578a748d): 1 (Operation not permitted)
[Warning] WSREP: 1.0 (xtradb1): State transfer to 0.0 (xtradb2) failed: -22 (Invalid argument)
[ERROR] WSREP: gcs/src/gcs_group.cpp:int gcs_group_handle_join_msg(gcs_group_t*, const gcs_recv_msg_t*)():722: Will never receive state. Need to abort.
[ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup-v2 --role 'joiner' --address '10.1.1.13' --auth 'sstuser:s3cretPass' --datadir '/var/lib/mysql/' --defaults-file '/etc/mysql/my.cnf' --parent '20901' '' : 1 (Operation not permitted)
[ERROR] WSREP: Failed to read uuid:seqno from joiner script.
[ERROR] WSREP: SST failed: 1 (Operation not permitted)
[ERROR] Aborting




Percona XtraDB Cluster version is 5.6.21-25.8-938.precise on both nodes.

Config on first node
skip-name-resolve

explicit_defaults_for_timestamp=TRUE
wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_provider_options = 'socket.checksum = 1;'
#wsrep_cluster_address=gcomm://10.1.1.12,10.1.1.13
wsrep_cluster_address=gcomm://
wsrep_slave_threads = 96
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
wsrep_node_address=10.1.1.12
wsrep_node_name=xtradb1
wsrep_sst_method=xtrabackup-v2
wsrep_cluster_name=stalker_db_cluster
wsrep_sst_auth="sstuser:s3cretPass"




Config on second node

skip-name-resolve

explicit_defaults_for_timestamp=TRUE
wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_provider_options = 'socket.checksum = 1;'
wsrep_cluster_address=gcomm://10.1.1.12,10.1.1.13
#wsrep_cluster_address=gcomm://
wsrep_slave_threads = 96
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
wsrep_node_address=10.1.1.13
wsrep_node_name=xtradb2
wsrep_sst_method=xtrabackup-v2
wsrep_cluster_name=stalker_db_cluster
wsrep_sst_auth="sstuser:s3cretPass"




What is wrong?

I already have 3 clusters working well on a previous version of Percona Cluster.
What is wrong with the new one?

Can someone help ?

Strange behaviour on partitioned table (Percona Server 5.5.36)

Latest Forum Posts - December 15, 2014 - 5:34am
This one's really got me. Maybe it's a bug. Ideas anyone? See below...

We have a table with the following partition scheme (example names and values):

: PARTITION BY LIST COLUMNS(`owner`)
SUBPARTITION BY HASH(CASE WHEN (`foo`+`bar`) < 1 THEN 0 ELSE 1 END)
SUBPARTITIONS 2 (
    PARTITION p0 VALUES IN('A','B','C','D','E','F','G'),
    PARTITION p1 VALUES IN('H'),
    PARTITION p2 VALUES IN('I','J'),
    PARTITION p3 VALUES IN('K')
);

N.B. the A, B, C, etc. values above are only examples. The real table has strings that represent the owner of the row. There are over 3m rows.

We should end up with a partition that has all the foo+bar = 0 values owned by A through G as the first subpartition of partition p0.

We get the following output from SELECT owner,COUNT(*) FROM maintable GROUP BY owner;:

: +----------+----------+
| owner    | COUNT(*) |
+----------+----------+
| H        |  2077675 |
| B        |   138243 |
| I        |    13333 |
| C        |    26507 |
| A        |   806889 |
| K        |      134 |
| E        |    38165 |
| D        |    14118 |
+----------+----------+

There are no values for F, G or J.

Now when I run SELECT COUNT(*) FROM maintable WHERE owner = 'A';, I get zero.

If I change the list to IN('A','B','C','D','E','F','G') then I get the expected result. If I omit G, I get the correct result. If I omit F and G from the list, it goes back to zero. Whaaaaat?

I've tested this on 5.6 on another server with a subset of the data, and it is fine, with the additional feature of selecting directly from p0, etc.

Anyone ever seen anything like this? Am I missing something? Let me know if you want to know any variables.

--
Sam

MySQL Tutorials: A time to learn at Percona Live 2015

Latest MySQL Performance Blog posts - December 15, 2014 - 12:00am

The many hours of intensive tutorials, led by some of the top minds in MySQL, have always been a major draw each year to the Percona Live MySQL Conference and Expo. And 2015’s event will be no exception.

Percona Live 2015 runs April 13-16 in Santa Clara, Calif. and the first day is dedicated to the classroom – so bring your laptops for the combined 45 hours of learning. MySQL tutorials are included with the full-conference pass but a “tutorial-only pass” is also available. Super-saver registration discounts have been extended until Dec. 19. Here’s a look at this year’s tutorials lineup. (There will be a couple more announced in January).

And that’s just on Monday! There will be much more over the four days of the Percona Live MySQL Conference and Expo 2015. Percona conference director Kortney Runyan shared a sneak peek at the full schedule a few days ago. And remember, super-saver registration discounts have been extended until Dec. 19, so register now – and don’t forget your laptop (and power cord)!

The post MySQL Tutorials: A time to learn at Percona Live 2015 appeared first on MySQL Performance Blog.

Can't join second node (rsync)

Latest Forum Posts - December 14, 2014 - 6:33pm
After bootstrapping nodeA, nodeB fails to start because it can't SST
Using rsync
Tried 5.6.21 and reverted to 5.6.20 with the same result.

2014-12-14 21:41:40 9277 [Note] WSREP: Shifting OPEN -> PRIMARY (TO: 1)
2014-12-14 21:41:40 9277 [Note] WSREP: State transfer required:
Group state: cc13f472-83fa-11e4-8ea3-138a7ef719a7:1
Local state: 00000000-0000-0000-0000-000000000000:-1
2014-12-14 21:41:40 9277 [Note] WSREP: New cluster view: global state: cc13f472-83fa-11e4-8ea3-138a7ef719a7:1, view# 4: Primary, number of nodes: 2, my index: 1, protocol version 3
2014-12-14 21:41:40 9277 [Warning] WSREP: Gap in state sequence. Need state transfer.
2014-12-14 21:41:40 9277 [Note] WSREP: Running: 'wsrep_sst_rsync --role 'joiner' --address '10.10.1.62' --auth '' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --parent '9277' '' '
2014-12-14 21:41:40 9277 [Note] WSREP: Prepared SST request: rsync|10.10.1.62:4444/rsync_sst
2014-12-14 21:41:40 9277 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2014-12-14 21:41:40 9277 [Note] WSREP: REPL Protocols: 6 (3, 2)
2014-12-14 21:41:40 9277 [Note] WSREP: Service thread queue flushed.
2014-12-14 21:41:40 9277 [Note] WSREP: Assign initial position for certification: 1, protocol version: 3
2014-12-14 21:41:40 9277 [Note] WSREP: Service thread queue flushed.
2014-12-14 21:41:40 9277 [Warning] WSREP: Failed to prepare for incremental state transfer: Local state UUID (00000000-0000-0000-0000-000000000000) does not match group state UUID (cc13f472-83fa-11e4-8ea3-138a7ef719a7): 1 (Operation not permitted)
at galera/src/replicator_str.cpp:prepare_for_IST():456. IST will be unavailable.
2014-12-14 21:41:40 9277 [Note] WSREP: Member 1.0 (nodeB) requested state transfer from '*any*'. Selected 0.0 (nodeA)(SYNCED) as donor.
2014-12-14 21:41:40 9277 [Note] WSREP: Shifting PRIMARY -> JOINER (TO: 1)
2014-12-14 21:41:40 9277 [Note] WSREP: Requesting state transfer: success, donor: 0
2014-12-14 21:41:40 9277 [Warning] WSREP: 0.0 (nodeA): State transfer to 1.0 (nodeB) failed: -255 (Unknown error 255)
2014-12-14 21:41:40 9277 [ERROR] WSREP: gcs/src/gcs_group.cpp:int gcs_group_handle_join_msg(gcs_group_t*, const gcs_recv_msg_t*)():722: Will never receive state. Need to abort.
2014-12-14 21:41:40 9277 [Note] WSREP: gcomm: terminating thread
2014-12-14 21:41:40 9277 [Note] WSREP: gcomm: joining thread
2014-12-14 21:41:40 9277 [Note] WSREP: gcomm: closing backend
2014-12-14 21:41:41 9277 [Note] WSREP: view(view_id(NON_PRIM,dcb11fa5,4) memb {
e50b4be3,0
} joined {
} left {
} partitioned {
dcb11fa5,0
})
2014-12-14 21:41:41 9277 [Note] WSREP: view((empty))
2014-12-14 21:41:41 9277 [Note] WSREP: gcomm: closed
2014-12-14 21:41:41 9277 [Note] WSREP: /usr/sbin/mysqld: Terminated.
141214 21:41:41 mysqld_safe mysqld from pid file /var/lib/mysql/mysqlb.pid ended
WSREP_SST: [ERROR] Parent mysqld process (PID:9277) terminated unexpectedly. (20141214 21:41:42.237)
WSREP_SST: [INFO] Joiner cleanup. (20141214 21:41:42.239)
WSREP_SST: [INFO] Joiner cleanup done. (20141214 21:41:42.746)

2006 MySQL server has gone away

Latest Forum Posts - December 14, 2014 - 1:53pm
Hi,
In the last couple of weeks we have been experiencing a "2006 MySQL server has gone away" error intermittently on our servers; to allow connections again we run "FLUSH TABLES" (we have this running on a cron to maintain stability at the moment).

We have upgraded from MySQL 5.6.5 to MySQL 5.6.22 and have now switched to Percona Server 5.6.21-70.

Server: Mac Mini, Core i7, 16GB RAM, 512GB enterprise SSD.


ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 16384
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 512
virtual memory (kbytes, -v) unlimited

uptime
8:45 up 1 day, 15:55, 2 users, load averages: 0.49 0.32 0.31

NB - This config has been tinkered with by many people over the years - my hunch is the problem lies in here...
This is used in an MM (master-master) replication environment.

Cluster does not synchronize

Latest Forum Posts - December 14, 2014 - 9:44am
Hi
I have two nodes with version 5.6.21-25.8-938.precise

| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| wsrep_cert_index_size | 91 |
| wsrep_causal_reads | 0 |
| wsrep_cert_interval | 0.000022 |
| wsrep_incoming_addresses | 10.1.1.12:3306,10.1.1.13:3306 |
| wsrep_evs_delayed | |
| wsrep_evs_evict_list | |
| wsrep_evs_repl_latency | 0/0/0/0/0 |
| wsrep_evs_state | OPERATIONAL |
| wsrep_gcomm_uuid | 87ae0011-83ab-11e4-9fc0-d65c844c8cea |
| wsrep_cluster_conf_id | 30 |
| wsrep_cluster_size | 2 |
| wsrep_cluster_state_uuid | 1abb2b3e-82ff-11e4-824a-a24ee1e11d16 |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| wsrep_local_bf_aborts | 0 |
| wsrep_local_index | 0 |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy <info@codership.com> |
| wsrep_provider_version | 3.8(rf6147dd) |
| wsrep_ready | ON |



clustercheck says

clustercheck root pass
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 40

Percona XtraDB Cluster Node is synced.



But the second node does not have one table in the database,

and in the error log I see:

2014-12-14 18:35:44 53445 [Warning] InnoDB: Cannot open table test_db/all_cities from the internal data dictionary of InnoDB though the .frm file for the table exists. See http://dev.mysql.com/doc/refman/5.6/...eshooting.html for how you can resolve the problem.


and also this:
[Warning] WSREP: access file(gvwstate.dat) failed(No such file or directory)


from config:

explicit_defaults_for_timestamp=TRUE
wsrep_provider=/usr/lib/libgalera_smm.so
#wsrep_provider_options = 'pc.ignore_sb = true'
#wsrep_provider_options = 'socket.checksum = 1;'
wsrep_cluster_address=gcomm://10.1.1.12,10.1.1.13
#wsrep_cluster_address=gcomm://
wsrep_slave_threads = 128
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
wsrep_node_address=10.1.1.12
wsrep_node_name=xtradb1
wsrep_sst_method=xtrabackup-v2
wsrep_cluster_name=test_db_cluster
wsrep_sst_auth="sstuser:s3cretPass"




I now have another cluster with 10 nodes that works well.
It is on version 5.6.20-25.7-886.precise.

What is different between the new version and the previous one?
Has something changed in the config?

How can I install the previous working version?

Thank you
