Feed aggregator

percona-toolkit 2.2.13: damaged data on master after pt-table-sync

Latest Forum Posts - March 9, 2015 - 11:58pm
Hi!

OS: Ubuntu 12.04.5 LTS
DB server: mariadb-server-5.5 5.5.42+maria-1~precise
Percona-toolkit: 2.2.13

We have two MariaDB servers (we also tried mysql-server-5.5 (5.5.41-0ubuntu0.12.04.1)) with master->slave replication. Recently the replication broke, so I decided to use pt-table-sync to fix it.

First I used pt-table-checksum to identify which tables are out of sync:

pt-table-checksum --chunk-size 500000 --empty-replicate-table --replicate=tracker.checksum -d tracker --tables tracker.chatlogs h=mysql-master,u=root,p=passw0rd

And then I used pt-table-sync:

pt-table-sync --execute --replicate tracker.checksum -d tracker --tables tracker.chatlogs h=mysql-master.site,u=root,p=passw0rd

Everything ran without any problems, except for the fact that some data containing Cyrillic symbols got damaged on the master server. I saw '?' signs instead of normal Cyrillic letters on both master and slave. That means pt-table-sync somehow managed to damage the data on the master DB.
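For what it's worth, '?' in place of Cyrillic is the classic symptom of a connection-charset mismatch: when utf8-encoded text passes through a single-byte connection charset such as latin1, every character without a latin1 equivalent is replaced by '?'. A minimal Python sketch of the mechanism (an illustration only, not a claim about what pt-table-sync actually did here):

```python
# Simulate utf8 text being forced through a latin-1 connection:
# characters with no latin-1 equivalent degrade to '?'.
text = "привет"  # Cyrillic, perfectly valid in a utf8 column
damaged = text.encode("latin-1", errors="replace").decode("latin-1")
print(damaged)  # every Cyrillic letter becomes '?'
```

If that is the cause here, comparing the character_set_client/character_set_connection values seen by the tool and by your application would be a place to start.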

MariaDB [tracker]> describe chatlogs;
+-------------+-----------------------+------+-----+---------+----------------+
| Field       | Type                  | Null | Key | Default | Extra          |
+-------------+-----------------------+------+-----+---------+----------------+
| id          | mediumint(8) unsigned | NO   | PRI | NULL    | auto_increment |
| user_id     | mediumint(8)          | NO   | MUL | NULL    |                |
| username    | varchar(20)           | NO   | MUL |         |                |
| username_to | varchar(20)           | NO   | MUL |         |                |
| IP          | varchar(8)            | NO   |     |         |                |
| time        | int(11)               | NO   | MUL | NULL    |                |
| message     | varchar(1000)         | NO   |     | NULL    |                |
| blocked     | varchar(20)           | NO   | MUL |         |                |
+-------------+-----------------------+------+-----+---------+----------------+
8 rows in set (0.00 sec)


MariaDB [tracker]> show create table chatlogs\G
*************************** 1. row ***************************
Table: chatlogs
Create Table: CREATE TABLE `chatlogs` (
`id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
`user_id` mediumint(8) NOT NULL,
`username` varchar(20) NOT NULL DEFAULT '',
`username_to` varchar(20) NOT NULL DEFAULT '',
`IP` varchar(8) NOT NULL DEFAULT '',
`time` int(11) NOT NULL,
`message` varchar(1000) NOT NULL,
`blocked` varchar(20) NOT NULL DEFAULT '',
PRIMARY KEY (`id`),
KEY `user_id` (`user_id`) USING BTREE,
KEY `time` (`time`) USING BTREE,
KEY `username_to_name` (`username_to`,`username`) USING BTREE,
KEY `id_blocked` (`id`,`blocked`) USING BTREE,
KEY `username_to` (`username_to`) USING BTREE,
KEY `username` (`username`) USING BTREE,
KEY `blocked` (`blocked`) USING BTREE,
KEY `username_id` (`username`,`id`) USING BTREE,
KEY `id_username` (`id`,`username`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=156476 DEFAULT CHARSET=utf8 ROW_FORMAT=COMPACT
1 row in set (0.00 sec)


Before pt-table-sync (on master):

MariaDB [tracker]> SELECT id, FROM_UNIXTIME(time) FROM chatlogs WHERE message LIKE '%? ?%' OR blocked LIKE '%? ?%' ORDER BY time DESC;
Empty set (0.17 sec)


After pt-table-sync (on master):

MariaDB [tracker]> SELECT id, FROM_UNIXTIME(time) FROM chatlogs WHERE message LIKE '%? ?%' OR blocked LIKE '%? ?%' ORDER BY time DESC;
+--------+---------------------+
| id | FROM_UNIXTIME(time) |
+--------+---------------------+
| 156475 | 2015-02-23 12:48:15 |
| 156474 | 2015-02-23 12:47:47 |
| 156472 | 2015-02-23 10:55:57 |
| 156471 | 2015-02-23 08:04:15 |
| 156469 | 2015-02-23 07:34:57 |
| 156468 | 2015-02-23 07:34:29 |
| 156467 | 2015-02-23 05:57:14 |
| 156466 | 2015-02-23 05:51:29 |
| 156465 | 2015-02-23 05:45:44 |
| 156464 | 2015-02-23 05:43:37 |
| 156463 | 2015-02-23 05:21:37 |
| 156462 | 2015-02-23 05:15:51 |
| 156461 | 2015-02-23 05:15:30 |
| 156460 | 2015-02-22 21:18:18 |
| 156459 | 2015-02-22 21:16:30 |
+--------+---------------------+
15 rows in set (0.17 sec)

I can provide dumps for this table for both master and slave before sync so that it would be possible to reproduce the issue.

Could anybody please tell me whether I'm doing anything wrong when syncing the slave DB with the master, and why this sort of thing could happen?

Thanks in advance.

--
Best regards,
Sergey Arlashin

Percona XtraBackup without root

Latest Forum Posts - March 9, 2015 - 1:01pm
Is there any way to use Percona XtraBackup on a server where I don't have root access?

High availability question on re. 5.6

Latest Forum Posts - March 9, 2015 - 11:12am
I am running the following cluster:
Node A and C have XtraDB-Cluster-Shared 5.6.20-25.7.888
Node B has XtraDB-Cluster-Shared 5.6.15-25.5.759

If I test failover (shut down) with nodes A or C (shut the node down ... add a record ... then restart the node) everything replicates as expected.
If I test failover (shutdown) with node B I immediately get a too many connections error and I must clear hosts before I can continue.

The question here is: would this be because node B runs an earlier version of XtraDB-Cluster-Shared?
Should I upgrade node B so it is running the same version as the other two?
When upgrading, should I uninstall the current artifact, or can I just upgrade to 5.6.20-25?

5 free handy tools for monitoring and managing MySQL replication

Latest MySQL Performance Blog posts - March 9, 2015 - 8:50am

MySQL Replication is very simple to set up. In this post I’ll discuss its importance and five handy tools for monitoring and managing MySQL replication.

What is MySQL Replication? It's the process of copying data (real-time events) from one master instance to a slave instance and maintaining redundant, consistent data on a different machine. This enables a distributed database system in which the servers share the same information.

In MySQL, replication is based on the three threads described below.

1) I/O thread on the slave server: to start receiving replication events, an I/O thread starts on the slave server and connects to the master server.

2) Master connection handler thread: acting as a connection handler, the master starts a thread whenever a replication slave connects to it. The master server sends the events from its binary log file to the slave I/O thread, notifying the slave about newly written events in its binary log. The slave I/O thread records them in the slave's relay log file.

3) Slave SQL thread: when it starts, it immediately reads the events from the relay log and applies them on the slave DB. Once it finishes processing a relay log file, and if the I/O thread is already writing events to a new relay log file, it deletes the processed one. If the I/O thread is writing events to the same relay log file the SQL thread is reading, the SQL thread pauses until more events are available in the relay log.
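The three-thread pipeline above can be sketched as a small producer/consumer model (a toy illustration with invented names, not MySQL internals):

```python
from collections import deque

class ToySlave:
    """Toy model of the slave side: the I/O thread appends events
    to a relay log; the SQL thread drains and applies them."""
    def __init__(self):
        self.relay_log = deque()  # stands in for the relay log file
        self.applied = []         # events the SQL thread has executed

    def io_thread_receive(self, event):
        # I/O thread: record an event from the master into the relay log
        self.relay_log.append(event)

    def sql_thread_apply(self):
        # SQL thread: read events from the relay log in order and apply
        # them; with an empty relay log it would pause and wait
        while self.relay_log:
            self.applied.append(self.relay_log.popleft())

slave = ToySlave()
for ev in ["INSERT 1", "INSERT 2", "UPDATE 1"]:
    slave.io_thread_receive(ev)   # master pushes binlog events
slave.sql_thread_apply()
print(slave.applied)
```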

Redundant MySQL replication (slave) instances are an excellent way of improving performance and availability. Replication ensures that another copy of the database is available whenever an issue arises with the master server. Another advantage is offloading report queries (selects) to a slave server, a common practice to reduce the workload of the master DB server, since multiple servers are then able to respond to queries. A third advantage is scheduling backups from the slave server, and so on.

All the benefits discussed above are smart and effective only if replication is up and running and the data is in sync with the master.

Let us look at a set of very useful tools from Percona Toolkit that help you monitor and manage MySQL replication (slaves).

1) pt-heartbeat: this tool measures and monitors replication lag on a MySQL slave in real time. It is important to have a replication monitoring system to confirm that replication is up and running and that the lag is current.

The typical way of monitoring is to use "SHOW SLAVE STATUS" to check values like Slave_IO_Running: Yes, Slave_SQL_Running: Yes and Seconds_Behind_Master: 0, but this is not reliable, as Seconds_Behind_Master shows the difference between the last timestamp read in the binlogs and the current time. Many things, such as a slow network, long-running queries, blocking operations or second-level slaves (master > slave1 > slave2), can produce an irrelevant value for that variable.

So I recommend using pt-heartbeat, which depends only on a heartbeat record being replicated to the slave instead of the unreliable method above. pt-heartbeat inserts/updates a row on the master, and the delay is calculated from when the data was inserted and when it became available to read on the slave. It works at any depth in the replication hierarchy; for example, it will reliably report how far a slave lags behind its original master (its master's master).
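Conceptually, the lag pt-heartbeat reports is just "now minus the heartbeat timestamp that made it to the slave". A hedged sketch of that calculation (the real tool reads the timestamp from its heartbeat table; the function name here is invented):

```python
from datetime import datetime

def replication_lag(heartbeat_ts: datetime, now: datetime) -> float:
    """Seconds between the last heartbeat written on the master
    (as read from the replicated heartbeat row on the slave) and now."""
    return (now - heartbeat_ts).total_seconds()

# Last heartbeat row that replicated to the slave vs. the current time
heartbeat = datetime(2015, 3, 9, 12, 0, 0)
now = datetime(2015, 3, 9, 12, 0, 30)
print(replication_lag(heartbeat, now))  # 30 seconds of lag
```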

Example :

On Master:

[root@Tst1Master ~]# pt-heartbeat --daemonize -D test --update -h<IP address> --create-table

On Slave:

[root@Tst1Slave ~]# pt-heartbeat -D test --monitor --master-server-id 1
0.00s [  0.00s,  0.00s,  0.00s ]
0.00s [  0.00s,  0.00s,  0.00s ]
0.00s [  0.00s,  0.00s,  0.00s ]
0.00s [  0.00s,  0.00s,  0.00s ]

We used to schedule the backup from the slave to avoid additional load on the master server. In this case it is important to confirm that the slave is current with the master, to ascertain that the backup contains recent data. Here is a simple script you can use to verify the replication status periodically (via cron), and to check the status just before a scheduled backup.

#!/bin/bash
#     < 300      - [Good]
#     300 to 600 - [Warning]
#     > 600      - [Critical]
MAIL_FROM="root@`hostname`"
MAIL_TO="mailid@mail.com"
Warningthreshold=300
Criticalthreshold=600
# Pass the parameter "test.sh backup" to denote the call is from the backup script.
backup=$1
CMD=$(/root/bin/pt-heartbeat -D test --master-server-id 1 --check | cut -d. -f1)
if [ $CMD -lt $Warningthreshold ]
then
    MESSAGE=`date +'%m:%d:%Y %H:%M:%S'`" [Good] current delay: "$CMD
elif [ $CMD -gt $Warningthreshold ] && [ $CMD -lt $Criticalthreshold ]
then
    MESSAGE=`date +'%m:%d:%Y %H:%M:%S'`" [Warning] current delay: "$CMD
elif [ $CMD -gt $Criticalthreshold ]
then
    MESSAGE=`date +'%m:%d:%Y %H:%M:%S'`" [Critical] current delay: $CMD Check the replication"
else
    MESSAGE=`date +'%m:%d:%Y %H:%M:%S'`" [Error] Replication status check failed need to investigate."
fi
# No arguments supplied
if [ -z "$1" ] && [ $CMD -gt $Warningthreshold ]
then
    ( echo "Subject: Replication status on `hostname`";
      echo "Replication status : ";
      echo $MESSAGE ) | /usr/sbin/sendmail -O NoRecipientAction=add-to -f${MAIL_FROM} ${MAIL_TO}
elif [ $# -eq 1 ]
then
    ( echo "Subject: Replication status check prior to backup on `hostname`";
      echo "Replication status prior to backup:";
      echo $MESSAGE ) | /usr/sbin/sendmail -O NoRecipientAction=add-to -f${MAIL_FROM} ${MAIL_TO}
fi

2) pt-slave-find: finds and prints the replication hierarchy of the slaves – it shows you the topology of your MySQL replication instances.

Example :

[root@Tst1Master ~]# ./pt-slave-find --host=192.168.56.10
192.168.56.10
Version         5.6.22-72.0-log
Server ID       1
Uptime          42:09 (started 2015-03-03T01:40:42)
Replication     Is not a slave, has 1 slaves connected, is not read_only
Filters
Binary logging  STATEMENT
Slave status
Slave mode      STRICT
Auto-increment  increment 1, offset 1
InnoDB version  5.6.22-72.0
+- 192.168.56.11
   Version         5.6.22-72.0
   Server ID       2
   Uptime          41:48 (started 2015-03-03T01:41:03)
   Replication     Is a slave, has 0 slaves connected, is not read_only
   Filters
   Binary logging  STATEMENT
   Slave status    0 seconds behind, running, no errors
   Slave mode      STRICT
   Auto-increment  increment 1, offset 1
   InnoDB version  5.6.22-72.0

[root@Tst1Master ~]# ./pt-slave-find --host=192.168.56.10 --report-format=hostname
192.168.56.10
+- 192.168.56.11

3) pt-slave-restart: Watches the MySQL replication slaves for any error and tries to restart the replication.

The tool is very useful for skipping statements that cause errors and continuing replication. If you use it carelessly, the slave will end up with inconsistent data. Whenever you use the tool, I recommend confirming the consistency of data between master and slave with the help of pt-table-checksum.

Example : Restart the slave for error-numbers=1062 (Duplicate entry ‘1’ for key ‘PRIMARY’)

#pt-slave-restart --socket=/var/lib/mysql/custom-feeds/mysql.sock --ask-pass --error-numbers=1062

4) pt-table-checksum: Performs an online replication consistency check by executing checksum queries on the master, which produces different results on replicas that are inconsistent with the master.

Example :

[root@Tst1Master ~]# ./pt-table-checksum -dD
            TS ERRORS DIFFS ROWS CHUNKS SKIPPED  TIME TABLE
03-03T02:34:44      0     1    2      1       0 0.011 d.t

Note: it is important to run pt-table-checksum regularly, regardless of whether you've ever skipped an event with pt-slave-restart, to make sure the slave holds data identical to the master's.

5) pt-table-sync: syncs slaves with their master (synchronizes data efficiently between MySQL tables).

Example :

[root@Tst1Slave ~]# ./pt-table-sync -dD --print --sync-to-master 192.168.56.11
REPLACE INTO `d`.`t`(`id`, `data`) VALUES ('1', 'Test1')
/*percona-toolkit src_db:d src_tbl:t src_dsn:P=3306,h=192.168.56.10 dst_db:d dst_tbl:t dst_dsn:h=192.168.56.11
lock:1 transaction:1 changing_src:1 replicate:0 bidirectional:0 pid:6435 user:root host:Tst1Slave.mysql*/;
REPLACE INTO `d`.`t`(`id`, `data`) VALUES ('2', 'Test2')
/*percona-toolkit src_db:d src_tbl:t src_dsn:P=3306,h=192.168.56.10 dst_db:d dst_tbl:t dst_dsn:h=192.168.56.11
lock:1 transaction:1 changing_src:1 replicate:0 bidirectional:0 pid:6435 user:root host:Tst1Slave.mysql*/;
[root@Tst1Slave ~]#

[root@Tst1Slave ~]# ./pt-table-sync -dD --verbose --execute --sync-to-master 192.168.56.11
# Syncing h=192.168.56.11
# DELETE REPLACE INSERT UPDATE ALGORITHM START    END      EXIT DATABASE.TABLE
#      0       2      0      0 Chunk     03:38:09 03:38:09 2    d.t

We have successfully synced the tables, so let us run the checksum again and confirm the table is in sync.

[root@Tst1Master ~]# ./pt-table-checksum -dD
            TS ERRORS DIFFS ROWS CHUNKS SKIPPED  TIME TABLE
03-03T03:03:40      0     0    2      1       0 0.111 d.t

The aforesaid are simple examples; based on your needs, you can choose the tools and options or modify the scripts. I also recommend that you go through the documentation for more details on each tool.

The post 5 free handy tools for monitoring and managing MySQL replication appeared first on MySQL Performance Blog.

What stopped MySQL? Tracing back signals sent to MySQL

Latest MySQL Performance Blog posts - March 6, 2015 - 8:59am

Have you ever had a case where you needed to find a process which sent a HUP/KILL/TERM or other signal to your database? Let me rephrase. Did you ever have to find which process messed up your night? If so, you might want to read on. I’m going to tell you how you can find it.

Granted, on small and/or meticulously managed systems tracking down the culprit is probably not a big deal. You can likely identify your process simply by checking what processes have enough privileges to send mysqld a HUP/KILL/TERM signal. However, frequently we see cases where this may not work or the elimination process would be too tedious to execute.

We recently had a case where a process was frequently sending SIGHUPs to mysqld and the customer asked us to see if we could get rid of his annoyance. This blog is the direct result of a discussion I had with my colleague Francisco Bordenave, on options available to deal with his issue. I’m only going to cover a few of them in this blog but I imagine that most of you will be able to find one that will work for your case. Note that most tracing tools add some overhead to the system being investigated. The tools presented in the following are designed to be lightweight so the impact should be well within acceptable range for most environments.

DISCLAIMER: While writing this blog I discovered that David Busby has also discussed one of the tools that I’m going to cover in his article. For those who have read the article note that I’m going to cover other tools as well and I will also cover a few extra SystemTap details in this blog. For those who haven’t yet had chance to read David’s blog, you can read it here.

All right, let’s see what “low hanging tools” there are available to us to deal with our issue!

Linux
  • SystemTap: widely available on Linux but usually not enabled by default. You need to install debuginfo and devel kernel packages and systemtap itself. Similar to DTrace.
  • Perf: although not quite written for generic tracing, its ability to trace system calls lets us use it to our advantage if we trace the entry of the kill syscall (sys_enter_kill).
  • Audit: generic system auditing platform. Given its nature, we can use it to track down many things, including rogue processes sending HUP signals to our poor mysqld!
  • Code!: Given that MySQL is open source, you could customize the signal handler to obtain extra information. See more in sigaction(2) and the SA_SIGINFO flag. I'm not sure if this should be listed as a more efficient solution, but it's an option nevertheless. I guess one could also preload/inject his own signal handler via an LD_PRELOAD trick and a custom library, but that's beyond the scope of what I intend to cover. However, for certain signals (most notably SIGSEGV) you may not need to write your own tools, as the OS may already come with libs/tools that can assist you. See Ulrich Drepper's catchsegv or /usr/lib64/libSegFault.so, for instance.
  • Debuggers: These may be efficient to use in some cases but I won’t cover them this time, either.
FreeBSD/Solaris
  • DTrace: a very decent, stable tracing platform. Included in most recent kernels by default for the mentioned platforms (FreeBSD 9.2+, FreeBSD 10+, Solaris 10+).

In this article I’m going to focus on Linux as that’s what people in the MySQL community seem to care about most nowadays. The tools that I will discuss will be SystemTap, Perf and Audit. If you feel that you would like to read about the rest, let me know and I will cover the rest of the options in a followup article.

SystemTap

I’m going to set up SystemTap on a recent, 64 bit CentOS 7 box. I will only cover basic install, you can find more about how to install SystemTap here.

The strength of SystemTap is definitely its flexibility, potentially the best tool for solving our problem on the Linux platform. It’s been around for some time and is generally regarded mature but I would recommend to test your “tapscripts” in dev/qa before you run them in production.

Installing SystemTap

Follow the steps below to install SystemTap:

[root@centos7]~# sed -i 's/enabled=0/enabled=1/' /etc/yum.repos.d/CentOS-Debuginfo.repo
[root@centos7]~# yum repolist
...
base-debuginfo/x86_64    CentOS-7 - Debuginfo    1,688
...

[root@centos7]~# yum install kernel-debuginfo kernel-debuginfo-common kernel-devel
[root@centos7]~# yum install systemtap systemtap-runtime

Tracing with SystemTap

Create a tapscript like the one below:

[root@centos7]~# cat find_sighupper.stp
#!/usr/bin/stap
#
# Prints information on process which sent HUP signal to mysqld
#
probe begin
{
        printf("%-26s %-8s %-5s %-8s %-5s\n", "TIME", "SOURCE", "SPID", "TARGET", "TPID");
}

probe nd_syscall.kill.return
{
        sname = @entry(execname());
        spid = @entry(pid());
        sig = @entry(uint_arg(2));
        tpid = @entry(uint_arg(1));
        tname = pid2execname(tpid);
        time = ctime(gettimeofday_s());
        if (sig == 1 && tname == "mysqld")
                printf("%-26s %-8s %-5d %-8s %-5d\n", time, sname, spid, tname, tpid);
}

Then run the tap script in a dedicated terminal:

[root@centos7]~# stap find_sighupper.stp
TIME                       SOURCE   SPID  TARGET   TPID

Send your HUP signal to mysqld from another terminal:

[root@centos7]~# kill -1 1984

The culprit should show up in your first window, like so:

[root@centos7]~# stap find_sighupper.stp
TIME                       SOURCE   SPID  TARGET   TPID
Thu Feb 26 21:20:44 2015   kill     6326  mysqld   1984
^C

Note that with this solution I was able to define fairly nice constraints relatively easily. With a single probe (well, quasi, as @entry refers back to the callee) I was able to get all this information and filter out HUP signals sent to mysqld. No other filtering is necessary!

Perf

Perf is another neat tool to have. As its name implies, it was originally developed for lightweight profiling, to use the performance counters subsystem in Linux. It became fairly popular and got extended many times over these past years. Since it happens to have probes we can leverage, we are going to use it!

Installing Perf

As you can see, installing Perf is relatively simple.

# yum install perf

Start perf in a separate terminal window. I’m only going to run it for a minute but I could run it in screen for a longer period of time.

[root@centos7 ~]# perf record -a -e syscalls:sys_enter_kill sleep 60

In a separate terminal window send your test and obtain the results via “perf script”:

[root@centos7 ~]# echo $$
11380
[root@centos7 ~]# pidof mysqld
1984
[root@centos7 ~]# kill -1 1984
[root@centos7 ~]# perf script
# ========
# captured on: Thu Feb 26 14:25:02 2015
# hostname : centos7.local
# os release : 3.10.0-123.20.1.el7.x86_64
# perf version : 3.10.0-123.20.1.el7.x86_64.debug
# arch : x86_64
# nrcpus online : 2
# nrcpus avail : 2
# cpudesc : Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz
# cpuid : GenuineIntel,6,70,1
# total memory : 1885464 kB
# cmdline : /usr/bin/perf record -a -e syscalls:sys_enter_kill sleep 60
# event : name = syscalls:sys_enter_kill, type = 2, config = 0x9b, config1 = 0x0, config2 = 0x0, excl_usr = 0, exc
# HEADER_CPU_TOPOLOGY info available, use -I to display
# HEADER_NUMA_TOPOLOGY info available, use -I to display
# pmu mappings: software = 1, tracepoint = 2, breakpoint = 5
# ========
#
bash 11380 [000] 6689.348219: syscalls:sys_enter_kill: pid: 0x000007c0, sig: 0x00000001

As you can see in the above output, the process "bash" with pid 11380 signalled pid 0x07c0 (decimal: 1984) with a HUP signal (0x01). Thus, we found our culprit with this method as well.
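Since perf prints the syscall arguments in hex, a quick one-liner confirms the decoding (just a sanity check, not part of perf itself):

```python
# perf printed: pid: 0x000007c0, sig: 0x00000001
target_pid = int("0x000007c0", 16)  # target of the kill syscall
signal_no = int("0x00000001", 16)   # 1 == SIGHUP
print(target_pid, signal_no)  # 1984 1 -> mysqld's pid and SIGHUP
```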

Audit

You can read more about Audit in the Red Hat Security Guide.

Installing Audit

Depending on your OS installation, it may be already installed.

In case it is not, you can install it as follows:

[root@centos7 ~]# yum install audit

When you are done installing, start your trace and track 64 bit kill system calls that send HUP signals with signal ID of 1:

[root@centos7]~# auditctl -l
No rules
[root@centos7]~# auditctl -a exit,always -F arch=b64 -S kill -F a1=1
[root@centos7]~# auditctl -l
LIST_RULES: exit,always arch=3221225534 (0xc000003e) a1=1 (0x1) syscall=kill
[root@centos7]~# auditctl -s
AUDIT_STATUS: enabled=1 flag=1 pid=7010 rate_limit=0 backlog_limit=320 lost=0 backlog=0
[root@centos7]~# pidof mysqld
1984
[root@centos7]~# kill -1 1984
[root@centos7]~# tail -2 /var/log/audit/audit.log
type=SYSCALL msg=audit(1425007202.384:682): arch=c000003e syscall=62 success=yes exit=0 a0=7c0 a1=1 a2=a a3=7c0 items=0 ppid=11380 pid=3319 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=1 comm="zsh" exe="/usr/bin/zsh" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key=(null)
type=OBJ_PID msg=audit(1425007202.384:682): opid=1984 oauid=-1 ouid=995 oses=-1 obj=system_u:system_r:mysqld_t:s0 ocomm="mysqld"

As you can see from the above output, the results showed up nicely in the system audit.log. From the log it's clear that I sent my SIGHUP to mysqld (pid 1984, the "opid" field) from zsh (see the command name in the "comm" field) via the 64-bit kill syscall. Thus, mischief managed, once again!
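Because audit.log records are space-separated key=value pairs, they are easy to post-process. A minimal parser sketch (simplified, ignoring quoting edge cases; the function name is invented):

```python
def parse_audit_line(line: str) -> dict:
    """Split an audit.log record into a dict of key=value fields.
    Double quotes around values are stripped; this is a simplification."""
    fields = {}
    for token in line.split():
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value.strip('"')
    return fields

# The OBJ_PID record from the log above
line = ('type=OBJ_PID msg=audit(1425007202.384:682): opid=1984 '
        'oauid=-1 ouid=995 oses=-1 ocomm="mysqld"')
rec = parse_audit_line(line)
print(rec["opid"], rec["ocomm"])  # 1984 mysqld
```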

Summary

In this blog I presented three different tools to help you trace down the sources of signals. Each of the three tools has its own strengths. SystemTap is abundant in features and very nicely scriptable. The additional features of auditd may make it appealing to deploy to your hosts. Perf is a great tool for CPU profiling and you might want to install it solely for that reason. On the other hand, your distribution might not have support for a given tool compiled into its kernel, or may make its setup harder. In my experience most modern distributions support the tools discussed here, so the choice comes down to personal preference or convenience.

In case you were wondering, I often pick auditd because it is frequently already installed. SystemTap might be a bit more complicated to set up, but I would likely invest the extra time if my case were more complex. I primarily use perf for CPU tracing and tend to think of the other two tools before perf when tracing signals.

Hope you enjoyed reading! Happy [h/t]racking!

The post What stopped MySQL? Tracing back signals sent to MySQL appeared first on MySQL Performance Blog.

Buffer Pool restore.

Latest Forum Posts - March 6, 2015 - 3:02am
Hi!

Just a question about innodb_buffer_pool_dump: is there an option to take the dump automatically after a certain percentage of the buffer pages have changed, instead of doing it only at shutdown? If MySQL dies abruptly (OS signal 11 or 6) it won't get a shutdown signal and dies without a buffer pool dump, despite innodb_buffer_pool_dump_at_shutdown being ON, and on startup after recovery it can only load an old dump that may be days or months old.

At least I didn't find anything about taking the dump in a timely manner here: http://dev.mysql.com/doc/refman/5.6/...ffer-pool.html, a feature which was there in PS 5.5.

MySQL Mixed Replication uses different method for same query

Latest Forum Posts - March 5, 2015 - 12:34pm

mysql> select * from table_in_question;
| id      | parent_id | key           | array_name | value           | time                | ip      | remark | p_id   |
| 2128363 | 0         | fname         | o          | Sue             | 2015-03-02 08:47:48 | 0.0.0.0 |        | 166817 |
| 2128365 | 2128363   | lname         | o          | Susanson        | 2015-03-02 08:47:48 |         |        | 166817 |
| 2128367 | 2128363   | address       | o          | 123 Main        | 2015-03-02 08:47:48 |         |        | 166817 |
| 2128373 | 2128363   | city          | o          | Nowhere         | 2015-03-02 08:47:48 |         |        | 166817 |
| 2128379 | 2128363   | state         | o          | LA              | 2015-03-02 08:47:48 |         |        | 166817 |
| 2128385 | 2128363   | postcode      | o          | 11111           | 2015-03-02 08:47:48 |         |        | 166817 |
| 2128389 | 0         | fname         | o          | Amy             | 2015-03-02 08:47:58 | 2.2.2.2 |        | 0      |
| 2128391 | 2128363   | telephone     | o          | (800)555-1212   | 2015-03-02 08:47:48 |         |        | 166817 |
| 2128397 | 2128363   | email_address | o          | sue@nowhere.com | 2015-03-02 08:47:48 |         |        | 166817 |
| 2128403 | 2128363   | click         | o          | 369490878       | 2015-03-02 08:47:48 |         |        | 166817 |
| 2128409 | 2128363   | country       | o          | 223             | 2015-03-02 08:47:48 |         |        | 166817 |
| 2128415 | 2128363   | product       | c          | 7               | 2015-03-02 08:47:48 |         |        | 166817 |
| 2128421 | 2128363   | g_id          | c          | 1               | 2015-03-02 08:47:48 |         |        | 166817 |
| 2128427 | 2128363   | c_id          | c          | 141             | 2015-03-02 08:47:48 |         |        | 166817 |

id is the Primary Key of this table.

Last SQL error: Error 'Duplicate entry '2128391' for key 'PRIMARY'' on query. Default database: 'XXX'.
Query: 'INSERT INTO table_in_question (key,value,array_name,parent_id) VALUES
('lname','Amyson','o',2128389),
('address','444 Main','o',2128389),
('city','Nowhere','o',2128389),
('state','NH','o',2128389),
('postcode','11111','o',2128389),
('telephone','800555-1212','o',2128389),
('email_address','Amy@nowhere.com','o',2128389),
('country','223','o',2128389),
('product','7','c',2128389),
('g_id','1','c',2128389),
('c_id','141','c',2128389)'

The issue is: we are upgrading to a PXC cluster with 6 nodes. This causes auto-increment to be set to 6 on the cluster. We are keeping the data current via a binlog from a master/slave pair where auto-increment is set to 2. The master/slave pair uses MIXED replication. The row with no parent_id is a simple insert; the remaining rows are inserted as above.
From what I can see, the first row and the subsequent 2 rows are populated with row-based replication. The remaining rows are switched to statement-based. When the second no-parent_id row is inserted, it takes the ID from the master via row-based replication. When the first row referencing the parent is inserted, it too uses row-based replication, causing the collision.
The bulk insert is especially puzzling: MySQL apparently breaks it up and treats the first 2 rows as row-based replication and the others as statement-based. The question is: since these are all simple inserts, and none specifically addresses the time column (defaulted to the current timestamp), the only potentially non-deterministic column, what would cause the replication method chosen to differ?
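As background to the id arithmetic involved: auto_increment_offset and auto_increment_increment make each server hand out ids from an arithmetic progression, so a pair configured with increment 2 and a cluster with increment 6 generate overlapping sequences. A toy illustration (a simplified model of the MySQL behaviour; values are invented, not taken from the servers above):

```python
def auto_increment_ids(offset, increment, count, start_at=1):
    """IDs a server would hand out: the first value >= start_at that is
    congruent to offset (mod increment), then steps of `increment`.
    Simplified model of auto_increment_offset/auto_increment_increment."""
    first = start_at + (offset - start_at) % increment
    return [first + i * increment for i in range(count)]

# Master/slave pair: auto_increment_increment=2, offset=1 -> odd ids
print(auto_increment_ids(1, 2, 5))  # [1, 3, 5, 7, 9]
# 6-node PXC cluster: auto_increment_increment=6, offset=1
print(auto_increment_ids(1, 6, 5))  # [1, 7, 13, 19, 25]
# The sequences share values, so ids minted on one side can collide
# with ids already used on the other.
```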

How to test if CVE-2015-0204 FREAK SSL security flaw affects you

Latest MySQL Performance Blog posts - March 5, 2015 - 8:45am

The CVE-2015-0204 FREAK SSL vulnerability abuses intentionally weak “EXPORT” ciphers which could be used to perform a transparent Man In The Middle attack. (We seem to be continually bombarded not only with SSL vulnerabilities but also with the need to give vulnerabilities increasingly odd names.)

Is your server vulnerable?

This can be tested using the following GIST

If the result is 0, the server is not offering the EXPORT ciphers and as such is not vulnerable.

Is your client vulnerable?

Point your client to https://oneiroi.co.uk:4443/test. If this returns “Vulnerable” then the client is vulnerable; if you get a connection error instead, your client should not be vulnerable. For example:

root@host:/tmp$ openssl version
OpenSSL 1.0.1e 11 Feb 2013
root@host:/tmp$ curl https://oneiroi.co.uk:4443/test -k
Vulnerable

root@host:/tmp$ openssl s_client -connect oneiroi.co.uk:4443
CONNECTED(00000003)
depth=0 C = XX, L = Default City, O = Default Company Ltd
verify error:num=18:self signed certificate
verify return:1
depth=0 C = XX, L = Default City, O = Default Company Ltd
verify return:1

Certificate chain
0 s:/C=XX/L=Default City/O=Default Company Ltd
i:/C=XX/L=Default City/O=Default Company Ltd

Server certificate
-----BEGIN CERTIFICATE-----
MIIDVzCCAj+gAwIBAgIJANvTn7jl

[root@3654e4df1cc2 bin]# curl https://oneiroi.co.uk:4443/test -k
curl: (35) Cannot communicate securely with peer: no common encryption algorithm(s).
[root@3654e4df1cc2 bin]# openssl s_client -connect oneiroi.co.uk:4443
CONNECTED(00000003)
139942442694560:error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure:s23_clnt.c:744:

In short, a vulnerable client will complete the connection, and a non-vulnerable client should present an SSL handshake failure error.

DIY

You can recreate this setup yourself


openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout mycert.pem -out mycert.pem;
openssl s_server -cipher EXPORT -accept 4443 -cert mycert.pem -HTTP;

Is MySQL affected ?

Some of the code per the POODLE Blog post can be re-purposed here.


mysql -Bse "SHOW STATUS LIKE 'Ssl_cipher_list'" | sed 's/:/\n/g' | grep EXP | wc -l

A result of 0 means the MySQL instance does not support any of the EXPORT ciphers, and thus should not be vulnerable to this attack.
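The same count can be computed without the sed/grep pipeline by splitting the colon-separated cipher list directly (a sketch; the cipher strings below are made-up examples, not real server output):

```python
def export_cipher_count(cipher_list: str) -> int:
    """Count EXPORT-grade ciphers in a colon-separated cipher list,
    mirroring the `sed 's/:/\\n/g' | grep EXP | wc -l` pipeline above."""
    return sum(1 for c in cipher_list.split(":") if "EXP" in c)

safe = "DHE-RSA-AES256-SHA:AES128-SHA:RC4-SHA"
weak = "EXP-RC4-MD5:AES128-SHA:EXP-DES-CBC-SHA"
print(export_cipher_count(safe))  # 0 -> no EXPORT ciphers offered
print(export_cipher_count(weak))  # 2 -> EXPORT ciphers offered
```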

How about other clients?

Most clients link to another library for SSL purposes; however, there are examples where this is not the case: take for example golang (http://golang.org/pkg/crypto/tls/), which partially implements the TLS 1.2 RFC.

The following test code however shows golang does not appear to be affected.


package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	tr := &http.Transport{
		TLSClientConfig:    &tls.Config{},
		DisableCompression: true,
	}
	client := &http.Client{Transport: tr}
	resp, err := client.Get("https://oneiroi.co.uk:4443/test")
	fmt.Println(err)
	fmt.Println(resp)
}

Get https://oneiroi.co.uk:4443/test: remote error: handshake failure

References

The post How to test if CVE-2015-0204 FREAK SSL security flaw affects you appeared first on MySQL Performance Blog.

Steve ‘Woz’ Wozniak to speak at Percona Live MySQL Conference & Expo 2015

Latest MySQL Performance Blog posts - March 5, 2015 - 5:00am

Apple’s Steve “Woz” Wozniak will talk at Percona Live 2015

I am thrilled to announce that Apple co-founder and Silicon Valley icon and philanthropist Steve Wozniak will participate in a moderated Q&A on creativity and innovation April 14 during this year’s Percona Live MySQL Conference and Expo in Santa Clara, California.

In addition to “The Woz,” as Steve is nicknamed, we have a great lineup of esteemed industry luminaries, with a broad range of talks and tutorials along with fun and informative networking events during the four-day conference (April 13-16).

Harrison Fisk of Facebook’s Database Performance Team will deliver a keynote titled “Polyglot Persistence @Facebook” exploring why Facebook has so many different data solutions deployed at scale and how the company decides to deploy a new one. He’ll also talk about how Facebook is able to manage all these solutions and what types of optimizations are possible across all their data systems.

‘MySQL 101’ program announced
I wrote about the new “MySQL 101” program earlier this week. Largely driven by the broad demand for (and scarcity of) MySQL expertise across many industries, Percona Live is adding dual tracks for both developers and system administrators that will provide a 2-day course on MySQL fundamentals.

The two days of practical training will include everything needed to handle day-to-day MySQL DBA tasks. The motto of this new program is, “You send us developers and admins, and we will send you back MySQL DBAs.” You can check out the full schedule for MySQL 101 here.

Lightning Talks and Birds of a Feather Sessions (BOFs)
In related news, today we also announced the schedules for the popular “Lightning Talks” and “Birds of a Feather Sessions” (BOFs).

Lightning Talks provide an opportunity for attendees to propose, explain, exhort, and rant on any MySQL-related topic for five minutes. Lightning Talks will take place Wednesday night, (April 15) during the MySQL Community Networking Reception, which begins immediately following the breakout sessions.

This year’s topics include:

  • “Everything About MySQL Bugs” presented by Sveta Smirnova, senior principal technical support engineer, and Valerii Kravchuk, principal support engineer at Percona
  • “MySQL Galera Cluster, Percona XtraDB Cluster, and MariaDB Galera Cluster” presented by Jay Janssen, managing consultant at Percona
  • “Data Security – Emerging Legal Trends” presented by S. Keith Moulsdale, partner at Whiteford, Taylor & Preston
  • “WebScaleSQL Meeting/Hacking” presented by Steaphan Greene, software engineer at WebScaleSQL

BOFs sessions enable attendees with interests in the same project or topic to enjoy some quality face time. The BOF sessions will take place Tuesday night, April 14, from 6:00 p.m. to 7:00 p.m. This year’s topics include:

  • “Working From Home – Fun, Facts and Scares” presented by Raghavendra Prabhu, product lead, Percona XtraDB Cluster at Percona
  • “The Future of MySQL Quality Assurance: Introducing pquery” presented by Roel Van de Paar, senior QA lead, and Ramesh Sivaraman, junior QA engineer at Percona

Community Reception
One of the most fun and compelling aspects of Percona Live is the networking, providing the opportunity to make connections that can help enhance a career, facilitate a current project, or inspire new ideas. This year’s conference features the Community Networking Reception in the Expo Hall on Wednesday, April 15, from 5:30 p.m. to 7:00 p.m. The event will include the MySQL Community Awards and the Lightning Talks.

Conference Registration Promotions
Advanced pricing discounts are still available but will end on March 8. Register soon to take advantage of the best conference rates available. A Percona Live MySQL Conference & Expo ticket provides you with access to all OpenStack Live sessions. In addition, in the spirit of the new MySQL 101 track, the first 101 people who sign up for MySQL 101 using the “101” discount code will get their tickets for $101 – a $299 savings! Attendees of MySQL 101 will have full access to Percona Live keynotes and the Expo Hall.

See you next month!

The post Steve ‘Woz’ Wozniak to speak at Percona Live MySQL Conference & Expo 2015 appeared first on MySQL Performance Blog.

Percona XtraDB Cluster 5.6.22-25.8 is now available

Latest MySQL Performance Blog posts - March 5, 2015 - 4:30am

Percona is glad to announce the new release of Percona XtraDB Cluster 5.6 on March 5th 2015. Binaries are available from downloads area or from our software repositories.

Based on Percona Server 5.6.22-72.0 including all the bug fixes in it, Galera Replicator 3.9, and on Codership wsrep API 25.8, Percona XtraDB Cluster 5.6.22-25.8 is now the current General Availability release. All of Percona’s software is open-source and free, and all the details of the release can be found in the 5.6.22-25.8 milestone at Launchpad.

Bugs Fixed:

  • XtraBackup SST wouldn’t stop when MySQL was SIGKILLed. This would prevent MySQL from initiating a new transfer as port 4444 was already utilized. Bug fixed #1380697.
  • wsrep_sst_xtrabackup-v2 script was causing innobackupex to print a false positive stack trace into the log. Bug fixed #1407599.
  • MyISAM DDL (CREATE/DROP) isn’t replicated any more when wsrep_replicate_myisam is OFF. Note, for older nodes in the cluster, wsrep_replicate_myisam should work since the TOI decision (for MyISAM DDL) is done on origin node. Mixing of non-MyISAM and MyISAM tables in the same DDL statement is not recommended with wsrep_replicate_myisam OFF since if any table in list is MyISAM, the whole DDL statement is not put under TOI (total order isolation), hence not replicated. Bug fixed #1402338.
  • gcache.mem_size option has been deprecated. A warning will now be generated if the variable has a value different from 0. Bug fixed #1392408.
  • stderr of SST/Innobackupex is logged to syslog with appropriate tags if sst-syslog is in [sst] or [mysqld_safe] has syslog in my.cnf. This can be overridden by setting sst-syslog to -1 in [sst]. Bug fixed #1399134.
  • clustercheck can now check if the node is PRIMARY or not, to allow for synced nodes which go out of PRIMARY not to take any writes/reads. Bug fixed #1403566.
  • SST will now fail early if the xtrabackup_checkpoints is missing on the joiner side. Bug fixed #1405985.
  • socat utility was not properly terminated after a timeout. Bug fixed #1409710.
  • When started (without bootstrap), the node would hang if it couldn’t find a primary node. Bug fixed #1413258.
  • 10 seconds timeout in Xtrabackup SST script was not enough for the joiner to delete existing files before it started the socat receiver on systems with big datadir. Bug fixed #1413879.
  • A non-bootstrap node could crash while attempting to perform table cache operations, with the BF applier failed to open_and_lock_tables warning logged. Bug fixed #1414635.
  • Percona XtraDB Cluster 5.6 would crash on ALTER TABLE / CREATE INDEX with Failing assertion: table->n_rec_locks == 0 error. Bug fixed #1282707.
  • Variable length arrays in WSREP code were causing debug builds (-DWITH_DEBUG=ON) to fail. Bug fixed #1409042.
  • Race condition between donor and joiner in Xtrabackup SST configuration has been fixed. This caused XtraBackup SST to fail when joiner took longer to spawn the second listener for SST. Bug fixed #1405668.
  • Signal handling in mysqld has been fixed for SST processes. Bug fixed #1399175.
  • SST processes are now spawned with fork/exec instead of posix_spawn to allow for better cleanup of child processes in event of non-graceful termination (SIGKILL or a crash etc.). Bug fixed #1382797.
  • wsrep_local_cached_downto would underflow when the node on which it is queried had no writesets in gcache. Bug fixed #1262179.
  • A typo in wsrep_provider_options could cause an unhandled exception. Bug fixed #215.
  • Interrupted IST would result in HA_ERR_KEY_NOT_FOUND error in subsequent IST. Bug fixed #210.

Other bugs fixed: #1275814.

Known Issue:

  • For those affected by crashes on the donor during SST due to backup locks (#1401133), add the following option to your my.cnf configuration file as a workaround to force FLUSH TABLES WITH READ LOCK:

    [sst]
    inno-backup-opts='--no-backup-locks'

    (NOTE: This workaround is available only if you’re using Percona XtraBackup 2.2.9 or newer.) Alternatively, you can set the environment variable FORCE_FTWRL to 1 (for passing environment variables, see the description of bug #1381492 in the previous release notes).

Release notes for Percona XtraDB Cluster 5.6.22-25.8 are available in our online documentation along with the installation and upgrade instructions.

Percona XtraDB Cluster code hosting has been moved to GitHub. The Bazaar branches will not be updated further.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

The post Percona XtraDB Cluster 5.6.22-25.8 is now available appeared first on MySQL Performance Blog.

optimize sql query

Lastest Forum Posts - March 5, 2015 - 2:16am
I have the following query that takes almost 30s to execute.

The table techno_tire_hjul has around 1700 entries.

I am using inner join extensively.

Any tips on how to modify the sql to improve the execution time?

select
  `techno_tire_hjul`.`id`, `techno_tire_hjul`.`datum`, `techno_tire_lager`.`namn` as Lager,
  `techno_tire_hjul`.`lagerplats`, `techno_tire_hjul`.`vinter_sommar_dack`, `techno_tire_kategori`.`namn` as Kategori,
  `techno_tire_dacktyp`.`namn` as Dacktyp1, `techno_tire_dacktyp_1`.`namn` as Dacktyp2, `techno_tire_dacktyp_2`.`namn` as Dacktyp3, `techno_tire_dacktyp_3`.`namn` as Dacktyp4,
  `techno_tire_fabrikat`.`namn` as Fabrikat1, `techno_tire_fabrikat_1`.`namn` as Fabrikat2, `techno_tire_fabrikat_2`.`namn` as Fabrikat3, `techno_tire_fabrikat_3`.`namn` as Fabrikat4,
  `techno_tire_hjul`.`dimension_diameter1` as Diameter1, `techno_tire_hjul`.`dimension_diameter2` as Diameter2, `techno_tire_hjul`.`dimension_diameter3` as Diameter3, `techno_tire_hjul`.`dimension_diameter4` as Diameter4,
  `techno_tire_hjul`.`dimension_profil1` as Profil1, `techno_tire_hjul`.`dimension_profil2` as Profil2, `techno_tire_hjul`.`dimension_profil3` as Profil3, `techno_tire_hjul`.`dimension_profil4` as Profil4,
  `techno_tire_hjul`.`dimension_bredd1` as Bredd1, `techno_tire_hjul`.`dimension_bredd2` as Bredd2, `techno_tire_hjul`.`dimension_bredd3` as Bredd3, `techno_tire_hjul`.`dimension_bredd4` as Bredd4,
  `techno_tire_hjul`.`lo1` as LoadIndex1, `techno_tire_hjul`.`lo2` as LoadIndex2, `techno_tire_hjul`.`lo3` as LoadIndex3, `techno_tire_hjul`.`lo4` as LoadIndex4,
  `techno_tire_si`.`namn` as SpeedIndex1, `techno_tire_si_2`.`namn` as SpeedIndex2, `techno_tire_si_3`.`namn` as SpeedIndex3, `techno_tire_si_4`.`namn` as SpeedIndex4,
  `techno_tire_hjul`.`rof1` as RunOnFlat1, `techno_tire_hjul`.`rof2` as RunOnFlat2, `techno_tire_hjul`.`rof3` as RunOnFlat3, `techno_tire_hjul`.`rof4` as RunOnFlat4,
  `techno_tire_hjul`.`monsterdjup1`, `techno_tire_hjul`.`monsterdjup2`, `techno_tire_hjul`.`monsterdjup3`, `techno_tire_hjul`.`monsterdjup4`,
  `techno_tire_hjul`.`regnr`, `techno_tire_hjul`.`kommentar1`, `techno_tire_hjul`.`brukare_fornamn`, `techno_tire_hjul`.`brukare_efteramn`, `techno_tire_hjul`.`brukare_telefon`, `techno_tire_hjul`.`brukare_epost`,
  `techno_tire_falgtyp`.`namn` as Falgtyp1, `techno_tire_falgtyp_1`.`namn` as Falgtyp2, `techno_tire_falgtyp_2`.`namn` as Falgtyp3, `techno_tire_falgtyp_3`.`namn` as Falgtyp4,
  `techno_tire_service_paket`.`namn` as Service_paket, `techno_tire_hjul`.`lagerplats_datum`, `techno_tire_hjul`.`lagerplats_status`, `techno_tire_hjul`.`orderno`, `techno_tire_hjul`.`kund_nr`, `techno_tire_hjul`.`namn`,
  `techno_tire_hjul`.`utleverans_datum_forslag`, `techno_tire_hjul`.`utleverans_datum_forslag_status`, `techno_tire_hjul`.`utleverans_datum`, `techno_tire_hjul`.`transportsparning`
from `techno_tire_hjul` `techno_tire_hjul`
inner join `techno_tire_dacktyp` `techno_tire_dacktyp_1` on `techno_tire_hjul`.`dacktyp1` = `techno_tire_dacktyp_1`.`id`
inner join `techno_tire_kategori` `techno_tire_kategori` on `techno_tire_hjul`.`kategori_id` = `techno_tire_kategori`.`id`
inner join `techno_tire_falgtyp` `techno_tire_falgtyp_2` on `techno_tire_hjul`.`falgtyp3` = `techno_tire_falgtyp_2`.`id`
inner join `techno_tire_dacktyp` `techno_tire_dacktyp_3` on `techno_tire_hjul`.`dacktyp4` = `techno_tire_dacktyp_3`.`id`
inner join `techno_tire_falgtyp` `techno_tire_falgtyp_1` on `techno_tire_hjul`.`falgtyp2` = `techno_tire_falgtyp_1`.`id`
inner join `techno_tire_fabrikat` `techno_tire_fabrikat` on `techno_tire_hjul`.`fabrikat1` = `techno_tire_fabrikat`.`id`
inner join `techno_tire_lager` `techno_tire_lager` on `techno_tire_hjul`.`lager_id` = `techno_tire_lager`.`id`
inner join `techno_tire_falgtyp` `techno_tire_falgtyp` on `techno_tire_hjul`.`falgtyp1` = `techno_tire_falgtyp`.`id`
inner join `techno_tire_fabrikat` `techno_tire_fabrikat_1` on `techno_tire_hjul`.`fabrikat2` = `techno_tire_fabrikat_1`.`id`
inner join `techno_tire_falgtyp` `techno_tire_falgtyp_3` on `techno_tire_hjul`.`falgtyp4` = `techno_tire_falgtyp_3`.`id`
inner join `techno_tire_dacktyp` `techno_tire_dacktyp_2` on `techno_tire_hjul`.`dacktyp3` = `techno_tire_dacktyp_2`.`id`
inner join `techno_tire_fabrikat` `techno_tire_fabrikat_2` on `techno_tire_hjul`.`fabrikat3` = `techno_tire_fabrikat_2`.`id`
inner join `techno_tire_fabrikat` `techno_tire_fabrikat_3` on `techno_tire_hjul`.`fabrikat4` = `techno_tire_fabrikat_3`.`id`
inner join `techno_tire_service_paket` `techno_tire_service_paket` on `techno_tire_hjul`.`service_paket_id` = `techno_tire_service_paket`.`id`
inner join `techno_tire_dacktyp` `techno_tire_dacktyp` on `techno_tire_hjul`.`dacktyp2` = `techno_tire_dacktyp`.`id`
inner join `techno_tire_si` `techno_tire_si` on `techno_tire_hjul`.`si1` = `techno_tire_si`.`id`
inner join `techno_tire_si` `techno_tire_si_2` on `techno_tire_hjul`.`si2` = `techno_tire_si_2`.`id`
inner join `techno_tire_si` `techno_tire_si_3` on `techno_tire_hjul`.`si3` = `techno_tire_si_3`.`id`
inner join `techno_tire_si` `techno_tire_si_4` on `techno_tire_hjul`.`si4` = `techno_tire_si_4`.`id`
where `techno_tire_hjul`.`hjul_status` = 1 and `techno_tire_hjul`.`aterforsaljare_id` = 1000

Percona Server 5.6.23-72.1 is now available

Latest MySQL Performance Blog posts - March 4, 2015 - 9:29am

Percona is glad to announce the release of Percona Server 5.6.23-72.1 on March 4, 2015. Download the latest version from the Percona web site or from the Percona Software Repositories.

Based on MySQL 5.6.23, including all the bug fixes in it, Percona Server 5.6.23-72.1 is the current GA release in the Percona Server 5.6 series. Percona Server is open-source and free – and this is the latest release of our enhanced, drop-in replacement for MySQL. Complete details of this release can be found in the 5.6.23-72.1 milestone on Launchpad.

New Features:

  • TokuDB storage engine package has been updated to version 7.5.6.

Bugs Fixed:

  • RPM pre-install script assumed that the PID file was always located in the datadir. If it was not, the script could make a wrong assumption during installation about whether the server was running. Bug fixed #1201896.
  • SHOW GRANTS displayed only the privileges granted explicitly to the named account. Other effectively available privileges were not displayed. Fixed by implementing Extended SHOW GRANTS feature. Bug fixed #1354988 (upstream #53645).
  • InnoDB lock monitor output was printed even if it was not requested. Bug fixed #1418996.
  • The stored procedure key was made consistent with other keys in the Slow Query Log by replacing space with an underscore. Bug fixed #1419230.
  • Some --big-test MTR tests were failing for Percona Server because they weren’t updated. Bug fixed #1419827.

Other bugs fixed: #1408232, and #1420303.

Release notes for Percona Server 5.6.23-72.1 are available in the online documentation. Please report any bugs on the Launchpad bug tracker.

The post Percona Server 5.6.23-72.1 is now available appeared first on MySQL Performance Blog.

Percona Server 5.5.42-37.1 is now available

Latest MySQL Performance Blog posts - March 4, 2015 - 7:22am


Percona is glad to announce the release of Percona Server 5.5.42-37.1 on March 4, 2015. Based on MySQL 5.5.42, including all the bug fixes in it, Percona Server 5.5.42-37.1 is now the current stable release in the 5.5 series.

Percona Server is open-source and free. Details of the release can be found in the 5.5.42-37.1 milestone on Launchpad. Downloads are available here and from the Percona Software Repositories.

Bugs Fixed:

  • RPM pre-install script assumed that the PID file was always located in the datadir. If it was not, the script could make a wrong assumption during installation about whether the server was running. Bug fixed #1201896.
  • SHOW GRANTS displayed only the privileges granted explicitly to the named account. Other effectively available privileges were not displayed. Fixed by implementing Extended SHOW GRANTS feature. Bug fixed #1354988 (upstream #53645).
  • InnoDB lock monitor output was printed even if it was not requested. Bug fixed #1418996.
  • The stored procedure key was made consistent with other keys in the Slow Query Log by replacing space with an underscore. Bug fixed #1419230.

Other bugs fixed: #1408232, #1415843 (upstream #75642), #1407941, and #1424568 (upstream #75868).

(Please also note that Percona Server 5.6 series is the latest General Availability series and current GA release is 5.6.23-72.1.)

Release notes for Percona Server 5.5.42-37.1 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

The post Percona Server 5.5.42-37.1 is now available appeared first on MySQL Performance Blog.

Introducing ‘MySQL 101,’ a 2-day intensive educational track at Percona Live this April 15-16

Latest MySQL Performance Blog posts - March 3, 2015 - 9:18am

Talking with Percona Live attendees last year I heard a couple of common themes. First, people told me that there is a lot of great advanced content at Percona Live but there is not much for people just starting to learn the ropes with MySQL. Second, they would like us to find a way to make such basic content less expensive.

I’m pleased to say we’re able to accommodate both of these wishes this year at Percona Live! We have created a two-day intensive track called “MySQL 101” that runs April 15-16. MySQL 101 is designed for developers, system administrators and DBAs familiar with other databases but not with MySQL. And of course it’s ideal for anyone else who would like to expand their professional experience to include MySQL. The sessions are designed to lay a solid foundation on many aspects of MySQL development, design and operations.

As for the price: Just $101 for both full days, but only if you are among the first 101 people to register using the promo code “101” at checkout.  After that the price returns to $400 (still a great price!).

The MySQL 101 registration pass includes full access to the Percona Live expo hall (and all the fun stuff happening out there) as well as keynotes, which will inform you about the most significant achievements in the MySQL ecosystem.

As there is so much information to cover in the MySQL 101 track, we’re running two sessions in parallel – one geared more toward developers using MySQL and the other toward sysadmins and MySQL DBAs, focusing more on database operations. Though I want to point out that you do not have to choose one track to attend exclusively, but rather can mix and match sessions depending on what is most relevant to your specific circumstances.

I will be leading a couple of tracks myself, alongside many other Percona experts who are joining me for those two days!

Here’s a peek at just some of the many classes on the MySQL 101 agenda:

You can see the full MySQL 101 agenda here. Don’t forget the promo code “101” and please feel free to ask any questions below. I hope to see you in Santa Clara at Percona Live! The conference runs April 13-16 in sunny Santa Clara, California.

The post Introducing ‘MySQL 101,’ a 2-day intensive educational track at Percona Live this April 15-16 appeared first on MySQL Performance Blog.

UPDATE with LIMIT 1

Lastest Forum Posts - March 3, 2015 - 9:17am
Given a table with a primary key, if I do an UPDATE with WHERE `primaryKey` = "unique value", would LIMIT 1 give any speed optimization? I was told that LIMIT 1 would speed up the query under certain circumstances but if so, under which circumstances?

Improving Percona XtraDB Cluster SST startup with Google Compute Engine snapshots

Latest MySQL Performance Blog posts - March 3, 2015 - 6:36am

As the need for information grows, so does the size of the data we need to keep in our databases. SST is unavoidable for spinning up new nodes in a PXC cluster, and when datasets reach the terabyte range this becomes ever more cumbersome, requiring many hours for a new node to synchronize.

More often than not, it is necessary to implement custom "wsrep_sst" scripts or resort to manual synchronization processes. Luckily, cloud providers offer convenient methods to leverage disk snapshots that can be used to quickly transfer data between nodes.

This article deals with the actions needed to perform a snapshot on Google’s Compute Engine (GCE) infrastructure. A similar method can be used on AWS EC2 instances using EBS snapshots, or with any other form of snapshots such as LVM, ZFS or SAN. The steps described can be used to add a new node to a PXC cluster or to avoid SST. The same procedure can also be used to take advantage of the performance benefit of GCE snapshots, or for adding a regular slave, provided the binary log co-ordinates have been captured. This article assumes your "datadir" is on a separate disk from your operating system partition and uses the "ext4" filesystem:

  1. Select a suitable "donor" node; we will use "node1" for this example.
  2. Stop the MySQL service on "node1", or perform an FTWRL with the MySQL service running on a node which is in "desync/donor" mode:

     # Take the snapshot from a stopped instance
     [root@node1 /] service mysql stop & tail -f /var/log/mysql/error.log

     # OR alternatively take the snapshot from a 'desynced' node
     ### desync from cluster replication
     mysql> set global wsrep_desync=ON;
     ### get FTWRL
     mysql> flush tables with read lock;

  3. While the MySQL service is down on "node1" or the FTWRL is held, create a snapshot of the disk in the Google Developer Console or using the GCE API (this assumes that the datadir is located on a separate standalone disk). This part of the process takes around 15 minutes for a 3.5 TB disk.

     gcloud compute disks snapshot node1-datadir-disk --snapshot-name node1-datadir-disk-snapshot-1

  4. As soon as the snapshot has completed, start the MySQL service on "node1" (verifying the node has successfully joined the cluster) or release the FTWRL:

     # Depending on the path chosen in step 2, either start MySQL on node1
     [root@node1 /] service mysql start & tail -f /var/log/mysql/error.log

     # OR alternatively release the FTWRL and "sync" the node
     ### release FTWRL
     mysql> unlock tables;
     ### if there is high load on the cluster, monitor wsrep_local_recv_queue
     ### until it reaches 0 before running the following command to rejoin
     ### the cluster replication (otherwise it can be run immediately after
     ### releasing the FTWRL):
     mysql> set global wsrep_desync=OFF;

     ***** IMPORTANT NOTE: In case "node1" is unable to rejoin the cluster or requires an SST, you will need to re-create the snapshot from another node or after SST completes.
  5. Now connect to the "joiner" node; we will use "node2" for this example.
  6. Unmount the existing disk from "node2" (assuming the MySQL service is not running; otherwise stop the MySQL service first):

     [root@node2 /] umount /var/lib/mysql

  7. Detach and delete the disk containing the MySQL datadir from the "node2" instance in the Google Developer Console or using the GCE API:

     gcloud compute instances detach-disk node2 --disk node2-datadir-disk
     gcloud compute disks delete node2-datadir-disk

  8. Create and attach a new disk to the "node2" instance in the Google Developer Console or using the GCE API, using the snapshot you created in step 3. This part of the process takes around 10 minutes for a 3.5 TB disk:

     gcloud compute disks create node2-datadir-disk --source-snapshot node1-datadir-disk-snapshot-1
     gcloud compute instances attach-disk node2 --disk node2-datadir-disk

  9. [ *** LVM only step *** ]: If you are using LVM the device will not show up until you have activated the Volume Group ("vg_mysql_data" in this example):

     # this command will report the available volume groups
     [root@node2 /] vgscan
       Reading all physical volumes.  This may take a while...
       Found volume group "vg_mysql_data" using metadata type lvm2

     # this command will report the available logical volumes, you should see the LV INACTIVE now
     [root@node2 /] lvscan
       INACTIVE            '/dev/vg_mysql_data/lv_mysql' [20.00 TiB] inherit

     # this command will activate all logical volumes within the volume group
     [root@node2 /] vgchange -ay vg_mysql_data

     # this command will report the available logical volumes, you should see the LV ACTIVE now
     [root@node2 /] lvscan
       ACTIVE            '/dev/vg_mysql_data/lv_mysql' [20.00 TiB]

  10. After the device has been added it should show up on the "node2" operating system. You can retrieve the new UUID using the following command (in case you have mounted using "/dev/disk/by-name" and the name of the new disk is the same as the previous one, you do not need to update "/etc/fstab"; e.g. this holds true for VM instances created using the Percona XtraDB click-to-deploy installer):

     [root@node2 /] ls -l /dev/disk/by-uuid/
     total 0
     lrwxrwxrwx 1 root root 10 Feb 14 15:56 4ad2d22b-500a-4ad2-b929-12f38347659c -> ../../sda1
     lrwxrwxrwx 1 root root 10 Feb 19 03:12 9e48fefc-960c-456f-95c9-9d893bcafc62 -> ../../dm-0   # This is the 'new' disk

  11. You can now proceed to adding the new UUID you retrieved in step 10 to "/etc/fstab" (unless you are using "/dev/disk/by-name" with the same disk name) and mount the new disk:

     [root@node2 /] vi /etc/fstab
     ...
     UUID=9e48fefc-960c-456f-95c9-9d893bcafc62 /var/lib/mysql ext4 defaults,noatime 0 0
     ...

     [root@node2 /] mount -a

  12. Verify the data is mounted correctly and that the ownership of the data directory and its contents uses the correct UID / GID for the MySQL user on the destination system (although this is usually OK, it is good to do a quick check):

     [root@node2 /] ls -lhtR /var/lib/mysql/

  13. You are now ready to start MySQL and verify that the node has in fact initialised with IST (provided you have sufficient "gcache" available there shouldn't be any other issues):

     [root@node2 /] service mysql start & tail -f /var/log/mysql/error.log
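The disk swap in steps 6-8 lends itself to a small script. The sketch below only prints the gcloud commands (DRY_RUN=echo); set DRY_RUN to empty to actually run them. It assumes the gcloud CLI is installed and authenticated, and reuses the instance, disk and snapshot names from the walkthrough above:

```shell
#!/bin/sh
# Sketch: replace the joiner's datadir disk with one restored from the
# donor snapshot (steps 6-8 above). With DRY_RUN=echo the commands are
# only printed; set DRY_RUN= to execute them for real.
DRY_RUN=echo
NODE=node2
DISK=node2-datadir-disk
SNAPSHOT=node1-datadir-disk-snapshot-1

$DRY_RUN gcloud compute instances detach-disk "$NODE" --disk "$DISK"
$DRY_RUN gcloud compute disks delete "$DISK" --quiet
$DRY_RUN gcloud compute disks create "$DISK" --source-snapshot "$SNAPSHOT"
$DRY_RUN gcloud compute instances attach-disk "$NODE" --disk "$DISK"
```

Keeping the destructive delete/recreate behind a dry-run flag makes it easy to review the exact commands before committing to them.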

The Percona XtraDB Click-to-deploy tool can be used for automated deployments and further details on creating a cluster on Google Compute Engine using this method can be found in Jay Janssen’s post, “Google Compute Engine adds Percona XtraDB Cluster to click-to-deploy process.”


The post Improving Percona XtraDB Cluster SST startup with Google Compute Engine snapshots appeared first on MySQL Performance Blog.

Replicate PXC from MySQL 5.5 Master?

Lastest Forum Posts - March 3, 2015 - 2:56am
Hi, we are currently investigating moving our current MySQL 5.5 master > 3 x slaves setup (with statement-based replication, approx. 250GB of data and growing) to an HA clustering option. We have ruled out MySQL NDB Cluster as it would require too many changes to our current database structure (large number of BLOB fields, amongst others).

We would want to start off by setting the Cluster up as a Slave to the main Master (for testing etc), before moving over to the Cluster as our Primary system in the future. Is this possible with Percona XtraDB Cluster, and is it a relatively straightforward process?

log rotation oddly broken

Lastest Forum Posts - March 2, 2015 - 12:39pm
I have log rotation set up in /etc/logrotate.d/mysql:

/var/lib/mysql/mysql*.log {
    # create 600 mysql mysql
    notifempty
    daily
    dateext
    rotate 7
    missingok
    compress
    postrotate
        # just if mysqld is really running
        if test -x /usr/bin/mysqladmin && \
           /usr/bin/mysqladmin ping &>/dev/null
        then
            /usr/bin/mysqladmin flush-logs
        fi
    endscript
}

However, this does not appear to be working correctly. On one system log rotation is occurring as desired (a new log file daily), but I have a month of log files (not 7). On another system log rotation is occurring, but writes are happening to a file labeled mysql-slow.log-20150204. Not quite sure where the breakdown is at this point; the blog has some good guidance but maybe it's out of date?

Any help (or logrotate.d examples) would be much appreciated.

Emulating MySQL roles with the Percona PAM plugin and proxy users

Latest MySQL Performance Blog posts - March 2, 2015 - 8:50am

From time to time people wonder how to implement roles in MySQL. This can be useful for companies having to deal with many user accounts or for companies with tight security requirements (PCI or HIPAA for instance). Roles do not exist in regular MySQL, but here is an example of how to emulate them using Percona Server, the PAM plugin and proxy users.

The goal

Say we have 2 databases: db1 and db2, and we want to be able to create 3 roles:

  • db1_dev: can read and write on db1 only.
  • db2_dev: can read and write on db2 only.
  • stats: can read on db1 and db2

For each role, we will create one user: joe (db1_dev), mike (db2_dev) and tom (stats).

Setting up the Percona PAM plugin

The Percona PAM plugin is distributed with Percona Server 5.5 and 5.6. I will be using Percona Server 5.6 in this post and I will authenticate users with /etc/shadow. As explained here, the setup is easy:

  • Make sure /etc/shadow can be read by the mysql user:
    # chgrp mysql /etc/shadow
    # chmod g+r /etc/shadow
  • Install the plugin:
    mysql> INSTALL PLUGIN auth_pam SONAME 'auth_pam.so';
  • Create a /etc/pam.d/mysqld file containing:
    auth required pam_warn.so
    auth required pam_unix.so audit
    account required pam_unix.so audit

Tinkering with the permissions of /etc/shadow may be a security concern. Authenticating users against an LDAP server may be a better option. The configuration of the PAM plugin is similar (replace pam_unix.so with pam_ldap.so and forget the part about /etc/shadow).

Testing authentication with the PAM plugin

Now let’s create a user:

# adduser test_pam
# passwd test_pam

mysql> GRANT ALL PRIVILEGES ON db1.* TO test_pam@localhost IDENTIFIED WITH auth_pam;

And let’s check that the authentication is working as we expect:

mysql -utest_pam -p
Enter password:
mysql> show grants;
+-----------------------------------------------------------+
| Grants for test_pam@localhost                             |
+-----------------------------------------------------------+
| GRANT USAGE ON *.* TO 'test_pam'@'localhost'              |
| GRANT ALL PRIVILEGES ON `db1`.* TO 'test_pam'@'localhost' |
+-----------------------------------------------------------+

That works! We can delete the user and go to the next step.

Creating proxy user

The key to emulating a role is to create a MySQL account whose password nobody knows (so nobody will be able to use it directly). Instead, we will use the PROXY privilege to make sure an anonymous account maps any incoming user to the right MySQL user.

So the first step is to create an anonymous user:

mysql> CREATE USER ''@'' IDENTIFIED WITH auth_pam AS 'mysqld, pam_db1=db1_dev, pam_db2=db2_dev, pam_stats=stats';

The goal of this user is simply to map Unix users in the pam_db1 group to the db1_dev MySQL user, Unix users in the pam_db2 group to the db2_dev MySQL user and Unix users in the pam_stats group to the stats MySQL user.

Creating the proxied accounts

Now we can create the MySQL users corresponding to each of the roles we want to create:

mysql> GRANT SELECT, INSERT ON db1.* TO 'db1_dev'@localhost IDENTIFIED BY 'XXXXX';
mysql> GRANT PROXY ON 'db1_dev'@'localhost' TO ''@'';
mysql> GRANT SELECT, INSERT ON db2.* TO 'db2_dev'@localhost IDENTIFIED BY 'YYYYY';
mysql> GRANT PROXY ON 'db2_dev'@'localhost' TO ''@'';
mysql> GRANT SELECT ON db1.* TO 'stats'@localhost IDENTIFIED BY 'ZZZZZ';
mysql> GRANT SELECT ON db2.* TO 'stats'@localhost;
mysql> GRANT PROXY ON 'stats'@'localhost' TO ''@'';

Creating the Unix user accounts

The last step is to create the Unix users joe, mike and tom and assign them the correct group:

# useradd joe
# passwd joe
# groupadd pam_db1
# usermod -g pam_db1 joe
# useradd mike
# passwd mike
# groupadd pam_db2
# usermod -g pam_db2 mike
# useradd tom
# passwd tom
# groupadd pam_stats
# usermod -g pam_stats tom

Again you may prefer using an LDAP server to avoid creating the users at the OS level.

Testing it out!

Let’s try to connect as mike:

# mysql -umike -p
Enter password:
mysql> show grants;
+----------------------------------------------------------------------------------------------------------------+
| Grants for db2_dev@localhost                                                                                    |
+----------------------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'db2_dev'@'localhost' IDENTIFIED BY PASSWORD '*C1DDB6E980040762275B29A316FD993B4A19C108'  |
| GRANT SELECT, INSERT ON `db2`.* TO 'db2_dev'@'localhost'                                                        |
+----------------------------------------------------------------------------------------------------------------+

Not bad!
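If you want to confirm that the proxy mapping (and not a direct grant) is what logged mike in, the standard session functions are handy: USER() should report the connecting user, CURRENT_USER() the proxied account, and the built-in @@proxy_user session variable the anonymous proxy account:

```sql
mysql> SELECT USER(), CURRENT_USER(), @@proxy_user;
```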

Alternatives

The Percona PAM plugin is not the only option to use roles:

  • MariaDB 10 supports roles from version 10.0.5
  • Oracle distributes a PAM plugin for MySQL 5.5 and MySQL 5.6 as part of the MySQL Enterprise subscription
  • Securich is a set of stored procedures that has many features regarding user management
  • Google has been offering support for roles through its google-mysql-tools for a long time.
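For comparison, here is roughly what the same setup looks like with MariaDB 10's native roles — a sketch reusing the names from above:

```sql
-- Native roles, MariaDB >= 10.0.5
CREATE ROLE db1_dev;
GRANT SELECT, INSERT ON db1.* TO db1_dev;
GRANT db1_dev TO 'joe'@'localhost';
-- joe then activates the role in his session:
SET ROLE db1_dev;
```

No proxy user or PAM plugin is needed in that case.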
Conclusion

Even if they are not officially supported, roles can be emulated with an authentication plugin and a proxy user. Let’s hope that roles will be added in MySQL 5.7!

The post Emulating MySQL roles with the Percona PAM plugin and proxy users appeared first on MySQL Performance Blog.

Weird LSN is in the Future Error after Updating

Latest Forum Posts - March 2, 2015 - 1:47am
Hi!
I have been running a 5-node, multi-instance Galera cluster for 2 years now.
SST is encrypted xtrabackup. The donor does nothing but backups and SST.

I did a minor update to 5.5.39-36.0-55-log / wsrep_25.11.r4023.
The biggest change was the update from xtrabackup 2.1 to xtrabackup 2.2.

Normally I let the node re-sync its content after an update, so I delete grastate.dat.
After starting, and a successful SST, the log started to fill up with:

---
InnoDB: The log sequence number in ibdata files does not match
InnoDB: the log sequence number in the ib_logfiles!
150210 12:57:21 InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
150210 12:57:21 InnoDB: Error: page 2 log sequence number 44208161331
---
It scrolls endlessly until I hard-kill the process.

After stopping the node, cleaning the data dir and restarting it, the SST completes fine and the error is gone.
This happens only once per instance and node, so debugging is really hard!

The same happened on the second node. First I deleted the ib_* files; the node recreated them and crashed again.
Deleting the tables' .ibd files did nothing either.
Deleting the xtrabackup_* files on the joiner fixes the error!
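In case it helps others, the workaround can be sketched like this — the datadir path is whatever your instance uses; here a throwaway directory stands in for it:

```shell
# Stand-in for the joiner's datadir, with leftover SST state files
DATADIR="$(mktemp -d)"
touch "$DATADIR/xtrabackup_checkpoints" "$DATADIR/xtrabackup_info" "$DATADIR/grastate.dat"

# The workaround: wipe the leftover xtrabackup_* files before restarting the node
rm -f "$DATADIR"/xtrabackup_*

ls "$DATADIR"   # only grastate.dat should remain
```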

My current impression is that the new xtrabackup does not delete all essential files; the old version wiped all content.
Does somebody know anything about this?

Greetz
