High Availability with MySQL Fabric: Part I

Latest MySQL Performance Blog posts - May 15, 2014 - 6:00am

In our previous post, we introduced the MySQL Fabric utility and said we would dig deeper into it. This post is the first part of our test of MySQL Fabric’s High Availability (HA) functionality.

Today, we’ll review MySQL Fabric’s HA concepts, and then walk you through the setup of a 3-node cluster with one Primary and two Secondaries, doing a few basic tests with it. In a second post, we will spend more time generating failure scenarios and documenting how Fabric handles them. (MySQL Fabric is an extensible framework to manage large farms of MySQL servers, with support for high-availability and sharding.)

Before we begin, we recommend you read this post by Oracle’s Mats Kindahl, which, among other things, addresses the issues we raised in our first post. Mats leads the MySQL Fabric team.

Our lab

All our tests will use our Vagrant test environment (https://github.com/martinarrieta/vagrant-fabric).

If you want to play with MySQL Fabric, you can have these VMs running on your desktop by following the instructions in the README file. If you don’t want full VMs, our colleague Jervin Real created a set of wrapper scripts that let you test MySQL Fabric using sandboxes.

Here is a basic representation of our environment.

Set up

To set up MySQL Fabric without using our Vagrant environment, you can follow the official documentation, or check the Ansible playbooks in our lab repo. If you follow the manual, the only caveat is that when creating the user, you should either disable binary logging for your session, or use a GRANT statement instead of CREATE USER. You can read here for more info on why this is the case.

A description of all the options in the configuration file can be found here. For HA tests, the one thing to mention is that, in our experience, the failure detector will only trigger an automatic failover if the value for failover_interval in the [failure_tracking] section is greater than 0. Otherwise, failures will be detected and written to the log, but no action will be taken.
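For example, to enable automatic failover you could set it in fabric.cfg like this (a sketch; 60 seconds is only an illustrative value, not a recommendation):

[failure_tracking]
failover_interval = 60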

MySQL configuration

In order to manage a mysqld instance with MySQL Fabric, the following options need to be set in the [mysqld] section of its my.cnf file:

log_bin
gtid-mode=ON
enforce-gtid-consistency
log_slave_updates

Additionally, as in any replication setup, you must make sure that all servers have a distinct server_id.

When everything is in place, you can set up and start MySQL Fabric with the following commands:

[vagrant@store ~]$ mysqlfabric manage setup
[vagrant@store ~]$ mysqlfabric manage start --daemon

The setup command creates the database schema used by MySQL Fabric to store information about managed servers, and the start command, well, starts the daemon. The --daemon option makes Fabric run as a daemon, logging to a file instead of to standard output. Depending on the port and file name you configured in fabric.cfg, this may need to be run as root.

While testing, you can make MySQL Fabric reset its state at any time (though it won’t change existing node configurations such as replication) by running:

[vagrant@store ~]$ mysqlfabric manage teardown
[vagrant@store ~]$ mysqlfabric manage setup

If you’re using our Vagrant environment, you can run the reinit_cluster.sh script from your host OS (from the root of the vagrant-fabric repo) to do this for you, and also to initialise the datadirs of the three instances.

Creating a High Availability Cluster

A High Availability Cluster is a set of servers using standard asynchronous MySQL replication with GTIDs.

Creating a group

The first step is to create the group by running mysqlfabric with this syntax:

$ mysqlfabric group create <group_name>

In our example, to create the cluster “mycluster” you can run:

[vagrant@store ~]$ mysqlfabric group create mycluster
Procedure :
{ uuid = 605b02fb-a6a1-4a00-8e24-619cad8ec4c7, finished = True,
  success = True, return = True, activities = }

Add the servers to the group

The second step is to add the servers to the group. The syntax to add a server to a group is:

$ mysqlfabric group add <group_name> <host_name or IP>[:port]

The port number is optional and only required if distinct from 3306. It is important to mention that the clients that will use this cluster must be able to resolve this host name or IP, because clients connect directly both to MySQL Fabric’s XML-RPC server and to the managed mysqld servers. Let’s add the nodes to our group.

[vagrant@store ~]$ for i in 1 2 3; do mysqlfabric group add mycluster node$i; done
Procedure :
{ uuid = 9d65c81c-e28a-437f-b5de-1d47e746a318, finished = True,
  success = True, return = True, activities = }
Procedure :
{ uuid = 235a7c34-52a6-40ad-8e30-418dcee28f1e, finished = True,
  success = True, return = True, activities = }
Procedure :
{ uuid = 4da3b1c3-87cc-461f-9705-28a59a2a4f67, finished = True,
  success = True, return = True, activities = }

Promote a node to master

Now that we have all our nodes in the group, we have to promote one of them. You can promote one specific node, or you can let MySQL Fabric choose one for you.

The syntax to promote a specific node is:

$ mysqlfabric group promote <group_name> --slave_uuid='<node_uuid>'

or to let MySQL Fabric pick one:

$ mysqlfabric group promote <group_name>

Let’s do that:

[vagrant@store ~]$ mysqlfabric group promote mycluster
Procedure :
{ uuid = c4afd2e7-3864-4b53-84e9-04a40f403ba9, finished = True,
  success = True, return = True, activities = }

You can then check the health of the group like this:

[vagrant@store ~]$ mysqlfabric group health mycluster
Command :
{ success = True
  return = {'e245ec83-d889-11e3-86df-0800274fb806': {'status': 'SECONDARY', 'is_alive': True, 'threads': {}},
            'e826d4ab-d889-11e3-86df-0800274fb806': {'status': 'SECONDARY', 'is_alive': True, 'threads': {}},
            'edf2c45b-d889-11e3-86df-0800274fb806': {'status': 'PRIMARY', 'is_alive': True, 'threads': {}}}
  activities = }

One current limitation of the ‘health’ command is that it only identifies servers by their UUID. To get a list of the servers in a group, along with a quick status summary and their host names, use lookup_servers instead:

[vagrant@store ~]$ mysqlfabric group lookup_servers mycluster
Command :
{ success = True
  return = [{'status': 'SECONDARY', 'server_uuid': 'e245ec83-d889-11e3-86df-0800274fb806', 'mode': 'READ_ONLY', 'weight': 1.0, 'address': 'node1'},
            {'status': 'SECONDARY', 'server_uuid': 'e826d4ab-d889-11e3-86df-0800274fb806', 'mode': 'READ_ONLY', 'weight': 1.0, 'address': 'node2'},
            {'status': 'PRIMARY', 'server_uuid': 'edf2c45b-d889-11e3-86df-0800274fb806', 'mode': 'READ_WRITE', 'weight': 1.0, 'address': 'node3'}]
  activities = }

We submitted a merge request to return a JSON string instead of the printed representation of the object in the “return” field of the XML-RPC response, so that the information can be used to display results in a friendlier way. In the same merge request, we also added the servers’ addresses to the health command output.

Failure detection

Now we have the three lab machines set up in a replication topology of one master (the PRIMARY server) and two slaves (the SECONDARY ones). To make MySQL Fabric start monitoring the group for problems, you need to activate it:

[vagrant@store ~]$ mysqlfabric group activate mycluster
Procedure :
{ uuid = 230835fc-6ec4-4b35-b0a9-97944c18e21f, finished = True,
  success = True, return = True, activities = }

Now MySQL Fabric will monitor the group’s servers and, depending on the configuration (remember the failover_interval we mentioned before), it may trigger an automatic failover. But let’s start by testing a simpler case: stopping mysqld on one of the secondary nodes:

[vagrant@node2 ~]$ sudo service mysqld stop
Stopping mysqld:                                           [  OK  ]

And checking how MySQL Fabric reports the group’s health after this:

[vagrant@store ~]$ mysqlfabric group health mycluster
Command :
{ success = True
  return = {'e245ec83-d889-11e3-86df-0800274fb806': {'status': 'SECONDARY', 'is_alive': True, 'threads': {}},
            'e826d4ab-d889-11e3-86df-0800274fb806': {'status': 'FAULTY', 'is_alive': False, 'threads': {}},
            'edf2c45b-d889-11e3-86df-0800274fb806': {'status': 'PRIMARY', 'is_alive': True, 'threads': {}}}
  activities = }

We can see that MySQL Fabric successfully marks the server as faulty. In our next post we’ll show an example of using one of the supported connectors to handle failures in a group; for now, let’s stay on the DBA/sysadmin side of things and try to bring the server back online:

[vagrant@node2 ~]$ sudo service mysqld start
Starting mysqld:                                           [  OK  ]
[vagrant@store ~]$ mysqlfabric group health mycluster
Command :
{ success = True
  return = {'e245ec83-d889-11e3-86df-0800274fb806': {'status': 'SECONDARY', 'is_alive': True, 'threads': {}},
            'e826d4ab-d889-11e3-86df-0800274fb806': {'status': 'FAULTY', 'is_alive': True, 'threads': {}},
            'edf2c45b-d889-11e3-86df-0800274fb806': {'status': 'PRIMARY', 'is_alive': True, 'threads': {}}}
  activities = }

So the server is back online, but Fabric still considers it faulty. To add the server back into rotation, we need to look at the server commands:

[vagrant@store ~]$ mysqlfabric help server
Commands available in group 'server' are:
server set_weight uuid weight  [--synchronous]
server lookup_uuid address
server set_mode uuid mode  [--synchronous]
server set_status uuid status  [--update_only] [--synchronous]

The specific command we need is set_status, and in order to add the server back to the group, we need to change its status twice: first to SPARE and then back to SECONDARY. You can see what happens if we try to set it to SECONDARY directly:

[vagrant@store ~]$ mysqlfabric server set_status e826d4ab-d889-11e3-86df-0800274fb806 SECONDARY
Procedure :
{ uuid = 9a6f2273-d206-4fa8-80fb-6bce1e5262c8, finished = True,
  success = False,
  return = ServerError: Cannot change server's (e826d4ab-d889-11e3-86df-0800274fb806) status from (FAULTY) to (SECONDARY).,
  activities = }

So let’s try it the right way:

[vagrant@store ~]$ mysqlfabric server set_status e826d4ab-d889-11e3-86df-0800274fb806 SPARE
Procedure :
{ uuid = c3a1c244-ea8f-4270-93ed-3f9dfbe879ea, finished = True,
  success = True, return = True, activities = }
[vagrant@store ~]$ mysqlfabric server set_status e826d4ab-d889-11e3-86df-0800274fb806 SECONDARY
Procedure :
{ uuid = 556f59ec-5556-4225-93c9-b9b29b577061, finished = True,
  success = True, return = True, activities = }

And check the group’s health again:

[vagrant@store ~]$ mysqlfabric group health mycluster
Command :
{ success = True
  return = {'e245ec83-d889-11e3-86df-0800274fb806': {'status': 'SECONDARY', 'is_alive': True, 'threads': {}},
            'e826d4ab-d889-11e3-86df-0800274fb806': {'status': 'SECONDARY', 'is_alive': True, 'threads': {}},
            'edf2c45b-d889-11e3-86df-0800274fb806': {'status': 'PRIMARY', 'is_alive': True, 'threads': {}}}
  activities = }

In our next post, when we discuss how to use the Fabric aware connectors, we’ll also test other failure scenarios like hard VM shutdown and network errors, but for now, let’s try the same thing but on the PRIMARY node instead:

[vagrant@node3 ~]$ sudo service mysqld stop
Stopping mysqld:                                           [  OK  ]

And let’s check the servers again:

[vagrant@store ~]$ mysqlfabric group lookup_servers mycluster
Command :
{ success = True
  return = [{'status': 'SECONDARY', 'server_uuid': 'e245ec83-d889-11e3-86df-0800274fb806', 'mode': 'READ_ONLY', 'weight': 1.0, 'address': 'node1'},
            {'status': 'PRIMARY', 'server_uuid': 'e826d4ab-d889-11e3-86df-0800274fb806', 'mode': 'READ_WRITE', 'weight': 1.0, 'address': 'node2'},
            {'status': 'FAULTY', 'server_uuid': 'edf2c45b-d889-11e3-86df-0800274fb806', 'mode': 'READ_WRITE', 'weight': 1.0, 'address': 'node3'}]
  activities = }

We can see that MySQL Fabric successfully marked node3 as FAULTY, and promoted node2 to PRIMARY to resolve this. Once we start mysqld again on node3, we can add it back as SECONDARY using the same process of setting its status to SPARE first, as we did for node2 above.

Remember that unless failover_interval is greater than 0, MySQL Fabric will detect problems in an active group, but it won’t take any automatic action. We think it’s a good thing that the documented value for this variable is 0, so that automatic failover is not enabled by default (if people follow the manual, of course): even in mature HA solutions like Pacemaker, automatic failover is tricky to get right. But even without it, we believe the main benefit of using MySQL Fabric for promotion is that it takes care of reconfiguring replication for you, which should reduce the risk of error in this process, especially once the project becomes GA.

What’s next

In this post we’ve presented a basic replication setup managed by MySQL Fabric and reviewed a couple of failure scenarios, but many questions are left unanswered, among them:

  • What happens to clients connected with a Fabric aware driver when there is a status change in the cluster?
  • What happens when the XML-RPC server goes down?
  • How can we improve its availability?

We’ll try to answer these and other questions in our next post. If you have some questions of your own, please leave them in the comments section and we’ll address them in the next or other posts, depending on the topic.


Why ALTER TABLE runs faster on Percona Server 5.5 vs. MySQL 5.5

Latest MySQL Performance Blog posts - May 15, 2014 - 3:00am

Some of us Perconians are at the OpenStack summit this week in Atlanta. Matt Griffin, our director of product management, tweeted about the turbo-hipster CI talk, which mentioned ALTER TABLE statements running faster on Percona Server. Oracle’s Morgan Tocker then tweeted in response, asking why this was the case. I decided that the simplest way to answer that was here in this post.

The reason for this is the expand_fast_index_creation feature of Percona Server. I did a quick schema change on MySQL 5.5 and Percona Server 5.5 to demonstrate this (in the talk, the speaker mentioned that these versions were used).

The schema modifications in the talk fall into two categories: those that can use fast index creation and those that cannot.

I ran the following tests on my laptop, on a sysbench table with 300k records.

Vanilla MySQL 5.5:

mysql> alter table sbtest1 add index idx_c(c);
Query OK, 0 rows affected (4.37 sec)

Percona Server 5.5:

mysql> alter table sbtest1 add index idx_c(c);
Query OK, 0 rows affected (3.90 sec)

We know that this used fast index creation from the “0 rows affected”. In this case, there is no substantial difference between the two servers; besides, my laptop, with CPU frequency scaling, probably doesn’t have the most consistent performance in the world.

For the second schema change, I added a column, an operation that copies the table.

Vanilla MySQL 5.5:

mysql> alter table sbtest1 add column d int default 0;
Query OK, 300000 rows affected (37.05 sec)
Records: 300000  Duplicates: 0  Warnings: 0

Percona Server 5.5:

mysql> alter table sbtest1 add column d int default 0;
Query OK, 300000 rows affected (9.51 sec)
Records: 300000  Duplicates: 0  Warnings: 0

The reason for this speed difference is that in Percona Server’s case, for the table copy, the table is created with only a primary key, and the secondary indexes are built at the end of the process (rather than on the fly). For more details, check Alexey’s blog post on this topic.
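The feature is controlled by a server variable of the same name, so you can check whether it is enabled and toggle it per session (a quick sketch; the default and scope may vary between Percona Server releases):

mysql> SELECT @@expand_fast_index_creation;
mysql> SET expand_fast_index_creation = ON;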

This can be tuned further by adjusting innodb_merge_sort_block_size (in Percona Server 5.6, this is replaced by innodb_sort_buffer_size).

mysql> select @@innodb_merge_sort_block_size/1024/1024;
+------------------------------------------+
| @@innodb_merge_sort_block_size/1024/1024 |
+------------------------------------------+
|                               1.00000000 |
+------------------------------------------+
1 row in set (0.00 sec)

mysql> set innodb_merge_sort_block_size=8*1024*1024;
Query OK, 0 rows affected (0.00 sec)

mysql> alter table sbtest1 add column d int default 0;
Query OK, 300000 rows affected (8.61 sec)
Records: 300000  Duplicates: 0  Warnings: 0

So, to be precise: schema changes are faster in Percona Server when they require a table copy and the tables have secondary indexes.


Tips on benchmarking Go + MySQL

Latest MySQL Performance Blog posts - May 14, 2014 - 9:00am

We just released as open source our new percona-agent (https://github.com/percona/percona-agent), the agent that works with Percona Cloud Tools. The agent is written in Go.

I will give a webinar titled “Monitoring All MySQL Metrics with Percona Cloud Tools” on June 25 that will cover the new features in percona-agent and Percona Cloud Tools, where I will also explain how it works. You are welcome to register now and join me.

There will be more posts about percona-agent, but in the meantime I want to dedicate this one to Go, Go with MySQL and some performance topics.

I have had an interest in the Go programming language for a long time, but in its initial versions I did not quite like the performance of the goroutine scheduler. See my report from more than two years ago on runtime: reduce scheduling contention for large $GOMAXPROCS.

Supposedly this performance issue was fixed in Go 1.1, so this is a good time to revisit my benchmark experiment.

A simple run of prime or Fibonacci number calculations in N threads is quite boring, so I am going to run queries against Percona Server instead. Of course this adds some complications, as there are more moving parts (i.e. the Go scheduler, the Go SQL driver, MySQL itself), but it just makes the experiment more interesting.

Source code of my benchmark: Go-pk-bench.
This is probably not the best example of how to code in Go, but that was not the point of this exercise. This post is really about some tips to take into account when writing an application in Go using a MySQL (Percona Server) database.

So, first, we will need a MySQL driver for Go. The one I used two years ago (https://github.com/Philio/GoMySQL) is quite outdated. The most popular choice today seems to be Go-MySQL-Driver, and this is the one we use for internal development. This driver is based on the standard Go “database/sql” package, which provides a standard Go way to deal with SQL-like databases. “database/sql” seems to work out OK, with some design decisions that are questionable for my taste. So using “database/sql” and Go-MySQL-Driver, you will need to deal with some quirks, like an almost unmanageable connection pool.

The first thing you should take into account is a proper setting of runtime.GOMAXPROCS().

If you do not set it, the Go scheduler will use the default, which is 1, and the binary will use one and only one CPU (so much for a modern concurrent language).

The call runtime.GOMAXPROCS(runtime.NumCPU()) tells the runtime to use all available CPUs. Always remember to use this if you care about multi-threaded performance.
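As a minimal standalone illustration (a sketch, not part of the benchmark code):

package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Before Go 1.5 the default GOMAXPROCS was 1, so an explicit call is
	// needed to let goroutines run in parallel on all cores.
	runtime.GOMAXPROCS(runtime.NumCPU())
	fmt.Println("GOMAXPROCS =", runtime.GOMAXPROCS(0)) // passing 0 just queries the current value
}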

The next problem I faced in the benchmark is that when I ran queries in a loop, i.e. repeated as much as possible…

rows, err := db.Query("select k from sbtest"+strconv.Itoa(tab+1)+" where id = "+strconv.Itoa(i))

… very soon we ran out of TCP ports. Apparently “database/sql” with Go-MySQL-Driver and its smart connection pool creates a NEW CONNECTION for each query. I can’t explain why this happens, but using the following statement:

db.SetMaxIdleConns(10000)

helps (I hope somebody with “database/sql” knowledge will explain what it is doing).
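My understanding, for what it’s worth: SetMaxIdleConns raises the number of idle connections the pool will retain; with the small default, connections above that limit are closed after each query, so a tight query loop opens a new TCP connection every time and burns through ephemeral ports stuck in TIME_WAIT. A minimal sketch of the relevant setup (the DSN and table are placeholders):

package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql" // registers the "mysql" driver with database/sql
)

func main() {
	// sql.Open does not connect yet; it only prepares the pool.
	db, err := sql.Open("mysql", "user:password@tcp(127.0.0.1:3306)/sbtest")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Keep up to 10000 idle connections around for reuse instead of
	// closing them after each query.
	db.SetMaxIdleConns(10000)

	var k int
	if err := db.QueryRow("SELECT k FROM sbtest1 WHERE id = 1").Scan(&k); err != nil {
		log.Fatal(err)
	}
	log.Println("k =", k)
}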

So after these adjustments we can now run the benchmark, which, as you can see from the query, is quite simple: run primary key lookups against Percona Server, which we know scales perfectly in this scenario (I used sysbench to create 64 tables of 1 mln rows each; all of this fits into memory). I am going to run this benchmark with 1, 2, 4, 8, 16, 24, 32, 48 and 64 user threads.

Below you can see graphs for MySQL throughput and CPU usage (both graphs were built using the new metrics graphing in Percona Cloud Tools).

MySQL Throughput (user threads are increasing from 1 to 64)

CPU Usage (user threads are increasing from 1 to 64)

I would say the result scales quite nicely; at least it is much better than it was two years ago. It is interesting to compare with something, so here is a graph from an identical run, but using sysbench + Lua as the main workload driver.

MySQL Throughput (sysbench, user threads are increasing from 1 to 64)

CPU Usage (sysbench, user threads are increasing from 1 to 64)

From the graphs (this is what I like them for), we can clearly see the increase in user CPU utilization (and we are actually able to push the CPUs to 100% in user+system usage), which clearly corresponds to the increased throughput.

And if you are a fan of raw numbers:

MySQL Throughput, q/s (more is better)

Threads | Go-MySQL | sysbench
      1 |   13,189 |   16,765
      2 |   26,837 |   33,534
      4 |   52,629 |   65,943
      8 |   95,553 |  116,953
     16 |  146,979 |  182,781
     24 |  169,739 |  231,895
     32 |  181,334 |  245,939
     48 |  198,238 |  250,497
     64 |  207,732 |  251,972

(Anyone with knowledge of the Universal Scalability Law can draw a prediction up to 1,000 threads; I leave that as homework.)

So, in conclusion, I can say that Go+MySQL is able to show decent results, but it is still not as efficient as plain raw C (sysbench), as it seems to spend some extra CPU time in system calls.

If you want to try these new graphs in Percona Cloud Tools and see how it works with your system – join the free beta!


max_allowed_packet and binary log corruption in MySQL

Latest MySQL Performance Blog posts - May 14, 2014 - 1:00am

The combination of the max_allowed_packet variable and replication in MySQL is a common source of headaches. In a nutshell, max_allowed_packet is the maximum size of a MySQL network protocol packet that the server can create or read. It has a default value of 1MB (<= 5.6.5) or 4MB (>= 5.6.6) and a maximum size of 1GB. This adds some constraints to our replication environment:

  • The master server shouldn’t write events to the binary log larger than max_allowed_packet
  • All the slaves in the replication chain should have the same max_allowed_packet as the master server

Sometimes, even following those two basic rules we can have problems.

For example, there are situations (also called bugs) where the master writes more data than the max_allowed_packet limit, causing the slaves to stop working. To fix this, Oracle created a new variable called slave_max_allowed_packet. This new configuration variable, available from 5.1.64, 5.5.26 and 5.6.6, overrides the max_allowed_packet value for slave threads. Therefore, regardless of the max_allowed_packet value, the slave threads will have a 1GB limit, the default value of slave_max_allowed_packet. A nice trick that works as expected.
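You can check both variables side by side on a slave (assuming a version recent enough to have slave_max_allowed_packet):

mysql> SELECT @@global.max_allowed_packet, @@global.slave_max_allowed_packet;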

Sometimes, even with that workaround, we can get the max_allowed_packet error on the slave servers. That means that there is a packet larger than 1GB, something that shouldn’t happen in a normal situation. Why? It is usually caused by binary log corruption. Let’s look at the following example:

Slave stops working with the following message:

Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'log event entry exceeded max_allowed_packet; Increase max_allowed_packet on master'

The important part is “Got fatal error 1236 from master”. The master cannot read the event it wrote to the binary log seconds ago. To investigate the problem we can:

  • Use mysqlbinlog to read the binary log from the position where it failed, using --start-position.

This is an example taken from our Percona Forums:

#121003 5:22:26 server id 1 end_log_pos 398528
# Unknown event
# at 398528
#960218 6:48:44 server id 1813111337 end_log_pos 1835008
# Unknown event
ERROR: Error in Log_event::read_log_event(): 'Event too big', data_len: 1953066613, event_type: 8
DELIMITER ;
# End of log file

Check the size of the event: 1953066613 bytes. Or the “Unknown event” messages. Something is clearly wrong there. Another usual thing to check is the server id, which sometimes doesn’t correspond to the real value. In this example, the person who posted the binary log event confirmed that the server id was wrong.

  • Check master’s error log.

[ERROR] Error in Log_event::read_log_event(): 'Event too big', data_len: 1953066613, event_type: 8

Again, the event is bigger than expected. There is no way the master and slave can read/write it, so the solution is to skip that event on the slave and rotate the logs on the master. Then, use pt-table-checksum to check data consistency.
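With classic (non-GTID) replication, that usually looks something like the following sketch; the binary log file name and position are placeholders you would take from SHOW MASTER STATUS after rotating:

# On the master: rotate to a fresh binary log
mysql> FLUSH BINARY LOGS;
mysql> SHOW MASTER STATUS;

# On the slave: repoint replication past the corrupted event
mysql> STOP SLAVE;
mysql> CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000013', MASTER_LOG_POS=4;
mysql> START SLAVE;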

MySQL 5.6 includes replication checksums to avoid problems with log corruptions. You can read more about it in Stephan’s blog post.

Conclusion

Errors on slave servers about max_allowed_packet can have very different causes. Although binary log corruption is not a common one, it is something worth checking when you have run out of ideas.


Using Percona Server 5.6 with the Docker open-source engine

Latest MySQL Performance Blog posts - May 13, 2014 - 12:00am

There are a couple of posts about setting up Percona XtraDB Cluster on Vagrant and Percona Server on MySQL Sandbox – those are two of the top tools used by the Percona Support team for testing and bug processing among other things.

In this post, however, I will show you how to use Docker with Percona Server on Ubuntu 12.04.

As per Docker’s official site:

Docker is an open-source engine that automates the deployment of any application as a lightweight, portable, self-sufficient container that will run virtually anywhere.

Docker containers can encapsulate any payload, and will run consistently on and between virtually any server. The same container that a developer builds and tests on a laptop will run at scale, in production*, on VMs, bare-metal servers, OpenStack clusters, public instances, or combinations of the above.

To install Docker on Ubuntu 12.04 you need to follow instructions from Docker’s official documentation:
http://docs.docker.io/installation/ubuntulinux/#ubuntu-precise-1204-lts-64-bit

After installing Docker, you may either download docker images via ‘docker pull’ and store them on your server so you can spin up a new docker container in an instant, or you may choose to do a ‘docker run’ on the terminal, which implicitly downloads/stores the specific docker image from index.docker.io and runs the image afterward.
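For example, to fetch an image ahead of time:

root@Perconallc-Support / # docker pull centos:latest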

root@Perconallc-Support / # docker images | grep centos
centos    centos6    0b443ba03958    2 weeks ago      297.6 MB
centos    latest     0b443ba03958    2 weeks ago      297.6 MB
centos    6.4        539c0211cd76    13 months ago    300.6 MB

Let us create a CentOS docker container by running the following command:

root@Perconallc-Support / # docker run -i -t centos:latest bash
bash-4.1# cat /etc/redhat-release
CentOS release 6.5 (Final)

As you may have noticed, we have just created a new interactive (-i) CentOS 6.5 docker container and ran bash, all in a single command. Detaching from the container is as easy as typing CTRL+p followed by CTRL+q; you’ll get back to your terminal if you typed the right keys.

Verify the active containers:

root@Perconallc-Support / # docker ps
CONTAINER ID    IMAGE             COMMAND    CREATED           STATUS           PORTS    NAMES
202765d754b7    centos:centos6    bash       11 minutes ago    Up 11 minutes             elegant_rosalind

To list all existing containers active or not use the following command:

root@Perconallc-Support / # docker ps -a
CONTAINER ID    IMAGE             COMMAND    CREATED           STATUS                    PORTS    NAMES
202765d754b7    centos:centos6    bash       12 minutes ago    Up 12 minutes                      elegant_rosalind
aae47d193a22    centos:6.4        bash       2 hours ago       Exited (1) 2 hours ago             boring_bardeen

To attach to the active docker container:

root@Perconallc-Support / # docker attach 202765d754b7
bash-4.1#

*Tip: Hit enter twice to get to the container’s bash prompt.*

Install Percona Server 5.6

Now that you have a working docker container you will then have to install the needed packages and repository.

bash-4.1# rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
Retrieving http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
Preparing...                ########################################### [100%]
   1:percona-release        ########################################### [100%]
bash-4.1# yum install Percona-Server-server-56 Percona-Server-client-56 Percona-Server-shared-56
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: ftp.hosteurope.de
 * extras: ftp.plusline.de
 * updates: ftp.plusline.de
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package Percona-Server-client-56.x86_64 0:5.6.17-rel65.0.el6 will be installed
--> Processing Dependency: /usr/bin/perl for package: Percona-Server-client-56-5.6.17-rel65.0.el6.x86_64
---> Package Percona-Server-server-56.x86_64 0:5.6.17-rel65.0.el6 will be installed
--> Processing Dependency: libaio.so.1(LIBAIO_0.1)(64bit) for package: Percona-Server-server-56-5.6.17-rel65.0.el6.x86_64
--> Processing Dependency: libaio.so.1(LIBAIO_0.4)(64bit) for package: Percona-Server-server-56-5.6.17-rel65.0.el6.x86_64
--> Processing Dependency: libaio.so.1()(64bit) for package: Percona-Server-server-56-5.6.17-rel65.0.el6.x86_64
---> Package Percona-Server-shared-56.x86_64 0:5.6.17-rel65.0.el6 will be installed
--> Running transaction check
---> Package libaio.x86_64 0:0.3.107-10.el6 will be installed
---> Package perl.x86_64 4:5.10.1-136.el6 will be installed
--> Processing Dependency: perl-libs = 4:5.10.1-136.el6 for package: 4:perl-5.10.1-136.el6.x86_64
--> Processing Dependency: perl-libs for package: 4:perl-5.10.1-136.el6.x86_64
--> Processing Dependency: perl(version) for package: 4:perl-5.10.1-136.el6.x86_64
--> Processing Dependency: perl(Pod::Simple) for package: 4:perl-5.10.1-136.el6.x86_64
--> Processing Dependency: perl(Module::Pluggable) for package: 4:perl-5.10.1-136.el6.x86_64
--> Processing Dependency: libperl.so()(64bit) for package: 4:perl-5.10.1-136.el6.x86_64
--> Running transaction check
---> Package perl-Module-Pluggable.x86_64 1:3.90-136.el6 will be installed
---> Package perl-Pod-Simple.x86_64 1:3.13-136.el6 will be installed
--> Processing Dependency: perl(Pod::Escapes) >= 1.04 for package: 1:perl-Pod-Simple-3.13-136.el6.x86_64
---> Package perl-libs.x86_64 4:5.10.1-136.el6 will be installed
---> Package perl-version.x86_64 3:0.77-136.el6 will be installed
--> Running transaction check
---> Package perl-Pod-Escapes.x86_64 1:1.04-136.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package                   Arch    Version             Repository    Size
================================================================================
Installing:
 Percona-Server-client-56  x86_64  5.6.17-rel65.0.el6  percona      6.8 M
 Percona-Server-server-56  x86_64  5.6.17-rel65.0.el6  percona       19 M
 Percona-Server-shared-56  x86_64  5.6.17-rel65.0.el6  percona      714 k
Installing for dependencies:
 libaio                    x86_64  0.3.107-10.el6      base          21 k
 perl                      x86_64  4:5.10.1-136.el6    base          10 M
 perl-Module-Pluggable     x86_64  1:3.90-136.el6      base          40 k
 perl-Pod-Escapes          x86_64  1:1.04-136.el6      base          32 k
 perl-Pod-Simple           x86_64  1:3.13-136.el6      base         212 k
 perl-libs                 x86_64  4:5.10.1-136.el6    base         578 k
 perl-version              x86_64  3:0.77-136.el6      base          51 k

Transaction Summary
================================================================================
Install      10 Package(s)

Total download size: 38 M
Installed size: 158 M
Is this ok [y/N]: y
Downloading Packages:
(1/10): Percona-Server-client-56-5.6.17-rel65.0.el6.x86_64.rpm   | 6.8 MB 00:02
(2/10): Percona-Server-server-56-5.6.17-rel65.0.el6.x86_64.rpm   |  19 MB 00:00
(3/10): Percona-Server-shared-56-5.6.17-rel65.0.el6.x86_64.rpm   | 714 kB 00:00
(4/10): libaio-0.3.107-10.el6.x86_64.rpm                         |  21 kB 00:00
(5/10): perl-5.10.1-136.el6.x86_64.rpm                           |  10 MB 00:00
(6/10): perl-Module-Pluggable-3.90-136.el6.x86_64.rpm            |  40 kB 00:00
(7/10): perl-Pod-Escapes-1.04-136.el6.x86_64.rpm                 |  32 kB 00:00
(8/10): perl-Pod-Simple-3.13-136.el6.x86_64.rpm                  | 212 kB 00:00
(9/10): perl-libs-5.10.1-136.el6.x86_64.rpm                      | 578 kB 00:00
(10/10): perl-version-0.77-136.el6.x86_64.rpm                    |  51 kB 00:00
--------------------------------------------------------------------------------
Total                                               7.4 MB/s |  38 MB    00:05
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Warning: RPMDB altered outside of yum.
** Found 2 pre-existing rpmdb problem(s), 'yum check' output follows:
udev-147-2.51.el6.x86_64 has missing requires of /sbin/service
udev-147-2.51.el6.x86_64 has missing requires of MAKEDEV >= ('0', '3.11', None)
  Installing : Percona-Server-shared-56-5.6.17-rel65.0.el6.x86_64    1/10
  Installing : 1:perl-Pod-Escapes-1.04-136.el6.x86_64                2/10
  Installing : 1:perl-Module-Pluggable-3.90-136.el6.x86_64           3/10
  Installing : 4:perl-libs-5.10.1-136.el6.x86_64                     4/10
  Installing : 3:perl-version-0.77-136.el6.x86_64                    5/10
  Installing : 1:perl-Pod-Simple-3.13-136.el6.x86_64                 6/10
  Installing : 4:perl-5.10.1-136.el6.x86_64                          7/10
  Installing : Percona-Server-client-56-5.6.17-rel65.0.el6.x86_64    8/10
  Installing : libaio-0.3.107-10.el6.x86_64                          9/10
  Installing : Percona-Server-server-56-5.6.17-rel65.0.el6.x86_64   10/10
...
Installed:
  Percona-Server-client-56.x86_64 0:5.6.17-rel65.0.el6
  Percona-Server-server-56.x86_64 0:5.6.17-rel65.0.el6
  Percona-Server-shared-56.x86_64 0:5.6.17-rel65.0.el6

Dependency Installed:
  libaio.x86_64 0:0.3.107-10.el6
  perl.x86_64 4:5.10.1-136.el6
  perl-Module-Pluggable.x86_64 1:3.90-136.el6
  perl-Pod-Escapes.x86_64 1:1.04-136.el6
  perl-Pod-Simple.x86_64 1:3.13-136.el6
  perl-libs.x86_64 4:5.10.1-136.el6
  perl-version.x86_64 3:0.77-136.el6

Complete!
bash-4.1# /etc/init.d/mysql start
Starting MySQL (Percona Server). SUCCESS!
bash-4.1# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.6.17-65.0-56 Percona Server (GPL), Release 65.0, Revision 587
Copyright (c) 2009-2014 Percona LLC and/or its affiliates
Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>

There you have it, a freshly installed Percona Server 5.6.17 on CentOS 6.5 Linux container. Yay!

You can now create a new MySQL user to access the server outside the container:

mysql> grant all privileges on *.* to 'test_user'@'%' identified by 'test_pass';
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

root@Perconallc-Support / # mysql -h 172.17.0.2 -utest_user -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.6.17-65.0-56 Percona Server (GPL), Release 65.0, Revision 587
Copyright (c) 2009-2014 Percona LLC and/or its affiliates
Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>

We’re basically following installation instructions for Percona Server from the documentation.

Just the same way as you would do on any server, you can install other equally important Percona software on the same container to complete your docker environment.

As an added bonus, you can contribute to the Docker community by committing your container and pushing it to the docker registry.

Summary

I’ve shown you how easy it is to spin up a docker container and install Percona Server 5.6 in it. This is by far one of the fastest ways to create test/staging environments that simulate production servers. For further reading, you may want to check Docker’s official documentation here.

I hope that this has piqued your interest to try it out yourself. And if that happens we would love to know how Percona Server and other Percona software perform on your docker environment. Cheers!


Practical MySQL performance optimization: May 14 Webinar

Latest MySQL Performance Blog posts - May 12, 2014 - 3:00am

Achieving the best possible MySQL Performance doesn’t have to be complicated. It’s all about knowing which tools are designed for the task at hand – along with some basic (yet often overlooked) best practices.

Join me Wednesday, May 14 at 10 a.m. Pacific for a free webinar titled, “Practical MySQL performance optimization.” I’ll be sharing the main areas for improving MySQL performance – along with what to specifically focus on in each. These will include:

  • Hardware
  • MySQL Configuration
  • Schema and Queries
  • Application Architecture

And as I mentioned earlier, I’ll also show you the best tools for the job and how to use them efficiently. This will help you optimize your time by focusing on the queries that are most important for your application.

At the end of this webinar, you will know how to optimize MySQL performance in the most practical way possible. The webinar is free but I recommend registering now to reserve your spot. I hope to see you on May 14!


GTIDs in MySQL 5.6: New replication protocol; new ways to break replication

Latest MySQL Performance Blog posts - May 9, 2014 - 12:00am

One of the MySQL 5.6 features many people are interested in is Global Transaction IDs (GTIDs). This is for a good reason: reconnecting a slave to a new master has always been a challenge, while it is trivial when GTIDs are enabled. However, using GTIDs is not only about replacing the good old binlog file/position with unique identifiers; it also means using a new replication protocol. And if you are not aware of it, it can bite.

Replication protocols: old vs new

The old protocol is pretty straightforward: the slave connects to a given binary log file at a specific offset, and the master sends all the transactions from there.

The new protocol is slightly different: the slave first sends the range of GTIDs it has executed, and then the master sends every missing transaction. It also guarantees that a transaction with a given GTID can only be executed once on a specific slave.
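Operationally, you opt into the new protocol with auto-positioning when pointing a slave at its master; the host and user below are placeholders:

mysql> CHANGE MASTER TO
    ->   MASTER_HOST = 'new_master',
    ->   MASTER_USER = 'repl',
    ->   MASTER_AUTO_POSITION = 1;
mysql> -- The set of transactions a server has executed can be inspected with:
mysql> SELECT @@GLOBAL.gtid_executed;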

In practice, does it change anything? Well, it may change a lot of things. Imagine the following situation: you want to start replicating from trx 4, but trx 2 is missing on the slave for some reason.

With the old replication protocol, trx 2 will never be executed, while with the new replication protocol it WILL be executed automatically.

Here are 2 common situations where you can see the new replication protocol in action.

Skipping transactions

It is well known that the good old SET GLOBAL sql_slave_skip_counter = N is no longer supported when you want to skip a transaction and GTIDs are enabled. Instead, to skip the transaction with GTID XXX:N, you have to inject an empty transaction:

mysql> SET gtid_next = 'XXX:N';
mysql> BEGIN; COMMIT;
mysql> SET gtid_next = 'AUTOMATIC';

Why can’t we use sql_slave_skip_counter? Because of the new replication protocol!

Imagine that we have 3 servers like the picture below:

Let’s assume that sql_slave_skip_counter is allowed and has been used on S2 to skip trx 2. What happens if you make S2 a slave of S1?

Both servers will exchange the range of executed GTIDs, and S1 will realize that it has to send trx 2 to S2. There are two options then:

  • If trx 2 is still in the binary logs of S1, it will be sent to S2, and the transaction is no longer skipped.
  • If trx 2 no longer exists in the binary logs of S1, you will get a replication error.

This is clearly not safe, which is why sql_slave_skip_counter is not allowed with GTIDs. The only safe way to skip a transaction is to execute a fake (empty) transaction instead of the real one.

Errant transactions

If you execute a transaction locally on a slave (called errant transaction in the MySQL documentation), what will happen if you promote this slave to be the new master?

With the old replication protocol, basically nothing (to be accurate, data will be inconsistent between the new master and its slaves, but that can probably be fixed later).

With the new protocol, the errant transaction will be identified as missing everywhere and will be automatically executed on failover, which has the potential to break replication.

Let’s say you have a master (M), and 2 slaves (S1 and S2). Here are 2 simple scenarios where reconnecting slaves to the new master will fail (with different replication errors):

# Scenario 1

# S1
mysql> CREATE DATABASE mydb;

# M
mysql> CREATE DATABASE IF NOT EXISTS mydb;
# Thanks to 'IF NOT EXISTS', replication doesn't break on S1.

# Now move S2 to S1:
# S2
mysql> STOP SLAVE; CHANGE MASTER TO MASTER_HOST='S1'; START SLAVE;
# This creates a conflict with existing data!
mysql> SHOW SLAVE STATUS\G
[...]
Last_SQL_Errno: 1007
Last_SQL_Error: Error 'Can't create database 'mydb'; database exists' on query. Default database: 'mydb'. Query: 'CREATE DATABASE mydb'
[...]

# Scenario 2

# S1
mysql> CREATE DATABASE mydb;

# Now, we'll remove this transaction from the binary logs
# S1
mysql> FLUSH LOGS;
mysql> PURGE BINARY LOGS TO 'mysql-bin.000008';

# M
mysql> CREATE DATABASE IF NOT EXISTS mydb;

# S2
mysql> STOP SLAVE; CHANGE MASTER TO MASTER_HOST='S1'; START SLAVE;
# The missing transaction is no longer available in the master's binary logs!
mysql> SHOW SLAVE STATUS\G
[...]
Last_IO_Errno: 1236
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.'
[...]

As you can understand, errant transactions should be avoided with GTID-based replication. If you need to run a local transaction, your best option is to disable binary logging for that specific statement:

mysql> SET SQL_LOG_BIN = 0;
mysql> # Run local transaction

Conclusion

GTIDs are a great step forward in the way we are able to reconnect replicas to other servers. But they also come with new operational challenges. If you plan to use GTIDs, make sure you correctly understand the new replication protocol, otherwise you may end up breaking replication in new and unexpected ways.

I’ll do more exploration about errant transactions in a future post.


Percona XtraDB Cluster 5.5.37-25.10 is now available

Latest MySQL Performance Blog posts - May 8, 2014 - 10:40am

Percona is glad to announce the release of Percona XtraDB Cluster 5.5.37-25.10 on May 8, 2014. Binaries are available from the downloads area or from our software repositories.

Based on Percona Server 5.5.37-35.0 including all the bug fixes in it, Galera Replicator 2.10 (including fixes in 2.9 and 2.10 milestones) and on wsrep API 25.10 (including fixes in 25.10), Percona XtraDB Cluster 5.5.37-25.10 is now the current stable release. All of Percona‘s software is open-source and free.

New Features:

  • Percona XtraDB Cluster now supports stream compression/decompression with new xtrabackup-sst compressor/decompressor options.
  • Garbd init script and configuration files have been packaged for CentOS and Debian, in addition, in Debian garbd is packaged separately in percona-xtradb-cluster-garbd-2.x package.
  • New meta packages are now available in order to make the Percona XtraDB Cluster installation easier.
  • Debian/Ubuntu debug packages are now available for Galera and garbd.
  • New SST options have been implemented: inno-backup-opts, inno-apply-opts, inno-move-opts which pass options to backup, apply and move stages of innobackupex.
  • An initial configurable timeout (100 seconds by default) for receiving the first packet via SST has been implemented, so that if the donor dies somewhere in between, the joiner doesn’t hang. The timeout can be configured with the sst-initial-timeout variable.
  • The joiner would wait and not fall back to choosing other potential donor nodes (not listed in wsrep_sst_donor) by their state. This happened even when a comma was added at the end. This has been fixed for that particular case.
  • Support for Query Cache has been implemented.
  • Percona XtraDB Cluster packages are now available for Ubuntu 14.04.

Bugs Fixed:

  • To avoid disabling Parallel Apply in case of SAVEPOINT or ROLLBACK TO SAVEPOINT the wsrep_cert_deps_distance was set to 1.0 at all times. Bug fixed #1277703.
  • First connection would hang after changing the wsrep_cluster_address variable. Bug fixed #1022250.
  • When gmcast.listen_addr variable was set manually it did not allow node’s own address in gcomm address list. Bug fixed #1099478.
  • wsrep_sst_rsync would silently fail on joiner when rsync server port was already taken. Bug fixed #1099783.
  • Server would segfault on INSERT DELAYED with wsrep_replicate_myisam set to 1 due to unchecked dereference of NULL pointer. Bug fixed #1165958.
  • When grastate.dat file was not getting zeroed appropriately it would lead to RBR error during the IST. Bug fixed #1180791.
  • Due to the Provides: line in Percona XtraDB Cluster (which provides Percona-Server-server), the command yum install Percona-Server-server would install Percona XtraDB Cluster instead of the expected Percona Server. Bug fixed #1201499.
  • Replication of partition tables without binlogging enabled failed, partition truncation didn’t work because of lack of TO isolation there. Bug fixed #1219605.
  • Exception during group merge after partitioning event has been fixed. Bug fixed #1232747.
  • Default value for binlog_format is now ROW. This is done so that Percona XtraDB Cluster is not started with wrong defaults leading to non-deterministic outcomes like crash. Bug fixed #1243228.
  • CREATE TABLE AS SELECT was not replicated, if the select result set was empty. Bug fixed #1246921.
  • INSERT would return deadlock instead of duplicate key on secondary unique key collision. Bug fixed #1255147.
  • Joiner node would not initialize storage engines if rsync was used for SST and the first view was non-primary. Bug fixed #1257341.
  • Table level lock conflict resolving was releasing the wrong lock. Bug fixed #1257678.
  • Resolved the perl dependencies needed for Percona XtraDB Cluster 5.5. Bug fixed #1258563.
  • Obsolete dependencies have been removed from Percona XtraDB Cluster. Bug fixed #1259256.
  • GCache file allocation could fail if file size was a multiple of page size. Bug fixed #1259952.
  • Percona XtraDB Cluster didn’t validate the parameters of wsrep_provider_options when starting it up. Bug fixed #1260193.
  • Runtime checks have been added for dynamic variables which are Galera incompatible. Bug fixed #1262188.
  • Node would get stuck and required restart if DDL was performed after FLUSH TABLES WITH READ LOCK. Bug fixed #1265656.
  • xtrabackup-v2 is now used as default SST method in wsrep_sst_method. Bug fixed #1268837.
  • FLUSH TABLES WITH READ LOCK behavior on the same connection was changed to conform to MySQL behavior. Bug fixed #1269085.
  • Read-only detection has been added in clustercheck, which can be helpful during major upgrades (this is used by xinetd for HAProxy etc.) Bug fixed #1269469.
  • The binary log directory is now cleaned up as part of the XtraBackup SST. Bug fixed #1273368.
  • Deadlock would happen when NULL unique key was inserted. Workaround has been implemented to support NULL keys, by using the md5 sum of full row as key value. Bug fixed #1276424.

This release contains almost 100 fixed bugs; the complete list of fixed bugs can be found in our release notes.

Please see the 5.5.37-25.10 release notes for complete details on all of the new features and bug fixes. Help us improve quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Percona XtraDB Cluster Errata can be found in our documentation.


Percona XtraBackup 2.2.2 beta release is now available

Latest MySQL Performance Blog posts - May 8, 2014 - 7:58am

Percona is glad to announce the release of Percona XtraBackup 2.2.2-beta1 on May 8th 2014. Downloads are available from our download site here. This BETA release will be available in our Debian experimental and CentOS testing repositories.

This is a BETA quality release and it is not intended for production. If you want a high-quality, Generally Available release, the current Stable version should be used (Percona XtraBackup 2.1.9 in the 2.1 series at the time of writing).

New Features

  • Percona XtraBackup has now been rebased on MySQL 5.6.17.
  • Percona XtraBackup package is now available for Ubuntu 14.04.

Bugs Fixed

  • Percona XtraBackup couldn’t be built with Bison 3.0. Bug fixed #1262439.
  • The xtrabackup binaries now recognize a new my.cnf option, open_files_limit. The effect is the same as for the server: it changes the maximum number of file descriptors available to the xtrabackup process. The actual limit depends on the platform and ulimit settings. Bug fixed #1183793.
  • If a remote InnoDB tablespace got CREATEd or ALTERed during the backup, an attempt to prepare such a backup later would lead to Percona XtraBackup crash. Bug fixed #1291299.
  • Code in both innobackupex and xtrabackup, that was supposed to make sure no child processes are left running in case innobackupex got killed or failed with an error, relied on the fact the SIGTERM and SIGINT signals were not blocked by the xtrabackup process. However, both SIGTERM and SIGINT might be blocked by the process that had invoked innobackupex, for example, by the Percona XtraDB Cluster server processes doing an SST, in which case they were also blocked by the xtrabackup process, since the signal mask is inherited by child processes. Fixed by replacing SIGTERM in innobackupex and SIGINT in xtrabackup auto-termination with SIGKILL. Bug fixed #1294782.

Release notes with all the bugfixes for Percona XtraBackup 2.2.2 are available in our online documentation. All of Percona‘s software is open source and free, all the details of the release can be found in the 2.2.2 milestone at Launchpad. Bugs can be reported on the launchpad bug tracker.


Doing a rolling upgrade of Percona XtraDB Cluster from 5.5 to 5.6

Latest MySQL Performance Blog posts - May 8, 2014 - 3:00am
Overview

Percona XtraDB Cluster 5.6 has been GA for several months now and people are thinking more and more about moving from 5.5 to 5.6. Most people don’t want to upgrade all at once, but would prefer a rolling upgrade to avoid downtime and ensure 5.6 is behaving in a stable fashion before putting all of production on it. The official guide to a rolling upgrade can be found in the PXC 5.6 manual. This blog post will attempt to summarize the basic process.

However, there are a few caveats to trying to do a rolling 5.6 upgrade from 5.5 (summarized in the configuration sketch after this list):

  1. If you mix Galera 2 and Galera 3 nodes, you must set wsrep_provider_options="socket.checksum=1" on the Galera 3 nodes for backwards compatibility between Galera versions.
  2. You must set some 5.6 settings for 5.5 compatibility with respect to replication:
    1. You can’t enable GTID async replication on the 5.6 nodes
    2. You should use –log-bin-use-v1-row-events on the 5.6 nodes
    3. You should set binlog_checksum=NONE  on the 5.6 nodes
  3. You must not SST a 5.5 donor to a 5.6 joiner as the SST script does not handle mysql_upgrade
  4. You should set the 5.6 nodes read_only and not write to them!
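Putting those caveats together, the 5.6 nodes’ my.cnf might contain something like this while 5.5 nodes remain in the cluster (a sketch, not a complete configuration):

[mysqld]
# Galera 2 <-> Galera 3 wire compatibility
wsrep_provider_options    = "socket.checksum=1"
# Replication compatibility with the 5.5 nodes
log_bin_use_v1_row_events = 1
binlog_checksum           = NONE
gtid_mode                 = OFF
# Keep application writes off the 5.6 pool for now
read_only                 = 1
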
The basic upgrade flow

The basic upgrade flow is:

  1. For some node(s): upgrade to 5.6, do all the above stuff, put them into a read_only pool
  2. Repeat step 1 as desired
  3. Once your 5.6 pool is sufficiently large, cut over writes to the 5.6 pool (turn off read_only, etc.) and upgrade the rest.

This is, in essence, exactly like upgrading a 5.5 master/slave cluster to 5.6 — you upgrade the slaves first, promote a slave and upgrade the master; we just have more masters to think about.

Once your upgrade is fully on 5.6, you can go back through and remove all the 5.5 backwards compatibility settings.

Why can’t I write to the 5.6 nodes?

The heaviest caveat is probably the fact that in a mixed 5.5 / 5.6 cluster, you are not supposed to write to the 5.6 nodes.  Why is that?  Well, the reason goes back to MySQL itself.  PXC/Galera uses standard RBR binlog events from MySQL for replication.   Replication between major MySQL versions is only ever officially supported:

  • across 1 major version (i.e., 5.5 to 5.6, though multiple version hops do often work)
  • from a lower version master to a higher version slave (i.e., 5.5 is your master and 5.6 is your slave, but not the other way around)

This compatibility requirement (which has existed for a very long time in MySQL) works great when you have a single Master replication topology, but true multi-master (multi-writer) has obviously never been considered.

Some alternatives

Does writing to the 5.6 nodes REALLY break things?

The restriction on 5.6 masters of 5.5 slaves is probably too strict in many cases.  Technically, only older-to-newer replication is ever truly supported, but in practice you may be able to run a mixed cluster with writes to all nodes, as long as you are careful.  This means (at least) that column type formats introduced in the newer version must NOT be used while the old version remains active in the cluster.  There might be other issues; I cannot say I’ve tested every possible circumstance.

So, can I truly say I recommend this?  Not officially, but you may find it works fine.  As long as you acknowledge that something unforeseen may break your cluster and your migration plan, it may be reasonable.  If you decide to explore this option, please test it thoroughly, and be willing to accept the consequences of it not working, before trying it in production!

Using Async replication to upgrade

Another alternative: rather than mixing the clusters and keeping the 5.6 nodes read_only, why not just set up the 5.6 cluster as an async slave of your 5.5 cluster and migrate your application to the new cluster when you are ready?  This is practically the same as maintaining a split 5.5/5.6 read_write/read_only cluster, with less risk and a shorter list of don’ts.  Cutover in this case would effectively be like promoting a 5.6 slave to master, except you would promote the whole 5.6 cluster.

One caveat with this approach might be replication throughput: async replication may not be able to keep up with your 5.5 cluster’s writes into a separate 5.6 cluster.  Definitely check out wsrep_preordered to speed things up; it may help.  But realize that some busy workloads may simply never be able to use async replication into another cluster.
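If you experiment with it, the option goes in the receiving 5.6 nodes’ configuration (a sketch; verify against your version’s documentation first):

[mysqld]
# Apply the asynchronously replicated (preordered) events from the 5.5
# cluster in total order alongside Galera's own write sets.
wsrep_preordered = ON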

Just take the outage

A final alternative for this post is the idea of simply upgrading the entire cluster to 5.6 all at once during a maintenance window.  I grant that this defeats the point of a rolling upgrade, but it may offer a lot of simplicity in the long run.

Conclusion

A rolling PXC/Galera upgrade across major MySQL versions is limited by the fact that there is no official support for (and no reason for Oracle to support) replication from a newer master to an older slave.  In practice it may work much of the time, but such setups should be considered carefully, and the risk weighed against all other options.

