I reckon there’s little sense in running two or more Percona XtraDB Cluster (PXC) nodes on a single physical server other than for educational and testing purposes – but doing so is still useful in those cases. The most popular way of achieving this seems to be with server virtualization, such as making use of Vagrant boxes. But in the same way you can have multiple instances of MySQL running in parallel at the OS level in the form of concurrent mysqld processes, so too can you have multiple Percona XtraDB Cluster nodes. And the way to achieve this is precisely the same: using dedicated datadirs and different ports for each node.
Which ports?
- the regular MySQL port (default 3306)
- port for group (Galera) communication (default 4567)
- port for State Transfer (default 4444)
- port for Incremental State Transfer (default: the group communication port + 1 = 4568)
Of course, when you have multiple instances on the same server the default values won’t work for all of them, so we need to define new ports for the additional instances and make sure the local firewall, if there is one active (iptables, SELinux, …), is open to them.
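As an illustration only – assuming the second instance will use port 3307 for MySQL and 5020/5021 for group communication and IST, the values configured later in this post – opening those ports with iptables on CentOS 6 could look roughly like this (a sketch to adapt to your own firewall rules):

# Sketch: open the extra ports used by the second PXC instance (adjust to your environment)
$ iptables -I INPUT -p tcp --dport 3307 -j ACCEPT   # MySQL client port of the second node
$ iptables -I INPUT -p tcp --dport 5020 -j ACCEPT   # Galera group communication (base_port)
$ iptables -I INPUT -p tcp --dport 5021 -j ACCEPT   # IST (base_port + 1)
$ iptables -I INPUT -p tcp --dport 4444 -j ACCEPT   # SST
$ service iptables save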
Installing Percona XtraDB Cluster, configuring and starting the first node
My test server was a fresh CentOS 6.5 configured with the Percona yum repository, from which I installed the latest Percona XtraDB Cluster (5.6.20-25.7.888.el6); note that you’ll need the EPEL repository as well to install socat, which is a dependency (see this bug). To avoid confusion, I’ve prevented the mysql service from starting automatically:
$ chkconfig --level 3 mysql off
$ chkconfig --del mysql
I could have installed PXC from the tarball but I decided to do it from the repositories to have all dependencies covered by yum. This is how my initial /etc/my.cnf looked (note the use of default values):
[mysqld]
datadir = /var/lib/mysql
port = 3306
socket = /var/lib/mysql/mysql-node1.sock
pid-file = /var/lib/mysql/mysql-node1.pid
log-error = /var/lib/mysql/mysql-node1.err
binlog_format = ROW
innodb_autoinc_lock_mode = 2
wsrep_provider = /usr/lib64/libgalera_smm.so
wsrep_cluster_name = singlebox
wsrep_node_name = node1
wsrep_cluster_address = gcomm://
I then bootstrapped the cluster by starting this first node as follows:

$ mysqld_safe --defaults-file=/etc/my.cnf --wsrep-new-cluster
You should then be able to access this node through the local socket:
$ mysql -S /var/lib/mysql/mysql-node1.sock
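At this point a quick sanity check – my addition, not part of the original walkthrough – is to confirm the node sees itself as a primary cluster of one:

mysql> SHOW STATUS LIKE 'wsrep_cluster%';

wsrep_cluster_size should report 1 and wsrep_cluster_status should report Primary.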
Configuring and starting the second node
Then I created a similar configuration file for the second instance, which I named /etc/my2.cnf, with the following modifications:
[mysqld]
datadir = /var/lib/mysql2
port = 3307
socket = /var/lib/mysql2/mysql-node2.sock
pid-file = /var/lib/mysql2/mysql-node2.pid
log-error = /var/lib/mysql2/mysql-node2.err
binlog_format = ROW
innodb_autoinc_lock_mode = 2
wsrep_provider = /usr/lib64/libgalera_smm.so
wsrep_cluster_name = singlebox
wsrep_node_name = node2
wsrep_cluster_address = gcomm://127.0.0.1:4567,127.0.0.1:5020
wsrep_provider_options = "base_port=5020;"
Note the use of base_port: with it defined, port 5020 is used for group communication and 5021 (the next one up) is reserved for IST (it’s the same as using gmcast.listen_addr=tcp://127.0.0.1:5020, just simpler).
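For reference, an equivalent but more verbose way of spelling this out through the provider options would be something along these lines (my formulation, not taken from the original configuration):

# Sketch: explicit listen addresses instead of base_port
wsrep_provider_options = "gmcast.listen_addr=tcp://127.0.0.1:5020;ist.recv_addr=127.0.0.1:5021;"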
You need to create the datadir for this second instance and set the right permissions on it, otherwise MySQL won’t be able to create some files (like the .pid and .err ones), though you don’t need to run the mysql_install_db script:
$ mkdir -p /var/lib/mysql2
$ chown -R mysql:mysql /var/lib/mysql2
You can then start this second instance with the following command:
$ mysqld_safe --defaults-file=/etc/my2.cnf
While it starts, watch the log to observe how this second node starts, communicates with the primary node and joins the cluster. In a different terminal from the one where you’ve started the instance, execute:
$ tail -f /var/lib/mysql2/mysql-node2.err
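Once the log shows the joiner has synced, you can confirm the membership from either node; a quick check (again my addition, not from the original walkthrough) would be:

$ mysql -S /var/lib/mysql2/mysql-node2.sock -e "SHOW STATUS LIKE 'wsrep_cluster_size'"

which should now report a value of 2.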
Remember that at any time you can use mysqladmin to stop the nodes; you only need to provide the right socket as an argument, as follows:
$ mysqladmin -S /var/lib/mysql/mysql-node1.sock shutdown
Finally, once you have the whole cluster up you should edit the my.cnf of the first node with a complete wsrep_cluster_address, as shown in /etc/my2.cnf above.
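In this loopback setup that line would end up identical on both nodes, so for the first node it would presumably read:

wsrep_cluster_address = gcomm://127.0.0.1:4567,127.0.0.1:5020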
Using mysqld_multi
My last blog post was on running multiple instances of MySQL with mysqld_multi. It applies here as well; the only exception is that you need to make sure to use “wsrep_cluster_address=gcomm://” in the first node whenever you bootstrap the cluster – and remember to start it before the other nodes.
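To give an idea, a consolidated my.cnf for mysqld_multi might look something like the sketch below – the [mysqld1]/[mysqld2] group names and the trimmed set of options are illustrative, not a copy of my actual file:

[mysqld_multi]
mysqld     = /usr/bin/mysqld_safe
mysqladmin = /usr/bin/mysqladmin

[mysqld1]
datadir = /var/lib/mysql
port    = 3306
socket  = /var/lib/mysql/mysql-node1.sock
wsrep_provider = /usr/lib64/libgalera_smm.so
wsrep_cluster_name = singlebox
wsrep_node_name = node1
# empty address only while bootstrapping; switch to the full list afterwards
wsrep_cluster_address = gcomm://

[mysqld2]
datadir = /var/lib/mysql2
port    = 3307
socket  = /var/lib/mysql2/mysql-node2.sock
wsrep_provider = /usr/lib64/libgalera_smm.so
wsrep_cluster_name = singlebox
wsrep_node_name = node2
wsrep_cluster_address = gcomm://127.0.0.1:4567,127.0.0.1:5020
wsrep_provider_options = "base_port=5020;"

You would then bring the cluster up with mysqld_multi start 1 followed, once the first node is running, by mysqld_multi start 2.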
The only advantage I see in using mysqld_multi is that it facilitates the management (start/stop) of the nodes and concentrates all configuration in a single my.cnf file. In any case, you shouldn’t be running a PXC cluster on a single box for any purpose other than educational.
Adding a second Percona XtraDB Cluster node to a production server
What if you have a production cluster composed of multiple physical servers and you want to add a second node to one of them? It works the same way – you’ll just need to use the server’s IP address when configuring it instead of the loopback network interface. Here’s an example of a PXC cluster initially composed of three nodes: 192.168.70.1, 192.168.70.2, and 192.168.70.3. I’ve added a 4th node running on the server that is already hosting the 3rd – the wsrep_cluster_address line looks as follows after the changes:
wsrep_cluster_address = gcomm://192.168.70.1,192.168.70.2,192.168.70.3:4567,192.168.70.3:5020
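The remaining options of this 4th node mirror those of the second instance above, only bound to the server’s address instead of the loopback; roughly (a sketch, node name assumed):

datadir = /var/lib/mysql2
port    = 3307
wsrep_node_name = node4
wsrep_cluster_address = gcomm://192.168.70.1,192.168.70.2,192.168.70.3:4567,192.168.70.3:5020
wsrep_provider_options = "base_port=5020;"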
Additional resources
We have a documentation page on “How to setup 3 node cluster on single box” that contains more details of what I’ve covered above, with a slightly different approach.
Great tutorial, Fernando. I would like to see more of these (if you accept suggestions), for example applied to setting them up using Docker, ready-to-go VM images on EC2, or integrated into automatic deployment tools like Chef or Ansible.
We as DBAs like to get our hands dirty and configure things manually, but a good deployment engineer should automate everything!
And a question: would it be worth it in this case to customize the xtrabackup SST script so that it did not use TCP but just a local rsync/cp?
Hi, Jaime!
Your input is an interesting one. There’s some appeal in systems such as Docker and Vagrant (I’ve gotten to use the latter a lot for testing), and we see more and more customers using Chef, Puppet and Ansible, so integrating MySQL configuration with those configuration management tools is a must.
Regarding the choice of wsrep_sst_method, I would like to make a few comments:
1) “xtrabackup-v2” is the default now, and the recommended option for PXC 5.5.34 and above
2) “rsync” is also a valid option, and even preferred in certain cases. I’ve tried it on my test setup, but it does open TCP connections and uses, as expected, port 4444 for SST
3) custom_script_name: it allows users to create their own custom script for performing SST, and this might be all you need to come up with a custom solution (e.g. a local ‘cp’) for SST.
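For instance, as far as I know the SST machinery simply looks for an executable named wsrep_sst_<method> in the server’s PATH, so a hypothetical custom method could be wired in with something like:

# hypothetical: PXC will invoke an executable named wsrep_sst_mymethod
wsrep_sst_method = mymethod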
FIXED: I had mixed things up above and written “(it’s the same as using gmcast.listen_addr=tcp://127.0.0.1:5021, just simpler)” – it’s actually “:5020”, the same as base_port; :5021 then remains reserved for IST (“ist.recv_addr”).
inspired from me i guess :) fernando… this was really helpful for me 🙂