Multi-source replication in MySQL 5.7 vs Tungsten Replicator

MySQL 5.7 comes with a new set of features, and multi-source replication is one of them. In a few words, this means that one slave can replicate from several different masters simultaneously.

During the last couple of months I’ve been playing a lot with this trying to analyze its potential in a real case that I’ve been facing while working with a customer.

This was motivated by the fact that my customer is already using multi-sourced slaves with Tungsten Replicator, and I wanted to do a side-by-side comparison between Tungsten Replicator and multi-source replication in MySQL 5.7.

Consider the following scenario:

DB1 is our main master, attending mostly writes from several applications; it also needs to serve read traffic, which is pushing its capacity close to the limit. It has 6 replication slaves attached using regular replication.
A1, A2, A3, B1, B2 and DB7 are reporting slaves used to offload some reads from the master and also working on some offline ETL processes.

Since they had some idle capacity, the customer decided to go further and set up a different architecture:
A1 and B1 also became masters of other slaves using Tungsten Replicator. In this case, group A is a set of servers for a statistics application and group B serves a finance application, so A2, A3 and B2 became multi-sourced slaves.
New applications write directly to A1 and B1 without impacting the write capacity of the main master.

Pros and Cons of this approach

Pros:

  • It just works. We’ve been running this way for a long time now and we haven’t suffered major issues.
  • Tungsten Replicator has some built-in tools and scripts to make slave provisioning easy.

Cons:

  • Tungsten Replicator is a great product, but bigger than needed for this architecture. In some cases we had to configure the Java Virtual Machine with 4GB of RAM to make it work properly.
  • Tungsten is a complex tool that needs some extra expertise to deploy it, make it work and troubleshoot issues when errors happen (e.g. handling duplicate key errors).

With all this in mind, we moved a step forward and started to test whether we could move this architecture to use legacy replication only.

New architecture design:
[New architecture diagram]

We added some storage capacity to DB7 for our testing purposes. The goal here is to replace all the Tungsten-replicated slaves with a single server where all the databases are consolidated.

Because of some data dependencies we weren’t able to completely separate A1 and B1 to become master-only servers, so they currently act as masters of DB7 and slaves of DB1. By data dependency I mean that DB1 replicates its schemas to all of its direct slaves, including DB7. DB7 also receives replication of the finance DB running locally on B1 and the stats DB running locally on A1.

Some details about how this was done and how multi-source replication is implemented:

  • The main difference from regular replication, as known up to version 5.6, is that now you have replication channels; each channel represents a different source. In other words, each master has its own replication channel.
  • Replication needs to be set up as crash safe, meaning that both the master_info_repository and
    relay_log_info_repository variables need to be set to TABLE.
  • We haven’t considered GTID because the servers acting as masters run different versions than our test multi-sourced slave.
  • log_slave_updates needs to be disabled in A1 and B1 to avoid duplicate data in DB7 due to the replication flow.
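As an illustrative sketch of the points above (the server_id value is an assumption; only the two repository settings are required for crash-safe multi-source replication), the relevant my.cnf fragment on the multi-sourced slave would look like:

```ini
# my.cnf fragment on the multi-sourced slave (DB7); server_id is illustrative
[mysqld]
server_id                 = 700
master_info_repository    = TABLE   # required for crash-safe multi-source replication
relay_log_info_repository = TABLE   # required for crash-safe multi-source replication
```

On A1 and B1 the corresponding change is simply leaving log_slave_updates at its default (disabled), so DB1's events are not re-logged and sent to DB7 twice.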

Pros and Cons of this approach

Pros:

  • MySQL 5.7 can replicate from masters running different versions; we tested multi-source replication working with 5.5 and 5.6 masters simultaneously and didn’t suffer problems beyond the known changes to timestamp-based fields.
  • Administration becomes easier. Any DBA already familiar with legacy replication can adapt to handling multiple channels without much learning: some new variables and a couple of new tables and you’re ready to go.

Cons:

  • 5.7 is not production ready yet. At this point we don’t have a GA release date, which means we may expect bugs to appear in the short/mid term.
  • Multi-source is still tricky for some special cases: database and table filtering works globally (you can’t set per-channel filters), and administration commands like sql_slave_skip_counter are still global, which means you can’t easily skip a statement in a particular channel.

Now the fun part: the how

It was easier than you might think. First of all, we needed to start from a backup of the data coming from our masters. Due to the versions used in production (the main master is 5.5; A1 and B1 are 5.6), we started from a logical dump, so we avoided dealing with mysql_upgrade issues.

Disclaimer: this is not intended to be a guide on how to set up multi-source replication.

For our case we did the backup/restore using mydumper/myloader as follows:
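A sketch of the dump step (host names, credentials, dump directories and thread counts are illustrative assumptions, not the exact commands we ran):

```shell
# One consistent dump per source, run against each master
mydumper --host=db1 --user=backup --password=secret \
         --outputdir=/backups/db1 --threads=4
mydumper --host=a1 --user=backup --password=secret \
         --outputdir=/backups/a1 --threads=4
mydumper --host=b1 --user=backup --password=secret \
         --outputdir=/backups/b1 --threads=4
```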

Notice that each dump command was run against its respective master server. Now the restore part:
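Again as an illustrative sketch (credentials and paths are assumptions), each dump is loaded into the new multi-sourced slave:

```shell
# Restore every dump into the new multi-sourced slave (DB7)
myloader --directory=/backups/db1 --user=root --password=secret --threads=4
myloader --directory=/backups/a1  --user=root --password=secret --threads=4
myloader --directory=/backups/b1  --user=root --password=secret --threads=4
```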

So at this point we have a new slave with a copy of the databases from 3 different masters. Just for context: we needed to dump/restore the tungsten* schemas as well, because they are constantly updated by the Replicator (which at this point was still in use). Pretty easy, right?

Now the most important part of this whole process: setting up replication. The procedure is very similar to regular replication, but now we need to know which binlog position to use for each replication channel. This is very easy to get from each backup, in this case by reading the metadata file created by mydumper. Every known backup method (either logical or physical) gives you a way to get the binlog coordinates, for example --master-data=2 in mysqldump or the xtrabackup_binlog_info file in xtrabackup.
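As an illustrative sketch of that step (the exact metadata layout can vary between mydumper versions), a small helper to pull the coordinates out of a mydumper metadata file could look like this:

```python
def parse_mydumper_metadata(text):
    """Extract the binlog file and position from a mydumper metadata file.

    Assumes the 'SHOW MASTER STATUS' section that mydumper writes, e.g.:
        SHOW MASTER STATUS:
            Log: mysql-bin.000052
            Pos: 107
    """
    log_file, log_pos = None, None
    in_master_status = False
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith("SHOW MASTER STATUS"):
            in_master_status = True
        elif in_master_status and stripped.startswith("Log:"):
            log_file = stripped.split(":", 1)[1].strip()
        elif in_master_status and stripped.startswith("Pos:"):
            log_pos = int(stripped.split(":", 1)[1].strip())
            break  # we have both coordinates
    return log_file, log_pos

sample = (
    "Started dump at: 2015-08-26 10:00:00\n"
    "SHOW MASTER STATUS:\n"
    "\tLog: mysql-bin.000052\n"
    "\tPos: 107\n"
)
print(parse_mydumper_metadata(sample))  # ('mysql-bin.000052', 107)
```

You would run this once per backup directory, yielding one (file, position) pair per replication channel.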

Once we got the replication info (and created a replication user on each master), we only needed to run the familiar CHANGE MASTER TO and START SLAVE commands, but now there is a new way to do it:
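A hedged sketch of this step (host names, credentials and binlog coordinates are illustrative assumptions; the channel names follow the 'b1_slave' style used later in the post):

```sql
-- One channel per master; coordinates come from each backup's metadata
CHANGE MASTER TO MASTER_HOST='db1', MASTER_USER='repl', MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000052', MASTER_LOG_POS=107
  FOR CHANNEL 'main_master';

CHANGE MASTER TO MASTER_HOST='a1', MASTER_USER='repl', MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000120', MASTER_LOG_POS=4
  FOR CHANNEL 'a1_slave';

CHANGE MASTER TO MASTER_HOST='b1', MASTER_USER='repl', MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000087', MASTER_LOG_POS=4
  FOR CHANNEL 'b1_slave';

START SLAVE FOR CHANNEL 'main_master';
START SLAVE FOR CHANNEL 'a1_slave';
START SLAVE FOR CHANNEL 'b1_slave';
```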

Replication is set and now we are good to go:

The new commands include the FOR CHANNEL 'channel_name' clause to handle replication channels independently.

At this point we have a slave running 3 replication channels from different sources. We can check the replication status with the well-known SHOW SLAVE STATUS command (TL;DR: the output is very long).

Yeah, I know: the output is too large. The Oracle guys noticed it too, so they created a set of new tables in the performance_schema database to help us retrieve this information in a friendlier manner; check this link for more information. We can also run SHOW SLAVE STATUS FOR CHANNEL 'b1_slave', for instance.
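For instance (the channel name comes from the post; replication_connection_status is one of the new 5.7 performance_schema replication tables):

```sql
-- Compact per-channel view of the I/O thread state
SELECT channel_name, service_state, last_error_number, last_error_message
  FROM performance_schema.replication_connection_status;

-- Or the familiar output, scoped to a single channel
SHOW SLAVE STATUS FOR CHANNEL 'b1_slave'\G
```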

Some limitations found during tests:

  • As mentioned, some configurations are still global and can’t be set per replication channel; for instance, replication filters, which can be changed without restarting MySQL, will affect all replication channels, as you can see here.
  • Replication events are somehow serialized on the slave side, much like a global counter that is not well documented yet. In practice this means that you need to be very careful when troubleshooting, because you may hit unexpected behavior; for instance, if you have 2 replication channels failing with a duplicate key error, it is not easy to predict which event you will skip when running SET GLOBAL sql_slave_skip_counter = 1.
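A hedged sketch of the safer way to use the global counter with multiple channels (channel names are illustrative): stop everything, set the counter, then start only the channel whose event you intend to skip.

```sql
STOP SLAVE;                              -- stops all channels
SET GLOBAL sql_slave_skip_counter = 1;   -- the counter is still global
START SLAVE FOR CHANNEL 'a1_slave';      -- the skip applies to the first
                                         -- channel started
START SLAVE FOR CHANNEL 'b1_slave';      -- remaining channels resume normally
```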

So far this new feature looks very nice and gives slaves some extra flexibility, which helps reduce architecture complexity when we want to consolidate databases from different sources into a single server. After some time testing it, I’d say I prefer this type of replication over Tungsten Replicator for this kind of scenario, due to its simplicity of administration; for instance, pt-table-checksum and pt-table-sync work without the limitations they have under Tungsten.

With the exception of some limitations that need to be addressed, I believe this new feature is a game changer and will definitely make DBAs’ lives easier. I still have a lot to test, but that is material for a future post.


Comments (8)

  • Giuseppe Maxia

    I noticed that you are not running with GTID.
    Thus, you have replaced a system that offers integrated GTID, much better and clearer monitoring, and the ability to do per-channel filters with a system that has none of the above.

    Please see these two articles for a deeper analysis of MySQL 5.7 multi-source.

    August 31, 2015 at 1:24 pm
  • Franck


    Nice work.
    Quick question: In your first graph, A2 and A3 were slaves, acting as readonly / standby and B2 slave of B1. It looks like in your second graph, A1 only replicates to DB7 and so does B1. (not easy to see with multiple arrows pointing to same point).

    Which server is acting as DB1 Master Standby (Passive) ?

    August 31, 2015 at 1:29 pm
  • Franciso Bordenave

    @Giuseppe, actually I didn’t replace Tungsten Replicator here. This was an exercise we ran along with a customer that is already running the Replicator, and the goal was to test whether legacy multi-source replication in 5.7 was able to hold this workload. The final goal is to simplify the architecture by removing 3rd party tools. As described in the post we have some good results, but 5.7 is still not GA and there is a lot of work to do before considering multi-source as production ready in this specific scenario.
    Indeed, GTID is not in use here because our main master is still 5.5, but it is not being used with the current Tungsten deployment either. Also, I’ve pointed out the lack of per-channel filtering as one of the limitations; I’m planning to file a feature request once I’ve spent some more time on my tests.
    Thanks for your feedback, really appreciated.

    August 31, 2015 at 2:30 pm
  • Franciso Bordenave

    @Franck, in the 1st graph both A1 and B1 are MySQL slaves of DB1 (the main master) and Tungsten masters for A2/A3 and B2 respectively.
    In the second graph we eliminated Tungsten Replicator and made A1 and B1 MySQL slaves and also MySQL masters of DB7, which became a multi-sourced slave of A1, B1 and DB1.
    I noticed that the arrows could be confusing, apologies for that (it was clearer in my mind 🙂 ). DB1 is still our main master; there are no standby masters in this PoC (actually we have them, but I didn’t include them here).
    Hopefully this clarifies the idea; if not, don’t hesitate to ask.


    August 31, 2015 at 2:43 pm
  • Gao Liang


    I have used multi-source replication in MySQL 5.7, but I encountered a problem about a week ago. One channel can’t catch up with its master any more; Seconds_Behind_Master is getting bigger and bigger. The slave status for this channel is much like the main_master in your post; it shows Slave_SQL_Running_State: System lock in the status. I don’t know why. Could you give me some advice?

    July 25, 2016 at 11:12 pm
  • Francisco Bordenave

    Gao, sorry for the late reply but I was out of work last few weeks.

    I’d try to see what the SQL thread is doing by checking the output of SHOW PROCESSLIST. Some messages in Slave_SQL_Running_State are not clear enough, but I’d suspect some DDL operation causing locks that prevents the sql_thread from moving ahead in replication.

    Does that make any sense?

    August 16, 2016 at 8:32 am
  • Zafar Malik

    Hi Franciso,

    You wrote that “if you have 2 replication channels failing with a duplicate key error then is not easy to predict which even you will skip when running set global sql_slave_skip_counter=1”. I think we can find out from the error message which error belongs to which server/channel, and if we stop one specific channel’s replication and execute SET GLOBAL sql_slave_skip_counter=1, then it will apply only to the stopped channel, since we can’t execute this command while a slave is running, and in our case the other channel’s replication is already running. Please confirm whether this is true.

    September 28, 2016 at 11:47 am
  • Franciso Bordenave

    Zafar, actually I should restate that, because as a matter of fact it is easy to know which channel will be affected by the sql_slave_skip_counter command: the answer is “the first one you issue START SLAVE for”. Basically, the counter is applied when you start a replication channel, and there is a rule that says “if sql_slave_skip_counter > 0 then you can’t start both channels together”.
    Does that help?

    September 28, 2016 at 11:55 am
