Can MySQL Parallel Replication Help My Slave?

Parallel replication has been around for a few years now, but it is still not that commonly used. I was working with a customer whose master had a very large write workload. The slave could not keep up, so I recommended using parallel slave threads. But how can I measure whether it is really working and helping?

At this customer, slave_parallel_workers was 0. But how much should I increase it? To 1? To 10? There is a blog post showing how to see how many threads are actually in use, which is a great help.

We changed the following variables on the slave:
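The exact statements are not preserved here, but based on the 40 threads mentioned below and the LOGICAL_CLOCK discussion in the comments, the change was along these lines (a sketch; adjust the worker count to your workload):

```sql
-- Stop the applier before changing its settings
STOP SLAVE SQL_THREAD;

-- Parallelize based on binlog group commit instead of per-database
SET GLOBAL slave_parallel_type = 'LOGICAL_CLOCK';

-- Number of parallel applier worker threads
SET GLOBAL slave_parallel_workers = 40;

START SLAVE SQL_THREAD;
```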

40 threads sounds like a lot, right? Of course, this is workload specific: parallelization only helps if the transactions are independent of each other.

Let’s have a look at how many threads are working:
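The original query is not preserved here, but the Performance Schema tables named later in the comments suggest something along these lines: join the applier worker status with the per-thread transaction summary to see how many transactions each worker has applied (a sketch, assuming MySQL 5.7+ with Performance Schema enabled):

```sql
-- Transactions applied per replication worker thread
SELECT w.worker_id,
       w.thread_id,
       s.count_star AS transactions_applied
FROM performance_schema.replication_applier_status_by_worker w
JOIN performance_schema.events_transactions_summary_by_thread_by_event_name s
  ON s.thread_id = w.thread_id
WHERE s.event_name = 'transaction'
ORDER BY w.worker_id;
```

A worker with a non-zero, growing transactions_applied count is actually participating in the parallel apply.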

You can see that all the threads are working, which is great.

But did this really speed up the replication? Could the slave write more in the same period of time?

Let’s see the replication lag:

MySQL Replication Delay graph from PMM

As we can see, the lag goes down quite quickly. But is this because of the increased number of threads, or because the job that generated the many inserts finished and there are no more writes coming? (The replication delay did not go to 0 because this slave is deliberately delayed by an hour.)

Luckily, PMM has other graphs that can help us, like this one showing InnoDB row operations:

InnoDB Row Operations graph from PMM


That looks promising: the slave now inserts many more rows than usual. But how many rows were inserted, exactly? Let’s create a new graph showing rows inserted per hour. PMM already collects all this information; we just have to create a new graph using the following query:
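The query itself is not preserved here. PMM stores these counters in Prometheus, so a query in this spirit would work; the metric name `mysql_global_status_innodb_row_ops_total` is the one exported by mysqld_exporter, but verify it against your PMM version:

```promql
# Rows inserted over the last hour, per MySQL instance
increase(mysql_global_status_innodb_row_ops_total{operation="inserted"}[1h])
```

`increase()` over a 1-hour window turns the ever-growing counter into a rows-per-hour figure suitable for graphing.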

And this is the result:

InnoDB Row Operations per Hour graph from Percona Monitoring and Management performance monitoring tool

We can see a huge jump in the number of inserted rows per hour: it went from ~50 million to 200-400 million per hour. We can say that increasing slave_parallel_workers really helped.


In this case, parallel replication was extremely useful, and we could confirm that using PMM and Performance Schema. If you tune slave_parallel_workers, check the graphs: you can show the impact to your boss. 🙂




Comments (4)

  • dharshanr

    Hi Tibor,

Setting parallel workers is usually not sufficient because they can only parallelize across different databases. If you want to reduce lag for load on a single database, you will need to adjust group_commit_sync_delay.

    October 17, 2018 at 5:41 pm
    • Tibor Korocz

      Hi dharshanr,

That is not completely true. As you can see in this blog post, I also changed slave_parallel_type to LOGICAL_CLOCK, which means:

      “Transactions that are part of the same binary log group commit on a master are applied in parallel on a slave. The dependencies between transactions are tracked based on their timestamps to provide additional parallelization where possible. When this value is set, the binlog_transaction_dependency_tracking system variable can be used on the master to specify that write sets are used for parallelization in place of timestamps, if a write set is available for the transaction and gives improved results compared to timestamps.”


So this is not based on databases, which makes it very useful: even if you are writing to the same database or the same table, transactions might be applied in parallel.

group_commit_sync_delay is also a useful parameter, but you have to be careful because it can slow down commits on the master.
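For reference, the master-side settings the quote above alludes to could look like this (a sketch; WRITESET-based dependency tracking requires MySQL 5.7.22+ or 8.0, and the delay value is workload specific):

```sql
-- On the master: compute write sets so independent transactions
-- can be marked parallelizable even without a commit delay
SET GLOBAL transaction_write_set_extraction = 'XXHASH64';
SET GLOBAL binlog_transaction_dependency_tracking = 'WRITESET';

-- Optionally delay binlog group commit (in microseconds) to form
-- larger commit groups -- at the cost of added commit latency
SET GLOBAL binlog_group_commit_sync_delay = 10000;  -- 10 ms
```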


      October 18, 2018 at 11:00 am
  • Fernando Mattera

    Hi, and what about MariaDB?

    October 31, 2018 at 8:20 am
    • Tibor Korocz

Hi, it looks like MariaDB uses an older version of Performance Schema which does not have the performance_schema.events_transactions_summary_by_thread_by_event_name table, so this will not work on MariaDB. There is a table called events_statements_summary_by_thread_by_event_name which might be usable, but that would require some testing.

      October 31, 2018 at 8:53 am
