Fighting MySQL Replication Lag

The problem of MySQL replication being unable to catch up is quite common in the MySQL world, and in fact I have already written about it. There are many aspects to managing MySQL replication lag, such as using proper hardware and configuring it properly. In this post I will just look at a couple of query design mistakes which are low-hanging fruit when troubleshooting MySQL replication lag.

The first fact you absolutely need to remember is that MySQL replication is single threaded, which means if you have any long-running write query, it clogs the replication stream, and the small, fast updates which follow it in the MySQL binary log can't proceed. It is about more than just individual queries, either: if you're using explicit transactions, all updates from a transaction are buffered together and then written to the binary log as one big chunk which can't be interleaved with any other query's execution. So splitting one large update into a transaction containing millions of simple updates in the hope of helping MySQL replication lag is not going to work.

This brings us to rule number one: if you care about replication latency, you must not have any long-running updates, nor transactions containing multiple update queries which add up to a long time. I would keep the maximum query length at about 1/5th of the maximum replication lag you're ready to tolerate. So if you want your replica to be no more than 1 minute behind, keep the longest update query to 10 seconds or so. This is of course a rule of thumb; depending on differences in master/slave configuration, their load and concurrency, you may need to keep the ratio higher or may be able to allow somewhat longer queries.

What should you do if you need to update a lot of rows? Use query chopping. This can mean running an UPDATE/DELETE with LIMIT in a loop, controlling the maximum number of values per batch in a multiple-row INSERT statement, or fetching the data you plan to update/delete first and then issuing multiple small queries to do it.
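The fetch-then-delete-in-chunks flavor of query chopping can be sketched as follows. This is a minimal illustration, not production code: it uses Python's sqlite3 as a stand-in for a MySQL connection, and the `posts` table with its `spam` flag is a hypothetical schema invented for the example.

```python
import sqlite3

def chopped_delete(conn, batch_size=1000):
    """Delete matching rows in small batches instead of one huge statement."""
    cur = conn.cursor()
    # Fetch the ids to remove first, so each DELETE is a cheap primary-key lookup.
    ids = [row[0] for row in cur.execute("SELECT id FROM posts WHERE spam = 1")]
    deleted = 0
    for i in range(0, len(ids), batch_size):
        chunk = ids[i:i + batch_size]
        placeholders = ",".join("?" * len(chunk))
        cur.execute(f"DELETE FROM posts WHERE id IN ({placeholders})", chunk)
        conn.commit()  # committing per batch keeps each replicated event short
        deleted += cur.rowcount
    return deleted
```

Each batch becomes a separate, short event in the binary log, so fast updates from other connections can be interleaved between them on the slave.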

This brings us to yet another rule for smart replication: do not make the slave do more work than it needs to. It is already crippled by having to do everything in a single thread; do not make it even harder. If considerable effort is needed to select the rows for modification, spread the work out and use separate SELECT and UPDATE queries. That way the slave only needs to run the UPDATE.

An UPDATE whose WHERE clause cannot use an index will perform a full table scan in MySQL 5.0 (even if it ends up matching no rows, such as when there are no spam posts to flag), which will load the slave significantly. Replace it with a separate SELECT that fetches the matching ids, followed by an UPDATE by primary key.
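The select-then-update split can be sketched like this. It is a hypothetical example using Python's sqlite3 as a stand-in for MySQL; the `posts` table and its `author` column are assumptions made for illustration.

```python
import sqlite3

def mark_spam(conn, author, batch_size=1000):
    """Do the expensive row selection once on the master, then update by
    primary key so only cheap indexed UPDATEs hit the binary log."""
    cur = conn.cursor()
    # The potentially unindexed scan runs here, once, and never on the slave.
    ids = [r[0] for r in cur.execute(
        "SELECT id FROM posts WHERE author = ?", (author,))]
    updated = 0
    for i in range(0, len(ids), batch_size):
        chunk = ids[i:i + batch_size]
        placeholders = ",".join("?" * len(chunk))
        cur.execute(
            f"UPDATE posts SET spam = 1 WHERE id IN ({placeholders})", chunk)
        conn.commit()
        updated += cur.rowcount
    return updated
```

The slave replays only the UPDATE-by-primary-key statements, which are cheap regardless of how expensive the original selection was.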

If many ids could be matched in the first place, you should also use query chopping and run the update in chunks, if the application allows it.

In MySQL 5.1 with row-based replication the row selection will not run on the slave, but row-based replication will not do the chopping for you.

In general this trick works well not only for full-table-scan updates, but for any case where many more rows are examined than modified.

The next common mistake is using INSERT … SELECT, which is similar to what I just described but can be much worse, as the SELECT may be an extremely complicated query. It is best to keep INSERT … SELECT out of replication in 5.0 for many reasons (locking, long query time, wasted execution on the slave). Piping the data through the application is the best solution in many cases and is quite easy: it is trivial to write a function which takes a SELECT query and the table in which to store its result set, and to use it in your application whenever you need this functionality.
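Such a function might look like the following sketch. It again uses sqlite3 as a stand-in for MySQL; the function name, the `src`/`dst` table names in the usage, and the batch size are all assumptions for illustration.

```python
import sqlite3

def pipe_select_into(conn, select_sql, target_table, batch_size=500):
    """Run the expensive SELECT once in the application, then replay its rows
    as plain multi-row INSERTs so the slave never re-executes the SELECT."""
    rows = conn.execute(select_sql).fetchall()
    cur = conn.cursor()
    for i in range(0, len(rows), batch_size):
        chunk = rows[i:i + batch_size]
        placeholders = ",".join("?" * len(chunk[0]))
        # Keep each INSERT batch small so the replicated events stay short.
        cur.executemany(
            f"INSERT INTO {target_table} VALUES ({placeholders})", chunk)
        conn.commit()
    return len(rows)
```

Only the simple INSERT statements reach the binary log; the slave never pays for the SELECT, and the batching doubles as query chopping.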

Finally, you should not overload your replication. Quite typically I see replication lagging when batch jobs are running. These can load the master significantly during their run time and make it impossible for the slave to push the same load through a single thread. The solution in many cases is simply to space out and slow down your batch jobs (for example by adding sleep calls) to ensure there is enough breathing room for the replication thread.

You can also have controlled execution of batch jobs: they check slave lag every so often and pause if it becomes too large. This is a slightly more complicated approach, but it saves you from running around adjusting your sleep behavior to keep progress fast enough while at the same time keeping replication from lagging.
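A controlled batch job can be sketched like this. The lag-checking callable is injected, so in real use it would be a function reading Seconds_Behind_Master from SHOW SLAVE STATUS on your slave; the function name and thresholds here are illustrative assumptions.

```python
import time

def run_throttled(batches, get_slave_lag, max_lag=30, pause=5.0):
    """Execute batch-job chunks, pausing while replication lag is too high.

    get_slave_lag is an injected callable (in practice, one reading
    Seconds_Behind_Master from SHOW SLAVE STATUS) so the loop is testable.
    """
    done = 0
    for batch in batches:
        # Give the replication thread breathing room before doing more work.
        while get_slave_lag() > max_lag:
            time.sleep(pause)
        batch()  # run one chunk of the batch job
        done += 1
    return done
```

The job throttles itself automatically: when the slave falls behind, the loop sleeps instead of piling more writes into the binary log.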

In many of the bad replication lags I have seen, simply following these simple rules would have avoided a lot of problems, and often saved massive hardware purchases or development efforts based on the assumption that MySQL replication can't possibly keep up any more.

Comments (10)

  • chad walker

    Recently we had to add an index to a table with 12M rows. It took a couple minutes to run in testing, so we knew we couldn’t just run it on the master and let it trickle through replication. What we did was take a slave at a time out of rotation, add the index, then add the slave back into rotation. Once all the slaves were done we added the index on the master. When the master was done and replicated the index, we went to each slave and skipped that transaction.

    September 23, 2008 at 2:14 pm
  • peter


    You’re completely right. Even though MySQL replication is great in its ability to replicate DDL too, it is often not practical because of the replication lag it causes, among other reasons.

    It often makes sense to use SQL_LOG_BIN=0 to run a query only on the master (or to start from the slave). Plus this often allows you to do massive DDL without taking the site down – you just run it on the slave, promote it to the master, and run it on the ex-master.

    September 23, 2008 at 3:51 pm
  • chad walker

    Yeah, I never thought about trying to disable replication for that one command. It would be nice if there were a comment-based way to do it, like:
    ALTER TABLE foo /* NO_BIN_LOG */ ADD INDEX foo_a_b (a, b);

    I’ve seen you talk about promoting a slave to be the new master before, but in my eyes that has always seemed like a huge pain that is full of potential pitfalls. We run with one master and 4 slaves, the master does about 20k questions a minute and the slaves each do about 11k. How hard is it to synchronize them to the same master bin log position? FWIW, our site wasn’t down for the above index update, we have an intermediate proxy layer that acts as a write-through cache and queued the writes until the master could handle them.

    September 23, 2008 at 8:37 pm
  • Erik Ljungstrom


    SQL_LOG_BIN, which Peter mentioned, is a session variable. Any statement you run in that thread will not end up in the binary log, and thus will not be replicated. So rather than skipping a transaction on all the slaves, you simply don't log it on the master.

    In terms of promoting slaves, it's not entirely necessary to get them to the same binlog position. If, as per your previous comment, you have the luxury of being able to ensure that no updates hit the DBs, you can simply do it with a series of CHANGE MASTER statements whilst you are certain that the DBs have the same data (i.e. the writes don't go through and all slaves are caught up).

    September 24, 2008 at 12:16 am
  • Memfill

    Don't forget that replication lag can also simply be fake – make sure your clock is in sync.
    More details here:

    November 22, 2010 at 4:13 pm
  • Jonathan Valerio


    I have experienced that before too, and I wasted my time looking for a replication issue until I got the idea to check the server clock.

    January 17, 2012 at 11:41 pm
  • Tixik

    Great article. We had exactly the same issue with multiple small insert queries in multiple threads on the MASTER, where the slave (almost sleeping) is not able to keep up. Is there any possibility to remove simple select transactions where no UPDATE is done?

    October 29, 2012 at 9:45 am
  • Abhijit Buchake

    I observed a slave unable to catch up with replication for weeks (in fact, the lag kept increasing continuously) although only simple INSERT DELAYED INTO queries run on the master. Both servers have equal capacity.

    The problem was that the master was able to handle about 4k queries per second, while due to the single-threaded nature of replication, the slave was able to read and execute only around 600 queries per second (no network issue, as MASTER_LOG_FILE in the slave status stays almost the same as the file currently being written on the master, and there are plenty of pending relay logs).

    Can anybody suggest a workaround apart from hardware upgrade?

    July 30, 2013 at 9:10 am
  • Adarsh

    Thanks Peter for such a detailed explanation. I have two doubts:

    1. You mentioned disabling bin logging before any DML that requires significant time. Can we apply the same if we have one master and 10 slaves? If I want to change the engine from InnoDB to MyISAM or vice versa, would you suggest the same way, or would pt-online-schema-change be better?
    2. Is there any way I can find out how many slaves are in master-slave replication? Will this work if not all slaves are connected to the master at the time I fire this command:

    select distinct host from information_schema.processlist WHERE user = 'slave_user'

    Looking forward to your reply.


    July 25, 2014 at 3:25 am
  • Gopal

    Hi Adarsh,

    1. Yes, SQL_LOG_BIN will work for any number of slaves, as this variable makes sure that the event is not written to the binary log.

    2. To get the list of slaves, you can also run the following statement on the master:

    show slave hosts;

    Best Regards

    June 2, 2015 at 5:27 am
