MySQL Point in Time Recovery the Right Way

In this blog, I’ll look at how to do MySQL point in time recovery (PITR) correctly.

Sometimes we need to restore from a backup, and then replay the transactions that happened after the backup was taken. This is a common procedure in most disaster recovery plans, for example when you accidentally drop a table or database, or run an UPDATE/DELETE without the WHERE clause and lose data.


The usual way is to get a copy of your binlogs and use mysqlbinlog to replay those transactions. But this approach has many pitfalls that can make the whole PITR process a nightmare. Some examples:

  • You need to make sure to run a single mysqlbinlog command with all related binlogs, and pipe them to mysql at once. Otherwise, if binlog.000001 creates a temporary table and binlog.000002 requires that temporary table, the table will not be there, because each invocation of the mysql client opens a new connection.

  • The replay has to be an atomic operation: if it fails halfway through, it is very difficult to know where it failed, and even more difficult to resume from that point forward. There are many reasons for it to fail: an InnoDB lock wait timeout or a deadlock with a concurrent transaction, the server and client having different max_allowed_packet settings (resulting in a “Lost connection to MySQL server during query” error), and so on.
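
To make the temporary-table pitfall concrete, here is a sketch; the binlog file names and credentials are placeholders:

```shell
# Wrong: each pipe opens a new mysql connection, so a temporary table
# created while replaying binlog.000001 is gone when binlog.000002 needs it
mysqlbinlog binlog.000001 | mysql -u root -p
mysqlbinlog binlog.000002 | mysql -u root -p

# Right: one mysqlbinlog invocation, one mysql connection for all files
mysqlbinlog binlog.000001 binlog.000002 | mysql -u root -p
```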

So how can we overcome those limitations and have a reliable way to do Point In Time Recovery?

We can restore the backup on the desired server, build a second server with just the minimal MySQL data required, and move all the binary logs into this “fake” server’s datadir. Then we configure the server where we want the PITR to happen as a slave of the fake server. From that point forward, it’s pure MySQL replication doing the work.

To illustrate this, I’ll create a Docker container on the machine. I have Percona Server for MySQL running on the box, listening on port 3306, with the backup already restored on it. There is also a tarball with all the required binlogs. The saved positions for PITR are as follows:
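
For reference, Percona XtraBackup writes the binlog coordinates the backup corresponds to into xtrabackup_binlog_info. The file name and position below are made-up values for illustration:

```
master-bin.000002	71332602
```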

I create a folder to store the Docker MySQL datadir:
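
A minimal sketch; the path is an arbitrary choice, any empty directory will do:

```shell
# Folder that will be bind-mounted as the fake server's datadir
mkdir -p /tmp/pitr-binlog-server
ls -ld /tmp/pitr-binlog-server
```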

I start the Docker container. As we can see from xtrabackup_binlog_info, my binlogs are named master-bin, and I’ll set the same server-id as the original master:
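
A sketch of such a docker run invocation; the image tag, container name, server-id and paths are assumptions, so adjust them to match your environment:

```shell
docker run -d --name pitr-binlog-server \
  -v /tmp/pitr-binlog-server:/var/lib/mysql \
  -p 3307:3306 \
  -e MYSQL_ROOT_PASSWORD=secret \
  percona:5.7 \
  --log-bin=master-bin \
  --server-id=1
```

Any arguments after the image name are passed straight to mysqld, which is how --log-bin and --server-id get set here.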

If you want to use GTID, append --gtid-mode=ON --enforce_gtid_consistency=ON to the end of the Docker command.

The command above starts a MySQL instance, invokes mysqld --initialize, sets the root password to secret, and maps its port 3306 back to my local port 3307. Now I’ll stop it, remove the binlogs it created, then uncompress and move all the required binlogs into its mapped datadir folder and start it again:
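
Under the same assumptions (container name, datadir path and tarball name are placeholders), the stop/replace/start sequence might look like:

```shell
docker stop pitr-binlog-server

# Drop the binlogs the freshly initialized instance wrote...
rm -f /tmp/pitr-binlog-server/master-bin.0*
# ...and put the real binlogs from the backup in their place
tar -xzf binlogs.tar.gz -C /tmp/pitr-binlog-server/

# MySQL lists its binlogs in the index file; rebuild it so the server
# sees the restored files (assumes relative './' entries, the default
# when log-bin is a relative path)
(cd /tmp/pitr-binlog-server && ls master-bin.0* | sed 's|^|./|' > master-bin.index)

docker start pitr-binlog-server
```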

If it all worked correctly, at this point we can see the full list of binary logs on the Docker container by connecting on port 3307:
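
Connecting to the mapped port, SHOW BINARY LOGS should list every restored file; credentials are the ones assumed above:

```shell
mysql -h 127.0.0.1 -P 3307 -u root -psecret -e "SHOW BINARY LOGS"
```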

Now, all we need to do is connect to the server that has the backup restored and configure it as a slave of the fake server on port 3307:
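
A sketch of the statements involved; the user, password, and the log file/position (which must come from your own xtrabackup_binlog_info) are placeholder values:

```shell
mysql -u root -p -e "
CHANGE MASTER TO
  MASTER_HOST='127.0.0.1',
  MASTER_PORT=3307,
  MASTER_USER='root',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='master-bin.000002',
  MASTER_LOG_POS=71332602;
START SLAVE;"
```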

If you want to apply logs only up to a particular point in time, you can use mysqlbinlog to determine the last position/GTID that should be applied, and then run START SLAVE UNTIL MASTER_LOG_FILE = 'log_name', MASTER_LOG_POS = log_pos or START SLAVE SQL_THREAD UNTIL SQL_AFTER_GTIDS = '3E11FA47-71CA-11E1-9E33-C80AA9429562:11-56'.

Special thanks to Marcos Albe, who originally showed me this MySQL point in time recovery approach.


Comments (9)

  • Akshay

    Nice trick, faking a master and using the replication protocol.
    One more option to restore could be to export the binlogs to a single file and then import that file in mysql ?
    mysqlbinlog --base64-output=decode -v binlog.000001 >> binlog.txt
    mysqlbinlog --base64-output=decode -v binlog.000002 >> binlog.txt

    and then
    mysql < binlog.txt

    Although if there are too many files to execute then this single file might become large.

    October 23, 2017 at 5:30 pm
    • Marcelo Altmann

      Hi Akshay.

      Thanks for your comment. Yes, if the file becomes big and fails for some reason, it becomes a problem.

      October 24, 2017 at 9:49 am
    • anup

      But will that append to the existing binlog?

      October 27, 2017 at 7:28 am
  • lefred

    Hi Marcelo,

    There is an even better approach than playing with a second instance of MySQL; check this post:

    Cheers 😉

    October 23, 2017 at 5:43 pm
    • Akshay

      Nice trick as well lefred 🙂

      October 24, 2017 at 2:32 am
    • Marcelo Altmann

      Hi Lefred.

      Thanks for pointing it out. It’s a similar approach, and it can be done with a single instance.


      October 24, 2017 at 9:50 am
  • Wolfsrudel

    You should never use mysqlbinlog this way:

    shell> mysqlbinlog binlog.000001 | mysql -u root -p # Creates tmp table X
    shell> mysqlbinlog binlog.000002 | mysql -u root -p # Uses tmp table X

    Instead, pass all the files to a single mysql connection:

    shell> mysqlbinlog binlog.000001 binlog.000002 | mysql -u root -p


    See the docs:

    October 24, 2017 at 8:58 am
    • Marcelo Altmann

      Hi Wolfsrudel.

      That is what I explain in the second paragraph. I will quote it here for clarity:

      > You need to make sure to run a single mysqlbinlog command with all related binlogs, and pipe them to mysql at once.

      Thanks for stopping by.

      October 24, 2017 at 10:19 am
  • Emerson S. Gaudencio

    Hey Marcelo,

    Awesome article. I think it’s a nice trick if you don’t use xtrabackup or MEB (MySQL Enterprise Backup), because it would be more efficient to use one of those tools in your backup strategy. Thanks for sharing this one with us.

    Take care.

    October 25, 2017 at 9:42 am

