Recovery deleted ibdata1


Recently I had a case when a customer deleted the InnoDB main table space – ibdata1 – and redo logs – ib_logfile*.

MySQL keeps InnoDB files open all the time. The following recovery technique is based on this fact, and it allowed us to salvage the database.

Actually, the files had been deleted long before – six months or so. As long as a file is open, it physically still exists in the file system and is reachable to the processes that have it open.

Thus, from the user's perspective nothing had changed after the deletion. By the way, this is a good reason to monitor the existence of these files!
After a restart, however, InnoDB will detect that there is neither a system tablespace nor log files, so it will create empty ones. The InnoDB dictionary will be empty, and InnoDB won't be able to use the bunch of existing .ibd files. That situation is a job for ibdconnect, but as long as MySQL isn't restarted it is possible to recover the database quickly. Let me illustrate how.

Let's simulate the accident. For that, I will delete the /var/lib/mysql/ib* files while sysbench generates read/write activity:

Screen0:

Screen1:
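A sketch of the two sessions (the sysbench invocation and paths are illustrative assumptions, not the exact commands used):

```shell
# Screen0: generate read/write load (sysbench 0.4-style syntax assumed)
sysbench --test=oltp --mysql-user=root --mysql-db=sbtest \
         --oltp-table-size=1000000 run

# Screen1: the "accident" - delete the system tablespace and redo logs
rm /var/lib/mysql/ib*
```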

Now the files are gone, but MySQL is still running. They no longer exist in /var/lib/mysql, but are still reachable through the /proc file system:

where 14101 is the PID of the mysqld process.
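The same mechanism is easy to demonstrate outside MySQL. A minimal self-contained sketch (file names here are made up for the demo; any Linux process holding a deleted file open will do):

```shell
# Stand-in for ibdata1: a file held open by a long-running process.
echo "important data" > /tmp/demo_ibdata1
tail -f /tmp/demo_ibdata1 > /dev/null &
PID=$!
sleep 1

# The "accident": the directory entry disappears, the inode does not.
rm /tmp/demo_ibdata1

# Find the descriptor that still points to the deleted file...
for f in /proc/$PID/fd/*; do
    readlink "$f" | grep -q demo_ibdata1 && FD="$f"
done

# ...and copy the data back out through it.
cp "$FD" /tmp/demo_recovered
kill $PID

cat /tmp/demo_recovered    # prints: important data
```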

However, we can't simply copy the files back, because at any given point in time there are modified pages in the buffer pool. Those pages are not yet written to disk, and the changes will be lost if they are not permanently written. This can lead to corruption and data loss.

For the same reason we can't take a MySQL backup by just copying the files.

So, we have to make sure all modifications are written to the disk.

For that we have to stop any further writes and wait until InnoDB flushes all pages.

To stop write activity we can either stop the application or lock the tables:
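A minimal way to block all writes from a spare session:

```sql
FLUSH TABLES WITH READ LOCK;
-- keep this session open: the lock is released when it disconnects
```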

Now let's wait until all dirty pages are flushed to disk. For that we will monitor the checkpoint age. The checkpoint age is the difference between the current log sequence number and the last checkpoint in the "SHOW ENGINE INNODB STATUS" output. If the checkpoint age is zero, all pages are flushed:
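In the LOG section of the status output, the relevant lines look like this (the numbers are illustrative); when the log sequence number equals the last checkpoint, the checkpoint age is zero:

```
---
LOG
---
Log sequence number 3836410803
Log flushed up to   3836410803
Last checkpoint at  3836410803
```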

To speed up flushing, we can set the dirty pages percentage to zero:
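For example:

```sql
SET GLOBAL innodb_max_dirty_pages_pct = 0;
```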

It is also important to ensure all other background processes have finished their work.
One of them is the insert buffer thread. Its size should be no more than 1 (it is never less than 1):
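This shows up in the insert buffer section of the status output (illustrative values; size 1 means the insert buffer is empty):

```
-------------------------------------
INSERT BUFFER AND ADAPTIVE HASH INDEX
-------------------------------------
Ibuf: size 1, free list len 0, seg size 2, 0 merges
```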

Another thread writing in the background is the purge thread.
It should purge all transactions up to the very last one.

However, if the last transaction wasn't something that requires a purge operation (a SELECT, for example), the Trx id counter will be bigger.
In that case, at least ensure InnoDB isn't doing any writes:
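Both can be checked in the same status output. An illustrative excerpt (the exact layout varies by version), where "Purge done" has caught up with the transaction counter and no writes are pending:

```
Trx id counter 1A2B3D
Purge done for trx's n:o < 1A2B3D undo n:o < 0
History list length 0
...
Pending normal aio reads: 0, aio writes: 0,
Pending flushes (fsync) log: 0; buffer pool: 0
```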

When all modified pages are flushed, it is now safe to copy the InnoDB files back:
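The descriptor numbers below are assumptions; list /proc/14101/fd first and note which symlinks point to the deleted files:

```shell
ls -l /proc/14101/fd | grep -e ibdata -e ib_logfile
# assuming descriptors 3, 8 and 9 point to the deleted files:
cp /proc/14101/fd/3 /var/lib/mysql/ibdata1
cp /proc/14101/fd/8 /var/lib/mysql/ib_logfile0
cp /proc/14101/fd/9 /var/lib/mysql/ib_logfile1
```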

Let’s fix the owner:
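For example:

```shell
chown mysql:mysql /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile*
```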

And restart MySQL:
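The exact command depends on the init system; on a SysV-style setup, for example:

```shell
service mysql restart
```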

After the restart all InnoDB tables are reachable:
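A quick sanity check (the table names here follow the sysbench example and are an assumption):

```sql
SHOW TABLES FROM sbtest;
SELECT COUNT(*) FROM sbtest.sbtest;
```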

Conclusions

  • Add checks to your monitoring system that the InnoDB files ibdata* and ib_logfile* exist
  • Do not restart MySQL until the further recovery strategy is clear

Comments

  1. says

    A well known service provider tells prospective data recovery customers:

    “How To Prepare For MySQL Data Recovery

    If you realize that you’ve lost data that you want to attempt to recover, do this:

    * Stop the MySQL Server immediately.
    o Use kill -9 for the quickest shutdown.”

    Views are my own, for an official view consult a PR person.

    James Day, MySQL Senior Principal Support Engineer, Oracle

  2. says

    Why not make a new hardlink with ln, which will prevent the files from being deleted when the process goes away, and avoid having to copy the files physically?

  3. says

    Baron, that was my thought as well.

    However, in testing this, I could not find a way to link to a deleted open file. It looks like this has not been possible in Linux for the last few years, due to this change:
    http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=commitdiff;h=aae8a97d3ec30788790d1720b71d76fd8eb44b73

    Even prior to that, it doesn’t seem to be possible to do in ext3 without manipulation in debugfs, which likely would require more work than Aleksandr’s method.

  4. says

    James,

    Good point. This is actually where there is a complete difference between having deleted data from MySQL (or dropped a table) and accidentally removing files at the file system level. In the first case you want to stop changes to the file system ASAP to maximize the chance of recovery. However, if you deleted the files, you should not kill MySQL, as this will “physically” remove them and complicate recovery.

    One more thought, though: I wonder if sending SIGSTOP to MySQL could be a way to ensure it stops writing to files, followed by kill -9 so it does not write anything else to them.

  5. Jonathan Valerio says

    Nicely put! If only you had blogged it earlier I might have had a chance to do this, because I had this kind of scenario a few months ago.

  6. says

    Peter, please file a feature request at bugs.mysql.com, preferably privately identifying the source of the text I quoted if you recognise it, as an illustration of the problem and need for a way for people to quickly stop things in a safe way before they contact their support provider.

    We’ve requests for things like quiescing the server to make snapshots and such work a little better or to allow safe live table copies and this sort of thing would fit nicely with those as part of the same general set of underlying requirements.

    James Day, MySQL Senior Principal Support Engineer, Oracle.

  7. Eric Stadtherr says

    Thanks for capturing this process! We just had a similar thing happen, where an administrator accidentally moved the ibdata1 file to a different filesystem, and we were able to recover the file successfully.

  8. says

    We recently had a similar situation where ibdata1 and ib_logfiles got deleted by automation in multiple production servers. We found a couple of additional very clean recovery methods that worked well for us and incurred 0 downtime:

    1. Using xtrabackup:
    We created symlinks in an empty directory that pointed to the corresponding open descriptors of ibdata1 and the ib_logfiles under /proc//fd/. Then we cloned my.cnf and edited the paths to point to the directory containing these files.

    For example, suppose our directory is named recovery, we added the following to my.cnf

    innodb_data_home_dir = /mnt/data-files/recovery
    innodb_log_group_home_dir = /mnt/data-files/recovery

    The recovery directory would then contain the symlinks.

    Then we ran xtrabackup with the path to that modified my.cnf. Xtrabackup was able to pick up the ibdata1 and ib_logfile symlinks without an issue and create a clean backup snapshot.
    Example commands:
    innobackupex --defaults-file=/mnt/data-files2/my.cnf --no-timestamp /mnt/data-files2/mysql
    innobackupex --apply-log /mnt/data-files2/mysql

    Once a replica was running, we moved on to step 2, rescuing the files on the master:

    2. Using a python script to keep the descriptors open. In UNIX, when a process holds deleted files open, another process can open the same files through /proc. If the first process then shuts down, the files remain accessible because the second process keeps them open. This way we were able to have MySQL shut down cleanly and flush everything to the ibdata1 and log file descriptors. We then copied over the files that the second process was holding and restarted MySQL. Here’s a sample python script. It needs to be run from interactive mode and KEPT OPEN until all the steps are completed:

    #
    # point to the PID and descriptors of the deleted ibdata and log files
    #
    ibdata = '/proc/{}/fd/{}'.format(25131, 4)
    ib_log0 = '/proc/{}/fd/{}'.format(25131, 9)
    ib_log1 = '/proc/{}/fd/{}'.format(25131, 10)

    # open the files (keep the interpreter running to hold them open)
    fin0 = open(ibdata, 'rb')
    fin1 = open(ib_log0, 'rb')
    fin2 = open(ib_log1, 'rb')

    The steps are as follows:
    1. Run python in interactive mode.
    2. Paste the script into python to open the files.
    3. While keeping them open, shut down mysql.
    4. Locate the open files under /proc//fd/…
    5. Copy them back to the mysql data dir.
    6. Chown them to the mysql user.
    7. Start mysql.

    Since we had created a consistent replica in step 1, we were able to do a failover with 0 downtime.

    Hope this helps.
