Why is the ibdata1 file continuously growing in MySQL?

We often receive this question about the ibdata1 file in MySQL at Percona Support.

The panic starts when the monitoring server sends an alert about the storage of the MySQL server, saying that the disk is about to fill up.

After some research, you realize that most of the disk space is used by InnoDB’s shared tablespace, ibdata1. You have innodb_file_per_table enabled, so the question is:

What is stored in ibdata1?

When you have innodb_file_per_table enabled, each table is stored in its own tablespace, but the shared tablespace is still used to store other InnoDB internal data:

  • data dictionary aka metadata of InnoDB tables
  • change buffer
  • doublewrite buffer
  • undo logs

Some of them can be configured on Percona Server to avoid becoming too large. For example, you can set a maximum size for change buffer with innodb_ibuf_max_size or store the doublewrite buffer on a separate file with innodb_doublewrite_file.
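As a sketch, assuming Percona Server (both variables are Percona Server extensions; the size and file path below are only illustrative), the my.cnf settings could look like this:

```ini
[mysqld]
# Cap the change buffer at 128MB (Percona Server extension; value illustrative)
innodb_ibuf_max_size = 134217728
# Store the doublewrite buffer in its own file instead of ibdata1
# (Percona Server extension; path illustrative)
innodb_doublewrite_file = /var/lib/mysql/ib_doublewrite
```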

In MySQL 5.6 you can also create external UNDO tablespaces so they will be in their own files instead of stored inside ibdata1. Check the following documentation link.
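A minimal my.cnf sketch of those MySQL 5.6 settings (the directory is illustrative, and note that undo tablespaces can only be created when the data directory is first initialized):

```ini
[mysqld]
# Create two undo tablespaces outside ibdata1
# (effective only at datadir initialization time)
innodb_undo_tablespaces = 2
# Optional: keep the undo files on a separate volume (path illustrative)
innodb_undo_directory = /var/lib/mysql-undo
```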

What is causing the ibdata1 to grow that fast?

Usually, the first command that we need to run when there is a MySQL problem is:

SHOW ENGINE INNODB STATUS\G

That will show us very valuable information. We start checking the TRANSACTIONS section and we find this:

---TRANSACTION 36E, ACTIVE 1256288 sec
MySQL thread id 42, OS thread handle 0x7f8baaccc700, query id 7900290 localhost root
show engine innodb status
Trx read view will not see trx with id >= 36F, sees < 36F

This is the most common reason: a pretty old transaction, created 14 days ago. Its status is ACTIVE, which means InnoDB has created a snapshot of the data, so it needs to keep the old pages in the undo space to provide a consistent view of the database as of the moment the transaction started. If your database has a heavy write load, that means a lot of undo pages are being stored.

If you don’t find any long-running transaction, you can also monitor another variable from the INNODB STATUS output, the “History list length.” It shows the number of pending purge operations. In this case, the problem is usually that the purge thread (or the master thread in older versions) cannot process undo records as fast as they come in.
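To graph or alert on it, the value can be extracted with standard Unix tools. A minimal sketch; the status text is embedded here for illustration, but in practice you would pipe the output of SHOW ENGINE INNODB STATUS through the same filter:

```shell
# Fragment of SHOW ENGINE INNODB STATUS output, embedded for illustration.
status='------------
TRANSACTIONS
------------
Trx id counter 370
History list length 3647'

# Pull out the number of pending purge operations.
hll=$(printf '%s\n' "$status" | awk '/History list length/ {print $4}')
echo "History list length: $hll"
```

If the number keeps climbing while no long-running transaction is open, the purge thread is falling behind.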

How can I check what is being stored in the ibdata1?

Unfortunately, MySQL doesn’t provide information about what is stored in the ibdata1 shared tablespace, but there are two tools that will be very helpful. First, a modified version of innochecksum made by Mark Callaghan and published in this bug report.

It is pretty easy to use:

# ./innochecksum /var/lib/mysql/ibdata1
0 bad checksum
19272 FIL_PAGE_UNDO_LOG
0 other
3 max index_id

It has 19272 UNDO_LOG pages out of a total of 20608. That’s 93% of the tablespace.

The second way to check the content of a tablespace is with the InnoDB Ruby Tools made by Jeremy Cole. It is a more advanced tool for examining the internals of InnoDB. For example, we can use the space-summary mode to get a list of every page and its data type, and then standard Unix tools to count the UNDO_LOG pages:

# innodb_space -f /var/lib/mysql/ibdata1 space-summary | grep UNDO_LOG | wc -l
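To turn the page count into a percentage, simple shell arithmetic is enough; a sketch using the numbers from the innochecksum example above:

```shell
# Page counts from the innochecksum example earlier in the post.
undo_pages=19272
total_pages=20608

# Integer percentage of the tablespace occupied by undo records.
pct=$(( undo_pages * 100 / total_pages ))
echo "${pct}% of the tablespace is UNDO_LOG pages"
```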

Although in this particular case innochecksum is faster and easier to use, I recommend you play with Jeremy’s tools to learn more about the data distribution inside InnoDB and its internals.

OK, now we know where the problem is. The next question:

How can I solve the problem?

The answer to this question is easy. If you can still commit that transaction, do it. If not, you’ll have to kill the thread to start the rollback process. That will only stop ibdata1 from growing, but it is clear that your software has a bug or someone made a mistake. Now that you know how to identify where the problem is, you need to find who or what is causing it, using your own debugging tools or the general query log.
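One way to spot the offending connection without reading the whole status output is the INFORMATION_SCHEMA.INNODB_TRX table (available since the InnoDB plugin); a sketch, where the one-hour threshold and the thread id 42 are only illustrative:

```sql
-- Find transactions that have been open for more than an hour.
SELECT trx_id, trx_started, trx_mysql_thread_id
  FROM INFORMATION_SCHEMA.INNODB_TRX
 WHERE trx_started < NOW() - INTERVAL 1 HOUR
 ORDER BY trx_started;

-- Kill the connection behind the transaction to start the rollback.
KILL 42;
```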

If the problem is caused by the purge thread, then the solution is usually to upgrade to a newer version where you can use a dedicated purge thread instead of the master thread. More information in the following documentation link.

Is there any way to recover the used space?

No, it is not possible – at least not in an easy and fast way. InnoDB tablespaces never shrink; see the following 10-year-old bug report, recently updated by James Day (thanks):

When you delete some rows, the pages are marked as deleted so they can be reused later, but the space is never returned to the operating system. The only way is to start the database with a fresh ibdata1. To do that, you would need to take a full logical backup with mysqldump, then stop MySQL and remove all the databases and the ib_logfile* and ibdata* files. When you start MySQL again it will create a new, fresh shared tablespace. Then recover the logical dump.
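The rebuild described above can be sketched as a shell-style pseudocode outline (paths and service commands vary by system, and this is destructive, so verify the dump before deleting anything):

```
# 1. Full logical backup while MySQL is still running
mysqldump --all-databases --routines --events > all_databases.sql

# 2. Stop MySQL
service mysql stop

# 3. Remove the shared tablespace, the redo logs and the database
#    directories (keep a copy until the restore has been verified)
rm /var/lib/mysql/ibdata* /var/lib/mysql/ib_logfile*

# 4. Start MySQL: it creates a new, fresh shared tablespace
service mysql start

# 5. Restore the logical dump
mysql < all_databases.sql
```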


When the ibdata1 file is growing too fast in MySQL, it is usually caused by a long-running transaction that we have forgotten about. Try to solve the problem as fast as possible (committing or killing the transaction), because you won’t be able to recover the wasted disk space without the painfully slow mysqldump process.

Monitoring the database to avoid these kinds of problems is also recommended. Our MySQL Monitoring Plugins include a Nagios script that can alert you when it finds a too-old running transaction.

Comments (17)

  • Pep Pla

    Actually, it is possible to recover unused space. It is neither an easy nor a fast procedure, but it can be done with minimal downtime. During the operation it requires at least double the disk space, plus some memory and processor power.
    The procedure is the following (you have to double-check all steps for errors, and do not even try it if you don’t know how to deal with replication; there are a lot of things which could fail: different engines, collations…):
    1) innodb hot backup
    2) restore in alternate location and setup as slave
    3) stop replication
    4) mysqldump
    5) restore dump
    6) restart replication
    7) wait till both databases are at the same level
    8) promote slave to master

    Only step 8 requires downtime, and depending on how you make the promotion, it may take a really short time. I like this procedure because you can take your time to check that everything is OK with the replication and then schedule the downtime in an activity valley.


    August 20, 2013 at 6:53 am
  • Alon

    In the new version you can:
    1. stop mysql
    2. rename the mysql dir
    3. install a new DB
    4. create a full “no data” import
    5. move the datafiles with import tablespace


    August 20, 2013 at 9:35 am
    • Dazy Parker

      Thanks for this information.

      February 21, 2016 at 11:43 pm
  • Pep Pla

    A few remarks on your procedure:
    – it requires MySQL 5.6
    – you need to create scripts to discard and import the tablespaces
    – make sure that all tables are contained in external files
    – make sure that the database had no pending operations before the shutdown
    – long downtime for large databases

    August 20, 2013 at 10:20 am
  • Fernando Ipar


    There’s a missing step in your procedure, between 4 and 5: you need to shut down mysqld on the slave, move away InnoDB’s data files, and start mysqld again before you can load the dump. Otherwise you will have the same huge ibdata1 file as before the dump/restore process.


    August 20, 2013 at 12:17 pm
  • Pep Pla


    True and important point!


    August 20, 2013 at 12:25 pm
  • Jeremy Cole

    Okay, Miguel. You pretty much forced my hand at making this nicer for you:

    $ innodb_space -f ibdata1 space-page-type-summary
    type         count  percent  description
    ALLOCATED    772    67.01    Freshly allocated
    SYS          133    11.55    System internal
    UNDO_LOG     128    11.11    Undo log
    INDEX        110    9.55     B+Tree index
    INODE        4      0.35     File segment inode
    IBUF_BITMAP  2      0.17     Insert buffer bitmap
    FSP_HDR      2      0.17     File space header
    TRX_SYS      1      0.09     Transaction system header

    Added in: https://github.com/jeremycole/innodb_ruby/commit/0ec97f51d907eaaf3968a4697f80a5762e162e46

    Other feature ideas are welcome!

    August 21, 2013 at 9:17 pm
  • Miguel Angel Nieto

    Wow Jeremy, thank you very much! That is going to be very helpful 🙂 No need of modified innochecksum anymore.

    August 22, 2013 at 3:26 am
  • _ck_

    It would be wonderful if someone could write a better guide to moving undo logs outside of ibdata in MySQL 5.6, with real-world examples and tests/proof that it works properly.

    I also found this somewhat dangerous experiment with directly shrinking ibdata: http://www.fromdual.com/shrinking-innodb-system-tablespace-file-ibdata1-poc

    Hopefully it won’t be another decade before we have a real utility to shrink it automatically.

    September 20, 2013 at 10:09 am
  • Mark Callaghan

    Did Jeremy just kill my plans to monetize innochecksum hacks?
    Well done.

    October 4, 2013 at 2:42 pm
  • Jo Valerio

    great help. typical problem of a DBA.

    October 30, 2013 at 4:53 am
  • Moll

    Great article Miguel!
    Are there any known issues when using innodb_ruby on a running system ?
    Does it affect or cause interruption in service especially, for large ibdata files ?


    August 28, 2014 at 3:32 am
  • Jerem

    thanks for sharing.
    I also needed to delete all my databases
    except mysql and performance_schema; otherwise you lose the root user!

    October 3, 2014 at 5:49 am
  • Pankaj Kumar

    Hi Experts,

    Unfortunately I have overwritten MySQL database with same database structure.
    Now all the tables are blank.
    Is there a possible way to get my data back using the ibdata1 file?

    Please help me out…

    Thanks in advance.

    December 15, 2014 at 6:41 am
  • Liam Douglas

    On our database, it’s a file called ibdata2 that is massive – 94GB; our ibdata1 is only 2GB. What is the ibdata2 file?

    October 18, 2016 at 10:11 am
  • Nick Keefen

    there’s a backslash missing in “SHOW ENGINE INNODB STATUSG”

    January 10, 2017 at 3:12 pm
  • Telmo Calhaço (Zone Soft)

    My ibdata1 is 344GB. Is it possible to reduce its size?

    October 9, 2017 at 5:19 am

Comments are closed.
