Using MySQL OPTIMIZE tables for InnoDB? Stop!


InnoDB/XtraDB tables do benefit from being reorganized from time to time. You get the data physically laid out in primary key order, as well as a better fill factor for the primary key and index pages, and so use less space. It's just that MySQL's OPTIMIZE TABLE might not be the best way to do it.

Why you shouldn’t necessarily optimize tables with MySQL OPTIMIZE

If you're running the InnoDB plugin, or Percona Server with XtraDB, you get the benefit of a great feature: the ability to build indexes by sort instead of by insertion. This process can be a lot faster, especially for large indexes whose keys arrive in very random order, such as an index on a UUID column or something similar. It also produces a much better fill factor. The problem is that OPTIMIZE TABLE for InnoDB tables does not take advantage of it, for whatever reason.

Let's take a look at a little benchmark I did by running OPTIMIZE a second time on a table that is some 10 times larger than the amount of memory I allocated for the buffer pool:

That's right! Optimizing the table straight away takes over 3 hours, while dropping all indexes besides the primary key, optimizing the table, and adding them back takes about 10 minutes: close to a 20x speed difference, with a more compact index at the end.

So if you're considering running OPTIMIZE on your tables, consider using this trick. It is especially handy when you're running it on a slave, where it may be acceptable for the table to be exposed without indexes for some time. Note that nothing stops you from using LOCK TABLES on an InnoDB table to ensure a flood of queries does not start reading the table while it has no indexes and bring the box down.
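As a minimal sketch, assuming a table `big_table` with secondary indexes `idx_a` and `idx_b` on columns `col_a` and `col_b` (all names hypothetical), the trick looks like this:

```sql
-- Drop the secondary indexes, keeping the primary key
ALTER TABLE big_table DROP INDEX idx_a, DROP INDEX idx_b;

-- Rebuild the table; only the clustered (primary key) index has to be copied
OPTIMIZE TABLE big_table;

-- Add the secondary indexes back; with fast index creation they are built by sort
ALTER TABLE big_table ADD INDEX idx_a (col_a), ADD INDEX idx_b (col_b);
```

On a slave, you can wrap the whole sequence in LOCK TABLES ... WRITE / UNLOCK TABLES to keep queries off the table while it has no indexes.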

You can also use this trick for any ALTER TABLE that requires a table rebuild. Dropping all indexes, doing the ALTER, and then adding them back can be a lot faster than a straight ALTER TABLE.
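The same pattern applied to a rebuilding ALTER might look like this (the column type change and all names are hypothetical):

```sql
-- Drop the secondary indexes first
ALTER TABLE big_table DROP INDEX idx_a, DROP INDEX idx_b;

-- The rebuilding ALTER now only has to copy the clustered index
ALTER TABLE big_table MODIFY col_a BIGINT UNSIGNED NOT NULL;

-- Re-add the secondary indexes, built by sort where the server supports it
ALTER TABLE big_table ADD INDEX idx_a (col_a), ADD INDEX idx_b (col_b);
```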

P.S. I do not know why this was not done when support for creating indexes by sort was implemented. It looks very strange to me to have this feature implemented while the majority of high-level commands and tools (like mysqldump) do not take advantage of it and still use the old, slow method of building indexes by insertion.

Note: In MySQL 5.5, OPTIMIZE TABLE does not take advantage of “InnoDB Fast Index Creation” feature. This limitation is documented in the MySQL 5.5 official documentation.


OPTIMIZE TABLE for an InnoDB table is mapped to an ALTER TABLE operation that rebuilds the table to update index statistics and free unused space in the clustered index. This operation does not use fast index creation. Secondary indexes are not created as efficiently because keys are inserted in the order they appeared in the primary key.

Percona Server 5.5.11 and higher can utilize the fast index creation feature for ALTER TABLE and OPTIMIZE TABLE operations, which can potentially speed them up greatly. This behavior is controlled by the expand_fast_index_creation system variable, which is OFF by default. The variable was implemented in Percona Server 5.5.16-22.0.
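Assuming Percona Server 5.5.16-22.0 or later, enabling it for a session might look like this (the table name is hypothetical):

```sql
-- expand_fast_index_creation is Percona Server only, OFF by default
SET SESSION expand_fast_index_creation = ON;

-- The rebuild can now use fast index creation for secondary indexes
OPTIMIZE TABLE big_table;
```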

More Information:

MySQL 5.6 introduced the online DDL feature, which provides support for in-place table alterations. As of MySQL 5.6.17, OPTIMIZE TABLE can be performed in-place for rebuilding regular and partitioned InnoDB tables, which makes the OPTIMIZE TABLE operation much faster.


Table 14.13 Online DDL Support for Table Operations

| Operation | In Place | Rebuilds Table | Permits Concurrent DML | Only Modifies Metadata |
| Optimizing a table | Yes* | Yes | Yes | No |


Optimizing a table
Performed in-place as of MySQL 5.6.17. In-place operation is not supported for tables with FULLTEXT indexes. The operation uses the INPLACE algorithm, but ALGORITHM and LOCK syntax is not permitted.
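On MySQL 5.6.17 and later, the same rebuild can also be requested explicitly; the statements below are a sketch against a hypothetical table `big_table`:

```sql
-- Mapped to a rebuilding ALTER; InnoDB reports the note
-- "Table does not support optimize, doing recreate + analyze instead"
OPTIMIZE TABLE big_table;

-- The explicit equivalent, where ALGORITHM and LOCK clauses are permitted
ALTER TABLE big_table FORCE, ALGORITHM=INPLACE, LOCK=NONE;
```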



Comments (26)

  • Ben

    I believe this is what JS was getting at…what if I have a table that has Foreign Keys referencing other tables? Can I just drop those, do the optimize, and then re-add?

    December 9, 2010 at 12:00 am
  • Sheeri

    What does mysqldump have to do with creating indexes? It doesn’t create indexes at all….do you mean when importing an export made by mysqldump? And is that different regardless of

    Also, how big was this table on disk? I’m in the middle of doing some XtraDB table optimizations, they take 3 hours for a 57G .ibd file; after they’re done I’ll try this method.

    According to my calculations, your test table is 528Mb….(charset is latin1, sha(1) is 40 chars + 1 byte to indicate length of the varchar, int is 4 bytes, so that’s 45 bytes per row. Autoincrement is 12582913, so you have max 12582912 rows:


    Is that accurate? (even if it’s more than 1 byte for varchar length, it doesn’t change the result that much…)

    Also did you test a 2nd time by dropping the indexes first and optimizing and re-adding, and then running OPTIMIZE TABLE, to make sure it wasn’t influenced somehow by the first OPTIMIZE TABLE’s defragmentation/rebuild?

    December 9, 2010 at 2:31 pm
  • Sheeri

    er, sorry, my sentence was cut off — “And is that different regardless of whether or not ALTER TABLE…DISABLE KEYS is used?”

    December 9, 2010 at 2:33 pm
  • Morgan Tocker December 9, 2010 at 2:40 pm
  • Sheeri

    Morgan — thanx, that makes more sense re: mysqldump. Something like –indexes-after-for-innodb or something, so that in the mysqldump the indexes will be added after the table data is inserted. Gotcha.

    December 9, 2010 at 2:52 pm
  • peter


    Yes the point is mysqldump could be fixed so it supports creating indexes after data is loaded which would make it a lot faster for Innodb tables. Or Innodb could be fixed to support enable keys/disable keys which mysqldump already includes in the dump.

    The table is not small but buffer pool in this case is also just 128M – this is my test box.

    I mentioned this is the second OPTIMIZE TABLE run just to make sure it is a "null" operation: it should just rebuild the same table.

    December 9, 2010 at 3:32 pm
  • Holger Thiel

    Besides the UTF-8 bug in Fast Index Creation, it is a good idea to drop and create indexes.

    December 10, 2010 at 6:42 am
  • JS

    The InnoDB plugin documentation says that altering foreign keys on tables will cause a table copy instead of fast index creation. Is there an alternate way to optimize the creation of these indexes?

    December 10, 2010 at 2:18 pm
  • Mike

    I tried this trick when importing tables to an InnoDB table with about 200,000 rows: I first created the tables without indexes, ran the import, ran OPTIMIZE TABLE, and then added the index afterwards (followed by an ANALYZE TABLE). Unfortunately I could not notice any difference. Is there anything more to consider here?

    Actually I wonder if a table should even be optimized after importing at all.

    November 18, 2011 at 7:10 am
  • Mike

    Sorry, I think I missed that this trick only applies to Percona Server.

    November 18, 2011 at 7:13 am
  • Kenny

    Yeah? You going to do this “drop key” thing manually on every key in every table you want to “optimize” when you’ve got 10 databases, each with several hundred tables and multiple keys?? Let me know next month when you’re finally done.

    January 20, 2012 at 10:27 am
  • Pinoy

    Cool! This is a very good tip. I was actually looking for a better way to optimize my database since it is taking almost 6 hours to optimize a single table. Your trick is way faster thanks!

    March 26, 2012 at 11:22 pm
  • Mike

    Kenny – sure, that’s one reason why even a half competent DBA will write a script.

    May 4, 2012 at 4:52 pm
  • vishnu rao

    hi peter,

    for this trick to work, do I need to enable fast_index_creation?

    thanking you.

    ch Vishnu

    August 9, 2012 at 8:04 pm
  • Erectrolust

    My InnoDB file size is 5.9 GB and I can't optimize using mysqlcheck -o -A…
    fuk innodb….

    January 8, 2013 at 7:28 am
  • jago

    Peter, I have an InnoDB table that I want to optimize, but another table (a child table) has a foreign key constraint pointing into this table. I tried this:


    ALTER TABLE schema.my_parent_table DROP all foreign keys …
    ALTER TABLE schema.my_parent_table DROP all indexes …

    OPTIMIZE TABLE schema.my_parent_table;

    ALTER TABLE schema.my_parent_table ADD back all indexes and foreign keys …


    All of the above works except for the optimize itself (I verified). I get “Error 1025 … Error on rename of … (errno: 150)” because of the FK in the child table. I thought that “SET FOREIGN_KEY_CHECKS = OFF” would disable that relationship.

    I even tried dropping the parent table (which works), then recreating it exactly as it was before the drop (with only the PK). I get the same error.

    Is there a way I get optimize to work on parent tables?

    February 19, 2013 at 10:46 am
  • jago

    Addition to the above entry: I guess I should have said………..

    Is there a way I can get optimize to work on a parent table *without* having to drop the FK constraints in all child tables and then re-create them after I finish with the parent table?

    February 19, 2013 at 11:34 am
  • Alberto


    May 22, 2013 at 4:25 pm
  • Marcin Pohl

    Is this still true for newer MySQL 5.6.x?

    April 17, 2015 at 4:32 pm
  • Alexandr Cherepanov

    Article is outdated (at least for MariaDB).

    MariaDB [uni_db]> CALL dorepeat(1000000);
    Query OK, 0 rows affected (1 min 46.11 sec)

    MariaDB [uni_db]> optimize table a;
    | Table | Op | Msg_type | Msg_text |
    | uni_db.a | optimize | note | Table does not support optimize, doing recreate + analyze instead |
    | uni_db.a | optimize | status | OK |
    2 rows in set (9.51 sec)

    MariaDB [uni_db]> alter table a drop key c;
    Query OK, 0 rows affected (0.01 sec)
    Records: 0 Duplicates: 0 Warnings: 0

    MariaDB [uni_db]> optimize table a;
    | Table | Op | Msg_type | Msg_text |
    | uni_db.a | optimize | note | Table does not support optimize, doing recreate + analyze instead |
    | uni_db.a | optimize | status | OK |
    2 rows in set (4.83 sec)

    MariaDB [uni_db]> alter table a add key(c);
    Query OK, 0 rows affected (4.85 sec)
    Records: 0 Duplicates: 0 Warnings: 0

    January 4, 2016 at 5:51 am
  • shrinivas

    I had a table with 6 GB of data; optimizing it was taking more than 16 hours. After dropping two existing indexes, it took 30 minutes to optimize and 50 + 9 minutes to add the indexes back, which was a pretty good result.
    But in production I still have a table with 150 GB of data. Can anyone tell how much time all of the above activities might take?

    June 2, 2016 at 5:42 am
  • ullas dewan

    I have an InnoDB table 650 GB in size, of which the data should be 100 GB, on MySQL 5.5 on Solaris 10. OPTIMIZE TABLE was taking a huge amount of time; I had to cancel it partway through, as the speed suggested it would take more than 1 day. I will try dropping all indexes and see.

    October 1, 2016 at 10:08 am
  • Joilson Cardoso

    I liked these notes; however, on MySQL 5.6, dropping a primary key takes a long time as well… Is there a way to drop it faster?

    March 2, 2017 at 6:05 am
  • Robert

    I received the message "Tables using the InnoDB engine (10) will not be optimised. Other tables will be optimised (25)" and don't understand what it means. Will you please explain what it is and how to optimise?

    July 28, 2017 at 1:54 pm
  • mark

    Could you do the same thing using pt-online-schema-change –alter ‘engine=innodb’?

    November 13, 2017 at 3:25 pm
  • Vardan

    For me it did not help:
    a table with 1,887,681 rows and 500 columns takes longer without indexes (only the primary key) than with all indexes.

    May 27, 2018 at 9:44 pm

Comments are closed.

Use Percona's Technical Forum to ask any follow-up questions on this blog topic.