
Using MySQL OPTIMIZE tables? For InnoDB? Stop!

December 9, 2010 | Posted In: Insight for DBAs, MySQL


InnoDB/XtraDB tables do benefit from being reorganized often. You can get the data physically laid out in primary key order, as well as a better fill factor for primary key and index pages, and so use less space; it is just that MySQL OPTIMIZE TABLE might not be the best way to do it. In this post, we’re going to look at why you shouldn’t necessarily optimize tables with MySQL OPTIMIZE.

If you’re running the InnoDB Plugin or Percona Server with XtraDB, you get the benefit of a great new feature: the ability to build indexes by sort instead of by insertion. This process can be a lot faster, especially for large indexes whose inserts would arrive in very random order, such as indexes on a UUID column or something similar. It also produces a much better fill factor. The problem is… OPTIMIZE TABLE for InnoDB tables does not take advantage of it, for whatever reason.

Let’s take a look at a little benchmark I did by running OPTIMIZE a second time on a table which is some 10 times larger than the amount of memory I allocated for the buffer pool:

That’s right! Optimizing the table straight away takes over 3 hours, while dropping the indexes besides the primary key, optimizing the table, and adding them back takes about 10 minutes. That is close to a 20x speed difference, with a more compact index in the end.

So if you’re considering running OPTIMIZE on your tables, consider using this trick. It is especially handy when you’re running it on a slave, where it is OK for the table to be exposed without indexes for some time.
Note, though, that nothing stops you from using LOCK TABLES on an InnoDB table to ensure a ton of queries don’t start reading the table with no indexes and bring the box down.

You can also use this trick for an ALTER TABLE which requires a table rebuild. Dropping all indexes, doing the ALTER, and then adding them back can be a lot faster than a straight ALTER TABLE.
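The trick above can be sketched in a few statements. This is a minimal illustration, not the exact benchmark from the post: the table name `mytable` and the secondary index `idx_uuid` on a `uuid` column are made up, and you would substitute your own secondary indexes.

```sql
-- Hypothetical example: mytable has a primary key plus one
-- secondary index idx_uuid on a uuid column.

-- 1. Drop the secondary indexes (keep the primary key):
ALTER TABLE mytable DROP INDEX idx_uuid;

-- 2. Rebuild the table; with only the PK left, this is the fast part:
OPTIMIZE TABLE mytable;

-- 3. Re-add the secondary indexes, which fast index creation
--    builds by sort instead of row-by-row insertion:
ALTER TABLE mytable ADD INDEX idx_uuid (uuid);
```

As noted above, on a busy server you may want to wrap this in LOCK TABLES mytable WRITE … UNLOCK TABLES so queries do not pile up scanning the index-less table.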

P.S. I do not know why this was not done when support for creating indexes by sorting was implemented. It looks very strange to me to have this feature implemented while the majority of high-level commands and tools (like mysqldump) do not take advantage of it and still use the old, slow method of building indexes by insertion.

Peter Zaitsev

Peter managed the High Performance Group within MySQL until 2006, when he founded Percona. Peter has a Master's Degree in Computer Science and is an expert in database kernels, computer hardware, and application scaling.


  • I believe this is what JS was getting at…what if I have a table that has Foreign Keys referencing other tables? Can I just drop those, do the optimize, and then re-add?

  • What does mysqldump have to do with creating indexes? It doesn’t create indexes at all….do you mean when importing an export made by mysqldump? And is that different regardless of

    Also, how big was this table on disk? I’m in the middle of doing some XtraDB table optimizations, they take 3 hours for a 57G .ibd file; after they’re done I’ll try this method.

    According to my calculations, your test table is 528Mb….(charset is latin1, sha(1) is 40 chars + 1 byte to indicate length of the varchar, int is 4 bytes, so that’s 45 bytes per row. Autoincrement is 12582913, so you have max 12582912 rows:


    Is that accurate? (even if it’s more than 1 byte for varchar length, it doesn’t change the result that much…)

    Also did you test a 2nd time by dropping the indexes first and optimizing and re-adding, and then running OPTIMIZE TABLE, to make sure it wasn’t influenced somehow by the first OPTIMIZE TABLE’s defragmentation/rebuild?

  • Morgan — thanx, that makes more sense re: mysqldump. Something like –indexes-after-for-innodb or something, so that in the mysqldump the indexes will be added after the table data is inserted. Gotcha.

  • Sheeri,

Yes, the point is mysqldump could be fixed so it supports creating indexes after the data is loaded, which would make it a lot faster for InnoDB tables. Or InnoDB could be fixed to support ENABLE KEYS/DISABLE KEYS, which mysqldump already includes in the dump.
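What such a fixed dump might look like, sketched with made-up table and column names: create the table with only its primary key, load the data, and add the secondary indexes at the end so they are built by sort.

```sql
-- Hypothetical sketch of a dump reordered for fast index creation.
-- Table and column names are invented for illustration.

CREATE TABLE mytable (
  id   INT NOT NULL AUTO_INCREMENT,
  uuid VARCHAR(40) NOT NULL,
  PRIMARY KEY (id)          -- only the PK at create time
) ENGINE=InnoDB;

-- ... all the INSERT statements from the dump run here ...

-- Secondary indexes added last, built by sorting the loaded data:
ALTER TABLE mytable ADD INDEX idx_uuid (uuid);
```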

    The table is not small but buffer pool in this case is also just 128M – this is my test box.

I mentioned this is the second OPTIMIZE TABLE run just to make sure it is a “null” operation: it should just rebuild the same table.

  • The InnoDB plugin documentation says that altering foreign keys on tables will cause a table copy instead of fast index creation. Is there an alternate way to optimize the creation of these indexes?

I tried this trick when importing an InnoDB table with about 200,000 rows: I first created the table without indexes, ran the import, ran OPTIMIZE TABLE, and then added the indexes afterwards (followed by an ANALYZE TABLE). Unfortunately I could not notice any difference. Is there anything more to consider here?

    Actually I wonder if a table should even be optimized after importing at all.

  • Yeah? You going to do this “drop key” thing manually on every key in every table you want to “optimize” when you’ve got 10 databases, each with several hundred tables and multiple keys?? Let me know next month when you’re finally done.

  • Cool! This is a very good tip. I was actually looking for a better way to optimize my database since it is taking almost 6 hours to optimize a single table. Your trick is way faster thanks!

  • hi peter,

for this trick to work, do I need to enable fast_index_creation?

    thanking you.

    ch Vishnu

  • Peter, I have an InnoDB table that I want to optimize, but another table (a child table) has a foreign key constraint pointing into this table. I tried this:


    ALTER TABLE schema.my_parent_table DROP all foreign keys …
    ALTER TABLE schema.my_parent_table DROP all indexes …

    OPTIMIZE TABLE schema.my_parent_table;

    ALTER TABLE schema.my_parent_table ADD back all indexes and foreign keys …


    All of the above works except for the optimize itself (I verified). I get “Error 1025 … Error on rename of … (errno: 150)” because of the FK in the child table. I thought that “SET FOREIGN_KEY_CHECKS = OFF” would disable that relationship.

    I even tried dropping the parent table (which works), then recreating it exactly as it was before the drop (with only the PK). I get the same error.

    Is there a way I get optimize to work on parent tables?

  • Addition to the above entry: I guess I should have said………..

    Is there a way I can get optimize to work on a parent table *without* having to drop the FK constraints in all child tables and then re-create them after I finish with the parent table?
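The post does not answer this directly, but one workaround, assuming you can tolerate briefly dropping the constraint during a maintenance window, is to drop the constraint from the child side rather than touching the parent’s own keys. All names below are hypothetical:

```sql
-- Hypothetical: child_table has FOREIGN KEY fk_parent
-- referencing my_parent_table.

-- 1. Drop the referencing constraint on the child side:
ALTER TABLE child_table DROP FOREIGN KEY fk_parent;

-- 2. The parent can now be rebuilt without the errno 150 rename error:
OPTIMIZE TABLE schema.my_parent_table;

-- 3. Re-create the constraint afterwards:
ALTER TABLE child_table
  ADD CONSTRAINT fk_parent FOREIGN KEY (parent_id)
  REFERENCES my_parent_table (id);
```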

  • Article is outdated (at least for MariaDB).

    MariaDB [uni_db]> CALL dorepeat(1000000);
    Query OK, 0 rows affected (1 min 46.11 sec)

    MariaDB [uni_db]> optimize table a;
    | Table | Op | Msg_type | Msg_text |
    | uni_db.a | optimize | note | Table does not support optimize, doing recreate + analyze instead |
    | uni_db.a | optimize | status | OK |
    2 rows in set (9.51 sec)

    MariaDB [uni_db]> alter table a drop key c;
    Query OK, 0 rows affected (0.01 sec)
    Records: 0 Duplicates: 0 Warnings: 0

    MariaDB [uni_db]> optimize table a;
    | Table | Op | Msg_type | Msg_text |
    | uni_db.a | optimize | note | Table does not support optimize, doing recreate + analyze instead |
    | uni_db.a | optimize | status | OK |
    2 rows in set (4.83 sec)

    MariaDB [uni_db]> alter table a add key(c);
    Query OK, 0 rows affected (4.85 sec)
    Records: 0 Duplicates: 0 Warnings: 0

I had a table of 6 GB of data; to optimize it was taking more than 16 hours. After dropping two existing indexes, it took 30 minutes to optimize and 50 min + 9 min for adding back the indexes, which was a pretty good result.
    But in production I still have a table of 150 GB of data; can anyone tell me how much time all the above activities might take?

I have an InnoDB table 650 GB in size, of which the data should be 100 GB (MySQL 5.5 on Solaris 10). OPTIMIZE TABLE was taking a huge amount of time; I had to cancel it partway through, as the speed suggested it would take more than 1 day. I will try dropping all the indexes and see.

I received the message “Tables using the InnoDB engine (10) will not be optimised. Other tables will be optimised (25)” and don’t understand what it means. Will you please explain what it is and how to optimise them?

For me it did not help:
    a table with 1,887,681 rows and 500 columns takes more time without indexes (only the primary key) than with all indexes.
