Using MySQL OPTIMIZE tables? For InnoDB? Stop!

InnoDB/XtraDB tables do benefit from being reorganized often. You get the data physically laid out in primary key order, as well as a better fill of the primary key and index pages, and so less space used; it is just that MySQL OPTIMIZE TABLE might not be the best way to do it. In this post, we’re going to look at why you shouldn’t necessarily optimize tables with MySQL OPTIMIZE.

If you’re running the InnoDB Plugin on Percona Server with XtraDB, you get the benefit of a great new feature: the ability to build indexes by sort instead of via insertion. This process can be a lot faster, especially for large indexes whose entries arrive in very random order, such as indexes on a UUID column or something similar. It also produces a much better fill factor. The problem is… OPTIMIZE TABLE for InnoDB tables does not take advantage of it, for whatever reason.

Let’s take a look at a little benchmark I did by running OPTIMIZE for a second time on a table which is some 10 times larger than the amount of memory I allocated for the buffer pool.

That’s right: optimizing the table straight away takes over 3 hours, while dropping the indexes besides the primary key, optimizing the table, and adding them back takes about 10 minutes. That is close to a 20x speed difference, and you get a more compact index in the end.

So if you’re considering running OPTIMIZE on your tables, consider using this trick instead. It is especially handy when you’re running it on a slave, where it is OK for the table to be exposed without indexes for some time. Note, though, that nothing stops you from using LOCK TABLES on an InnoDB table to ensure there isn’t a ton of queries reading the table while it has no indexes and bringing the box down.
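
Here is a minimal sketch of the trick, assuming a hypothetical table t with secondary indexes idx_uuid and idx_created on columns uuid and created_at (all of these names are made up; adapt them to your schema):

-- Hypothetical table, column, and index names
-- 1. Drop the secondary indexes, keeping the primary key
ALTER TABLE t DROP KEY idx_uuid, DROP KEY idx_created;

-- 2. Rebuild the table; only the clustered primary key index is rebuilt by insertion
OPTIMIZE TABLE t;

-- 3. Add the secondary indexes back; with fast index creation they are built by sort
ALTER TABLE t ADD KEY idx_uuid (uuid), ADD KEY idx_created (created_at);

If needed, the whole sequence can be wrapped in LOCK TABLES t WRITE; … UNLOCK TABLES; as noted above, so queries do not hit the table while its secondary indexes are missing.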

You can also use this trick for an ALTER TABLE which requires a table rebuild. Dropping all indexes, doing the ALTER, and then adding them back can be a lot faster than a straight ALTER TABLE.
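
A minimal sketch of the same idea for a rebuild-requiring ALTER, again with hypothetical names (here the ALTER widens a column, which forces a table rebuild):

-- Hypothetical names; the specific ALTER is just an example of one that rebuilds the table
-- Drop the secondary index first
ALTER TABLE t DROP KEY idx_uuid;

-- Run the ALTER that requires the rebuild while only the primary key is present
ALTER TABLE t MODIFY COLUMN note VARCHAR(500) NOT NULL;

-- Re-create the secondary index, built by sort
ALTER TABLE t ADD KEY idx_uuid (uuid);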

P.S. I do not know why this was not done when support for creating indexes by sort was implemented. It looks very strange to me to have this feature implemented while the majority of higher-level commands and tools (like mysqldump) do not take advantage of it and still use the old, slow method of building indexes by insertion.

Comments (26)

  • Ben

    I believe this is what JS was getting at…what if I have a table that has Foreign Keys referencing other tables? Can I just drop those, do the optimize, and then re-add?

    December 9, 2010 at 12:00 am
  • Sheeri

    What does mysqldump have to do with creating indexes? It doesn’t create indexes at all….do you mean when importing an export made by mysqldump? And is that different regardless of

    Also, how big was this table on disk? I’m in the middle of doing some XtraDB table optimizations, they take 3 hours for a 57G .ibd file; after they’re done I’ll try this method.

    According to my calculations, your test table is 528Mb….(charset is latin1, sha(1) is 40 chars + 1 byte to indicate length of the varchar, int is 4 bytes, so that’s 45 bytes per row. Autoincrement is 12582913, so you have max 12582912 rows:

    12582912*45/1024/1024=540.000

    Is that accurate? (even if it’s more than 1 byte for varchar length, it doesn’t change the result that much…)

    Also did you test a 2nd time by dropping the indexes first and optimizing and re-adding, and then running OPTIMIZE TABLE, to make sure it wasn’t influenced somehow by the first OPTIMIZE TABLE’s defragmentation/rebuild?

    December 9, 2010 at 2:31 pm
  • Sheeri

    er, sorry, my sentence was cut off — “And is that different regardless of whether or not ALTER TABLE…DISABLE KEYS is used?”

    December 9, 2010 at 2:33 pm
  • Morgan Tocker December 9, 2010 at 2:40 pm
  • Sheeri

    Morgan — thanx, that makes more sense re: mysqldump. Something like --indexes-after-for-innodb or something, so that in the mysqldump the indexes will be added after the table data is inserted. Gotcha.

    December 9, 2010 at 2:52 pm
  • peter

    Sheeri,

    Yes, the point is that mysqldump could be fixed so it supports creating indexes after the data is loaded, which would make it a lot faster for InnoDB tables. Or InnoDB could be fixed to support ENABLE KEYS/DISABLE KEYS, which mysqldump already includes in the dump.

    The table is not small, but the buffer pool in this case is also just 128M; this is my test box.

    I mentioned this is the second OPTIMIZE TABLE run just to make sure it is a “null” operation: it should just rebuild the same table.

    December 9, 2010 at 3:32 pm
  • Holger Thiel

    Besides the UTF-8 bug in Fast Index Creation, it is a good idea to drop and re-create indexes.

    December 10, 2010 at 6:42 am
  • JS

    The InnoDB plugin documentation says that altering foreign keys on tables will cause a table copy instead of fast index creation. Is there an alternate way to optimize the creation of these indexes?

    December 10, 2010 at 2:18 pm
  • Mike

    I tried this trick when importing tables with about 200,000 rows into InnoDB: I first created the tables without indexes, ran the import, ran OPTIMIZE TABLE, and then added the index afterwards (followed by an ANALYZE TABLE). Unfortunately I could not notice any difference. Is there anything more to consider here?

    Actually I wonder if a table should even be optimized after importing at all.

    November 18, 2011 at 7:10 am
  • Mike

    Sorry, I think I missed that this trick only applies to Percona Server.

    November 18, 2011 at 7:13 am
  • Kenny

    Yeah? You going to do this “drop key” thing manually on every key in every table you want to “optimize” when you’ve got 10 databases, each with several hundred tables and multiple keys?? Let me know next month when you’re finally done.

    January 20, 2012 at 10:27 am
  • Pinoy

    Cool! This is a very good tip. I was actually looking for a better way to optimize my database since it is taking almost 6 hours to optimize a single table. Your trick is way faster thanks!

    March 26, 2012 at 11:22 pm
  • Mike

    Kenny – sure, that’s one reason why even a half competent DBA will write a script.

    May 4, 2012 at 4:52 pm
  • vishnu rao

    hi peter,

    for this trick to work, do I need to enable fast_index_creation?

    thanking you.

    regards,
    ch Vishnu

    August 9, 2012 at 8:04 pm
  • Erectrolust

    My InnoDB file size is 5.9 GB and I can’t optimize using mysqlcheck -o -A…
    fuk innodb….

    January 8, 2013 at 7:28 am
  • jago

    Peter, I have an InnoDB table that I want to optimize, but another table (a child table) has a foreign key constraint pointing into this table. I tried this:

    SET FOREIGN_KEY_CHECKS = OFF;

    ALTER TABLE schema.my_parent_table DROP all foreign keys …
    ALTER TABLE schema.my_parent_table DROP all indexes …

    OPTIMIZE TABLE schema.my_parent_table;

    ALTER TABLE schema.my_parent_table ADD back all indexes and foreign keys …

    SET FOREIGN_KEY_CHECKS = ON;

    All of the above works except for the optimize itself (I verified). I get “Error 1025 … Error on rename of … (errno: 150)” because of the FK in the child table. I thought that “SET FOREIGN_KEY_CHECKS = OFF” would disable that relationship.

    I even tried dropping the parent table (which works), then recreating it exactly as it was before the drop (with only the PK). I get the same error.

    Is there a way I get optimize to work on parent tables?

    February 19, 2013 at 10:46 am
  • jago

    Addition to the above entry: I guess I should have said………..

    Is there a way I can get optimize to work on a parent table *without* having to drop the FK constraints in all child tables and then re-create them after I finish with the parent table?

    February 19, 2013 at 11:34 am
  • Alberto

    YOU SAVE MY LIFE!!! THAAAAAAAAAANK YOU!!!!!!

    May 22, 2013 at 4:25 pm
  • Marcin Pohl

    Is this still true for newer MySQL 5.6.x?

    April 17, 2015 at 4:32 pm
  • Alexandr Cherepanov

    Article is outdated (at least for MariaDB).

    MariaDB [uni_db]> CALL dorepeat(1000000);
    Query OK, 0 rows affected (1 min 46.11 sec)

    MariaDB [uni_db]> optimize table a;
    +----------+----------+----------+--------------------------------------------------------------------+
    | Table    | Op       | Msg_type | Msg_text                                                           |
    +----------+----------+----------+--------------------------------------------------------------------+
    | uni_db.a | optimize | note     | Table does not support optimize, doing recreate + analyze instead |
    | uni_db.a | optimize | status   | OK                                                                 |
    +----------+----------+----------+--------------------------------------------------------------------+
    2 rows in set (9.51 sec)

    MariaDB [uni_db]> alter table a drop key c;
    Query OK, 0 rows affected (0.01 sec)
    Records: 0 Duplicates: 0 Warnings: 0

    MariaDB [uni_db]> optimize table a;
    +----------+----------+----------+--------------------------------------------------------------------+
    | Table    | Op       | Msg_type | Msg_text                                                           |
    +----------+----------+----------+--------------------------------------------------------------------+
    | uni_db.a | optimize | note     | Table does not support optimize, doing recreate + analyze instead |
    | uni_db.a | optimize | status   | OK                                                                 |
    +----------+----------+----------+--------------------------------------------------------------------+
    2 rows in set (4.83 sec)

    MariaDB [uni_db]> alter table a add key(c);
    Query OK, 0 rows affected (4.85 sec)
    Records: 0 Duplicates: 0 Warnings: 0

    January 4, 2016 at 5:51 am
  • shrinivas

    I had a table with 6 GB of data; optimizing it was taking more than 16 hours. After dropping two existing indexes, it took 30 minutes to optimize and 50 min + 9 min to add the indexes back, which was a pretty good result.
    But in production I still have a table with 150 GB of data; can anyone tell how much time all the above activities might take?

    June 2, 2016 at 5:42 am
  • ullas dewan

    I have an InnoDB table 650 GB in size, and the data should be about 100 GB. MySQL 5.5 on Solaris 10. OPTIMIZE TABLE was taking a huge amount of time; I had to cancel it partway through, as the speed suggested it would take more than 1 day. I will try dropping all indexes and see.

    October 1, 2016 at 10:08 am
  • Joilson Cardoso

    I liked these notes. However, on MySQL 5.6, dropping a primary key takes a long time as well… Is there a way to drop it faster?

    March 2, 2017 at 6:05 am
  • Robert

    I received the message “Tables using the InnoDB engine (10) will not be optimised. Other tables will be optimised (25)” and I don’t understand what it means. Will you please explain what it is and how these tables can be optimised?

    July 28, 2017 at 1:54 pm
  • mark

    could you do the same thing using pt-online-schema-change --alter 'engine=innodb'?

    November 13, 2017 at 3:25 pm
  • Vardan

    For me it did not help: a table with 1887681 rows and 500 columns takes longer without indexes (only the primary key) than with all indexes.

    May 27, 2018 at 9:44 pm
