
Real-Life Use Case for “Barracuda” InnoDB File Format

April 23, 2008 | Posted In: Benchmarks, Insight for DBAs


In one of his recent posts, Vadim already gave some information about the possible benefits of the new InnoDB file format, but in this post I'd like to share a real-life example of how compression in the InnoDB plugin can be useful for large warehousing tasks.

One of our clients had a lot of problems with one of his servers. It is a pretty powerful server (Intel Xeon E5430 / 8 GB RAM / BBU-enabled RAID10 with fast SAS drives) with a really interesting dataset on it. The only table the customer has on this server is one huge InnoDB table with a set of TEXT fields. We had tried to optimize this server before by playing with various InnoDB parameters (including page size changes), but at the end of the day the server was dying under I/O load, because the dataset size was 60 GB+ and all reads from this table were pretty random (the buffer pool didn't help).

A few days ago we decided to try converting the customer's data to the Barracuda file format using the new InnoDB plugin recently released by Oracle. The first thing I tried was the DYNAMIC row format. After 3.5 hours of conversion we got a 58 GB data file and a 30% drop in our I/O load. This was because with the DYNAMIC row format InnoDB was able to keep the entire PRIMARY index in memory (TEXT and BLOB fields are stored off-page in this format), so our lookups no longer involved random I/O.
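For readers who want to try the same thing, a conversion like this can be sketched as follows. The table name is illustrative, but the two settings are real prerequisites: the Barracuda row formats in the InnoDB plugin only take effect with the Barracuda file format and per-table tablespaces enabled.

```sql
-- Barracuda row formats require these settings in the InnoDB plugin
SET GLOBAL innodb_file_format = 'Barracuda';
SET GLOBAL innodb_file_per_table = 1;

-- Rebuild the table so TEXT/BLOB columns are stored fully off-page
ALTER TABLE documents ROW_FORMAT=DYNAMIC;
```

Note that the ALTER rebuilds the whole table, which is why the conversion took hours on a 60 GB dataset.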

This 30% I/O reduction was nice, but we decided to take our optimizations further, and I tried converting this table to the COMPRESSED row format. This time the conversion took 1.5 hours, and the results were really surprising:

  • Only a 5 GB data file (down from 60 GB)
  • ~5% I/O load according to iostat (down from 99%)
  • ~5% CPU load according to top (down from 80-100%, mostly waiting for I/O)
  • 0.01 sec average lookup time by primary key (down from 1-20 sec before the conversion)

These interesting performance results obviously come from the fact that, in addition to index page compression (by default it tries to compress 16 KB pages down to 8 KB), the new InnoDB plugin also compresses TEXT/BLOB/VARCHAR fields stored off-page, so every large TEXT field can become as small as a few bytes if its compression ratio is good. One thing I'd like to mention here is that in our case we actually tried a few possible options for the compressed page size: 1K, 4K, and 8K. The results were not very different, because most of our data was in TEXT fields and there weren't many indexes aside from the PRIMARY key, but in other cases it could make sense to try page sizes other than the default 8K to make the data set smaller.
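The compressed page size is chosen per table via KEY_BLOCK_SIZE; the variants we tried correspond to statements like the following (again with an illustrative table name):

```sql
-- Compress to 8K pages (also the default when KEY_BLOCK_SIZE is omitted)
ALTER TABLE documents ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

-- Smaller block sizes are worth testing when the data compresses well
ALTER TABLE documents ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=4;
ALTER TABLE documents ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=1;
```

Each ALTER is a full table rebuild, so comparing sizes means running the conversion once per candidate block size.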

So far, so good: the customer is really happy with these changes, and we're really glad to have such a nice tool as the new InnoDB plugin, which has now proved its usefulness in data warehousing tasks.



  • Alexey,

Thank you for sharing this interesting detail about the InnoDB compression feature, which is not obvious unless you spend time carefully reading the manual: it is not only page-by-page compression, but large BLOBs are compressed separately, so if you have a lot of BLOB data you are not limited by the 1:2 best theoretically possible compression ratio of the 8K page size.

Speaking of the CPU info: the 80-100% "CPU" load is, I guess, a bit misleading – you probably mean there was little idle time and a lot of "iowait" (which also corresponds to the CPU being free).

  • I'm very glad this worked out for your client, and I want to thank you for this blog entry showing some real-life performance numbers.

You mentioned that you had indexes, including the primary index. Were they used often – as in, a secondary index used along with the primary index the majority of the time? I am asking because the search in phpBB 3.0.1 has two fields in a table corresponding to a word_id and a post_id; both fields are indexed together and are used as often as search is used. Based on your findings, I'm trying to estimate the improvements that can be gained for phpBB's search in terms of DB performance.

  • Petya, as you note, the InnoDB Plugin is at this stage an “early adopter release”. It’s for you to test and decide if you want to be an early adopter! As time goes by, more people will use it, and you can be an “adopter”. And, if you wait too long, you’ll be a “late adopter”. 😉

    Innobase has tested the code well. But there are operational considerations as documented in the manual. And some of the very earliest adopters are finding bugs (though nothing too bad so far!), while others are finding excellent performance. It is “early days” in the history of the InnoDB Plugin.

    I suggest you watch this space and the messages posted at http://forums.innodb.com/ to get a sense of what other people are seeing. You are undoubtedly the best judge of whether the software is “production capable”, based on your own application requirements, your tolerance for risk, and your understanding of the actual issues encountered, documented and resolved. It is all too easy for someone to declare software “production”, but that label doesn’t really mean as much as you will learn by doing your own testing and evaluation.

    Best wishes!

  • By the way, Alexey, we at Innobase are very pleased to see these test results. Thank you for doing these tests and posting. We would love to see more details on the specific schema, data, and workload. How many columns? What is the average length of each column and each row?

And how did you do the conversion? Did you simply ALTER TABLE from MyISAM to InnoDB? LOAD DATA? Did you create the secondary indexes AFTER loading the table (to take advantage of "fast index create")?

    The more you can share, the more we’ll all know about the potential of the InnoDB Plugin to address varying requirements.


  • To echo Ken Jacobs:

Would love to know how you performed the conversion as well: did you dump the data out and reload it, or do an in-place ALTER of the table?


  • To Ken and Shane: the conversion was performed by a simple ALTER TABLE.

    Some table-related info:
CREATE TABLE document_queries (
  id int(11) NOT NULL AUTO_INCREMENT,
  word_document_id int(11) DEFAULT NULL,
  html mediumtext,
  created_at datetime DEFAULT NULL,
  updated_at datetime DEFAULT NULL,
  query_count int(11) DEFAULT '0',
  short_html text,
  PRIMARY KEY (id),
  KEY index_document_queries_on_word_document_id (word_document_id)
) ENGINE=InnoDB

mysql> SHOW TABLE STATUS LIKE 'document_queries'\G
    *************************** 1. row ***************************
    Name: document_queries
    Engine: InnoDB
    Version: 10
    Row_format: Compact
    Rows: 10027623
    Avg_row_length: 4266
    Data_length: 42783473664
    Max_data_length: 0
    Index_length: 43614208
    Data_free: 0
    Auto_increment: 3442149
    Create_time: 2008-04-21 19:25:22
    Update_time: NULL
    Check_time: NULL
    Collation: latin1_swedish_ci
    Checksum: NULL
    Comment: InnoDB free: 120832 kB
    1 row in set (0.24 sec)

  • As you probably know, Alexey, using fast index create to build the secondary index on word_document_id AFTER you copy the data could reduce your total "conversion time", and the resulting index would also be denser, making it more efficient for the 50% of your queries that use the secondary key. If you happen to try it, let us know what you find out!
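For readers unfamiliar with fast index create, the suggestion amounts to something like the following (table and index names taken from the schema posted above; whether this beats a single ALTER is exactly what is being asked):

```sql
-- Drop the secondary index so the table rebuild doesn't carry it along
ALTER TABLE document_queries
  DROP KEY index_document_queries_on_word_document_id;

-- Rebuild the table in the compressed Barracuda format
ALTER TABLE document_queries
  ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

-- Re-create the index; the plugin builds it with a sorted bulk load,
-- producing denser pages than row-by-row insertion
ALTER TABLE document_queries
  ADD KEY index_document_queries_on_word_document_id (word_document_id);
```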

  • It seems to depend very much on the data. I tested some simple queries on a forum with ~150,000 posts. While 5.1 with a compressed 'posts' table had about the same performance as a normal InnoDB table in 5.0, a combined query with the 'search_table' revealed that 5.1 with 'Barracuda' was more than 65% slower than 5.0. It's not a 5.0-vs-5.1 issue, because 5.1 performs the same with the 'old' InnoDB format.
