
Is the archive storage engine an option


  • Is the archive storage engine an option

    Hi,
    We currently use InnoDB and I want to see whether the ARCHIVE engine is an option to replace it. Currently we are finding that selects and inserts are getting slow.

    We have 31 tables, each with 50 million rows; together they hold 1.2 terabytes of data, including an auto-increment index on the ID field. Each table represents one of the last 31 days. Once per day we drop the oldest table and start writing to a new one.

    We only do inserts (batched constantly throughout the day, 1,000 rows at a time).
    We only do selects of the form select * from table where id in (1,6,36,47,59), etc.; there could be 1,000 ids in each select.
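
    The rotation and workload above can be sketched as follows; the table and column names here are illustrative assumptions, not our actual schema.

    ```sql
    -- One daily table (names assumed for illustration):
    CREATE TABLE events_20100824 (
      id BIGINT NOT NULL AUTO_INCREMENT,
      payload VARBINARY(255),   -- placeholder for the real columns
      PRIMARY KEY (id)
    ) ENGINE=InnoDB;

    -- Once per day: drop the oldest of the 31 tables.
    DROP TABLE events_20100724;

    -- Inserts arrive in constant batches of ~1,000 rows:
    INSERT INTO events_20100824 (payload) VALUES ('a'), ('b'), ('c');

    -- Reads are point lookups by primary key, up to ~1,000 ids per query:
    SELECT * FROM events_20100824 WHERE id IN (1, 6, 36, 47, 59);
    ```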

    My question is: could we use the ARCHIVE engine with an auto-increment index on the id field and still have fast selects against the primary key, without doing full table scans?

    I am just not that familiar with how the ARCHIVE engine would handle this type of select and insert workload.

    Our main requirements are smaller storage size and fast inserts and selects with the above structure.

    We are replacing the old server with one backed by a RAID 10 SAN with ten 600 GB 15k drives, 48 GB of memory, and decent procs.

    Thanks very much for any advice

  • #2
    leonch wrote on Tue, 24 August 2010 18:23


    We only do inserts (batched constantly throughout the day, 1,000 rows at a time).
    We only do selects of the form select * from table where id in (1,6,36,47,59), etc.; there could be 1,000 ids in each select.

    My question is: could we use the ARCHIVE engine with an auto-increment index on the id field and still have fast selects against the primary key, without doing full table scans?


    What does your execution plan look like for the 1,000 ids selected? Because IIRC the ARCHIVE engine _always_ performs a full table scan, which I'm guessing would be worse than 1,000-out-of-50,000,000 random reads using the index to find the records.
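
    If you want to check this yourself, EXPLAIN will show whether the IN() lookup uses the index or scans; a minimal sketch, assuming a test copy of one of your tables converted to ARCHIVE:

    ```sql
    -- Assumed test table name; create a copy with ENGINE=ARCHIVE first.
    EXPLAIN SELECT * FROM events_archive WHERE id IN (1, 6, 36, 47, 59);
    -- type = ALL in the output means a full table scan;
    -- type = range with key = PRIMARY means the id index is being used.
    ```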

    Another suggestion for you is MyISAM tables, which usually require much less storage space on disk than InnoDB for the same amount of data.
    MyISAM also has some features that make it fast at writing to a continuously growing table where there are no deletes or updates.
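
    A quick way to compare is to convert a test copy of one finished daily table and measure the disk footprint and your select timings; the table name here is an assumption:

    ```sql
    -- Rebuilds the table as MyISAM (do this on a test copy, not production):
    ALTER TABLE events_20100824 ENGINE=MyISAM;
    ```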

    And with MyISAM you can compress an old table with its index functionality intact. The only drawback is that you can't change the table after compression, but then again you can't do that with ARCHIVE either.
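
    The compression is done offline with myisampack, then the indexes are rebuilt with myisamchk. The paths and table names below are assumptions for illustration, and the server must not be writing to the table while you run this:

    ```shell
    # Pack a finished daily MyISAM table (it becomes read-only):
    myisampack /var/lib/mysql/mydb/events_20100724.MYI
    # Rebuild the indexes after packing:
    myisamchk -rq /var/lib/mysql/mydb/events_20100724.MYI
    # Tell the server to reopen the packed files:
    mysql -e "FLUSH TABLE mydb.events_20100724"
    ```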

    As for reliability, MyISAM doesn't have the same safety harnesses that InnoDB has, but then again the benefits might outweigh the drawbacks.

    You just have to set up a test server and try it out. :)
