Yesterday I had a fun time repairing a 1.5TB ext3 partition containing many millions of files. Of course it should never have happened – this was a decent PowerEdge 2850 box with a RAID volume, ECC memory and a reliable CentOS 4.4 distribution – but still it did. We got a “journal failed” message in the kernel log, and the filesystem needed to be checked and repaired, even though a journaling filesystem should not need checks in normal use, even in case of power failures. Checking and repairing took many hours, especially as the automatic check on boot failed and had to be restarted manually.
The same may happen with Innodb tables. They are designed to never crash, surviving power failures and even partial page writes, but they can still get corrupted because of MySQL bugs, OS bugs, hardware bugs, misconfiguration or failures.
Sometimes the corruption is mild, so an ALTER TABLE to rebuild the table fixes it. Sometimes the table needs to be dropped and recovered from backup, and in certain cases you may need to reimport the whole database – if the corruption happens to be in the undo tablespace or log files.
So do not forget to have a recovery plan for this kind of failure. This is one of the things you'd better have backups for. Backups, however, take time to restore, especially if you do point-in-time recovery using the binary log to get to the actual database state.
A good practice for approaching this kind of problem is first to have enough redundancy. I always assume any component, whether a piece of hardware or software, can fail, even if that piece of hardware has some internal redundancy by itself, such as RAID or SAN solutions.
If you can’t afford full redundancy for everything (and probably even if you can), a good idea is to keep your objects smaller, so any maintenance you need to do on them takes less time. Smaller RAID volumes typically rebuild faster, a smaller database size per system (yet another reason to like medium-end commodity hardware) makes it faster to recover, and smaller tables allow per-table backup and recovery to happen faster.
With MySQL and its blocking ALTER TABLE there is yet another reason to keep tables small, so you do not have to use complicated scenarios to do simple things. Assume, for example, you need to add an extra column to a 500GB Innodb table. It will probably take long hours or even days for ALTER TABLE to complete, and about 500GB of temporary space will be required, which you simply might not have. You can of course use MASTER-MASTER replication and run the statement on one server, switch roles and then do it on the other, but if the ALTER TABLE takes several days, can you really afford having no box to fall back to for such a long time?
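The master-master dance above can be sketched roughly like this; the table and column names are made up for illustration, and the traffic-switching step depends entirely on your own setup:

```sql
-- On the passive server only. Keep the ALTER out of the binary log
-- so it does not replicate to the active server while it is serving traffic:
SET sql_log_bin = 0;
ALTER TABLE orders ADD COLUMN notes TEXT;
SET sql_log_bin = 1;

-- Then let the passive server catch up on replication, switch application
-- traffic over to it (load balancer / VIP / app config), and repeat the
-- same ALTER on the other server, which is now the passive one.
```

Note that during each ALTER the altering server still needs roughly the table size in temporary space – the procedure only avoids blocking production traffic, not the cost of the rebuild.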
On the other hand, if you had 500 tables of 1GB each, it would be very easy – you can simply take small pieces of data offline for a minute and alter them while the rest stays live. The whole process will also be much faster this way, as the indexes of such small tables will fit well in memory.
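A minimal sketch of that per-table approach, assuming the 500 tables follow a naming scheme like `messages_000` … `messages_499` (the scheme and the column change are assumptions, not something from the post):

```python
# Generate one ALTER statement per small table, so each table can be taken
# offline briefly and altered on its own instead of one huge blocking ALTER.
# The table naming scheme (messages_000 ... messages_499) is hypothetical.

def alter_statements(base="messages", groups=500,
                     change="ADD COLUMN notes TEXT"):
    """Yield one ALTER TABLE statement per table group."""
    for i in range(groups):
        yield f"ALTER TABLE {base}_{i:03d} {change}"

stmts = list(alter_statements())
print(stmts[0])    # ALTER TABLE messages_000 ADD COLUMN notes TEXT
print(len(stmts))  # 500
```

A driver script would execute these one at a time, taking each table's slice of traffic offline for the minute the small ALTER needs.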
Not to mention that splitting 500 tables across several servers will likely be easier than splitting one big one.
There are a bunch of complications with many tables, of course – it is not always easy to partition your data appropriately, and the code gets more complicated – but for many applications it is worth the trouble.
At NNSEEK, for example, we have the data split into 256 groups of tables. The current data size is small enough that even a single table would not be a big problem, but it is much easier to write your code to handle the split from the very beginning than to try to add it later on, when there are 100 helper scripts written, etc.
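The routing side of such a split can be as small as a hash modulo the group count. A sketch, assuming CRC32 as the hash – the post does not say how NNSEEK actually maps keys to its 256 groups:

```python
# Map a record's key to one of 256 table groups. CRC32 is just one
# reasonable, stable choice of hash; the actual scheme is an assumption.
import zlib

NUM_GROUPS = 256

def table_group(key: str) -> int:
    """Return the table group (0-255) a given key belongs to."""
    return zlib.crc32(key.encode("utf-8")) % NUM_GROUPS

def table_name(base: str, key: str) -> str:
    """Full table name for this key, e.g. posts_017."""
    return f"{base}_{table_group(key):03d}"

print(table_name("posts", "comp.lang.c"))
```

The important property is that the mapping is deterministic, so every helper script computes the same group for the same key without any lookup table.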
For the same reason I would recommend setting up multiple virtual servers even if you work with a single physical one in the beginning. Different accounts with different permissions will be good enough. Doing so will ensure you do not have problems once you really need to scale to multiple servers.