An attendee at Espen’s recent webinar asked how to check tables for corruption. This ties into my recent post on InnoDB’s handling of corrupted pages: the best way to check for corruption is with CHECK TABLE, but if a page is corrupt, InnoDB will crash the server to prevent access to the corrupt data. As I mentioned in that post, this behavior can only be changed by changing InnoDB itself.
So how are you supposed to check for corruption that might be introduced by bad hardware, a bug, or so forth?
It’s a great question. The answer I would give for most cases is “check your backups for corruption instead of your live server.” You need to do this anyway — a backup that isn’t checked is a ticking time bomb. You need to verify (at least periodically) that your backups are recoverable.
The usual procedure goes like this: copy your backup somewhere, start a throwaway server instance on it, and run CHECK TABLE on every table. The mysqlcheck program does this conveniently.
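A minimal sketch of that procedure might look like the following. The paths, port, and socket are placeholders; adjust them to your environment, and don’t point the scratch instance at your live datadir:

```shell
# Restore the backup into a scratch data directory (paths are examples).
cp -r /backups/latest /var/tmp/restore-check

# Start a throwaway mysqld against the restored data, on a
# non-standard port and socket so it can't collide with production.
mysqld --datadir=/var/tmp/restore-check \
       --port=3307 \
       --socket=/var/tmp/restore-check/mysql.sock &

# Run CHECK TABLE on every table in every database.
mysqlcheck --all-databases --check \
           --socket=/var/tmp/restore-check/mysql.sock

# Shut the scratch instance down when the check completes.
mysqladmin --socket=/var/tmp/restore-check/mysql.sock shutdown
```

Running the check on a restored copy has a nice side effect: it proves the backup is restorable at the same time it checks for corruption.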
You could also use innochecksum, which doesn’t require starting the server. But it only verifies that each page’s checksum matches the page’s data; it doesn’t perform all the other checks that are built into InnoDB (making sure that the LSNs are sane, for example).
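For example, run against a restored tablespace file (the path below is a placeholder; with innodb_file_per_table each table has its own .ibd file, otherwise point it at ibdata1). The file must not be open by a running server while you check it:

```shell
# Verify page checksums in a tablespace file without starting mysqld.
innochecksum /var/tmp/restore-check/mydb/mytable.ibd

# A non-zero exit status indicates a page whose stored checksum
# did not match the computed one.
echo $?
```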
How often? As often as possible. Some people refresh their dev/staging environment every day with last night’s backup, which is a great way to make failures obvious, as long as you verify that the refresh truly does happen (what if it fails and you keep running with yesterday’s data without knowing it?). If you can’t do it daily, then weekly is perfectly acceptable to most people. I’m not saying any specific interval should be your goal; I’m just noting what a lot of people seem to feel good about.