As I stated in my last post, I decided to use the Innodb Table Monitor to verify whether an index count mismatch error that a customer had found in their mysqld.err log still existed.
The Innodb Table Monitor has existed in Innodb basically forever (the MySQL manual discusses it as far back as 4.1), and dates from a time when what was going on inside Innodb was a lot murkier than it is now. To use it, you create a table (in any database you choose), like this:
CREATE TABLE innodb_table_monitor (a INT) ENGINE=INNODB;
This, primitive as it is, tells Innodb that it should start the Table Monitor. The monitor dumps the contents of the Innodb data dictionary to the MySQL error log, which looks something like this:
090420 12:09:32 INNODB TABLE MONITOR OUTPUT
TABLE: name SYS_FOREIGN, id 0 11, columns 7, indexes 3, appr.rows 1
COLUMNS: ID: DATA_VARCHAR DATA_ENGLISH len 0;
FOR_NAME: DATA_VARCHAR DATA_ENGLISH len 0;
REF_NAME: DATA_VARCHAR DATA_ENGLISH len 0;
N_COLS: DATA_INT len 4;
DB_ROW_ID: DATA_SYS prtype 256 len 6;
DB_TRX_ID: DATA_SYS prtype 257 len 6;
INDEX: name ID_IND, id 0 11, fields 1/6, uniq 1, type 3
root page 46, appr.key vals 1, leaf pages 1, size pages 1
FIELDS: ID DB_TRX_ID DB_ROLL_PTR FOR_NAME REF_NAME N_COLS
INDEX: name FOR_IND, id 0 12, fields 1/2, uniq 2, type 0
root page 47, appr.key vals 1, leaf pages 1, size pages 1
FIELDS: FOR_NAME ID
INDEX: name REF_IND, id 0 13, fields 1/2, uniq 2, type 0
root page 48, appr.key vals 1, leaf pages 1, size pages 1
FIELDS: REF_NAME ID
Basically you can see every table, every column, and every index that Innodb knows about. Once you have the output, you just DROP the table and the monitor stops. It’s been around forever, so it should be stable, right?
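For completeness, stopping the monitor is just the reverse of starting it (the database you created the table in doesn’t matter):

```sql
-- Dropping the magic table turns the Table Monitor back off
DROP TABLE innodb_table_monitor;
```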
So, back to my customer. It may be important to note that this customer had a few hundred tables. I scheduled a time with the customer during off hours to connect to their master and collect the output so we could confirm whether Innodb really did have all the indexes or not. I collected the data just fine, and found that the indexes Innodb knew about were indeed correct; the error messages the customer saw were apparently only transitory.
However, as I was writing an email to the customer to explain this, I got an email from him: he had seen a big spike in io-wait on the server during the minute or so the Innodb Table Monitor ran, and he was very glad we had waited until after hours. The server itself was fine, and there didn’t seem to be any lasting adverse effects, but something was clearly a lot busier than expected while the monitor ran.
He had data collected with Percona Toolkit‘s pt-stalk during that window and asked me to take a look at what had happened. Digging through it, I found not only high io-wait on the system, but also over a hundred user queries in SHOW PROCESSLIST stuck in the ‘Opening tables’ state. Uh-oh.
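If you want a quick summary of a processlist like that yourself, an aggregate over information_schema (available on 5.1, which this server was running) shows how many threads are piled up in each state; this is a sketch of the kind of check pt-stalk’s captured data lets you do after the fact:

```sql
-- Count threads by state; a pile-up in 'Opening tables' means many
-- sessions are blocked trying to open tables
SELECT state, COUNT(*) AS threads
FROM information_schema.PROCESSLIST
GROUP BY state
ORDER BY threads DESC;
```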
I dug through the source code and found a few interesting things about the table monitor:
- It holds the dict_sys->mutex for the entire time it runs
- Before outputting the information for each table, it calls dict_update_statistics, which appears to force that table’s statistics to be regenerated
- It’s possible I’m mistaken, but it sure looks like dict_update_statistics is called twice for each table, at least in the latest Percona Server 5.1 in: storage/innodb_plugin/dict/dict0load.c: line 226, and again in: storage/innodb_plugin/dict/dict0dict.c: line 4883
Two things I wish were true:
- Table stats weren’t regenerated unless it was actually necessary
- dict_sys->mutex weren’t such a big hammer