What does the ‘Incorrect key file for table’ error mean?

What does it mean if MySQL returns the ‘Incorrect key file for table’ error for one of my queries? The answer is complicated and depends on which storage engine is returning the error. We have debugged two cases which we describe here.

File system out of space

When running the random query generator, one of the queries failed.

Since this query requires a sort, MySQL creates a hidden MyISAM temporary table (with a key file named something like ‘#sql_6b8_17.MYI’) to hold the intermediate results. While the query was executing, some operation on this MyISAM table returned an error. What could it be?
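The actual query generated by the random query generator is not preserved in this copy of the post, but any query whose sort spills to an on-disk MyISAM temporary table can exercise this path. A hypothetical example, with illustrative table and column names:

```sql
-- Hypothetical example only: a sort with no usable index forces a filesort,
-- and a sort too large for memory spills to an on-disk MyISAM temporary
-- table whose key file is named like '#sql_6b8_17.MYI'.
SELECT a, b
FROM t1
ORDER BY b;   -- no index on b, so MySQL must sort the rows itself
```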

MySQL maps the HA_ERR_CRASHED error received from the storage engine to the ER_NOT_KEYFILE error that the MySQL client sees. We need to track down where MyISAM returns the HA_ERR_CRASHED error. Unfortunately, the MyISAM storage engine returns HA_ERR_CRASHED in many places, so the cause can be almost anything.

In our case, we found that the MyISAM mi_write function eventually got an ENOSPC (no space) error when trying to write data to one of its files: the file system had run out of space. MyISAM returns the HA_ERR_CRASHED error, the MySQL client gets the ER_NOT_KEYFILE error, and the random query generator notes the failed query. Simple, right?

Race in secondary index query

TokuDB returns the HA_ERR_CRASHED error when it cannot read a row from the primary index using a primary key that it read from a secondary index. What does that mean?

Each key defined for a TokuDB table is stored in a fractal tree dictionary. The primary key is stored in the ‘main’ fractal tree dictionary. If the table does not define a primary key, then TokuDB manufactures a hidden primary key and uses it instead; this hidden primary key is never seen outside of the TokuDB storage engine. Each secondary key is stored in its own ‘key’ fractal tree dictionary, whose key is composed of the secondary key columns with the primary key columns appended.

Given this mapping, non-covering queries read the secondary index first and then read the primary index using the primary key read from the secondary index.

Let’s suppose that we have a simple table.
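The table definition was lost from this copy of the post; a minimal table consistent with the discussion (a secondary key on ‘x’ and no explicit primary key, so TokuDB manufactures a hidden one) might look like this. The table and column names are illustrative only:

```sql
CREATE TABLE t (
    x BIGINT NOT NULL,
    y BIGINT NOT NULL,   -- extra column so the key on x is non-covering
    KEY x_key (x)        -- secondary key; no PRIMARY KEY is defined
) ENGINE=TokuDB;
```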

We insert a single row into the table.
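The insert statement is not shown in this copy; assuming an illustrative table t(x, y) with a secondary key on x, it would be something like:

```sql
-- Illustrative names; the post's actual table is not shown.
INSERT INTO t (x, y) VALUES (1, 100);
```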

On another MySQL client, we will run a query and force it to use the secondary key for ‘x’. Since it is not a covering key for the query, TokuDB will read a row from the secondary index and use the retrieved primary key to read the row from the hidden primary dictionary. We are going to put a delay between these two events and see what happens.

We use read uncommitted isolation level.
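In MySQL this is set per session, for example:

```sql
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
```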

We set a debug variable that forces a delay between reading a row from the index on ‘x’ and using the retrieved primary key to read the full row from the hidden primary index.
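The name of the debug variable is not preserved in this copy of the post; the statement would have the following shape, where tokudb_read_full_row_delay is a hypothetical stand-in for the real variable:

```sql
-- Hypothetical variable name; the actual debug variable is not shown here.
SET SESSION tokudb_read_full_row_delay = 10000000;  -- delay, assumed to be in microseconds
```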

Now, we start the query. It should stall the thread in the read_full_row method for the delay time.
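Assuming the illustrative table t(x, y) with a secondary key named x_key, a non-covering query forced onto the secondary index looks like:

```sql
-- Selecting y makes x_key non-covering: TokuDB must read the secondary
-- index, then look up the full row in the hidden primary dictionary.
SELECT x, y FROM t FORCE INDEX (x_key) WHERE x = 1;
```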

On the other MySQL client, we delete the row.
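Again with illustrative names, the delete on the other client is simply:

```sql
DELETE FROM t WHERE x = 1;  -- removes the row while the query client is stalled
```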

Eventually, the query client resumes after the sleep completes and can no longer find the primary key in the primary fractal tree, so it returns the HA_ERR_CRASHED error.

This problem does not occur for any other transaction isolation levels as far as we know. This is because TokuDB uses snapshot reads when reading MVCC leaf entries for all transaction isolation levels other than read uncommitted. For read uncommitted transaction isolation level, TokuDB just returns the latest MVCC entries from the leaf entry, which may be inconsistent with the rest of the table.

Maybe TokuDB should silently eat this error for read uncommitted queries. What do you think?


Comments (3)

  • Ratheesh

    I am not sure whether the error should or should not be silently eaten by TokuDB. But I believe the error is visible on a replication slave irrespective of tx_isolation. I could be wrong, but the tx_isolation level on the server is REPEATABLE-READ, and I don’t see any indication of the Zabbix client setting a different isolation level.

    Replication breaks with the below error:
    Last_SQL_Error: Error ‘Incorrect key file for table ‘history_uint’; try to repair it’ on query. Default database: ‘zabbix’. Query: ‘delete from history_uint where itemid=39656 and clock<…’

    mysql> show create table zabbix.history_uint\G
    *************************** 1. row ***************************
    Table: history_uint
    Create Table: CREATE TABLE history_uint (
      itemid bigint(20) unsigned NOT NULL,
      clock int(11) NOT NULL DEFAULT ‘0’,
      value bigint(20) unsigned NOT NULL DEFAULT ‘0’,
      ns int(11) NOT NULL DEFAULT ‘0’,
      KEY history_uint_1 (itemid,clock)

    The above table was InnoDB earlier and was converted to TokuDB to test performance and usability.
    The mysql server version is:
    rpm -qa | grep -i percona

    TokuDB version:
    mysql> select @@tokudb_version;
    | @@tokudb_version |
    | tokudb-7.5.6 |
    1 row in set (0.00 sec)

    TokuDB settings on the slave:
    mysql> show global variables like ‘toku%’;
    | Variable_name | Value |
    | tokudb_alter_print_error | OFF |
    | tokudb_analyze_delete_fraction | 1.000000 |
    | tokudb_analyze_time | 5 |
    | tokudb_block_size | 4194304 |
    | tokudb_bulk_fetch | ON |
    | tokudb_cache_size | 16826132480 |
    | tokudb_check_jemalloc | 1 |
    | tokudb_checkpoint_lock | OFF |
    | tokudb_checkpoint_on_flush_logs | OFF |
    | tokudb_checkpointing_period | 60 |
    | tokudb_cleaner_iterations | 5 |
    | tokudb_cleaner_period | 1 |
    | tokudb_commit_sync | OFF |
    | tokudb_create_index_online | ON |
    | tokudb_data_dir | |
    | tokudb_debug | 0 |
    | tokudb_directio | OFF |
    | tokudb_disable_hot_alter | OFF |
    | tokudb_disable_prefetching | OFF |
    | tokudb_disable_slow_alter | OFF |
    | tokudb_disable_slow_update | OFF |
    | tokudb_disable_slow_upsert | OFF |
    | tokudb_empty_scan | rl |
    | tokudb_fs_reserve_percent | 5 |
    | tokudb_fsync_log_period | 1000 |
    | tokudb_hide_default_row_format | ON |
    | tokudb_killed_time | 4000 |
    | tokudb_last_lock_timeout | |
    | tokudb_load_save_space | ON |
    | tokudb_loader_memory_size | 100000000 |
    | tokudb_lock_timeout | 4000 |
    | tokudb_lock_timeout_debug | 1 |
    | tokudb_log_dir | |
    | tokudb_max_lock_memory | 2103266560 |
    | tokudb_optimize_index_fraction | 1.000000 |
    | tokudb_optimize_index_name | |
    | tokudb_optimize_throttle | 0 |
    | tokudb_pk_insert_mode | 1 |
    | tokudb_prelock_empty | ON |
    | tokudb_read_block_size | 65536 |
    | tokudb_read_buf_size | 131072 |
    | tokudb_read_status_frequency | 10000 |
    | tokudb_row_format | tokudb_zlib |
    | tokudb_rpl_check_readonly | ON |
    | tokudb_rpl_lookup_rows | ON |
    | tokudb_rpl_lookup_rows_delay | 0 |
    | tokudb_rpl_unique_checks | ON |
    | tokudb_rpl_unique_checks_delay | 0 |
    | tokudb_support_xa | ON |
    | tokudb_tmp_dir | |
    | tokudb_version | tokudb-7.5.6 |
    | tokudb_write_status_frequency | 1000 |
    52 rows in set (0.01 sec)

    What would be a good way to overcome this issue? Does adding a clustered secondary index help? Or does it need an explicit (not hidden) primary key?

    May 17, 2015 at 10:08 pm
  • Marcos Albe

    There’s one more option: if the table is InnoDB then it could be you have hit https://bugs.mysql.com/bug.php?id=44571

    April 20, 2018 at 1:04 pm
