Limiting InnoDB Data Dictionary

One of InnoDB’s features is that the memory allocated for internal table definitions is not limited and may grow indefinitely. You may not notice it if you have a typical application with, say, 100-1000 tables. But for hosting providers and for user-oriented applications (each user has a dedicated database / table) it is a disaster. With 100,000+ tables InnoDB consumes gigabytes of memory, keeping a definition in memory from the moment the table is first opened. The only way to clean up that memory is to drop the table or restart mysqld; I can’t call that a good solution, so we made a patch which allows you to restrict the memory dedicated to the data dictionary.

The patch was made at the request of our customer Vertical Response and is released under the GPL, so you can download it there. The patch is currently in the testing stage, but it will later be included in our releases. To limit the memory we introduce a new variable, innodb_dict_size_limit (in bytes).
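For illustration, setting the new variable in my.cnf might look like the following; the 64 MB value here is just an assumed example, not a recommendation from the patch:

```ini
[mysqld]
# Cap the memory InnoDB may spend on cached table definitions (in bytes).
innodb_dict_size_limit = 67108864
```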

Some internals: InnoDB already implements an LRU-based algorithm to keep only recently used table entries, but it was not used, because InnoDB has to know whether a table is in use at the MySQL level. We made that possible by checking the MySQL table_cache: if a table is present in the table_cache we consider it in use; if not, we can delete it from the InnoDB data dictionary. So there is a catch: if you have a big enough table_cache, the memory consumed by the data dictionary may exceed innodb_dict_size_limit, because we can’t delete any table entry that is still cached.

To finalize this post, a small marketing message: if you have faced a bug or problem which has existed for a long time and is not going to be solved by MySQL / InnoDB, contact us regarding Custom MySQL Development.

Comments (13)

  • fire9

    Good patch. But I can’t download it.

    error information

    Sorry, no posts matched your criteria.

    February 11, 2009 at 5:59 pm
  • Vadim


    fixed, mistyping in path.

    February 11, 2009 at 6:28 pm
  • Shlomi Noach


    great to hear.
    “We made it by checking MySQL table_cache. If table is placed in table_cache we consider it as used, if not – we can delete it from InnoDB data dictionary.”

    Does this mean a global lock on the table cache is required when checking for table existence? (the table could be loaded into cache right after you check, in which case you must not delete it from data dictionary)

    If so, how does that affect performance? Is the table cache lock a bottleneck?


    February 11, 2009 at 10:49 pm
  • mysqldba

    This patch can certainly come in handy. Will it become a part of XtraDB?

    February 12, 2009 at 12:01 am
  • Vadim

    Shlomi Noach,

    We only set the “table in use” flag under the table_cache lock, that’s all. All the main work is done in InnoDB code, so I do not expect a problem with the table_cache lock.

    February 12, 2009 at 12:36 am
  • wlad

    +extern ulong srv_dict_size_limit;

    This patch has a problem: innodb_dict_size_limit cannot exceed 4GB on 64-bit Windows. Using variations of long is generally not a good idea; it is the worst datatype when it comes to portability. You need size_t for it, or larger (i.e. ulonglong).

    February 12, 2009 at 9:45 am
  • venu

    Since innodb_open_files already exists as a limit on open handles, it would have been nice if this were controlled by an innodb_table_open_cache (same as table_[open_]cache), so that it limits by table count instead of size.

    February 12, 2009 at 10:57 am
  • peter


    The InnoDB data dictionary is different. First, you can run with a single large tablespace, in which case the number of open files can be irrelevant. Second, if you have very many tables (millions) you may have more of them cached in the InnoDB data dictionary than have file handles associated with them. Opening/closing a file is relatively fast; the real cost here is updating InnoDB stats (a bunch of random dives).

    February 12, 2009 at 12:11 pm
  • Vadim


    It is actually

    +extern ulint srv_dict_size_limit;

    and is the same as

    extern ulint srv_pool_size;
    extern ulint srv_awe_window_size;
    extern ulint srv_mem_pool_size;

    so I would say srv_dict_size_limit is handled the same way for memory allocation as the InnoDB buffer pool.

    February 12, 2009 at 1:26 pm
  • wlad

    Ok, now I see it is ulint *in some places*, but it is declared as extern ulong elsewhere:

    diff -r 9f28f5888574 sql/ha_innodb.h
    --- a/sql/ha_innodb.h Mon Feb 02 10:23:55 2009 -0800
    +++ b/sql/ha_innodb.h Mon Feb 02 10:27:17 2009 -0800
    @@ -234,6 +234,7 @@
    extern ulong srv_thread_concurrency;
    extern ulong srv_commit_concurrency;
    extern ulong srv_flush_log_at_trx_commit;
    +extern ulong srv_dict_size_limit;

    Still, the value is limited to ULONG_MAX, which is 4GB on 64-bit Windows.

    February 12, 2009 at 1:44 pm
  • Yasufumi


    I am conscious of the portability issue. And I understand that some (ANSI-incompatible?) Windows compilers treat long as 32-bit even on 64-bit platforms.

    But I think there are no 64-bit Windows users who use such a huge number of tables that the dictionary cache would exceed 4GB….

    So it may not be such a serious restriction, and I have chosen simplicity of the patch for now.
    In the future, if it becomes serious, we should fix it again.

    February 12, 2009 at 6:21 pm
  • wlad

    ANSI does not specify anywhere that size_t is equal to unsigned long. Casting a pointer or long to int is commonly considered bad. Casting a pointer or size_t to long is exactly as bad, given that a platform/compiler can choose an LP or LLP (or ILP or SILP) model for 64-bit handling.

    In fact, the whole 64-bit support on Windows (compiler, SDK, and the reference compiler on this platform, MSVC) has been LLP on 64 bit (long is int, is 4 bytes; long long is size_t, is 8 bytes) since they started playing with Itanium prototypes, and that is almost a decade ago.

    February 12, 2009 at 6:53 pm
  • Vadim

    2 mysqldba

    yes, we will port to XtraDB eventually…

    February 15, 2009 at 10:38 pm
