Beware large Query_Cache sizes

During the last couple of months I have had a number of consulting projects where the MySQL Query Cache was the source of serious problems. One would see queries against both MyISAM and InnoDB tables mysteriously stalling for a few seconds with “NULL” in the State column.

These stalls could be traced back to waiting on the Query Cache, which at the same time was having a massive number of entries invalidated by some batch data-load job.

When should you worry? If you set query_cache_size relatively high, at 256MB or more. The problem gets worse if your query cache size is in the gigabytes. At the same time, check how many queries you have in the cache (Qcache_queries_in_cache): if it is in the hundreds of thousands, it may take a while to invalidate them. But first of all you need something that causes a massive number of invalidations, such as tens or hundreds of thousands of queries being invalidated by a single INSERT. This typically happens if you have rare bulk loads rather than constant insertion.
You can also check how long FLUSH QUERY CACHE takes: if it runs for seconds, that is about how long an invalidation can take in the extreme case.
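These status variables and the FLUSH statement can all be checked from the MySQL prompt. A quick diagnostic session might look like this (a sketch using the standard 5.x query cache status variables):

```sql
-- How large is the cache and how many entries does it currently hold?
SHOW VARIABLES LIKE 'query_cache_size';
SHOW GLOBAL STATUS LIKE 'Qcache_queries_in_cache';

-- Signs of memory pressure: entries evicted to make room for new ones
SHOW GLOBAL STATUS LIKE 'Qcache_lowmem_prunes';

-- Time this statement; if it runs for seconds, an invalidation storm
-- can stall queries for about as long in the extreme case.
FLUSH QUERY CACHE;
```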

How do you solve the problem? The easiest solution is to decrease the query cache size; smaller values do not have this problem, but they will also often be less efficient. In many cases you may want to disable the query cache altogether and use an external cache such as memcached, which does not have this problem. You can also cause invalidations to happen more regularly, but this will also hurt the query cache hit rate.
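If you choose to disable the cache entirely, a minimal configuration sketch looks like this (these settings go in the [mysqld] section of my.cnf; the same values can also be applied at runtime with SET GLOBAL on 5.x servers):

```ini
[mysqld]
# Disable the query cache entirely; rely on an external cache
# such as memcached instead.
query_cache_type = 0
query_cache_size = 0
```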

Comments (5)

  • Kevin Burton

    Yeah.. I think I had this problem for a while as well.

    March 23, 2007 at 3:01 pm
  • peter

    Could be. However, in this case I do not see why a single update would invalidate a lot of queries.
    As far as I know they have millions of tables, so updating each of them should invalidate only a few.

    The problems with an extreme number of tables are another story.

    March 25, 2007 at 3:47 am
    • Rick James

      _Each_ INSERT/UPDATE/DELETE causes _all_ entries in the QC for that table(s) to be invalidated (purged). If you have done a lot of different SELECTs, and you have a gigabyte QC, then a lot of work is needed. (Apparently AWS Aurora has rewritten the QC code, so large query_cache_size works efficiently.)

      My advice is to either turn off the QC (for production systems with lots of write traffic) or keep the size below 50M.

      December 28, 2017 at 10:49 am
  • James Day

    The work on improving the time it takes to insert a freed query cache record into the sorted list of free blocks is worth watching. FLUSH QUERY CACHE defragments the query cache, so regular flushing is one possible way of reducing the impact.

    On a related topic, the RESET QUERY CACHE operation was improved in 5.0.25 and 5.1.12 and later, making it less likely to block other queries. A reset might be less painful than a flush before a rare table update, even though it’ll remove all of the cached queries.

    The query cache design will undoubtedly be improved to better suit large cache sizes at some time. For the moment, it’s intended for fairly small cache sizes, in tens rather than hundreds of megabytes.

    March 25, 2007 at 4:59 pm
  • Hayden James

    Great advice! Although this article is 7 years old, it’s still COMPLETELY accurate if anyone wonders. On the other hand, setting query_cache_size too small can kill cache performance due to prunes. It’s a fine line to tread to achieve the right balance, but it can be worth it.

    July 11, 2014 at 2:07 pm

Comments are closed.