The tool I’ve been waiting for years

I’ve just been pointed to a nice tool I have been waiting years to see. It is fincore – a little Perl script which lets you see which pages of a file are cached in OS memory. This is really cool.

When it comes to MySQL it is especially useful with MyISAM tables, which have their data files cached by the OS cache only, so you get no good information from the MySQL side on what data is cached. You can also use it with Innodb to see how much memory you are wasting on double buffering by not using O_DIRECT.

Besides general clues such as “50% of my file is cached”, you should watch the dynamics – for example, check it during the backup process and compare it to normal load. This can give you a clue whether a slowdown happens because of extra IO pressure, or simply because pages were washed out of the cache. You can also check how pages are cached: for example, having every second page cached may help point queries but does not save much IO for table scans.

One thing I’m still missing is the view from the other side – say I have 10GB of OS cache used on the server; how can I tell what is using it? That view would help me dramatically in finding out what is causing cache pressure and what needs to be worked on. Scanning all files on the filesystem and checking which are cached obviously does not work.

fincore looks more like a proof-of-concept tool – it is a bit simplistic. However, being written in Perl, it is easily hackable: making it print the percentage of a file cached, or a “graph” showing how the cached pages are distributed within the file, is very easy.

The great thing about this tool is that it is very fast and does not disturb the OS file cache, because it uses the mincore function to find out which pages are currently in cache.

This function is actually the real meat here – the tool is simplistic, but it shows how to use the function so you can write real stuff. For example, using this approach MySQL could easily report the amount of cached data per table for MyISAM and Archive tables in INFORMATION_SCHEMA (or other system tables), which would be really cool. Of course, one would then need to implement cache content tracking for storage engines which cache everything in their own memory – Innodb, Falcon, Maria.

Having information about how large a portion of a table is cached would allow the optimizer to make much smarter decisions in many cases.

Let’s now see some examples:

So we can see that one of the Innodb log files is practically uncached while the other is about one third cached – this makes sense: perhaps the second log file is being written now, and there is a “tail” of pages which just have not been evicted from the cache yet. As Innodb does not read the log file except during recovery, these cached pages are wasted, and Innodb could use fadvise to instruct the kernel not to cache them, since it cannot perform direct IO to log files on Linux because the writes are not aligned.

Out of a 250GB Innodb data file, none of the pages are in cache – this is because this instance is using the O_DIRECT flag to bypass data buffering, and we can see clearly that it works.

Let’s now see stats for MyISAM tables:

The performance log for yesterday is almost completely out of cache. It is about 0:50 by the server clock, which is why we can still see some pages remaining. Today’s log file is 50% in cache. Knowing the access pattern to the file, you can draw some conclusions about how much IO pressure this server is under.

P.S. If you hack on this tool or know of any similar tools, please let me know.


Comments (10)

  • Dmitriy T.

    Perl is used in this tool just for wrapping the C code in which mincore is called.

    P.S. Sorry for broken english 🙂

    March 19, 2008 at 11:28 am
  • peter

    Yes, sure. But the fact that the wrapper is in Perl makes it easier to extend for people familiar with a scripting language.
    Hardcore C guys just need to know such a call exists 🙂

    March 19, 2008 at 2:46 pm
  • Michael Monashev

    Thank you very much for the interesting utility. I learned a lot of interesting things – for example, that .MYD files are cached more heavily than .MYI files. Though maybe it is only like that on my system.

    Sometimes fincore crashes for some reason, but overall it gives quite an interesting picture. Thanks again.

    March 20, 2008 at 4:49 am
  • peter

    MYD files are expected to be cached more by the OS than MYI files, because the latter are cached in the key_buffer, so the OS gets fewer requests for them and has less motivation to keep them in the cache.

    March 20, 2008 at 10:59 am
  • Dathan Vance Pattishall

    Wow, I always wanted something like this. I had no idea this even existed. Thank you for letting us know!

    March 20, 2008 at 9:27 pm
  • Alex Savchuk


    there is a utility which looks like what you want:

    _but_ (as usual) there is a problem — it only worked for two old kernel versions 🙂
    And it doesn’t work on the latest kernels…

    It needs updating to support new kernels.
    I tried, but I have little knowledge of kernel internals 🙁

    March 22, 2008 at 1:27 am
  • peter


    Does anyone know if there is an updated version of this tool anywhere?

    March 22, 2008 at 10:23 am
  • Michael Monashev

    Pyotr, thanks for the explanation.

    I always used to think that double caching (in MySQL and in the OS) was evil and led to wasted memory. In practice, though, it turns out that these two caches hold different data and almost no memory is wasted, since data that gets into the MySQL cache is requested from disk only once and is then very likely evicted from the OS cache, because there are no more read requests for it.

    March 22, 2008 at 1:19 pm
  • peter


    Double buffering has many downsides, and depending on the workload and OS it can matter more or less. You’re right that data which is cached by MySQL is typically evicted from the OS cache quickly, but still: consider a full table scan, for example – you read a lot of data very quickly, consuming a lot of cache, and it can easily evict data you actually need.

    Another issue is the battle for virtual memory – a lot of cached IO makes the OS try to free more memory for the cache, which often causes it to swap something out… like some of the huge caches you allocated to MySQL.

    Anyway – this tool is great because now you can actually see how much double buffering you have and what gets cached.

    March 22, 2008 at 9:32 pm

Comments are closed.
