“What cache hit rate is good for optimal MySQL performance?” is a typical question I’m asked. Whether it is the MyISAM key_buffer or the Innodb innodb_buffer_pool does not really matter: in both cases, trying to come up with a constant “good” hit rate is asking for trouble. Of course you can name a 100% hit rate with all data fitting in memory as the perfect case, but that is not the answer people are looking for.
The first thing to understand is that the cache hit ratio can be computed differently for different engines. For Innodb, for example, page hits are counted for every subsequent page access when a page is scanned, not just once. This means that with a completely IO bound full table scan and 100 rows per page, you get 1 miss and 99 hits per page, making your hit ratio 99% for what would be 0% in many people’s minds.
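The arithmetic behind this inflated ratio is worth spelling out. A minimal sketch, with made-up page counts, of how per-row access counting turns a fully IO bound scan into a 99% “hit rate”:

```python
# Illustration of how InnoDB-style page-access counting inflates the hit
# ratio for a fully IO bound full table scan. Numbers are hypothetical.
rows_per_page = 100
pages_scanned = 10_000

# Every page is read from disk once (a miss); the remaining 99 row
# accesses on that same page are then counted as buffer pool hits.
misses = pages_scanned                      # one physical read per page
hits = pages_scanned * (rows_per_page - 1)  # subsequent accesses to same page

hit_ratio = hits / (hits + misses)
print(f"{hit_ratio:.0%}")  # 99%, even though every page came from disk
```

So a scan that is 100% disk bound still reports a 99% hit ratio, which is exactly why the ratio alone tells you so little.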
But what is even more important is that hit ratio is not directly relevant for performance. Take the following example: you have a script doing small in-memory data processing on your server, traversing some 1,000,000 rows per second, and at the same time a performance critical transaction which frequently needs IO. In this case the global key hit ratio may be 99.99%, yet you would still have problems with key buffer efficiency. In the real world even a single operation may be IO bound while showing a very high hit rate, so even looking at data for a single thread is not enough.
So what should you look at? The best answer would be response time contribution: how much did buffer misses (physical reads) contribute to response time? This takes into account things you would not see otherwise, such as how busy the drives were with concurrent requests, how random the reads/writes were, and how good the OS or SAN cache hit rate was. Unfortunately, MySQL currently does not provide this information (it exists in plans).
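Even though MySQL does not expose these numbers, the concept is simple to sketch. Assuming hypothetical per-query figures (none of these counters exist in MySQL today), the contribution would be computed roughly like this:

```python
# Hypothetical numbers showing what "response time contribution of
# misses" means. MySQL does not currently expose these counters.
total_response_time_ms = 500.0   # total query response time
physical_reads = 40              # buffer pool misses during the query
avg_read_latency_ms = 8.0        # average latency of those physical reads

miss_time_ms = physical_reads * avg_read_latency_ms
miss_contribution = miss_time_ms / total_response_time_ms
print(f"{miss_contribution:.0%} of response time spent on physical reads")
```

Note how the average read latency folds in exactly the factors mentioned above: drive concurrency, access randomness, and OS or SAN caching all shift that one number.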
The next best thing is to look at the number of misses, that is the number of IOs MySQL needs to do. You should look at both global and local (per thread) numbers and compare them to what iostat reports and to what your system can possibly do. For example, if a single query issues 100+ random IO requests which are not cached by the OS or SAN, it is likely to be an IO bound problem. Yes, this still involves a lot of guessing, but that is what we have to do until proper instrumentation is available.
In particular you should be looking at Key_reads and Key_writes for MyISAM tables (note these do not include row data access, so you’re a bit in trouble there) or at “OS file reads” and “OS file writes” from SHOW INNODB STATUS for Innodb tables. In MySQL 5.0+ you can also use the Innodb_data_reads and Innodb_data_writes counters from SHOW STATUS.
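What you care about is the delta in these counters over time, which is exactly what mysqladmin extended -i10 -r prints. A minimal sketch of the same computation, with made-up sample values standing in for two SHOW STATUS snapshots taken 10 seconds apart:

```python
# Compute Innodb_data_reads/writes per second from two SHOW STATUS
# samples, the same delta "mysqladmin extended -i10 -r" would show.
# The counter values below are invented for illustration.
sample_interval_sec = 10

status_before = {"Innodb_data_reads": 1_450_200, "Innodb_data_writes": 890_100}
status_after  = {"Innodb_data_reads": 1_451_400, "Innodb_data_writes": 890_600}

reads_per_sec = (status_after["Innodb_data_reads"]
                 - status_before["Innodb_data_reads"]) / sample_interval_sec
writes_per_sec = (status_after["Innodb_data_writes"]
                  - status_before["Innodb_data_writes"]) / sample_interval_sec
print(reads_per_sec, writes_per_sec)  # 120.0 50.0
```

These per-second rates, not the cumulative counters, are what you compare against iostat and your hardware's capacity.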
So again, if you have a single 7200 RPM SATA hard drive in your system, which can do about 100 random IOs/sec, and you see Innodb_data_reads incremented by 10 per second (mysqladmin extended -i10 -r is your friend in computing this) while there is no other IO, then no matter what your hit ratio is, it is unlikely to be the problem for MySQL performance. If it is 120 per second, however, it well could be. It is still worth checking iostat to see whether all of them make it to the disk; in some configurations the OS cache may absorb all of them.
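The sanity check above reduces to comparing observed demand against drive capacity. A sketch, assuming the 100 random IOs/sec figure for a single 7200 RPM SATA drive:

```python
# Rough sanity check: compare the observed physical read rate against
# what the disk subsystem can deliver. 100 random IOs/sec is a typical
# figure for a single 7200 RPM SATA drive.
drive_random_iops = 100

def likely_io_bound(observed_reads_per_sec: float) -> bool:
    # If MySQL asks for more random reads than the drive can serve,
    # requests queue up and reads start dominating response time.
    return observed_reads_per_sec > drive_random_iops

print(likely_io_bound(10))   # False: plenty of headroom
print(likely_io_bound(120))  # True: demand exceeds drive capacity
```

With RAID, SSDs, or a SAN the capacity number changes completely, so measure what your own system can actually do rather than trusting a rule of thumb.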
On some operating systems, e.g. Solaris, you can use dtrace for external instrumentation. You should be able to see how many reads and writes are done and where they come from (log writes, data/index reads, etc.), but you would probably have trouble mapping them to particular objects and queries.