I recently worked with an application which has a great memcached hit ratio – over 99% – but still has an average page response time of over 500ms. The reason? There are hundreds of memcached gets per page, and even though each takes some 0.4ms, together they add hundreds of milliseconds to the total response time.
A great memcached hit ratio is nice, but even more than that you should target eliminating requests altogether. Hit rate is a poor indicator to begin with. Imagine your application does 90 memcached gets (hits) to retrieve some data, plus 10 more requests which result in misses and cause MySQL queries. The hit ratio is 90%. Now imagine you find a way to store the data which required 90 requests as a single object. You now have 1 request (hit) and 10 misses, which drops your hit rate to under 10% (1 hit out of 11 requests), but performance will likely be a lot better.
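To make the trade-off concrete, here is a minimal sketch of the two approaches. A plain dict stands in for a memcached client (with a real client each get/set is a network round trip); the key names and helper functions are hypothetical:

```python
# Sketch: consolidating many small cache items into one larger object.
# A dict stands in for memcached; with a real client every get() and
# set() below would cost one network round trip.

cache = {}

def get_sidebar_fragmented(fetch_entry):
    """90 separate gets: one round trip per entry, 90 "hits" once warm."""
    entries = []
    for i in range(90):
        key = "entry:%d" % i
        value = cache.get(key)
        if value is None:              # miss -> hit MySQL for this entry
            value = fetch_entry(i)
            cache[key] = value
        entries.append(value)
    return entries

def get_sidebar_consolidated(fetch_all):
    """One get for the whole block: a single round trip, one "hit"."""
    value = cache.get("entries:all")
    if value is None:                  # a single miss rebuilds the block
        value = fetch_all()
        cache["entries:all"] = value
    return value
```

The consolidated version reports far fewer "hits", yet once warm it does one round trip where the fragmented version does ninety – exactly why hit rate alone is misleading.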
The answer in many cases is to use larger objects for the cache.
If you look at a web page you will typically see some “blocks” which are static and possibly highly cacheable, while others are very dynamic or poorly cacheable. For example, the area which displays the user name and current time may not be cacheable, while a widget displaying the last 10 blog entries is cacheable for at least several minutes.
There are many development implications, component dependencies etc. in how you can really implement caching, but if you’re looking from a bird’s-eye view you want as few blocks on the page as possible (to reduce the number of get requests and the CPU effort required to build the page), while at the same time splitting the page into blocks which allow good caching:
Make non-cacheable blocks as small as possible. If you have something like the current time on the page, which is not cacheable, do not mix it with other content which would then also have to be re-created on every cache miss.
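A minimal sketch of this separation, again with a dict standing in for memcached and hypothetical key names – the expensive block is cached on its own while the clock never touches the cache:

```python
import time

cache = {}
rebuilds = []   # tracks how often the expensive block is rebuilt

def expensive_render_body():
    rebuilds.append(1)            # imagine many MySQL queries here
    return "<div>front page body</div>"

def render_page():
    # The expensive, cacheable block is stored by itself...
    body = cache.get("home:body")
    if body is None:
        body = expensive_render_body()
        cache["home:body"] = body
    # ...while the tiny non-cacheable part (the clock) is generated
    # fresh on every request, so it never forces a rebuild of the
    # cached block.
    clock = "<span>" + time.strftime("%H:%M:%S") + "</span>"
    return clock + body
```

Had the clock been rendered inside the cached block, the whole body would have to be rebuilt on every miss caused by the changing time.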
Maximize the number of uses of each cache item. Caching is most efficient when an item can be used by multiple pages and multiple users – this maximizes cache efficiency while reducing the amount of cache space required. How important this advice is depends on the use case for your application: applications with only a couple of page views per user per day really need items shared by multiple users to get a good hit rate, while applications with hundreds of page views per user per session may do well enough with caches which are effectively per user.
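The effect of key design on hit rate can be simulated in a few lines. This sketch (key formats are hypothetical) replays the same traffic against an empty cache, once with a shared key and once with a per-user key:

```python
def simulate_hit_rate(views, key_for):
    """Replay (user_id, page) views against an empty cache and
    return the fraction of lookups that were hits."""
    cache, hits = set(), 0
    for user_id, page in views:
        key = key_for(user_id, page)
        if key in cache:
            hits += 1
        else:
            cache.add(key)
    return hits / len(views)

# 100 users, each viewing the same front page twice
views = [(u, "front") for u in range(100) for _ in range(2)]

# One copy shared by everyone: only the very first view misses.
shared = simulate_hit_rate(views, lambda u, p: p)

# One copy per user: every user's first view is a miss.
per_user = simulate_hit_rate(views, lambda u, p: "%s:u%d" % (p, u))
</n```

With two views per user, the shared key yields a 99.5% hit rate while the per-user key yields only 50% – which is why low-views-per-user sites need shared items.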
Control invalidation. Items in the cache expire or get invalidated. It is best if such invalidation only kicks out the data which actually needs to be invalidated, but this may result in blocks which are too small and hence too many cache requests. You can look for balance by caching together data which invalidates at about the same time.
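One way to picture this balance, as a sketch with a dict-backed cache that mimics memcached’s TTL-based expiry (key names and TTL values are illustrative):

```python
import time

cache = {}  # key -> (value, expires_at); stands in for memcached

def set_with_ttl(key, value, ttl):
    cache[key] = (value, time.time() + ttl)

def get(key):
    item = cache.get(key)
    if item is None:
        return None
    value, expires_at = item
    if time.time() >= expires_at:   # expired -> treat as a miss
        del cache[key]
        return None
    return value

# Data which invalidates on roughly the same schedule is grouped
# into one item instead of being cached piece by piece...
set_with_ttl("sidebar:5min",
             {"top_posts": ["a", "b"], "tag_cloud": ["x"]}, 300)
# ...while fast-changing data lives apart, so its short expiry does
# not throw away the rest of the block with it.
set_with_ttl("online_count", 1234, 10)
```

Grouping the five-minute data into one item keeps the request count low, while splitting out the ten-second counter keeps its churn from invalidating everything else.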
Multi-Get. Multi-get is truly the nuclear option for caching with memcached, both because it can be so efficient and because it can be so complicated to add to existing code which was not designed with it in mind. Multi-get effectively allows you to manage items (expire, invalidate) separately while fetching them in a single round trip. You can have hundreds of different items fetched from the cache for a page without paying for network latency on each.
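A sketch of the multi-get pattern, using a dict-backed fake client so the example is self-contained – real client libraries expose the same shape (for example `get_multi` in python-memcached or `get_many` in pymemcache), fetching all keys in one network round trip:

```python
# Fake memcached client that counts round trips, to show the
# difference between N single gets and one multi-get.

class FakeCache:
    def __init__(self):
        self.data = {}
        self.round_trips = 0

    def get(self, key):
        self.round_trips += 1              # one round trip per key
        return self.data.get(key)

    def get_multi(self, keys):
        self.round_trips += 1              # one round trip, any number of keys
        return {k: self.data[k] for k in keys if k in self.data}

def fetch_blocks(cache, keys, build):
    """Fetch all page blocks at once; rebuild only the misses."""
    found = cache.get_multi(keys)          # items still expire independently
    for k in keys:
        if k not in found:                 # miss -> rebuild just this block
            found[k] = build(k)
            cache.data[k] = found[k]
    return found
```

The point is that each block keeps its own key, TTL and invalidation, yet the page pays network latency once instead of once per block.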
Does multi-get remove the need to think about using larger items for the cache? I do not think so. You still pay an extra processing cost on both the memcached server and the application side. However, the optimal cache topology will probably be different with multi-get than without it.
A good theoretical example is a complicated front page which requires a lot of cache requests and hundreds of milliseconds of CPU processing, and which can be cached almost in its entirety with the exception of the user name in the corner. Storing the complete page HTML in the cache and doing a template replace for the user name can be very efficient.
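That technique can be sketched in a few lines – the cache key, placeholder syntax and page content here are all made up for illustration, with a dict again standing in for memcached:

```python
cache = {}

def render_front_page(user_name):
    html = cache.get("front_page:html")
    if html is None:
        # Imagine hundreds of milliseconds of work here; the result
        # is cached with a placeholder where the user name belongs.
        html = "<html><body>Hello, {{user}}! heavy content</body></html>"
        cache["front_page:html"] = html
    # Per-request work shrinks to one get plus one string substitution.
    return html.replace("{{user}}", user_name)
```

Every user gets a personalized page, but the expensive rendering happens only on a cache miss.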
This may sound complicated, and indeed there is a lot of art and science in designing an optimal cache solution. The good thing is that most web sites do not have to get it absolutely perfect to achieve good enough performance; it is most important to avoid the biggest design mistakes.
Percona’s widely read Percona Data Performance blog highlights our expertise in enterprise-class software, support, consulting and managed services solutions for both MySQL® and MongoDB® across traditional and cloud-based platforms. The decades of experience represented by our consultants is found daily in numerous and relevant blog posts.