
Cache Miss Rate as a function of Cache Size

September 12, 2009 | Posted In: Tokutek, TokuView


I saw Mark Callaghan’s post, and his graph showing miss rate as a function of cache size for InnoDB running in MySQL. He plots the measured miss rate against cache size and compares it to two simple models (writing C for the cache size and D for the database size):

  • A linear model where the miss rate is (1 − C/D)/50, and
  • An inverse-proportional model where the miss rate is D/(1000C).

He seemed happy (and maybe surprised) that the linear model is a bad match and that the inverse-proportional model is a good match. The linear model is the one that would make sense if every page were equally likely to be accessed.
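For concreteness, both models are trivial to evaluate side by side. Here is a minimal sketch in Python; the database size D is chosen arbitrarily for illustration:

```python
# The two simple miss-rate models, with C the cache size and D the
# database size (the constants 50 and 1000 come from the formulas above).

def linear_miss_rate(C, D):
    """Miss rate if every page were equally likely to be accessed."""
    return (1 - C / D) / 50

def inverse_miss_rate(C, D):
    """Miss rate that falls off in proportion to 1/C."""
    return D / (1000 * C)

D = 100_000  # assumed database size, for illustration only
for C in (1_000, 2_000, 4_000, 8_000, 16_000):
    print(f"C={C:6d}  linear={linear_miss_rate(C, D):.5f}"
          f"  inverse={inverse_miss_rate(C, D):.5f}")
```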

I’ll argue here that it’s not so surprising. Suppose that the access pattern has a heavy-tailed distribution, such as Zipf’s law. An example of a Zipf’s-law distribution would be one in which the most frequently accessed cache block accounts for X of the accesses, the second most frequent accounts for X/2 of the accesses, the third most frequent accounts for X/3 of the accesses, and so forth, with the ith most frequently accessed block accounting for X/i of the accesses.

What miss rate should we expect? Essentially, in this distribution the Nth most popular block receives twice as many accesses as the 2Nth, so the accesses saved by each additional cache block fall off like 1/C. Thus, we would expect the miss rate to be roughly proportional to 1/C, where C is the cache size. That matches Mark’s experiment.
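To see this concretely, here is a toy simulation (a sketch, not Mark’s benchmark): draw a stream of block accesses from a Zipf distribution and assume an idealized cache that always holds the C most popular blocks. The exponent 1.55 is borrowed from the comment below; numpy’s zipf sampler requires an exponent greater than 1.

```python
import numpy as np

# Toy model: block i is accessed with probability proportional to 1/i**a,
# and the cache is idealized to hold exactly the C most popular blocks.
rng = np.random.default_rng(0)
accesses = rng.zipf(a=1.55, size=1_000_000)  # ranks of the accessed blocks

for C in (1_000, 2_000, 4_000, 8_000, 16_000):
    miss_rate = np.mean(accesses > C)  # accesses falling outside the cache
    print(f"C={C:6d}  miss rate={miss_rate:.4f}")
```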

This simple heavy-tailed distribution shows up all over the place, and is often a good model for this kind of system. For example, I would expect that if one were to collect the top page returned for every Google query, the frequency of page hits would follow this distribution. A more frequently cited example is the frequency distribution of words in a natural language. Such a distribution probably governs the query cache too.

One shortcoming of iiBench is that it assumes all inserted index values are equally likely, leading to a linear miss-rate model. Since TokuDB’s advantage on insertions is related to the cache miss rate, reducing the miss rate will tend to make InnoDB look better, reducing TokuDB’s advantage. Thus InnoDB’s miss rate probably isn’t as bad on real data as iiBench suggests. That probably explains why iiBench shows a 200x advantage for TokuDB, while when we talk to customers the advantage is often more like 10x or 20x. It seems clear to me that iiBench would serve us better if it had the option of generating data according to Zipf’s law; a sketch of such an option follows. (Since I helped design iiBench, I have no qualms about criticizing it.)
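The change would be small. This is a hypothetical sketch, not iiBench’s actual key generation: the function names and the default exponent are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng()

def uniform_key(num_rows):
    """What iiBench effectively does today: every value equally likely."""
    return int(rng.integers(0, num_rows))

def zipf_key(num_rows, s=1.55):
    """Hypothetical Zipf option: value i drawn with probability ~ 1/i**s."""
    while True:
        k = int(rng.zipf(s))
        if k <= num_rows:      # reject the rare samples beyond the key range
            return k - 1
```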

Here’s a little game that helps show why heavy-tailed distributions are fun to think about. This game comes from the St. Petersburg Paradox. The game is a lottery in which I’m going to pay you some money, and the amount of money is a function of a random variable. The payoff schedule is that I’ll pay you one dollar half the time, I’ll pay you two dollars a quarter of the time, I’ll pay you $4 with probability 1/8, and in general I’ll pay 2^i dollars with probability 1/2^(i+1). How much would you pay to buy one of these lottery tickets? (For an analysis, see Wikipedia.)
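A quick simulation makes the paradox vivid (a minimal sketch): because the expected payoff diverges, the average winnings keep creeping upward the longer you play, instead of settling near any fair ticket price.

```python
import random

def play():
    """One ticket: pays 2**i dollars with probability 1/2**(i+1)."""
    payoff = 1
    while random.random() < 0.5:  # each further coin flip doubles the payoff
        payoff *= 2
    return payoff

for n in (1_000, 10_000, 100_000, 1_000_000):
    mean = sum(play() for _ in range(n)) / n
    print(f"{n:8d} tickets: average payoff ${mean:.2f}")
```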


One Comment

  • I updated the graph to include the Zipfian distribution for the CDF — H(C/1000, s=1.55) / H(D/1000, s=1.55). C and D are divided by 1000 to make the sum computation faster. That function matches the measured miss rate.
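For readers who want to replicate that fit: assuming H(n, s) is the generalized harmonic number, the sum of 1/i^s for i = 1..n (which is how the comment appears to use it), the curve is a two-liner, and the miss rate is then 1 minus this CDF.

```python
def H(n, s):
    """Generalized harmonic number: sum of 1/i**s for i = 1..n."""
    return sum(i ** -s for i in range(1, int(n) + 1))

def fitted_cdf(C, D, s=1.55):
    """The commenter's curve: H(C/1000, s) / H(D/1000, s)."""
    return H(C / 1000, s) / H(D / 1000, s)

# Implied miss rate for illustrative C and D (units as in the original graph):
print(1 - fitted_cdf(2_000, 100_000))
```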


