
The MySQL optimizer, the OS cache, and sequential versus random I/O


In my post on estimating query completion time, I wrote about how I measured the performance of a join between a few tables in a typical star-schema data warehousing scenario.

In short, a query that could take several days to run with one join order takes an hour with another, and the optimizer chose the poorer of the two join orders. Why is one join order so much slower than the other, and why did the optimizer not choose the faster one? That’s what this post is about.

Let’s start with the MySQL query optimizer. The optimizer tries to choose the best join order based on its cost metric: it estimates the cost of each candidate plan, then picks the plan with the lowest cost. The unit of cost for the MySQL query optimizer is a single random 4k data page read. In general, it’s a pretty good metric, but it has one major weakness: the server doesn’t know whether a read will be satisfied from the operating system cache, or whether it’ll have to go to disk. (This distinction is abstracted away by the storage engine; the optimizer doesn’t know how a given storage engine stores its data.)

I’ll try to omit the details and keep this clean. Let’s take a look at the tables.

It’s a big fact table and two fairly small dimension tables, which is normal. Here is the query:

There are indexes on all the columns in all the ways you’d expect: all the dimension columns are indexed on every table, and there’s a separate index on every column in the WHERE clause. Here’s the query plan initially.

This query will run for days. No one has ever let it finish to see exactly how long it would take.

How do I know it will run for days? Here’s my train of thought:

  • It’s performing index lookups into the fact table, which is big.
  • An index lookup is a random I/O.
  • A modern disk can do about 100 random I/O’s per second, as a rule of thumb.
  • If you do the math with the rows column in EXPLAIN, you realize that this equates to about 18790 * 606 = 11386740 I/O operations, assuming that the indexes are fully in memory.
  • When you divide this by 100 I/O operations per second, and then divide that by 86400 seconds in a day, you get about 2.6 days.

Ouch! That’s slow.
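A quick back-of-the-envelope script makes the arithmetic concrete. Strictly dividing the rows product by the 100 I/Os-per-second rule of thumb gives about 1.3 days; the 2.6-day figure corresponds to roughly 50 effective random reads per second (plausible if each lookup touches more than one page). Either way, it's days:

```python
# Estimate how long the index-lookup plan takes, given the rows
# column from EXPLAIN and a rule-of-thumb random-I/O rate for one disk.

random_ios = 18_790 * 606             # index lookups into the fact table
print(f"{random_ios:,} random I/Os")  # 11,386,740

for iops in (100, 50):                # optimistic vs. pessimistic disk
    days = random_ios / iops / 86_400
    print(f"at {iops:>3} I/Os per second: about {days:.1f} days")
```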

Now let’s look at the alternative: table-scan the fact table, and do index lookups in the two dimension tables. MySQL doesn’t want to choose this join order, so we’ll force it with STRAIGHT_JOIN:

As we saw in the previous post, which I linked at the top of this post, we can scan the fact table in less than an hour. And it turns out that joining to the dimension tables doesn’t slow the query perceptibly, because these tables are small and they stay in memory, in the OS cache. (They don’t get evicted from memory by the cache’s LRU policy, because they are frequently used — once per row in the fact table. The LRU policy evicts old blocks from the fact table instead, which is perfect — these blocks are used only once and not needed again, so they can be replaced).
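That eviction behavior is easy to see with a toy LRU simulation. The block counts below are hypothetical and the real OS page cache is far more sophisticated, but the principle is the same: blocks that are touched constantly stay resident, while blocks that are streamed through once get evicted.

```python
from collections import OrderedDict

class LRUCache:
    """Toy fixed-size LRU cache of block IDs."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()

    def access(self, block):
        hit = block in self.blocks
        if hit:
            self.blocks.move_to_end(block)       # refresh recency
        else:
            if len(self.blocks) >= self.capacity:
                self.blocks.popitem(last=False)  # evict least recently used
            self.blocks[block] = True
        return hit

# Hypothetical sizes: a 100-block cache, two small dimension tables
# (20 blocks total), and a fact table streamed through once.
cache = LRUCache(100)
dim_blocks = [f"dim:{i}" for i in range(20)]

for fact_block in range(10_000):        # one pass over the fact table
    cache.access(f"fact:{fact_block}")  # streamed, never revisited
    for d in dim_blocks:                # joined for every fact row
        cache.access(d)

# The hot dimension blocks are all still resident; old fact blocks are gone.
assert all(d in cache.blocks for d in dim_blocks)
assert "fact:0" not in cache.blocks
```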

The difference between the two queries — 55 minutes and 2.6 days — is basically the difference between scanning data sequentially on disk and random disk I/O.

So now you know why one join order is faster than the other. But why didn’t the optimizer know this, too? The optimizer does know that random access is slower than sequential access, but it doesn’t know that the dimension tables will stay in memory, and this is an important distinction.

Let’s put ourselves into the mindset of the optimizer. We’ll assume that every join to the dimension tables will go to disk instead of being read from cache. Now the STRAIGHT_JOIN becomes a table scan of about 313 sequential reads (150 million rows / 117 bytes per row / 4096 bytes per read), plus about 150 million random I/Os for the first dimension table, plus 150 million random I/Os for the second dimension table. That’s 300 million random I/O operations.
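In other words, with the cache taken out of the picture, the optimizer's cost comparison (using the row counts from the post, and counting only random I/Os, since the sequential scan is comparatively free) looks roughly like this:

```python
fact_rows = 150_000_000

# Plan A: table-scan the fact table (sequential, cheap), then do one
# index lookup per fact row into each of the two dimension tables.
plan_a_random_ios = 2 * fact_rows        # 300,000,000 if nothing is cached

# Plan B: drive from the dimension tables and do index lookups into
# the fact table (the rows product from EXPLAIN).
plan_b_random_ios = 18_790 * 606         # ~11.4 million

ratio = plan_a_random_ios / plan_b_random_ios
print(f"Plan A looks {ratio:.0f}x more expensive than Plan B")  # ~26x
```

Seen that way, the optimizer's refusal to pick Plan A on its own is perfectly rational.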

In contrast, the optimizer chose a plan that it thought would cause only 11.3 million random I/O operations.

The optimizer was being smart, given its lack of knowledge about the OS cache. This is why an expert is sometimes needed to provide the missing information. If the MySQL optimizer were right and each of these had to go to disk, our STRAIGHT_JOIN plan would take more than a month to complete! Good thing we know the difference between cache and disk!


Baron Schwartz

Baron is the lead author of High Performance MySQL. He is a former Percona employee.




  • Good post! Maybe you should add: if the indexes cannot be loaded into memory, your query may be slower still even when it uses an index :)!
    In this case, if you had a server with 128G of memory, I think the result would be different. Maybe the first query would finish in less than a minute.


  • Great post. Very concise.


  • Baron,

    this (and the one about estimating query execution time) is absolutely great stuff! Well written, well explained, and very applicable to the real world.

    Kudos for a great article and for sharing your insight 😉

    kind regards,


  • Thanks Roland :)


  • Yeah, this is great stuff. I recall that MS SQL Server would actually figure out that a table and its indexes could fit into RAM, and load them into their equivalent of a memory-engine temp table, allowing the optimizer to figure this stuff out. Seems like that would be a worthwhile feature for the query optimizer, and one that would perform better than relying on the underlying OS cache.


  • Nice post. But is this still true when we have a lot of different queries running in parallel? Won’t they also use the disk and “interrupt” the sequential reads with their own reads, so that in fact the disk will be making random reads? If this is true, there is no reason to try to optimize one particular query to use a table scan (unless it is the only query running).


  • Marki, to some extent that is true, but the effect isn’t as severe as you think. Even in the worst case, if the scan is interrupted very often, it will still read many rows each time the disk services this task, and a full scan broken into thousands of pieces is still much faster than millions of random reads. Also, both the OS and the disk controller have a lot of intelligence to recognize and handle these scenarios (batching, reordering, read-ahead, etc), so they end up being more efficient than just random reads. Full scans for this type of query under a concurrent workload are still a big win.


  • And why doesn’t MySQL use scattered reads (multiblock reads), as the leading RDBMSes do?


  • Cool feature! And how does the MySQL optimizer choose between random reads (reading some blocks from cache) and MRR (reading all blocks from disk)?


  • Triffids, I don’t know much about the multi-range read feature. (I say that about lots of things until I have studied it a lot). It’s unfinished, and I would bet the final implementation will be somewhat different than it is now.


  • Yasufumi Kinoshita

    I have tried to fix the 5.0 optimizer (though I have focused on InnoDB).
    My patch may solve this type of problem.




  • Baron- wonderful article. One of those articles that establishes faith that the beast can be understood.

    Question: I am not clear when at the very end you say “313 sequential reads (150 million rows / 117 bytes per row / 4096 bytes per read)”. Shouldn’t the math be “(150 M rows * 117 bytes per row) / block_size(=4096) = 4.2 M sequential reads”?

    If so, is there a rule of thumb for how fast sequential reads can be done by commodity hardware? (I am assuming that the number of 100 I/Os per second refers to random I/O.)



  • Oh, wow, did I really screw the math up that badly? I believe you are right. Funny that no one else caught it yet.

    You can assume that each random I/O has to wait for the head to move, for the disk to rotate the starting point of the read under the head (1/2 of a rotation on average) and then the disk to rotate all the data under the head. So the spindle rotation speed really dominates, and you can just estimate the random I/O off that. You can get an absolute upper bound easily, and in practice you won’t get that performance.

    For sequential I/O, I have to think about that for a moment, and suddenly I’m not confident in my math skills.
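To make the rotational-latency estimate in the comment above concrete: assuming a typical 7200 RPM disk and an average seek of about 8 ms (both assumed figures, not measurements from the post), the arithmetic gives a ballpark like this:

```python
# Rough upper bound on random IOPS for a spinning disk.
# Assumed (typical, not measured) figures: 7200 RPM, ~8 ms average seek.
rpm = 7200
seek_ms = 8.0

rotation_ms = 60_000 / rpm                # one full rotation: ~8.33 ms
rotational_latency_ms = rotation_ms / 2   # wait half a rotation on average

service_time_ms = seek_ms + rotational_latency_ms  # ignore transfer time
iops = 1000 / service_time_ms

print(f"~{iops:.0f} random I/Os per second")  # ~82, near the 100 rule of thumb
```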

