
Missing Data – rows used to generate result set

July 20, 2008 | Posted In: Insight for Developers


As Baron writes, it is not the number of rows returned by a query but the number of rows accessed by it that will most likely define query performance. Of course, not all row accesses are created equal (full table scan accesses may be much faster than random index lookup accesses in the same table), but it is a very valuable data point for query optimization anyway.

When optimizing the number of rows accessed, the question is: what number would indicate a query is typically well optimized? In a perfect world we would like to see rows returned = rows analyzed, though this is only achievable for a small fraction of queries.

If you’re joining multiple tables, or if you have a GROUP BY query, the number of rows that need to be used to create the result set will be larger than the number of rows returned.

What I would like to see (for example, as another slow query log field) is the number of rows MySQL used to generate the result set. Comparing this number with the number of rows the query actually accessed, we can detect (and, importantly, automatically!) whether there is potential for optimizing the query.

For example:
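A sketch of such a query, using a hypothetical CITY table with a COUNTRY column (the original names are my assumption):

```sql
-- Aggregates over the whole table: every row is read,
-- but only one row per country comes back.
SELECT COUNTRY, COUNT(*) AS cities
FROM CITY
GROUP BY COUNTRY;
```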


This query will return only a couple of rows, but clearly all rows from the table were used to generate the result set, and it is not possible to optimize the query directly so that it accesses only a couple of rows (though this gives us another idea: possibly keeping a cache table with a couple of rows in it).

Now suppose we have the same table with no indexes and this query:
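Again with the hypothetical CITY table, and no index on COUNTRY, something like:

```sql
-- Full table scan: every row is accessed,
-- but only the COUNTRY = 'USA' rows end up in the result set.
SELECT *
FROM CITY
WHERE COUNTRY = 'USA';
```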


Even though a full table scan is performed, only the rows with COUNTRY=’USA’ are used in the result set, which clearly makes the query an optimization candidate.

It is not always possible to optimize queries so that the number of rows accessed equals the number of rows used to generate the result set – for example, any filter that can’t use indexes will make these numbers differ. Such a filter will be suboptimal, though, and you may want to think about how to fix the situation.

For example, if you have a clause like TITLE LIKE “%MYSQL%”, you might use full-text search indexes instead. If you have WHERE ID%100=0, you can add an extra column, divisible_by_hundred, and keep it indexed. Of course, in all cases there is extra cost involved, and you should weigh whether it makes sense to optimize such queries. I’m just describing the possibility.
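The ID%100=0 workaround could be sketched like this (the USERS table is hypothetical; the flag must also be kept up to date on insert/update):

```sql
-- Maintain a precomputed column that makes the filter indexable.
ALTER TABLE USERS ADD COLUMN divisible_by_hundred TINYINT NOT NULL DEFAULT 0;
UPDATE USERS SET divisible_by_hundred = (ID % 100 = 0);
ALTER TABLE USERS ADD INDEX (divisible_by_hundred);

-- Now this can use the index instead of scanning every row:
SELECT * FROM USERS WHERE divisible_by_hundred = 1;
```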

Sounds nice as described, right? Unfortunately, it is not that easy to implement in the general sense, because you can’t always track the future of an individual row. Queries with temporary result sets are especially complicated, for example:
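A sketch of such a query (hypothetical names again; the derived table is materialized in full):

```sql
-- The subquery in the FROM clause is materialized completely,
-- so all groups are "used" to build the temporary table,
-- even though the outer WHERE discards most of them.
SELECT *
FROM (SELECT COUNTRY, COUNT(*) AS cities
      FROM CITY
      GROUP BY COUNTRY) AS grouped
WHERE cities > 1000;
```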


As of MySQL 5.0, MySQL will fully materialize the subquery in the FROM clause and so “use” all rows in its result set, while in reality only a fraction of them will be needed for the final result set, as most of the groups are filtered out. There are many similar cases where the decision of whether a row is used for the result set is made long after it stopped existing as the individual row that was just accessed.

At the same time, I think starting with something that covers basic “single level” queries, taking into account JOINs, GROUP BY, and LIMIT, would already be helpful in many cases.

Peter Zaitsev

Peter managed the High Performance Group within MySQL until 2006, when he founded Percona. Peter has a Master's Degree in Computer Science and is an expert in database kernels, computer hardware, and application scaling.


  • We could add up the total of the Handler_XXX counts and put it into the slow query log patch. Or put them all into the slow query log output individually.

    Instead of resetting the Handler_XXX counters to 0 before each query, we could just snapshot their current values before each query and take the difference after the query is done.
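    This snapshot-and-diff idea can already be done by hand from a session (the SELECT is a placeholder for the query under inspection):

    ```sql
    -- Reset session-level status counters
    -- (or snapshot them first and diff afterwards).
    FLUSH STATUS;

    SELECT * FROM CITY WHERE COUNTRY = 'USA';

    -- Handler_read_rnd_next counts rows fetched by a table scan;
    -- a large value here versus few rows returned flags the query.
    SHOW SESSION STATUS LIKE 'Handler_read%';
    ```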

  • Handler_XXX counters are helpful, though they do not distinguish whether a row was actually used for the result or not.

    Though indeed, getting Handler and some other statistics into the slow query log could be very interesting. In fact, one could even have a list of variables from SHOW STATUS one would like to see logged, as different cases may need different ones.

  • For the upcoming Google patch, SHOW USER_STATISTICS includes CPU time, bytes sent, bytes received, binlog bytes written, and rows read per account. Alas, we have yet to add disk reads/writes per user – they are not as easy to count.

  • Cool.

    How did you count CPU time? I was under the impression per-thread CPU accounting only existed in some very recent Linux kernels – are you using that functionality? Regarding reads/writes, are you looking at InnoDB or everything together? InnoDB reads are easy – we have them in our slow query log patch. Writes are harder because they are asynchronous and so anonymous. For MyISAM it is harder still, because it uses OS caching, so you would need to somehow figure out whether a read resulted in a physical read or was served from the cache…

  • Mark,

    As another note, I think these are two different things – the “logical” and “physical” aspects are important for different parts of optimization. For example, if a query took 5 seconds of CPU and did 1000 IOs, that shows the query may need to be optimized, but it does not really tell how good it is compared to what it could be, which is where row usage comes into play.

  • We use Linux 2.6, and clock_gettime from librt provides the CPU time. That library isn’t used by MySQL by default. Because of all the difficulties of accounting for IO that you mention, I am not in a rush to count it. We don’t use O_DIRECT, so many reads for InnoDB may be served from the OS buffer cache.

  • We were using 32-bit builds until recently. Even with 64-bit builds, buffered IO has advantages. Unless you use the Google patch, dirty page writes can be slow because there is one background thread to handle them. One thread is sufficient for buffered IO because most writes to the buffer cache are fast.

  • With 32-bit I can see it.

    With 64-bit, O_DIRECT can be slow when writes can’t keep up… however, this is typically a problem only for heavy write loads on systems without a BBU. The RAID cards I’ve seen can take over 10,000 writes/sec into cache, which is typically enough to absorb the write load.

  • Some of us don’t get to use fancy stuff like that. I also don’t get to use a SAN. Is the problem worse there? I would be unhappy if I bought an expensive SAN and then watched 1 thread handle most writes to it.

  • So some of you do not get to use fancy stuff, but all of you get to run with the same MySQL settings?

    I’m speaking about RAID with a BBU rather than a SAN. I’m not a big fan of SANs with MySQL. Generally, when it comes to latency for a single outstanding request – logging, for example – a SAN is going to be significantly slower than a good directly attached RAID controller. Something which is not obvious to some people.

    Speaking about an expensive SAN and 1 thread doing all the writes – it is 1 thread anyway, whether you use O_DIRECT or not. True, you get more outstanding requests because the OS buffers things and flushes in parallel, but are you really so write-bound that it is going to be a problem?

  • You can use more than 1 writer thread if you apply the Google patch. I doubt I am as write-bound as Kevin Burton, but there are spikes, and I want concurrent writes for that case.

  • Right. Speaking about spikes, shouldn’t delayed flushing take care of this in most cases?
    Though I’ve seen spikes and stalls waiting for flush too, because of suboptimalities in InnoDB’s flush policy.
