October 2, 2014

Using delayed JOIN to optimize count(*) and LIMIT queries

In many search/browse applications you will see a main (fact) table which contains the search fields, and dimension tables which contain additional information about the facts and which need to be joined to produce the query result.

If you're executing count(*) queries for such result sets, MySQL will perform the join even though it is not needed to get the count (even if you use LEFT JOIN), which slows things down considerably. In a similar way, MySQL generates full rows while executing queries with LIMIT before throwing them away, which makes queries with high offset values very expensive.

To get better performance you can “help” MySQL: remove the JOIN for count(*) queries, and perform the JOIN only after limiting the result set for retrieval queries.

Let's look at the following simple example with one dimension table. In real life you will usually have several of these, so the performance improvement can be even higher.
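
As an illustrative sketch (table and column names here are hypothetical): suppose a fact table with an integer filter column i, a sort column val, a fact_id primary key, and a dim_id referencing a dim dimension table. The two rewrites then look roughly like this, with the derived table aliased res:

-- Plain count(*): MySQL performs the LEFT JOIN for every row even though
-- a LEFT JOIN cannot change the number of rows counted.
SELECT count(*)
FROM fact
LEFT JOIN dim ON dim.dim_id = fact.dim_id
WHERE fact.i < 10000;

-- “Helped” count(*): the join is simply removed.
SELECT count(*)
FROM fact
WHERE fact.i < 10000;

-- Plain LIMIT with a high offset: full joined rows are generated for all
-- 500,010 rows and 500,000 of them are thrown away.
SELECT fact.*, dim.name
FROM fact
JOIN dim ON dim.dim_id = fact.dim_id
WHERE fact.i < 10000
ORDER BY fact.val
LIMIT 500000, 10;

-- Delayed join: limit first on the narrow fact table, then join only the
-- 10 surviving rows to retrieve the dimension data.
SELECT fact.*, dim.name
FROM (
    SELECT fact_id
    FROM fact
    WHERE i < 10000
    ORDER BY val
    LIMIT 500000, 10
) res
JOIN fact ON fact.fact_id = res.fact_id
JOIN dim ON dim.dim_id = fact.dim_id;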

So as you can see, using this trick we get a 5 times speedup for the count(*) query and a 12 times speedup for the query with LIMIT. This is of course for an extremely high offset value, but it is also an example with only one dimension table which fully fits in memory. For an IO-bound workload the performance difference can be much higher than that.

You may also notice one more trick I'm using here – the fact table has a covering index which includes the val column; this allows the join query to be executed a bit more optimally.
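
For the sketch above, such a covering index might look like this (the index name is hypothetical):

-- With InnoDB the primary key (fact_id) is implicitly appended to every
-- secondary index, so the inner SELECT fact_id ... query of the delayed
-- join can be resolved from this index alone.
ALTER TABLE fact ADD KEY idx_i_val (i, val);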

So right now the performance gain may well be worth the trick; in the future I hope the MySQL optimizer will be improved so that it does these transformations automatically.

About Peter Zaitsev

Peter managed the High Performance Group within MySQL until 2006, when he founded Percona. Peter has a Master's Degree in Computer Science and is an expert in database kernels, computer hardware, and application scaling.

Comments

  1. “in the future I hope MySQL Optimizer will be improved so it does these transformations automatically”

    Hahaha! I’ve mentioned this to Monty since 1999! Often we create a temp table then join against that. This was before subselects… Even worse is that joins with blobs that are never referenced get loaded and discarded on MyISAM tables (not so in InnoDB).

  2. Howdy Peter,

    I’ve found that in MANY places, subqueries in FROM can be used to optimize queries. It gives you an easy way to control exactly how the query gets executed.

    Regards,

    Jeremy

  3. Dmitri Mikhailov says:

    Performance gain depends on many factors, but it's always a gain if a derived table is used wisely (never as the driven table, and with the result set kept as narrow as possible).

    I use “analytical table partitioning” or “delayed join” technique whenever applicable, not just to optimize count/limit queries.

    Regards,

    Dmitri

  4. Patrick Domack says:

    When using this to optimize a query that has annoyed me for some time, I found it didn't help much; the time went from .12 sec to .10 sec with SQL_NO_CACHE.

    Extending this further, to only return one column in the subselect and LEFT JOIN the same table back in, I found it gave result times in the .01 and .00 sec range.

    Original query:
    SELECT SQL_NO_CACHE E.EntryID, E.Body, E.Subject, E.Private, DATE_FORMAT(E.Date, '%m-%d-%y') AS EDate, EHC.Hits FROM Entries E FORCE INDEX (DPrivate) LEFT JOIN EntryHitCounters EHC ON (EHC.EntryID=E.EntryID) WHERE E.DiaryID = 11693 AND Private = 0 ORDER BY Date DESC LIMIT 1235,20

    Final query:
    SELECT SQL_NO_CACHE E.EntryID, E.Body, E.Subject, E.Private, DATE_FORMAT(E.Date, '%m-%d-%y') AS EDate, EHC.Hits FROM (SELECT EntryID FROM Entries FORCE INDEX (DPrivate) WHERE DiaryID=11693 AND Private=0 ORDER BY Date DESC LIMIT 1235,20) D LEFT JOIN Entries E USING (EntryID) LEFT JOIN EntryHitCounters EHC USING (EntryID)

    I wish the force index wasn’t needed.

  5. peter says:

    Steven,

    A lot of things take a lot of time to fix. Do you remember that subqueries were intended to be one of the main features of MySQL 3.23? But this one I'm not sure is even on the agenda.

    There are a lot of little things like this. Another one I like is the partially covering index case: when, say, 95% of the filtering can be done using the index alone, but MySQL will still always read the row, even though it is not needed in 95% of the cases.

  6. peter says:

    Right Jeremy,

    I just wanted to give some simple examples rather than just describing the technique.

    One thing I should point out – in some cases you would benefit from using two subselects in the FROM clause and joining them together. The problem, however, is that in this case MySQL will not be able to use any indexes for the join, which can kill performance.

    In such a case one can use the old MySQL 4.0 way and create one of the result sets as a temporary table with proper indexes.
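
    A sketch of that temporary-table approach, reusing the hypothetical tables from the example above:

    -- Materialize one result set as a real temporary table with an index ...
    CREATE TEMPORARY TABLE tmp_res (
        dim_id INT UNSIGNED NOT NULL,
        cnt    INT UNSIGNED NOT NULL,
        PRIMARY KEY (dim_id)
    ) ENGINE=MEMORY;

    INSERT INTO tmp_res
    SELECT dim_id, COUNT(*) FROM fact WHERE i < 10000 GROUP BY dim_id;

    -- ... so the other (derived) result set can be joined against it using
    -- the index, instead of joining two unindexed derived tables.
    SELECT d.name, t.cnt
    FROM (SELECT dim_id, name FROM dim WHERE name LIKE 'A%') d
    JOIN tmp_res t ON t.dim_id = d.dim_id;

    DROP TEMPORARY TABLE tmp_res;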

  7. peter says:

    Patrick,

    This is the main point – the inner query has to traverse a high number of rows, so it should be kept as simple as possible: try to avoid joins and keep it index-covered when possible.

  8. This might be a stupid question, but wouldn't the last query consume quite a lot of memory if you get multiple queries of the same type but with different variables? Or is that consumption almost the same as for the second-to-last one?

  9. peter says:

    Why would it? It only has a temporary table in memory to hold 10 rows, which is very fast.

  10. Peufeu says:

    I repost since your comment-parser chokes on “greater” and “lower” characters!

    Yes, I often use this too, in MySQL and in Postgres – it is just one of the tricks of the trade! I believe the optimizer should be able to move ORDER BY and LIMIT around inside the execution plan by itself, but since it isn't, doing it yourself is the way.

    Recently I had a huge search query (in Postgres) which was horrendously slow. It had various conditions like GPS coordinates, date, item type, etc., and the all-time favourite: a union of a huge archive table and an active table, plus a 5-way JOIN with a few aggregates.

    The archive + active case is quite common, so I'll describe my way of doing it:

    – First, query the active table and store the results in a temporary table.
    – Since I use ORDER BY + LIMIT, I can then compute the max() or min() (depending on the order) of the relevant row.
    – Then I query the archive table, but I know that rows whose ORDER BY field is greater than (or less than, depending on the order) the value computed above cannot appear in the final result set, since the first query from the active table already gave me enough rows to satisfy my LIMIT. So I add a WHERE condition to filter out those rows BEFORE sorting them. Postgres will bitmap-index-scan on a boolean OR/AND mix of my indexes, so it will never even look at the rows that don't match.
    – I insert the results of this query into the temp table.
    – I sort and limit the temp table and fetch it.
    – I perform all my joins from the temp table (no freaky IN() from PHP).

    It went from a few minutes to a few tens of milliseconds. Not bad, I may say. Use this trick if you need it! It works very, very well with MySQL too.
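
    A rough sketch of these steps (Postgres-flavored, with hypothetical table and column names), fetching the newest 50 items of a given type across an active and an archive table:

    -- Step 1: candidates from the active table.
    CREATE TEMPORARY TABLE candidates AS
    SELECT item_id, ts FROM active_items
    WHERE item_type = 3
    ORDER BY ts DESC
    LIMIT 50;

    -- Steps 2-4: add archive rows, filtering BEFORE sorting. The cutoff
    -- (min(ts) of the candidates) is only safe when the first query
    -- returned a full page of 50 rows; otherwise skip the ts filter.
    INSERT INTO candidates
    SELECT item_id, ts FROM archive_items
    WHERE item_type = 3
      AND ts >= (SELECT min(ts) FROM candidates)
    ORDER BY ts DESC
    LIMIT 50;

    -- Steps 5-6: sort and limit the small temp table, then do the joins
    -- only for the final rows.
    SELECT c.item_id, d.title
    FROM (SELECT item_id, ts FROM candidates ORDER BY ts DESC LIMIT 50) c
    JOIN item_details d ON d.item_id = c.item_id
    ORDER BY c.ts DESC;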

  11. Boia Alexandru says:

    Sorry for reposting :) … but it seems that your comment parser acts a little weird :P

    While this works when performing LEFT JOINs, with INNER JOINs it's a different thing.
    Let's say we want 10 records, starting from offset 500000, with the same condition on i as in the above example, but let's perform an inner join instead…
    The first subquery will return the requested 10 results, and assuming some of the rows returned by the subquery have no matching rows in the inner-joined table, then the result will be fewer than 10 rows…

  12. peter says:

    Sure. You have to ensure the number of rows does not change in order to use this optimization. In other cases – like when you're joining back to the same table – it is simply guaranteed.
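
    A small sketch of that, reusing the hypothetical tables from the example above:

    SELECT fact.*, dim.name
    FROM (SELECT fact_id FROM fact WHERE i < 10000
          ORDER BY val LIMIT 500000, 10) res
    JOIN fact ON fact.fact_id = res.fact_id        -- same table: row count preserved
    LEFT JOIN dim ON dim.dim_id = fact.dim_id;     -- LEFT JOIN keeps all 10 rows;
                                                   -- an INNER JOIN here could drop rows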

  13. Praca says:

    To optimize your MySQL queries, use LIMIT at the end of the query :)

    ex. $sql = "SELECT id FROM users where email = 'uniq@email.com' LIMIT 1";

  14. Ilan Hazan says:

    For performance reasons I am limiting my counts.
    I found out that most of the time I need count(*) only up to a certain limit (I don't want to show more than 10 list pages). In this case SELECT count(*) is doing redundant calculations for heavy lists.
    I am using MySQL Limited-Count.
    See
    http://www.mysqldiary.com/limited-select-count/

    Ilan.
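
    A minimal sketch of that limited-count idea (hypothetical names, capping the count at 200 matching rows):

    SELECT count(*) AS capped_count
    FROM (SELECT 1 FROM fact WHERE i < 10000 LIMIT 200) t;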

  15. Nuwan says:

    Thanks, really helped out!
