Using delayed JOIN to optimize count(*) and LIMIT queries

In many search/browse applications you will see a main (fact) table that contains the search fields, plus dimension tables that contain more information about the facts and need to be joined to produce the query result.

If you execute count(*) queries for such result sets, MySQL will perform the join even if you use LEFT JOIN and the join is not actually needed, which slows things down considerably. Similarly, MySQL generates full rows while executing queries with LIMIT before throwing them away, which makes queries with high offset values very expensive.

To get better performance you can “help” MySQL: remove the JOIN for count(*) queries, and for retrieval queries do the JOIN only after the result set has been limited.

Let’s look at the following simple example with one dimension table. In real life you will usually have several of them, so the performance improvement can be even higher.
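
As a minimal sketch of the idea, assume a hypothetical fact table facts (id, group_id, val, dim_id) and a dimension table dim (id, name); all table names, column names and constants below are purely illustrative.

-- Straightforward versions: the dimension table is joined for every matching
-- row, both for the count and for all 500,010 rows generated before the
-- LIMIT clause discards most of them.
SELECT count(*)
FROM facts f
LEFT JOIN dim d ON d.id = f.dim_id
WHERE f.group_id = 100;

SELECT f.id, f.val, d.name
FROM facts f
LEFT JOIN dim d ON d.id = f.dim_id
WHERE f.group_id = 100
ORDER BY f.val DESC
LIMIT 500000, 10;

-- "Delayed join" versions: count(*) runs on the fact table alone, and the
-- dimension table is joined only to the 10 rows that survive the LIMIT.
SELECT count(*)
FROM facts f
WHERE f.group_id = 100;

SELECT f.id, f.val, d.name
FROM (
    SELECT id
    FROM facts
    WHERE group_id = 100
    ORDER BY val DESC
    LIMIT 500000, 10
) ids
JOIN facts f ON f.id = ids.id
LEFT JOIN dim d ON d.id = f.dim_id
ORDER BY f.val DESC;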

So as you can see, using this trick we get a 5x speedup for the count(*) query and a 12x speedup for the retrieval query with LIMIT. This is of course for an extremely high offset value, and it is also an example with only one dimension table which fully fits in memory; for an IO-bound workload the performance difference can be much higher than that.

You may also notice one more trick I’m using here – the fact table has a covering index which includes the val column; this allows the join query to be executed a bit more optimally.
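
For example, with the illustrative facts table from the sketch above, such a covering index might be defined as:

ALTER TABLE facts ADD INDEX grp_val (group_id, val, id);
-- With group_id, val and id all present in the index, the inner
-- "SELECT id ... WHERE group_id = ... ORDER BY val DESC LIMIT ..." query
-- can be resolved from the index alone, without reading full table rows.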

So for now the performance gain may well be worth the trick; in the future I hope MySQL Optimizer will be improved so it does these transformations automatically.

Comments (23)

  • Steven Roussey

    “in the future I hope MySQL Optimizer will be improved so it does these transformations automatically”

    Hahaha! I’ve mentioned this to Monty since 1999! Often we create a temp table then join against that. This was before subselects… Even worse is that joins with blobs that are never referenced get loaded and discarded on MyISAM tables (not so in InnoDB).

    April 6, 2007 at 3:55 pm
  • Jeremy Cole

    Howdy Peter,

    I’ve found that in MANY places, subqueries in FROM can be used to optimize queries. It gives you an easy way to control exactly how the query gets executed.

    Regards,

    Jeremy

    April 6, 2007 at 7:13 pm
  • Dmitri Mikhailov

    Performance gain depends on many factors, but it’s always a gain if a derived table is used wisely (never as a driven table, the result set as narrow as possible).

    I use the “analytical table partitioning” or “delayed join” technique whenever applicable, not just to optimize count/limit queries.

    Regards,

    Dmitri

    April 8, 2007 at 8:01 am
  • Patrick Domack

    When using this to optimize a query that has annoyed me for some time, I found it didn’t help much; the time went from .12 sec to .10 sec with SQL_NO_CACHE.

    Extending this further to return only one column in the subselect and LEFT JOIN the same table back in, I found it gave result times in the .01 and .00 range.

    Original query:
    SELECT SQL_NO_CACHE E.EntryID, E.Body, E.Subject, E.Private, DATE_FORMAT( E.Date, '%m-%d-%y' ) As EDate, EHC.Hits FROM Entries E FORCE INDEX (DPrivate) LEFT JOIN EntryHitCounters EHC ON (EHC.EntryID=E.EntryID) WHERE E.DiaryID = 11693 AND Private = 0 ORDER BY Date DESC LIMIT 1235,20

    Final query:
    select sql_no_cache E.EntryID, E.Body, E.Subject, E.Private, DATE_FORMAT(E.Date,'%m-%d-%y') as EDate, EHC.Hits from (select EntryID from Entries FORCE INDEX (DPrivate) where DiaryID=11693 and Private=0 Order By Date DESC LIMIT 1235,20) D LEFT JOIN Entries E USING (EntryID) LEFT JOIN EntryHitCounters EHC USING (EntryID)

    I wish the force index wasn’t needed.

    April 8, 2007 at 9:15 am
  • peter

    Steven,

    A lot of things take a long time to fix. Do you remember subqueries were intended to be one of the main features of MySQL 3.23? But this one – I’m not sure it is even on the agenda.

    There are a lot of little things like this; another one I like is the partially covering index, where say 95% of the filtering can be done using the index alone, but MySQL will still have to read the row even though it is not needed in 95% of cases.

    April 9, 2007 at 2:52 am
  • peter

    Right Jeremy,

    I just wanted to give some simple examples rather than just describing the technique.

    One thing I should point out – in some cases you would benefit from using two subselects in the FROM clause and joining them together. The problem, however, is that in this case MySQL will not be able to use any indexes for the join, which can kill performance.

    In such a case one can use the old MySQL 4.0 way and create one of the result sets as a temporary table with proper indexes, as sketched below.
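
    A rough sketch of that approach, with purely illustrative table and column names:

    -- Materialize one of the two result sets as a temporary table with an
    -- index on the join column.
    CREATE TEMPORARY TABLE tmp_agg (
        dim_id INT NOT NULL,
        total INT,
        PRIMARY KEY (dim_id)
    );

    INSERT INTO tmp_agg
    SELECT dim_id, SUM(val) FROM facts WHERE group_id = 100 GROUP BY dim_id;

    -- The second subselect can now be joined against tmp_agg using its
    -- primary key, which a join between two derived tables could not do.
    SELECT a.dim_id, a.total, b.cnt
    FROM (
        SELECT dim_id, COUNT(*) AS cnt
        FROM facts
        WHERE group_id = 200
        GROUP BY dim_id
    ) b
    JOIN tmp_agg a ON a.dim_id = b.dim_id;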

    April 9, 2007 at 2:56 am
  • peter

    Patrick,

    That is the main point – the inner query has to traverse a large number of rows, so it should be kept as simple as possible: try to avoid joins and keep it covered by an index when possible.

    April 9, 2007 at 2:58 am
  • Christoph Schmitz

    This might be a stupid question, but wouldn’t the last query consume quite a lot of memory if you get multiple queries of the same type but with different variables? Or is that consumption almost the same as for the second-to-last one?

    April 11, 2007 at 1:31 pm
  • peter

    Why would it? It only keeps a temporary table in memory to hold 10 rows, which is very fast.

    April 12, 2007 at 3:01 am
  • Peufeu

    Yes, I often use this too, in MySQL and in Postgres; it is just one of the tricks of the trade! I believe the optimizer should be able to move ORDER BY and LIMIT around inside the execution plan by itself, but since it doesn’t, doing it yourself is the way.

    Recently I had a huge search query (in Postgres) which was horrendously slow. It has various conditions like GPS coordinates, date, item type, etc, and the all-time favourite, union from a huge archive table and an active table, plus a 5-way JOIN with a few aggregates.

    The archive + active is quite common so I’ll drop my way of doing it :

    – first query the active table and store the results in a temporary table.
    – since I use ORDER BY + LIMIT, I can then compute the max() or min() (depending on the order) of the relevant row
    – then I query the archive table, but I know that rows which have the ORDER BY field > (or

    April 16, 2007 at 3:53 am
  • Peufeu

    I repost since your comment-parser chokes on “greater” and “lower” characters!

    Yes, I often use this too, in MySQL and in Postgres; it is just one of the tricks of the trade! I believe the optimizer should be able to move ORDER BY and LIMIT around inside the execution plan by itself, but since it doesn’t, doing it yourself is the way.

    Recently I had a huge search query (in Postgres) which was horrendously slow. It has various conditions like GPS coordinates, date, item type, etc, and the all-time favourite, union from a huge archive table and an active table, plus a 5-way JOIN with a few aggregates.

    The archive + active is quite common so I’ll drop my way of doing it :

    – first query the active table and store the results in a temporary table.
    – since I use ORDER BY + LIMIT, I can then compute the max() or min() (depending on the order) of the relevant row
    – then I query the archive table, but I know that rows which have th