Using delayed JOIN to optimize count(*) and LIMIT queries


In many search/browse applications you will see a main (fact) table, which contains the search fields, and dimension tables, which contain more information about the facts and need to be joined to produce the query result.

If you’re executing count(*) queries for such result sets, MySQL will perform the join even if you use LEFT JOIN and the join is not actually needed, which slows things down considerably. In a similar way, MySQL generates full rows while executing queries with LIMIT before throwing them away, which makes queries with high offset values very expensive.

To get better performance you can “help” MySQL: remove the JOIN for count(*) queries, and perform the JOIN only after limiting the result set for retrieval queries.

Let’s look at the following simple example with one dimension table. In real life you will usually have several of these, so the performance improvement can be even higher.
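In outline the transformation looks like this (a sketch only; fact, dim and val follow the example schema, the remaining names and constants are illustrative):

-- Naive count(*): MySQL performs the LEFT JOIN even though it cannot change the count
SELECT count(*) FROM fact f LEFT JOIN dim d ON d.id = f.dim_id WHERE f.val > 100;

-- Delayed join for count(*): drop the join entirely
SELECT count(*) FROM fact WHERE val > 100;

-- Naive retrieval: full joined rows are generated, then thrown away up to the offset
SELECT f.*, d.name FROM fact f LEFT JOIN dim d ON d.id = f.dim_id
WHERE f.val > 100 ORDER BY f.id LIMIT 500000, 10;

-- Delayed join for retrieval: limit on the narrow fact index first,
-- then join only the 10 surviving rows
SELECT f.*, d.name
FROM (SELECT id FROM fact WHERE val > 100 ORDER BY id LIMIT 500000, 10) ids
JOIN fact f ON f.id = ids.id
LEFT JOIN dim d ON d.id = f.dim_id;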

So as you can see, using this trick we get a 5x speedup for the count(*) query and a 12x speedup for the retrieval query. This is of course for an extremely high offset value, but it is also an example with only one dimension table, which fully fits in memory. For an IO-bound workload the performance difference can be much higher than that.

You may also notice one more trick I’m using here: the fact table has a covering index which includes the val column, allowing the join query to be a bit more optimal.
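Something like the following (illustrative again), so the inner query can be resolved entirely from the index without touching the rows:

-- (val, id) covers the inner query: the WHERE on val and the selected id
-- both come straight from the index
ALTER TABLE fact ADD KEY val_id (val, id);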

For now the performance gain may well be worth the trick; in the future I hope the MySQL optimizer will be improved so it does these transformations automatically.

Comments

  • Steven Roussey

    “in the future I hope MySQL Optimizer will be improved so it does these transformations automatically”

    Hahaha! I’ve mentioned this to Monty since 1999! Often we create a temp table then join against that. This was before subselects… Even worse is that joins with blobs that are never referenced get loaded and discarded on MyISAM tables (not so in InnoDB).

    April 6, 2007 at 3:55 pm
  • Jeremy Cole

    Howdy Peter,

    I’ve found that in MANY places, subqueries in FROM can be used to optimize queries. It gives you an easy way to control exactly how the query gets executed.

    Regards,

    Jeremy

    April 6, 2007 at 7:13 pm
  • Dmitri Mikhailov

    Performance gain depends on many factors, but it’s always a gain if a derived table is used wisely (never as a driven table, the result set as narrow as possible).

    I use “analytical table partitioning” or “delayed join” technique whenever applicable, not just to optimize count/limit queries.

    Regards,

    Dmitri

    April 8, 2007 at 8:01 am
  • Patrick Domack

    When using this to optimize a query that had annoyed me for some time, I found it didn’t help much: the time went from .12 sec to .10 sec with SQL_NO_CACHE.

    Extending this further, to return only one column in the subselect and LEFT JOIN the same table back in, I found result times in the .01 and .00 second range.

    Original query:
    SELECT SQL_NO_CACHE E.EntryID, E.Body, E.Subject, E.Private, DATE_FORMAT( E.Date, '%m-%d-%y' ) AS EDate, EHC.Hits FROM Entries E FORCE INDEX (DPrivate) LEFT JOIN EntryHitCounters EHC ON (EHC.EntryID=E.EntryID) WHERE E.DiaryID = 11693 AND Private = 0 ORDER BY Date DESC LIMIT 1235,20

    Final query:
    SELECT SQL_NO_CACHE E.EntryID, E.Body, E.Subject, E.Private, DATE_FORMAT(E.Date,'%m-%d-%y') AS EDate, EHC.Hits FROM (SELECT EntryID FROM Entries FORCE INDEX (DPrivate) WHERE DiaryID=11693 AND Private=0 ORDER BY Date DESC LIMIT 1235,20) D LEFT JOIN Entries E USING (EntryID) LEFT JOIN EntryHitCounters EHC USING (EntryID)

    I wish the force index wasn’t needed.

    April 8, 2007 at 9:15 am
  • peter

    Steven,

    A lot of things take a long time to fix. Do you remember subqueries were intended to be one of the main features of MySQL 3.23? But I’m not sure this one is even on the agenda.

    There are a lot of little things like this. Another one I like is the partially covering index: when, say, 95% of the filtering can be done using the index alone, MySQL will still always read the row, even though the row data is not needed in 95% of the cases.
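    For example (a made-up schema, just to illustrate):

    -- KEY (status, created) filters almost all rows via the index,
    -- but deleted is not in the index, so MySQL must still read every
    -- matching row just to check that one flag
    SELECT body FROM entries
    WHERE status = 'published' AND created > '2007-01-01' AND deleted = 0;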

    April 9, 2007 at 2:52 am
  • peter

    Right Jeremy,

    I just wanted to give some simple examples rather than only describing the technique.

    One thing I should point out: in some cases you would benefit from using two subselects in the FROM clause and joining them together. The problem, however, is that in this case MySQL will not be able to use any indexes for the join, which can kill performance.

    In such cases one can use the old MySQL 4.0 approach and materialize one of the result sets as a temporary table with proper indexes.
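    Something along these lines (table and column names made up for illustration):

    CREATE TEMPORARY TABLE tmp_ids (id INT NOT NULL, PRIMARY KEY (id)) ENGINE=MEMORY;
    INSERT INTO tmp_ids
      SELECT id FROM fact WHERE val > 100 ORDER BY id LIMIT 500000, 10;
    -- the PRIMARY KEY on tmp_ids lets MySQL use an index for this join
    SELECT f.*, d.name
      FROM tmp_ids t JOIN fact f ON f.id = t.id LEFT JOIN dim d ON d.id = f.dim_id;
    DROP TEMPORARY TABLE tmp_ids;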

    April 9, 2007 at 2:56 am
  • peter

    Patrick,

    That is the main point: the inner query has to traverse a large number of rows, so it should be kept as simple as possible. Try to avoid joins in it and keep it index-covered when possible.

    April 9, 2007 at 2:58 am
  • Christoph Schmitz

    This might be a stupid question, but wouldn’t the last query consume quite a lot of memory if you get multiple queries of the same type but with different variables? Or is the consumption almost the same as for the second-to-last one?

    April 11, 2007 at 1:31 pm
  • peter

    Why would it? It only needs a temporary table in memory to hold 10 rows, which is very fast.

    April 12, 2007 at 3:01 am
  • Peufeu

    I repost since your comment-parser chokes on “greater” and “lower” characters!

    Yes, I often use this too, in MySQL and in Postgres; it is just one of the tricks of the trade! I believe the optimizer should be able to move ORDER BY and LIMIT around inside the execution plan by itself, but since it doesn’t, doing it yourself is the way.

    Recently I had a huge search query (in Postgres) which was horrendously slow. It had various conditions like GPS coordinates, date, item type, etc., and the all-time favourite: a union of a huge archive table and an active table, plus a 5-way JOIN with a few aggregates.

    The archive + active pattern is quite common, so I’ll share my way of doing it:

    – first, query the active table and store the results in a temporary table.
    – since I use ORDER BY + LIMIT, I can then compute the max() or min() (depending on the order) of the relevant row
    – then I query the archive table, but I know that rows whose ORDER BY field is greater (or lower, depending on the order) than the value computed above will not appear in the final result set, since the first query from the active table already gave me enough rows to satisfy my LIMIT. I then add a WHERE condition to filter out those rows BEFORE sorting them. Postgres will bitmap-index-scan on a boolean OR/AND mix of my indexes, so it will never even look at the rows that don’t match.
    – I insert the results of this query into the temp table
    – I sort and limit the temp table and fetch it
    – I perform all my joins from the temp table (no freaky IN() from php)

    It went from a few minutes to a few tens of milliseconds. Not bad, I may say. Use this trick if you need it! It works very well with MySQL too.
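    In MySQL syntax the same idea looks roughly like this (a sketch only; active_items, archive_items and item_details are made-up names):

    CREATE TEMPORARY TABLE tmp_hits (id INT NOT NULL, created_at DATETIME, KEY (created_at));

    -- 1. newest candidates from the small active table
    INSERT INTO tmp_hits
      SELECT id, created_at FROM active_items
      WHERE item_type = 3 ORDER BY created_at DESC LIMIT 20;

    -- 2. cutoff: rows older than this cannot make the final page
    -- (only a valid cutoff when step 1 returned the full 20 rows)
    SELECT min(created_at) INTO @cutoff FROM tmp_hits;

    -- 3. archive rows, filtered BEFORE sorting
    INSERT INTO tmp_hits
      SELECT id, created_at FROM archive_items
      WHERE item_type = 3 AND created_at >= @cutoff
      ORDER BY created_at DESC LIMIT 20;

    -- 4. final sort and limit, then join the details from the temp table
    SELECT t.id, d.*
      FROM tmp_hits t JOIN item_details d ON d.item_id = t.id
      ORDER BY t.created_at DESC LIMIT 20;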

    April 16, 2007 at 3:54 am
  • -TMA-1- » links for 2007-04-26

    […] MySQL Performance Blog » Using delayed JOIN to optimize count(*) and LIMIT queries (tags: Tech Database MySQL Performance Tips) […]

    April 25, 2007 at 5:20 pm
  • » Easy MySQL Performance Tips

    […] joining two large tables, but only searching against one, put the join statement at the end. Why join the two entire tables when you only have to join the matching […]

    June 6, 2007 at 5:12 am
  • i.ndustrio.us - » Optimize MySQL queries w/delayed JOIN & LIMIT

    […] Perfomancing Blog had a nice article on using delayed JOIN to optimize count(*) and LIMIT queries. I just personally optimized a couple of nagging queries with the “delayed limit” […]

    June 18, 2007 at 10:58 pm
  • PHP & MySQL Optimierungtips - CoreBlog

    […] Using delayed JOIN to optimize count(*) and LIMIT queries […]

    August 15, 2007 at 4:59 am
  • Boia Alexandru

    Sorry for reposting:)…but it seems that your comment parser acts a little weird:P

    While this works when performing LEFT JOINs, with INNER JOINs it’s a different thing.
    Let’s say we want 10 records, starting from offset 500000, where i < 10000 like in the above example, but let’s perform an inner join instead…
    The first subquery will return the requested 10 results, and assuming there are rows in the res table that don’t match some rows returned by the subquery, the final result will be fewer than 10 rows…

    August 23, 2007 at 5:41 am
  • peter

    Sure. You have to ensure the number of rows does not change in order to use this optimization. In other cases, such as joining back to the same table, that is guaranteed.
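    To illustrate with the made-up fact/dim example from the post:

    -- can return FEWER than 10 rows if some fact rows have no matching dim row
    SELECT f.*, d.name
      FROM (SELECT id FROM fact WHERE val > 100 ORDER BY id LIMIT 500000, 10) ids
      JOIN fact f ON f.id = ids.id
      INNER JOIN dim d ON d.id = f.dim_id;

    -- safe: keep the dimension join a LEFT JOIN, or move the restricting
    -- condition into the inner query so the row count cannot change afterwards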

    August 30, 2007 at 1:20 pm
  • Praca

    To optimize your MySQL queries, use LIMIT at the end of the query :)

    e.g. $sql = "SELECT id FROM users WHERE email = 'uniq@email.com' LIMIT 1";

    February 26, 2009 at 3:20 am
  • Ilan Hazan

    For performance reasons I am limiting my counts.
    I found out that most of the time I need count(*) only up to a certain limit (I don’t want to show more than 10 list pages). In this case SELECT count(*) does redundant calculation for heavy lists.
    I am using MySQL limited count.
    See
    http://www.mysqldiary.com/limited-select-count/

    Ilan.
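    The idea in a nutshell (a sketch with made-up names; 1000 here would cover 10 pages of 100 rows):

    -- stops scanning after 1000 matches instead of counting every matching row
    SELECT count(*) FROM (SELECT 1 FROM fact WHERE val > 100 LIMIT 1000) t;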

    May 25, 2010 at 3:34 am
  • Nuwan

    Thanks, really helped out!

    December 9, 2010 at 4:25 am
  • Adam Copley

    How can this method be put into place in a situation where your WHERE clauses contain fields from the JOINed tables? Surely it just wouldn’t work then.

    December 22, 2015 at 5:27 am
