How fast can you sort data with MySQL?

I took the same table I used for the MySQL Group by Performance Tests to see how fast MySQL can sort 1,000,000 rows, or rather return the top 10 rows from the sorted result set, which is the most typical way sorting is used in practice.

First I checked that a full table scan of the table completes in 0.22 seconds, giving us about 4.5 million rows/sec. Obviously we can't get a sorted result set faster than that.

I placed the temporary sort files on tmpfs (/dev/shm) to take disk IO out of the equation, as my data set fits in memory anyway, and decided to experiment with the sort_buffer_size variable.

The minimum value for sort_buffer_size is 32K, which gives us the following speed:

Not bad! Even though MySQL does not optimize "get top N sorted rows" very well, it takes just 2.5 times longer than a full table scan to get the data. And this is with the minimum sort buffer allowed, when a lot of sort merge passes are required to complete the sort:

As you can see from this SHOW STATUS output, MySQL only counts completely sorted rows in the Sort_rows variable. In this case 1,000,000 rows were partially sorted, but only 10 rows were fetched from the data file and sent, and only those are counted. In practice this means Sort_rows may well understate the sort activity happening on the system.
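
The mechanics behind Sort_merge_passes can be sketched with a toy model (a simplification for illustration, not MySQL's actual filesort implementation; the name external_sort and the row counts are made up):

```python
import heapq
import random

def external_sort(rows, buffer_rows):
    """Toy filesort: sort buffer-sized runs in memory, then merge
    the runs. Returns (sorted rows, number of merge passes)."""
    # Phase 1: produce sorted runs no larger than the "sort buffer".
    runs = [sorted(rows[i:i + buffer_rows])
            for i in range(0, len(rows), buffer_rows)]
    # Phase 2: k-way merge of the runs. We count a single pass here;
    # a real filesort may need several passes when there are many runs.
    merge_passes = 1 if len(runs) > 1 else 0
    return list(heapq.merge(*runs)), merge_passes

random.seed(1)
data = [random.randrange(1_000_000) for _ in range(10_000)]
# A buffer of 1,000 rows forces 10 runs plus a merge phase.
ordered, passes = external_sort(data, buffer_rows=1_000)
```

A buffer larger than the data set sorts everything in memory with zero merge passes; a tiny buffer trades memory for merge work, which is exactly the trade-off this benchmark explores.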

Let's now increase sort_buffer_size and see how performance is affected:

OK, raising sort_buffer_size to 100K gives the expected performance benefit: now we're just 2 times slower than a table scan, and considering the table size was about 60MB, we get a 120MB/sec sort speed, though 2,000,000 rows/sec is of course the more relevant figure in this case.

There are still a lot of sort merge passes, so let's go with even higher buffer sizes.

Wait, that is not right. We're increasing sort_buffer_size, and the number of Sort_merge_passes decreases accordingly, but it does not help sort speed; instead it drops 3 times, from 0.44 sec to 1.34 sec!

Let's try an even higher value to finally get rid of sort merge passes entirely; maybe it is the sort merge itself which is inefficient with a large sort_buffer_size?

Nope. We finally got rid of Sort_merge_passes, but our sort performance got even worse!

I decided to experiment a bit further to see which sort_buffer_size is optimal for the given platform and given query (I did not test whether it is the same for all platforms or data sets). The optimal sort_buffer_size in this case was 70K-250K, which is quite a bit smaller than even the default value.
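
The sweep itself is easy to reproduce in spirit: time the same sort at several buffer sizes and pick the fastest. Here is a rough sketch (a toy in-process model with made-up sizes; for real tuning you would time the actual query against the server at each SET sort_buffer_size value):

```python
import heapq
import random
import time

def chunked_sort(rows, buffer_rows):
    # Sort buffer-sized runs, then merge them (toy filesort model).
    runs = [sorted(rows[i:i + buffer_rows])
            for i in range(0, len(rows), buffer_rows)]
    return list(heapq.merge(*runs))

random.seed(2)
data = [random.random() for _ in range(50_000)]

timings = {}
for buf in (100, 1_000, 10_000, 50_000):  # candidate "buffer" sizes, in rows
    start = time.perf_counter()
    chunked_sort(data, buf)
    timings[buf] = time.perf_counter() - start

best = min(timings, key=timings.get)  # fastest candidate for this workload
```

The point of the exercise is that `best` is an empirical result for one workload on one machine, not a universal constant, which matches what the benchmark above found.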

The CPU in question was a Pentium 4 with 1024K of cache.

A while ago I wrote that large buffers are not always better, but I never expected the optimal buffer to be so small, at least under some conditions.

What do we learn from these results?

  • Benchmark your application. Unfortunately, general tuning guidelines can be wrong for your particular case, or simply wrong in general, because they tend to reprint the manual, which is often written based on theoretical expectations rather than backed by a large amount of testing.
  • Sort_merge_passes are not that bad. Setting your sort_buffer_size large enough that there are zero Sort_merge_passes may not be optimal.
  • The world is full of surprises. I obviously did not expect to get such results, and this is not an exception: even after spending a lot of time optimizing and otherwise working with MySQL, I continue to run into results which surprise me. Some turn out to be expected; others come from underlying bugs and are later fixed.


Comments (27)

  • Jay Pipes

    Hi Peter! Nice couple of posts on buffer allocation concepts. I'm wondering: do your findings suggest that the main bottleneck in these cases is actually the memory allocation itself? Your results seem to indicate that the sort_merge execution is very fast compared to the memory allocation speed for large amounts of memory.

    August 18, 2007 at 12:56 pm
  • peter

    I do not think it is buffer allocation. We're speaking about a single sort buffer allocation per sort, and that can't be taking any significant amount of time.

    There was a case when buffer allocation was a bottleneck (a dependent subselect with ORDER BY), but in that case the allocation happened a lot of times per query.

    Generally I'd like to see more profiling of this case, because it is not behaving how it should. I suspect cache efficiency is the issue here.

    August 18, 2007 at 1:29 pm
  • Apachez

    It would be nice if stuff like this could end up in a script included with the MySQL distros, so you could just run "" (or whatever it might be named :P), come back a few minutes later, and have it present optimal sizes for sort_buffer_size among other tweakable variables.

    Or does this perhaps already exist?

    August 18, 2007 at 4:15 pm
  • Stewart Smith

    My guess is CPU cache. It would be interesting to see the optimal value on a CPU with more cache (2 or 4MB).

    August 18, 2007 at 8:18 pm
  • peter

    As I mentioned, I'm not sure all of the settings will be simply hardware+OS related. Though some benchmarks showing the best values for such likely architecture-dependent variables would be interesting.

    August 19, 2007 at 2:00 am
  • Perrin Harkins

    I wonder if this is some kind of interaction with the LIMIT clause. If you don't use LIMIT, does it change which setting does best?

    August 19, 2007 at 3:51 pm
  • Apachez

    Perrin, yes, my thoughts too. But for this particular test case I observed the same on a Q6600 CPU (2x4MB or whatever they have as L2). Values roughly between 100-256K were fastest, while a sort_buffer_size of 512K or larger showed a slowdown (not much, but still repeatable).

    Peter, do you think you could post some my.cnf's for various setups which you have found to work well and fast? You spoke about this more than a year ago 😉 (look in the comments for ). For example, based on your knowledge and findings, how would a my.cnf (MySQL 5.x) recommended by you look for a Q6600 CPU system with 4 gigs of RAM, using only MyISAM (as well as MyISAM + InnoDB)? 🙂

    August 19, 2007 at 4:46 pm
  • Scott

    Would an index on the sort field make any difference in performance?

    August 19, 2007 at 8:42 pm
  • Jay Pipes

    Hi Peter, Apachez!

    Apachez, that's a very interesting suggestion. I am meeting with the guys from the University of Maryland SKOLL cluster on Wednesday. They are assisting our QA team by continually building and testing hundreds of thousands of configurations across their 200-machine cluster. I will bring up this idea as something to think about long-term: getting optimal config settings for a variety of (common?) architectures and platforms.

    Cheers, and thanks for the good idea!


    August 20, 2007 at 8:30 am
  • peter

    I did not check without LIMIT, as in that case there would be too much overhead fetching the data.

    August 29, 2007 at 6:47 am
  • peter

    It is not as simple as memory size; a lot of this is workload specific.

    August 29, 2007 at 6:54 am
  • Apachez

    Peter, but still – some my.cnf examples here at would be great.

    Jay, any response yet from the Maryland dudes and dudettes?

    September 2, 2007 at 7:21 am
  • peter

    Yes… We should create commented my.cnf examples sometime.

    September 2, 2007 at 10:09 am
  • Jay Pipes

    We had a very good meeting in New York and you’ll be hearing more about this in the coming weeks on my blog and Giuseppe’s. Stay tuned. 🙂


    September 3, 2007 at 10:25 am
  • whitesites

    Awesome article. It's interesting to find that giving MySQL more memory does not help performance. This is totally opposite to what many believe. I am sure this is a result of the amount of cache available to MySQL. It makes a lot of sense considering Intel has been moving away from high CPU speeds toward beefed-up 4, 6 and 8MB caches with multiple cores. I would be curious to see what the speed improvement is when using one of these beefed-up chips.

    September 5, 2007 at 12:38 pm
  • James Day

    Scott, an index that can satisfy both the WHERE and ORDER BY clauses can be expected to eliminate the need to sort.

    Whitesites, larger buffers often still do help performance on mixed query loads, but what this does show is that if you keep the default low and increase it only in the sessions that benefit from larger sizes, you can get the best of both worlds. As you optimise queries and indexes you can also find that the optimal value for a particular workload changes.

    September 7, 2007 at 5:27 pm
  • MySQL Performance Blog » MySQL: what read_buffer_size value is optimal?

    […] benchmarks and performance research. I just recently wrote about rather surprising results with sort performance and today I’ve discovered even read_buffer_size selection may be less than […]

    September 17, 2007 at 7:41 am
  • Lachlan Mulcahy

    Hi Peter,

    I wonder how the size of this buffer would affect real, loaded systems.

    Obviously on many real systems disk IO comes into the equation and then disk seek time can start playing a big factor.

    I would be interested to see if the results change in this kind of situation.

    Kind Regards,

    September 18, 2007 at 10:40 pm
  • peter

    Indeed it can affect things. And there are more variables like table structure of course – I have not tested these yet.

    September 19, 2007 at 3:20 am
  • Justin

    On this subject: if you have a huge unordered MyISAM table and wish to get it into sorted order before dumping it into InnoDB, which is going to be fastest?

    a) Add an index on the column to be sorted and select by that index
    b) Dump the data to a flat file, sort it using unix sort, then re-import
    c) Select using a table scan and sort as per your example
    d) Alter to InnoDB with a primary key (which is the objective anyway)

    I’m currently doing (a) but I’m wondering if this is a big waste of time.

    September 24, 2007 at 7:04 pm
  • Justin

    Further to my last question..

    As described in this blog post, a table scan with ORDER BY/LIMIT does a very fast sort if you want to get the top-most rows, but I find it useless if you want to use it to create a fully ordered insert (for example) for InnoDB.
    My large table spends hours in 'Sorting result', even though I can get the top 20 results within 15 minutes. I wonder if there are some shortcuts taken by MySQL where LIMIT is used. Perhaps a table scan with a running best-of list, for example. That would not then be a sort, would it?
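
    That "running best-of list" would look something like the following sketch (a guess at such an optimization, not MySQL's documented behavior at the time; top_n_scan is a made-up name):

```python
import heapq
import random

def top_n_scan(rows, n):
    # Single pass keeping only the n smallest rows seen so far:
    # O(rows * log n) time and O(n) memory, with no full sort.
    return heapq.nsmallest(n, rows)

random.seed(3)
table = [random.randrange(1_000_000) for _ in range(100_000)]
top10 = top_n_scan(table, 10)  # equals sorted(table)[:10]
```

    Such a scan only pays off for small LIMITs; producing the whole ordered result still requires a full sort.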

    In order to do an ordered bulk insert into InnoDB by PK, I am resorting to an external sort of a file with unix sort and then feeding it in with batch inserts, rather than feeding it to MyISAM then converting, or feeding it to MyISAM then doing sort .. order by .. insert, or feeding unordered PKs to InnoDB directly and then doing ALTER TABLE engine=innodb to "defragment" the PK.

    Am I on the right track?

    September 24, 2007 at 10:31 pm
  • Project 2061 Techlog » Optimizing MySQL Server Runtime Parameters

    […] MySQL Performance Blog: How fast can you sort data with MySQL? […]

    May 7, 2008 at 11:55 am
  • Optimizing a MySQL Server Based on Status Information (2) » 超群.com's blog

    […] Sort_merge_passes involves two steps. MySQL first tries to sort in memory, using an amount of memory determined by the sort_buffer_size system variable. If that is not large enough to hold all the records, MySQL stores the result of each in-memory sort in a temporary file, and once all records have been found, sorts the records from the temporary files once more. This re-sort increments Sort_merge_passes. In fact, MySQL uses another temporary file to store the result of the re-sort, so you will usually see Sort_merge_passes increase by twice the number of temporary files created. Because temporary files are used, this can be slow; increasing sort_buffer_size reduces Sort_merge_passes and the number of temporary files created. But blindly increasing sort_buffer_size does not necessarily improve speed; see How fast can you sort data with MySQL? (quoted from , which seems to be blocked) […]

    March 25, 2009 at 12:08 am
  • pascal

    I second the other commenters asking for commented my.cnf files from the magicians of this blog! That would certainly be very interesting and useful.

    July 24, 2010 at 1:04 am
  • Avi

    Is there any way that I can sort in numeric mode or in string mode?
    Let me know if it is possible.

    September 8, 2010 at 2:32 am
  • iris rhodes

    Good ideas. I was enlightened by the points. Does anyone know where my company might get access to a blank DS-2003 version to fill in?

    July 2, 2016 at 9:06 am
