Sometimes I see people treating buffers as “larger is always better”: if a “large” MySQL sample configuration is designed for 2GB of RAM and they happen to have 16GB, they simply multiply all or most values by 8 and hope it will work well.
Obviously it does not. The least of the problems is wasted memory – allocating a lot of memory for operations which do not need it. Allocating 10MB to sort 10 rows will not make the sort faster than allocating 1MB; both are more than enough.
However, not only may memory be wasted – you may also see some queries actually performing slower, and not because the system starts to swap. Generally you want buffers and other values sized “just right”: working with smaller data structures improves cache locality, makes it easier for the OS to manage memory, and can provide quite unexpected improvements.
sort_buffer_size – recently I worked with a case which ran much faster with a 32K sort_buffer_size than with 32M. The problem was memory allocation – sorting was used in a correlated subquery which was executed hundreds of thousands of times, but only a few rows were sorted each time. A 32K block is allocated entirely in user space, while a 32M block requires an mmap() call and a lot of work with page tables. In this extreme case the performance difference was 9 seconds vs 8 minutes – about 50 times. I would not call it typical, but watch out. To some extent you could call this a MySQL limitation – why allocate and free the sort buffer on every execution when it is used by so many queries?
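One way to avoid penalizing the whole server for a single workload is session-level tuning – sort_buffer_size can be set per connection. A minimal sketch (the 32K value mirrors the case above; the actual query is up to you):

```sql
-- Shrink the sort buffer only for this connection, leaving the
-- global default untouched for everyone else.
SET SESSION sort_buffer_size = 32*1024;  -- 32K

-- ... run the correlated-subquery workload here ...

-- Restore the global default for this session when done.
SET SESSION sort_buffer_size = DEFAULT;
```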
Here is an example if someone wants to play with it:
CREATE TABLE `t1` (
`id` int(10) unsigned NOT NULL default '0',
`c` char(10) default NULL,
`d` int(11) default NULL,
PRIMARY KEY (`id`),
KEY `d` (`d`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
select sql_no_cache count(*) from t1 where c in (select sql_big_result count(*) from t1 where d=50 group by c );
Populate it with 100,000 rows, with 100 rows having d=50 and some 10 distinct c values.
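A sketch of how such data could be generated – `populate_t1` is just a hypothetical helper procedure, and the exact value distribution is an assumption matching the description above:

```sql
DELIMITER //
CREATE PROCEDURE populate_t1()
BEGIN
  DECLARE i INT DEFAULT 1;
  WHILE i <= 100000 DO
    -- c cycles through 10 distinct values ('c0'..'c9');
    -- the first 100 rows get d=50, the rest get unique d values.
    INSERT INTO t1 VALUES (i, CONCAT('c', i % 10), IF(i <= 100, 50, i));
    SET i = i + 1;
  END WHILE;
END//
DELIMITER ;
CALL populate_t1();
```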
read_buffer_size and read_rnd_buffer_size – these are buffers used by MyISAM to perform reads in different scenarios. So should we go ahead and set them to 64M? Not really – this could result in nasty surprises with full-table-scan LIMIT queries, as well as in some other cases. If you run SELECT * FROM LARGETABLE LIMIT 5, MyISAM will allocate the buffer and populate it fully, while MySQL will read only 5 rows from the buffer and throw away the rest.
These are just some examples; I’m quite sure there are others. For instance, a too-large table_cache may slow down your table opens and closes, and a too-large query_cache may cause long stalls on invalidation.
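To illustrate the “just right” idea, here is a hypothetical my.cnf fragment for a box with plenty of RAM – the specific numbers are assumptions for illustration, not recommendations:

```ini
# Sketch only: keep per-session buffers modest, since they are
# allocated per connection (sometimes per query), not once.
[mysqld]
sort_buffer_size     = 256K
read_buffer_size     = 256K
read_rnd_buffer_size = 512K
# Spend the big memory on shared caches instead, e.g. the MyISAM
# key cache, which is allocated once and used by all connections.
key_buffer_size      = 2G
```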
Configure it smart. Larger is not always better.