Using tmpfs/ramfs as the /tmp partition is a common trick to improve the performance of on-disk temporary tables. Servers usually have less RAM than disk space, so these partitions are very limited in size and there are cases where we can run out of space.
Let’s see an example. We’re running a typical master-slave replication setup with ROW-based binary logging. The application connects to the database using persistent connections to avoid creating a new connection for every request, so it just recycles old ones.
The application starts to load data into the database with “LOAD DATA INFILE”. After some minutes the server runs out of disk space on the /tmp partition. The first step is clear: check which files are on that partition:
# ls -lah /tmp
drwxr-xr-x 3 root root 4.0K Aug 2 12:53 .
drwx------ 9 root root 4.0K Aug 5 11:35 ..
drwx------ 2 root root 16K Aug 2 12:36 lost+found
Nothing. There are no files on the partition, but it is not really empty:
# df -h
/dev/loop0 985M 961M 0 100% /tmp
Let’s see which files MySQL has open:
# lsof -p 6112|grep ML
mysqld 6112 root 38u REG 7,0 106594304 18 /tmp/MLjw4ecJ (deleted)
mysqld 6112 root 39u REG 7,0 237314310 17 /tmp/MLwdWDGW (deleted)
mysqld 6112 root 41u REG 7,0 107872256 19 /tmp/MLOqxGox (deleted)
mysqld 6112 root 44u REG 7,0 90832896 20 /tmp/MLr05Ga9 (deleted)
mysqld 6112 root 47u REG 7,0 79855616 21 /tmp/ML83dV2t (deleted)
mysqld 6112 root 50u REG 7,0 65077248 22 /tmp/ML5M7KqM (deleted)
mysqld 6112 root 53u REG 7,0 64094208 23 /tmp/MLSfMSMh (deleted)
mysqld 6112 root 56u REG 7,0 62259200 24 /tmp/MLnAcfOY (deleted)
mysqld 6112 root 59u REG 7,0 62521344 25 /tmp/ML4VPBxK (deleted)
mysqld 6112 root 62u REG 7,0 62390272 26 /tmp/ML19sFCA (deleted)
There are a lot of files in deleted status that are filling the partition. The seventh column (SIZE/OFF) shows the size of those files.
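To confirm that these deleted files account for the missing space, a quick one-liner like this sketch can sum them up (it assumes mysqld’s PID is 6112, as above, and relies on the size being in lsof’s seventh column):

# lsof -p 6112 | awk '/\/tmp\/ML.+deleted/ {sum += $7} END {printf "%.0f MB held by deleted ML* files\n", sum/1048576}'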
What are those ML* files?
When you run a big UPDATE, INSERT or LOAD DATA INFILE, MySQL sends the rows to the binary log cache before writing them to the binary log files. If that cache is not big enough to hold all the data, MySQL creates an on-disk cache file in the tmp partition with a name starting with ML.
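You can see how often the binary log cache has had to spill to disk by checking the Binlog_cache_disk_use status counter (and, from 5.5.9 on, Binlog_stmt_cache_disk_use for the statement cache):

mysql> SHOW GLOBAL STATUS LIKE 'Binlog%cache%use';

If Binlog_cache_disk_use keeps growing, connections are regularly creating those ML* files.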
Here is the problem: until the connection is closed those files won't disappear, even after the transaction is committed and the rows are in the binary log. There are different solutions to this problem:
- Increase the size of the /tmp partition.
- Increase the size of the binary log cache: binlog_cache_size and binlog_stmt_cache_size. Before 5.5.9, binlog_cache_size was used for both transactional and non-transactional statements. Since 5.5.9 the cache is split in two: binlog_cache_size holds transactional data and binlog_stmt_cache_size holds non-transactional statements.
- Change the session value of binlog_format to STATEMENT just before executing those large queries (see the sketch after this list).
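As a sketch of the last two options (the 4MB values are arbitrary examples; both cache sizes are global variables, so they take effect for caches allocated after the change, and setting binlog_format at session level requires the SUPER privilege):

mysql> SET GLOBAL binlog_cache_size = 4194304;
mysql> SET GLOBAL binlog_stmt_cache_size = 4194304;
mysql> SET SESSION binlog_format = 'STATEMENT';
mysql> LOAD DATA INFILE '/path/to/data.csv' INTO TABLE mytable;

The file and table names here are just placeholders.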
It's also worth mentioning that there are two methods to mount /tmp in RAM, tmpfs and ramfs:
- Tmpfs has a maximum size, so it returns an error if you fill it up, and it can also be swapped out, which defeats the performance benefit since it hits disk I/O anyway (see the mount example after this list).
- Ramfs will not swap, and it grows dynamically, so it won't return an out-of-space error. This can be dangerous: if you LOAD DATA from a file bigger than the available memory, you can accidentally overcommit the physical memory, most likely killing the system in the end.
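For tmpfs, the size limit is set at mount time; a minimal example with an arbitrary 1G cap (mode=1777 keeps the sticky bit that /tmp needs):

# mount -t tmpfs -o size=1G,mode=1777 tmpfs /tmp

And the matching /etc/fstab line to make it persistent across reboots:

tmpfs /tmp tmpfs size=1G,mode=1777 0 0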
MySQL can create temporary files even though they are not visible in the EXPLAIN or SHOW STATUS output. Therefore, the best solution is to monitor your system.
If your tmp partition is limited you should check its free space with a monitoring system like Nagios. Trending graphs from Cacti are also a very good option to see how and when the size of your temporary tables grows. We have our own plugins for both systems that can help you avoid these situations.
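If you can't deploy those plugins, even a minimal Nagios-style shell check on /tmp usage helps; this sketch uses arbitrary 85%/95% thresholds:

#!/bin/sh
# Report /tmp usage with Nagios exit codes; df -P keeps the output on a single line.
USAGE=$(df -P /tmp | awk 'NR==2 {gsub(/%/,""); print $5}')
if [ "$USAGE" -ge 95 ]; then
    echo "CRITICAL: /tmp is ${USAGE}% full"; exit 2
elif [ "$USAGE" -ge 85 ]; then
    echo "WARNING: /tmp is ${USAGE}% full"; exit 1
else
    echo "OK: /tmp is ${USAGE}% full"; exit 0
fi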
I've opened a bug report to follow this issue: