
Side load may massively impact your MySQL Performance

November 13, 2011 | Posted In: Benchmarks, Insight for DBAs, MySQL


When we look at benchmarks we typically run some stable workload in isolation, with nothing else happening on the system. This is not, however, how things happen in the real world, where the load varies significantly and many things can be happening concurrently.

It is very typical to hear complaints about MySQL interactive performance: serving simple standard web traffic is drastically impacted when some heavy queries are run in the background or a backup is done with mysqldump, a lot more than you would expect from simple resource competition. I finally found some time to look further into this problem and see what can be done to remedy it.

We designed the benchmark the following way: there is a small table (200MB) which completely fits in the InnoDB buffer pool (512MB), and a larger table (4GB) which does not fit in the buffer pool. We run uniform sysbench OLTP on the small table and mysqldump on the second table, first individually and then concurrently.

In a perfect world we would like to see performance stay about the same when we run the tests concurrently: sysbench should run completely in memory, using a lot of CPU but no disk IO, while mysqldump should have relatively little CPU need and be bound by disk. Also, these are just 2 “threads” running on a 4-core system, so there should be plenty of CPU to spare.

We’re using Percona Server 5.5.15 for this test, with a buffer pool size of 512MB and innodb_flush_method=O_DIRECT.
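For reference, the relevant part of the configuration would look like this (a sketch; everything else in my.cnf stays at the sandbox defaults):

```ini
[mysqld]
innodb_buffer_pool_size = 512M
innodb_flush_method     = O_DIRECT
```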

Test Setup:

[root@localhost msb_ps_5_5_15]# sysbench --test=oltp --db-driver=mysql --mysql-host=localhost --mysql-table-engine=innodb --mysql-db=test --oltp-table-name=md_cache_test_small --oltp-table-size=1100000 --mysql-user=msandbox --mysql-password=msandbox --mysql-socket=/tmp/mysql_sandbox5516.sock prepare

[root@localhost msb_ps_5_5_15]# sysbench --test=oltp --db-driver=mysql --mysql-host=localhost --mysql-table-engine=innodb --mysql-db=test --oltp-table-name=md_cache_test_big --oltp-table-size=17600000 --mysql-user=msandbox --mysql-password=msandbox --mysql-socket=/tmp/mysql_sandbox5516.sock prepare

Running sysbench and mysqldump. Note we run them in a loop to see how the result stabilizes.

[root@localhost msb_ps_5_5_15]# sysbench --test=oltp --db-driver=mysql --num-threads=1 --max-requests=0 --oltp-dist-type=uniform --max-time=180 --oltp-read-only --mysql-host=localhost --mysql-table-engine=innodb --mysql-db=test --oltp-table-name=md_cache_test_small --oltp-table-size=1100000 --mysql-user=msandbox --mysql-password=msandbox --mysql-socket=/tmp/mysql_sandbox5516.sock run

[root@localhost msb_ps_5_5_15]# time mysqldump --defaults-file=my.sandbox.cnf test md_cache_test_big > /dev/null

Baseline Run:
When we run the tests individually, sysbench gives about 330 req/sec and mysqldump for the large table completes in about 95 seconds.
If we run them concurrently, after the system reaches a steady state we get about 2 req/sec, and mysqldump takes about 180 seconds.

Yes, you read that right. Performance of sysbench OLTP on the small table drops more than 150 times when the heavy mysqldump runs concurrently; mysqldump itself also slows down about 2x.

What is going on here? To understand it, we should take a look at the buffer pool contents:

mysql [localhost] {msandbox} (information_schema) > select concat_ws('.', t.schema, t.name, i.name) as index_name, sum(data_size)/1024/1024 as data_size_mb from innodb_sys_tables as t inner join innodb_sys_indexes as i using(table_id) inner join innodb_buffer_pool_pages_index as p using(index_id) where t.schema='test' group by i.index_id \G

Running sysbench alone:

test.md_cache_test_small.PRIMARY 216.31397057
test.md_cache_test_small.k 2.66948509
test.md_cache_test_big.PRIMARY 250.76164627

Running sysbench and mysqldump concurrently:

test.md_cache_test_small.PRIMARY 12.10487175
test.md_cache_test_big.PRIMARY 457.70432472

When we run sysbench OLTP on its own, the primary key of the small table fits completely in the buffer pool. However, when mysqldump is run concurrently, it reads so many pages from disk that it pushes most of the smaller table out of the buffer pool, with only 12MB remaining. This makes the workload extremely IO-bound, hence such a drop in performance.

The performance of mysqldump is impacted too, because we now have 2 threads competing for what is a single hard drive on this test system.

It is worth noting that MySQL actually uses midpoint insertion for its buffer pool replacement policy. Unfortunately, by default it is configured in a way that makes it quite useless. Blocks are indeed first placed at the head of the “old” sublist, which means they should not push out any hot data in the “young” sublist. However, when you’re doing a mysqldump (or running some complex batch query) you are likely to access the data on the same page multiple times before being done with it for good. Because of these repeated accesses, the page gets immediately moved to the young sublist, placing high pressure on the buffer pool.

There is an ingenious feature to deal with this problem, though; you just have to enable it separately. The variable innodb_old_blocks_time specifies the number of milliseconds which must pass after a page’s first access before it can be moved to the young sublist. In typical cases like mysqldump, all accesses to the majority of pages are concentrated within a very small period of time, so setting the innodb_old_blocks_time variable to some value will prevent important data from being pushed out of the buffer pool.
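To make the mechanics concrete, here is a toy simulation of a midpoint-insertion buffer pool (an illustration only; the pool size, page counts, and eviction details are simplified assumptions, not the actual InnoDB implementation). It shows how a scan that touches each page twice in quick succession wipes out a hot working set when the time window is zero, but leaves it intact with a 1000ms window:

```python
from collections import OrderedDict

class MidpointLRU:
    """Toy model of a midpoint-insertion buffer pool (not real InnoDB code)."""

    def __init__(self, capacity, old_pct=37, old_blocks_time=0):
        self.capacity = capacity
        self.young_cap = capacity - capacity * old_pct // 100
        self.old_blocks_time = old_blocks_time
        self.young = OrderedDict()  # hot pages, MRU at the end
        self.old = OrderedDict()    # page -> time of first access, MRU at the end

    def access(self, page, now):
        if page in self.young:
            self.young.move_to_end(page)
        elif page in self.old:
            if now - self.old[page] >= self.old_blocks_time:
                # Page survived the innodb_old_blocks_time window: promote it.
                del self.old[page]
                self.young[page] = True
            else:
                self.old.move_to_end(page)  # stays in the old sublist
        else:
            self.old[page] = now  # midpoint insertion: head of the old sublist
        # Young sublist overflow: demote its LRU tail into the old sublist.
        while len(self.young) > self.young_cap:
            demoted, _ = self.young.popitem(last=False)
            self.old[demoted] = -10**9  # demoted pages are past the window
        # Pool overflow: evict from the LRU tail of the old sublist.
        while len(self.young) + len(self.old) > self.capacity:
            self.old.popitem(last=False)

    def __contains__(self, page):
        return page in self.young or page in self.old


def hot_pages_left(old_blocks_time):
    """Warm a 60-page hot set, then scan 500 cold pages, touching each twice."""
    bp = MidpointLRU(capacity=100, old_blocks_time=old_blocks_time)
    for t in range(3):                          # warm-up passes, 2 seconds apart
        for p in range(60):
            bp.access(("hot", p), now=t * 2000)
    for p in range(500):                        # the "mysqldump" scan:
        bp.access(("scan", p), now=10_000 + p)  # two accesses to each page
        bp.access(("scan", p), now=10_000 + p)  # within the same millisecond
    return sum(("hot", p) in bp for p in range(60))


print("old_blocks_time=0:   ", hot_pages_left(0), "of 60 hot pages survive")
print("old_blocks_time=1000:", hot_pages_left(1000), "of 60 hot pages survive")
```

With a zero window, every rescanned page gets promoted into the young sublist, so the hot set is demoted and eventually evicted; with a 1000ms window, the scan pages only ever churn through the old sublist and evict each other.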

Let’s repeat the benchmark with innodb_old_blocks_time=1000, which corresponds to 1 second.
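The variable is dynamic, so it can be changed at runtime without a restart (from a session with the SUPER privilege), or set in my.cnf:

```sql
SET GLOBAL innodb_old_blocks_time = 1000;  -- value is in milliseconds
```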

Run separately, sysbench gives about 330 req/sec and mysqldump takes about 95 seconds, the same as before. Note we ran the test on a virtualized system in this case, so we would not be able to reliably measure small variances in performance.

Running sysbench and mysqldump concurrently now gives about 325 req/sec for sysbench and some 100 seconds for mysqldump, which is a dramatic improvement of over 150x for sysbench, and the results are now in line with what you would expect.

Let’s see what is going on with the buffer pool contents:

test.md_cache_test_small.PRIMARY 216.35031509
test.md_cache_test_small.k 0.13414192
test.md_cache_test_big.PRIMARY 253.21095276
test.md_cache_test_big.k 0.01491451

test.md_cache_test_small.PRIMARY 216.35031509
test.md_cache_test_big.PRIMARY 253.19661140
test.md_cache_test_big.k 0.01491451

As you can see, the small table’s PRIMARY key (which is what the benchmark uses) is now not pushed out of the buffer pool at all.

For advanced tuning you might also look into changing how the buffer pool is split into young and old sublists via the innodb_old_blocks_pct variable, though we did not need to do so in this case.
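For example, shrinking the old sublist leaves a larger share of the pool protected for hot data (37 is the default percentage; 20 here is just an illustrative value, not a recommendation):

```sql
SET GLOBAL innodb_old_blocks_pct = 20;
```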

I’m not sure if there are any bad side effects from setting innodb_old_blocks_time to a non-zero value; if not, I would strongly suggest changing the default from zero in MySQL 5.6, as it would offer a much better “out of the box” user experience.

As we can see, in the default configuration MySQL has a buffer pool which can easily be washed away by large table scans or heavy batch jobs. If this happens, a workload which is normally in memory becomes disk-IO bound, which can slow it down more than 100 times. The solution is rather easy, though: setting innodb_old_blocks_time to 1000 or another meaningful number is an easy remedy for this problem.

I want to thank Ovais Tariq for doing a lot of heavy lifting running benchmarks for this post.

Peter Zaitsev

Peter managed the High Performance Group within MySQL until 2006, when he founded Percona. Peter has a Master's Degree in Computer Science and is an expert in database kernels, computer hardware, and application scaling.


  • Thanks for the insight, very helpful. Though on my system here (MySQL Community Edition 5.1.58) I cannot set this variable; mysql just complains about “Unknown system variable ‘innodb_old_blocks_time'”. A “SHOW VARIABLES LIKE ‘innodb_old_blocks_time’;” returns an empty set. What am I doing wrong here?

  • Hi Peter, if you have filed a bug report, could you share it with us? The discussion there might be more active in suggesting whether there are any problems using this in production or not.

  • I have not created a bug report, as I do not see a bug here. It is no more a bug than the default small buffer pool size or log file sizes. Just something you need to remember to change.


