Side load may massively impact your MySQL Performance

When we’re looking at benchmarks we typically run some stable workload in isolation – nothing else is happening on the system. That is not, however, how things happen in the real world, where load varies significantly and many things can be happening concurrently.

It is very typical to hear complaints about MySQL interactive performance – serving simple standard web traffic is drastically impacted when some heavy queries are run in the background or a backup is taken with mysqldump – a lot more than you would expect from simple resource competition. I finally found some time to look further into this problem and see what can be done to remedy it.

We designed the benchmark the following way – there is a small table (200MB) which completely fits in the InnoDB buffer pool (512MB), and a larger 4GB table which does not fit in the buffer pool. We run uniform sysbench OLTP on the small table and mysqldump on the large table. First we run the tests individually, and then concurrently.

In a perfect world we would like to see performance stay about the same when we run the tests concurrently, because sysbench should run completely in memory and use a lot of CPU resources but no disk IO, while mysqldump should have relatively little CPU need and be bound by disk. Also, these are just 2 “threads” running on a 4-core system, so there should be plenty of CPU to spare.

We’re using Percona Server 5.5.15 for this test with a buffer pool size of 512MB and innodb_flush_method=O_DIRECT.

Test Setup:

[root@localhost msb_ps_5_5_15]# sysbench --test=oltp --db-driver=mysql --mysql-host=localhost --mysql-table-engine=innodb --mysql-db=test --oltp-table-name=md_cache_test_small --oltp-table-size=1100000 --mysql-user=msandbox --mysql-password=msandbox --mysql-socket=/tmp/mysql_sandbox5516.sock prepare

[root@localhost msb_ps_5_5_15]# sysbench --test=oltp --db-driver=mysql --mysql-host=localhost --mysql-table-engine=innodb --mysql-db=test --oltp-table-name=md_cache_test_big --oltp-table-size=17600000 --mysql-user=msandbox --mysql-password=msandbox --mysql-socket=/tmp/mysql_sandbox5516.sock prepare

Running sysbench and mysqldump. Note we run them in a loop to see how the results stabilize.

[root@localhost msb_ps_5_5_15]# sysbench --test=oltp --db-driver=mysql --num-threads=1 --max-requests=0 --oltp-dist-type=uniform --max-time=180 --oltp-read-only --mysql-host=localhost --mysql-table-engine=innodb --mysql-db=test --oltp-table-name=md_cache_test_small --oltp-table-size=1100000 --mysql-user=msandbox --mysql-password=msandbox --mysql-socket=/tmp/mysql_sandbox5516.sock run

[root@localhost msb_ps_5_5_15]# time mysqldump --defaults-file=my.sandbox.cnf test md_cache_test_big > /dev/null

Baseline Run:
When we run the tests individually, sysbench gives about 330 req/sec and mysqldump of the large table completes in about 95 seconds.
If we run them concurrently, after the system reaches a steady state we get about 2 req/sec and mysqldump takes about 180 seconds.

Yes, you read that right. Performance of sysbench OLTP on the small table drops more than 150 times when a heavy mysqldump is running concurrently. mysqldump itself also slows down about 2x.

What is going on here? To understand it we should take a look at the buffer pool contents.

mysql [localhost] {msandbox} (information_schema) > select concat_ws('.', t.schema, t.name, i.name) as index_name, sum(data_size)/1024/1024 as data_size_mb from innodb_sys_tables as t inner join innodb_sys_indexes as i using(table_id) inner join innodb_buffer_pool_pages_index as p using(index_id) where t.schema='test' group by i.index_id \G

Sysbench running alone:

test.md_cache_test_small.PRIMARY 216.31397057
test.md_cache_test_small.k 2.66948509
test.md_cache_test_big.PRIMARY 250.76164627

Sysbench and mysqldump running concurrently:

test.md_cache_test_small.PRIMARY 12.10487175
test.md_cache_test_big.PRIMARY 457.70432472

When we run sysbench OLTP on its own, the primary key of the small table fits completely in the buffer pool. However, when mysqldump is run concurrently it reads so many pages from disk that it pushes most of the smaller table out of the buffer pool, with only 12MB remaining. This makes the workload extremely IO-bound, hence such a drop in performance.
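To see why the drop is so severe rather than a mild slowdown, here is a back-of-envelope sketch. The latency and per-transaction read counts below are assumptions for illustration, not numbers measured in this post:

```python
# Back-of-envelope sketch with assumed (not measured) numbers: a single
# spinning disk serves a random read in roughly 10 ms, and once the small
# table's index has been evicted, a sysbench OLTP transaction may need on
# the order of 50 page reads from disk (it also queues behind mysqldump's
# own reads, which only makes things worse).
DISK_READ_MS = 10      # assumed random-read latency of one hard drive
READS_PER_TXN = 50     # assumed page misses per transaction after eviction

txn_time_ms = READS_PER_TXN * DISK_READ_MS
print(1000 / txn_time_ms)  # -> 2.0 transactions per second
```

Under these assumptions a few transactions per second is exactly what you would expect, which matches the observed 2 req/sec.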

The performance of mysqldump is impacted too, because we now have 2 threads competing for what is a single hard drive on this test system.

It is worth noting that MySQL actually uses midpoint insertion for its buffer pool replacement policy. Unfortunately, by default it is configured in a way that makes it quite useless. The blocks are indeed first placed at the head of the “old” sublist, which means they should not push out any hot data, which lives in the “young” sublist. However, when you’re doing mysqldump (or running some complex batch job query) you are likely to access the data on the same page multiple times before being done with it for good. Because there are several accesses, the page gets immediately moved to the young sublist, and as such places high pressure on the buffer pool.
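The default behavior can be illustrated with a toy model. This is a sketch of the idea only, not InnoDB's actual code; the sublist capacities and page names are made up:

```python
from collections import OrderedDict

# Toy model of midpoint insertion with the default behavior: a page's first
# read puts it at the head of the "old" sublist, and any second access --
# even a moment later -- promotes it to the "young" sublist.
class MidpointLRU:
    def __init__(self, young_cap, old_cap):
        self.young = OrderedDict()  # insertion order: coldest entry first
        self.old = OrderedDict()
        self.young_cap, self.old_cap = young_cap, old_cap

    def access(self, page):
        if page in self.young:
            self.young.move_to_end(page)            # hot page stays hot
        elif page in self.old:
            del self.old[page]                      # second access: promote
            self.young[page] = True
            if len(self.young) > self.young_cap:    # demote coldest young page
                demoted, _ = self.young.popitem(last=False)
                self.old[demoted] = True
        else:
            self.old[page] = True                   # first access: old sublist
        while len(self.old) > self.old_cap:
            self.old.popitem(last=False)            # evict from old tail

pool = MidpointLRU(young_cap=4, old_cap=4)

# Hot working set: touch pages h0..h3 twice each so they become "young".
for p in ["h0", "h1", "h2", "h3"] * 2:
    pool.access(p)

# A scan touches each of its pages a few times in a row (several rows per
# page, as mysqldump does), so every scan page gets promoted too.
for p in ["s0", "s1", "s2", "s3", "s4", "s5"]:
    pool.access(p)
    pool.access(p)

print(sorted(pool.young))  # only scan pages remain in the young sublist
```

Running this, the young sublist ends up holding nothing but scan pages: the hot working set has been demoted or evicted, which is exactly the washing-out the benchmark shows.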

There is an ingenious feature to deal with this problem, though – you just have to enable it separately. The variable innodb_old_blocks_time specifies the number of milliseconds which must pass before a page can be moved to the young sublist. In typical cases like mysqldump, all accesses to the majority of pages are concentrated within a very small period of time, so setting the innodb_old_blocks_time variable to some value will prevent important data from being pushed out of the buffer pool.
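The rule this variable adds can be sketched as a single predicate. This is a simplified illustration with made-up timestamps, not InnoDB internals:

```python
# Sketch of the innodb_old_blocks_time rule: re-accessing a page in the old
# sublist only promotes it to the young sublist once the configured delay
# has elapsed since the page's first access.
OLD_BLOCKS_TIME_MS = 1000  # the value used for the benchmark in this post

def should_promote(first_access_ms, now_ms, window_ms=OLD_BLOCKS_TIME_MS):
    """A re-access promotes the page only after the window has passed."""
    return now_ms - first_access_ms >= window_ms

# mysqldump reads all rows on a page within a few milliseconds and never
# comes back, so its re-accesses land inside the window: no promotion.
print(should_promote(first_access_ms=0, now_ms=3))     # -> False (stays old)

# A genuinely hot page is still being re-accessed long after it was first
# read in, so it does get promoted eventually.
print(should_promote(first_access_ms=0, now_ms=5000))  # -> True (goes young)
```

A scan's burst of accesses thus counts as a single touch, while sustained interest over time still warms a page up.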

Let’s repeat the benchmark with innodb_old_blocks_time=1000, which corresponds to 1 second.

Run separately, sysbench gives about 330 req/sec and mysqldump takes about 95 seconds, the same as before. Note we ran this test on a virtualized system, so we would not be able to reliably measure small variances in performance.

Running sysbench and mysqldump concurrently gives about 325 req/sec for sysbench and some 100 seconds for mysqldump, which is a dramatic improvement of over 150x for sysbench, and the results are now in line with what you would expect.

Let’s see what is going on with the buffer pool contents:

test.md_cache_test_small.PRIMARY 216.35031509
test.md_cache_test_small.k 0.13414192
test.md_cache_test_big.PRIMARY 253.21095276
test.md_cache_test_big.k 0.01491451

test.md_cache_test_small.PRIMARY 216.35031509
test.md_cache_test_big.PRIMARY 253.19661140
test.md_cache_test_big.k 0.01491451

As you can see, the small table’s PRIMARY KEY (which is what the benchmark uses) is now not pushed from the buffer pool at all.

For advanced tuning you might also look into changing how the buffer pool is split into young and old sublists via the innodb_old_blocks_pct variable, though we did not need to do that in this case.
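For a feel of what that split means in this benchmark, here is the arithmetic, assuming the server default of innodb_old_blocks_pct=37:

```python
# Quick arithmetic: how a buffer pool is divided between the old and young
# sublists by innodb_old_blocks_pct (the server default is 37 percent).
def sublist_sizes_mb(buffer_pool_mb, old_blocks_pct=37):
    old = buffer_pool_mb * old_blocks_pct / 100.0
    return old, buffer_pool_mb - old

# The 512MB buffer pool used in this post:
old_mb, young_mb = sublist_sizes_mb(512)
print(round(old_mb, 2), round(young_mb, 2))  # -> 189.44 322.56
```

So by default roughly 189MB of the 512MB pool absorbs scan traffic while about 322MB is reserved for the hot working set.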

I’m not sure if there are any bad side effects from setting innodb_old_blocks_time to a non-zero value; if not, I would strongly suggest changing the default from zero in MySQL 5.6, as it would offer a much better “out of the box” user experience.

As we can see, in the default configuration MySQL has a buffer pool which can easily be washed away by large table scans or heavy batch jobs. If this happens, a workload which is normally in memory becomes disk IO bound, which can slow it down more than 100 times. The solution is rather easy, though: setting innodb_old_blocks_time to 1000 or another meaningful number is an easy remedy for this problem.

I want to thank Ovais Tariq for doing a lot of heavy lifting running benchmarks for this post.

Comments (7)

  • Florian

    Thanks for the insight, very helpful. Though on my system here (MySQL Community Edition 5.1.58) I cannot set this variable, mysql just complains about “Unknown system variable ‘innodb_old_blocks_time’”. A “SHOW VARIABLES LIKE ‘innodb_old_blocks_time’;” results in an empty set. What am I doing wrong here?

    November 14, 2011 at 1:56 am
  • Vojtech Kurka

    Peter, this is very useful finding, indeed. Have you already filed a bug report?
    Thank you for sharing! Vojtech

    November 14, 2011 at 1:58 am
  • Matthew Ward

    Hi Peter, if you have filed a bug report, could you share it with us? The discussion there might be more active in suggesting whether there are any problems using this in production or not.

    November 14, 2011 at 2:32 am
  • Peter Zaitsev

    I have not created a bug report as I do not see the bug here. It is no more bug than small buffer pool size or log file sizes by default. Just something you need to remember changing.

    November 14, 2011 at 6:44 am
  • Peter Zaitsev

    This variable exists in the InnoDB plugin for MySQL 5.1 and in MySQL 5.5.

    November 14, 2011 at 6:56 am
  • Vojtech Kurka

    Florian, on 5.1 you must use InnoDB plugin or XtraDB, not the obsolete built-in InnoDB storage engine.

    November 14, 2011 at 6:57 am
  • sbester
    (InnoDB buffer pool can be severely affected by table scans)

    May 15, 2012 at 2:02 am