kernel_mutex problem cont. Or triple your throughput

This is a follow-up to my previous post on the kernel_mutex problem.

First, I may have an explanation of why performance degrades so significantly and why innodb_sync_spin_loops may fix it.
Second, if that explanation is correct (or even if it is not, we can try anyway), then playing with innodb_thread_concurrency may also help. So I ran some benchmarks with different innodb_thread_concurrency settings.

My explanation of the performance degradation is as follows:
InnoDB still uses a strange mutex implementation based on sync arrays (hello, 1990s); I do not have a good explanation for why it has not yet been replaced.
The sync array internally uses a pthread_cond_wait / pthread_cond_broadcast construction, and on a pthread_cond_broadcast call, all threads competing for the mutex wake up and start racing.
This effect is known as the thundering herd.

Davi Arnaut does not agree with me, and I do not agree with him either. This is a healthy discussion, and it is possible only because InnoDB is still open source and we can all check the source code. If the problem were in the closed-source Thread Pool extension, I could not participate in it.

We will probably argue more on that topic, but that does not stop us from trying different values of
innodb_thread_concurrency (0 by default, which means no restriction).

This variable has had a complex fate. Once it was the one solution for poor InnoDB scalability; then its default value changed; then it was even called useless.

Here are the results for the same workload as in the previous post, with 256 threads and
innodb_thread_concurrency=0,4,8,16,32,64:

innodb_thread_concurrency    Throughput
 0                            68369.02
 4                           137999.96
 8                           194537.48
16                           161985.59
32                           158296.21
64                           153889.72

Wow, this is something. I expected an improvement, but not an almost 3x one (194537 ÷ 68369 = 2.8).
The best throughput is with innodb_thread_concurrency=8.
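For reference, the winning value can be set in my.cnf; this is simply the setting that won this particular benchmark, not a universal recommendation:

```ini
# my.cnf fragment: cap the number of threads inside the InnoDB kernel.
# 8 won this benchmark; the right value depends on your workload and hardware.
[mysqld]
innodb_thread_concurrency = 8
```

The variable is also dynamic, so you can try values at runtime with `SET GLOBAL innodb_thread_concurrency = 8;` and no restart.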

So now let’s compare the results for innodb_thread_concurrency=0 vs 8 across the whole range of threads:

Threads    innodb concurrency=0    innodb concurrency=8
   1          11178.34
   2          27741.06
   4          53364.52
   8          92546.73                  88046.72
  16         144619.58                 141781.00
  32         164884.03                 168360.95
  64         154235.73                 186167.15
 128         147456.33                 199260.97
 256          68369.02                 194357.78
 512          40509.67                 194639.51
1024          22166.94                 183524.16

So innodb_thread_concurrency is even more helpful than innodb_sync_spin_loops, and it allows us to get stable results even with 1024 threads. It is too early to call it useless, and you may want to play with it.
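To make the comparison concrete: innodb_sync_spin_loops bounds how long a thread spins before it gives up and goes to sleep. A simplified model of that acquisition path (my sketch, not InnoDB source; InnoDB busy-waits with ut_delay() and parks waiters in the sync array rather than on a plain mutex):

```c
#include <pthread.h>
#include <sched.h>

/* Spin for a bounded number of rounds, then block.  `spin_loops` plays the
   role of innodb_sync_spin_loops here.  InnoDB itself pauses with ut_delay()
   and enqueues losers in the sync array instead of a plain pthread mutex. */
int spin_then_block_lock(pthread_mutex_t *m, int spin_loops) {
    for (int i = 0; i < spin_loops; i++) {
        if (pthread_mutex_trylock(m) == 0)
            return 0;                /* acquired while spinning: no sleep at all */
        sched_yield();               /* stand-in for the ut_delay() busy pause */
    }
    /* spin budget exhausted: block for real (this is the expensive path that
       ends in the condvar wait and the broadcast wakeup) */
    return pthread_mutex_lock(m);
}
```

Raising the spin budget keeps more threads on the cheap path at the top; capping concurrency, as above, shrinks the herd that ever reaches the expensive path at the bottom.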


Comments (12)

  • Wlad

    I think the problem is yet to be correctly identified. Maybe it is not kernel_mutex that hurts InnoDB. Maybe it is the sync_array (protected by its own lock) that hurts. All that stuff: the atomic looping with a dirty read on the variable, the ut_delay with its fake math just to avoid touching the variable, avoiding entering the sync_array mutex as long as possible, pthread conditions and wakeups. At some point, all this needs to be replaced with a single pthread_mutex_timedlock() (I believe a timed lock is needed to handle deadlocks) on systems that support timed mutex locks, and ported to systems that do not.

    December 2, 2011 at 7:30 pm
  • Mark Callaghan

    innodb_thread_concurrency=8 is my favorite way to guarantee that you don’t get more than 8 pending disk operations (ignoring purge, ibuf merges and readahead). I know you aren’t promoting it as a great solution, because for workloads that want to do a lot of disk IO and keep nice storage subsystems busy, it really is a good idea to send more concurrent operations to the disks.

    December 2, 2011 at 8:24 pm
  • Dave Juntgen

    @Vadim – thanks for investigating, really good info, did you run the thread concurrency test with the increased spin loops set to 200?

    @Mark – in a read intensive setup (98% reads), would setting the innodb_thread_concurrency=8 be necessarily a bad thing? What are the trade offs?

    December 3, 2011 at 8:22 am
  • Peter Zaitsev


    Yeah. innodb_thread_concurrency is especially hard to tune on a mixed workload: some of the time you have a completely CPU-bound load, so you want it relatively low, and at other times there are heavy batch jobs which are IO-bound and would benefit from a much higher innodb_thread_concurrency.

    The “right” solution would be some form of IO-aware thread scheduling for the whole of MySQL, not just InnoDB, where you can schedule something else to run when a thread is about to be blocked on disk/network IO, locks, etc.

    December 4, 2011 at 10:07 am
  • Mark Callaghan

    I hope the community implements the thread pool API for MySQL 5.6 with something that is aware of disk and network IO.

    December 4, 2011 at 10:58 am
  • Dimitri


    It all depends on the contention you have; setting innodb_thread_concurrency will either help or not help at all. In the case of kernel_mutex it was still OK. In the case of some others, not at all. See the analysis I posted last year:


    December 5, 2011 at 2:01 pm
  • Mark Callaghan

    Wlad – I think you are right about using timed mutexes to replace the sync array. With that the sync array won’t be needed as each waiting thread can do its own checks for “waiting too long” and “missed wakeup” after each timeout. We have done prototypes for this a couple of times and the results were usually good, but CPU/mutex bound workloads are not the common case for me, so I will wait for someone else to implement it for real.

    December 5, 2011 at 2:37 pm
  • todd

    Thread pools use the same broadcast mechanism to unblock threads off a semaphore when new work is available, so I’m not sure that would help much.

    December 5, 2011 at 5:56 pm
  • Raghavendra

    Interesting to see it hitting its optimum at 8, considering that the box has 24 logical threads (12 physical cores). What does this imply? Is it hitting some software bottleneck (sync_array, mutex herding, etc.) or a hardware one (NUMA/cache contention, etc.)?
    I don’t see the hardware becoming a bottleneck considering it has both RAID and a Fusion-io card. What I/O scheduler was used for this: the default CFQ or deadline? Also, was the filesystem XFS?

    Regarding innodb_thread_concurrency being 0, I can see that there will be a lot of thrashing/CPU stealing going on, leading to reduced throughput.

    December 6, 2011 at 1:30 am
  • Will

    We recently tried some of this tuning to get rid of some contention that we are having. The sync_spin_loops changes made no difference, and decreasing innodb_thread_concurrency to 16 or under actually caused our site to crash. So obviously this stuff is workload-dependent.

    December 19, 2011 at 5:03 pm
  • Tinel Barb

    I think the way we understand innodb_thread_concurrency is wrong.
    I admit the results presented in this article are nice, but we should understand that innodb_thread_concurrency is neither CPU- nor disk-I/O-bound!
    As long as the threads have not entered the execution pool they are not working at all, so CPU and disk I/O are not affected.
    This leads to the conclusion that innodb_thread_concurrency sets up a stand-by pool before a thread gets access to execution.
    Therefore, threads queued according to innodb_thread_concurrency are not competing for mutexes.
    So, the only settings that affect the execution pool and performance (in terms of CPU and disk I/O) are:
    – innodb_read_io_threads
    – innodb_write_io_threads
    – innodb_commit_concurrency
    – innodb_thread_sleep_delay
    – innodb_concurrency_tickets
    – innodb_sync_spin_loops
    – innodb_spin_wait_delay
    With this in mind, and trying to find the most stable configuration for my workload (knowing statistically how many threads try to enter the execution queue simultaneously), I’ve come to the conclusion that I basically have to tune:
    • how many threads get to execute, with regard to the number of CPUs and disks
    • how the threads get to execute, by tuning innodb_sync_spin_loops, innodb_spin_wait_delay and innodb_concurrency_tickets
    Based on the MySQL site, I have come to the conclusion that I need to give the threads the chance to wait longer to be granted entry to the execution pool BEFORE entering the sleep state (which puts them in a FIFO pool, with a performance decrease), by relaxing innodb_sync_spin_loops and innodb_spin_wait_delay!
    In fact, I have slightly increased innodb_sync_spin_loops to 80 (default 30!) and reduced innodb_spin_wait_delay to 5 (default 6!).
    I set innodb_thread_concurrency to 32.
    The result is a major decrease in all mutex competition, and therefore an increase in stability.
    I have great respect for Percona’s programmers (they are an inspiration for me), so I’d really appreciate their opinion, maybe by conducting a series of tests with this “theory”.
    Keep up, Percona team, with this great blogging site!

    December 2, 2013 at 2:11 am
  • Tinel Barb

    I should mention that my values for “how many threads get to execute” were set by tuning:
    – innodb_read_io_threads = 2 # number of innodb_buffer_pool_instances
    – innodb_write_io_threads = 10 # 8 cores + 2 disks
    – innodb_commit_concurrency = 2 # number of innodb_buffer_pool_instances
    I know that the number “2” may lead to bottlenecks, but in reality the stability is so good: the value of OS Waits is under 1e-4% and “Spin rounds per wait” is below 30 on all types of mutexes, far from the allocated value of 80. All of this gave a performance boost.
    I’d like cons or pros. Thanks!

    December 2, 2013 at 2:24 am

Comments are closed.
