The kernel_mutex problem, or how to double throughput with a single variable

The problem with kernel_mutex in MySQL 5.1 and MySQL 5.5 is well known (see the bug report). MySQL 5.6 contains fixes that are supposed to address it, but MySQL 5.6 still has a long way to go before it is production-ready, and it is not yet clear whether the problem is really fixed.

Meanwhile, the kernel_mutex problem keeps coming up: in the last month alone I handled three customer cases involving performance drops caused by it.

So what can be done about it? Let's run some benchmarks.

But first, some theory. InnoDB takes kernel_mutex when it starts and stops transactions, and when InnoDB starts a transaction it usually loops through ALL active transactions while holding kernel_mutex. So to see kernel_mutex in action, we need many concurrent but short transactions.

For this we run sysbench executing only simple primary-key SELECT queries against 48 tables of 5,000,000 rows each.

The hardware is a Cisco UCS C250 server. The workload is read-only and fully in memory.

Here are the results for different thread counts (against Percona Server 5.5.17):

Threads Throughput, q/s
1 11178.34
2 27741.06
4 53364.52
8 92546.73
16 144619.58
32 164884.03
64 154235.73
128 147456.33
256 68369.02
512 40509.67
1024 22166.94

The peak throughput is 164884 q/s at 32 threads, and it declines to 68369 q/s at 256 threads, a 2.4x drop.

The reason, as you may guess, is kernel_mutex. How can you see it? It is easy: in SHOW ENGINE INNODB STATUS\G you will see many waiting threads blocked on kernel_mutex.

This problem is actually quite serious. In real workloads I have seen it happen with fewer than 256 threads, and not every production system can tolerate a 2x throughput drop at peak times.

So what can be done?

As a first attempt, recall that kernel_mutex (like all InnoDB mutexes) has complex handling with spin loops, and there are two variables that affect those loops: innodb_sync_spin_loops and innodb_spin_wait_delay. I actually think that tuning a system with these variables is closer to a rain dance than to the scientific method, but since nothing else helps, why not try.

Here we vary innodb_sync_spin_loops from 0 to 100 (the default is 30) and repeat the same runs.

I was surprised to see that with innodb_sync_spin_loops=100 throughput improves to 145324 q/s, almost back to the peak throughput from the first experiment.

With innodb_sync_spin_loops=100, kernel_mutex is still the main point of contention, but InnoDB keeps threads spinning longer instead of putting them to sleep, and that seems to help.

Further experiments showed that 100 is not enough for 512 threads; it should be increased to 200.

So here are the final results with innodb_sync_spin_loops=200 for 1-1024 threads:

Threads Throughput (default) Throughput (spin_loops=200)
1 11178.34 11288.42
2 27741.06 28387.62
4 53364.52 53575.52
8 92546.73 92184.65
16 144619.58 143688.91
32 164884.03 164392.94
64 154235.73 154022.57
128 147456.33 152280.84
256 68369.02 150089.31
512 40509.67 127680.65
1024 22166.94 61507.08

So by adjusting this single variable we can double throughput at high concurrency, back to the level we saw with 32-64 threads. I cannot fully explain how it works internally, but I wanted to show one possible way to deal with the problem when you are hit by kernel_mutex contention.

As further directions, I want to try limiting innodb_thread_concurrency and binding mysqld to fewer CPUs, and it will also be interesting to see whether MySQL 5.6.3 really fixes this problem.


Comments (17)

  • Baron Schwartz

    For those who are curious, Vadim’s work here was partially in response to a mysterious kernel_mutex problem I had with a customer that turned out to be GDB-related.

    I am also not sure why raising the variable helped. On our phone call, Vadim and I discussed the variable and I guessed that lowering it would help and raising it would make it worse, because I thought that spinning was the problem 🙂 oprofile reports showed that ut_delay was consuming the vast majority of CPU time, and I thought that getting rid of the wasted work might potentially help. Wrong…

    December 2, 2011 at 11:09 am
  • Raghavendra

    So it looks like in 5.6 the kernel mutex may finally have been removed. (They don’t seem to date their posts... weird.)

    December 2, 2011 at 11:53 am
  • Vadim Tkachenko


    I believe I know the reason why innodb_sync_spin_loops helps here.

    The problem is old, and for some reason I was sure it had been fixed already.

    The problem is that InnoDB uses its own mutex implementation, which internally uses condition variables.
    And the current implementation uses pthread_cond_broadcast to wake up threads.
    That means that all threads (hundreds or thousands) waiting on the mutex wake up all together at the same moment
    and try to compete for the mutex again.

    Increasing innodb_sync_spin_loops delays the fall-back to condition variables and lets the mutex
    be resolved via the spin loop alone.

    In this case innodb_thread_concurrency should also help, and I am running experiments with it right now.

    December 2, 2011 at 12:29 pm
  • Vadim Tkachenko


    Removing kernel_mutex does not automatically fix the problem, as you will hit another mutex after that.
    So I would wait for the results before saying the problem is fixed.

    December 2, 2011 at 12:30 pm
  • Baron Schwartz

    That makes sense. I think that Mark Callaghan has mentioned this problem recently too.

    December 2, 2011 at 1:40 pm
  • Davi Arnaut

    > And current implementation uses pthread_cond_broadcast to wake up threads.
    > That means that all ( hundreds or thousands) threads, waiting on mutex, wake
    > up all together at the same moment and trying to compete for mutex again.

    pthread_cond_broadcast just requeues (FUTEX_REQUEUE) into the mutex wait list.
    Perhaps the thundering herd you mention is at some other level?

    December 2, 2011 at 3:48 pm
    • Vadim Tkachenko


      I am not sure what you are referring to with FUTEX_REQUEUE, but you caught my curiosity, so I overcame
      my laziness and went to the documentation. It says:
      "The pthread_cond_broadcast() function shall unblock all threads currently blocked on the specified condition variable cond."

      2. Since documentation can be wrong, I wrote a test, cond.c, with the following change:

      pthread_cond_wait(&cond, &cond_mutex);
      printf("T WOKE: %x\n", pthread_self());

      and on a single pthread_cond_broadcast it prints:
      T WOKE: bd143700
      T WOKE: bc742700
      T WOKE: bb340700
      T WOKE: ba93f700
      T WOKE: bbd41700

      That is, all 5 threads woke up.

      December 2, 2011 at 4:54 pm
  • Davi Arnaut

    The point is that they are not all woken up at the same time. When a condition is broadcast, the threads waiting on it are just moved to the wait list of the mutex, where they are woken one by one.

    December 2, 2011 at 5:05 pm
  • Vadim Tkachenko


    If I run the following:
    pthread_cond_wait(&cond, &cond_mutex);
    printf("T WOKE: %x\n", pthread_self());
    printf("T WOKE 2: %x\n", pthread_self());

    I get:
    T WOKE: 91339700
    T WOKE 2: 91339700
    T WOKE: 90938700
    T WOKE 2: 90938700
    T WOKE: 8f536700
    T WOKE 2: 8f536700
    T WOKE: 91d3a700
    T WOKE 2: 91d3a700
    T WOKE: 8ff37700
    T WOKE 2: 8ff37700

    on single pthread_cond_broadcast.

    This is what I refer to when I say that ALL threads wake.

    In the InnoDB implementation, after a thread wakes it goes back to the SPIN LOOP.

    Simplifying: all threads arrive at pthread_cond_wait in random order, but once the
    mutex is released, they all WAKE UP and start the spin loop again.

    December 2, 2011 at 6:12 pm
  • Mark Callaghan

    They are all scheduled to run so they are all going to run. Then they will busy-wait for 20 microseconds or more in the InnoDB mutex code and then a bit more in pthread code courtesy of PTHREAD_MUTEX_ADAPTIVE_NP. Then they will go back to sleep. When there are hundreds of them they will delay productive threads from being scheduled. They will also get cache lines in read-mode so that productive threads have to do cross-socket cache operations which leads to more latency. This is very inefficient.

    December 2, 2011 at 8:21 pm
  • Davi Arnaut

    > This is what I refer to when I say that ALL threads wake.

    Yes, eventually they will all wake up because they are waiting on the mutex. One thread will grab the mutex, and once it releases it, another thread is woken up.

    What I was replying to is:

    > wake up all together at the same moment

    Which is not true for pthread_cond_broadcast. Again, if there are threads sleeping on the condition variable, they are re-queued to wait on the mutex. If the mutex is unlocked, only the top waiter is woken. Only one thread may lock a mutex at a time, so there is simply no point in waking all threads.


    1. See introduction.

    December 3, 2011 at 1:57 am
  • Davi Arnaut

    > In InnoDB implementation after thread wakes it comes back to SPIN LOOP

    Yes, but one important point: InnoDB only uses pthread synchronization objects to implement the wait queue of an InnoDB mutex. When threads are on the wait queue, only one is actually woken, and it grabs the _wait queue_ lock. Soon after waking up, the thread releases the wait queue lock, which wakes up another, and so on. Outside of the wait queue, what you said applies.

    December 3, 2011 at 2:17 am
  • Peter Zaitsev

    Setting innodb_sync_spin_loops makes for a very interesting discussion, because there is really no "right" answer: depending on which mutex is the limiting one for your workload, a different amount of spinning may make sense. A better solution would be to have this value set per mutex and adjusted automatically.

    I believe it would be possible to design a system that profiles how long it takes to grab the mutex, say by profiling one out of every 1000 mutex get requests. Based on the distribution, we can then decide how long it makes sense to spin. For example, if a long spin shows that we either get the mutex within 10us or spin until the end of time (1000us), we can decide to spin up to 20us or so, which handles the short holds of the given mutex, and switch to an OS wait to stop wasting CPU otherwise.

    December 4, 2011 at 10:03 am
  • James Day

    You might want to look at innodb-adaptive-max-sleep-delay in MySQL 5.6. It makes innodb-thread-sleep-delay adaptive and is of particular value above 1024 threads in 5.6.

    Sunny’s OOW presentation mentions it on slide 28.

    James Day, Oracle. This is my view only; for an official Oracle opinion consult a PR person.

    December 5, 2011 at 3:20 pm
  • Andy Carlson


    I want to thank you for this informative post. I had a workload that I was working with a few years ago, that I could not get to perform well in innodb. It seemed like MySQL would attack one thread, and starve all the rest. I dug out the old code and data, and ran it with innodb_sync_spin_loops=64, and the workload performed much better.

    Thanks again, and I will be watching for more posts from you in the future.

    December 16, 2011 at 9:15 am
  • yangdehua

    We also saw this problem.

    What’s more, there is a difference between the 5.1 manual and the 5.5 manual:

    in 5.1 the default value of innodb_thread_concurrency is 8;

    in 5.5 the default value is 0.

    When we set innodb_thread_concurrency, the server’s load went down.

    December 27, 2011 at 10:43 am

