Benchmarking: More Stable Results with CPU Affinity Setting

When I run a benchmark to measure the CPU efficiency of something, I find it's often a good choice to run the benchmark program, as well as the database, on the same server. This eliminates network impact, and looking at single-thread performance eliminates contention.

Usually, this approach gives rather stable results; for example, when benchmarking MySQL with the sysbench OLTP read-only workload, I see less than one percent variance between one-minute runs.

In this case, though, I was seeing around a 20 percent difference between runs, which looked pretty random and would not go away even with longer, 10-minute runs.

The benchmark I did was benchmarking MySQL through ProxySQL (all running on the same machine):

Sysbench -> ProxySQL -> MySQL 

As I thought more about possible reasons, CPU scheduling came to mind as a likely problem. As requests pass from one process to another, does the Linux kernel schedule them on the same CPU core or a different one? Even though only one process in this setup can really be busy processing a given request at a time, there is the question of CPU cache usage, as well as other implications of scheduling the work on a single core versus moving it between cores.
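One way to see this scheduling behavior for yourself is the PSR column of ps, which reports the core a process last ran on. A minimal sketch (using the current shell's PID as a stand-in; substitute the PID of mysqld, proxysql, or sysbench to watch those processes):

```shell
# Show which CPU core a process last ran on (PSR column).
ps -o pid,psr,comm -p $$

# Sample PSR a few times; a changing value means the kernel
# is migrating the process between cores.
for i in 1 2 3; do
    ps -o psr= -p $$
    sleep 1
done
```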

To validate my assumptions, I used taskset, a small utility available in modern Linux distributions that allows setting a process's CPU affinity, essentially pinning it to a subset of the CPU cores.

I set MySQL and ProxySQL to be limited to different CPU cores:
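A sketch of how this can be done with taskset on already-running processes; the core numbers (0-5 for MySQL, 6 for ProxySQL) are illustrative, not the ones from the original run:

```shell
# Pin the running mysqld process, including all its threads (-a),
# to cores 0-5. Core numbers are illustrative.
taskset -acp 0-5 "$(pidof mysqld)"

# Pin proxysql to core 6, away from MySQL's cores.
taskset -acp 6 "$(pidof proxysql)"
```

The `-a` flag matters for MySQL and ProxySQL because both are heavily multithreaded; without it, only the main thread's affinity changes.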

And run sysbench bound to the given CPU core too:
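A sketch of the sysbench invocation, pinned to its own core at launch time; the core number and workload options are illustrative assumptions (6033 is ProxySQL's default client port):

```shell
# Launch sysbench pinned to core 7, separate from the cores
# given to MySQL and ProxySQL. Workload options are illustrative.
taskset -c 7 sysbench oltp_read_only \
    --mysql-host=127.0.0.1 --mysql-port=6033 \
    --mysql-user=sbtest --mysql-password=sbtest \
    --threads=1 --time=60 run
```

Prefixing the command with `taskset -c` pins it from the start, so there is no window where the kernel schedules it onto the database cores.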

With this change, I'm back to having very stable benchmark results. So if you ever run into a similar problem, see if setting process affinity with taskset helps!
