Performance Schema Benchmarks: OLTP RW

In this blog post, we’ll look at Performance Schema benchmarks for OLTP Read/Write workloads.

I am in love with Performance Schema and talk a lot about it. Performance Schema is a revolutionary MySQL troubleshooting instrument, but earlier versions had performance issues. Many of these issues are fixed now, and the default options work quickly and reliably. However, there is no free lunch! It is expected that the more instruments you turn ON, the more overhead you’ll have.

The advice I give my customers is: when in doubt, only turn ON the instruments that are required to troubleshoot your issue. Many of them ask what exactly the overhead cost of one instrument or another is. I believe the best answer is “test on your system!” No generic benchmark can exactly repeat a workload on your site. But while I was working on the “OpenSource Databases on Big Machines” project, I decided to test the performance of Performance Schema as well.
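
As a minimal sketch of that approach on MySQL 5.7, you can check what is currently switched on and then enable only the instrument and consumer you need at runtime (the Metadata Lock instrument here is just an example):

-- See how much is currently switched on
SELECT ENABLED, COUNT(*) FROM performance_schema.setup_instruments GROUP BY ENABLED;
SELECT NAME, ENABLED FROM performance_schema.setup_consumers;

-- Enable a single instrument and the consumer that records it, nothing more
UPDATE performance_schema.setup_instruments
SET ENABLED = 'YES', TIMED = 'YES'
WHERE NAME = 'wait/lock/metadata/sql/mdl';

UPDATE performance_schema.setup_consumers
SET ENABLED = 'YES'
WHERE NAME = 'global_instrumentation';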

I only tested a Read/Write workload. I used the same fast machine (144 CPU cores), the same MySQL options and the same SysBench commands that I described in this post. The option innodb_flush_method was changed to O_DIRECT, because it’s more reasonable for real-life workloads. I also upgraded the MySQL Server version to Oracle’s MySQL 5.7.17. The reason for the upgrade was to test whether the issue described in this post is repeatable with the latest Oracle MySQL Server version. But since I tested Performance Schema, the effect on Percona Server for MySQL should be the same.
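
For completeness, the only change relative to the option file from the earlier post looks like this (everything else was left exactly as described there):

[mysqld]
# the rest of the configuration is the same as in the earlier benchmark post
innodb_flush_method = O_DIRECT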

I tested nine different scenarios (a configuration sketch follows the list):

  1. “All disabled”: Performance Schema is ON, but all instruments and consumers are disabled.
  2. “All enabled”: Performance Schema is ON, and all instruments and consumers are enabled.
  3. “Default”: Performance Schema is ON, and only default instruments and consumers are enabled.
  4. “MDL only”: only Metadata Lock instrumentation is enabled.
  5. “Memory only”: only Memory instrumentation is enabled.
  6. “Stages and Statements”: only Stages and Statements instrumentation is enabled.
  7. “Stages only”: only Stages instrumentation is enabled.
  8. “Statements only”: only Statements instrumentation is enabled.
  9. “Waits only”: only Waits instrumentation is enabled.
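
As an illustration, a scenario such as “Waits only” can be reproduced roughly like this at runtime (a sketch; the same effect can be achieved at startup with the performance_schema_instrument and performance_schema_consumer_* options):

-- Start from the "All disabled" state
UPDATE performance_schema.setup_instruments SET ENABLED = 'NO', TIMED = 'NO';
UPDATE performance_schema.setup_consumers SET ENABLED = 'NO';

-- Then enable only the waits class and the consumers that collect it
UPDATE performance_schema.setup_instruments
SET ENABLED = 'YES', TIMED = 'YES'
WHERE NAME LIKE 'wait/%';

UPDATE performance_schema.setup_consumers
SET ENABLED = 'YES'
WHERE NAME IN ('global_instrumentation', 'thread_instrumentation',
               'events_waits_current', 'events_waits_history_long');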

Here are the overall results.

As you can see, some instruments only slightly affect performance, while others affect it a lot. I created separate graphs to make the picture clearer.

As expected, enabling all instrumentation reduces performance:

Does this mean that, to use Performance Schema, you need to start the server with it ON and then disable all instruments? No! The default options have very little effect on performance:
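
If you want to see exactly what the defaults give you on your own server, a quick query against the setup tables shows it (output varies by version and build):

-- Per-class count of instruments enabled by default
SELECT SUBSTRING_INDEX(NAME, '/', 1) AS class,
       SUM(ENABLED = 'YES') AS enabled,
       COUNT(*) AS total
FROM performance_schema.setup_instruments
GROUP BY class;

SELECT NAME FROM performance_schema.setup_consumers WHERE ENABLED = 'YES';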

The same is true for Metadata Locks, Memory and Statements instrumentation:

Regarding statements, I should note that I used prepared statements (which are instrumented in version 5.7). It would make sense to repeat the tests without prepared statements as well.
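
If you want to repeat the comparison yourself, sysbench 1.0 lets you switch prepared statements off per run; something along these lines (the connection and sizing parameters here are placeholders, not the values used in this benchmark):

# with prepared statements (the default)
sysbench oltp_read_write --mysql-user=sbtest --mysql-password=sbtest \
    --tables=10 --table-size=1000000 --threads=64 --time=300 run

# without prepared statements
sysbench oltp_read_write --db-ps-mode=disable --mysql-user=sbtest --mysql-password=sbtest \
    --tables=10 --table-size=1000000 --threads=64 --time=300 run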

The Stages instrumentation starts affecting performance:

However, the slowdown is moderate, and it only appears after we reach 32 concurrent threads. Stages instrumentation still provides great insight into what is happening during query execution.
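
For example, with stages enabled you can see where server time goes with a single query against the summary table (timer values are in picoseconds, hence the division):

SELECT EVENT_NAME, COUNT_STAR,
       SUM_TIMER_WAIT/1000000000000 AS total_seconds
FROM performance_schema.events_stages_summary_global_by_event_name
WHERE COUNT_STAR > 0
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 10;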

The real performance killer is Waits instrumentation:

It affects performance almost as much as turning all instruments ON.
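
When you do pay that price, the data is equally easy to get at; for example, the top wait events server-wide (assuming wait instrumentation and the corresponding consumers are enabled):

SELECT EVENT_NAME, COUNT_STAR,
       SUM_TIMER_WAIT/1000000000000 AS total_seconds
FROM performance_schema.events_waits_summary_global_by_event_name
WHERE EVENT_NAME <> 'idle' AND COUNT_STAR > 0
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 10;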

Conclusion

Using Performance Schema with the default options, or with Memory, Metadata Locks or Statements instrumentation, does not have a great impact on read-write workload performance. You might notice slowdowns with Stages instrumentation after reaching 32 actively running parallel connections. The real performance killer is Waits instrumentation. And even with it on, you will start to notice a performance drop only after you pass 10,000 transactions per second.

Comments (9)

  • Mark Callaghan

    What was “waits” using to measure time?

    January 26, 2017 at 7:55 pm
    • Sveta Smirnova

      Do you mean events_waits instrumentation? I did not do any deep checks this time. I simply turned ON all events_waits instrumentation. It would make sense to test groups of events_waits instruments, but that will also take time. Something for the next iteration.

      January 26, 2017 at 8:05 pm
      • Mark Callaghan

        I thought there was a table that explained whether rdtsc or something else was used to measure waits

        January 26, 2017 at 8:09 pm
          • Sveta Smirnova

            Got it. Here is the output for this machine:

            mysql [localhost] {msandbox} (performance_schema) > SELECT * FROM performance_timers;
            +-------------+-----------------+------------------+----------------+
            | TIMER_NAME  | TIMER_FREQUENCY | TIMER_RESOLUTION | TIMER_OVERHEAD |
            +-------------+-----------------+------------------+----------------+
            | CYCLE       |      2493492537 |                1 |             18 |
            | NANOSECOND  |      1000000000 |                1 |             48 |
            | MICROSECOND |         1000000 |                1 |             48 |
            | MILLISECOND |            1037 |                1 |             48 |
            | TICK        |             103 |                1 |            424 |
            +-------------+-----------------+------------------+----------------+
            5 rows in set (0,00 sec)

            January 27, 2017 at 5:58 am
          • Mark Leith

            Sveta: you should also post the output of performance_schema.setup_timers, to show which of the options from performance_timers is used for each event type (generally, though, I would always expect wait events to be using the CYCLE timer).

            January 30, 2017 at 6:40 am
          • Sveta Smirnova

            Mark, they are defaults.

            January 30, 2017 at 6:59 am
  • Mark Leith

    Do you have the raw numbers available by any chance Sveta? I’m interested in overall percentages.. 🙂

    January 30, 2017 at 6:45 am
