Impact of logging on MySQL’s performance

When people first hear about Percona’s microslow patch, the question that immediately arises is how much logging impacts performance. When we do a performance audit, we often log every query, not only the slow ones: a query may take less than a second to execute, but a huge number of such queries can still load a server significantly. On one hand, logging produces sequential writes, which should not hurt performance much; on the other hand, when every query is logged there are plenty of write operations, and obviously performance suffers. Let’s investigate how much.

I took DBT2, OSDL’s implementation of TPC-C.
Hardware used
The benchmark was run on a Dell server running CentOS release 4.7 (Final), with four Intel(R) Xeon(R) 5150 @ 2.66GHz CPUs and 32GB of RAM. Storage is 8 disks in RAID 10 (a mirror of 4+4 striped disks).
MySQL 5.0.75-percona-b11 was used.
MySQL settings
Two cases were considered: CPU-bound and IO-bound.
Each case had three options:

  • logging turned off;
  • logging queries that take more than a second to execute;
  • logging every query.
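
These three options map onto server configuration roughly as follows; a sketch, assuming the MySQL 5.0-era option names (the log file path is an illustration, not the path used in the benchmark):

```ini
[mysqld]
# Option 1, logging off: simply leave log-slow-queries out entirely.

# Option 2: log queries taking more than one second (stock behaviour).
log-slow-queries = /var/log/mysql/slow.log
long_query_time  = 1

# Option 3: log every query. The microslow patch accepts sub-second
# (including zero) values for long_query_time:
# long_query_time = 0
```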

MySQL was run with default settings except the following: depending on the workload, a different InnoDB buffer pool size was used. In the CPU-bound case the buffer pool was large enough to hold the whole dataset in memory; in the IO-bound case it was smaller than the dataset.

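The exact buffer pool values are not listed above; as an illustration only, the two workloads could be produced with settings along these lines (the sizes are assumptions, not the benchmark’s actual configuration):

```ini
[mysqld]
# CPU-bound case: buffer pool larger than the 1.31GiB dataset,
# so the working set is served from memory (illustrative size).
innodb_buffer_pool_size = 4G

# IO-bound case: buffer pool much smaller than the 10GiB dataset,
# so a large share of reads go to disk (illustrative size):
# innodb_buffer_pool_size = 1G
```
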
DBT2 settings
For the CPU-bound case the number of warehouses was 10 (1.31GiB of data). For the IO-bound case it was 100 warehouses, which is 10GiB in terms of database size.
The test was run with 1, 20, and 100 database connections.
To reduce random error, the test was run 3 times per parameter set.
The metric of a DBT2 test is NOTPM (New Order Transactions Per Minute); the more the better.

CPU-bound case – 10 warehouses

Database size: 1.31 GiB (all throughput numbers are NOTPM)

# of connections | No logging | Logging >1 sec | Ratio >1s / none | Logging all | Ratio all / none
1                | 9607       | 9632           | 1.00             | 8434        | 0.88
20               | 27612      | 27720          | 1.00             | 22105       | 0.80
100              | 11704      | 11741          | 1.00             | 10956       | 0.94

We see here that logging all queries decreases MySQL’s performance by 6-20%, depending on the number of connections to the database.
It should be noted that during the test roughly 20,000-25,000 queries per second were executed. When all queries are logged, the slow log grows at a rate of about 10MB/sec (roughly 400-500 bytes per logged query). This is the highest rate observed.
IO-bound case – 100 warehouses

Database size: 10 GiB (all throughput numbers are NOTPM)

# of connections | No logging | Logging >1 sec | Ratio >1s / none | Logging all | Ratio all / none
1                | 225 ± 9    | 211 ± 3        | 0.94             | 213 ± 9     | 0.95
20               | 767 ± 41   | 730 ± 35       | 0.95             | 751 ± 33    | 0.98
100              | 746 ± 54   | 731 ± 12       | 0.98             | 703 ± 36    | 0.94

In this case every test was run 5 times and the random measurement error was calculated. As seen from the table above, performance is almost independent of logging: the difference does not exceed the measurement error.

The query rate in this case is about 1000 per second.

Logging to /dev/null
It is interesting to know how much of the performance degradation is caused by the microslow patch itself rather than by disk writes. Let’s run the same tests, but log to /dev/null.
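
One way to set this up is to point the slow log at /dev/null; a minimal sketch, assuming the MySQL 5.0-era option names:

```ini
[mysqld]
# Slow-log entries are still formatted and written per query,
# but the kernel discards the writes.
log-slow-queries = /dev/null
# Log every query (sub-second/zero values require the microslow patch).
long_query_time  = 0
```

The server still does all the logging work for each query, so any remaining slowdown can be attributed to the logging code itself rather than to disk I/O.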

CPU-bound case – 10 warehouses, database size: 1.31 GiB (all throughput numbers are NOTPM)

# of connections | No logging | Logging all | Ratio all / none
1                | 9512       | 8943        | 0.94
20               | 27675      | 25869       | 0.93
100              | 11609      | 11236       | 0.97

From all the tests above, two conclusions can be made:

  1. It is safe to log queries whose execution time exceeds one second without worrying about the performance impact, even under a CPU-bound workload. Under an IO-bound workload the impact is negligibly small even if all queries are logged.
  2. In general, logging all queries can hurt MySQL, so consider the load before enabling it, especially in the CPU-bound case.


Comments (10)

  • Arjen Lentz

    So nothing new really, but good to have the numbers. Thanks!

    February 10, 2009 at 4:32 pm
  • Baron Schwartz


    Exactly — and as has been pointed out over and over, the minute you propose to people the means to figure out where their performance problems are, the first thing they do is anxiously ask “how much overhead does it have.” Look at the PostgreSQL blogging world’s take on this last week:

    I think this benchmark shows that a) in cases where you’re I/O bound, which is exactly the time when people worry about the impact of logging, it isn’t measurable, and b) it’s not that much overhead anyway.

    February 10, 2009 at 5:02 pm
  • peter

    Interesting, your pointer to the Cary Millsap blog.

    I think his book on Oracle performance optimization is one of the best books on performance tuning. It surely taught me a lot when… And a lot of the principles are general, be it Oracle or MySQL; just the tools (or lack of tools) are different.

    February 10, 2009 at 10:46 pm
  • Vladimir Rusinov

    What about logging queries like INSERT INTO binary_table VALUES (123, ‘%LARGE_BINARY_BLOB%’)?

    I’ve found some time ago that for PostgreSQL logging such queries takes more time (and uses a lot of CPU) than this insert executes.

    February 11, 2009 at 7:48 am
  • Robert Treat

    This whole topic is one of the reasons we’ve always been so excited about dtrace. I wonder, perhaps you guys should query the mysql developers to get a copy of their dtrace probes patch for 6.0, and put it into OurDelta now. Probes are pretty non-intrusive; we’ve done them for both Apache and Postgres on our own (though we worked with Sun to get the Postgres ones into core for Postgres 8.4).

    February 11, 2009 at 1:34 pm
  • peter


    Dtrace is cool. The thing is, though, very few of our customers run Solaris. I worked with Sun on Dtrace support for MySQL during my time there.

    February 11, 2009 at 5:36 pm
  • Rick James

    The whole question is a non-issue. Turn on the slowlog, and leave it on. Why do I say that?
    * If the system is performing well, then, by definition, the slowlog is not hurting.
    * If the system is performing poorly, then you need to look back at what was causing the most pain and fix it. And the slowlog is the best way to do that.

    We have hundreds of systems (thousands of servers in replication setups) in production. The default is to have the slowlog on, and to set long_query_time = 2. (Some systems use higher or lower values.)

    When there is a meltdown, the slowlog is sometimes the best source of ‘why’. The rest of the time, I find it useful to proactively look for naughty queries and propose tuning / index / schema / design changes.

    Anyway, thanks for confirming that the overhead is “in the noise”. (I assume you are talking only about the slowlog, not the binlog, nor the general log.)

    February 1, 2011 at 12:26 pm
  • Fadi El-Eter

    Slow query logging should be run on an as-needed basis, and reactively, not proactively. I’ve actually seen worse performance caused by the slow query log being turned on, especially for very large (but not necessarily slow) queries.

    It’s a good tool but it should only be used when needed, and that’s why it’s turned off by default.

    June 11, 2013 at 8:51 pm
  • Jouni Järvinen

    This post is over 6 years old; I bet MySQL and Percona have changed a lot since then, making these numbers obsolete.

    October 1, 2015 at 4:19 pm
  • Cheryl

    Is there a more recent look at audit logging overheads anywhere?

    February 1, 2017 at 1:25 am
