ProxySQL versus MaxScale for OLTP RO workloads

In this blog post, we’ll discuss ProxySQL versus MaxScale for OLTP RO workloads.

Continuing my series of READ-ONLY benchmarks, in this post I want to see how much overhead a proxy adds.

In my opinion, there are only two solid proxy software options for MySQL at the moment: ProxySQL and MaxScale. In the past, there was also MySQL Proxy, but it is pretty much dead now. Its replacement, MySQL Router, is still in the very early stages and seriously lacks the features that would let it compete with ProxySQL and MaxScale. This will most likely change in the future – when MySQL Router adds more features, I will re-evaluate them!

To test the proxies, I will start with a very simple setup to gauge basic performance characteristics. I will use a sysbench client and proxy running on the same box. Sysbench connects to the proxy via local socket (for minimal network and TCP overhead), and the proxy is connected to a remote MySQL via a 10Gb network. This way, the proxy and sysbench share the same server resources.

Other parameters:

  • CPU: 56 logical CPU threads, Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz
  • sysbench: ten tables x 10 mln rows, Pareto distribution
  • OS: Ubuntu 15.10 (Wily Werewolf)
  • MySQL 5.7
  • MaxScale version 1.4.1
  • ProxySQL version 1.2.0b
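The sysbench side of this setup can be sketched roughly as follows. This is only a sketch, assuming sysbench 0.5 with its bundled OLTP Lua script; the socket path, credentials and script location are my assumptions, not the exact commands used for these runs:

```shell
# Prepare ten tables of 10 mln rows each (sysbench 0.5; script path is an assumption)
sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua \
  --oltp-tables-count=10 --oltp-table-size=10000000 \
  --mysql-socket=/tmp/proxysql.sock \
  --mysql-user=sbtest --mysql-password=sbtest \
  prepare

# Read-only run through the proxy's local socket, Pareto distribution
sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua \
  --oltp-tables-count=10 --oltp-table-size=10000000 \
  --oltp-read-only=on --rand-type=pareto \
  --num-threads=64 --max-time=300 --max-requests=0 \
  --mysql-socket=/tmp/proxysql.sock \
  --mysql-user=sbtest --mysql-password=sbtest \
  run
```

Connecting over the local socket keeps the sysbench-to-proxy hop off the network, so the measured overhead is the proxy's processing, not extra TCP round trips.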

You can find more details about benchmarks, scripts and configs here:

An important parameter to consider is how much of the CPU resources you allocate to a proxy. Both ProxySQL and MaxScale allow you to configure how many threads they can use to process user requests and route queries. I’ve found that 16 threads for ProxySQL and 8 threads for MaxScale are optimal (I will also show results for MaxScale with 16 threads). Both proxies also allow you to set up simple load-balancing configurations, or to work in read-write splitting mode. In this case, I will use simple load balancing, since there are no read-write splitting requirements in a read-only workload.
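For reference, a sketch of where each proxy’s worker-thread count lives. The parameter names come from each product’s documuentation of that era; the values match this test, and the ProxySQL admin credentials and port are the defaults, so treat them as assumptions:

```shell
# MaxScale: thread count is set in /etc/maxscale.cnf, e.g.:
#   [maxscale]
#   threads=8

# ProxySQL: mysql-threads is set via the admin interface (default port 6032)
mysql -u admin -padmin -h 127.0.0.1 -P 6032 -e "
  SET mysql-threads = 16;
  SAVE MYSQL VARIABLES TO DISK;
"
# mysql-threads only takes effect after a ProxySQL restart
```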


First result: How does ProxySQL perform compared to vanilla MySQL 5.7?

As we can see, there is a noticeable drop in performance with ProxySQL. This is expected, as ProxySQL does extra work to process queries. What is good though is that ProxySQL scales with increasing user connections.

One of the tricks that ProxySQL has is a “fast-forward” mode, which minimizes overhead from processing (but as a drawback, you can’t use many of the other features). Out of curiosity, let’s see how the “fast-forward” mode performs:
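In ProxySQL, “fast forward” is a per-user flag in the mysql_users admin table. A sketch of enabling it for the benchmark user, assuming the default admin interface on port 6032 and a sysbench user named sbtest:

```shell
# Enable fast_forward for one user; the proxy then pipes that user's traffic
# to the backend with minimal processing (query rules, rewriting, etc. are bypassed)
mysql -u admin -padmin -h 127.0.0.1 -P 6032 -e "
  UPDATE mysql_users SET fast_forward=1 WHERE username='sbtest';
  LOAD MYSQL USERS TO RUNTIME;
  SAVE MYSQL USERS TO DISK;
"
```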


Now let’s see what happens with MaxScale. Before showing the next chart, let me note that it contains “error bars,” presented as vertical bars. An “error bar” shows one standard deviation: the longer the bar, the more variation was observed during the experiment. We want to see less variance, as it implies more stable performance.

Here are results for MaxScale versus ProxySQL:

We can see that with lower numbers of threads the two proxies perform similarly, but MaxScale has a harder time scaling past 100 threads. On average, MaxScale’s throughput is worse, and there is a lot of variation. In general, MaxScale demands more CPU resources and uses more CPU per request than ProxySQL. This holds true if we run MaxScale with 16 threads (instead of 8):

MaxScale with 16 threads does not handle the workload well, and there is a lot of variation along with some visible scalability issues.

To summarize, here is a chart with relative performance (vanilla MySQL 5.7 is shown as 1):

While this chart does show that MaxScale has less overhead at 1-6 threads, it doesn’t scale as the user load increases.


Comments (11)

  • Peter Zaitsev


    Just to be clear, you’re looking at Sysbench OLTP here, so connections are persistent? You also do not make either of the proxies use a connection pool, right?

    May 12, 2016 at 4:28 pm
    • Vadim Tkachenko


      That’s correct. All connections are established at the start of the benchmark, and there is no connection pool involved.

      May 12, 2016 at 6:43 pm
      • renecannao

        Thank you for the post!
        To be more specific, the connection pool feature is always enabled in ProxySQL, but for this specific type of workload, when a backend connection is put in the connection pool it is immediately taken back by (most probably) the same client that released it.

        May 13, 2016 at 10:18 am
  • svar

    Thanks Vadim,

    We have created a task to handle this issue.

    Would you be kind enough to replay such benchmarks once we provide improvements?

    May 13, 2016 at 7:13 am
  • Dipti

    Vadim (and Rene):

    Thanks for sharing your test results. A few points here:

    (1) MariaDB MaxScale guidance for the number of threads is to not exceed the number of physical cores –

    (2) MaxScale is a dynamic routing platform for OLTP workloads (read and write), and is not designed as a read-only caching proxy. We do hear requests for a caching layer, and have considered it as a future item for MaxScale.

    (3) MaxScale does not force users to use separate ports for R/W and RO workloads. While you can configure it to send the two loads to separate ports if you want to, you do not have to. If your application is configured to use a single port set up for R/W split and all your workload is RO, it works just like RO load balancing.

    (4) We are continuously making performance improvements and will have benchmark numbers from our own testing in the next release.

    May 16, 2016 at 4:58 pm
    • renecannao

      Hi Dipti.

      Thank you for your feedback.

      With regards to point #2, I think there is a misleading assumption about ProxySQL. As already pointed out in a comment, ProxySQL caches only resultsets for statements explicitly set to be cached. That means that, unless configured otherwise, it doesn’t perform any caching.
      The only benchmark I ran with caching enabled is dated October 2013.
      Since then, all the ProxySQL vs MaxScale benchmarks were performed without caching, including this one.

      May 17, 2016 at 4:14 am
  • Bogdan Rădulescu

    I really like that those error bars were graphed!

    May 17, 2016 at 10:19 am
  • Gleb Lesnikov

    What about ScaleArc or Vitess?

    August 23, 2016 at 9:40 am
  • Abhijeet Prabhune

    Hi Vadim — Do you know of any research benchmarks based on different networking options?

    October 17, 2016 at 2:17 pm
