Benchmarking Galera replication overhead
Vadim Tkachenko
Whenever I mention Galera replication, as in my previous post on this topic, the most popular question is how it affects performance.
Of course some performance overhead is to be expected: Galera replication adds a network round trip and a certification process. But how big is it? In this post I present some data from my benchmarks.
For the tests I use
tpcc-mysql with a dataset of 600 warehouses (~60GB) and a 52GB buffer pool. The workload runs under 48 user connections.
- 1st node: HP ProLiant DL380 G6
- 2nd node: Dell PowerEdge R815
- Both nodes use Fusion-io cards as storage to minimize IO overhead
Software: Percona Server 5.5.15, both regular and with Galera replication.
During the tests I measure throughput every 10 sec, which makes it possible to observe the stability of the results. As the final result I take the median (the value that divides the top 50% of measurements from the bottom 50%).
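As a sketch of that procedure, the median of the 10-second throughput samples can be taken like this (the sample numbers below are made up for illustration; a real 1800-sec run produces ~180 samples):

```python
from statistics import median

# Hypothetical throughput samples, one per 10-second interval (NOT/10sec)
samples = [10650, 10822, 10790, 10513, 10901, 10744]

# The median divides the top 50% of measurements from the bottom 50%,
# which makes it robust against occasional stalls or spikes
final_result = median(samples)
print(final_result)  # 10767.0
```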
In the text I use the following code names:
- single – the result on a single node, without any replication. This establishes the baseline
- wsrep – the result on a single node with wsrep_provider enabled. wsrep stands for Write Set REPlication, the mechanism Galera uses
- wsrep 2 nodes – the result with 2 nodes in a Galera cluster
- replication – the result using 2 nodes under regular MySQL replication
- semisync – the result using 2 nodes with semi-synchronous MySQL replication
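For reference, a minimal sketch of the my.cnf settings that enable the wsrep provider on a node; the paths and addresses are placeholders, not the ones from this benchmark:

```ini
[mysqld]
# Load the Galera library; the path depends on the installation
wsrep_provider=/usr/lib64/libgalera_smm.so
# Cluster members; an empty gcomm:// bootstraps a new cluster
wsrep_cluster_address=gcomm://192.168.0.2
wsrep_cluster_name=benchmark_cluster
# Galera requires row-based binlog events and InnoDB
binlog_format=ROW
default_storage_engine=InnoDB
```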
And now on to the results.
That is, we get 12432 vs 10821 NOT (New Order Transactions) per 10 sec.
I think the main overhead may come from writing the write sets to disk; Galera 1.0 stores replication events on disk.
The result drops a bit further, as network communication adds its own overhead.
We have 10384 for 2 nodes vs 10821 for 1 node.
The drop is only about 4.2%.
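Putting the numbers so far together (throughput figures taken from the runs above), the relative drops work out roughly as follows; depending on which value you use as the base, the second-node cost comes out around 4%:

```python
# Median throughput in NOT/10sec from the runs above
single = 12432        # single node, no replication
wsrep_1node = 10821   # single node, wsrep_provider enabled
wsrep_2nodes = 10384  # two-node Galera cluster

def drop_pct(base, value):
    """Percentage drop of `value` relative to `base`."""
    return (base - value) / base * 100

# Cost of enabling wsrep on one node: ~13%
print(round(drop_pct(single, wsrep_1node), 1))        # 13.0
# Additional cost of the second node: ~4%
print(round(drop_pct(wsrep_1node, wsrep_2nodes), 1))  # 4.0
```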
So we see that regular replication gives a better result:
11180 for MySQL replication vs 10384 for 2 Galera nodes.
However, there are two things to consider:
- You can see fairly periodic drops for MySQL replication; I think they are related to binary log rotation
- And, second, a much more serious problem: after an 1800-sec run, the slave in regular MySQL replication was 1000 sec behind the master. You can calculate how many transactions the slave is missing. For Galera replication this is NOT a problem.
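As a rough back-of-the-envelope estimate (assuming the master sustains the 11180 NOT/10sec measured above for the whole run, and counting only New Order transactions), the 1000-sec lag translates into roughly a million transactions the slave has not yet applied:

```python
# Master throughput from the replication run, in NOT per 10 seconds
master_not_per_10sec = 11180
slave_lag_sec = 1000  # how far the slave trailed after the 1800-sec run

# Transactions committed on the master during the interval the slave
# still has to catch up on (assumes steady throughput)
missing = master_not_per_10sec * slave_lag_sec // 10
print(missing)  # 1118000
```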
4. Now it is interesting to see how semi-sync replication does under this workload.
For semi-sync replication we get 6439 NOT/10sec, which is significantly slower than any other run.
And even at that rate, the slave was still 300 sec behind the master after the 1800-sec run.
Personally I consider the Galera results very good, taking into account that the second node does not fall behind and stays consistent with the first node.
As a follow-up, it will be interesting to see what overhead we get in a 3-node setup,
and also what total throughput we can reach if we put load on ALL nodes, not only on a single one.