Amazon Aurora in sysbench benchmarks

Vadim Tkachenko
In my previous post Amazon Aurora – Looking Deeper, I promised benchmark results on Amazon Aurora.
There are already some results available from Amazon itself: https://d0.awsstatic.com/product-marketing/Aurora/RDS_Aurora_Performance_Assessment_Benchmarking_v1-2.pdf.
There are also some from Marco Tusa: http://www.tusacentral.net/joomla/index.php/mysql-blogs/175-aws-aurora-benchmarking-blast-or-splash.html.
Amazon used quite a small dataset in their benchmark: 250 tables with 25000 rows each, which by my calculation corresponds to about 4.5GB of data. For this datasize, Amazon used r3.8xlarge instances, which provide 32 virtual CPUs and 244GB of memory. So I can’t say their benchmark is particularly illustrative, as all the data fits comfortably into the available memory.
In my benchmark, I wanted to try different datasizes, and also compare Amazon Aurora with Percona Server 5.6 in identical cloud instances.
You can find my full report here: http://benchmark-docs.readthedocs.org/en/latest/benchmarks/aurora-sysbench-201511.html
Below is a short description of the benchmark:
- Initial dataset. 32 sysbench tables with 50 million (mln) rows each, which corresponds to about 400GB of data.
- Testing sizes. For this benchmark, we vary the maximum number of rows sysbench touches: 1mln, 2.5mln, 5mln, 10mln, 25mln and 50mln.
In the charts, the results are labeled in thousands of rows: 1000, 2500, 5000, 10000, 25000, 50000. In other words, “1000” corresponds to 1mln rows.
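The setup above can be sketched roughly as follows. The exact sysbench flags (I assume sysbench 0.5 with the oltp.lua test here) are my assumption; the linked report contains the actual command lines used.

```python
# Sketch of the benchmark setup. The sysbench flags below (sysbench 0.5
# with oltp.lua) are an assumption; see the linked report for the exact
# command lines.

TABLES = 32
FULL_SIZE = 50_000_000  # 50 mln rows per table, ~400GB total

# Rows actually touched in each run; chart labels are these values / 1000.
TEST_SIZES = [1_000_000, 2_500_000, 5_000_000, 10_000_000, 25_000_000, 50_000_000]

def run_command(table_size):
    """Build a sysbench run command limiting the row range used per table."""
    return (
        "sysbench --test=oltp.lua"
        f" --oltp-tables-count={TABLES}"
        f" --oltp-table-size={table_size}"
        " --num-threads=64 run"  # thread count is illustrative
    )

chart_labels = [size // 1000 for size in TEST_SIZES]
print(chart_labels)  # [1000, 2500, 5000, 10000, 25000, 50000]
```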
It is actually quite difficult to find a configuration equal in both performance and price for comparing Percona Server running on an EC2 instance with Amazon Aurora. For Amazon Aurora:
- db.r3.xlarge instance (4 virtual CPUs + 30GB memory)
- Monthly computing cost (1-YEAR TERM, No Upfront): $277.40
- Monthly storage cost: $0.100 per GB-month * 400 GB = $40
- Extra $0.200 per 1 million IO requests
Total cost (per month, excluding the per-IO request charges): $317.40
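As a quick sanity check, the Aurora monthly cost sums up as follows (per-IO charges are excluded, since they depend entirely on the workload):

```python
# Monthly Amazon Aurora cost, excluding the $0.200 per 1 mln IO requests charge.
compute = 277.40        # db.r3.xlarge, 1-year term, no upfront
storage = 0.100 * 400   # $0.100 per GB-month * 400GB
total = compute + storage
print(f"${total:.2f}")  # $317.40
```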
For Percona Server on EC2:
- r3.xlarge instance (4 virtual CPUs + 30GB memory)
- Monthly computing cost (1-YEAR TERM, No Upfront): $160.60
For the storage we will use 3 options:
- General purpose SSD volume (marked as “ps” in charts), 500GB, 1500 base / 3000 burst IOPS, cost: $0.10 per GB-month * 500 = $50
- Provisioned IOPS SSD volume (marked as “ps-io3000”), 500GB, 3000 IOPS: $0.125 per GB-month * 500 + $0.065 per provisioned IOPS-month * 3000 = $62.50 + $195 = $257.50
- Provisioned IOPS SSD volume (marked as “ps-io2000”), 500GB, 2000 IOPS: $0.125 per GB-month * 500 + $0.065 per provisioned IOPS-month * 2000 = $62.50 + $130 = $192.50
So the corresponding total monthly costs for the EC2 instance with these storage options are: $210.60 (general purpose), $418.10 (io3000) and $353.10 (io2000).
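The EC2-side totals above can be recomputed from the listed prices:

```python
# Monthly EC2 cost for each storage option (Percona Server side).
compute = 160.60  # r3.xlarge, 1-year term, no upfront

storage = {
    "gp-ssd":    0.10 * 500,                  # general purpose SSD, 500GB
    "ps-io3000": 0.125 * 500 + 0.065 * 3000,  # provisioned 3000 IOPS
    "ps-io2000": 0.125 * 500 + 0.065 * 2000,  # provisioned 2000 IOPS
}

totals = {name: round(compute + cost, 2) for name, cost in storage.items()}
print(totals)  # {'gp-ssd': 210.6, 'ps-io3000': 418.1, 'ps-io2000': 353.1}
```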
More graphs, including timelines, are available at http://benchmark-docs.readthedocs.org/en/latest/benchmarks/aurora-sysbench-201511.html
Summary results for Amazon Aurora vs. Percona Server across different datasizes:
There are a few important points to highlight:
- Even in long runs (2 hours), I didn’t see fluctuations in the results. The throughput is stable.
- I actually made one 48-hour run; there were still no fluctuations.
- For Percona Server, as expected, better storage gives better throughput. With 3000 IOPS it is better than Amazon Aurora, especially in IO-heavy cases.
- Amazon Aurora shows worse results with smaller datasizes, but outperforms Percona Server (with general purpose SSD and provisioned 2000 IOPS volumes) on big datasizes.
- It appears that Amazon Aurora does not benefit from extra memory: the throughput does not grow much with small datasizes. I think this supports my assumption that Aurora uses some kind of write-through cache, which shows better results in IO-heavy workloads.
- Provisioned IOPS volumes indeed give much better performance than the general purpose volume, though they are more expensive.
- From a cost perspective, 3000 IOPS is more cost efficient than 2000 IOPS for this particular case (your workload might differ), in the sense that it delivers more throughput per dollar.
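The “throughput per dollar” comparison can be expressed as a simple ratio. The tps numbers below are purely hypothetical placeholders, not measured results; the real numbers are in the linked report.

```python
# Throughput per dollar for a storage option. The tps values used below are
# HYPOTHETICAL placeholders, not measured results from this benchmark.
def tps_per_dollar(tps, monthly_cost):
    """Transactions per second delivered per dollar of monthly cost."""
    return tps / monthly_cost

options = {
    "ps-io3000": (1000.0, 418.10),  # (hypothetical tps, monthly cost in $)
    "ps-io2000": (600.0, 353.10),
}

for name, (tps, cost) in options.items():
    print(f"{name}: {tps_per_dollar(tps, cost):.2f} tps/$")
```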