Benchmarking single-row insert performance on Amazon EC2

I have been benchmarking insert performance on Amazon EC2 for a customer, and I have some interesting results to share. I used iiBench, a simple and effective tool developed by Tokutek. Though the “1 billion row insert challenge” for which this tool was originally built is long over, it still serves well for benchmarking purposes.

OK, let’s start off with the configuration details.

Configuration

First of all let me describe the EC2 instance type that I used.

EC2 Configuration

I chose the m2.4xlarge instance type as it offers the highest amount of memory available, and memory is what really matters here.

As for the IO configuration, I chose 8 x 200G EBS volumes in software RAID 10.
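The post does not give the exact commands used to build the array; a typical way to assemble such a setup on EC2 would look roughly like the following (device names, filesystem choice, and mount options are assumptions, not taken from the post):

```shell
# Hypothetical sketch: software RAID 10 over 8 EBS volumes.
# Device names /dev/xvdf through /dev/xvdm are assumed.
mdadm --create /dev/md0 --level=10 --raid-devices=8 /dev/xvd[f-m]

# Filesystem and mount options are likewise assumptions.
mkfs.xfs /dev/md0
mount -o noatime /dev/md0 /var/lib/mysql
```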

Now let’s come to the MySQL configuration.

MySQL Configuration

I used Percona Server 5.5.22-55 for the tests. Following is the configuration that I used:
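A minimal my.cnf sketch reflecting the settings discussed below; any value not explicitly mentioned in the text is an assumption:

```ini
[mysqld]
# From the post: 55G buffer pool split into 4 instances
innodb_buffer_pool_size         = 55G
innodb_buffer_pool_instances    = 4
# Percona Server-only adaptive flushing method discussed in the post
innodb_adaptive_flushing_method = estimate
# Query cache disabled to avoid contention on a write-heavy workload
query_cache_type = 0
query_cache_size = 0
# Assumption: the comments on this post imply the log was not
# flushed to disk on every commit
innodb_flush_log_at_trx_commit  = 2
```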

You can see that the buffer pool is sized at 55G and that I am using 4 buffer pool instances to reduce the contention caused by the buffer pool mutexes. Another important setting is the “estimate” flushing method, available only in Percona Server, which reduces the impact of traditional InnoDB log flushing and the downward spikes in performance it can cause. Other than that, I have disabled the query cache to avoid the contention it causes on a write-heavy workload.

OK, so that was all about the configuration of the EC2 instance and MySQL.

As far as the benchmark itself is concerned, I made no code changes to iiBench and used the version available here. However, I changed the table to use range partitioning, defining a partitioning scheme such that every partition holds 100 million rows.

Table Structure

The table structure of the table with no secondary indexes is as follows:
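A sketch of the schema, based on the standard iiBench table plus the range partitioning described above; the column types and partition names are assumptions:

```sql
CREATE TABLE purchases_noindex (
  transactionid  BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  dateandtime    DATETIME,
  cashregisterid INT NOT NULL,
  customerid     INT NOT NULL,
  productid      INT NOT NULL,
  price          FLOAT NOT NULL,
  PRIMARY KEY (transactionid)
) ENGINE=InnoDB
PARTITION BY RANGE (transactionid) (
  PARTITION p0 VALUES LESS THAN (100000000),
  PARTITION p1 VALUES LESS THAN (200000000),
  -- ... one partition per 100 million rows ...
  PARTITION p9 VALUES LESS THAN (1000000000),
  PARTITION pmax VALUES LESS THAN MAXVALUE
);
```

Partitioning on the auto-incremented transactionid keeps each partition's B+tree small, since only the newest partition receives inserts.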

While the structure of the table with secondary indexes is as follows:
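purchases_index is the same table with three secondary indexes added. The index definitions below follow the standard iiBench schema and are assumptions here, not taken from the post:

```sql
CREATE TABLE purchases_index (
  transactionid  BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  dateandtime    DATETIME,
  cashregisterid INT NOT NULL,
  customerid     INT NOT NULL,
  productid      INT NOT NULL,
  price          FLOAT NOT NULL,
  PRIMARY KEY (transactionid),
  KEY marketsegment   (price, customerid),
  KEY registersegment (cashregisterid, price, customerid),
  KEY pdc             (price, dateandtime, customerid)
) ENGINE=InnoDB
PARTITION BY RANGE (transactionid) (
  PARTITION p0 VALUES LESS THAN (100000000),
  -- ... one partition per 100 million rows ...
  PARTITION pmax VALUES LESS THAN MAXVALUE
);
```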

I ran 5 instances of iiBench simultaneously to simulate 5 concurrent connections writing to the table, with each instance writing 200 million single-row inserts, for a total of 1 billion rows. I ran the test both against the table purchases_noindex, which has only a primary key and no secondary indexes, and against the table purchases_index, which has 3 secondary indexes. Another thing worth sharing: the final size of the table without secondary indexes is 56G, while the size of the table with secondary indexes is 181G.

Now let’s come down to the interesting part.

Results

With the table purchases_noindex, which has no secondary indexes, I was able to achieve an average insert rate of ~25k INSERTs per second, while with the table purchases_index the average insert rate dropped to ~9k INSERTs per second. Let’s take a look at the graphs to get a better view of the whole picture.

Note that in the above graph we have “millions of rows” on the x-axis and “INSERTs per second” on the y-axis.
I chose to show “millions of rows” on the x-axis so that we can see the impact of data-set growth on the insert rate.

We can see that adding the secondary indexes to the table decreased the insert rate by roughly 3x, and made it far less consistent. With the table having no secondary indexes, the insert rate is pretty much constant, remaining between ~25k and ~26k INSERTs per second. With the table having secondary indexes, on the other hand, there are regular spikes in the insert rate, and the variation is large: the rate swings between ~6.5k and ~12.5k INSERTs per second, with noticeable spikes after every 100 million rows inserted.

I noticed that the drops in insert rate were mainly caused by IO pressure from increased flushing and checkpointing activity, which produced spikes in write activity large enough to depress the insert rate.

Conclusion

As we all know, there are pros and cons to using secondary indexes. Secondary indexes improve read performance, but they hurt write performance. Most applications rely on read performance, and for them having secondary indexes is an obvious choice. But for applications that are write-mostly or rely heavily on write performance, reducing the number of secondary indexes, or even doing away with them entirely, can increase write throughput by 2x to 3x. In this particular case, since I was mostly concerned with write performance, I chose a table structure with no secondary indexes. Other important things to consider when write performance matters are: using partitioning to reduce the size of the B+tree, using multiple buffer pool instances to reduce contention on the buffer pool mutexes, using the “estimate” checkpoint method to reduce the chance of log flush storms, and disabling the query cache.

Comments (18)

  • Justin Swanhart

    Examining the insert buffer during the insertions on purchases_index should be interesting. You are seeing periodic drops in performance which might be the result of the insert buffer filling up and then decreasing performance until some space is freed again. XtraDB lets you accelerate the rate at which insert buffer pages are flushed to disk (innodb_ibuf_accel_rate), which might be interesting to tweak during your test. Mark Callaghan over at Facebook has seen big performance differences in insertion rate when increased insert buffer flushing was enabled (using the FB patch, not XtraDB, but the difference should be similar).

    May 16, 2012 at 12:24 pm
  • Andy

    This benchmark doesn’t test the IO performance of EBS though as you’re not flushing to disks on each commit.

    What numbers do you get when you set innodb_flush_log_at_trx_commit to 1?

    May 16, 2012 at 3:25 pm
  • Mark Callaghan

    Partitioning probably reduces the stress on the insert buffer. Does XtraDB/Percona Server have an option to use more IO for insert buffer merges when the ibuf gets too big?

    May 16, 2012 at 4:03 pm
  • Tim Callaghan

    Interesting results, and well presented. Would you mind sharing the command line you used for your iiBench clients? Also, did you run a test with innodb_flush_log_at_trx_commit = 1?

    May 16, 2012 at 6:50 pm
  • Bradley C Kuszmaul

    I think I can explain the performance you are seeing.

    But first I’d like to note that this experiment isn’t really running the iiBench problem. The point of iiBench problem is to measure the cost of index maintenance. This experiment isn’t measuring the cost of index maintenance. Transaction-id is auto-incremented, and so partitioning on transaction-id makes the insertions easy. Basically you fill in 100 million rows, then start a new partition and fill that in. During the time that you are modifying a partition it fits in main memory (and would have fit even with only a few GB of main memory). The cost of querying this database will be 10 times higher with this partitioning, however since you’ll have to look in each of the ten partitions to query one of the indexes. If you keep going, making the problem be 10B rows, the cost of querying will go up by another factor of ten (it will be 100 disk seeks to answer a random query, assuming you can get each index query down to one disk seek.) I claim that this schema is not properly indexed.

    Here’s a theory that seems to explain the performance you are seeing. The periodic performance variation corresponds to filling up a partition and starting a new one. (If I read your graph correctly, you didn’t graph the performance on the first 100M rows). The minimum performance troughs occur just as the partition switches. So performance gets worse and worse as a partition gets bigger and bigger, then we start a new partition, and Inno starts catching up and the performance starts getting better, and then the new partition starts getting too big, so Inno starts slowing down again.

    The factor of 4 speedup that occurs when you get rid of the secondary keys is simply because you are writing 4 times fewer B-tree values. The primary key and each secondary key each incur nearly the same cost, and the performance difference is basically that factor of 4.

    The variance is lower with only the primary key because the entire table fits in main memory. Each row is on the order of 30 bytes in the primary table, and so a billion rows is about 30GB. Since we are inserting on an auto-increment key, I would expect Inno to fill the B-tree pretty efficiently. With a 55GB buffer pool, we would expect the primary table to simply fit. But with all 4 indexes the data itself is 2 or 3 times bigger, and the B-tree nodes are filled perhaps only 3/4 full, making the database a total of, say, 4 times bigger. So around 300M or 400M rows, I would expect Inno starts hitting disk. Perhaps you could verify that by watching IOstat. Tim’s measurements (http://www.tokutek.com/2012/01/1-billion-insertions-%E2%80%93-the-wait-is-over/) on a newer version of iiBench showed that with 16GB of RAM, Inno hits the memory wall at about 100M rows, which matches my prediction that at 400M rows and 55GB RAM, Inno would hit a memory wall.

    Given that Memory sizes are perhaps 10 times bigger than they were 4 years ago when the iiBench project started, today the challenge would be to index 10B rows, not 1B rows. And it’s still a good challenge.

    The URL for iibench is slightly incorrect. The correct URL for version you used is http://tokutek.com/downloads/iiBench-1.0.3.1.tar.gz
    One of the advantages of the newer version of iiBench (the python version that Mark wrote http://bazaar.launchpad.net/~mdcallag/mysql-patch/mytools/annotate/head%3A/bench/ibench/iibench.py) is that it measures query performance too, so if you fail to actually index the data, the benchmark will notice.

    May 16, 2012 at 10:37 pm