EC2/EBS single and RAID volumes IO benchmark

During preparation of the Percona-XtraDB template to run in the RightScale environment, I noticed that IO performance on an EBS volume in the EC2 cloud is not quite perfect, so I spent some time benchmarking volumes. An interesting aspect of EBS volumes is that each one appears as a device in your OS, so you can easily build a software RAID from several volumes.

So I created 4 volumes (I used an m1.large instance) and made:

RAID0 on 2 volumes as:
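The original command was lost in the page conversion; based on the RAID5 command quoted in the comments below, the 2-volume stripe was most likely along these lines (device names and chunk size are assumptions):

```shell
# Hypothetical reconstruction: 2-volume RAID0 stripe with a 256KB chunk.
mdadm -C /dev/md0 --chunk=256 -n 2 -l 0 /dev/sdj /dev/sdk
```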

RAID0 on 4 volumes as:
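Again reconstructed from the command pattern quoted in the comments; the same stripe extended to four assumed devices:

```shell
# Hypothetical reconstruction: 4-volume RAID0 stripe.
mdadm -C /dev/md0 --chunk=256 -n 4 -l 0 /dev/sdj /dev/sdk /dev/sdl /dev/sdm
```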

RAID5 on 3 volumes as:
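This command survives in a comment below (which caught a level typo in the original, confirmed fixed by the author); with the level corrected to 5 it reads:

```shell
# 3-volume RAID5, as quoted in the comments (level corrected from 0 to 5).
mdadm -C /dev/md0 --chunk=256 -n 3 -l 5 /dev/sdj /dev/sdk /dev/sdl
```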

RAID10 on 4 volumes in two steps:
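The two-step build would have been a pair of RAID1 mirrors with a RAID0 stripe on top; a sketch with assumed device names:

```shell
# Hypothetical reconstruction.
# Step 1: build two RAID1 mirror pairs.
mdadm -C /dev/md0 -n 2 -l 1 /dev/sdj /dev/sdk
mdadm -C /dev/md1 -n 2 -l 1 /dev/sdl /dev/sdm
# Step 2: stripe (RAID0) across the two mirrors.
mdadm -C /dev/md2 --chunk=256 -n 2 -l 0 /dev/md0 /dev/md1
```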


And in Linux you can also create a tricky RAID10,f2 array (you can read what this is here),
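which mdadm can build directly with the "far, 2 copies" layout; a sketch with assumed devices:

```shell
# Hypothetical reconstruction: RAID10 with far layout, 2 copies (-p f2).
mdadm -C /dev/md0 --chunk=256 -n 4 -l 10 -p f2 /dev/sdj /dev/sdk /dev/sdl /dev/sdm
```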

I also tested IO on a single volume.

I used the xfs filesystem mounted with the noatime,nobarrier options,
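i.e. something along these lines (mount point assumed):

```shell
# Create and mount the filesystem on the md device (or a single EBS volume).
mkfs.xfs /dev/md0
mount -o noatime,nobarrier /dev/md0 /mnt/data
```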

and for the benchmark I ran sysbench fileio modes against a 16GB file with the following script:
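The script itself was lost; a hypothetical reconstruction using sysbench 0.4-era fileio syntax, where the mode list and 16KB block size come from the text but thread counts, run time, and the O_DIRECT flag are assumptions:

```shell
#!/bin/sh
# Hypothetical reconstruction of the benchmark loop.
for mode in seqrd seqwr rndrd rndwr rndrw; do
  for threads in 1 4 8 16; do
    sysbench --test=fileio --file-total-size=16G prepare
    sysbench --test=fileio --file-total-size=16G \
             --file-test-mode=$mode --file-block-size=16384 \
             --num-threads=$threads --max-time=180 --max-requests=0 \
             --file-extra-flags=direct run
    sysbench --test=fileio --file-total-size=16G cleanup
  done
done
```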

So the tested modes were: seqrd (sequential read), seqwr (sequential write), rndrd (random read), rndwr (random write), and rndrw (random read-write). sysbench uses a 16KB block size to emulate InnoDB's work with 16KB pages.

You can find the raw results in Google Docs, but let me show the most interesting results from my point of view. On the graphs I show requests per second (more is better) and the 95th percentile response time in ms (less is better).
[graph: random read]
[graph: random write]
[graph: random read-write]

What I see from the results is that if you are looking for IO performance in the EC2/EBS environment, it is definitely worth considering a RAID setup.
RAID5 shows no benefit compared with the others, and RAID10,f2 is worse than RAID10.
But RAID0 vs. RAID10 is your call. On a regular server I would never suggest RAID0 for a database, but with EBS I am not sure what guarantees Amazon gives here. I would expect that a redundant array already exists underneath the EBS volume, in which case it may not be worth adding more redundancy, but I am not certain about that.
For now I would consider RAID10 on 4–10 volumes.
And of course, to get the benefit of multi-threaded IO in MySQL you need to use XtraDB or MySQL 5.4.

However, there may be a small problem with backups over EBS. On a single EBS volume you can just take a snapshot, but with several volumes it may be tricky. In that case you may consider LVM snapshots or XtraBackup.


Comments (14)

  • Joel

    Has the recent outage on Amazon/EBS made you reconsider RAID0? Supposedly some volumes were completely lost. Others had reachability problems throughout the day. I assume using SW RAID1/10 could have saved some of these situations.

    August 6, 2009 at 12:00 am
  • morgan

    @Vadim – I noticed the strangest thing when working with locally attached EC2 disks a year ago… If you fill up the disks completely and then empty them, IO is much faster!

    The best explanation I had for this was that Amazon has a bitmap (in software) of what blocks you’ve seen, and they don’t want you to be able to recover the blocks from the previous person’s instance for the ones you haven’t. It kind of makes sense – but the problem was that the penalty was about 5x for a first write.

    The distribution of some of your results looks pretty wide. Do you think the same could be true for EBS as well? It might be a good test to try over the weekend.

    August 6, 2009 at 8:48 pm
  • Jean-Emmanuel Orfèvre

    RAID5 on 3 volumes as:
    mdadm -C /dev/md0 --chunk=256 -n 3 -l 0 /dev/sdj /dev/sdk /dev/sdl

    “-l 0” means “level 0”, how come RAID5?

    August 6, 2009 at 9:16 pm
  • Vadim

    Jean-Emmanuel Orfèvre,

    Sorry, typo. Fixed.

    August 6, 2009 at 9:22 pm
  • Mark Callaghan

    @Morgan – the first write to a disk block on EC2 has a performance penalty. I have not read the explanation, but Amazon mentions it at

    August 7, 2009 at 6:29 am
  • Thorsten von Eicken

    Very nice and interesting results. Something that looks fishy to me is that with 1 volume you seem to be getting the best performance with a single thread, but with two striped volumes you get the best performance with 16 threads. If one volume “can’t handle” more than one thread effectively, you should see the best performance on 2 volumes with 2 threads or perhaps 4, but not with 16. So something is going on here…

    Amazon is pretty clear about the fact that each EBS volume has redundancy built in and that adding additional redundancy is not particularly recommended (at that point other failure modes dominate anyway). Instead, it is recommended to spend the extra effort on doing frequent snapshots (which have their own set of performance effects).

    How many distinct instances did you fire up and run the tests, btw?

    August 13, 2009 at 12:52 am
  • Marc Slemko

    The issue we ran into trying to get performance out of EBS volumes with software RAID is that they are not all consistent over time. We were ending up with a slow volume, and which volume was slow might change over time. If you care about reads, then in theory a smart RAID1 setup with the right kernel could fix that by balancing reads based on performance, but if you just have a dumb round robin read setup then you are hamstrung by the slowest disk. If you care about writes, it is pretty tough no matter what you do.

    August 13, 2009 at 4:49 pm
  • Marlon

    “While some resources like CPU, memory and instance storage are dedicated to a particular instance, other resources like the network and the disk subsystem are shared among instances.”

    While I have no experience with EC2, I assume this causes quite a bit of unreliable I/O behavior.

    May 1, 2010 at 12:13 am
  • Dave Rose

    Were all your EBS volumes in the same availability zone?

    May 4, 2010 at 2:35 pm
  • Wes Shull

    @Dave Rose:

    EBS volumes can only be accessed by instances in the same availability zone, so the answer to your question is “yes”, by necessity.

    November 10, 2010 at 9:31 pm
  • Aaron Brown

    RAIDed EBS volumes can be snapshotted just as with non-RAIDed volumes. You simply have to either unmount the md volume or use XFS and freeze the volume briefly for the duration of the snapshot. ec2-consistent-snapshot from Alestic supports the XFS freeze/unfreeze methodology with a FLUSH TABLES WITH READ LOCK or MySQL shutdown.

    July 24, 2011 at 11:41 am
  • László Bácsi

    @Vadim, we were considering a RAID setup on EC2 and I found this post. I wanted to confirm your findings before doing something like this in production. I created a new large instance and a RAID0 array of 4 EBS volumes (4GB each). I used the same configuration as you did. I also attached a single EBS volume for reference. I used XFS on both and ran the same benchmark as you did.

    I was surprised to find *no performance improvement* whatsoever over the single EBS volume. All the results were almost the same except for rndwr with 8 threads for the 256Mb file (which showed a 1.6x improvement with raid0).

    Do you have any idea why this might be? I’ve uploaded my results to CloudApp: and

    October 7, 2011 at 3:41 am
  • Ozgur Akan


    @Vadim, I believe things must have improved on the AWS/EBS side, so as László posted, maybe there is not much performance difference any more between different RAID setups. Do you plan to repeat the test?

    Also it would be interesting to compare these with RDS.

    best wishes,

    December 20, 2011 at 9:50 am
  • Eduardo Oliveira

    Now that there is EBS with guaranteed IOPS, it would be nice to see a benchmark of those too.

    August 5, 2012 at 3:01 am

