November 21, 2014

RAID throughput on FusionIO

Along with the maximum possible fsyncs/sec, it is interesting to see how different software RAID modes affect throughput on FusionIO cards.

In short, the RAID10 modes really disappointed me; the detailed numbers follow.

To get the numbers I ran a test with 16KB page size, random reads and writes, 1 and 16 threads, in O_DIRECT mode.

The FusionIO cards are the same as in the previous experiment, and I am running XFS with the nobarrier mount option.

The OS is CentOS 5.3 with the 2.6.18-128.1.10.el5 kernel.

For RAID modes I use:

  • single card (for a baseline)
  • RAID0 over 2 FusionIO cards
  • RAID1 over 2 FusionIO cards
  • RAID1 over 2 RAID0 partitions (4 cards in total)
  • RAID0 over 2 RAID1 partitions (4 cards in total)
  • special RAID10 mode with n2 layout

The last mode you can get by creating the RAID as follows:
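The exact command is not preserved here; a minimal sketch of creating an md RAID10 array with the n2 (near-2) layout and a 64KB chunk would look like this (the /dev/fio* device names are assumptions):

```shell
# Sketch: md RAID10 with the n2 (near-2) layout over 4 devices,
# 64KB chunk size. Device names /dev/fioa..fiod are placeholders.
mdadm --create /dev/md0 --level=10 --layout=n2 --chunk=64 \
      --raid-devices=4 /dev/fioa /dev/fiob /dev/fioc /dev/fiod
```

The n2 layout stores two near copies of each chunk, which is what lets random reads be served from either mirror.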

In this case, for all modes I use a 64KB chunk size (different chunk sizes are also an interesting question).

Here is a graph for the 16-thread runs; the raw results are below.

As expected, RAID1 over 2 disks shows a hit on write throughput compared to a single disk,
but the RAID10 modes over 4 disks surprised me, showing almost a 2x drop.

Only in the RAID10 n2 mode do random reads skyrocket, while writes are equal to a single disk.

This makes me question whether RAID1 mode is really usable, and how it performs
on regular hard drives or SSDs.

The performance drop in RAID settings is unexpected. I am working with Fusion-io engineers to figure out the issue.
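One plausible contributing factor for the RAID1 write hit (a toy model, not a measurement, and also raised in the comments below): a mirrored write completes only when the slower of the two devices finishes, so the expected completion time is the mean of the maximum of two latencies. For i.i.d. uniform latencies that mean rises from 1/2 to 2/3:

```shell
# Toy model: mean write completion time, single device vs RAID1 mirror.
# Latencies drawn i.i.d. uniform(0,1); the mirror waits for the slower device.
awk 'BEGIN {
  srand(1); n = 200000; single = 0; mirror = 0
  for (i = 0; i < n; i++) {
    a = rand(); b = rand()
    single += a                 # one device: latency a
    mirror += (a > b ? a : b)   # mirror: max(a, b), the slower of the two
  }
  printf "single mean %.3f, raid1 mean %.3f\n", single/n, mirror/n
}'
```

The printed means come out near 0.5 and 0.667; the gap only matters if per-write latency variance on the cards is significant.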

The next experiment I am going to look into is different page sizes.

Raw results (in requests / seconds, more is better):

single disk
raid0 2 disks
raid1 2 disks
raid1 over raid0 4 disks
raid0 over raid1 4 disks
raid10 n2

Script for reference:
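The original script is not preserved here. One common way to drive this kind of test is sysbench's fileio mode; the sketch below matches the parameters described above (16KB block size, random reads and writes, O_DIRECT, 1 and 16 threads), but the file size, file count, and run time are assumptions:

```shell
# Sketch of a sysbench fileio run (classic sysbench 0.4-style options).
# File size/count and durations are placeholders, not the original values.
sysbench --test=fileio --file-total-size=16G --file-num=64 prepare

for threads in 1 16; do
  for mode in rndrd rndwr; do          # random reads, then random writes
    sysbench --test=fileio --file-total-size=16G \
             --file-test-mode=$mode --file-block-size=16K \
             --file-extra-flags=direct \
             --num-threads=$threads \
             --max-time=180 --max-requests=0 run
  done
done

sysbench --test=fileio --file-total-size=16G cleanup
```

`--file-extra-flags=direct` is what opens the test files with O_DIRECT, bypassing the page cache.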

About Vadim Tkachenko

Vadim leads Percona's development group, which produces Percona Cloud Tools, Percona Server, Percona XtraDB Cluster and Percona XtraBackup. He is an expert in solid-state storage, and has helped many hardware and software providers succeed in the MySQL market.


  1. Venu says:

    The numbers are really good. Can you post the spec for the IO card?

  2. RAID 10,f2 is generally considered to be the overall fastest Linux RAID10, so you might want to try that. Also, as of mdadm 3.1.1, default chunk size is 512K rather than 64K, so maybe that’s worth trying.

  3. Vadim says:


    It was as in the previous post:
    two dual Fusion-io ioDrive Duo 320GB cards,
    visible as 4 cards to the OS.

  4. peter says:


    Why do you expect RAID1 to be slower than a single disk for writes? Normally a 2-disk RAID1 should be twice the speed for random reads and about the same speed for random writes. You have to do 2 writes per write, but there are 2 devices to do them in parallel.

  5. An answer for why writes to RAID1 are slower than a single drive: for RAID1, the time for the write to complete is dependent on the slowest of the two devices at doing the write. For hard drives, this is easier to think about because of rotational delay and servo motors. With a single drive, sometimes you get lucky and the head is near where the write needs to happen. With RAID1, now you have two drives that need to have their heads conveniently positioned before you can get lucky. I assume there’s a significant variance in the amount of time a FusionIO device can take to complete a write.

  6. hubbert says:

    I understand Fusion IO can be configured in a “basic/full-capacity” mode and some sort of “performance/diminished-capacity” mode. Were these benchmarks done under basic or performance mode? (Apologies if I am using the wrong terms.)

  7. hubbert,

    I did not do “reserved capacity” mode in this test.
