fsyncs on software raid on FusionIO

As soon as we got a couple of FusionIO cards, the question came up of how to join them into a single space for the database. FusionIO does not provide any mirroring/striping solution and relies entirely on OS tools for that.

So for Linux that means software RAID or LVM. To follow up on my post
How many fsync / sec FusionIO can handle, I checked what overhead we can expect from these additional layers over the FusionIO card.
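The actual test is the one from the previous post. Purely to illustrate what it measures, here is a minimal shell sketch of a per-write fsync benchmark; the helper name, file path, and write size are my own choices, and dd's per-process startup cost makes this a lower bound rather than a faithful reproduction of the real tool:

```shell
# Sketch of a per-write fsync micro-benchmark (NOT the tool from the
# original post): dd's conv=fsync issues an fsync() on the output file
# before dd exits, so each iteration costs one small write plus one fsync.
fsync_rate() {
    file=$1
    n=$2
    start=$(date +%s.%N)
    i=0
    while [ "$i" -lt "$n" ]; do
        dd if=/dev/zero of="$file" bs=512 count=1 conv=fsync 2>/dev/null
        i=$((i + 1))
    done
    end=$(date +%s.%N)
    # Requests per second; dd process startup overhead means this
    # understates what the card itself can do.
    awk -v n="$n" -v s="$start" -v e="$end" 'BEGIN { printf "%.2f\n", n / (e - s) }'
}

# Example: 200 synchronous writes to a file on the mount under test
fsync_rate ./fsync_testfile 200
```

Running this against files on each md device gives a quick sanity check before reaching for a heavier benchmark.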

The card I used is the Duo card: physically it is two cards on a single board, and the OS sees it as two cards.

For some reason I was not able to set up LVM on the cards, so I finished tests only for software RAID0 and RAID1.

I used an XFS filesystem mounted with the “-o nobarrier” option, and I ran the test from the previous post on the following configurations:

  • Single card
  • RAID0 over two cards
  • RAID1 over two cards
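The RAID devices themselves can be assembled with mdadm. A sketch of the setup, assuming the two halves of the card show up as /dev/fioa and /dev/fiob (the device names and mount point are assumptions, not taken from the original test):

```shell
# RAID0 (striping) across the two FusionIO devices
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/fioa /dev/fiob

# ...or RAID1 (mirroring) instead:
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/fioa /dev/fiob

# XFS on top, mounted with barriers disabled, as in the test
mkfs.xfs /dev/md0
mount -o nobarrier /dev/md0 /mnt/fio
```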

Here is what I got:

  • Single card: 14050.59 req/sec
  • RAID0: 13039.00 req/sec
  • RAID1: 324.71 req/sec

With a single card I got much better results than in my previous test; probably
the Duo card has better characteristics.

RAID0 shows some overhead, about 7%, but that is acceptable.

And something is terribly wrong with RAID1: I am getting a 40x drop in the number of fsyncs.
So it seems we can’t use InnoDB transaction log files with

on software RAID1 over FusionIO.

For reference, I used the command:

The next step is to check how the RAID setup affects I/O throughput.



Comments (6)

  • Mrten

    Is that 40x RAID1 drop worth a posting on the linux kernel list perhaps?

    March 24, 2010 at 5:33 am
  • Edgard Pineda

    It is very important to mention which version of the kernel you used for this analysis.

    For example, see this article: http://www.phoronix.com/scan.php?page=article&item=linux_2624_2633&num=1 “Linux 2.6.24 Through Linux 2.6.33 Benchmarks”. There are remarkable performance variations between different Linux kernel versions in several areas, and maybe in software RAID too.

    March 24, 2010 at 5:58 pm
  • Vadim


    It was 2.6.18-128.1.10.el5 on CentOS release 5.3 (Final).

    As RedHat uses a strange kernel versioning scheme, it is hard to say which vanilla kernel it corresponds to.

    March 24, 2010 at 6:37 pm
  • Marki

    Was alignment at all levels (partition, md superblock) correctly set? In case of misalignment, the card has to do much more work on each request…

    March 25, 2010 at 3:46 am
  • Vadim


    I did not try that; do you have a good reference for details?

    March 25, 2010 at 7:54 am
  • Marki

    I don’t have personal experience with that; I have just read a few articles about it (and a recent discussion on the lvm mailing list).
    Check for example http://thunk.org/tytso/blog/2009/02/20/aligning-filesystems-to-an-ssds-erase-block-size/
    and other articles that Google finds for: ssd md align

    March 30, 2010 at 2:22 am
