November 29, 2014

fsyncs on software raid on FusionIO

As soon as we get a couple of FusionIO cards, the question arises of how to join them into a single volume for the database. FusionIO does not provide any mirroring/striping solution of its own and relies entirely on OS tools for that.

On Linux that means software RAID and LVM. I tried to follow up on my post
How many fsync / sec FusionIO can handle, and check what overhead we can expect from the additional layers on top of the FusionIO card.
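The original benchmark tool is not reproduced here; as a rough sketch of what such a test does (in Python, with an arbitrary 512-byte record and iteration count of my choosing, not the original tool), it writes a small record and calls fsync after every write, the same pattern a transaction log produces:

```python
import os
import tempfile
import time

def fsync_per_sec(path, iterations=1000):
    """Write a small record and fsync after every write,
    mimicking a transaction-log workload; return requests/sec."""
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o600)
    buf = b"\0" * 512
    start = time.time()
    for _ in range(iterations):
        os.write(fd, buf)
        os.fsync(fd)  # force the write down to stable storage
    elapsed = time.time() - start
    os.close(fd)
    return iterations / elapsed

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile() as f:
        print("%.2f req/sec" % fsync_per_sec(f.name, 200))
```

On ordinary disks this loop is bounded by how fast the device can acknowledge a flush, which is exactly what the numbers below measure.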

The card I used is a FusionIO Duo; physically it is two cards on a single board, and it is visible to the OS as two cards.

For some reason I was not able to set up LVM on the cards, so I finished tests only for software RAID0 and RAID1.

I used an XFS filesystem mounted with the “-o nobarrier” option, and ran the same test as in the previous post on the following configurations:

  • Single card
  • RAID0 over two cards
  • RAID1 over two cards
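Creating the two arrays with mdadm can be sketched as below; the FusionIO device names (/dev/fioa, /dev/fiob) and the md device name are my assumptions, not taken from the original setup, so the sketch only builds the command lines rather than running them:

```python
# Sketch of the mdadm invocations for the two RAID layouts.
# Device names are assumptions; adjust for your system.

def mdadm_create(md_dev, level, devices):
    """Build an `mdadm --create` command line for the given RAID level."""
    return ["mdadm", "--create", md_dev,
            "--level=%s" % level,
            "--raid-devices=%d" % len(devices)] + list(devices)

cards = ["/dev/fioa", "/dev/fiob"]
print(" ".join(mdadm_create("/dev/md0", "0", cards)))  # RAID0 (stripe)
print(" ".join(mdadm_create("/dev/md1", "1", cards)))  # RAID1 (mirror)
```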

Here is what I got:

  • Single card: 14050.59 req/sec
  • RAID0: 13039.00 req/sec
  • RAID1: 324.71 req/sec
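Working the relative numbers out from the results above (a quick check using the figures as printed, which comes out slightly over the rounded values quoted in the text):

```python
single, raid0, raid1 = 14050.59, 13039.00, 324.71

raid0_overhead = 1 - raid0 / single   # fraction lost going to RAID0
raid1_slowdown = single / raid1       # how many times slower RAID1 is

print("RAID0 overhead: %.1f%%" % (100 * raid0_overhead))  # ~7.2%
print("RAID1 slowdown: %.0fx" % raid1_slowdown)           # ~43x
```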

For the single card I got much better results than in my previous test; probably the
Duo card has better characteristics.

RAID0 shows some overhead, about 7%, but it is acceptable.

And something is terribly wrong with RAID1: I am getting a 40x drop in the number of fsyncs per second.
So it seems we can’t keep InnoDB transaction log files on software RAID1 over FusionIO.

For reference, I used this command:

The next step is to check how the RAID setup affects IO throughput.

About Vadim Tkachenko

Vadim leads Percona's development group, which produces Percona Cloud Tools, Percona Server, Percona XtraDB Cluster, and Percona XtraBackup. He is an expert in solid-state storage and has helped many hardware and software providers succeed in the MySQL market.


  1. Mrten says:

    Is that 40x RAID1 drop worth a posting on the linux kernel list perhaps?

  2. It is very important to mention which version of the kernel you used for this analysis.

    For example, see this article: “Linux 2.6.24 Through Linux 2.6.33 Benchmarks”. There are remarkable performance variations between different versions of the Linux kernel in several areas, and maybe in software RAID too.

  3. Vadim says:


    It was 2.6.18-128.1.10.el5 on CentOS release 5.3 (Final).

    As RedHat uses its own kernel versioning scheme, it is hard to say which vanilla kernel it corresponds to.

  4. Marki says:

    Was alignment on all levels (partition, md superblock) correctly set? In case of mis-alignment, the card has to do much more work on each request…
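To illustrate the alignment point: a partition is misaligned when its starting byte offset is not a multiple of the drive's internal block size. The 4 KiB figure below is an assumption for illustration; FusionIO's internal page size may differ.

```python
SECTOR = 512  # bytes per logical sector

def is_aligned(start_sector, block_size=4096):
    """True if a partition starting at start_sector lands on a
    block_size boundary (e.g. the drive's internal page size)."""
    return (start_sector * SECTOR) % block_size == 0

# The classic fdisk default of starting at sector 63 is misaligned
# for 4 KiB pages, while sector 2048 (a 1 MiB boundary) is aligned:
print(is_aligned(63))    # False
print(is_aligned(2048))  # True
```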

  5. Vadim says:


    I did not try that; do you have a good reference for the details?

  6. Marki says:

    I don’t have personal experience with that; I have just read a few articles about it (also a recent discussion on the lvm mailing list).
    Check, for example,
    and other articles that Google finds for: ssd md align
