FusionIO – time for benchmarks

I have posted about FusionIO a couple of times: RAID vs SSD vs FusionIO and Testing FusionIO: strict_sync is too strict…. The problem was that FusionIO either did not provide durability, or the results in strict mode were too bad, so I lost interest in FusionIO for a couple of months. But I should give the FusionIO team credit: they did not ignore the problem, and recently I was told that the latest drivers provide durability even without strict_mode (see https://www.percona.com/blog/2009/06/15/testing-fusionio-strict_sync-is-too-strict/#comment-676717). While I do not fully understand how it works internally (the FusionIO card has no RAM on board and takes memory from the host, so there is always a cache kept in OS memory), I also have no reason to doubt it (though I will test that someday…). So I decided to see what IO performance we can expect from FusionIO in different modes.

First, a few words about the card itself. I have a 160GB SLC ioDrive, and a simple Google search for "FusionIO" leads you to the Dell shop (so I am not disclosing any private information here). There you can get the card for $6,569.99, which gives us ~$40/GB. But we should talk about real space. To provide "steady" write performance, the card comes pre-formatted with only 120GB available (25% of the space is reserved for internal needs), so the real cost jumps to $50/GB. Even that is not enough to get maximal write throughput: if you want the announced 600MB/s, you need to use even less space (you will see that in the benchmark results). As the description on the Dell site says, you should expect "write bandwidth of 600 MB/s and read bandwidth of 700 MB/s"; let's see what we get in real life.

There are a lot of numbers, so let me put my conclusions first; later I will back them up with the numbers.

So conclusions:

  • Reads: you can really get 700MB/s read bandwidth, but you need 32 working threads
    for that. For a single thread I got 140MB/s, and for 4 threads – 446MB/s.
  • Writes (random writes) are a more complex story:
    – with 100GB filled on a 125GB partition I got 316.15MB/s for 4 threads; for 1 thread – 131MB/s. However, for 8 threads the result drops to 162.96MB/s. The latency is also interesting: for 1 thread, 95% of requests get a response within 0.11ms (yes, that is 110 microseconds!), but for 8 threads it is already 0.85ms, and for 16 threads – 1.55ms (which is still very good). And something happens with 32 threads – response time jumps to 19.71ms. I think there is some serialization issue inside the driver or firmware.
    – you can get about 600MB/s write bandwidth on a 16GB file with 16 threads. As file size increases, write bandwidth drops.
    – I got strange results with sequential writes: as the number of threads increases, write bandwidth drops noticeably. Again it looks like some serialization problem; I hope FusionIO addresses it as well.
Now to the numbers and benchmarks. All details are available here. I used the sysbench fileio benchmark for the tests (the full script is available under the link). The block size is 16KB, and I used directio mode.
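For reference, a sysbench fileio run along these lines matches the setup described. This is a sketch, not the author's exact script; the file size and run length here are illustrative:

```shell
# Prepare the test files, run a random-write test with 16KB blocks and
# O_DIRECT (bypassing the OS page cache), then clean up.
# Swap rndwr for rndrd / seqrd / seqwr to get the other test modes.
sysbench --test=fileio --file-total-size=100G prepare

sysbench --test=fileio \
    --file-total-size=100G \
    --file-test-mode=rndwr \
    --file-block-size=16384 \
    --file-extra-flags=direct \
    --num-threads=4 \
    --max-time=60 \
    --max-requests=0 \
    run

sysbench --test=fileio --file-total-size=100G cleanup
```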

Let me show some graphs; the numeric results are available in the full report. For reference I also show results for RAID10 on 8 disks, 2.5″, 15K RPM each.

Random reads:

From this graph we see that to fully utilize the FusionIO card we need 4 or more working threads.
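For scale, the headline bandwidth figures translate into IOPS at the 16KB block size used throughout these tests (my arithmetic, not a number from the report):

```shell
# Bandwidth <-> IOPS conversion at a 16KB block size.
awk 'BEGIN {
  bs = 16 * 1024                                                  # bytes per request
  printf "700MB/s reads  ~= %d IOPS\n", 700 * 1024 * 1024 / bs   # 44800
  printf "600MB/s writes ~= %d IOPS\n", 600 * 1024 * 1024 / bs   # 38400
}'
```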

Random writes:

Again 4 threads seem to be the sweet spot: we get a peak at 4 threads, and with more threads bandwidth drops. Again, I assume it is some serialization / contention in the driver or firmware.

Sequential reads:

With sequential reads it is the same story with 4 threads.

Sequential writes:

This is what I mentioned about sequential writes. Instead of increasing, bandwidth drops when we have more than one working thread, and we see only 130MB/s max throughput.

Write bandwidth vs filesize:

Here I tested how file size (fill factor) affects write performance (8 working threads). The point is that you can really get the promised 600MB/s (or close) write throughput, but only on small file sizes (<=32GB). As size increases, throughput drops noticeably.

OK, to finalize the post: I think FusionIO provides really good IO read and write performance. If your budget allows and you are fighting IO problems, it could be a good solution. Speaking of budget, $6,569.99 may look like quite a significant price (and it is 🙂), but we should also consider the consolidation factor: how many servers we can replace with a single card. A rough estimate says it may be 5-10 servers, and that many servers cost much more.
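To make the consolidation argument concrete, here is a back-of-envelope sketch; the per-server price is a hypothetical assumption (a commodity DB server circa 2009), not a figure from the post:

```shell
# Rough consolidation math: card price vs. the hardware cost of the
# 5-10 servers it might replace. per_server is an assumed price.
awk 'BEGIN {
  card = 6569.99
  per_server = 4000
  printf "5 servers replaced:  $%d in hardware\n", 5 * per_server
  printf "10 servers replaced: $%d in hardware\n", 10 * per_server
  printf "card price equals %.1f such servers\n", card / per_server
}'
```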


Comments (10)

  • aaron kempf

    you don’t need 4 threads.. you just need a database that supports SMP dude.

    December 8, 2009 at 12:00 am
  • uxio

    I have the same card in production. My priority was IO/s, and it is an amazing solution. The io-wait practically disappears. I saw 95,000 io/s on a 4k sysbench rndrd with reiserfs.

    December 9, 2009 at 5:10 am
  • Ryan H

    If we need at least 600GB per server, any thoughts on using the larger MLC cards vs RAIDing multiple cards together? Also, any thoughts on how RAID should be configured if used, and whether it impacts durability in any way?

    December 9, 2009 at 9:23 am
  • Dan Rogart

    Interesting results. Do you have a graph for sequential write speeds on the RAID device? Would be nice to compare side by side with the Fusion IO graph for that test.


    December 9, 2009 at 12:59 pm
  • Steven Roussey

    Would be interesting to compare with OCZ integrated RAID/SSD solutions (both the PCI-E cards and the 3.5″ Colossus, though the latter really needs to be on SATA III).

    December 9, 2009 at 7:24 pm
  • peter


    We tend to test hardware we get our hands on. If you can have someone at OCZ send us one, we'd be happy to test it.

    December 9, 2009 at 9:19 pm
  • Didier Spezia

    Interesting! Would you run just one of these cards in a production server? It is supposed to be more reliable than a hard drive, but is it reliable enough to avoid the cost of buying a second card for mirroring purposes?

    December 10, 2009 at 10:20 am
  • Vadim


    I think I would be OK running it in production without mirroring; I would just make sure I
    have a reliable backup solution. FusionIO also allows making backups quite fast.

    December 10, 2009 at 2:20 pm
  • Vadim


    I just noticed a press release:

    Zappos moved their MySQL instances to IBM systems with FusionIO cards, with
    3x consolidation.

    December 11, 2009 at 9:47 pm
  • Vadim


    I hope I will get a 320GB MLC card soon, so I will be able to test it.
    I got a recommendation from FusionIO that I can run two cards in parallel
    using RAID0, which will also improve IO performance.
    I have no statistics on how durable that is, so I can't comment.

    December 14, 2009 at 8:58 pm

