Fast storage: 8 SSD Intel X-25M 80GB benchmarks

I appreciate the opportunity Jos van Dongen from Tholis Consulting gave me: he granted me access to a server with 8 Intel X-25M 80GB MLC cards attached. The cards are attached to 2 Adaptec 5805 RAID controllers, with 4 cards per controller.

The cost of the setup is 8 × $260 (X-25M) + 2 × $500 (Adaptec 5805) ≈ $3,000.
Available space ranges from 300GB to 600GB depending on the RAID setup.

The logical comparison is against the FusionIO 320GB MLC card, so I will copy the results from my FusionIO 320GB MLC benchmarks.

For the benchmarks I used the sysbench fileio test.
All raw results are available on the Percona benchmarks wiki; here I will highlight the most interesting points.
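
For reference, here is a minimal sketch of a sysbench fileio run of this kind (sysbench 0.4 syntax; the file count, total size, and thread count are illustrative assumptions, not the exact parameters of these benchmarks):

    # Prepare the test files once (values are illustrative)
    sysbench --test=fileio --file-total-size=128G --file-num=64 prepare

    # Random-read run: 8 threads, O_DIRECT to bypass the OS page cache
    sysbench --test=fileio --file-total-size=128G --file-num=64 \
             --file-test-mode=rndrd --file-extra-flags=direct \
             --num-threads=8 --max-time=180 --max-requests=0 run

    # Remove the test files afterwards
    sysbench --test=fileio --file-total-size=128G --file-num=64 cleanup

The other modes (rndwr, seqrd, seqwr) are selected the same way via --file-test-mode.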

A couple of words on the tested setups. We used two configurations:

  • In the first round (marked as software), each card is connected to the Adaptec as an individual device, so the OS sees 8 individual cards. The cards are configured in software RAID: RAID0, RAID10, RAID5, RAID50 (a sketch of the software-RAID creation follows below).
  • In the second round (marked as hardware), each Adaptec is configured as a hardware RAID0 over 4 cards, so the OS sees 2 devices. These devices are combined in software RAID0 or RAID10, with different OS schedulers tried on each device.
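
As an illustration, the software-RAID variants can be assembled with mdadm; a minimal sketch, assuming the 8 SSDs show up as the hypothetical devices /dev/sdb through /dev/sdi (only one array would be built at a time from the same disks):

    # Software RAID0 across all 8 SSDs (hypothetical device names)
    mdadm --create /dev/md0 --level=0 --raid-devices=8 /dev/sd[b-i]

    # Alternatively, software RAID10 over the same devices
    # mdadm --create /dev/md0 --level=10 --raid-devices=8 /dev/sd[b-i]

    # Verify the resulting array
    cat /proc/mdstat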

I should highlight that I do not see a production use for RAID0, as in my opinion SSD cards
have not yet reached a sufficient level of reliability (see the comments to the post FusionIO 320GB MLC benchmarks); however, I include the results to show the theoretical maximum.

So let’s start with random reads:

[graph: random reads]

I’d say the SSDs show results comparable to FusionIO at 16+ threads, but at 4-8 threads the difference is significant. With the SSDs you can get 160MB/sec at 4 threads and 260MB/sec at 8 threads.

Random writes:
[graph: random writes]

There are a couple of things to note (besides the SSDs doing much worse than FusionIO):
1. Something is wrong with random-write scaling in this setup. It is a point for research;
I suspect there is serialization somewhere in the Linux software RAID, the Linux I/O scheduler, or at the Adaptec hardware level.
2. Cards combined in hardware RAID show worse results than cards connected as individual devices (see the randwr rows in the summary table at https://www.percona.com/docs/wiki/benchmark:ssd:start).
3. For cards combined in hardware RAID, DEADLINE performs much worse than CFQ (switching schedulers is shown in the sketch below).
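
For reference, the Linux I/O scheduler is selected per block device through sysfs; a minimal sketch, with a hypothetical device name:

    # Show the available schedulers; the active one is in brackets
    cat /sys/block/sda/queue/scheduler

    # Switch the device to deadline (cfq is selected the same way)
    echo deadline > /sys/block/sda/queue/scheduler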

Sequential reads:
[graph: sequential reads]

– For sequential reads you can get pretty decent results from the SSDs (230MB/sec at 4+ threads)
– Cards combined in hardware RAID do much better
– DEADLINE outperforms CFQ here (the opposite of the random-write case)
– Software RAID0 performed quite badly, so I chose to show the hardware RAID0 results

Sequential writes:
[graph: sequential writes]

I’d say sequential writes are a hard task for both the SSDs and FusionIO; neither scales well.
You may want to look into other options if your load requires sequential writes (e.g. I
put the InnoDB transactional logs on rotation-based drives instead of SSD in my InnoDB on FusionIO benchmarks).
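
As an illustration, such a split only requires pointing the InnoDB log directory at a different device; a minimal my.cnf fragment, with hypothetical mount points:

    [mysqld]
    # data files stay on the SSD array (/ssd is a hypothetical mount point)
    datadir                   = /ssd/mysql
    # redo logs go to rotating disks (/hdd is a hypothetical mount point)
    innodb_log_group_home_dir = /hdd/mysql-logs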

So in summary I can say:

  • With the SSD drives you can get decent results for random and sequential reads.
    I think they are competitive with FusionIO on a price/performance basis (remember FusionIO is twice as expensive).
  • Random writes did not work as I expected, and this is a point for investigation.
  • RAID5, as expected, is only competitive for reads, not for writes.
  • The complexity of running 8 SSD drives may be significant. You may want to
    evaluate the different options: connecting them in software or hardware RAID, which scheduler to
    pick, etc. I suggest running sysbench fileio or similar (e.g. iozone) to check that you get the performance you expect.
  • In my opinion, maintaining 8 SSD cards per server is much more hassle than
    dealing with a single FusionIO card; however, an important point is that with SSDs you can
    hot-remove and insert cards, while FusionIO, which plugs into the PCI-E bus, requires you
    to shut down the server to replace it.

Comments (14)

  • Peter van Dijk

    Interesting results. Have you done any other tests to specifically compare the Deadline and CFQ schedulers?

    Based on the serials from the wiki page these are the gen2 X25-M drives – is it safe to assume that the performance degradation has already kicked in on these drives?
    (as I wouldn’t have thought that TRIM would be working under these setups…)

    January 18, 2010 at 3:55 pm
  • peter

    Vadim,

    It is interesting how the results compare to a single Intel SSD drive, basically how much scalability we see.
    This can help us understand whether this is a raw device performance issue or a scalability issue.

    January 18, 2010 at 5:13 pm
  • aospan

    Peter, try checking these test results for a single Intel SSD X25-M (sorry, Russian only 🙂): http://www.setupc.ru/wiki/moin.cgi/ssd_test_x25m

    January 19, 2010 at 12:24 am
  • Nickolay Ihalainen

    The Intel SSDs have their best performance at >128K blocks, according to iozone tests (e.g. http://setupc.ru/wiki/moin.cgi/ssd_test_x25m).
    Maybe we need to enlarge the InnoDB page size for better SSD performance?

    January 19, 2010 at 1:17 am
  • Vojtech Kurka

    Vadim,

    did you use the drives and FusionIO in durable mode? I mean this:

    http://www.mysqlperformanceblog.com/2009/03/02/ssd-xfs-lvm-fsync-write-cache-barrier-and-lost-transactions/
    http://www.mysqlperformanceblog.com/2009/06/15/testing-fusionio-strict_sync-is-too-strict/

    I’m sorry if I overlooked it, but I can’t find any note concerning this in the test results.

    January 19, 2010 at 5:00 am
  • Vadim

    Vojtech,

    The Intel SSDs were used with the write cache enabled, that is, not in durable mode.

    FusionIO says it works in durable mode by default in the latest drivers (which I used).

    January 19, 2010 at 12:41 pm
  • Vojtech Kurka

    Vadim, thank you!

    January 19, 2010 at 12:43 pm
  • Vlad Rodionov

    How come sequential writes are slower than random ones? Even a single X-25M can do 80MB per sec. Something is wrong with your setup.

    January 20, 2010 at 3:18 pm
  • Vadim

    Vlad,

    If you look at the SSDs (and the recent FusionIO benchmarks), they usually do not like sequential writes.

    January 20, 2010 at 3:25 pm
  • Patrick Kane

    The reason sequential writes stunk was --file-extra-flags=direct. Rerun the benchmarks without it and you’ll see much better performance (257MB/sec across a 22-drive X25-M RAID-10 array here; not great, but decent).

    January 21, 2010 at 1:48 pm
  • Digital

    Is it possible to try this setup under FreeBSD 8?

    January 28, 2010 at 11:57 am
  • Jos van Dongen

    For those interested, I ran into another X-25M vs FusionIO benchmark: http://hothardware.com/Articles/Fusionio-vs-Intel-X25M-SSD-RAID-Grudge-Match/?page=1

    best, Jos

    February 1, 2010 at 6:26 am
  • Alain Draet

    Hi vadim,

    thanks for putting out those tests.
    I was wondering, though: do you consider the performance of SSD drives with MySQL better than regular HDDs?
    (seems like a too-basic question maybe, sorry 🙂

    Thanks a lot

    Alain

    January 14, 2012 at 8:32 pm
  • Vadim Tkachenko

    Alain,

    I am not sure what you mean by “performance of SSD drive with MySQL”, but I may try to interpret it as
    the performance of MySQL on SSD against HDD.
    In this case yes, generally MySQL performance on SSD is better.

    February 15, 2012 at 6:36 pm
