I appreciate the opportunity Jos van Dongen from Tholis Consulting gave me: he granted me access to servers with 8 attached Intel X-25M 80GB MLC drives. The drives are attached to 2 Adaptec 5805 RAID controllers, with 4 drives per controller.
The cost of the setup is 8 × $260 (X-25M) + 2 × $500 (Adaptec 5805) ≈ $3,000.
Available space varies from ~300GB to ~600GB depending on the RAID setup: RAID10 mirrors the data and halves the usable capacity, while RAID0 stripes across the full capacity of all 8 drives.
The logical comparison is against the FusionIO 320GB MLC card, so I will copy the results from my FusionIO 320GB MLC benchmarks.
For the benchmarks I used the sysbench fileio test.
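For reference, here is a minimal sketch of this kind of sysbench fileio run (the mount point, file size, and run time are my illustrative assumptions, not the exact parameters of these benchmarks):

    cd /mnt/ssd     # working directory on the volume under test (assumed path)
    # Create the test file set once, run the random read mode, then clean up.
    sysbench --test=fileio --file-total-size=16G --file-num=64 prepare
    sysbench --test=fileio --file-total-size=16G --file-num=64 \
        --file-test-mode=rndrd --file-extra-flags=direct \
        --max-time=180 --max-requests=0 --num-threads=8 run
    sysbench --test=fileio --file-total-size=16G --file-num=64 cleanup

The other modes are run the same way with --file-test-mode set to rndwr, seqrd, or seqwr.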
All raw results are available on the Percona benchmarks wiki; here I will highlight the most interesting points.
A couple of words on the tested setups. We used two configurations:
- In the first runs (marked as software), each drive is connected to the Adaptec as an individual device, so the OS sees 8 individual drives. The drives are combined into software RAIDs: RAID0, RAID10, RAID5, and RAID50 (see the mdadm sketch after this list).
- In the second round (marked as hardware), each Adaptec is configured as a hardware RAID0 over 4 drives, so the OS sees 2 devices. These devices are combined into software RAID0 and RAID10, with different OS I/O schedulers tried on each device.
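A sketch of how these software RAID configurations can be built with mdadm; the /dev/sd[b-i] device names are assumptions for illustration, and each command shows one alternative configuration:

    # First round ("software"): 8 individual drives visible to the OS.
    mdadm --create /dev/md0 --level=0  --raid-devices=8 /dev/sd[b-i]   # RAID0
    mdadm --create /dev/md0 --level=10 --raid-devices=8 /dev/sd[b-i]   # RAID10
    mdadm --create /dev/md0 --level=5  --raid-devices=8 /dev/sd[b-i]   # RAID5
    # RAID50: two 4-drive RAID5 arrays striped together with RAID0.
    mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[b-e]
    mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sd[f-i]
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2

    # Second round ("hardware"): each controller exports one RAID0 volume.
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc  # RAID0
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc  # mirror of the two stripes (RAID10 overall)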
I should highlight that I do not see RAID0 as usable in production, as in my opinion SSDs
have not yet reached a sufficient level of reliability (see the comments on the FusionIO 320GB MLC benchmarks post); however, I include the results to show the theoretical maximum.
So let’s start with random reads:
I’d say the SSDs show results comparable to FusionIO at 16+ threads; however, at 4-8 threads the difference is significant. With the SSDs you can get 160MB/sec with 4 threads and 260MB/sec with 8 threads.
There are a couple of things to note about random writes (besides the fact that the SSDs do much worse than FusionIO).
1. Something is wrong with random write scaling in this setup. It is a point for further research;
I suspect there is some serialization in the Linux software RAID layer, in the Linux I/O scheduler, or at the Adaptec hardware level.
2. Drives connected in hardware RAIDs show worse results than drives connected as individual devices (see the results at http://www.percona.com/docs/wiki/benchmark:ssd:start, in the summary table, randwr rows)
3. For drives connected in hardware RAIDs, the DEADLINE scheduler performs much worse than CFQ (a sketch of switching schedulers follows this list).
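The scheduler can be checked and switched per block device at runtime; sdb is a placeholder device name. Watching per-device utilization during a run is also one way to start hunting for the serialization suspected in point 1:

    cat /sys/block/sdb/queue/scheduler            # current scheduler shown in brackets
    echo deadline > /sys/block/sdb/queue/scheduler
    echo cfq > /sys/block/sdb/queue/scheduler
    # If one device or the md layer sits at 100% utilization while the
    # others idle, that points at where the requests serialize:
    iostat -x 1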
– For sequential reads you can get pretty decent results from the SSDs (230MB/sec at 4+ threads)
– Drives connected in hardware RAID do much better
– DEADLINE outperforms CFQ here (the opposite of the random write results)
– Software RAID0 performed quite badly, so I chose to show the hardware RAID0 results
I’d say sequential writes are a hard task for both the SSDs and FusionIO; neither scales well.
You may want to look into other options if your load requires sequential writes (e.g., I
put the InnoDB transaction logs on rotating drives instead of SSD in my InnoDB on FusionIO benchmarks).
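If you take that route, the relevant MySQL option is innodb_log_group_home_dir; a minimal sketch, with hypothetical mount points:

    # Keep the data files on the SSD volume, but the InnoDB transaction
    # logs on a rotating-disk mount; both paths are hypothetical.
    cat >> /etc/my.cnf <<'EOF'
    [mysqld]
    datadir                   = /mnt/ssd/mysql
    innodb_log_group_home_dir = /mnt/hdd/mysql-logs
    EOF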
So in summary I can say:
- With SSD drives you can get decent results for random and sequential reads.
I think they are competitive with FusionIO from a price/performance standpoint (remember, FusionIO is twice as expensive)
- Random writes did not perform as I expected, and this is a point for further investigation
- RAID5, as expected, is competitive only for reads, not writes
- The complexity of running 8 SSD drives may be significant. You will want to
evaluate different options: software vs. hardware RAID, which scheduler to
pick, etc. I suggest running sysbench fileio or a similar tool (e.g., iozone) to check that you get the performance you expect (see the sketch after this list)
- In my opinion, maintaining 8 SSD drives per server is much more hassle than
dealing with a single FusionIO card. However, there is an important point: the SSD
drives can be hot-removed and hot-inserted, while FusionIO plugs into the PCI-E bus,
so you need to shut down the server to replace it
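As an example of such a check, here is a sketch that sweeps the thread count with sysbench fileio the same way these benchmarks do (the mount point, file size, and run time are illustrative assumptions):

    cd /mnt/raid     # mount point of the RAID volume under test (assumed path)
    sysbench --test=fileio --file-total-size=16G prepare
    # Run random writes at increasing concurrency and print the throughput line.
    for t in 1 2 4 8 16 32; do
        echo "threads=$t"
        sysbench --test=fileio --file-total-size=16G \
            --file-test-mode=rndwr --file-extra-flags=direct \
            --max-time=60 --max-requests=0 --num-threads=$t run \
            | grep "transferred"
    done
    sysbench --test=fileio --file-total-size=16G cleanup

If throughput stops growing after a few threads, as it did in these results, try a different RAID layout or scheduler before blaming the drives.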