
Testing Intel® SSD 910



Intel has entered the PCIe SSD market with its Intel SSD 910 card. Given the slogan “The ultimate data center SSD,” I assume Intel is targeting server-grade hardware rather than the consumer level.
I got one of these cards into our lab. I should say it is very price-competitive compared with other enterprise-level PCIe vendors: for a 400GB card I paid $2100, which works out to $5.25/GB. Of course, I have some performance numbers I’d like to share.

But before that, a few words on the card internals. Intel builds the card from separate 200GB modules, so the 400GB card shows up as 2 x 200GB devices in the operating system, and the 800GB card shows up as 4 separate devices. On top of those you can build software RAID0, RAID1, or RAID10, whichever you prefer.

For my tests I used a single 200GB device and a pair of devices combined in software RAID0 (Duo).
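For reference, the Duo array can be assembled with mdadm roughly like this; the device names and mount point below are placeholders for however the two 200GB modules enumerate on your system:

    # Device names are an assumption; check how the two 200GB modules show up on your box
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

    # Put a filesystem on the stripe and mount it for the tests
    mkfs.xfs /dev/md0
    mkdir -p /mnt/ssd910
    mount /dev/md0 /mnt/ssd910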

For raw IO performance I follow the scripts I used for other reviews, e.g. Testing Intel SSD 520.
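In short, these are sysbench fileio runs against a filesystem on the card. As a sketch only (the file-set size here is my assumption; it just needs to be large enough to defeat caching), the test files are created once with:

    cd /mnt/ssd910
    sysbench --test=fileio --file-total-size=100G --file-num=64 prepare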

First results are for asynchronous writes:
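The write workload corresponds to a run along these lines (asynchronous IO with O_DIRECT; the test mode and run time are my assumptions, not a copy of the scripts):

    sysbench --test=fileio --file-total-size=100G --file-num=64 \
             --file-test-mode=rndwr --file-io-mode=async \
             --file-extra-flags=direct --max-time=600 --max-requests=0 run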

The result averages at 150 MiB/sec for the single device and at 250 MiB/sec for Duo.
I find this interesting, as on the SATA-based Intel 520 I was able to get 300 MiB/sec.

Now asynchronous reads:
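This is the same sketch with the test mode switched to random reads (again an approximation under the assumptions above):

    sysbench --test=fileio --file-total-size=100G --file-num=64 \
             --file-test-mode=rndrd --file-io-mode=async \
             --file-extra-flags=direct --max-time=600 --max-requests=0 run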

The result line is quite stable: 270 MiB/sec for a single device and 530 MiB/sec for Duo.
In the same workload the Intel 520 gave 370 MiB/sec.

Now we get to synchronous reads, to see how many threads we need to reach peak throughput and to check the corresponding response times:

Throughput:

Response time:

I would say that for a single device the throughput peaks at 8 threads with a 95% response time of 0.68ms, and for Duo at 16 threads with 0.84ms.
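The synchronous sweep boils down to a loop over thread counts, again only an approximation of my scripts (sysbench reports the approximate 95th percentile response time in its per-request statistics):

    for threads in 1 2 4 8 16 32 64; do
        sysbench --test=fileio --file-total-size=100G --file-num=64 \
                 --file-test-mode=rndrd --file-io-mode=sync \
                 --file-extra-flags=direct --num-threads=$threads \
                 --max-time=300 --max-requests=0 run
    done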

In conclusion, I have mixed feelings after this experiment. On the one hand, the performance results are definitely lower than those of alternative PCIe cards available on the market; on the other hand, the price is very attractive.

I am going to run more MySQL-based benchmarks to see how the card compares to the alternatives under a database workload.



Vadim Tkachenko

Vadim leads Percona's development group, which produces the Percona Server, Percona Server for MongoDB, Percona XtraDB Cluster and Percona XtraBackup. He is an expert in solid-state storage, and has helped many hardware and software providers succeed in the MySQL market.



Categories:
Benchmarks, Hardware and Storage, MySQL


Comments
  • Vadim,

    It is interesting that you do not get 2x for writes in Duo but come very close to 2x on reads. I wonder whether this is a RAID0/filesystem issue or a hardware issue. Formatting the two devices separately and testing them in parallel could answer this question.

    The numbers being lower than the 520’s is of course interesting. I wonder if it could be related to the compression used in the 520. Sysbench probably uses highly compressible data for file IO, which can skew the numbers a lot if the device underneath does compression.


  • Vadim,

    For an enterprise-class SSD the most important performance characteristics are read/write throughput in steady state and request latency percentiles. In many cases nobody cares how much TPS a device sustains if you cannot get 99% of the requests under 100ms, for example, or better, under 10ms.

