
Intel 320 SSD random write performance


While I like the performance provided by PCI-E cards like FusionIO or Virident tachIOn, I am often asked about SATA drive alternatives, as the price of PCI-E cards is often a barrier, especially for startups. There is a wide range of SATA drives on the market, and it is hard to pick one, but Intel SSDs are probably among the most popular, and I got a pair of 160GB Intel 320 SSDs to play with.

Probably the most interesting characteristic of an SSD for me is random write throughput as a function of file size, as it is known that write throughput declines as you use more space. In this post I test (using sysbench fileio) a single Intel 320 SSD with different file sizes (from 10 to 140 GiB, in 10 GiB steps). The filesystem is XFS and the IO block size is 16KiB. I posted all scripts and results on our Launchpad project, where you can find them.

I used the following methodology for testing: format XFS, then run a one-hour random write test, measuring throughput every 10 seconds.

The results are a bit tricky to analyze, as throughput behaves like this (for a 100GiB file size): just after the format, throughput starts at 80MiB/sec, then drops to 10MiB/sec, and after about half an hour it stabilizes at the 30MiB/sec level.

We can build the same graph (time -> throughput) for all the file sizes we have, where you can see that throughput drops from 100MiB/sec for a 10GiB file to 15MiB/sec for a 140GiB file. For reference, I added the result from a similar benchmark on RAID10 over 8 regular spinning 15K SAS disks, which is around 23MiB/sec.

From the graph we see that all results stabilize after 2500 sec, and if we take the slice of data after 2500 sec, the summary graph (size -> throughput) makes it much easier to see the throughput for a given file size. E.g. for a 70GiB file we have 40MiB/sec, and for a 120GiB file it is 20MiB/sec.

Some conclusions from these results:
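The test loop described above can be sketched roughly as follows. This is a hedged illustration, not the post's actual scripts (those are on Launchpad): the sysbench flag names assume modern sysbench 1.0 fileio syntax (`--file-total-size`, `--file-test-mode=rndwr`, `--file-block-size`, `--time`, `--report-interval`), and the code only builds the command lines rather than executing them, so you can review or adapt them first.

```python
def bench_cmd(size_gib, runtime_sec=3600, block_size=16384, interval_sec=10):
    """Build a sysbench fileio random-write command for one file size.

    Flag names follow sysbench 1.0 syntax (an assumption; the original
    post likely used an older sysbench driven by wrapper scripts).
    """
    return (
        "sysbench fileio run"
        f" --file-total-size={size_gib}G"
        " --file-test-mode=rndwr"
        f" --file-block-size={block_size}"
        f" --time={runtime_sec}"
        f" --report-interval={interval_sec}"
    )

# One one-hour run per file size, 10 GiB to 140 GiB in 10 GiB steps:
commands = [bench_cmd(size) for size in range(10, 150, 10)]
for cmd in commands:
    print(cmd)
```

Each run would be preceded by reformatting the XFS filesystem (`mkfs.xfs`) and a `sysbench fileio prepare` step to create the test files, which are omitted here for brevity.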

  • Intel 320 SSD performance is affected by the amount of used space: the more space used, the worse the performance.

  • Throughput may drop sharply; e.g. going from a 10GiB to a 20GiB file, it drops by 20%.

  • When you run the benchmark on your own, take into account the time needed to get a stabilized result; in some cases it may take over half an hour.
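The last point, slicing off the warm-up period before averaging, can be sketched like this. The data below is synthetic, shaped like the 100GiB run described above (fast right after format, a dip, then a plateau); the 2500-second cutoff comes from the post's own observation of when the runs stabilize.

```python
def stabilized_throughput(samples, cutoff=2500):
    """Average MiB/sec over samples taken at or after `cutoff` seconds,
    discarding the unstable warm-up period."""
    tail = [mibs for t, mibs in samples if t >= cutoff]
    if not tail:
        raise ValueError("no samples past the cutoff; run the test longer")
    return sum(tail) / len(tail)

# Synthetic (time_sec, MiB/sec) samples every 10 sec, for illustration only:
run = [(t, 80.0) for t in range(0, 300, 10)]      # fresh after format
run += [(t, 10.0) for t in range(300, 1800, 10)]  # dip
run += [(t, 30.0) for t in range(1800, 3600, 10)] # stable plateau

print(stabilized_throughput(run))  # → 30.0
```

A naive average over the whole hour would be pulled down by the dip and up by the initial burst; averaging only the stabilized tail reflects sustained performance.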

Finally, I want to give credit to the R project and ggplot2, which are very helpful for graphical analysis of data.


Vadim Tkachenko

Vadim leads Percona's development group, which produces the Percona Server, Percona Server for MongoDB, Percona XtraDB Cluster and Percona XtraBackup. He is an expert in solid-state storage, and has helped many hardware and software providers succeed in the MySQL market.



