
Intel 320 SSD random write performance



While I like the performance provided by PCI-E cards like FusionIO or Virident tachIOn, I am often asked about SATA drive alternatives, as the price of PCI-E cards is often a barrier, especially for startups. There is a wide range of SATA drives on the market, and it is hard to pick one, but Intel SSDs are probably among the most popular, and I got a pair of Intel 320 SSD 160GB drives to play with.

Probably the most interesting characteristic of an SSD for me is random write throughput in correlation with file size, as it is known that write throughput declines as more space is used. In this post I test (using sysbench fileio) a single Intel 320 SSD with different file sizes (from 10 to 140 GiB, in 10GiB steps). The filesystem is XFS and the IO block size is 16KiB. I posted all scripts and results on our Launchpad project, where you can find them.
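For illustration, a single run of this kind could look roughly like the sketch below. The option names assume a recent sysbench (1.0+ syntax) and the file count and sizes are just examples; the actual scripts used for this post are the ones on Launchpad.

    # Sketch of a single random-write run; assumes sysbench 1.0+ option names.
    # The actual scripts used for this post are on Launchpad.

    # Create the test files (100GiB total in this example) on the XFS mount
    sysbench fileio --file-total-size=100G --file-num=64 prepare

    # 1-hour random write run, 16KiB block size, throughput reported every 10 sec
    sysbench fileio --file-total-size=100G --file-test-mode=rndwr \
        --file-block-size=16384 --time=3600 --events=0 \
        --report-interval=10 run

Setting --events=0 removes the request limit, so the run is bounded only by the one-hour --time limit.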

I used the following methodology for testing: format XFS, then run a 1-hour random write test, measuring throughput every 10 seconds.

The results are a bit tricky to analyze, because of how throughput behaves over time (shown here for a 100GiB file size): just after the format, throughput starts at 80MiB/sec, then drops to 10MiB/sec, and after about half an hour it stabilizes at around 30MiB/sec.

We can build the same graph (time vs. throughput) for all the file sizes, where you can see that throughput drops from 100MiB/sec for a 10GiB file to 15MiB/sec for a 140GiB file. For reference, I added the result of a similar benchmark on RAID10 over 8 regular spinning 15K SAS disks, which is around 23MiB/sec.

From the graph we see that all results stabilize after 2500 sec, and if we take a slice of the data after 2500 sec, the summary graph (size vs. throughput) makes it much easier to see the throughput for a given file size. E.g., for a 70GiB file we get 40MiB/sec, and for a 120GiB file it is 20MiB/sec.
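Putting the methodology together, the whole sweep could be scripted along the lines of the sketch below. The device name, mount point, and results directory are placeholders I use only for illustration; the real scripts are in the Launchpad project.

    # Hypothetical driver loop: /dev/sdb, /mnt/ssd and /root/results are
    # placeholders for illustration; the real scripts are on Launchpad.
    for size in $(seq 10 10 140); do
        umount /mnt/ssd 2>/dev/null
        mkfs.xfs -f /dev/sdb                      # re-format XFS before each run
        mount /dev/sdb /mnt/ssd
        cd /mnt/ssd

        sysbench fileio --file-total-size=${size}G --file-num=64 prepare
        sysbench fileio --file-total-size=${size}G --file-test-mode=rndwr \
            --file-block-size=16384 --time=3600 --events=0 \
            --report-interval=10 run > /root/results/rndwr-${size}G.log

        cd /
    done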

        
Some conclusions from these results:

  • Intel 320 SSD performance is affected by the amount of used space: the more space used, the worse the performance.

  • Throughput may drop quite sharply; e.g., going from a 10GiB file to a 20GiB file, it drops by 20%.

  • When you run the benchmark on your own, take into account the time needed to reach a stabilized result; it may take over half an hour in some cases (one way to discard the warm-up period is sketched after this list).
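To illustrate the last point: assuming the per-interval throughput has been pulled out of the sysbench logs into a simple two-column file of seconds and MiB/sec (that format is my assumption, not sysbench's raw output), the stabilized average is just the mean of the tail after 2500 seconds:

    # Hypothetical: throughput.txt holds "<seconds> <MiB/sec>" per line.
    # Average only the stabilized tail (after 2500 seconds of the run).
    awk '$1 > 2500 { sum += $2; n++ } END { if (n) printf "%.1f MiB/sec\n", sum/n }' throughput.txt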

Finally, I want to give credit to the R project and ggplot2, which are very helpful for graphical analysis of data.



Vadim Tkachenko



