I wrote about Intel 320 SSD write performance before, but I was not satisfied with those results. Each run on the Intel 320 SSD gave me somewhat different write performance, so I decided to look into this in more detail.
So let’s run the same experiment as in the previous post: sysbench fileio random writes on files of different sizes, from 10GiB to 140GiB in 10GiB steps. I use the ext4 filesystem, and I reformat the filesystem before each increase in file size.
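For reference, a single iteration looks roughly like this. This is a sketch using the old sysbench 0.4 command syntax; the thread count and run time are illustrative assumptions, not values taken from the original scripts:

```shell
# prepare test files of the given total size (here 10GiB)
sysbench --test=fileio --file-total-size=10G prepare

# random write workload; --num-threads and --max-time are illustrative values
sysbench --test=fileio --file-total-size=10G --file-test-mode=rndwr \
         --num-threads=4 --max-time=600 --max-requests=0 run

# remove the test files before the next file size
sysbench --test=fileio --file-total-size=10G cleanup
```

Each subsequent iteration repeats this with a larger `--file-total-size`.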
However, this is where the interesting part begins. When we run the same iterations again, the results look like this:
As you can see, the second time around the throughput is much worse, even for medium file sizes. Beyond 50GiB, throughput drops below 40MiB/sec. And this is despite the fact that I reformat the filesystem before each run.
This leads me to the conclusion that write performance on the Intel 320 SSD degrades over time, and is actually quite unpredictable at any given point. A filesystem format does not help; only the ATA secure erase procedure returns the drive to its initial state. For reference, the commands for this procedure are:
hdparm --user-master u --security-set-pass Eins /dev/sd$i
hdparm --user-master u --security-erase Eins /dev/sd$i
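Note that secure erase will fail if the drive is in the “frozen” security state (many BIOSes freeze drives at boot; a suspend/resume cycle often unfreezes them). You can check the state with hdparm before setting the password; the device name here is a placeholder:

```shell
# show the drive's security section; look for "not frozen" before attempting
# the secure erase commands above
hdparm -I /dev/sdX | grep -A8 '^Security:'
```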
Discussing this problem with engineers who work with Intel 320 SSD drives, I was advised to use artificial over-provisioning of about 20%: basically, we create a partition that takes only 80% of the space.
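One way to set this up is with parted, leaving the last 20% of the drive unpartitioned; this is a sketch, and `/dev/sdX` is a placeholder for the actual device:

```shell
# create a partition covering only the first 80% of the drive,
# leaving the rest unpartitioned as extra spare area for the controller
parted /dev/sdX mklabel gpt
parted /dev/sdX mkpart primary ext4 0% 80%
mkfs.ext4 /dev/sdX1
```

For the spare area to be effective, the unpartitioned space should be free at the flash level, e.g. right after a secure erase.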
So let’s try this. The experiment is the same as before, with the difference that I use a 120GB partition, and the maximum file size is 110GiB.
You can see that the throughput in the first iteration is basically the same as with the full drive, but the second iteration performs much better: throughput never drops below 40MiB/sec and stays at about the 50MiB/sec level.
So I think this advice to use over-provisioning is worth considering if you want some protection against degradation and want to maintain throughput at a predictable level.
As always, you can find the raw results and the scripts used on our Benchmarks Launchpad.