November 23, 2014

Benchmarking IBM eXFlash™ DIMM with sysbench fileio

Diablo Technologies engaged Percona to benchmark IBM eXFlash™ DIMMs in various aspects. An eXFlash™ DIMM itself is quite an interesting piece of technology. In a nutshell, it’s flash storage, which you can put in the memory DIMM slots. Enabled by Diablo’s Memory Channel Storage™ technology, this practically means low latency and some unique performance characteristics. These […]

Testing Intel, Samsung & SanDisk SATA SSD

While working on the service architecture for one of our projects, I considered several SATA SSD options as the possible main storage for the data. The system will be quite write intensive, so my main interest is write performance when the drive is close to full capacity. After some research I picked several candidates (I show […]

Testing STEC SSD MACH16 200GB SLC

Following my previous benchmark of the Samsung 830, today I want to show results for the STEC MACH16 SATA card, 200GB in size. The card is based on SLC NAND and, according to the STEC website, is enterprise-grade storage.

Testing Samsung SSD SATA 256GB 830 – not all SSD created equal

I personally like PCIe-based flash, but from a pricing point of view our customers are looking for cheaper alternatives. SATA SSDs are one option. There are many products based on MLC technology, and the Intel 320 is, I would say, the most popular. I do not particularly like its write performance – I wrote about it before, […]

ext4 vs xfs on SSD

As ext4 is the de facto standard filesystem for many modern Linux systems, I get a lot of questions about whether it is good for SSD, or whether something else (e.g. xfs) should be used. Traditionally our recommendation is xfs, and that comes down to a known problem in ext3, where IO gets serialized per inode in O_DIRECT […]
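A quick way to observe that serialization is to run a multi-threaded O_DIRECT random-write test against a single file, so that all requests go through one inode; this is only a sketch, and the file size, thread count and duration are assumptions:

    # one test file forces all writes through a single inode (sysbench 0.4-style syntax)
    sysbench --test=fileio --file-num=1 --file-total-size=16G prepare

    # 16 threads doing random O_DIRECT writes; per-inode locking in ext3
    # limits how well this scales compared to xfs
    sysbench --test=fileio --file-num=1 --file-total-size=16G \
             --file-test-mode=rndwr --file-extra-flags=direct \
             --num-threads=16 --max-time=60 --max-requests=0 run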

Virtualization and IO Modes = Extra Complexity

It has taken years to get proper integration between the operating system kernel, device drivers and hardware so that caches and IO modes behave correctly. I remember us having a lot of trouble with fsync() not flushing the hard drive write cache, so data on the drive could potentially be lost on power failure. Happily […]
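The drive-level write cache in question can be checked and, if durability is the priority, disabled with hdparm; the device name below is just an example:

    # show whether the drive's volatile write cache is enabled (device name is an example)
    hdparm -W /dev/sda

    # disable the write cache so that fsync()'d data survives a power failure
    hdparm -W0 /dev/sda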

RAID throughput on FusionIO

Along with the maximum possible fsync/sec, it is interesting how different software RAID modes affect throughput on FusionIO cards. In short, the RAID10 modes really disappointed me; the detailed numbers follow. To get the numbers I ran a test with 16KB page size, random reads and writes, 1 and 16 threads, in O_DIRECT mode. FusionIO cards are […]
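A sysbench fileio invocation matching that description would look roughly like the following; the file size and run length are assumptions, as they are not given in the excerpt:

    # random reads and writes, 16KB requests, O_DIRECT, 16 threads (sysbench 0.4-style syntax)
    sysbench --test=fileio --file-total-size=20G --file-test-mode=rndrw \
             --file-block-size=16K --file-extra-flags=direct \
             --num-threads=16 --max-time=180 --max-requests=0 run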

fsyncs on software raid on FusionIO

As soon as we got a couple of FusionIO cards, the question arose of how to join them into a single volume for the database. FusionIO does not provide any mirroring/striping solution and relies entirely on OS tools for that. So for Linux we have software RAID and LVM, and I tried to follow up on my post How many fsync / sec […]
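As a sketch of the software RAID route, two FusionIO devices could be combined with mdadm like this; the device names, RAID level and mount point are assumptions for illustration:

    # mirror two FusionIO devices into a single md array (device names are assumptions)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/fioa /dev/fiob

    # or stripe them instead, trading redundancy for capacity and throughput:
    # mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/fioa /dev/fiob

    # put a filesystem on the array and mount it for the database
    mkfs.xfs /dev/md0
    mount -o noatime /dev/md0 /var/lib/mysql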

How many fsync / sec FusionIO can handle

I was recently asked how many fsync / sec (and therefore durable transactions / sec) we can get on a FusionIO card. It should be easy to test: let's take the sysbench fileio benchmark and run it.
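One way to drive such an fsync-heavy load is a run along these lines; the file size, duration and thread count are assumptions for illustration:

    # single small file, random writes, fsync() after every request (sysbench 0.4-style syntax)
    sysbench --test=fileio --file-num=1 --file-total-size=100M prepare
    sysbench --test=fileio --file-num=1 --file-total-size=100M \
             --file-test-mode=rndwr --file-fsync-all=on \
             --num-threads=1 --max-time=60 --max-requests=0 run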

So that’s 9229.35 req/sec, which is pretty impressive. For comparison, the same […]

NILFS – may be not yet

Inspired by NILFS: A File System to Make SSDs Scream, and by some customers asking if they should try NILFS on their SSD disks, I decided to run quick tests to see how it performs. Installation on our Ubuntu 8.10 system with an SSD disk (Intel X25-E, 32GB) was pretty straightforward, and I got a partition with NILFS without […]