As soon as you get a couple of FusionIO cards, the question arises of how to join them into a single space for the database. FusionIO does not provide any mirroring/striping solution of its own and relies entirely on OS tools for that.
On Linux that means software RAID or LVM. As a follow-up to my post
"How many fsync/sec can FusionIO handle", I checked what overhead we can expect from these additional layers on top of a FusionIO card.
The card I used is a
Fusion-io ioDrive Duo 320GB
; physically it is two cards on a single board, and it is visible to the OS as two devices.
For some reason I was not able to set up LVM on the cards, so I finished tests only for software RAID0 and RAID1.
I used an XFS filesystem mounted with the "-o nobarrier" option and ran the same test as in the previous post on the following configurations:
- Single card
- RAID0 over two cards
- RAID1 over two cards
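For reference, the RAID and filesystem setup can be sketched roughly as below. This is an assumed reconstruction, not the exact commands from the test: the device names /dev/fioa and /dev/fiob and the mount point /mnt/fio are placeholders, and the commands require the actual hardware to run.

```shell
# Assumed device names: FusionIO cards typically show up as /dev/fioa and /dev/fiob.

# RAID0 (striped) over the two cards:
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/fioa /dev/fiob

# ...or RAID1 (mirrored) over the two cards:
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/fioa /dev/fiob

# XFS filesystem, mounted without write barriers as in the test:
mkfs.xfs /dev/md0
mount -o nobarrier /dev/md0 /mnt/fio
```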
Here is what I got:
- Single card: 14050.59 req/sec
- RAID0: 13039.00 req/sec
- RAID1: 324.71 req/sec
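The relative numbers behind these results can be checked with a quick awk one-liner:

```shell
# Compute RAID0 overhead and RAID1 slowdown relative to a single card,
# using the req/sec figures measured above.
awk 'BEGIN {
  single = 14050.59; raid0 = 13039.00; raid1 = 324.71;
  printf "RAID0 overhead: %.1f%%\n", (1 - raid0 / single) * 100;
  printf "RAID1 slowdown: %.1fx\n", single / raid1;
}'
```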
With a single card the results are much better than in my previous test; probably the
Duo card has better characteristics.
RAID0 shows about 7% overhead (13039.00 vs 14050.59 req/sec), which is acceptable.
And something is terribly wrong with RAID1: I am getting a more than 40x drop in the number of fsyncs per second.
So it seems we can't place InnoDB transactional log files
on software RAID1 over FusionIO.
For reference, the command I used:
sysbench --test=fileio --file-num=1 --file-total-size=50G --file-fsync-all=on --file-test-mode=seqrewr --max-time=100 --file-block-size=4096 --max-requests=0 run
The next step is to check how the RAID setup affects IO throughput.