FlashCache: first experiments

I wrote about FlashCache earlier, and since then I have run a couple of benchmarks to see what performance benefits we can expect.
For these initial tries I took the sysbench oltp tests (read-only and read-write) and the case when the data fully fits into the L2 cache.
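For reference, a run of this kind in the sysbench 0.4 syntax of that time can be sketched as follows; the thread count, runtime, and connection parameters here are illustrative, not the exact settings used (those are on the Benchmarks Wiki):

```shell
# Prepare an 80M-row table, then run the read-only OLTP test.
# Values are illustrative; see the Benchmarks Wiki for the exact settings.
sysbench --test=oltp --oltp-table-size=80000000 \
         --mysql-user=root --mysql-socket=/var/lib/mysql/mysql.sock \
         prepare
sysbench --test=oltp --oltp-table-size=80000000 \
         --oltp-read-only=on --num-threads=16 \
         --max-time=600 --max-requests=0 \
         --mysql-user=root --mysql-socket=/var/lib/mysql/mysql.sock \
         run
```

Dropping `--oltp-read-only=on` gives the read-write variant of the test.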

I built FlashCache binaries for CentOS 5.4, kernel 2.6.18-164.15; you can download them from our testing stage. It took some effort to build the binary; you can find my instructions for CentOS on the FlashCache-dev mailing list. Most likely the binary will not work for a different CentOS / kernel combination.
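Building the module from source followed roughly this shape; the repository URL, package names, and the `KERNEL_TREE` variable are from the FlashCache sources of that era, and as noted above this is not guaranteed to work on every CentOS / kernel combination:

```shell
# Rough sketch: build the FlashCache kernel module against the running
# kernel's headers (paths and variable names per the FlashCache docs of
# that time; adjust for your own kernel).
yum install kernel-devel-$(uname -r) gcc make git
git clone https://github.com/facebook/flashcache.git
cd flashcache
make KERNEL_TREE=/usr/src/kernels/$(uname -r)-$(uname -m)
make install
```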

The full results, scripts and settings are on Benchmarks Wiki.

* Hardware: Dell PowerEdge R900
* IO subsystems:
  * RAID10: 8 SAS 2.5″ 15K disks (main storage)
  * SSD-1: Intel X25-E 32GB (1st gen firmware) (Model Number: SSDSA2SH032G1GN INTEL, Firmware Revision: 045C8621)
  * SSD-2: Intel X25-M 160GB (2nd gen) (Model Number: INTEL SSDSA2M160G2GC, Firmware Revision: 2CV102HA)
  * FlashCache: built over SSD-1 or SSD-2
* Filesystem: XFS, created as: mkfs.xfs -f -d su=16384,sw=40 /dev/sdc, mounted with the -o nobarrier option
* InnoDB file layout: ibdata1 and ib_logfile* are placed on a separate RAID partition (not on FlashCache or SSD)
* Benchmark: sysbench oltp (read-only and read-write modes), 80 mln rows (~18GB of data)
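A setup like the one above can be sketched with FlashCache's own `flashcache_create` utility; the device names here are placeholders, and `-p back` selects write-back caching:

```shell
# Create a write-back cache device over the SSD (/dev/sdb) in front of
# the RAID volume (/dev/sdc); device names are placeholders.
flashcache_create -p back cachedev /dev/sdb /dev/sdc

# Make the XFS filesystem on the resulting device-mapper device and
# mount it without barriers, as in the setup above.
mkfs.xfs -f -d su=16384,sw=40 /dev/mapper/cachedev
mount -o nobarrier /dev/mapper/cachedev /var/lib/mysql
```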

And I am comparing results with the data stored:

  • Directly on RAID
  • Directly on SSD
  • On RAID, but cached by FlashCache (which is located on the SSD)

The results for Read-Only case:

As expected, with FlashCache the results are very close to running directly on the SSD, with a small drop that could be related to the additional driver layer.

The Read-Write case is more interesting. Here I should mention that FlashCache uses a WriteBack caching algorithm, which keeps some amount of dirty pages before physically writing them to main storage. FlashCache allows you to set a target percentage of dirty pages, so it was interesting to see how this affects performance (I expected that a smaller amount of dirty pages puts more IO on the main storage, leading to lower performance).
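The dirty-page target is exposed as a sysctl; in the FlashCache releases of that time it looked roughly like this (the exact sysctl name varies between FlashCache versions):

```shell
# Set the target percentage of dirty pages to 20%.
# Early FlashCache releases used a global name like the one below;
# later versions switched to a per-device form such as
# dev.flashcache.<cachedev>.dirty_thresh_pct.
sysctl -w dev.flashcache.dirty_thresh_pct=20
```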

So here are the results for FlashCache based on the Intel X25-M, with 20% and 80% dirty pages:

Read-Write, X25-M

I am pretty impressed that even with 20% dirty pages we still get a decent improvement.

And here are the results for the Intel X25-E:

Read-Write, X25-E

For some reason the results are much worse compared to the X25-M; probably the second-generation X25-M has better performance characteristics (I will post sysbench fileio benchmarks). Also, what is attractive about the X25-M is that even at the biggest capacity, 160GB, you can get it for quite an acceptable price.

In general FlashCache leaves a pretty good impression. I did not have any significant problems (not to speak of crashes or data loss); probably the biggest challenge is to get a binary driver for your kernel :). I will do more benchmarks, especially with data that exceeds memory, and it will also be interesting to see how it works when we put FlashCache on FusionIO cards instead of Intel SSDs.


Comments (18)

  • Andy

    Did you use BBU for your RAID 10?

    May 10, 2010 at 7:07 pm
  • Vadim


Sure, RAID10 is with BBU.

    May 10, 2010 at 7:33 pm
  • peter


    It may be interesting to compare X25-E to X25-M partition of the same size. The larger partition can cause significantly different cache behavior even if data fits in it completely.

    May 10, 2010 at 10:48 pm
  • Dimitri


    Any details about which part of the SSD was used for FlashCache? (whole disk? partition? etc.)

    As long as the SSD can contain all the data, it's clearly simpler to move your database(s) to the SSD.. But what if not? – it'll be interesting to see how well FlashCache can improve performance if it can contain only 10% of the data (for example) – i.e. what the price/performance ratio will be of adding one 32GB SSD as FlashCache to an existing 320GB database, compared to replacing the storage with 10 x 32GB SSDs..


    May 11, 2010 at 2:24 am
  • Domas

    Dimitri, as with any other I/O based benchmark, flashcache performance improvements depend on data distribution. If you have long tail but very hot data at the head of it, it will help. If your access pattern is completely random – not so much.

    May 11, 2010 at 3:50 am
  • Dimitri

    Domas, completely agree 🙂
    that's why it's important to know the worst case..

    At the same time it would be nice to have an easy way to monitor the data access distribution in general, and then predict the possible performance gain..


    May 11, 2010 at 4:25 am
  • Kevin Burton

    Another hack is to mount the SSD as swap and tell InnoDB to use say 100GB of memory. I haven’t tested this but it might be a fun hack 🙂

    May 11, 2010 at 8:12 pm
  • markt

    Very interesting, thanks for the hard numbers!

    May 11, 2010 at 11:28 pm
  • Joerg M.

    I don't know if this test is really realistic, as it delivers performance without guaranteeing consistency:
    – usage of the nobarrier option
    – using an SSD without battery cache protection as a write-back cache

    Without a protected cache, I would suggest 0% dirty pages…

    May 12, 2010 at 1:10 am
  • Vadim


    Your points are totally correct. I asked the FlashCache developers if we can use WriteThrough mode instead of WriteBack, as even dirty_pages=0% still does not guarantee full consistency.

    We may also disable the write cache on the SSD (something I am going to test) or use some mirroring at the SSD level.

    May 12, 2010 at 10:59 am
  • Vadim


    I used the whole partition of the SSD.

    Sure, I am going to test cases when the data exceeds the SSD capacity.

    May 12, 2010 at 11:00 am
  • Vadim


    I tried that.
    IMO the Linux kernel is not able to work properly in that mode.

    May 25, 2010 at 6:54 am
  • Mark Callaghan

    The Facebook patch for MySQL supports that. FlashCache is much better.

    May 25, 2010 at 7:03 am
  • darkfader

    80% dirty rate and barriers off so it’s “fast”…

    Err, good that you also ran some benchmarks that apply to people whose data is of more value than a social network's or an online store's.

    December 15, 2011 at 8:50 am
  • Clemens Eisserer

    It would be really interesting to see how flashcache copes with barriers enabled – for many reasons I would not be comfortable with barriers disabled.

    With flashcache being copy-back, the relative performance gains should be even better =)

    April 19, 2012 at 2:31 am
  • Lee

    You said
    “* InnoDB files layout: ibdata1 and ib_logfile* are placed on separate RAID partition ( not on FlashCache or SSD)”

    Why did you exclude ibdata1 from flashcache?
    I think flashcache is a good place to store ibdata (the InnoDB system tablespace).

    And is Xtrabackup compatible with Flashcache?


    May 3, 2012 at 12:15 am
