Virident vCache vs. FlashCache: Part 1


(This is part one of a two-part series.) Over the past few weeks I have been looking at a preview release of Virident’s vCache software, a kernel module and set of utilities designed to provide functionality similar to that of FlashCache. In particular, Virident engaged Percona to do a usability and feature-set comparison between vCache and FlashCache, and also to conduct some benchmarks for the use case where the MySQL working set is significantly larger than the InnoDB buffer pool (thus leading to a lot of buffer pool disk reads) but still small enough to fit into the cache device. In this post and the next, I’ll present some of those results.

Disclosure: The research and testing for this post series was sponsored by Virident.

Usability is, to some extent, a subjective call: I may have preferences for or against a certain mode of operation that others do not share, so readers may well disagree with me. On this point, though, I call it an overall draw between vCache and FlashCache.

Ease of basic installation. Setting up vCache was simply a matter of installing two RPMs and running a couple of commands to enable vCache on the PCIe flash card (a Virident FlashMAX II) and set up the cache device with the command-line utilities supplied with one of the RPMs. Moreover, the vCache software is built into the Virident driver, so there is no additional module to install. FlashCache, on the other hand, requires building a separate kernel module in addition to whatever flash memory driver you have already installed, and further configuration requires modifying assorted sysctls. I would also argue that the vCache documentation is superior. Winner: vCache.
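For comparison, the FlashCache side of a minimal setup looks roughly like this. This is a sketch only: the device names /dev/ssd and /dev/sdb are placeholders, and the exact build steps depend on your kernel version and distribution.

```shell
# Build and load the FlashCache kernel module from source
git clone https://github.com/facebookarchive/flashcache.git
cd flashcache
make
sudo make install
sudo modprobe flashcache

# Create a writeback ("back") cache device named "cachedev", using the
# flash device as the cache and the HDD as the backing store
sudo flashcache_create -p back cachedev /dev/ssd /dev/sdb
```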

Ease of post-setup modification / advanced installation. Many of the FlashCache device parameters can be modified simply by echoing the desired value to the appropriate sysctl setting; vCache provides a command-line binary that can modify many of the same parameters, but doing so requires a cache flush, detach, and reattach. Winner: FlashCache.
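To illustrate the FlashCache side of this: its tunables live under per-device sysctls, so changes take effect on the fly with no flush/detach/reattach cycle. The cache device name "ssd+sdb" below is a placeholder; the sysctl names are from the FlashCache documentation.

```shell
# Raise the dirty-block threshold to 30% at runtime
echo 30 > /proc/sys/dev/flashcache/ssd+sdb/dirty_thresh_pct

# Skip caching of sequential I/O runs larger than 1024 KB
echo 1024 > /proc/sys/dev/flashcache/ssd+sdb/skip_seq_thresh_kb
```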

Operational Flexibility: Both solutions share many features here: both allow whitelisting and blacklisting of PIDs or simply running in a “cache everything” mode, and both support skipping sequential IO, adjusting the dirty page threshold, flushing the cache on demand, and time-based cache flushing. Some of these features operate differently in vCache than in FlashCache, however. For example, a manual cache flush with vCache is a blocking operation. With FlashCache, echoing “1” to the do_sync sysctl of the cache device triggers a cache flush, but it happens in the background; countdown messages are written to syslog as the operation proceeds, but the device never reports that it has actually finished. Both kinds of flushing are useful in different situations, and I’d like to see a non-blocking background flush in vCache, but if I had to choose one or the other, I’ll take blocking and modal over fire-and-forget any day. FlashCache does have the nice ability to switch between FIFO and LRU for its reclaim algorithm; vCache does not, and this is something that could prove useful in certain situations. Winner: FlashCache.
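The two FlashCache behaviors described above map onto two per-device sysctls (the cache device name "ssd+sdb" is again a placeholder):

```shell
# Trigger a cache flush; this returns immediately and the flush proceeds
# in the background, with countdown messages logged to syslog
echo 1 > /proc/sys/dev/flashcache/ssd+sdb/do_sync

# Switch the reclaim algorithm: 0 = FIFO (the default), 1 = LRU
echo 1 > /proc/sys/dev/flashcache/ssd+sdb/reclaim_policy
```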

Operational Monitoring: Both solutions offer plenty of statistics; the main difference is that FlashCache stats can be pulled from /proc but vCache stats have to be retrieved by running the vgc-vcache-monitor command. Personally, I prefer “cat /proc/something” but I’m not sure that’s sufficient to award this category to FlashCache. Winner: None.
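Side by side, the two approaches look like this. The FlashCache /proc path is per cache device ("ssd+sdb" is a placeholder), and the vCache utility is shown without arguments; the actual invocation may require the cache device to be specified.

```shell
# FlashCache: statistics are exposed directly through /proc
cat /proc/flashcache/ssd+sdb/flashcache_stats

# vCache: statistics come from the supplied command-line monitor instead
vgc-vcache-monitor
```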

Time-based Flushing: This wouldn’t seem like it should be a separate category, but because the behavior appears to differ substantially between the two cache solutions, I’m listing it here. The vCache manual indicates that the “flush period” specifies the time after which dirty blocks will be written to the backing store, whereas FlashCache has a setting called “fallow_delay”, defined in the documentation as the time period before “idle” dirty blocks are cleaned from the cache device. It is not entirely clear whether these mechanisms operate in the same fashion, but based on the documentation, it appears that they do not. I find the vCache implementation the more useful of the two. Winner: vCache.
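On the FlashCache side, the fallow mechanism is a single per-device sysctl, with the value given in seconds (device name is a placeholder); vCache’s flush period is set through its own command-line utility, whose exact syntax I won’t reproduce here.

```shell
# Clean dirty blocks that have sat idle ("fallow") for 15 minutes;
# setting this to 0 disables idle cleaning entirely
echo 900 > /proc/sys/dev/flashcache/ssd+sdb/fallow_delay
```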

Although nobody likes a tie, if you add up the scores, usability is a 2-2-1 draw between vCache and FlashCache. There are things that I really liked better with FlashCache, and there are other things that I thought vCache did a much better job with. If I absolutely must pick a winner in terms of usability, then I’d give a slight edge to FlashCache due to configuration flexibility, but if the GA release of vCache added some of FlashCache’s additional configuration options and exposed statistics via /proc, I’d vote in the other direction.

Stay tuned for part two of this series, in which we’ll take a look at some benchmarks. Spoiler: the margin of victory is not razor-thin for either side; vCache outperforms FlashCache by a landslide.



Comments (3)

  • Kazimieras

    > “but still small enough to fit into the cache device”
    I don’t see why you would test vCache or FlashCache under such conditions, when you could just use the SSD as the disk instead?

    May 16, 2013 at 8:59 am
  • Mark Callaghan

    Which of the open source flashcache solutions are most popular?
    * Linux dm-cache
    * Facebook flashcache

    May 16, 2013 at 11:12 am
  • Ernie Souhrada

    It’s all about performance.

    Suppose I have a 2TB database, but maybe my working set is only 500GB. What are my options for achieving the best performance? I could buy a ton of RAM, but that isn’t really going to help me when I need to start writing massive amounts of data to disk. I could go out and buy a bunch of SSDs, RAID them together, and store all the data on that, and that will be better than HDD RAID, but why spend money to store all my data on SSD when some of that data might just be sitting idle? Or what if my database is really large, say in the multi-terabyte range? I might not be able to get enough SSD storage into my server at a cost-effective price point, or I might end up saturating my RAID controller such that it now becomes the bottleneck.

    Also, keep in mind that while SSDs are a lot faster than regular spinning disk, a PCIe flash card is still massively faster than an SSD, both in terms of throughput and in terms of latency. I have a Samsung 830 in the laptop I’m using to write this comment. It’s pretty fast. But it’s still significantly slower (and more CPU-intensive under high load) than PCIe flash. For example, see – the card referenced there is a Micron PCIe card, not Virident, but there’s a comparison of TPCC-MySQL performance between that card and a traditional SSD – the PCIe card is about 3x faster.

    By using a FlashCache/vCache approach with this sort of use case, I’m effectively allowing my database to run in memory (not actual RAM, true, but much closer to memory speed than a regular SSD or HDD RAID), and I can write data out to spinning disk at a nice sustainable rate.

    I don’t know, actually. The only one that I’ve ever personally encountered is Facebook FlashCache. I’d never even heard of the STEC caching solution prior to your comment. Maybe another Perconian will have some insight here.

    May 17, 2013 at 4:58 am
