Sysbench Benchmark for MongoDB – v0.1.0 Performance Update


Two months ago I posted a performance comparison of Sysbench running on MongoDB versus MongoDB with Fractal Tree Indexes v0.0.2. That benchmark showed a 133% improvement in throughput. Nice, but our engineering road map included a lock-refinement effort that we believed would really boost performance, and that work is now available in v0.1.0. The benchmark application itself is unchanged and available on GitHub.

For anyone curious about Sysbench itself, the details are available in the prior blog post. The only change for this run was hardware: our Sun x4150 server recently began rebooting itself at random times, so it has been replaced with a newer HP server. Also, during the benchmark I run a small application that consumes all but 16GB of the server's RAM, to make sure I am not running an in-memory benchmark.
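The memory-consuming helper isn't published with the benchmark, but a minimal sketch (illustrative only; not the actual tool) could look like the following: allocate an anonymous mapping of the desired size and touch every page so the OS backs it with physical RAM, shrinking what remains for the page cache.

```python
# Minimal sketch of a RAM-consuming helper (illustrative; not the actual
# tool used in the benchmark). Touching one byte per page forces the OS to
# back the whole mapping with physical memory.
import mmap

PAGE_SIZE = 4096

def consume_ram(bytes_to_hold):
    """Allocate an anonymous mapping and touch every page so it is resident."""
    buf = mmap.mmap(-1, bytes_to_hold)  # anonymous, private, writable
    for offset in range(0, bytes_to_hold, PAGE_SIZE):
        buf[offset] = 1  # one write per page => one resident physical page
    return buf  # keep the reference alive for the duration of the benchmark

# On the 72GB benchmark server, holding all but 16GB would be:
#   held = consume_ram((72 - 16) * 2**30)
```

In practice the region should also be locked (e.g. with mlock(2)) so the held pages cannot simply be swapped out; that part is omitted here.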

Benchmark Environment

  • HP Proliant DL380 G6, (2) Xeon 5520, 72GB RAM, P410i (512MB, write-back), 8x10K SAS/RAID 0
  • CentOS 5.8 (64-bit), XFS file system
  • MongoDB v2.2.3 and MongoDB v2.2.0 + Fractal Tree Indexes

Benchmark Results – Throughput

As the performance graph shows, throughput at all concurrency levels was higher in v0.0.2 than in pure MongoDB, and it is now substantially higher still in v0.1.0. At 128 concurrent threads we are now 1507% faster than pure MongoDB (207.08 transactions per second vs. 12.89 transactions per second).
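As a quick sanity check, the quoted speedup follows directly from the two throughput numbers:

```python
# Relative speedup of v0.1.0 over pure MongoDB at 128 concurrent threads.
tokumx_tps = 207.08   # v0.1.0 transactions per second
mongodb_tps = 12.89   # pure MongoDB transactions per second
speedup_pct = (tokumx_tps / mongodb_tps - 1) * 100
print(f"{speedup_pct:.0f}% faster")  # prints: 1507% faster
```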

Benchmark Results – Raw Insertion

Prior to running the benchmark itself, we run 8 simultaneous loaders, each inserting 1000-document batches into its own collection. The raw Sysbench insertion performance of v0.1.0 is now 409% faster than pure MongoDB (60,391 inserts per second vs. 11,858 inserts per second).
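The actual loaders are part of the benchmark on GitHub; as a rough sketch (field names and helper names here are hypothetical), each loader simply builds 1,000-document batches and inserts them into its own collection:

```python
# Illustrative sketch of one loader (not the actual benchmark code; the real
# loaders are in the GitHub repository). Field names are hypothetical.
def make_batch(start_id, batch_size):
    """Build one batch of Sysbench-style documents."""
    return [{"_id": start_id + i, "k": 0, "c": "x" * 120, "pad": "y" * 60}
            for i in range(batch_size)]

def load_collection(collection, num_docs, batch_size=1000):
    """Insert num_docs documents in batches of batch_size.

    `collection` is assumed to expose a pymongo-style insert_many().
    Eight of these loaders would run simultaneously, one per collection.
    """
    for start in range(0, num_docs, batch_size):
        batch = make_batch(start, min(batch_size, num_docs - start))
        collection.insert_many(batch)
```

Batching amortizes the per-request overhead, which is why the loaders insert 1000 documents at a time rather than one by one.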

We will continue to share our results with the community and get people’s thoughts on applications where this might help, suggestions for next steps, and any other feedback. Please drop us a line if you are interested in an evaluation.



Comments (15)

  • Brian Cavanagh

    That is some impressive work.

    May 29, 2013 at 2:26 pm
  • Princess Pieu

    This Is So Cool….
    Good Work….

    May 29, 2013 at 3:30 pm
  • Bruno Martínez

    Did you try remounting the drive to clear caches, instead of consuming all memory?

    May 30, 2013 at 6:55 pm
    • Tim Callaghan

      Bruno, not sure what you mean by that, can you elaborate?

      May 30, 2013 at 6:58 pm
    • Levi

      It wasn’t the database being in memory from previous runs that they were trying to avoid; it was having a database that fits in memory within the same run (i.e., a DB working set much bigger than memory).

      If it completely fits in memory and you have a RAID write cache to allow disk-write reordering, then I expect the Toku benefit wouldn’t be nearly so big (less I/O from better compression would be the only advantage?)

      June 5, 2013 at 6:37 am
      • Tim Callaghan

        Agreed, but we do provide some advantages to workloads that aren’t IO bound (i.e., clustering secondary indexes and compression).

        June 5, 2013 at 10:40 am
  • Ian Ehlert

    Could we see these same benchmarks while running MongoDB 2.4? The V8 engine changes greatly improved concurrency over version 2.2.3: http://docs.mongodb.org/manual/release-notes/2.4-javascript/

    June 3, 2013 at 6:25 pm
    • Tim Callaghan

      Ian, sorry for the delay. My server was consumed for over a day running the iiBench benchmark.

      The performance is almost exactly the same as v2.2.3; I suspect the bottleneck isn’t JavaScript concurrency but rather the write locking.

      Here are the raw results for MongoDB v2.4.4 (# threads / tps):
      0001 / 3.17
      0002 / 7.43
      0004 / 11.50
      0008 / 13.10
      0016 / 13.05
      0032 / 12.01
      0064 / 12.71
      0128 / 12.26
      0256 / 12.38
      0512 / 11.62
      1024 / 13.49

      June 6, 2013 at 8:06 pm
  • Howard

    We are using MongoDB 2.4 and the index cannot fit in system memory, so we get quite a few page faults and the system load is quite high. Can Fractal Tree Indexes help?

    June 5, 2013 at 1:50 pm
    • Tim Callaghan

      Yes, TokuMX is able to maintain performance long after the indexes no longer fit in RAM. I have another benchmark coming this afternoon that shows the iiBench performance of TokuMX vs. MongoDB.

      June 5, 2013 at 2:25 pm
      • Howard

        Yes, this is much more important than raw speed.

        Honestly, most people are satisfied with MongoDB performance when the working set fits in memory. Adding 700% seems nice but is not a direct benefit.

        However, if TokuMX can maintain performance when the index size or working set is much larger than system memory, I can say this is a breakthrough, and most people using MongoDB would consider switching as soon as possible. Even people using Redis would consider your solution.

        June 5, 2013 at 2:47 pm
        • Tim Callaghan

          Howard, thanks for the feedback.

          June 6, 2013 at 8:07 pm
  • Richard Bensley

    Can we please see some information on how MVCC and ACID compliant transactions function with TokuMX?

    June 5, 2013 at 2:41 pm
  • itsme

    hi Tim,

    what kind of software are you using to make the graph? thanks 🙂

    October 27, 2015 at 3:23 am
