
Sysbench Benchmark for MongoDB – v0.1.0 Performance Update

May 28, 2013 | Posted In: Tokutek, TokuView


Two months ago I posted a performance comparison running Sysbench on MongoDB versus MongoDB with Fractal Tree Indexes v0.0.2. That benchmark showed a 133% improvement in throughput. Nice, but our engineering team had lock-refinement work on our road-map that we believed would really boost performance, and that work is now available in v0.1.0. The benchmark application itself is unchanged and available on GitHub.

For anyone curious about Sysbench itself, the details are available in the prior blog post. Two things changed for this run. First, the hardware: our Sun x4150 server recently began rebooting itself at random times, so it has been replaced with a newer HP server. Second, during the benchmark I now run a small application that consumes all but 16GB of RAM on the server to make sure I am not running an in-memory benchmark.
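To give an idea of what such a memory hog can look like, below is a minimal Python sketch that reads the machine's total RAM from /proc/meminfo, allocates everything beyond a 16GB target, and touches each page so it stays resident. This is an illustration under assumed conditions (Linux, swap disabled), not the actual tool used for the benchmark.

```python
#!/usr/bin/env python3
"""Pin down all but ~16GB of RAM so the benchmark is not purely in-memory.

Minimal sketch, assuming Linux (/proc/meminfo) and swap disabled; this is
not the exact tool used for the benchmark, just an illustration of the idea.
"""
import time

TARGET_FREE_BYTES = 16 * 1024**3  # leave roughly 16GB for MongoDB + page cache

def mem_total_bytes():
    # MemTotal is reported in kB in /proc/meminfo
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) * 1024
    raise RuntimeError("MemTotal not found")

def main():
    to_consume = mem_total_bytes() - TARGET_FREE_BYTES
    if to_consume <= 0:
        print("Machine already has <= 16GB of RAM; nothing to do")
        return
    print("Consuming %.1f GB" % (to_consume / float(1024**3)))
    hog = bytearray(to_consume)
    # Touch one byte per 4KB page so the pages are actually resident,
    # not just lazily reserved by the allocator. This loop is slow in
    # Python but only runs once at startup.
    for i in range(0, to_consume, 4096):
        hog[i] = 1
    print("Done; holding memory until killed")
    while True:
        time.sleep(60)

if __name__ == "__main__":
    main()
```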

Benchmark Environment

  • HP Proliant DL380 G6, (2) Xeon 5520, 72GB RAM, P410i (512MB, write-back), 8x10K SAS/RAID 0
  • Centos 5.8 (64-bit), XFS file system
  • MongoDB v2.2.3 and MongoDB v2.2.0 + Fractal Tree Indexes

Benchmark Results – Throughput

As the performance graph shows, throughput at every concurrency level was already higher with v0.0.2 than with pure MongoDB, and it is now substantially higher again with v0.1.0. At 128 concurrent threads we are now 1507% faster than pure MongoDB (207.08 transactions per second vs. 12.89 transactions per second, i.e. (207.08 − 12.89) / 12.89 ≈ 15.07, roughly a 16x improvement).

Benchmark Results – Raw Insertion

Prior to running the benchmark itself, we run 8 simultaneous loaders. Each loader inserts 1,000 documents per batch into a different collection. The raw Sysbench insertion performance of v0.1.0 is now 409% faster than pure MongoDB (60,391 inserts per second vs. 11,858 inserts per second).
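For readers who want to reproduce the loading pattern, here is a rough Python sketch using pymongo: eight parallel loader processes, each inserting 1,000-document batches into its own collection. It is not the actual benchmark code (that is on GitHub); the document shape, row counts, and connection details below are illustrative assumptions.

```python
#!/usr/bin/env python3
"""Rough sketch of the load phase: 8 parallel loaders, 1,000-document batches,
one collection per loader. Not the actual benchmark (see sysbench-mongodb on
GitHub); field names, counts, and connection details are illustrative."""
import random
import string
from multiprocessing import Process

from pymongo import MongoClient

NUM_LOADERS = 8
DOCS_PER_COLLECTION = 10000000   # illustrative; pick your own size
BATCH_SIZE = 1000

def random_string(n=100):
    return "".join(random.choice(string.ascii_letters) for _ in range(n))

def loader(loader_id):
    # One client per process; connecting after fork avoids sharing sockets.
    client = MongoClient("localhost", 27017)
    coll = client["sbtest"]["sbtest%d" % loader_id]  # one collection per loader
    batch = []
    for i in range(DOCS_PER_COLLECTION):
        batch.append({"_id": i, "k": random.randint(0, 10**9), "c": random_string()})
        if len(batch) == BATCH_SIZE:
            coll.insert_many(batch, ordered=False)   # 1,000 docs per insert batch
            batch = []
    if batch:
        coll.insert_many(batch, ordered=False)

if __name__ == "__main__":
    procs = [Process(target=loader, args=(i,)) for i in range(NUM_LOADERS)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```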

We will continue to share our results with the community and get people’s thoughts on applications where this might help, suggestions for next steps, and any other feedback. Please drop us a line if you are interested in an evaluation.


15 Comments

    • It wasn’t the database being in memory from previous runs that they were trying to avoid, it was having a database that fits in memory within the same run (i.e., the DB working set should be much bigger than memory).

      If it completely fits in memory and you have a RAID write cache to allow disk write reordering, then I expect the toku benefit wouldn’t be nearly so big (less I/O from better compression would be the only advantage?).

      • Agreed, but we do provide some advantages to workloads that aren’t IO bound (i.e., clustering secondary indexes and compression).

  • Could we see these same benchmarks while running MongoDB 2.4? The V8 engine changes greatly improved concurrency over the 2.2.3 version: http://docs.mongodb.org/manual/release-notes/2.4-javascript/

    • Ian, sorry for the delay. My server was consumed for over a day running the iiBench benchmark.

      The performance is almost exactly the same as v2.2.3; I suspect the bottleneck isn’t the JavaScript concurrency but rather the write locking.

      Here are the raw results for MongoDB v2.4.4 (# threads | tps):
      0001 | 3.17
      0002 | 7.43
      0004 | 11.50
      0008 | 13.10
      0016 | 13.05
      0032 | 12.01
      0064 | 12.71
      0128 | 12.26
      0256 | 12.38
      0512 | 11.62
      1024 | 13.49

  • We are using MongoDB 2.4 and the indexes cannot fit in system memory, so we see quite a few page faults and the system load gets quite high. Can Fractal Tree indexes help?

    • Yes, TokuMX is able to maintain performance long after the indexes no longer fit in RAM. I have another benchmark coming this afternoon that shows the iiBench performance of TokuMX vs. MongoDB.

      • Yes, this is much more important than raw speed.

        Honestly, most people are satisfied with MongoDB performance when the working set fits in memory. Adding 700% seems nice but is not a direct benefit.

        However, if TokuMX can maintain performance when the index size or working set is much larger than system memory, I would call that a breakthrough, and most people using MongoDB would consider changing as soon as possible. Even people using Redis would consider your solution.

  • Can we please see some information on how MVCC and ACID-compliant transactions function with TokuMX?

    • My colleague, Leif Walsh, said it best. “TokuMX offers multi-document transactional semantics without application changes (snapshot reads), as well as protocol support for multi-statement (read-modify-write style) transactions, within a single shard. We are still designing how we want to present transactions in a sharded cluster.”

      Blogs we have done on this:
      – http://www.tokutek.com/2013/04/mongodb-transactions-yes/
      – http://www.tokutek.com/2013/04/mongodb-multi-statement-transactions-yes-we-can/
