October 25, 2014

ScaleArc: Real-world application testing with WordPress (benchmark test)

ScaleArc recently hired Percona to perform various tests on its database traffic management product. This post is the outcome of the benchmarks carried out by ScaleArc co-founder and chief architect Uday Sawant and me. The goal of this benchmark was to identify ScaleArc’s overhead using a real-world application – the world’s most popular (according to […]

Sample datasets for benchmarking and testing

Sometimes you just need some data to test and stress things. But randomly generated data is awful — it doesn’t have realistic distributions, and it isn’t easy to understand whether your results are meaningful and correct. Real or quasi-real data is best. Whether you’re looking for a couple of megabytes or many terabytes, the following […]

Using sysbench 0.5 for performing MySQL benchmarks

Given the recent excitement & interest around OpenStack I wanted to make sure I was ready to conduct appropriate evaluations of system performance. I generally turn to sysbench since it comes with a variety of different tests (accessed via the --test= option interface), including: fileio – File I/O test, cpu – CPU performance test, memory – […]
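
For readers who want to try that interface locally, here is a minimal Python sketch of driving sysbench 0.5's --test= option; it assumes sysbench 0.5 is installed and on the PATH, and the sizes and limits shown are illustrative placeholders rather than the settings used in the post.

# Minimal sketch: invoking sysbench 0.5 tests via its --test= option from Python.
# Assumes sysbench 0.5 is on the PATH; the sizes/limits below are illustrative.
import subprocess

def run_sysbench(test, extra_args=(), action="run"):
    """Run one sysbench test (e.g. fileio, cpu, memory) and return its text output."""
    cmd = ["sysbench", "--test=" + test] + list(extra_args) + [action]
    return subprocess.check_output(cmd, universal_newlines=True)

if __name__ == "__main__":
    # CPU test: time how long it takes to verify primes up to the given limit.
    print(run_sysbench("cpu", ["--cpu-max-prime=20000"]))
    # File I/O test: "prepare" creates the test files, "run" does the I/O, "cleanup" removes them.
    run_sysbench("fileio", ["--file-total-size=1G"], action="prepare")
    print(run_sysbench("fileio", ["--file-total-size=1G", "--file-test-mode=rndrw"]))
    run_sysbench("fileio", ["--file-total-size=1G"], action="cleanup")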

Benchmarking IBM eXFlash™ DIMM with sysbench fileio

Diablo Technologies engaged Percona to benchmark IBM eXFlash™ DIMMs in various respects. An eXFlash™ DIMM itself is quite an interesting piece of technology. In a nutshell, it’s flash storage that you can put in the memory DIMM slots. Enabled by Diablo’s Memory Channel Storage™ technology, in practice this means low latency and some unique performance characteristics. These […]

Benchmark: SimpleHTTPServer vs pyclustercheck (twisted implementation)

GitHub user Adrianlzt provided a python-twisted alternative version of pyclustercheck per discussion on issue 7. Due to sporadic performance issues noted with the original implementation in SimpleHTTPServer, the benchmarks which I’ve included as part of the project on GitHub use the multi-mechanize library, with a cache time of 1 sec, 2 x 100 thread pools, a 60s ramp-up time, and 600s […]
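
To make the comparison concrete, below is a minimal Python sketch of the two serving styles being compared: a blocking stdlib HTTP handler versus an asynchronous twisted.web resource. The class names, port and response text are placeholders, not pyclustercheck’s actual code, and the stdlib variant uses Python 3’s http.server in place of the Python 2 SimpleHTTPServer module mentioned above.

# Illustrative sketch only: stdlib handler vs. twisted.web resource.
# Names, port and response text are placeholders, not pyclustercheck's code.

# --- blocking stdlib approach (http.server, the successor of SimpleHTTPServer) ---
from http.server import BaseHTTPRequestHandler, HTTPServer

class CheckHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Percona XtraDB Cluster Node is synced.\n"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def serve_stdlib(port=8000):
    HTTPServer(("", port), CheckHandler).serve_forever()

# --- asynchronous twisted approach: a reactor-driven web resource ---
from twisted.web import resource, server
from twisted.internet import reactor

class CheckResource(resource.Resource):
    isLeaf = True
    def render_GET(self, request):
        return b"Percona XtraDB Cluster Node is synced.\n"

def serve_twisted(port=8000):
    reactor.listenTCP(port, server.Site(CheckResource()))
    reactor.run()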

Tips on benchmarking Go + MySQL

We just released our new percona-agent (https://github.com/percona/percona-agent), the agent that works with Percona Cloud Tools, as open source. This agent is written in Go. I will give a webinar titled “Monitoring All MySQL Metrics with Percona Cloud Tools” on June 25 that will cover the new features in percona-agent and Percona Cloud Tools, […]

ScaleArc: Benchmarking with sysbench

ScaleArc recently hired Percona to perform various tests on its database traffic management product. This post is the outcome of the benchmarks carried out by Uday Sawant (ScaleArc) and me. You can also download the report directly as a PDF here. The goal of these benchmarks is to identify the potential overhead of the ScaleArc […]

Before every release: A glimpse into Percona XtraDB Cluster CI testing

I spoke last month at linux.conf.au 2014 in Perth, Australia, and one of my sessions focused on the “Continuous Integration (CI) testing of Percona XtraDB Cluster (PXC)” at the Developer, Testing, Release and CI miniconf. Here is the video of the presentation, and here is the presentation itself: Percona XtraDB Cluster before every release: Glimpse into CI […]

TokuDB vs InnoDB in timeseries INSERT benchmark

This post is a continuation of my research into TokuDB’s storage engine to understand whether it is suitable for timeseries workloads. While loading an empty table with LOAD DATA INFILE shows great results for TokuDB, what’s more interesting is to see some realistic workloads. So this time let’s take a look at the INSERT benchmark.

Testing Intel, Samsung & SanDisk SATA SSD

While working on the service architecture for one of our projects, I considered several SATA SSD options as the possible main storage for the data. The system will be quite write intensive, so the main interest is write performance at capacities close to the full size of the storage. After some research I picked several candidates (I show […]