Search Results for: cpu bound innodb

InnoDB vs TokuDB in LinkBench benchmark

Previously I tested Tokutek’s Fractal Trees (TokuMX & TokuMXse) as MongoDB storage engines – today let’s look into the MySQL area. I am going to use a modified LinkBench under a heavy I/O load, comparing InnoDB without compression, InnoDB with 8k compression, and TokuDB with quicklz compression. The uncompressed data size is 115GiB, and the cache size is 12GiB for InnoDB […]
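
For reference, a minimal sketch of how these two compression setups differ at the table level (the table names and columns here are illustrative, not the benchmark's actual schema):

```sql
-- InnoDB with 8k compressed pages (needs innodb_file_per_table,
-- and the Barracuda file format on older MySQL versions)
CREATE TABLE linktable_innodb (
  id      BIGINT NOT NULL PRIMARY KEY,
  payload VARBINARY(255)
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

-- TokuDB with quicklz row compression
CREATE TABLE linktable_tokudb (
  id      BIGINT NOT NULL PRIMARY KEY,
  payload VARBINARY(255)
) ENGINE=TokuDB ROW_FORMAT=TOKUDB_QUICKLZ;
```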

Update on the InnoDB double-write buffer and EXT4 transactions

In a post written a few months ago, I found that using EXT4 transactions with the “data=journal” mount option improves write performance significantly, by 55%, without putting data at risk. Many people commented on the post saying they were not able to reproduce the results, so I decided to investigate further in order […]

InnoDB crash recovery speed in MySQL 5.6

It has been a while since I have looked at InnoDB crash recovery. A lot has changed in the last few years – we have serious crash recovery performance improvements in MySQL 5.5 and MySQL 5.6, we have solid-state drives rising as the typical high-performance I/O subsystem, and we also have the ability to […]

TokuDB vs InnoDB in timeseries INSERT benchmark

This post is a continuation of my research into TokuDB’s storage engine to understand whether it is suitable for timeseries workloads. While loading data with LOAD DATA INFILE into an empty table shows great results for TokuDB, it is more interesting to see some realistic workloads. So this time let’s take a look at the INSERT benchmark.
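
To make the distinction concrete, here is a rough sketch of the two insert paths (the metrics table, its columns, and the file path are hypothetical, not the post's actual schema):

```sql
-- Bulk load path: one LOAD DATA INFILE into an empty table
LOAD DATA INFILE '/tmp/metrics.csv'
  INTO TABLE metrics
  FIELDS TERMINATED BY ','
  (ts, device_id, value);

-- Incremental path exercised by the INSERT benchmark:
-- a steady stream of small multi-row INSERTs over time
INSERT INTO metrics (ts, device_id, value) VALUES
  ('2013-10-01 00:00:01', 42, 0.731),
  ('2013-10-01 00:00:01', 43, 0.522);
```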

InnoDB compression woes

InnoDB compression is getting some traction, and I see quite contradictory opinions: some people have successful deployments in production, while others say that compression in its current implementation is useless. To get an initial impression of the performance, I decided to run some sysbench multi-table benchmarks. I was actually preparing to do more complex research, but even the first […]
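
As a rough illustration (sysbench's default sbtest table name is used here, and the block size is just one plausible choice), enabling InnoDB compression on a benchmark table looks like:

```sql
-- Convert a sysbench table to compressed pages; KEY_BLOCK_SIZE=8
-- halves the default 16k page size, while smaller values (4, 2, 1)
-- trade more CPU for a smaller on-disk footprint
ALTER TABLE sbtest1 ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;
```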

High rate insertion with MySQL and InnoDB

I am again working with a system which needs a high insertion rate for data which generally fits in memory. The last time I worked with a similar system, it used MyISAM and was built using multiple tables. Using multiple key caches was a good solution at the time, and we could get over 200K inserts/sec. […]
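
For context, the multiple-key-cache technique mentioned here is standard MyISAM functionality; a minimal sketch (the cache name, size, and table name are illustrative):

```sql
-- Create a dedicated 128MB key cache and bind one table's indexes to it,
-- so its hot index blocks are not evicted by activity on other tables
SET GLOBAL hot_cache.key_buffer_size = 128 * 1024 * 1024;
CACHE INDEX inserts_t1 IN hot_cache;
LOAD INDEX INTO CACHE inserts_t1;
```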