Search Results for: cpu high

Q&A: High availability when using MySQL in the cloud

Last week I hosted a webinar on using MySQL in the cloud for High Availability (HA) alongside 451 Research analyst Jason Stamper. You can watch the recording and also download the slides (free) here. Just click the “Register” button at the end of that page. We had several excellent questions and we didn’t have time […]

Hyper-threading – how does it double CPU throughput?

The other day a customer asked me to do capacity planning for their web server farm. I was looking at the CPU graph for one of the web servers that had Hyper-threading switched ON and thought to myself: “This must be quite a misleading graph – it shows 30% CPU usage. It can’t really be […]
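The excerpt cuts off before the explanation, but the point it raises can be illustrated with a back-of-envelope sketch: with Hyper-threading on, the OS reports utilization against twice as many logical CPUs, yet a sibling hardware thread typically adds far less than a full core of throughput. The core count and the ~30% HT gain below are illustrative assumptions, not figures from the post.

```python
# Back-of-envelope sketch: why 30% on an HT-enabled CPU graph can be misleading.
# All numbers are illustrative assumptions, not figures from the post.
physical_cores = 8
logical_cpus = physical_cores * 2        # Hyper-threading doubles the logical CPU count
ht_extra_throughput = 0.30               # assume a sibling thread adds ~30%, not another 100%

reported_utilization = 0.30              # what the CPU graph shows
busy_logical = reported_utilization * logical_cpus         # 4.8 logical CPUs busy

# Best case, the scheduler spreads that work over distinct physical cores,
# so nearly 60% of the real cores are already occupied.
core_utilization = min(busy_logical, physical_cores) / physical_cores

# Total capacity is closer to cores * 1.3 "core equivalents", not cores * 2.
capacity_core_equivalents = physical_cores * (1 + ht_extra_throughput)

print(f"Graph shows {reported_utilization:.0%} of logical CPUs busy")
print(f"Physical cores already in use: {core_utilization:.0%}")
print(f"Real capacity: ~{capacity_core_equivalents:.1f} core equivalents, not {logical_cpus}")
```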

High Rate insertion with MySQL and Innodb

I am again working with a system that needs a high insertion rate for data which generally fits in memory. The last time I worked with a similar system, it used MyISAM and was built on multiple tables. Using multiple key caches was a good solution at that time, and we could reach over 200K inserts/sec. […]
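The multiple key caches the excerpt mentions are a MyISAM feature: named key caches can be sized independently and have specific tables' indexes bound to them. A minimal sketch of that setup is below; the connection parameters, cache name, size, and table name are illustrative assumptions, not the author's actual configuration.

```python
import mysql.connector  # assumes the mysql-connector-python package is installed

# Illustrative connection settings -- not the author's actual environment.
conn = mysql.connector.connect(host="127.0.0.1", user="bench", password="secret")
cur = conn.cursor()

# Create a dedicated 256MB key cache and bind a hot MyISAM table's indexes to it,
# so its index blocks do not compete with other tables in the default key cache.
cur.execute("SET GLOBAL hot_cache.key_buffer_size = 256 * 1024 * 1024")
cur.execute("CACHE INDEX hot_table IN hot_cache")   # hypothetical table name
cur.execute("LOAD INDEX INTO CACHE hot_table")      # optionally preload the index

cur.close()
conn.close()
```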

XtraDB in CPU-bound benchmark

Peter told me that the previous results https://www.percona.com/blog/2008/12/18/xtradb-benchmarks-15x-gain/ were too marketing-oriented, and that we should show other results as well. Here is the CPU-bound run, or more accurately the in-cache benchmark, because a lot of CPU remains idle. This run is exactly the same as the disk-bound one, but with innodb_buffer_pool_size=8G, which […]

T2000 CPU Performance – Watch out

Sun is aggressively pushing the T2000 as a scalable MySQL platform, and indeed it is scalable in terms of high-concurrency workloads – it can execute a lot of concurrent threads, so the speed gain from 1 thread to, say, 32 threads will be significant. But the thing a lot of people miss is – Being […]

InnoDB vs TokuDB in LinkBench benchmark

Previously I tested Tokutek’s Fractal Trees (TokuMX & TokuMXse) as MongoDB storage engines – today let’s look into the MySQL area. I am going to use a modified LinkBench under a heavy IO load. I compared InnoDB without compression, InnoDB with 8k compression, and TokuDB with quicklz compression. The uncompressed data size is 115GiB, and the cache size is 12GiB for InnoDB […]

MongoDB benchmark: sysbench-mongodb IO-bound workload comparison

In this post I’ll share the results of a sysbench-mongodb benchmark I performed on my server. I compared MMAP, WiredTiger, RocksDB and TokuMXse (based on MongoDB 3.0) and TokuMX (based on MongoDB 2.4) in an IO-intensive workload. The full results are available here, and below I’ll just share the summary chart: I would like to […]

Using Cgroups to Limit MySQL and MongoDB memory usage

Quite often, especially for benchmarks, I need to limit the memory available to a database server (usually MySQL, but recently MongoDB as well). This is usually needed to test database performance in scenarios with different memory limits. I have physical servers with the typically large amount of memory (128GB or more), but I am […]
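The excerpt cuts off before the mechanism, but the general idea with the cgroups v1 memory controller is to create a group, write a byte limit into it, and move the database process into that group. The paths, group name, limit, and PID below are illustrative assumptions, and the script must run as root.

```python
import os

# Illustrative values only -- not the author's actual setup; requires root.
CGROUP_ROOT = "/sys/fs/cgroup/memory"      # cgroups v1 memory controller
GROUP_NAME = "db_limited"                  # hypothetical cgroup name
MEMORY_LIMIT = 4 * 1024 ** 3               # e.g. cap the server at 4 GiB
DB_PID = 12345                             # PID of the running mysqld/mongod process

group_path = os.path.join(CGROUP_ROOT, GROUP_NAME)
os.makedirs(group_path, exist_ok=True)     # create the cgroup

# Set the memory limit for the group.
with open(os.path.join(group_path, "memory.limit_in_bytes"), "w") as f:
    f.write(str(MEMORY_LIMIT))

# Move the database process into the group so the limit applies to it.
with open(os.path.join(group_path, "cgroup.procs"), "w") as f:
    f.write(str(DB_PID))
```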

Update on the InnoDB double-write buffer and EXT4 transactions

In a post written a few months ago, I found that using EXT4 transactions with the “data=journal” mount option improves write performance significantly, by 55%, without putting data at risk. Many people commented on the post mentioning they were not able to reproduce the results, so I decided to investigate further in order […]