Tag - Benchmarks

The Transaction Behavior Impact of innodb_rollback_on_timeout in MySQL


innodb_rollback_on_timeout is a very important parameter. In this blog, I explain "innodb_rollback_on_timeout" and how it affects transaction behavior at the MySQL level. I describe two scenarios with practical tests to help you understand this parameter better.
What is innodb_rollback_on_timeout?
The parameter innodb_rollback_on_timeout controls the […]
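To make the setting concrete, here is a minimal sketch that inspects the current value, assuming a local MySQL server and the mysql-connector-python package (the connection credentials are placeholders):

```python
# Minimal sketch: inspect innodb_rollback_on_timeout.
# Assumptions: local MySQL server, mysql-connector-python installed,
# placeholder credentials.
import mysql.connector

conn = mysql.connector.connect(host="127.0.0.1", user="root", password="secret")
cur = conn.cursor()

cur.execute("SHOW VARIABLES LIKE 'innodb_rollback_on_timeout'")
name, value = cur.fetchone()
print(f"{name} = {value}")

# With OFF (the default), a lock wait timeout rolls back only the last
# statement of the transaction; with ON, the entire transaction is
# rolled back. The variable is read-only at runtime, so it is set in
# my.cnf:
#
#   [mysqld]
#   innodb_rollback_on_timeout = ON
#
# followed by a server restart.

cur.close()
conn.close()
```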


ClickHouse and ColumnStore in the Star Schema Benchmark


There were two new releases in the open source analytical databases space, which made me want to evaluate how they perform in the Star Schema Benchmark.
I have covered the Star Schema Benchmark a few times before:

Star Schema Benchmark: InfoBright, InfiniDB and LucidDB
ClickHouse in a General Analytical Workload (Based on a Star Schema Benchmark)

What are the new releases?
MariaDB […]
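As a reminder of the kind of queries the benchmark runs, below is a sketch of SSB Q1.1 issued against ClickHouse over its HTTP interface, assuming the requests package and tables loaded per the standard SSB schema (the host URL is a placeholder, and the table and column names may differ in your load):

```python
# Sketch: run Star Schema Benchmark query Q1.1 against ClickHouse over
# its HTTP interface. Assumes the standard SSB `lineorder` and `dates`
# tables are loaded; host, table, and column names are assumptions.
import requests

QUERY = """
SELECT sum(LO_EXTENDEDPRICE * LO_DISCOUNT) AS revenue
FROM lineorder
INNER JOIN dates ON LO_ORDERDATE = D_DATEKEY
WHERE D_YEAR = 1993
  AND LO_DISCOUNT BETWEEN 1 AND 3
  AND LO_QUANTITY < 25
"""

resp = requests.post("http://localhost:8123/", data=QUERY)
resp.raise_for_status()
print("Q1.1 revenue:", resp.text.strip())
```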


MariaDB S3 Engine: Implementation and Benchmarking


MariaDB 10.5 has an excellent engine plugin called "S3". The S3 storage engine is based on the Aria code, and its main feature is that you can move a table directly from a local device to S3 using ALTER TABLE, while the data remains accessible from the MariaDB client using standard SQL commands. This is […]
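A minimal sketch of that flow, assuming a MariaDB 10.5 server with the S3 engine plugin enabled and the s3_access_key, s3_secret_key, and s3_bucket settings configured in my.cnf (the table name and credentials below are placeholders):

```python
# Sketch: move a table to S3 and read it back. Assumes MariaDB 10.5
# with the S3 engine enabled and s3_* credentials configured in my.cnf.
# Table name and connection credentials are placeholders.
import mysql.connector

conn = mysql.connector.connect(host="127.0.0.1", user="root",
                               password="secret", database="test")
cur = conn.cursor()

# Move the table from local storage to S3. After this, the table
# becomes read-only but stays queryable through SQL.
cur.execute("ALTER TABLE orders ENGINE=S3")

# Standard SQL still works against the S3-backed table.
cur.execute("SELECT COUNT(*) FROM orders")
print("rows on S3:", cur.fetchone()[0])

cur.close()
conn.close()
```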


MongoDB Checkpointing Woes


In my recent post Evaluating MongoDB Under Python TPCC 1000W Workload, I showed average throughput over a prolonged period (900 or 1800 seconds), and that averaging tended to smooth out and hide problems.
But if we zoom in to a 1-second resolution on the WiredTiger dashboard (available in the Percona Monitoring and […]
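The effect of averaging is easy to demonstrate on synthetic data; the sketch below (with made-up numbers, not the post's measurements) shows how a 900-second average can look healthy while 1-second samples reveal periodic stalls:

```python
# Sketch with synthetic numbers (not the post's measurements): show how
# a long-window average hides periodic checkpoint-like stalls that are
# obvious at 1-second resolution.
steady, stall = 5000, 200   # ops/sec during normal operation vs a stall
samples = []
for second in range(900):
    # Pretend a 10-second stall happens once every 60 seconds.
    samples.append(stall if second % 60 < 10 else steady)

print("900-sec average: %.0f ops/sec" % (sum(samples) / len(samples)))
print("1-sec minimum:   %d ops/sec" % min(samples))
# The average (4200 ops/sec) looks fine; only the per-second view
# exposes the drops to 200 ops/sec.
```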


Evaluating MongoDB Under Python TPCC 1000W Workload


Following my blog post Evaluating the Python TPCC MongoDB Benchmark, I wanted to evaluate how MongoDB performs under a workload with a bigger dataset. This time I will load a 1000-warehouse dataset, which in raw format should equal about 100GB of data.
For the comparison, I will use the same hardware and the same MongoDB […]
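One way to sanity-check the size of a loaded dataset like this is MongoDB's dbStats command; a minimal sketch, assuming pymongo, a local mongod, and the data loaded into a database named tpcc (the database name is an assumption, not necessarily what the benchmark loader uses):

```python
# Sketch: verify the size of a loaded TPCC dataset via dbStats.
# Assumes pymongo and a local mongod; the database name "tpcc" is an
# assumption, not necessarily what the benchmark loader uses.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
stats = client["tpcc"].command("dbStats")

# dataSize is the uncompressed size of the data in bytes; storageSize
# reflects what WiredTiger actually uses on disk after compression.
print("dataSize:    %.1f GB" % (stats["dataSize"] / 1e9))
print("storageSize: %.1f GB" % (stats["storageSize"] / 1e9))
```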
