Search Results for: insert large

Importing big tables with large indexes using the Myloader MySQL tool

Mydumper is known as the faster (much faster) alternative to mysqldump. So, if you take a logical backup, you will likely choose Mydumper instead of mysqldump. But what about the restore? Well, who needs to restore a …
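To make the workflow concrete, here is a minimal sketch of a parallel dump and restore driven from Python; the database name, backup directory, and thread counts are illustrative, and only commonly documented mydumper/myloader flags are used.

```python
import subprocess

# Dump the sbtest database (name is illustrative) with 8 parallel threads.
# mydumper writes per-table files into the output directory, which is what
# lets myloader restore tables in parallel afterwards.
subprocess.run(
    ["mydumper", "--database", "sbtest",
     "--outputdir", "/backups/sbtest", "--threads", "8"],
    check=True,
)

# Restore with myloader, again multi-threaded; --overwrite-tables drops
# and recreates any table that already exists.
subprocess.run(
    ["myloader", "--directory", "/backups/sbtest", "--database", "sbtest",
     "--threads", "8", "--overwrite-tables"],
    check=True,
)
```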


Don’t worry about embedding large arrays in your MongoDB documents

In this post, I’d like to discuss some performance problems recently reported with MongoDB’s embedded arrays, and how TokuMX avoids these problems and delivers more consistent performance for MongoDB applications. In “Why shouldn’t I embed …
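For context, the pattern under discussion looks like this; a minimal pymongo sketch with a hypothetical users collection and an embedded events array:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client.test.users  # hypothetical collection

# One document per user, with an embedded array of events.
coll.insert_one({"_id": 1, "name": "alice", "events": []})

# Each $push grows the embedded array in place. On engines that relocate
# a document once it outgrows its allocated space, updates like this get
# progressively more expensive as the array grows.
for i in range(1000):
    coll.update_one({"_id": 1}, {"$push": {"events": {"seq": i}}})
```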


532x Multikey Index Insertion Performance Increase for MongoDB with Fractal Tree Indexes

In my three previous MongoDB blog posts, I wrote about our implementation of Fractal Tree® indexes on MongoDB, showing a 10x insertion performance increase, a 268x query performance increase, and a comparison of covered indexes and …
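As a reminder of what this benchmark stresses: MongoDB turns an index into a multikey index when the indexed field holds arrays, storing one index entry per array element. A minimal pymongo sketch (collection and field names are illustrative):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client.test.posts  # hypothetical collection

# Indexing a field that holds arrays makes the index multikey: MongoDB
# stores one index entry per array element, so this one insert performs
# three index insertions for "tags" alone.
coll.create_index("tags")
coll.insert_one({"title": "post 1", "tags": ["mysql", "mongodb", "indexes"]})
```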


10x Insertion Performance Increase for MongoDB with Fractal Tree Indexes

The challenge of handling massive data processing workloads has spawned many innovations in the database world, from indexing techniques like our Fractal Tree® technology to a myriad of “NoSQL” solutions (here is …


Benchmarking single-row insert performance on Amazon EC2

I have been working for a customer benchmarking insert performance on Amazon EC2, and I have some interesting results I wanted to share. I used iiBench, a nice and effective tool, which has been …
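The iiBench pattern is single-row inserts into a table that also carries secondary indexes. A rough Python approximation of that loop, assuming a pymysql connection (credentials and schema are illustrative, not iiBench’s actual schema):

```python
import random
import time

import pymysql

conn = pymysql.connect(host="127.0.0.1", user="bench", password="secret",
                       database="test", autocommit=True)
cur = conn.cursor()

# iiBench-style table: an auto-increment primary key plus secondary
# indexes, so every insert also has to maintain the secondary index trees.
cur.execute("""
    CREATE TABLE IF NOT EXISTS purchases (
        id BIGINT AUTO_INCREMENT PRIMARY KEY,
        customer_id INT,
        price DECIMAL(10,2),
        ts DATETIME,
        KEY (customer_id),
        KEY (price)
    )
""")

n_rows = 100_000
start = time.time()
for _ in range(n_rows):
    cur.execute(
        "INSERT INTO purchases (customer_id, price, ts) VALUES (%s, %s, NOW())",
        (random.randint(1, 1_000_000), round(random.uniform(1, 500), 2)),
    )
print(f"{n_rows / (time.time() - start):.0f} rows/sec")
```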


MySQL on Amazon RDS part 1: insert performance

Amazon’s Relational Database Service (RDS) is a cloud-hosted MySQL solution. I’ve had some clients hitting performance limitations on standard EC2 servers with EBS volumes (see SSD versus EBS death match), and one of them wanted …


Why “insert … on duplicate key update” May Be Slow, by Incurring Disk Seeks

In my post on June 18th, I explained why normal ad-hoc insertions into a table with a primary key are expensive: the semantics require disk seeks on large data sets. I previously explained why it …
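The statement in question looks like this; a minimal sketch assuming pymysql and a hypothetical page_views table. The key point is that the engine must look up the existing row before it can decide between the insert and the update:

```python
import pymysql

conn = pymysql.connect(host="127.0.0.1", user="bench", password="secret",
                       database="test", autocommit=True)
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS page_views (
        page_id BIGINT PRIMARY KEY,
        views BIGINT NOT NULL
    )
""")

# Before this statement can take effect, the engine must find out whether
# page_id 42 already exists. On a table much larger than RAM, that check
# is a random read -- a disk seek -- for every single statement.
cur.execute(
    "INSERT INTO page_views (page_id, views) VALUES (%s, 1) "
    "ON DUPLICATE KEY UPDATE views = views + 1",
    (42,),
)
```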


Making “Insert Ignore” Fast, by Avoiding Disk Seeks

In my post from three weeks ago, I explained why normal ad-hoc insertions into a table with a primary key are expensive: the semantics require disk seeks on large data sets. Towards the end of …
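For reference, the statement being optimized; again a sketch with pymysql and an illustrative table:

```python
import pymysql

conn = pymysql.connect(host="127.0.0.1", user="bench", password="secret",
                       database="test", autocommit=True)
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS page_views (
        page_id BIGINT PRIMARY KEY,
        views BIGINT NOT NULL
    )
""")

# INSERT IGNORE silently drops rows that would violate the primary key.
# Done naively, that still means a read-before-write to detect the
# duplicate; the post argues the uniqueness check can ride along with
# the write instead of paying an up-front disk seek.
cur.execute(
    "INSERT IGNORE INTO page_views (page_id, views) VALUES (%s, 0)",
    (42,),
)
```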


High Insertion Rates into a TokuDB Table with Durable Transactions

We recently made transactions in TokuDB 3.0 durable. We write table changes into a log file so that in the event of a crash, the table changes since the last checkpoint can be replayed. …
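This is the classic write-ahead-logging idea. The toy Python sketch below (not TokuDB’s actual implementation) shows why a checkpoint bounds how much log must be replayed after a crash:

```python
import json
import os

LOG, CHECKPOINT = "changes.log", "checkpoint.json"

def log_change(key, value):
    # Durability: flush and fsync the log record at commit time, so it
    # survives a crash even though the table itself hasn't been written.
    with open(LOG, "a") as f:
        f.write(json.dumps({"key": key, "value": value}) + "\n")
        f.flush()
        os.fsync(f.fileno())

def checkpoint(table):
    # A checkpoint persists the table state itself; afterwards only log
    # records written after the checkpoint matter for recovery. (Real
    # systems order the log truncation carefully; this toy just does it.)
    with open(CHECKPOINT, "w") as f:
        json.dump(table, f)
    open(LOG, "w").close()

def recover():
    # Start from the last checkpoint, then replay the surviving log.
    table = {}
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            table = json.load(f)
    if os.path.exists(LOG):
        with open(LOG) as f:
            for line in f:
                change = json.loads(line)
                table[change["key"]] = change["value"]
    return table
```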


InnoDB performance gotcha with larger queries

A couple of days ago I was looking for a way to improve update performance for an application, and I was replacing single-value UPDATE statements with multi-value REPLACE (though I also saw the same problem …
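The rewrite being described, sketched with pymysql against a hypothetical counters table; REPLACE deletes any existing row with the same primary key and inserts the new one, so a single statement can stand in for a batch of updates:

```python
import pymysql

conn = pymysql.connect(host="127.0.0.1", user="bench", password="secret",
                       database="test", autocommit=True)
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS counters (
        id INT PRIMARY KEY,
        val INT NOT NULL
    )
""")

rows = [(i, i * 10) for i in range(1, 6)]

# Single-value form: one statement (and one round trip) per row.
for row_id, val in rows:
    cur.execute("UPDATE counters SET val = %s WHERE id = %s", (val, row_id))

# Multi-value REPLACE: one statement rewrites all five rows at once.
cur.execute(
    "REPLACE INTO counters (id, val) VALUES "
    + ", ".join(["(%s, %s)"] * len(rows)),
    [v for row in rows for v in row],
)
```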
