Search Results for: load data infile insert

Generating test data for MySQL tables

One of the common tasks requested by our support customers is to optimize slow queries. We normally ask for the table structure(s), the problematic query and sample data to be able to reproduce the problem and resolve it by modifying the query, table structure, or global/session variables. Sometimes, we are given access to the server […]
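
As a minimal sketch of the idea (the table name and columns here are purely illustrative), test rows can be generated straight from SQL by seeding a table and then repeatedly doubling it with INSERT … SELECT:

    -- Hypothetical table standing in for the one behind a slow query
    CREATE TABLE t (
      id  INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
      val VARCHAR(32) NOT NULL
    ) ENGINE=InnoDB;

    -- Seed one row, then rerun the second statement to double
    -- the row count on every pass: 1, 2, 4, 8, ... rows
    INSERT INTO t (val) VALUES (MD5(RAND()));
    INSERT INTO t (val) SELECT MD5(RAND()) FROM t;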

TokuDB vs InnoDB in timeseries INSERT benchmark

This post is a continuation of my research into TokuDB’s storage engine to understand whether it is suitable for timeseries workloads. While loading data into an empty table with LOAD DATA INFILE shows great results for TokuDB, what’s more interesting is seeing some realistic workloads. So this time let’s take a look at the INSERT benchmark.
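
For reference, the LOAD DATA INFILE bulk load mentioned above looks roughly like this; the file path, table and column list are assumptions for illustration:

    -- Bulk-load a CSV file into an (empty) table
    LOAD DATA INFILE '/tmp/timeseries.csv'
    INTO TABLE sensor_data
    FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    (sensor_id, ts, value);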

Considering TokuDB as an engine for timeseries data

I am working on a customer’s system where the requirement is to store a lot of timeseries data from different sensors. For performance reasons we are going to use SSD storage, and therefore there is a list of requirements for the architecture: provide a high insertion rate; provide a good compression rate to store more data on […]
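
A sketch of what such a table might look like with TokuDB’s built-in compression; the schema is made up, and ROW_FORMAT=TOKUDB_LZMA is just one of the available compression settings:

    -- Illustrative timeseries table; LZMA trades CPU for a
    -- higher compression rate
    CREATE TABLE sensor_data (
      sensor_id INT UNSIGNED NOT NULL,
      ts        DATETIME NOT NULL,
      value     DOUBLE NOT NULL,
      PRIMARY KEY (sensor_id, ts)
    ) ENGINE=TokuDB ROW_FORMAT=TOKUDB_LZMA;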

High Rate insertion with MySQL and InnoDB

I am again working with a system that needs a high insertion rate for data which generally fits in memory. The last time I worked with a similar system, it used MyISAM and was built using multiple tables. Using multiple key caches was a good solution at that time, and we could get over 200K inserts/sec. […]
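
For context, MyISAM’s multiple key caches are configured roughly as follows; the cache name, size and table are illustrative:

    -- Create a dedicated key cache and bind a table's indexes to it
    SET GLOBAL hot_cache.key_buffer_size = 4294967296;  -- 4GB, illustrative
    CACHE INDEX metrics IN hot_cache;
    -- Optionally preload the indexes into the cache
    LOAD INDEX INTO CACHE metrics;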

MySQL Replication: ‘Got fatal error 1236’ causes and cures

MySQL replication is a core process for maintaining multiple copies of data, and it is a very important aspect of database administration. In order to synchronize data between master and slaves, you need to make sure that data transfers smoothly, and to do so you need to act promptly regarding replication errors to continue […]
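
When error 1236 shows up, the usual starting point is to compare what the slave is asking for with what the master still has, for example:

    -- On the slave: Last_IO_Errno / Last_IO_Error carry the 1236 details
    SHOW SLAVE STATUS\G
    -- On the master: verify the requested binary log file still exists
    SHOW BINARY LOGS;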

Want to archive tables? Use Percona Toolkit’s pt-archiver

Percona Toolkit’s pt-archiver is one of the best utilities for archiving records from large tables to other tables or files. One interesting thing is that pt-archiver is a read-write tool: it deletes data from the source by default, so after archiving you don’t need to delete it separately. As this is done by default, you […]
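
A typical invocation looks like the following; the hosts, schema and WHERE clause are made up for illustration. If you want to keep the rows in the source, pt-archiver’s --no-delete option overrides the default behavior.

    # Copy year-old rows to an archive table in chunks of 1000,
    # deleting them from the source (the default) as it goes
    pt-archiver \
      --source h=db1,D=app,t=orders \
      --dest   h=db2,D=archive,t=orders \
      --where  "created_at < NOW() - INTERVAL 1 YEAR" \
      --limit 1000 --commit-each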

Recovery after DROP & CREATE

In a very popular data loss scenario, a table is dropped and an empty one is created with the same name. This happens because mysqldump in many cases generates the “DROP TABLE” statement before the “CREATE TABLE”.
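
A representative fragment of such dump output (the table definition here is illustrative):

    DROP TABLE IF EXISTS `actor`;
    CREATE TABLE `actor` (
      `actor_id` SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT,
      `first_name` VARCHAR(45) NOT NULL,
      PRIMARY KEY (`actor_id`)
    ) ENGINE=InnoDB;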

If there were no subsequent CREATE TABLE, the recovery would be trivial. The index_id of the PRIMARY index of […]