Search Results for: max row size

MySQL compression: Compressed and Uncompressed data size

MySQL has an information_schema.tables table that contains information such as “data_length” or “avg_row_length.” The documentation on this table, however, is quite poor, assuming those fields are self-explanatory – they are not when it comes to tables that employ compression. And this is where inconsistency is born. Let's take a look at the same table […]
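
As a rough illustration (my own sketch, not the exact query from the post; the schema and table names are placeholders), the size-related columns live in information_schema.tables:

-- Inspect the size-related columns for one table; for compressed tables,
-- what data_length and avg_row_length actually mean is what the post digs into.
SELECT table_name,
       row_format,
       table_rows,
       avg_row_length,
       data_length,
       index_length,
       data_free
  FROM information_schema.tables
 WHERE table_schema = 'test'
   AND table_name   = 'my_compressed_table';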

How rows_sent can be more than rows_examined?

When looking at queries that are candidates for optimization, I often recommend that people look at the rows_sent and rows_examined values available in the slow query log (as well as in some other places). If rows_examined is far larger than rows_sent, say 100 times larger, then the query is a great candidate for optimization. Optimization could […]
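
A minimal sketch of that pattern (a hypothetical orders table and status column, my own example): filtering on an unindexed column forces MySQL to examine every row, so the slow query log shows rows_examined close to the table size while rows_sent stays small; adding an index brings the two numbers much closer together.

-- With no index on status, this scans the whole table:
-- rows_examined ≈ total rows, rows_sent = matching rows only.
SELECT id, customer_id
  FROM orders
 WHERE status = 'failed';

-- After adding an index, only the matching rows are examined:
ALTER TABLE orders ADD INDEX idx_status (status);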

Why is the ibdata1 file continuously growing in MySQL?

We receive this question about the ibdata1 file in MySQL very often in Percona Support. The panic starts when the monitoring server sends an alert about the storage on the MySQL server – saying that the disk is about to fill up. After some research you realize that most of the disk space is used […]
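
As background (my own note, not necessarily how the post proceeds): a common first check is whether innodb_file_per_table is enabled, since with it disabled all InnoDB data and indexes go into the shared ibdata1 file, which never shrinks on its own.

-- Is each new InnoDB table getting its own .ibd file, or going into ibdata1?
SHOW VARIABLES LIKE 'innodb_file_per_table';

-- Rough per-schema data size, to compare against the ibdata1 size on disk:
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024) AS size_mb
  FROM information_schema.tables
 WHERE engine = 'InnoDB'
 GROUP BY table_schema;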

Understanding the maximum number of columns in a MySQL table

This post was initially going to be two sets of polls: “What is the maximum number of columns in MySQL?” and “What is the minimum maximum number of columns in MySQL?”. Before you read on, ponder those questions and come up with your own answers… and see if you’re right or can prove me wrong! […]

read_buffer_size can break your replication

There are some variables that can affect replication behavior and sometimes cause big trouble. In this post I’m going to talk about read_buffer_size and how this variable, together with max_allowed_packet, can break your replication. The setup is master-master replication with the following values: max_allowed_packet = 32M, read_buffer_size = 100M. To break the […]
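
For reference, a quick way to confirm the values actually in effect on each server (my own illustration, not from the post):

-- Show the two variables involved in the setup described above:
SHOW GLOBAL VARIABLES
 WHERE Variable_name IN ('max_allowed_packet', 'read_buffer_size');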

Benchmarking single-row insert performance on Amazon EC2

I have been working for a customer benchmarking insert performance on Amazon EC2, and I have some interesting results that I wanted to share. I used a nice and effective tool, iiBench, which was developed by Tokutek. Though the “1 billion row insert challenge” for which this tool was originally built is long over, […]

Review of Virident FlashMAX MLC cards

I have been following Virident for a long time (e.g. http://www.percona.com/blog/2010/06/15/virident-tachion-new-player-on-flash-pci-e-cards-market/). They have great PCIe Flash cards based on SLC NAND. I always thought that Virident needed to come up with an MLC card, and I am happy to see they have finally done so. At Virident’s request, I performed an evaluation of their MLC […]

Innodb undo segment size and transaction isolation

You might know that if you have long-running transactions, you risk accumulating a lot of “garbage” in the undo segment, which can cause performance degradation as well as increased disk space usage. Long transactions can also be bad for other reasons, such as taking row-level locks that will prevent other transactions from executing, […]
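
A minimal sketch (my own, using the information_schema.innodb_trx view available in MySQL 5.5 and later) of how to spot the long-running transactions the post warns about:

-- Open InnoDB transactions, oldest first; long-lived ones keep old row versions
-- alive in the undo space and may hold row locks that block other transactions.
SELECT trx_id,
       trx_started,
       TIMESTAMPDIFF(SECOND, trx_started, NOW()) AS age_seconds,
       trx_mysql_thread_id,
       trx_rows_locked
  FROM information_schema.innodb_trx
 ORDER BY trx_started;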