November 27, 2014

MySQL compression: Compressed and Uncompressed data size

MySQL’s information_schema.tables table contains information such as “data_length” and “avg_row_length.” The documentation on this table, however, is quite poor, assuming those fields are self-explanatory – they are not when it comes to tables that employ compression. And this is where inconsistency is born. Let’s take a look at the same table […]
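As a rough illustration of where the ambiguity appears (the schema and table names here are hypothetical), these are the fields in question; for a table using ROW_FORMAT=COMPRESSED it is not obvious from the documentation whether they report compressed or uncompressed sizes:

-- Hypothetical example: inspect size-related fields for a table named `t1`
SELECT table_name, row_format, table_rows, avg_row_length,
       data_length, index_length
FROM information_schema.tables
WHERE table_schema = 'test' AND table_name = 't1';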

When (and how) to move an InnoDB table outside the shared tablespace

In my last post, “A closer look at the MySQL ibdata1 disk space issue and big tables,” I looked at the growing ibdata1 problem from the perspective of having big tables residing inside the so-called shared tablespace. In the particular case that motivated that post, we had a customer running out of disk space in his […]
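The basic mechanics, reduced to a minimal sketch with a hypothetical table name, are to enable innodb_file_per_table and rebuild the table so it lands in its own .ibd file:

-- Hypothetical sketch: move table `t1` out of ibdata1
SET GLOBAL innodb_file_per_table = 1;  -- applies to tables created or rebuilt afterwards
ALTER TABLE t1 ENGINE=InnoDB;          -- rebuild into a file-per-table tablespace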

A closer look at the MySQL ibdata1 disk space issue and big tables

A recurring and very common customer issue seen here by the Percona Support team involves how to make the ibdata1 file “shrink” within MySQL. I can only imagine there’s a degree of regret among some of the InnoDB architects over their design decisions regarding disk-space management by the shared tablespace*, because this has been a big […]

InnoDB file formats: Here is one pitfall to avoid

UPDATED: explaining the role of innodb_strict_mode and correcting the introduction of innodb_file_format. Compressed tables are an example of an InnoDB feature that became available with the Barracuda file format, introduced in the InnoDB plugin. They can bring significant gains in raw performance and scalability: since the data is stored in a compressed format, the amount of […]
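For context, a minimal sketch of the settings and table definition involved in creating a compressed table under Barracuda (the table name is hypothetical):

-- Hypothetical example: prerequisites for an InnoDB compressed table
SET GLOBAL innodb_file_per_table = 1;
SET GLOBAL innodb_file_format = Barracuda;
SET GLOBAL innodb_strict_mode = ON;  -- fail loudly instead of silently falling back
CREATE TABLE t_comp (
  id INT PRIMARY KEY,
  payload VARCHAR(255)
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;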

Data compression in InnoDB for text and blob fields

Have you wanted to compress only certain types of columns in a table while leaving other columns uncompressed? While working on a customer case this week I saw an interesting problem: a 100GB table with many heavily utilized TEXT fields, where some read queries exceeded 500MB (!!). In this […]
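One way to compress individual columns while leaving the rest of the table alone – not necessarily the approach taken in the post – is to store the output of MySQL’s COMPRESS() function in a BLOB column; a quick sketch with hypothetical names:

-- Hypothetical sketch: compress a single large text column at write time
CREATE TABLE articles (
  id   INT PRIMARY KEY,
  body BLOB  -- holds COMPRESS()ed bytes
) ENGINE=InnoDB;
INSERT INTO articles VALUES (1, COMPRESS(REPEAT('some long text ', 1000)));
SELECT id, UNCOMPRESS(body) AS body FROM articles WHERE id = 1;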

InnoDB compression woes

InnoDB compression is getting some traction, and I see quite contradictory opinions: some report successful deployments in production, while others say that compression in its current implementation is useless. To get an initial impression of its performance, I decided to run some sysbench multi-table benchmarks. I was actually preparing to do more complex research, but even the first […]

Lost InnoDB tables, XFS and binary grep

Before I start a story about the data recovery case I worked on yesterday, here’s a quick tip – having a database backup does not mean you can restore from it. Always verify your backup can be used to restore the database! If not automatically, do this manually, at least once a month. No, seriously […]

Thoughts on InnoDB Incremental Backups

For normal InnoDB “hot” backups we use LVM or other snapshot-based technologies with pretty good success. However, incremental backups remain a problem. First, why do you need incremental backups at all? Why not just take full backups daily? The answer is space – if you want to keep several generations to […]

INFORMATION_SCHEMA tables in the InnoDB pluggable storage engine

Much has been written about the new InnoDB pluggable storage engine, which Innobase released at the MySQL conference last month. We’ve written posts ourselves about its fast index creation capabilities and the compressed row format, and how that affects performance. One of the nice things they added in this InnoDB release is INFORMATION_SCHEMA tables that […]
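For instance, the compression-related tables the plugin exposes can be queried directly (an illustrative sample, not taken from the post):

-- INNODB_CMP and INNODB_CMPMEM are added by the InnoDB plugin
SELECT * FROM INFORMATION_SCHEMA.INNODB_CMP;     -- compress/decompress counts per page size
SELECT * FROM INFORMATION_SCHEMA.INNODB_CMPMEM;  -- buffer pool memory used by compressed pages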

Testing InnoDB “Barracuda” format with compression

The new InnoDB features – the compression format and fast index creation – sound so promising that I spent some time researching times and sizes on data we have in production. The schema of one of the shards is […]
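The kind of comparison involved looks roughly like this – a hypothetical table, not the shard schema from the post – with the same structure built in both the old and the compressed row format so the on-disk sizes can be compared:

-- Hypothetical: compare COMPACT vs. COMPRESSED on-disk footprint
CREATE TABLE t_compact (id INT PRIMARY KEY, doc TEXT) ENGINE=InnoDB ROW_FORMAT=COMPACT;
CREATE TABLE t_compressed (id INT PRIMARY KEY, doc TEXT) ENGINE=InnoDB
  ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;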