September 23, 2014

Is there room for more MySQL IO Optimization?

I prefer to run MySQL with innodb_flush_method=O_DIRECT in most cases – it makes sure there is no double-buffering overhead, and I can save the limited amount of file system cache I would normally have on the database server for those things which do need to be cached — system files, binary log, FRM files, MySQL […]
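
For reference, a minimal sketch of how this is typically set (innodb_flush_method is not a dynamic variable, so the my.cnf fragment below appears only as a comment):

    -- innodb_flush_method cannot be changed at runtime; it is set in my.cnf, e.g.:
    --   [mysqld]
    --   innodb_flush_method = O_DIRECT
    -- After a restart, verify the running value:
    SHOW GLOBAL VARIABLES LIKE 'innodb_flush_method';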

DROP TABLE and stalls: Lazy Drop Table in Percona Server and the new fixes in MySQL

Suppose you have turned on innodb_file_per_table (which means that each table has its own tablespace), and you have to drop tables in the background every hour or every day. If it’s once a day, then you can probably schedule the table-dropping process to run during off-peak hours. But I have seen cases where the […]
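
As a minimal illustration of the setup being discussed (the table name below is hypothetical):

    -- Each table gets its own .ibd file when file-per-table is enabled.
    SHOW GLOBAL VARIABLES LIKE 'innodb_file_per_table';

    -- The periodic cleanup boils down to statements like this; on a busy
    -- server, removing the tablespace can stall other queries.
    DROP TABLE IF EXISTS stats_archive_old;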

Checking the subset sum set problem with set processing

Hi, here is an easy way to run the subset sum check from SQL, which you can then distribute with Shard-Query:

Notice there is no 16 in the list. We did not pass the check. There are enough 15s, though. The distinct value count for each item in the output set must at least […]
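
The query itself is cut off in this excerpt; as a rough sketch of a related multiset-containment check (the needle/haystack tables and columns are assumptions, not the post’s schema):

    -- Hypothetical tables: needle holds the values we require and how many of
    -- each; haystack is the multiset being checked.
    CREATE TABLE needle   (v INT NOT NULL, required INT NOT NULL);
    CREATE TABLE haystack (v INT NOT NULL);

    -- Pass only if every needle value appears in haystack often enough.
    SELECT COUNT(*) = (SELECT COUNT(*) FROM needle) AS passed
    FROM (
      SELECT n.v
      FROM needle n
      JOIN haystack h ON h.v = n.v
      GROUP BY n.v, n.required
      HAVING COUNT(*) >= n.required
    ) AS matched;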

The story of one MySQL Upgrade

I recently worked on upgrading MySQL from one of the very early MySQL 5.0 versions to Percona Server 5.1. This was a classic upgrade scenario which can cause surprises. A master and a few slaves needed to be upgraded. It is a shared database used by tons of applications written by many people over more than 5 years […]

Sharing an auto_increment value across multiple MySQL tables

The title is SEO bait – you can’t do it. We’ve seen a few recurring patterns trying to achieve something similar – and I thought I would share with you my favorite two. Option #1: Use a table to insert into, and grab the insert_id:
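
A minimal sketch of option #1, assuming a hypothetical sequence table (not necessarily the exact code from the post):

    -- Option #1 sketch: a dedicated AUTO_INCREMENT table acts as the sequence.
    CREATE TABLE sequence (
      id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY
    ) ENGINE=InnoDB;

    INSERT INTO sequence VALUES (NULL);
    -- The value to reuse across the other tables:
    SELECT LAST_INSERT_ID();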

Option #2: Use a table with just one row:
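
And a sketch of option #2, using the documented LAST_INSERT_ID(expr) trick on a single-row table (again, the table name is an assumption):

    -- Option #2 sketch: one row holds the counter; UPDATE ... LAST_INSERT_ID(expr)
    -- bumps it atomically and exposes the new value to this connection.
    CREATE TABLE sequence2 (
      id INT UNSIGNED NOT NULL
    ) ENGINE=InnoDB;
    INSERT INTO sequence2 VALUES (0);

    UPDATE sequence2 SET id = LAST_INSERT_ID(id + 1);
    SELECT LAST_INSERT_ID();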

JOIN Performance & Charsets

We have written before about the importance of using numeric types as keys, but maybe you’ve inherited a schema that you can’t change or have chosen string types as keys for a specific reason. Either way, the character sets used on joined columns can have a significant impact on the performance of your queries. Take […]
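
As an illustration of the kind of mismatch discussed (hypothetical tables, not the post’s example), joining a latin1 column to a utf8 column forces a character-set conversion that can keep the optimizer from using the index on one side:

    -- Hypothetical schema: the join columns use different character sets.
    CREATE TABLE users (
      username VARCHAR(32) CHARACTER SET latin1 NOT NULL,
      PRIMARY KEY (username)
    ) ENGINE=InnoDB;

    CREATE TABLE logins (
      username VARCHAR(32) CHARACTER SET utf8 NOT NULL,
      login_at DATETIME NOT NULL,
      KEY (username)
    ) ENGINE=InnoDB;

    -- EXPLAIN typically shows the latin1 side being converted and scanned
    -- rather than looked up by index.
    EXPLAIN SELECT * FROM logins l JOIN users u ON u.username = l.username;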

AUTO_INCREMENT and MERGE TABLES

How would you expect AUTO_INCREMENT to work with MERGE tables? Assuming INSERT_METHOD=LAST is used, I would expect it to work the same as if the insertion happened in the last table… which does not seem to be the case. Alternatively, I would expect AUTO_INCREMENT to be based off the maximum value across all tables, respecting […]
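
For readers who want to try it themselves, a small hypothetical setup along the lines the post describes:

    -- Hypothetical MERGE setup: inserts go to the last underlying table.
    CREATE TABLE t1 (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY) ENGINE=MyISAM;
    CREATE TABLE t2 (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY) ENGINE=MyISAM;
    CREATE TABLE m  (id INT NOT NULL AUTO_INCREMENT, INDEX(id))
      ENGINE=MERGE UNION=(t1, t2) INSERT_METHOD=LAST;

    INSERT INTO t1 VALUES (10);
    INSERT INTO m  VALUES (NULL);  -- which id does this row get?
    SELECT * FROM t2;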

How fast can MySQL Process Data

Reading Baron’s post about the Kickfire Appliance, and of course talking to them directly, I learned that a lot of their product is about beating the data processing limitations of current systems. This raises a valid question: how fast can MySQL process (filter) data using its current architecture? I decided to test the simplest case – what […]
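
The post’s actual benchmark is not shown in this excerpt; a hedged sketch of the simplest kind of filtering test might look like this (table and predicate are assumptions):

    -- Hypothetical "simplest case" test: time a full scan with a cheap filter
    -- on a table large enough for the per-row cost to dominate.
    SELECT COUNT(*) FROM big_table WHERE val > 1000000;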

PHP Large result sets and summary tables.

We’re working with a web site preparing for massive growth. To make sure it handles large data sets, part of the process is generating a test database of significant size, as testing your application on a table with 1000 rows may well give you a very dangerous false sense of security. One of the processes the web […]
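
One common way to grow a test table to a realistic size (a sketch under assumed names, not the process from the post) is to seed it and repeatedly re-insert it into itself:

    -- Hypothetical data inflation: each pass roughly doubles the row count.
    CREATE TABLE test_data (
      id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
      payload VARCHAR(255) NOT NULL
    ) ENGINE=InnoDB;

    INSERT INTO test_data (payload) VALUES ('seed 1'), ('seed 2'), ('seed 3');

    -- Repeat until the table is big enough to expose real performance problems.
    INSERT INTO test_data (payload) SELECT payload FROM test_data;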

Handling big result sets

Sometimes it is necessary to handle a lot of rows on the client side. The usual way is to send the query via mysql_query and then handle the result in a mysql_fetch_array loop (here I use PHP functions, but they are common or similar for all APIs, including C). Consider the table:
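
The table definition is cut off in this excerpt; for the sake of discussion, assume something along these lines (purely hypothetical, not the table from the post):

    -- Hypothetical table standing in for the one discussed in the post.
    CREATE TABLE big_rows (
      id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
      val INT NOT NULL,
      payload VARCHAR(255) NOT NULL
    ) ENGINE=InnoDB;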