November 22, 2014

How can we bring query to the data?

Baron recently wrote about sending the query to the data, looking at distributed systems like Cassandra. I want to take a look at simpler systems like MySQL and see how we’re doing in this space. It is obvious that getting computations as close to the data as possible is the most efficient approach, as we will […]
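As a trivial illustration of the idea (hypothetical table and column names, not from the post): instead of pulling every row to the application and summing it there, let MySQL do the aggregation right next to the data and ship back only the result.

    -- Pulling the data to the application: every matching row crosses the
    -- network, and the computation happens far from the data.
    SELECT customer_id, amount
    FROM orders
    WHERE order_date >= '2014-01-01';

    -- Sending the query to the data: MySQL scans and aggregates locally,
    -- returning only one row per customer.
    SELECT customer_id, SUM(amount) AS total_spent
    FROM orders
    WHERE order_date >= '2014-01-01'
    GROUP BY customer_id;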

Inexpensive SSDs for Database Workloads

The cost of SSDs has been dropping rapidly, and at the time of this writing, 2.5-inch drives have reached the 1TB capacity mark. You can actually get inexpensive drives for as little as 60 cents per GB. Even inexpensive SSDs can perform tens of thousands of IOPS and come with 1.5M – 2M hours MTBF and […]

Join my Oct. 2 webinar: ‘Implementing MySQL and Hadoop for Big Data’

MySQL DBAs know that integrating MySQL and a big data solution can be challenging. That’s why I invite you to join me this Wednesday (Oct. 2) at 10 a.m. Pacific time for a free webinar in which I’ll walk you through how to implement a successful big data strategy with Apache Hadoop and MySQL. This […]

Considering TokuDB as an engine for timeseries data

I am working on a customer’s system where the requirement is to store a lot of timeseries data from different sensors. For performance reasons we are going to use SSDs, and therefore there is a list of requirements for the architecture: provide a high insertion rate; provide a good compression rate to store more data on […]
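As a rough sketch of where those requirements point (assuming TokuDB is installed; the sensor schema and the TOKUDB_LZMA compression choice are illustrative, not taken from the post):

    -- Hypothetical timeseries table on TokuDB: the primary key keeps rows
    -- ordered by sensor and time, so inserts from each sensor are close to
    -- sequential and range scans by time are cheap; ROW_FORMAT selects one of
    -- TokuDB's built-in compression algorithms.
    CREATE TABLE sensor_data (
      sensor_id INT UNSIGNED NOT NULL,
      ts        DATETIME     NOT NULL,
      value     DOUBLE       NOT NULL,
      PRIMARY KEY (sensor_id, ts)
    ) ENGINE=TokuDB ROW_FORMAT=TOKUDB_LZMA;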

Why is the ibdata1 file continuously growing in MySQL?

We receive this question about the ibdata1 file in MySQL very often in Percona Support. The panic starts when the monitoring server sends an alert about the MySQL server’s storage, saying that the disk is about to fill up. After some research you realize that most of the disk space is used […]
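A quick first check when that alert fires (a sketch of the usual triage, not the post’s full explanation) is to compare per-schema data sizes from information_schema with the size of ibdata1 on disk, and to see whether innodb_file_per_table is enabled, since with it off all InnoDB table data lives inside ibdata1:

    -- Approximate data + index size per schema, in GB.
    SELECT table_schema,
           ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
    FROM information_schema.tables
    GROUP BY table_schema
    ORDER BY size_gb DESC;

    -- If this is OFF, InnoDB table data is stored inside ibdata1 rather than
    -- in per-table .ibd files.
    SHOW VARIABLES LIKE 'innodb_file_per_table';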

Big Data with MySQL and Hadoop at MySQL Connect 2013

I will be talking about Big Data with MySQL and Hadoop at MySQL Connect 2013 (Sept. 21-22) in San Francisco as well as at Percona University in Washington, DC (September 12, 2013). Apache Hadoop is a very popular Big Data solution, and nowadays we can easily integrate it with MySQL. I will start with a brief introduction […]

Data compression in InnoDB for text and blob fields

Have you wanted to compress only certain types of columns in a table while leaving other columns uncompressed? While working on a customer case this week I saw an interesting problem: a 100GB table had many heavily utilized TEXT fields, with some read queries exceeding 500MB (!!). In this […]
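One common way to handle this (a sketch of the general technique, not necessarily the approach the post settles on; table and column names are made up) is to compress just the offending TEXT/BLOB columns with MySQL’s COMPRESS()/UNCOMPRESS(), leaving the rest of the table untouched:

    -- Hypothetical table where only the large text payload is stored compressed.
    CREATE TABLE documents (
      id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
      title VARCHAR(255) NOT NULL,
      body_compressed MEDIUMBLOB  -- holds COMPRESS()ed text
    ) ENGINE=InnoDB;

    INSERT INTO documents (title, body_compressed)
    VALUES ('example', COMPRESS('... a very large document body ...'));

    -- Decompress only when the body is actually needed; queries that touch
    -- just the other columns stay cheap.
    SELECT title, UNCOMPRESS(body_compressed) AS body
    FROM documents
    WHERE id = 1;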

Data mart or data warehouse?

This is part two in my six-part series on business intelligence, with a focus on OLAP analysis.
Part 1 – Intro to OLAP
Identifying the differences between a data warehouse and a data mart (this post)
Introduction to MDX and the kind of SQL which a ROLAP tool must generate to answer those queries
[…]

Blob Storage in Innodb

I’m running into this misconception for the second time in a week or so, so it is time to blog about it. How are BLOBs stored in InnoDB? This depends on three factors: BLOB size, full row size, and InnoDB row format. But before we look into how BLOBs are really stored, let’s see what misconception […]
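Since the row format is one of those factors, here is a minimal sketch of how to see it in practice (table names are hypothetical; with COMPACT the first 768 bytes of a long BLOB stay on the index page, while DYNAMIC stores a fully off-page BLOB as just a 20-byte pointer):

    -- Two otherwise identical tables with different row formats
    -- (DYNAMIC requires the Barracuda file format on older servers).
    CREATE TABLE blob_compact (id INT PRIMARY KEY, payload BLOB)
      ENGINE=InnoDB ROW_FORMAT=COMPACT;
    CREATE TABLE blob_dynamic (id INT PRIMARY KEY, payload BLOB)
      ENGINE=InnoDB ROW_FORMAT=DYNAMIC;

    -- Confirm which row format each table actually ended up with.
    SELECT table_name, row_format
    FROM information_schema.tables
    WHERE table_schema = DATABASE()
      AND table_name IN ('blob_compact', 'blob_dynamic');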

InnoDB, InnoDB-plugin vs XtraDB on fast storage

To continue the fun with FusionIO cards, I wanted to check how MySQL / InnoDB performs here. For the benchmark I took MySQL 5.1.42 with built-in InnoDB, InnoDB-plugin 1.0.6, and XtraDB 1.0.6-9 (InnoDB with Percona patches). As the benchmark engine I used tpcc-mysql with 1000 warehouses (which gives around 90GB of data + indexes) on my […]