Search Results for: select max limit

Measuring Max Replication Throughput on Percona XtraDB Cluster with wsrep_desync

Checking throughput with async MySQL replication: Replication throughput is the measure of just how fast the slaves can apply replication (at least by my definition). In MySQL async replication this is important to know because the single-threaded apply nature of async replication can be a write performance bottleneck. In a production system, we can tell […]
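For the Percona XtraDB Cluster side of this, the rough idea is to let one node queue up writesets without throttling the rest of the cluster and then time how fast it drains them. A minimal sketch, assuming a PXC node and superuser privileges (the exact steps in the post may differ):

```sql
-- On the node being measured: stop it from sending flow control messages.
SET GLOBAL wsrep_desync = ON;

-- Pause the applier so the receive queue builds up while the
-- rest of the cluster keeps taking writes.
FLUSH TABLES WITH READ LOCK;

-- ... let a representative write workload run for a while ...

-- How many writesets are queued up on this node?
SHOW GLOBAL STATUS LIKE 'wsrep_local_recv_queue';

-- Release the applier and time how long the queue takes to drain:
-- queued writesets / drain time ~= max apply throughput.
UNLOCK TABLES;

-- Re-join normal flow control once the node has caught up.
SET GLOBAL wsrep_desync = OFF;
```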

Advanced JSON for MySQL

What is JSON? JSON is a text-based, human-readable format for transmitting data between systems, for serializing objects, and for storing document-store data for documents that have different attributes/schema for each document. Popular document store databases use JSON (and the related BSON) for storing and transmitting data. Problems with JSON in MySQL: It […]
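As a rough illustration of the "different attributes per document" point, here is a minimal sketch assuming MySQL 5.7+ with the native JSON type (the post itself may take a different approach); the events table and its contents are made up:

```sql
-- Hypothetical table: each row stores one JSON document.
CREATE TABLE events (
  id  INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  doc JSON NOT NULL
);

-- Documents in the same table may carry different attributes.
INSERT INTO events (doc) VALUES
  ('{"type": "click",  "user": 42, "meta": {"page": "/pricing"}}'),
  ('{"type": "signup", "user": 43}');

-- Pull attributes out of the documents; missing paths come back as NULL.
SELECT id,
       doc->>'$.type'      AS event_type,
       doc->>'$.meta.page' AS page
FROM   events
WHERE  doc->>'$.type' = 'click';
```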

3 handy tools to remove problematic MySQL processes

DBAs often encounter situations where they need to kill queries so that no long-running queries are left on a MySQL server to impact performance. Long-running queries can be the result of many factors. Fortunately, Percona Server contains some handy tools to remove problematic MySQL processes. I will highlight all of the tools via some […]
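Whatever tool does the killing, the underlying operation looks roughly like this (a sketch, not necessarily the tools the post covers; the 60-second threshold and the connection id are made up):

```sql
-- Find statements that have been running longer than 60 seconds.
SELECT id, user, host, time, info
FROM   information_schema.processlist
WHERE  command = 'Query'
  AND  time > 60;

-- Stop just the statement, keeping the connection open...
KILL QUERY 123;

-- ...or terminate the whole connection (123 is a placeholder id).
KILL 123;
```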

Identifying useful info from MySQL row-based binary logs

As a MySQL DBA/consultant, it is part of my job to decode the MySQL binary logs – and there are a number of reasons for doing that. In this post, I’ll explain how you can get the important information about your write workload using MySQL row-based binary logs and a simple awk script. First, it […]
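The post works on mysqlbinlog output with awk; purely from the SQL layer you can at least see which binary logs exist and what kinds of row events they contain. A sketch, with a made-up binlog file name:

```sql
-- Which binary log files does the server currently have?
SHOW BINARY LOGS;

-- List the first events of one file; row-based writes show up as
-- Table_map, Write_rows, Update_rows and Delete_rows events.
SHOW BINLOG EVENTS IN 'mysql-bin.000001' LIMIT 20;

-- Decoding the actual row images still requires the client-side tool:
-- mysqlbinlog --base64-output=decode-rows --verbose mysql-bin.000001
```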

Looking deeper into InnoDB’s problem with many row versions

A few days ago I wrote about MySQL performance implications of InnoDB isolation modes, and I touched briefly upon the bizarre performance regression I found with InnoDB handling a large number of versions for a single row. Today I wanted to look a bit deeper into the problem, which I also filed as a bug. First […]
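The scenario behind the regression can be reproduced along these lines (a sketch with a hypothetical table t; the post's actual benchmark is more involved):

```sql
-- Session 1: open an old read view so purge cannot remove row versions.
START TRANSACTION WITH CONSISTENT SNAPSHOT;
SELECT val FROM t WHERE id = 1;

-- Session 2: create many versions of the same row
-- (repeat this many times, e.g. from a loop or a sysbench script).
UPDATE t SET val = val + 1 WHERE id = 1;

-- Session 1: this read now has to walk the whole version chain
-- for the row, which is where the slowdown shows up.
SELECT val FROM t WHERE id = 1;
```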

InnoDB’s multi-versioning handling can be Achilles’ heel

I believe the InnoDB storage engine architecture is great for a lot of online workloads; however, there are no silver bullets in technology, and all design choices have their trade-offs. In this blog post I’m going to talk about one important InnoDB limitation that you should consider. InnoDB is a multiversion concurrency control (MVCC) storage […]
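One concrete way to see how much committed-but-unpurged row-version history InnoDB is carrying (my illustration, not something quoted from the post):

```sql
-- "History list length" in the TRANSACTIONS section shows how much
-- committed transaction history has not yet been purged.
SHOW ENGINE INNODB STATUS\G

-- On MySQL 5.6+ the same figure is exposed as a counter.
SELECT name, count
FROM   information_schema.innodb_metrics
WHERE  name = 'trx_rseg_history_len';
```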

MySQL’s INNODB_METRICS table: How much is the overhead?

Starting with MySQL 5.6 there is an INNODB_METRICS table available in INFORMATION_SCHEMA which contains additional information beyond what is provided in the SHOW GLOBAL STATUS output – yet it might be more lightweight than PERFORMANCE_SCHEMA. Too bad INNODB_METRICS was designed during the Oracle-Sun split under MySQL leadership, and so it covers only InnoDB counters. I think this […]
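For reference, the counters live in a plain INFORMATION_SCHEMA table and can be toggled at runtime, which is what makes an overhead comparison like this possible. A minimal sketch:

```sql
-- Inspect a few counters and whether they are currently enabled.
SELECT name, subsystem, status, count
FROM   information_schema.innodb_metrics
LIMIT  10;

-- Enable every counter for the test run, and disable them all afterwards.
SET GLOBAL innodb_monitor_enable  = 'all';
SET GLOBAL innodb_monitor_disable = 'all';
```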

MySQL compression: Compressed and Uncompressed data size

MySQL has the information_schema.tables table, which contains information such as “data_length” or “avg_row_length.” Documentation on this table, however, is quite poor, assuming those fields are self-explanatory – they are not when it comes to tables that employ compression. And this is where inconsistency is born. Let’s take a look at the same table […]
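To see the discrepancy the post is talking about, compare what the same fields report for a compressed and an uncompressed copy of a table. A sketch, assuming InnoDB table compression is available; the table names and key block size are made up:

```sql
-- A compressed copy of some existing table (requires InnoDB compression
-- support, e.g. the Barracuda file format on older versions).
CREATE TABLE t_compressed LIKE t_uncompressed;
ALTER TABLE t_compressed ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;
INSERT INTO t_compressed SELECT * FROM t_uncompressed;

-- The same fields do not mean the same thing for the two tables, which is
-- the inconsistency the post digs into.
SELECT table_name, row_format, data_length, index_length, avg_row_length
FROM   information_schema.tables
WHERE  table_schema = DATABASE()
  AND  table_name IN ('t_compressed', 't_uncompressed');
```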

More than 1000 columns – get transactional with TokuDB

Recently I encountered a specific situation in which a customer was forced to stay with the MyISAM engine due to a legacy application using tables with over 1000 columns. Unfortunately, InnoDB has a limit on the number of columns per table that rules it out here. I did not expect to hear this argument for MyISAM. It is usually about full text search or […]
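A sketch of the resulting switch, assuming the TokuDB plugin is installed; the table name is a placeholder. TokuDB accepts wider tables than InnoDB, which is what lets the legacy schema move to a transactional engine:

```sql
-- Move the wide legacy table off MyISAM without hitting InnoDB's
-- column-count limit (legacy_wide_table is a placeholder name).
ALTER TABLE legacy_wide_table ENGINE=TokuDB;

-- Confirm which storage engine the table now uses.
SELECT table_name, engine
FROM   information_schema.tables
WHERE  table_schema = DATABASE()
  AND  table_name = 'legacy_wide_table';
```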

Reference architecture for a write-intensive MySQL deployment

We designed Percona Cloud Tools (both hardware and software setup) to handle a very write-intensive MySQL workload. For example, we already observe inserts of more than 1 billion datapoints per day. So I wanted to share what kind of hardware we use to achieve this result. Let me describe what we use, and later I will explain […]