October 24, 2014

Incremental backups with log archiving for XtraDB

Percona Server 5.6.11-60.3 has introduced a new feature called Log Archiving for XtraDB. This feature makes copies of the old log files before they are overwritten, thus preserving all of the redo log generated by a write workload. When log archiving is enabled, it duplicates all redo log writes in a separate set of files in addition […]
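
As a rough sketch of turning the feature on: the variable names below are the ones documented for Percona Server 5.6, but verify them against the release notes for your build, and note that the archive directory path is only a placeholder.

    # Enable log archiving at runtime and confirm the related settings
    # (variable names assumed from the Percona Server 5.6 documentation).
    mysql -e "SET GLOBAL innodb_log_archive = ON"
    mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_log_arch%'"
    # If the archive directory cannot be changed at runtime, set
    # innodb_log_arch_dir in my.cnf and restart, then watch the archived
    # log files accumulate (placeholder path):
    ls -lh /var/lib/mysql-log-archive/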

How to Extract All Running Queries (Including the Last Executed Statement) from a Core File?

This post builds on the How to obtain the “LES” (Last Executed Statement) from an Optimized Core Dump? post written about a year ago. A day after that post was released, Shane Bester wrote an improved version, How to obtain all executing queries from a core file on his blog. Reading that post is key […]
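
A hedged sketch of the gdb approach those posts describe: the binary and core paths are placeholders, and the THD member path differs between MySQL versions, so treat this as a starting point rather than a recipe.

    # Walk every thread in the core and print the statement each connection
    # thread was executing when the core was written.
    gdb --batch \
        -ex 'set print elements 0' \
        -ex 'thread apply all print do_command::thd->query_string.string.str' \
        /usr/sbin/mysqld /var/crash/core.mysqld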

Percona Toolkit 2.2.2 released; bug fixes include pt-heartbeat & pt-archiver

During the Percona Live MySQL Conference & Expo 2013 the week before last, we quietly released Percona Toolkit 2.2.2 with a few bug fixes: pt-archiver --bulk-insert may corrupt data; pt-heartbeat --utc --check always returns 0; pt-query-digest 2.2 prints unwanted debug info on tcpdump parsing errors; pt-query-digest 2.2 prints too many string values. Some tools don’t […]
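
For context, these are illustrative invocations of the options touched by those fixes; the hosts, credentials, databases and tables below are placeholders.

    # Check replication delay once, reporting timestamps in UTC.
    pt-heartbeat --check --utc --host=slave1 --user=percona --ask-pass \
        --database=percona --table=heartbeat
    # Archive old rows in bulk from one server to another.
    pt-archiver --source h=master1,D=app,t=old_orders --dest h=archive1,D=app,t=old_orders \
        --where "created_at < NOW() - INTERVAL 1 YEAR" --bulk-insert --limit 1000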

Serious build and testing automation

Here at Percona we’ve spent a lot of time improving our development and testing practices. Why? Because constant innovation keeps us ahead and more productive. We want to work smarter, not harder. One of the tools we use is the Jenkins Continuous Integration server. We use Jenkins pretty heavily to help with our development processes […]

Quality Assurance: Percona Server Development Now Monitored by Automated Sysbench Performance Regression Checks!

Continuous integration of new features and bug fixes is great, but what if a small change in seemingly insignificant code causes a major regression in overall server performance? We need to ensure this does not happen. That said, performance regressions can be hard to detect. They may hide for some time (or be […]
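
A minimal sketch of the kind of automated gate described here: run a fixed sysbench OLTP workload, extract throughput, and fail the build if it drops too far below a stored baseline. The sysbench 0.4-style syntax is assumed, the table is assumed to be already prepared, and the baseline file and 5% threshold are hypothetical.

    # Read the stored baseline throughput (placeholder file), e.g. 2500.00 tps.
    BASELINE=$(cat /var/lib/perf/baseline_tps)
    # Run a fixed OLTP workload and pull transactions-per-second from the report.
    TPS=$(sysbench --test=oltp --oltp-table-size=1000000 \
          --mysql-user=sbtest --mysql-password=sbtest \
          --num-threads=16 --max-time=300 --max-requests=0 run \
          | awk '/transactions:/ {gsub(/\(/,"",$3); print $3}')
    # Fail if throughput dropped more than 5% below the baseline.
    awk -v tps="$TPS" -v base="$BASELINE" 'BEGIN {exit (tps < base * 0.95) ? 1 : 0}' \
        || { echo "Performance regression: $TPS tps vs baseline $BASELINE"; exit 1; }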

Automation: A case for synchronous replication

Just yesterday I wrote about the math of automated failover; today I’ll share my thoughts about what makes MySQL failover different from many other components, and why the asynchronous nature of standard replication causes problems with it. Let’s first think about the properties of the simple components we fail over: web servers, application servers, etc. We […]
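
As a quick illustration of that asynchronous gap (host names are placeholders): before promoting a slave, comparing how far it has received and applied events against the master’s current position shows how much could be lost, and with the semi-synchronous plugin installed the last command shows whether commits actually wait for a slave acknowledgement.

    mysql -h slave1 -e "SHOW SLAVE STATUS\G" \
        | grep -E 'Master_Log_File|Read_Master_Log_Pos|Exec_Master_Log_Pos|Seconds_Behind_Master'
    mysql -h master1 -e "SHOW MASTER STATUS\G"
    # Requires the semi-sync master plugin to be installed:
    mysql -h master1 -e "SHOW GLOBAL STATUS LIKE 'Rpl_semi_sync_master_status'"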

The Math of Automated Failover

There are a number of people blogging recently about MySQL automated failover, based on a production incident which GitHub disclosed. Here is my take on it. When we look at systems providing high availability, we can identify two ways a system can break down. The first is when the system itself has a bug or limitation which does not […]

The perils of uniform hardware and RAID auto-learn cycles

Last night a customer had an emergency on several machines in a large cluster of quite uniform database servers. Some of the servers were slowing down in a very puzzling way over a short time span (a couple of hours). Queries were taking multiple seconds to execute instead of being practically instantaneous. But nothing seemed […]
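
A hedged sketch, assuming an LSI MegaRAID controller managed with MegaCli64: these two checks show whether a battery learn cycle is in progress and whether the logical drives have fallen back from write-back to write-through caching, which is the sort of thing worth ruling out when uniform hardware slows down in lockstep.

    # Battery/BBU state, including learn-cycle information.
    MegaCli64 -AdpBbuCmd -GetBbuStatus -aAll | grep -i -E 'Learn|Charger Status|Battery State'
    # Current cache policy of each logical drive (WriteBack vs WriteThrough).
    MegaCli64 -LDGetProp -Cache -LAll -aAll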

Galera replication – how to recover a PXC cluster

Galera replication for MySQL brings not only great new features to our ecosystem, but also introduces completely new maintenance techniques. Are you concerned about adding such new complexity to your MySQL environment? Perhaps that concern is unnecessary. I am going to present some simple tips here that will hopefully let new Galera users prevent […]
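
A few standard checks on a node that has lost its cluster are sketched below; the wsrep status variables are the usual Galera ones, but verify the bootstrap step against the documentation for your PXC version before running it.

    mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status'"       # Primary / non-Primary
    mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size'"
    mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'"  # e.g. Synced, Donor
    # If all nodes are up but none forms a Primary component, one node can be
    # told to bootstrap a new Primary component:
    mysql -e "SET GLOBAL wsrep_provider_options = 'pc.bootstrap=true'"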

The ARCHIVE Storage Engine – does it do what you expect?

Sometimes there is a need to keep large amounts of old, rarely used data without investing too much in expensive storage. Very often such data doesn’t need to be updated anymore, or the intent is to leave it untouched. I sometimes wonder what I should really suggest to our Support customers. For this purpose, the […]
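
A quick way to test ARCHIVE against your own expectations (database and table names below are hypothetical): inserts and selects work and rows are compressed, but in-place changes are refused, and only an index on the AUTO_INCREMENT column is permitted.

    mysql test -e "CREATE TABLE old_events (
        id BIGINT NOT NULL AUTO_INCREMENT,
        logged_at DATETIME NOT NULL,
        payload TEXT,
        KEY (id)
    ) ENGINE=ARCHIVE"
    mysql test -e "INSERT INTO old_events (logged_at, payload) VALUES (NOW(), 'kept for the auditors')"
    mysql test -e "SELECT COUNT(*) FROM old_events"
    mysql test -e "UPDATE old_events SET payload = 'edited'"   # fails: ARCHIVE does not support UPDATE or DELETE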