October 2, 2014

Three ways that the poor man’s profiler can hurt MySQL

Over the last few years, Domas’s technique of using GDB as a profiler has become a key tool in helping us analyze MySQL when customers are having trouble. We have our own implementation of it in Percona Toolkit (pt-pmp), and we gather GDB backtraces from pt-stalk and pt-collect. Although it’s helped us figure out a […]

How expensive is USER_STATISTICS?

One of our customers asked me whether it’s safe to enable the so-called USER_STATISTICS features of Percona Server on a heavily used production server with many tens of thousands of tables. If you’re not familiar with it, this feature creates some new INFORMATION_SCHEMA tables with counters for activity on users, hosts, tables, indexes, and more. […]
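
For context, enabling the feature and reading the counters is just a server variable and a few queries. Here’s a minimal sketch, assuming Percona Server’s userstat variable (some older releases used userstat_running) and the INFORMATION_SCHEMA table and column names from its documentation; check the docs for your version:

    SET GLOBAL userstat = ON;  -- enable the activity counters

    -- The counters then appear in new INFORMATION_SCHEMA tables:
    SELECT * FROM INFORMATION_SCHEMA.USER_STATISTICS;
    SELECT * FROM INFORMATION_SCHEMA.TABLE_STATISTICS
    ORDER BY ROWS_READ DESC LIMIT 10;   -- busiest tables first
    SELECT * FROM INFORMATION_SCHEMA.INDEX_STATISTICS
    ORDER BY ROWS_READ DESC LIMIT 10;   -- busiest indexes first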

Binary log file size matters (sometimes)

I used to think one should never look at max_binlog_size. However, last year I had a couple of interesting cases which showed that it can sometimes be a very important variable to tune properly. I meant to write about it earlier but never really had a chance to do it. I have it now!
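
For reference, here’s a quick sketch of inspecting and changing it. max_binlog_size is a dynamic global variable in standard MySQL; the 256MB value below is purely illustrative, not a recommendation:

    -- See the current limit and how large the existing binlogs actually are:
    SHOW GLOBAL VARIABLES LIKE 'max_binlog_size';
    SHOW BINARY LOGS;

    -- The variable is dynamic, so it can be changed without a restart:
    SET GLOBAL max_binlog_size = 268435456;  -- 256 * 1024 * 1024 bytes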

Percona Live Keynote Speaker: Mark Callaghan

Mark Callaghan has graciously agreed to be the closing keynote speaker for Percona Live: San Francisco! Mark is best known for his work on MySQL @ Facebook, where he and his team maintain one of the largest MySQL installations around. They also contribute back to the community with a publicly available branch of enhancements, improved […]

Stripped MySQL builds, the optimization that isn’t

I usually tell people to use official MySQL builds from MySQL, or from their operating system distribution if they don’t want to do that. (This assumes that there is no compelling reason to use third-party builds such as Percona’s.) Sometimes, though, people want to create their own builds, or use a build that is “optimized” […]

To find the bottleneck, stop guessing and start measuring

We recently examined a customer’s system to try to speed up an ETL (extract, transform, and load) process that moves a big data set into a sort of data mart or data warehouse. What we typically do is ask customers to run the process in question, and then examine what’s happening. In this case, the (very large, powerful) […]
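
As a sketch of what “measuring” can mean at its simplest, before reaching for fancier tools, here are the kinds of observations you can make while the process runs. These are standard MySQL commands; which counters actually matter depends entirely on the workload:

    -- What are the sessions actually doing while the ETL runs?
    SHOW FULL PROCESSLIST;

    -- Sample a few counters before and after a slow step, then compare deltas:
    SHOW GLOBAL STATUS LIKE 'Innodb_data_reads';
    SHOW GLOBAL STATUS LIKE 'Innodb_data_written';
    SHOW GLOBAL STATUS LIKE 'Threads_running';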