

Latest MySQL Performance Blog posts

Percona's MySQL & InnoDB performance and scalability blog

‘Woz on your mind?’ Share your questions for Steve Wozniak during his Percona Live keynote!

March 27, 2015 - 2:34pm

Here’s your chance to get on stage with Woz! Sort of. Apple co-founder and Silicon Valley icon and philanthropist Steve Wozniak will participate in a moderated Q&A on creativity and innovation April 14 during the Percona Live MySQL Conference and Expo in Santa Clara, California.

Woz once said that he never intended to change the world. That was the other Steve, Steve Jobs.

“I didn’t want to start this company,” Woz told the Seattle Times of Apple’s beginnings in a 2006 interview. “My goal wasn’t to make a ton of money. It was to build good computers. I only started the company when I realized I could be an engineer forever.”

What would you ask Woz if given the opportunity?

“Woz, what first sparked your interest in engineering?”
“Hey Woz, how did you come up with the design for the first Apple?”
“Woz, what do you see as the next big thing in personal computers?”
“Hi Woz, what’s the deal with your giant vacuum tube watch?”

Now it’s your turn! Ask a question in the comments below and be sure to include your Twitter handle – or your Facebook page or LinkedIn profile. If we use your question, then your profile and question will be displayed on the giant screen behind Woz on stage as it’s being asked during his big keynote! How cool is that?

Want to be there in person? See Woz speak for just $5! That’s $70 off the regular admission price! Just use the promo code “KEY” at registration under the “Expo Hall and Keynote Pass” selection. Following Woz’s keynote, be sure to stop by the Percona booth, say “hello, Tom,” and I’ll give you a limited-edition Percona t-shirt.

In the meantime, help spread the word! Please share this tweet:

“Woz on your mind?” Tweet @Percona your questions for Apple’s Steve Wozniak who speaks April 14 at #PerconaLive! http://ow.ly/KTmES

Do that, then follow @Percona and I’ll send a DM for your address and will ship a t-shirt right to your door. See you at the conference!


FoundationDB is acquired by Apple: My thoughts

March 27, 2015 - 6:00am

TechCrunch reported yesterday that Apple has acquired FoundationDB. And while I didn’t see any mention of this news on the FoundationDB website, they do have an announcement saying: “We have made the decision to evolve our company mission and, as of today, we will no longer offer downloads.”

This is an unfortunate development – I have been watching FoundationDB technology for years and was always impressed by its performance and features. I was particularly impressed by their demo at last year’s Percona Live MySQL Conference and Expo. Using their Intel NUC-based cluster, I remember Ori Herrnstadt showing me how FoundationDB handles single-node failure as well as recovery from a complete power-down – very quickly and seamlessly. We have borrowed a lot of ideas from this setup for our Percona XtraDB Cluster demos.

I think it was a great design to build a distributed, shared-nothing, transaction-aware key-value store, and then have an SQL layer built on top of it. I did not have a chance to test it hands-on, though. Such a test would have revealed the capabilities of the SQL optimizer – the biggest challenge for distributed relational database systems.

My hope was to see, over time, this technology becoming available as open source (fully or partially), which would have dramatically increased adoption by the masses. It will be interesting to see Apple’s long-term plans for this technology.

In any case, it looks like the FoundationDB software is off limits. If you are an existing FoundationDB customer looking for alternatives, we here at Percona would be happy to help you evaluate options and develop a migration strategy if necessary.


Yelp IT! A talk with 3 Yelp MySQL DBAs on Percona Live & more

March 25, 2015 - 3:00am

Founded in 2004 to help people find great local businesses, Yelp has some 135 million monthly unique visitors. With those traffic volumes Yelp’s 300+ engineers are constantly working to keep things moving smoothly – and when you move that fast you learn many things.

Fortunately for the global MySQL community, three Yelp DBAs will be sharing what they’ve learned at the annual Percona Live MySQL Conference and Expo this April 13-16 in Santa Clara, California.

Say “hello” to Susanne Lehmann, Jenni Snyder and Josh Snyder! I chatted with them over email about their presentations, on how MySQL is used at Yelp, and about the shortage of women in MySQL.

***

Tom: Jenni, you and Josh will be co-presenting “Next generation monitoring: moving beyond Nagios” on April 14.

You mentioned that Yelp’s databases scale dynamically, and so does your monitoring of those databases. And to minimize human intervention, you’ve created a Puppet and Sensu monitoring ensemble… because “if it’s not monitored, it’s not in production.” Talk to me more about Yelp’s philosophy of “opt-out monitoring.” What does that entail? How does that help Yelp?

Jenni: Before we moved to Sensu, our Nagios dashboards were a sea of red, muted, acknowledged, or disabled service checks. In fact, we even had a cluster check to make sure that we never accidentally put a host into use that was muted or marked for downtime. It was possible for a well-meaning operator to acknowledge checks on a host and forget about it, and I certainly perpetrated a couple of instances of disks filling up after acknowledging a 3am “warning” page that I’d rather forget about. With Sensu, hosts and services come out of the downtime/acknowledgement state automatically after a number of days, ensuring that we’re kept honest and stay on top of issues that need to be addressed.

Also, monitoring is deployed with a node, not as separate monitoring configuration. Outside of a grace period we employ when a host is first provisioned or rebooted, if a host is up, it’s being monitored and alerting. Also, alerting doesn’t always mean paging. We also use IRC and file tickets directly into our tracking system when we don’t need eyes on a problem right away.

Tom: Susanne, in your presentation, titled “insert cassandra into prod where use_case=?;” you’ll discuss the situations you’ve encountered where MySQL just wasn’t the right tool for the job.

What led up to that discovery and how did you come up with finding the right tools (and what were they) to run alongside and support MySQL?

Susanne: Our main force behind exploring other datastores alongside MySQL was that Yelp is growing outside the US market a lot. Therefore we wanted the data to be nearer to the customer and needed multi-master writes.

Also, we saw use cases where our application data was organized very key-value like and not relational, which made them a better fit for a NoSQL solution.

We decided to use Cassandra as a datastore and I plan to go more into detail why during my talk. Now we offer developers more choices on how to store our application data, but we also believe in the “right tool for the job” philosophy and might add more solutions to the mix in the future.

Tom: Jenni, you’ll also be presenting “Schema changes multiple times a day? OK!” I know that you and your fellow MySQL DBAs are always improving and also finding better ways of supporting new and existing features for Yelp users like me. Delivering on such a scale must entail some unique processes and tools. Does this involve a particular mindset among your fellow DBAs? Also, what are some of those key tools – and processes and how are they used?

Jenni: Yelp prizes the productivity of our developers and our ability to iterate and develop new features quickly. In order to do that, we need to be able to not only create new database tables, but also modify existing ones, many of which are larger than MySQL can alter without causing considerable replication delay. The first step is to foster a culture of automated testing, monitoring, code reviews, and partnership between developers and DBAs to ensure that we can quickly & safely roll out schema changes. In my talk, I’ll be describing tools that we’ve talked about before, like our Gross Query Checker, as well as the way the DBA team works with developers while still getting the rest of our work done. The second, easy part is using a tool like pt-online-schema-change to run schema changes online without causing replication delay or degrading performance.

Tom: Josh, you’ll also be speaking on “Bootstrapping databases in a single command: elastic provisioning for the win.” What is “elastic provisioning” and how are you using it for Yelp’s tooling?

Josh: When I say that we use elastic provisioning, I mean that we can reliably and consistently build a database server from scratch, with minimal human involvement. The goal is to encompass every aspect of the provisioning task, including configuration, monitoring, and even load balancing, in a single thoroughly automated process. With this process in place, we’ve found ourselves able to quickly allocate and reallocate resources, both in our datacenters and in the cloud. Our tools for implementing the above goals give us greater confidence in our infrastructure, while avoiding single-points of failure and achieving the maximum possible level of performance. We had a lot of fun building this system, and we think that many of the components involved are relevant to others in the field.

Tom: Susanne and Jenni, last year at Percona Live there was a BoF session titled “MySQL and Women (or where are all the women?).” The idea was to discuss why there are “just not enough women working on the technology side of tech.” In a nutshell, the conversation focused on why there are not more women in MySQL and why so relatively few attend MySQL conferences like Percona Live.

The relative scarcity of women in technical roles was also the subject of an article published in the August 2014 issue of Forbes, citing a recent industry report.

Why, in your (respective) views, do you (or don’t) think that there are so few women in MySQL? And how can this trend be reversed?

Susanne: I think there are few women in MySQL and the reasons are manifold. Of course there is the pipeline problem. Then there is the problem, widely discussed right now, that women who enter STEM jobs are less likely to stay in them. These are reasons not specific to MySQL jobs, but to STEM in general. What is more specific to database/MySQL jobs is, in my opinion, that oftentimes DBAs need to be on call, and they need to stay in the office if things go sideways. Database problems often tend to be problems that can’t wait till the next morning. That makes it more demanding when you have a family, for example (which is true for men as well of course, but seems still to be more of a problem for women).

As for how to reverse the trend, I liked this Guardian article because it covers a lot of important points. There is no easy solution.

I like that more industry leaders and technology companies are discussing what they can do to improve diversity these days. In general, it really helps to have a great professional (female) support system. At Yelp, we have AWE, the Awesome Women in Engineering group, in which Jenni and I are both active. We participate in welcoming women to Yelp engineering, speaking at external events and workshops to help other women present their work, mentoring, and a book club.

Jenni: I’m sorry that I missed Percona Live and this BoF last year; I was out on maternity leave. I believe that tech/startup culture is a huge reason that fewer women are entering and staying these days, but a quick web search will lead you to any number of articles debating the subject. I run into quite a few women working with MySQL; its large, open community and generally collaborative, supportive nature is very welcoming. As the article you linked to suggests, MySQL has a broad audience. It’s easy to get started with and pull into any project, and as a result, most software professionals have worked with it at some time or another.

On another note, I’m happy to see that Percona Live has a Code of Conduct. I hope that Percona and/or MySQL will consider adopting a Community Code of Conduct like Python, Puppet, and Ubuntu. Doing so raises the bar for all participants, without hampering collaboration and creativity!

* * *

Thanks very much, Susanne, Jenni and Josh! I look forward to seeing you next month at the conference. And readers, if you’d like to attend Percona Live, use the promo code Yelp15 for 15% off! Just enter that during registration. If you’re already attending, be sure to tweet about your favorite sessions using the hashtag #PerconaLive. And if you need to find a great place to eat while attending Percona Live, click here for excellent Yelp recommendations.


Calling all MySQL DBAs: How do you use Percona Toolkit?

March 24, 2015 - 12:00am

Percona Toolkit is one of our most mature open source applications. Derived from Maatkit and Aspersa, Percona Toolkit has evolved significantly over the years. The software now contains 32 tools, over 4,000 tests, and has been downloaded over 250,000 times. Anyone who manages a database – from DBAs to system administrators to even software developers – benefits from Percona Toolkit’s ability to perform a variety of MySQL server and system tasks that are too difficult or complex to perform manually.

We continue to make Percona Toolkit better each month. Over the last 9 months alone Percona has had 6 releases and resolved nearly 50 issues.

While Percona team members in Support, Consulting, and Managed Services are big drivers of identifying bugs and new features (driven mostly by Percona customer needs), the community of Percona Toolkit users plays a significant role in making the open source software what it is today.

We’d like to learn how we can make Percona Toolkit even better for your needs. Please take a brief survey so we can learn how you actually use the software. As a thank you for taking the survey, we are randomly giving away five $50 Amazon.com gift cards to participants. It’s a small token but one that we hope you’ll appreciate.

Recent additions to Percona Toolkit have included better Percona XtraDB Cluster support as well as multiple fixes and improvements to pt-online-schema-change, pt-kill, pt-query-digest, pt-stalk, and preparation for the MySQL 5.7 GA. Help us continue to improve Percona Toolkit by taking part in our survey. If you use Percona Toolkit and are attending Percona Live next month, please keep a look out for me. I’d like to hear about your experiences.


Choosing a good sharding key in MongoDB (and MySQL)

March 19, 2015 - 12:00am

MongoDB 3.0 was recently released. Instead of focusing on what’s new – that is easy enough to find – let’s talk about something that has not changed a lot since the early MongoDB days. This topic is sharding and, most specifically, how to choose a good sharding key. Note that most of the discussion also applies to MySQL, so if you are more interested in sharding than in MongoDB, it could still be worth reading.

When do you want to shard?

In general sharding is recommended with MongoDB as soon as any of these conditions is met:

  • #1: A single server can no longer handle the write workload.
  • #2: The working set no longer fits in memory.
  • #3: The dataset is too large to easily fit in a single server.

Note that #1 and #2 are by far the most common reasons why people need sharding. Also note that in the MySQL world, #2 does not imply that you need sharding.

What are the properties of a good sharding key?

The starting point is that a cross-shard query is very expensive in a sharded environment. It is easy to understand why: the query has to be executed independently on several shards, and then results have to be merged.

With MongoDB, mongos will transparently route queries to the right shards and will automatically merge the results: this is very handy but the hidden complexity can also make you forget that you have executed a very poorly optimized query.

This is where the choice of a sharding key is so critical: choose the right key and most queries will be simple and efficient, choose a wrong one and you’ll have ugly and slow queries.

Actually a good sharding key does not need to have tens of properties, but only two:

  • Insertions should be as balanced as possible across all shards.
  • Each query should be able to execute by retrieving data from as few shards as possible (ideally 1 shard).

Sounds quite easy, right? However depending on your use case, it may be quite difficult to find a good sharding key. Let’s look at a few examples.

Social Network

Say we have users who can be connected to other users, who can read or write posts and who have their own wall.

All 3 collections can become very large, so sharding all of them will become necessary over time.

For the user and wall collections, the user_id field is an obvious choice, and it actually meets both criteria for a good sharding key.

For the post collection, user_id also looks like an obvious choice, but if you think about how the collection is accessed for reads, you will realize that you will need to fetch it using its post_id, not its user_id (simply because a user can have multiple posts). If you shard by user_id, any read to a single post will be broadcast to all shards: this is clearly not a good option.

So using post_id is a better choice. However it only meets criterion #2: most posts are never updated, so all the writes are insertions that will go to a single shard. However the traffic on the post collection is strongly in favor of reads, so being able to speed up reads while not slowing down writes is probably an acceptable tradeoff.

Access Logs

The workload here is very specific: write-intensive and append-only.

Sharding by ObjectId is definitely a bad idea: while data can be easily spread across all shards, all writes will only go to one shard, so you will have no benefit compared to a non-sharded setup when it comes to scaling the writes.

A better solution is to use a hash of the ObjectId: that way data AND writes will be spread across all shards.

Another good option would be to use another field in your documents that you know is evenly distributed across the whole dataset. Such a field may not exist, though; that’s why hashing the ObjectId is the more generic solution.

Ecommerce

MongoDB can be a good option to store a product catalog: being schemaless, it can easily store products with very different attributes.

To be usable such a catalog must be searchable. This means that many indexes need to be added, and the working set will probably grow very quickly. In this case your main concern is probably not to scale the writes, but to make reads as efficient as possible.

Sharding can be an option because, if done properly, each shard will act as a coarse-grained index. Now the issue is to find which field(s) will evenly distribute the dataset. Most likely a single field will not be enough, so you will have to use a compound sharding key.

Here I would say that there is no generic solution, but if the products are for instance targeted at kids, women or men, and if you have several categories of products, a potential sharding key would be (target, category, sku).

Note that in this case, reading from secondaries may be enough to ensure good scalability.

Conclusion

As you can see, choosing a correct sharding key is not always easy: do not assume that because some apps are sharded by some fields, you should do the same. You may need a totally different strategy, or sharding may not even be the right solution for your issues.

If you are interested in learning more about MongoDB, I will be presenting a free webinar on March 25 at 10 a.m. Pacific time. It will be an introduction to MongoDB for MySQL DBAs and developers. Register here if you are interested.


Getting started guide for OpenStack contributors

March 18, 2015 - 6:00am

So you want to contribute to OpenStack? I can help!

For the last year or so I have been involved with OpenStack, and more specifically the Trove (DBaaS) project, as a sort of ambassador for Percona, contributing bits of knowledge, help and debugging wherever I could. I thought I would share some of my experience with others who want to get involved with OpenStack development, documentation, testing, etc. Getting started with OpenStack contributions is also the idea behind my talk next month at Percona OpenStack Live 2015. (Percona Live attendees have access to OpenStack Live.)

Back at the OpenStack Conference and Design Summit in Paris last November, I had the amazing opportunity to attend the two-day OpenStack Upstream Training hosted by Stefano Maffulli, Loic Dachary and several other very kind and generous folks. If you ever find yourself in a position to attend one of these training sessions, I strongly suggest that you take advantage of the opportunity – you will not be disappointed.

Using some of the material from the OpenStack Foundation and a little personal experience, I’m going to go through some of the basics of what you’ll need to know if you want to contribute. There are several steps but they are mostly painless:

– It all starts with a little bit of legal work, such as signing either an individual or a corporate contributor agreement.

– You will need to decide on a project or projects that you want to contribute to. Chances are that you already have one in mind.

– Find the various places where other contributors to that project hang out; usually there is a mailing list and an IRC channel. Log on, introduce yourself, make some friends and sit and listen to what they are working on. Find the PTL (Project Team Lead) and remember his/her name. Let him/her know who you are, who you work for, what you are interested in, etc. Sit in on their meetings, ask questions but don’t be a pest. Observe a little etiquette, be polite and humble, and you will reap many rewards later on.

– Eventually you will need to find and get the code, install whatever tools are necessary for that project, build it, stand up a test/working environment, play with it and understand what the various moving parts are. Ask more questions, etc.

– Do you think you are ready to do some coding and submit a patch? Talk to the PTL and get a lightweight bug or maybe a documentation task to work on.

– In order to submit a patch you will need to understand the workflow of the OpenStack gerrit review system, which takes a little bit of time to understand if you have never used gerrit before. You’ll need to find and install git-review. Here is where making friends above really helps out: in every project there are usually going to be a few folks around with the time and patience to help you work through your first review.

– Find a bit of a mentor to help you with the mechanics in case you run into trouble (it could just be the PTL if he/she has the time), make your patch, send it in and work through the review process.

– As with most peer review situations, you’ll need to remember never to take things personally. A negative review comment is not an insult to you and your family! Eventually your patch will either be accepted and merged upstream (yay!) or rejected and possibly abandoned in favor of some alternative (boo!). If rejected, fret not! Talk to the PTL and your new friends to try to understand why if the review comments were unclear, and simply try again.

It is that easy!

Come join me on Tuesday, April 14th in Santa Clara, California and we’ll chat about how you can begin contributing to OpenStack.


MySQL QA Episode 1: Bash/GNU Tools & Linux Upskill & Scripting Fun

March 17, 2015 - 9:12am


This episode consists of 13 parts and an introduction. See the videos below.

In HD quality (set your player to 720p!)

Introduction

Part 1: echo, ls, cp, rm, vi, cat, df, du, tee, cd, clear, uname, date, time, cat, mkdir

Part 2: find, wc, sort, shuf, tr, mkdir, man, more

Part 3: Redirection, tee, stdout, stderr, /dev/null, cat

Part 4: Vars, ‘ vs “, $0, $$, $!, screen, chmod, chown, export, set, whoami, sleep, kill, sh, grep, sudo, su, pwd

Part 5: grep, regex (regular expressions), tr

Part 6: sed, regex (regular expressions)

Part 7: awk

Part 8: xargs

Part 9: subshells, shells, sh

Part 10: if, for, while, seq, head, grep & grep -q, sleep, tee, read & more

Part 11: Arrays, lynx, grep, egrep, awk, redirection, variable, printf, while, wget, read

Part 12: Production scripting examples

Part 13: Gnuwin32, Gnuwin32 escaping & path name/binary selection gotcha’s, untar, unzip, gzip for Windows

If you enjoyed these videos, leave us a comment below!


Free MySQL QA & Bash/Linux Training Series

March 17, 2015 - 6:00am

Welcome to the MySQL QA Training Series!

If you have not read our introductory blog post on pquery yet, I’d recommend reading that one first to get a bit of background. The community is enthusiastic about pquery, and today I am happy to announce a full training series on pquery and more. Whether you are a Linux or MySQL newbie or a seasoned QA engineer, there is something here for you. From Bash scripting (see episode 1 below) to every aspect of the new pquery framework, it is my hope that you enjoy this series. If you do, please leave us a comment!

Database quality assurance is not as straightforward as it may seem. It’s not a matter of point-and-click, but rather of many intertwined tools and scripts. Beyond that, due to the complexity of the underlying product, it’s about having an overall plan or vision on how to adequately test the product in every aspect.

Take for example the SELECT statement; it allows specifying about 30 different clauses or modifiers (GROUP BY, WHERE, ORDER, LIMIT, HAVING, …). Then, think further about what one could do inside these clauses, or inside subselects etc. The number of possible combinations (exhaustive testing) of all commands (and all formats and variations thereof) plus all mysqld options (nearly 500 of them) is for all intents and purposes infinite, and thus seemingly impossible to test.

In Episode 13, an approach is proposed which, in our view, adequately solves this infinite-possibility coverage problem through the use of random spread coverage testing and SQL interleaving.

Knowing your Bash/Linux/GNU scripting well is also almost a definite prerequisite to getting started with mysqld QA. Episode 1 in this series is over 3.5 hours of in-depth (from easy to advanced) training on many often-used Bash commands and topics – from ls to sed, from cp to xargs and from variables to arrays. Enjoy!

Without further ado, here are the planned upcoming episodes:

MySQL QA Episode 1: Bash/GNU Tools & Linux Upskill & Scripting Fun
MySQL QA Episode 2: Build a MySQL Server: Git, Compiling, Build Tools
MySQL QA Episode 3: Debugging: GDB, Backtraces, Frames, Library Dependencies
MySQL QA Episode 4: QA Framework Setup Time! percona-qa, pquery, reducer & more
MySQL QA Episode 5: Preparing Your QA Run: mtr_to_sql.sh and pquery-run.sh
MySQL QA Episode 6: Analyzing & Filtering: pquery-prep-red.sh, -clean-known.sh & pquery-results.sh
MySQL QA Episode 7: Reducing Testcases for Beginners: single-threaded reducer.sh
MySQL QA Episode 8: Reducing Testcases for Engineers: tuning reducer.sh
MySQL QA Episode 9: Reducing Testcases for Experts: multi-threaded reducer.sh
MySQL QA Episode 10: Reproducing and Simplifying: How to get it Right
MySQL QA Episode 11: Valgrind Testing: Pro’s, Con’s, Why and How
MySQL QA Episode 12: Multi-node Cluster Testing Using Docker
MySQL QA Episode 13: A Better Approach to all MySQL Regression, Stress & Feature Testing: Random Coverage Testing & SQL Interleaving


As episodes are finished, the series titles above will be linked so it’s easy to check this page for updates.

Enjoy!


Deep dive into MySQL’s innochecksum tool

March 16, 2015 - 12:00am

One of our Percona Support customers recently reported that Percona XtraBackup failed with a page corruption error on an InnoDB table. The customer thought it was a problem or bug in the Percona XtraBackup tool. After investigation we found that an InnoDB page was actually corrupted and the Percona XtraBackup tool caught the error as expected, hence the backup job failed.

I thought this would be an interesting topic and worthy of a blog post. In this article I will describe the innochecksum tool, when and how to use it, and the possible fixes if an InnoDB table suffers from page corruption.

The innochecksum tool is an offline tool that prints checksums for InnoDB files. It reads an InnoDB tablespace file, calculates the checksum for each page and reports mismatches, if any. A checksum mismatch is an indication of corrupt pages. Being an offline tool, innochecksum can’t be used on a tablespace file that a MySQL server is currently using, hence you need to shut down the server prior to running it. If you try to run the innochecksum tool on a running MySQL server, there is a chance that innochecksum crashes or reports a bad checksum for a good page, resulting in false positives. When you run innochecksum on a tablespace file that is opened by mysqld, there is also a chance that pages are dirty and not yet checksummed by the InnoDB storage engine itself.

The point: don’t run innochecksum against a running server.

InnoDB corruption can be caused by many factors (e.g. power loss, faulty hardware, bugs). While reading pages from a tablespace on disk, the InnoDB storage engine validates the calculated checksum against the checksum stored in the page. If InnoDB finds a page checksum mismatch, it will force the MySQL server down.

Let me show you a page corruption error identified by Percona XtraBackup during a backup run in which the backup failed afterward.

[01] xtrabackup: Database page corruption detected at page 25413, retrying...
[01] xtrabackup: Database page corruption detected at page 25413, retrying...
[01] xtrabackup: Database page corruption detected at page 25413, retrying...

First, we need to identify if the tablespace is really corrupted for that particular table. I do that with the help of the innochecksum utility as shown below. As I mentioned earlier, make sure to shut down MySQL before using the innochecksum tool.

$ innochecksum -p 25413 /path/to/datadir/database_name/table_name.ibd

I passed the -p (page) flag to innochecksum to only check the specific page that was reported corrupt by Percona XtraBackup. Without passing any option, the innochecksum tool will check the entire tablespace for corruption, which will require additional server downtime. The innochecksum tool also supports the -d (debug) option to print the checksum for each page and the -v (verbose) parameter to print a progress indicator. You can find more details in the manual. If the tool reports page corruption, then the database table is really corrupted, as below.

page 25413 invalid (fails log sequence number check)

In order to fix this issue, the first thing you should try is to mysqldump the corrupted table; if mysqldump succeeds, then the problem exists in the secondary indexes for that tablespace. This is because the mysqldump utility doesn’t touch indexes, as indexes are created after all rows are inserted.

If mysqldump succeeds, then the problem is associated with indexes. I would suggest the following options to fix the corruption (a rough SQL sketch follows the list).

— Execute OPTIMIZE TABLE on that table, which rebuilds indexes. The table will be locked during the operation prior to MySQL 5.6.17; since MySQL 5.6.17, OPTIMIZE TABLE is an online operation.
— Rebuild the table with the pt-online-schema-change tool from Percona Toolkit. This will give the same result as OPTIMIZE TABLE in a non-blocking way, as pt-online-schema-change is an online schema change tool.
— Drop all secondary indexes and then recreate them. The table will be locked during that operation for writes only. Again, you can use the pt-online-schema-change tool for this purpose without sacrificing read/write ability on the table during the drop and create indexes operation.
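For the first and third options, a minimal SQL sketch might look like the following (database_name.table_name, idx_name and the column list are placeholders, not from the case above):

-- Option 1: rebuild the table and all of its indexes.
OPTIMIZE TABLE database_name.table_name;

-- Option 3: drop and recreate one secondary index explicitly.
ALTER TABLE database_name.table_name DROP INDEX idx_name;
ALTER TABLE database_name.table_name ADD INDEX idx_name (col1);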

Finally, I would suggest re-running the innochecksum tool to verify the table’s integrity again, as this will make sure there is no more page corruption. In this case we found that the table was actually corrupted, fixing the corruption through a backup/reload of the table solved the problem, and Percona XtraBackup ran fine during the next run.

It is possible that mysqldump crashes a MySQL server for a corrupted table. Fortunately, Percona Server contains innodb_corrupt_table_action, which you can enable. The configuration variable is dynamic in nature, meaning enabling it doesn’t require a MySQL server restart. Prior to Percona Server 5.6, innodb_corrupt_table_action was known as innodb_pass_corrupt_table. Once you enable this option, you can try mysqldump again. If you are using Oracle MySQL, then I would suggest trying this with innodb_force_recovery in case mysqldump fails to dump the table contents.
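As a sketch – assuming I recall the variable’s accepted values (assert, warn, salvage) correctly from the Percona Server documentation – enabling it looks like this:

-- Percona Server only; the variable is dynamic, so no restart is required.
SET GLOBAL innodb_corrupt_table_action = 'warn';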

As a side note, if your backup completes without any errors while performing a backup with Percona XtraBackup, this means your InnoDB tables don’t have any page checksum mismatch or corruption. Percona XtraBackup can validate page checksums, and in case of errors it logs the error and exits, as I mentioned above.

There is also a modified version of innochecksum made available by Facebook’s Mark Callaghan, which can be found in this bug report and provides extra stats on tablespace undo blocks. There is another tool, made by Google’s Jeremy Cole and known as the InnoDB Ruby Tool, to examine the internals of InnoDB.

LIMITATIONS:

  • Innochecksum is an offline InnoDB checksum tool. This means you must stop the MySQL server; otherwise it produces an “Unable to lock file” error (since MySQL 5.7).
  • Old versions of innochecksum only support files up to 2GB in size. However, since MySQL 5.6, innochecksum supports files greater than 2GB in size.
  • The Percona Server variable innodb_corrupt_table_action is supported only for tables existing in their own tablespace (i.e. innodb_file_per_table).
  • If you are using compressed tables (ROW_FORMAT=COMPRESSED), then you must use innochecksum from MySQL 5.7.2 or greater, as earlier versions of innochecksum don’t support compressed tables. Check this bug for details.

New Features for the innochecksum tool from MySQL 5.7:

  • As I mentioned above, since MySQL 5.7 innochecksum supports file sizes greater than 2GB.
  • Since MySQL 5.7 you can log the output with the --log option.
  • The --page-type-summary option was added for page type summaries.
  • MySQL 5.7 also includes another nice option, --page-type-dump, which dumps the details of each page to standard output (STDOUT) or standard error (STDERR).
  • Since MySQL 5.7 innochecksum can be executed on multiple user-defined system tablespace files.
  • Since MySQL 5.7 innochecksum can be executed on multiple undo tablespace files.

You can read more about this is in the MySQL 5.7 manual page of innochecksum.

Conclusion:
In this post, we identified InnoDB page corruption using the logs generated by Percona XtraBackup and fixed it by using the mysqldump tool. But again, unfortunately, Percona XtraBackup will not always fail in the same way when it finds corruption. So in some cases, it’s not easy to tell whether Percona XtraBackup has failed due to a bad checksum or a bug of its own. But in most cases, page corruption is the culprit if Percona XtraBackup fails to complete.

To summarize, I would say that Percona XtraBackup is a good way of verifying whether or not InnoDB pages are corrupted – and you can also verify the same thing via the mysqldump utility.


Q&A: Multi-threaded Replication in MySQL 5.6 and MySQL 5.7

March 13, 2015 - 8:45am

My webinar “Multi-threaded Replication in MySQL 5.6 and 5.7” on February 25 generated several excellent questions following the presentation (available here for playback along with the slides). I didn’t have time to answer many of the questions during the session and so in this post I answer all of them. Thanks to everyone who attended!

Q: What do you expect from MTS with the logical clock? Do you think performance will be as good as with per-database parallelization?
A: MTS with 5.6 is not usable if you have a single database. I do not have numbers, but single-database setups are quite frequent. With 5.7 everyone should be able to benefit from multi-threaded replication.

Q: When MySQL 5.6 was released, performance of MTS was lower than in 5.5, for example. Is this addressed now?
A: I am not sure which specific issue or bug you are referring to, but if your data is spread across several databases

Q: How does Percona XtraBackup work with MTS? What are the changes in mysqldump?
A: As long as you are using GTIDs, you can safely take a backup from a slave using multi-threaded replication: with XtraBackup, add the --slave-info option as usual when taking a backup from a slave and with mysqldump, use --master-data instead of --dump-slave.

Q: Regarding the checkpoint position, what if an MTS thread applies an INSERT before the table it inserts into has been created? How will the MTS checkpoint keep track of these transactions being applied by different threads on the slave?
A: The worker threads track all execution gaps to make sure that out-of-order execution is safe and to be able to replay all events without forgetting any of them. So it is not possible that a worker thread will insert data in a table that has not been created yet.

Q: Can you use MTS with all binlog_format options?
A: Yes

Q: Is there any way to have the threads work so that no database contention happens?
A: The short answer is no: the goal of the worker threads is to execute the incoming transactions as fast as possible. If that results in database contention, you should probably decrease the number of worker threads.

Q: Why doesn’t multi-threaded replication perform well on a single DB?
A: With 5.6, parallelization is based on isolating the transactions going to each database. If you only have a single DB, no parallelization is possible. You should look at 5.7 and the logical clock algorithm.
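For reference, a minimal sketch of turning MTS on (variable names are from the MySQL documentation; the slave SQL thread must be stopped to change them):

STOP SLAVE SQL_THREAD;
SET GLOBAL slave_parallel_workers = 4;               -- 5.6+: per-database parallelism
-- SET GLOBAL slave_parallel_type = 'LOGICAL_CLOCK'; -- 5.7 only: intra-database parallelism
START SLAVE SQL_THREAD;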

Q: Are there any implications with regards to GTIDs and Multi-Threaded replication when running a Master-to-Master setup?
A: I cannot think of any, however I am not sure master-master replication is still very relevant when using GTIDs.

Q: Is there any inconvenience with memory or cache when using more workers than the number of databases?
A: If the number of workers is just a bit higher than the number of databases (like 5 workers for 3 databases), there should not be any issue. However with ridiculously high numbers (500 workers for 2 databases), there might be performance degradation. I have not tested such cases, so I cannot give a good answer there. However the idea is that the number of workers should be close to the number of databases and should not exceed the number of cores on the server.

Q: Is there multi-threaded replication in MySQL 5.7?
A: Yes, multi-threaded replication is available in MySQL 5.7 and offers improvements compared to MySQL 5.6 (mainly the parallelization with logical clock).

Q: Have you used DIM_STAT to create load and measure slave lag? Any interesting takeaways from that effort?
A: I used sysbench to generate load and Seconds_Behind_Master from SHOW SLAVE STATUS to measure slave lag. That mainly shows that if your workload is a good fit for MTS (multiple databases and load shared evenly across them), performance benefits can be significant.

Q: Does multi-threaded replication also work with Percona XtraDB Cluster/Percona Server?
A: Percona Server 5.6 is based on MySQL 5.6, so you can use multi-threaded replication exactly as you would use it on MySQL 5.6.

On Percona XtraDB Cluster, it is a bit different: replication inside the cluster uses Galera replication, which has nothing to do with MySQL replication. Note that Galera has offered parallel replication from the beginning (parallel applying of the replicated writesets to be accurate). However if you are using asynchronous replicas, these replicas can benefit from multi-threaded replication if they are running on MySQL/Percona Server 5.6.

Q: What happens to cross db transactions? Do they not replicate?
A: These transactions will replicate, but they will have to wait until all preceding transactions have been executed. Stated differently, cross db transactions introduce serialization, so you should avoid them as much as possible if you want to benefit from parallel applying.

To be accurate, if you have db1, db2 and db3 and if you execute a transaction involving db1 and db2, transactions on db3 can still be applied in parallel. So if you have many databases, cross db transactions may not be that bad.
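As an illustration (hypothetical schema and table names, not from the webinar), such a cross-db transaction could look like this:

-- Touches db1 and db2, so under 5.6 per-database MTS it serializes against
-- all other transactions on those two databases; db3 is unaffected.
START TRANSACTION;
UPDATE db1.accounts SET balance = balance - 10 WHERE id = 1;
UPDATE db2.ledger   SET balance = balance + 10 WHERE id = 1;
COMMIT;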

Q: When using MTS without GTIDs, is “Seconds_Behind_Master” from SHOW SLAVE STATUS valid?
A: Seconds_Behind_Master is based on Exec_Master_Log_Pos. And with MTS, Exec_Master_Log_Pos is not reliable as it indicates the position of the latest checkpoint and not the position of the latest executed transaction. However in practice, checkpoints will happen at least every 300ms by default, so Seconds_Behind_Master is still a good indication of the replication lag. Of course you should keep in mind the usual limitations, such as with multi-tiered replication (if the setup is A->B->C, C will report its lag against B, not against A) or when there is a replication error (then Seconds_Behind_Master is NULL).

Q: How can all the servers be realistically restarted at the same time? There could be a few sec intervals if you have multiple servers [That was when I explained how to enable GTID replication].
A: With MySQL 5.6, the requirements are pretty strict when it comes to enabling GTIDs: all servers must be restarted at the same point in time in the replication stream. As you mention, it is difficult if you have several servers, so the only viable solution is: stop the writes on the master, wait until replication has caught up on all slaves, stop all servers, change the configuration, restart all servers.

What it means is that there is a time range when all servers are down. This is a showstopper for many people, and that’s why Percona Server 5.6 now includes a patch from Facebook that allows an online migration to GTIDs and that’s why MySQL 5.7.6 also offers this option.

* * *

Thanks for all of the great questions – and I hope to see you next month at OpenStack Live and the Percona Live MySQL Conference and Expo 2015 (April 13-16) – both at the Santa Clara conference center in sunny Silicon Valley. Get more info here.

The post Q&A: Multi-threaded Replication in MySQL 5.6 and MySQL 5.7 appeared first on MySQL Performance Blog.

MySQL and geospatial programming: An introduction to GIS

March 13, 2015 - 3:00am

Geographic information systems (GIS) are used by application developers to incorporate geographic information into their datasets to create apps with cool features, such as finding the address of the best steakhouse in town or the locations of local points of historical interest (the list is endless). In a nutshell, GIS captures, stores, checks and displays data related to positions on Earth’s surface.

Next Wednesday I’m going to provide an introduction to GIS functionality in MySQL along with an overview of recent changes. I invite you to register now for this free webinar titled “MySQL and Geospatial Programming.” It starts at 10 a.m. Pacific time on March 18.

In this talk I’ll discuss:

  • The various tasks that deal with geocoding
  • How MySQL can solve problems such as points within a radius (e.g., “Find the 10 closest coffee shops”), among others – a rough sketch follows this list
  • Highlights from some of the new features bundled in the upcoming 5.7 release, and what benefits they can bring to your applications
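As a taste of the radius-style problems mentioned above, here is a minimal sketch using a plain haversine great-circle calculation (the shops table, its lat/lng columns and the coordinates are all hypothetical, not from the webinar):

-- Find the 10 closest shops to a given point; lat/lng are stored in degrees,
-- and 6371 is the Earth's mean radius in km (haversine formula).
SELECT name,
       6371 * 2 * ASIN(SQRT(
         POW(SIN(RADIANS(lat - 37.35) / 2), 2) +
         COS(RADIANS(37.35)) * COS(RADIANS(lat)) *
         POW(SIN(RADIANS(lng - (-121.95)) / 2), 2)
       )) AS distance_km
FROM shops
ORDER BY distance_km
LIMIT 10;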

I’m happy to field questions in advance in the comments section below. This webinar, like all Percona webinars, will be recorded. In addition to the video, my slides will also be available for download.

Register now and I’ll talk to you next week!


Easy query metrics with MySQL Performance Schema

March 12, 2015 - 12:00am

The MySQL Performance Schema exposes so much data that it’s not trivial to learn, configure, and use. With the recently released Percona Agent 1.0.11 you can get query metrics – like min, max, and average query execution time – with a few clicks.

Click “Apply” and about two minutes later you’ll have query metrics from Performance Schema, collected and sent every minute.

Percona Cloud Tools (PCT) and Percona Agent handle all the details. You’ll need MySQL (or Percona Server) 5.6 and Percona Agent 1.0.11 or newer. One caveat at the moment: it only works for local MySQL instances (so not Amazon RDS). This limitation should be fixed soon; we’re already working on it.

Why use Performance Schema? We prefer Query Analytics with MySQL slow logs (especially Percona Server slow logs, which expose even more metrics) because slow logs expose the most metrics of any source, but sometimes the slow log just isn’t an option, so Performance Schema is the next best choice. That choice means tradeoffs: for example, Performance Schema does not expose actual query examples (just fingerprints), so EXPLAIN does not work.
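Under the hood, per-query metrics in Performance Schema live in the statement digest table. As a rough sketch of the kind of data involved (not the agent’s actual implementation), you can query it directly:

-- Top 10 query digests by total execution time (MySQL 5.6+).
-- Timer columns are in picoseconds, hence the division to seconds.
SELECT DIGEST_TEXT,
       COUNT_STAR,
       MIN_TIMER_WAIT / 1e12 AS min_exec_s,
       AVG_TIMER_WAIT / 1e12 AS avg_exec_s,
       MAX_TIMER_WAIT / 1e12 AS max_exec_s
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 10;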

For those who’ve been following PCT development, you know that Percona Agent 1.0.5 first introduced support for Performance Schema. What’s new in 1.0.11 is everything – we completely rewrote this part of the agent. It’s so much better that it’s now the required minimum version for using Query Analytics with Performance Schema. Upgrading is really easy: just run the single command line you used to install the agent and it will auto-update.

MySQL Performance Schema exposes a lot of data and insights into the server, but query metrics are perhaps the most important because the primary job of your MySQL database is to execute queries. That’s why Percona Cloud Tools makes Query Analytics with Performance Schema (and slow logs) so easy: to help you focus on the essential and leave the details to the tools.

Percona Cloud Tools is in beta, so it’s still free to sign up and free to use all the tools and features.


Advanced JSON for MySQL

March 10, 2015 - 3:00am
What is JSON

JSON is a text-based, human-readable format for transmitting data between systems, for serializing objects, and for storing document-store data where each document can have different attributes/schema. Popular document store databases use JSON (and the related BSON) for storing and transmitting data.

Problems with JSON in MySQL

It is difficult to inter-operate between MySQL and MongoDB (or other document databases) because JSON has traditionally been very difficult to work with. Until recently, JSON was just a TEXT document. I said until recently, so what has changed? The biggest thing is that there are new JSON UDFs by Sveta Smirnova, which are part of the MySQL 5.7 Labs releases. Currently the JSON UDFs are at version 0.0.4. While these new UDFs are a welcome addition to the MySQL database, they don’t solve the really tough JSON problems we face.

Searching

The JSON UDFs provide a number of functions that make working with JSON easier, including the ability to extract portions of a document or search a document for a particular key. That being said, you can’t use JSON_EXTRACT() or JSON_SEARCH() in the WHERE clause, because doing so initiates a dreaded full table scan (what MongoDB would call a full collection scan). This is a big problem, and common wisdom is that JSON can’t be indexed for efficient WHERE clauses, especially sub-documents like arrays or objects within the JSON.
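For example – a sketch using the json2 table defined later in this post and the labs json_extract UDF – a search like this forces a full table scan because the function must run against every row:

-- Anti-pattern: the UDF in the WHERE clause defeats all indexes,
-- so every document in the table must be parsed.
SELECT doc FROM json2 WHERE json_extract(doc, 'type') = 'donut';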

Actually, however, I’ve come up with a technique to effectively index JSON data in MySQL (to any depth). The key lies in transforming the JSON from a format that is not easily indexed into one that is easily indexed. Now, when you think index you think B-TREE or HASH indexes (or bitmap indexes) but MySQL also supports FULLTEXT indexes.

A fulltext index is an inverted index where words (tokens) point to documents. While text indexes are great, they aren’t normally usable for JSON. The reason is that MySQL splits words on whitespace and non-alphanumeric characters, so the name of a field (the key) can’t be associated with its value. But what if we transform the JSON? You can “flatten” the JSON down into key/value pairs and use a text index to associate the key/value pairs with the document. I created a UDF called RAPID_FLATTEN_JSON using the C++ RapidJSON library. The UDF flattens JSON documents down into key/value pairs for the specific purpose of indexing.

Here is an example JSON document:

{ "id": "0001", "type": "donut", "name": "Cake", "ppu": 0.55, "batters": { "batter": [ { "id": "1001", "type": "Regular" }, { "id": "1002", "type": "Chocolate" }, { "id": "1003", "type": "Blueberry" }, { "id": "1004", "type": "Devil's Food" } ] }, "topping": [ { "id": "5001", "type": "None" }, { "id": "5002", "type": "Glazed" }, { "id": "5005", "type": "Sugar" }, { "id": "5007", "type": "Powdered Sugar" }, { "id": "5006", "type": "Chocolate with Sprinkles" }, { "id": "5003", "type": "Chocolate" }, { "id": "5004", "type": "Maple" } ] }

Flattened:

mysql> select RAPID_FLATTEN_JSON(load_file('/tmp/doc.json'))\G
*************************** 1. row ***************************
RAPID_FLATTEN_JSON(load_file('/tmp/doc.json')): id=0001 type=donut name=Cake ppu=0.55 id=1001 type=Regular id=1002 type=Chocolate id=1003 type=Blueberry id=1004 type=Devil's Food type=Devil's type=Food id=5001 type=None id=5002 type=Glazed id=5005 type=Sugar id=5007 type=Powdered Sugar type=Powdered type=Sugar id=5006 type=Chocolate with Sprinkles type=Chocolate type=with type=Sprinkles id=5003 type=Chocolate id=5004 type=Maple
1 row in set (0.00 sec)

Obviously this is useful, because our keys are now attached to our values in an easily searchable way. All you need to do is store the flattened version of the JSON in another field (or another table), and index it with a FULLTEXT index to make it searchable. But wait, there is one more big problem: MySQL will split words on the equal sign. We don’t want this as it removes the locality of the keyword and the value. To fix this problem you’ll have to undertake the (actually quite easy) step of adding a new collation to MySQL (I called mine ft_kvpair_ci). I added equal (=) to the list of lower case characters as described in the manual. You just have to change two text files, no need to recompile the server or anything, and as I said, it is pretty easy. Let me know if you get stuck on this step and I can show you the 5.6.22 files I modified.

By the way, I used a UDF, because MySQL FULLTEXT indexes don’t support pluggable parsers for InnoDB until 5.7. This will be much cleaner in 5.7 with a parser plugin and there will be no need to maintain an extra column.

Using the solution:
Given a table full of complex json:

create table json2(id int auto_increment primary key, doc mediumtext);

Add a column for the index data and FULLTEXT index it:

alter table json2 add flat mediumtext character set latin1 collate ft_kvpair_ci, FULLTEXT(flat);

Then populate the index. Note that you can create a trigger to keep the second column in sync (a rough sketch follows below), or you can use Flexviews to maintain a copy in a second table automatically.
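A minimal sketch of such triggers (assuming the RAPID_FLATTEN_JSON UDF is installed) might be:

-- Keep the flat column in sync on insert and update.
CREATE TRIGGER json2_flat_ins BEFORE INSERT ON json2
FOR EACH ROW SET NEW.flat = RAPID_FLATTEN_JSON(NEW.doc);

CREATE TRIGGER json2_flat_upd BEFORE UPDATE ON json2
FOR EACH ROW SET NEW.flat = RAPID_FLATTEN_JSON(NEW.doc);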

mysql> update json2 set flat=RAPID_FLATTEN_JSON(doc);
Query OK, 18801 rows affected (26.34 sec)
Rows matched: 18801  Changed: 18801  Warnings: 0

Using the index:

mysql> select count(*) from json2 where match(flat) against ('last_name=Vembu');
+----------+
| count(*) |
+----------+
|        3 |
+----------+
1 row in set (0.00 sec)

The documents I searched for that example are very complex and highly nested. Check out the full matching documents for the query here.

If you want to only index a subportion of the document, use the JSON_EXTRACT UDF to extract the portion you want to index, and only flatten that, as sketched below.
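A sketch of that approach, reusing the “topping” key from the example document above (the json_extract call signature is assumed from the labs UDF documentation):

-- Flatten and index only the "topping" sub-document.
UPDATE json2 SET flat = RAPID_FLATTEN_JSON(json_extract(doc, 'topping'));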

Aggregating

JSON documents may contain sub-documents, as mentioned a moment ago. JSON_EXTRACT can extract a portion of a document, but the result is still a text document. There is no function that can extract ALL values for a particular key (like invoice_price) and aggregate the results. So, if you have a document called orders which contains a varying number of items and their prices, it is very difficult (if not impossible) to use the JSON UDFs to aggregate a “total sales” figure from all the order documents.

To solve this problem, I created another UDF called RAPID_EXTRACT_ALL(json, ‘key’). This UDF extracts all the values for the given key. For example, if there are 10 line items with invoice_id: 30, it will extract the value (30 in this case) for each item, returning the items separated by newlines. I also created a few stored routines called jsum, jmin, jmax, jcount, and javg, which can process the output of RAPID_EXTRACT_ALL and aggregate it. If you want to run RAPID_EXTRACT_ALL on only a portion of a document, extract that portion with the JSON_EXTRACT UDF first, then process it with RAPID_EXTRACT_ALL.

For example:

mysql> select json_extract_all(doc,'id') ids, jsum(json_extract_all(doc,'id')) from json2 limit 1\G
*************************** 1. row ***************************
ids: 888
889
2312
5869
8702
jsum(json_extract_all(doc,'id')): 18660.00000
1 row in set (0.01 sec)

Aggregating all of the id values in the entire collection:

mysql> select sum( jsum(json_extract_all(doc,'id')) ) from json2 ;
+-----------------------------------------+
| sum( jsum(json_extract_all(doc,'id')) ) |
+-----------------------------------------+
|                         296615411.00000 |
+-----------------------------------------+
1 row in set (2.90 sec)

Of course you could extract other fields and sort and group on them.

Where to get the tools:
You can find the UDFs in the swanhart-tools GitHub repo. I think you will find these tools very useful in working with JSON documents in MySQL.

(This post was originally published on my personal blog, swanhart.livejournal.com, but is reposted here for wider distribution.)


5 free handy tools for monitoring and managing MySQL replication

March 9, 2015 - 8:50am

MySQL Replication is very simple to set up. In this post I’ll discuss its importance and five handy tools for monitoring and managing MySQL replication.

What is MySQL Replication? It’s the process of copying data (as real-time events) from one master instance to another slave instance, maintaining a redundant, consistent copy of the data on a different machine. This enables a distributed database system that shares the same level of information.
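Setting up a slave boils down to a handful of statements; a minimal sketch with placeholder connection values looks like this:

-- Point a slave at its master (all values are placeholders):
CHANGE MASTER TO
  MASTER_HOST = 'master.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'repl_password',
  MASTER_LOG_FILE = 'mysql-bin.000001',
  MASTER_LOG_POS = 4;
START SLAVE;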

In MySQL, replication works based on three threads, as shown below.

1) I/O thread on the slave server: To start receiving replication events, an I/O thread starts on the slave server and connects to the master server.

2) Master connection handler thread: As a connection handler, the master starts a thread whenever a replication slave connects to it. The master server sends the events from its binary log file to the slave I/O thread, notifying the slave about newly written events in its binary log. The slave I/O thread records them in the slave’s relay log file.

3) Slave SQL thread: When it starts, it immediately reads the events from the relay log and applies them on the slave DB. Once it finishes processing a relay log file, and if the I/O thread is already writing events to a new relay log file, it deletes the processed one. If the I/O thread is writing events to the same relay log file the SQL thread is reading, the SQL thread pauses until more events are available in the relay log.

Redundant slave instances are an excellent way to improve performance and availability. Replication ensures that another copy of the database is available whenever issues arise with the master server. Another advantage is offloading report (SELECT) queries to a slave server, a common practice to reduce the workload on the master DB server since multiple servers are able to respond to queries. A third advantage is scheduling backups from the slave server, and so on.

All the benefits discussed above are effective only if replication is up and running and the data is in sync with the master.

Let’s look at a set of very useful tools from Percona Toolkit that help you monitor and manage MySQL replication slaves.

1) pt-heartbeat: This tool measures and monitors replication lag on a MySQL slave in real time. It is important to have a replication monitoring system to confirm that replication is up and running and to know the actual lag.

The typical way to monitor is to use “SHOW SLAVE STATUS” and look at values like Slave_IO_Running: Yes, Slave_SQL_Running: Yes and Seconds_Behind_Master: 0. But this is not reliable: Seconds_Behind_Master only shows the difference between the last timestamp read in the binlogs and the current time. A slow network, long-running queries, blocking operations or a second-level slave (master > slave1 > slave2) can all produce an irrelevant value for this variable.

So I recommend using pt-heartbeat, which depends only on a heartbeat record being replicated to the slave instead of the unreliable method above. pt-heartbeat inserts/updates a row on the master, and the delay is calculated from when the row was written on the master and when it became available to read on the slave. It works at any depth in the replication hierarchy; for example, it will reliably report how far a slave lags behind its original master (its master’s master).

Example :

On Master:
[root@Tst1Master ~]# pt-heartbeat --daemonize -D test --update -h<IP address> --create-table
On Slave:
[root@Tst1Slave ~]# pt-heartbeat -D test --monitor --master-server-id 1
0.00s [  0.00s,  0.00s,  0.00s ]
0.00s [  0.00s,  0.00s,  0.00s ]
0.00s [  0.00s,  0.00s,  0.00s ]
0.00s [  0.00s,  0.00s,  0.00s ]

We usually schedule backups from the slave to avoid additional load on the master server. In that case it is important to confirm the slave is current with the master, to ensure the backup contains recent data. Here is a simple script you can use to verify the replication status on a periodic basis (via cron) and to check the status just before a scheduled backup.

#!/bin/bash
#     < 300      - [Good]
#     300 - 600  - [Warning]
#     > 600      - [Critical]
MAIL_FROM="root@`hostname`"
MAIL_TO="mailid@mail.com"
Warningthreshold=300
Criticalthreshold=600
backup=$1
CMD=$(/root/bin/pt-heartbeat -D test --master-server-id 1 --check | cut -d. -f1)
# Pass the parameter "test.sh backup" to denote the call is from the backup script.
if [ $CMD -lt $Warningthreshold ]
then
    MESSAGE=`date +'%m:%d:%Y %H:%M:%S'`" [Good] current delay: "$CMD;
elif [ $CMD -gt $Warningthreshold ] && [ $CMD -lt $Criticalthreshold ]
then
    MESSAGE=`date +'%m:%d:%Y %H:%M:%S'`" [Warning] current delay: "$CMD;
elif [ $CMD -gt $Criticalthreshold ]
then
    MESSAGE=`date +'%m:%d:%Y %H:%M:%S'`" [Critical] current delay: $CMD Check the replication"
else
    MESSAGE=`date +'%m:%d:%Y %H:%M:%S'`" [Error] Replication status check failed need to investigate."
fi
# No arguments supplied
if [ -z "$1" ] && [ $CMD -gt $Warningthreshold ]
then
    (echo "Subject: Replication status on `hostname`";
     echo "Replication status : "
     echo $MESSAGE
    ) | /usr/sbin/sendmail -O NoRecipientAction=add-to -f${MAIL_FROM} ${MAIL_TO}
elif [ $# -eq 1 ]
then
    (echo "Subject: Replication status check prior to backup on `hostname`";
     echo "Replication status prior to backup:"
     echo $MESSAGE
    ) | /usr/sbin/sendmail -O NoRecipientAction=add-to -f${MAIL_FROM} ${MAIL_TO}
fi

2) pt-slave-find: Finds and prints the replication hierarchy of the slaves – it shows you the topology of your MySQL replication instances.

Example :

[root@Tst1Master ~]# ./pt-slave-find --host=192.168.56.10
192.168.56.10
Version         5.6.22-72.0-log
Server ID       1
Uptime          42:09 (started 2015-03-03T01:40:42)
Replication     Is not a slave, has 1 slaves connected, is not read_only
Filters
Binary logging  STATEMENT
Slave status
Slave mode      STRICT
Auto-increment  increment 1, offset 1
InnoDB version  5.6.22-72.0
+- 192.168.56.11
   Version         5.6.22-72.0
   Server ID       2
   Uptime          41:48 (started 2015-03-03T01:41:03)
   Replication     Is a slave, has 0 slaves connected, is not read_only
   Filters
   Binary logging  STATEMENT
   Slave status    0 seconds behind, running, no errors
   Slave mode      STRICT
   Auto-increment  increment 1, offset 1
   InnoDB version  5.6.22-72.0

[root@Tst1Master ~]# ./pt-slave-find --host=192.168.56.10 --report-format=hostname
192.168.56.10
+- 192.168.56.11

3) pt-slave-restart: Watches the MySQL replication slaves for errors and tries to restart replication.

The tool is very useful for skipping statements that cause errors and continuing replication. If you use it carelessly, however, the slave can end up with inconsistent data. Whenever you use the tool, I recommend confirming data consistency between master and slave with pt-table-checksum.

Example : Restart the slave for error-numbers=1062 (Duplicate entry ‘1’ for key ‘PRIMARY’)

#pt-slave-restart --socket=/var/lib/mysql/custom-feeds/mysql.sock --ask-pass --error-numbers=1062

4) pt-table-checksum: Performs an online replication consistency check by executing checksum queries on the master, which produces different results on replicas that are inconsistent with the master.

Example :

[root@Tst1Master ~]# ./pt-table-checksum -dD
            TS ERRORS  DIFFS  ROWS  CHUNKS SKIPPED    TIME TABLE
03-03T02:34:44      0      1     2       1       0   0.011 d.t

Note: It is important to run pt-table-checksum regularly, regardless of whether or not you’ve ever skipped an event with pt-slave-restart, to make sure the slave holds data identical to the master’s.
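For example, a periodic check could be scheduled from cron. This is a sketch only: the path, schedule and log file are hypothetical, and connection credentials are assumed to be in /root/.my.cnf:

# Run a consistency check every Sunday at 2am and keep the output for review
0 2 * * 0 /usr/bin/pt-table-checksum --quiet >> /var/log/pt-table-checksum.log 2>&1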

5) pt-table-sync: Syncs a slave with its master (synchronizes data efficiently between MySQL tables).

Example :

[root@Tst1Slave ~]# ./pt-table-sync -dD --print --sync-to-master 192.168.56.11
REPLACE INTO `d`.`t`(`id`, `data`) VALUES ('1', 'Test1') /*percona-toolkit src_db:d src_tbl:t src_dsn:P=3306,h=192.168.56.10 dst_db:d dst_tbl:t dst_dsn:h=192.168.56.11 lock:1 transaction:1 changing_src:1 replicate:0 bidirectional:0 pid:6435 user:root host:Tst1Slave.mysql*/;
REPLACE INTO `d`.`t`(`id`, `data`) VALUES ('2', 'Test2') /*percona-toolkit src_db:d src_tbl:t src_dsn:P=3306,h=192.168.56.10 dst_db:d dst_tbl:t dst_dsn:h=192.168.56.11 lock:1 transaction:1 changing_src:1 replicate:0 bidirectional:0 pid:6435 user:root host:Tst1Slave.mysql*/;
[root@Tst1Slave ~]#

[root@Tst1Slave ~]# ./pt-table-sync -dD --verbose --execute --sync-to-master 192.168.56.11
# Syncing h=192.168.56.11
# DELETE REPLACE INSERT UPDATE ALGORITHM START    END      EXIT DATABASE.TABLE
#      0       2      0      0 Chunk     03:38:09 03:38:09 2    d.t

We have successfully synced the tables, so let’s run the checksum again and confirm the table is in sync.

[root@Tst1Master ~]# ./pt-table-checksum -dD
            TS ERRORS  DIFFS  ROWS  CHUNKS SKIPPED    TIME TABLE
03-03T03:03:40      0      0     2       1       0   0.111 d.t

The above are simple examples; based on your needs, you can choose the tools and options or modify the scripts. I also recommend going through the documentation for more details on each tool.

The post 5 free handy tools for monitoring and managing MySQL replication appeared first on MySQL Performance Blog.

What stopped MySQL? Tracing back signals sent to MySQL

March 6, 2015 - 8:59am

Have you ever had a case where you needed to find a process which sent a HUP/KILL/TERM or other signal to your database? Let me rephrase. Did you ever have to find which process messed up your night? If so, you might want to read on. I’m going to tell you how you can find it.

Granted, on small and/or meticulously managed systems tracking down the culprit is probably not a big deal. You can likely identify your process simply by checking what processes have enough privileges to send mysqld a HUP/KILL/TERM signal. However, frequently we see cases where this may not work or the elimination process would be too tedious to execute.

We recently had a case where a process was frequently sending SIGHUPs to mysqld and the customer asked us to see if we could get rid of the annoyance. This blog is the direct result of a discussion I had with my colleague Francisco Bordenave on the options available to deal with the issue. I’m only going to cover a few of them here, but I imagine most of you will be able to find one that works for your case. Note that most tracing tools add some overhead to the system being investigated. The tools presented below are designed to be lightweight, so the impact should be well within the acceptable range for most environments.

DISCLAIMER: While writing this blog I discovered that David Busby has also discussed one of the tools that I’m going to cover in his article. For those who have read the article note that I’m going to cover other tools as well and I will also cover a few extra SystemTap details in this blog. For those who haven’t yet had chance to read David’s blog, you can read it here.

All right, let’s see what “low hanging tools” there are available to us to deal with our issue!

Linux
  • SystemTap: widely available on Linux but usually not enabled by default. You need to install debuginfo and devel kernel packages and systemtap itself. Similar to DTrace.
  • Perf: although not quite written for generic tracing, its ability to trace system calls means we can use it to our advantage by tracing the sys_enter_kill tracepoint.
  • Audit: generic system auditing platform. Given its nature, we can use it to track down many things, including rogue processes sending HUP signals to our poor mysqld!
  • Code!: Given that MySQL is open source, you could customize the signal handler to obtain extra information. See more in sigaction(2) and the SA_SIGINFO flag. I’m not sure this should be listed as a more efficient solution but it’s an option nevertheless. One could also preload/inject a custom signal handler via an LD_PRELOAD trick and a custom library, but that’s beyond the scope of what I intend to cover. However, for certain signals (most notably SIGSEGV) you may not need to write your own tools, as the OS may already come with libs/tools that can assist you. See Ulrich Drepper’s catchsegv or /usr/lib64/libSegFault.so, for instance.
  • Debuggers: These may be efficient to use in some cases but I won’t cover them this time, either.
FreeBSD/Solaris
  • DTrace: a very decent, stable tracing platform. Included in most recent kernels by default for the mentioned platforms (FreeBSD 9.2+, FreeBSD 10+, Solaris 10+).

In this article I’m going to focus on Linux, as that’s what people in the MySQL community seem to care about most nowadays. The tools I will discuss are SystemTap, Perf and Audit. If you would like to read about the others, let me know and I will cover them in a follow-up article.

SystemTap

I’m going to set up SystemTap on a recent, 64 bit CentOS 7 box. I will only cover basic install, you can find more about how to install SystemTap here.

The strength of SystemTap is definitely its flexibility; it is potentially the best tool for solving our problem on the Linux platform. It’s been around for some time and is generally regarded as mature, but I recommend testing your “tapscripts” in dev/qa before you run them in production.

Installing SystemTap

Follow the steps below to install SystemTap:

[root@centos7]~# sed -i 's/enabled=0/enabled=1/' /etc/yum.repos.d/CentOS-Debuginfo.repo
[root@centos7]~# yum repolist
...
base-debuginfo/x86_64        CentOS-7 - Debuginfo        1,688
...

[root@centos7]~# yum install kernel-debuginfo kernel-debuginfo-common kernel-devel
[root@centos7]~# yum install systemtap systemtap-runtime

Tracing with SystemTap

Create a tapscript like the one below:

[root@centos7]~# cat find_sighupper.stp
#!/usr/bin/stap
# Prints information on process which sent HUP signal to mysqld
probe begin
{
  printf("%-26s %-8s %-5s %-8s %-5s\n", "TIME", "SOURCE", "SPID", "TARGET", "TPID");
}
probe nd_syscall.kill.return
{
  sname = @entry(execname());
  spid  = @entry(pid());
  sig   = @entry(uint_arg(2));
  tpid  = @entry(uint_arg(1));
  tname = pid2execname(tpid);
  time  = ctime(gettimeofday_s());
  if (sig == 1 && tname == "mysqld")
    printf("%-26s %-8s %-5d %-8s %-5d\n", time, sname, spid, tname, tpid);
}

Then run the tap script in a dedicated terminal:

[root@centos7]~# stap find_sighupper.stp
TIME                       SOURCE   SPID  TARGET   TPID

Send your HUP signal to mysqld from another terminal:

[root@centos7]~# kill -1 1984

The culprit will show up in your first window like so:

[root@centos7]~# stap find_sighupper.stp
TIME                       SOURCE   SPID  TARGET   TPID
Thu Feb 26 21:20:44 2015   kill     6326  mysqld   1984
^C

Note that with this solution I was able to define fairly nice constraints relatively easily. With a single probe (well, quasi, as @entry refers back to the callee) I was able to get all this information and filter out HUP signals sent to mysqld. No other filtering is necessary!
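The script is also easy to extend. If you wanted to catch TERM and KILL signals aimed at mysqld as well, only the filter condition in the tapscript needs relaxing, along these lines (an untested sketch):

# match HUP (1), KILL (9) and TERM (15) sent to mysqld
if ((sig == 1 || sig == 9 || sig == 15) && tname == "mysqld")
    printf("%-26s %-8s %-5d %-8s %-5d sig=%d\n", time, sname, spid, tname, tpid, sig);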

Perf

Perf is another neat tool to have. As its name implies, it was originally developed for lightweight profiling, to use the performance counters subsystem in Linux. It became fairly popular and got extended many times over these past years. Since it happens to have probes we can leverage, we are going to use it!

Installing Perf

As you can see, installing Perf is relatively simple.

# yum install perf

Start perf in a separate terminal window. I’m only going to run it for a minute but I could run it in screen for a longer period of time.

[root@centos7 ~]# perf record -a -e syscalls:sys_enter_kill sleep 60
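If one minute is not enough, the same capture can be left running detached. A sketch only; the screen session name, output file and duration are arbitrary choices, not requirements of perf:

# capture kill() syscalls for 24 hours inside a detached screen session
screen -dmS siglog perf record -a -e syscalls:sys_enter_kill -o /root/kill.data sleep 86400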

In a separate terminal window send your test and obtain the results via “perf script”:

[root@centos7 ~]# echo $$
11380
[root@centos7 ~]# pidof mysqld
1984
[root@centos7 ~]# kill -1 1984
[root@centos7 ~]# perf script
# ========
# captured on: Thu Feb 26 14:25:02 2015
# hostname : centos7.local
# os release : 3.10.0-123.20.1.el7.x86_64
# perf version : 3.10.0-123.20.1.el7.x86_64.debug
# arch : x86_64
# nrcpus online : 2
# nrcpus avail : 2
# cpudesc : Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz
# cpuid : GenuineIntel,6,70,1
# total memory : 1885464 kB
# cmdline : /usr/bin/perf record -a -e syscalls:sys_enter_kill sleep 60
# event : name = syscalls:sys_enter_kill, type = 2, config = 0x9b, config1 = 0x0, config2 = 0x0, excl_usr = 0, exc
# HEADER_CPU_TOPOLOGY info available, use -I to display
# HEADER_NUMA_TOPOLOGY info available, use -I to display
# pmu mappings: software = 1, tracepoint = 2, breakpoint = 5
# ========
#
bash 11380 [000] 6689.348219: syscalls:sys_enter_kill: pid: 0x000007c0, sig: 0x00000001

As you can see in the above output, the process “bash” with pid 11380 sent a HUP signal (0x01) to pid 0x000007c0 (decimal: 1984). Thus, we found our culprit with this method as well.

Audit

You can read more about Audit in the Red Hat Security Guide.

Installing Audit

Depending on your OS installation, it may be already installed.

In case it is not, you can install it as follows:

[root@centos7 ~]# yum install audit

When you are done installing, start your trace and track 64-bit kill system calls that send signals with signal ID 1 (HUP):

[root@centos7]~# auditctl -l
No rules
[root@centos7]~# auditctl -a exit,always -F arch=b64 -S kill -F a1=1
[root@centos7]~# auditctl -l
LIST_RULES: exit,always arch=3221225534 (0xc000003e) a1=1 (0x1) syscall=kill
[root@centos7]~# auditctl -s
AUDIT_STATUS: enabled=1 flag=1 pid=7010 rate_limit=0 backlog_limit=320 lost=0 backlog=0
[root@centos7]~# pidof mysqld
1984
[root@centos7]~# kill -1 1984
[root@centos7]~# tail -2 /var/log/audit/audit.log
type=SYSCALL msg=audit(1425007202.384:682): arch=c000003e syscall=62 success=yes exit=0 a0=7c0 a1=1 a2=a a3=7c0 items=0 ppid=11380 pid=3319 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=1 comm="zsh" exe="/usr/bin/zsh" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key=(null)
type=OBJ_PID msg=audit(1425007202.384:682): opid=1984 oauid=-1 ouid=995 oses=-1 obj=system_u:system_r:mysqld_t:s0 ocomm="mysqld"

As you can see from above output, the results showed up nicely in the system audit.log. From the log it’s clear that I sent my SIGHUP to mysqld (pid 1984, “opid” field) from zsh (see the command name in the “comm” field) via the 64 bit kill syscall. Thus, mischief managed, once again!
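Once the investigation is over, remember to drop the rule again so the audit log stays quiet. A rule is removed with the same syntax it was added with (auditctl -D would flush all rules):

[root@centos7]~# auditctl -d exit,always -F arch=b64 -S kill -F a1=1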

Summary

In this blog I presented three different tools to help you trace down the sources of signals. Each of the three tools has its own strengths. SystemTap is rich in features and nicely scriptable. The additional features of auditd may make it appealing to deploy on your hosts. Perf is a great tool for CPU profiling and you might want to install it solely for that reason. On the other hand, your distribution might not have support compiled into its kernel, or may make the setup harder for a given tool. In my experience most modern distributions support the tools discussed here, so the choice comes down to personal preference or convenience.

In case you were wondering, I often pick auditd because it is frequently already installed. SystemTap might be a bit more complicated to set up, but I would likely invest the extra time if my case were more complex. I primarily use perf for CPU tracing and tend to think of the other two tools before perf when tracing signals.

Hope you enjoyed reading! Happy [h/t]racking!

The post What stopped MySQL? Tracing back signals sent to MySQL appeared first on MySQL Performance Blog.

How to test if CVE-2015-0204 FREAK SSL security flaw affects you

March 5, 2015 - 8:45am

The CVE-2015-0204 FREAK SSL vulnerability abuses intentionally weak “EXPORT” ciphers, which could be used to perform a transparent man-in-the-middle attack. (We seem to be continually bombarded not only with SSL vulnerabilities but with the need to give vulnerabilities increasingly odd names.)

Is your server vulnerable?

This can be tested using the following GIST
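The embedded gist does not survive syndication here, but a rough equivalent is to ask openssl to offer only EXPORT ciphers and count whether one was negotiated. This is a sketch only, not the original gist; replace example.com:443 with your own server:

openssl s_client -connect example.com:443 -cipher EXPORT < /dev/null 2>/dev/null | grep -c 'Cipher is EXP'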

If the result is 0, the server is not offering the EXPORT ciphers and as such is not vulnerable.

Is your client vulnerable?

Point your client to https://oneiroi.co.uk:4443/test. If this returns “Vulnerable” then the client is vulnerable; if you get a connection error instead, your client should not be vulnerable. For example:

root@host:/tmp$ openssl version
OpenSSL 1.0.1e 11 Feb 2013
root@host:/tmp$ curl https://oneiroi.co.uk:4443/test -k
Vulnerable

root@host:/tmp$ openssl s_client -connect oneiroi.co.uk:4443
CONNECTED(00000003)
depth=0 C = XX, L = Default City, O = Default Company Ltd
verify error:num=18:self signed certificate
verify return:1
depth=0 C = XX, L = Default City, O = Default Company Ltd
verify return:1

Certificate chain
0 s:/C=XX/L=Default City/O=Default Company Ltd
i:/C=XX/L=Default City/O=Default Company Ltd

Server certificate
-----BEGIN CERTIFICATE-----
MIIDVzCCAj+gAwIBAgIJANvTn7jl

[root@3654e4df1cc2 bin]# curl https://oneiroi.co.uk:4443/test -k
curl: (35) Cannot communicate securely with peer: no common encryption algorithm(s).
[root@3654e4df1cc2 bin]# openssl s_client -connect oneiroi.co.uk:4443
CONNECTED(00000003)
139942442694560:error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure:s23_clnt.c:744:

In short, a vulnerable client will complete the connection, while a non-vulnerable client should present an SSL handshake failure error.

DIY

You can recreate this setup yourself:


openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout mycert.pem -out mycert.pem;
openssl s_server -cipher EXPORT -accept 4443 -cert mycert.pem -HTTP;

Is MySQL affected?

Some of the code per the POODLE Blog post can be re-purposed here.


mysql -Bse "SHOW STATUS LIKE 'Ssl_cipher_list'" | sed 's/:/\n/g' | grep EXP | wc -l

A result of 0 means the MySQL instance does not support any of the EXPORT ciphers, and thus should not be vulnerable to this attack.
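If the count is non-zero, one mitigation is to pin the server to a strong, non-EXPORT cipher explicitly via the ssl-cipher option. A sketch; the suite shown is only an example, pick one your clients support:

[mysqld]
ssl-cipher=DHE-RSA-AES256-SHA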

How about other clients?

Most clients link to a separate library for SSL; however, there are examples where this is not the case. Take for example golang (http://golang.org/pkg/crypto/tls/), which partially implements the TLS 1.2 RFC itself.

The following test code, however, shows that golang does not appear to be affected.


package main

import (
	"fmt"
	"net/http"
	"crypto/tls"
)

func main() {
	tr := &http.Transport{
		TLSClientConfig:    &tls.Config{},
		DisableCompression: true,
	}
	client := &http.Client{Transport: tr}
	resp, err := client.Get("https://oneiroi.co.uk:4443/test")
	fmt.Println(err)
	fmt.Println(resp)
}

Get https://oneiroi.co.uk:4443/test: remote error: handshake failure

The post How to test if CVE-2015-0204 FREAK SSL security flaw affects you appeared first on MySQL Performance Blog.

Steve ‘Woz’ Wozniak to speak at Percona Live MySQL Conference & Expo 2015

March 5, 2015 - 5:00am

Apple’s Steve “Woz” Wozniak will talk at Percona Live 2015

I am thrilled to announce that Apple co-founder and Silicon Valley icon and philanthropist Steve Wozniak will participate in a moderated Q&A on creativity and innovation April 14 during this year’s Percona Live MySQL Conference and Expo in Santa Clara, California.

In addition to “The Woz,” as Steve is nicknamed, we have a great lineup of esteemed industry luminaries, with a broad range of talks and tutorials along with fun and informative networking events during the four-day conference (April 13-16).

Harrison Fisk of Facebook’s Database Performance Team will deliver a keynote titled “Polyglot Persistence @Facebook” exploring why Facebook has so many different data solutions deployed at scale and how the company decides to deploy a new one. He’ll also talk about how Facebook is able to manage all these solutions and what types of optimizations are possible across all their data systems.

‘MySQL 101’ program announced
I wrote about the new “MySQL 101” program earlier this week. Largely driven by the broad demand for (and scarcity of) MySQL expertise across many industries, Percona Live is adding dual tracks for both developers and system administrators that will provide a 2-day course on MySQL fundamentals.

The two days of practical training will include everything needed to handle day-to-day MySQL DBA tasks. The motto of this new program is, “You send us developers and admins, and we will send you back MySQL DBAs.” You can check out the full schedule for MySQL 101 here.

Lightning Talks and Birds of a Feather Sessions (BOFs)
In related news, today we also announced the schedules for the popular “Lightning Talks” and “Birds of a Feather Sessions” (BOFs).

Lightning Talks provide an opportunity for attendees to propose, explain, exhort, and rant on any MySQL-related topic for five minutes. Lightning Talks will take place Wednesday night, (April 15) during the MySQL Community Networking Reception, which begins immediately following the breakout sessions.

This year’s topics include:

  • “Everything About MySQL Bugs” presented by Sveta Smirnova, senior principal technical support engineer, and Valerii Kravchuk, principal support engineer at Percona
  • “MySQL Galera Cluster, Percona XtraDB Cluster, and MariaDB Galera Cluster” presented by Jay Janssen, managing consultant at Percona
  • “Data Security – Emerging Legal Trends” presented by S. Keith Moulsdale, partner at Whiteford, Taylor & Preston
  • “WebScaleSQL Meeting/Hacking” presented by Steaphan Greene, software engineer at WebScaleSQL

BOFs sessions enable attendees with interests in the same project or topic to enjoy some quality face time. The BOF sessions will take place Tuesday night, April 14, from 6:00 p.m. to 7:00 p.m. This year’s topics include:

  • “Working From Home – Fun, Facts and Scares” presented by Raghavendra Prabhu, product lead, Percona XtraDB Cluster at Percona
  • “The Future of MySQL Quality Assurance: Introducing pquery” presented by Roel Van de Paar, senior QA lead, and Ramesh Sivaraman, junior QA engineer at Percona

Community Reception
One of the most fun and compelling aspects of Percona Live is the networking, which provides the opportunity to make connections that can help enhance a career, facilitate a current project, or inspire new ideas. This year’s conference features the Community Networking Reception in the Expo Hall on Wednesday, April 15, from 5:30 p.m. to 7:00 p.m. The event will include the MySQL Community Awards and the Lightning Talks.

Conference Registration Promotions
Advanced pricing discounts are still available but will end on March 8. Register soon to take advantage of the best conference rates available. A Percona Live MySQL Conference & Expo ticket also provides you with access to all OpenStack Live sessions. In addition, in the spirit of the new MySQL 101 track, the first 101 people who sign up for MySQL 101 using the “101” discount code will get their tickets for $101 – a $299 savings! Attendees of MySQL 101 will have full access to Percona Live keynotes and the Expo Hall.

See you next month!

The post Steve ‘Woz’ Wozniak to speak at Percona Live MySQL Conference & Expo 2015 appeared first on MySQL Performance Blog.

Percona XtraDB Cluster 5.6.22-25.8 is now available

March 5, 2015 - 4:30am

Percona is glad to announce the new release of Percona XtraDB Cluster 5.6 on March 5th 2015. Binaries are available from downloads area or from our software repositories.

Based on Percona Server 5.6.22-72.0 including all the bug fixes in it, Galera Replicator 3.9, and Codership wsrep API 25.8, Percona XtraDB Cluster 5.6.22-25.8 is now the current General Availability release. All of Percona’s software is open-source and free, and all the details of the release can be found in the 5.6.22-25.8 milestone at Launchpad.

Bugs Fixed:

  • XtraBackup SST wouldn’t stop when MySQL was SIGKILLed. This would prevent MySQL from initiating a new transfer as port 4444 was already in use. Bug fixed #1380697.
  • wsrep_sst_xtrabackup-v2 script was causing innobackupex to print a false positive stack trace into the log. Bug fixed #1407599.
  • MyISAM DDL (CREATE/DROP) isn’t replicated any more when wsrep_replicate_myisam is OFF. Note, for older nodes in the cluster, wsrep_replicate_myisam should work since the TOI decision (for MyISAM DDL) is done on origin node. Mixing of non-MyISAM and MyISAM tables in the same DDL statement is not recommended with wsrep_replicate_myisam OFF since if any table in list is MyISAM, the whole DDL statement is not put under TOI (total order isolation), hence not replicated. Bug fixed #1402338.
  • gcache.mem_size option has been deprecated. A warning will now be generated if the variable has a value different from 0. Bug fixed #1392408.
  • stderr of SST/Innobackupex is logged to syslog with appropriate tags if sst-syslog is in [sst] or [mysqld_safe] has syslog in my.cnf. This can be overridden by setting sst-syslog to -1 in [sst]. Bug fixed #1399134.
  • clustercheck can now check if the node is PRIMARY or not, to allow for synced nodes which go out of PRIMARY not to take any writes/reads. Bug fixed #1403566.
  • SST will now fail early if the xtrabackup_checkpoints is missing on the joiner side. Bug fixed #1405985.
  • socat utility was not properly terminated after a timeout. Bug fixed #1409710.
  • When started (without bootstrap), the node would hang if it couldn’t find a primary node. Bug fixed #1413258.
  • 10 seconds timeout in Xtrabackup SST script was not enough for the joiner to delete existing files before it started the socat receiver on systems with big datadir. Bug fixed #1413879.
  • A non-bootstrap node could crash while attempting to perform table%cache operations with the BF applier failed to open_and_lock_tables warning. Bug fixed #1414635.
  • Percona XtraDB Cluster 5.6 would crash on ALTER TABLE / CREATE INDEX with Failing assertion: table->n_rec_locks == 0 error. Bug fixed #1282707.
  • Variable length arrays in WSREP code were causing debug builds (-DWITH_DEBUG=ON) to fail. Bug fixed #1409042.
  • Race condition between donor and joiner in Xtrabackup SST configuration has been fixed. This caused XtraBackup SST to fail when joiner took longer to spawn the second listener for SST. Bug fixed #1405668.
  • Signal handling in mysqld has been fixed for SST processes. Bug fixed #1399175.
  • SST processes are now spawned with fork/exec instead of posix_spawn to allow for better cleanup of child processes in event of non-graceful termination (SIGKILL or a crash etc.). Bug fixed #1382797.
  • wsrep_local_cached_downto would underflow when the node on which it is queried had no writesets in gcache. Bug fixed #1262179.
  • A typo in wsrep_provider_options could cause an unhandled exception. Bug fixed #215.
  • Interrupted IST would result in HA_ERR_KEY_NOT_FOUND error in subsequent IST. Bug fixed #210.

Other bugs fixed: #1275814.

Known Issue:

  • For those affected by crashes on the donor during SST due to backup locks (#1401133), add the following option to the [sst] section of your my.cnf configuration file as a workaround to force FLUSH TABLES WITH READ LOCK:

    [sst]
    inno-backup-opts='--no-backup-locks'

    (NOTE: This workaround is available only if you’re using Percona XtraBackup 2.2.9 or newer.) Or, as an alternative, you can set the environment variable FORCE_FTWRL to 1 (for passing environment variables, see the description of bug #1381492 in the previous release notes).

Release notes for Percona XtraDB Cluster 5.6.22-25.8 are available in our online documentation along with the installation and upgrade instructions.

Percona XtraDB Cluster code hosting has been moved to Github. The Bazaar branches will not be updated further.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

The post Percona XtraDB Cluster 5.6.22-25.8 is now available appeared first on MySQL Performance Blog.

Percona Server 5.6.23-72.1 is now available

March 4, 2015 - 9:29am

Percona is glad to announce the release of Percona Server 5.6.23-72.1 on March 4, 2015. Download the latest version from the Percona web site or from the Percona Software Repositories.

Based on MySQL 5.6.23, including all the bug fixes in it, Percona Server 5.6.23-72.1 is the current GA release in the Percona Server 5.6 series. Percona Server is open-source and free – and this is the latest release of our enhanced, drop-in replacement for MySQL. Complete details of this release can be found in the 5.6.23-72.1 milestone on Launchpad.

New Features:

  • TokuDB storage engine package has been updated to version 7.5.6.

Bugs Fixed:

  • The RPM pre-install script assumed that the PID file was always located in the datadir. If it was not, the installation could make a wrong assumption about whether the server was running. Bug fixed #1201896.
  • SHOW GRANTS displayed only the privileges granted explicitly to the named account. Other effectively available privileges were not displayed. Fixed by implementing Extended SHOW GRANTS feature. Bug fixed #1354988 (upstream #53645).
  • InnoDB lock monitor output was printed even if it was not requested. Bug fixed #1418996.
  • The stored procedure key was made consistent with other keys in the Slow Query Log by replacing space with an underscore. Bug fixed #1419230.
  • Some --big-test MTR tests were failing for Percona Server because they weren’t updated. Bug fixed #1419827.

Other bugs fixed: #1408232, and #1420303.

Release notes for Percona Server 5.6.23-72.1 are available in the online documentation. Please report any bugs on the Launchpad bug tracker.

The post Percona Server 5.6.23-72.1 is now available appeared first on MySQL Performance Blog.

Percona Server 5.5.42-37.1 is now available

March 4, 2015 - 7:22am


Percona is glad to announce the release of Percona Server 5.5.42-37.1 on March 4, 2015. Based on MySQL 5.5.42, including all the bug fixes in it, Percona Server 5.5.42-37.1 is now the current stable release in the 5.5 series.

Percona Server is open-source and free. Details of the release can be found in the 5.5.42-37.1 milestone on Launchpad. Downloads are available here and from the Percona Software Repositories.

Bugs Fixed:

  • The RPM pre-install script assumed that the PID file was always located in the datadir. If it was not, the installation could make a wrong assumption about whether the server was running. Bug fixed #1201896.
  • SHOW GRANTS displayed only the privileges granted explicitly to the named account. Other effectively available privileges were not displayed. Fixed by implementing Extended SHOW GRANTS feature. Bug fixed #1354988 (upstream #53645).
  • InnoDB lock monitor output was printed even if it was not requested. Bug fixed #1418996.
  • The stored procedure key was made consistent with other keys in the Slow Query Log by replacing space with an underscore. Bug fixed #1419230.

Other bugs fixed: #1408232, #1415843 (upstream #75642), #1407941, and #1424568 (upstream #75868).

(Please also note that Percona Server 5.6 series is the latest General Availability series and current GA release is 5.6.23-72.1.)

Release notes for Percona Server 5.5.42-37.1 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

The post Percona Server 5.5.42-37.1 is now available appeared first on MySQL Performance Blog.
