
Q & A: MySQL In the Cloud – Migration, Best Practices, High Availability, Scaling

Latest MySQL Performance Blog posts - June 9, 2017 - 1:36pm

In this blog, we will provide answers to the Q & A for the MySQL In the Cloud: Migration, Best Practices, High Availability, Scaling webinar.

First, we want to thank everybody for attending the June 7, 2017 webinar. The recording and slides for the webinar are available here. Below is the list of your questions that we were unable to answer during the webinar:

How does Percona XtraDB cluster work with AWS for MySQL clustering?

Percona XtraDB Cluster works especially well in cloud environments, including Amazon EC2. Because Percona XtraDB Cluster requires only one network round trip per write transaction commit, and keeps all reads local, it can be deployed as a high-performance multi-AZ or even multi-region cluster. Because each Percona XtraDB Cluster node contains all the data, it can avoid reliance on EBS storage. You can run Percona XtraDB Cluster on NVMe-storage-based i3 EC2 instances to achieve high performance even with very IO-intensive workloads. Automatic provisioning and cluster self-healing allow you to scale the cluster easily. We have a simple tutorial on how to deploy Percona XtraDB Cluster on AWS – check it out here.

How do you approach the master-master model? Are there enough reasons to use this model to implement multi-site scaling?

There are two distinct multi-master modes in existence. A synchronous Master-Master solution, like the one offered by Percona XtraDB Cluster (virtually synchronous to be exact), guarantees there are no data conflicts as you connect to the nodes located at different sites. The downside of this model is that writes can be expensive. As such, it works well in environments with low latency between the different sites, or when high latency for updates can be tolerated. Percona XtraDB Cluster is greatly optimized in that it requires only one network roundtrip to complete a commit transaction. This significantly reduces the added latency compared to many other solutions.

In contrast, asynchronous Master-Master means you can perform writes locally, without waiting on a network round trip. It comes with the downside of possible data conflicts. In MySQL, it can be implemented using MySQL Replication. At this point, however, MySQL Replication only detects conflicts and stops when it finds one; it has no good built-in conflict resolution. Ensuring conflicts do not happen at the application level is hard and error-prone, and is only recommended in rare cases. Most applications do not use active Master-Master, but rather design an architecture where each database replication set operates with only a single writable node.

Do the Percona tools work in the cloud, like in Amazon Aurora?

We try to make Percona software work in the cloud when it makes sense. For example, Percona Toolkit and Percona Monitoring and Management support Amazon RDS and Amazon Aurora. Percona XtraBackup does not, as it requires physical access to the database files (which Amazon RDS and Aurora don't provide). Having said that, Amazon recently updated its Aurora migration documentation to include the use of XtraBackup: Amazon Aurora supports backups taken by Percona XtraBackup as a way to import data.

What is the fastest way to verify and validate backups created by XtraBackup for databases around 2-3TB?

In the big picture, you test backups by doing some sort of restore and validation. This can be done manually, but is much better if automated. There are three levels of such validation:

  • Basic Validation. Run --apply-log and ensure it completes successfully. Start the MySQL instance and run some basic queries to ensure it works. Running a few queries to confirm that recent data is present is often a good idea (see the sketch after this list).
  • Consistency Validation. Additionally, run CHECK TABLE on all tables to ensure there is no corruption. This validates the table and index data structures.
  • Full Validation. Restore the backup and connect it as a MySQL slave (possibly to one of the existing slaves). Let it catch up, and then run pt-table-checksum to verify consistency and ensure that the data in the backup matches what is in the source.
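A minimal sketch of the Basic Validation level above, assuming the backup was taken with Percona XtraBackup into /data/backup and that a throwaway instance can listen on a spare port (the paths, port and table name are hypothetical):

# innobackupex --apply-log /data/backup
# mysqld --user=mysql --datadir=/data/backup --port=3307 --socket=/tmp/restore.sock &
# mysql -S /tmp/restore.sock -e "SELECT MAX(created_at) FROM app.orders"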

Running CHECK TABLE on databases on AWS IO-optimized instances takes up to eight hours. Any other suggestions on how to replace CHECK TABLE in validation?

Without knowing the table size, it is hard for me to assess whether eight hours is reasonable for your environment. Generally speaking, however, you should not run a Full Validation on every backup. Full Validation first and foremost validates the backup and restore pipeline. If you're not seeing issues, doing it once per month is plenty. You want to do lighter checks on a daily and weekly basis; a crontab sketch of this cadence follows.
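A hedged crontab sketch of that cadence (the script names are hypothetical wrappers around the three validation levels above): basic validation daily, consistency validation weekly, full validation monthly:

0 3 * * * /usr/local/bin/backup_basic_validate.sh
0 4 * * 0 /usr/local/bin/backup_consistency_validate.sh
0 5 1 * * /usr/local/bin/backup_full_validate.sh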

What approach would you recommend for a data warehouse needing about 80,000 IOPS, currently on FusionIO bare metal? Which cloud solution would be my best bet?

This is a complicated question. To answer it properly requires more information: we would need to know what type of operations your database performs. Working with a Percona Consultant to do an A&D for your environment would give you the best answer. In general, though, EBS (even with a large number of provisioned IOPS) would not match FusionIO in IO request latency. i3 high-IO instances with NVMe storage are a closer match. If budget is not a concern, you can look into X1 instances: these can have up to 2TB of memory, which often allows keeping all (or a large portion) of the database in memory for even higher performance.

Thanks for attending the MySQL In the Cloud: Migration, Best Practices, High Availability, Scaling webinar! Post any more MySQL in the cloud comments below.

Blog Poll: What Operating System Do You Run Your Production Database On?

Latest MySQL Performance Blog posts - June 8, 2017 - 1:24pm

In this post, we’ll use a blog poll to find out what operating system you use to run your production database servers.

As databases grow to meet more challenges and expanding application demands, they must try to get the maximum performance out of the available resources. How they work with an operating system can affect many variables, and help or hinder performance. The operating system you use for your database can impact consumable choices (such as hardware and memory). The operating system you use can also impact your choice of database engine (or vice versa).

Please let us know what operating system you use to run your database. For this poll, we’re asking which operating system you use to actually run your production database server (not the base operating system).

If you're running virtualized Linux on Windows, please select Linux as your answer. Pick up to three that apply. Add any thoughts or other options in the comments section.

Thanks in advance for your responses – they will help the open source community determine how database environments are being deployed.

PMM and RDS

Latest Forum Posts - June 8, 2017 - 12:57pm
Hi Community,

I have a question about PMM and the P95 calculation.

We used to use PMM with EC2 and we were able to see the calculation of P95 in Percona Query Analytics (QAN).

Now that we migrated our databases to RDS, we still use PMM but now the P95 in Percona Query Analytics is not calculated anymore.

Can you tell us if we are missing something?

We followed this tutorial:
https://www.percona.com/doc/percona-...mazon-rds.html

But we only enabled "mysql:queries".

We did NOT add "mysql:metrics".
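For reference, a hedged sketch of adding both services for an RDS instance with pmm-admin 1.x (the endpoint, credentials and instance name are placeholders; worth verifying against the PMM documentation):

# pmm-admin add mysql:metrics --host=mydb.abc123.us-east-1.rds.amazonaws.com --port=3306 --user=pmm --password=secret my-rds-instance
# pmm-admin add mysql:queries --host=mydb.abc123.us-east-1.rds.amazonaws.com --port=3306 --user=pmm --password=secret --query-source=perfschema my-rds-instance

Note that with Performance Schema as the query source (the usual setup for RDS), metrics that QAN derives from the slow log, such as percentiles, may not be available.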

Maximum number of master servers in a multi-source replication

Latest Forum Posts - June 8, 2017 - 7:26am
Hello Friends,
Recently I deployed a multi-source replication scheme with over 100 master servers replicating to a single slave. I had no problem with the setup, despite the documentation (https://mariadb.com/kb/en/mariadb/mu...e-replication/) stating that the maximum number of masters is 64.
I was wondering whether this limit was expanded in recent releases, but I could not find any further mention of it in the release notes.
However, I am having some spurious replication issues. For example, there are missing rows in the slave database, and these missing rows come from random master servers. I would like to confirm with you that the limit of 64 masters was in fact expanded in version 10.1.22 (or earlier) of MariaDB, running on 64-bit CentOS, so I can discard this as a possible cause of the problem we are experiencing.
I appreciate very much your time and help with this issue.
Best,
Carlos
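For reference, a hedged one-liner to check every named master connection on a MariaDB multi-source slave (the connection names are whatever was used in CHANGE MASTER 'name' TO ...):

# mysql -e "SHOW ALL SLAVES STATUS\G" | grep -E 'Connection_name|Slave_IO_Running|Slave_SQL_Running|Last_SQL_Error'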

Three-node cluster shutting down randomly

Latest Forum Posts - June 8, 2017 - 4:51am
Hello,

We deployed a three-node cluster using PXC last year for one of our clients, and it has been working nicely, but we have had some random crashes affecting the entire cluster. It happened again yesterday: the complete cluster shut down without warning.
MySQL logs on the first node are in attachment (galera_logs_1.txt).


It seems there is some communication problem between nodes, as suggested by the first line: "turning message relay requesting on, nonlive peers".
I'm not sure what the root cause can be: could this be network-related or load-related (load average, SQL traffic, number of requests)? Are there parameters to adjust?

After that, I tried bootstrapping the cluster but got another shutdown I don't understand: I bootstrapped the first node, then restarted the second, which initiated an SST.
After the second node was up and running (WSREP state Synced), I restarted the third node, and the two other nodes stopped immediately.
I put the messages from error log in attachment (galera_logs_2.txt).


It's not the first time I have had to reset a PXC cluster like that, but I don't understand why the last node created this situation.
Am I missing something?


For information, we are using Debian with the following packages:
Code:
ii  percona-xtradb-cluster-56           5.6.29-25.15-1.wheezy  amd64  Percona XtraDB Cluster with Galera
ii  percona-xtradb-cluster-client-5.6   5.6.29-25.15-1.wheezy  amd64  Percona XtraDB Cluster database client binaries
ii  percona-xtradb-cluster-common-5.6   5.6.29-25.15-1.wheezy  amd64  Percona XtraDB Cluster database common files (e.g. /etc/mysql/my.cnf)
ii  percona-xtradb-cluster-galera-3     3.14-1.wheezy          amd64  Metapackage for latest version of galera3.
ii  percona-xtradb-cluster-galera-3.x   3.14-1.wheezy          amd64  Galera components of Percona XtraDB Cluster
ii  percona-xtradb-cluster-server-5.6   5.6.29-25.15-1.wheezy  amd64  Percona XtraDB Cluster database server binaries

I guess an upgrade of those versions is a must-have here.

Configuration File:
Code:
[mysqld]
# Cluster configuration
wsrep_provider = /usr/lib/libgalera_smm.so
wsrep_forced_binlog_format = ROW
wsrep_cluster_address = gcomm://10.16.0.92,10.16.0.93,10.16.0.94
wsrep_slave_threads = 64
wsrep_sst_method = xtrabackup-v2
wsrep_sst_auth = XXXX:XXXX
wsrep_cluster_name = galera
wsrep_node_name = client
wsrep_node_address = 10.16.0.92
wsrep_causal_reads = OFF
wsrep_provider_options = "gcache.size = 50G; gcs.fc_limit = 64"
wsrep_retry_autocommit = 1
wsrep_debug = 0

Thanks for any information about that case.
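A hedged note, not a confirmed fix: if the shutdowns turn out to be network-related, the Galera EVS timeouts can be raised through wsrep_provider_options so that brief network hiccups do not evict peers, for example:

wsrep_provider_options = "gcache.size = 50G; gcs.fc_limit = 64; evs.suspect_timeout = PT15S; evs.inactive_timeout = PT30S"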

Prometheus high CPU

Latest Forum Posts - June 8, 2017 - 3:00am
The following host load conditions

PMM-SERVER mongodb:queries (dev-enable)

Latest Forum Posts - June 7, 2017 - 3:15pm
I really love PMM. I've got a couple of MySQL RDS instances and a couple of EC2 MongoDB clusters. I would really love to see the queries on the MongoDB clusters, but it doesn't seem to work.
pmm-admin 1.1.4
MongoDB 2.6.9

Is my Mongo just too old?
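For reference, QAN's mongodb:queries service relies on the MongoDB profiler; a hedged sketch of checking and enabling it (level 2 profiles all operations and adds overhead, and whether this PMM release supports MongoDB as old as 2.6.9 is worth verifying in the PMM documentation; in some early PMM releases the service is only visible behind the --dev-enable flag, as the post title suggests):

# mongo --eval 'db.getProfilingStatus()'
# mongo --eval 'db.setProfilingLevel(2)'
# pmm-admin add mongodb:queries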

PMM authority management

Latest Forum Posts - June 7, 2017 - 12:12am
Cannot implement Grafana's internal user management login; logins go directly through nginx.

Incremental backup using remote base directory

Latest Forum Posts - June 6, 2017 - 7:31pm
I have serverA with my MySQL database, and serverB for storing my backups. A full backup of serverA is already stored on serverB. Now I want to create an incremental backup every day against that full backup, and the incremental backups should also be stored on serverB. Can you please help me solve this problem?

Thanks.
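For reference, a hedged sketch under these assumptions: Percona XtraBackup 2.4 on serverA, SSH access between the machines, and the base backup's xtrabackup_checkpoints file readable on serverB (all paths and host names are placeholders):

# LSN=$(ssh serverB "grep to_lsn /backups/full/xtrabackup_checkpoints" | awk '{print $3}')
# innobackupex --incremental --incremental-lsn=$LSN --stream=xbstream /tmp | ssh serverB "mkdir -p /backups/inc_$(date +%F) && xbstream -x -C /backups/inc_$(date +%F)"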

MySQL Encryption at Rest – Part 1 (LUKS)

Latest MySQL Performance Blog posts - June 6, 2017 - 12:00pm

In this first of a series of blog posts, we’ll look at MySQL encryption at rest.

At Percona, we work with a number of clients that require strong security measures for PCI, HIPAA and PHI compliance, where data managed by MySQL needs to be encrypted "at rest." As with all things open source, there are several options for meeting the MySQL encryption-at-rest requirement. In this three-part series, we cover several popular options for encrypting data and present the pros and cons of each solution. You may want to evaluate which parts of these tutorials work best for your situation before using them in production.

Part one of this series covers implementing disk-level encryption using crypt+LUKS.

In MySQL 5.7, InnoDB has built-in encryption features. This solution has some cons, however: InnoDB tablespace encryption doesn't cover the undo logs, redo logs or the main ibdata1 tablespace. Additionally, binary logs and slow query logs are not covered by InnoDB encryption.

Using crypt+LUKS, we can encrypt everything (data + logs) under one umbrella, provided that all files reside on the same disk. If you separate the various logs onto different partitions, you will have to repeat the tutorial below for each partition.

LUKS Tutorial

The Linux Unified Key Setup (LUKS) is the current standard for disk encryption. In the examples below, the block device /dev/sda4 on CentOS 7 is encrypted using a generated key, and then mounted as the default MySQL data directory at /var/lib/mysql.

WARNING! Loss of the key means complete loss of data! Be sure to have a backup of the key.

Install the necessary utilities:

# yum install cryptsetup

Creating, Formatting and Mounting an Encrypted Disk

The cryptsetup command initializes the volume and sets an initial key/passphrase. Please note that the key is not recoverable, so do not forget it. Take the time now to decide where you will securely store a copy of this key. LastPass Secure Notes are a good option, as they allow file attachments; this will also help with the backup we create later on.

Create a passphrase for encryption. Choose something with high entropy (i.e., lots of randomness). Here are two options (pick one):

# openssl rand -base64 32
# date | md5 | rev | head -c 24 | md5 | tail -c 32

Next, we need to initialize and format our partition for use with LUKS. Any mount points using this block device must be unmounted beforehand.

WARNING! This command will delete ALL DATA ON THE DEVICE! BE SURE TO COMPLETE ANY BACKUPS BEFORE YOU RUN THIS!

# cryptsetup -c aes-xts-plain -v luksFormat /dev/sda4

You will be prompted for a passphrase. Provide the phrase you generated above. After you provide a passphrase, you now need to “open” the encrypted disk and provide a device mapper name (i.e., an alias). It can be anything, but for our purposes, we will call it “mysqldata”:

# cryptsetup luksOpen /dev/sda4 mysqldata

You will be prompted for the passphrase you used above. On success, you should see the device show up:

# ls -l /dev/mapper/
lrwxrwxrwx 1 root root 7 Jun 2 11:50 mysqldata -> ../dm-0

You can now format this encrypted block device and create a filesystem:

# mkfs.ext4 /dev/mapper/mysqldata

Now you can mount the encrypted block device you just formatted:

# mount /dev/mapper/mysqldata /var/lib/mysql

Unfortunately, you cannot add this device to /etc/fstab to automount on a server reboot, since the key is needed to "open" the device. Please keep in mind that if your server ever reboots, MySQL will not start, because the data directory is unavailable until the device is opened and mounted (we will look at how to make this work using scripts in Part Two of this series).
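In the meantime, a minimal interactive sketch of what is needed after a reboot, using the device and mapper name from this tutorial (the MySQL service name may differ on your system):

# cryptsetup luksOpen /dev/sda4 mysqldata
# mount /dev/mapper/mysqldata /var/lib/mysql
# systemctl start mysqld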

Creating a Backup of Encryption Information

The header of a LUKS block device contains information regarding the current encryption key(s). Should the header ever get damaged, or should you need to recover because you forgot a new passphrase, you can restore this information from a backup. First, create the header backup:

# cryptsetup luksHeaderBackup --header-backup-file ${HOSTNAME}_`date +%Y%m%d`_header.dat /dev/sda4

Go ahead and make a SHA1 of this file now to verify that it doesn’t get corrupted later on in storage:

# sha1sum ${HOSTNAME}_`date +%Y%m%d`_header.dat

GZip the header file. Store the SHA1 and the .gz file in a secure location (for example, attach it to the secure note created above). Now you have a backup of the key you used and a backup of the header which uses that key.
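Should the header ever need to be recovered, cryptsetup can write the backup back to the device. A sketch (this overwrites the on-disk header, so be certain before running it):

# cryptsetup luksHeaderRestore --header-backup-file ${HOSTNAME}_YYYYMMDD_header.dat /dev/sda4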

Unmounting and Closing a Disk

If you know you will be storing a disk, or just want to make sure the contents are not visible (i.e., mounted), you can unmount and “close” the encrypted device:

# umount /var/lib/mysql/
# cryptsetup luksClose mysqldata

In order to mount this device again, you must “open” it and provide one of the keys.

Rotating Keys (Adding / Removing Keys)

Various compliance and enforcement rules dictate how often you need to rotate keys. You cannot rotate or change a key directly. LUKS supports up to eight keys per device: you must first add a new key to any slot other than the slot currently occupied by the key you are trying to remove, and then remove the older key.

Take a look at the existing header information:

# cryptsetup luksDump /dev/sda4
LUKS header information for /dev/sda4

Version:        1
Cipher name:    aes
Cipher mode:    cbc-essiv:sha256
Hash spec:      sha1
Payload offset: 4096
MK bits:        256
MK digest:      81 37 51 6c d5 c8 32 f1 7a 2d 47 7c 83 62 70 d9 f7 ce 5a 6e
MK salt:        ae 4b e8 09 c8 7a 5d 89 b0 f0 da 85 7e ce 7b 7f 47 c7 ed 51 c1 71 bb b5 77 18 0d 9d e2 95 98 bf
MK iterations:  44500
UUID:           92ed3e8e-a9ac-4e59-afc3-39cc7c63e7f6

Key Slot 0: ENABLED
        Iterations:             181059
        Salt:                   9c a9 f6 12 d2 a4 2a 3d a4 08 b2 32 b0 b4 20 3b 69 13 8d 36 99 47 42 9c d5 41 35 8c b3 d0 ff 0e
        Key material offset:    8
        AF stripes:             4000
Key Slot 1: DISABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED

Here we can see a key is currently occupying “Key Slot 0”. We can add a key to any DISABLED key slot. Let’s use slot #1:

# cryptsetup luksAddKey --key-slot 1 -v /dev/sda4
Enter any passphrase:
Key slot 0 unlocked.
Enter new passphrase for key slot:
Verify passphrase:
Command successful.

LUKS asks for “any” passphrase to authenticate us. Had there been keys in other slots, we could have used any one of them. As only one is currently saved, we have to use it. We can then add a new passphrase for slot 1.

Now that we have saved the new key in slot 1, we can remove the key in slot 0.

# cryptsetup luksKillSlot /dev/sda4 0
Enter any remaining LUKS passphrase:
No key available with this passphrase.

In the example above, the existing passphrase stored in slot 0 was used. This is not allowed: you cannot provide the passphrase for the same slot you are attempting to remove.

Repeat this command and provide the passphrase for slot 1, which was added above. We are now able to remove the passphrase stored in slot 0:

# cryptsetup luksKillSlot /dev/sda4 0
Enter any remaining LUKS passphrase:
# cryptsetup luksDump /dev/sda4
LUKS header information for /dev/sda4

Version:        1
Cipher name:    aes
Cipher mode:    cbc-essiv:sha256
Hash spec:      sha1
Payload offset: 4096
MK bits:        256
MK digest:      81 37 51 6c d5 c8 32 f1 7a 2d 47 7c 83 62 70 d9 f7 ce 5a 6e
MK salt:        ae 4b e8 09 c8 7a 5d 89 b0 f0 da 85 7e ce 7b 7f 47 c7 ed 51 c1 71 bb b5 77 18 0d 9d e2 95 98 bf
MK iterations:  44500
UUID:           92ed3e8e-a9ac-4e59-afc3-39cc7c63e7f6

Key Slot 0: DISABLED
Key Slot 1: ENABLED
        Iterations:             229712
        Salt:                   5d 71 b2 3a 58 d7 f8 6a 36 4f 32 d1 23 1a df df cd 2b 68 ee 18 f7 90 cf 58 32 37 b9 02 e1 42 d6
        Key material offset:    264
        AF stripes:             4000
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED

After you change the passphrase, it’s a good idea to repeat the header dump steps we performed above and store the new passphrase in your vault.

Conclusion

Congratulations, you have now learned how to encrypt and mount a partition using LUKS! You can now use this mounted device just like any other. You can also restore a backup and start MySQL.

In Part Two, we will cover using InnoDB tablespace encryption.

Upcoming Webinar Thursday June 8, 2017: MongoDB Shell – A Primer

Latest MySQL Performance Blog posts - June 6, 2017 - 11:35am

Join Percona's Solutions Engineer Rick Golba as he presents MongoDB Shell: A Primer on Thursday, June 8, 2017, at 11 am PDT / 2 pm EDT (UTC-7).

Register Now

Every good DBA should be a master of the database shell. In this webinar, we will help you understand how to structure shell commands and discuss all the advanced functions and ways to chain commands in the mongo shell.

This webinar will teach you how to:

  • Limit the number of documents, or skip documents, when running a query
  • Work with the MongoDB aggregation pipeline
  • View an explain plan for a MongoDB query
  • Understand the MongoDB write concerns
  • Validate the contents of a database on various nodes in a replica set
  • Understand the MongoDB read preference

We will touch on CRUD functions, but a great deal more time will be spent on the areas above. We will have a dedicated webinar for mastering CRUD operations in MongoDB in the future.
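For a taste of two of the areas listed above, a hedged sketch runnable from a shell against a local instance (the orders collection and its fields are hypothetical):

# mongo --quiet --eval 'printjson(db.orders.find().skip(10).limit(5).toArray())'
# mongo --quiet --eval 'printjson(db.orders.find({status: "A"}).explain("executionStats"))'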

Register for the webinar here.

Rick Golba, Solutions Engineer

Rick Golba is a Solutions Engineer at Percona. Rick has over 20 years of experience working with databases. Prior to Percona, he worked as a Technical Trainer for HP/Vertica.


Bi-directional replication between 2 XtraDB clusters

Lastest Forum Posts - June 6, 2017 - 7:16am
Good morning. Does anyone use the scheme below in production?

ClusterA(3 nodes) <-Native MySQL replication->ClusterB(3 nodes).

As I understand it, the slave process will be started on some node in the cluster, and if that node goes down we can use GTID auto-positioning to start the slave on another node in the cluster without any problems. Basically, I would like to use such a scheme to replicate data between datacenters.


Thanks in advance!
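For reference, a hedged sketch of pointing one ClusterB node at ClusterA with GTID auto-positioning (this assumes GTID mode and log-slave-updates are enabled on all nodes, so that a surviving node can take over the slave thread; host and credentials are placeholders):

# mysql -e "CHANGE MASTER TO MASTER_HOST='clusterA-node1', MASTER_USER='repl', MASTER_PASSWORD='secret', MASTER_AUTO_POSITION=1; START SLAVE;"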

How to clean up space without affecting the config

Latest Forum Posts - June 6, 2017 - 5:09am
Hi

CentOS 6.4, Docker 1.7.0: volumes are taking up a lot of space. How do I remove volumes without affecting the config? Please give me some tips.
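A hedged sketch, assuming an upgrade to Docker 1.9 or later (the docker volume subcommand does not exist in 1.7.0): removing only dangling volumes leaves volumes still referenced by containers, and their config, untouched:

# docker volume ls -qf dangling=true
# docker volume rm $(docker volume ls -qf dangling=true)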

Percona Live Open Source Database Conference Europe 2017 in Dublin, Ireland Call for Papers is Open!

Latest MySQL Performance Blog posts - June 5, 2017 - 6:08pm

Announcing the opening of the call for papers for the Percona Live Open Source Database Conference Europe 2017 in Dublin, Ireland. It will be open from now until July 17, 2017.*

Do you have a big idea to explain, use case to share or skill to teach? Submit your speaking proposal for either breakout or tutorial sessions. This is your chance to put your developer ideas, business and case studies, and operational expertise in front of an intelligent, engaged audience of open source technology users.

The theme of Percona Live Europe 2017 is “Time Series Databases” for MySQL, MariaDB, MongoDB and other open source databases, with the main tracks of:

  • Developers
  • Business / Case Studies
  • Operations

We are looking for topics that address a variety of open source issues. Are you:

  • Working with MongoDB as a developer?
  • Creating a new MySQL-variant time series database?
  • Deploying MariaDB in a novel way?
  • Using open source database technology to solve a particular business issue?

We invite you to submit your speaking proposal for breakout, tutorial or lightning talk sessions. Share your open source database experiences with peers and professionals in the open source community by presenting a:

  • Breakout Session. Broadly cover a technology area using specific examples. Sessions should be either 25 minutes or 50 minutes in length (including Q&A).
  • Tutorial Session. Present a technical session that aims for a level between a training class and a conference breakout session. Encourage attendees to bring and use laptops for working on detailed and hands-on presentations. Tutorials will be three or six hours in length (including Q&A).
  • Lightning Talk. Give a five-minute presentation focusing on one key point that interests the open source community: technical, lighthearted or entertaining talks on new ideas, a successful project, a cautionary story, a quick tip or demonstration.

Speaking at Percona Live Europe is a great way to build your personal and company brands. If selected, you will receive a complimentary full conference pass!

Submit your talks now.

*NOTE: We have changed our registration platform this year, so you will need to register before submitting a talk idea (even if you have previously registered).

Tips for Submitting

Include presentation details, but be concise. Clearly state:

  • Purpose of the talk (problem, solution, action format, etc.)
  • Covered technologies
  • Target audience
  • Audience takeaway

Keep proposals free of sales pitches. The Committee is looking for in-depth technical talks, not ones that sound like a commercial.

Be original! Make your presentation stand out by submitting a proposal that focuses on real-world scenarios, relevant examples, and knowledge transfer.

Submit your proposals as soon as you can – the call for papers closes July 17, 2017!

xbcrypt and qpress seem slower than I would expect... Advice?

Latest Forum Posts - June 5, 2017 - 4:42pm
I have a large database that I am decrypting and decompressing from a full innobackupex backup. I am using the script below to do this. Any suggestions on how to make this go faster? Did I miss any 'speed-up' options on either of these commands?

This is on CentOS 6.9, MySQL 5.7 and percona-xtrabackup-2.4. The machine has 8 CPUs and 60GB of memory. The disks are SSD with an advertised 20MB/second read/write rate (this is an AWS r2.x4large machine), in case that matters.

-Jeff

Code:
# decrypt_decompress.sh
DATE=$(date '+%Y%m%d_%H%M%S')
( time for i in `find /srv/full_backup_20170521/ -iname "*.xbcrypt"`; \
do echo "%%%-INFO: decrypting $i"; \
   time xbcrypt -d --encrypt-key-file=/var/lib/mysql/.innobackupex.key --encrypt-algo=AES256 --encrypt-chunk-size=10M < $i \
   | qpress -T4di $(dirname $i) \
   && \rm $i; \
done ) | tee ~/do_decrypt_decompress_${DATE}.log
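One hedged speed-up sketch, not from the original post: run several decrypt pipelines in parallel with xargs, since each xbcrypt process is largely serial per file (file names are assumed to contain no spaces; tune -P to the CPU count):

find /srv/full_backup_20170521/ -iname "*.xbcrypt" -print0 | xargs -0 -P4 -I{} sh -c 'xbcrypt -d --encrypt-key-file=/var/lib/mysql/.innobackupex.key --encrypt-algo=AES256 < "{}" | qpress -T2di "$(dirname "{}")" && rm "{}"'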

Is the Percona Forum a "black hole", i.e., is no one from Percona monitoring it?

Latest Forum Posts - June 5, 2017 - 12:38pm
I see too many threads with no activity or replies.
Can someone point me to a more active discussion forum elsewhere?

Webinar June 7, 2017: MySQL In the Cloud – Migration, Best Practices, High Availability, Scaling

Latest MySQL Performance Blog posts - June 5, 2017 - 11:09am

Join Percona’s CEO and Founder Peter Zaitsev as he presents MySQL In the Cloud: Migration, Best Practices, High Availability, Scaling on Wednesday, June 7, 2017, at 10 am PDT / 1:00 pm EDT (UTC-7).

Register Now

Businesses are moving many of the systems and processes they once owned to offsite “service” models: Platform as a Service (PaaS), Software as a Service (SaaS), Infrastructure as a Service (IaaS), etc. These services are usually referred to as being “in the cloud” – meaning that the infrastructure and management of the service in question are not maintained by the enterprise using the service.

When it comes to database environment and infrastructure, more and more enterprises are moving to MySQL in the cloud to manage this vital part of their business organization. We often refer to database services provided in the cloud as Database as a Service (DBaaS). The next question after deciding to move your database to the cloud is "How do I plan properly so as to avoid a disaster?"

Before moving to the cloud, it is important to carefully define your database needs, plan for the migration and understand what putting a solution into production entails. This webinar discusses the following subjects on moving to the cloud:

  • Public and private cloud
  • Migration to the cloud
  • Best practices
  • High availability
  • Scaling

Register for the webinar here.

Peter Zaitsev, Percona CEO and Founder

Peter Zaitsev co-founded Percona and assumed the role of CEO in 2006. As one of the foremost experts on MySQL strategy and optimization, Peter leveraged both his technical vision and entrepreneurial skills to grow Percona from a two-person shop to one of the most respected open source companies in the business. With over 150 professionals in 20+ countries, Peter’s venture now serves over 3000 customers – including the “who’s who” of internet giants, large enterprises and many exciting startups. Percona was named to the Inc. 5000 in 2013, 2014 and 2015.

Peter was an early employee at MySQL AB, eventually leading the company’s High Performance Group. A serial entrepreneur, Peter co-founded his first startup while attending Moscow State University where he majored in Computer Science. Peter is a co-author of High Performance MySQL: Optimization, Backups, and Replication, one of the most popular books on MySQL performance. Peter frequently speaks as an expert lecturer at MySQL and related conferences, and regularly posts on the Percona Database Performance Blog. Fortune and DZone often tap Peter as a contributor, and his recent ebook Practical MySQL Performance Optimization is one of percona.com’s most popular downloads.

Percona XtraDB Cluster 5.7.18-29.20 is now available

Latest Forum Posts - June 2, 2017 - 10:57pm
Percona announces the release of Percona XtraDB Cluster 5.7.18-29.20 on June 2, 2017. Binaries are available from the downloads section or our software repositories.

NOTE: You can also run Docker containers from the images in the Docker Hub repository.


Due to a new package dependency, Ubuntu/Debian users should use apt-get dist-upgrade or apt-get install percona-xtradb-cluster-57 to upgrade.


Percona XtraDB Cluster 5.7.18-29.20 is now the current release. All Percona software is open-source and free.


Fixed Bugs
  • PXC-749: Fixed memory leak when running INSERT on a table without primary key defined and wsrep_certify_nonPK disabled (set to 0).

    NOTE: We recommend you define primary keys on all tables for correct write-set replication.
  • PXC-812: Fixed SST script to leave the DONOR keyring when JOINER clears the datadir.
  • PXC-813: Fixed SST script to use UTC time format.
  • PXC-816: Fixed hook for caching GTID events in asynchronous replication. For more information, see #1681831.
  • PXC-820: Enabled querying of pxc_maint_mode by another client during the transition period.
  • PXC-823: Fixed SST flow to gracefully shut down JOINER node if SST fails because DONOR leaves the cluster due to network failure. This ensures that the DONOR is then able to recover to synced state when network connectivity is restored. For more information, see #1684810.
  • PXC-824: Fixed graceful shutdown of Percona XtraDB Cluster node to wait until applier thread finishes.
Other Improvements
  • PXC-819: Added five new status variables to expose required values from wsrep_ist_receive_status and wsrep_flow_control_interval as numbers, rather than strings that need to be parsed (see the one-liner after this list):
    • wsrep_flow_control_interval_low
    • wsrep_flow_control_interval_high
    • wsrep_ist_receive_seqno_start
    • wsrep_ist_receive_seqno_current
    • wsrep_ist_receive_seqno_end
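For reference, a hedged one-liner to read the new status variables after upgrading, using standard SHOW GLOBAL STATUS syntax:

# mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_flow_control_interval%'; SHOW GLOBAL STATUS LIKE 'wsrep_ist_receive_seqno%'"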
Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Percona XtraDB Cluster 5.6.36-26.20 is Now Available

Latest Forum Posts - June 2, 2017 - 10:55pm
Percona announces the release of Percona XtraDB Cluster 5.6.36-26.20 on June 2, 2017. Binaries are available from the downloads section or our software repositories.

Percona XtraDB Cluster 5.6.36-26.20 is now the current release. All Percona software is open-source and free.

NOTE: Due to end of life, Percona will stop producing packages for the following distributions after July 31, 2017:
  • Red Hat Enterprise Linux 5 (Tikanga)
  • Ubuntu 12.04 LTS (Precise Pangolin)
You are strongly advised to upgrade to the latest stable versions if you want to continue using Percona software.

Fixed Bugs
  • PXC-749: Fixed memory leak when running INSERT on a table without primary key defined and wsrep_certify_nonPK disabled (set to 0).

    NOTE: We recommend you define primary keys on all tables for correct write-set replication.
  • PXC-813: Fixed SST script to use UTC time format.
  • PXC-823: Fixed SST flow to gracefully shut down JOINER node if SST fails because DONOR leaves the cluster due to network failure. This ensures that the DONOR is then able to recover to synced state when network connectivity is restored. For more information, see #1684810.
Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!
