
Webinar Thursday, June 15, 2017: Demystifying Postgres Logical Replication

June 13, 2017 - 12:21pm

Join Percona’s Senior Technical Services Engineer Emanuel Calvo as he presents Demystifying Postgres Logical Replication on Thursday, June 15, 2017 at 7 am PDT / 10 am EDT (UTC-7).

Register Now

The Postgres logical decoding feature was added in version 9.4, and thankfully it is continuously improving due to the vibrant open source community. In this webinar, we are going to walk through its concepts, usage and some of the new things coming up in future releases.

Logical decoding is one of the features under the BDR implementation, allowing bidirectional streams of data between Postgres instances. It also allows you to stream data outside Postgres into many other data systems.
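
As a small taste of what we will demystify: with wal_level=logical set (and at least one free replication slot configured), you can already exercise logical decoding from plain SQL using the bundled test_decoding output plugin. This is a minimal sketch; the database, slot and table names are illustrative:

psql -d mydb -c "SELECT pg_create_logical_replication_slot('demo_slot', 'test_decoding');"
psql -d mydb -c "INSERT INTO t1 VALUES (1);"
psql -d mydb -c "SELECT * FROM pg_logical_slot_get_changes('demo_slot', NULL, NULL);"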

Register for the webinar here.

Emanuel Calvo, Percona Sr. Technical Services

Emanuel has worked with MySQL for more than eight years. He is originally from Argentina, but has also lived and worked in Spain and other Latin American countries. He lectures and presents at universities and local events. Emanuel currently works as a Sr. Technical Services Engineer at Percona, focusing primarily on MySQL. His professional background includes experience at telecommunication companies, educational institutions and data warehousing solutions. In his career, he has worked as a developer, SysAdmin and DBA at companies like Pythian, Blackbird.io/PalominoDB, Siemens IT Solutions, Correo Argentino (Argentinian Postal Services), Globant-EA, SIU – Government Educational Institution and Aedgency, among others. As a community member, he has lectured and given talks in Argentina, Brazil, the United States, Paraguay, Spain and Belgium, as well as written several technical papers.

Q & A: MySQL In the Cloud – Migration, Best Practices, High Availability, Scaling

June 9, 2017 - 1:36pm

In this blog, we will provide answers to the Q & A for the MySQL In the Cloud: Migration, Best Practices, High Availability, Scaling webinar.

First, we want to thank everybody for attending the June 7, 2017 webinar. The recording and slides for the webinar are available here. Below is the list of your questions that we were unable to answer during the webinar:

How does Percona XtraDB cluster work with AWS for MySQL clustering?

Percona XtraDB Cluster works especially well in cloud environments, including Amazon EC2. Since Percona XtraDB Cluster requires only one network round trip per write transaction commit, and keeps all reads local, you can deploy high-performance multi-AZ and even multi-region clusters. The fact that each Percona XtraDB Cluster node contains all the data allows it to avoid reliance on EBS storage. You can run Percona XtraDB Cluster on NVMe-storage-based i3 EC2 nodes to achieve high performance, even with very IO-intensive workloads. Automatic provisioning and cluster self-healing allow you to scale the cluster easily. We have a simple tutorial on how to deploy Percona XtraDB Cluster on AWS – check it out here.

How do you approach master-master model? Are there enough reasons to use the model to implement multi-site scaling?

There are two distinct multi-master modes in existence. A synchronous Master-Master solution, like the one offered by Percona XtraDB Cluster (virtually synchronous to be exact), guarantees there are no data conflicts as you connect to the nodes located at different sites. The downside of this model is that writes can be expensive. As such, it works well in environments with low latency between the different sites, or when high latency for updates can be tolerated. Percona XtraDB Cluster is greatly optimized in that it requires only one network roundtrip to complete a commit transaction. This significantly reduces the added latency compared to many other solutions.

In contrast, asynchronous Master-Master means you can perform writes locally, without waiting on a network round trip. It comes with the downside of possible data conflicts. In MySQL, it can be implemented using MySQL Replication. At this point, however, MySQL Replication only detects conflicts, and stops if it detects one. It has no good built-in conflict resolution. Ensuring conflicts do not happen at the application level is hard and error-prone, and is only recommended in rare cases. Most applications out there do not use active Master-Master, but rather design an architecture where each database replication set operates with only a single writable node.

Do the Percona tools work in the cloud, like in Amazon Aurora?

We try to make Percona software work in the cloud when it makes sense. For example, Percona Toolkit and Percona Monitoring and Management support Amazon RDS and Amazon Aurora. Percona XtraBackup does not, as it requires physical access to the database files (which Amazon RDS and Aurora don't provide). Having said that, Amazon recently updated its Aurora migration documentation to include the use of XtraBackup: Amazon Aurora supports backups taken by Percona XtraBackup as a way to import data.

What is the fastest way to verify and validate backups created by XtraBackup for databases around 2-3TB?

In the big picture, you test backups by doing some sort of restore and validation. This can be done manually, but it is much better if automated. There are three levels of such validation:

  • Basic Validation. Run --apply-log and ensure it completes successfully. Start the MySQL instance and run some basic queries to ensure it works (a scripted sketch follows this list). Running a few queries to confirm that recent data is present is often a good idea.
  • Consistency Validation. Additionally, run CHECK TABLE on all tables to ensure there is no corruption. This validates the table and index data structures.
  • Full Validation. Restore the backup and connect the restored instance as a MySQL slave (possibly to one of the existing slaves). Let it catch up, then run pt-table-checksum to validate consistency and ensure that the data in the backup matches what is in the source.
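
For illustration, a Basic Validation pass could be scripted roughly along these lines. This is only a sketch: the paths, port and the sanity query are placeholders for your environment.

#!/bin/bash
set -e
BACKUP_DIR=/backups/latest              # hypothetical restored-backup location
innobackupex --apply-log "$BACKUP_DIR"  # apply the log so the data is consistent
mysqld --user=mysql --datadir="$BACKUP_DIR" --socket=/tmp/validate.sock --port=3307 &
sleep 30                                # crude wait for the instance to come up
# Sanity check: is recent data present? (table and column are placeholders)
mysql --socket=/tmp/validate.sock -e "SELECT MAX(created_at) FROM app.orders;"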

Running CHECK TABLE on databases on AWS IO-optimized instances takes up to eight hours. Any other suggestions on how to replace CHECK TABLE in validation?

Without knowing the table size, it is hard for me to assess whether eight hours is reasonable for your environment. However, generally speaking you should not run a Full Validation on every backup. Full Validation first and foremost validates the backup and restore pipeline. If you’re not seeing issues, doing it once per month is plenty. You want to do lighter checks on a daily and weekly basis. 

What approach would you recommend for a data warehouse needing about 80,000 IOPS, currently on FusionIO bare metal? Which cloud solution would be my best bet?

This is a complicated question, and answering it properly requires more information: we need to know what types of operations your database performs. Working with a Percona consultant to do an A&D for your environment would give you the best answer. In general, though, EBS (even with a large number of provisioned IOPS) will not match FusionIO in IO request latency. I3 high-IO instances with NVMe storage are a closer match. If budget is not a concern, you can look into X1 instances. These can have up to 2TB of memory, and often allow fitting all (or a large portion) of the database in memory for even higher performance.

Thanks for attending the MySQL In the Cloud: Migration, Best Practices, High Availability, Scaling webinar! Post any more MySQL in the cloud comments below.

Blog Poll: What Operating System Do You Run Your Production Database On?

June 8, 2017 - 1:24pm

In this post, we’ll use a blog poll to find out what operating system you use to run your production database servers.

As databases grow to meet more challenges and expanding application demands, they must try to get the maximum performance out of available resources. How they work with an operating system can affect many variables, and help or hinder performance. The operating system you use for your database can impact consumable choices (such as hardware and memory). It can also impact your choice of database engine (or vice versa).

Please let us know what operating system you use to run your database. For this poll, we’re asking which operating system you use to actually run your production database server (not the base operating system).

If you're running virtualized Linux on Windows, please select Linux as your operating system. Pick up to three that apply. Add any thoughts or other options in the comments section:

Note: There is a poll embedded within this post; please visit the site to participate.
Thanks in advance for your responses – they will help the open source community determine how database environments are being deployed.

MySQL Encryption at Rest – Part 1 (LUKS)

June 6, 2017 - 12:00pm

In this first of a series of blog posts, we’ll look at MySQL encryption at rest.

At Percona, we work with a number of clients that require strong security measures for PCI, HIPAA and PHI compliance, where data managed by MySQL needs to be encrypted “at rest.” As with all things open source, there are several options for meeting the MySQL encryption-at-rest requirement. In this three-part series, we cover several popular options for encrypting data and present the pros and cons of each solution. You may want to evaluate which parts of these tutorials work best for your situation before using them in production.

Part one of this series is implementing disk-level encryption using crypt+LUKS.

In MySQL 5.7, InnoDB has built-in encryption features. This solution has some cons, however. Specifically, InnoDB tablespace encryption doesn't cover the undo logs, redo logs or the main ibdata1 tablespace. Additionally, binary logs and slow query logs are not covered by InnoDB encryption.

Using crypt+LUKS, we can encrypt everything (data + logs) under one umbrella – provided that all files reside on the same disk. If you separate the various logs on to different partitions, you will have to repeat the tutorial below for each partition.

LUKS Tutorial

The Linux Unified Key Setup (LUKS) is the current standard for disk encryption. In the examples below, the block device /dev/sda4 on CentOS 7 is encrypted using a generated key, and then mounted as the default MySQL data directory at /var/lib/mysql.

WARNING! Loss of the key means complete loss of data! Be sure to have a backup of the key.

Install the necessary utilities:

# yum install cryptsetup

Creating, Formatting and Mounting an Encrypted Disk

The cryptsetup command initializes the volume and sets an initial key/passphrase. Please note that the key is not recoverable, so do not forget it. Take the time now to decide where you will securely store a copy of this key. LastPass Secure Notes are a good option, as they allow file attachments. This enhances our backup later on.

Create a passphrase for encryption. Choose something with high entropy (i.e., lots of randomness). Here are two options (pick one):

# openssl rand -base64 32
# date | md5sum | rev | head -c 24 | md5sum | tail -c 32

Next, we need to initialize and format our partition for use with LUKS. Any filesystems mounted from this block device must be unmounted beforehand.

WARNING! This command will delete ALL DATA ON THE DEVICE! BE SURE TO COMPLETE ANY BACKUPS BEFORE YOU RUN THIS!

# cryptsetup -c aes-xts-plain -v luksFormat /dev/sda4

You will be prompted for a passphrase. Provide the phrase you generated above. After you provide a passphrase, you now need to “open” the encrypted disk and provide a device mapper name (i.e., an alias). It can be anything, but for our purposes, we will call it “mysqldata”:

# cryptsetup luksOpen /dev/sda4 mysqldata

You will be prompted for the passphrase you used above. On success, you should see the device show up:

# ls -l /dev/mapper/
lrwxrwxrwx 1 root root 7 Jun 2 11:50 mysqldata -> ../dm-0

You can now format this encrypted block device and create a filesystem:

# mkfs.ext4 /dev/mapper/mysqldata

Now you can mount the encrypted block device you just formatted:

# mount /dev/mapper/mysqldata /var/lib/mysql

Unfortunately, you cannot add this device to /etc/fstab to automount on a server reboot, since the key is needed to “open” the device. Please keep in mind that if your server ever reboots, MySQL will not start, because the data directory is unavailable until the device is opened and mounted (we will look at how to make this work using scripts in Part Two of this series).
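
Until then, after any reboot you (or a script) must open and mount the device by hand before starting MySQL, using the commands we already know (the MySQL service name may differ in your environment):

# cryptsetup luksOpen /dev/sda4 mysqldata
# mount /dev/mapper/mysqldata /var/lib/mysql
# systemctl start mysqld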

Creating a Backup of Encryption Information

The header of a LUKS block device contains information regarding the current encryption key(s). Should the header ever get damaged, or should you need to recover because you forgot a new passphrase, you will want a backup of this information to restore from. Create the backup like this:

# cryptsetup luksHeaderBackup --header-backup-file ${HOSTNAME}_`date +%Y%m%d`_header.dat /dev/sda4
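
Should you ever need it, the matching restore goes through luksHeaderRestore (shown here with an example dated filename); double-check that you are restoring the right header to the right device:

# cryptsetup luksHeaderRestore --header-backup-file myhost_20170602_header.dat /dev/sda4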

Go ahead and make a SHA1 of this file now to verify that it doesn’t get corrupted later on in storage:

# sha1sum ${HOSTNAME}_`date +%Y%m%d`_header.dat

GZip the header file. Store the SHA1 and the .gz file in a secure location (for example, attach them to the secure note created above). Now you have a backup of the key you used and a backup of the header which uses that key.
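
For example (using the dated filename created above):

# gzip ${HOSTNAME}_`date +%Y%m%d`_header.dat

Later, you can verify that the stored copy is intact without unpacking it, by comparing against the recorded SHA1:

# zcat myhost_20170602_header.dat.gz | sha1sum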

Unmounting and Closing a Disk

If you know you will be storing the disk, or just want to make sure its contents are not accessible (i.e., not mounted), you can unmount and “close” the encrypted device:

# umount /var/lib/mysql/
# cryptsetup luksClose mysqldata

In order to mount this device again, you must “open” it and provide one of the keys.

Rotating Keys (Adding / Removing Keys)

Various compliance and enforcement rules dictate how often you need to rotate keys. You cannot rotate or change a key directly. LUKS supports up to eight keys per device. You must first add a new key to any free slot (other than the slot occupied by the key you are trying to remove), and then remove the older key.

Take a look at the existing header information:

# cryptsetup luksDump /dev/sda4
LUKS header information for /dev/sda4

Version:        1
Cipher name:    aes
Cipher mode:    cbc-essiv:sha256
Hash spec:      sha1
Payload offset: 4096
MK bits:        256
MK digest:      81 37 51 6c d5 c8 32 f1 7a 2d 47 7c 83 62 70 d9 f7 ce 5a 6e
MK salt:        ae 4b e8 09 c8 7a 5d 89 b0 f0 da 85 7e ce 7b 7f
                47 c7 ed 51 c1 71 bb b5 77 18 0d 9d e2 95 98 bf
MK iterations:  44500
UUID:           92ed3e8e-a9ac-4e59-afc3-39cc7c63e7f6

Key Slot 0: ENABLED
        Iterations:             181059
        Salt:                   9c a9 f6 12 d2 a4 2a 3d a4 08 b2 32 b0 b4 20 3b
                                69 13 8d 36 99 47 42 9c d5 41 35 8c b3 d0 ff 0e
        Key material offset:    8
        AF stripes:             4000
Key Slot 1: DISABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED

Here we can see a key is currently occupying “Key Slot 0”. We can add a key to any DISABLED key slot. Let’s use slot #1:

# cryptsetup luksAddKey --key-slot 1 -v /dev/sda4
Enter any passphrase:
Key slot 0 unlocked.
Enter new passphrase for key slot:
Verify passphrase:
Command successful.

LUKS asks for “any” passphrase to authenticate us. Had there been keys in other slots, we could have used any one of them. As only one is currently saved, we have to use it. We can then add a new passphrase for slot 1.

Now that we have saved the new key in slot 1, we can remove the key in slot 0.

# cryptsetup luksKillSlot /dev/sda4 0
Enter any remaining LUKS passphrase:
No key available with this passphrase.

In the example above, the existing passphrase stored in slot 0 was used. This is not allowed. You cannot provide the passphrase for the same slot you are attempting to remove.

Repeat this command and provide the passphrase for slot 1, which was added above. We are now able to remove the passphrase stored in slot 0:

# cryptsetup luksKillSlot /dev/sda4 0
Enter any remaining LUKS passphrase:
# cryptsetup luksDump /dev/sda4
LUKS header information for /dev/sda4

Version:        1
Cipher name:    aes
Cipher mode:    cbc-essiv:sha256
Hash spec:      sha1
Payload offset: 4096
MK bits:        256
MK digest:      81 37 51 6c d5 c8 32 f1 7a 2d 47 7c 83 62 70 d9 f7 ce 5a 6e
MK salt:        ae 4b e8 09 c8 7a 5d 89 b0 f0 da 85 7e ce 7b 7f
                47 c7 ed 51 c1 71 bb b5 77 18 0d 9d e2 95 98 bf
MK iterations:  44500
UUID:           92ed3e8e-a9ac-4e59-afc3-39cc7c63e7f6

Key Slot 0: DISABLED
Key Slot 1: ENABLED
        Iterations:             229712
        Salt:                   5d 71 b2 3a 58 d7 f8 6a 36 4f 32 d1 23 1a df df
                                cd 2b 68 ee 18 f7 90 cf 58 32 37 b9 02 e1 42 d6
        Key material offset:    264
        AF stripes:             4000
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED

After you change a passphrase, it's a good idea to repeat the header backup steps we performed above and store the new header backup and passphrase in your vault.

Conclusion

Congratulations, you have now learned how to encrypt and mount a partition using LUKS! You can now use this mounted device just like any other. You can also restore a backup and start MySQL.

In Part Two, we will cover using InnoDB tablespace encryption.

Upcoming Webinar Thursday June 8, 2017: MongoDB Shell – A Primer

June 6, 2017 - 11:35am

Join Percona's Solutions Engineer Rick Golba as he presents MongoDB Shell: A Primer on Thursday, June 8, 2017, at 11 am PDT / 2 pm EDT (UTC-7).

Register Now

Every good DBA should be a master of the database shell. In this webinar, we will help you understand how to structure shell commands and discuss all the advanced functions and ways to chain commands in the mongo shell.

This webinar will teach you how to:

  • Limit the number of documents, or skip documents, when running a query
  • Work with the MongoDB aggregation pipeline
  • View an explain plan for a MongoDB query
  • Understand the MongoDB write concerns
  • Validate the contents of a database on various nodes in a replica set
  • Understand the MongoDB read preference

We will touch on CRUD functions, but a great deal more time will be spent on the areas above. We will have a dedicated webinar for mastering CRUD operations in MongoDB in the future.
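
For a quick taste of the shell, here is how limiting and skipping documents and viewing an explain plan look from the command line. This is only a sketch; the database and collection names are illustrative:

mongo --quiet mydb --eval 'printjson(db.orders.find({status: "A"}).skip(10).limit(5).toArray())'
mongo --quiet mydb --eval 'printjson(db.orders.find({status: "A"}).explain("executionStats"))'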

Register for the webinar here.

Rick Golba, Solutions Engineer

Rick Golba is a Solutions Engineer at Percona. Rick has over 20 years of experience working with databases. Prior to Percona, he worked as a Technical Trainer for HP/Vertica.

 

Percona Live Open Source Database Conference Europe 2017 in Dublin, Ireland Call for Papers is Open!

June 5, 2017 - 6:08pm

Announcing the opening of the Percona Live Open Source Database Conference Europe 2017 in Dublin, Ireland call for papers. It will be open from now until July 17, 2017.*

Do you have a big idea to explain, use case to share or skill to teach? Submit your speaking proposal for either breakout or tutorial sessions. This is your chance to put your developer ideas, business and case studies, and operational expertise in front of an intelligent, engaged audience of open source technology users.

The theme of Percona Live Europe 2017 is “Time Series Databases” for MySQL, MariaDB, MongoDB and other open source databases, with the main tracks of:

  • Developers
  • Business / Case Studies
  • Operations

We are looking for topics that address a variety of open source issues. Are you:

  • Working with MongoDB as a developer?
  • Creating a new MySQL-variant time series database?
  • Deploying MariaDB in a novel way?
  • Using open source database technology to solve a particular business issue?

We invite you to submit your speaking proposal for breakout, tutorial or lightning talk sessions. Share your open source database experiences with peers and professionals in the open source community by presenting a:

  • Breakout Session. Broadly cover a technology area using specific examples. Sessions should be either 25 minutes or 50 minutes in length (including Q&A).
  • Tutorial Session. Present a technical session that aims for a level between a training class and a conference breakout session. Encourage attendees to bring and use laptops for working on detailed and hands-on presentations. Tutorials will be three or six hours in length (including Q&A).
  • Lightning Talk. Give a five-minute presentation focusing on one key point that interests the open source community: technical, lighthearted or entertaining talks on new ideas, a successful project, a cautionary story, a quick tip or demonstration.

Speaking at Percona Live Europe is a great way to build your personal and company brands. If selected, you will receive a complimentary full conference pass!

Submit your talks now.

*NOTE: We have changed our registration platform this year, so you will need to register before submitting a talk idea (even if you have previously registered).

Tips for Submitting

Include presentation details, but be concise. Clearly state:

  • Purpose of the talk (problem, solution, action format, etc.)
  • Covered technologies
  • Target audience
  • Audience takeaway

Keep proposals free of sales pitches. The Committee is looking for in-depth technical talks, not ones that sound like a commercial.

Be original! Make your presentation stand out by submitting a proposal that focuses on real-world scenarios, relevant examples, and knowledge transfer.

Submit your proposals as soon as you can – the call for papers closes July 17, 2017!

Webinar June 7, 2017: MySQL In the Cloud – Migration, Best Practices, High Availability, Scaling

June 5, 2017 - 11:09am

Join Percona’s CEO and Founder Peter Zaitsev as he presents MySQL In the Cloud: Migration, Best Practices, High Availability, Scaling on Wednesday, June 7, 2017, at 10 am PDT / 1:00 pm EDT (UTC-7).

Register Now

Businesses are moving many of the systems and processes they once owned to offsite “service” models: Platform as a Service (PaaS), Software as a Service (SaaS), Infrastructure as a Service (IaaS), etc. These services are usually referred to as being “in the cloud” – meaning that the infrastructure and management of the service in question are not maintained by the enterprise using the service.

When it comes to database environments and infrastructure, more and more enterprises are moving to MySQL in the cloud to manage this vital part of their business. We often refer to database services provided in the cloud as Database as a Service (DBaaS). The next question after deciding to move your database to the cloud is “How do I plan properly so as to avoid a disaster?”

Before moving to the cloud, it is important to carefully define your database needs, plan for the migration and understand what putting a solution into production entails. This webinar discusses the following subjects on moving to the cloud:

  • Public and private cloud
  • Migration to the cloud
  • Best practices
  • High availability
  • Scaling

Register for the webinar here.

Peter Zaitsev, Percona CEO and Founder

Peter Zaitsev co-founded Percona and assumed the role of CEO in 2006. As one of the foremost experts on MySQL strategy and optimization, Peter leveraged both his technical vision and entrepreneurial skills to grow Percona from a two-person shop to one of the most respected open source companies in the business. With over 150 professionals in 20+ countries, Peter’s venture now serves over 3000 customers – including the “who’s who” of internet giants, large enterprises and many exciting startups. Percona was named to the Inc. 5000 in 2013, 2014 and 2015.

Peter was an early employee at MySQL AB, eventually leading the company’s High Performance Group. A serial entrepreneur, Peter co-founded his first startup while attending Moscow State University where he majored in Computer Science. Peter is a co-author of High Performance MySQL: Optimization, Backups, and Replication, one of the most popular books on MySQL performance. Peter frequently speaks as an expert lecturer at MySQL and related conferences, and regularly posts on the Percona Database Performance Blog. Fortune and DZone often tap Peter as a contributor, and his recent ebook Practical MySQL Performance Optimization is one of percona.com’s most popular downloads.

Percona XtraDB Cluster 5.7.18-29.20 is now available

June 2, 2017 - 12:41pm

Percona announces the release of Percona XtraDB Cluster 5.7.18-29.20 on June 2, 2017. Binaries are available from the downloads section or our software repositories.

NOTE: You can also run Docker containers from the images in the Docker Hub repository.

Percona XtraDB Cluster 5.7.18-29.20 is now the current release, based on the following:

All Percona software is open-source and free.

Fixed Bugs

  • PXC-749: Fixed memory leak when running INSERT on a table without a primary key defined and wsrep_certify_nonPK disabled (set to 0).

    NOTE: We recommend you define primary keys on all tables for correct write-set replication.

  • PXC-812: Fixed SST script to leave the DONOR keyring when JOINER clears the datadir.

  • PXC-813: Fixed SST script to use UTC time format.

  • PXC-816: Fixed hook for caching GTID events in asynchronous replication. For more information, see #1681831.

  • PXC-820: Enabled querying of pxc_maint_mode by another client during the transition period.

  • PXC-823: Fixed SST flow to gracefully shut down the JOINER node if SST fails because the DONOR leaves the cluster due to network failure. This ensures that the DONOR is then able to recover to the synced state when network connectivity is restored. For more information, see #1684810.

  • PXC-824: Fixed graceful shutdown of Percona XtraDB Cluster node to wait until applier thread finishes.

Other Improvements

  • PXC-819: Added five new status variables to expose required values from wsrep_ist_receive_status and wsrep_flow_control_interval as numbers, rather than strings that need to be parsed (see the example after the list):

    • wsrep_flow_control_interval_low
    • wsrep_flow_control_interval_high
    • wsrep_ist_receive_seqno_start
    • wsrep_ist_receive_seqno_current
    • wsrep_ist_receive_seqno_end
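
Once a node is running this release, you can read the new values like any other wsrep status variable; a quick sketch:

mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_flow_control_interval_%';"
mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_ist_receive_seqno_%';"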

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Percona XtraDB Cluster 5.6.36-26.20 is Now Available

June 2, 2017 - 12:40pm

Percona announces the release of Percona XtraDB Cluster 5.6.36-26.20 on June 2, 2017. Binaries are available from the downloads section or our software repositories.

Percona XtraDB Cluster 5.6.36-26.20 is now the current release, based on the following:

All Percona software is open-source and free.

NOTE: Due to end of life, Percona will stop producing packages for the following distributions after July 31, 2017:

  • Red Hat Enterprise Linux 5 (Tikanga)
  • Ubuntu 12.04 LTS (Precise Pangolin)

You are strongly advised to upgrade to the latest stable versions if you want to continue using Percona software.

Fixed Bugs

  • PXC-749: Fixed memory leak when running INSERT on a table without a primary key defined and wsrep_certify_nonPK disabled (set to 0).

    NOTE: We recommend you define primary keys on all tables for correct write-set replication.

  • PXC-813: Fixed SST script to use UTC time format.

  • PXC-823: Fixed SST flow to gracefully shut down the JOINER node if SST fails because the DONOR leaves the cluster due to network failure. This ensures that the DONOR is then able to recover to the synced state when network connectivity is restored. For more information, see #1684810.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

ProxySQL-Assisted Percona XtraDB Cluster Maintenance Mode

May 31, 2017 - 11:34am

In this blog post, we’ll look at how Percona XtraDB Cluster maintenance mode uses ProxySQL to take cluster nodes offline without impacting workloads.

Percona XtraDB Cluster Maintenance Mode

Since Percona XtraDB Cluster offers a high availability solution, it must consider a data flow where a cluster node gets taken down for maintenance (through isolation from the cluster or complete shutdown).

Percona XtraDB Cluster facilitates this with its maintenance mode. Percona XtraDB Cluster maintenance mode reduces the number of abrupt workload failures when a node is taken down, using ProxySQL (as a load balancer).

The central idea is delaying the core node action and allowing ProxySQL to divert the workload.

How ProxySQL Manages Percona XtraDB Cluster Maintenance Mode

With Percona XtraDB Cluster maintenance mode, ProxySQL marks the node as OFFLINE when a user triggers a shutdown signal (or wants to put a specific node into maintenance mode):

  • When a user triggers a shutdown, the Percona XtraDB Cluster node sets pxc_maint_mode to SHUTDOWN (from the DISABLED default) and sleeps for x seconds (dictated by pxc_maint_transition_period — 10 seconds by default). ProxySQL auto-detects this change and marks the node as OFFLINE. With this change, ProxySQL avoids opening new connections for any DML transactions, but continues to service existing queries until pxc_maint_transition_period expires. Once the sleep period is complete, Percona XtraDB Cluster delivers the real shutdown signal — thereby giving ProxySQL enough time to transition the workload.
  • If the user needs to take a node into maintenance mode, the user can simply set pxc_maint_mode to MAINTENANCE (a sketch follows this list). With that, pxc_maint_mode is updated and the client connection updating it sleeps for x seconds (as dictated by pxc_maint_transition_period) before giving control back to the user. ProxySQL auto-detects this change and marks the node as OFFLINE. With this change, ProxySQL avoids opening new connections for any DML transactions, but continues to service existing queries.
  • ProxySQL auto-detects this change in maintenance state and then automatically re-routes traffic, thereby reducing abrupt workload failures.
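
For example, taking a node out by hand is a single statement on that node; the session blocks for pxc_maint_transition_period before returning:

mysql -e "SET GLOBAL pxc_maint_mode='MAINTENANCE';"

Once maintenance is done, set it back:

mysql -e "SET GLOBAL pxc_maint_mode='DISABLED';"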

Technical Details:

  • The ProxySQL Galera checker script continuously monitors the state of individual nodes by checking the pxc_maint_mode variable status (in addition to the existing wsrep_local_state) using the ProxySQL scheduler feature.
  • Scheduler is a Cron-like implementation integrated inside ProxySQL, with millisecond granularity.
  • If proxysql_galera_checker detects pxc_maint_mode = SHUTDOWN | MAINTENANCE, then it marks the node as OFFLINE_SOFT.  This avoids the creation of new connections (or workloads) on the node.

Sample proxysql_galera_checker log:

Thu Dec  8 11:21:11 GMT 2016 Enabling config
Thu Dec  8 11:21:17 GMT 2016 Check server 10:127.0.0.1:25000 , status ONLINE , wsrep_local_state 4
Thu Dec  8 11:21:17 GMT 2016 Check server 10:127.0.0.1:25100 , status ONLINE , wsrep_local_state 4
Thu Dec  8 11:21:17 GMT 2016 Check server 10:127.0.0.1:25200 , status ONLINE , wsrep_local_state 4
Thu Dec  8 11:21:17 GMT 2016 Changing server 10:127.0.0.1:25200 to status OFFLINE_SOFT due to SHUTDOWN
Thu Dec  8 11:21:17 GMT 2016 Number of writers online: 2 : hostgroup: 10
Thu Dec  8 11:21:17 GMT 2016 Enabling config
Thu Dec  8 11:21:22 GMT 2016 Check server 10:127.0.0.1:25000 , status ONLINE , wsrep_local_state 4
Thu Dec  8 11:21:22 GMT 2016 Check server 10:127.0.0.1:25100 , status ONLINE , wsrep_local_state 4
Thu Dec  8 11:21:22 GMT 2016 Check server 10:127.0.0.1:25200 , status OFFLINE_SOFT , wsrep_local_state 4

Ping us below with any questions or comments.

Percona XtraBackup 2.4.7-2 is Now Available

May 30, 2017 - 9:53am

Percona announces the GA release of Percona XtraBackup 2.4.7-2 on May 29, 2017. You can download it from our download site and apt and yum repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backups.

Bug Fixed:
  • Fixed build failure on Debian 9.0 (Stretch). Bug fixed #1678947.

Release notes with all the bugfixes for Percona XtraBackup 2.4.7-2 are available in our online documentation. Please report any bugs to the launchpad bug tracker.

Webinar May 31, 2017: Online MySQL Backups with Percona XtraBackup

May 30, 2017 - 9:40am

Please join Percona's Solution Engineer Dimitri Vanoverbeke as he presents Online MySQL Backups with Percona XtraBackup on Wednesday, May 31, 2017 at 7:00 am PDT / 10:00 am EDT (UTC-7).

Register Now

Percona XtraBackup is a free, open source, complete online backup solution for all versions of Percona Server, MySQL® and MariaDB®. Percona XtraBackup provides:

  • Fast and reliable backups
  • Uninterrupted transaction processing during backups
  • Savings on disk space and network bandwidth with better compression
  • Automatic backup verification
  • Higher uptime due to faster restore time

This webinar will discuss the different features of Percona XtraBackup, including:

  • Full and incremental backups
  • Compression, streaming, and encryption of backups
  • Backing up to the cloud (swift)
  • Percona XtraDB Cluster / Galera Cluster
  • Percona Server specific features
  • MySQL 5.7 support
  • Tools that use Percona XtraBackup
  • Limitations

Register for the webinar here.

Dimitri Vanoverbeke, Solution Engineer

At the age of seven, Dimitri received his first computer. Since then he has felt addicted to anything with a digital pulse. Dimitri has been active in IT professionally since 2003, when he took various roles from internal system engineering to consulting. Prior to joining Percona, Dimitri worked as a consultant for a leading open source software consulting firm in Belgium. During his career, Dimitri has become familiar with a broad range of open source solutions and with the devops philosophy. Whenever he’s not glued to his computer screen, he enjoys traveling, cultural activities, basketball and the great outdoors.

 

Percona Monitoring and Management 1.1.4 is now available

May 29, 2017 - 12:28pm

Percona announces the release of Percona Monitoring and Management 1.1.4 on May 29, 2017.

For installation instructions, see the Deployment Guide.

This release includes experimental support for MongoDB in Query Analytics, including an updated QAN interface.

Query Analytics for MongoDB

To enable MongoDB query analytics, use the mongodb:queries alias when adding the service. As an experimental feature, it also requires the --dev-enable option:

sudo pmm-admin add --dev-enable mongodb:queries

NOTE: Currently, it monitors only collections that are present when you enable MongoDB query analytics. Query data for collections that you add later is not gathered. This is a known issue and it will be fixed in the future.

Query Analytics Redesign

The QAN web interface was updated for better usability and functionality (including the new MongoDB query analytics data). The new UI is experimental and available by specifying /qan2 after the URL of PMM Server.

NOTE: The button on the main landing page still points to the old QAN interface.

You can check out the new QAN web UI at https://pmmdemo.percona.com/qan2

New in PMM Server
  • PMM-724: Added the Index Condition Pushdown (ICP) graph to the MySQL InnoDB Metrics dashboard.
  • PMM-734: Fixed the MySQL Active Threads graph in the MySQL Overview dashboard.
  • PMM-807: Fixed the MySQL Connections graph in the MySQL Overview dashboard.
  • PMM-850: Updated the MongoDB RocksDB and MongoDB WiredTiger dashboards.
  • Removed the InnoDB Deadlocks and Index Collection Pushdown graphs from the MariaDB dashboard.
  • Added tooltips with descriptions for graphs in the MySQL Query Response Time dashboard. Similar tooltips will be gradually added to all graphs.
New in PMM Client
  • PMM-801: Improved PMM Client upgrade process to preserve credentials that are used by services.
  • Added options for pmm-admin to enable MongoDB cluster connections.
About Percona Monitoring and Management

Percona Monitoring and Management is an open-source platform for managing and monitoring MySQL and MongoDB performance. Percona developed it in collaboration with experts in the field of managed database services, support and consulting.

PMM is a free and open-source solution that you can run in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

A live demo of PMM is available at pmmdemo.percona.com.

Please provide your feedback and questions on the PMM forum.

If you would like to report a bug or submit a feature request, use the PMM project in JIRA.

Percona Server for MongoDB 3.0.15-1.10 is Now Available

May 26, 2017 - 10:28am

Percona announces the release of Percona Server for MongoDB 3.0.15-1.10 on May 26, 2017. Download the latest version from the Percona web site or the Percona Software Repositories.

Percona Server for MongoDB is an enhanced, open source, fully compatible, highly-scalable, zero-maintenance downtime database supporting the MongoDB v3.0 protocol and drivers. It extends MongoDB with PerconaFT and MongoRocks storage engines, as well as several enterprise-grade features:

NOTE: PerconaFT was deprecated and is not available in later versions. TokuBackup was replaced with Hot Backup for WiredTiger and MongoRocks storage engines.

Percona Server for MongoDB requires no changes to MongoDB applications or code.

This release is based on MongoDB 3.0.15 and includes the following additional change:

Percona Server for MongoDB 3.0.15-1.10 release notes are available in the official documentation.

Percona Server for MySQL 5.7.18-15 is Now Available

May 26, 2017 - 10:05am

Percona announces the GA release of Percona Server for MySQL 5.7.18-15 on May 26, 2017. Download the latest version from the Percona web site or the Percona Software Repositories. You can also run Docker containers from the images in the Docker Hub repository.

Based on MySQL 5.7.18, including all the bug fixes in it, Percona Server for MySQL 5.7.18-15 is the current GA release in the Percona Server for MySQL 5.7 series. Percona provides completely open-source and free software. Find release details in the 5.7.18-15 milestone at Launchpad.

Bugs Fixed:
  • The server would crash when querying a partitioned table with a single partition. Bug fixed #1657941 (upstream #76418).
  • Running a query on InnoDB table with ngram full-text parser and a LIMIT clause could lead to a server crash. Bug fixed #1679025 (upstream #85835).

The release notes for Percona Server for MySQL 5.7.18-15 are available in the online documentation. Please report any bugs on the launchpad bug tracker.

What About ProxySQL and Mirroring?

May 25, 2017 - 3:45pm

In this blog post, we’ll look at how ProxySQL and mirroring go together.

Overview

Let me be clear: I love ProxySQL, and I think it is a great component for expanding architecture flexibility and high availability. But not all that shines is gold! In this post, I want to set some correct expectations, and avoid selling carbon as gold (carbon has its own uses, while gold has others).

First of all, we need to cover the basics of how ProxySQL manages traffic dispatch (I don’t want to call it mirroring, and I’ll explain further below).

ProxySQL receives a connection from the application, and through it we can have a simple SELECT or a more complex transaction. ProxySQL gets each query, passes it to the Query Processor, processes it, and identifies whether the query is to be mirrored. If so, it duplicates the whole MySQL session's ProxySQL internal object and associates it with a mirror queue (which refers to a mirror thread pool). If the pool is free (has an available active slot in the set of concurrent active threads), the query is processed right away. If not, it stays in the queue. If the queue is full, the query is lost.

Whatever is returned from the query goes to /dev/null, and as such no result set is passed back to the client.

The whole process is not free for a server. If you check the CPU utilization, you will see that the “mirroring” in ProxySQL actually doubles the CPU utilization. This means that the traffic on server A is impacted because of resource contention.

Summarizing, ProxySQL will:

  1. Send queries for execution in a different order
  2. Completely ignore any transaction isolation
  3. Execute a different number of queries on B than on A
  4. Add significant load to the server's resources

These points, coupled with the expectations I mention in the reasoning at the end of this article, make it quite clear to me that at the moment we cannot consider ProxySQL a valid mechanism for duplicating a consistent load from server A to server B.

Personally, I don’t think that the ProxySQL development team (Rene :D) should waste time on fixing this issue, as there are so many other things to cover and improve on in ProxySQL.

After working extensively with ProxySQL, and doing deep QA on mirroring, I think we should either keep it as a basic, blind traffic dispatcher or fully re-conceptualize it. With that clarified, ProxySQL “traffic dispatch” (I still don't want to call it mirroring) remains a very interesting feature that can have useful applications – especially since it is easy to set up.

The following test results should help set the correct expectations.

The tests were simple: load data in a Percona XtraDB Cluster and use ProxySQL to replicate the load on a MySQL master-slave environment.

  • Machines for MySQL/Percona XtraDB Cluster: VM with CentOS 7, 4 CPU 3 GB RAM, attached storage
  • Machine for ProxySQL: VM CentOS 7, 8 CPU 8GB RAM

Why did I choose to give ProxySQL a higher volume of resources? I knew in advance that I might need to play a bit with a couple of settings requiring more memory and CPU cycles, and I wanted to be sure I didn't get any problems from ProxySQL in relation to CPU and memory.

The application I used to generate load is a Java application I developed for my tests. The app is at https://github.com/Tusamarco/blogs/blob/master/stresstool_base_app.tar.gz, and the whole set I used to do the tests is here: https://github.com/Tusamarco/blogs/tree/master/proxymirror.

I used four different tables:

+------------------+
| Tables_in_mirror |
+------------------+
| mirtabAUTOINC    |
| mirtabMID        |
| mirtabMIDPart    |
| mirtabMIDUUID    |
+------------------+

OK, so let's start. Note that the meaningful tests are the ones below; for the whole set, refer to the complete test package.

First, set up ProxySQL:

delete from mysql_servers where hostgroup_id in (500,501,700,701);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.5',500,3306,60000,400);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.5',501,3306,100,400);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.21',501,3306,20000,400);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.231',501,3306,20000,400);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.7',700,3306,1,400);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.7',701,3306,1,400);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.25',701,3306,1,400);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.43',701,3306,1,400);
LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;

delete from mysql_users where username='load_RW';
insert into mysql_users (username,password,active,default_hostgroup,default_schema,transaction_persistent) values ('load_RW','test',1,500,'test',1);
LOAD MYSQL USERS TO RUNTIME; SAVE MYSQL USERS TO DISK;

delete from mysql_query_rules where rule_id=202;
insert into mysql_query_rules (rule_id,username,destination_hostgroup,mirror_hostgroup,active,retries,apply) values(202,'load_RW',500,700,1,3,1);
LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;

Test 1

The first test is mainly a simple functional test, during which I insert records using one single thread in Percona XtraDB Cluster and MySQL. No surprise: here I have 3000 loops, and at the end of the test I have 3000 records on both platforms.

To have a baseline we can see that the ProxySQL CPU utilization is quite low:

At the same time, the number of “questions” against Percona XtraDB Cluster and MySQL is very similar:

Percona XtraDB Cluster

MySQL

The other two metrics we want to keep an eye on are Mirror_concurrency and Mirror_queue_length. These two refer respectively to mysql-mirror_max_concurrency and mysql-mirror_max_queue_length:

These two variables and metrics were introduced in ProxySQL 1.4.0, with the intent of controlling and managing the load ProxySQL generates internally for the mirroring feature. In this case, you can see we have a max of three concurrent connections and zero queue entries (all good).

Now that we have a baseline, and we know that at a functional level “it works,” let's see what happens when we increase the load.

Test 2

The scope of this test was to identify how ProxySQL behaves with a standard configuration and increasing load. It turns out that as soon as ProxySQL has a little bit more load, it starts to lose some queries along the way.

Executing 3000 loops with 40 threads results in 120,000 rows inserted into each of the four tables in Percona XtraDB Cluster. But the tables on the secondary (mirrored) platform hold only a variable number of inserted rows, between 101,359 and 104,072. This demonstrates consistent loss of data.

After reviewing and comparing the connections running in Percona XtraDB Cluster and the secondary, we can see that (as expected) Percona XtraDB Cluster’s number of connections is scaling and serving the number of incoming requests, while the connections on the secondary are limited by the default value of mysql-mirror_max_concurrency=16.

It is also interesting to note that the ProxySQL transaction process queue maintains its connections to the secondary longer than the connections to Percona XtraDB Cluster.

As we can see above, the queue describes an evident bell curve that reaches 6K entries, which is well below the mysql-mirror_max_queue_length limit (32K). Yet queries were dropped by ProxySQL, which indicates the queue is not really enough to accommodate the pending work.

CPU-wise, ProxySQL (as expected) takes a few more cycles, but nothing crazy. The overhead of the simple mirroring queue processing can be seen when the main load stops, around 12:47.

Another interesting graph to keep an eye on is the one describing the executed commands inside Percona XtraDB Cluster and the secondary:

Percona XtraDB Cluster

Secondary

As you can see, the traffic on the secondary was significantly lower (669 on average, compared to Percona XtraDB Cluster's 1.17K). It then spikes when the main load on the Percona XtraDB Cluster node terminates. In short, it is quite clear that ProxySQL cannot scale to follow the traffic on Percona XtraDB Cluster, and actually loses a significant amount of data on the secondary.

Doubling the load in Test 3 shows the same behavior, with ProxySQL reaching its limit for traffic duplication.

But can this be optimized?

The answer is, of course, yes! This is what mysql-mirror_max_concurrency is for, so let's see what happens if we increase the value from 16 to 100 (just to make it crazy high).
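
For reference, a sketch of that change through the ProxySQL admin interface (default admin port 6032; the credentials here are illustrative):

mysql -u admin -padmin -h 127.0.0.1 -P 6032 -e "
UPDATE global_variables SET variable_value='100' WHERE variable_name='mysql-mirror_max_concurrency';
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;"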

Test 4 (two app node writing)

The first thing to note is that both Percona XtraDB Cluster and the secondary report the same number of rows in the tables (240,000). That is a good first win.

Second, note the number of running connections:

The graphs are now much closer, and the queue drops to just a few entries.

Commands executed in Percona XtraDB Cluster:

And commands executed in the secondary:

Average execution reports the same value, and very similar trends.

Finally, what was the CPU cost and effect?

Percona XtraDB Cluster and secondary CPU utilization:

     

As expected, there is some difference in the CPU usage distribution. But the trend is consistent between the two nodes, and across the operations.

The ProxySQL CPU utilization is definitely higher than before:

But it’s absolutely manageable, and still reflects the initial distribution.

What about CRUD? So far I have only tested the insert operation, but what happens if we run a full CRUD set of tests?

Test 7 (CRUD)

First of all, let’s review the executed commands in Percona XtraDB Cluster:

And the secondary:

While the workloads appear very similar, selects aside, the behavior diverges significantly. This is because on the secondary the different operations are not encapsulated within a transaction; they are executed as they are received. We can see a significant difference in update and delete operations between the two.

Also, the threads in the execution show a different picture between the two platforms:

Percona XtraDB Cluster

Secondary

It appears quite clear that Percona XtraDB Cluster is constantly running more threads and more connections. Nevertheless, both platforms process a similar total number of questions:

Percona XtraDB Cluster

Secondary

Both average around 1.17K questions per second.

This is another indication of how much concurrent operation impacts behavior when transaction isolation and execution order are not respected. Below, we can clearly see the different behavior by reviewing the CPU utilization:

Percona XtraDB Cluster

Secondary

Conclusions

To close this article, I want to go back to the start. We cannot consider the mirror function in ProxySQL as a real mirroring, but more as traffic redirection (check here for more reasoning on mirroring from my side).

Using ProxySQL with this approach is still partially effective for testing load and the effect it has on a secondary platform. As we know, data consistency is not guaranteed in this scenario, and selects, updates and deletes are affected (given the different data sets and result sets they manage).

Server behavior differs between the original and the mirror, whether in quantity or in quality.

I am convinced that when we need a tool to test our production load on a different or new platform, we would do better to look at something else, possibly Query Playback, recently reviewed and significantly patched by DropBox (https://github.com/Percona-Lab/query-playback).

In the end, ProxySQL is already a cool tool. If it doesn't cover mirroring well, I can live with that. I am interested in having it work as it should (and it does in many other functionalities).

Acknowledgments

As usual, thanks to Rene, who worked on fixing and introducing new functionality associated with mirroring, like queue and concurrency control.

Thanks also to the Percona team that developed Percona Monitoring and Management (PMM): all the graphs here (except three) come from PMM (some of them I customized).
