
Internal Temporary Tables in MySQL 5.7

December 4, 2017 - 6:51am

In this blog post, I investigate a case of spiking InnoDB Rows inserted in the absence of a write query, and find internal temporary tables to be the culprit.

Recently I was investigating an interesting case for a customer. We could see regular spikes in the “InnoDB rows inserted” metric (jumping from 1K/sec to 6K/sec), but we were not able to correlate those spikes with any other activity. The innodb_row_inserted graph (picture from the PMM demo) looked similar to this (but on a much larger scale):

Other graphs (Com_*, Handler_*) did not show any spikes like that. I examined the logs (we were not able to enable the general log or change the threshold of the slow log), performance_schema, triggers, stored procedures, prepared statements, and even reviewed the binary logs. However, I was not able to find any single write query that could have caused the spike to 6K rows inserted.

Finally, I figured out that I was focusing on the wrong queries. I was trying to correlate the spikes on the InnoDB Rows inserted graph to the DML queries (writes). However, the spike was caused by SELECT queries! But why would SELECT queries cause the massive InnoDB insert operation? How is this even possible?

It turned out that this is related to temporary tables on disk. In MySQL 5.7, the default setting for internal_tmp_disk_storage_engine is InnoDB. That means that if a SELECT needs to create a temporary table on disk (e.g., for GROUP BY), it will use the InnoDB storage engine.
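
To see this in action, here is a minimal sketch (the orders table and its columns are hypothetical examples):

-- Check which engine MySQL 5.7 uses for on-disk internal temporary tables
SHOW GLOBAL VARIABLES LIKE 'internal_tmp_disk_storage_engine';

-- EXPLAIN flags queries that may need an internal temporary table
-- ("Using temporary" in the Extra column)
EXPLAIN SELECT customer_id, COUNT(*) FROM orders GROUP BY customer_id;

-- These counters grow as internal temporary tables are created (in memory and on disk)
SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';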

Is that bad? Not necessarily. Krunal Bauskar originally published a blog post about InnoDB intrinsic table performance in MySQL 5.7. InnoDB internal temporary tables are not redo/undo logged, so in general performance is better. However, here is what we need to watch out for:

  1. A change in where MySQL stores temporary tables. InnoDB temporary tables are stored in the ibtmp1 tablespace file. There are a number of challenges with that:
    • Location of the ibtmp1 file. By default it is located inside the InnoDB datadir. Originally, MyISAM temporary tables were stored in tmpdir. We can configure the size of the file, but the location is always relative to the InnoDB datadir, so to move it to tmpdir we need something like this: innodb_temp_data_file_path=../../../tmp/ibtmp1:12M:autoextend
    • Like other tablespaces, it never shrinks back (though it is truncated on restart). A huge temporary table can fill the disk and hang MySQL (bug opened). One way to prevent that is to set a maximum size for the ibtmp1 file: innodb_temp_data_file_path=ibtmp1:12M:autoextend:max:1G (see the consolidated my.cnf sketch and the monitoring query after this list)
    • Like other InnoDB tables, it has all the InnoDB limitations, i.e., InnoDB row or column limits. If a temporary table exceeds these, MySQL returns “Row size too large” or “Too many columns” errors. The workaround is to set internal_tmp_disk_storage_engine to MYISAM.
  2. When all temp tables go to InnoDB, it may increase the total engine load as well as affect other queries. For example, if originally all datasets fit into the buffer pool and temporary tables were created outside of InnoDB, they did not affect the InnoDB memory footprint. Now, if a huge temporary table is created as an InnoDB table, it will use the innodb_buffer_pool and may “evict” existing pages so that other queries perform slower.
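
Putting the two settings above together, here is a minimal my.cnf sketch (the relative path is just an example; adjust it to your layout):

[mysqld]
# Relocate ibtmp1 toward tmpdir (the path is relative to the InnoDB datadir) and cap it at 1G
innodb_temp_data_file_path = ../../../tmp/ibtmp1:12M:autoextend:max:1G

You can then watch the current size of the temporary tablespace with a query like:

SELECT FILE_NAME, TABLESPACE_NAME, ENGINE,
       TOTAL_EXTENTS * EXTENT_SIZE AS TotalSizeBytes, MAXIMUM_SIZE
  FROM INFORMATION_SCHEMA.FILES
 WHERE TABLESPACE_NAME = 'innodb_temporary';
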
Conclusion

Beware of this change in MySQL 5.7: internal temporary tables (those created for SELECTs when a temporary table is needed) are stored in the InnoDB ibtmp file. In most cases this is faster. However, it can change the original behavior. If needed, you can switch the creation of internal temp tables back to MyISAM: SET GLOBAL internal_tmp_disk_storage_engine=MYISAM

Percona Monitoring and Management 1.5: QAN in Grafana Interface

December 1, 2017 - 1:21pm

In this post, we’ll examine how we’ve improved the GUI layout for Percona Monitoring and Management 1.5 by moving the Query Analytics (QAN) functions into the Grafana interface.

Percona Monitoring and Management users might notice that QAN appears a little differently in our 1.5 release. We’ve taken steps to unify the PMM interface so that it feels more natural to move from reviewing historical trends in Metrics Monitor to examining slow queries in QAN. Most significantly:

  1. QAN moves from a stand-alone application into Metrics Monitor as a dashboard application
  2. We updated the color scheme of QAN to match Metrics Monitor (but you can toggle a button if you prefer to still see QAN in white!)
  3. Date picker and host selector now use the same methods as Metrics Monitor

Starting from the PMM landing page, you still see two buttons – one for Metrics Monitor and another for Query Analytics (this hasn’t changed):

Once you select Query Analytics on the left, you see the new Metrics Monitor dashboard page for PMM Query Analytics. It is now hosted as a Metrics Monitor dashboard, and notice the URL is no longer /qan:

Another advantage of the Metrics Monitor dashboard integration is that the QAN inherits the host selector from Grafana, which supports partial string matching. This makes it simpler to find the host you’re searching for if you have more than a handful of instances:

The last feature enhancement worth mentioning is the native Grafana time selector, which lets you select down to the minute resolution time frames. This was a frequent source of feature requests — previously PMM limited you to our pre-defined default ranges. Keep in mind that QAN has an internal archiving job that caps QAN history at eight days.

Last but not least is the ability to toggle between the default dark interface and the optional white one. Look for the small lightbulb icon at the bottom left of any QAN screen and give it a try!

We hope you enjoy the new interface, and we look forward to your feedback on these improvements!

This Week in Data with Colin Charles 17: AWS Re:Invent, a New Book on MySQL Cluster and Another Call Out for Percona Live 2018

December 1, 2017 - 6:58am

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

The CFP for Percona Live Santa Clara 2018 closes December 22, 2017: please consider submitting as soon as possible. We want to make an early announcement of talks, so we’ll definitely do a first pass even before the CFP date closes. Keep in mind the expanded view of what we are after: it’s more than just MySQL and MongoDB. And don’t forget that with one day less, there will be intense competition to fit all the content in.

A new book on MySQL Cluster is out: Pro MySQL NDB Cluster by Jesper Wisborg Krogh and Mikiya Okuno. At 690 pages, it is a weighty tome, and something I fully plan on reading, considering I haven’t played with NDBCLUSTER for quite some time.

Did you know that since MySQL 5.7.17, connection control plugins are included? They help DBAs introduce an increasing delay in server response to clients after a certain number of consecutive failed connection attempts. Read more in the connection control plugins documentation.
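
As a quick sketch of how you might enable them (the plugin names and variables are as documented for MySQL 5.7; the threshold and delay values are arbitrary examples):

INSTALL PLUGIN CONNECTION_CONTROL SONAME 'connection_control.so';
INSTALL PLUGIN CONNECTION_CONTROL_FAILED_LOGIN_ATTEMPTS SONAME 'connection_control.so';

-- Start delaying server responses after 3 consecutive failed attempts,
-- with a minimum delay of 1 second (1,000 milliseconds)
SET GLOBAL connection_control_failed_connections_threshold = 3;
SET GLOBAL connection_control_min_connection_delay = 1000;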

While there are a tonne of announcements coming out of the Amazon re:Invent 2017 event, I highly recommend also reading Some data of interest as AWS reinvent 2017 ramps up by James Governor. Telemetry data from Sumo Logic’s 1,500 largest customers suggests that NoSQL database usage has overtaken relational database workloads! Read The State of Modern Applications in the Cloud. Page 8 tells us that MySQL is the #1 database on AWS (I don’t see MariaDB Server being mentioned, which is odd; did they lump it in together?), and MySQL, Redis & MongoDB account for 40% of database adoption on AWS. In other news, Andy Jassy also mentioned that less than 1.5 months after hitting 40,000 database migrations, they went past 45,000 over the Thanksgiving holiday last week. Have you started using AWS Database Migration Service?

Releases

Link List

Upcoming appearances
  • ACMUG 2017 gathering – Beijing, China, December 9-10 2017 – it was very exciting being there in 2016, I can only imagine it’s going to be bigger and better in 2017, since it is now two days long!
Feedback

I look forward to feedback/tips via e-mail at colin.charles@percona.com or on Twitter @bytebot.

Percona Server for MongoDB 3.4.10-2.10 Is Now Available

November 30, 2017 - 1:50pm

Percona announces the release of Percona Server for MongoDB 3.4.10-2.10 on November 30, 2017. Download the latest version from the Percona web site or the Percona Software Repositories.

Percona Server for MongoDB is an enhanced, open source, fully compatible, highly-scalable, zero-maintenance downtime database supporting the MongoDB v3.4 protocol and drivers. It extends MongoDB with the Percona Memory Engine and MongoRocks storage engine, as well as several enterprise-grade features.

Percona Server for MongoDB requires no changes to MongoDB applications or code.

This release is based on MongoDB 3.4.10 and includes the following additional change:

  • oplog searches have been optimized in MongoRocks, which should also increase overall performance.

Percona Monitoring and Management 1.5.1 Is Now Available

November 29, 2017 - 11:12am

Percona announces the release of Percona Monitoring and Management 1.5.1. This release contains fixes for bugs found after Percona Monitoring and Management 1.5.0 was released.

Bug fixes
  • PMM-1771: When upgrading PMM to 1.5.0 using Docker commands, the PMM System Summary, PMM Add Instance, and PMM Query Analytics dashboards were not available.
  • PMM-1761: The PMM Query Analytics dashboard did not display the list of hosts correctly.
  • PMM-1769: It was possible to add an Amazon RDS instance providing invalid credentials on the PMM Add Instance dashboard.

Other bug fixes: PMM-1767, PMM-1762

Percona XtraBackup 2.3.10 Is Now Available

November 29, 2017 - 11:00am

Percona announces the release of Percona XtraBackup 2.3.10 on November 29, 2017. Downloads are available from our download site or Percona Software Repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, it drives down backup costs while providing unique features for MySQL backups.

This release is the current GA (Generally Available) stable release in the 2.3 series.

New Features
  • Packages are now available for Ubuntu 17.10 (Artful).
  • xbcrypt can now decrypt files in parallel by specifying the number of threads with the --encrypt-threads option.
  • The Percona XtraBackup --copy-back option can now be used with the --parallel option to copy the user data files in parallel (redo logs and system tablespaces are copied in the main thread); see the usage sketch after this list.
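
A minimal sketch of how these two options might be combined (the paths, key file and thread counts are hypothetical):

# Decrypt an encrypted backup file using 4 threads
xbcrypt --decrypt --encrypt-algo=AES256 --encrypt-key-file=/etc/xb.key \
  --encrypt-threads=4 < backup.xbcrypt > backup.decrypted

# Copy the prepared backup back to the datadir using 4 parallel copy threads
xtrabackup --copy-back --parallel=4 --target-dir=/data/backups/full
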
Bugs Fixed:
  • Percona XtraBackup failed to build with GCC 7. Bug fixed #1681721.
  • Percona XtraBackup would crash while preparing a 5.5 backup with the utf8_general50_ci collation. Bug fixed #1533722 (Fungo Wang).
  • Percona XtraBackup would crash if --throttle was used while preparing backups. Fixed by making this option available only during the backup process. Bug fixed #1691093.
  • Percona XtraBackup could get stuck if backups were taken with the --safe-slave-backup option while there were long-running queries. Bug fixed #1717158.

Release notes with all the bugfixes for version 2.3.10 are available in our online documentation. Report bugs on the launchpad bug tracker.

Percona XtraBackup 2.4.9 Is Now Available

November 29, 2017 - 10:55am

Percona announces the GA release of Percona XtraBackup 2.4.9 on November 29, 2017. You can download it from our download site and apt and yum repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, it drives down backup costs while providing unique features for MySQL backups.

New features:
  • Packages are now available for Ubuntu 17.10 (Artful).
  • xbcrypt can now decrypt files in parallel by specifying the number of threads with the --encrypt-threads option.
  • The --copy-back option can now be used with the --parallel option to copy the user data files in parallel (redo logs and system tablespaces are copied in the main thread).
Bugs Fixed:
  • Percona XtraBackup would fail to backup large databases on 32-bit platforms. Bug fixed #1602537.
  • Percona XtraBackup failed to build with GCC 7. Bug fixed #1681721.
  • Percona XtraBackup would hang during the prepare phase if there was not enough room in the log buffer to accommodate checkpoint information at the end of the crash recovery process. Bug fixed #1705383.
  • When a backup was streamed in tar format with the --slave-info option, the output file xtrabackup_slave_info did not contain the slave information. Bug fixed #1707918.
  • If the --slave-info option was used while backing up 5.7 instances, the master binary log coordinates were not properly displayed in the logs. Bug fixed #1711010.
  • innobackupex --slave-info would report a single ‘m’ instead of the slave information in the standard output. Bug fixed #1727920.
  • Percona XtraBackup would crash while preparing a 5.5 backup with the utf8_general50_ci collation. Bug fixed #1533722 (Fungo Wang).
  • Percona XtraBackup would crash if the --throttle option was used while preparing backups. Fixed by making this option available only during the backup process. Bug fixed #1691093.
  • Percona XtraBackup could get stuck if backups were taken with the --safe-slave-backup option while there were long-running queries. Bug fixed #1717158.

Other bugs fixed: #1678838, #1727922, and #1729241.

Release notes with all the bugfixes for version 2.4.9 are available in our online documentation. Please report any bugs to the launchpad bug tracker.

Best Practices for Percona XtraDB Cluster on AWS

November 28, 2017 - 2:52pm

In this blog post I’ll look at the performance of Percona XtraDB Cluster on AWS using different service instances, and recommend some best practices for maximizing performance.

You can use Percona XtraDB Cluster in AWS environments. We often get questions about how best to deploy it, and how to optimize both performance and spend when doing so. I decided to look into it with some benchmark testing.

For these benchmark tests, I used the following configuration:

  • Region: US East – 1 (N. Virginia), availability zones b, c and d
  • Sysbench 1.0.8
  • ProxySQL 1.4.3
  • 10 tables, 40 mln records – ~95GB dataset
  • Percona XtraDB Cluster 5.7.18
  • Amazon Linux AMI

We evaluated different AWS instances to provide the best recommendation for running Percona XtraDB Cluster. We used instances:

  • With General Purpose storage volumes, 200GB each
  • With IO provisioned volumes, 200GB, 10,000 IOPS
  • I3 instances with local attached NVMe storage.

We also used different instance sizes:

Instance     vCPU  Memory (GB)
r4.large     2     15.25
r4.xlarge    4     30.5
r4.2xlarge   8     61
r4.4xlarge   16    122
i3.large     2     15.25
i3.xlarge    4     30.5
i3.2xlarge   8     61
i3.4xlarge   16    122


While I3 instances with NVMe storage do not provide the same functionality for handling shared storage and snapshots as General Purpose and IO provisioned volumes, Percona XtraDB Cluster provides data duplication by itself, so we think it is still valid to include them in this comparison.

We ran benchmarks in the US East 1 (N. Virginia) Region, and we used different availability zones for each of the Percona XtraDB Cluster zones (mostly zones “b”, “c” and “d”):

The clients connected both directly and through ProxySQL, so we were able to measure ProxySQL’s performance overhead as well.

ProxySQL is an advanced method to access Percona XtraDB Cluster. It can perform a health check of the nodes and route the traffic to the ONLINE node. It can also split read and write traffic and route read traffic to different nodes (although we didn’t use this capability in our benchmark).

In our benchmarks, we used 1, 4, 16, 64 and 256 user threads. For this detailed review, however, we’ll look at the 64-thread case.
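
For reference, a sysbench 1.0 invocation along these lines would drive the 64-thread case (the exact benchmark command was not published; the host, credentials and run time below are assumptions based on the stated dataset):

sysbench oltp_read_write \
  --tables=10 --table-size=40000000 \
  --mysql-host=proxysql-host --mysql-user=sbtest --mysql-password=sbtest \
  --threads=64 --time=300 --report-interval=1 \
  run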

Results

First, let’s review the average throughput (higher is better) and latency (lower is better) results (we measured the 99th percentile with one-second resolution):

Results summary, raw performance:

The performance of Percona XtraDB Cluster running on GP2 volumes is often pretty slow, so it is hard to recommend this volume type for serious workloads.

IO provisioned volumes perform much better, and should be considered as the primary target for Percona XtraDB Cluster deployments. I3 instances show even better performance.

I3 instances use locally attached volumes and do not provide the same functionality as EBS IO provisioned volumes — although some of these limitations are covered by Percona XtraDB Cluster’s ability to keep copies of data on each node.

Results summary for jitter:

Along with average throughput and latency, it is important to take into account “jitter” — how stable is the performance during the runs?

Latency variation for GP2 volumes is significant — practically not acceptable for serious usage. Let’s review the latency for only the IO provisioned and NVMe volumes. The following chart provides a better scale for just these two:

At this scale, we see that NVMe provides a better 99th percentile response time and is more stable. There is still variation for IO provisioned volumes.

Results summary, cost

When speaking about instance and volume types, it would be impractical not to mention instance costs. We need to analyze how much we have to pay to achieve the better performance, so we prepared data on how much it costs to produce a throughput of 1,000 transactions per second.

We compare on-demand and reserved instances pricing (reserved for one year / all upfront / tenancy-default):

Because IO provisioned instances give much better performance, their price/performance is comparable to, if not better than, that of GP2 instances.

I3 instances are a clear winner.

It is also interesting to compare the raw cost of benchmarked instances:

We can see that IO provisioned instances are the most expensive, and using reserved instances does not provide much savings. To understand the reason for this, let’s take a look at how cost is calculated for components:

So for IO provisioned volumes, the majority of the cost comes from IO provisioning (which is the same for both on-demand and reserved instances).

Percona XtraDB Cluster scalability

Another interesting effort is looking at how Percona XtraDB Cluster performance scales with the instance size. As we double resources (both CPU and Memory) while increasing the instance size, how does it affect Percona XtraDB Cluster?

So let’s take a look at throughput:

Throughput improves with increasing the instance size. Let’s calculate speedup with increasing instance size for IO provisioned and I3 instances:

Speedup (times, relative to the large instance):

Instance  IO1   i3
large     1     1
xlarge    2.67  2.11
2xlarge   5.38  4.31
4xlarge   5.96  7.83


Percona XtraDB Cluster can scale (improve performance) with increasing instance size. Keep in mind, however, that it depends significantly on the workload. You may not get the same performance speedup as in this benchmark.

ProxySQL overhead

As mentioned above, ProxySQL adds additional functionality to the cluster. It can also add overhead, however. We would like to understand the expected performance penalty, so we compared the throughput and latency with and without ProxySQL.

Out of the box, ProxySQL performance was not great and required additional tuning.

ProxySQL specific configuration:

  • Use a connection through the TCP/IP address, not through the local socket
  • Adjust the mysql-max_stmts_per_connection variable to an optimal value (default: 50; optimal in our tests: 1000) – see the sketch after this list
  • Ensure that the “monitor@<host>” user has the right permissions, as it’s important for proper handling of prepared statements.
    • CREATE USER 'monitor'@'172.30.%.%' IDENTIFIED BY 'monitor';
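
As a sketch, the mysql-max_stmts_per_connection adjustment above is applied through the ProxySQL admin interface roughly like this (the admin port and credentials are ProxySQL defaults and may differ in your setup):

-- Connect with: mysql -u admin -padmin -h 127.0.0.1 -P 6032
UPDATE global_variables
   SET variable_value = '1000'
 WHERE variable_name = 'mysql-max_stmts_per_connection';
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;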

Throughput:

Response time:

ProxySQL performance penalty in throughput

Instance  IO1   i3
large     0.97  0.98
xlarge    1.03  0.97
2xlarge   0.95  0.95
4xlarge   0.96  0.93


It appears that ProxySQL adds 3-7% overhead. I wouldn’t consider this a significant penalty for additional functionality.

Summary

Amazon instances

First, the results show that instances based on General Purpose volumes do not provide acceptable performance and should be avoided in general for serious production usage. The choice is between IO provisioned instances and NVMe based instances.

IO provisioned instances are more expensive, but offer much better performance than General Purpose volumes. If we also look at the price/performance metric, IO provisioned volumes are comparable with General Purpose volumes. You should use IO provisioned volumes if you are looking for the functionality provided by EBS volumes.

If you do not need EBS volumes, however, then i3 instances with NVMe volumes are a better choice. They are cheaper and provide better performance than IO provisioned instances. Percona XtraDB Cluster provides data duplication on its own, which mitigates the need for EBS volumes to some extent.

ProxySQL overhead

We recommend using Percona XtraDB Cluster in combination with ProxySQL, as ProxySQL provides additional management and routing functionality. In general, the overhead of ProxySQL is not significant. In our experience, however, ProxySQL has to be properly tuned; otherwise, the performance penalty can become a bottleneck.

Percona XtraDB Cluster scalability

AWS has great capability to increase the instance size (both CPU and memory) if we exceed the capacity of the current instance. From our experiments, we see that Percona XtraDB Cluster can scale along with and benefit from increased instance size.

Below is a chart showing the speedup in relation to the instance size:

So increasing the instance size is a feasible strategy for improving Percona XtraDB Cluster performance in an AWS environment.

Thanks for reading this benchmark! Put any questions or thoughts in the comments below.

Percona Monitoring and Management 1.5.0 Is Now Available

November 28, 2017 - 4:56am

Percona announces the release of Percona Monitoring and Management 1.5.0 on November 28, 2017.

This release focuses on the following features:

  • Enhanced support for MySQL on Amazon RDS and Amazon Aurora – Dedicated Amazon Aurora dashboard offers maximum visibility into key database characteristics, eliminating the need for additional monitoring nodes.  We renamed Amazon RDS OS Metrics to Amazon RDS / Aurora MySQL Metrics
  • Simpler configuration – Percona Monitoring and Management now offers easier configuration of key Amazon RDS and Amazon Aurora settings via a web interface
  • One-click data collection – One button retrieves vital information on server performance to assist with troubleshooting
  • Improved interface – A simple, consistent user interface makes it faster and more fluid to switch between Query Analytics and Metrics Monitor

Highlights from our new Amazon RDS / Aurora MySQL Metrics dashboard:

Shared elements for Amazon Aurora MySQL and RDS MySQL

Amazon Aurora MySQL unique elements

Amazon RDS for MySQL unique elements

We’ve integrated Query Analytics into Metrics Monitor, and it appears as a separate dashboard known as PMM Query Analytics.

With this release, Percona Monitoring and Management introduces a new deployment option via AWS Marketplace. This is in addition to our distribution method of Amazon Machine Images (AMI).

We have upgraded Grafana and Prometheus in this release. PMM now includes Grafana 4.6.1. One of the most prominent features that the upgraded Grafana offers is the support of annotations. You can mark a point or select a region in a graph and give it a meaningful description. For more information, see the release highlights.

Prometheus version 1.8.2, shipped with this release, offers a number of bug fixes. For more information, see the Prometheus change log.

New features
  • PMM-434: PMM enables monitoring of Amazon RDS and Amazon Aurora metrics
  • PMM-1133: Query Analytics is available from Grafana as a dashboard
  • PMM-1470: Integrated Cloudwatch metrics into Prometheus
  • PMM-699: Combined AWS RDS and Amazon Aurora metrics into one dashboard
  • PMM-722: Distributed the MariaDB dashboard graph elements among other existing dashboards and removed the MariaDB dashboard. Further, we renamed the MyISAM dashboard to MyISAM/Aria Metrics
  • PMM-1258: The DISABLE_UPDATES option enables preventing manual updates when PMM Server is run from a Docker container (see the sketch after this list).
  • PMM-1500: Added InnoDB Buffer Disk Reads to graph InnoDB Buffer Pool Requests to better understand missed InnoDB BP cache hits
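
A minimal sketch of passing that option when launching the PMM Server container (the image tag and port mapping are examples; the usual pmm-data volume container is omitted for brevity):

docker run -d -p 80:80 \
  -e DISABLE_UPDATES=true \
  --name pmm-server \
  percona/pmm-server:1.5.0
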
Improvements
  • PMM-1577: Updated Prometheus to version 1.8.2
  • PMM-1603: Updated Grafana to version 4.6.1
  • PMM-1669: The representation of numeric values in the Context Switches graph in the System Overview dashboard was changed to improve readability.
  • PMM-1575: Templating rules were improved for the MyRocks and TokuDB dashboards so that only those instances with these storage engines are displayed
Bug fixes
  • PMM-1082: The CPU Usage graph on the Trends dashboard showed incorrect spikes
  • PMM-1549: The authentication of the mongodb:queries monitoring service did not work properly when the name of the database to authenticate was not provided.
  • PMM-1673: Fixed display issue with Microsoft Internet Explorer 11

autoxtrabackup v1.5.0: A Tool for Automatic Backups

November 27, 2017 - 10:39am

There is a new version of the autoxtrabackup tool. In this post, I’ll provide some of the highlights available this time around.

autoxtrabackup is a tool created by PerconaLabs. We’ve now put out the 1.5.0 version, and you can test it further.

Note: PerconaLabs is a repository for various open source scripts and tools created by Percona staff. Items added to the PerconaLabs repository are not official Percona products, and aren’t covered by support or services agreements. 

autoxtrabackup is written in Python 3 and hosted in PerconaLabs (forked from Shako’s repo). Basically, this tool automates backup/prepare/copy-back actions. I want to talk about recent changes and additions.

First of all, autoxtrabackup now has a --test_mode option, intended to test the XtraBackup automation process.

Here is the brief flow for this:

  • Clone the percona-qa repo
  • Clone Percona Server for MySQL 5.6 and 5.7 from GitHub
  • Build the PS servers in debug mode
  • Get the 2.3 and 2.4 versions of XtraBackup
  • Generate autoxtrabackup .conf files for each version of PS and XtraBackup
  • Pass different combinations of options to the PS start command and initialize the PS servers each time with different options
  • Run sysbench against each started PS server
  • Take backups in cycles for each started PS + prepare
  • If make_slaves is defined, create a slave1 server from this backup (i.e., copy-back to another directory and start the slave from it)
  • Then take a backup, prepare and copy-back from this new slave1 to create slave2
  • Run pt-table-checksum on the master to check backup consistency

I have prepared my environment, and now want to start --test_mode. Basically, it creates option combinations and passes them to the start script:

2017-11-15 22:28:21 DEBUG Starting cycle1
2017-11-15 22:28:21 DEBUG Will start MySQL with --innodb_buffer_pool_size=1G --innodb_log_file_size=1G --innodb_page_size=64K --early-plugin-load=keyring_file.so --keyring_file_data=/home/shahriyar.rzaev/XB_TEST/server_dir/PS131117-percona-server-5.7.19-17-linux-x86_64/mysql-keyring/keyring --log-bin=mysql-bin --log-slave-updates --server-id=1 --gtid-mode=ON --enforce-gtid-consistency --binlog-format=row

So as you can see, it is starting MySQL with --innodb_buffer_pool_size=1G --innodb_log_file_size=1G --innodb_page_size=64K. In cycle2, it will likely pick --innodb_buffer_pool_size=1G --innodb_log_file_size=1G --innodb_page_size=32K, and so on. It depends on what you have passed in the config:

# Do not touch; this is for --test_mode, which is testing for XtraBackup itself.
[TestConf]
ps_branches=5.6 5.7
gitcmd=--recursive --depth=1 https://github.com/percona/percona-server.git
testpath=/home/shahriyar.rzaev/XB_TEST/server_dir
incremental_count=3
#make_slaves=1
xb_configs=xb_2_4_ps_5_6.conf xb_2_4_ps_5_7.conf xb_2_3_ps_5_6.conf
default_mysql_options=--log-bin=mysql-bin,--log-slave-updates,--server-id={},--gtid-mode=ON,--enforce-gtid-consistency,--binlog-format=row
mysql_options=--innodb_buffer_pool_size=1G 2G 3G,--innodb_log_file_size=1G 2G 3G,--innodb_page_size=4K 8K 16K 32K 64K

You can pass more options by changing mysql_options in the config file. You can also specify how many incremental backups you want by setting the incremental_count option. And you can enable creating slaves from the backup to test it as well, by enabling the make_slaves option. This is not recommended for daily usage. You can read more about it here: --test_mode.

For daily backup actions, I have added the --tag and --show_tags options, which can be quite useful. They help you to tag your backups. Take a full backup:

$ sudo autoxtrabackup --tag="My Full backup" -v -lf /home/shahriyar.rzaev/autoxtrabackup_2_4_5_7.log -l DEBUG --defaults_file=/home/shahriyar.rzaev/XB_TEST/server_dir/xb_2_4_ps_5_7.conf --backup

Take an incremental one:

$ autoxtrabackup --tag="First incremental backup" -v -lf /home/shahriyar.rzaev/autoxtrabackup_2_4_5_7.log -l DEBUG --defaults_file=/home/shahriyar.rzaev/XB_TEST/server_dir/xb_2_4_ps_5_7.conf --backup

Take a second incremental one:

$ autoxtrabackup --tag="Second incremental backup" -v -lf /home/shahriyar.rzaev/autoxtrabackup_2_4_5_7.log -l DEBUG --defaults_file=/home/shahriyar.rzaev/XB_TEST/server_dir/xb_2_4_ps_5_7.conf --backup

Now you can use the --show_tags to list tags:

$ sudo autoxtrabackup --show_tags --defaults_file=/home/shahriyar.rzaev/XB_TEST/server_dir/xb_2_4_ps_5_7.conf
Backup               Type  Status  TAG
-------------------------------------------
2017-11-16_20-10-53  Full  OK      'My Full backup'
2017-11-16_20-12-23  Inc   OK      'First incremental backup'
2017-11-16_20-13-39  Inc   OK      'Second incremental backup'

It would be quite nice if we could prepare those backups with a tag name. In other words, if I have a full backup and five incremental backups, what if I want to prepare until the second or third incremental, or just a full backup?

Pass the tag name with the --prepare option, and it will do the trick:

$ autoxtrabackup --tag="First incremental backup" -v -lf /home/shahriyar.rzaev/autoxtrabackup_2_4_5_7.log -l DEBUG --defaults_file=/home/shahriyar.rzaev/XB_TEST/server_dir/xb_2_4_ps_5_7.conf --prepare

It will prepare the full backup and the “First incremental backup” – the remaining incremental backups will be ignored.

autoxtrabackup 1.5.0 also has a --dry_run option, which shows the exact commands without running them. It is described here: --dry_run.

How about autoxtrabackup 1.5.0 installation? You can install it from the source or use pip3:

pip3 install mysql-autoxtrabackup

For more please read: Installation.

Do you want to enable encryption and compression for backups? Yes? You can enable this from the autoxtrabackup config as described here: Config file structure.

You can enable taking partial backups again by editing the config: partial backups.

autoxtrabackup 1.5.0 allows you to perform a partial recovery – i.e., restoring only a single specified table from a full backup. If the table was dropped, autoxtrabackup will try to extract the CREATE TABLE statement from the .frm file using the mysqlfrm tool, and then discard/import the tablespace from the full backup. This is related to the transportable tablespace concept. You can read more here: restoring-single-table-after-drop.
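
For context, here is a rough sketch of the transportable tablespace steps the tool automates (the schema and table names are hypothetical):

-- 1. Recreate the dropped table from the CREATE TABLE statement recovered via mysqlfrm
CREATE TABLE mydb.t1 (id INT PRIMARY KEY, payload VARCHAR(255));
-- 2. Throw away the newly created (empty) tablespace
ALTER TABLE mydb.t1 DISCARD TABLESPACE;
-- 3. Copy t1.ibd from the prepared full backup into the database directory, then:
ALTER TABLE mydb.t1 IMPORT TABLESPACE;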

For a full list of available options, read the DOC: autoxtrabackup DOC.

Thanks for reading! If you are going to try autoxtrabackup 1.5.0, don’t hesitate to provide some feedback!

This Week in Data with Colin Charles 16: FOSDEM, Percona Live call for papers, and ARM

November 24, 2017 - 1:53am

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

Hurry up – the call for papers (CFP) for FOSDEM 2018 ends December 1, 2017. I highly recommend submitting, as it’s a really fun, free, and technically-oriented event.

Don’t forget that the CFP for Percona Live Open Source Database Conference 2018 in Santa Clara closes December 22, 2017, so please also consider submitting as soon as possible. We want to make an early announcement of the talks, so we’ll definitely do the first pass even before the CFP date closes.

Is ARM the new hotness? Marvell confirms $6 billion purchase of chip maker Cavium. This month we’ve seen Red Hat Enterprise Linux for ARM arrive. We’ve also seen a press release from MariaDB about the performance on the Qualcomm Centriq 2400 Processor.

Some new books to add to your bookshelf and read: MariaDB and MySQL Common Table Expressions and Window Functions Revealed by Daniel Bartholomew. The accompanying source code repository will also be useful. The much-awaited Database Reliability Engineering: Designing and Operating Resilient Database Systems, by Percona Live keynote speakers Laine Campbell and Charity Majors, is now ready to read.

Releases

Link List

Upcoming appearances
  • ACMUG 2017 gathering – Beijing, China, December 9-10 2017 – it was very exciting being there in 2016, I can only imagine it’s going to be bigger and better for 2017 since it is now two days long!
Feedback

I look forward to feedback/tips via e-mail at colin.charles@percona.com or on Twitter @bytebot.

MongoDB 3.6 Change Streams: A Nest Temperature and Fan Control Use Case

November 22, 2017 - 6:16pm

In this post, I’ll look at what MongoDB 3.6 change streams are, in a creative way. Just in time for the holidays!

What is a change stream?

Change streams in MongoDB provide a cross-platform unified API that can be supported with sharding. They have an option for talking to secondaries, and even allow for security controls like restrictions and action controls.

How is this important? To demonstrate, I’ll walk through an example of using a smart oven with a Nest Thermostat to keep your kitchen from becoming a sauna while you bake a cake — without the need for you to moderate the room temperature yourself.

What does a change stream look like?

db.device_states.watch([
  {
    $match: {
      "documentKey.device": { $in: ["jennair_oven", "nest_kitchen_thermo"] },
      operationType: "insert"
    }
  }
]);

What can we watch?

We can use change streams to watch these actions:

  • Insert
  • Delete
  • Replace
  • Update
  • Invalidate
Why not just use the Oplog?

Any change presented in the oplog could be rolled back, as it’s only single-node durable. Change streams need at least one other node to receive the change; in general, this represents a majority for a typical three-node replica set.

In addition, change streams are resumable. Having a collector job that survives an election is easy as pie, as by default it will automatically retry once. However, you can also record the last seen resume token so you can resume where it left off.
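
A minimal sketch of that resume pattern in the mongo shell (saveToken and loadToken are hypothetical persistence helpers):

var cursor = db.device_states.watch();
while (cursor.hasNext()) {
    var event = cursor.next();
    saveToken(event._id);   // persist the resume token carried in the event
    process(event);         // hypothetical application logic
}
// After a failure, resume from the last persisted token:
var resumed = db.device_states.watch([], { resumeAfter: loadToken() });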

Finally, since change streams support sharding with the new cluster clock (wc in the oplog), you can trust the order of the operations you get, even across shards. This was problematic both with the old oplog format and when managing connections manually.

In short, this is the logical evolution of oplog scraping, and it fulfills a long-standing request: the ability to tail the oplog via mongos, not per replica set.

So what’s the downside?

It’s estimated that after 1,000 streams you will start to see very measurable performance drops. Why there is no global change stream option to avoid having so many cursors floating around is not clear. I think it’s something that should be looked at for future versions of this feature. Up to now, many use cases of Mongo, specifically in the multi-tenant world, might have more than 1,000 namespaces on a system. This would make the performance drop problematic.

What’s in a change stream anyhow?

The first thing to understand is that while some drivers offer db.collection.watch(XXXX) as a function you can use, this is just an alias for an actual aggregation pipeline stage, $changeStream. This means you could mix it with much more powerful pipelines, though you should be careful: things like projection could break the ability to resume if the token is not passed on accurately.

So a change stream:

  1. Is a view of an oplog entry for a change. This sometimes means you know the change contents, and sometimes you don’t (for example, in a delete)
  2. Is an explicit API for drivers and code, and also ensures you can get data via mongos rather than having to connect directly to each node.
  3. Is scalable, resumable, and well ordered – even when sharded!
  4. Harnesses the power of aggregations.
  5. Provides superior ACL support via roles and privileges

Back to a real-world example. Let’s assume you have a Nest unit in your house (because none of us techies have those, right?). Let’s also assume you’re fancy and have the Jenn-Air oven that can talk to the Nest. If you’re familiar with the Nest, you might know that its API lets you enable the Jenn-Air fan or set its oven temperature remotely. Sure, the oven has a fan schedule to prevent it running at night, but its ability to work with other appliances is a bit more limited.

So for our example, assume you want the temperature in the kitchen to drop by 15 degrees F whenever the oven is on, and that the fan should run even if it’s outside its standard time window.

Hopefully, you can see how such an app, powered by MongoDB, could be useful. However, there are a few more assumptions, which we have already set up: a collection of “device_states” to record the original state of the temperature setting in the Nest, and to record the oven’s status so that we know how to reset the oven using the Nest once cooking is done.

As we know we have the state changes for the devices coming in on a regular basis, we could simply say:

db.device_states.watch([
  {
    $match: {
      "documentKey.device": { $in: ["jennair_oven", "nest_kitchen_thermo"] },
      operationType: "insert"
    }
  }
]);

This will watch for any changes to either of these devices whether it be inserting new states or updating old ones.

Now let’s assume that anytime something comes in for the Nest, we update db.nest_settings with that document. However, in this case, when the oven turns on we update a secondary document with an _id of “override” to indicate the last known nest_setting before the oven was enabled. This means that we can revert to it later.

This would be accomplished via something like…

Change Event document

{
  _id: <resume_token>,
  operationType: 'insert',
  ns: { db: 'example', coll: "device_states" },
  documentKey: { device: 'nest_kitchen_thermo' },
  fullDocument: {
    _id: ObjectId(),
    device: 'nest_kitchen_thermo',
    temp: 68
  }
}

So you could easily run the following from your code:

db.nest_settings.update({_id:"current"},{_id:"current",data: event.fullDocument})

Now the current document is set to the last check-in from the Nest API.

That was simple enough, but now we can do even more cool things…

Change Event document

{
  _id: <resume_token>,
  operationType: 'insert',
  ns: { db: 'example', coll: "device_states" },
  documentKey: { device: 'jennair_oven' },
  fullDocument: {
    _id: ObjectId(),
    device: 'jennair_oven',
    temp: 350,
    power: 1,
    state: "warming"
  }
}

This next segment is more pseudocode:

var event = watcherObj.next();
var device = event.documentKey.device;
var data = event.fullDocument;
if (device == "jennair_oven") {
    var override_enabled = db.nest_settings.count({_id: "override"});
    if (data.power && !override_enabled) {
        // Save the current thermostat setting, lowered by 15 degrees, as the override
        var doc = db.nest_settings.findOne({_id: "current"});
        doc._id = "override";
        doc.data.temp += -15;
        db.nest_settings.insert(doc);
    }
    if (data.power) {
        var override_doc = db.nest_settings.findOne({_id: "override"});
        NestObj.thermostat.updateTemp(override_doc.data.temp);
        NestObj.thermostat.enableFan(15);  // enable the fan for 15 minutes
    } else {
        var override_doc = db.nest_settings.findOne({_id: "override"});
        override_doc.data.temp += 15;
        NestObj.thermostat.updateTemp(override_doc.data.temp);
        NestObj.thermostat.enableFan(0);   // disable the fan
        db.nest_settings.remove({_id: "override"});
    }
}

This code is doing a good deal, but it’s pretty basic at the same time:

  1. If the oven is on, but there is no override document, create one from the most recent thermostat settings.
  2. Decrease the current temp setting by 15, and then insert it with the “override” _id value
  3. If the power is set to on
    (a) read in the current override document
    (b) set the thermostat to that setting
    (c) enable the fan for 15 minutes
  4. If the power is now off
    (a) read in the current override document
    (b) set the thermostat to 15 degrees higher
    (c) set the fan to disabled

Assuming you are constantly tailing the watch cursor, this means you will restore the thermostat setting and disable the fan as soon as the oven is off.

Hopefully, this blog has helped explain how change streams work by using a real-world logical application to keep your kitchen from becoming a sauna while making some cake… and then eating it!

Sudoku Recursive Common Table Expression Solver

November 22, 2017 - 11:17am

In this blog post, we’ll look at solving Sudoku using a MySQL 8.0 recursive common table expression.

Vadim was recently having a little Saturday morning fun solving Sudoku using MySQL 8. The whole idea comes from SQLite, where Richard Hipp has come up with some outlandish recursive query examples for the WITH clause.

The SQLite query:

WITH RECURSIVE
  input(sud) AS (
    VALUES('53..7....6..195....98....6.8...6...34..8.3..17...2...6.6....28....419..5....8..79')
  ),
  digits(z, lp) AS (
    VALUES('1', 1)
    UNION ALL
    SELECT CAST(lp+1 AS TEXT), lp+1 FROM digits WHERE lp<9
  ),
  x(s, ind) AS (
    SELECT sud, instr(sud, '.') FROM input
    UNION ALL
    SELECT
      substr(s, 1, ind-1) || z || substr(s, ind+1),
      instr(substr(s, 1, ind-1) || z || substr(s, ind+1), '.')
    FROM x, digits AS z
    WHERE ind > 0
      AND NOT EXISTS (
            SELECT 1
              FROM digits AS lp
             WHERE z.z = substr(s, ((ind-1)/9)*9 + lp, 1)
                OR z.z = substr(s, ((ind-1)%9) + (lp-1)*9 + 1, 1)
                OR z.z = substr(s, (((ind-1)/3) % 3) * 3
                                 + ((ind-1)/27) * 27 + lp
                                 + ((lp-1)/3) * 6, 1)
          )
  )
SELECT s FROM x WHERE ind = 0;

Which should provide the answer: 534678912672195348198342567859761423426853791713924856961537284287419635345286179.

The modified query to run on MySQL 8.0.3 release candidate and MariaDB Server 10.2.9 stable GA courtesy of Vadim:

WITH RECURSIVE
  input(sud) AS (
    SELECT '53..7....6..195....98....6.8...6...34..8.3..17...2...6.6....28....419..5....8..79'
  ),
  digits(z, lp) AS (
    SELECT '1', 1
    UNION ALL
    SELECT CAST(lp+1 AS CHAR), lp+1 FROM digits WHERE lp<9
  ),
  x(s, ind) AS (
    SELECT sud, instr(sud, '.') FROM input
    UNION ALL
    SELECT
      concat(substr(s, 1, ind-1), z, substr(s, ind+1)),
      instr(concat(substr(s, 1, ind-1), z, substr(s, ind+1)), '.')
    FROM x, digits AS z
    WHERE ind > 0
      AND NOT EXISTS (
            SELECT 1
              FROM digits AS lp
             WHERE z.z = substr(s, ((ind-1) DIV 9)*9 + lp, 1)
                OR z.z = substr(s, ((ind-1)%9) + (lp-1)*9 + 1, 1)
                OR z.z = substr(s, (((ind-1) DIV 3) % 3) * 3
                                 + ((ind-1) DIV 27) * 27 + lp
                                 + ((lp-1) DIV 3) * 6, 1)
          )
  )
SELECT s FROM x WHERE ind = 0;

The test environment for the setup is a standard Linode 1024 instance, with one CPU core and 1GB of RAM. The base OS was Ubuntu 17.04. MySQL and MariaDB Server were installed via their respective tarballs. No configuration is done beyond a basic out-of-the-box install inside of the MySQL sandbox. This is similar for sqlite3. Remember to run “.timer on” for sqlite3.

Note that initially they were done on separate instances, but because of the variance you get in cloud instances, it was decided that it would be better to run on the same instance using the MySQL Sandbox.

MySQL 8 first run time: 0.16s. 5 runs: 0.16, 0.16, 0.17, 0.16, 0.16
MariaDB Server 10.2 first run time: 0.20s. 5 runs: 0.22, 0.22, 0.21, 0.21, 0.20
MariaDB Server 10.3.2 first run time: 0.206s. 5 runs: 0.237, 0.199, 0.197, 0.198, 0.192
SQLite3 first run time: Run Time: real 0.328 user 0.323333 sys 0.003333 / Run Time: real 0.334 user 0.333333 sys 0.000000

Trying a more complex Sudoku routine, "..41..2.3........12.....8..82.6.43.....8.9.....67.2.48..5.....64........3.7..69.." to produce the result "574198263638425791219367854821654379743819625956732148195273486462981537387546912", the results are:

MySQL 8 first run time: 4.87s. 5 runs: 5.43, 5.35, 5.10, 5.19, 5.05
MariaDB Server 10.2 first run time: 6.65s. 5 runs: 7.03, 6.57, 6.61, 6.59, 7.12
MariaDB Server 10.3.2 first run time: 6.121s. 5 runs: 5.701, 6.043, 6.043, 5.849, 6.199
SQLite3 first run time: Run Time: real 10.105 user 10.099999 sys 0.000000 / Run Time: real 11.305 user 11.293333 sys 0.000000

Conclusions from this fun little exercise? SQL, even though it’s a standard, is not portable between databases. Thankfully, MySQL and MariaDB are syntax-compatible in this case! MySQL and MariaDB Server are both faster than sqlite3 at running a recursive CTE. It would seem that the MySQL 8.0.3 release candidate is faster at solving these Sudoku routines than the MariaDB Server 10.2 stable GA release. It also seems that MariaDB Server 10.3.2 alpha is marginally quicker than MariaDB Server 10.2.

Kudos to Team MariaDB for getting recursive common table expression support first in the MySQL ecosystem, and kudos to Team MySQL for making it fast!

Percona Toolkit 3.0.5 is Now Available

November 21, 2017 - 10:32am

Percona announces the release of Percona Toolkit 3.0.5 on November 21, 2017.

Percona Toolkit is a collection of advanced command-line tools that perform a variety of MySQL and MongoDB server and system tasks too difficult or complex for DBAs to perform manually. Percona Toolkit, like all Percona software, is free and open source.

You can download Percona Toolkit packages from the web site or install them from the official repositories.

This release includes the following changes:

New Features:
  • PT-216: pt-mongodb-query-digest now supports MongoDB versions lower than 3.2; incorrect output was fixed.
  • PT-182: The pt-summary, pt-mysql-summary, and pt-mongodb-summary commands provide output in the JSON format.
  • PT-152: pt-mysql-summary shows the output of the SHOW SLAVE HOSTS command.
  • PT-139: pt-table-sync supports replication channels (requires MySQL version 5.7.6 or higher)
  • PMM-1590: The MongoDB Profiler for Percona Monitoring and Management and Percona Toolkit has been improved.
Bug fixes:
  • PT-211: pt-mext would fail if the Rsa_public_key variable was empty.
  • PT-212: pt-mongodb-query-digest --version produced incorrect values.
  • PT-202: pt-online-schema-change incorrectly processed virtual columns.
  • PT-200: The pt-online-schema-change command reported an error when the name of an index contained UNIQUE as the prefix or suffix.
  • PT-199: pt-table-checksum did not detect differences on a system with ROW-based replication active.
  • PT-196: pt-online-schema-change --max-load paused if a status variable was passed 0 as the value.
  • PT-193: pt-table-checksum reported a misleading error if a column comment contained an apostrophe. For more information, see #1708749.
  • PT-187: In some cases, pt-table-checksum did not report that the same table contained different values on the master and slave.
  • PT-186: pt-online-schema-change --alter could fail if field names contained upper case characters. For more information, see #1705998.
  • PT-183: In some cases pt-mongodb-query-digest could not connect to a database using authentication.
  • PT-167: In some cases, pt-kill could ignore the value of the --busy-time parameter. For more information, see #1016272.
  • PT-161: When run with the --skip-check-slave-lag option, pt-table-checksum could fail in some cases.

InnoDB Page Compression: the Good, the Bad and the Ugly

November 20, 2017 - 10:54am

In this blog post, we’ll look at some of the facets of InnoDB page compression.

Somebody recently asked me about the best way to handle JSON data compression in MySQL. I took a quick look at InnoDB page compression and wanted to share my findings.

There is also some great material on this topic that was prepared and presented by Yura Sorokin at Percona Live Europe 2017: https://www.percona.com/live/e17/sessions/percona-xtradb-compressed-columns-with-dictionaries-an-alternative-to-innodb-table-compression. Yura also implemented Compressed Columns in Percona Server.

First, the good part.

InnoDB page compression is actually really easy to use and provides a decent compression ratio. To use it, I just ran CREATE TABLE commententry (...) COMPRESSION="zlib"; – and that’s all. By the way, for my experiment I used the subset of Reddit comments stored in JSON (described here: Big Dataset: All Reddit Comments – Analyzing with ClickHouse).

This method got me a compressed table of 3.9GB. Compare this to 8.4GB for an uncompressed table and it’s about a 2.15x compression ratio.

Now, the bad part.

As InnoDB page compression uses “hole punching,” the standard Linux utilities do not always properly support files created this way. In fact, to see the size “3.9GB” I had to use du --block-size=1 tablespace_name.ibd, as the standard ls -l tablespace_name.ibd shows the wrong size (8.4GB). There is a similar limitation on copying files. The standard cp old_file new_file may not always work, and to be sure I had to use cp --sparse=always old_file new_file.
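
To summarize, these are the commands from this section (all standard coreutils; the sizes are from my test table):

# Apparent size: reports 8.4GB, counting the punched holes
ls -l tablespace_name.ibd

# Real allocated size: reports 3.9GB
du --block-size=1 tablespace_name.ibd

# Preserve sparseness (the punched holes) when copying
cp --sparse=always old_file new_file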

Speaking about copying, here’s the ugly part.

The actual time to copy the sparse file was really bad.

On a fairly fast device (a Samsung SM863), copying the sparse file mentioned above in its compressed size of 3.9GB took 52 minutes! That’s shocking, so let me repeat it again: 52 minutes to copy a 3.9GB file on an enterprise SATA SSD.

By comparison, copying the regular 8.4GB file takes 9 seconds! Compare 9 seconds to 52 minutes.

To be fair, the NVMe device (Intel® SSD DC D3600) handles sparse files much better. It took only 12 seconds to copy the same sparse file on this device.

Having considered all this, it is hard to recommend that you use InnoDB page compression for serious production. Well, unless you power your database servers with NVMe storage.

For JSON data, the Compressed Columns feature in Percona Server for MySQL should work quite well, using a dictionary to store JSON keys – give it a try!
