
ProxySQL 1.3.2-1 is now available from Percona repositories

Latest Forum Posts - 15 hours 4 min ago
ProxySQL 1.3.2-1 is now available from Percona repositories. This release is based on ProxySQL v1.3.2a and introduces the following new changes: proxysql-admin_v1.3.2a.md.

MongoDB 3.4 Views

Latest MySQL Performance Blog posts - January 13, 2017 - 3:07pm

This blog post covers MongoDB 3.4 views, one of the more recent MongoDB features.

Views are often used in relational databases to achieve both data security and a high level of abstraction, making it easier to retrieve data. Unlike regular tables, views neither have a physical schema nor use disk space. They execute a pre-specified query. There are exceptions (such as materialized views and pre-executed views), but by default the engine actually executes a query and then sends the result set as a single table when a view is used.

In MySQL, a simple view can be defined as:

create database percona;
use percona;
create view user_hosts as select user, host from mysql.user;
select * from user_hosts;
+------------------+-----------+
| user             | host      |
+------------------+-----------+
| myuser           | %         |
...

The query above shows only the user and host fields, rather than all the table fields. Anyone who queries this view sees a table that only has the user and host fields.

This feature was not available in previous MongoDB versions. All we could do was either deny reads in a collection (which would make it useless to the user) or allow reads to the entire collection (which was pretty unsafe).

The views feature request was open for a while, and as we can see there was a considerable number of votes to make this feature available: https://jira.mongodb.org/browse/SERVER-142.

MongoDB 3.4 views are non-materialized views, and behind the scenes the engine runs an aggregation. Creating a view requires that we specify a collection or a previously existing view. When a view is the source collection of another view, we can execute a chained aggregation.

To create a view, we should use the db.createView('view_name', 'source', [pipeline]) command, specifying the view name, the view source collection and the aggregation pipeline. This aggregation pipeline, as well as the other parameters, is saved in the system.views collection. This is the only space that the view will use in the system. A new document is saved in the system.views collection for each view created.
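As a sketch of how chaining works (the names are borrowed from the walkthrough later in this post, with a hypothetical second view added), a view can be created on top of another view, and querying the second one runs both pipelines:

db.createView('employee_names', 'employee',
    [{ $project : { _id : 0, fullname : { $concat : ["$FirstName", " ", "$LastName"] } } }])
db.createView('sorted_names', 'employee_names',
    [{ $sort : { fullname : 1 } }])
db.sorted_names.find()   // runs the $project first, then the $sort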

Although views seem very easy to create, there are a few pitfalls when using them.

Since views always run an aggregation, an index covering the aggregation's $match pipeline is desirable; otherwise, expect slow responses caused by full collection scans.
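For instance, a minimal sketch (the indexed field and the second view are hypothetical): an index on the matched field lets the view's aggregation avoid scanning the whole source collection.

db.employee.createIndex({ position : 1 })
db.createView('cfo_only', 'employee', [{ $match : { position : 'CFO' } }])
db.cfo_only.find().explain()   // should report an IXSCAN rather than a COLLSCAN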

Cascading aggregations (creating views of views) can be slow, as the view does not have any data and therefore cannot be indexed. MongoDB neither checks the collection fields nor the collection existence before creating the view. If there is no collection, the view returns an empty cursor.

Views appear as collections when we list them. The show collections command shows views alongside regular collections, but such collections are read-only. To drop a view, we simply execute db.collection.drop(). The view's definition is removed from the system.views collection, but the data remains untouched, because dropping a view only removes the code that generates the view result.
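A quick illustration, using the view created in the walkthrough below:

db.employee_names.drop()   // removes only the view definition
db.system.views.find()     // the employee_names document is gone
db.employee.count()        // the source collection's data is intact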

How to create views:

In this step-by-step guide, we will create a view and restrict a user (userReadViews) to read privileges only:

1. Populate collection:

$ mongo --authenticationDatabase admin -u foo -p
> use financial
switched to db financial
> db.employee.insert({FirstName : 'John', LastName: 'Test', position : 'CFO', wage : 180000.00 })
WriteResult({ "nInserted" : 1 })
> db.employee.insert({FirstName : 'John', LastName: 'Another Test', position : 'CTO', wage : 210000.00 })
WriteResult({ "nInserted" : 1 })
> db.employee.insert({FirstName : 'Johnny', LastName: 'Test', position : 'COO', wage : 180000.00 })
WriteResult({ "nInserted" : 1 })

2. Create view that only shows full names:

> use financial
> db.createView('employee_names', 'employee', [{ $project : { _id : 0, "fullname" : { $concat : ["$FirstName", " ", "$LastName"] } } }])
{ "ok" : 1 }
> show collections
employee
employee_names
system.views
> db.system.views.find()
> db.employee_names.find()
{ "fullname" : "John Test" }
{ "fullname" : "John Another Test" }
{ "fullname" : "Johnny Test" }

3. Create a user-defined role that only gives access to the views:

Create a file “createviewOnlyRole.js” containing the following JavaScript, or copy and paste the code below:

// Run this with the target database already selected (see the load() step below);
// the "use <db>" shell helper cannot appear inside a .js file.
db_name = db.toString()
priv = []
db.system.views.find({}, { "_id" : 1, "viewOn" : 1 }).forEach(function (view) {
    // "_id" has the form "<database>.<view name>"; split off the database part
    database_collection = view['_id'].split('.')
    database = database_collection[0]
    database_collection.splice(0, 1)
    coll = database_collection.join('.')
    priv.push({ "resource" : { "db" : database, "collection" : coll }, "actions" : ["find"] })
})
var viewrole = db.getRole(db_name + '_readAnyView')
if (viewrole == null) {
    db.runCommand({ createRole: db_name + "_readAnyView", privileges: priv, roles: [] });
} else {
    db.runCommand({ updateRole: db_name + "_readAnyView", privileges: priv, roles: [] });
}
print('access granted to:')
printjson(priv)

Then authenticate and use the desired database to create this role. In our case:

> use financial
> load('createviewOnlyRole.js')

4. Create a new user assigned to the readAnyView role. This new user is only able to query against views, and they must know the view name because no other privileges are granted:

> use financial
> db_name = db.toString()
> db.createUser({ user: "userReadViews", pwd: "123", roles: [ db_name + "_readAnyView" ] })

Note: if you receive an error when trying to execute the .js file, create a new role that grants find on the system.views collection:

> use admin
> db.runCommand({
      createRole: "readViewCollection",
      privileges: [{ resource: { db: "", collection: "system.views" }, actions: ["find"] }],
      roles: []
  })
> db.grantRoleToUser('<your_user>', ['readViewCollection'])

For more information about user-defined roles, please check the user-defined roles documentation.

This should help explain MongoDB 3.4 views. Please feel free to contact me @AdamoTonete or @percona for any questions and suggestions.

The Impact of Swapping on MySQL Performance

Latest MySQL Performance Blog posts - January 13, 2017 - 9:27am

In this blog, I’ll look at the impact of swapping on MySQL performance. 

It’s common sense that when you’re running MySQL (or really any other DBMS) you don’t want to see any I/O in your swap space. Scaling the cache size (using innodb_buffer_pool_size in MySQL’s case) is standard practice to make sure there is enough free memory so swapping isn’t needed.   

But what if you make some mistake or miscalculation, and swapping happens? How much does it really impact performance? This is exactly what I set out to investigate.

My test system has the following:

  • 32GB of physical memory
  • OS (and swap space) on a (pretty old) Intel 520 SSD device
  • Database stored on Intel 750 NVMe storage

To simulate a worst-case scenario, I'm using a uniform sysbench workload:

sysbench --test=/usr/share/doc/sysbench/tests/db/select.lua \
  --report-interval=1 --oltp-table-size=700000000 --max-time=0 \
  --oltp-read-only=off --max-requests=0 --num-threads=64 \
  --rand-type=uniform --db-driver=mysql --mysql-password=password \
  --mysql-db=test_innodb run

To better visualize the metrics that matter for this test, I have created the following custom graph in our Percona Monitoring and Management (PMM) tool. It shows disk IO performance and swapping activity on the same graph.

Here are the baseline results for innodb_buffer_pool_size=24GB, a reasonable ballpark value for a system with 32GB of memory.

As you can see in the baseline scenario, there is almost no swapping, with around 600MB/sec read from the disk. This gives us about 44K QPS. The 95% query response time (reported by sysbench) is about 3.5ms.

Next, I changed the configuration to innodb_buffer_pool_size=32GB, which is the total amount of memory available. As memory is required for other purposes, it caused swapping activity:

We can see that performance stabilizes after a bit at around 20K QPS, with some 380MB/sec disk IO and 125MB/sec swap IO. The 95% query response time has grown to around 9ms.

Now let’s look at an even worse case. This time, we’ll set our configuration to innodb_buffer_pool_size=48GB (on a 32GB system).

Now we have around 6K QPS. Disk IO has dropped to 250MB/sec, and swap IO is up to 190MB/sec. The 95% query response time is around 35ms. As the graph shows, the performance becomes more variable, confirming the common assumption that intense swapping affects system stability.

Finally, let’s remember MySQL 5.7 has the Online Buffer Pool Resize feature, which was created to solve exactly this problem (among other reasons). It changes the buffer pool size if you accidentally set it too large. As we have tested innodb_buffer_pool_size=24GB, and demonstrated it worked well, let’s scale it back to that value:

mysql> set global innodb_buffer_pool_size=24*1024*1024*1024;
Query OK, 0 rows affected (0.00 sec)
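While the resize is in progress, you can follow it through a standard MySQL 5.7 status variable:

mysql> SHOW STATUS LIKE 'InnoDB_buffer_pool_resize_status';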

Now the graph shows both good and bad news. The good news is that the feature works as intended, and after the resize completes we get close to the same results as before our swapping experiment. The bad news is that everything pretty much grinds to a halt for 15 minutes or so while the resize occurs. There is almost no IO activity or intensive swapping while the buffer pool resize is in progress.

I also performed other sysbench runs for selects using the Pareto random type rather than the Uniform type, creating more realistic (skewed) data access patterns. I further performed update key benchmarks using both Uniform and Pareto access distributions.

You can see the results below:

As you can see, the results for selects are as expected. Accesses with Pareto distributions are better and are affected less – especially by minor swapping.  

If you look at the update key results, though, you find that minor swapping causes performance to improve for Pareto distribution. The results at 48GB of memory are pretty much the same.

Before you say that this is impossible, let me provide an explanation: I limited innodb_max_purge_lag on this system to avoid unbounded InnoDB history length growth. These workloads tend to be bound by InnoDB purge performance. It looks like swapping impacted the user threads more than it did the purge threads, causing this unusual performance profile. This is something that might not be repeatable between systems.
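For reference, a sketch of that throttling (the value here is illustrative, not my exact test setting): capping innodb_max_purge_lag makes InnoDB delay user DML when the purge threads fall behind, keeping the history length bounded. The current history length is reported under the TRANSACTIONS section of the engine status.

mysql> SET GLOBAL innodb_max_purge_lag = 1000000;
mysql> SHOW ENGINE INNODB STATUS\G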

Summary

When I started, I expected a severe performance drop even with very minor swapping. I surprised myself: with swap activity of more than 100MB/sec, performance was “only” halved.

While you should continue to plan your capacity so that there is no constant swapping on the database system, these results show that a few MB/sec of swapping activity is not going to have a catastrophic impact.

This assumes your swap space is on an SSD, of course! SSDs handle random IO (which is what paging activity usually is) much better than HDDs.

Xtrabackup Error - SET SESSION lock_wait_timeout=31536000: MySQL server has gone away

Latest Forum Posts - January 13, 2017 - 8:05am

While taking a backup of our Percona MySQL DB, I'm getting this error:

SET SESSION lock_wait_timeout=31536000: MySQL server has gone away

The database is: Server version: 5.7.16-10-log Percona Server (GPL), Release '10', Revision 'a0c7d0d'

The Xtrabackup version that I use is

$ xtrabackup --version
xtrabackup version 2.4.5 based on MySQL server 5.7.13 Linux (x86_64) (revision id: e41c0be)

The backup command that I've used is:

innobackupex /backuppath/

I use Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-59-generic x86_64)

Problem with backing up docker mysql

Latest Forum Posts - January 13, 2017 - 7:36am
I am trying to back up a MySQL docker container from another machine and I get the following error:

InnoDB: Operating system error number 2 in a file operation.
InnoDB: The error means the system cannot find the path specified.
InnoDB: File /var/lib/mysql/ib_logfile0: 'open' returned OS error 71. Cannot continue operation
InnoDB: Cannot continue operation.

It looks like a permission error, but I am pretty confident that everything is set up correctly. The data directory of the container is mapped to a folder on my PC that belongs to the "mysql" user. Also, inside the container all the configuration files and the data belong to mysql. I read about this bug https://bugs.launchpad.net/percona-x...p/+bug/1526467 which is exactly my case, but upgrading to xtrabackup 2.4 did not help.

Am I missing something?

Copying only partition when altering single table partition

Latest Forum Posts - January 13, 2017 - 3:20am
Hi

Can pt-online-schema-change be called so that, for example, a REBUILD PARTITION only copies the partition data (using ALTER TABLE ... EXCHANGE PARTITION), so that only the specific partition and not the complete table is copied when executing pt-online-schema-change?

Best Regards

User defined copy algorithm for pt-online-schema-change

Latest Forum Posts - January 13, 2017 - 3:13am
Hi

Is there a way to supply pt-online-schema-change with a user-defined copy algorithm?
E.g. a stored procedure, or via the plugin option?
This functionality is useful when the standard algorithm is slow due to unsuitable indexes.

Best Regards

CVE-2016-6225: Percona Xtrabackup Encryption IV Not Being Set Properly

Latest MySQL Performance Blog posts - January 12, 2017 - 1:34pm

If you are using Percona XtraBackup with xbcrypt to create encrypted backups, and are using versions older than 2.3.6 or 2.4.5, we advise that you upgrade Percona XtraBackup.

Note: this does not affect encryption of encrypted InnoDB tables.

CVE-2016-6225

Percona XtraBackup versions older than 2.3.6 or 2.4.5 suffered an issue of not properly setting the Initialization Vector (IV) for encryption. This could allow someone to carry out a Chosen-Plaintext Attack, which could recover decrypted content from the encrypted backup files without the need for a password.

Compatibility

Percona XtraBackup carries backward compatibility to allow for the decryption of older backup files. However, encrypted backup files produced by the versions that have the fix will not be compatible with older versions of Percona XtraBackup.

Applicability

Access to the encrypted files must already be present for exploitation to occur. So long as you adequately protect the encrypted files, we don’t expect this issue to adversely affect users.

Credits

Percona would like to thank and give credit to Ken Takara for discovering this issue and working it through to PoC exploitation.

More information: Release Notes

The Percona Online Store: Get Database Help Now with Support and Health Audit Services

Latest MySQL Performance Blog posts - January 12, 2017 - 1:10pm

We are proud to announce the new Percona online store!

Keeping your database environment tuned, optimized and high-performance is key to achieving business goals. If your database goes down, so does your business. Percona experts have a long history of helping enterprises ensure their databases are running smoothly. With Percona, you can meet today’s workloads, and prepare for future workloads before they impact performance.

Now we’ve made it even easier to get Percona database services: visit Percona’s new online store! The webstore is perfect for ordering a health audit and immediate, smaller-scale database support. Simply select your service type, answer a few questions about your environment, and then submit. A Percona expert will be in touch.

The webstore makes it fast and easy to purchase Percona Services, with recurring monthly credit card payments. Shop now for Percona’s highly responsive, effective and affordable support and service options, including MySQL Standard Support, MongoDB Standard Support and a MySQL Health Audit.

Percona has some of the best reviews and one of the highest renewal rates in the industry. We can help you increase your uptime, be more productive, reduce your support budget and implement fixes for performance issues faster.

Check out the new Percona online store here!

Metric Data Archiving

Latest Forum Posts - January 12, 2017 - 7:46am
Are there any tool(s)/scripts to archive historical data from PMM to avoid having the data store grow and grow and grow?

pt-query-digest slowlog output format loses query cache hit information

Latest Forum Posts - January 12, 2017 - 6:06am
Hi everyone,

I'm using the pt-query-digest (v2.2.20) slowlog output format to filter out queries based on their timestamp, in order to run further digests on the filtered slowlog:

ptqd 11-21.slow.log --output slowlog --no-report --filter '($event->{timestamp} > 1483957800) && ($event->{timestamp} < 1483959600)' > 11-21.halfhour.log

The point is that my source log 11-21.slow.log has query cache hit info, for instance:

# Time: 170109 12:08:43
# User@Host: pidf_user[pidf_user] @ [10.226.13.132]
# Thread_id: 32829731 Schema: webmbpidf QC_hit: No
# Query_time: 0.015387 Lock_time: 0.000038 Rows_sent: 0 Rows_examined: 0

Whereas the filtered output doesn't:

# Time: 170109 11:59:59
# User@Host: pest_user[pest_user] @ 10.226.13.13 []
# Thread_id: 32805954
# Query_time: 0.000221 Lock_time: 0.000000 Rows_sent: 0 Rows_examined: 0

I lose schema information as well, but that's less of a problem in my situation.

Is there a way to keep the query cache info in the slowlog output?

Any help would be much appreciated.

PMM on server with Failover IP

Latest Forum Posts - January 12, 2017 - 1:50am
Situation:

PMM Server is running in a docker container on a server with a failover IP.

If I use the original IP of the server instead of the failover IP, everything works fine.

If I add the server with the failover IP on a PMM client, I get TLS handshake errors from the real IP of the server and not from the failover IP.

It's strange.


* Connection: Client --> Server is OK

but

* Connection: Client <-- Server is not possible.


Forcing the failover IP with SNAT doesn't change the situation.

Any idea?

How to Replace MySQL with Percona Server on a CPanel, WHM VPS or Dedicated Server

Latest MySQL Performance Blog posts - January 11, 2017 - 10:41am

In this blog post, we’ll look at how to replace MySQL with Percona Server for MySQL on a CPanel, WHM VPS or dedicated server.

In general, CPanel and WHM have been leaning towards support of MariaDB over other flavors. This is partly due to the upstream repos replacing the MySQL package with MariaDB (for example, on CentOS).

MySQL 5.6 is still supported though, which means they are keeping support for core MySQL products. But if you want to get some extra performance enhancements or enterprise features for free, without getting too many bells and whistles, you might want to install Percona Server.

I’ve done this work on a new dedicated server with the latest WHM and CPanel on CentOS 7, with MySQL 5.6 installed. Besides the backup, this is a fairly quick process.

It’s pretty simple. From the Percona Server for MySQL 5.7 installation doc, we can get the YUM repo. (Run commands as root if you can, otherwise as sudo.)

yum install http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm

Now that we have the repo, let’s install Percona XtraBackup in case we need to roll this back at any point:

yum install percona-xtrabackup

This server had a drive mounted at /backup, so I created the backup with the following commands:

xtrabackup --target-dir=/backup/xtrabackup --backup
xtrabackup --target-dir=/backup/xtrabackup --prepare

Now that we have a good backup, let’s remove MySQL:

service mysql stop
yum remove MySQL* mysql*

Depending on your dependency chain, this could remove Percona XtraBackup, but that can be fixed. Let’s accept this uninstall.

Let’s install Percona Server for MySQL 5.7 and Percona Toolkit:

yum install Percona-Server-server-57 percona-toolkit percona-xtrabackup

Now that it’s installed ensure the mysql service is running. If it isn’t, start it. Now let’s upgrade:

mysql_upgrade

NOTE: This works if you can log in as root without a password; if you can't, you will need to specify the -u and -p flags.
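For example (the root login here is hypothetical):

mysql_upgrade -u root -p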

Once you run the upgrade, restart the mysql service:

service mysql restart

And there you go, you are now running on Percona Server for MySQL 5.7. If your managed providers tell you it’s not supported, don’t worry! It works as long as CPanel supports MySQL 5.6.

If you have any issues, just restore the backup.
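A sketch of that rollback, using the backup taken earlier at /backup/xtrabackup (xtrabackup --copy-back requires an empty datadir, so move the old one aside first):

service mysql stop
mv /var/lib/mysql /var/lib/mysql.old    # keep the old datadir, just in case
mkdir /var/lib/mysql
xtrabackup --copy-back --target-dir=/backup/xtrabackup
chown -R mysql:mysql /var/lib/mysql
service mysql start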

NOTE: One thing to keep in mind is that 5.7 breaks CPanel’s ability to create users in MySQL. I believe this is due to the changes to the mysql.user table. If this is an issue for you, you can always use Percona Server for MySQL 5.6.

Reinstall MySQL and Preserve All MySQL Grants and Users

Latest MySQL Performance Blog posts - January 11, 2017 - 8:35am

In this blog post, we’ll look at how to preserve all MySQL grants and users after reinstalling MySQL.

Every so often, I need to reinstall a MySQL version from scratch and preserve all the user accounts and their permissions (or move the same users and privileges to another server).

As of MySQL 5.7, MySQL does not make this easy! MySQL SHOW GRANTS only shows permissions for one user, and the method suggested on StackExchange – dumping tables containing grants information directly – is not robust (as Rick James mentions in the comments). It also doesn’t work between different MySQL versions.
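To illustrate the limitation (the account name is hypothetical): SHOW GRANTS reports one account per invocation, so dumping everything by hand means listing the accounts first and then looping over them.

mysql> SELECT user, host FROM mysql.user;
mysql> SHOW GRANTS FOR 'app_user'@'10.%';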

This problem is easily solved, however, with the pt-show-grants tool from Percona Toolkit (which serves pretty much as a mysqldump for user privileges).  

All you need to do is:

  1. On the source, or to back up MySQL privileges, run:

pt-show-grants > grants.sql

  2. On the target, or to restore MySQL privileges, run:

mysql  < grants.sql

  3. If you would like to clean up the old privileges from MySQL before loading new ones, use:

pt-show-grants --drop  --ignore root@localhost | grep "^DROP USER " | mysql

This removes all the users (except the root user, which you will need in order to connect back and load the new privileges).

With Percona Toolkit, preserving your grants and user privileges is easy!

Do fast incremental backups work for MariaDB?

Latest Forum Posts - January 11, 2017 - 6:13am
Hi

According to the Percona website, "Fast incremental backups are supported for Percona Server with XtraDB changed page tracking enabled".
Does this mean fast incremental backups also work for MariaDB, which supports XtraDB?

Best Regards
Karsten

Cannot start logging of slow queries at run-time

Latest Forum Posts - January 11, 2017 - 2:46am
Hi!
I cannot configure my Percona MySQL instance to log slow queries:

Server version: 5.6.21-70.1-56-log Percona XtraDB Cluster (GPL), Release rel70.1, Revision 938, WSREP version 25.8, wsrep_25.8.r4150

The “slow_query_log” setting keeps getting reset.

Checking the “slow_query_log”. It is OFF now:


mysql> show global variables like 'slow_query_log' \G
*************************** 1. row ***************************
Variable_name: slow_query_log
Value: OFF
1 row in set (0.00 sec)


Switching it ON:

mysql> set global slow_query_log=1;
Query OK, 0 rows affected (0.00 sec)

Checking the “slow_query_log”. It is ON now:

mysql> show global variables like 'slow_query_log' \G
*************************** 1. row ***************************
Variable_name: slow_query_log
Value: ON
1 row in set (0.00 sec)


Checking the “slow_query_log” again a few moments later. It is OFF now:

mysql> show global variables like 'slow_query_log' \G
*************************** 1. row ***************************
Variable_name: slow_query_log
Value: OFF
1 row in set (0.00 sec)


Checking the slow log file size:

[root@sugardb1 mysql]# ll /data/mysql/sugardb1-slow.log
-rw-r--r-- 1 mysql root 0 Jan 11 12:23 /data/mysql/sugardb1-slow.log
[root@sugardb1 mysql]# ll /data/mysql/sugardb1-slow.log
-rw-r--r-- 1 mysql root 260 Jan 11 12:23 /data/mysql/sugardb1-slow.log
[root@sugardb1 mysql]# ll /data/mysql/sugardb1-slow.log
-rw-r--r-- 1 mysql root 0 Jan 11 12:23 /data/mysql/sugardb1-slow.log
[root@sugardb1 mysql]# ll /data/mysql/sugardb1-slow.log
-rw-r--r-- 1 mysql root 0 Jan 11 12:24 /data/mysql/sugardb1-slow.log
[root@sugardb1 mysql]# ll /data/mysql/sugardb1-slow.log
-rw-r--r-- 1 mysql root 260 Jan 11 12:24 /data/mysql/sugardb1-slow.log


The file keeps being truncated.


The system variables are:

mysql> show global variables like 'log_slow%' \G
*************************** 1. row ***************************
Variable_name: log_slow_admin_statements
Value: ON
*************************** 2. row ***************************
Variable_name: log_slow_filter
Value:
*************************** 3. row ***************************
Variable_name: log_slow_rate_limit
Value: 10
*************************** 4. row ***************************
Variable_name: log_slow_rate_type
Value: query
*************************** 5. row ***************************
Variable_name: log_slow_slave_statements
Value: ON
*************************** 6. row ***************************
Variable_name: log_slow_sp_statements
Value: ON
*************************** 7. row ***************************
Variable_name: log_slow_verbosity
Value: microtime,query_plan,innodb
7 rows in set (0.00 sec)


mysql> show global variables like 'slow_query%' \G
*************************** 1. row ***************************
Variable_name: slow_query_log
Value: OFF
*************************** 2. row ***************************
Variable_name: slow_query_log_always_write_time
Value: 0.010000
*************************** 3. row ***************************
Variable_name: slow_query_log_file
Value: /data/mysql/sugardb1-slow.log
*************************** 4. row ***************************
Variable_name: slow_query_log_timestamp_always
Value: ON
*************************** 5. row ***************************
Variable_name: slow_query_log_timestamp_precision
Value: microsecond
*************************** 6. row ***************************
Variable_name: slow_query_log_use_global_control
Value: log_slow_filter,log_slow_rate_limit,log_slow_verbosity,long_query_time,min_examined_row_limit
6 rows in set (0.00 sec)

mysql>


mysql> show variables like 'performance_schema' \G
*************************** 1. row ***************************
Variable_name: performance_schema
Value: OFF
1 row in set (0.00 sec)

Can someone give me advice on how to resolve this issue?

Webinar Thursday, January 12: Percona Software News and Roadmap Update for Q1 2017

Latest MySQL Performance Blog posts - January 10, 2017 - 11:44am

Please join Percona CEO Peter Zaitsev for a webinar on Thursday, January 12, 2017 at 11 am PST/ 2 pm EST (UTC-8) for a discussion on the Percona Software News and Roadmap Update for Q1 2017.

In this webinar, Peter will discuss what’s new in Percona open source software. This will include Percona Server for MySQL and MongoDB, Percona XtraBackup, Percona Toolkit, Percona XtraDB Cluster and Percona Monitoring and Management.

During this webinar Peter will talk about newly released features in Percona software, show a few quick demos and share with you highlights from the Percona open source software roadmap.

Peter will also talk about new developments in Percona commercial services and finish with a Q&A.

Register for the Percona Software News and Roadmap Update webinar here.

Peter Zaitsev co-founded Percona and assumed the role of CEO in 2006. As one of the foremost experts on MySQL strategy and optimization, Peter leveraged both his technical vision and entrepreneurial skills to grow Percona from a two-person shop to one of the most respected open source companies in the business. With over 150 professionals in 20 plus countries, Peter’s venture now serves over 3000 customers – including the “who’s who” of internet giants, large enterprises and many exciting startups.

Peter was an early employee at MySQL AB, eventually leading the company’s High Performance Group. A serial entrepreneur, Peter co-founded his first startup while attending Moscow State University where he majored in Computer Science. Peter is a co-author of High Performance MySQL: Optimization, Backups, and Replication, one of the most popular books on MySQL performance. Peter frequently speaks as an expert lecturer at MySQL and related conferences, and regularly posts on the Percona Data Performance Blog. He has also been tapped as a contributor to Fortune and DZone, and his recent ebook Practical MySQL Performance Optimization is one of Percona’s most popular downloads.

 

How to Move a MySQL Partition from One Table to Another

Latest MySQL Performance Blog posts - January 10, 2017 - 10:00am

In this blog post we’ll look at how to move a MySQL partition from one table to another, for MySQL versions before 5.7.

Up to version 5.7, MySQL had a limitation that made it impossible to directly exchange partitions between partitioned tables. Now and then, we get questions about how to import an .ibd for use as a partition in a table, as well as how to exchange partitions with another partitioned table. Below are step-by-step instructions on how to move a partition from one table to another.

In this example, one of our customers had two tables with the following structures:

CREATE TABLE live_tbl (
  some_id bigint(20) NOT NULL DEFAULT '0',
  summary_date date NOT NULL,
  PRIMARY KEY (some_id, summary_date)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
/*!50500 PARTITION BY RANGE COLUMNS(summary_date)
(PARTITION p201203 VALUES LESS THAN ('2012-04-01') ENGINE = InnoDB,
 PARTITION p201204 VALUES LESS THAN ('2012-05-01') ENGINE = InnoDB,
 PARTITION p201205 VALUES LESS THAN ('2012-06-01') ENGINE = InnoDB,
 PARTITION p201206 VALUES LESS THAN ('2012-07-01') ENGINE = InnoDB,
 PARTITION future VALUES LESS THAN (MAXVALUE) ENGINE = InnoDB) */;

CREATE TABLE archive_tbl (
  some_id bigint(20) NOT NULL DEFAULT '0',
  summary_date date NOT NULL,
  PRIMARY KEY (some_id, summary_date)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
/*!50500 PARTITION BY RANGE COLUMNS(summary_date)
(PARTITION p201109 VALUES LESS THAN ('2011-10-01') ENGINE = InnoDB,
 PARTITION p201110 VALUES LESS THAN ('2011-11-01') ENGINE = InnoDB,
 PARTITION p201111 VALUES LESS THAN ('2011-12-01') ENGINE = InnoDB,
 PARTITION p201112 VALUES LESS THAN ('2012-01-01') ENGINE = InnoDB,
 PARTITION p201201 VALUES LESS THAN ('2012-02-01') ENGINE = InnoDB,
 PARTITION p201202 VALUES LESS THAN ('2012-03-01') ENGINE = InnoDB,
 PARTITION future VALUES LESS THAN (MAXVALUE) ENGINE = InnoDB) */;

And their (likely obvious) goal is to move (not copy) the oldest partition from live_tbl to archive_tbl. To achieve this, we came up with the following procedure:

For the following, we assume:

  • The datadir is “/var/lib/mysql/”
  • MySQL Server is run by “mysql” Linux user
  • “p201203” is the partition name you want to move
  • “live_tbl” is the source table from which you want to move the partition
  • “archive_tbl” is the destination table to which you want to move the partition
  • “dest_tbl_tmp” is the temporary table we will create, using the same CREATE TABLE criteria as in live_tbl
  • “thedb” is the database name
1. Copy the .ibd data file from that particular partition

First, make sure you flush any pending changes to disk and that the table is locked, so that binary table copies can be made while the server is running. Keep in mind that the table will be locked while you copy the .ibd file. All reads/writes during that time will be blocked.

Important: Don’t close this session or the lock will be released.

mysql> USE thedb
mysql> FLUSH TABLE live_tbl FOR EXPORT;

Open another session, and copy the .ibd file to a temporary folder.

shell> cp /var/lib/mysql/thedb/live_tbl#P#p201203.ibd /tmp/dest_tbl_tmp.ibd

After you copy the .ibd file to the temporary folder, go back to the MySQL session and unlock the table so that all reads and writes to that particular table are allowed again.

mysql> UNLOCK TABLES;

2. Prepare a temporary table to import the tablespace

Create a temporary table exactly like the one into which you want to import the partition. Remove the partitioning on it and discard the tablespace so that it is ready for the .ibd import.

mysql> CREATE TABLE dest_tbl_tmp LIKE archive_tbl;
mysql> ALTER TABLE dest_tbl_tmp REMOVE PARTITIONING;
mysql> ALTER TABLE dest_tbl_tmp DISCARD TABLESPACE;

3. Import the tablespace to the temporary table

Place the .ibd file in the appropriate folder, set the correct permissions and ownership and then import the tablespace to the temporary table.

shell> cp /tmp/dest_tbl_tmp.ibd /var/lib/mysql/thedb/
shell> chmod 660 /var/lib/mysql/thedb/dest_tbl_tmp.ibd
shell> chown mysql.mysql /var/lib/mysql/thedb/dest_tbl_tmp.ibd
mysql> ALTER TABLE dest_tbl_tmp IMPORT TABLESPACE;

4. Swap the tablespace with the destination table’s partition tablespace

Partition according to your own schema. (This is just an example using date values. In our case, we have to REORGANIZE PARTITION to accommodate a new LESS THAN range before the MAXVALUE.)

mysql> ALTER TABLE archive_tbl REORGANIZE PARTITION future INTO (
           PARTITION p201203 VALUES LESS THAN ('2012-04-01'),
           PARTITION future VALUES LESS THAN (MAXVALUE)
       );
mysql> ALTER TABLE archive_tbl EXCHANGE PARTITION p201203 WITH TABLE dest_tbl_tmp;

5. Check that the partitions are correctly exchanged before dropping the one from the source table

mysql> SELECT * FROM archive_tbl;
mysql> SELECT * FROM dest_tbl_tmp;
mysql> SELECT * FROM live_tbl;
mysql> ALTER TABLE live_tbl DROP PARTITION p201203;

For more information on why these steps are needed, please check the following documentation link for ALTER TABLE … EXCHANGE PARTITION:
https://dev.mysql.com/doc/refman/5.6/en/partitioning-management-exchange.html

In MySQL version 5.7, it is possible to exchange partitions without the unpartitioned table step, as described in the following link:
https://dev.mysql.com/doc/refman/5.7/en/partitioning-management-exchange.html

There are bugs related to the steps in this guide that might be worth taking into consideration.

Connect to a pmm-server with authentication (is it a bug?!)

Latest Forum Posts - January 10, 2017 - 9:44am
Hi Everyone,

I have created a pmm-server with:

docker run -d -p 85:80 --volumes-from pmm-data --name pmm-server --restart always -e SERVER_USER=someusername -e SERVER_PASSWORD=somepassword percona/pmm-server:1.0.7

So far so good - I can access it via web and it's asking for the given credentials.

Now, when I want to set up monitoring, I do

pmm-admin config --server 127.0.0.1:85 --server-user someusername --server-password somepassword

I get a message:

Unable to connect to PMM server by address: 127.0.0.1:85

* Check if the configured address is correct.
* If server is running on non-default port, ensure it was specified along with the address.
* If server is enabled for SSL or self-signe

I've tried searching the documentation and googling, but the --server-user and --server-password options are quite poorly documented.
What am I missing?

Client version is pmm-client-1.0.7-1.x86_64

Best,
Istvan Prosinger