
Percona Monitoring and Management 1.1.4 is now available

Percona announces the release of Percona Monitoring and Management 1.1.4 on May 29, 2017.

For installation instructions, see the Deployment Guide.

This release includes experimental support for MongoDB in Query Analytics, including an updated QAN interface.

Query Analytics for MongoDB

To enable MongoDB query analytics, use the mongodb:queries alias when adding the service. As an experimental feature, it also requires the --dev-enable option:

sudo pmm-admin add --dev-enable mongodb:queries

NOTE: Currently, it monitors only collections that are present when you enable MongoDB query analytics. Query data for collections that you add later is not gathered. This is a known issue and it will be fixed in the future.

Query Analytics Redesign

The QAN web interface was updated for better usability and functionality (including the new MongoDB query analytics data). The new UI is experimental and available by specifying /qan2 after the URL of PMM Server.

NOTE: The button on the main landing page still points to the old QAN interface.

You can check out the new QAN web UI at https://pmmdemo.percona.com/qan2

New in PMM Server
  • PMM-724: Added the Index Condition Pushdown (ICP) graph to the MySQL InnoDB Metrics dashboard.
  • PMM-734: Fixed the MySQL Active Threads graph in the MySQL Overview dashboard.
  • PMM-807: Fixed the MySQL Connections graph in the MySQL Overview dashboard.
  • PMM-850: Updated the MongoDB RocksDB and MongoDB WiredTiger dashboards.
  • Removed the InnoDB Deadlocks and Index Collection Pushdown graphs from the MariaDB dashboard.
  • Added tooltips with descriptions for graphs in the MySQL Query Response Time dashboard. Similar tooltips will be gradually added to all graphs.
New in PMM Client
  • PMM-801: Improved PMM Client upgrade process to preserve credentials that are used by services.
  • Added options for pmm-admin to enable MongoDB cluster connections (a hedged sketch follows below).
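
The announcement above does not spell out the new syntax, so the following is only a hedged sketch: the --cluster flag and the cluster label are assumptions to be checked against pmm-admin add mongodb:metrics --help, not something confirmed by these release notes.

# Hypothetical invocation: add MongoDB metrics with a cluster label so nodes group together.
# --cluster and "my-mongo-cluster" are assumptions; --uri points at the local mongod.
sudo pmm-admin add mongodb:metrics --uri mongodb://localhost:27017 --cluster my-mongo-cluster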
About Percona Monitoring and Management

Percona Monitoring and Management is an open-source platform for managing and monitoring MySQL and MongoDB performance. Percona developed it in collaboration with experts in the field of managed database services, support and consulting.

PMM is a free and open-source solution that you can run in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

A live demo of PMM is available at pmmdemo.percona.com.

Please provide your feedback and questions on the PMM forum.

If you would like to report a bug or submit a feature request, use the PMM project in JIRA.

Need help: crash when importing big data into Percona XtraDB Cluster

Latest Forum Posts - May 27, 2017 - 7:26pm
Yesterday we built a three-node Percona XtraDB Cluster in a test box. We then used mysqldump to back up one database from MySQL 5.5 on the production box. The resulting SQL file was 11 GB and about 30,000 lines. We used the command "mysql -uroot -p123456 dbname < mysql_dump.sql" to load the file into the cluster. We had already created the same database in the cluster:

"CREATE DATABASE dbname DEFAULT CHARSET utf8".

OK, so we ran "mysql -uroot -p123456 dbname < mysql_dump.sql" on node1. Unluckily, it crashed in the middle of the import, and node1's mysql service went down and had to be restarted. We tried running the command many times, so we wonder: does Percona XtraDB Cluster not allow importing SQL files larger than 1 GB? If we decide to use Percona XtraDB Cluster in our production box, the first step is to load the current DB data into the cluster. Can someone help us? Thanks very much in advance!
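
A hedged sketch of one way to narrow this down (file name, credentials, and chunk size are placeholders): load the dump in smaller pieces, so that a failing statement or an oversized write set is easier to isolate and the import can resume from the last good chunk.

# Split the dump at line boundaries into ~500 MB pieces, then load them in order.
# Assumes each INSERT sits on a single line, as mysqldump normally writes them.
split -d -C 500M mysql_dump.sql dump_part_
for part in dump_part_*; do
    echo "Loading $part"
    mysql -uroot -p123456 dbname < "$part" || { echo "Failed on $part"; break; }
done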

master server with quorum 2

Latest Forum Posts - May 27, 2017 - 8:16am
Can we run only two nodes instead of three in a master-master setup, with multiple slave nodes, on Percona XtraDB Cluster 5.5.34-25?

We are also planning to upgrade the servers, so will adding slaves with a newer version work?

Percona Server for MongoDB 3.0.15-1.10 is Now Available

Latest MySQL Performance Blog posts - May 26, 2017 - 10:28am

Percona announces the release of Percona Server for MongoDB 3.0.15-1.10 on May 26, 2017. Download the latest version from the Percona web site or the Percona Software Repositories.

Percona Server for MongoDB is an enhanced, open source, fully compatible, highly-scalable, zero-maintenance downtime database supporting the MongoDB v3.0 protocol and drivers. It extends MongoDB with PerconaFT and MongoRocks storage engines, as well as several enterprise-grade features:

NOTE: PerconaFT was deprecated and is not available in later versions. TokuBackup was replaced with Hot Backup for WiredTiger and MongoRocks storage engines.

Percona Server for MongoDB requires no changes to MongoDB applications or code.

This release is based on MongoDB 3.0.15 and includes the following additional change:

Percona Server for MongoDB 3.0.15-1.10 release notes are available in the official documentation.

Percona Server for MySQL 5.7.18-15 is Now Available

Latest MySQL Performance Blog posts - May 26, 2017 - 10:05am

Percona announces the GA release of Percona Server for MySQL 5.7.18-15 on May 26, 2017. Download the latest version from the Percona web site or the Percona Software Repositories. You can also run Docker containers from the images in the Docker Hub repository.

Based on MySQL 5.7.18, including all the bug fixes in it, Percona Server for MySQL 5.7.18-15 is the current GA release in the Percona Server for MySQL 5.7 series. Percona provides completely open-source and free software. Find release details in the 5.7.18-15 milestone at Launchpad.

Bugs Fixed:
  • The server would crash when querying a partitioned table with a single partition. Bug fixed #1657941 (upstream #76418).
  • Running a query on an InnoDB table with an ngram full-text parser and a LIMIT clause could lead to a server crash. Bug fixed #1679025 (upstream #85835).

The release notes for Percona Server for MySQL 5.7.18-15 are available in the online documentation. Please report any bugs on the launchpad bug tracker.

Can't add mysql:queries to remote pmm server

Latest Forum Posts - May 26, 2017 - 1:29am
Hi !
I was trying to add mysql:queries to my remote PMM server, but with no luck.

[root@hotel ~]# pmm-admin list
pmm-admin 1.1.3

PMM Server | 46.xx.xx.xx (password-protected)
Client Name | client-dns
Client Address | 88.xx.xx.xx
Service Manager | unix-systemv

-------------- --------------- ----------- -------- ----------------------------- ---------------
SERVICE TYPE NAME LOCAL PORT RUNNING DATA SOURCE OPTIONS
-------------- --------------- ----------- -------- ----------------------------- ---------------
linux:metrics xxxxx 42000 YES -
mysql:metrics xxxxx 42002 YES root:***@tcp(127.0.0.1:3306) tablestats=OFF

Both servers (MySQL and PMM) are set up with public IPs and placed in different DCs. The linux and mysql metrics are working fine, but when I try to add queries I get the following error:

pmm-admin add mysql:queries
Error adding MySQL queries: timeout 10s waiting on agent to connect to API.

If I check the log:

# Version: percona-qan-agent 1.1.3
# Basedir: /usr/local/percona/qan-agent
# PID: 45814
# API: 46.xx.xx.xx/qan-api
# UUID: xxx
2017/05/26 11:22:48.517407 main.go:165: Starting agent...
2017/05/26 11:22:48.517892 main.go:346: Agent is ready
2017/05/26 11:22:58.548048 main.go:201: Cannot connect to API: GET http://admin:admin@192.168.122.68/qan-api/agents/xxx error: client.Do: Get http://admin:admin@192.168.122.68/qan-api/agents/xxx: dial tcp 192.168.122.68:80: i/o timeout. Verify that the agent UUID and API hostname printed above are correct and that no network issues prevent this host from accessing the API. Connection attempts to API will continue
until successful, but additional errors will not be logged, and agent will not send data until connected to API.

It looks like the QAN agent tries to use the default private address to connect to the server.

If I check the qan-agent config, it contains the right server IP:

"ApiHostname":"46.xx.xx.xx",

So what's wrong with my installation, and why does the qan-agent still try to connect to the private server IP instead of the public one?
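
A hedged sketch of one thing to try (server address and credentials are placeholders, and it may not be the whole story if the PMM Server sits behind NAT): re-register the client explicitly against the public address, then re-add the queries service so the agent picks up the new API hostname.

# Point the client at the public PMM Server address and re-add QAN.
# Add --force if the client was already configured against another address.
sudo pmm-admin config --server 46.xx.xx.xx --server-user admin --server-password admin
sudo pmm-admin rm mysql:queries    # drop the half-configured service if it exists
sudo pmm-admin add mysql:queries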

What About ProxySQL and Mirroring?

Latest MySQL Performance Blog posts - May 25, 2017 - 3:45pm

In this blog post, we’ll look at how ProxySQL and mirroring go together.

Overview

Let me be clear: I love ProxySQL, and I think it is a great component for expanding architecture flexibility and high availability. But not all that shines is gold! In this post, I want to correctly set some expectations, and avoid selling carbon for gold (carbon has its own uses, while gold has others).

First of all, we need to cover the basics of how ProxySQL manages traffic dispatch (I don’t want to call it mirroring, and I’ll explain further below).

ProxySQL receives a connection from the application, which may carry a simple SELECT or a more complex transaction. ProxySQL takes each query, passes it to the Query Processor, identifies whether it should be mirrored and, if so, duplicates the whole internal MySQL session object and associates it with a mirror queue (which refers to a mirror thread pool). If the pool is free (has an available slot in the set of concurrently active threads), the query is processed right away; if not, it stays in the queue. If the queue is full, the query is lost.

Whatever is returned from the query goes to /dev/null, and as such no result set is passed back to the client.

The whole process is not free for a server. If you check the CPU utilization, you will see that the “mirroring” in ProxySQL actually doubles the CPU utilization. This means that the traffic on server A is impacted because of resource contention.

Summarizing, ProxySQL will:

  1. Send queries for execution in a different order
  2. Completely ignore any transaction isolation
  3. Execute a different number of queries on B with respect to A
  4. Add significant load on the server's resources

These points, coupled with the expectations I mention in the reasoning at the end of this article, make it quite clear to me that at the moment we cannot consider ProxySQL a valid mechanism to duplicate a consistent load from server A to server B.

Personally, I don’t think that the ProxySQL development team (Rene :D) should waste time on fixing this issue, as there are so many other things to cover and improve on in ProxySQL.

After working extensively with ProxySQL, and doing deep QA on mirroring, I think we should either keep it as a basic, blind traffic dispatcher or undertake a full re-conceptualization. That said, ProxySQL "traffic dispatch" (I still don't want to call it mirroring) remains a very interesting feature that can have useful applications, especially since it is easy to set up.

The following test results should help set the correct expectations.

The tests were simple: load data in a Percona XtraDB Cluster and use ProxySQL to replicate the load on a MySQL master-slave environment.

  • Machines for MySQL/Percona XtraDB Cluster: VM with CentOS 7, 4 CPU 3 GB RAM, attached storage
  • Machine for ProxySQL: VM CentOS 7, 8 CPU 8GB RAM

Why did I choose to give ProxySQL a higher volume of resources? I knew in advance I could need to play a bit with a couple of settings that required more memory and CPU cycles. I wanted to be sure I didn’t get any problems from ProxySQL in relation to CPU and memory.

The application that I was using to add load is a Java application I developed to perform my tests. The app is at https://github.com/Tusamarco/blogs/blob/master/stresstool_base_app.tar.gz, and the whole set I used to do the tests is here: https://github.com/Tusamarco/blogs/tree/master/proxymirror.

I used four different tables:

+------------------+
| Tables_in_mirror |
+------------------+
| mirtabAUTOINC    |
| mirtabMID        |
| mirtabMIDPart    |
| mirtabMIDUUID    |
+------------------+

OK, so let's start. Note that the meaningful tests are the ones below; for the whole set, refer to the test package linked above. First, set up ProxySQL:

delete from mysql_servers where hostgroup_id in (500,501,700,701);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.5',500,3306,60000,400);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.5',501,3306,100,400);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.21',501,3306,20000,400);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.231',501,3306,20000,400);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.7',700,3306,1,400);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.7',701,3306,1,400);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.25',701,3306,1,400);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.43',701,3306,1,400);
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;

delete from mysql_users where username='load_RW';
insert into mysql_users (username,password,active,default_hostgroup,default_schema,transaction_persistent) values ('load_RW','test',1,500,'test',1);
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;

delete from mysql_query_rules where rule_id=202;
insert into mysql_query_rules (rule_id,username,destination_hostgroup,mirror_hostgroup,active,retries,apply) values(202,'load_RW',500,700,1,3,1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;

Test 1

The first test is mainly a simple functional test, during which I insert records using a single thread in both Percona XtraDB Cluster and MySQL. No surprise here: with 3000 loops, at the end of the test I have 3000 records on both platforms.

To have a baseline we can see that the ProxySQL CPU utilization is quite low:

At the same time, the number of "questions" against Percona XtraDB Cluster and MySQL is very similar:

Percona XtraDB Cluster

MySQL

The other two metrics we want to keep an eye on are Mirror_concurrency and Mirror_queue_length. These two refer respectively to mysql-mirror_max_concurrency and mysql-mirror_max_queue_length:

These two new variables and metrics were introduced in ProxySQL 1.4.0, with the intent to control and manage the load ProxySQL generates internally related to the mirroring feature. In this case, you can see we have a max of three concurrent connections and zero queue entries (all good).
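
If you want to sample these two metrics yourself rather than read them from PMM, they can be pulled from the ProxySQL admin interface. This is a sketch assuming the default admin credentials and port, and that the counters are exposed in stats_mysql_global as in recent ProxySQL releases:

# Read the mirror-related counters from the ProxySQL admin interface.
mysql -h 127.0.0.1 -P 6032 -u admin -padmin -e \
  "SELECT Variable_Name, Variable_Value FROM stats_mysql_global WHERE Variable_Name LIKE 'Mirror%';"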

Now that we have a baseline, and that we know at functional level “it works,” let see what happens when increasing the load.

Test 2

The scope of this test was to identify how ProxySQL behaves with a standard configuration and increasing load. It turns out that as soon as ProxySQL has a little more load, it starts to lose some queries along the way.

Executing 3000 loops with 40 threads results in 120,000 rows inserted in all the four tables in Percona XtraDB Cluster. But the table on the secondary (mirrored) platform only has a variable number of inserted rows, between 101,359 and 104,072. This demonstrates consistent loss of data.

After reviewing and comparing the connections running in Percona XtraDB Cluster and the secondary, we can see that (as expected) Percona XtraDB Cluster’s number of connections is scaling and serving the number of incoming requests, while the connections on the secondary are limited by the default value of mysql-mirror_max_concurrency=16.

It is also interesting to note that the ProxySQL transaction process queue maintains its connection to the secondary longer than the connection to Percona XtraDB Cluster.

As we can see above, the queue shows an evident bell curve that reaches 6K entries, which is well below the mysql-mirror_max_queue_length limit (32K). Yet queries were dropped by ProxySQL, which indicates the queue alone is not enough to accommodate the pending work.

CPU-wise, ProxySQL (as expected) takes a few more cycles, but nothing crazy. The overhead for the simple mirroring queue processing can be seen when the main load stops, around 12:47.

Another interesting graph to keep an eye on is the one describing the executed commands inside Percona XtraDB Cluster and the secondary:

Percona XtraDB Cluster

Secondary

As you can see, the traffic on the secondary was significantly less (669 on average, compared to Percona XtraDB Cluster's 1.17K). It then spikes when the main load on the Percona XtraDB Cluster node terminates. In short, it is quite clear that ProxySQL is not able to scale to follow the traffic hitting Percona XtraDB Cluster, and it actually loses a significant amount of data on the secondary.

Doubling the load in Test 3 shows the same behavior, with ProxySQL reaching its limit for traffic duplication.

But can this be optimized?

The answer is, of course, yes! This is what mysql-mirror_max_concurrency is for, so let's see what happens if we increase the value from 16 to 100 (just to make it crazy high).
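
For reference, the change itself is done through the ProxySQL admin interface; a sketch assuming the default admin credentials and port:

# Raise the mirror concurrency limit and persist it.
mysql -h 127.0.0.1 -P 6032 -u admin -padmin -e "
UPDATE global_variables SET variable_value='100' WHERE variable_name='mysql-mirror_max_concurrency';
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;"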

Test 4 (two app node writing)

The first thing that comes to attention is that both Percona XtraDB Cluster and secondary report the same number of rows in the tables (240,000). That is a good first win.

Second, note the number of running connections:

The graphs are now much closer, and the queue drops to just a few entries.

Commands executed in Percona XtraDB Cluster:

And commands executed in the secondary:

Average execution reports the same value, and very similar trends.

Finally, what was the CPU cost and effect?

Percona XtraDB Cluster and secondary CPU utilization:


As expected, some difference in the CPU usage distribution exists. But the trend is consistent between the two nodes, and the operations.

The ProxySQL CPU utilization is definitely higher than before:

But it’s absolutely manageable, and still reflects the initial distribution.

What about CRUD? So far I've only tested the insert operation, but what happens if we run a full CRUD set of tests?

Test 7 (CRUD)

First of all, let’s review the executed commands in Percona XtraDB Cluster:

And the secondary:

While in appearance we have very similar workloads, selects aside, the behavior diverges significantly. This is because on the secondary the different operations are not encapsulated by the transaction; they are executed as they are received. We can see a significant difference in update and delete operations between the two.

Also, the threads in the execution show a different picture between the two platforms:

Percona XtraDB Cluster

Secondary

It appears quite clear that Percona XtraDB Cluster is constantly running more threads and more connections. Nevertheless, both platforms process a similar total number of questions:

Percona XtraDB Cluster

Secondary

Both average around 1.17K questions per second.

This is another indication of how much concurrent operation impacts behavior, with no respect for isolation or execution order. Below we can clearly see the different behavior by reviewing the CPU utilization:

Percona XtraDB Cluster

Secondary

Conclusions

To close this article, I want to go back to the start. We cannot consider the mirror function in ProxySQL as a real mirroring, but more as traffic redirection (check here for more reasoning on mirroring from my side).

Using ProxySQL with this approach is still partially effective in testing the load and the effect it has on a secondary platform. As we know, data consistency is not guaranteed in this scenario, and Selects, Updates and Deletes are affected (given the different data-set and result-set they manage).

The server's behavior changes between the original and the mirror, if not in quantity then in quality.

I am convinced that when we need a tool able to test our production load on a different or new platform, we would do better to look at something else, possibly Query Playback, recently reviewed and significantly patched by Dropbox (https://github.com/Percona-Lab/query-playback).

In the end, ProxySQL is already a cool tool. If it doesn't cover mirroring well, I can live with that. I am interested in having it work as it should (and it does in many of its other functionalities).

Acknowledgments

As usual, to Rene, who worked on fixing and introducing new functionalities associated with mirroring, like queue and concurrency control.

To the Percona team who developed Percona Monitoring and Management (PMM): all the graphs here (except 3) come from PMM (some of them I customized).

percona server performance 5.5 vs 5.6

Latest Forum Posts - May 25, 2017 - 2:59am
Recently we upgraded our Percona MySQL server from version 5.5 to 5.6 (5.6.34).
The problem is that with the same configuration, the same queries started to take much longer to execute (compared to 5.5).

I started to examine some particular queries, but eventually came to the conclusion that the problem is rather common (and not related to particular queries).

A lot of queries started to use incorrect indexes (compared to 5.5).
As I understand it, the root cause is the changed optimizer logic and the optimizer "improvements" in 5.6.

Rewriting a lot of queries to use index hints seems like a bad, non-permanent solution to me. I am looking for some server-side optimization.

I have tried to disable all the new optimizer options, but without effect:

SET @@global.optimizer_switch='index_condition_pushdown=off,mrr=off,mrr_cost_based=off,block_nested_loop=off,materialization=off,semijoin=off,loosescan=off,firstmatch=off,subquery_materialization_cost_based=off,use_index_extensions=off';

Questions:
Has anyone faced such a situation after upgrading to 5.6? How did you deal with it? What can be tuned to counter the 5.6 "improvements" and change the optimizer's behavior?

Any help is much appreciated!
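
A hedged sketch of how one might bisect this instead of disabling everything at once (the table, condition, and credentials are placeholders): toggle one optimizer_switch flag at a time in a session and compare the EXPLAIN output with the plan you expect from 5.5.

# For each suspect flag, run EXPLAIN on one of the regressed queries with only that flag off.
# Replace the SELECT with one of your real problem queries.
for flag in index_condition_pushdown mrr mrr_cost_based block_nested_loop \
            materialization semijoin loosescan firstmatch use_index_extensions; do
    echo "== ${flag}=off =="
    mysql -uroot -p123456 dbname -e \
        "SET SESSION optimizer_switch='${flag}=off'; EXPLAIN SELECT * FROM your_table WHERE your_column = 1;"
done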

Percona Software and Roadmap Update with CEO Peter Zaitsev: Q2 2017

Latest MySQL Performance Blog posts - May 24, 2017 - 3:24pm

This blog post is a summary of the Percona Software and Roadmap Update – Q2 2017 webinar given by Peter Zaitsev on May 4, 2017. This webinar reflects changes and updates since the last update (Q1 2017).

A full recording of this webinar, along with the presentation slide deck, can be found here.

Percona Software

Below are the latest and upcoming features in Percona’s software. All of Percona’s software is 100% free and open source, with no restricted “Enterprise” version. Percona doesn’t restrict users with open core or “open source, eventually” (BSL) licenses.

Percona Server for MySQL 5.7

Latest Improvements

Features About To Be Released 

  • Integration of TokuDB and Performance Schema
  • MyRocks integration in Percona Server
  • Starting to look towards MySQL 8
Percona XtraBackup 2.4

Latest Improvements

Percona Toolkit

Latest Improvements

Percona Server for MongoDB 3.4

Latest Improvements

Percona XtraDB Cluster 5.7

Latest Improvements

Performance Improvement Benchmarks

Below, you can see the benchmarks for improvements to Percona XtraDB Cluster 5.7 performance. You can read about the improvements and benchmark tests in more detail here and here.

Percona XtraDB Cluster 5.7 Integrated with ProxySQL 1.3

Percona Monitoring and Management

New in Percona Monitoring and Management

Advanced MariaDB Dashboards in PMM (Links go to PMM Demo)

Improved MongoDB Dashboards in PMM (Links go to PMM Demo)

Check out the PMM Demo

Thanks for tuning in for an update on Percona Software and Roadmap Update – Q2 2017.

New Percona Online Store – Easy to Buy, Pay Monthly

Percona server is EXTREMELY slow

Latest Forum Posts - May 24, 2017 - 4:48am
I have just installed Percona Server 5.7 and have reused the same my.cnf I used with MySQL.

Right now I'm importing an SQL file from a mysqldump and it is EXTREMELY slow, at least 5-6 times slower than MySQL.

What is wrong? I don't get it, because I just reused the same configuration. If there is no easy way to make Percona faster, I'm about to go back to MySQL right away.

Hardware:
Intel i7 2600
16GB RAM

Here is my conf


[mysql]

# CLIENT #
port = 3306
socket = /var/run/mysqld/mysqld.sock

[mysqld]

# GENERAL #
user = mysql
default-storage-engine = InnoDB
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock

# SAFETY #
max-allowed-packet = 16M
max-connect-errors = 1000000
skip-name-resolve
sql-mode = STRICT_ALL_TABLES,NO_AUTO_CREATE_USER,NO_AUTO_VALUE_ON_ZERO,NO_ENGINE_SUBSTITUTION,ONLY_FULL_GROUP_BY,NO_ZERO_DATE,NO_ZERO_IN_DATE,ERROR_FOR_DIVISION_BY_ZERO
sysdate-is-now = 1
symbolic-links = 0

# DATA STORAGE #
datadir = /var/lib/mysql/

# BINARY LOGGING #
server-id = 2
log-bin = /var/lib/mysql/mysql-bin
expire-logs-days = 14
sync-binlog = 1
log-bin-trust-function-creators= 1

# REPLICATION #
log-slave-updates
gtid-mode = ON
enforce-gtid-consistency

# CACHES AND LIMITS #
tmp-table-size = 32M
max-heap-table-size = 32M
query-cache-type = 0
query-cache-size = 0
max-connections = 500
thread-cache-size = 50
open-files-limit = 65535
table-definition-cache = 4096
table-open-cache = 4096

# INNODB #
innodb-flush-method = O_DIRECT
innodb-log-files-in-group = 2
innodb-log-file-size = 256M
innodb-flush-log-at-trx-commit = 1
innodb-file-per-table = 1
innodb-buffer-pool-size = 12G

# LOGGING #
log-error = /var/lib/mysql/mysql-error.log
log-queries-not-using-indexes = 1
slow-query-log = 1
slow-query-log-file = /var/lib/mysql/mysql-slow.log



basedir = /usr
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
explicit_defaults_for_timestamp
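
Not an answer from this thread, just a hedged observation on the config above: it keeps full durability (sync-binlog = 1, innodb-flush-log-at-trx-commit = 1) and logs every query not using indexes, all of which slow down a bulk import. A common first experiment is to relax these for the load and restore them right after; the dump file name is a placeholder.

# Temporarily relax durability and slow-log settings for the import (restore afterwards!).
mysql -uroot -p -e "SET GLOBAL innodb_flush_log_at_trx_commit = 2; SET GLOBAL sync_binlog = 0; SET GLOBAL log_queries_not_using_indexes = 0;"
mysql -uroot -p dbname < dump.sql
# Put the original, durable settings back once the import finishes.
mysql -uroot -p -e "SET GLOBAL innodb_flush_log_at_trx_commit = 1; SET GLOBAL sync_binlog = 1; SET GLOBAL log_queries_not_using_indexes = 1;"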

innobackupex --apply-log source and destination folder?

Latest Forum Posts - May 24, 2017 - 12:44am
I am trying to use innobackupex --apply-log --redo-only, but every time it applies the logs to the source folder, and I don't want anything written to the source. Is there any way to get around this by writing to a destination folder only?
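
For what it's worth, innobackupex --apply-log prepares the backup directory in place, so a hedged workaround is to copy the backup first and prepare only the copy (paths below are placeholders):

# Copy the backup, then run the prepare phase against the copy only.
cp -a /backups/2017-05-24_00-00-00 /restore/2017-05-24_00-00-00
innobackupex --apply-log --redo-only /restore/2017-05-24_00-00-00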

How to Save and Load Docker Images to Offline Servers

Latest MySQL Performance Blog posts - May 23, 2017 - 3:16pm

In this post, we'll see how to make Docker images available to servers that don't have access to the Internet (i.e., machines where docker pull <image_name> does not work).

As a specific example, we will do this with the latest Percona Monitoring and Management Docker images, since we had requests for this from users and customers. With the following steps, you’ll be able to deploy PMM within your secure network, without access to the Internet. Additionally, the same steps can be used when you need to upgrade the containers’ version in future releases.

There are two ways in which we can do this:

  • the easy way, by using docker save and docker load, or
  • the not-so-easy way, by setting up our own registry

We’ll focus on the first option, since the latter is a bit more convoluted. If you need your own registry, you are probably looking into something else rather than simply avoiding a firewall to pull one image to a server. Check out the Docker online docs in case option two fits your needs better.

As of this writing, 1.1.3 is the latest PMM version, so this is what we'll use in the example. An image name is composed of three parts, namely:

  • user_account/ (note the ‘/’ at the end); or empty string (and no ‘/’) for the official Docker repo
  • image_name
  • :tag (note the ‘:’ at the beginning)

The PMM Docker images have the following syntax: percona/pmm-server:1.1.3, but you can change this in the following examples to whatever image name you want, and it will work just the same. Before moving on to the commands needed, let’s imagine that serverA is the machine that has access to the Internet and serverB is the machine behind the firewall.

The steps are simple enough. On serverA, get the image, and save it to a file:

serverA> docker pull percona/pmm-server:1.1.3
1.1.3: Pulling from percona/pmm-server
45a2e645736c: Pull complete
7a3c6f252004: Pull complete
2cc1d8878ff1: Pull complete
6c49ea4e9955: Pull complete
bc4630d3a194: Pull complete
75f0952c00bd: Pull complete
79d583a1689c: Pull complete
5a820193ac79: Pull complete
927a0614b164: Pull complete
Digest: sha256:5310b23066d00be418a7522c957b2da4155a63c3e7b08663327aef075674bc2e
Status: Downloaded newer image for percona/pmm-server:1.1.3
serverA> docker save percona/pmm-server:1.1.3 > ~/pmm-server_1.1.3.tar

Now, all you need to do is move the generated tar file to serverB (by using “scp” or any other means), and execute the following:

serverB> docker load < ~/pmm-server_1.1.3.tar
serverB> docker images
REPOSITORY           TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
percona/pmm-server   1.1.3               acc9af2459a4        3 weeks ago         1.146 GB

Now you'll be able to use the image as if you had used docker pull percona/pmm-server:1.1.3:

serverB> docker create ... percona/pmm-server:1.1.3 /bin/true
301a9e89ee95886f497482038aa6601d6cb2e21c0532e1077fa44213ef597f38
serverB> docker run -d ... percona/pmm-server:1.1.3
dbaffa80f62bc0b80239b922bbc746d828fbbeb212a638cfafea92b827141abb
serverB> curl http://localhost | grep "Percona Monitoring and Management"
...
<p>Percona Monitoring and Management (PMM) is a free and open-source solution for managing and monitoring performance on MySQL and MongoDB, and provides time-based analysis of performance to ensure that your data works as efficiently as possible.</p>
...
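
As an aside, if bandwidth to serverB is tight, the same save/load flow works with the tar stream compressed on the fly; this is just a sketch with placeholder paths:

serverA> docker save percona/pmm-server:1.1.3 | gzip > ~/pmm-server_1.1.3.tar.gz
serverA> scp ~/pmm-server_1.1.3.tar.gz serverB:~/
serverB> gunzip -c ~/pmm-server_1.1.3.tar.gz | docker load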

Lastly, let me add the relevant documentation links, so you have them at hand, if needed:

https://www.percona.com/doc/percona-monitoring-and-management/deploy/server/docker.html

https://docs.docker.com/engine/reference/commandline/save/

https://docs.docker.com/engine/reference/commandline/load/

What are the root causes of import failure due to errp: errors

Latest Forum Posts - May 23, 2017 - 2:30pm
We are using MySQL 5.5 and innobackupex-1.5.1. All but a few of the table backups fail to be restored, and mysql.log shows lots of [errp:<page#>] errors; InnoDB finally claims the .ibd file seems to be corrupted. We have run innochecksum on the file and it is fine.

We are wondering whether the .ibd file is really corrupted, or whether the corresponding .exp file may be corrupted and hence incompatible with the .ibd file. How can we identify the root cause so we can find a remedy?


InnoDB: Using DIRECT_IO for IMPORT TABLESPACE.
InnoDB: Import: The extended import of bb_replica_1/bb_data is being started.
InnoDB: Import: 1 indexes have been detected.
InnoDB: Starting chunked import: Tue May 23 21:05:33 2017
InnoDB: chunk size: 8
InnoDB: free_limit_bytes: 11918114816
InnoDB: Progress in %: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74
75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 [errp:664701] [errp:664706] [errp:664725] [errp:664728] [errp:66473
4] [errp:664735] [errp:664759] [errp:664761] [errp:664762] [errp:664763] [errp:664764] [errp:664777] [errp:664780] [er
rp:664782] [errp:664783] [errp:664784] [errp:664785] [errp:664786] [errp:664787] [errp:664794] [errp:664795] [errp:664
796] [errp:664797] [errp:664799] [errp:664800] [errp:664801] [errp:664802] ...... [errp:727422] [errp:727423] 100 done.
InnoDB: Chunked import done: Tue May 23 21:08:30 2017
170523 21:08:30 InnoDB: Error: file './bb_replica_1/bb_data.ibd' seems to be corrupt.
InnoDB: An attempt to convert and salvage all corrupt pages was not made.
InnoDB: ##### CAUTION #####
InnoDB: ## The .ibd file may cause InnoDB to crash, even though its re-import seems to have succeeded.
InnoDB: ## If you don't know how to salvage data from a .ibd, you should not use the file.
InnoDB: ###################
170523 21:08:30 InnoDB: cannot find or open in the database directory the .ibd file of
InnoDB: table `bb_replica_1`.`bb_data`
InnoDB: in ALTER TABLE ... IMPORT TABLESPACE


file bb_data.ibd = 12041846784 bytes (734976 pages)...
checking pages in range 0 to 734975
page 18367 okay: 2.499% done
page 48255 okay: 6.566% done
page 76671 okay: 10.432% done
page 107455 okay: 14.620% done
page 139071 okay: 18.922% done
page 148223 okay: 20.167% done
page 155519 okay: 21.160% done
page 163263 okay: 22.214% done
page 172799 okay: 23.511% done
page 183039 okay: 24.904% done
page 193151 okay: 26.280% done
page 203647 okay: 27.708% done
page 213439 okay: 29.040% done
page 222527 okay: 30.277% done
page 232127 okay: 31.583% done
page 242623 okay: 33.011% done
page 252991 okay: 34.422% done
page 262975 okay: 35.780% done
page 273087 okay: 37.156% done
page 281279 okay: 38.271% done
page 288447 okay: 39.246% done
page 296063 okay: 40.282% done
page 304511 okay: 41.432% done
page 314367 okay: 42.773% done
page 323327 okay: 43.992% done
page 331007 okay: 45.037% done
page 337919 okay: 45.977% done
page 347455 okay: 47.274% done
page 356095 okay: 48.450% done
page 363647 okay: 49.478% done
page 371135 okay: 50.496% done
page 379647 okay: 51.654% done
page 387711 okay: 52.752% done
page 395519 okay: 53.814% done
page 403391 okay: 54.885% done
page 411007 okay: 55.921% done
page 419455 okay: 57.071% done
page 430975 okay: 58.638% done
page 442431 okay: 60.197% done
page 453759 okay: 61.738% done
page 464959 okay: 63.262% done
page 476351 okay: 64.812% done
page 487807 okay: 66.371% done
page 499199 okay: 67.921% done
page 510463 okay: 69.453% done
page 521727 okay: 70.986% done
page 533311 okay: 72.562% done
page 544767 okay: 74.121% done
page 556159 okay: 75.671% done
page 569983 okay: 77.551% done
page 587455 okay: 79.929% done
page 604671 okay: 82.271% done
page 622207 okay: 84.657% done
page 639935 okay: 87.069% done
page 656831 okay: 89.368% done
page 674111 okay: 91.719% done
page 691455 okay: 94.079% done
page 708735 okay: 96.430% done
page 725823 okay: 98.755% done

> hexdump -C bb_data.exp
00000000 78 70 6f 72 74 69 6e 66 00 00 00 01 62 62 5f 72 |xportinf....bb_r|
00000010 65 70 6c 69 63 61 5f 39 31 2f 62 62 5f 64 61 74 |eplica_91/bb_dat|
00000020 61 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |a...............|
00000030 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00000200 00 00 00 00 00 00 00 2f 00 00 00 03 50 52 49 4d |......./....PRIM|
00000210 41 52 59 00 00 00 00 00 00 00 00 00 00 00 00 00 |ARY.............|
00000220 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00004000

ICP Counters in information_schema.INNODB_METRICS

Latest MySQL Performance Blog posts - May 22, 2017 - 1:53pm

In this blog, we’ll look at ICP counters in the information_schema.INNODB_METRICS. This is part two of the Index Condition Pushdown (ICP) counters blog post series. 

As mentioned in the previous post, in this blog we will look at how to check on ICP counters on MySQL and Percona Server for MySQL. This also applies to MariaDB, since the INNODB_METRICS table is also available for MariaDB (as opposed to the Handler_icp_% counters being MariaDB-specific). We will use the same table and data set as in the previous post.

For simplicity we’ll show the examples on MySQL 5.7.18, but they also apply to the latest Percona Server for MySQL (5.7.18) and MariaDB Server (10.2.5):

mysql [localhost] {msandbox} (test) > SELECT @@version, @@version_comment;
+-----------+------------------------------+
| @@version | @@version_comment            |
+-----------+------------------------------+
| 5.7.18    | MySQL Community Server (GPL) |
+-----------+------------------------------+
1 row in set (0.00 sec)

mysql [localhost] {msandbox} (test) > SHOW CREATE TABLE t1\G
*************************** 1. row ***************************
       Table: t1
Create Table: CREATE TABLE `t1` (
  `f1` int(11) DEFAULT NULL,
  `f2` int(11) DEFAULT NULL,
  `f3` int(11) DEFAULT NULL,
  KEY `idx_f1_f2` (`f1`,`f2`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

mysql [localhost] {msandbox} (test) > SELECT COUNT(*) FROM t1;
+----------+
| COUNT(*) |
+----------+
|  3999996 |
+----------+
1 row in set (3.98 sec)

mysql [localhost] {msandbox} (test) > SELECT * FROM t1 LIMIT 12;
+------+------+------+
| f1   | f2   | f3   |
+------+------+------+
|    1 |    1 |    1 |
|    1 |    2 |    1 |
|    1 |    3 |    1 |
|    1 |    4 |    1 |
|    2 |    1 |    1 |
|    2 |    2 |    1 |
|    2 |    3 |    1 |
|    2 |    4 |    1 |
|    3 |    1 |    1 |
|    3 |    2 |    1 |
|    3 |    3 |    1 |
|    3 |    4 |    1 |
+------+------+------+
12 rows in set (0.00 sec)

Before proceeding with the examples, let’s see what counters we have available and how to enable and query them. The documentation page is at the following link: https://dev.mysql.com/doc/refman/5.7/en/innodb-information-schema-metrics-table.html.

The first thing to notice is that we are advised to check the validity of the counters for each version where we want to use them. The counters represented in the INNODB_METRICS table are subject to change, so for the most up-to-date list it’s best to query the running MySQL server:

mysql [localhost] {msandbox} (test) > SELECT NAME, SUBSYSTEM, STATUS FROM information_schema.INNODB_METRICS WHERE NAME LIKE '%icp%';
+------------------+-----------+----------+
| NAME             | SUBSYSTEM | STATUS   |
+------------------+-----------+----------+
| icp_attempts     | icp       | disabled |
| icp_no_match     | icp       | disabled |
| icp_out_of_range | icp       | disabled |
| icp_match        | icp       | disabled |
+------------------+-----------+----------+
4 rows in set (0.00 sec)

Looking good! We have all the counters we expected, which are:

  • icp_attempts: the number of rows where ICP was evaluated
  • icp_no_match: the number of rows that did not completely match the pushed WHERE conditions
  • icp_out_of_range: the number of rows that were checked that were not in a valid scanning range
  • icp_match: the number of rows that completely matched the pushed WHERE conditions

This link to the code can be used for reference: https://github.com/mysql/mysql-server/blob/5.7/include/my_icp.h.

After checking which counters we have at our disposal, you need to enable them (they are not enabled by default). For this, we can use the “modules” provided by MySQL to group similar counters for ease of use. This is also explained in detail in the documentation link above, under the “Counter Modules” section. INNODB_METRICS counters are quite inexpensive to maintain, as you can see in this post by Peter Z.

mysql [localhost] {msandbox} (test) > SET GLOBAL innodb_monitor_enable = module_icp;
Query OK, 0 rows affected (0.00 sec)

mysql [localhost] {msandbox} (test) > SELECT NAME, SUBSYSTEM, STATUS FROM information_schema.INNODB_METRICS WHERE NAME LIKE '%icp%';
+------------------+-----------+---------+
| NAME             | SUBSYSTEM | STATUS  |
+------------------+-----------+---------+
| icp_attempts     | icp       | enabled |
| icp_no_match     | icp       | enabled |
| icp_out_of_range | icp       | enabled |
| icp_match        | icp       | enabled |
+------------------+-----------+---------+
4 rows in set (0.00 sec)

Perfect, we now know what counters we need, and how to enable them. We just need to know how to query them, and we can move on to the examples. However, before rushing into saying that a simple SELECT against the INNODB_METRICS table will do, let’s step back a bit and see what columns we have available that can be of use:

mysql [localhost] {msandbox} (test) > DESCRIBE information_schema.INNODB_METRICS;
+-----------------+--------------+------+-----+---------+-------+
| Field           | Type         | Null | Key | Default | Extra |
+-----------------+--------------+------+-----+---------+-------+
| NAME            | varchar(193) | NO   |     |         |       |
| SUBSYSTEM       | varchar(193) | NO   |     |         |       |
| COUNT           | bigint(21)   | NO   |     | 0       |       |
| MAX_COUNT       | bigint(21)   | YES  |     | NULL    |       |
| MIN_COUNT       | bigint(21)   | YES  |     | NULL    |       |
| AVG_COUNT       | double       | YES  |     | NULL    |       |
| COUNT_RESET     | bigint(21)   | NO   |     | 0       |       |
| MAX_COUNT_RESET | bigint(21)   | YES  |     | NULL    |       |
| MIN_COUNT_RESET | bigint(21)   | YES  |     | NULL    |       |
| AVG_COUNT_RESET | double       | YES  |     | NULL    |       |
| TIME_ENABLED    | datetime     | YES  |     | NULL    |       |
| TIME_DISABLED   | datetime     | YES  |     | NULL    |       |
| TIME_ELAPSED    | bigint(21)   | YES  |     | NULL    |       |
| TIME_RESET      | datetime     | YES  |     | NULL    |       |
| STATUS          | varchar(193) | NO   |     |         |       |
| TYPE            | varchar(193) | NO   |     |         |       |
| COMMENT         | varchar(193) | NO   |     |         |       |
+-----------------+--------------+------+-----+---------+-------+
17 rows in set (0.00 sec)

There are two types: %COUNT and %COUNT_RESET. The former counts since the corresponding counters were enabled, and the latter since they were last reset (we have the TIME_% columns to check when any of these were done). This is why in our examples we are going to check the %COUNT_RESET counters, so we can reset them before running each query (as we did with FLUSH STATUS in the previous post).

Without further ado, let’s check how this all works together:

mysql [localhost] {msandbox} (test) > SET GLOBAL innodb_monitor_reset = module_icp;
Query OK, 0 rows affected (0.00 sec)

mysql [localhost] {msandbox} (test) > SELECT * FROM t1 WHERE f1 < 3 AND (f2 % 4) = 1;
+------+------+------+
| f1   | f2   | f3   |
+------+------+------+
|    1 |    1 |    1 |
|    2 |    1 |    1 |
+------+------+------+
2 rows in set (0.00 sec)

mysql [localhost] {msandbox} (test) > SELECT NAME, COUNT_RESET FROM information_schema.INNODB_METRICS WHERE NAME LIKE 'icp%';
+------------------+-------------+
| NAME             | COUNT_RESET |
+------------------+-------------+
| icp_attempts     |           9 |
| icp_no_match     |           6 |
| icp_out_of_range |           1 |
| icp_match        |           2 |
+------------------+-------------+
4 rows in set (0.00 sec)

mysql [localhost] {msandbox} (test) > EXPLAIN SELECT * FROM t1 WHERE f1 < 3 AND (f2 % 4) = 1;
+----+-------------+-------+------------+-------+---------------+-----------+---------+------+------+----------+-----------------------+
| id | select_type | table | partitions | type  | possible_keys | key       | key_len | ref  | rows | filtered | Extra                 |
+----+-------------+-------+------------+-------+---------------+-----------+---------+------+------+----------+-----------------------+
|  1 | SIMPLE      | t1    | NULL       | range | idx_f1_f2     | idx_f1_f2 | 5       | NULL |    8 |   100.00 | Using index condition |
+----+-------------+-------+------------+-------+---------------+-----------+---------+------+------+----------+-----------------------+
1 row in set, 1 warning (0.00 sec)

If you checked the GitHub link above, you might have noted that the header file only contains three of the counters. This is because icp_attempts is computed as the sum of the rest. As expected, icp_match equals the number of returned rows, which makes sense. icp_no_match should also make sense if we check the amount of rows present without the WHERE conditions on f2.

mysql [localhost] {msandbox} (test) > SELECT * FROM t1 WHERE f1 < 3;
+------+------+------+
| f1   | f2   | f3   |
+------+------+------+
|    1 |    1 |    1 |
|    1 |    2 |    1 |
|    1 |    3 |    1 |
|    1 |    4 |    1 |
|    2 |    1 |    1 |
|    2 |    2 |    1 |
|    2 |    3 |    1 |
|    2 |    4 |    1 |
+------+------+------+
8 rows in set (0.00 sec)

So, 8 – 2 = 6, which is exactly icp_no_match‘s value. Finally, we are left with icp_out_of_range. For each end of range the ICP scan detects, this counter is incremented by one. We only scanned one range in the previous query, so let’s try something more interesting (scanning three ranges):

mysql [localhost] {msandbox} (test) > SET GLOBAL innodb_monitor_reset = module_icp;
Query OK, 0 rows affected (0.00 sec)

mysql [localhost] {msandbox} (test) > SELECT * FROM t1 WHERE ((f1 < 2) OR (f1 > 4 AND f1 < 6) OR (f1 > 8 AND f1 < 12)) AND (f2 % 4) = 1;
+------+------+------+
| f1   | f2   | f3   |
+------+------+------+
|    1 |    1 |    1 |
|    5 |    1 |    1 |
|    9 |    1 |    1 |
|   10 |    1 |    1 |
|   11 |    1 |    1 |
+------+------+------+
5 rows in set (0.00 sec)

mysql [localhost] {msandbox} (test) > SELECT NAME, COUNT_RESET FROM information_schema.INNODB_METRICS WHERE NAME LIKE 'icp%';
+------------------+-------------+
| NAME             | COUNT_RESET |
+------------------+-------------+
| icp_attempts     |          23 |
| icp_no_match     |          15 |
| icp_out_of_range |           3 |
| icp_match        |           5 |
+------------------+-------------+
4 rows in set (0.01 sec)

We have now scanned three ranges on f1, namely: (f1 < 2), (4 < f1 < 6) and (8 < f1 < 12). This is correctly reflected in the corresponding counter. Remember that the MariaDB Handler_icp_attempts status counter we looked at in the previous post does not take into account the out-of-range counts. This means the two “attempts” counters will not be the same!

mysql [localhost] {msandbox} (test) > SET GLOBAL innodb_monitor_reset = module_icp; SET GLOBAL innodb_monitor_reset = dml_reads; FLUSH STATUS;
...
mysql [localhost] {msandbox} (test) > SELECT * FROM t1 WHERE ((f1 < 2) OR (f1 > 4 AND f1 < 6) OR (f1 > 8 AND f1 < 12)) AND (f2 % 4) = 1;
...
5 rows in set (0.00 sec)

mysql [localhost] {msandbox} (test) > SELECT NAME, COUNT_RESET FROM information_schema.INNODB_METRICS WHERE NAME LIKE 'icp_attempts';
+--------------+-------------+
| NAME         | COUNT_RESET |
+--------------+-------------+
| icp_attempts |          23 |
+--------------+-------------+
1 row in set (0.00 sec)

mysql [localhost] {msandbox} (test) > SHOW STATUS LIKE 'Handler_icp_attempts';
+----------------------+-------+
| Variable_name        | Value |
+----------------------+-------+
| Handler_icp_attempts | 20    |
+----------------------+-------+
1 row in set (0.00 sec)

It can be a bit confusing to have two counters that supposedly measure the same counts yielding different values, so watch this if you use MariaDB.

ICP Counters in PMM

Today you can find an ICP counters graph for MariaDB (Handler_icp_attempts) in PMM 1.1.3.

Additionally, in release 1.1.4 you’ll find graphs for ICP metrics from information_schema.INNODB_METRICS: just look for the INNODB_METRICS-based graph on the InnoDB Metrics dashboard!

I hope you found this blog post series useful! Let me know if you have any questions or comments below.

mysql_config_editor missing on 5.7

Latest Forum Posts - May 22, 2017 - 11:09am
Any idea how I can get the mysql_config_editor command in Percona Server 5.7? I have installed libmysqlclient-dev but I still don't see it.

# dpkg -l | grep -i percona
ii percona-release 0.1-4.xenial all Package to install Percona gpg key and APT repo
ii percona-server-client-5.7 5.7.18-14-1.xenial amd64 Percona Server database client binaries
ii percona-server-common-5.7 5.7.18-14-1.xenial amd64 Percona Server database common files (e.g. /etc/mysql/my.cnf)
ii percona-server-server-5.7 5.7.18-14-1.xenial amd64 Percona Server database server binaries

# dpkg -l | grep -i mysql
ii libmysqlclient-dev 5.7.18-0ubuntu0.16.04.1 amd64 MySQL database development files
ii libmysqlclient20:amd64 5.7.18-0ubuntu0.16.04.1 amd64 MySQL database client library
ii mysql-common 5.7.18-0ubuntu0.16.04.1 all MySQL database common files, e.g. /etc/mysql/my.cnf
ii percona-server-common-5.7 5.7.18-14-1.xenial amd64 Percona Server database common files (e.g. /etc/mysql/my.cnf)
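
A hedged way to narrow this down is to check which installed package, if any, actually ships the binary (package names as listed above):

# Does the Percona client package contain mysql_config_editor?
dpkg -L percona-server-client-5.7 | grep -i config_editor
# Which installed package, if any, owns a file with that name?
dpkg -S mysql_config_editor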

QAN window not logging db queries running on RDS Aurora

Latest Forum Posts - May 22, 2017 - 10:36am
Hi,
I installed PMM 1.1.3 on an EC2 instance and added Aurora clusters to collect MySQL metrics.
I used the below command to install pmm-server,
sudo docker run -d -p 80:80 --volumes-from pmm-data --name pmm-server -e METRICS_RETENTION=360h -e METRICS_RESOLUTION=5s --restart always percona/pmm-server:1.1.3
performance_schema and slow_log are ON, and QAN is using performance_schema. I'm not seeing the actual DB queries running on the Aurora instance in the QAN window; it only logs the queries run by pmm-server itself, i.e. the queries that collect the metrics. Grafana is working fine.
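
One thing worth double-checking (a sketch, not a confirmed fix; the endpoint, credentials, and instance name are placeholders, and the flags should be verified against pmm-admin add mysql:queries --help): that the mysql:queries service was added against the Aurora endpoint itself, with Performance Schema as the query source, rather than against a local instance.

# Hedged sketch: register QAN against the Aurora endpoint using Performance Schema.
sudo pmm-admin add mysql:queries --query-source=perfschema \
    --host your-aurora-endpoint.rds.amazonaws.com --user pmm --password secret aurora-node1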

Webinar May 23, 2017: MongoDB Monitoring and Performance for the Savvy DBA

Latest MySQL Performance Blog posts - May 22, 2017 - 9:54am

Join Percona’s Senior Technical Services Engineer Bimal Kharel on Tuesday, May 23, 2017, as he presents a webinar on MongoDB monitoring called How to Help Your DBA’s Sleep Better at Night at 10:00 am PDT / 1:00 pm EDT (UTC-7).

Register Now

Are you trying to stay on top of your database before things turn ugly? Between metrics for throughput, database performance, resource utilization, resource saturation, errors (asserts) and many others, how do you know which one needs to be looked at NOW (and which can wait)?

Both DBAs and system admins must stay on top of the systems they manage. But filtering between metrics that need immediate attention and those that should be watched over time is challenging. In this webinar, Bimal narrows down the list of metrics that help you decide whether the on-call DBA gets their recommended eight hours of shuteye, or gets to run on caffeine with no sleep.

Bimal also discusses which graphs relate to each other, with examples from Percona’s Monitoring and Management (PMM) tool, to help you understand how things in MongoDB can impact other areas.

Please register for the webinar here.

Bimal Kharel, Senior Technical Services Engineer, Percona

Bimal is a MongoDB support engineer at Percona. Before Percona he worked as a MongoDB DBA at EA and Charles Schwab. He has been in various roles throughout his career, from graphics to web developer to systems administration. MongoDB was the first database Bimal got into (he used MySQL for some websites but never other relational databases).

Script to monitor and bootstrap a cluster

Latest Forum Posts - May 22, 2017 - 4:31am
At the moment when I need to bootstrap a cluster, what I need to do is:
  • Make sure everything is down (including my monit)
  • Check the grastate.dat to find which one has the maximum sequence number
  • Set the safe_to_bootstrap flag
  • /etc/init.d/mysql bootstrap-pxc
  • Wait for that to come up.
  • If it was successful, start the other nodes one at a time, waiting for the SST to complete on each
Has anyone done any automation of this kind of thing?
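
A minimal, hedged sketch of the seqno check and bootstrap steps above, to be run on each node (it assumes the default datadir /var/lib/mysql and init-script management; compare the seqno output across nodes by hand and bootstrap only the highest one):

# Show this node's last committed seqno and safe_to_bootstrap flag.
grep -E 'seqno|safe_to_bootstrap' /var/lib/mysql/grastate.dat
# On the node with the highest seqno only: mark it safe to bootstrap and start it.
sed -i 's/^safe_to_bootstrap: 0/safe_to_bootstrap: 1/' /var/lib/mysql/grastate.dat
/etc/init.d/mysql bootstrap-pxc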

Cluster hang with wsrep: initiating replication for write set

Latest Forum Posts - May 21, 2017 - 11:34pm
I have a 3 node cluster. This morning the cluster wasn't working. In SHOW PROCESSLIST, 2 of the 3 nodes had many instances of requests showing:

wsrep: initiating replication for write set

I had to shut the cluster down and rebootstrap. Any suggestions?

Percona Server for MongoDB 3.2.13-3.3 is Now Available

Latest Forum Posts - May 21, 2017 - 11:24pm
Percona announces the release of Percona Server for MongoDB 3.2.13-3.3 on May 15, 2017. Download the latest version from the Percona web site or the Percona Software Repositories.



Percona Server for MongoDB is an enhanced, open-source, fully compatible, highly scalable, zero-maintenance downtime database supporting the MongoDB v3.2 protocol and drivers. It extends MongoDB with MongoRocks, Percona Memory Engine, and PerconaFT storage engine, as well as enterprise-grade features like External Authentication, Audit Logging, Profiling Rate Limiting, and Hot Backup at no extra cost. Percona Server for MongoDB requires no changes to MongoDB applications or code.

NOTE: We deprecated the PerconaFT storage engine. It will not be available in future releases.

This release is based on MongoDB 3.2.13 and includes the following additional changes:
  • #PSMDB-127: Fixed cleanup of deleted documents and indexes for MongoRocks. When you upgrade to this release, deferred compaction may occur and cause database size to decrease significantly.
  • #PSMDB-133: Added the wiredTigerCheckpointSizeMB variable, set to 1000 in the configuration template for WiredTiger. Valid values are 32 to 2048 (2GB), with the latter being default.
  • #PSMDB-138: Implemented SERVER-23418 for MongoRocks.
Percona Server for MongoDB 3.2.13-3.3 release notes are available in the official documentation.