
Percona Monitoring and Management (PMM) Graphs Explained: MongoDB MMAPv1

February 28, 2017 - 3:28pm

This post is part of the series of Percona’s MongoDB 3.4 bundle release blogs. In this blog post, I hope to cover some areas to watch with Percona Monitoring and Management (PMM) when running MMAPv1. The graph examples from this article are from the MMAPv1 dashboard that will be released for the first time in PMM 1.1.2.

The MMAPv1 storage engine has existed since the very beginning of MongoDB. MongoDB 3.0 added a pluggable storage engine API; before that, MMAPv1 was the only option. While MMAPv1 often offers good read performance, it has become famous for its poor write performance and fragmentation at scale. This means there are many areas to watch regarding performance and monitoring.

Percona Monitoring and Management (PMM)

Percona Monitoring and Management (PMM) is an open-source platform for managing and monitoring MySQL and MongoDB. It was developed by Percona on top of open-source technology. Behind the scenes, the graphing features this article covers use Prometheus (a popular time-series data store), Grafana (a popular visualization tool), mongodb_exporter (our MongoDB database metric exporter) plus other technologies to provide database and operating system metric graphs for your database instances.

(Beware of) MMAPv1

mmap() is a system-level call that causes the operating system kernel to map on-disk files into memory while they are being read and written by a program.
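As an illustration (using Python's `mmap` module, which wraps the same system call), a file mapped this way can be read and written as if it were plain memory:

```python
import mmap
import os
import tempfile

# Create a 4KB data file, then map it into memory the same way MMAPv1
# maps its data files: writes land in the OS page cache and the kernel
# flushes the dirty pages back to disk.
fd, path = tempfile.mkstemp()
os.write(fd, b"\x00" * 4096)
os.close(fd)

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 4096)  # file is now addressable as memory
    mm[0:5] = b"hello"                # a plain memory write...
    mm.flush()                        # ...persisted to the file by the kernel
    mm.close()

with open(path, "rb") as f:
    data = f.read(5)
print(data)  # b'hello'
os.remove(path)
```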

As mmap() is a core feature of the Unix/Linux operating system kernel (and not the MongoDB code base), I’ve always felt that calling MMAPv1 a “storage engine” is quite misleading, although it does allow for a simpler explanation. The distinction and drawbacks of the storage logic being in the operating system kernel vs. the actual database code (like most database storage engines) becomes very important when monitoring MMAPv1.

As Unix/Linux are general-purpose operating systems that can have many processes, users and use cases, they offer limited OS-level metrics on the activity, latency and performance of mmap(). Those metrics cover the entire operating system, not just the MongoDB processes.

mmap() uses memory from available OS-level buffers/caches for mapping the MMAPv1 data to RAM — memory that can be “stolen” away by any other operating system process that asks for it. As many deployments “micro-shard” MMAPv1 to reduce write locks, this statement can become exponentially more important. If 3 x MongoDB instances run on a single host, the kernel fights to cache and evict memory pages created by 3 x different instances with no priority or queuing, essentially at random, while creating contention. This causes inefficiencies and less-meaningful monitoring values.

When monitoring MMAPv1, you should consider MongoDB AND the operating system as one “component” more than most engines. Due to this, it is critical that a database host runs a single MongoDB instance with no other processes except database monitoring tools such as PMM’s client. This allows MongoDB to be the only user of the operating system filesystem cache that MMAPv1 relies on. This also makes OS-level memory metrics more accurate because MongoDB is the only user of memory. If you need to “micro-shard” instances, I recommend using containers (Docker or plain cgroups) or virtualization to separate your memory for each MongoDB instance, with just one MongoDB instance per container.


MMAPv1 has locks for both reads and writes. In the early days, the lock was global only; locking became per-database in v2.2 and per-collection in v3.0.

Locking is the leading cause of the performance issues we see on MMAPv1 systems, particularly write locking. To measure how much locking an MMAPv1 instance is waiting on, first we look at the “MMAPv1 Lock Ratio”:

Another important metric to watch is “MongoDB Lock Wait Time”, which breaks down the amount of time operations spend waiting on locks:
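As a rough illustration of what a lock ratio means (not PMM's exact Prometheus query), you can derive one yourself from two samples of the `lockTime`/`totalTime` counters that the MMAPv1-era `db.serverStatus().globalLock` section reports in microseconds:

```python
# Hypothetical counter samples taken 60 seconds apart; lockTime and
# totalTime are reported in microseconds by serverStatus().globalLock.
sample_t0 = {"lockTime": 1_000_000, "totalTime": 600_000_000}
sample_t1 = {"lockTime": 7_000_000, "totalTime": 660_000_000}

lock_delta = sample_t1["lockTime"] - sample_t0["lockTime"]
total_delta = sample_t1["totalTime"] - sample_t0["totalTime"]

# Fraction of wall-clock time the instance spent holding the global lock.
lock_ratio_pct = 100.0 * lock_delta / total_delta
print(f"{lock_ratio_pct:.1f}%")  # 10.0%
```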

Three factors in combination influence locking:

  1. Data hotspots — if every query hits the same collection or database, locking increases
  2. Query performance — a lock is held for the duration of an operation; if that operation is slow, lock time increases
  3. Volume of queries — self-explanatory

Page Faults

Page faults happen when MMAPv1 data is not available in the cache and needs to be fetched from disk. On systems with data that is smaller than memory, page faults usually only occur on reboot, or if the filesystem cache is dumped. On systems where data exceeds memory, this happens more frequently: MongoDB is being asked for data that is not in memory.

How often this happens depends on how your application accesses your data. If it accesses new or frequently-queried data, it is more likely to be in memory. If it accesses old or infrequent data, more page faults occur.

If page faults suddenly start occurring, check to see if your data set has grown beyond the size of memory. You may be able to reduce your data set by removing fragmentation (explained later).


As MMAPv1 eventually flushes changes to disk in batches, journaling is essential for running MongoDB with any real data integrity guarantees. As well as being included in the lock statistic graphs mentioned above, there are some good metrics for journaling (which is a heavy consumer of disk writes).

Here we have “MMAPv1 Journal Write Activity”, showing the data rates of journaling (max 19MB/sec):

“MMAPv1 Journal Commit Activity” measures the commits to the journal ops/second:

A very useful metric for write query performance is “MMAPv1 Journaling Time” (there is another graph with 99th percentile times):

This is important to watch, as write operations need to wait for a journal commit. In the above example, “write_to_journal” and “write_to_data_files” are the main metrics I tend to look at. “write_to_journal” is the rate of changes being written to the journal, and “write_to_data_files” is the rate that changes are written to on-disk data.

If you see very high journal write times, you may need faster disks or, in sharding scenarios, more shards: adding shards spreads out the disk write load.

Background Flushing

“MMAPv1 Background Flushing Time” graphs the background operation that calls flushes to disk:

This process does not block the database, but does cause more disk activity.


Due to the way MMAPv1 writes to disk, it creates a high rate of fragmentation (or holes) in its data files. Fragmentation slows down scan operations, wastes some filesystem cache memory and can use much more disk space than the actual data requires. On many systems I’ve seen, the MMAPv1 data files on disk take up over twice the true data size.

Currently, our Percona Monitoring and Management MMAPv1 support does not track this, but we plan to add it in the future.

To track it manually, look at the output of the “.stats()” command for a given collection (replace “sbtest1” with your collection name):

> 1 - ( db.sbtest1.stats().size / db.sbtest1.stats().storageSize )
0.14085410557184752

Here we can see this collection is about 14% fragmented on disk. The most common fix for fragmentation is dropping and recreating the collection from a backup. Many people simply remove a replica set member, clear its data and let it perform a new initial sync.
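For scripting the same check outside the shell, here is a small standalone sketch using example `size`/`storageSize` values of the shape `.stats()` returns:

```python
# Example stats as returned by db.<collection>.stats() (values in bytes);
# the numbers below are made up for illustration.
stats = {"size": 858_993_459, "storageSize": 1_000_000_000}

# Fraction of on-disk storage not occupied by live data.
fragmentation = 1 - stats["size"] / stats["storageSize"]
# Space that a drop-and-rebuild could reclaim, at best.
reclaimable = stats["storageSize"] - stats["size"]

print(f"fragmented: {fragmentation:.1%}")  # fragmented: 14.1%
print(f"reclaimable: {reclaimable} bytes")
```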

Operating System Memory

In PMM we have graphed the operating system cached memory as it acts as the primary cache for MMAPv1:

For the most part, “Cached” shows the amount of cached MMAPv1 data (assuming the host is only running MongoDB).

We also graph the dirty memory pages:

It is important that dirty pages do not exceed the hard dirty page limit (which causes pauses). It is also important that dirty pages don’t accumulate (which wastes cache memory). The “soft” dirty page limit is the limit that starts dirty page cleanup without pausing.
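You can sanity-check a host by hand against `/proc/meminfo`. This sketch parses a sample of that output; the soft and hard limits correspond to the `vm.dirty_background_ratio` and `vm.dirty_ratio` sysctls, shown here with common defaults of 10 and 20 (your host may differ):

```python
# Sample /proc/meminfo lines (values in kB); on a real host, read the file
# with open("/proc/meminfo") instead of using this canned text.
meminfo_text = """\
MemTotal:       16384000 kB
Dirty:            163840 kB
Cached:          8192000 kB
"""

meminfo = {}
for line in meminfo_text.splitlines():
    key, value = line.split(":")
    meminfo[key] = int(value.split()[0])  # strip the "kB" unit

dirty_pct = 100.0 * meminfo["Dirty"] / meminfo["MemTotal"]
soft_limit, hard_limit = 10, 20  # vm.dirty_background_ratio / vm.dirty_ratio

print(f"dirty: {dirty_pct:.1f}% (soft {soft_limit}%, hard {hard_limit}%)")
```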

On this host, you could probably lower the soft limit to clean up memory faster, assuming the increase in disk activity is acceptable.

What’s Missing?

As mentioned earlier, fragmentation rates are missing for MMAPv1 (this would be a useful addition). Due to the limited nature of the metrics offered for MMAPv1, PMM probably won’t provide the same level of graphs for MMAPv1 compared to what we provide for WiredTiger or RocksDB. There will likely be fewer additions to the graphing capabilities going forward.

If you are using a highly concurrent system, we highly recommend you upgrade to WiredTiger or RocksDB (both also covered in this monitoring series). These engines provide several solutions to MMAPv1 headaches: document-level locking, built-in compression, checkpointing that causes near-zero fragmentation on disk and much-improved visibility for monitoring. We just released Percona Server for MongoDB 3.4, and it provides many exciting features (including these engines).

Look out for more monitoring posts from this series!

Webinar Thursday March 2, 2017: MongoDB Query Patterns

February 27, 2017 - 5:06pm

Join Percona’s Senior Technical Services Engineer Adamo Tonete on Thursday, March 2, 2017, at 11:00 a.m. PST / 2:00 p.m. EST (UTC-8) as he reviews and discusses MongoDB® query patterns.

Register Now

MongoDB is a fast and simple-to-query schema-free database. It features a smart query optimizer that tries to use the easiest data retrieval method.

In this webinar, Adamo will discuss common query operators and how to use them effectively. The webinar will cover not only common query operations, but also the best practices for their usage.

Register for the webinar here.

Adamo Tonete, Senior Technical Services Engineer

Adamo joined Percona in 2015, after working as a MongoDB/MySQL Database Administrator for three years. As the main database member of a startup, he was responsible for suggesting the best architecture and data flows for a worldwide company in a 24/7 environment. Before that, he worked as a Microsoft SQL Server DBA for a large e-commerce company, mainly on performance tuning and automation. Adamo has almost eight years of experience working as a DBA, and in the past three years, he has moved to NoSQL technologies without giving up relational databases.

MySQL Ransomware: Open Source Database Security Part 3

February 27, 2017 - 2:28pm

This blog post examines the recent MySQL® ransomware attacks, and what open source database security best practices could have prevented them.

Unless you’ve been living under a rock, you know that there has been an uptick in ransomware for MongoDB and Elasticsearch deployments. Recently, we’re seeing the same for MySQL.

Let’s look and see if this is MySQL’s fault.

Other Ransomware Targets

Let’s briefly touch on how Elasticsearch and MongoDB became easy targets…


Elasticsearch® does not implement any access control: neither authentication nor authorization. For this, you need to deploy Elastic’s Shield offering. As such, if you have an Elasticsearch deployment that is addressable from the Internet, you’re asking for trouble. We see many deployments with some authentication around their access, such as HTTP Basic Auth – though sadly, some don’t employ authentication or network isolation. We already wrote a blog about this here.


MongoDB (< 2.6.0) does allow for access control through account creation, but it binds to 0.0.0.0 by default (allowing access from anywhere). This changed in /etc/mongod.conf in versions >= 2.6.0. Often administrators don’t realize this, or don’t know to look for it. (Using MongoDB? My colleague David Murphy wrote a post on this issue here.)

We began to see incidents where both Elasticsearch and MongoDB had their datasets removed and replaced with a README/note instructing the user to pay a ransom of 0.2BTC (Bitcoin) to the specified wallet address (if they wanted their data back).


So is this latest (and similar) attack on MySQL actually MySQL’s fault? We don’t think so: by default, MySQL and Percona Server® for MySQL do not accept passwordless root authentication from arbitrary hosts.

Let’s go over the various security options MySQL has, and describe some other best practices in order to protect your environment.

Default bind_address=127.0.0.1 in Percona Server for MySQL

MySQL currently still binds to 0.0.0.0 (listening on all network interfaces) by default. However, Percona Server for MySQL and Percona XtraDB Cluster have a different default, and only bind on 127.0.0.1 in their default configuration (Github pull request).
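Rather than relying on whichever default your build ships, it is worth pinning the listen address explicitly; a minimal my.cnf fragment (adjust the address if remote application hosts must connect):

```ini
[mysqld]
# Listen only on the loopback interface; use a specific internal IP
# instead if remote application hosts genuinely need to connect.
bind_address = 127.0.0.1
```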

Recall, if you will, CVE-2012-2122. This ALONE should be enough to ensure that you, as the administrator, use best practices and ONLY allow access to the MySQL service from known good sources. Do not set up root-level or equivalent access from any host (% indicates any host is allowed). Ideally, you should only allow root access from localhost – or, if you must, from a subset of a secured network (e.g., 10.10.0.% would only allow access from hosts in the 10.10.0.0/24 network).

Prevent Access

Also, does the MySQL database really need a publicly accessible IP address? If you do have a valid reason for this, then you should firewall port 3306 and whitelist access only from hosts that need to access the database directly. You can easily use iptables for this.

Default Users

MySQL DOES NOT by default create accounts that can be exploited for access. Sadly, these come later through an administrator’s lack of understanding. More often than not, the grant will look something like the following:

GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '123456';

You may scoff at the above (and rightly so). However, don’t discount this just yet: “123456” was the MOST USED password in 2016! So it’s reasonable to assume that somewhere out there this is a reality.

Max Connection Errors

You can set max_connect_errors to a suitably low value to help mitigate a direct attack. This will not prevent a distributed attack, where many thousands of hosts are used; network isolation is the only way to ensure your mitigation against that attack vector.

MySQL 5.7 Improvements on Security Default Root Password

Since MySQL 5.7, a random password is generated for the only root user (root@localhost) when you install MySQL for the first time. That password is then written in the error log and has to be changed. Miguel Ángel blogged about this before.

Connection Control Plugin

MySQL 5.7.17 introduced a new open-source plugin called Connection Control. When enabled, it delays the authentication of users who have failed to log in more than three times (by default). This is also part of Percona Server for MySQL 5.7.17.
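The behaviour can be modelled roughly like this (a simplified sketch, not the plugin's exact formula; the defaults mirror `connection_control_failed_connections_threshold` = 3 and a minimum delay of 1000ms):

```python
def next_attempt_delay_ms(prior_failures: int,
                          threshold: int = 3,
                          min_delay_ms: int = 1000) -> int:
    """Delay (ms) before the next login attempt, in a simplified model."""
    if prior_failures < threshold:
        return 0
    # One second per failure beyond the threshold, starting at the minimum.
    return max(min_delay_ms, (prior_failures - threshold + 1) * 1000)

print(next_attempt_delay_ms(2))  # 0 -> below threshold, no delay yet
print(next_attempt_delay_ms(3))  # 1000 -> fourth consecutive try waits 1s
```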

Here’s an example where the 4th consecutive try caused a one-second delay (default settings were used):

$ time mysql -u bleh2 -pbleh
ERROR 1045 (28000): Access denied for user 'bleh2'@'localhost' (using password: YES)
real    0m0.009s
$ time mysql -u bleh2 -pbleh
ERROR 1045 (28000): Access denied for user 'bleh2'@'localhost' (using password: YES)
real    0m0.008s
$ time mysql -u bleh2 -pbleh
ERROR 1045 (28000): Access denied for user 'bleh2'@'localhost' (using password: YES)
real    0m0.008s
$ time mysql -u bleh2 -pbleh
ERROR 1045 (28000): Access denied for user 'bleh2'@'localhost' (using password: YES)
real    0m1.008s

mysql> SELECT * FROM INFORMATION_SCHEMA.CONNECTION_CONTROL_FAILED_LOGIN_ATTEMPTS;
+---------------------+-----------------+
| USERHOST            | FAILED_ATTEMPTS |
+---------------------+-----------------+
| 'bleh2'@'localhost' | 4               |
+---------------------+-----------------+
1 row in set (0.01 sec)

Password Validation Plugin

MySQL 5.6.6 and later versions also ship with a password validation plugin, which prevents creating users with unsafe passwords (such as 123456) by ensuring passwords meet certain criteria. At the default MEDIUM policy, that means a minimum length of eight characters with mixed case, at least one numeric character and one special character.


In order to get stung, one must ignore the best practices mentioned above (which in today’s world, should take some effort). These best practices include:

  1. Don’t use a publicly accessible IP address with no firewall configured
  2. Don’t use a root@% account, or other equally privileged access account, with poor MySQL isolation
  3. Don’t configure those privileged users with a weak password, allowing for brute force attacks against the MySQL service

Hopefully, these are helpful security tips for MySQL users. Comment below!

Percona Monitoring and Management (PMM) Graphs Explained: WiredTiger and Percona Memory Engine

February 27, 2017 - 9:34am

This post is part of the MongoDB 3.4 bundled release series of blog posts. In this blog, we’ll go over some useful metrics WiredTiger outputs and how we visualize them in Percona Monitoring and Management (PMM).

WiredTiger has been the default storage engine for MongoDB since version 3.2. The addition of this full-featured, comprehensive storage engine brought a lot of new, useful metrics that were not available with MMAPv1.

Percona Monitoring and Management (PMM)

Percona Monitoring and Management (PMM) is an open-source platform for managing and monitoring MySQL and MongoDB, developed by Percona on top of open-source technology. Behind the scenes, the graphing features this article covers use Prometheus (a popular time-series data store), Grafana (a popular visualization tool), mongodb_exporter (our MongoDB database metric exporter) plus other technologies to provide database and operating system metric graphs for your database instances.

Please see a live demo of our PMM 1.1.1 release of the MongoDB WiredTiger graphs covered in this article:

You can see a sneak peek demo of the Percona Memory Engine graphs we’ll release in PMM 1.1.2 here:

WiredTiger and Percona Memory Engine

WiredTiger is a storage engine that was developed outside of MongoDB, and was acquired and integrated into MongoDB in version 3.0. WiredTiger offers document-level locking, inline compression and many other useful storage engine features. WiredTiger writes data to disk in “checkpoints” and internally uses Multi-Version Concurrency Control (MVCC) to create “transactions” or “snapshots” when accessing data in the engine. In WiredTiger’s metrics, you will see the term “transactions” used often. It is important to note, however, that MongoDB does not support transactions at this time (this only occurs within the storage engine).

WiredTiger has an in-heap cache for mostly uncompressed pages (50% RAM by default). Like many other engines, it relies on the performance of the Linux filesystem cache, which ends up caching hot, compressed WiredTiger disk blocks.

Besides supporting WiredTiger, Percona Server for MongoDB also ships with a free, open-source in-memory storage engine: Percona Memory Engine for MongoDB. Since we based the Memory Engine on WiredTiger, all graphs and troubleshooting techniques for in-memory are essentially the same (the database data is not stored on disk, of course).

Checkpointing Graphs

WiredTiger checkpoints data to disk every 60 seconds, or after writing 2GB of journaled data.
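In sketch form, the trigger condition amounts to whichever limit is hit first (the 60-second and 2GB figures are WiredTiger's defaults):

```python
def should_checkpoint(seconds_since_last: float,
                      journal_bytes_written: int,
                      wait_secs: int = 60,
                      log_size_bytes: int = 2 * 1024**3) -> bool:
    """WiredTiger-style checkpoint trigger: whichever limit is hit first."""
    return (seconds_since_last >= wait_secs
            or journal_bytes_written >= log_size_bytes)

print(should_checkpoint(30, 500 * 1024**2))  # False: neither limit reached
print(should_checkpoint(30, 2 * 1024**3))    # True: 2GB of journal written
print(should_checkpoint(61, 0))              # True: 60 seconds elapsed
```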

PMM graphs the current minimum and maximum checkpoint times for WiredTiger checkpoints in the “WiredTiger Checkpoint Time” graph:

Above I have selected “current,” and we can see we have an average of 176ms checkpoints and over a long period it remains flat, not worsening or “snowballing” each checkpoint (which may indicate a performance issue).

Checkpointing is important to watch because it requires WiredTiger to use system resources, and it can also affect query performance in a possibly unexpected way — WiredTiger Cache dirty pages:

The WiredTiger Cache is an LRU cache of mostly uncompressed pages. Like most caches, it creates dirty pages that take up useful memory until they are flushed. The WiredTiger Cache clears dirty pages at checkpoints, which makes the relationship between dirty pages and checkpointing important to note: if checkpoint performance is slow, WiredTiger cleans dirty pages less often, and they can slowly consume more and more of the available cache memory.

In the above graph, we can see on average about 8.8% of the cache is dirty pages with spikes up/down aligning with checkpointing. Systems with a very high rate of dirty pages benefit from more RAM to provide more room for “clean” pages. Another option could be improving storage performance, so checkpoints happen faster.

Concurrency Graph

Similar to InnoDB, WiredTiger uses a system of tickets to control concurrency. Where things differ from InnoDB is that “reads” and “writes” each have their own ticket pool with its own maximum-ticket limit. The default of 128 tickets each for read and write concurrency is generally enough for even medium-to-high usage systems. Some systems are capable of more than the default concurrency limit, however (usually systems with very fast storage). Also, adjusting concurrency can sometimes reduce overhead on network-based storage.
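Conceptually, the two ticket pools behave like a pair of counting semaphores. Here is a simplified model (not MongoDB's implementation) with the default limit of 128:

```python
import threading

class TicketPool:
    """A simplified model of one of WiredTiger's read/write ticket pools."""

    def __init__(self, tickets: int = 128):
        self.limit = tickets
        self._sem = threading.BoundedSemaphore(tickets)
        self.in_use = 0

    def acquire(self) -> bool:
        # A full pool means new operations must queue; here we fail fast
        # instead to keep the sketch simple.
        got = self._sem.acquire(blocking=False)
        if got:
            self.in_use += 1
        return got

    def release(self) -> None:
        self.in_use -= 1
        self._sem.release()

reads, writes = TicketPool(), TicketPool()
for _ in range(5):
    reads.acquire()  # e.g. five concurrent read operations in flight

print(f"read tickets in use: {reads.in_use}/{reads.limit}")    # 5/128
print(f"write tickets in use: {writes.in_use}/{writes.limit}") # 0/128
```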

If you notice higher ticket usage, it can sometimes be due to a lot of single-document locking in WiredTiger. This is something to check if you see high rates alongside storage performance and general query efficiency.

In Percona Monitoring and Management, we have the “WiredTiger Concurrent Transactions” graph to visualize the usage of the tickets. In most cases, tickets shouldn’t reach the limit and you shouldn’t need to tweak this tuneable. If you do require more concurrency, however, PMM’s graphing helps indicate when limits are being reached and whether a new limit will mitigate the problem.

Here we can see a max usage of 8/128 write tickets and 5/128 read tickets. This means this system isn’t having any concurrency issues.

Throughput Graphs

There are several WiredTiger graphs to explain the rate of data moving through the engine. As storage is a common bottleneck, I generally look at “WiredTiger Block Activity” first when investigating storage resource usage. This graph shows the total rates written and read to/from storage by WiredTiger (disk for WiredTiger, memory for in-memory).

For correlation, there are also rates for the amount of data written from and read into the WiredTiger cache, from disk. The “read” metric shows the rate of data added to the cache due to query patterns (e.g.: scanning), while the “written” metric shows the rate of data written out to storage from the WiredTiger cache.

Also, there are rates to explain the I/O caused by the WiredTiger Log. The metric “payload” is essentially the write rate of raw BSON pages, and “written” is the combined total of log bytes written (including overhead, likely the framing around the payload, etc.). You should watch changes to the average rate of “read” carefully, as they may indicate changes in query patterns or efficiency.

Detailed Cache Graphs

In addition to the Dirty Pages in the cache graph, “WiredTiger Cache Capacity” graphs the size and usage of the WiredTiger cache:

The rate of cache eviction is graphed in “WiredTiger Cache Eviction,” with a break down of modified vs. unmodified pages:

Very large spikes in eviction can indicate collection scanning or generally poor performing queries. This pushes data out of caches. You should avoid high rates of cache evictions, as they can cause a high overhead to the overall engine.

When increasing the size of the WiredTiger cache it is useful to look at both of the above cache graphs. You should look for more “Used” memory in the “WiredTiger Cache Capacity” graph and less rate of eviction in the “WiredTiger Cache Eviction” graph. If you do not see changes to these metrics, you may see better performance leaving the cache size as-is.

Transactions and Document Operations

The “WiredTiger Transactions” graph shows the overall operations happening inside the engine. All transactions start with a “begin,” and operations that changed data end with a “commit.” Read-only operations show a “rollback” at the time they returned data:

This graph above correlates nicely with the “Mongod – Document Activity” graph, which shows the rate of operations from the MongoDB-layer perspective instead of the storage engine level:

Detailed Log Graphs

The graph “WiredTiger Log Operations” explains activity inside the WiredTiger Log system:

Also, the rate of log record compression is graphed as “WiredTiger Log Records.” WiredTiger only compresses log operations that are greater than 128 bytes, which explains why some log records are not compressed:

In some cases, changes in the ratio of compressed vs. uncompressed pages may help explain changes in CPU% used.

What’s Missing?

As you’ll see in my other blog post “Percona Monitoring and Management (PMM) Graphs Explained: MongoDB with RocksDB” from this series, RocksDB includes read latency metrics and a hit ratio for the RocksDB block cache. These are two things I would like to see added to WiredTiger’s metric output, and thus PMM. I would also like to improve the user-experience of this dashboard. Some areas use linear-scaled graphs when a logarithmic-scaled graph could provide more value. “WiredTiger Concurrent Transactions” is one example of this.

A known mystery (so to speak) is why WiredTiger always reports the cache “percentage overhead” as 8% in “db.serverStatus().cache.” We added this metric to PMM as a graph named “WiredTiger Cache Overhead,” assuming it provided a variable overhead metric. However, I’ve seen that it returns 8% regardless of usage: it is 8% on a busy system or even on an empty system with no data or traffic. We’re aware of this and plan to investigate, as a hit ratio for the cache is a very valuable metric:

Also, if you’ve ever seen the full output of the WiredTiger status metrics (‘db.serverStatus().wiredTiger’ in Mongo shell), you’ll know that there are a LOT more WiredTiger metrics than are currently graphed in Percona Monitoring and Management. In our initial release, we’ve aimed to only include high-value graphs to simplify monitoring WiredTiger. A major barrier in our development of monitoring features for WiredTiger has been the little-to-no documentation on the meaning of many status metrics. I hope this improves with time. As we understand more correlations and useful metrics to determine the health of WiredTiger, we plan to integrate those into Percona Monitoring and Management in the future. As always, we appreciate your suggestions.

Lastly, look out for an upcoming blog post from this series regarding creating custom dashboards, graphs and raw data queries with Percona Monitoring and Management!

Installing Percona Monitoring and Management (PMM) for the First Time

February 24, 2017 - 1:32pm

This post is another in the series on Percona’s MongoDB 3.4 bundle release. This post is meant to walk a prospective user through the benefits of Percona Monitoring and Management (PMM), how it’s architected and the simple install process. By the end of this post, you should have a good idea of what PMM is, where it can add value in your environment and how you can get PMM going quickly.

Percona Monitoring and Management (PMM) is Percona’s open-source tool for monitoring and alerting on database performance and the components that contribute to it. PMM monitors MySQL (Percona Server and MySQL CE), Amazon RDS/Aurora, MongoDB (Percona Server and MongoDB CE), Percona XtraDB/Galera Cluster, ProxySQL, and Linux.

What is it?

Percona Monitoring and Management is an amalgamation of exciting, best-in-class, open-source tools and Percona “engineering wizardry,” designed to make it easier to monitor and manage your environment. The real value to our users is the amount of time we’ve spent integrating the tools, plus the pre-built dashboards we’ve constructed that leverage our ten years of performance optimization experience. What you get is a tool that is ready to go out of the box and installs in minutes. If you’re still not convinced, like ALL Percona software, it’s completely FREE!

Sound good? I can hear you nodding your head. Let’s take a quick look at the architecture.

What’s it made of?

PMM, at a high-level, is made up of two basic components: the client and the server. The PMM Client is installed on the database servers themselves and is used to collect metrics. The client contains technology specific exporters (which collect and export data), and an “admin interface” (which makes the management of the PMM platform very simple). The PMM server is a “pre-integrated unit” (Docker, VM or AWS AMI) that contains four components that gather the metrics from the exporters on the PMM client(s). The PMM server contains Consul, Grafana, Prometheus and a Query Analytics Engine that Percona has developed. Here is a graphic from the architecture section of our documentation. In order to keep this post to a manageable length, please refer to that page if you’d like a more “in-depth” explanation.

How do I use it?

PMM is very easy to access once it has been installed (more on the install process below). Simply open the web browser of your choice and connect to the PMM landing page by typing http://<ip_address_of_PMM_server>. From the landing page, you can access all of PMM’s tools. If you’d like a look at the user experience, we’ve set up a great demo site so you can easily test it out.

Where should I use it?

There’s a good chance that you already have a monitoring/alerting platform for your production workloads. If not, you should set one up immediately and start analyzing trends in your environment. If you’re confident in your production monitoring solution, there is still a use for PMM in an often overlooked area: development and testing.

When speaking with users, we often hear that their development and test environments run their most demanding workloads. This is often due to stress testing and benchmarking. The goal of these workloads is usually to break something. This allows you to set expectations for normal, and thus abnormal, behavior in your production environment. Once you have a good idea of what’s “normal” and the critical factors involved, you can alert around those parameters to identify “abnormal” patterns before they cause user issues in production. The reason that monitoring is critical in your dev/test environment(s) is that you want to easily spot inflection points in your workload, which signal impending disaster. Dashboards are the easiest way for humans to consume and analyze this data.

Are you sold? Let’s get to the easiest part: installation.

How do you install it?

PMM is very easy to install and configure for two main reasons. The first is that the components (mentioned above) take some time to install, so we spent the time to integrate everything and ship it as a unit: one server install and a client install per host. The second is that we’re targeting customers looking to monitor MySQL and MongoDB installations for high-availability and performance. The fact that it’s a targeted solution makes pre-configuring it to monitor for best practices much easier. I believe we’ve all seen a particular solution that tries to do a little of everything, and thus actually does no particular thing well. This is the type of tool that we DO NOT want PMM to be. Now, onto the installation procedure.

There are four basic steps to get PMM monitoring your infrastructure. I do not want to recreate the Deployment Guide in order to maintain the future relevancy of this post. However, I’ll link to the relevant sections of the documentation so you can cut to the chase. Also, underneath each step, I’ll list some key takeaways that will save you time now and in the future.

  1. Install the integrated PMM server in the flavor of your choice (Docker, VM or AWS AMI)
    1. Percona recommends Docker to deploy PMM server as of v1.1
      1. As of right now, using Docker will make the PMM server upgrade experience seamless.
      2. Using the default version of Docker from your package manager may cause unexpected behavior. We recommend using the latest stable version from Docker’s repositories (instructions from Docker).
    2. PMM server AMI and VM are “experimental” in PMM v1.1
    3. When you open the “Metrics Monitor” for the first time, it will ask for credentials (user: admin pwd: admin).
  2. Install the PMM client on every database instance that you want to monitor.
    1. Install with your package manager for easier upgrades when a new version of PMM is released.
  3. Connect the PMM client to the PMM Server.
    1. Think of this step as sending configuration information from the client to the server. This means you are telling the client the address of the PMM server, not the other way around.
  4. Start data collection services on the PMM client.
    1. Collection services are enabled per database technology (MySQL, MongoDB, ProxySQL, etc.) on each database host.
    2. Make sure to set permissions for the PMM client to monitor the database; otherwise you will see errors like: Cannot connect to MySQL: Error 1045: Access denied for user ‘jon’@’localhost’ (using password: NO)
      1. Setting proper credentials uses this syntax: sudo pmm-admin add <service_type> --user xxxx --password xxxx
    3. There’s good information about PMM client options in the “Managing PMM Client” section of the documentation for advanced configurations/troubleshooting.
What’s next?

That’s really up to you, and what makes sense for your needs. However, here are a few suggestions to get the most out of PMM.

  1. Set up alerting in Grafana on the PMM server. This is still an experimental function in Grafana, but it works. I’d start with Barrett Chambers’ post on setting up email alerting, and refine it with  Peter Zaitsev’s post.
  2. Set up more hosts to test the full functionality of PMM. We have completely free, high-performance versions of MySQL, MongoDB, Percona XtraDB Cluster (PXC) and ProxySQL (for MySQL proxy/load balancing).
  3. Start load testing the database with benchmarking tools to build your troubleshooting skills. Try to break something to learn what troubling trends look like. When you find them, set up alerts to give you enough time to fix them.

Percona MongoDB 3.4 Bundle Release: Percona Server for MongoDB 3.4 Features Explored

February 23, 2017 - 1:36pm

This blog post continues the series on the Percona MongoDB 3.4 bundle release. This release includes Percona Server for MongoDB, Percona Monitoring and Management, and Percona Toolkit. In this post, we’ll look at the features included in Percona Server for MongoDB.

I apologize for the long blog post, but there is a good deal of important information to cover: not just what the new features are, but also why they are so important. I have tried to break this down into clear areas, covering as much as possible while also linking to further reading on these topics.

The first and biggest new feature for many people is the addition of collation in MongoDB. Wikipedia says about collation:

Collation is the assembly of written information into a standard order. Many systems of collation are based on numerical order or alphabetical order, or extensions and combinations thereof. Collation is a fundamental element of most office filing systems, library catalogs, and reference books.

What this is saying is that a collation is an ordering of characters for a given character set. Different languages order the alphabet differently, or even use entirely different base characters (as in many Asian, Middle Eastern and other non-Latin scripts). Collations are critical for multi-language support and for sorting non-English words in index ordering.
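The effect of a collation is easy to see even outside MongoDB. This Python sketch (an illustration of the concept, not MongoDB's implementation) contrasts plain code-point ordering with a case-insensitive ordering similar in spirit to a MongoDB collation with strength 2:

```python
words = ["apple", "Banana", "cherry"]

# Code-point order: every uppercase letter sorts before every
# lowercase letter, so "Banana" jumps to the front.
assert sorted(words) == ["Banana", "apple", "cherry"]

# A case-insensitive "collation": compare case-folded keys instead
# of raw code points, giving the order users usually expect.
assert sorted(words, key=str.casefold) == ["apple", "Banana", "cherry"]
```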

Sharding General

All members of a cluster are now aware of sharding (all members, the sharding set name, etc.). Because of this, the sharding.clusterRole option must be defined on all shard nodes, which is a new requirement.

Mongos processes MUST connect to 3.4 mongod instances (shard and config nodes); connecting to 3.2 and lower is not possible.

Config Servers

Balancer on Config Server PRIMARY

In MongoDB 3.4, the cluster balancer is moved from the mongos processes (any) to the config server PRIMARY member.

Moving to a config-server-based balancer has the following benefits:

Predictability: the balancer process is always the config server PRIMARY. Before 3.4, any mongos process could become the balancer, effectively chosen at random, which made troubleshooting difficult.

Lighter “mongos” process: the mongos/shard router benefits from being as light and thin as possible. This removes some code and potential for breakage from “mongos.”

Efficiency: config servers are dedicated nodes with very low resource utilization and, for the most part, no direct client traffic. Moving the balancer to the config server set moves usage away from the critical “router” processes.

Reliability: balancing relies on fewer components. Now the balancer can operate on the “config” database metadata locally, without the chance of network interruptions breaking balancing.

Config servers are a more permanent part of a cluster, unlikely to be scaled up/down or changed often, unlike “mongos” processes that may live on app hosts, etc.

Config Server Replica Set Required

In MongoDB 3.4, the former “mirror” config server strategy (SCCC) is no longer supported. This means all sharded clusters must use a replica-set-based set of config servers.

Using a replica-set based config server set has the following benefits:

Adding and removing config servers is greatly simplified.

Config servers have oplogs (useful for investigations).

Simplicity/Consistency: removing mirrored/SCCC config servers simplifies the high-level and code-level architecture.

Chunk Migration / Balancing Example


Parallel Migrations

Prior to MongoDB 3.4, the balancer could only perform a single chunk migration at any given time. When a chunk migrates, a “source” shard and a “destination” shard are chosen, and the balancer coordinates moving the chunks from the source to the destination. In a large cluster with many shards this is inefficient, because a migration involves only two shards while a cluster may contain tens or hundreds of them.

In MongoDB 3.4, the balancer can now perform many chunk migrations in parallel, as long as they do not involve the same source and destination shards. This means that in clusters with more than two shards, many chunk migrations can occur at the same time, provided they are mutually exclusive. The effective outcome is (number of shards / 2) - 1 maximum parallel migrations, a significant increase in the speed of the migration process.

For example, if you have ten shards, then 10 / 2 = 5 and 5 - 1 = 4, so you can have four concurrent moveChunk (balancing) operations.
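The arithmetic above is trivial to sketch in code. This uses the formula exactly as stated in this post (the function name is ours):

```python
def max_parallel_migrations(num_shards: int) -> int:
    """Maximum concurrent chunk migrations per the formula in this
    post: (number of shards / 2) - 1. Each migration ties up one
    source shard and one destination shard."""
    if num_shards < 2:
        return 0  # a migration needs at least a source and a destination
    return num_shards // 2 - 1

assert max_parallel_migrations(10) == 4  # the worked example above
```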

Tags and Zones

Sharding Zones supersede tag-aware sharding. The functionality is mostly unchanged; this is primarily a naming change plus some new helper functions.

New commands/shell-methods added:

addShardToZone / sh.addShardToZone().

removeShardFromZone / sh.removeShardFromZone().

updateZoneKeyRange / sh.updateZoneKeyRange() + sh.removeRangeFromZone().

You might recall that MongoDB has long supported the idea of shard and replication tags. They break into two main areas: hardware-aware tags and access-pattern tags. The idea behind hardware-aware tags was that you could have one shard with slow disks and, as data ages, a process that moves documents to a collection living on that shard (or pins specific ranges to that shard). Your other shards could then be faster (and more numerous) to better handle the high-speed processing of current data.

The other case is rooted more in replication, where you want to give BI and other reporting systems access to your data without harming your primary customer interactions. To do this, you could tag a node in a replica set as {reporting: true}, and all reporting queries would use this tag to avoid affecting the nodes handling user-generated work. Zones are this same idea, simplified into a better-understood term. For now, there is no major difference between the two, but it could be something to watch in the 3.6 and 3.8 MongoDB versions.


Replication

New “linearizable” Read Concern: reflects all successful writes issued with a “majority” write concern and acknowledged before the start of the read operation.

Adjustable Catchup for Newly Elected Primary: the time limit for a newly elected primary to catch up with the other replica set members that might have more recent writes.

Write Concern Majority Journal Default replset-config option: determines the behavior of the { w: "majority" } write concern if the write concern does not explicitly specify the journal option j.
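This option is a top-level field of the replica set configuration document. A minimal sketch of enabling it at initiation (the host names are placeholders; writeConcernMajorityJournalDefault is the 3.4 option discussed above):

```
rs.initiate({
  _id: "rs0",
  // With this set, { w: "majority" } writes wait for the journal
  // even when j is not specified explicitly.
  writeConcernMajorityJournalDefault: true,
  members: [
    { _id: 0, host: "mongo1.example.net:27017" },
    { _id: 1, host: "mongo2.example.net:27017" },
    { _id: 2, host: "mongo3.example.net:27017" }
  ]
})
```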

Initial-sync improvements:

The initial sync now builds indexes as the documents are copied.

Improvements to the retry logic make it more resilient to intermittent network failures.

Data Types

MongoDB 3.4 adds support for the decimal128 format with the new decimal data type. The decimal128 format supports numbers with up to 34 decimal digits (i.e., significant digits) and an exponent range of −6143 to +6144.

When performing comparisons among different numerical types, MongoDB conducts a comparison of the exact stored numerical values without first converting values to a common type.

Unlike the double data type, which only stores an approximation of the decimal values, the decimal data type stores the exact value. For example, a decimal NumberDecimal("9.99") has a precise value of 9.99, whereas a double 9.99 would have an approximate value of 9.9900000000000002131628….
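The exact-vs-approximate distinction is easy to demonstrate with Python's decimal module, which implements the same IEEE 754-2008 decimal arithmetic family (a sketch of the concept, not MongoDB code):

```python
from decimal import Decimal

# Binary doubles can only approximate most decimal fractions.
assert 0.1 + 0.2 != 0.3                 # the classic double surprise
print(format(9.99, ".20f"))             # 9.99000000000000021316...

# Decimal stores the exact decimal value, so arithmetic is exact.
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
assert Decimal("9.99") + Decimal("0.01") == Decimal("10")
```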

To test for the decimal type, use the $type operator with the literal “decimal” or the numeric code 19:

db.inventory.find( { price: { $type: "decimal" } } )

New Number Wrapper Object Type

db.inventory.insert( { _id: 1, item: "The Scream", price: NumberDecimal("9.99"), quantity: 4 } )

To use the new decimal data type with a MongoDB driver, an upgrade to a driver version that supports the feature is necessary.

Aggregation Changes

Stages

Recursive Search

MongoDB 3.4 introduces a stage to the aggregation pipeline that allows for recursive searches.
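Conceptually, the new stage is a breadth-first traversal: start from a value, find documents whose "connect to" field matches it, then repeat from those documents' "connect from" values. A minimal Python sketch of that idea (the collection and field names are illustrative, not MongoDB code):

```python
from collections import deque

# Tiny org chart: each document links to its manager via "reportsTo".
employees = [
    {"name": "Dev"},
    {"name": "Eliot", "reportsTo": "Dev"},
    {"name": "Andrew", "reportsTo": "Eliot"},
]

def graph_lookup(docs, start_with, connect_from, connect_to):
    """Breadth-first search in the spirit of $graphLookup: collect docs
    whose connect_to field matches, then follow their connect_from
    values, until no new documents match."""
    results, queue, seen = [], deque([start_with]), set()
    while queue:
        value = queue.popleft()
        if value is None or value in seen:
            continue
        seen.add(value)
        for doc in docs:
            if doc.get(connect_to) == value:
                results.append(doc)
                queue.append(doc.get(connect_from))
    return results

# Walk the reporting chain upward from "Eliot".
chain = graph_lookup(employees, "Eliot", "reportsTo", "name")
assert [d["name"] for d in chain] == ["Eliot", "Dev"]
```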

$graphLookup: Performs a recursive search on a collection. To each output document, adds a new array field that contains the traversal results of the recursive search for that document.

Faceted Search

Faceted search allows for the categorization of documents into classifications. For example, given a collection of inventory documents, you might want to classify items by a single category (such as price range), or by multiple groups (such as price range as well as, separately, department).

MongoDB 3.4 introduces stages to the aggregation pipeline that allow for faceted search.

$bucket: Categorizes or groups incoming documents into buckets that represent a range of values for a specified expression.

$bucketAuto: Categorizes or groups incoming documents into a specified number of buckets that constitute a range of values for a specified expression. MongoDB automatically determines the bucket boundaries.

$facet: Processes multiple pipelines on the input documents and outputs a document that contains the results of these pipelines. By specifying facet-related stages ($bucket, $bucketAuto, and $sortByCount) in these pipelines, $facet allows for multi-faceted search.

$sortByCount: Categorizes or groups incoming documents by a specified expression to compute the count for each group. Output documents are sorted in descending order by the count.
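The grouping rule behind $bucket is simple enough to sketch in a few lines of Python: each consecutive boundary pair defines a half-open range, identified in the output by its lower bound (the prices and boundaries below are made up):

```python
from bisect import bisect_right

def bucket(values, boundaries):
    """Group values into ranges [boundaries[i], boundaries[i+1]),
    keyed by the lower bound, roughly as $bucket does. Values outside
    all ranges are simply dropped in this sketch."""
    out = {b: [] for b in boundaries[:-1]}
    for v in values:
        i = bisect_right(boundaries, v) - 1
        if 0 <= i < len(boundaries) - 1:
            out[boundaries[i]].append(v)
    return out

prices = [2, 5, 12, 30]
assert bucket(prices, [0, 10, 20, 40]) == {0: [2, 5], 10: [12], 20: [30]}
```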

Reshaping Documents

MongoDB 3.4 introduces stages to the aggregation pipeline that facilitate replacing documents as well as adding new fields.

$addFields: Adds new fields to documents. The stage outputs documents that contain all existing fields from the input documents as well as the newly added fields.

$replaceRoot: Replaces a document with the specified document. You can specify a document embedded in the input document to promote the embedded document to the top level.

Count

MongoDB 3.4 introduces a new stage to the aggregation pipeline that facilitates counting documents.

$count: Returns a document that contains a count of the number of documents input to the stage.

Operators

Array Operators

$in: Returns a boolean that indicates if a specified value is in an array.

$indexOfArray: Searches an array for an occurrence of a specified value and returns the array index (zero-based) of the first occurrence.

$range: Returns an array whose elements are a generated sequence of numbers.

$reverseArray: Returns an output array whose elements are those of the input array but in reverse order.

$reduce: Takes an array as input and applies an expression to each item in the array to return the final result of the expression.

$zip: Returns an output array where each element is itself an array, consisting of the elements at the corresponding index position from each of the input arrays.

Date Operators

$isoDayOfWeek: Returns the ISO 8601 weekday number, ranging from 1 (Monday) to 7 (Sunday).

$isoWeek: Returns the ISO 8601 week number, which can range from 1 to 53. Week numbers start at 1 with the week (Monday through Sunday) that contains the year’s first Thursday.

$isoWeekYear: Returns the ISO 8601 year number, where the year starts on the Monday of week 1 (ISO 8601) and ends with the Sunday of the last week (ISO 8601).

String Operators

$indexOfBytes: Searches a string for an occurrence of a substring and returns the UTF-8 byte index (zero-based) of the first occurrence.

$indexOfCP: Searches a string for an occurrence of a substring and returns the UTF-8 code point index (zero-based) of the first occurrence.

$split: Splits a string by a specified delimiter into string components and returns an array of the string components.

$strLenBytes: Returns the number of UTF-8 bytes for a string.

$strLenCP: Returns the number of UTF-8 code points for a string.

$substrBytes: Returns the substring of a string. The substring starts with the character at the specified UTF-8 byte index (zero-based) in the string, for the length specified.

$substrCP: Returns the substring of a string. The substring starts with the character at the specified UTF-8 code point index (zero-based) in the string, for the length specified.

Others/Misc

Other new operators:

$switch: Evaluates, in sequential order, the case expressions of the specified branches, and enters the first branch for which the case expression evaluates to “true”.

$collStats: Returns statistics regarding a collection or view.

$type: Returns a string that specifies the BSON type of the argument.

$project: Adds support for field exclusion in the output document. Previously, you could only exclude the _id field in the stage.
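Several of the new array operators have direct Python analogues, which makes their semantics easy to pin down (a conceptual mapping, not MongoDB code):

```python
from functools import reduce

# $range: a generated sequence of numbers (start, end, step)
assert list(range(0, 10, 2)) == [0, 2, 4, 6, 8]

# $reverseArray: same elements, reverse order
assert list(reversed([1, 2, 3])) == [3, 2, 1]

# $zip: element-wise pairing of the input arrays
assert list(zip([1, 2], ["a", "b"])) == [(1, "a"), (2, "b")]

# $reduce: fold an expression across an array to a single result
assert reduce(lambda acc, x: acc + x, [1, 2, 3, 4], 0) == 10

# $indexOfArray: zero-based index of the first occurrence
assert ["a", "b", "c"].index("b") == 1
```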


Views

MongoDB 3.4 adds support for creating read-only views from existing collections or other views. To specify or define a view, MongoDB 3.4 introduces:

    • the viewOn and pipeline options to the existing create command:
      • db.runCommand( { create: <view>, viewOn: <source>, pipeline: <pipeline> } )
    • or if specifying a default collation for the view:
      • db.runCommand( { create: <view>, viewOn: <source>, pipeline: <pipeline>, collation: <collation> } )
    • and a corresponding  mongo shell helper db.createView():
      • db.createView(<view>, <source>, <pipeline>, <collation>)

For more information on creating views, see Views.
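Conceptually, a view is just a saved name plus a pipeline that is applied to the source collection on every read; nothing is materialized. A toy Python sketch of that behavior (all names are illustrative, not MongoDB internals):

```python
# A "collection" is a list of documents; a "view" stores only its
# source (viewOn) and a pipeline that runs lazily, on every read.
inventory = [
    {"item": "The Scream", "price": 9.99, "cost": 4.00},
    {"item": "Starry Night", "price": 19.99, "cost": 8.00},
]

class View:
    def __init__(self, source, pipeline):
        self.source = source        # like the viewOn option
        self.pipeline = pipeline    # like the pipeline option

    def find(self):
        docs = self.source
        for stage in self.pipeline:
            docs = [stage(d) for d in docs]
        return docs

# Like a $project stage that hides internal cost data from readers.
public = View(inventory, [lambda d: {"item": d["item"], "price": d["price"]}])

assert public.find() == [
    {"item": "The Scream", "price": 9.99},
    {"item": "Starry Night", "price": 19.99},
]
```

Because the pipeline runs at read time, the underlying collection is untouched and the view is inherently read-only.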
