
mysqldump not releasing memory after completion

Latest Forum Posts - 4 hours 23 min ago
Good day all

We are running Percona Server 5.7 on a server with 128 GB of memory and 32 GB of swap space.

We have had ongoing issues with memory filling up and consuming swap space daily, requiring a daily database restart to release the memory.

After adding some graphs, monitoring and physical testing, we found that the problem seems to be related to mysqldump.

Memory usage is fairly stable, increasing gradually until the InnoDB buffer pool is full. However, the moment we start a backup, memory usage spikes and consumes all physical memory as well as swap.

The symptom we found is that the backup uses up all memory and starts to swap, but once the backup completes, the memory is not released at all, and the database server eventually runs out of memory and MySQL is killed.


We have tried removing compression from the backups, as well as using the --quick option, but to no avail.

The databases on the server total about 350 GB, but we run backups per database, with the biggest database being roughly 130 GB.


Is there anything else we can look at to troubleshoot why the mysqldump processes are not releasing memory?

Any suggestions would be greatly appreciated.


Regards

coreos

Latest Forum Posts - February 26, 2017 - 11:18pm
hello

Is it possible to migrate the pmm-client and pmm-server containers to a Docker-focused OS such as CoreOS or RancherOS?




FEATURE REQUEST (Ability to delete/modify files created by using SELECT INTO OUTFILE)

Latest Forum Posts - February 26, 2017 - 4:12am
At the moment, for security reasons, MySQL only implements the ability to create files through SELECT INTO OUTFILE.
The ability to overwrite or delete files is missing. With a large number of created files, a removal procedure is only possible outside of MySQL.
We propose implementing a safe removal option for OUTFILEs. For example:
1. Allow deleting only files that exist in the tmpdir directory, thus isolating the OS from the database.
2. Implement a registration procedure for all files created using OUTFILE, and allow deleting or changing only files that have passed the registration procedure.
3. Create a single global parameter that specifies where OUTFILEs will be created, and allow manipulating files only within that location.
4. Create a global parameter that indicates the allowed operations for OUTFILEs (create, change, delete).
MySQL really lacks quality file manipulation procedures.

mysql_config_editor missing from percona-server-client-5.7?

Latest Forum Posts - February 25, 2017 - 2:08am
I installed Percona 5.7 per the instructions from the repo on Ubuntu 16.04. I want to use mysql_config_editor but get:

Code:
# mysql_config_editor
The program 'mysql_config_editor' is currently not installed. You can install it by typing:
apt install mysql-client-5.7

which would replace Percona with plain MySQL. I wonder, is this tool not available within Percona?

pmm client QAN

Latest Forum Posts - February 24, 2017 - 8:24am
I am running the latest version of the client and server, and I have noticed that on some of my MySQL servers the CPU load goes high. It seems to be related to this slow query:


SELECT COALESCE(SCHEMA_NAME, ''), COALESCE(DIGEST, ''), COUNT_STAR, SUM_TIMER_WAIT,
       MIN_TIMER_WAIT, AVG_TIMER_WAIT, MAX_TIMER_WAIT, SUM_LOCK_TIME, SUM_ERRORS,
       SUM_WARNINGS, SUM_ROWS_AFFECTED, SUM_ROWS_SENT, SUM_ROWS_EXAMINED,
       SUM_CREATED_TMP_DISK_TABLES, SUM_CREATED_TMP_TABLES, SUM_SELECT_FULL_JOIN,
       SUM_SELECT_FULL_RANGE_JOIN, SUM_SELECT_RANGE, SUM_SELECT_RANGE_CHECK,
       SUM_SELECT_SCAN, SUM_SORT_MERGE_PASSES, SUM_SORT_RANGE, SUM_SORT_ROWS,
       SUM_SORT_SCAN, SUM_NO_INDEX_USED, SUM_NO_GOOD_INDEX_USED
FROM performance_schema.events_statements_summary_by_digest
WHERE DIGEST IS NOT NULL AND LAST_SEEN > NOW() - INTERVAL 9223372036 SECOND;



It takes a long time to run (~60 seconds). Is this to be expected, or is there some known issue? Or am I missing something on my end?

Thanks

Why does pt-table-checksum only check 2 databases while my server has 10 databases?

Latest Forum Posts - February 24, 2017 - 1:41am


I have 10 databases on my server:
Code:
[root@TENCENT64 /data/home/fernguo/percona-toolkit-2.2.20/bin]# mysql -upercona -ppercona -h xxxx
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 13372
Server version: 5.7.17-log Source distribution

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| binlog             |
| dba                |
| mysql              |
| percona            |
| performance_schema |
| sbtest             |
| sys                |
| tmp                |
| tmpdir             |
+--------------------+
10 rows in set (0.00 sec)

But pt-table-checksum only checks 2 databases; the other databases are ignored. Why?
Code:
[root@TENCENT64 /data/home/fernguo/percona-toolkit-2.2.20/bin]# ./pt-table-checksum h=xxxx,u=percona,p=percona,P=3306 --no-check-binlog-format --no-check-replication-filters
            TS ERRORS  DIFFS  ROWS  CHUNKS SKIPPED    TIME TABLE
02-24T17:37:05      0      0     1       1       0   0.011 dba.delay_monitor
02-24T17:37:05      0      0     0       1       0   0.011 mysql.columns_priv
02-24T17:37:05      0      0     4       1       0   0.012 mysql.db
02-24T17:37:05      0      0     2       1       0   0.011 mysql.engine_cost
02-24T17:37:05      0      0     0       1       0   0.012 mysql.event
02-24T17:37:05      0      0     0       1       0   0.011 mysql.func
02-24T17:37:05      0      0     1       1       0   0.011 mysql.gtid_executed
02-24T17:37:05      0      0    40       1       0   0.011 mysql.help_category
02-24T17:37:05      0      0   682       1       0   0.012 mysql.help_keyword
02-24T17:37:05      0      0  1340       1       0   0.012 mysql.help_relation
02-24T17:37:05      0      0   637       1       0   0.015 mysql.help_topic
02-24T17:37:05      0      0     0       1       0   0.011 mysql.ndb_binlog_index
02-24T17:37:05      0      0     0       1       0   0.011 mysql.procs_priv
02-24T17:37:05      0      0     1       1       0   0.011 mysql.proxies_priv
02-24T17:37:05      0      0     6       1       0   0.011 mysql.server_cost
02-24T17:37:05      0      0     0       1       0   0.011 mysql.servers
02-24T17:37:05      0      1     1       1       0   0.011 mysql.tables_priv
02-24T17:37:05      0      0     0       1       0   0.011 mysql.time_zone
02-24T17:37:05      0      0     0       1       0   0.011 mysql.time_zone_leap_second
02-24T17:37:05      0      0     0       1       0   0.011 mysql.time_zone_name
02-24T17:37:05      0      0     0       1       0   0.011 mysql.time_zone_transition
02-24T17:37:05      0      0     0       1       0   0.011 mysql.time_zone_transition_type
02-24T17:37:05      0      0    30       1       0   0.014 mysql.user

Percona MongoDB 3.4 Bundle Release: Percona Server for MongoDB 3.4 Features Explored

Latest MySQL Performance Blog posts - February 23, 2017 - 1:36pm

This blog post continues the series on the Percona MongoDB 3.4 bundle release. This release includes Percona Server for MongoDB, Percona Monitoring and Management, and Percona Toolkit. In this post, we’ll look at the features included in Percona Server for MongoDB.

I apologize for the long blog, but there is a good deal of important information to cover: not just what new features exist, but also why they are so important. I have tried to break this down into clear areas, covering as much as possible while also linking to further reading on these topics.

The first and biggest new feature for many people is the addition of collation in MongoDB. Wikipedia says about collation:

Collation is the assembly of written information into a standard order. Many systems of collation are based on numerical order or alphabetical order, or extensions and combinations thereof. Collation is a fundamental element of most office filing systems, library catalogs, and reference books.

What this is saying is that a collation is an ordering of characters for a given character set. Different languages order the alphabet differently, or even have different base character sets (for example in Asian, Middle Eastern and other non-English regions). Collations are critical for multi-language support and for sorting non-English words in index ordering.

Sharding

General

All members of a cluster are aware of sharding (all members, sharding set name, etc.). Because of this, sharding.clusterRole must now be defined on all shard nodes, which is a new requirement.

Mongos processes MUST connect to 3.4 mongod instances (shard and config nodes); connecting to 3.2 and lower is not possible.

Config Servers

Balancer on Config Server PRIMARY

In MongoDB 3.4, the cluster balancer is moved from the mongos processes (any) to the config server PRIMARY member.

Moving to a config-server-based balancer has the following benefits:

Predictability: the balancer process is always the config server PRIMARY. Before 3.4, any mongos processes could become the balancer, often chosen at random. This made troubleshooting difficult.

Lighter “mongos” process: the mongos/shard router benefits from being as light and thin as possible. This removes some code and potential for breakage from “mongos.”

Efficiency: config servers have dedicated nodes with very low resource utilization and no direct client traffic, for the most part. Moving the balancer to the config server set moves usage away from critical “router” processes.

Reliability: balancing relies on fewer components. Now the balancer can operate on the “config” database metadata locally, without the chance of network interruptions breaking balancing.

Config servers are a more permanent member of a cluster, unlikely to scale up/down or often change, unlike “mongos” processes that may be located on app hosts, etc.

Config Server Replica Set Required

In MongoDB 3.4, the former “mirror” config server strategy (SCCC) is no longer supported. This means all sharded clusters must use a replica-set-based set of config servers.

Using a replica-set based config server set has the following benefits:

Adding and removing config servers is greatly simplified.

Config servers have oplogs (useful for investigations).

Simplicity/Consistency: removing mirrored/SCCC config servers simplifies the high-level and code-level architecture.

Chunk Migration / Balancing Example

(from docs.mongodb.com)

Parallel Migrations

Prior to MongoDB 3.4, the balancer could only perform a single chunk migration at any given time. When a chunk migrates, a “source” shard and a “destination” shard are chosen. The balancer coordinates moving the chunks from the source to the target. In a large cluster this is inefficient, because a migration involves only two shards while a cluster may contain tens or hundreds of shards.

In MongoDB 3.4, the balancer can now perform many chunk migrations at the same time, in parallel, as long as they do not involve the same source and destination shards. This means that in clusters with more than two shards, many chunk migrations can occur at the same time when they are mutually exclusive of one another. The effective outcome is that the maximum number of parallel migrations is (number of shards / 2) - 1, which increases the speed of the migration process.

For example, if you have ten shards, then 10 / 2 = 5 and 5 - 1 = 4, so you can have four concurrent moveChunk or balancing actions.
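That arithmetic can be sanity-checked with a tiny Python sketch of the (shards / 2) - 1 rule of thumb stated above (the function name is ours for illustration, not a MongoDB API):

```python
def max_parallel_migrations(num_shards: int) -> int:
    """Rule of thumb from the text above: each migration occupies a
    distinct source/destination shard pair, giving (shards / 2) - 1
    concurrent chunk migrations."""
    return num_shards // 2 - 1

# Ten shards: 10 / 2 = 5, 5 - 1 = 4 concurrent moveChunk actions.
print(max_parallel_migrations(10))  # 4
```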

Tags and Zones

Sharding Zones supersede tag-aware sharding. There are almost no changes to the functionality; this is mostly a naming change plus some new helper functions.

New commands/shell-methods added:

addShardToZone / sh.addShardToZone().

removeShardFromZone / sh.removeShardFromZone().

updateZoneKeyRange / sh.updateZoneKeyRange() + sh.removeRangeFromZone().

You might recall that MongoDB has long supported the idea of shard and replication tags. They break into two main areas: hardware-aware tags and access-pattern tags. The idea behind hardware-aware tags was that you could have one shard with slow disks and, as data ages, a process that moves documents to a collection living on that shard (or pins specific ranges to that shard). Your other shards could then be faster (and more numerous) to better handle the high-speed processing of current data.

The other is a use case based more on replication, where you want to give BI and other reporting systems access to your data without harming your primary customer interactions. To do this, you could tag a node in a replica set as {reporting: true}, and all reporting queries would use this tag to avoid affecting the nodes serving user-generated work. Zones are the same idea, simplified under a better-understood term. For now, there is no major difference between these areas, but it could be something to watch in the MongoDB 3.6 and 3.8 versions.

Replication

New “linearizable” Read Concern: reflects all successful writes issued with a “majority” and acknowledged before the start of the read operation.

Adjustable Catchup for Newly Elected Primary: the time limit for a newly elected primary to catch up with the other replica set members that might have more recent writes.

Write Concern Majority Journal Default replset-config option: determines the behavior of the { w: "majority" } write concern if the write concern does not explicitly specify the journal option j.

Initial-sync improvements:

The initial sync now builds the indexes as the documents are copied.

Improvements to the retry logic make it more resilient to intermittent failures on the network.

Data Types

MongoDB 3.4 adds support for the decimal128 format with the new decimal data type. The decimal128 format supports numbers with up to 34 decimal digits (i.e., significant digits) and an exponent range of −6143 to +6144.

When performing comparisons among different numerical types, MongoDB conducts a comparison of the exact stored numerical values without first converting values to a common type.

Unlike the double data type, which only stores an approximation of the decimal values, the decimal data type stores the exact value. For example, a decimal NumberDecimal("9.99") has a precise value of 9.99, whereas a double 9.99 would have an approximate value of 9.9900000000000002131628….
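Python’s decimal module can illustrate the same exact-vs-approximate distinction described above (this is illustrative Python, not MongoDB code):

```python
from decimal import Decimal

# Converting the binary double 9.99 exposes its true stored value:
print(Decimal(9.99))    # 9.9900000000000002131628...
# Constructing from the string "9.99" keeps the exact decimal value,
# analogous to NumberDecimal("9.99"):
print(Decimal("9.99"))  # 9.99
```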

To test for the decimal type, use the $type operator with the string literal “decimal” or the numeric code 19:

db.inventory.find( { price: { $type: "decimal" } } )

New Number Wrapper Object Type

db.inventory.insert( { _id: 1, item: "The Scream", price: NumberDecimal("9.99"), quantity: 4 } )

To use the new decimal data type with a MongoDB driver, an upgrade to a driver version that supports the feature is necessary.

Aggregation Changes

Stages

Recursive Search

MongoDB 3.4 introduces a stage to the aggregation pipeline that allows for recursive searches.

$graphLookup: Performs a recursive search on a collection. To each output document, adds a new array field that contains the traversal results of the recursive search for that document.

Faceted Search

Faceted search allows for the categorization of documents into classifications. For example, given a collection of inventory documents, you might want to classify items by a single category (such as by the price range), or by multiple groups (such as by price range as well as separately by the departments).

3.4 introduces stages to the aggregation pipeline that allow for faceted search.

$bucket: Categorizes or groups incoming documents into buckets that represent a range of values for a specified expression.
$bucketAuto: Categorizes or groups incoming documents into a specified number of buckets that constitute a range of values for a specified expression. MongoDB automatically determines the bucket boundaries.
$facet: Processes multiple pipelines on the input documents and outputs a document that contains the results of these pipelines. By specifying facet-related stages ($bucket, $bucketAuto, and $sortByCount) in these pipelines, $facet allows for multi-faceted search.
$sortByCount: Categorizes or groups incoming documents by a specified expression to compute the count for each group. Output documents are sorted in descending order by the count.
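To make the $bucket behavior concrete, here is a plain-Python sketch of the grouping logic (illustrative only, with made-up inventory data; it is not how the server implements the stage):

```python
from bisect import bisect_right

def bucket(docs, key, boundaries, default=None):
    """Group docs into half-open ranges [b[i], b[i+1]) like $bucket;
    values outside all boundaries fall into the default bucket."""
    groups = {}
    for doc in docs:
        i = bisect_right(boundaries, doc[key]) - 1
        if 0 <= i < len(boundaries) - 1:
            bucket_id = boundaries[i]   # lower bound names the bucket
        else:
            bucket_id = default
        groups.setdefault(bucket_id, []).append(doc["item"])
    return groups

inventory = [
    {"item": "poster", "price": 5},
    {"item": "print", "price": 40},
    {"item": "painting", "price": 250},
]
print(bucket(inventory, "price", [0, 50, 100], default="Other"))
# {0: ['poster', 'print'], 'Other': ['painting']}
```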

Also read: https://www.percona.com/blog/2016/12/13/mongodb-3-4-facet-aggregation-features-and-server-27395-mongod-crash/


Reshaping Documents

MongoDB 3.4 introduces stages to the aggregation pipeline that facilitate replacing documents as well as adding new fields.

$addFields: Adds new fields to documents. The stage outputs documents that contain all existing fields from the input documents as well as the newly added fields.
$replaceRoot: Replaces a document with the specified document. You can specify a document embedded in the input document to promote the embedded document to the top level.

Count

MongoDB 3.4 introduces a new stage to the aggregation pipeline that facilitates counting documents.

$count: Returns a document that contains a count of the number of documents input to the stage.

Operators

Array Operators

$in: Returns a boolean that indicates if a specified value is in an array.
$indexOfArray: Searches an array for an occurrence of a specified value and returns the array index (zero-based) of the first occurrence.
$range: Returns an array whose elements are a generated sequence of numbers.
$reverseArray: Returns an output array whose elements are those of the input array but in reverse order.
$reduce: Takes an array as input and applies an expression to each item in the array to return the final result of the expression.
$zip: Returns an output array where each element is itself an array, consisting of elements of the corresponding array index position from the input arrays.

Date Operators

$isoDayOfWeek: Returns the ISO 8601 weekday number, ranging from 1 (for Monday) to 7 (for Sunday).
$isoWeek: Returns the ISO 8601 week number, which can range from 1 to 53. Week numbers start at 1 with the week (Monday through Sunday) that contains the year’s first Thursday.
$isoWeekYear: Returns the ISO 8601 year number, where the year starts on the Monday of week 1 (ISO 8601) and ends with the Sunday of the last week (ISO 8601).

String Operators

$indexOfBytes: Searches a string for an occurrence of a substring and returns the UTF-8 byte index (zero-based) of the first occurrence.
$indexOfCP: Searches a string for an occurrence of a substring and returns the UTF-8 code point index (zero-based) of the first occurrence.
$split: Splits a string by a specified delimiter into string components and returns an array of the string components.
$strLenBytes: Returns the number of UTF-8 bytes for a string.
$strLenCP: Returns the number of UTF-8 code points for a string.
$substrBytes: Returns the substring of a string. The substring starts with the character at the specified UTF-8 byte index (zero-based) in the string, for the specified length.
$substrCP: Returns the substring of a string. The substring starts with the character at the specified UTF-8 code point index (zero-based) in the string, for the specified length.

Others/Misc

Other new operators:

$switch: Evaluates, in sequential order, the case expressions of the specified branches and enters the first branch for which the case expression evaluates to true.

$collStats: Returns statistics regarding a collection or view.

$type: Returns a string specifying the BSON type of the argument.

$project: Adds support for field exclusion in the output document. Previously, you could only exclude the _id field in the stage.
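The branch evaluation that $switch performs can be sketched in Python as follows (an illustrative model of the semantics, not server code):

```python
def switch(branches, default=None):
    """Mimic $switch: evaluate case callables in sequential order and
    return the 'then' value of the first branch whose case is truthy;
    otherwise fall back to the default."""
    for case, then in branches:
        if case():
            return then
    return default

score = 95
grade = switch(
    [
        (lambda: score >= 90, "A"),
        (lambda: score >= 80, "B"),
    ],
    default="F",
)
print(grade)  # A
```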

Views

MongoDB 3.4 adds support for creating read-only views from existing collections or other views. To specify or define a view, MongoDB 3.4 introduces:

    • the viewOn and pipeline options to the existing create command:
      • db.runCommand( { create: <view>, viewOn: <source>, pipeline: <pipeline> } )
    • or if specifying a default collation for the view:
      • db.runCommand( { create: <view>, viewOn: <source>, pipeline: <pipeline>, collation: <collation> } )
    • and a corresponding mongo shell helper, db.createView():
      • db.createView(<view>, <source>, <pipeline>, <collation>)

For more information on creating views, see Views.

Mixing WiredTiger & Percona Memory Engine

Latest Forum Posts - February 23, 2017 - 9:20am
I have just installed Percona Server for MongoDB 3.4 and see that the Memory Engine is a really nice option. Is it possible to use WiredTiger for one database and the Memory Engine for another? I can't work out how, if so!

Percona XtraBackup 2.4.6 is Now Available

Latest Forum Posts - February 23, 2017 - 4:59am
Percona announces the GA release of Percona XtraBackup 2.4.6 on February 22, 2017. You can download it from our download site and apt and yum repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backups.

Percona XtraBackup 2.3.7 is Now Available

Latest Forum Posts - February 23, 2017 - 4:58am
Percona announces the release of Percona XtraBackup 2.3.7 on February 22, 2017. Downloads are available from our download site or Percona Software Repositories.


This release is the current GA (Generally Available) stable release in the 2.3 series.

ALTER TABLE without downtime in a cluster

Latest Forum Posts - February 22, 2017 - 2:38pm
I'm struggling to work out how to perform an ALTER TABLE without downtime in a cluster, even with the nice pt-online-schema-change tool.

If many other tables have foreign key references to the table you are altering, then my experience is that you'll get a lot of downtime at the end (presumably even if it selects drop_swap). This also doesn't seem to play nicely when there are multiple nodes running.

So for large tables with many key references to them, I've ended up reducing the cluster to a single node, doing an ALTER TABLE and not using pt-online-schema-change, then forcing SSTs to the other nodes to bring them back in by deleting grastate.dat.

This isn't great: significant downtime. Am I missing something?

One special case here is the use of ALTER TABLE to reclaim InnoDB space. I've been wondering whether rolling schema updates would allow this. Can you use the RSU process to perform an ALTER TABLE that reclaims disk space without any downtime?

General hints and tips welcome.

Webinar Thursday, February 23, 2017: Troubleshooting MySQL Access Privileges Issues

Latest MySQL Performance Blog posts - February 22, 2017 - 12:50pm

Please join Sveta Smirnova, Percona’s Principal Technical Services Engineer, as she presents Troubleshooting MySQL Access Privileges Issues on February 23, 2017 at 11:00 am PST / 2:00 pm EST (UTC-8).

Do you have registered users who can’t connect to the MySQL server? Strangers modifying data to which they shouldn’t have access?

MySQL supports a rich set of user privilege options and allows you to fine tune access to every object in the server. The latest versions support authentication plugins that help to create more access patterns.

However, finding errors in such a big set of options can be problematic. This is especially true for environments with hundreds of users, all with different privileges on multiple objects. In this webinar, I will show you how to decipher error messages and unravel the complicated setups that can lead to access errors. We will also cover network errors that mimic access privileges errors.

In this webinar, we will discuss:

  • Which privileges MySQL supports
  • What GRANT statements are
  • How privileges are stored
  • How to find out why a privilege does not work properly
  • How authentication plugins make a difference
  • What the best access control practices are

To register for this webinar please click here.

Sveta Smirnova, Principal Technical Services Engineer

Sveta joined Percona in 2015. Her main professional interests are problem-solving, working with tricky issues and bugs, finding patterns that can solve typical issues quicker, and teaching others how to deal with MySQL issues, bugs and gotchas effectively. Before joining Percona, Sveta worked as a Support Engineer in the MySQL Bugs Analysis Support Group at MySQL AB, Sun and Oracle. She is the author of the book “MySQL Troubleshooting” and of JSON UDF functions for MySQL.

Percona Monitoring and Management (PMM) Graphs Explained: MongoDB with RocksDB

Latest MySQL Performance Blog posts - February 22, 2017 - 12:36pm

This post is part of the series of Percona’s MongoDB 3.4 bundle release blogs. In mid-2016, Percona Monitoring and Management (PMM) added support for RocksDB with MongoDB, also known as “MongoRocks.” In this blog, we will go over the Percona Monitoring and Management (PMM) 1.1.0 version of the MongoDB RocksDB dashboard, how PMM is useful in the day-to-day monitoring of MongoDB and what we plan to add and extend.

Percona Monitoring and Management (PMM)

Percona Monitoring and Management (PMM) is an open-source platform for managing and monitoring MySQL and MongoDB, developed by Percona on top of open-source technology. Behind the scenes, the graphing features this article covers use Prometheus (a popular time-series data store), Grafana (a popular visualization tool), mongodb_exporter (our MongoDB database metric exporter) plus other technologies to provide database and operating system metric graphs for your database instances.

The mongodb_exporter tool, which provides our monitoring platform with MongoDB metrics, uses RocksDB status output and optional counters to provide detailed insight into RocksDB performance. Percona’s MongoDB 3.4 release enables RocksDB’s optional counters by default. On 3.2, however, you must set the following in /etc/mongod.conf to enable them: storage.rocksdb.counters: true.
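In YAML form, that 3.2 setting would sit in the storage section of /etc/mongod.conf like this (a sketch of the nesting implied by the dotted option name above; verify against your Percona Server for MongoDB configuration reference):

```yaml
# /etc/mongod.conf (MongoDB 3.2 with MongoRocks)
storage:
  rocksdb:
    counters: true   # enable RocksDB's optional counters
```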

This article shows a live demo of our MongoDB RocksDB graphs: https://pmmdemo.percona.com/graph/dashboard/db/mongodb-rocksdb.

RocksDB/MongoRocks

RocksDB is a storage engine available since version 3.2 in Percona’s fork of MongoDB: Percona Server for MongoDB.

The first thing to know about monitoring RocksDB is compaction. RocksDB stores its data on disk using several tiered levels of immutable files. Changes written to disk go to the first RocksDB level (Level0). Later, internal compactions merge the changes down to the next RocksDB level when Level0 fills. Each level before the last essentially holds deltas to the resting data set, which soon merge down to the bottom.

We can see the effect of the tiered levels in our “RocksDB Compaction Level Size” graph, which reflects the size of each level in RocksDB on-disk:

Note that most of the database data is in the final level “L6” (Level 6). Levels L0, L4 and L5 hold relatively smaller amounts of data changes. These get merged down to L6 via compaction.

More about this design is explained in detail by the developers of MongoRocks, here: https://www.percona.com/live/plam16/sessions/everything-you-wanted-know-about-mongorocks.

RocksDB Compaction

Most importantly, RocksDB compactions try to happen in the background. They generally do not “block” the database. However, the additional resource usage of compactions can potentially cause some spikes in latency, making compaction important to watch. When compactions occur, between levels L4 and L5 for example, L4 and L5 are read and merged with the result being written out as a new L5.

The memtable in MongoRocks is a 64 MB in-memory table. Changes are initially written to the memtable. Reads check the memtable for unwritten changes to consider. When the memtable is 100% full, RocksDB compacts the memtable data down to Level0, the first on-disk level in RocksDB.

In PMM we have added a single-stat panel for the percentage of the memtable usage. This is very useful in indicating when you can expect a memtable-to-level0 compaction to occur:

Above, we can see the memtable is 125% used, which means RocksDB is late to finish (or start) a compaction due to high activity. Shortly after taking the screenshot above, however, our test system began a compaction of the memtable, which can be seen in the drop in active memtable entries below:

Following this compaction further through PMM’s graphs, we can see from the (very useful) “RocksDB Compaction Time” graph that this compaction took 5 seconds.

In the graph above, I have singled-out “L0” to show Level0’s compaction time. However, any level can be selected either per-graph (by clicking on the legend-item) or dashboard-wide (by using the RocksDB Level drop-down at the top of the page).

In terms of throughput, we can see from our “RocksDB Write Activity” graph (Read Activity is also graphed) that this compaction required about 33MBps of disk write activity:

On top of additional resource consumption such as the write activity above, compactions cause caches to get cleared. One example is the OS cache due to new level files being written. These factors can cause some increases to read latencies, demonstrated in this example below by the bump in L4 read latency (top graph) caused by the L4 compaction (bottom graph):

This pattern above is one area to check if you see latency spikes in RocksDB.

RocksDB Stalls

When RocksDB is unable to perform compaction promptly, it uses a feature called “stalls” to try and slow down the amount of data coming into the engine. In my experience, stalls almost always mean something below RocksDB is not up to the task (likely the storage system).

Here is the “RocksDB Stall Time” graph of a host experiencing frequent stalls:

PMM can graph the different types of RocksDB stalls in the “RocksDB Stalls” graph. In our case here, we have 0.3-0.5 stalls per second due to “level0_slowdown” and “level0_slowdown_with_compaction.” This happens when Level0 stalls the engine due to slow compaction performance below its level.

Another metric reflecting the poor compaction performance is the pending compactions in “RocksDB Pending Operations”:

As I mentioned earlier, this almost always means something below RocksDB itself cannot keep up. In the top-right of PMM, we have OS-level metrics in a drop-down; I recommend looking at “Disk Performance” in these scenarios:

On the “Disk Performance” dashboard you can see the “sda” disk has an average write time of 212ms, and a max of 1100ms (1.1 seconds). This is fairly slow.

Further, on the same dashboard I can see the CPU is waiting on disk I/O 98.70% of the time on average. This explains why RocksDB needs to stall to hold back some of the load!

The disks seem too busy to keep up! Looking at the “Mongod – Document Activity” graph, we can see the cause of the high disk usage: 10,000-60,000 inserts per second:

Here we can draw the conclusion that this volume of inserts on this system configuration causes some stalling in RocksDB.

RocksDB Block Cache

The RocksDB Block Cache is the in-heap cache RocksDB uses to cache uncompressed pages. Generally, deployments benefit from dedicating most of their memory to the Linux file system cache vs. the RocksDB Block Cache. We recommend using only 20-30% of the host RAM for block cache.

PMM can take away some of the guesswork with the “RocksDB Block Cache Hit Ratio” graph, showing the efficiency of the block cache:

It is difficult to define a “good” and “bad” number for this metric, as the number varies for every deployment. However, one important thing to look for is significant changes in this graph. In this example, the Block Cache has a page in cache 3000 times for every 1 time it does not.
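As a quick illustration of what that ratio means (hypothetical counter values matching the 3000:1 example above, not values pulled from PMM):

```python
# A 3000:1 hit-to-miss ratio for the block cache.
hits, misses = 3000, 1
hit_ratio = hits / (hits + misses)
print(f"{hit_ratio:.4%}")  # 99.9667%
```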

If you wanted to test increasing your block cache, this graph becomes very useful. If you increase your block cache and do not see an improvement in the hit ratio after a lengthy period of testing, this usually means more block cache memory is not necessary.

RocksDB Read Latency Graphs

PMM graphs Read Latency metrics for RocksDB in several different graphs, one dedicated to Level0:

And three other graphs display Average, 99th Percentile and Maximum latencies for each RocksDB level. Here is an example from the 99th Percentile latency metrics:

Coming Soon

Percona Monitoring and Management needs to add some more metrics that explain the performance of the engine. The rate of deletes/tombstones in the system affects RocksDB’s performance. Currently, this metric is not something our system can easily gather like other engine metrics. Percona Monitoring and Management can’t easily graph the efficiency of the Bloom filter yet, either. These are currently open feature requests to the MongoRocks (and likely RocksDB) team(s) to add in future versions.

Percona’s release of Percona Server for MongoDB 3.4 includes a new, improved version of MongoRocks and RocksDB. More is available in the release notes!

Percona XtraBackup 2.4.6 is Now Available

Latest MySQL Performance Blog posts - February 22, 2017 - 10:49am

Percona announces the GA release of Percona XtraBackup 2.4.6 on February 22, 2017. You can download it from our download site and apt and yum repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backups.

New features:
  • Percona XtraBackup implemented a new --remove-original option that can be used to remove the encrypted and compressed files once they’ve been decrypted/decompressed.
Bugs Fixed:
  • XtraBackup was using username set for the server in a configuration file even if a different user was defined in the user’s configuration file. Bug fixed #1551706.
  • Incremental backups did not include xtrabackup_binlog_info and xtrabackup_galera_info files. Bug fixed #1643803.
  • In case a warning was written to stdout instead of stderr during the streaming backup, it could cause an assertion in xbstream. Bug fixed #1647340.
  • xtrabackup --move-back option did not always restore out-of-datadir tablespaces to their original directories. Bug fixed #1648322.
  • innobackupex and xtrabackup scripts were showing the password in the ps output when it was passed as a command line argument. Bug fixed #907280.
  • Incremental backup would fail with a path like ~/backup/inc_1 because xtrabackup didn’t properly expand tilde. Bug fixed #1642826.
  • Fixed missing dependency check for Perl Digest::MD5 in rpm packages. This will now require perl-MD5 package to be installed from EPEL repositories on CentOS 5 and CentOS 6 (along with libev). Bug fixed #1644018.
  • Percona XtraBackup now supports -H, -h, -u and -p shortcuts for --hostname, --datadir, --user and --password respectively. Bugs fixed #1655438 and #1652044.
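As a hedged example of the new option, a compressed backup in a hypothetical /data/backups/full directory could be decompressed with the original compressed files removed in the same pass:

```shell
# Decompress a qpress-compressed backup; --remove-original deletes
# each compressed file once its decompressed copy has been written.
xtrabackup --decompress --remove-original --target-dir=/data/backups/full
```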

Release notes with all the bugfixes for Percona XtraBackup 2.4.6 are available in our online documentation. Please report any bugs to the launchpad bug tracker.

Percona XtraBackup 2.3.7 is Now Available

Latest MySQL Performance Blog posts - February 22, 2017 - 10:48am

Percona announces the release of Percona XtraBackup 2.3.7 on February 22, 2017. Downloads are available from our download site or Percona Software Repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backups.

This release is the current GA (Generally Available) stable release in the 2.3 series.

New Features
  • Percona XtraBackup has implemented a new --remove-original option that can be used to remove the encrypted and compressed files once they’ve been decrypted/decompressed.
Bugs Fixed:
  • XtraBackup was using username set for the server in a configuration file even if a different user was defined in the user’s configuration file. Bug fixed #1551706.
  • Incremental backups did not include xtrabackup_binlog_info and xtrabackup_galera_info files. Bug fixed #1643803.
  • Percona XtraBackup would fail to compile with -DWITH_DEBUG and -DWITH_SSL=system options. Bug fixed #1647551.
  • xtrabackup --move-back option did not always restore out-of-datadir tablespaces to their original directories. Bug fixed #1648322.
  • innobackupex and xtrabackup scripts were showing the password in the ps output when it was passed as a command line argument. Bug fixed #907280.
  • Incremental backup would fail with a path like ~/backup/inc_1 because xtrabackup didn’t properly expand tilde. Bug fixed #1642826.
  • Fixed missing dependency check for Perl Digest::MD5 in rpm packages. This will now require perl-MD5 package to be installed from EPEL repositories on CentOS 5 and CentOS 6 (along with libev). Bug fixed #1644018.
  • Percona XtraBackup now supports -H, -h, -u and -p shortcuts for --hostname, --datadir, --user and --password respectively. Bugs fixed #1655438 and #1652044.

Other bugs fixed: #1655278.

Release notes with all the bugfixes for Percona XtraBackup 2.3.7 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

Rolling upgrade for patches failed on only one node

Latest Forum Posts - February 22, 2017 - 7:51am
I just did a rolling upgrade for CentOS security patches and upgraded XtraDB Cluster from 5.6.30 to 5.6.34-26.19.1.el6. The first node installed fine, I rebooted, and it rejoined the cluster without issue. I verified on another node that it had rejoined before I continued. I completed the remaining two nodes, but when I logged into the first node to verify that the third node had rejoined the cluster, SHOW STATUS LIKE 'wsrep%' was missing a number of results only on that node, such as wsrep_local_state and wsrep_cluster_status, and it had some results not present on my other nodes, like wsrep_OSU_method and wsrep_provider_options.

So I ran sudo service mysql stop - no problem.
Then I ran sudo service mysql start and I get:

Starting MySQL (Percona XtraDB Cluster).sed: -e expression #1, char 10: unknown option to `s'
SUCCESS!

Otherwise, everything seems to be working fine.

I tried yum history undo and it fails on the xtrabackup packages.
I manually removed and reinstalled the XtraDB Cluster server package and the same problem remains.

Any ideas?

TY

Percona Monitoring and Management (PMM) Upgrade Guide

Latest MySQL Performance Blog posts - February 21, 2017 - 2:53pm

The purpose of this blog post is to demonstrate current best-practices for an in-place Percona Monitoring and Management (PMM) upgrade. Following this method allows you to retain data previously collected by PMM in your MySQL or MongoDB environment, while upgrading to the latest version.

Step 1: Housekeeping

Before beginning this process, I recommend that you use a package manager that installs directly from Percona’s official software repository. The install instructions vary by distro, but for Ubuntu users the commands are:

wget https://repo.percona.com/apt/percona-release_0.1-4.$(lsb_release -sc)_all.deb

sudo dpkg -i percona-release_0.1-4.$(lsb_release -sc)_all.deb

Step 2: PMM Server Upgrade

Now that we have ensured we’re using Percona’s official software repository, we can continue with the upgrade. To check which version of PMM server is running, execute the following command on your PMM server host:

docker ps

This command shows a list of all running Docker containers. The version of PMM server you are running is found in the image description.

Once you’ve verified you are on an older version, it’s time to upgrade!

The first step is to stop and remove your docker pmm-server container with the following command:

docker stop pmm-server && docker rm pmm-server

Please note that this command may take several seconds to complete.

The next step is to create and run the image with the new version tag. In this case, we are installing version 1.1.0. Please make sure to verify the correct image name in the install instructions.

Run the command below to create and run the new image.

docker run -d -p 80:80 --volumes-from pmm-data --name pmm-server --restart always percona/pmm-server:1.1.0

We can confirm our new image is running with the following command:

docker ps

As you can see, the latest version of PMM server is installed. The final step in the process is to update the PMM client on each host to be monitored.

Step 3: PMM Client Upgrade

The GA version of Percona Monitoring and Management supports in-place upgrades. Instructions can be found in our documentation. On the client side, update the local apt cache, and upgrade to the new version of pmm-client by running the following commands:

apt-get update

apt-get install pmm-client

Congrats! We’ve successfully upgraded to the latest PMM version. As you can tell from the graph below, there is a slight gap in our polling data due to the downtime necessary to upgrade the version. However, we have verified that the data that existed prior to the upgrade is still available and new data is being gathered.

Conclusion

I hope this blog post has given you the confidence to do an in-place Percona Monitoring and Management upgrade. As always, please submit your feedback on our forums with regards to any PMM-related suggestions or questions. Our goal is to make PMM the best-available open-source MySQL and MongoDB monitoring tool.

Webinar Wednesday February 22, 2017: Percona Server for MongoDB 3.4 Product Bundle Release

Latest MySQL Performance Blog posts - February 21, 2017 - 1:06pm

Join Percona’s MongoDB Practice Manager David Murphy on Wednesday, February 22, 2017 at 10:00 am PST / 1:00 pm EST (UTC-8) as he reviews and discusses the Percona Server for MongoDB, Percona Monitoring and Management (PMM) and Percona Toolkit product bundle release.

The webinar covers how this new bundled release ensures a robust, secure database that can be adapted to changing business requirements. It demonstrates how MongoDB, PMM and Percona Toolkit are used together so that organizations benefit from the cost savings and agility provided by free and proven open source software.

Percona Server for MongoDB 3.4 delivers all the latest MongoDB 3.4 Community Edition features, additional Enterprise features and a greater choice of storage engines.

Along with improved insight into the database environment, the solution provides enhanced control options for optimizing a wider range of database workloads with greater reliability and security.

Some of the features that will be discussed are:

  • Percona Server for MongoDB 3.4
    • All the features of MongoDB Community Edition 3.4, which provides an open source, fully compatible, drop-in replacement:
      • Integrated, pluggable authentication with LDAP to provide a centralized enterprise authentication service
      • Open-source auditing for visibility into user and process actions in the database, with the ability to redact sensitive information (such as user names and IP addresses) from log files
      • Hot backups for the WiredTiger engine protect against data loss in the case of a crash or disaster, without impacting performance
      • Two storage engine options not supported by MongoDB Community Edition 3.4:
        • MongoRocks, the RocksDB-powered storage engine, designed for demanding, high-volume data workloads such as in IoT applications, on-premises or in the cloud.
        • Percona Memory Engine is ideal for in-memory computing and other applications demanding very low latency workloads.
  • Percona Monitoring and Management 1.1
    • Support for MongoDB and Percona Server for MongoDB
    • Graphical dashboard information for WiredTiger, MongoRocks and Percona Memory Engine
  • Percona Toolkit 3.0
    • Two new tools for MongoDB:
      • pt-mongodb-summary (the equivalent of pt-mysql-summary) provides a quick, at-a-glance overview of a MongoDB and Percona Server for MongoDB instance.
      • pt-mongodb-query-digest (the equivalent of pt-query-digest for MySQL) offers a query review for troubleshooting.

You can register for the webinar here.

David Murphy, MongoDB Practice Manager

David joined Percona in October 2015 as Practice Manager for MongoDB. Prior to that, David joined the ObjectRocket by Rackspace team as the Lead DBA in September 2013. With the growth involved in any recently acquired startup, David’s role ranged from evangelism, research, run book development, knowledge base design, consulting, technical account management and mentoring to much more.

Prior to the world of MongoDB, David was a MySQL and NoSQL architect at Electronic Arts. There, he worked on some of the largest titles in the world, like FIFA, SimCity and Battlefield, with responsibility for tuning, design and technology choices. David maintains an active interest in database speaking and exploring new technologies.

Installing Percona Monitoring and Management (PMM) for the First Time

Latest MySQL Performance Blog posts - February 21, 2017 - 11:22am

This post is part of a series of Percona’s MongoDB 3.4 bundle release blogs. In this blog, we’ll look at the process for installing Percona Monitoring and Management (PMM) for the first time.

Installing Percona Monitoring and Management

Percona Monitoring and Management (PMM) is Percona’s open source tool for monitoring databases. You can use it with either MongoDB or MySQL databases.

PMM requires the installation of a server component, plus a client component on each database server to be monitored. You can install the server component on a local or remote server, and monitor any MySQL or MongoDB instance (including Amazon RDS environments).

What is it?

PMM provides a graphical view of the status of monitored databases. You can use it to perform query analytics and metrics review. The graphical component relies on Grafana, and uses Prometheus for information processing. It includes a Query Analytics module that allows you to analyze queries over a period of time, and it uses Orchestrator for replication management. Since the integration of these items is the most difficult part, the server is distributed as a preconfigured Docker image.

PMM works with any variety of MongoDB or MySQL.

How do you install it?

As mentioned, there is a server component to PMM. You can install it on any server in your database environment, but be aware that if the server on which it is installed goes down or runs out of space, monitoring also fails. The server should have at least 5 GB of available disk space for each monitored client.

You must install the client component on each monitored database server. It is available as a package for a variety of Linux distributions.

To install the server

  • Create the pmm-data Docker container. This container is the storage location for all PMM data and should not be altered or removed.
  • Create and run the PMM server container
  • Verify the installation
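Sketched as shell commands, the three steps above look roughly like this. The image tag and volume paths are assumptions; check the PMM install documentation for the current values:

```shell
# 1. Create the data container that stores all PMM data (never remove it)
docker create \
  -v /opt/prometheus/data -v /opt/consul-data \
  -v /var/lib/mysql -v /var/lib/grafana \
  --name pmm-data percona/pmm-server:1.1.1 /bin/true

# 2. Create and run the PMM server container, reusing those volumes
docker run -d -p 80:80 --volumes-from pmm-data \
  --name pmm-server --restart always percona/pmm-server:1.1.1

# 3. Verify the installation: the web interface should answer on port 80
curl -sI http://localhost/ | head -n 1
```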

Next, install the client on each server to be monitored. It is installed based on the Linux distribution of the server.

Last, you connect the client(s) to the PMM server and monitoring begins.
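Assuming a PMM server at the hypothetical address 192.168.1.10, connecting a client looks roughly like this:

```shell
# Point this client at the PMM server, then register the local
# MySQL instance so OS and MySQL metrics start flowing.
sudo pmm-admin config --server 192.168.1.10
sudo pmm-admin add mysql
```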

The Query Analytics tool monitors and reviews queries run in the environment. It displays the current top 10 most time intensive queries. You can click on a query to view a detailed analysis of the query.

The Metrics Monitor gives you a historical view of queries. PMM separates time-based graphs by theme for additional clarity.

PMM includes Orchestrator for replication management and visualization. You must enable it separately, then you can view the metrics on the Discover page in Orchestrator.

Ongoing administration and maintenance

PMM includes an administration tool that adds, removes or monitors services. The pmm-admin tool requires a user with root or sudo access.

You can enable HTTP password protection to add authentication when accessing the PMM Server web interface or use SSL encryption to secure traffic between PMM Client and PMM Server.
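For example, HTTP password protection can be enabled by passing credentials to the server container as environment variables. The variable names, address and password below are assumptions to verify against the PMM documentation:

```shell
# Start PMM Server with basic HTTP authentication enabled
docker run -d -p 80:80 --volumes-from pmm-data \
  -e SERVER_USER=pmm -e SERVER_PASSWORD='change-me' \
  --name pmm-server --restart always percona/pmm-server:1.1.1

# Clients must then supply the same credentials
sudo pmm-admin config --server 192.168.1.10 \
  --server-user pmm --server-password 'change-me'
```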

You can edit the config files located in <your Docker container>/etc/grafana to set up alerting. Log files are stored in <your Docker container>/var/log.

Percona Monitoring and Management 1.1.1 is now available

Latest Forum Posts - February 21, 2017 - 4:17am
Percona announces the release of Percona Monitoring and Management 1.1.1 on February 20, 2017. This is the first general availability (GA) release in the PMM 1.1 series, with a focus on providing alternative deployment options for PMM Server.

NOTE: The AMIs and VirtualBox images above are still experimental. For production, it is recommended to run the Docker images.