Frequently Asked Questions
- How can I contact the developers?
- What are the minimum system requirements for PMM?
- How to control data retention for PMM?
- How often are nginx logs in PMM Server rotated?
- What privileges are required to monitor a MySQL instance?
- Can I monitor multiple service instances?
- Can I rename instances?
- Can I add an AWS RDS MySQL or Aurora MySQL instance from a non-default AWS partition?
- How to troubleshoot communication issues between PMM Client and PMM Server?
- What resolution is used for metrics?
- How to set up Alerting in PMM?
- How to use a custom Prometheus configuration file inside of a PMM Server?
How can I contact the developers?

The best place to discuss PMM with developers and other community members is the community forum.
If you would like to report a bug, use the PMM project in JIRA.
What are the minimum system requirements for PMM?

PMM Server:

- Any system which can run Docker version 1.12.6 or later.
- Roughly 1 GB of storage for each monitored database node with data retention set to one week.
- Minimum memory is 2 GB for one monitored database node, but memory usage does not grow linearly as you add more nodes. For example, data from 20 nodes should be easily handled with 16 GB.

PMM Client:

- Any modern 64-bit Linux distribution. It is tested on the latest versions of Debian, Ubuntu, CentOS, and Red Hat Enterprise Linux.
- A minimum of 100 MB of storage for installing the PMM Client package. With a good, constant connection to PMM Server, additional storage is not required. However, the client needs to store any collected data that it cannot send immediately, so additional storage may be required if the connection is unstable or the throughput is too low.
How to control data retention for PMM?

By default, both Prometheus and QAN store time-series data for 30 days.
Depending on available disk space and your requirements, you may need to adjust the data retention time.
To change the data retention period:
1. Select the PMM Settings dashboard in the main menu.
2. In the Settings section, enter the new data retention value in days.
3. Click the Apply changes button.
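The same setting can also be changed programmatically. As a sketch (the `/v1/Settings/Change` endpoint and the `admin:admin` credentials are assumptions for a typical PMM 2.x installation; adjust for your setup):

```shell
# Sketch: change data retention via the PMM Server API instead of the UI.
# Endpoint path and credentials are assumptions -- adapt the address,
# credentials, and TLS options to your installation.
curl --insecure --user admin:admin \
  --request POST https://pmm-server/v1/Settings/Change \
  --data '{"data_retention": "1209600s"}'   # 14 days, expressed in seconds
```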
How often are nginx logs in PMM Server rotated?

PMM Server runs logrotate to rotate nginx logs on a daily basis and keeps up to 10 latest log files.
Can I monitor multiple service instances?

Yes, you can add multiple instances of MySQL or another supported service to be monitored from one PMM Client. In this case, you need to provide a distinct port and socket for each instance, and specify a unique name for each instance (by default, it uses the name of the PMM Client host).
For example, if you are adding complete MySQL monitoring for two local MySQL servers, the commands could look similar to the following:
$ sudo pmm-admin add mysql --username root --password root instance-01 127.0.0.1:3001
$ sudo pmm-admin add mysql --username root --password root instance-02 127.0.0.1:3002
For more information, run
$ pmm-admin add mysql --help
Can I rename instances?

You can remove any monitoring instance as described in Removing monitoring services with pmm-admin remove, and then add it back with a different name.
When you remove a monitoring service, previously collected data remains available in Grafana. However, the metrics are tied to the instance name, so if you add the same instance back under a different name, it is treated as a new instance with a new set of metrics. If you are re-adding an instance and want to keep its previous data, add it with the same name.
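A minimal remove-and-re-add sequence might look like the following (the service name `instance-01` and the address are placeholders, reusing the earlier example's values):

```shell
# Sketch: re-adding a MySQL service under the same name to keep its history.
# Service name and address are placeholders -- substitute your own values.
sudo pmm-admin remove mysql instance-01
sudo pmm-admin add mysql --username root --password root instance-01 127.0.0.1:3001
```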
Can I add an AWS RDS MySQL or Aurora MySQL instance from a non-default AWS partition?

By default, RDS discovery works with the default aws partition. However, you can switch to special regions, like the GovCloud one, by adding their alternative AWS partitions (e.g. aws-us-gov) to the Settings via the PMM Server API.
You can specify any such partition instead of the default aws value, or use several of them with the JSON array syntax.
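As a sketch of such an API call (the `/v1/Settings/Change` endpoint and `admin:admin` credentials are assumptions for a typical PMM 2.x setup):

```shell
# Sketch: enable an additional AWS partition for RDS discovery via the
# PMM Server API, using the JSON array syntax to list several partitions.
# Endpoint path and credentials are assumptions -- adjust for your setup.
curl --insecure --user admin:admin \
  --request POST https://pmm-server/v1/Settings/Change \
  --data '{"aws_partitions": ["aws", "aws-us-gov"]}'
```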
How to troubleshoot communication issues between PMM Client and PMM Server?

Broken network connectivity may be caused by a rather wide set of reasons. In particular, when using Docker, the container is constrained by the host-level routing and firewall rules. For example, your hosting provider might have default iptables rules on their hosts that block communication between PMM Server and PMM Client, resulting in DOWN targets in Prometheus. If this happens, check the firewall and routing settings on the Docker host.
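A couple of quick checks from the PMM Client host can help narrow the problem down (the `pmm-server` address is a placeholder, and the `/ping` health endpoint is an assumption for a typical PMM 2.x server):

```shell
# Sketch: quick connectivity checks from the PMM Client host.
# Address is a placeholder; the /ping endpoint is assumed for PMM 2.x.
curl --insecure https://pmm-server/ping
# Shows whether this client's agents are connected to the server.
pmm-admin status
```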
Also, PMM can generate a set of diagnostics data which can be examined and/or shared with Percona Support to resolve an issue faster. You can get collected logs from PMM Client using the pmm-admin summary command.
The logs archive obtained in this way includes PMM Client logs as well as logs received from the PMM Server, which are stored separately in the server folder. The server folder also contains its own client subfolder with the self-monitoring client information collected on the PMM Server.
Starting from PMM 2.4.0, there is an additional flag that allows fetching pprof debug profiles and adding them to the diagnostics data. To do this, run pmm-admin summary --pprof.
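The two invocations side by side:

```shell
# Collect a diagnostics archive with PMM Client and PMM Server logs.
pmm-admin summary
# Same, but additionally fetch pprof debug profiles (PMM 2.4.0 and later).
pmm-admin summary --pprof
```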
Obtaining logs from PMM Server can be done by specifying the https://<address-of-your-pmm-server>/logs.zip URL, or by clicking the server logs link on the Prometheus dashboard:
The logs archive obtained in this way includes diagnostics information gathered from the PMM Server, and the client subfolder with the self-monitoring client information collected on the PMM Server.
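The same archive can be fetched from the command line; a sketch (address and credentials are placeholders, and --insecure is only needed for self-signed certificates):

```shell
# Sketch: download the PMM Server logs archive non-interactively.
# Address and credentials are placeholders -- substitute your own.
curl --insecure --user admin:admin \
  --output pmm-server-logs.zip \
  https://pmm-server/logs.zip
```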
What resolution is used for metrics?

MySQL metrics are collected with different resolutions (5 seconds, 10 seconds, and 60 seconds by default). Linux and MongoDB metrics are collected with 1 second resolution.
In case of bad network connectivity between PMM Server and PMM Client, or between PMM Client and the database server it is monitoring, scraping every second may not be possible when latency is higher than 1 second.
To change the minimum resolution for metrics:
1. Select the PMM Settings dashboard in the main menu.
2. In the Settings section, choose the proper metrics resolution with the slider. The tooltip of the slider shows the actual resolution values.
3. Click the Apply changes button.
Consider increasing the minimum resolution when PMM Server and PMM Client are on different networks, or when Adding an Amazon RDS MySQL, Aurora MySQL, or Remote Instance.
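Metrics resolutions can also be set through the API; a sketch (the `/v1/Settings/Change` endpoint, the credentials, and the `hr`/`mr`/`lr` field names for high/medium/low resolution are assumptions for a typical PMM 2.x setup):

```shell
# Sketch: set metrics resolutions via the PMM Server API instead of the
# slider. Endpoint, credentials, and field names are assumptions.
curl --insecure --user admin:admin \
  --request POST https://pmm-server/v1/Settings/Change \
  --data '{"metrics_resolutions": {"hr": "5s", "mr": "10s", "lr": "60s"}}'
```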
How to set up Alerting in PMM?

You can make PMM Server trigger alerts when your monitored service reaches a threshold in two ways:
- using Grafana Alerting feature,
- using external Alertmanager (a high-performance solution developed by the Prometheus project to handle alerts sent by Prometheus).
Both options can be considered advanced features and require knowledge of third-party documentation.
Whether you use Grafana Alerting or Alertmanager, you need to configure an alerting rule that defines the conditions under which the alert is triggered, and the channel used to send the alert (e.g. email).
Grafana Alerts are already integrated into PMM Server and may be simpler to set up. Alertmanager allows the creation of more sophisticated alerting rules and can make installations with a large number of hosts easier to manage, but this additional flexibility comes at the expense of simplicity and requires advanced knowledge of Alertmanager rules. Currently, Percona cannot offer support for creating custom rules, so you should already have a working Alertmanager instance before using this feature. However, we are working on an integrated Alertmanager solution to make rule generation easy.
Alerting in Grafana allows attaching rules to your dashboard panels. Details about Grafana Alerting Engine and Rules can be found in the official documentation. Setting it up and running within PMM Server is covered by the following blog post.
PMM allows you to integrate Prometheus with an external Alertmanager. Configuration is done on the PMM Settings dashboard, where the Alertmanager section allows specifying the URL of the Alertmanager that will serve your PMM alerts, as well as your alerting rules in the YAML configuration format.
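As a sketch of what such a rule might look like, here is a minimal Prometheus-format alerting rule. The metric name, threshold, and labels are illustrative placeholders, not taken from a specific PMM installation:

```yaml
# Illustrative alerting rule in Prometheus rule-file format.
# Metric name, threshold, and labels are placeholders -- adapt them to the
# metrics actually exposed by your PMM installation.
groups:
  - name: pmm-example
    rules:
      - alert: MySQLDown
        expr: mysql_up == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "MySQL instance {{ $labels.instance }} is down"
```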
How to use a custom Prometheus configuration file inside of a PMM Server?

Normally, PMM Server fully manages the Prometheus configuration file. Still, some users may want to change the generated configuration to add additional scrape jobs, configure remote storage, etc.
Starting from version 2.4.0, when pmm-managed starts the Prometheus file generation process, it first tries to load a custom base file and, if that file is present, uses it as a base for the generated prometheus.yml. The prometheus.yml file can be regenerated by restarting the PMM Server container, or by the SetSettings API call with an empty body.
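The empty-body SetSettings call might look like the following (the `/v1/Settings/Change` endpoint path and the `admin:admin` credentials are assumptions for a typical PMM 2.x setup):

```shell
# Sketch: force PMM Server to regenerate prometheus.yml via the settings
# API with an empty body. Endpoint and credentials are assumptions.
curl --insecure --user admin:admin \
  --request POST https://pmm-server/v1/Settings/Change \
  --data '{}'
```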
You can find more details about using a custom Prometheus configuration file with PMM in a separate blog post.