Three Methods of Installing Percona Monitoring and Management

In this blog post, we’ll look at three different methods for installing Percona Monitoring and Management (PMM).

Percona offers multiple methods of installing Percona Monitoring and Management, depending on your environment and scale. I’ll also share comments on which installation methods we’ve decided to forego for now. Let’s begin by reviewing the three supported methods:

  1. Virtual Appliance
  2. Amazon Machine Image
  3. Docker

Virtual Appliance

We ship an OVF/OVA method to make installation as simple as possible, with the least amount of effort required and at the lowest cost to you. You can leverage the investment in your virtualization deployment platform. OVF is an open standard for packaging and distributing virtual appliances, designed to be run in virtual machines.

Using the OVA with VirtualBox is a common first step: it lets you quickly spin up a working PMM system and get right to adding clients and observing activity against your own MySQL and MongoDB instances. But you can also use the OVA file for enterprise deployments. It is a flexible file format that can be imported into other popular hypervisor systems such as VMware, Red Hat Virtualization, XenServer, Microsoft System Center Virtual Machine Manager and others.
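As a quick sketch, you can import and boot the appliance in VirtualBox entirely from the command line. The OVA filename and VM name below are placeholders for whichever release you downloaded:

```shell
# Import the appliance (filename is a placeholder -- use your downloaded OVA).
VBoxManage import PMM-Server.ova --vsys 0 --vmname "PMM-Server"

# Boot the VM without a GUI window.
VBoxManage startvm --type headless "PMM-Server"
```

Once the VM is up, note the IP address it acquires and point your browser (and your PMM clients) at it.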

We’d love to hear your feedback on this installation method!

AWS AMI

We also provide an AWS AMI for easy scaling of PMM Server in AWS, so that you can deploy onto whatever instance size your monitoring workload requires. Depending on the AWS region you’re in, you’ll need to choose the appropriate AMI ID. Soon we’ll be moving to the AWS Marketplace for even easier deployment. Once that is implemented, you will no longer need to clone an existing AMI ID.
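As a sketch, launching the AMI with the AWS CLI looks roughly like this. The AMI ID, key pair, and security group below are placeholders; substitute the AMI ID published for your region:

```shell
# Placeholders: ami-xxxxxxxx (region-specific PMM Server AMI),
# my-key (your EC2 key pair), sg-xxxxxxxx (a security group allowing HTTP).
aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type t2.medium \
  --key-name my-key \
  --security-group-ids sg-xxxxxxxx
```

Pick an instance type sized for the number of hosts you plan to monitor; you can stop the instance and resize it later if needed.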

Docker

Docker is our most common production deployment method. It is easy (three commands) and scalable (tuning is passed on the command line to docker run). While we recognize that Docker is still a relatively new deployment system for many users, it is gaining adoption dramatically, and it is where Percona is investing the bulk of our development efforts. We deploy PMM Server as two Docker containers: one for storing the data that persists across restarts/upgrades, and the other for running the actual PMM Server binaries (Grafana, Prometheus, Consul, Orchestrator, QAN, etc.).
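For reference, the three commands look roughly like the following (image name and volume paths match the PMM 1.x documentation; check the current docs for your version, as the exact paths may change):

```shell
# 1. Pull the PMM Server image.
docker pull percona/pmm-server:latest

# 2. Create the data container that persists across restarts/upgrades.
docker create \
  -v /opt/prometheus/data \
  -v /opt/consul-data \
  -v /var/lib/mysql \
  -v /var/lib/grafana \
  --name pmm-data \
  percona/pmm-server:latest /bin/true

# 3. Run PMM Server, attaching the data volumes from the data container.
docker run -d \
  -p 80:80 \
  --volumes-from pmm-data \
  --name pmm-server \
  --restart always \
  percona/pmm-server:latest
```

The split between pmm-data and pmm-server is what makes upgrades simple: you remove and re-create the server container from a newer image while the data container (and its volumes) stays in place.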

Where are the RPM/DEB/tar.gz packages?!

A common question I hear is: why doesn’t Percona support a binary-based installation method?

We hear you: RPM/DEB/tar.gz methods are commonly used today for many of your own applications. Percona is striving for simplicity in our deployment of PMM Server, and we spend considerable development and QA effort validating the specific versions of Grafana/Prometheus/QAN/consul/Orchestrator all work seamlessly together.

Percona wants to ensure OS compatibility and long-term support of PMM, and to do binary distribution “right” means it can quickly get expensive to build and QA across all the popular Linux distributions available today. We’re in no way against binary distributions. For example, see our list of the nine supported platforms for which we provide bug fix support.

Percona decided to focus our development efforts on stability and features, and less on the number of supported platforms. Hence the hyper-focus on Docker. We don’t have any current plans to move to a binary deployment method for PMM, but we are always open to hearing your feedback. If there is considerable interest, then please let me know via the comments below. We’ll take these thoughts into consideration for PMM planning in the second half of 2017.

Which other methods of installing Percona Monitoring and Management would you like to see?


Comments (16)

  • Jouni Järvinen

    Docker is pretty much out of the question. The systems I run have enough work running everything as their own processes, and adding a virtualizer …

    June 15, 2017 at 3:01 pm
    • Jouni Järvinen

      Let’s put it this way: a virtualizer just complicates everything; additional tweaking, calculating, estimating, … Completely unnecessary.

      June 15, 2017 at 3:09 pm
    • Michael Coburn

      Thanks for the feedback – have you used our AMI? Yes, I appreciate the irony that EC2 is virtualization as well, but at least that is one, and only one, level of virt. 🙂

      June 15, 2017 at 3:22 pm
      • Jouni Järvinen

        I don’t think that’ll ever be an option in the slightest.

        June 15, 2017 at 3:30 pm
        • Michael Coburn

          Understood – what would be your preferred method of installation of PMM?

          June 15, 2017 at 3:43 pm
          • Jouni Järvinen

            DEB/RPM/etc. Even having to compile the software myself is better than using a virtualizer for just one purpose, but given the open-source nature, someone will compile it the standard way if the devs won’t.

            There are continuous integration systems, and you have virtualized systems such as Debian for this purpose, and you can control different Linux distro types with the same remote-control software, so I don’t see how spinning up a DEB/RPM/… would be such a hassle that it’s not worth it.

            June 16, 2017 at 7:36 am
  • Philip Iezzi

    We would greatly appreciate DEB packages, but we absolutely understand the extra effort it would require for you to build and support them. Thanks for the explanation about your development focus.
    So far, the Docker image (and the two other proposed solutions) has held us back from using PMM altogether – and we are probably not the only ones out there making this decision. Bad PR for PMM, which surely is a great product!

    We could perfectly live with one of the following two options:

    – LXC package (a simple .tar.gz; it could probably be ported easily from the Docker image – or can I do this myself via docker export?)
    – Omnibus package (as GitLab has used for several years now; works like a charm, no upgrade issues ever! Chef-based)

    That would be great! Thanks Michael for listening to the crowd.

    June 15, 2017 at 3:31 pm
    • Michael Coburn

      Thanks for the feedback, Philip!

      June 15, 2017 at 3:44 pm
  • jason y

    We use Rackspace, not AWS, so this is limiting for me to install easily, and I have been wanting to install directly onto a Linux VM without having to install Docker.

    June 16, 2017 at 1:25 pm
    • Michael Coburn

      Thanks Jason for your note. I’ve added Rackspace support to our list for future installation candidates.

      June 16, 2017 at 1:37 pm
  • Matt Butch

    Agreed with the above responses. I’d prefer a deb package (especially something like the Omnibus package that GitLab uses, as somebody mentioned above), particularly because I can automate a server deployment with Ansible.

    We also use Rackspace, so it’s also difficult for us to install.

    June 19, 2017 at 10:14 am
    • Michael Coburn

      Thanks for the reply, Matt!

      June 19, 2017 at 12:31 pm
  • Dawelbeit

    Running the Docker image on Kubernetes feels slow, with an average load of 5-15. Any ideas?

    June 28, 2017 at 1:20 am
    • Jouni Järvinen

      The first culprit could be that you’re running too much on the first core/thread of the first CPU, or whichever CPU you’ve allocated it to.

      June 28, 2017 at 6:55 am
  • Joseph Chan

    I’ve tried to deploy the server with the OVA file Percona has provided. I had to edit the PMM-Server-2018-10-31-1318.ovf and convert the disk controllers from IDE and SATA controllers to SCSI controllers, as that’s the only type of controller we have backed with disks. That solved the problem of getting vCenter to deploy the OVF template to the host, but the VM will not boot. I suspect that by changing the controller, I’ve also changed the disk device ID recognized by the underlying OS in the VM. At this point, I think I’m just going to give up. Docker is not an option at the moment.

    I would like to put forward a couple of suggestions.
    1) It’s more likely that PMM gets deployed in a closed enterprise environment where SCSI and SAS devices are more common, so it would make sense for the OVA to support either of those device types.
    2) It would be simpler for the VM to have just a single disk. Additional disks can be created and mounted by the end user once the VM is deployed. It would also make it easier to identify the OS disk if I had to boot in single-user mode to fix the current disk issue.

    Just for your reference, in case you are curious, here is what I did to get vCenter to deploy the OVA.

    FROM:

    <Item>
      <rasd:Address>0</rasd:Address>
      <rasd:Caption>ideController0</rasd:Caption>
      <rasd:Description>IDE Controller</rasd:Description>
      <rasd:ElementName>ideController0</rasd:ElementName>
      <rasd:InstanceID>3</rasd:InstanceID>
      <rasd:ResourceSubType>PIIX4</rasd:ResourceSubType>
      <rasd:ResourceType>5</rasd:ResourceType>
    </Item>

    <Item>
      <rasd:Address>1</rasd:Address>
      <rasd:Caption>ideController1</rasd:Caption>
      <rasd:Description>IDE Controller</rasd:Description>
      <rasd:ElementName>ideController1</rasd:ElementName>
      <rasd:InstanceID>4</rasd:InstanceID>
      <rasd:ResourceSubType>PIIX4</rasd:ResourceSubType>
      <rasd:ResourceType>5</rasd:ResourceType>
    </Item>

    <Item>
      <rasd:Address>0</rasd:Address>
      <rasd:Caption>sataController0</rasd:Caption>
      <rasd:Description>SATA Controller</rasd:Description>
      <rasd:ElementName>sataController0</rasd:ElementName>
      <rasd:InstanceID>5</rasd:InstanceID>
      <rasd:ResourceSubType>AHCI</rasd:ResourceSubType>
      <rasd:ResourceType>20</rasd:ResourceType>
    </Item>

    TO:

    <Item>
      <rasd:Address>0</rasd:Address>
      <rasd:Caption>SCSIController</rasd:Caption>
      <rasd:Description>SCSIController</rasd:Description>
      <rasd:ElementName>SCSIController</rasd:ElementName>
      <rasd:InstanceID>3</rasd:InstanceID>
      <rasd:ResourceSubType>VirtualSCSI</rasd:ResourceSubType>
      <rasd:ResourceType>6</rasd:ResourceType>
    </Item>

    <Item>
      <rasd:Address>1</rasd:Address>
      <rasd:Caption>SCSIController</rasd:Caption>
      <rasd:Description>SCSIController</rasd:Description>
      <rasd:ElementName>SCSIController</rasd:ElementName>
      <rasd:InstanceID>4</rasd:InstanceID>
      <rasd:ResourceSubType>VirtualSCSI</rasd:ResourceSubType>
      <rasd:ResourceType>6</rasd:ResourceType>
    </Item>

    <Item>
      <rasd:Address>2</rasd:Address>
      <rasd:Caption>SCSIController</rasd:Caption>
      <rasd:Description>SCSIController</rasd:Description>
      <rasd:ElementName>SCSIController</rasd:ElementName>
      <rasd:InstanceID>5</rasd:InstanceID>
      <rasd:ResourceSubType>VirtualSCSI</rasd:ResourceSubType>
      <rasd:ResourceType>6</rasd:ResourceType>
    </Item>

    November 6, 2018 at 11:14 pm
  • Joseph Chan

    Apologies, I’m not sure how to include the XML-like tags used in the OVF config file in the above post.

    November 6, 2018 at 11:20 pm
