
New MariaDB Dashboard in Percona Monitoring and Management Metrics Monitor

 | April 3, 2017 |  Posted In: Events and Announcements, InnoDB, Insight for DBAs, Insight for Developers, MariaDB, MySQL, Percona Monitoring and Management, Percona Server for MySQL, Percona Software, PMM


In honor of the upcoming MariaDB M17 conference in New York City on April 11-12, we have enhanced Percona Monitoring and Management (PMM) Metrics Monitor with a new MariaDB Dashboard and multiple new graphs!

The Percona Monitoring and Management MariaDB Dashboard builds on the MariaDB development team's instrumentation efforts. It adds graphs for the Aria Storage Engine status variables related to Aria Pagecache and Aria Transaction Log activity, Index Condition Pushdown (ICP), InnoDB Online DDL when using ALTER TABLE ... ALGORITHM=INPLACE, InnoDB Deadlocks Detected, and InnoDB Defragmentation. This new dashboard is available in Percona Monitoring and Management release 1.1.2. Download it now using our Docker, VirtualBox or Amazon AMI installation options!

Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring MySQL®, MariaDB® and MongoDB® performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL, MariaDB and MongoDB servers to ensure that your data works as efficiently as possible.

Aria Pagecache Reads/Writes

MariaDB 5.1 introduced the Aria Storage Engine, which is MariaDB’s MyISAM replacement. Originally known as the Maria storage engine, it was renamed in late 2010 to avoid confusion with the overall MariaDB project name. The Aria Pagecache Status Variables graph plots the count of disk block reads and writes, which occur when the data isn’t already in the Aria pagecache. We also plot the reads and writes from the Aria pagecache itself, which count the requests that did not incur a disk lookup (because the data was previously fetched and was available in the Aria pagecache):

MariaDB - Aria Pagecache Reads/Writes
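As a minimal sketch of how you might turn these counters into a cache-hit ratio (assuming a snapshot of the Aria pagecache status counters, e.g. from SHOW GLOBAL STATUS LIKE 'Aria_pagecache_%'; the helper name is hypothetical):

```python
def aria_read_hit_ratio(status):
    """Share of Aria read requests served from the pagecache,
    i.e. without a physical disk block read.

    status: dict of counter name -> value, a snapshot of
    SHOW GLOBAL STATUS LIKE 'Aria_pagecache_%'.
    """
    requests = status["Aria_pagecache_read_requests"]
    if requests == 0:
        return 1.0  # no reads yet; treat as fully cached
    return 1.0 - status["Aria_pagecache_reads"] / requests

# Example with made-up counter values:
snapshot = {"Aria_pagecache_read_requests": 100_000,
            "Aria_pagecache_reads": 2_500}
print(aria_read_hit_ratio(snapshot))  # 0.975
```

A ratio close to 1.0 means almost all reads were satisfied from RAM rather than disk.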

Aria Pagecache Blocks

Aria reads and writes to the pagecache in order to cache data in RAM and avoid or delay activity related to disk. Overall, this translates into faster database query response times:

  • Aria_pagecache_blocks_not_flushed: The number of dirty blocks in the Aria pagecache.
  • Aria_pagecache_blocks_unused: Free blocks in the Aria pagecache.
  • Aria_pagecache_blocks_used: Blocks used in the Aria pagecache.

Aria Pagecache Total Blocks is calculated from the Aria system variables using the formula aria_pagecache_buffer_size / aria_block_size:

MariaDB - Aria Pagecache Blocks
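A quick sketch of that calculation in Python (assuming the default aria_block_size of 8192 bytes and the default aria_pagecache_buffer_size of 128M):

```python
def aria_pagecache_total_blocks(buffer_size, block_size=8192):
    """Total Aria pagecache blocks:
    aria_pagecache_buffer_size / aria_block_size."""
    return buffer_size // block_size

# With the default 128M buffer:
print(aria_pagecache_total_blocks(128 * 1024 * 1024))  # 16384
```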

Aria Transaction Log Syncs

As Aria strives to be a fully ACID- and MVCC-compliant storage engine, an important factor is support for transactions. A transaction is the unit of work in a database that defines how to implement the four properties of Atomicity, Consistency, Isolation, and Durability (ACID). This graph tracks the rate at which Aria fsyncs the Aria Transaction Log to disk. You can think of this as the “write penalty” for running a transactional storage engine:

MariaDB - Aria Transaction Log Syncs
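PMM graphs counters like Aria_transaction_log_syncs as rates. As a minimal sketch, the per-second sync rate between two polling samples is just the counter delta divided by the interval:

```python
def per_second_rate(prev, curr, interval_seconds):
    """Rate of a monotonically increasing counter (e.g.
    Aria_transaction_log_syncs) between two samples."""
    return (curr - prev) / interval_seconds

# Two samples of the counter taken 60 seconds apart:
print(per_second_rate(1_000, 1_180, 60))  # 3.0 fsyncs/second
```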

InnoDB Online DDL

MySQL 5.6 introduced in-place DDL operations via ALTER TABLE ... ALGORITHM=INPLACE, which in some cases avoid performing a table copy and thus don’t block INSERT/UPDATE/DELETE statements. MariaDB implemented three measures to track ongoing InnoDB Online DDL operations, which we plot via the following three status variables:

  • Innodb_onlineddl_pct_progress: Shows the progress of the in-place ALTER TABLE. It might not be accurate, as in-place alter is highly dependent on the disk and buffer pool status.
  • Innodb_onlineddl_rowlog_pct_used: Shows row log buffer usage as a 5-digit integer (10000 means 100.00%).
  • Innodb_onlineddl_rowlog_rows: Number of rows stored in the row log buffer.
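Since Innodb_onlineddl_rowlog_pct_used is reported as a 5-digit integer where 10000 means 100.00%, converting it to a readable percentage is a simple division. A small sketch (the helper name is hypothetical):

```python
def rowlog_pct_used(raw):
    """Convert Innodb_onlineddl_rowlog_pct_used, reported as a
    5-digit integer (10000 == 100.00%), into a percentage."""
    return raw / 100.0

print(rowlog_pct_used(10000))  # 100.0
print(rowlog_pct_used(4275))   # 42.75
```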

MariaDB - InnoDB Online DDL

For more information, please see the MariaDB blog post Monitoring progress and temporal memory usage of Online DDL in InnoDB.

InnoDB Defragmentation

MariaDB merged the Facebook/Kakao patch for defragmenting InnoDB tablespaces into its 10.1 release. For this to work, your MariaDB instance needs to have been started with innodb_defragment=1 and your tables need to use innodb_file_per_table=1. We plot the following three status variables:

  • Innodb_defragment_compression_failures: Number of defragment re-compression failures
  • Innodb_defragment_failures: Number of defragment failures
  • Innodb_defragment_count: Number of defragment operations

MariaDB - InnoDB Defragmentation

Index Condition Pushdown

Oracle introduced this in MySQL 5.6. From the manual:

Index Condition Pushdown (ICP) is an optimization for the case where MySQL retrieves rows from a table using an index. Without ICP, the storage engine traverses the index to locate rows in the base table and returns them to the MySQL server which evaluates the WHERE condition for the rows. With ICP enabled, and if parts of the WHERE condition can be evaluated by using only columns from the index, the MySQL server pushes this part of the WHERE condition down to the storage engine. The storage engine then evaluates the pushed index condition by using the index entry and only if this is satisfied is the row read from the table. ICP can reduce the number of times the storage engine must access the base table and the number of times the MySQL server must access the storage engine.

Percona’s Agustin Gallego performed some in-depth research regarding MariaDB ICP counters. For more information on how to interpret them, please read his blog post MariaDB Handler_icp_% Counters: What They Are, and How To Use Them.

MariaDB - Index Condition Pushdown (ICP)
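As a rough sketch of the metric proposed at the end of this post (the percentage of successful ICP matches versus attempts, computed from the Handler_icp_attempts and Handler_icp_match counters; the helper name is hypothetical):

```python
def icp_match_pct(attempts, matches):
    """Percentage of pushed-down index condition checks that matched,
    from the Handler_icp_attempts and Handler_icp_match counters."""
    if attempts == 0:
        return 0.0  # ICP never kicked in
    return 100.0 * matches / attempts

print(icp_match_pct(200, 150))  # 75.0
```

Note the discussion in the comments below: the interpretation of this ratio is subtler than it first appears.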

InnoDB Deadlocks Detected (MariaDB 10.1 Only)

Ever since MySQL implemented a transactional storage engine there have been deadlocks. Deadlocks are conditions where different transactions are unable to proceed because each holds a lock that the other needs. In MariaDB 10.1, there is a status variable that counts the occurrences of deadlocks since server startup. Previously, you had to instrument your application to get an accurate count of deadlocks, because otherwise you could miss occurrences if your polling interval wasn’t configured frequently enough (even using pt-deadlock-logger). Unfortunately, this status variable doesn’t appear to be present in the MariaDB 10.2.4 build I tested:

MariaDB - InnoDB Deadlocks Detected

Again, please download Percona Monitoring and Management 1.1.2 to take advantage of the new MariaDB Dashboard and new graphs! For installation instructions, see the Deployment Guide.

You can see the MariaDB Dashboard and new graphs in action at the PMM Demo site. If you feel the graphs need any tweaking or if I’ve missed anything, leave a note on the blog. You can also write me directly (I look forward to your comments): michael.coburn@percona.com.

To start: on the ICP graph, should we have a line that defines the percentage of successful ICP matches vs. attempts?

Michael Coburn

Michael joined Percona as a Consultant in 2012 after having worked with high volume stock photography websites and email service provider platforms. With a foundation in Systems Administration, Michael enjoys working with SAN technologies and high availability solutions. A Canadian, Michael currently lives in Costa Rica with his wife, two children, and two dogs.

5 Comments

  • I think this misses a few recent and important counters.
    Binlog_commits and Binlog_group_commits counters allow to watch how Group Commit with the binary log is functioning.

    Then, Binlog_group_commit_trigger_count, Binlog_group_commit_trigger_lock_wait, Binlog_group_commit_trigger_timeout are probably useful for exploring group commit’s behavior.

    Can’t remember what counters are used for optimistic parallel replication, but the one with # transactions that had to roll back and be retried is important.

    • Hi @spetrunia2 , tracking Group Commit is a great idea! I’ve filed https://jira.percona.com/browse/PMM-747 for us to implement these two graphs in an upcoming release. We will likely have them on the MySQL Replication dashboard https://pmmdemo.percona.com/graph/dashboard/db/mysql-replication

  • > Essentially, the closer that ICP Attempts are to ICP Matches, the better!

    I am reading https://mariadb.com/kb/en/mariadb/index-condition-pushdown/ and I see this passage:

    > That way, the value Handler_icp_attempts – Handler_icp_match shows the number records that the server did not have to read because of Index Condition Pushdown.

    Suppose icp_attempts==icp_matches. According to the statement in this blog post, this is the best situation. However, in this case the difference between them is zero, and according to the quoted phrase the number of record reads saved is also zero, which means ICP is not useful.

    Something doesn’t add up, here.

    • Hi,

      I’ve been looking into this, and after running some tests, I’m left with a couple of questions regarding the quotes you sent. I need to research a bit more, but maybe you have some ideas about them 🙂

      Neither of them seem to be entirely accurate: you can have ICP matches lower than attempts, and it will still be something good; and (attempts – matches) being 0 doesn’t mean that there were no gains, I think, because the server actually saved (count(*) – attempts) rows from being read. I also wonder about the exact meaning of each handler_icp_ counter, since their docs seem to be inaccurate, too:

      “The smaller the ratio of Handler_icp_attempts to Handler_icp_match the better the filtering”

      https://mariadb.com/kb/en/mariadb/server-status-variables/#handler_icp_attempts

      I will continue on this, but let me know if you have any thoughts.

      • Ok, I was mistaken, and what you said was correct. It doesn’t mean it makes things worse, just that if they are equal, it would have been the same to do the check without using ICP. The best situation is indeed to have (attempts – match) as big as possible.
