Please join Percona’s CEO and Founder, Peter Zaitsev, on April 6th, 2017 at 8:00 am PDT / 11:00 am EDT (UTC-7) as he presents Best Practices Migrating to Open Source Databases.
This is a high-level webinar that covers the history of enterprise open source database use. It addresses both the advantages companies see in using open source database technologies, as well as the fears and reservations they might have.
In this webinar, we will look at how to address such concerns to help get a migration commitment. We’ll cover picking the right project, selecting the right team to manage migration, and developing the right migration path to maximize migration success (from a technical and organizational standpoint).

Peter Zaitsev, Co-Founder and CEO, Percona
Peter Zaitsev co-founded Percona and assumed the role of CEO in 2006. As one of the foremost experts on MySQL strategy and optimization, Peter leveraged both his technical vision and entrepreneurial skills to grow Percona from a two-person shop to one of the most respected open source companies in the business. With over 150 professionals in more than 20 countries, Peter’s venture now serves over 3,000 customers – including the “who’s who” of Internet giants, large enterprises and many exciting startups. Percona was named to the Inc. 5000 in 2013, 2014 and 2015.
Peter was an early employee at MySQL AB, eventually leading the company’s High Performance Group. A serial entrepreneur, Peter co-founded his first startup while attending Moscow State University, where he majored in Computer Science. Peter is a co-author of High Performance MySQL: Optimization, Backups, and Replication, one of the most popular books on MySQL performance. Peter frequently speaks as an expert lecturer at MySQL and related conferences, and regularly posts on the Percona Data Performance Blog. Fortune and DZone tapped him as a contributor, and his recent ebook Practical MySQL Performance Optimization Volume 1 is one of percona.com’s most popular downloads.
Welcome to another post in the series of Percona Live featured session blogs! In these blogs, we’ll highlight some of the session speakers that will be at this year’s Percona Live conference. We’ll also discuss how these sessions can help you improve your database environment. Make sure to read to the end to get a special Percona Live 2017 registration bonus!
In this Percona Live featured session, we’ll meet the folks at SelectStar, a database monitoring and management tool company. SelectStar will be a sponsor at Percona Live this year.
I recently came across the SelectStar database monitoring product. There are a number of monitoring products on the market (with the evolution of various SaaS and on-premises solutions), but SelectStar piqued my interest for a few reasons. I had a chance to speak with Cameron Jones, Principal Product Manager at SelectStar about their tool:
Percona: What were the challenges that led to developing SelectStar?
Cameron: One of the challenges that we’ve found in the database monitoring and management sector comes from the dilution of the database market – and not in a bad way. Traditional, closed source database solutions continue to be used across the board (especially by large enterprises), but open source options like MySQL, MongoDB, PostgreSQL and Elasticsearch continue to gain traction as organizations seek solutions that meet their demand for agility and flexibility.
From a database monitoring perspective, this adds some challenges. Traditional solutions are focused on monitoring RDBMS and are really great at it, while newer solutions may only focus on one piece of the puzzle (NoSQL or cloud only, for example).
Percona: How does SelectStar compare to other monitoring and management tools?
Cameron: SelectStar covers a wide array of open and closed source database solutions and is easy to set up. This makes it ideal for enterprises that have a lot going on. Here is the matrix of supported products from our website (table: database types and the key metrics SelectStar monitors, including big data platforms).
In addition to monitoring key metrics for different database types, one of the key differentiators for SelectStar is its comprehensive alerts and recommendations system.
The alerts and recommendations are designed to ensure you have an immediate understanding of key issues – and where they are coming from. MONyog is great at this for MySQL, but falls short for other databases. With SelectStar, you can pinpoint the exact database instance that may be causing the issue, or go further up the chain and see if it’s an issue impacting several database instances at the host level.
Recommendations are often tied to alerts – if you have a red alert, there’s going to be a recommendation tied to it on how you can improve. However, the recommendations pop up even if your database is completely healthy – ensuring that you have visibility into how you can improve your configuration before you actually have an issue impacting performance.
With insight into key metrics, alerts and recommendations, you can fine-tune your database performance. In addition, it gives you the opportunity to be more proactive with your database monitoring.
Percona: Is configuring SelectStar difficult?
Cameron: SelectStar is easy to set up – in fact, most customers are up and running in 20 minutes.
Simply head over to the website – selectstar.io – and log in. From there, you’ll be greeted by a welcome screen where you can easily click through and configure a database.
To configure a database, you select your type:
And from there, set up your collector by inputting some key information.
And that’s it! As soon as it’s configured, the collector will start gathering information and data is populated within 20 minutes.
Percona: How does SelectStar work?
Cameron: Using agentless collectors, SelectStar gathers data from both your on-premises and AWS platforms so that you can have insight into all of your database instances.
The collector is basically an independent machine within your infrastructure that pulls data from your databases. It is lightweight, so it doesn’t affect database performance. This is a different approach from most other monitoring tools.
Router Metrics (Shown Above)
Mongo relationship tree displaying router, databases, replica set, shards and nodes. (Shown Above)
Percona: Any final thoughts? What are you looking forward to at Percona Live?
Cameron: If you’re in the market for a new database monitoring solution, SelectStar is worth looking at because it covers a breadth of databases with depth in the key metrics, alerts and notifications that optimize performance across your databases. We have a free trial, so you have an easy option to try it. We’re looking forward to meeting with as much of the community as possible, getting feedback and hearing about people’s monitoring needs.
Register for Percona Live Data Performance Conference 2017, and meet the creators of SelectStar. You can find them at selectstar.io. Use the code FeaturedTalk and receive $100 off the current registration price!
Percona Live Data Performance Conference 2017 is the premier open source event for the data performance ecosystem. It is the place to be for the open source community, as well as businesses that thrive in the MySQL, NoSQL, cloud, big data and Internet of Things (IoT) marketplaces. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.
The Percona Live Data Performance Conference will be April 24-27, 2017 at the Hyatt Regency Santa Clara & The Santa Clara Convention Center.
In honor of the upcoming MariaDB M17 conference in New York City on April 11-12, we have enhanced Percona Monitoring and Management (PMM) Metrics Monitor with a new MariaDB Dashboard and multiple new graphs!
The Percona Monitoring and Management MariaDB Dashboard builds on the efforts of the MariaDB development team to instrument the Aria Storage Engine Status Variables related to Aria Pagecache and Aria Transaction Log activity, the tracking of Index Condition Pushdown (ICP), InnoDB Online DDL when using ALTER TABLE ... ALGORITHM=INPLACE, InnoDB Deadlocks Detected, and finally InnoDB Defragmentation. This new dashboard is available in Percona Monitoring and Management release 1.1.2. Download it now using our Docker, VirtualBox or Amazon AMI installation options!
Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring MySQL®, MariaDB® and MongoDB® performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL, MariaDB and MongoDB servers to ensure that your data works as efficiently as possible.

Aria Pagecache Reads/Writes
MariaDB 5.1 introduced the Aria Storage Engine, MariaDB’s MyISAM replacement. Originally known as the Maria storage engine, it was renamed in late 2010 to avoid confusion with the overall MariaDB project name. The Aria Pagecache Status Variables graph plots the count of disk block reads and writes, which occur when the data isn’t already in the Aria Pagecache. We also plot the reads and writes from the Aria Pagecache that did not incur a disk lookup (as the data was previously fetched and available from the pagecache):

Aria Pagecache Blocks
Aria reads and writes to the pagecache in order to cache data in RAM and avoid or delay activity related to disk. Overall, this translates into faster database query response times:
Aria Pagecache Total Blocks is calculated from Aria system variables using the following formula:

aria_pagecache_buffer_size / aria_block_size

For example, with the default aria_pagecache_buffer_size of 128MB (134217728 bytes) and the default aria_block_size of 8192 bytes, this works out to 16384 pagecache blocks.

Aria Transaction Log Syncs
As Aria strives to be a fully ACID- and MVCC-compliant storage engine, an important factor is support for transactions. A transaction is the unit of work in a database that defines how to implement the four properties of Atomicity, Consistency, Isolation, and Durability (ACID). This graph tracks the rate at which Aria fsyncs the Aria Transaction Log to disk. You can think of this as the “write penalty” for running a transactional storage engine:

InnoDB Online DDL
MySQL 5.6 introduced in-place DDL operations via ALTER TABLE ... ALGORITHM=INPLACE, which in some cases avoid performing a table copy and thus don’t block INSERT/UPDATE/DELETE. MariaDB implemented three measures to track ongoing InnoDB Online DDL operations, which we plot via the following three status variables:
For more information, please see the MariaDB blog post Monitoring progress and temporal memory usage of Online DDL in InnoDB.

InnoDB Defragmentation
MariaDB merged the Facebook/Kakao patch for defragmenting InnoDB tablespaces into their 10.1 release. Your MariaDB instance needs to have been started with innodb_defragment=1, and your tables need to use innodb_file_per_table=1, for this to work. We plot the following three status variables:
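As a reference, a minimal my.cnf fragment enabling the two prerequisites just described might look like this (option names as above; this is a sketch, not a complete configuration):

```ini
[mysqld]
# Each table in its own tablespace file -- required for defragmentation
innodb_file_per_table = 1
# Enable the Facebook/Kakao InnoDB defragmentation feature (MariaDB 10.1+)
innodb_defragment = 1
```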
Index Condition Pushdown (ICP)

Oracle introduced Index Condition Pushdown (ICP) in MySQL 5.6. From the manual:
Index Condition Pushdown (ICP) is an optimization for the case where MySQL retrieves rows from a table using an index. Without ICP, the storage engine traverses the index to locate rows in the base table and returns them to the MySQL server which evaluates the WHERE condition for the rows. With ICP enabled, and if parts of the WHERE condition can be evaluated by using only columns from the index, the MySQL server pushes this part of the WHERE condition down to the storage engine. The storage engine then evaluates the pushed index condition by using the index entry and only if this is satisfied is the row read from the table. ICP can reduce the number of times the storage engine must access the base table and the number of times the MySQL server must access the storage engine.
Essentially, the closer that ICP Attempts are to ICP Matches, the better!

InnoDB Deadlocks Detected (MariaDB 10.1 Only)
Ever since MySQL implemented a transactional storage engine, there have been deadlocks. Deadlocks are conditions where different transactions are unable to proceed because each holds a lock that the other needs. In MariaDB 10.1, there is a status variable that counts the occurrences of deadlocks since server startup. Previously, you had to instrument your application to get an accurate count of deadlocks, because you could otherwise miss occurrences if your polling interval wasn’t frequent enough (even using pt-deadlock-logger). Unfortunately, this status variable doesn’t appear to be present in the MariaDB 10.2.4 build I tested.
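On MariaDB 10.1, the counter can be read with SHOW GLOBAL STATUS. A minimal sketch of extracting the value from mysql’s tab-separated batch output follows; the sample output is hardcoded here for illustration, and on a live server you would pipe `mysql -NBe "SHOW GLOBAL STATUS LIKE 'Innodb_deadlocks'"` into the same awk:

```shell
# Sample batch-mode (-NB) output of:
#   mysql -NBe "SHOW GLOBAL STATUS LIKE 'Innodb_deadlocks'"
# (the value 42 is made up for illustration)
sample="$(printf 'Innodb_deadlocks\t42')"

# Extract the counter value: the second tab-separated field
deadlocks=$(printf '%s\n' "$sample" | awk -F'\t' '{print $2}')
echo "$deadlocks"
```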
You can see the MariaDB Dashboard and new graphs in action at the PMM Demo site. If you feel the graphs need any tweaking or if I’ve missed anything, leave a note on the blog. You can also write me directly (I look forward to your comments): email@example.com.
To start: on the ICP graph, should we have a line that defines the percentage of successful ICP matches vs. attempts?
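For what it’s worth, such a line could be derived from the existing Handler_icp_attempts and Handler_icp_match status variables. A minimal sketch of the percentage calculation, with made-up counter values for illustration:

```shell
# Hypothetical counter values as read from SHOW GLOBAL STATUS
icp_attempts=120000   # Handler_icp_attempts
icp_match=90000       # Handler_icp_match

# Percentage of pushed-down index conditions that actually matched
pct=$((icp_match * 100 / icp_attempts))
echo "${pct}%"   # 75%
```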
Percona announces the release of Percona Monitoring and Management 1.1.2 on April 3, 2017.
For installation instructions, see the Deployment Guide.
This release includes several new dashboards in Metrics Monitor, updated versions of software components used in PMM Server, and a number of small bug fixes.

Thank You to the Community!
We would like to mention some of the key contributors in this release, and thank the community for continued support of PMM:
This release includes the following new dashboards:
The new MariaDB dashboard also includes three new graphs for monitoring InnoDB within MariaDB. We are planning to move them into one of the existing InnoDB dashboards in the next PMM release:
PMM is based on several third-party open-source software components. We ensure that PMM includes the latest versions of these components in every release, making it the most secure, stable and feature-rich database monitoring platform possible. Here are some highlights of changes in the latest releases:
WARNING: Some Metrics Monitor data may be lost when renaming a running client.
Percona Monitoring and Management is an open-source platform for managing and monitoring MySQL and MongoDB performance. Percona developed it in collaboration with experts in the field of managed database services, support and consulting.
PMM is a free and open-source solution that you can run in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.
A live demo of PMM is available at pmmdemo.percona.com.
Please provide your feedback and questions on the PMM forum.
If you would like to report a bug or submit a feature request, use the PMM project in JIRA.
In this blog post, we’ll look at the performance of SST data transfer using encryption.
In my previous post, we reviewed SST data transfer in an unsecured environment. Now let’s take a closer look at a setup with encrypted network connections between the donor and joiner nodes.
The base setup is the same as the previous time:
The setup details for the encryption aspects in our testing:
Several notes regarding the above aspects:
Also note that in the chart below there are results for two variants of rsync: “rsync” (the current approach) and “rsync_improved” (the improved one). I explained the difference between them in my previous post.
In my testing for streaming over encrypted connections, I used the --parallel=4 option for xtrabackup. In my previous post, I showed that this is an important factor in getting the best time. There is also a way to pass the name of the cipher that socat will use for the OpenSSL connection to the wsrep_sst_xtrabackup-v2.sh script, via the sockopt option. For instance:

[sst]
inno-backup-opts="--parallel=4"
sockopt=",cipher=AES128"
The xtrabackup tool has a feature to encrypt data when performing a backup. That encryption is based on the libgcrypt library, and it’s possible to use the AES128 or AES256 ciphers. For encryption, it’s necessary to generate a key and then provide it to xtrabackup to perform encryption on the fly. There is also a way to specify the number of threads that will encrypt data, along with the chunk size, to tune the encryption process.
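As a sketch of how those pieces fit together: a random key can be generated with openssl and then passed to xtrabackup along with the encryption tuning options mentioned above. The invocation below is illustrative, not the exact command line used in these tests:

```shell
# Generate a 24-byte random key, base64-encoded (yields a 32-character string)
key=$(openssl rand -base64 24)
echo "${#key}"

# Illustrative backup invocation using that key (shown as a comment, not run here):
#   xtrabackup --backup --stream=xbstream \
#     --encrypt=AES256 --encrypt-key="$key" \
#     --encrypt-threads=4 --encrypt-chunk-size=64K \
#     --parallel=4 --target-dir=./backup
```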
The current version of xtrabackup supports an efficient way to read, compress and encrypt data in parallel, and then write/stream it. On the receiving side, however, we can’t decompress/decrypt the stream on the fly. The stream first has to be received and written to disk with the xbstream tool, and only after that can you use xtrabackup with the --decrypt/--decompress modes to unpack the data. The inability to process data on the fly, and having to save the stream to disk for later processing, has a notable impact on the stream time from the donor to the joiner. We plan to fix that issue, so that encryption+compression+streaming of data with xtrabackup happens without the need to write the stream to disk on the receiver side.
For my testing, in the case of xtrabackup with internal encryption, I didn’t use SSL encryption for socat.
Results (click on the image for an enlarged view):
For the purposes of this testing, I’ve created a script, “sst-bench.sh”, that covers all the methods used in this post. You can use it to measure all the above SST methods in your environment. In order to run the script, you have to adjust several environment variables at the beginning of the script: joiner IP, datadir locations on the joiner and donor hosts, etc. After that, put the script on the “donor” and “joiner” hosts and run it as follows:

#joiner_host> sst_bench.sh --mode=joiner --sst-mode=<tar|xbackup|rsync> --cipher=<DEFAULT|AES128|AES256|CHACHA20> --ssl=<0|1> --aesni=<0|1>

#donor_host> sst_bench.sh --mode=donor --sst-mode=<tar|xbackup|rsync|rsync_improved> --cipher=<DEFAULT|AES128|AES256|CHACHA20> --ssl=<0|1> --aesni=<0|1>