
Percona XtraDB Cluster and Multiple NICs

Latest Forum Posts - April 5, 2017 - 8:43am
Hello:

Is there any information about using Percona XtraDB Cluster with multiple NICs, for example for internal cluster communication? Is this recommended? Is there any documentation about best practices?

Thanks for your comments.

Regards,

pt-table-checksum: too many connections on second cluster node

Latest Forum Posts - April 5, 2017 - 6:59am
I have the following environment in the Percona cluster. I start pt-table-checksum on rider2 to check the replica, but when it starts checking a big table, the number of threads on the writer increases to a very large number and I have to stop the checksum. Has anyone met such a case already, and is it possible to run the checksum on a read node without affecting the writer? Can wsrep_desync help with that?


rider1 <-> writer <-> rider2
              |
              +-> replica

Percona Live Webinar Thursday, April 6, 2017: Best Practices Migrating to Open Source Databases

Latest MySQL Performance Blog posts - April 4, 2017 - 9:45am

Please join Percona's CEO and founder, Peter Zaitsev, on April 6, 2017, at 8:00 am PDT / 11:00 am EDT (UTC-7) as he presents Best Practices Migrating to Open Source Databases.

Register Now

This is a high-level webinar that covers the history of enterprise open source database use. It addresses both the advantages companies see in using open source database technologies, as well as the fears and reservations they might have.

In this webinar, we will look at how to address such concerns to help get a migration commitment. We’ll cover picking the right project, selecting the right team to manage migration, and developing the right migration path to maximize migration success (from a technical and organizational standpoint).

Register for the webinar here.

Peter Zaitsev, Co-Founder and CEO, Percona

Peter Zaitsev co-founded Percona and assumed the role of CEO in 2006. As one of the foremost experts on MySQL strategy and optimization, Peter leveraged both his technical vision and entrepreneurial skills to grow Percona from a two-person shop to one of the most respected open source companies in the business. With over 150 professionals in 20 plus countries, Peter’s venture now serves over 3000 customers – including the “who’s who” of Internet giants, large enterprises and many exciting startups. Percona was named to the Inc. 5000 in 2013, 2014 and 2015.

Peter was an early employee at MySQL AB, eventually leading the company's High-Performance Group. A serial entrepreneur, Peter co-founded his first startup while attending Moscow State University, where he majored in Computer Science. Peter is a co-author of High Performance MySQL: Optimization, Backups, and Replication, one of the most popular books on MySQL performance. Peter frequently speaks as an expert lecturer at MySQL and related conferences, and regularly posts on the Percona Data Performance Blog. Fortune and DZone have tapped him as a contributor, and his recent ebook Practical MySQL Performance Optimization Volume 1 is one of percona.com's most popular downloads.

Percona Live Featured Session: Using SelectStar to Monitor and Tune Your Databases

Latest MySQL Performance Blog posts - April 4, 2017 - 9:18am

Welcome to another post in the series of Percona Live featured session blogs! In these blogs, we’ll highlight some of the session speakers that will be at this year’s Percona Live conference. We’ll also discuss how these sessions can help you improve your database environment. Make sure to read to the end to get a special Percona Live 2017 registration bonus!

In this Percona Live featured session, we’ll meet the folks at SelectStar, a database monitoring and management tool company. SelectStar will be a sponsor at Percona Live this year.

I recently came across the SelectStar database monitoring product. There are a number of monitoring products on the market (with the evolution of various SaaS and on-premises solutions), but SelectStar piqued my interest for a few reasons. I had a chance to speak with Cameron Jones, Principal Product Manager at SelectStar, about their tool:

Percona: What were the challenges that led to developing SelectStar?

Cameron: One of the challenges that we’ve found in the database monitoring and management sector comes from the dilution of the database market – and not in a bad way. Traditional, closed source database solutions continue to be used across the board (especially by large enterprises), but open source options like MySQL, MongoDB, PostgreSQL and Elasticsearch continue to gain traction as organizations seek solutions that meet their demand for agility and flexibility.

From a database monitoring perspective, this adds some challenges. Traditional solutions are focused on monitoring RDBMS and are really great at it, while newer solutions may only focus on one piece of the puzzle (NoSQL or cloud only, for example).

Percona: How does SelectStar compare to other monitoring and management tools?

Cameron: SelectStar covers a wide array of open and closed source database solutions and is easy to set up. This makes it ideal for enterprises that have a lot going on. Here is the matrix of supported products from our website:

Database Types and the Key Metrics Monitored by SelectStar:

Big Data (Hadoop, Cassandra)
  • Ops Counters – Inserts, Queries, etc.
  • Network Traffic
  • Asserts
  • Locks
  • Memory Usage

Cloud (Amazon Aurora, Amazon Dynamo, Amazon RDS, Microsoft Azure)
  • Queries
  • Memory Usage
  • Network
  • CPU Balance
  • IOPS

NoSQL (MongoDB)
  • Ops Counters – Inserts, Queries, etc.
  • Network Traffic
  • Asserts
  • Locks
  • Memory Usage

Open Source (PostgreSQL, MongoDB, MySQL, MariaDB)
  • Average Query Execution Time
  • Query Executions
  • Memory Usage
  • Wait Time

Traditional RDBMS (IBM DB2, MS SQL Server, Oracle)
  • Average Query Execution Time
  • Query Executions
  • Memory Usage
  • Wait Time


In addition to monitoring key metrics for different database types, one of the key differentiators of SelectStar is its comprehensive alerts and recommendations system.

The alerts and recommendations are designed to ensure you have an immediate understanding of key issues – and where they are coming from. Monyog is great at this for MySQL, but falls short in other areas. With SelectStar, you can pinpoint the exact database instance that may be causing the issue, or go further up the chain and see if it's an issue impacting several database instances at the host level.

Recommendations are often tied to alerts – if you have a red alert, there’s going to be a recommendation tied to it on how you can improve. However, the recommendations pop up even if your database is completely healthy – ensuring that you have visibility into how you can improve your configuration before you actually have an issue impacting performance.

With insight into key metrics, alerts and recommendations, you can fine-tune your database performance. In addition, it gives you the opportunity to become more proactive with your database monitoring.

Percona: Is configuring SelectStar difficult?

Cameron: SelectStar is easy to set up – in fact, most customers are up and running in 20 minutes.

Simply head over to the website – selectstar.io – and log in. From there, you’ll be greeted by a welcome screen where you can easily click through and configure a database.

To configure a database, you select your type, and from there set up your collector by inputting some key information.

And that’s it! As soon as it’s configured, the collector will start gathering information and data is populated within 20 minutes.

Percona: How does SelectStar work?

Cameron: Using agentless collectors, SelectStar gathers data from both your on-premises and AWS platforms so that you can have insight into all of your database instances.

The collector is basically an independent machine within your infrastructure that pulls data from your databases. It is lightweight, so it doesn't affect performance. This is a different approach from the other monitoring tools.

Router metrics

MongoDB relationship tree displaying the router, databases, replica set, shards, and nodes

Percona: Any final thoughts? What are you looking forward to at Percona Live?

Cameron: If you're in the market for a new database monitoring solution, SelectStar is worth looking at because it covers a breadth of databases with depth in the key metrics, alerts and notifications that optimize performance across your databases. We have a free trial, so you can easily try it out. We're looking forward to meeting with as much of the community as possible, getting feedback and hearing about people's monitoring needs.

Register for Percona Live Data Performance Conference 2017, and meet the creators of SelectStar. You can find them at selectstar.io. Use the code FeaturedTalk and receive $100 off the current registration price!

Percona Live Data Performance Conference 2017 is the premier open source event for the data performance ecosystem. It is the place to be for the open source community, as well as businesses that thrive in the MySQL, NoSQL, cloud, big data and Internet of Things (IoT) marketplaces. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Data Performance Conference will be April 24-27, 2017 at the Hyatt Regency Santa Clara & The Santa Clara Convention Center.

New MariaDB Dashboard in Percona Monitoring and Management Metrics Monitor

Latest MySQL Performance Blog posts - April 3, 2017 - 2:56pm

In honor of the upcoming MariaDB M17 conference in New York City on April 11-12, we have enhanced Percona Monitoring and Management (PMM) Metrics Monitor with a new MariaDB Dashboard and multiple new graphs!

The Percona Monitoring and Management MariaDB Dashboard builds on the efforts of the MariaDB development team to instrument the Aria Storage Engine Status Variables related to Aria Pagecache and Aria Transaction Log activity, the tracking of Index Condition Pushdown (ICP), InnoDB Online DDL when using ALTER TABLE ... ALGORITHM=INPLACE, InnoDB Deadlocks Detected, and finally InnoDB Defragmentation. This new dashboard is available in Percona Monitoring and Management release 1.1.2. Download it now using our Docker, VirtualBox or Amazon AMI installation options!

Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring MySQL, MariaDB and MongoDB performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL, MariaDB and MongoDB servers to ensure that your data works as efficiently as possible.

Aria Pagecache Reads/Writes

MariaDB 5.1 introduced the Aria storage engine, which is MariaDB's MyISAM replacement. Originally known as the Maria storage engine, it was renamed in late 2010 in order to avoid confusion with the overall MariaDB project name. The Aria Pagecache Status Variables graph plots the count of disk block reads and writes, which occur when the data isn't already in the Aria pagecache. We also plot the reads and writes from the Aria pagecache, which count the reads/writes that did not incur a disk lookup (as the data was previously fetched and available from the Aria pagecache):

Aria Pagecache Blocks

Aria reads and writes to the pagecache in order to cache data in RAM and avoid or delay activity related to disk. Overall, this translates into faster database query response times:

  • Aria_pagecache_blocks_not_flushed: The number of dirty blocks in the Aria pagecache.
  • Aria_pagecache_blocks_unused: Free blocks in the Aria pagecache.
  • Aria_pagecache_blocks_used: Blocks used in the Aria pagecache.

Aria Pagecache Total Blocks is calculated from Aria system variables using the following formula: aria_pagecache_buffer_size / aria_block_size.
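As a quick cross-check on a running instance (a minimal sketch; the 16384 figure is what the formula gives with the default 128MB pagecache and 8KB block size):

mysql -e "SHOW GLOBAL VARIABLES WHERE Variable_name IN ('aria_pagecache_buffer_size','aria_block_size');"
# with the defaults: 134217728 / 8192 = 16384 total pagecache blocks
mysql -e "SHOW GLOBAL STATUS LIKE 'Aria_pagecache_blocks%';"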

Aria Transaction Log Syncs

As Aria strives to be a fully ACID- and MVCC-compliant storage engine, an important factor is support for transactions. A transaction is the unit of work in a database that defines how to implement the four properties of Atomicity, Consistency, Isolation, and Durability (ACID). This graph tracks the rate at which Aria fsyncs the Aria Transaction Log to disk. You can think of this as the “write penalty” for running a transactional storage engine:
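The underlying counter can also be read directly (a one-line sketch; Aria_transaction_log_syncs is the status variable this graph plots):

mysql -e "SHOW GLOBAL STATUS LIKE 'Aria_transaction_log_syncs';"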

InnoDB Online DDL

MySQL 5.6 introduced the concept of an in-place DDL operation via ALTER TABLE ... ALGORITHM=INPLACE, which in some cases avoids performing a table copy and thus doesn't block INSERT/UPDATE/DELETE. MariaDB implemented three measures to track ongoing InnoDB Online DDL operations, which we plot via the following three status variables:

  • Innodb_onlineddl_pct_progress: Shows the progress of the in-place alter table. It might not be accurate, as in-place alter is highly dependent on the disk and buffer pool status
  • Innodb_onlineddl_rowlog_pct_used: Shows row log buffer usage in 5-digit integers (10000 means 100.00%)
  • Innodb_onlineddl_rowlog_rows: Number of rows stored in the row log buffer
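To watch these counters move, you can kick off an in-place ALTER and poll the status variables from another session (a minimal sketch; the test.t1 table and its column c1 are hypothetical):

# long-running in-place ALTER in the background
mysql -e "ALTER TABLE test.t1 ADD INDEX idx_c1 (c1), ALGORITHM=INPLACE, LOCK=NONE;" &
# poll the progress counters while it runs
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_onlineddl%';"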

For more information, please see the MariaDB blog post Monitoring progress and temporal memory usage of Online DDL in InnoDB.

InnoDB Defragmentation

MariaDB merged the Facebook/Kakao patch for defragmenting InnoDB tablespaces into its 10.1 release. Your MariaDB instance needs to have been started with innodb_defragment=1, and your tables need to use innodb_file_per_table=1, for this to work. We plot the following three status variables:

  • Innodb_defragment_compression_failures: Number of defragment re-compression failures
  • Innodb_defragment_failures: Number of defragment failures
  • Innodb_defragment_count: Number of defragment operations
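A minimal sketch of exercising these counters (the table name is hypothetical; assumes the server was started with the two settings above):

# with innodb_defragment=1, OPTIMIZE TABLE defragments the table in place
# instead of rebuilding it
mysql -e "OPTIMIZE TABLE test.t1;"
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_defragment%';"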

Index Condition Pushdown

Oracle introduced this in MySQL 5.6. From the manual:

Index Condition Pushdown (ICP) is an optimization for the case where MySQL retrieves rows from a table using an index. Without ICP, the storage engine traverses the index to locate rows in the base table and returns them to the MySQL server which evaluates the WHERE condition for the rows. With ICP enabled, and if parts of the WHERE condition can be evaluated by using only columns from the index, the MySQL server pushes this part of the WHERE condition down to the storage engine. The storage engine then evaluates the pushed index condition by using the index entry and only if this is satisfied is the row read from the table. ICP can reduce the number of times the storage engine must access the base table and the number of times the MySQL server must access the storage engine.

Essentially, the closer that ICP Attempts are to ICP Matches, the better!
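To check whether ICP is firing on a given query (a minimal sketch; the table and its index are hypothetical, and the attempt/match counters shown are MariaDB's Handler_icp_% status variables):

# "Using index condition" in the Extra column means ICP is in play
mysql -e "EXPLAIN SELECT * FROM test.people WHERE zipcode='95054' AND lastname LIKE '%son%';"
mysql -e "SHOW GLOBAL STATUS LIKE 'Handler_icp%';"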

InnoDB Deadlocks Detected (MariaDB 10.1 Only)

Ever since MySQL implemented a transactional storage engine, there have been deadlocks. Deadlocks are conditions where different transactions are unable to proceed because each holds a lock that the other needs. In MariaDB 10.1, there is a status variable that counts the occurrences of deadlocks since server startup. Previously, you had to instrument your application to get an accurate count of deadlocks, because otherwise you could miss occurrences if your polling interval wasn't frequent enough (even using pt-deadlock-logger). Unfortunately, this status variable doesn't appear to be present in the MariaDB 10.2.4 build I tested:
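On MariaDB 10.1, reading the counter is a one-liner (the variable in question is Innodb_deadlocks):

mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_deadlocks';"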

Again, please download Percona Monitoring and Management 1.1.2 to take advantage of the new MariaDB Dashboard and new graphs!  For installation instructions, see the Deployment Guide.

You can see the MariaDB Dashboard and new graphs in action at the PMM Demo site. If you feel the graphs need any tweaking or if I’ve missed anything, leave a note on the blog. You can also write me directly (I look forward to your comments): michael.coburn@percona.com.

To start: on the ICP graph, should we have a line that defines the percentage of successful ICP matches vs. attempts?

innobackupex decompress to different dir

Latest Forum Posts - April 3, 2017 - 2:26pm
Is it possible to decompress a qpress-compressed backup to a different directory? Also, can the backup be decompressed to a stream?

thx
Frank

Percona Monitoring and Management 1.1.2 is Now Available

Latest MySQL Performance Blog posts - April 3, 2017 - 10:12am

Percona announces the release of Percona Monitoring and Management 1.1.2 on April 3, 2017.

For installation instructions, see the Deployment Guide.

This release includes several new dashboards in Metrics Monitor, updated versions of software components used in PMM Server, and a number of small bug fixes.

Thank You to the Community!

We would like to mention some of the key contributors in this release, and thank the community for continued support of PMM:

New Dashboards and Graphs

This release includes the following new dashboards:

  • MariaDB dashboard includes three new graphs for the Aria storage engine. There will be a detailed blog post about monitoring possibilities with these new graphs:

The new MariaDB dashboard also includes three new graphs for monitoring InnoDB within MariaDB. We are planning to move them into one of the existing InnoDB dashboards in the next PMM release:

  • The InnoDB Defragmentation graph shows how OPTIMIZE TABLE impacts defragmentation on tables when running MariaDB with innodb_file_per_table=1 and innodb_defragment=1.

  • The InnoDB Online DDL graph includes metrics related to online DDL operations when using ALTER TABLE ... ALGORITHM=INPLACE in MariaDB.

  • The InnoDB Deadlocks Detected graph currently works only with MariaDB 10.1. We are planning to add support for MariaDB 10.2, Percona Server, and MySQL in the next PMM release.

  • The Index Condition Pushdown graph shows how InnoDB leverages the Index Condition Pushdown (ICP) routines. Currently this graph works only with MariaDB, but we are planning to add support for Percona Server and MySQL in the next PMM release.

Updated Software

PMM is based on several third-party open-source software components. We ensure that PMM includes the latest versions of these components in every release, making it the most secure, stable and feature-rich database monitoring platform possible. Here are some highlights of changes in the latest releases:

  • Grafana 4.2 (from 4.1.1)
    • HipChat integration
    • Templating improvements
    • Alerting enhancements
  • Consul 0.7.5 (from 0.7.3)
    • Bug fix for serious server panic
  • Prometheus 1.5.2 (from 1.5.1)
    • Prometheus binaries are built with Go1.7.5
    • Fixed two panic conditions and one series corruption bug
  • Orchestrator 2.0.3 (from 2.0.1)
    • GTID improvements
    • Logging enhancements
    • Improved timing resolution and faster discoveries
Other Changes in PMM Server
  • Migrated the PMM Server docker container to use CentOS 7 as the base operating system.
  • Changed the entry point so that supervisor is PID 1.
  • PMM-633: Set the following default values in my.cnf:
    [mysqld]
    # Default MySQL Settings
    innodb_buffer_pool_size=128M
    innodb_log_file_size=5M
    innodb_flush_log_at_trx_commit=1
    innodb_file_per_table=1
    innodb_flush_method=O_DIRECT
    # Disable Query Cache by default
    query_cache_size=0
    query_cache_type=0
  • PMM-676: Added descriptions for graphs in Disk Performance and Galera dashboards.
Changes in PMM Client
  • Fixed pmm-admin remove --all to clear all saved credentials.
  • Several fixes to mongodb_exporter including PMM-629 and PMM-642.
  • PMM-504: Added ability to change the name of a client with running services:

    $ sudo pmm-admin config --client-name new_name --force

    WARNING: Some Metrics Monitor data may be lost when renaming a running client.

About Percona Monitoring and Management

Percona Monitoring and Management is an open-source platform for managing and monitoring MySQL and MongoDB performance. Percona developed it in collaboration with experts in the field of managed database services, support and consulting.

PMM is a free and open-source solution that you can run in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

A live demo of PMM is available at pmmdemo.percona.com.

Please provide your feedback and questions on the PMM forum.

If you would like to report a bug or submit a feature request, use the PMM project in JIRA.

Support for Partitions

Latest Forum Posts - April 3, 2017 - 9:04am
I am just exploring XtraDB Cluster for a production rollout and am confused about the table partitioning support it provides. By the way, the version I am using is 5.6. I created a table with range partitioning and loaded data. When I explain the select, it does not show the partitions that will be used. The same query on a normal MySQL database shows me the partitions that will be used to fetch the data.

On checking the filesystem, I can see the partitions. But I am not able to see them in the explain plan with XtraDB. Is there any setting to tweak? Or is it actually using partitions? I was not able to find any documentation that talks about partition support on XtraDB Cluster.
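For reference, on 5.6 the partition pruning decision can be inspected with EXPLAIN PARTITIONS (a minimal sketch against a hypothetical range-partitioned table; the partitions column of the output lists what will be scanned):

mysql -e "EXPLAIN PARTITIONS SELECT * FROM orders WHERE created < '2017-01-01';"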

Cluster goes down with 1 node offline

Latest Forum Posts - April 3, 2017 - 6:48am
I was testing XtraDB Cluster for our new database setup and I'm running into trouble after testing a failover scenario.
Right now I have 2 nodes in this cluster, both working and syncing fine.
When I shut down the network on 1 of the nodes, the other node keeps reconnecting and the database server hangs:

2017-04-03T13:38:48.310033Z 0 [Note] WSREP: (720e634e, 'tcp://0.0.0.0:4567') connection to peer 00000000 with addr tcp://192.168.0.161:4567 timed out, no messages seen in PT3S

At this time I cannot log in to the node that is still online; it seems to be waiting for the other side to come back:

[root@server01 ~]# mysql -uroot -pxxxxxxx database_name
mysql: [Warning] Using a password on the command line interface can be insecure.
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A


Is this expected behavior for a 2-node cluster, perhaps?

The plan was to have 1 server in the datacenter and the other node locally at the office, but right now this seems like a bad idea when both can go offline like this.

Upgrading Percona-MySQL from 5.5 to 5.7

Latest Forum Posts - March 31, 2017 - 5:01am
Hi,

We are looking at an in-place upgrade of Percona MySQL from 5.5 to 5.7. We just want to understand the impact and risk to the existing data that may arise from the newly added features.

What data considerations need to be taken into account during the upgrade process?

Are there steps or a guide available for upgrading from 5.5 ---> 5.6 ---> 5.7?

Thanks in advance

configure SMTP

Latest Forum Posts - March 30, 2017 - 3:03pm
In addition to this, how do I configure SMTP options to set up mail notification alerts? I am using the following options and am unable to get notifications. Please let me know what I need to modify in the SMTP configuration below.

#################################### SMTP / Emailing ##########################
[smtp]
;enabled = true
;host = localhost:25
;user = ivishnu7@gmail.com
;password =
;cert_file =
;key_file =
;skip_verify = false
;from_address = admin@grafana.localhost
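For reference, this looks like Grafana's grafana.ini [smtp] block, where lines beginning with ';' are commented out, so nothing above is actually active. A minimal sketch of an enabled configuration (the host and from_address are placeholders):

[smtp]
enabled = true
host = smtp.example.com:587
user = ivishnu7@gmail.com
password = your_smtp_password
skip_verify = false
from_address = alerts@example.com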

Performance Evaluation of SST Data Transfer: With Encryption (Part 2)

Latest MySQL Performance Blog posts - March 30, 2017 - 2:08pm

In this blog post, we’ll look at the performance of SST data transfer using encryption.

In my previous post, we reviewed SST data transfer in an unsecured environment. Now let’s take a closer look at a setup with encrypted network connections between the donor and joiner nodes.

The base setup is the same as the previous time:

  • Database server: Percona XtraDB Cluster 5.7 on donor node
  • Database: sysbench database – 100 tables, 4M rows each (total ~122GB)
  • Network: donor/joiner hosts are connected with dedicated 10Gbit LAN
  • Hardware: donor/joiner hosts – boxes with 28 Cores+HT/RAM 256GB/Samsung SSD 850/Ubuntu 16.04

The setup details for the encryption aspects in our testing:

  • Cryptography libraries: openssl-1.0.2, openssl-1.1.0, libgcrypt-1.6.5 (for xbstream encryption)
  • CPU hardware acceleration for AES – AES-NI: enabled/disabled
  • Cipher suites: aes (default), aes128, aes256, chacha20 (openssl-1.1.0)

Several notes regarding the above aspects:

  • Cryptography libraries. Almost every Linux distribution currently ships openssl-1.0.2, the previous stable version of the OpenSSL library. The latest stable version (1.1.0) has various performance/scalability fixes as well as support for new ciphers that may notably improve throughput. However, it's problematic to upgrade from 1.0.2 to 1.1.0, or even just to find openssl-1.1.0 packages for existing distributions, because replacing OpenSSL triggers an update/upgrade of a significant number of packages. So in order to use openssl-1.1.0, most likely you will need to build it from source. The same applies to socat – it will require some effort to build socat with openssl-1.1.0.
  • AES-NI. The Advanced Encryption Standard Instruction Set (AES-NI) is an extension to the x86 instruction set from Intel and AMD. The purpose of AES-NI is to improve the performance of encryption and decryption operations using the Advanced Encryption Standard (AES), such as the AES128/AES256 ciphers. If your CPU supports AES-NI, there should be an option in the BIOS that allows you to enable/disable the feature. In Linux, you can check /proc/cpuinfo for the existence of an "aes" flag. If it's present, then AES-NI is available and exposed to the OS. There is a way to check what acceleration ratio you can expect from it:
    # AES_NI disabled with OPENSSL_ia32cap
    OPENSSL_ia32cap="~0x200000200000000" openssl speed -elapsed -evp aes-128-gcm
    ...
    The 'numbers' are in 1000s of bytes per second processed.
    type             16 bytes    64 bytes    256 bytes   1024 bytes  8192 bytes
    aes-128-gcm      57535.13k   65924.18k   164094.81k  175759.36k  178757.63k

    # AES_NI enabled
    openssl speed -elapsed -evp aes-128-gcm
    The 'numbers' are in 1000s of bytes per second processed.
    type             16 bytes    64 bytes    256 bytes   1024 bytes  8192 bytes
    aes-128-gcm      254276.67k  620945.00k  826301.78k  906044.07k  923740.84k
    Our interest is the very last column: 178MB/s (without AES-NI) vs. 923MB/s (with AES-NI).
  • Ciphers. In our testing of network encryption with socat+openssl 1.0.2/1.1.0, we used the following cipher suites:
    DEFAULT – if you don't specify a cipher/cipher string for the OpenSSL connection, this suite is used
    AES128 – suite with aes128 ciphers only
    AES256 – suite with aes256 ciphers only
    Additionally, for openssl-1.1.0, there is an extra cipher suite:
    CHACHA20 – cipher suites using the ChaCha20 algorithm
    In the case of xtrabackup, where internal encryption is based on libgcrypt, we use the AES128/AES256 ciphers from this library.
  • SST methods. Streaming database files from the donor to the joiner with the rsync protocol over an OpenSSL-encrypted connection:
    (donor) rsync | socat+ssl -> socat+ssl | rsync (daemon mode) (joiner)
    The current approach of wsrep_sst_rsync.sh doesn't allow you to use the rsync SST method with SSL. However, there is a project that tries to address the lack of SSL support for the rsync method. The idea is to create a secure connection with socat, and then use that connection as a tunnel for rsync between the joiner and donor hosts. In my testing, I used a similar approach.

    Also note that in the chart below there are results for two variants of rsync: "rsync" (the current approach) and "rsync_improved" (the improved one). I explained the difference between them in my previous post.

  • Backup data on the donor side and stream it to the joiner in xbstream format over an OpenSSL encrypted connection

    (donor) xtrabackup | socat+ssl -> socat+ssl | xbstream (joiner)

    In my testing of streaming over encrypted connections, I used the --parallel=4 option for xtrabackup. In my previous post, I showed that this is an important factor for getting the best time. There is also a way to pass the name of the cipher that socat will use for the OpenSSL connection to the wsrep_sst_xtrabackup-v2.sh script via the sockopt option. For instance:

    [sst]
    inno-backup-opts="--parallel=4"
    sockopt=",cipher=AES128"

  • Backup data on the donor side, encrypt it internally (with libgcrypt), stream it to the joiner in xbstream format, and afterwards decrypt the files on the joiner (see the sketch after this list):

    (donor) xtrabackup | socat -> socat | xbstream ; xtrabackup --decrypt (joiner)

    The xtrabackup tool has a feature to encrypt data while performing a backup. The encryption is based on the libgcrypt library, and it's possible to use the AES128 or AES256 ciphers. For encryption, it's necessary to generate a key and then provide it to xtrabackup, which performs the encryption on the fly. There is also a way to specify the number of threads that encrypt data, along with the chunk size, to tune the encryption process.

    The current version of xtrabackup supports an efficient way to read, compress and encrypt data in parallel, and then write/stream it. On the receiving side, however, we can't decompress/decrypt the stream on the fly. The stream first has to be received and written to disk with the xbstream tool, and only after that can you use xtrabackup with the --decrypt/--decompress modes to unpack the data. The inability to process data on the fly, and having to save the stream to disk for later processing, has a notable impact on the stream time from the donor to the joiner. We have a plan to fix that issue, so that encryption+compression+streaming of data with xtrabackup happens without the need to write the stream to disk on the receiver side.

    For my testing, in the case of xtrabackup with internal encryption, I didn’t use SSL encryption for socat.
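A rough sketch of the internal-encryption flow above, assuming Percona XtraBackup 2.4 options and hypothetical hosts, ports, and paths:

# donor: generate a key (the key file must not contain a trailing newline),
# then stream an internally encrypted backup; socat carries plain TCP here (no SSL)
echo -n "$(openssl rand -base64 24)" > /tmp/keyfile
xtrabackup --backup --parallel=4 --stream=xbstream --target-dir=./ \
  --encrypt=AES256 --encrypt-key-file=/tmp/keyfile \
  --encrypt-threads=4 --encrypt-chunk-size=65536 | socat - TCP:joiner:4444

# joiner: the stream must land on disk first; decryption is a separate pass
socat TCP-LISTEN:4444,reuseaddr - | xbstream -x -C /data/joiner
xtrabackup --decrypt=AES256 --encrypt-key-file=/tmp/keyfile --target-dir=/data/joiner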

Results:

Observations:
  • Transferring data with rsync is very inefficient, and the improved version is 2-2.5 times faster. Also, you may note that in the "no-aes-ni" case, the rsync_improved method has the best time for the default/aes128/aes256 ciphers. The reason is that we perform both the data transfer in parallel (we spawn an rsync process for each file) and the encryption/decryption in parallel (socat forks extra processes for each stream). This approach allows us to compensate for the absence of hardware acceleration by using several CPU cores. In all other cases, we only use one CPU for streaming the data and encryption/decryption.
  • xtrabackup (with hardware-optimized crc32) shows the best time in all cases, except for the default/aes128/aes256 ciphers in "no-aes-ni" mode (where rsync_improved showed the best time). However, I would like to remind you that SST with rsync is a blocking operation: during the data transfer, the donor node becomes READ-ONLY. xtrabackup, on the other hand, uses backup locks and allows any operations on the donor node during SST.
  • On boxes without hardware acceleration ("no-aes-ni" mode), the chacha20 cipher allows you to perform the data transfer 2-3 times faster. It's a very good replacement for the "aes" ciphers on such boxes. However, the problem with that cipher is that it is available only in openssl-1.1.0. To use it, you will need a custom build of OpenSSL and socat for many distros.
  • Regarding xtrabackup with internal encryption (xtrabackup_enc): reading/encrypting and streaming the data is quite fast, especially with the latest libgcrypt library (1.7.x). The problem is decryption. As I explained above, right now we need to receive the stream and save the encrypted data to storage first, and then perform the extra step of reading/decrypting and saving the data back. That extra part consumes 2/3 of the total time. Improving the xbstream tool to perform stream decryption/decompression on the fly would yield very good results.
Testing Details

For the purposes of this testing, I've created a script, sst-bench.sh, that covers all the methods used in this post. You can use it to measure all of the above SST methods in your environment. In order to run the script, you have to adjust several environment variables at the beginning of the script: the joiner IP, the datadir locations on the joiner and donor hosts, etc. After that, put the script on the "donor" and "joiner" hosts and run it as follows:

#joiner_host> sst_bench.sh --mode=joiner --sst-mode=<tar|xbackup|rsync> --cipher=<DEFAULT|AES128|AES256|CHACHA20> --ssl=<0|1> --aesni=<0|1>
#donor_host> sst_bench.sh --mode=donor --sst-mode=<tar|xbackup|rsync|rsync_improved> --cipher=<DEFAULT|AES128|AES256|CHACHA20> --ssl=<0|1> --aesni=<0|1>

Keepalive VIP vs HAproxy

Latest Forum Posts - March 30, 2017 - 5:27am
Hi,

I am testing Percona XtraDB Cluster in my lab. I have two web servers and 3 XtraDB Cluster nodes.

I have installed HAProxy on the two web nodes and they are successfully connecting to the DB nodes. I am also considering using Keepalived with a VIP. https://www.percona.com/blog/2013/10...tradb-cluster/
Does anyone have experience using this? Are there any pros/cons between using HAProxy and Keepalived?