
Backups and Disaster Recovery

Latest MySQL Performance Blog posts - July 18, 2017 - 10:06am

In this post, we’ll look at strategies for backups and disaster recovery.

Note: I am giving a talk on Backups and Disaster Recovery Best Practices on July 27th.

Register Now

When discussing disaster recovery, it’s important to take your business’ continuity plan into consideration. Backup and recovery processes are a critical part of any application infrastructure.

A well-tested backup and recovery system can be the difference between a minor outage and the end of your business.

You will want to take three things into consideration when planning your disaster recovery strategy: recovery time objective, recovery point objective and risk mitigation.

Recovery time objective (RTO) is how long it takes to restore your backups. Recovery point objective (RPO) is what point in time you want to recover (in other words, how much data you can afford to lose after recovery). Finally, you need to understand what risks you are trying to mitigate. Risks to your data include (but are not limited to) bad actors, data corruption, user error, host failure and data center failure.

Recommended Backup Strategies

We recommend that you use both physical (Percona XtraBackup, RDS/LVM Snapshots, MySQL Enterprise Backup) and logical backups (mysqldump, mydumper, mysqlpump). Logical backups protect against the loss of single data points, while physical backups protect against total data loss or host failure.

The best practice is running Percona XtraBackup nightly, followed by mysqldump (or in 5.7+, mysqlpump). Percona XtraBackup enables you to quickly restore a server, and mysqldump enables you to quickly restore data points. These address recovery time objectives.
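
As a rough sketch of what that nightly combination might look like (paths are placeholders, credentials are assumed to come from ~/.my.cnf, and retention is up to you):

# Nightly physical backup with Percona XtraBackup (2.4 syntax)
xtrabackup --backup --target-dir=/backups/xtrabackup/$(date +%F)
# Nightly logical backup with mysqldump; --single-transaction gives a consistent
# InnoDB snapshot without locking, --master-data=2 records the binlog position
mysqldump --single-transaction --master-data=2 --routines --triggers --all-databases | gzip > /backups/dumps/$(date +%F).sql.gz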

For point-in-time recovery, it is recommended that you download binlogs on a regular basis (once an hour, for example).

Another option is binlog streaming. You can find more information on binlog streaming in our blog: Backing up binary log files with mysqlbinlog.
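
A minimal sketch of that streaming approach, assuming a backup host that can reach the primary (the host, user and starting binlog file name are placeholders):

# Stream binary logs continuously to the backup host; --stop-never keeps the
# connection open and writes each binlog as a raw file in the current directory
mysqlbinlog --read-from-remote-server --host=db-primary --user=repl --password \
    --raw --stop-never mysql-bin.000001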

There is also a whitepaper that is the basis of my webinar here: MySQL Backup and Recovery Best Practices.

Delayed Slave

One way to save on operational overhead is to create a 24-hour delayed slave. This takes the place of the logical backup (mysqldump) as well as the binlog streaming. You want to ensure that you stop the delayed slave immediately following any issues. This ensures that the data does not get corrupted on the backup as well.

A delayed slave is created in 5.6 and above with:

CHANGE MASTER TO MASTER_DELAY = N;

After a disaster, you would issue:

STOP SLAVE;

Then, in order to get a point-in-time, you can use:

START SLAVE UNTIL MASTER_LOG_FILE = 'log_name', MASTER_LOG_POS = log_pos;
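
Putting the pieces together, a hedged end-to-end sketch might look like the following (86400 seconds equals the 24-hour delay; the binlog file name and position are made up, and in practice you would locate the position just before the offending statement with mysqlbinlog):

# On the delayed replica: configure a 24-hour delay (replication must be stopped to change it)
mysql -e "STOP SLAVE; CHANGE MASTER TO MASTER_DELAY = 86400; START SLAVE;"
# After a disaster: stop the delayed replica immediately so the bad event is never applied
mysql -e "STOP SLAVE;"
# Inspect the master's binary log to find the position just before the bad statement
mysqlbinlog --base64-output=decode-rows --verbose mysql-bin.000042 | less
# Roll the replica forward only up to that point
mysql -e "START SLAVE UNTIL MASTER_LOG_FILE = 'mysql-bin.000042', MASTER_LOG_POS = 107;"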

Restore

It is a good idea to test your backups at least once a quarter. Backups do not exist unless you know you can restore them. There are some recent high-profile cases where developers dropped tables or schemas, or data was corrupted in production, and in one case five different backup types turned out to be unusable for restoring the data.

The best case scenario is an automated restore test that runs after your backup, and gives you information on how long it takes to restore (RTO) and how much data you can restore (RPO).
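
A minimal sketch of such an automated restore test, assuming an XtraBackup copy and a spare test instance (the paths, port and sanity-check table are placeholders):

#!/bin/bash
# Rough restore-test sketch: prepare the latest backup, start a throwaway
# instance on it, and record how long the whole restore took (your RTO).
set -e
BACKUP_DIR=/backups/xtrabackup/latest   # in practice, prepare a copy so the original backup stays pristine
TEST_DATADIR=/var/lib/mysql-restore-test
start=$(date +%s)
xtrabackup --prepare --target-dir="$BACKUP_DIR"
rm -rf "$TEST_DATADIR" && mkdir -p "$TEST_DATADIR"
xtrabackup --copy-back --target-dir="$BACKUP_DIR" --datadir="$TEST_DATADIR"
chown -R mysql:mysql "$TEST_DATADIR"
mysqld_safe --datadir="$TEST_DATADIR" --port=3307 --socket=/tmp/restore-test.sock &
sleep 30   # crude wait for the test instance to come up
# The freshest row tells you how much data you can actually recover (your RPO)
mysql -S /tmp/restore-test.sock -e "SELECT MAX(created_at) FROM app.orders;"
echo "Restore took $(( $(date +%s) - start )) seconds"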

For more details on backups and disaster recovery, come to my webinar.

Register Now

Inconsistency in libgcrypt version for generic binary vs. APT package on Ubuntu 16.04

Latest Forum Posts - July 18, 2017 - 6:26am
The generic Linux binary version of XtraBackup 2.4.7 requires libgcrypt.so.11 but the version that is installed from Percona's APT repository using apt-get links to libgcrypt.so.20. The problem for those of us who use the generic binary version is that libgcrypt.so.11 is no longer available as a standard package from the Ubuntu repository, and we have to jump through hoops to obtain it.
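
For reference, a quick way to confirm which libgcrypt a given xtrabackup build links against (the paths below are examples; adjust them to where each build is installed):

# Generic tarball binary: shows the libgcrypt.so.11 requirement
ldd /opt/percona-xtrabackup-2.4.7/bin/xtrabackup | grep libgcrypt
# APT-packaged binary: links against libgcrypt.so.20 instead
ldd /usr/bin/xtrabackup | grep libgcrypt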

Please can the build procedure for the generic Linux binary version of XtraBackup be standardised to use the same shared object libraries as the APT packaged release. Or if that's impossible for compatibility reasons, please publish generic Linux binaries for both versions of libgcrypt.

Upcoming Webinar Wednesday, July 19, 2017: Learning MySQL 5.7

Latest MySQL Performance Blog posts - July 18, 2017 - 6:13am

Join Percona’s Technical Services Manager, Jervin Real, as he presents Learning MySQL 5.7 on Wednesday, July 19, 2017, at 10:00 am PDT / 1:00 pm EDT (UTC-7).

Register Now

MySQL 5.7 has a lot of new features. If you’ve dabbled with an older version of MySQL, it is worth learning what new features exist and how to use them. This webinar teaches you about new features such as multi-source replication, global transaction IDs (GTIDs), security improvements and more.

Register for the webinar here.

Jervin Real, Technical Services Manager. Jervin is a member of the Technical Services team at Percona, where he partners with customers to build reliable and highly performant MySQL infrastructures. He also does other fun stuff like watching cat videos on the internet, reading bugs for novels and playing around with servers for numbers. Jervin joined Percona in April of 2010.

getting error when adding a new node into the cluster

Latest Forum Posts - July 17, 2017 - 11:31pm
2017-07-18 09:15:59 23988 [Note] WSREP: Shifting PRIMARY -> JOINER (TO: 176)
2017-07-18 09:15:59 23988 [Note] WSREP: Requesting state transfer: success, donor: 0
xb_stream_read_chunk(): wrong chunk magic at offset 0x0.

some graphs show "No data points"

Latest Forum Posts - July 17, 2017 - 10:19pm
Hi,

I have a question.

Some graphs show "No data points".

pmm server 1.2.0
pmm client 1.2.0

Monitored DB servers:
mysql 5.6.35 community
mysql 5.6.35 enterprise

Dashboard: MySQL Performance Schema
Graphs with no data points:
Index Access Operations (Load)
Table Access Operations (Load)

Dashboard: MySQL InnoDB Metrics
Graphs with no data points:
InnoDB Checkpoint Age
InnoDB Change Buffer

How can I solve this problem?

Percona Live Europe 2017 Call for Papers Deadline Extended to July 24, 2017

Latest MySQL Performance Blog posts - July 17, 2017 - 8:21pm

We are extending the Percona Live Open Source Database Conference Europe 2017 call for papers deadline to Monday, July 24, 2017. Get your submissions in now!

Between our Conference Committee working hard to review all the outstanding talk ideas, and the many community requests for more time, we didn’t want to shortchange any of our applicants. If you procrastinated too long, didn’t complete your talk submission or just plain forgot to submit, this is your reprieve! You have one extra week to pull together a proposal for a talk on open source databases at Percona Live Europe 2017.

The theme of Percona Live Europe 2017 is “Championing Open Source Databases.” There are sessions on MySQL, MariaDB, MongoDB and other open source database technologies, including time series databases, PostgreSQL and RocksDB. Are you:

  • Working with MongoDB as a developer?
  • Creating a new MySQL-variant time series database?
  • Deploying MariaDB in a novel way?
  • Using open source database technology to solve a business issue?

Share your open source database experiences with peers and professionals in the open source community! We invite you to submit your speaking proposal for breakout, tutorial or lightning talk sessions:

  • Breakout Session. Broadly cover a technology area using specific examples. Sessions should be either 25 minutes or 50 minutes in length (including Q&A).
  • Tutorial Session. Present a technical session that aims for a level between a training class and a conference breakout session. Encourage attendees to bring and use laptops for working on detailed and hands-on presentations. Tutorials will be three or six hours in length (including Q&A).
  • Lightning Talk. Give a five-minute presentation focusing on one key point that interests the open source community: technical, lighthearted or entertaining talks on new ideas, a successful project, a cautionary story, a quick tip or demonstration.

Submit your talk ideas now.

PLEASE NOTE: We have a new call for papers system that requires everyone to register for a new account. You can save proposals as drafts and edit them later. Once a proposal is submitted, it is final. You can NOT change a proposal once submitted.

Register for Percona Live Europe 2017 now! Early Bird registration lasts until August 8.

Last year’s Percona Live Europe sold out, and we’re looking to do the same at this year’s conference. Don’t miss your chance to get your ticket at its most affordable price. Click here to register.

Percona Live 2017 sponsorship opportunities are available now

Percona Live Europe 2017 is just around the corner. Have you secured your sponsorship yet? Last year’s event sold out! Booth selection for our limited sponsorship opportunities is on a first-come, first-served basis. Click here to find out how to sponsor.

Don’t miss out on a special room rate at the Percona Live Europe 2017 venue

This year, Percona Live Europe is being held at the Radisson Blu Royal Hotel, Dublin. You can get a special room rate when you attend the conference. But hurry, rooms are going fast! To get the special room rate:

  1. Visit https://www.radissonblu.com/en/royalhotel-dublin.
  2. Click BOOK NOW at the top right.
  3. Enter your preferred check-in and check-out dates, and how many rooms.
  4. From the drop-down “Select Rate Type,” choose Promotional Code.
  5. Enter the code PERCON.

The group rate only applies if used within the Percona Live Europe group block dates (September 25-27, 2017). The deadline for booking with this rate is July 24, 2017.

Percona Server for MongoDB 3.2.14-3.4 is Now Available

Latest MySQL Performance Blog posts - July 17, 2017 - 12:47pm

Percona announces the release of Percona Server for MongoDB 3.2.14-3.4 on July 17, 2017. Download the latest version from the Percona web site or the Percona Software Repositories.

Percona Server for MongoDB is an enhanced, open-source, fully compatible, highly scalable, zero-maintenance downtime database that supports the MongoDB v3.2 protocol and drivers. It extends MongoDB with MongoRocks, Percona Memory Engine, and PerconaFT storage engine, as well as enterprise-grade features like External Authentication, Audit Logging, Profiling Rate Limiting, and Hot Backup at no extra cost. Percona Server for MongoDB requires no changes to MongoDB applications or code.

NOTE: This release deprecates the PerconaFT storage engine. It will not be available in future releases.

This release is based on MongoDB 3.2.14 and includes the following additional changes:

New Features

Bugs Fixed

  • #PSMDB-67: Fixed mongod service status messages.

Percona Server for MongoDB 3.2.14-3.4 release notes are available in the official documentation.

qpress rpm link

Latest Forum Posts - July 17, 2017 - 7:37am
Hi,
Can someone please share a link for downloading the qpress RPM for RHEL?

Join and order performance across two tables after switching from MySql

Latest Forum Posts - July 17, 2017 - 5:29am
Hi,

Our server company advised we switch to Percona when setting up our new DB servers. So we're currently on Percona Server (GPL), Release 82.0, Revision 58e846a, and there's one behavior I'm trying to wrap my head around that we definitely weren't experiencing before with MySQL, so I thought I'd reach out:

This is a query we perform fairly regularly to pull an article from our db

SELECT * FROM table_a a, table_b b
WHERE a.id = b.id AND a.status_field = 'open' AND b.filter_field = 'no_filter' AND b.view_field = 'article'
ORDER BY a.unixtimestamp DESC LIMIT 1

This used to complete very quickly but under Percona, the combination of the where conditions from table b and ordering from table a makes the whole query take ~3s. I don't fully understand this behaviour.

If I alter it to:

SELECT * FROM table_a a, table_b b
WHERE a.id = b.id AND a.status_field = 'open' AND b.filter_field = 'no_filter' AND b.view_field = 'article'
ORDER BY b.unixtimestamp DESC LIMIT 1

Then it completes very quickly (< 0.05s)

Is this sort of an expected behavior with Percona?

I just wanted to know before changing any db structure to compensate.

Thanks.

Unable to connect to PMM server by address

Latest Forum Posts - July 17, 2017 - 2:34am
Hi,

While I am starting the pmm-admin services, I am receiving the error below. What does this mean? Did I miss something?

sudo pmm-admin start --all
OK, started 2 services.
Unable to connect to PMM server by address: XX.X.XX.XXX

Even though the server is reachable it does not look to be PMM server.
Check if the configured address is correct. %!s(<nil>)

Percona Monitoring and Management 1.2.0 is Now Available

Latest MySQL Performance Blog posts - July 14, 2017 - 10:51am

Percona announces the release of Percona Monitoring and Management 1.2.0 on July 14, 2017.

For installation instructions, see the Deployment Guide.

Changes in PMM Server

PMM Server 1.2.0 introduced the following changes:

Updated Components

New Features
  • PMM-737: New graphs in System Overview dashboard:
      • Memory Advanced Details
      • Saturation Metrics

  • PMM-1090: Added ESXi support for PMM Server virtual appliance.
UI Fixes
  • PMM-707: Fixed QPS metric in MySQL Overview dashboard to always show queries per second regardless of the selected interval.
  • PMM-708: Fixed tooltips for graphs that displayed incorrectly.
  • PMM-739, PMM-797: Fixed PMM Server update feature on the landing page.
  • PMM-823: Fixed arrow padding for collapsible blocks in QAN.
  • PMM-887: Disabled the Add button when no table is specified for showing query info in QAN.
  • PMM-888: Disabled the Apply button in QAN settings when nothing is changed.
  • PMM-889: Fixed the switch between UTC and local time zone in the QAN time range selector.
  • PMM-909: Added message No query example when no example for a query is available in QAN.
  • PMM-933: Fixed empty tooltips for Per Query Stats column in the query details section of QAN.
  • PMM-937: Removed the percentage of total query time in query details for the TOTAL entry in QAN (because it is 100% by definition).
  • PMM-951: Fixed the InnoDB Page Splits graph formula in the MySQL InnoDB Metrics Advanced dashboard.
  • PMM-953: Enabled stacking for graphs in MySQL Performance Schema dashboard.
  • PMM-954: Renamed Top Users by Connections graph in MySQL User Statistics dashboard to Top Users by Connections Created and added the Connections/sec label to the Y-axis.
  • PMM-957: Refined titles for Client Connections and Client Questions graphs in ProxySQL Overview dashboard to mention that they show metrics for all host groups (not only the selected one).
  • PMM-961: Fixed the formula for Client Connections graph in ProxySQL Overview dashboard.
  • PMM-964: Fixed the gaps for high zoom levels in MySQL Connections graph on the MySQL Overview dashboard.
  • PMM-976: Fixed Orchestrator handling by supervisorctl.
  • PMM-1129: Updated the MySQL Replication dashboard to support new connection_name label introduced in mysqld_exporter for multi-source replication monitoring.
  • PMM-1054: Fixed typo in the tooltip for the Settings button in QAN.
  • PMM-1055: Fixed link to Query Analytics from Metrics Monitor when running PMM Server as a virtual appliance.
  • PMM-1086: Removed HTML code that showed up in the QAN time range selector.
Bug Fixes
  • PMM-547: Added warning page to Query Analytics app when there are no PMM Clients running the QAN service.
  • PMM-799: Fixed Orchestrator to show correct version.
  • PMM-1031: Fixed initialization of Query Profile section in QAN that broke after upgrading Angular.
  • PMM-1087: Fixed QAN package building.
Other Improvements
  • PMM-348: Added daily log rotation for nginx.
  • PMM-968: Added Prometheus build information.
  • PMM-969: Updated the Prometheus memory usage settings to leverage new flag. For more information about setting memory consumption by PMM, see FAQ.
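
For PMM-969, a minimal sketch of what that might look like for a Docker-based PMM Server, assuming the METRICS_MEMORY environment variable described in the FAQ (the value is in kilobytes; the image tag and container names are examples):

# Re-create the PMM Server container with Prometheus allowed roughly 4 GB of memory
docker run -d -p 80:80 --volumes-from pmm-data --name pmm-server \
  -e METRICS_MEMORY=4194304 --restart always percona/pmm-server:1.2.0
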
Changes in PMM Client

PMM Client 1.2.0 introduced the following changes:

New Features
  • PMM-1114: Added PMM Client packages for Debian 9 (“stretch”).
Bug Fixes
  • PMM-481, PMM-1132: Fixed fingerprinting for queries with multi-line comments.
  • PMM-623: Fixed mongodb_exporter to display correct version.
  • PMM-927: Fixed bug with empty metrics for MongoDB query analytics.
  • PMM-1126: Fixed promu build for node_exporter.
  • PMM-1201: Fixed node_exporter version.
Other Improvements
  • PMM-783: Directed mongodb_exporter log messages to stderr and excluded many generic messages from the default INFO logging level.
  • PMM-756: Merged upstream node_exporter version 0.14.0.
    PMM deprecated several collectors in this release:

    • gmond – Out of scope.
    • megacli – Requires forking, to be moved to textfile collection.
    • ntp – Out of scope.

    It also introduced the following breaking change:

    • Collector errors are now a separate metric: node_scrape_collector_success, not a label on node_exporter_scrape_duration_seconds
  • PMM-1011: Merged upstream mysqld_exporter version 0.10.0.
    This release introduced the following breaking change:

    • mysql_slave_... metrics now include an additional connection_name label to support MariaDB multi-source replication.
About Percona Monitoring and Management

Percona Monitoring and Management (PMM) is an open-source platform for managing and monitoring MySQL and MongoDB performance. Percona developed it in collaboration with experts in the field of managed database services, support and consulting.

Percona Monitoring and Management is a free and open-source solution that you can run in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

A live demo of PMM is available at pmmdemo.percona.com.

Please provide your feedback and questions on the PMM forum.

If you would like to report a bug or submit a feature request, use the PMM project in JIRA.

A Little Trick Upgrading to MySQL 5.7

Latest MySQL Performance Blog posts - July 14, 2017 - 10:20am

In this blog post, I’ll look at a trick we use at Percona when upgrading to MySQL 5.7.

I’ll be covering this subject (and others) in my webinar Learning MySQL 5.7 on Wednesday, July 19, 2017.

We’ve been doing upgrades for quite a while here at Percona, and we try to optimize, standardize and improve this process to save time. When upgrading to MySQL 5.7, more often than not you need to run REPAIR or ALTER via mysql_upgrade on a number of MySQL tables. Sometimes a few hundred, sometimes hundreds of thousands.

One way to cut some time from testing or executing mysql_upgrade is to combine it with mysqlcheck. This identifies tables that need to be rebuilt or repaired. The first step is to capture the output of this process:

revin@acme:~$ mysqlcheck --check-upgrade --all-databases > mysql-check.log

This provides a lengthy output of what needs to be done to successfully upgrade our tables. On my test data, I get error reports like the ones below. I’ll need to take the specified action against them:

ads.agency error : Table upgrade required. Please do "REPAIR TABLE `agency`" or dump/reload to fix it!
store.categories error : Table rebuild required. Please do "ALTER TABLE `categories` FORCE" or dump/reload to fix it!

Before we run through this upgrade, let’s get an idea of how long it would take for a regular mysql_upgrade to complete on this dataset:

revin@acme:~$ time mysql_upgrade
Enter password:
Checking if update is needed.
Checking server version.
Running queries to upgrade MySQL server.
Checking system database.
mysql.columns_priv OK
mysql.db OK
...
mysql.user OK
Upgrading the sys schema.
Checking databases.
ads.account_preference_assoc OK
...
Repairing tables
...
ads.agency
Note : TIME/TIMESTAMP/DATETIME columns of old format have been upgraded to the new format.
status : OK
...
`store`.`categories`
Running : ALTER TABLE `store`.`categories` FORCE
status : OK
...
Upgrade process completed successfully.
Checking if update is needed.

real 25m57.482s
user 0m0.024s
sys 0m0.072s

On a cold server, my baseline above took about 25 minutes.

The second step on our time-saving process is to identify the tables that need some action (in this case, REPAIR and ALTER … FORCE). Generate the SQL statements to run them and put them into a single SQL file:

revin@acme:~$ for t in $(cat mysql-check.log |grep -B1 REPAIR | egrep -v 'REPAIR|--'); do echo "mysql -e 'REPAIR TABLE $t;'" >> upgrade.sql; done
revin@acme:~$ for t in $(cat mysql-check.log |grep -B1 ALTER | egrep -v 'ALTER|--'); do echo "mysql -e 'ALTER TABLE $t FORCE;'" >> upgrade.sql; done

My upgrade.sql file will have something like this:

mysql -e 'ALTER TABLE store.categories FORCE;'
mysql -e 'REPAIR TABLE ads.agency;'

Now we should be ready to run these commands in parallel as the third step in the process:

revin@acme:~$ time parallel -j 4 -- < upgrade.sql
...
real 17m31.448s
user 0m1.388s
sys 0m0.616s

Getting some parallelization is not bad, and the process improved by about 38%. If we are talking about multi-terabyte data sets, then it is already a big gain.

On the other hand, my dataset has a few tables that are bigger than the rest. Since mysqlcheck processes them in a specific order, one of the threads was processing most of them instead of spreading them out evenly to each thread by size. To fix this, we need to have an idea of the sizes of each table we will be processing. We can use a query from the INFORMATION_SCHEMA.TABLES for this purpose:

revin@acme:~$ for t in $(cat mysql-check.log |grep -B1 ALTER | egrep -v 'ALTER|--'); do d=$(echo $t|cut -d'.' -f1); tbl=$(echo $t|cut -d'.' -f2); s=$(mysql -BNe "select sum(index_length+data_length) from information_schema.tables where table_schema='$d' and table_name='$tbl';"); echo "$s |mysql -e 'ALTER TABLE $t FORCE;'" >> table-sizes.sql; done
revin@acme:~$ for t in $(cat mysql-check.log |grep -B1 REPAIR | egrep -v 'REPAIR|--'); do d=$(echo $t|cut -d'.' -f1); tbl=$(echo $t|cut -d'.' -f2); s=$(mysql -BNe "select sum(index_length+data_length) from information_schema.tables where table_schema='$d' and table_name='$tbl';"); echo "$s |mysql -e 'REPAIR TABLE $t;'" >> table-sizes.sql; done

Now my table-sizes.sql file will have contents like below, which I can sort and pass to the parallel command again and cut even more time!

32768 |mysql -e 'REPAIR TABLE ads.agency;'
81920 |mysql -e 'ALTER TABLE store.categories FORCE;'

revin@acme:~$ cat table-sizes.sql |sort -rn|cut -d'|' -f2 > upgrade.sql
revin@acme:~$ time parallel -j 4 -- < upgrade.sql
...
real 8m1.116s
user 0m1.260s
sys 0m0.624s

This go-around, my total execution time is 8 minutes – a good 65% improvement. To wrap it up, we will need to run mysql_upgrade one last time so that the system tables are also upgraded and the tables are checked again, and then restart the MySQL server as instructed by the manual:

revin@acme:~$ time mysql_upgrade --force

The whole process should be easy to automate and script, depending on your preference. Lastly: YMMV. If you have one table that is more than half the size of your total data set, there might not be big gains.

If you want to learn more about upgrading to MySQL 5.7, come to my webinar on Wednesday, July 19: Learning MySQL 5.7. This process is only one of the phases in a multi-step upgrade process when moving to 5.7. I will discuss them in more detail next week. Register now from the link below, and I’ll talk to you soon!

Register Now

Setting Up Percona PAM with Active Directory for External Authentication

Latest MySQL Performance Blog posts - July 13, 2017 - 12:17pm

In this blog post, we’ll look at how to set up Percona PAM with Active Directory for external authentication.

In my previous article on Percona PAM, I demonstrated how to use Samba as a domain, and how easy it is to create domain users and groups via the samba-tool. Then we configured nss-pam-ldapd and nscd to enumerate user and group information via LDAP calls, and authenticate users from this source.

This time around, I will demonstrate two other ways of using Active Directory for external authentication by joining the domain via SSSD or Winbind. System Security Services Daemon (SSSD) allows you to configure access to several authentication hosts such as LDAP, Kerberos, Samba and Active Directory and have your system use this service for all types of lookups. Winbind, on the other hand, pulls data from Samba or Active Directory only. If you’re mulling over using SSSD or Winbind, take a look at this article on what SSSD or Winbind support.

For both methods, we’ll use realmd. That makes it easy to join a domain and enumerate users from it.

My testbed environment consists of two machines:

Samba PDC
OS: CentOS 7
IP Address: 172.16.0.10
Hostname: samba-10.example.com
Domain name: EXAMPLE.COM
DNS: 8.8.8.8(Google DNS), 8.8.4.4(Google DNS), 172.16.0.10(Samba)
Firewall: none

Note: Please follow the steps in the last article for setting up the Samba PDC environment.

Percona Server 5.7 with LDAP authentication via SSS or WinBind
OS: CentOS 7
IP Address: 172.16.0.21
Hostname: ps-ldap-21.example.com
DNS: 172.16.0.10(Samba PDC)

Installing realmd and Its Dependencies
  1. First, we need to make sure that the time is in sync (since this is a requirement for joining domains). Install NTP and make sure that it starts up at boot time:
     [root@ps-ldap-21 ~]# yum -y install ntp
     * * *
     Installed: ntp.x86_64 0:4.2.6p5-25.el7.centos.2
     * * *
     [root@ps-ldap-21 ~]# ntpdate 0.centos.pool.ntp.org
     3 Jul 03:48:35 ntpdate[3708]: step time server 202.90.132.242 offset 1.024550 sec
     [root@ps-ldap-21 ~]# systemctl enable ntpd.service
     Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
     [root@ps-ldap-21 ~]# systemctl start ntpd.service
  2. Install realmd and its dependencies for SSSD or Winbind.
    For SSSD:
    yum -y install realmd oddjob oddjob-mkhomedir sssd adcli samba-common-tools
    For Winbind:
    yum -y install realmd oddjob oddjob-mkhomedir samba-winbind-clients samba-winbind samba-common-tools
Joining the Domain via SSSD and Preparing It for Percona PAM
  1. Run realm discover domain for realmd to discover what type of server it’s connecting to and what package dependencies need to be installed:
     [root@ps-ldap-21 ~]# realm discover example.com
     example.com
       type: kerberos
       realm-name: EXAMPLE.COM
       domain-name: example.com
       configured: no
       server-software: active-directory
       client-software: sssd
       required-package: oddjob
       required-package: oddjob-mkhomedir
       required-package: sssd
       required-package: adcli
       required-package: samba-common-tools
    Our Samba PDC is detected as an Active Directory Controller, and the packages required have been installed previously.
  2. The next step is to join the domain by running realm join domain. If you want to get more information, add the --verbose option. You could also add the -U user option if you want to use a different administrator account.
     [root@ps-ldap-21 ~]# realm join example.com --verbose
      * Resolving: _ldap._tcp.example.com
      * Performing LDAP DSE lookup on: 172.16.0.10
      * Successfully discovered: example.com
     Password for Administrator:
      * Required files: /usr/sbin/oddjobd, /usr/libexec/oddjob/mkhomedir, /usr/sbin/sssd, /usr/bin/net
      * LANG=C LOGNAME=root /usr/bin/net -s /var/cache/realmd/realmd-smb-conf.DM6W2Y -U Administrator ads join example.com
     Enter Administrator's password:
     Using short domain name -- EXAMPLE
     Joined 'PS-LDAP-21' to dns domain 'example.com'
      * LANG=C LOGNAME=root /usr/bin/net -s /var/cache/realmd/realmd-smb-conf.DM6W2Y -U Administrator ads keytab create
     Enter Administrator's password:
      * /usr/bin/systemctl enable sssd.service
     Created symlink from /etc/systemd/system/multi-user.target.wants/sssd.service to /usr/lib/systemd/system/sssd.service.
      * /usr/bin/systemctl restart sssd.service
      * /usr/bin/sh -c /usr/sbin/authconfig --update --enablesssd --enablesssdauth --enablemkhomedir --nostart && /usr/bin/systemctl enable oddjobd.service && /usr/bin/systemctl start oddjobd.service
      * Successfully enrolled machine in realm
    As you can see from the command above, the realm command simplifies SSSD configuration and uses existing tools such as net and authconfig to join the domain and use it as an identity provider.
  3. Let’s test if we enumerate existing accounts by using the id command:
     [root@ps-ldap-21 ~]# id jervin
     id: jervin: no such user
     [root@ps-ldap-21 ~]# id jervin@example.com
     uid=343401115(jervin@example.com) gid=343400513(domain users@example.com) groups=343400513(domain users@example.com),343401103(support@example.com)
    As you can see, the user can be queried if the domain is specified. So if you want to log in as ‘jervin@example.com’, in Percona Server for MySQL you’ll need to create the user as ‘jervin@example.com’ and not ‘jervin’. For example:
     # Creating user 'jervin@example.com'
     CREATE USER 'jervin@example.com'@'%' IDENTIFIED WITH auth_pam;
     # Logging in as 'jervin@example.com'
     mysql -u 'jervin@example.com'
     If you want to omit the domain name when logging in, you’ll need to change “use_fully_qualified_names = True” to “use_fully_qualified_names = False” in /etc/sssd/sssd.conf, and then restart SSSD. If you do this, then the user can be found without providing the domain:
     [root@ps-ldap-21 ~]# id jervin
     uid=343401115(jervin) gid=343400513(domain users) groups=343400513(domain users),343401103(support)
     [root@ps-ldap-21 ~]# id jervin@example.com
     uid=343401115(jervin) gid=343400513(domain users) groups=343400513(domain users),343401103(support)
    When you create the MySQL user, you don’t need to include the domain anymore:
     # Creating user 'jervin'
     CREATE USER 'jervin'@'%' IDENTIFIED WITH auth_pam;
     # Logging in as 'jervin'
     mysql -u jervin
  4. Optionally, you can specify which users and groups can log in by adding these settings to SSSD:
    Domain access filter
     Under the “[domain/example.com]” section of /etc/sssd/sssd.conf, you can add the following to specify that only users who are members of support and dba are allowed to use SSSD. For example:
    ad_access_filter = (|(memberOf=CN=dba,CN=Users,DC=example,DC=com)(memberOf=CN=support,CN=Users,DC=example,DC=com))
    Simple filters
    You can use realm permit or realm permit -g to allow particular users or groups. For example:
     realm permit jervin
     realm permit -g support
     realm permit -g dba
    You can check sssd.conf on how these ACLs are implemented:
     access_provider = simple
     simple_allow_groups = support, dba
     simple_allow_users = jervin
  5. Finally, configure Percona Server for MySQL to authenticate to SSSD by creating /etc/pam.d/mysqld with this content:
     auth required pam_sss.so
     account required pam_sss.so
  6. Done. All you need to do now is to install Percona Server for MySQL, enable the auth_pam and auth_pam_compat plugins, and add PAM users. You can then check for authentication errors at /var/log/secure for troubleshooting. You could also get verbose logs by adding debug_level=[1-9] to [nss], [pam], or [domain] and then restarting SSSD. You can view the logs from /var/log/sssd.
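
As a quick sketch of that last step, here is one way it might look (the plugin library names below are the ones Percona Server for MySQL ships; verify them against your version's documentation):

# Enable the PAM authentication plugins
mysql -e "INSTALL PLUGIN auth_pam SONAME 'auth_pam.so';"
mysql -e "INSTALL PLUGIN auth_pam_compat SONAME 'auth_pam_compat.so';"
# Create a PAM-authenticated user (matches the earlier SSSD example)
mysql -e "CREATE USER 'jervin'@'%' IDENTIFIED WITH auth_pam;"
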
Joining the Domain via Winbind and Preparing it for Percona PAM
  1. The realm command assumes that SSSD is used. To change the client software, use --client-software=winbind instead:
     [root@ps-ldap-21 ~]# realm --client-software=winbind discover example.com
     example.com
       type: kerberos
       realm-name: EXAMPLE.COM
       domain-name: example.com
       configured: no
       server-software: active-directory
       client-software: winbind
       required-package: oddjob-mkhomedir
       required-package: oddjob
       required-package: samba-winbind-clients
       required-package: samba-winbind
       required-package: samba-common-tools
  2. Since the required packages have already been installed, we can now attempt to join this host to the domain:
     [root@ps-ldap-21 ~]# realm --verbose --client-software=winbind join example.com
      * Resolving: _ldap._tcp.example.com
      * Performing LDAP DSE lookup on: 172.16.0.10
      * Successfully discovered: example.com
     Password for Administrator:
      * Required files: /usr/libexec/oddjob/mkhomedir, /usr/sbin/oddjobd, /usr/bin/wbinfo, /usr/sbin/winbindd, /usr/bin/net
      * LANG=C LOGNAME=root /usr/bin/net -s /var/cache/realmd/realmd-smb-conf.9YEO2Y -U Administrator ads join example.com
     Enter Administrator's password:
     Using short domain name -- EXAMPLE
     Joined 'PS-LDAP-21' to dns domain 'example.com'
      * LANG=C LOGNAME=root /usr/bin/net -s /var/cache/realmd/realmd-smb-conf.9YEO2Y -U Administrator ads keytab create
     Enter Administrator's password:
      * /usr/bin/systemctl enable winbind.service
     Created symlink from /etc/systemd/system/multi-user.target.wants/winbind.service to /usr/lib/systemd/system/winbind.service.
      * /usr/bin/systemctl restart winbind.service
      * /usr/bin/sh -c /usr/sbin/authconfig --update --enablewinbind --enablewinbindauth --enablemkhomedir --nostart && /usr/bin/systemctl enable oddjobd.service && /usr/bin/systemctl start oddjobd.service
      * Successfully enrolled machine in realm
  3. Let’s test if we enumerate existing accounts by using the id command
     [root@ps-ldap-21 ~]# id jervin
     id: jervin: no such user
     [root@ps-ldap-21 ~]# id jervin@example.com
     uid=10000(EXAMPLE\jervin) gid=10000(EXAMPLE\domain users) groups=10000(EXAMPLE\domain users),10001(EXAMPLE\support)
     Unfortunately for Winbind, users identified with their domains cannot log in to Percona Server for MySQL. We need to disable this from the Samba config (performed in the next step).
  4. Edit /etc/samba/smb.conf, and change “winbind use default domain = no” to “winbind use default domain = yes”. Restart the Winbind service. For example:
     vi /etc/samba/smb.conf
     # Look for:  "winbind use default domain = no"
     # Change to: "winbind use default domain = yes"
     systemctl restart winbind.service
    Try running id again:
     [root@ps-ldap-21 ~]# id jervin
     uid=10000(jervin) gid=10000(domain users) groups=10000(domain users),10001(support)
     [root@ps-ldap-21 ~]# id jervin@example.com
     id: jervin@example.com: no such user
    When you create the MySQL user, do not include the domain name. For example:
     # Creating user 'jervin'
     CREATE USER 'jervin'@'%' IDENTIFIED WITH auth_pam;
     # Logging in as 'jervin'
     mysql -u jervin
  5. Finally, configure Percona Server for MySQL to authenticate to Winbind by creating /etc/pam.d/mysqld with this content:
     auth required pam_winbind.so
     account required pam_winbind.so

You can debug authentication attempts by reviewing the logs at /var/log/secure. You may also change “auth required pam_winbind.so” to “auth required pam_winbind.so debug” in /etc/pam.d/mysqld to get verbose logging in the same file.

As for filtering who can authenticate with Winbind, you can add require_membership_of=group_name under the [global] section of /etc/security/pam_winbind.conf.

You'll need to restart the winbind daemon to apply the changes.

Conclusion

Thanks to realmd, it’s easier to set up Active Directory as an identity provider. With minimal configuration tweaks, you can use the identity provider to authenticate MySQL users.

percona toolkit installation failed in Centos7

Latest Forum Posts - July 13, 2017 - 11:15am
Hello,

I am trying to install Percona Toolkit on CentOS 7 using the yum repositories, but I am getting HTTP Error 404 - Not Found errors. Is installation from the yum repositories supported for CentOS 7 or not? Thanks in advance for your help!

Error message:

Loaded plugins: fastestmirror, langpacks
http://repo.percona.com/release/7.3/...ta/repomd.xml: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
To address this issue please refer to the below knowledge base article

https://access.redhat.com/articles/1320623

If above article doesn't help to resolve this issue please create a bug on https://bugs.centos.org/



One of the configured repositories failed (Percona-Release YUM repository - noarch),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:

1. Contact the upstream for the repository and get them to fix the problem.

2. Reconfigure the baseurl/etc. for the repository, to point to a working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).

3. Run the command with the repository temporarily disabled
yum --disablerepo=percona-release-noarch ...

4. Disable the repository permanently, so yum won't use it by default. Yum
will then just ignore the repository until you permanently enable it
again or use --enablerepo for temporary usage:

yum-config-manager --disable percona-release-noarch
or
subscription-manager repos --disable=percona-release-noarch

5. Configure the failing repository to be skipped, if it is unavailable.
Note that yum will try to contact the repo. when it runs most commands,
so will have to try and fail each time (and thus. yum will be be much
slower). If it is a very temporary problem though, this is often a nice
compromise:

yum-config-manager --save --setopt=percona-release-noarch.skip_if_unavailable=true

failure: repodata/repomd.xml from percona-release-noarch: [Errno 256] No more mirrors to try.
http://repo.percona.com/release/7.3/...ta/repomd.xml: [Errno 14] HTTP Error 404 - Not Found


Thanks,
Vishnu

pt-query-digest display queries in order of time from tcpdump

Latest Forum Posts - July 13, 2017 - 8:13am
Hello,

I am currently debugging an application where there is a two second lag. I used tcpdump to capture the database activities and use pt-query-digest to produce output.
tcpdump -s 65535 -x -nn -q -tttt -i any -c 400000 port 3306 > mysqltcp.txt
pt-query-digest --type tcpdump mysqltcp.txt > tcpdigest.out

The output is useful, but I want to display the digest output in order of time. Currently the data is aggregated. We want to see the play-by-play of the query execution during the two seconds. Is it possible?

thanks,

benny

Give PMM an IP address running on ESX

Latest Forum Posts - July 13, 2017 - 6:42am
Hi,

I am trying to set up PMM (1.1.5) as a virtual appliance running on VMware ESX.

I need to assign the machine a static IP address, but as far as I can see ESX doesn't give the option of assigning the IP address.

Some appliances take you through a wizard, others just get you to log in using the console and bring up a network adapter manually.

I understand you can manage the networking of the appliance externally when using VMware Workstation or VirtualBox, but this isn't the case with ESX.

Is there a default password for the root account on the box I could use, or is there another way of assigning a static IP address?

Thanks in advance

Darren

Gh-ost benchmark against pt-online-schema-change performance

Latest MySQL Performance Blog posts - July 12, 2017 - 11:31am

In this blog post, I will run a gh-ost benchmark against the performance of pt-online-schema-change.

When gh-ost came out, I was very excited. As MySQL ROW replication became commonplace, you could use it to track changes instead of triggers. This practice is cleaner and safer compared to Percona Toolkit’s pt-online-schema-change. Since gh-ost doesn’t need triggers, I assumed it would generate lower overhead and work faster. I frequently called it “pt-online-schema-change on steroids” in my talks. Finally, I’ve found some time to check my theoretical claims with some benchmarks.

DISCLAIMER: These benchmarks correspond to one specific ALTER TABLE on the table of one specific structure and hardware configuration. I have not set up a broad set of tests. If you have other results – please comment!

Benchmark Setup Details
  • pt-online-schema-change from Percona Toolkit 3.0.3
  • gh-ost 1.0.36
  • Percona Server 5.7.18 on Ubuntu 16.04 LTS
  • Hardware: 28CPU cores/56 Threads.  128GB Memory.   Samsung 960 Pro 512GB
  • Sysbench 1.0.7

Prepare the table by running:

sysbench --threads=40 --rate=0 --report-interval=1 --percentile=99 --events=0 --time=0 --db-ps-mode=auto --mysql-user=sbtest --mysql-password=sbtest  /usr/share/sysbench/oltp_read_write.lua --table_size=10000000 prepare

The table size is about 3GB (completely fitting to innodb_buffer_pool).

Run the benchmark in “full ACID” mode with:

  • sync_binlog=1
  • innodb_flush_log_at_trx_commit=1
  • innodb_doublewrite=1

This is important as this workload is heavily commit-bound, and extensively relies on group commit.

This is the pt-online-schema-change command to alter table:

time pt-online-schema-change --execute --alter "ADD COLUMN c1 INT" D=sbtest,t=sbtest1

This the gh-ost command to alter table:

time ./gh-ost  --user="sbtest" --password="sbtest" --host=localhost --allow-on-master --database="sbtest" --table="sbtest1"  --alter="ADD COLUMN c1 INT" --execute

Tests Details

For each test the old sysbench table was dropped and a new one prepared. I tested alter table in three different cases:

  • When nothing else was running (“Idle Load”)   
  • When the system handled about 2% of load it can handle at full capacity (“Light Background Load”)
  • When the system handled about 40% of the possible load, with sysbench injecting about 25% of the transactions/sec the system could handle at full load (“Heavy Background Load”)

I measured the alter table completion times for all cases, as well as the overhead generated by the alter (in other words, how much peak throughput is reduced by running alter table through the tools).

Idle Load

For the Idle Load test, pt-online-schema-change completed nearly twice as fast as gh-ost. This was a big surprise for me. I haven’t looked into the reasons or details yet, though I can see most of the CPU usage for gh-ost is on the MySQL server side. Perhaps the differences relate to the SQL used to perform non-blocking alter tables.

Light Background Load

I generated the Light Background Load by running the sysbench command below. It corresponds to a roughly 4% load, as the system can handle some 2500 transactions/sec at this concurrency under full load. Adjust the --rate value to scale it for your system.

time sysbench --threads=40 --rate=100 --report-interval=1 --percentile=99 --events=0 --time=0 --db-ps-mode=auto --mysql-user=sbtest --mysql-password=sbtest  /usr/share/sysbench/oltp_read_write.lua --table_size=10000000 run

The numbers changed (as expected), but pt-online-schema-change is still approximately twice as fast as gh-ost.

What is really interesting in this case is how a relatively light background load affects the process completion time. It took both pt-online-schema-change and gh-ost about 2.7 times longer to finish!

Heavy Background Load

I generated the Heavy Background Load running the sysbench command below. It corresponds to a roughly 40% load, as the system can handle some 2500 transactions/sec at this concurrency under full load. Adjust --rate value to scale it for your system.

time sysbench --threads=40 --rate=1000 --report-interval=1 --percentile=99 --events=0 --time=0 --db-ps-mode=auto --mysql-user=sbtest --mysql-password=sbtest  /usr/share/sysbench/oltp_read_write.lua --table_size=10000000 run

What happened in this case? When the load gets higher, gh-ost can’t keep up with binary log processing, and just never finishes at all. While this may be surprising at first, it makes sense if you think more about how these tools work. pt-online-schema-change uses triggers, and while they have a lot of limitations and overhead they can execute in parallel. gh-ost, on the other hand, processes the binary log in a single thread and might not be able to keep up.   

In MySQL 5.6 we didn’t have parallel replication that could apply writes to the same table in parallel. For that version, the gh-ost limitation probably isn’t as big a deal, as such a heavy load would also cause replication lag. MySQL 5.7 has parallel replication. This makes it much easier to quickly replicate workloads that are too heavy for gh-ost to handle.

I should note that the workload being simulated in this benchmark is a rather extreme case. The table being altered by gh-ost here is at the same time handling a background load so high it can’t be replicated in a single thread.

Future versions of gh-ost could improve this issue by applying binlog events in parallel, similar to what MySQL replicas do.

An excerpt from the gh-ost log shows how it is totally backed up trying to apply the binary log:

root@rocky:/tmp# time ./gh-ost --user="sbtest" --password="sbtest" --host=localhost --allow-on-master --database="sbtest" --table="sbtest1" --alter="ADD COLUMN c1 INT" --execute
2017/06/25 19:16:05 binlogsyncer.go:75: [info] create BinlogSyncer with config &{99999 mysql localhost 3306 sbtest sbtest  false false <nil>}
2017/06/25 19:16:05 binlogsyncer.go:241: [info] begin to sync binlog from position (rocky-bin.000018, 640881773)
2017/06/25 19:16:05 binlogsyncer.go:134: [info] register slave for master server localhost:3306
2017/06/25 19:16:05 binlogsyncer.go:568: [info] rotate to (rocky-bin.000018, 640881773)
2017-06-25 19:16:05 ERROR parsing time "" as "2006-01-02T15:04:05.999999999Z07:00": cannot parse "" as "2006"
# Migrating `sbtest`.`sbtest1`; Ghost table is `sbtest`.`_sbtest1_gho`
# Migrating rocky:3306; inspecting rocky:3306; executing on rocky
# Migration started at Sun Jun 25 19:16:05 -0400 2017
# chunk-size: 1000; max-lag-millis: 1500ms; max-load: ; critical-load: ; nice-ratio: 0.000000
# throttle-additional-flag-file: /tmp/gh-ost.throttle
# Serving on unix socket: /tmp/gh-ost.sbtest.sbtest1.sock
Copy: 0/9872432 0.0%; Applied: 0; Backlog: 0/100; Time: 0s(total), 0s(copy); streamer: rocky-bin.000018:641578191; State: migrating; ETA: N/A
Copy: 0/9872432 0.0%; Applied: 0; Backlog: 100/100; Time: 1s(total), 1s(copy); streamer: rocky-bin.000018:641626699; State: migrating; ETA: N/A
Copy: 0/9872432 0.0%; Applied: 640; Backlog: 100/100; Time: 2s(total), 2s(copy); streamer: rocky-bin.000018:641896215; State: migrating; ETA: N/A
Copy: 0/9872432 0.0%; Applied: 1310; Backlog: 100/100; Time: 3s(total), 3s(copy); streamer: rocky-bin.000018:642178659; State: migrating; ETA: N/A
Copy: 0/9872432 0.0%; Applied: 1920; Backlog: 100/100; Time: 4s(total), 4s(copy); streamer: rocky-bin.000018:642436043; State: migrating; ETA: N/A
Copy: 0/9872432 0.0%; Applied: 2600; Backlog: 100/100; Time: 5s(total), 5s(copy); streamer: rocky-bin.000018:642722777; State: ...
Copy: 0/9872432 0.0%; Applied: 120240; Backlog: 100/100; Time: 3m0s(total), 3m0s(copy); streamer: rocky-bin.000018:694142377; State: migrating; ETA: N/A
Copy: 0/9872432 0.0%; Applied: 140330; Backlog: 100/100; Time: 3m30s(total), 3m30s(copy); streamer: rocky-bin.000018:702948219; State: migrating; ETA: N/A
Copy: 0/9872432 0.0%; Applied: 160450; Backlog: 100/100; Time: 4m0s(total), 4m0s(copy); streamer: rocky-bin.000018:711775662; State: migrating; ETA: N/A
Copy: 0/9872432 0.0%; Applied: 180600; Backlog: 100/100; Time: 4m30s(total), 4m30s(copy); streamer: rocky-bin.000018:720626338; State: migrating; ETA: N/A
Copy: 0/9872432 0.0%; Applied: 200770; Backlog: 100/100; Time: 5m0s(total), 5m0s(copy); streamer: rocky-bin.000018:729509960; State: migrating; ETA: N/A

Online Schema Change Performance Impact

For this test I started the alter table, waited 60 seconds and then ran sysbench at full speed for five minutes. Then I measured how much the performance was impacted by running the tool:

sysbench --threads=40 --rate=0 --report-interval=1 --percentile=99 --events=0 --time=300 --db-ps-mode=auto --mysql-user=sbtest --mysql-password=sbtest  /usr/share/sysbench/oltp_read_write.lua --table_size=10000000 run

As we can see, gh-ost has negligible overhead in this case. pt-online-schema-change, on the other hand, had performance reduced by 12%. It is worth noting though that pt-online-schema-change still makes progress in this case (though slowly), while gh-ost would never complete.

If anything, I was surprised at how little impact the pt-online-schema-change run had on sysbench performance.

It’s important to note that in this case we only measured the overhead for the “copy” stage of the online schema change. Another thing you should worry about is the impact to performance during “table rotation” (which I have not measured).

Summary

While gh-ost introduces a number of design advantages, and gives better results in some situations, I wouldn’t call it always superior to the tried and true pt-online-schema-change. At least in some cases, pt-online-schema-change offers better performance than gh-ost and completes a schema change when gh-ost is unable to keep up. Consider trying out both tools and see what works best in your situation.

Migrating from Percona server to PXC

Latest Forum Posts - July 12, 2017 - 6:29am
Hi - I'm looking to migrate a Percona Server setup (currently master->slave->slave on version 5.5) to PXC. My questions are in regard to the MySQL datadir: can I simply uninstall Percona Server (using yum, for example) and install PXC on the machines I'm migrating? Will the /var/lib/mysql directory remain intact?

I've tested this in a dev environment and it seemed to work fine (even going from Percona server 5.5 to PXC 5.7). All the data was intact after migration. I was wondering if there was anything I should be concerned about. Also would it be more prudent to install 5.5, then upgrade PXC to 5.6 and then 5.7?

Active/Passive Cluster with Percona

Latest Forum Posts - July 12, 2017 - 2:48am
Hey guys,
after I set up a failover cluster with MySQL replication (binlog & relay log) using heartbeat for the floating IP, I changed my mind about MySQL replication clustering.
Now I use Percona and it works fine.
I had some trouble with the floating IP created by heartbeat.
My company has two database servers (for the failover cluster) in different subnets, and the floating IP doesn't work.
Now I'm looking for a solution like a floating IP to run an active/passive cluster.

can you help me?

how to get {master_log_file, master_log_pos} when gtid_mode is enabled

Latest Forum Posts - July 11, 2017 - 8:58pm
GTID is enabled on my instance. When using xtrabackup with the 1) --slave-info and 2) --prepare options, there is only the master_auto_position option in the xtrabackup_slave_info file, instead of master_log_file and master_log_pos.

Is there any option available in the current versions for my need? I need to pass {file,pos} to mysqlbinlog, which is used to back up binlogs after every full backup.

thanks in advance!
