
Percona XtraBackup 2.4.7 is Now Available

Latest Forum Posts - April 18, 2017 - 11:53pm
Percona announces the GA release of Percona XtraBackup 2.4.7 on April 18, 2017. You can download it from our download site and from our apt and yum repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backups. New features:

  • Percona XtraBackup now uses a hardware-accelerated implementation of crc32 where it is supported.
  • Percona XtraBackup has implemented two new options, --tables-exclude and --databases-exclude, that work similarly to the --tables and --databases options, but exclude the given names/paths from the backup.
  • The xbstream binary now supports parallel extraction with the --parallel option.
  • The xbstream binary now supports the following new options: --decrypt, --encrypt-threads, --encrypt-key, and --encrypt-key-file. When the --decrypt option is specified, xbstream automatically decrypts encrypted files when extracting the input stream. Either the --encrypt-key or the --encrypt-key-file option (but not both) must be specified to provide the encryption key. The --encrypt-threads option sets the number of worker threads doing the encryption; the default is 1. (A usage sketch follows this list.)
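
For example, a minimal sketch of extracting an encrypted streamed backup (the file names, key file path and the AES256 algorithm choice are assumptions for illustration):

$ xbstream -x --parallel=4 --decrypt=AES256 \
    --encrypt-key-file=/etc/mysql/backup.key --encrypt-threads=2 \
    -C /data/restore < backup.xbstream
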
Bugs Fixed:

  • Backups were missing *.isl files for general tablespaces. Bug fixed #1658692.
  • In 5.7, MySQL changed the default checksum algorithm to crc32, while xtrabackup was using innodb. This caused xtrabackup to perform extra checksum calculations that were not needed. Bug fixed #1664405.
  • For system tablespaces consisting of multiple files, xtrabackup updated the LSN only in the first file. This caused MySQL versions lower than 5.7 to fail on startup. Bug fixed #1669592.
  • xtrabackup --export can now export tables that have more than 31 indexes. Bug fixed #1089681.
  • An "Unrecognized character \x01; marked by" message could be seen if backups were taken with the version check enabled. Bug fixed #1651978.
Release notes with all the bugfixes for Percona XtraBackup 2.4.7 are available in our online documentation. Please report any bugs to the launchpad bug tracker.

Percona XtraBackup 2.3.8 is Now Available

Latest Forum Posts - April 18, 2017 - 11:51pm
Percona announces the release of Percona XtraBackup 2.3.8 on April 18, 2017. Downloads are available from our download site or Percona Software Repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backups.

This release is the current GA (Generally Available) stable release in the 2.3 series.

New Features

  • Percona XtraBackup now uses a hardware-accelerated implementation of crc32 where it is supported.
  • Percona XtraBackup has implemented two new options, --tables-exclude and --databases-exclude, that work similarly to the --tables and --databases options, but exclude the given names/paths from the backup.
  • The xbstream binary now supports parallel extraction with the --parallel option.
  • The xbstream binary now supports the following new options: --decrypt, --encrypt-threads, --encrypt-key, and --encrypt-key-file. When the --decrypt option is specified, xbstream automatically decrypts encrypted files when extracting the input stream. Either the --encrypt-key or the --encrypt-key-file option (but not both) must be specified to provide the encryption key. The --encrypt-threads option sets the number of worker threads doing the encryption; the default is 1.
Bugs Fixed:

  • xtrabackup would not create fresh InnoDB redo logs when preparing an incremental backup. Bug fixed #1669592.
  • xtrabackup --export can now export tables that have more than 31 indexes. Bug fixed #1089681.
  • An "Unrecognized character \x01; marked by" message could be seen if backups were taken with the version check enabled. Bug fixed #1651978.
Release notes with all the bugfixes for Percona XtraBackup 2.3.8 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

Cluster failure following a filesystem-level error

Latest Forum Posts - April 18, 2017 - 12:51pm
We have a three-node cluster, on VMs using ExtremeIO storage for the data filesystem, which suffered a failure this morning. The event that triggered the failure appears to have been a storage-level error which caused node 3 to fail to create a new binlog file, in response to which mysqld declared that it was ceasing all logging. Some time afterward, nodes 1 and 2 experienced simultaneous failures to commit a set of updates, declared themselves inconsistent, and shut down, whereupon node 3 lost quorum and declared itself non-primary.

Galera does use ROW replication data, as we all know. At what level does Galera obtain the data, and at what level does logging get shut off in response to a storage-level failure as described here? Would mysqld disabling all logging cause Galera replication from node 3 to fail? Our working theory at present is that nodes 1 and 2 failed because they attempted to update rows which had been written by node 3, but never replicated to nodes 1 and 2, because the binary logging failure on node 3 also disabled outgoing Galera replication from node 3. Does this hypothesis make sense?

Percona XtraBackup 2.4.7 is Now Available

Latest MySQL Performance Blog posts - April 18, 2017 - 12:19pm

Percona announces the GA release of Percona XtraBackup 2.4.7 on April 18, 2017. You can download it from our download site and apt and yum repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backups.

New features:
  • Percona XtraBackup now uses a hardware-accelerated implementation of crc32 where it is supported.
  • Percona XtraBackup has implemented two new options, --tables-exclude and --databases-exclude, that work similarly to the --tables and --databases options, but exclude the given names/paths from the backup.
  • The xbstream binary now supports parallel extraction with the --parallel option.
  • The xbstream binary now supports the following new options: --decrypt, --encrypt-threads, --encrypt-key, and --encrypt-key-file. When the --decrypt option is specified, xbstream automatically decrypts encrypted files when extracting the input stream. Either the --encrypt-key or the --encrypt-key-file option (but not both) must be specified to provide the encryption key. The --encrypt-threads option sets the number of worker threads doing the encryption; the default is 1.
Bugs Fixed:
  • Backups were missing *.isl files for general tablespaces. Bug fixed #1658692.
  • In 5.7, MySQL changed the default checksum algorithm to crc32, while xtrabackup was using innodb. This caused xtrabackup to perform extra checksum calculations that were not needed. Bug fixed #1664405.
  • For system tablespaces consisting of multiple files, xtrabackup updated the LSN only in the first file. This caused MySQL versions lower than 5.7 to fail on startup. Bug fixed #1669592.
  • xtrabackup --export can now export tables that have more than 31 indexes. Bug fixed #1089681.
  • An "Unrecognized character \x01; marked by" message could be seen if backups were taken with the version check enabled. Bug fixed #1651978.

Release notes with all the bugfixes for Percona XtraBackup 2.4.7 are available in our online documentation. Please report any bugs to the launchpad bug tracker.

Percona XtraBackup 2.3.8 is Now Available

Latest MySQL Performance Blog posts - April 18, 2017 - 11:36am

Percona announces the release of Percona XtraBackup 2.3.8 on April 18, 2017. Downloads are available from our download site or Percona Software Repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backups.

This release is the current GA (Generally Available) stable release in the 2.3 series.

New Features
  • Percona XtraBackup now uses a hardware-accelerated implementation of crc32 where it is supported.
  • Percona XtraBackup has implemented two new options, --tables-exclude and --databases-exclude, that work similarly to the --tables and --databases options, but exclude the given names/paths from the backup.
  • The xbstream binary now supports parallel extraction with the --parallel option.
  • The xbstream binary now supports the following new options: --decrypt, --encrypt-threads, --encrypt-key, and --encrypt-key-file. When the --decrypt option is specified, xbstream automatically decrypts encrypted files when extracting the input stream. Either the --encrypt-key or the --encrypt-key-file option (but not both) must be specified to provide the encryption key. The --encrypt-threads option sets the number of worker threads doing the encryption; the default is 1.
Bugs Fixed:
  • xtrabackup would not create fresh InnoDB redo logs when preparing an incremental backup. Bug fixed #1669592.
  • xtrabackup --export can now export tables that have more than 31 indexes. Bug fixed #1089681.
  • An "Unrecognized character \x01; marked by" message could be seen if backups were taken with the version check enabled. Bug fixed #1651978.

Release notes with all the bugfixes for Percona XtraBackup 2.3.8 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

M17 Conference Observations on the Future of MariaDB

Latest MySQL Performance Blog posts - April 18, 2017 - 10:52am

In this blog post, I’ll discuss some of my thoughts about the future of MariaDB after attending the M17 Conference.

Let me start with full disclosure: I’m the CEO of Percona, and we compete with the MariaDB Corporation in providing Support for MariaDB and other services. I probably have some biases!

Last week I attended the MariaDB Developers UnConference and the M17 Conference, which provided great insights into MariaDB’s positioning as a project and as a business. Below are some of my thoughts as I attended various sessions at the conference:

Breaking away from their MySQL past. Michael Howard’s (MariaDB CEO) keynote focused on breaking away from the past and embracing the future. In this case, the “past” means proprietary databases. But I think MariaDB is also trying to break away from its past as a MySQL variant, and focus on becoming a completely independent technology. If I didn’t know their history, I wouldn’t recognize how much of its codebase MariaDB shares with MySQL – and how much MariaDB innovation Oracle still drives.

MySQL compatibility is no longer the primary goal. In its first version, MariaDB 5.1 was truly compatible with MySQL (it had relatively few differences). By contrast, MariaDB 10.2 has different replication, different JSON support and a very different optimizer. With MariaDB 10.3, more changes are planned for the InnoDB on-disk format, and there are no plans to remove .frm files and use the MySQL 8 Data Dictionary. With these features, another level of MySQL compatibility is removed. The MariaDB knowledge base states: “For all practical purposes, MariaDB is a binary drop in replacement for the same MySQL version.” The argument can still be made that this is true for MySQL 5.7 (as long as your application does not use some of the new features), but it does not seem to be the case for MySQL 8.

The idea seems to be that since MariaDB has replaced MySQL in many (most?) Linux distributions, and many people use MariaDB when they think they are using MySQL, compatibility is not that big of a deal anymore.

Embracing contributions and keeping the development process open is a big focus of MariaDB. Facebook, Google, Alibaba and Tencent have contributed to MariaDB, along with many smaller companies and independent developers (Percona among them). This is different from the MySQL team at Oracle, which has accepted some outside contributions, but not nearly to the extent that MariaDB has. An open source model is a double-edged sword – while it gives you more features, it also makes it harder to maintain a consistent user experience and consistent quality of code and documentation. It will be interesting to see how MariaDB deals with these challenges.

Oracle compatibility. MariaDB strives to be the open source database that is the most compatible with Oracle, and therefore the easiest to migrate to. I have heard people compare MariaDB’s desire for Oracle compatibility to EDB Postgres – only with the advantage of being open source as opposed to proprietary software. For MariaDB 10.3 (alpha), they are developing support for Oracle PL/SQL syntax for stored procedures, so that applications can be migrated with few, if any, changes. They are also developing support for SEQUENCE and other Oracle features, including a special sql_mode=ORACLE to maximize compatibility.

BSL as a key for success. When it comes to business source licensing (BSL), I couldn’t quite resolve the conflict I found in MariaDB’s messaging. On the one hand, MariaDB promotes open source as a great way to escape vendor lock-in (which we at Percona completely agree with). But on the other hand, Michael Howard stated that BSL software (“Eventual Open Source”) is absolutely critical for MariaDB’s commercial success. Is the assumption here that if vendor lock-in is not forever, it is not really lock-in? Currently, only MariaDB MaxScale is BSL, but it sounds like we should expect more of their software to follow this model.

Note. As MariaDB Server and MariaDB Columnstore use a lot of Oracle’s GPL code, these will most likely remain GPL.

I enjoyed attending both conferences. I had a chance to meet with many old friends and past colleagues, as well as get a much better feel for where MariaDB is going and when it is appropriate to advise its usage.

Database crashes when binary logging not possible

Latest Forum Posts - April 18, 2017 - 7:04am
Hi everyone,
We have been experiencing this issue at least once a day for the last 5 days:

2017-04-18T00:11:49.499793Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 11624ms. The settings might not be optimal. (flushed=200, during the time.)
2017-04-18T00:11:49.649596Z 14 [ERROR] /usr/sbin/mysqld: Binary logging not possible. Message: An error occurred during flush stage of the commit. 'binlog_error_action' is set to 'ABORT_SERVER'. Hence aborting the server.
00:11:49 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware.
Attempting to collect some information that could help diagnose the problem. As this is a crash and something is definitely wrong, the information collection process might fail.
Please help us make Percona Server better by reporting any bugs at http://bugs.percona.com/
key_buffer_size=8388608
read_buffer_size=131072
max_used_connections=189
max_threads=215
thread_count=101
connection_count=101
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 93386 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
Thread pointer: 0x7f04ff816000
Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong...
stack_bottom = 7f0da45dce00 thread_stack 0x30000
/usr/sbin/mysqld(my_print_stacktrace+0x2c)[0xec707c]
/usr/sbin/mysqld(handle_fatal_signal+0x461)[0x79d941]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x10330)[0x7f0dadae3330]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0x37)[0x7f0dacf24c37]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x148)[0x7f0dacf28028]
/usr/sbin/mysqld[0x761bc0]
/usr/sbin/mysqld(_ZN13MYSQL_BIN_LOG33handle_binlog_flush_or_sync_errorEP3THDb+0x19b)[0xe5d1ab]
/usr/sbin/mysqld(_ZN13MYSQL_BIN_LOG14ordered_commitEP3THDbb+0x131)[0xe65691]
/usr/sbin/mysqld(_ZN13MYSQL_BIN_LOG6commitEP3THDb+0xcb3)[0xe676e3]
/usr/sbin/mysqld(_Z15ha_commit_transP3THDbb+0x131)[0x7fd571]
/usr/sbin/mysqld(_Z12trans_commitP3THD+0x39)[0xd51cd9]
/usr/sbin/mysqld(_Z21mysql_execute_commandP3THDb+0xc92)[0xca4f12]
/usr/sbin/mysqld(_Z11mysql_parseP3THDP12Parser_state+0x5c5)[0xcaba75]
/usr/sbin/mysqld(_Z16dispatch_commandP3THDPK8COM_DATA19enum_server_command+0x92f)[0xcac42f]
/usr/sbin/mysqld(_Z10do_commandP3THD+0x1b7)[0xcaddc7]
/usr/sbin/mysqld(handle_connection+0x2a0)[0xd6f8c0]
/usr/sbin/mysqld(pfs_spawn_thread+0x1b4)[0x1246fc4]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x8184)[0x7f0dadadb184]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f0dacfe837d]
Trying to get some variables. Some pointers may be invalid and cause the dump to abort.
Query (7f04ff828030): COMMIT
Connection ID (thread ID): 14
Status: NOT_KILLED
You may download the Percona Server operations manual by visiting http://www.percona.com/software/percona-server/. You may find information in the manual which will help you identify the cause of the crash.
2017-04-18T00:11:51.272796Z mysqld_safe Number of processes running now: 0
2017-04-18T00:11:51.274351Z mysqld_safe mysqld restarted
2017-04-18T00:11:51.296136Z 0 [Warning] Changed limits: max_open_files: 1024 (requested 5000)
2017-04-18T00:11:51.296265Z 0 [Warning] Changed limits: max_connections: 214 (requested 300)
2017-04-18T00:11:51.296278Z 0 [Warning] Changed limits: table_open_cache: 400 (requested 2000)
2017-04-18T00:11:51.443192Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2017-04-18T00:11:51.444043Z 0 [Note] /usr/sbin/mysqld (mysqld 5.7.14-7-log) starting as process 8607 ...
2017-04-18T00:11:51.454822Z 0 [Warning] InnoDB: Using innodb_support_xa is deprecated and the parameter may be removed in future releases. Only innodb_support_xa=ON is allowed.
2017-04-18T00:11:51.454848Z 0 [Warning] InnoDB: Using innodb_file_format is deprecated and the parameter may be removed in future releases. See http://dev.mysql.com/doc/refman/5.7/en/innodb-file-format.html
2017-04-18T00:11:51.454912Z 0 [Note] InnoDB: PUNCH HOLE support available
2017-04-18T00:11:51.454921Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2017-04-18T00:11:51.454925Z 0 [Note] InnoDB: Uses event mutexes
2017-04-18T00:11:51.454929Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2017-04-18T00:11:51.454933Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.8
2017-04-18T00:11:51.454937Z 0 [Note] InnoDB: Using Linux native AIO
2017-04-18T00:11:51.455688Z 0 [Note] InnoDB: Number of pools: 1
2017-04-18T00:11:51.455770Z 0 [Note] InnoDB: Using CPU crc32 instructions
2017-04-18T00:11:51.456478Z 0 [Note] InnoDB: Initializing buffer pool, total size = 32G, instances = 8, chunk size = 128M
2017-04-18T00:11:52.181194Z 0 [Note] InnoDB: Completed initialization of buffer pool
2017-04-18T00:11:52.312596Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
2017-04-18T00:11:52.324805Z 0 [Note] InnoDB: Recovering partial pages from the parallel doublewrite buffer at /var/lib/mysql/xb_doublewrite
2017-04-18T00:11:52.405692Z 0 [Note] InnoDB: Highest supported file format is Barracuda.
2017-04-18T00:11:52.542600Z 0 [Note] InnoDB: Log scan progressed past the checkpoint lsn 303947279741
2017-04-18T00:11:52.613070Z 0 [Note] InnoDB: Doing recovery: scanned up to log sequence number 303952522240
2017-04-18T00:11:52.681503Z 0 [Note] InnoDB: Doing recovery: scanned up to log sequence number 303957765120
2017-04-18T00:11:52.841734Z 0 [Note] InnoDB: Doing recovery: scanned up to log sequence number 303948327936
2017-04-18T00:11:52.919589Z 0 [Note] InnoDB: Doing recovery: scanned up to log sequence number 303953570816
2017-04-18T00:11:53.005760Z 0 [Note] InnoDB: Doing recovery: scanned up to log sequence number 303958813696
2017-04-18T00:11:53.082855Z 0 [Note] InnoDB: Doing recovery: scanned up to log sequence number 303964056576
2017-04-18T00:11:53.086403Z 0 [Note] InnoDB: Doing recovery: scanned up to log sequence number 303964205271
2017-04-18T00:11:53.086696Z 0 [Note] InnoDB: Database was not shutdown normally!
2017-04-18T00:11:53.086701Z 0 [Note] InnoDB: Starting crash recovery.
2017-04-18T00:11:53.338831Z 0 [Note] InnoDB: Created parallel doublewrite buffer at /var/lib/mysql/xb_doublewrite, size 31457280 bytes
2017-04-18T00:11:53.427281Z 0 [Note] InnoDB: Transaction 432564072 was in the XA prepared state.
2017-04-18T00:11:53.428568Z 0 [Note] InnoDB: Transaction 432564072 was in the XA prepared state.
2017-04-18T00:11:53.428713Z 0 [Note] InnoDB: 1 transaction(s) which must be rolled back or cleaned up in total 0 row operations to undo
2017-04-18T00:11:53.428726Z 0 [Note] InnoDB: Trx id counter is 432564480
2017-04-18T00:11:53.428740Z 0 [Note] InnoDB: Starting an apply batch of log records to the database...
InnoDB: Progress in percent: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99

I have no idea why this happens or how to solve it. Any help will be appreciated.

Procedure for Percona XtraDB-Cluster 5.5 --> 5.7 and CentOS update

Lastest Forum Posts - April 18, 2017 - 6:19am
Hi

I have a three-node Percona XtraDB Cluster. Each node runs CentOS 6.6 and Percona XtraDB Cluster v5.5. These are the installed Percona RPMs:

Percona-Server-shared-compat-51-5.1.73-rel14.12.624.rhel6.x86_64
Percona-XtraDB-Cluster-galera-3-3.11-1.rhel6.x86_64
Percona-Server-shared-55-5.5.44-rel37.3.el6.x86_64
Percona-XtraDB-Cluster-client-55-5.5.41-25.11.853.el6.x86_64
Percona-XtraDB-Cluster-server-55-5.5.41-25.11.853.el6.x86_64

I'd like to update this cluster to CentOS 7 and to the latest (5.7) version of Percona XtraDB-Cluster.


If I want to avoid service downtime, I think the procedure should be:

- Reinstall each node from scratch, one by one, with CentOS 7.x and Percona XtraDB Cluster v5.5 (I assume there is no problem if the nodes of the cluster run different operating systems)
- Migrate Percona XtraDB-Cluster from v5.5 to v5.6
- Migrate Percona XtraDB-Cluster from v5.6 to v5.7

Is this correct, or are there better options?

What if, instead, I could afford some downtime?

Thanks, Massimo

Cluster security

Latest Forum Posts - April 18, 2017 - 3:07am
I just set up my new 2-node cluster + 1 Galera Arbitrator and found out that anyone can join the cluster, without knowing the 'wsrep_sst_auth' credentials.
To test this I set up a 4th node which is not in any of the other nodes' 'wsrep_cluster_address', but it does know the correct 'wsrep_cluster_name'.

Is this a feature I can only block by firewall, or did I forget to configure security?

Experimental Build of MyRocks with Percona Server for MySQL

Latest MySQL Performance Blog posts - April 17, 2017 - 5:07pm

We have been working on bringing out a build of MyRocks with Percona Server for MySQL.

MyRocks is a RocksDB-based storage engine. You can find more information about MyRocks here.

While there is still a lot of work to do, I want to share an experimental build of Percona Server for MySQL with MyRocks, which you can use to evaluate and test this engine.

(WARNING: in NO WAY is this build supposed to be for production usage! Consider this ALPHA quality.)

The tar.gz binaries are available from our TESTING area. To start Percona Server for MySQL with the MyRocks engine, use the following line in your my.cnf:

plugin-load=rocksdb=ha_rocksdb.so;rocksdb_cfstats=ha_rocksdb.so;rocksdb_dbstats=ha_rocksdb.so;rocksdb_perf_context=ha_rocksdb.so;rocksdb_perf_context_global=ha_rocksdb.so;rocksdb_cf_options=ha_rocksdb.so;rocksdb_compaction_stats=ha_rocksdb.so;rocksdb_global_info=ha_rocksdb.so;rocksdb_ddl=ha_rocksdb.so;rocksdb_index_file_map=ha_rocksdb.so;rocksdb_locks=ha_rocksdb.so;rocksdb_trx=ha_rocksdb.so
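
Once the server is restarted with this line in place, a quick check that the engine actually registered might look like this (a hedged sketch; connection options are omitted and the test table name is an example):

$ mysql -e "SHOW ENGINES;" | grep -i rocksdb
$ mysql -e "CREATE TABLE test.t1 (id INT PRIMARY KEY, v VARCHAR(32)) ENGINE=ROCKSDB;"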

Later we will provide experimental RPM and DEB packages from our testing repositories. Please let us know how MyRocks is working for you!

mongodb:metrics: exporter files fill up /tmp

Latest Forum Posts - April 17, 2017 - 12:38pm
Hello:

I'm new to the forum, and first time poster.....

We have a problem with the pmm-admin process for "mongodb:metrics", where it appends to files in /tmp and eventually fills up that filesystem. Can we change the location of the mongodb_exporter INFO/WARNING/ERROR files? Or run some sort of cleanup command from pmm-admin to get rid of the old files?

(FYI, we just started using pmm and the graphs look great!)

Dan

The mysqlpump Utility

Latest MySQL Performance Blog posts - April 17, 2017 - 10:45am

In this blog, we’ll look at the mysqlpump utility.

mysqlpump is a utility that performs logical backups (which means backing up your data as SQL statements instead of as a raw copy of the data files). It was added in MySQL Server version 5.7.8, and can be used to dump a database or a set of databases to a file, which can then be loaded on another SQL server (not necessarily a MySQL server).

Its usage is similar to mysqldump, but it includes a new set of features. Many of the options are the same, but it was written from scratch to avoid being limited to mysqldump compatibility.

The Main Features Include:
  • To make the dump process faster, it allows parallel processing of databases and objects within databases.
  • There are more options to customize your dumps and choose which databases and objects to dump (tables, stored programs, user accounts), using the --include-* and --exclude-* parameters.
  • User accounts can now be dumped as CREATE USER and GRANT statements, instead of as inserts directly into the MySQL system database.
  • Information between the client and the server can be compressed using the --compress option. This feature is very useful for remote backups, as it saves bandwidth and transfer time. You can also compress the output file using --compress-output, which supports the ZLIB and LZ4 compression algorithms.
  • It has an estimated progress indicator. This is really useful to check the current status of the dump process. You can see the total amount of rows dumped and the number of databases completed. It also reports an estimate of the total time to complete the dump.
  • Creation of secondary indexes for InnoDB tables happens after data load for shorter load times.
Exclude/Include:

This feature provides more control over customizing your dumps, letting you filter the data that you need. You can be more selective with the data you want to dump (databases, tables, triggers, events, routines, users), and save file size, processing time and transfer time while copying/moving the file to another host.

Keep in mind that there are some options that are mutually exclusive: e.g., if you use the --all-databases option, the --exclude-databases  parameter won’t take effect. By default, mysqlpump will not dump the following databases unless you specify them using the --include-databases option: INFORMATION_SCHEMA, performance_schema, ndbinfo  and sys.

Values for these options need to be declared as a comma-separated list. Using a “%” as a value for any of the exclude/include options acts as a wildcard. For example, you can dump all databases starting with “t” and “p” by adding the option --include-databases=t%,p% to the command line.
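
As a hedged illustration of that case (the output file name is an example), the command would look like:

$ mysqlpump --include-databases=t%,p% > t_and_p_databases.sql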

For users, routines, triggers and events, mysqlpump has --include-* and --exclude-* options with similar usage. Some specific notes:

  • Triggers are dumped by default, but you can also filter them using the --include-triggers/--exclude-triggers options.
  • Routines and events are not dumped by default, and need to be specified on the command line with --routines and --events, or the corresponding --include and --exclude options (see the sketch after this list).
  • Keep in mind that if a stored procedure and a function have the same name, then include/exclude applies to both.
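
A short sketch of the routines/events case (the database name is an example): dump one database together with its stored programs, but leave out its triggers:

$ mysqlpump --databases mydb --routines --events --exclude-triggers=% > mydb.sql
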
Parallel Processing:

This feature allows you to process several databases, and tables within the databases, in parallel. By default, mysqlpump uses one processing queue with two threads. You can increase the number of threads for this default queue with --default-parallelism. Unless you create additional queues, all the databases and/or tables you elect to dump go through the default queue.

To create additional queues you can use the --parallel-schemas option, which takes two parameters: the number of threads for the queue and the sub-set of databases this queue processes.  As an example, you could run:

mysqlpump --include-databases=a,b,c,d,e,f,g,h --default-parallelism=3 --parallel-schemas=4:a,b

so that schemas c, d, e, f, g and h are processed by the default queue (which uses three threads), and then tables from schemas a and b are processed by a separate queue (that uses four threads). Database names should be included as a comma-separated list:

$ mysqlpump --parallel-schemas=4:example1,example2,example3 --parallel-schemas=3:example4,example5 > examples.sql
Dump progress: 0/1 tables, 250/261184 rows
Dump progress: 24/30 tables, 1204891/17893833 rows
Dump progress: 29/30 tables, 1755611/17893833 rows
Dump progress: 29/30 tables, 2309111/17893833 rows
...
Dump completed in 42424 milliseconds

User Accounts:

User accounts can be dumped using this tool. Below is a comparison of our Percona Toolkit utility pt-show-grants and mysqlpump, to check their differences.

By default, mysqlpump doesn’t dump user account definitions (even while dumping the MySQL database). To include user accounts on the dump, you must specify the --users option.

Here’s an example of how to use mysqlpump to dump only user accounts to a file:

$ mysqlpump --exclude-databases=% --exclude-triggers=% --users
-- Dump created by MySQL dump utility, version: 5.7.8-rc, linux-glibc2.5 (x86_64)
-- Dump start time: Thu Aug 27 17:10:10 2015
-- Server version: 5.7.8
SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0;
SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0;
SET @OLD_TIME_ZONE=@@TIME_ZONE;
SET TIME_ZONE='+00:00';
SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT;
SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS;
SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION;
SET NAMES utf8;
CREATE USER 'msandbox'@'127.%' IDENTIFIED WITH 'mysql_native_password' AS '*6C387FC3893DBA1E3BA155E74754DA6682D04747' REQUIRE NONE PASSWORD EXPIRE DEFAULT ACCOUNT UNLOCK;
GRANT ALL PRIVILEGES ON *.* TO 'msandbox'@'127.%';
CREATE USER 'msandbox_ro'@'127.%' IDENTIFIED WITH 'mysql_native_password' AS '*6C387FC3893DBA1E3BA155E74754DA6682D04747' REQUIRE NONE PASSWORD EXPIRE DEFAULT ACCOUNT UNLOCK;
GRANT SELECT, EXECUTE ON *.* TO 'msandbox_ro'@'127.%';
CREATE USER 'msandbox_rw'@'127.%' IDENTIFIED WITH 'mysql_native_password' AS '*6C387FC3893DBA1E3BA155E74754DA6682D04747' REQUIRE NONE PASSWORD EXPIRE DEFAULT ACCOUNT UNLOCK;
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON *.* TO 'msandbox_rw'@'127.%';
CREATE USER 'rsandbox'@'127.%' IDENTIFIED WITH 'mysql_native_password' AS '*B07EB15A2E7BD9620DAE47B194D5B9DBA14377AD' REQUIRE NONE PASSWORD EXPIRE DEFAULT ACCOUNT UNLOCK;
GRANT REPLICATION SLAVE ON *.* TO 'rsandbox'@'127.%';
CREATE USER 'furrywall'@'localhost' IDENTIFIED WITH 'mysql_native_password' AS '*AB8D50A9E3B8D1F3ACE85C54736B5BF472B44539' REQUIRE NONE PASSWORD EXPIRE DEFAULT ACCOUNT LOCK;
GRANT USAGE ON *.* TO 'furrywall'@'localhost';
CREATE USER 'msandbox'@'localhost' IDENTIFIED WITH 'mysql_native_password' AS '*6C387FC3893DBA1E3BA155E74754DA6682D04747' REQUIRE NONE PASSWORD EXPIRE DEFAULT ACCOUNT UNLOCK;
GRANT ALL PRIVILEGES ON *.* TO 'msandbox'@'localhost';
CREATE USER 'msandbox_ro'@'localhost' IDENTIFIED WITH 'mysql_native_password' AS '*6C387FC3893DBA1E3BA155E74754DA6682D04747' REQUIRE NONE PASSWORD EXPIRE DEFAULT ACCOUNT UNLOCK;
GRANT SELECT, EXECUTE ON *.* TO 'msandbox_ro'@'localhost';
CREATE USER 'msandbox_rw'@'localhost' IDENTIFIED WITH 'mysql_native_password' AS '*6C387FC3893DBA1E3BA155E74754DA6682D04747' REQUIRE NONE PASSWORD EXPIRE DEFAULT ACCOUNT UNLOCK;
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON *.* TO 'msandbox_rw'@'localhost';
CREATE USER 'root'@'localhost' IDENTIFIED WITH 'mysql_native_password' AS '*6C387FC3893DBA1E3BA155E74754DA6682D04747' REQUIRE NONE PASSWORD EXPIRE DEFAULT ACCOUNT UNLOCK;
GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION;
CREATE USER 'testuser'@'localhost' IDENTIFIED WITH 'mysql_native_password' AS '*6E543F385210D9BD42A4FDB4BB23FD2C31C95462' REQUIRE NONE PASSWORD EXPIRE INTERVAL 30 DAY ACCOUNT UNLOCK;
GRANT USAGE ON *.* TO 'testuser'@'localhost';
SET TIME_ZONE=@OLD_TIME_ZONE;
SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT;
SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS;
SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION;
SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS;
SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS;
-- Dump end time: Thu Aug 27 17:10:10 2015
Dump completed in 823 milliseconds

As you can see above, the tool makes sure the session uses known values for the time zone and character sets. This won’t affect users; it’s part of the dump process to ensure correctness while restoring on the destination.

Comparing it with pt-show-grants from Percona Toolkit, we can see that mysqlpump dumps the CREATE USER information as well. The statements produced by mysqlpump are the right thing to run to recreate users (and should be the preferred method), especially because of the sql_mode NO_AUTO_CREATE_USERS: if it is enabled, GRANT statements alone cannot create users, which renders pt-show-grants useless.

Here’s an example of pt-show-grants usage:

$ pt-show-grants --host 127.0.0.1 --port 5708 --user msandbox --ask-pass
Enter password:
-- Grants dumped by pt-show-grants
-- Dumped from server 127.0.0.1 via TCP/IP, MySQL 5.7.8-rc at 2015-08-27 17:06:52
-- Grants for 'furrywall'@'localhost'
GRANT USAGE ON *.* TO 'furrywall'@'localhost';
-- Grants for 'msandbox'@'127.%'
GRANT ALL PRIVILEGES ON *.* TO 'msandbox'@'127.%';
-- Grants for 'msandbox'@'localhost'
GRANT ALL PRIVILEGES ON *.* TO 'msandbox'@'localhost';
-- Grants for 'msandbox_ro'@'127.%'
GRANT EXECUTE, SELECT ON *.* TO 'msandbox_ro'@'127.%';
-- Grants for 'msandbox_ro'@'localhost'
GRANT EXECUTE, SELECT ON *.* TO 'msandbox_ro'@'localhost';
-- Grants for 'msandbox_rw'@'127.%'
GRANT ALTER, CREATE, CREATE TEMPORARY TABLES, DELETE, DROP, EXECUTE, INDEX, INSERT, LOCK TABLES, SELECT, SHOW DATABASES, UPDATE ON *.* TO 'msandbox_rw'@'127.%';
-- Grants for 'msandbox_rw'@'localhost'
GRANT ALTER, CREATE, CREATE TEMPORARY TABLES, DELETE, DROP, EXECUTE, INDEX, INSERT, LOCK TABLES, SELECT, SHOW DATABASES, UPDATE ON *.* TO 'msandbox_rw'@'localhost';
-- Grants for 'root'@'localhost'
GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION;
GRANT PROXY ON ''@'' TO 'root'@'localhost' WITH GRANT OPTION;
-- Grants for 'rsandbox'@'127.%'
GRANT REPLICATION SLAVE ON *.* TO 'rsandbox'@'127.%';
-- Grants for 'testuser'@'localhost'
GRANT USAGE ON *.* TO 'testuser'@'localhost';

Some Miscellaneous Notes:
  • One of the differences from mysqldump is that mysqlpump adds CREATE DATABASE statements to the dump by default, unless the --no-create-db option is specified.
    • There’s an important, closely related difference in the dump process: mysqlpump includes the database name when adding the CREATE TABLE statement. This causes a problem when trying to use the tool to create a duplicate of a database under another name. (A sketch follows.)
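
A hedged illustration of that last point (database and table names are examples; output abbreviated): because each CREATE TABLE is qualified with the source database name, you cannot simply replay the dump into a database with a different name:

$ mysqlpump mydb > mydb.sql
$ grep 'CREATE TABLE' mydb.sql | head -1
CREATE TABLE `mydb`.`t1` ( ... )
$ mysql mydb_copy < mydb.sql   # the tables are created back in `mydb`, not in `mydb_copy`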

MySQL 5.6 crashing with stack_bottom = 0 thread_stack 0x30000

Latest Forum Posts - April 15, 2017 - 7:38pm
Hi,
I'm having serious trouble with one MySQL server. I was getting permission errors in the log saying that the plugin and user tables were not readable. I tried starting the server with innodb_force_recovery = 1 and --skip-grant-tables; then I could access the tables, but the user table was still read-only.

But when I now start the server, it just crashes with the errors below:

root@DB11a:~# /usr/sbin/mysqld --skip-grant-tables --basedir=/ --datadir=/data/mysql/data --user=mysql --pid-file=/var/lib/mysql/db11a.pid --skip-external-locking --port=3306 --socket=/var/lib/mysql/mysql.sock
2017-04-16 04:33:09 0 [Note] /usr/sbin/mysqld (mysqld 5.6.30-log) starting as process 15357 ...
2017-04-16 04:33:09 15357 [Note] Plugin 'FEDERATED' is disabled.
2017-04-16 04:33:09 15357 [Note] InnoDB: Using atomics to ref count buffer pool pages
2017-04-16 04:33:09 15357 [Note] InnoDB: The InnoDB memory heap is disabled
2017-04-16 04:33:09 15357 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2017-04-16 04:33:09 15357 [Note] InnoDB: Memory barrier is not used
2017-04-16 04:33:09 15357 [Note] InnoDB: Compressed tables use zlib 1.2.8
2017-04-16 04:33:09 15357 [Note] InnoDB: Using Linux native AIO
2017-04-16 04:33:09 15357 [Note] InnoDB: Using CPU crc32 instructions
2017-04-16 04:33:09 15357 [Note] InnoDB: Initializing buffer pool, size = 64.0G
2017-04-16 04:33:11 15357 [Note] InnoDB: Completed initialization of buffer pool
2017-04-16 04:33:11 15357 [Note] InnoDB: Highest supported file format is Barracuda.
2017-04-16 04:33:12 15357 [Note] InnoDB: 128 rollback segment(s) are active.
2017-04-16 04:33:12 15357 [Note] InnoDB: Waiting for purge to start
02:33:12 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.

key_buffer_size=33554432
read_buffer_size=2097152
max_used_connections=0
max_threads=6000
thread_count=0
connection_count=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 61549549 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x30000
/usr/sbin/mysqld(my_print_stacktrace+0x3b)[0x8d192b]
/usr/sbin/mysqld(handle_fatal_signal+0x4a1)[0x65ce81]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7fbf93ade390]
/usr/sbin/mysqld(my_realpath+0x8b)[0x8cebdb]
/usr/sbin/mysqld(_Z19find_or_create_fileP10PFS_threadP14PFS_file_classPKcjb+0x9c)[0x93711c]
/usr/sbin/mysqld[0x95d134]
/usr/sbin/mysqld[0xacf54f]
/usr/sbin/mysqld[0xad02c3]
/usr/sbin/mysqld[0xad5268]
/usr/sbin/mysqld[0xad9ebe]
/usr/sbin/mysqld[0xaa2bf3]
/usr/sbin/mysqld[0xa88a22]
/usr/sbin/mysqld[0xa6f212]
/usr/sbin/mysqld[0xa15cb3]
/usr/sbin/mysqld[0xa1280f]
/usr/sbin/mysqld[0xa14386]
/usr/sbin/mysqld[0x9e2ad8]
/usr/sbin/mysqld[0xa3a072]
/usr/sbin/mysqld[0xa2ca0b]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba)[0x7fbf93ad46ba]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7fbf92f6982d]
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.

I would really appreciate any help here.

Regards
Jens

Run Percona Server on Bash on Ubuntu on Windows

Latest MySQL Performance Blog posts - April 14, 2017 - 1:54pm

In this post, I’ll explain how to run Percona Server for MySQL and Percona Server for MongoDB on Bash on Ubuntu on Windows.

We are getting a good number of questions about whether Percona Server (for both MySQL and MongoDB) is available for Windows for evaluation or development purposes. I want to provide a guide to how to get it running.

In the comments to the post Running Percona XtraBackup on Windows … in Docker, Peter Laursen recommended Bash on Ubuntu on Windows. That hadn’t occurred to me before, so the credit goes to Peter Laursen.

At the time of that older post, it appeared that Percona XtraBackup was not working right in Bash on Ubuntu on Windows. But in my latest test on the Windows 10 Creators Update, the problem seems resolved.

But you can get Percona Server for MySQL (and Percona Server for MongoDB) running in Bash on Ubuntu on Windows right away. It is quite easy and does not require extra work. Probably the biggest step is to get Bash on Ubuntu on Windows enabled in the first place. A guide on how to do this is here:

https://www.howtogeek.com/249966/how-to-install-and-use-the-linux-bash-shell-on-windows-10/

After this, follow the steps to install Percona Server for MySQL or Percona Server for MongoDB from our Ubuntu repositories:
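
A hedged sketch of those steps (the percona-release package version was current at the time and may have changed since):

$ wget https://repo.percona.com/apt/percona-release_0.1-4.$(lsb_release -sc)_all.deb
$ sudo dpkg -i percona-release_0.1-4.$(lsb_release -sc)_all.deb
$ sudo apt-get update
$ sudo apt-get install percona-server-server-5.7      # Percona Server for MySQL
$ sudo apt-get install percona-server-mongodb-34      # Percona Server for MongoDB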

After this, you can start the server as you would in a regular Linux environment.

TokuDB Hotbackup and Replication

Latest MySQL Performance Blog posts - April 14, 2017 - 11:10am

TokuDB Hotbackup is a solution that allows you to take backups on the fly. It works as a library that intercepts certain system calls and duplicates data written to already-copied parts of files, so that at the end of the backup process the copied files contain the same content as the original files. There are several blog posts describing how TokuDB Hotbackup works in detail:

Backups are often used to create replication slaves. For this purpose, we need to know the last executed GTID or binary log position, both for the slave and the master configurations.

You can obtain the corresponding information with SHOW MASTER/SLAVE STATUS. But if we request this information after the backup, the binlog position or executed GTID may already have changed, and the backed-up data won’t correspond to the master/slave state.

It’s possible to lock tables with FLUSH TABLES WITH READ LOCK, or to use smarter locks like LOCK TABLES/BINLOG FOR BACKUP. See this blog post for details: Introducing backup locks in Percona Server.

But the larger question is when we can take the corresponding locks. We could lock the binlog or some table before taking a backup, and release the lock after it’s done. But if the data being backed up is big enough, the backup itself can take time – and all that time the data is locked for changes.

There must be a more suitable solution. Let’s recall how TokuDB Hotbackup works: it uses a special library that intercepts some system calls, and it has an API. This API includes such commands as “start capturing” and “stop capturing”. “Capturing” means that if some part of a file has been copied to the backup location, and this part is then changed (by the MySQL server in our case), the changes are also applied to the backup location. The TokuDB Hotbackup plugin uses the API: it starts capturing, makes the copies, and stops capturing.

There is no need to lock the binlog during the copy process, because when the binlog is flushed, the changes are copied to the backup by the “capturing” mechanism. After everything has been copied, and with the capturing still running, it is a good time to execute LOCK BINLOG FOR BACKUP. After this statement is executed, the binlog is flushed, the flushed changes are captured and applied to the backup location, and any queries that could change the binlog position or executed GTID are blocked. After this, we can stop capturing, get the binlog position or the last executed GTID, and unlock the binlog.

This is how it’s implemented in TokuDB Hotbackup. After a backup is taken, there are “tokubackup_slave_info” and “tokubackup_binlog_info” files in the backup directory that contain the corresponding information about the slave and master (in human-readable format). You can use this information to start a new slave from either the master or the slave. MySQL 5.7 supports multisource replication, and “tokubackup_slave_info” contains information for all replication channels.
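
For example, after a backup you might inspect those files like this (the backup path is an assumption; the exact file contents depend on your topology):

$ cat /data/backups/last/tokubackup_binlog_info
$ cat /data/backups/last/tokubackup_slave_info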

There could be a happy ending here, but the devil is in the details. For example, there are several binary logging formats: RBR, SBR and MBR (for details, see https://dev.mysql.com/doc/refman/5.7/en/replication-formats.html). In the case of SBR or MBR, a binary log event can contain statements that produce temporary tables on the slave side, and the result of further statements can depend on the content of those temporary tables.

Usually, temporary tables are created in a separate directory outside the MySQL data directory, and so aren’t backed up. That is why, if we create a backup while temporary tables produced by binary log events exist, and then try to restore the backup, the temporary tables aren’t restored (as they were never backed up). If we then try to resume replication from the point saved during the backup, and past this point the binary log contains events that use the content of the non-restored temporary tables, the data will become inconsistent.

That is why the so-called “safe-slave” mode is implemented in TokuDB Hotbackup. The same mode is implemented in Percona XtraBackup, and the name was inherited from Percona XtraBackup. In this mode, along with the LOCK BINLOG FOR BACKUP statement execution, the slave SQL thread is stopped. It is then checked whether temporary tables produced by the slave SQL thread exist. If they do, the slave SQL thread is restarted until there are no temporary tables (or a certain timeout is reached).

For those purposes, we introduced the following new system variables:

  • --tokudb-backup-safe-slave – turns safe-slave mode on/off
  • --tokudb-backup-safe-slave-timeout – maximum amount of time in seconds to wait until the temporary tables disappear (a config sketch follows this list)
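
A minimal my.cnf sketch of these settings (the timeout value is an arbitrary example):

[mysqld]
tokudb_backup_safe_slave = ON
tokudb_backup_safe_slave_timeout = 60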

Note that for multisource replication, a simplified version of this feature is implemented. If there are several replication channels, and the SQL thread is started for some of them while stopped for others, TokuDB Hotbackup does not restore the per-channel state: it simply restarts the SQL threads for all channels, so after the backup the SQL threads for all channels will be started.

You can’t use this option with group replication.

This could be the happy ending, but… well, you know, it’s complicated. Apart from replication features in the MySQL server, there are a lot of engine-specific features. One of them is how frequently the recovery log is synchronized.

For TokuDB, there are two system variables you can use to tune recovery log synchronization frequency: tokudb_commit_sync and tokudb_fsync_log_period. By playing with these variables, you can establish some tradeoff between durability and performance. Here is a good blogpost about these parameters: High Insertion Rates into a TokuDB Table with Durable Transactions.

But let’s imagine we have a certain recovery log synchronization period, and the backup finished somewhere in the middle of this period. It’s OK if we use the backup just to restore some data, because the data is restored using recovery and rollback logs during the server’s start.

But if we also want to start a slave using the information stored in the “tokubackup_slave_info” and “tokubackup_binlog_info” files, there might be a situation where, after the recovery stage, the data in the database is behind the replication position stored during the backup. See this bug description for details: https://bugs.launchpad.net/percona-server/+bug/1533174. Now, while capturing is still active, TokuDB forcibly synchronizes the recovery log when the copy process is finished.

So this could be the happy ending but… Well, actually I hope this is a happy ending! At least we’ve implemented the general functionality for creating slaves using TokuDB Hotbackup. However, as “ the show must go on”, or “the spice must flow” etc., we are happy to discuss any concerns, ideas, proposals and bug reports.

Thanks to all who took part in development: George O. Lorch III, Sergey Glushchenko, Shahriyar Rzaev, Alexey Kopytov (as the author of “LOCK BINLOG FOR BACKUP” feature) and other people from Percona team and community.

TokuDB Troubleshooting: Q & A

Latest MySQL Performance Blog posts - April 13, 2017 - 10:08am

In this blog, we will provide answers to the Q & A for the TokuDB Troubleshooting webinar.

First, we want to thank everybody for attending the March 22, 2017 webinar. The recording and slides for the webinar are available here. Below is the list of your questions that we were unable to answer during the webinar:

Q: Is it possible to load specific tables or data from the backup?

A: Do you mean the backup created by TokuBackup? No, this is not possible. You have to restore the full backup to a temporary instance, then perform a logical dump/reload of the specific tables you want to restore.
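
As a hedged sketch of that procedure (paths, port and names are examples): start a throwaway server on a copy of the restored backup, dump the table you need, and load it into the target instance:

$ mysqld --no-defaults --datadir=/restore/backup_copy --port=3307 --socket=/tmp/restore.sock &
$ mysqldump --socket=/tmp/restore.sock mydb mytable > mytable.sql
$ mysql mydb < mytable.sql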

Q: Since it is not recoverable when corruption happens, we have a risk of losing data. Is incremental backup the only option?

A: TokuBackup currently does not support incremental backup options. You can only create a kind of incremental backup if you copy and store the binary logs on a separate server, and then apply them on top of a previous full backup.

Issue with monitoring service on MongoDB

Latest Forum Posts - April 13, 2017 - 8:54am
Hello,

I am new to MongoDB and am setting up the PMM client on a 3-node cluster for the first time. On the first node I am working on, I got the linux metrics monitoring to work with no issues. But I am having trouble getting the mongodb:metrics service into a Running state.

When I run pmm-admin list, it shows the service as not running. I have tried numerous commands to get the service to start. Each time I run one of the commands below, I see a return message stating "OK, started mongodb:metrics service for <servername>".

These are the commands I have run so far:

pmm-admin add mongodb:metrics

pmm-admin add mongodb:metrics --uri mongodb://dbuser:<password>@0.0.0.0:27017/admin --cluster rs0

pmm-admin add mongodb:metrics --uri mongodb://dbuser:<password>@localhost:27017/admin --cluster rs0

I verified I can log into the Mongo instance using the dbuser account, which has admin privileges.

I have also run the start command numerous times, but am seeing the same results.

Any help with this would be appreciated. Thanks!

Percona Live Featured Session with Wei Hu – AliSQL: Breakthrough for the Future

Latest MySQL Performance Blog posts - April 13, 2017 - 8:31am

Welcome to another post in the series of Percona Live featured session blogs! In these blogs, we’ll highlight some of the session speakers that will be at this year’s Percona Live conference. We’ll also discuss how these sessions can help you improve your database environment. Make sure to read to the end to get a special Percona Live 2017 registration bonus!

In this Percona Live featured session, we’ll meet Wei Hu, Staff Engineer at Alibaba. His session (along with co-presenter Dengcheng He, Senior Staff Engineer at Alibaba) is AliSQL: Breakthrough for the Future. AliSQL is a MySQL branch maintained by the Alibaba Database Team. AliSQL has made many improvements over the last year in its effort to become a high-performance, high-availability and low-maintenance database option.

I had a chance to speak with Wei about AliSQL:

Percona: How did you get into database technology? What do you love about it?

Wei: I worked on an RDBMS storage engine project in graduate school, where I spent years studying database theory, and experienced the charm of database systems.

Before joining the Alibaba Group, I worked for Netease. I developed another storage engine called TNT (Transactional/Non-Transactional) for MySQL 5.1 and 5.5. During this project, I had the opportunity to learn about and gain a deep understanding of the MySQL source code. Last year I joined the Alibaba group. Alibaba’s E-Commerce business has extreme RDBMS demands, and my work here is making AliSQL faster, safer and more efficient.

Percona: Your talk is called AliSQL: Breakthrough for the Future. What is AliSQL, and what workloads could benefit from it?

Wei: Last year, we joined Percona Live for the very first time and brought the world AliSQL. AliSQL is a fork of MySQL (based on the community version) tailored for Alibaba’s business characteristics and requirements, focused on extreme performance and data consistency. As many people know, AliSQL powers the OLTP system with the world’s largest throughput, as demonstrated during the Singles’ Day shopping festival. Compared to the community version, AliSQL can offer high throughput, high concurrency and low latency at the same time.

At last year’s Percona Live, we shared many of the improvements we had made, including Column Compression, Double Read Buffer, SQL Firewall and so on. This year we’re bringing the world a brand new AliSQL.

Firstly, we developed the new “Hot SKU” feature. We were not satisfied with AliSQL’s previous performance (5,000 single-key updates per second), so we developed a new group-update algorithm that improves throughput to 100,000 single-key updates per second. Panic buying is no longer an annoying problem in our e-commerce scenario.

Secondly, based on the InnoDB memcached plugin, AliSQL developed X-KV, a powerful new key-value interface. X-KV implements a new protocol with support for more operations and data types. Our customers have used X-KV as a memory cache, saving the use of hundreds of machines in the production environment.

In addition, based on AliSQL, we have developed X-Cluster. X-Cluster uses X-Paxos (Alibaba’s consensus library) to replicate data among instances, and supports high availability and high reliability. X-Cluster has better performance than Group Replication: our benchmarking shows that X-Cluster has five times the throughput of Group Replication (for MySQL 5.7.17) on our high-latency network. Furthermore, X-Cluster has many customization features for Alibaba’s production environment, such as leader election priority and LogType instances (low cost).

Percona: How does the latest version of AliSQL make DBAs’ work easier?

Wei: With the new “Hot SKU” feature, DBAs do not need to scale out instances for panic buying. With AliSQL X-KV, DBAs do not need to care about schema changes anymore. With AliSQL X-Cluster, DBAs don’t need to worry about data inconsistency problems. All the data transfer systems for AliSQL can use the X-Paxos SDK to communicate with X-Cluster; DBAs do not need to set the log position, as all of this is handled by X-Cluster itself.

Percona: What do you want attendees to take away from your session? Why should they attend?

Wei: In my session, I will share the internals of AliSQL Hot SKU, X-KV and X-Cluster. Other developers can gain insights and spark new ideas from the talk.

Percona: What are you most looking forward to at Percona Live 2017?

Wei: I am looking forward to chatting with MySQL developers from all over the world, and making more friends.

Register for Percona Live Data Performance Conference 2017, and see Wei and Dengcheng present AliSQL: Breakthrough for the Future. Use the code FeaturedTalk and receive $100 off the current registration price!

Percona Live Data Performance Conference 2017 is the premier open source event for the data performance ecosystem. It is the place to be for the open source community, as well as businesses that thrive in the MySQL, NoSQL, cloud, big data and Internet of Things (IoT) marketplaces. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Data Performance Conference will be April 24-27, 2017 at the Hyatt Regency Santa Clara and the Santa Clara Convention Center.

At Percona Live 2017: Many Technical MariaDB Server Sessions!

Latest MySQL Performance Blog posts - April 12, 2017 - 12:56pm

At Percona, we support MariaDB Server (in addition to MySQL, Percona Server for MySQL, MongoDB and Percona Server for MongoDB, of course!), and that is reflected in the good, high-quality technical content about MariaDB Server at the Percona Live Open Source Database Conference 2017 in Santa Clara.

MariaDB is a fork of MySQL, developed and maintained by some of the original creators of MySQL (most notably, Michael “Monty” Widenius). At Percona Live, learn about how MariaDB promises to be a drop-in replacement for MySQL – with better performance and more flexibility – in our MariaDB track. These sessions tackle subjects on analytics, architecture and design, security, operations, scalability and performance for MariaDB.

If you’re hungry for good MariaDB Server content, this is what you’ll probably want to attend:

Monday (tutorial day)

  • Come to the MyRocks Deep Dive tutorial by Yoshinori Matsunobu. Both Percona Server for MySQL and MariaDB Server plan to include this new storage engine in production releases, so you should attend this tutorial if you want to learn how to use it.

Tuesday

Wednesday

Thursday

So there is plenty of MariaDB Server-related content to fill you up while attending Percona Live. Use the code SeeMeSpeak to get 10% off your tickets. What are you waiting for? Register now!

ProxySQL Rules: Applying and Chaining the Rules

Latest MySQL Performance Blog posts - April 12, 2017 - 11:55am

In this post, I am going to show you how you can minimize the performance impact of ProxySQL rules by using some finesse.

Apply Test

In my previous post, we saw the effect of the rules on ProxySQL performance. As we also saw, the “apply” option does not help with 1,000 tables. Are we sure about this? Let’s consider: if we know 90% of our traffic won’t match any rules, it doesn’t matter whether we have 10 or 500 rules – ProxySQL has to check all of them. And this is going to have a serious effect on performance. How can we avoid that?

Let’s insert rule number ONE, which matches 90% of our queries, like this:

insert into mysql_query_rules (username,destination_hostgroup,active,retries,match_digest,apply) values('testuser_rw',600,1,3,'(from|into|update|into table) sbtest([1-9]\d{3,}|[1-9][0-9][1-9])\b',1);

This rule matches all queries where the table name is above sbtest100. But again, this logic can also be applied to “userids” or any other keys. We just have to know our application and our query distribution.
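
Note that rules inserted into the admin interface still have to be activated before they take effect; a minimal sketch, assuming the default ProxySQL admin credentials and port:

$ mysql -u admin -padmin -h 127.0.0.1 -P 6032 \
    -e "LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;"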

With this rule, the 90% of the queries have to check only one rule (the first one):

Now we have 101 rules, but the performance is almost the same as when we had only ten rules! As we can see, creating the rules based on our query distribution has a huge impact!

But what if we don’t know which queries are the busiest, or every query has the same amount of hits? Can we do anything? Yes, we can.

Chaining

In my previous post, I mentioned the “flagIN” and “flagOUT” options. With these options, we can chain the rules. But why is that good for us?

If we have 100 rules and 100 tables, even with applying, ProxySQL has to check 50 rules on average. But if we write rules like these:

insert into mysql_query_rules (flagin,flagout,username,active,retries,match_digest,apply) VALUES
(0,1000,'testuser_rw',1,3,'(from|into|update|into table) sbtest.\b',0),
(0,1100,'testuser_rw',1,3,'(from|into|update|into table) sbtest1.\b',0),
(0,1200,'testuser_rw',1,3,'(from|into|update|into table) sbtest2.\b',0),
(0,1300,'testuser_rw',1,3,'(from|into|update|into table) sbtest3.\b',0),
(0,1400,'testuser_rw',1,3,'(from|into|update|into table) sbtest4.\b',0),
(0,1500,'testuser_rw',1,3,'(from|into|update|into table) sbtest5.\b',0),
(0,1600,'testuser_rw',1,3,'(from|into|update|into table) sbtest6.\b',0),
(0,1700,'testuser_rw',1,3,'(from|into|update|into table) sbtest7.\b',0),
(0,1800,'testuser_rw',1,3,'(from|into|update|into table) sbtest8.\b',0),
(0,1900,'testuser_rw',1,3,'(from|into|update|into table) sbtest9.\b',0);
insert into mysql_query_rules (flagin,destination_hostgroup,active,match_digest,apply) VALUES
(1100,600,1,'(from|into|update|into table) sbtest11\b',1),
(1100,600,1,'(from|into|update|into table) sbtest12\b',1),
(1100,600,1,'(from|into|update|into table) sbtest13\b',1),
(1100,600,1,'(from|into|update|into table) sbtest14\b',1),
(1100,600,1,'(from|into|update|into table) sbtest15\b',1),
(1100,600,1,'(from|into|update|into table) sbtest16\b',1),
(1100,600,1,'(from|into|update|into table) sbtest17\b',1),
(1100,600,1,'(from|into|update|into table) sbtest18\b',1),
(1100,600,1,'(from|into|update|into table) sbtest19\b',1);
...

We are going to have more than 100 rules, but we first match on the first digit after the “sbtest” prefix, and only then move on to the rules for that group. With this approach, ProxySQL has to check only 15 rules on average.

Let’s see the results:

As we can see, even with more rules, chaining is way faster than without chaining.

Tips

Hits

ProxySQL keeps statistics about a rule’s hits. When you add a rule you can see how many queries it applied to:

select * from stats_mysql_query_rules;
+---------+------+
| rule_id | hits |
+---------+------+
| 2       | 6860 |
| 3       | 6440 |
| 4       | 6880 |
| 5       | 6610 |
| 6       | 6850 |
| 7       | 7130 |
| 8       | 6620 |
| 9       | 7300 |
| 10      | 6750 |
| 11      | 7050 |
| 12      | 7280 |
| 13      | 6780 |
| 14      | 6670 |
...

Query_Processor_time_nsec

ProxySQL does not record how much time it spends on a rule (not yet, anyway: https://github.com/sysown/proxysql/issues/966), but it has a global stat:

select * from stats_mysql_global where Variable_name="Query_Processor_time_nsec";
+---------------------------+----------------+
| Variable_Name             | Variable_Value |
+---------------------------+----------------+
| Query_Processor_time_nsec | 3184114671740  |
+---------------------------+----------------+

You can monitor this statistic, and if you see a huge increase after you added a rule, you might want to review it again.

Conclusion

ProxySQL can handle many rules, and of course they have some costs. But if you design your rules based on your workload and your query distribution, you can minimize this cost a lot.
