Restoring backup has no tables

Latest Forum Posts - May 13, 2016 - 11:11am
I am trying to get xtrabackup working on a very old version of MySQL. This system is 5.1 (all InnoDB); yes, the plan is to upgrade it, but that is a month out. I am just trying to get better backups than the existing once-a-day mysqldump.

I have xtrabackup 2.0 installed. I run the backup, prepare the backup, tar it up, and transfer it to S3. I then unpack it on an instance that has MySQL 5.1 and xtrabackup 2.0.

After running the restore procedure from the 2.0 docs (using rsync), I start MySQL and the databases are there, but no tables. The data is there: if I grep for some keywords in ibdata1, I can find them.
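
For reference, the 2.0-era documented restore flow is roughly the following (paths are placeholders; the prepare step was already done before the tar/S3 transfer in my case):

innobackupex --apply-log /path/to/backup        # prepare the backup
rsync -avrP /path/to/backup/ /var/lib/mysql/    # copy back into the stopped server's datadir
chown -R mysql:mysql /var/lib/mysql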

Any suggestions? I am rather baffled by this.

Benchmark MongoDB with sysbench

Latest MySQL Performance Blog posts - May 13, 2016 - 9:17am

In this blog post, we’ll discuss how to benchmark MongoDB with sysbench.

In an earlier post, I mentioned our use of sysbench-mongodb (via this fork) to run benchmarks of MongoDB servers. I now want to share our work extending sysbench to make it work with MongoDB.

If you’re not familiar with sysbench, it’s a great project developed by Alexey Kopytov that lets you run different types of benchmarks (referred to as “tests” by the tool), including database benchmarks. The database tests are implemented in Lua scripts, which means you can customize them as needed (or even write new ones from scratch) – something useful for simulating specific workloads.

All of the database tests in sysbench assume an SQL-based database, so instead of trying to shoehorn MongoDB tests into this framework I modified the connect/disconnect functions to handle MongoDB, and then implemented new functions specific to this database.

You can find the work (which is still in progress but usable, and in fact currently used by us in benchmarks) on the dev-mongodb-support-1.0 branch of our sysbench fork.

To use it, you just need to specify the --mongo-url argument (others too, as needed, but this is the one that must be present for sysbench to detect a MongoDB test is requested), and then provide the path to the Lua script you want to run. The following is an example:

sysbench --mongo-write-concern=1 --mongo-url="mongodb://localhost" --mongo-database-name=sbtest \
    --test=sysbench/sysbench/tests/mongodb/oltp.lua --oltp_table_size=60000000 --oltp_tables_count=16 \
    --num-threads=512 --rand-type=pareto --report-interval=10 --max-requests=0 --max-time=600 \
    --oltp-point-selects=10 --oltp-simple-ranges=1 --oltp-sum-ranges=1 --oltp-order-ranges=1 \
    --oltp-distinct-ranges=1 --oltp-index-updates=1 --oltp-non-index-updates=1 --oltp-inserts=1 run

To build this branch, you’ll first need to build and install (or otherwise obtain) the mongo-c-driver project, as that is what we use to connect to MongoDB. Once that’s done, building is just a matter of running the following commands from the repo’s root:

./autogen.sh
./configure
make
sudo make install   # optionally

The changes should not affect the other database tests in sysbench, though I have only verified that the MySQL ones continue to work.

Right now, the workload from sysbench-mongodb is implemented in Lua scripts (oltp.lua), and work is in progress to allow freeform operations to be created with new Lua scripts (by providing functions that take JSON as the argument). As an alternative, you may want to check out this much-less-tested (and currently unstable) branch based on luamongo. It already supports the creation of arbitrary workloads in Lua. In this case, you also need to build luamongo, which is included.

With either branch, you can add new tests by implementing new Lua scripts (though the dev-mongodb-support-1.0 branch still needs a few functions implemented on the C side to support arbitrary operations from the Lua side).

We think there are still some types of operations needed to improve sysbench’s usefulness for MongoDB, such as queries involving arrays, union, the $in operator, geospatial operators, and in-place updates.

We hope you find this useful, and we welcome suggestions and bug reports to improve it.

Happy benchmarking!

ProxySQL versus MaxScale for OLTP RO workloads

Latest MySQL Performance Blog posts - May 12, 2016 - 10:52am

In this blog post, we’ll discuss ProxySQL versus MaxScale for OLTP RO workloads.

Continuing my series of READ-ONLY benchmarks (you can find the other posts here: https://www.percona.com/blog/2016/04/07/mysql-5-7-sysbench-oltp-read-results-really-faster/ and https://www.percona.com/blog/2016/03/28/mysql-5-7-primary-key-lookup-results-is-it-really-faster), in this post I want to see how much overhead a proxy adds.

In my opinion, there are only two solid proxy software options for MySQL at the moment: ProxySQL and MaxScale. In the past, there was also MySQL Proxy, but it is pretty much dead by now. Its replacement, MySQL Router, is still in the very early stages and seriously lacks the features needed to compete with ProxySQL and MaxScale. This will most likely change in the future; when MySQL Router adds more features, I will reevaluate it then!

To test the proxies, I will start with a very simple setup to gauge basic performance characteristics. I will use a sysbench client and proxy running on the same box. Sysbench connects to the proxy via local socket (for minimal network and TCP overhead), and the proxy is connected to a remote MySQL via a 10Gb network. This way, the proxy and sysbench share the same server resources.

Other parameters:

  • CPU: 56 logical CPU threads, Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz
  • sysbench: ten tables x 10 mln rows, Pareto distribution
  • OS: Ubuntu 15.10 (Wily Werewolf)
  • MySQL 5.7
  • MaxScale version 1.4.1
  • ProxySQL version 1.2.0b

You can find more details about benchmarks, scripts and configs here: https://github.com/Percona-Lab/benchmark-results/tree/201603-mysql55-56-57-RO/remote-OLTP-proxy-may.

An important parameter to consider is how much of the CPU resources you allocate to the proxy. Both ProxySQL and MaxScale allow you to configure how many threads they can use to process user requests and route queries. I’ve found that 16 threads for ProxySQL and 8 threads for MaxScale are optimal (I will also show MaxScale with 16 threads in this post). Both proxies also allow you to set up simple load-balancing configurations, or to work in read-write splitting mode. In this case, I will use simple load balancing, since there are no read-write splitting requirements in a read-only workload.
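
For reference, the thread counts are set in each proxy's own configuration; a minimal sketch (file locations and values here are illustrative, the full configs for this benchmark are in the repository linked above):

# MaxScale (/etc/maxscale.cnf)
[maxscale]
threads=8

# ProxySQL (/etc/proxysql.cnf)
mysql_variables=
{
    threads=16
}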

ProxySQL

First result: How does ProxySQL perform compared to vanilla MySQL 5.7?

As we can see, there is a noticeable drop in performance with ProxySQL. This is expected, as ProxySQL does extra work to process queries. What is good though is that ProxySQL scales with increasing user connections.

One of the tricks that ProxySQL has is a “fast-forward” mode, which minimizes overhead from processing (but as a drawback, you can’t use many of the other features). Out of curiosity, let’s see how the “fast-forward” mode performs:

MaxScale

Now let’s see what happens with MaxScale. Before showing the next chart, let me note that it contains “error bars,” which are presented as vertical bars. Basically, an “error bar” shows the standard deviation: the longer the bar, the more variation was observed during the experiment. We want to see less variance, as it implies more stable performance.

Here are results for MaxScale versus ProxySQL:

We can see that at lower thread counts both proxies perform similarly, but MaxScale has a harder time scaling over 100 threads. On average, MaxScale’s throughput is worse, and there is a lot of variation. In general, we can see that MaxScale demands more CPU resources and uses more CPU per request (compared to ProxySQL). This holds true if we run MaxScale with 16 threads (instead of 8):

MaxScale with 16 threads does not handle the workload well, and there is a lot of variation along with some visible scalability issues.

To summarize, here is a chart with relative performance (vanilla MySQL 5.7 is shown as 1):

While this chart does show that MaxScale has less overhead from 1-6 threads, it doesn’t scale as user load increases.

Quick start MySQL testing using Docker (on a Mac!)

Latest MySQL Performance Blog posts - May 11, 2016 - 9:38am

In this post, we’ll discuss how you can quick start MySQL testing using Docker, specifically in a Mac environment.

Like a lot of people, I’m hearing a lot about Docker and it’s got me curious. The Docker ecosystem seems to be moving quickly, however, and simple “getting started” or “how-to” type articles that are easy to find for well-established technologies seem to be out-of-date or non-existent for Docker. I’ve been playing with Docker on Mac for a bit, but it is definitely a second-class citizen in the Docker world. However, I saw Giuseppe’s blog on the new Docker beta for Mac and decided to try it for myself. These steps work for the beta version on a Mac (and probably Windows), but they should work with Linux as well (using the GA release, currently Docker 1.11.1).

The new Docker beta for Mac requires that you register for the beta program, and receive a download code from Docker. I got mine in about a day, but I would assume it won’t be long before the full version is released.

Once installed, I needed to setup some Docker containers for common MySQL versions so that I can easily have some sandboxes. The method I used is below:

jayj@~ [510]$ docker network create test
90005b3ffa9fef1f817ee4965e794a567404c9a8d5bf07320514e7d848d59ff9
jayj@~ [511]$ docker run --name=mysql57 --net=test -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -d mysql/mysql-server:5.7
6c80fa89610dbd5418ba474ad7d5451cd061f80a8a72ff2e718341827a08144b
jayj@~ [512]$ docker run -it --rm --net=test -e MYSQL_HOST=mysql57 mysql/shell init
Creating a Classic Session to root@mysql57:3306
Enter password:
No default schema selected.
enableXProtocol: Installing plugin mysqlx...
enableXProtocol: done

A quick summary of what I did above:

  1. I created a network called “test” for my containers to share; essentially this is a dedicated private network between containers. I like this because multiple containers can listen on the same port, and I don’t have to fight over ports on my host OS.
  2. I started a MySQL 5.7 image from Oracle’s official MySQL Docker container bound to that test network.
  3. I used the MySQL/shell image (also from Oracle) to initialize the mysqlx plugin on my 5.7 server. Notice I didn’t enter a password because I created the server without one (insecure, but it’s a sandbox).

The shell init uses a temporary container that is removed (--rm) after the run, so you don’t pollute your docker ps -a output.

So, now I want to be able to use the standard MySQL command line and/or the new MySQL shell to access this container.  To  make this really clean, I added some bash aliases:

alias mysqlsh='docker run -it --rm --net=test mysql/shell'
alias mysql='docker run -it --rm -e MYSQL_ALLOW_EMPTY_PASSWORD=yes --net=test --entrypoint="mysql" mysql/mysql-server:5.7'

With these in effect, I can call them directly and pass normal command line options to connect to my mysql57 image just as if I was using a native MySQL CLI binary.

Using the MySQL CLI from the 5.7 image:

jayj@~ [524]$ mysql -h mysql57
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.7.12 MySQL Community Server (GPL)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show schemas;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.01 sec)

Using the MySQL shell:

jayj@~ [527]$ mysqlsh -h mysql57 -u root --session-type=node
Creating a Node Session to root@mysql57:33060
Enter password:
No default schema selected.
Welcome to MySQL Shell 1.0.3 Development Preview

Copyright (c) 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type '\help', '\h' or '\?' for help.

Currently in JavaScript mode. Use \sql to switch to SQL mode and execute queries.
mysql-js> \sql
Switching to SQL mode... Commands end with ;
mysql-sql> show schemas;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.00 sec)
mysql-sql>

Now if I want to check something on MySQL 5.5, I can just do this:

jayj@~ [530]$ docker run --name=mysql55 --net=test -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -d mysql/mysql-server:5.5
Unable to find image 'mysql/mysql-server:5.5' locally
5.5: Pulling from mysql/mysql-server
a3ed95caeb02: Already exists
ffe36b360c6d: Already exists
646f220a8b5d: Pull complete
ed65e4fea7ed: Pull complete
d34b408b18dd: Pull complete
Digest: sha256:12f0b7025d1dc0e7b40fc6c2172106cdf73b8832f2f910ad36d65228d9e4c433
Status: Downloaded newer image for mysql/mysql-server:5.5
6691dd9d42c73f53baf2968bcca92b7f4d26f54bb01d967be475193305affd4f
jayj@~ [531]$ mysql -h mysql55
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.5.49 MySQL Community Server (GPL)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show schemas;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
3 rows in set (0.00 sec)

or, Percona Server:

jayj@~ [534]$ docker run --name=ps57 --net=test -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -d percona/percona-server:5.7
Unable to find image 'percona/percona-server:5.7' locally
5.7: Pulling from percona/percona-server
a3ed95caeb02: Pull complete
a07226856d92: Pull complete
eee62d87a612: Pull complete
4c6755120a98: Pull complete
10eab0da5972: Pull complete
d5159a6502a4: Pull complete
e595a1a01d00: Pull complete
Digest: sha256:d57f0ce736f5403b1714ff8d1d6b91d5a7ee7271f30222c2bc2c5cad4b4e6950
Status: Downloaded newer image for percona/percona-server:5.7
9db503852747bc1603ab59455124663e8cedf708ac6d992cff9b43e2fbebd167
jayj@~ [537]$ mysql -h ps57
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.10-3 Percona Server (GPL), Release 3, Revision 63dafaf

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

So all this is nice – once the images are cached locally, spinning new containers up and down is painless and fast. All this sandbox work is cleanly separated from my workstation OS. There are probably other things I’d want to be able to do with this setup that I haven’t figured out yet (e.g., loading data files, running code to connect to these containers, etc.) – but I’ll figure those out in the future.
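
For example, loading data files can probably be handled the way the official images document it: mount a directory of .sql dumps into the image's init directory when creating the container (untested in this setup; the directory name comes from the image documentation):

docker run --name=mysql57data --net=test -e MYSQL_ALLOW_EMPTY_PASSWORD=yes \
    -v $(pwd)/dumps:/docker-entrypoint-initdb.d -d mysql/mysql-server:5.7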

threadstats memory usage

Latest Forum Posts - May 11, 2016 - 8:57am
Hi,

When using threadstats we are noticing memory usage increasing over time. I've looked briefly into the code and could not find a place where thread stats were purged if the thread was purged (we are not using thread pooling) - I mainly looked through hash table usages and in sql_connect.cc. My questions are:
1) When a client connection is closed, does the corresponding thread get purged?
2) Does the hash table associated with thread stats maintain the same size as the number of current threads or does it keep state of every thread that has existed since startup?

Thanks
Jeez

XtraBackup 2.4.2 - MySQL 5.7.10 --rsync Error Bug!

Latest Forum Posts - May 11, 2016 - 12:19am
CMD:innobackupex --parallel=3 --rsync --tmpdir=/tmp --use-memory=1G --no-timestamp --defaults-file=/usr/local/webserver/mysql/conf/my.cnf --user=root --password=dandanMIJIAN --socket=/tmp/mysql/inside_system.sock --defaults-group=mysqld1 /opt/dbbackup/mysql/inside_system/test


160511 15:14:15 innobackupex: Starting the backup operation

IMPORTANT: Please check that the backup run completes successfully.
At the end of a successful backup run innobackupex
prints "completed OK!".

Can't locate Digest/MD5.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at - line 693.
BEGIN failed--compilation aborted at - line 693.
160511 15:14:15 Connecting to MySQL server host: localhost, user: root, password: set, port: 0, socket: /tmp/mysql/inside_system.sock
Using server version 5.7.10
innobackupex version 2.4.2 based on MySQL server 5.7.11 Linux (x86_64) (revision id: 8e86a84)
xtrabackup: uses posix_fadvise().
xtrabackup: cd to /opt/dbroot/mysql/inside_system/
xtrabackup: open files limit requested 0, set to 65535
xtrabackup: using the following InnoDB configuration:
xtrabackup: innodb_data_home_dir = .
xtrabackup: innodb_data_file_path = ibdata1:12M;ibdata2:50M:autoextend
xtrabackup: innodb_log_group_home_dir = .
xtrabackup: innodb_log_files_in_group = 2
xtrabackup: innodb_log_file_size = 268435456
InnoDB: Number of pools: 1
160511 15:14:15 >> log scanned up to (2514613)
xtrabackup: Generating a list of tablespaces
InnoDB: Allocated tablespace ID 14 for mysql/innodb_index_stats, old maximum was 0
xtrabackup: Starting 3 threads for parallel data files transfer
160511 15:14:16 [01] Copying ./ibdata1 to /opt/dbbackup/mysql/inside_system/test/ibdata1
160511 15:14:16 [02] Copying ./ibdata2 to /opt/dbbackup/mysql/inside_system/test/ibdata2
160511 15:14:16 [03] Copying ./mysql/innodb_index_stats.ibd to /opt/dbbackup/mysql/inside_system/test/mysql/innodb_index_stats.ibd
160511 15:14:16 [03] ...done
160511 15:14:16 [03] Copying ./mysql/server_cost.ibd to /opt/dbbackup/mysql/inside_system/test/mysql/server_cost.ibd
160511 15:14:16 [03] ...done
160511 15:14:16 [03] Copying ./mysql/plugin.ibd to /opt/dbbackup/mysql/inside_system/test/mysql/plugin.ibd
160511 15:14:16 [03] ...done
160511 15:14:16 [03] Copying ./mysql/slave_worker_info.ibd to /opt/dbbackup/mysql/inside_system/test/mysql/slave_worker_info.ibd
160511 15:14:16 [03] ...done
160511 15:14:16 [03] Copying ./mysql/time_zone_leap_second.ibd to /opt/dbbackup/mysql/inside_system/test/mysql/time_zone_leap_second.ibd
160511 15:14:16 [03] ...done
160511 15:14:16 [03] Copying ./mysql/time_zone_transition_type.ibd to /opt/dbbackup/mysql/inside_system/test/mysql/time_zone_transition_type.ibd
160511 15:14:16 [03] ...done
160511 15:14:16 [03] Copying ./mysql/slave_master_info.ibd to /opt/dbbackup/mysql/inside_system/test/mysql/slave_master_info.ibd
160511 15:14:16 [03] ...done
160511 15:14:16 [03] Copying ./mysql/innodb_table_stats.ibd to /opt/dbbackup/mysql/inside_system/test/mysql/innodb_table_stats.ibd
160511 15:14:16 [03] ...done
160511 15:14:16 [03] Copying ./mysql/time_zone.ibd to /opt/dbbackup/mysql/inside_system/test/mysql/time_zone.ibd
160511 15:14:16 [03] ...done
160511 15:14:16 [03] Copying ./mysql/help_category.ibd to /opt/dbbackup/mysql/inside_system/test/mysql/help_category.ibd
160511 15:14:16 [03] ...done
160511 15:14:16 [01] ...done
160511 15:14:16 [03] Copying ./mysql/time_zone_name.ibd to /opt/dbbackup/mysql/inside_system/test/mysql/time_zone_name.ibd
160511 15:14:16 [01] Copying ./mysql/time_zone_transition.ibd to /opt/dbbackup/mysql/inside_system/test/mysql/time_zone_transition.ibd
160511 15:14:16 [03] ...done
160511 15:14:16 [01] ...done
160511 15:14:16 [03] Copying ./mysql/help_relation.ibd to /opt/dbbackup/mysql/inside_system/test/mysql/help_relation.ibd
160511 15:14:16 [03] ...done
160511 15:14:16 [01] Copying ./mysql/help_keyword.ibd to /opt/dbbackup/mysql/inside_system/test/mysql/help_keyword.ibd
160511 15:14:16 [01] ...done
160511 15:14:16 [01] Copying ./mysql/help_topic.ibd to /opt/dbbackup/mysql/inside_system/test/mysql/help_topic.ibd
160511 15:14:16 [03] Copying ./mysql/gtid_executed.ibd to /opt/dbbackup/mysql/inside_system/test/mysql/gtid_executed.ibd
160511 15:14:16 [03] ...done
160511 15:14:16 [03] Copying ./mysql/servers.ibd to /opt/dbbackup/mysql/inside_system/test/mysql/servers.ibd
160511 15:14:16 [03] ...done
160511 15:14:16 [03] Copying ./mysql/engine_cost.ibd to /opt/dbbackup/mysql/inside_system/test/mysql/engine_cost.ibd
160511 15:14:16 [03] ...done
160511 15:14:16 >> log scanned up to (2514613)
160511 15:14:16 [03] Copying ./mysql/slave_relay_log_info.ibd to /opt/dbbackup/mysql/inside_system/test/mysql/slave_relay_log_info.ibd
160511 15:14:16 [03] ...done
160511 15:14:16 [03] Copying ./crm_wei006_com/5k_user_smtp.ibd to /opt/dbbackup/mysql/inside_system/test/crm_wei006_com/5k_user_smtp.ibd
160511 15:14:16 [03] ...done
160511 15:14:17 [03] Copying ./sys/sys_config.ibd to /opt/dbbackup/mysql/inside_system/test/sys/sys_config.ibd
160511 15:14:17 [03] ...done
160511 15:14:17 [01] ...done
160511 15:14:17 [03] Copying ./fx_wei9000_com/ims_bm_top_reply.ibd to /opt/dbbackup/mysql/inside_system/test/fx_wei9000_com/ims_bm_top_reply.ibd
160511 15:14:17 [03] ...done
160511 15:14:17 [01] Copying ./fx_wei9000_com/ims_ewei_shop_designer_menu.ibd to /opt/dbbackup/mysql/inside_system/test/fx_wei9000_com/ims_ewei_shop_designer_menu.ibd
160511 15:14:17 [01] ...done
160511 15:14:17 [02] ...done
160511 15:14:17 >> log scanned up to (2514613)
160511 15:14:18 Starting prep copy of non-InnoDB tables and files
160511 15:14:18 Starting rsync as: rsync -t . --files-from=/tmp/xtrabackup_rsyncfiles_pass1 /opt/dbbackup/mysql/inside_system/test
160511 15:14:18 rsync finished successfully.
160511 15:14:18 Finished a prep copy of non-InnoDB tables and files
160511 15:14:18 Executing FLUSH NO_WRITE_TO_BINLOG TABLES...
160511 15:14:18 Executing FLUSH TABLES WITH READ LOCK...
160511 15:14:18 Starting to backup non-InnoDB tables and files
160511 15:14:18 Starting rsync as: rsync -t . --files-from=/tmp/xtrabackup_rsyncfiles_pass2 /opt/dbbackup/mysql/inside_system/test
160511 15:14:18 Error: rsync failed with error code 1

innobackupex --copy-dir does not restore InnoDB tables correctly.

Latest Forum Posts - May 10, 2016 - 9:24pm
After using innobackupex --copy-dir and then correcting permissions for /var/lib/mysql, I restart the MySQL server. I can see the MyISAM and MEMORY tables and their data has been restored properly, but the data for InnoDB tables is incorrect. MyISAM tables show up in SHOW TABLES and SHOW TABLE STATUS, but InnoDB tables only show up in SHOW TABLES. When I try to select data from an InnoDB table, it says "Table does not exist". What do I do?

Query Rewrite plugin can harm performance

Latest MySQL Performance Blog posts - May 10, 2016 - 10:53am

In this blog post, we’ll discuss how the Query Rewrite plugin can harm performance.

MySQL 5.7 comes with the Query Rewrite plugin, which allows you to modify queries coming into the server. (You can view the details here: https://dev.mysql.com/doc/refman/5.7/en/rewriter-query-rewrite-plugin.html.)

It is based on the audit plugin API, and unfortunately it suffers from serious scalability issues (which seems to be the case for all API-based audit plugins).

I want to share the results for sysbench OLTP RO with and without the query rewrite plugin — but with one very simple rewrite rule, which doesn’t affect any queries. This is the rule from the documentation:

INSERT INTO query_rewrite.rewrite_rules (pattern, replacement)
    VALUES('SELECT ?', 'SELECT ? + 1');
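
For context, loading such a rule normally involves installing the plugin's schema and flushing the rules. A minimal sketch of the standard steps from the MySQL manual (not necessarily the exact benchmark setup; the script path varies by install):

mysql -u root -p < install_rewriter.sql      # ships in the MySQL share directory
mysql -u root -p -e "INSERT INTO query_rewrite.rewrite_rules (pattern, replacement)
                     VALUES ('SELECT ?', 'SELECT ? + 1');
                     CALL query_rewrite.flush_rewrite_rules();"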

Here are the results for both cases:

As you can see, the server with the Query Rewrite plugin can’t scale after 100 threads.

When we look at the PMP profile, it shows the following:

170 __lll_lock_wait,__GI___pthread_mutex_lock,native_mutex_lock,my_mutex_lock,inline_mysql_mutex_lock,plugin_unlock_list,mysql_audit_release,handle_connection,pfs_spawn_thread,start_thread,clone
164 __lll_lock_wait,__GI___pthread_mutex_lock,native_mutex_lock,my_mutex_lock,inline_mysql_mutex_lock,plugin_foreach_with_mask,mysql_audit_acquire_plugins,mysql_audit_notify,invoke_pre_parse_rewrite_plugins,mysql_parse,dispatch_command,do_command,handle_connection,pfs_spawn_thread,start_thread,clone
 77 __lll_lock_wait,__GI___pthread_mutex_lock,native_mutex_lock,my_mutex_lock,inline_mysql_mutex_lock,plugin_lock,acquire_plugins,plugin_foreach_with_mask,mysql_audit_acquire_plugins,mysql_audit_notify,invoke_pre_parse_rewrite_plugins,mysql_parse,dispatch_command,do_command,handle_connection,pfs_spawn_thread,start_thread,clone
 12 __lll_unlock_wake,__pthread_mutex_unlock_usercnt,__GI___pthread_mutex_unlock,native_mutex_unlock,my_mutex_unlock,inline_mysql_mutex_unlock,plugin_unlock_list,mysql_audit_release,handle_connection,pfs_spawn_thread,start_thread,clone
 10 __lll_unlock_wake,__pthread_mutex_unlock_usercnt,__GI___pthread_mutex_unlock,native_mutex_unlock,my_mutex_unlock,inline_mysql_mutex_unlock,plugin_lock,acquire_plugins,plugin_foreach_with_mask,mysql_audit_acquire_plugins,mysql_audit_notify,invoke_pre_parse_rewrite_plugins,mysql_parse,dispatch_command,do_command,handle_connection,pfs_spawn_thread,start_thread,clone
 10 __lll_unlock_wake,__pthread_mutex_unlock_usercnt,__GI___pthread_mutex_unlock,native_mutex_unlock,my_mutex_unlock,inline_mysql_mutex_unlock,plugin_foreach_with_mask,mysql_audit_acquire_plugins,mysql_audit_notify,invoke_pre_parse_rewrite_plugins,mysql_parse,dispatch_command,do_command,handle_connection,pfs_spawn_thread,start_thread,clone
  7 __lll_lock_wait,__GI___pthread_mutex_lock,native_mutex_lock,my_mutex_lock,inline_mysql_mutex_lock,Table_cache::lock,open_table,open_and_process_table,open_tables,open_tables_for_query,execute_sqlcom_select,mysql_execute_command,mysql_parse,dispatch_command,do_command,handle_connection,pfs_spawn_thread,start_thread,clone
  6 __GI___pthread_mutex_lock,native_mutex_lock,my_mutex_lock,inline_mysql_mutex_lock,plugin_unlock_list,mysql_audit_release,handle_connection,pfs_spawn_thread,start_thread,clone
  6 __GI___pthread_mutex_lock,native_mutex_lock,my_mutex_lock,inline_mysql_mutex_lock,plugin_foreach_with_mask,mysql_audit_acquire_plugins,mysql_audit_notify,invoke_pre_parse_rewrite_plugins,mysql_parse,dispatch_command,do_command,handle_connection,pfs_spawn_thread,start_thread,clone

So clearly it’s related to a mutex acquired in the audit plugin API code. I filed a bug (https://bugs.mysql.com/bug.php?id=81298), but it’s discouraging to see that while the InnoDB code is constantly being improved for better scaling, other parts of the server can still suffer from global mutexes.

Percona Server 5.7 parallel doublewrite

Latest MySQL Performance Blog posts - May 9, 2016 - 1:35pm

In this blog post, we’ll discuss the ins and outs of Percona Server 5.7 parallel doublewrite.

After implementing parallel LRU flushing as described in the previous post, we went back to benchmarking. At first, we tested with the doublewrite buffer turned off. We wanted to isolate the effect of the parallel LRU flusher, and the results validated the design. Then we turned the doublewrite buffer back on and saw very little, if any, gain from the parallel LRU flusher. What happened? Let’s take a look at the data:

We see that the doublewrite buffer mutex is gone as expected and that the top waiters are the rseg mutexes and the index lock (shouldn’t this be fixed in 5.7?). Then we checked PMP:

2678 nanosleep(libpthread.so.0),...,buf_LRU_get_free_block(buf0lru.cc:1435),...
 867 pthread_cond_wait,...,log_write_up_to(log0log.cc:1293),...
 396 pthread_cond_wait,...,mtr_t::s_lock(sync0rw.ic:433),btr_cur_search_to_nth_level(btr0cur.cc:1022),...
 337 libaio::??(libaio.so.1),LinuxAIOHandler::collect(os0file.cc:2325),...
 240 poll(libc.so.6),...,Protocol_classic::read_packet(protocol_classic.cc:810),...

Again we see that PFS is not telling the whole story, this time due to a missing annotation in XtraDB. Whereas the PFS results might lead us to leave the flushing analysis and focus on the rseg/undo/purge or check the index lock, PMP clearly shows that a lack of free pages is the biggest source of waits. Turning on the doublewrite buffer makes LRU flushing inadequate again. This data, however, doesn’t tell us why that is.

To see how enabling the doublewrite buffer makes LRU flushing perform worse, we collect PFS and PMP data only for the server flusher (cleaner coordinator, cleaner worker, and LRU flusher) threads and I/O completion threads:

If we zoom in from the whole server to the flushers only, the doublewrite mutex is back. Since we removed its contention for the single-page flushes, it must be the batch doublewrite buffer usage by the flusher threads that causes it to reappear. The doublewrite buffer has a single area for 120 pages that is shared and filled by flusher threads. The act of adding a page to the batch is protected by the doublewrite mutex, which serialises the adds and results in the following picture:

By now we should be wary of reviewing PFS data without checking its results against PMP. Here it is:

139 libaio::??(libaio.so.1),LinuxAIOHandler::collect(os0file.cc:2448),LinuxAIOHandler::poll(os0file.cc:2594),...
 56 pthread_cond_wait,...,os_event_wait_low(os0event.cc:534),buf_dblwr_add_to_batch(buf0dblwr.cc:1111),...,buf_flush_LRU_list_batch(buf0flu.cc:1555),...,buf_lru_manager(buf0flu.cc:2334),...
 25 pthread_cond_wait,...,os_event_wait_low(os0event.cc:534),buf_flush_page_cleaner_worker(buf0flu.cc:3482),...
 21 pthread_cond_wait,...,PolicyMutex<TTASEventMutex<GenericPolicy>(ut0mutex.ic:89),buf_page_io_complete(buf0buf.cc:5966),fil_aio_wait(fil0fil.cc:5754),io_handler_thread(srv0start.cc:330),...
  8 pthread_cond_timedwait,...,buf_flush_page_cleaner_coordinator(buf0flu.cc:2726),...

As with the single-page flush doublewrite contention and the wait to get a free page in the previous posts, here we have an unannotated-for-Performance Schema doublewrite OS event wait (same bug 80979):

if (buf_dblwr->batch_running) {
        /* This not nearly as bad as it looks. There is only
        page_cleaner thread which does background flushing in batches
        therefore it is unlikely to be a contention point. The only
        exception is when a user thread is forced to do a flush batch
        because of a sync checkpoint. */
        int64_t sig_count = os_event_reset(buf_dblwr->b_event);

        mutex_exit(&buf_dblwr->mutex);

        os_event_wait_low(buf_dblwr->b_event, sig_count);
        goto try_again;
}

This is as bad as it looks (the comment is outdated). A running doublewrite flush blocks any doublewrite page add attempts from all the other flusher threads for the duration of the flush (up to 120 data pages written twice to storage):

The issue also occurs with the MySQL 5.7 multi-threaded flusher, but becomes more acute with the Percona Server 5.7 multi-threaded LRU flusher. There is no inherent reason why all the parallel flusher threads must share the single doublewrite buffer. Each thread can have its own private buffer, and doing so allows us to add to the buffers and flush them independently. This means a lot of synchronisation simply disappears. Adding pages to parallel buffers is fully asynchronous:

And so is flushing them:

This behavior is what we shipped in the 5.7.11-4 release, and the performance results were shown in a previous post. To see how the private doublewrite buffer affects flusher threads, let’s look at isolated data for those threads again.

Performance Schema:

It shows the redo log mutex as the current top contention source from the PFS point of view, which is not caused directly by flushing.

PMP data looks better too:

112 libaio::??(libaio.so.1),LinuxAIOHandler::collect(os0file.cc:2455),...,io_handler_thread(srv0start.cc:330),...
 54 pthread_cond_wait,...,buf_dblwr_flush_buffered_writes(buf0dblwr.cc:1287),...,buf_flush_LRU_list(buf0flu.cc:2341),buf_lru_manager(buf0flu.cc:2341),...
 35 pthread_cond_wait,...,PolicyMutex<TTASEventMutex<GenericPolicy>(ut0mutex.ic:89),buf_page_io_complete(buf0buf.cc:5986),...,io_handler_thread(srv0start.cc:330),...
 27 pthread_cond_wait,...,buf_flush_page_cleaner_worker(buf0flu.cc:3489),...
 10 pthread_cond_wait,...,enter(ib0mutex.h:845),buf_LRU_block_free_non_file_page(ib0mutex.h:845),buf_LRU_block_free_hashed_page(buf0lru.cc:2567),...,buf_page_io_complete(buf0buf.cc:6070),...,io_handler_thread(srv0start.cc:330),...

buf_dblwr_flush_buffered_writes now waits for its own thread's I/O to complete and doesn’t block other threads from proceeding. The other top mutex waits belong to the LRU list mutex, which again is not caused directly by flushing.

This concludes the description of the current flushing implementation in Percona Server. To sum up, in this post series we took you through the road to the current XtraDB 5.7 flushing implementation:

  • Under high-concurrency I/O-bound workloads, the server has a high demand for free buffer pages. This demand can be satisfied by either LRU batch flushing or single-page flushing.
  • Single-page flushes cause a lot of doublewrite buffer contention and are bad even without the doublewrite buffer.
  • As in XtraDB 5.6, we removed single-page flushing altogether.
  • The existing cleaner LRU flushing could not satisfy free page demand.
  • The multi-threaded LRU flushing design addresses this issue, as long as the doublewrite buffer is disabled.
  • If the doublewrite buffer is enabled, MT LRU flushing contends on it, negating its improvements.
  • Parallel doublewrite buffers address this bottleneck.

Is Percona PAM Authentication plugin compatible with MySQL 5.7?

Latest Forum Posts - May 9, 2016 - 12:55am
Is Percona PAM Authentication plugin compatible with MySQL 5.7?

TIA,
Vitaly

CPU governor performance

Latest MySQL Performance Blog posts - May 6, 2016 - 3:53pm

In this blog, we’ll examine how CPU governor performance affects MySQL.

It’s been a while since we looked into CPU governors and with the new Intel CPUs and new Linux distros, I wanted to check how CPU governors affect MySQL performance.

Before jumping to the results, let’s review which drivers manage CPU frequency. Traditionally, the default driver was “acpi-cpufreq”, but for recent Intel CPUs and newer Linux kernels it was changed to “intel_pstate”.

To check which driver is being used, run the command cpupower frequency-info.

cpupower frequency-info
analyzing CPU 0:
  driver: acpi-cpufreq
  CPUs which run at the same hardware frequency: 0
  CPUs which need to have their frequency coordinated by software: 0
  maximum transition latency: 10.0 us.
  hardware limits: 1.20 GHz - 2.00 GHz
  available frequency steps: 2.00 GHz, 2.00 GHz, 1.90 GHz, 1.80 GHz, 1.70 GHz, 1.60 GHz, 1.50 GHz, 1.40 GHz, 1.30 GHz, 1.20 GHz
  available cpufreq governors: conservative, ondemand, userspace, powersave, performance
  current policy: frequency should be within 1.20 GHz and 2.00 GHz.
                  The governor "ondemand" may decide which speed to use
                  within this range.
  current CPU frequency is 1.20 GHz (asserted by call to hardware).
  cpufreq stats: 2.00 GHz:29.48%, 2.00 GHz:0.00%, 1.90 GHz:0.00%, 1.80 GHz:0.00%, 1.70 GHz:0.00%, 1.60 GHz:0.00%, 1.50 GHz:0.00%, 1.40 GHz:0.00%, 1.30 GHz:0.37%, 1.20 GHz:70.15%  (7)
  boost state support:
    Supported: yes
    Active: yes

In this case, we can see that the driver is “acpi-cpufreq”, and the governor is “ondemand”.

On my server (running Ubuntu 16.04 with “Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz” CPUs), I get the following output with the default settings:

analyzing CPU 0:
  driver: intel_pstate
  CPUs which run at the same hardware frequency: 0
  CPUs which need to have their frequency coordinated by software: 0
  maximum transition latency: 0.97 ms.
  hardware limits: 1.20 GHz - 3.00 GHz
  available cpufreq governors: performance, powersave
  current policy: frequency should be within 1.20 GHz and 3.00 GHz.
                  The governor "performance" may decide which speed to use
                  within this range.
  current CPU frequency is 1.50 GHz (asserted by call to hardware).
  boost state support:
    Supported: yes
    Active: yes

So, it’s interesting to see that “intel_pstate” with the “performance” governor is chosen by default, and the CPU frequency range is 1.20GHz to 3.00GHz (even though the CPU specification is 2.00GHz). If we check the CPU specification page, it says that 2.00GHz is the “base frequency” and 3.00GHz is the “Max Turbo” frequency.

In contrast to “intel_pstate”, “acpi-cpufreq” says “frequency should be within 1.20 GHz and 2.00 GHz.”

Also, “intel_pstate” only supports “performance” and “powersave” governors, while “acpi-cpufreq” has a wider range. For this blog, I only tested “ondemand” and “performance”.
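
Switching governors within a driver, on the other hand, can be done at runtime; a quick sketch (the governors available depend on the active driver, as noted above):

sudo cpupower frequency-set -g performance
cpupower frequency-info | grep "The governor"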

Switching between CPU drivers is not easy, as it requires a server reboot — you need to pass a parameter to the kernel startup line. In Ubuntu, you can do this in /etc/default/grub by changing GRUB_CMDLINE_LINUX_DEFAULT to GRUB_CMDLINE_LINUX_DEFAULT="intel_pstate=disable", which will disable intel_pstate and will load acpi-cpufreq.
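
A minimal sketch of that change on Ubuntu (assuming a stock /etc/default/grub; back the file up first and verify the kernel command line after reboot):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="intel_pstate=disable"

sudo update-grub
sudo reboot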

Is there a real difference in performance between the different CPU drivers and CPU governors? To check, I ran a sysbench OLTP read-only workload over a 10Gb network, with data that fits into memory (so it is a CPU-bound workload).
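
The workload was along these lines; a hedged sketch with placeholder host, credentials, and sizes (the exact scripts and configs for this series are in the repositories linked from the earlier posts):

sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua --oltp-read-only=on \
    --oltp-tables-count=10 --oltp-table-size=10000000 \
    --mysql-host=<server> --mysql-user=sbtest --mysql-password=<pwd> \
    --num-threads=64 --max-time=300 --max-requests=0 --report-interval=10 run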

The results are as follows. This is a chart for absolute throughput:

And to better understand relative performance, here is a chart on how other governors perform compared to “intel-pstate” with the performance governor. In this case, I showed relative performance to “PSTATE performance”, which equals “1”. In the chart, the orange bar is “PSTATE powersave” and shows the relative difference between “PSTATE powersave” and “PSTATE performance” (=1):

Here are the takeaways I see:

  • The combination of CPU driver and CPU governor still affects performance
  • ACPI ondemand might not be the best choice to achieve the best throughput
  • intel_pstate “powersave” is slower at lower thread counts (I guess the Linux scheduler assigns execution to “sleeping” CPU cores)
  • Both the ACPI and intel_pstate “performance” governors show the best (and practically identical) performance
  • My Ubuntu 16.04 starts with the “intel_pstate” driver + “performance” governor by default, but you may still want to check what the settings are in your case (and change to “performance” if it is not set)

Help with rolling Upgrade

Latest Forum Posts - May 6, 2016 - 4:44am
Could you please advise whether the plan below is a sensible and supported approach to a rolling minor release upgrade (5.6.24 to 5.6.29)? Could you please point us at any relevant documentation or blogs? We are new to MySQL support and this forum, so any help would be really appreciated.

Architecture:
MySQL 5.6.24 two-node active-passive cluster managed by Pacemaker (with MySQL replication), with a DRBD-mirrored file system for the Magento application tier.

What we need to do:
Rolling upgrade to 5.6.29, with no interruption to service and no data loss.

High Level Plan (a command-level sketch for one node follows the list):
  1. Put Pacemaker in maintenance mode
  2. Shut down the slave database (database B).
  3. Upgrade the MySQL binary packages for database B.
  4. Start database B.
  5. Run the upgrade script
  6. Restart MySQL replication and let the Slave (B) resync with the master (A). Check DRBD is up to date.
  7. Promote B to Master.
  8. Shut down the old master database (A)
  9. Upgrade the MySQL binary packages for database A
  10. Start database A.
  11. Run the upgrade script
  12. Restart MySQL replication and let it sync with database B. Check DRBD is up to date
  13. Promote A to master
  14. Turn Pacemaker maintenance mode off.
  15. Ensure Slave (B) resyncs with the master (A).
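
For illustration, steps 2 to 6 on node B might look roughly like this (package and service commands are placeholders; adjust for your distribution and the packages you actually run):

sudo service mysql stop                                    # step 2
sudo yum upgrade mysql-server                              # step 3 (placeholder package name)
sudo service mysql start                                   # step 4
mysql_upgrade -u root -p                                   # step 5
mysql -u root -p -e "START SLAVE; SHOW SLAVE STATUS\G"     # step 6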

Getting error while placing cold backup on Linux server running MySQL

Latest Forum Posts - May 6, 2016 - 3:29am
I am trying to migrate data from MySQL 5.6 hosted on Windows to MySQL 5.7 on Linux. I took a cold backup, tried to put the data folder in place, and am getting the error below:


2016-05-06T10:09:57.280800Z 0 [ERROR] InnoDB: Space ID in fsp header is 2852258390, but in the page header it is 3285098619.
2016-05-06T10:09:57.280844Z 0 [ERROR] InnoDB: Data file './ibdata1' uses page size 0, but the innodb_page_size start-up parameter is 4096
2016-05-06T10:09:57.280883Z 0 [ERROR] InnoDB: Corrupted page [page id: space=4294967295, page number=0] of datafile './ibdata1' could not be found in the doublewrite buffer.
2016-05-06T10:09:57.280897Z 0 [ERROR] InnoDB: Plugin initialization aborted with error Generic error
2016-05-06T10:09:57.881650Z 0 [ERROR] Plugin 'InnoDB' init function returned error.
2016-05-06T10:09:57.881675Z 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2016-05-06T10:09:57.881682Z 0 [ERROR] Failed to initialize plugins.
2016-05-06T10:09:57.881687Z 0 [ERROR] Aborting

Percona Server 5.7: multi-threaded LRU flushing

Latest MySQL Performance Blog posts - May 5, 2016 - 6:34am

In this blog post, we’ll discuss how to use multi-threaded LRU flushing to prevent bottlenecks in MySQL.

In the previous post, we saw that InnoDB 5.7 performs a lot of single-page LRU flushes, which in turn are serialized by the shared doublewrite buffer. Based on our 5.6 experience we have decided to attack the single-page flush issue first.

Let’s start with describing a single-page flush. If the working set of a database instance is bigger than the available buffer pool, existing data pages will have to be evicted or flushed (and then evicted) to make room for queries reading in new pages. InnoDB tries to anticipate this by maintaining a list of free pages per buffer pool instance; these are the pages that can be immediately used for placing the newly-read data pages. The target length of the free page list is governed by the innodb_lru_scan_depth parameter, and the cleaner threads are tasked with refilling this list by performing LRU batch flushing. If for some reason the free page demand exceeds the cleaner thread flushing capability, the server might find itself with an empty free list. In an attempt to not stall the query thread asking for a free page, it will then execute a single-page LRU flush (buf_LRU_get_free_block calling buf_flush_single_page_from_LRU in the source code), which is performed in the context of the query thread itself.
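
To make this concrete, the free list and its target depth can be observed and tuned on a running server; a rough illustration (hypothetical values, not part of this post's benchmarks):

mysql -e "SELECT pool_id, free_buffers, database_pages FROM information_schema.innodb_buffer_pool_stats"
mysql -e "SET GLOBAL innodb_lru_scan_depth = 4096"   # per-instance free list target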

The problem with this flushing mode is that it will iterate over the LRU list of a buffer pool instance, while holding the buffer pool mutex in InnoDB (or the finer-grained LRU list mutex in XtraDB). Thus, a server whose cleaner threads are not able to keep up with the LRU flushing demand will have further increased mutex pressure – which can further contribute to the cleaner thread troubles. Finally, once the single-page flusher finds a page to flush it might have trouble in getting a free doublewrite buffer slot (as shown previously). That suggested to us that single-page LRU flushes are never a good idea.  The flame graph below demonstrates this:

Note how a big part of the server run time is attributed to a flame rooted at JOIN::optimize, whose run time in turn is almost fully taken by buf_dblwr_write_single_page in two branches.

The easiest way to avoid a single-page flush is, well, simply not to do it! Wait until a cleaner thread finally provides some free pages for the query thread to use. This is what we did in XtraDB 5.6 with the innodb_empty_free_list_algorithm server option (which has “backoff” as the default). This option is also present in XtraDB 5.7, and it resolves the issues of increased contention for the buffer pool (LRU list) mutex and doublewrite buffer single-page flush slots. This approach handles the empty free page list better.

Even with this strategy it’s still a bad situation to be in, as it causes query stalls when page cleaner threads aren’t able to keep up with the free page demand. To understand why this happens, let’s look into a simplified scheme of InnoDB 5.7 multi-threaded LRU flushing:

The key takeaway from the picture is that LRU batch flushing does not necessarily happen when it’s needed the most. All buffer pool instances have their LRU lists flushed first (for free pages), and flush lists flushed second (for checkpoint age and buffer pool dirty page percentage targets). If the flush list flush is in progress, LRU flushing will have to wait until the next iteration. Further, all flushing is synchronized once per second-long iteration by the coordinator thread waiting for everything to complete. This one-second mark may well become a thirty-or-more-second mark if one of the workers is stalled (with the telltale sign in the server error log: “InnoDB: page_cleaner: 1000ms intended loop took 49812ms”). So if we have a very hot buffer pool instance, everything else will have to wait for it. And it’s long been known that buffer pool instances are not used uniformly (some are hotter and some are colder).

A fix should:

  • Decouple the “LRU list flushing” from “flush list flushing” so that the two can happen in parallel if needed.
  • Recognize that different buffer pool instances require different amounts of flushing, and remove the synchronization between the instances.

We developed a design based on the above criteria, where each buffer pool instance has its own private LRU flusher thread. That thread monitors the free list length of its instance, flushes, and sleeps until the next free list length check. The sleep time is adjusted depending on the free list length: thus a hot instance LRU flusher may not sleep at all in order to keep up with the demand, while a cold instance flusher might only wake up once per second for a short check.

The LRU flushing scheme now looks as follows:

This has been implemented in the Percona Server 5.7.10-3 RC release, and this design simplified the code as well. LRU flushing heuristics are simple, and all LRU flushing is now removed from the legacy cleaner coordinator/worker threads, enabling more efficient flush list flushing as well. LRU flusher threads are the only threads that can flush a given buffer pool instance, enabling further simplification: for example, InnoDB recovery writer threads simply disappear.

Are we done then? No. With the single-page flushes and single-page flush doublewrite bottleneck gone, we hit the doublewrite buffer again. We’ll cover that in the next post.

pt-table-sync but no unique index

Latest Forum Posts - May 5, 2016 - 2:33am
I'm trying to fix an out-of-sync table on a slave database server, but pt-table-sync used with the --sync-to-master option won't work, because there is no unique index on the table. I assume that I will need to run pt-table-sync with the --nocheck-slave option (and cross my fingers) - but I'm puzzled by the following bit of the pt-table-sync documentation:

"Source and destination hosts must be independent; they cannot be in the same replication topology. pt-table-sync will die with an error if it detects that a destination host is a slave because changes are written directly to destination hosts (and it’s not safe to write directly to slaves)."

I assume that --nocheck-slave will override the behaviour described here - but there's nothing in the documentation to confirm this. Is this correct? I'd like to understand a bit more about what I'm trying to do, rather than just go ahead and risk turning a problem into a disaster!

(Presumably it would be a good idea to issue "stop slave" on the slave, before running pt-table-sync with the --nocheck-slave option?)
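
For what it's worth, the kind of invocation being discussed would look roughly like this (host, database, and table names are placeholders; dry-run with --print and review before switching to --execute):

mysql -h slave_host -e "STOP SLAVE"
pt-table-sync --print --nocheck-slave h=master_host,D=mydb,t=mytable h=slave_host,D=mydb,t=mytable
# if the changes look sane, re-run with --execute in place of --print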

