
Restoring A Specific Table Using XtraBackup

Latest Forum Posts - 1 hour 13 min ago
We're currently backing up and restoring using the --export option in order to allow restoring specific tables once a full backup and its incremental backups have been prepared in the following way:

1. first with the --apply-log --redo-only arguments
2. then applying incremental backups using --apply-log --incremental-dir=
3. and eventually --apply-log --export

This generates a restored DB directory structure in which every table has its own set of files,
which can then be selectively copied and used to restore a specific table on another DB.
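
For reference, here is a rough sketch of the per-table restore flow being described (the paths and table name are placeholders, and the import side assumes MySQL/Percona Server 5.6+ transportable tablespaces on the destination):

# prepare the full backup, keeping it open for incrementals
innobackupex --apply-log --redo-only /backups/full
# apply the incremental(s)
innobackupex --apply-log /backups/full --incremental-dir=/backups/inc1
# final prepare, generating per-table export files
innobackupex --apply-log --export /backups/full

# on the destination, with an empty table of the same definition already created:
mysql -e "ALTER TABLE mydb.mytable DISCARD TABLESPACE"
cp /backups/full/mydb/mytable.ibd /backups/full/mydb/mytable.cfg /var/lib/mysql/mydb/
chown mysql:mysql /var/lib/mysql/mydb/mytable.*
mysql -e "ALTER TABLE mydb.mytable IMPORT TABLESPACE"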

Thing is, this takes exactly as long as it would have taken to simply prepare the DB and open it temporarily so the table can be exported/imported into the other DB.

Is there any way of selecting a specific table (or tables) to be restored without having to restore the entire backup set?

Thanks,
Avraham K

Documentation for PerconaFT server parameters

Latest Forum Posts - February 13, 2016 - 9:55pm
Hi,

I'm missing documentation about the available server parameters for the PerconaFT engine, as well as the parameters for collections and indexes like those that existed with TokuMX.

Aren't there any parameters anymore that can be changed/optimized per collection?

They seem to be available according to the source here:
https://github.com/percona/percona-s...ry_options.cpp
but I can't find any documentation about them.

Best,
Michael

Not able to see the graphs in Zabbix for MySQL

Latest Forum Posts - February 12, 2016 - 9:31am
Hi ,

I have followed the document exactly as mentioned in https://www.percona.com/doc/percona-...bix/index.html.
I updated the script /var/lib/zabbix/percona/scripts/ss_get_mysql_stats.php with a username and password that are global for all our databases with @'%'.

From the Zabbix front end everything looks fine, but the graphs are not getting generated and I am not sure what I am missing.
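
One quick check worth doing (just a sketch; "zabbix_monitor" is a placeholder for whatever user you configured): verify from the Zabbix host that the credentials in the script actually work, e.g.

mysql -h 127.0.0.1 -u zabbix_monitor -p -e "SHOW GLOBAL STATUS LIKE 'Uptime';"

and that running ss_get_mysql_stats.php manually from the command line produces output rather than an error.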

Can you please let me know if I need to do something extra?

Amit

pt-online-schema-change has not optimized my MySQL table

Latest Forum Posts - February 11, 2016 - 11:57pm
Hello,

I understand that with the pt-online-schema-change tool you can optimize your MySQL table. I used it on one table after deleting columns from it, but I haven't gained any free space...

The table takes 41G, and I had calculated that deleting the columns would gain 11G, but after using this tool I'm still at 41G...

This is the command that I used:

pt-online-schema-change --alter "ENGINE=InnoDB" D=my_database,t=mytable --max-lag=30000 Threads_running=100 --execute
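
As a side note (a sketch only, using the database/table names from the command above): whether the rebuild returns space to the filesystem depends on innodb_file_per_table, and any reclaimed space may show up as data_free rather than a smaller file. Two quick checks:

mysql> SHOW VARIABLES LIKE 'innodb_file_per_table';
mysql> SELECT table_name,
              ROUND(data_length/1024/1024/1024, 1) AS data_gb,
              ROUND(index_length/1024/1024/1024, 1) AS index_gb,
              ROUND(data_free/1024/1024/1024, 1) AS free_gb
       FROM information_schema.tables
       WHERE table_schema = 'my_database' AND table_name = 'mytable';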

More information:

I'm using Percona-server-5.5

Any suggestions?

Thank you!

MySQL Server used memory is not getting decreased

Latest Forum Posts - February 11, 2016 - 9:08pm
Hi,

We are using Percona MySQL 5.6.27 in our production environment. We have allocated 320GB of RAM and buffer_pool_size is 192GB. When the application is running, memory usage reaches up to 192GB or 200GB max. Later it doesn't come down, even after we stop the entire application. Every time we have to restart the MySQL service to reduce the RAM usage. Does MySQL normally behave like this, or do we need to optimize anything? Please help us.
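
For reference, a quick way to confirm whether it is the InnoDB buffer pool holding that memory (a check only, not a fix):

mysql> SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';
mysql> SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages%';

If most pages are reported as data pages, the usage you see is the buffer pool, which MySQL does not release back to the OS until a restart.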

Thanks,
Srini

Measuring Docker IO overhead

Latest MySQL Performance Blog posts - February 11, 2016 - 12:38pm

This will be another post on using Percona Server via a Docker image. I want to follow up on my previous post regarding CPU/Network overhead in Docker, “Measuring Percona Server Docker CPU/network overhead”, by measuring whether there is any Docker IO overhead on operations.

After running several tests, it appears (spoiler alert) that there is no Docker IO overhead. I still think it is useful to understand the different ways Docker can be used with data volumes, however. Docker’s philosophy is to provide ephemeral containers, but ephemeral does not work well for data – we do not want our data to disappear.

So, the first pattern is to create data inside a docker container. This is the default mode:

docker run -e MYSQL_ALLOW_EMPTY_PASSWORD=1 --name ps13 -p 3306:3306 -v /data/flash/my.cnf:/etc/my.cnf --net=host percona/percona-server:5.6.28

(I am using --net=host to avoid network overhead; check the previous post for more information.)

The second pattern is to use an external data volume; there we need to specify the data volume with -v /data/flash/d1/:/var/lib/mysql. The full command is:

docker run -e MYSQL_ALLOW_EMPTY_PASSWORD=1 --name ps13 -p 3306:3306 -v /data/flash/d1/:/var/lib/mysql -v /data/flash/my.cnf:/etc/my.cnf --net=host percona/percona-server:5.6.28

Finally, there is a third pattern: using data volume containers. For this example, I created a dummy container:

docker run -e MYSQL_ALLOW_EMPTY_PASSWORD=1 --name ps13-data-volume -v /var/lib/mysql -v /data/flash/my.cnf:/etc/my.cnf --net=host percona/percona-server:5.6.28

After stopping the ps13-data-volume container, we can start a real one using the data volume from ps13-data-volume  as:

docker run -e MYSQL_ALLOW_EMPTY_PASSWORD=1 --name ps14 --volumes-from ps13-data-volume -v /data/flash/my.cnf:/etc/my.cnf --net=host percona/percona-server:5.6.28
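
To confirm that ps14 is really reusing the volume created by ps13-data-volume, something like this works as a sanity check (the --format filter just trims the output):

docker inspect --format '{{ json .Mounts }}' ps13-data-volume
docker inspect --format '{{ json .Mounts }}' ps14

Both should report the same volume mounted at /var/lib/mysql.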

I compared all these modes with Percona Server running on a bare metal box with direct-mounted storage, using sysbench for both read-intensive and write-intensive IO workloads. For reference, the sysbench command is:

./sysbench --test=tests/db/oltp.lua --oltp_tables_count=16 --oltp_table_size=10000000 --num-threads=16 --mysql-host=127.0.0.1 --mysql-user=root --oltp-read-only=off --max-time=1800 --max-requests=0 --report-interval=10 run

I’m not going to show the final numbers or charts, as the results are identical for all Docker modes and for the bare metal case. So I can confidently say there is NO IO overhead for any Docker data volume pattern described above.

As a next experiment, I want to measure the Docker container overhead in a multi-host network environment.

Failed State Transfer

Latest Forum Posts - February 11, 2016 - 4:49am
Hi there,

I've been trying to set up a Percona Cluster but have gotten to one part that I can't get past.

Setup:
I have tried binary installs, yum installs and a mix of the two, and always get the same problem. My current setup is a binary-only install. I know it is different from the documentation.

PC 1: 10.0.0.11 (as example) CentOS 7.1
- Percona-XtraDB-Cluster-5.6.28-rel76.1-25.14.1.Linux.x86_64.ssl101.tar.gz
- percona-xtrabackup-2.3.3-Linux-x86_64.tar.gz
- Users: root, mysql
- DB Users: root@localhost, root@10.0.0.11, sstuser@localhost, sstuser@10.0.0.22
PC 2: 10.0.0.22 (as example) CentOS 7.1
- Percona-XtraDB-Cluster-5.6.28-rel76.1-25.14.1.Linux.x86_64.ssl101.tar.gz
- percona-xtrabackup-2.3.3-Linux-x86_64.tar.gz
- Users: root, mysql
- DB Users: N/A

PC 1 - my.cnf
[mysqld]
basedir=/opt/current/percona
datadir=/opt/current/percona/data
socket=/tmp/mysql.sock
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# Settings user and group are ignored when systemd is used.
# If you need to run mysqld under a different user or group,
# customize your systemd unit file for mariadb according to the
# instructions in http://fedoraproject.org/wiki/Systemd
#
# Settings
#
wsrep_provider=/opt/current/percona/lib/libgalera_smm.so
wsrep_cluster_address=gcomm://
wsrep_slave_threads=8
wsrep_node_address=10.0.0.11
wsrep_node_name=master1
wsrep_sst_method=xtrabackup-v2
wsrep_cluster_name=percona_test
wsrep_sst_auth="sstuser:s3cret"
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
pid_file=/opt/current/percona/socket/mysql.pid
user=mysql
server-id=1

[mysqld_safe]
log-error=/opt/current/percona/log/percona.log
pid-file=/opt/current/percona/socket/mysql.pid

#
# include all files from the config directory
#
!includedir /etc/my.cnf.d

Starts and works when bootstrapped with:

service mysql bootstrap-pxc

PC 2 - my.cnf
[mysqld]
basedir=/opt/current/percona
datadir=/opt/current/percona/data
socket=/tmp/mysql.sock
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# Settings user and group are ignored when systemd is used.
# If you need to run mysqld under a different user or group,
# customize your systemd unit file for mariadb according to the
# instructions in http://fedoraproject.org/wiki/Systemd
#
# Settings
#
wsrep_provider=/opt/current/percona/lib/libgalera_smm.so
wsrep_cluster_address=gcomm://10.0.0.11,10.0.0.22
wsrep_slave_threads=8
wsrep_node_address=10.0.0.22
wsrep_node_name=slave1
wsrep_sst_method=xtrabackup-v2
wsrep_cluster_name=percona_test
wsrep_sst_auth="sstuser:s3cret"
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
pid_file=/opt/current/percona/socket/mysql.pid
user=mysql
server-id=1

[mysqld_safe]
log-error=/opt/current/percona/log/percona.log
pid-file=/opt/current/percona/socket/mysql.pid

#
# include all files from the config directory
#
!includedir /etc/my.cnf.d

However, when I try to start it with mysqld start, I get an error about State Transfer failing.

http://pastebin.com/raw/ZBFtxsrV

Does anyone know what I can do to fix this "as is", meaning I'd like to keep the setup as I've done it? I have MYSQL_HOME set up on both machines, and xtrabackup is in the PATH of the mysql user on 10.0.0.22. socat, nc and rsync are installed, and SELinux is disabled.
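
One thing worth double-checking, based on the DB user list above: the SST user normally needs to exist on the donor node (10.0.0.11) as 'sstuser'@'localhost', with the privileges the xtrabackup-v2 SST method requires. A minimal sketch matching the wsrep_sst_auth setting above (PROCESS is needed by newer xtrabackup versions):

mysql> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 's3cret';
mysql> GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
mysql> FLUSH PRIVILEGES;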

credentials in /etc/my.cnf

Latest Forum Posts - February 11, 2016 - 4:25am
I do have the user and password specified in /etc/my.cnf:

Code:
[client]
user=aaa
password=bbb

After upgrading from xtrabackup 2.2 to xtrabackup 2.3, innobackupex fails to authenticate. Does it not read credentials from /etc/my.cnf anymore? Is this a known issue with 2.3?
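
A possible workaround in the meantime (assuming the 2.3 rewrite of innobackupex no longer picks up the [client] group): pass the credentials explicitly, or add them under an [xtrabackup] group in /etc/my.cnf.

innobackupex --user=aaa --password=bbb /path/to/backups

[xtrabackup]
user=aaa
password=bbb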

Regards, Markus

Percona Live featured talk with Sergej Jurecko: ActorDB — an alternative view of a distributed database

Latest MySQL Performance Blog posts - February 10, 2016 - 1:28pm

Welcome to the first of several discussions with some of our upcoming Percona Live Data Performance Conference 2016 speakers! In this series of blogs, we’ll highlight some of the talks that will happen at this year’s conference, as well as discuss the technologies and outlooks of the speakers themselves. Make sure to read to the end to get a special Percona Live registration bonus!

In this first installment, we’ll meet Sergej Jurecko, co-founder of Biokoda d.o.o. His talk will be ActorDB – an alternative view of a distributed database.  ActorDB is a database that was developed using a distributed model: it uses an SQL database that speaks the MySQL client/server protocol. I had a chance to speak with Sergej and get some insight into what his talk will cover:

Percona: Give me a brief history of yourself: how did you get into database development, where do you work, and what do you love about it?

Sergej: I am a co-founder of a private company in Slovenia named Biokoda. Our clients range from small companies to telecoms who offer our solutions to their customers and government institutions. The requirements that our products try to solve always include high availability, ease-of-management, ease-of-scale and self-hosted.

A few years ago we were tasked with building a file sync app. This requires you to store a potentially very large file hierarchy for every user. When it came to choosing a database, our options were KV stores, traditional SQL databases and document stores (which were much less mature than they are now).

Designing a database that would be an ideal fit for our use case and requirements became a fun engineering challenge. Then writing it became a fun engineering challenge. Sure, it would have been safer and easier to stick with an existing mature SQL database, but I’m an eternal engineering optimist. Sometimes you have to take a crazy chance if you believe in it!

Percona: Your talk is going to be on “ActorDB – an alternative view of a distributed database.” What is it about distributed databases that causes concern for people? What are the pros and cons? And what effects could it have on application performance?

Sergej: The biggest concern, and rightfully so, is safety. There are many pitfalls developers of distributed databases can fall into. The most basic issues are: What is the consensus algorithm, is it implemented correctly and thoroughly tested? What is the storage engine, is it custom built? If yes how well is its reliability tested?

The advantage of getting it right is a way to store state without a single point of failure. It is a way to horizontally grow your database with your needs. If your database is a part of your products, these things become important selling points for your products.

When it comes to performance, distributed databases tend to lose out on a per-node basis. But because they can scale out to more nodes, they can achieve  higher performance by an order of magnitude.

What we tried to do is base ActorDB on as much solid, proven ground as possible. We avoided developing our own storage and SQL engine, and instead based it on existing proven technology. We even avoided developing our own client protocol and libraries.

Percona: Does scaling horizontally cause difficulties with expense justification? How would you characterize the ROI for horizontal versus vertically scaling?

Sergej: I’m not sure how much that is even a factor. It depends on what kind of customers you are speaking to and what their needs are. The kind of companies we are in contact with often use horrifically inefficient languages as a base for their products, because those languages make solving problems in them easier. The ROI is in faster development time.

One could make the same case with distributed databases. Excluding KV stores, you still have structured values, indexes and sometimes SQL. If horizontally distributed databases solve more problems for you than vertically distributed ones, then that is the ROI. They spare you from solving those difficult problems.

The industry has moved on from the one-size-fits-all mentality. Distributed databases are not a replacement for traditional monolithic ones. There are things possible in monolithic databases that are not possible in distributed ones, and vice versa. There is room for both, and now developers have a choice as to what best fits their needs.

I think the tech industry has more in common with the fashion industry than we like to admit. Technologies grow and die in popularity much like fashion. When new concepts like eventual consistency rise up, we sometimes get a bit too enthusiastic about them. Right now I think the traditional way of thinking is coming back a bit. It turns out a nice SQL interface to the database is important, and new and untested storage engines are pretty dangerous.

Percona: What do you see as an issue that we the community needs to be on top of with regard to distributed database development? What keeps you up at night with regard to ActorDB and the implementation of your solution?

Sergej: Well I’ve already mentioned the main issues: distributed consensus and storage engine. The key issue for us is in our Raft implementation. At the end of the day, a database must have solid performance – which means you can’t just grab an off the shelf Raft implementation and use it. It must be tightly integrated with the storage engine.

But what literally keeps me up at night is the unexplored potential that we see in the product. There are so many interesting avenues we have not developed yet.

Percona: What are you most looking forward to at Percona Live Data Performance Conference 2016?

Sergej: I’m looking forward to discovering something new and finding out what others are doing. But mostly getting more feedback, especially if negative! That is often the most useful kind of feedback.

You can read more of Sergej’s thoughts and about ActorDB at the Biokoda blog.

Want to find out more about Sergej and ActorDB? Register for Percona Live Data Performance Conference 2016, and come see his talk ActorDB – an alternative view of a distributed database. Use the code “FeaturedTalk” and receive $100 off the current registration price!

The Percona Live Data Performance Conference is the premier open source event for the data performance ecosystem. It is the place to be for the open source community as well as businesses that thrive in the MySQL, NoSQL, cloud, big data and Internet of Things (IoT) marketplaces. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Data Performance Conference will be April 18-21 at the Hyatt Regency Santa Clara & The Santa Clara Convention Center.

Estimating potential for MySQL 5.7 parallel replication

Latest MySQL Performance Blog posts - February 10, 2016 - 10:19am

Unlike MySQL 5.6, where parallel replication can only be used when replicas have several schemas, MySQL 5.7 replicas can read binlog group commit information coming from the master to replicate transactions in parallel even when a single schema is used. Now the question is: how many replication threads should you use?

A simple benchmark

Let’s assume we have one master and three slaves, all running MySQL 5.7.

One slave is using regular single-threaded replication (the control slave in the graph below), one is using 20 parallel workers (MTS 20 workers) and the last one is using 100 parallel workers (MTS 100 workers).

As a reminder, here are the settings that need to be adjusted for 5.7 parallel replication:

slave_parallel_type = LOGICAL_CLOCK
slave_parallel_workers = 20

GTID replication is also highly recommended, if you don’t want to face annoying issues.

Now let’s run a simple sysbench workload, inserting records in 25 separate tables in the same database with 100 concurrent threads:

sysbench --mysql-user=root --mysql-db=db1 --test=/usr/share/doc/sysbench/tests/db/insert.lua --max-requests=500000 --num-threads=100 --oltp-tables-count=25 run

Because we’re using 100 concurrent threads on the master, we can expect that some parallelization is possible. This means that if we see replication lag with the control slave, we’ll probably see less lag with the 20-worker slave and even less with the 100-worker slave.

This is not exactly what we get:

Parallel replication is indeed useful, but the 100-worker slave doesn’t provide any benefits compared to the 20-worker slave. Replication lag is even slightly worse.

What happened?

Some instrumentation with performance_schema

To have a better understanding of how efficiently the parallel replication threads are used, let’s enable some instrumentation on slaves (in other words, recording executed transactions):

UPDATE performance_schema.setup_consumers SET ENABLED = 'YES' WHERE NAME LIKE 'events_transactions%';
UPDATE performance_schema.setup_instruments SET ENABLED = 'YES', TIMED = 'YES' WHERE NAME = 'transaction';

For better readability of the results, let’s create a new view (tracking how many transactions are executed by each replication thread):

CREATE VIEW mts_summary_trx AS
select
  performance_schema.events_transactions_summary_by_thread_by_event_name.THREAD_ID AS THREAD_ID,
  performance_schema.events_transactions_summary_by_thread_by_event_name.COUNT_STAR AS COUNT_STAR
from performance_schema.events_transactions_summary_by_thread_by_event_name
where performance_schema.events_transactions_summary_by_thread_by_event_name.THREAD_ID in
  (select performance_schema.replication_applier_status_by_worker.THREAD_ID
   from performance_schema.replication_applier_status_by_worker);

Now, after running sysbench again, let’s do some math to see how often each replication thread was run:

SELECT SUM(count_star) FROM mts_summary_trx INTO @total;
SELECT 100*(COUNT_STAR/@total) AS PCT_USAGE FROM mts_summary_trx;
+-----------+
| PCT_USAGE |
+-----------+
|   39.5845 |
|   31.4046 |
|   12.0119 |
|    5.9081 |
|    3.0375 |
|    1.6527 |
|    1.0550 |
|    0.7576 |
|    0.6089 |
|    0.5208 |
|    0.4642 |
|    0.4157 |
|    0.3832 |
|    0.3682 |
|    0.3408 |
|    0.3247 |
|    0.3076 |
|    0.2925 |
|    0.2866 |
|    0.2749 |
+-----------+

We can see that the workload has a limited potential for parallelism – therefore, it’s not worth configuring more than 3-4 replication threads.

The slight performance degradation with 100 replication threads is probably due to the overhead of the coordinator thread.

Conclusion

Estimating the optimal number of replication threads with MySQL 5.7 parallel replication is quite difficult if you're just guessing. The performance_schema provides a simple way to understand how the workload is handled by the replication threads.

It also allows you to see if tuning binlog_group_commit_sync_delay provides more throughput on slaves without too much impact on the master’s performance.
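
For example, one way to run that experiment (a sketch; the delay value is arbitrary and is expressed in microseconds) is to set the variable on the master, re-run the workload, and compare the per-worker percentages from the mts_summary_trx view above:

mysql> SET GLOBAL binlog_group_commit_sync_delay = 1000;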

Documentation for new TokuDB variables?

Latest Forum Posts - February 10, 2016 - 8:23am
Is there any documentation at all available for the newly added TokuDB variables? `tokudb_client_pool_threads`, `tokudb_cachetable_pool_threads`, etc. I can't find anything about them.

EXPLAIN FORMAT=JSON: buffer_result is not hidden!

Latest MySQL Performance Blog posts - February 9, 2016 - 12:41pm

Time for another entry in the EXPLAIN FORMAT=JSON is cool! series. Today we’re going to look at how you can view the buffer result using JSON (instead of the regular EXPLAIN command).

Regular EXPLAIN does not identify if SQL_BUFFER_RESULT was used at all. To demonstrate, let’s run this query:

mysql> explain select * from salaries\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: salaries
   partitions: NULL
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 2557022
     filtered: 100.00
        Extra: NULL
1 row in set, 1 warning (0.01 sec)

Note (Code 1003): /* select#1 */ select `employees`.`salaries`.`emp_no` AS `emp_no`,`employees`.`salaries`.`salary` AS `salary`,`employees`.`salaries`.`from_date` AS `from_date`,`employees`.`salaries`.`to_date` AS `to_date` from `employees`.`salaries`

Now, let’s compare it to this query:

mysql> explain select sql_buffer_result * from salaries\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: salaries
   partitions: NULL
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 2557022
     filtered: 100.00
        Extra: Using temporary
1 row in set, 1 warning (0.00 sec)

Note (Code 1003): /* select#1 */ select sql_buffer_result `employees`.`salaries`.`emp_no` AS `emp_no`,`employees`.`salaries`.`salary` AS `salary`,`employees`.`salaries`.`from_date` AS `from_date`,`employees`.`salaries`.`to_date` AS `to_date` from `employees`.`salaries`

Notice there is no difference, except the expected "Using temporary" value in the "Extra" row of the second query. The "Using temporary" field is expected here, because SQL_BUFFER_RESULT directly instructs the MySQL server to put a result set into a temporary table to free locks. But what if the query uses a temporary table by itself, for example for a grouping operation? In this case, the EXPLAIN result for the original query and the query that contains the SQL_BUFFER_RESULT clause will be 100% identical.

Compare:

mysql> explain select emp_no, salary/avg(salary) from salaries group by emp_no, salary\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: salaries
   partitions: NULL
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 2557022
     filtered: 100.00
        Extra: Using temporary; Using filesort
1 row in set, 1 warning (0.00 sec)

Note (Code 1003): /* select#1 */ select `employees`.`salaries`.`emp_no` AS `emp_no`,(`employees`.`salaries`.`salary` / avg(`employees`.`salaries`.`salary`)) AS `salary/avg(salary)` from `employees`.`salaries` group by `employees`.`salaries`.`emp_no`,`employees`.`salaries`.`salary`

With:

mysql> explain select sql_buffer_result emp_no, salary/avg(salary) from salaries group by emp_no, salary\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: salaries
   partitions: NULL
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 2557022
     filtered: 100.00
        Extra: Using temporary; Using filesort
1 row in set, 1 warning (0.00 sec)

Note (Code 1003): /* select#1 */ select sql_buffer_result `employees`.`salaries`.`emp_no` AS `emp_no`,(`employees`.`salaries`.`salary` / avg(`employees`.`salaries`.`salary`)) AS `salary/avg(salary)` from `employees`.`salaries` group by `employees`.`salaries`.`emp_no`,`employees`.`salaries`.`salary`

There is no difference! We are not able to tell whether a temporary table was used to resolve the query, or simply to hold the buffered result set. The EXPLAIN FORMAT=JSON command can help in this case as well. Its output is clear, and shows all the details of the query optimization:

mysql> explain format=json select sql_buffer_result emp_no, salary/avg(salary) from salaries group by emp_no, salary\G
*************************** 1. row ***************************
EXPLAIN: {
  "query_block": {
    "select_id": 1,
    "cost_info": {
      "query_cost": "3073970.40"
    },
    "grouping_operation": {
      "using_temporary_table": true,
      "using_filesort": true,
      "cost_info": {
        "sort_cost": "2557022.00"
      },
      "buffer_result": {
        "using_temporary_table": true,
        "table": {
          "table_name": "salaries",
          "access_type": "ALL",
          "rows_examined_per_scan": 2557022,
          "rows_produced_per_join": 2557022,
          "filtered": "100.00",
          "cost_info": {
            "read_cost": "5544.00",
            "eval_cost": "511404.40",
            "prefix_cost": "516948.40",
            "data_read_per_join": "39M"
          },
          "used_columns": [
            "emp_no",
            "salary",
            "from_date"
          ]
        }
      }
    }
  }
}
1 row in set, 1 warning (0.00 sec)

Note (Code 1003): /* select#1 */ select sql_buffer_result `employees`.`salaries`.`emp_no` AS `emp_no`,(`employees`.`salaries`.`salary` / avg(`employees`.`salaries`.`salary`)) AS `salary/avg(salary)` from `employees`.`salaries` group by `employees`.`salaries`.`emp_no`,`employees`.`salaries`.`salary`

Firstly, we can see how the grouping_operation was optimized:

"grouping_operation": { "using_temporary_table": true, "using_filesort": true,

And it does indeed use the temporary table.

Now we can follow the details for SQL_BUFFER_RESULT:

"buffer_result": { "using_temporary_table": true,

With this output, we can be absolutely certain that a temporary table was created both for SQL_BUFFER_RESULT and for the grouping operation. This is especially helpful for support engineers who need the EXPLAIN output to help their customers tune queries, but are reluctant to ask for the same query twice: once with the SQL_BUFFER_RESULT clause and once without.

Conclusion: EXPLAIN FORMAT=JSON  does not hide important details for query optimizations.

pt-online-schema-change

Latest Forum Posts - February 9, 2016 - 6:24am
I am using the tool to rebuild a table while compressing it.
The remaining time gets up to 99% / 00:00, but it is still going for a long time.
I am looking at the process list and can see that it is not even halfway through the data (going by the PK ID).
Any ideas why the progress/remaining time is way off?

Variable_name,Value
innodb_version,5.6.28-76.1
protocol_version,10
slave_type_conversions,
version,5.6.28-76.1-log
version_comment,Percona Server (GPL), Release 76.1, Revision 5759e76
version_compile_machine,x86_64
version_compile_os,Linux

Loading two CSV files from two sessions into TokuDB / Timeout exceeded

Latest Forum Posts - February 8, 2016 - 12:09pm
Hi,

I am playing with Percona Server 5.6.27 and TokuDB engine.

I tried to load two (different) CSV files into the same table with LOAD DATA INFILE from two separate 'mysql' sessions. It worked with InnoDB (XtraDB) and MyISAM, but not with TokuDB. The session that starts loading first succeeds, but the other doesn't and returns: "ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction."

Besides increasing the timeout value, is there a way to do this with TokuDB?
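
To make the timeout workaround concrete (a sketch; the variable in question should be tokudb_lock_timeout, in milliseconds, and the file/table names are placeholders), the second session could run:

mysql> SET SESSION tokudb_lock_timeout = 600000;
mysql> LOAD DATA INFILE '/path/to/second_file.csv' INTO TABLE my_table;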

Thanks,

Andy.

Percona Server 5.7.10-2 second RC available

Latest MySQL Performance Blog posts - February 8, 2016 - 9:37am

Percona is glad to announce the second release candidate of Percona Server 5.7.10-2 on February 8, 2016. Download the latest version from the Percona web site or from the Percona Software Repositories.

New Features:

  • A complete list of changes between Percona Server 5.6 and 5.7 can be seen in Changed in Percona Server 5.7.
  • 5.7 binlog group commit algorithm is now supported in TokuDB as well.
  • New TokuDB index statistics reporting has been implemented to be compatible with the changes implemented in upstream 5.7. Following the InnoDB example, the default value for tokudb_cardinality_scale_percent has been changed from 50% to 100%. Implementing this also addresses a server crash deep in the optimizer code.

Known Issues:

  • In Percona Server 5.7, the super_read_only feature has been replaced with the upstream implementation. There are currently two known issues compared to the Percona Server 5.6 implementation:
    • Bug #78963, super_read_only aborts STOP SLAVE if variable relay_log_info_repository is set to TABLE which could lead to a server crash in Debug builds.
    • Bug #79328, super_read_only set as a server option has no effect.
  • InnoDB crash recovery might fail if innodb_flush_method is set to ALL_O_DIRECT. The workaround is to set this variable to a different value before starting up the crashed instance (bug #1529885).

Bugs Fixed:

  • Clustering secondary index could not be created on a partitioned TokuDB table. Bug fixed #1527730 (#720).
  • Percona TokuBackup was failing to compile with Percona Server 5.7. Bug fixed #123.
  • Granting privileges to a user authenticating with PAM Authentication Plugin could lead to a server crash. Bug fixed #1521474.
  • TokuDB status variables were missing from Percona Server 5.7.10-1. Bug fixed #1527364 (#923).
  • Attempting to rotate the audit log file would result in audit log file name foo.log.%u (literally) instead of a numeric suffix. Bug fixed #1528603.
  • Adding an index to an InnoDB temporary table while expand_fast_index_creation was enabled could lead to server assertion. Bug fixed #1529555.
  • TokuDB would not be upgraded on Debian/Ubuntu distributions while performing an upgrade from Percona Server 5.6 to Percona Server 5.7 even if explicitly requested. Bug fixed #1533580.
  • Server would assert when both TokuDB and InnoDB tables were used within one transaction on a replication slave which has binary log enabled and slave updates logging disabled. Bug fixed #1534249 (upstream bug #80053).
  • MeCab Full-Text Parser Plugin has not been included in the previous release. Bug fixed #1534617.
  • Fixed server assertion caused by Performance Schema memory key mix-up in SET STATEMENT ... FOR ... statements. Bug fixed #1534874.
  • Setting the innodb_sched_priority_purge (available only in debug builds) while purge threads were stopped would cause a server crash. Bug fixed #1368552.
  • Enabling TokuDB with the ps_tokudb_admin script inside a Docker container would cause an error due to insufficient privileges, even when running as root. So that the script can be used inside Docker containers, this error has been changed to a warning stating that the check is impossible. Bug fixed #1520890.
  • Write-heavy workload with a small buffer pool could lead to a deadlock when free buffers are exhausted. Bug fixed #1521905.
  • InnoDB status would print negative values for spin rounds per wait if the wait count, even though accounted as a signed 64-bit integer, did not fit into a signed 32-bit integer. Bug fixed #1527160 (upstream #79703).
  • Percona Server 5.7 couldn’t be restarted after TokuDB has been installed with ps_tokudb_admin script. Bug fixed #1527535.
  • Fixed memory leak when utility_user is enabled. Bug fixed #1530918.
  • Page cleaner worker threads were not instrumented for Performance Schema. Bug fixed #1532747 (upstream bug #79894).
  • Busy server was preferring LRU flushing over flush list flushing too strongly which could lead to performance degradation. Bug fixed #1534114.
  • libjemalloc.so.1 was missing from a binary tarball. Bug fixed #1537129.
  • When the cmake/make/make_binary_distribution workflow was used to produce binary tarballs, it would produce tarballs with mysql-... naming instead of percona-server-.... Bug fixed #1540385.
  • Added proper memory cleanup if for some reason a table is unable to be opened from a dead closed state. This prevents an assertion from happening the next time the table is attempted to be opened. Bug fixed #917.
  • Variable tokudb_support_xa has been modified to prevent setting it to anything but ON/ENABLED and to print a SQL warning anytime an attempt is made to change it, just like innodb_support_xa. Bug fixed #928.

Other bugs fixed: #1179451, #1534246, #1524763, #1525109 (upstream #79569), #1530102, #897, #898, #899, #900, #901, #902, #903, #905, #906, #907, #908, #909, #910, #911, #912, #913, #915, #919, and #904.

Release notes for Percona Server 5.7.10-2 are available in the online documentation. Please report any bugs on the launchpad bug tracker .

Percona XtraBackup 2.4.0-rc1 is now available

Latest MySQL Performance Blog posts - February 8, 2016 - 9:35am

Percona is glad to announce the first release candidate of Percona XtraBackup 2.4.0-rc1 on February 8th 2016. Downloads are available from our download site and from apt and yum repositories.

This is a Release Candidate quality release and it is not intended for production. If you want a high quality, Generally Available release, the current Stable version should be used (currently 2.3.3 in the 2.3 series at the time of writing).

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backups.

This release contains all of the features and bug fixes in Percona XtraBackup 2.3.3, plus the following:

New Features:

  • Percona XtraBackup has implemented basic support for MySQL 5.7 and Percona Server 5.7.

Known Issues:

  • Backed-up table data could not be recovered if backup was taken while running OPTIMIZE TABLE (bug #1541763) or ALTER TABLE ... TABLESPACE (bug #1532878) on that table.
  • Compact Backups currently don’t work due to bug #1192834.

Release notes with all the bugfixes for Percona XtraBackup 2.4.0-rc1 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

Attribute (innodb_version) does not pass the type constraint...

Latest Forum Posts - February 8, 2016 - 9:24am
"Attribute (innodb_version) does not pass the type constraint because: Validation failed for 'Str' with value undef at /usr/bin/pt-table-sync line 1432."

I'm using pt-table-sync to sync one table from a MySQL 5.5 server to a MySQL 5.7 server. Seems like there should be a simple fix to this error. It just throws this error and quits.

Should I worry about "Some authentication plugins will not work." note?

Latest Forum Posts - February 8, 2016 - 9:14am
Hello to all

I am trying to set up TLS in percona-server 5.6 (on debian jessie).

I have created my root CA and key pair and set it up in my.cnf (under the 'mysqld' section)

ssl-ca=/etc/ssl/certs/ca-certificates.crt
ssl-cert=/etc/mysql/certs/db.investme.com.crt
ssl-key=/etc/mysql/certs/db.investme.com.key

I gave mysql user and group permissions to them:

chown -R mysql:mysql /etc/mysql/certs/

But I still get this in the error log:

2016-02-08 08:47:27 12930 [Note] RSA private key file not found: /var/lib/mysql//private_key.pem. Some authentication plugins will not work.
2016-02-08 08:47:27 12930 [Note] RSA public key file not found: /var/lib/mysql//public_key.pem. Some authentication plugins will not work.

I tried to create symlinks in /var/lib/mysql but then the startup mysql script would freeze.

Any idea? Is this something I should worry about?
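
For what it's worth, those .pem files are only used by the sha256_password authentication plugin, not by the TLS settings above. If you want to silence the notes, one option is to generate the key pair in the datadir yourself (a sketch, assuming the default /var/lib/mysql datadir):

cd /var/lib/mysql
openssl genrsa -out private_key.pem 2048
openssl rsa -in private_key.pem -pubout -out public_key.pem
chown mysql:mysql private_key.pem public_key.pem
chmod 400 private_key.pem
chmod 444 public_key.pem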

