
TokuDB: disk size does not decrease after bulk delete?

Latest Forum Posts - September 7, 2016 - 5:12pm
I have been using Percona Server with the TokuDB engine for 2 years, and now I have a problem. I have used the TokuDB LZMA row format via the COMPRESSED option on LOGSERVER. This table has 200M rows and our disk is now at 91%, so I need to decrease disk usage. I then deleted 100M rows, but disk usage did not decrease, and opening and selecting from the table is now very slow.

Why is our table so slow? This query takes almost 10 minutes to execute: / select * from LOGTABLE limit 0,10 /

I used this delete query: / delete from LOGTABLE where ID < 100M /. ID is the primary key and an auto-increment field.

What can I do?

Please assist me on this issue.

Percona Live Europe featured talk with Igor Canadi — Everything you wanted to know about MongoRocks

Latest MySQL Performance Blog posts - September 7, 2016 - 10:47am

Welcome to another Percona Live Europe featured talk with Percona Live Europe 2016: Amsterdam speakers! In this series of blogs, we’ll highlight some of the speakers that will be at this year’s conference. We’ll also discuss the technologies and outlooks of the speakers themselves. Make sure to read to the end to get a special Percona Live Europe registration bonus!

In this Percona Live Europe featured talk, we’ll meet Igor Canadi, Software Engineer at Facebook, Inc. His talk will be on Everything you wanted to know about MongoRocks. MongoRocks is MongoDB with the RocksDB storage engine. It was developed by Facebook, where it’s used to power Parse, a mobile backend-as-a-service provider.

I had a chance to speak with Igor and learn a bit more about these questions:

Percona: Give me a brief history of yourself: how you got into database development, where you work, what you love about it?

Igor: After I finished my undergrad at the University of Zagreb in Croatia, I joined the University of Wisconsin-Madison’s Master’s program. Even though UW-M is famous for its work on databases, during my two years there I worked in a different area. However, when I joined Facebook after school, I heard of a cool new project called RocksDB. Everything about building a new storage engine sounded exciting to me, although I had zero idea how thrilling the ride would actually be. The best part was working with and getting to know amazing people from Facebook, Parse, MongoDB, Percona, and many other companies that are using or experimenting with RocksDB.

Percona: Your talk is called “Everything you wanted to know about MongoRocks.” Briefly, what is MongoRocks and why did it get developed?

Igor: Back in 2014 MongoDB announced that they were building a pluggable storage engine API, which would enable MongoDB users to seamlessly choose the storage engine that works best for their workload. Their first prototype actually used RocksDB as the storage engine, which was very exciting for us. However, they soon acquired WiredTiger, another great storage engine, and decided to abandon the MongoDB+RocksDB project. At the same time, Parse was running into scaling challenges with their MongoDB deployment. We decided to help out and take over the development of MongoRocks. We started rolling it out at Parse in March 2015 and completed the rollout in October. Running MongoRocks instead of MongoDB with the MMAP storage engine resulted in much greater efficiency and lower latencies in some scenarios. Some of the experiences are captured in Parse’s blog posts: http://blog.parse.com/announcements/mongodb-rocksdb-parse/ and http://blog.parse.com/learn/engineering/mongodb-rocksdb-writing-so-fast-it-makes-your-head-spin/

Percona: What are the workloads and database environments that are best suited for a MongoRocks deployment? Do you see an expansion of the solution to encompass other scenarios?

Igor: Generally speaking, MongoRocks should compress really well. Over the years of using LSM engines, we learned that their compression rates are hard to beat. The difference can sometimes be substantial. For example, many benchmarks of MyRocks, which is MySQL with the RocksDB storage engine, have shown that compressed InnoDB uses twice as much space as compressed RocksDB. With better compression, more of your data fits in memory, which could also improve read latencies and lower the stress on storage media. However, this is a tricky question to answer generally. It really depends on the metrics you care about. One great thing about Mongo and different storage engines is that the replication format is the same across all of them, so it’s simple to try it out and see how it performs under your workload. You can just add an additional node to your replica set that uses RocksDB and monitor the metric you care about on that node.
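
For readers who want to try that, here is a minimal sketch of the approach (the host, port, paths and replica set name are placeholders, and it assumes a mongod binary built with the RocksDB storage engine, as in the MongoRocks builds):

// 1) Start an extra mongod that uses RocksDB, e.g.:
//      mongod --replSet rs0 --storageEngine rocksdb --dbpath /data/rocks --port 27018
// 2) From the mongo shell on the current primary, add it as a hidden,
//    non-voting member so it only follows the replication stream:
rs.add({ host: "test-host:27018", priority: 0, hidden: true, votes: 0 })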

Percona: What are the unique database requirements at Facebook that keep you awake at night? What would you most like to see feature-wise in MongoDB in the near future (or any database technology)?

Igor: One of the most exciting database projects that we’re working on at Facebook is MyRocks, which I mentioned previously. Currently, we use MySQL with InnoDB to store our Facebook graph and we are experimenting with replacing that with MyRocks. The main motivation behind the project is 2x better compression rates, but we also see better performance in some areas. If you’re attending Percona Live Europe I encourage you to attend either Mark Callaghan’s talk on MyRocks, or Yoshinori’s 3-hour tutorial to learn more.

Percona: What are you looking forward to the most at Percona Live Europe this year?

Igor: The best part of attending conferences is the people. I am looking forward to seeing old friends and meeting new ones. If you like to talk storage engines, hit me up!

You can read more about Igor’s thoughts on MongoRocks on his Twitter feed.

Want to find out more about Igor, Facebook and MongoRocks? Register for Percona Live Europe 2016, and come see his talk Everything you wanted to know about MongoRocks.

Use the code FeaturedTalk and receive €25 off the current registration price!

Percona Live Europe 2016: Amsterdam is the premier event for the diverse and active open source database community. The conferences have a technical focus with an emphasis on the core topics of MySQL, MongoDB, and other open source databases. Percona Live tackles subjects such as analytics, architecture and design, security, operations, scalability and performance. It also provides in-depth discussions for your high-availability, IoT, cloud, big data and other changing business needs. This conference is an opportunity to network with peers and technology professionals by bringing together accomplished DBAs, system architects and developers from around the world to share their knowledge and experience. All of these people help you learn how to tackle your open source database challenges in a whole new way.

This conference has something for everyone!

Percona Live Europe 2016: Amsterdam is October 3-5 at the Mövenpick Hotel Amsterdam City Centre.

Curious apparent index lookup failure. Can anyone explain this unexpected behavior?

Latest Forum Posts - September 7, 2016 - 9:46am
I have run across what appears to be a strange indexing failure, and a rather counter-intuitive workaround for it. This is in Percona Server 5.6.29-76.2-log, amd64 platform, running on RHEL6.7.


Consider the following table definition:

CREATE TABLE `ip2location` (
`ip_from` int(10) unsigned zerofill NOT NULL DEFAULT '0000000000',
`ip_to` int(10) unsigned zerofill NOT NULL DEFAULT '0000000000',
`country_code` char(2) NOT NULL DEFAULT '',
`country_name` varchar(255) NOT NULL DEFAULT ' ',
PRIMARY KEY (`ip_from`,`ip_to`)
)

This table contains 378,261 rows. Each ip_from and ip_to value appears in the table exactly once, as the first and last address in INET_ATON form of a specific IP range assigned to a particular country. So not only is (ip_from, ip_to) unique, but ip_from and ip_to are themselves also unique. ip_from and ip_to both increase strictly monotonically, ip_to is always strictly greater than ip_from, and there are no overlapping ranges. Any IP address will fall between the ip_from and ip_to fields of exactly one row.

Now, we do a lookup of an IP in this table as follows:

SELECT COUNT(*) FROM ip2location WHERE ip_from <= INET_ATON('65.78.23.18') AND ip_to >= INET_ATON('65.78.23.18');

This should and does return a single row. It should also need to SCAN only a single row. INET_ATON('65.78.23.18') resolves to 1095636754, and there is exactly one row in which ip_from <= 1095636754 <= ip_to. That is this row:

mysql> SELECT * FROM ip2location WHERE ip_from <= INET_ATON('65.78.23.18') AND ip_to >= INET_ATON('65.78.23.18')\G
*************************** 1. row ***************************
ip_from: 1095628288
ip_to: 1096224919
country_code: US
country_name: United States
1 row in set (0.04 sec)

MySQL should be able to use the index to resolve both WHERE conditions. However, let's look at the execution plan:

mysql> EXPLAIN SELECT * FROM ip2location WHERE ip_from <= INET_ATON('65.78.23.18') AND ip_to >= INET_ATON('65.78.23.18')\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: ip2location
type: range
possible_keys: PRIMARY
key: PRIMARY
key_len: 4
ref: NULL
rows: 151312
Extra: Using where
1 row in set (0.00 sec)

We can also write the query this way, with the exact same execution plan and results:

mysql> SELECT * FROM ip2location WHERE INET_ATON('65.78.23.18') BETWEEN ip_from AND ip_to\G
*************************** 1. row ***************************
ip_from: 1095628288
ip_to: 1096224919
country_code: US
country_name: United States
1 row in set (0.04 sec)

mysql> EXPLAIN SELECT * FROM ip2location WHERE INET_ATON('65.78.23.18') BETWEEN ip_from AND ip_to\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: ip2location
type: range
possible_keys: PRIMARY
key: PRIMARY
key_len: 4
ref: NULL
rows: 151312
Extra: Using where
1 row in set (0.00 sec)


Whichever way we choose to write the query, we should be able to resolve this to a single row from the index. However, mysqld is scanning 151,000 rows. Why is this?

Here's a clue:

mysql> EXPLAIN SELECT * FROM ip2location WHERE ip_from <= INET_ATON('65.78.23.18')\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: ip2location
type: range
possible_keys: PRIMARY
key: PRIMARY
key_len: 4
ref: NULL
rows: 151312
Extra: Using where
1 row in set (0.00 sec)

That's exactly the same number of rows. Apparently mysqld is looking up the first WHERE clause on ip_from against the primary key and getting 151,312 possible rows. But it is then scanning those 151,312 rows for the ip_to limit, instead of now checking the second WHERE clause against the same index to narrow it down to a single row.

I was able to devise the following somewhat counter-intuitive workaround, which exploits the query optimizer and a direct index lookup to actually get BETTER performance by adding a subquery in which the subquery's table lookup is optimized out:

mysql> EXPLAIN SELECT * FROM ip2location WHERE ip_from = (select max(ip_from) FROM ip2location WHERE ip_from <= INET_ATON('65.78.23.18'))\G
*************************** 1. row ***************************
id: 1
select_type: PRIMARY
table: ip2location
type: ref
possible_keys: PRIMARY
key: PRIMARY
key_len: 4
ref: const
rows: 1
Extra: Using where
*************************** 2. row ***************************
id: 2
select_type: SUBQUERY
table: NULL
type: NULL
possible_keys: NULL
key: NULL
key_len: NULL
ref: NULL
rows: NULL
Extra: Select tables optimized away
2 rows in set (0.00 sec)


This resolves the query with only a single-row table lookup. But it should *already* be only needing to look up a single row, without having to use this slightly cryptic workaround.

Can anyone explain to me why the index is only resolving the ip_from condition in the "normal" query, instead of both the ip_from and ip_to conditions? Is it because the conditions are expressed as <= and >=?

innobackupex unable to restore tables with FULLTEXT index

Latest Forum Posts - September 6, 2016 - 11:00pm
We have been trying to restore a Percona (partial + incremental) MySQL backup, but we are failing to recreate the FULLTEXT index for one of the tables. The process followed is as below:

**For Backup**

# Step I: Take Partial-Full Backup (db1 out of many dbs)

innobackupex --user=root --password=**** --databases="db1" /path/to/basebackupdir --no-timestamp

# Step II: Take Partial-Incremental Backup

innobackupex --incremental /path/to/incbackupdir --databases="db1" --incremental-basedir=/path/to/basebackupdir/ --user=root --password=**** --no-timestamp

**For Restoration**

# Step I: apply log to full backup
innobackupex --apply-log --redo-only fullbackup

#Step II: apply-log incremental backup
innobackupex --apply-log fullbackup --incremental-dir=/tmp/tmp_inc

#Step III: Export Backup
innobackupex --apply-log --export fullbackup

#Step IV: CREATE Required database in MySQL
mysql -uroot -p**** -e "DROP DATABASE IF EXISTS db1"
mysql -uroot -p**** -e "CREATE DATABASE db1"

#Step V: Import DATABASE Structure
mysql -uroot -p**** db1 < db1.sql

#Step VI: Discard tablespace (see the SQL sketch after these steps)

#Step VII: Import table
cd /tmp/fullbackup/db1
cp -R *.ibd /var/lib/mysql/db1
cp -R *.cfg /var/lib/mysql/db1

#Step VIII: Import tablespace (see the SQL sketch after these steps)

#Step IX: restart mysql
service mysql restart
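
For reference, steps VI and VIII were listed above without commands; per table they would normally be the standard transportable-tablespace statements, roughly like this (mytable is a placeholder name):

ALTER TABLE db1.mytable DISCARD TABLESPACE;
-- copy the table's .ibd and .cfg files into /var/lib/mysql/db1/, then:
ALTER TABLE db1.mytable IMPORT TABLESPACE;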

The restoration process works fine. Still, we are failing in that one of the tables cannot recreate the structure for its FULLTEXT index (it gets corrupted), and when we try to create the table manually the error is that the table already exists.

**Please Help**

MyRocks Docker images

Latest MySQL Performance Blog posts - September 6, 2016 - 1:28pm

In this post, I’ll point you to MyRocks Docker images with binaries, allowing you to install and play with the software.

During the @Scale conference, Facebook announced that MyRocks is mature enough that it has been installed on 5% of Facebook’s MySQL slaves. This has saved 50% of the space on these slaves, which allows them to decrease the number of servers by half. Check out the announcement here:  https://code.facebook.com/posts/190251048047090/myrocks-a-space-and-write-optimized-mysql-database/

Those are pretty impressive numbers, so I decided to take a serious look at MyRocks. The biggest showstopper is usually binary availability, since Facebook only provides the source code: https://github.com/facebook/mysql-5.6.

You can get the image from https://hub.docker.com/r/perconalab/myrocks/.

To start MyRocks:

docker run -d --name myr -P  perconalab/myrocks

To access it, use a regular MySQL client:

mysql -h127.0.0.1

From there you should see RocksDB installed:

show engines;
+------------+---------+------------------------+--------------+------+------------+
| Engine     | Support | Comment                | Transactions | XA   | Savepoints |
+------------+---------+------------------------+--------------+------+------------+
| ROCKSDB    | DEFAULT | RocksDB storage engine | YES          | YES  | YES        |

I hope this makes it easier to start experimenting with MyRocks!

MongoDB at Percona Live Europe

Latest MySQL Performance Blog posts - September 6, 2016 - 8:28am

This year, you will find a great deal about MongoDB at Percona Live Europe.

As we continue to work on growing the independent MongoDB ecosystem, this year’s Percona Live Europe in Amsterdam includes many talks about MongoDB. If your company uses MongoDB technologies, is focused exclusively on developing with MongoDB or MongoDB operations, or is just evaluating MongoDB, attending Percona Live Europe will prove a valuable experience.  

As always with Percona Live conferences, the focus is squarely on the technical content — not sales pitches. We encourage our speakers to tell the truth: the good, the bad and the ugly. There is never a “silver bullet” when it comes to technology — only tradeoffs between different solution options.

As someone who has worked in database operations for more than 15 years, I recognize and respect the value of “negative information.” I like knowing what does not work, what you should not do and where trouble lies. Negative information often proves more valuable than knowing how great the features of a specific technology work — especially since the product’s marketing team tends to highlight those very well (and they seldom require independent coverage).

For MongoDB at this year’s Percona Live Europe:
  • We have talks about MongoRocks, a RocksDB powered storage engine for MongoDB — the one you absolutely need to know about if you’re looking to run the most efficient MongoDB deployment at scale!  
  • We will cover MongoDB Backups best practices, as well as several talks about MongoDB monitoring and management  (1, 2, 3) — all of them with MongoDB Community Edition and Percona Server for MongoDB (so they don’t require a MongoDB Enterprise subscription).

There will also be a number of talks about how MongoDB interfaces with other technologies. We show how ToroDB can use the MongoDB protocol while storing data in a relational database (and why that might be a good idea), we contrast and compare MySQL and MongoDB geospatial features, and we examine MongoDB from a MySQL DBA’s point of view.

We also show how to use Apache Spark to unify data from MongoDB, MySQL, and Redis, and discuss general best practices for choosing databases for different application needs.

Finally, if you’re just starting with MongoDB and would like a jump start before attending more detailed MongoDB talks, we’ve got a full day MongoDB 101 tutorial for you.

Join us for the full conference, or register for just one day if that is all your schedule allows. But come to Percona Live Europe in Amsterdam on October 3-5 to get the best and latest MongoDB information.

MariaDB FailOver Cluster using GTID?

Latest Forum Posts - September 6, 2016 - 7:25am
Hello,

We are looking into ways of implementing a "Failover Cluster" with three servers spread around two DataCenters (S1 and S2 on the primary DataCenter and S3 on the secondary).

Currently we have master-master replication between S1 and S2 (with log-slave-updates = OFF) and S3 replicating from both of them. We know it isn't an ideal setup and it has some big flaws when it comes to promoting S3 as a master.
We will soon be conducting a Business Continuity test, switching the load from one server to the other and checking that all our services continue working. For this we would love to have a master-master-master kind of replication. (Our services are more than just a website and a database, so maybe we switch the load to Server 3 and then, because of other issues, decide to fall back to Server 1 while we work through the issues.)

We are toying with the idea of setting up a topology using GTIDs, but we aren't sure we understand GTIDs fully, and we found a few things we couldn't find answers for.

Does what we are doing make sense (using MariaDB with GTIDs for a "failover cluster")? Or should we switch to something like Galera Cluster? (We were hesitant to make such a big change.) (Question 1)

The setup we are thinking of is (a rough my.cnf sketch is shown after the questions below):
S1: Server_id:1, Domain_id:1
S2: Server_id:2, Domain_id:2
S3: Server_id:3, Domain_id:3

Server 1 is a slave from 2 and 3
Server 2 is a slave from 1 and 3
Server 3 is a slave from 1 and 2

Variables
gtid_ignore_duplicates = ON (We figure this is a must, right?) (Question 2)
gtid_strict_mode = ? (Isn't this redundant with ignore_duplicates? What should we set this to?) (Question 3)
log_slave_updates = ON (We do need this so all the domain streams are recorded on every server, right?) (Question 4)
If we were to add a fourth server just as a slave, say for backup purposes, and we just "connect it" to replicate from Server 3, will it also get all the transactions happening on Servers 1 and 2? We aren't sure whether it will only replicate S3's Domain_id (3) and skip the other two (1, 2). (Question 5)
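
For illustration only, the settings above would translate into something like the following my.cnf fragment on S1 (an untested sketch; S2 and S3 would use their own server_id and gtid_domain_id values):

[mysqld]
server_id              = 1
gtid_domain_id         = 1
log_bin                = mysql-bin
log_slave_updates      = ON
gtid_ignore_duplicates = ON
# gtid_strict_mode left at its default here, pending the answer to Question 3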

Thanks in advance

Error on pt-table-sync

Latest Forum Posts - September 5, 2016 - 4:34am
While synchronizing two large tables that contain BLOB data, I get this error: "Use of uninitialized value in string ne at /usr/bin/pt-table-sync line 5153".
Is it a bug? With other tables it works fine.

InnoDB Troubleshooting: Q & A

Latest MySQL Performance Blog posts - September 2, 2016 - 2:12pm

In this blog, I will provide answers to the Q & A for the InnoDB Troubleshooting webinar.

First, I want to thank everybody for attending the August 11 webinar. The recording and slides for the webinar are available here. Below is the list of your questions that I wasn’t able to answer during the webinar, with responses:

Q: What’s a good speed for buffer pool speed/size for maximum query performance?

A: I am sorry, I don’t quite understand the question. InnoDB buffer pool is an in-memory buffer. In an ideal case, your whole active dataset (rows that are accessed by application regularly) should be in the buffer pool. There is a good blog post by Peter Zaitsev describing how to find the best size for the buffer pool.

Q: Any maximum range for these InnoDB options?

A: I am again sorry, I only see questions after the webinar and don’t know which slide you were on when you asked about options. But generally speaking, the maximum ranges should be limited by hardware: the size of InnoDB buffer pool limited by the amount of physical memory you have, the size of innodb_io_capacity  limited by the number of IOPS which your disk can handle, and the number of concurrent threads limited by the number of CPU cores.

Q: On a AWS r3.4xlarge, 16 CPU, 119GB of RAM, EBS volumes, what innodb_thread_concurrency, innodb_read_io_threads, innodb_write_io_threads would you recommend? and innodb_read_io_capacity?

A: innodb_thread_concurrency = 16, innodb_read_io_threads = 8, innodb_write_io_threads = 8, and innodb_io_capacity depends on the speed of your disks. As far as I know, AWS offers disks with different speeds. You should consult the IOPS rating of your volumes when setting innodb_io_capacity, and their "Max IOPS" when setting innodb_io_capacity_max.
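
As a sketch only (the I/O numbers below are placeholders and must be replaced with the IOPS your EBS volumes actually provide), those recommendations would look like this in my.cnf:

[mysqld]
innodb_thread_concurrency = 16
innodb_read_io_threads    = 8
innodb_write_io_threads   = 8
# replace with the sustained and maximum IOPS of your volumes:
innodb_io_capacity        = 2000
innodb_io_capacity_max    = 4000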

Q: About InnoDB structures and parallelism: Are there InnoDB settings that can prevent or reduce latching (causes semaphore locks and shutdown after 600s) that occur trying to add an index object to memory but only DML queries on the primary key are running?

A: Unfortunately, semaphore locks for the CREATE INDEX command are not avoidable. You only can affect other factors that speed up index creation. For example, how fast you write records to the disk or how many concurrent queries you run. Kill queries that are waiting for a lock too long. There is an old feature request asking to handle long semaphore waits gracefully. Consider clicking “Affects Me” button to bring it to the developers’ attention.

Q: How can we check these threads?

A: I assume you are asking about InnoDB threads? You can find information about running threads in SHOW ENGINE INNODB STATUS :

--------
FILE I/O
--------
I/O thread 0 state: waiting for completed aio requests (insert buffer thread)
I/O thread 1 state: waiting for completed aio requests (log thread)
I/O thread 2 state: waiting for completed aio requests (read thread)
I/O thread 3 state: waiting for completed aio requests (read thread)
I/O thread 4 state: waiting for completed aio requests (read thread)
I/O thread 5 state: waiting for completed aio requests (read thread)
I/O thread 6 state: waiting for completed aio requests (write thread)
I/O thread 7 state: waiting for completed aio requests (write thread)
I/O thread 8 state: waiting for completed aio requests (write thread)
I/O thread 9 state: waiting for completed aio requests (write thread)
Pending normal aio reads: 0 [0, 0, 0, 0] , aio writes: 0 [0, 0, 0, 0] ,
 ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 1; buffer pool: 0
529 OS file reads, 252 OS file writes, 251 OS fsyncs
0.74 reads/s, 16384 avg bytes/read, 7.97 writes/s, 7.94 fsyncs/s

And in the Performance Schema THREADS table:

mysql> select thread_id, name, type from performance_schema.threads where name like '%innodb%';
+-----------+----------------------------------------+------------+
| thread_id | name                                   | type       |
+-----------+----------------------------------------+------------+
|         2 | thread/innodb/io_handler_thread        | BACKGROUND |
|         3 | thread/innodb/io_handler_thread        | BACKGROUND |
|         4 | thread/innodb/io_handler_thread        | BACKGROUND |
|         5 | thread/innodb/io_handler_thread        | BACKGROUND |
|         6 | thread/innodb/io_handler_thread        | BACKGROUND |
|         7 | thread/innodb/io_handler_thread        | BACKGROUND |
|         8 | thread/innodb/io_handler_thread        | BACKGROUND |
|         9 | thread/innodb/io_handler_thread        | BACKGROUND |
|        10 | thread/innodb/io_handler_thread        | BACKGROUND |
|        11 | thread/innodb/io_handler_thread        | BACKGROUND |
|        13 | thread/innodb/srv_lock_timeout_thread  | BACKGROUND |
|        14 | thread/innodb/srv_monitor_thread       | BACKGROUND |
|        15 | thread/innodb/srv_error_monitor_thread | BACKGROUND |
|        16 | thread/innodb/srv_master_thread        | BACKGROUND |
|        17 | thread/innodb/srv_purge_thread         | BACKGROUND |
|        18 | thread/innodb/page_cleaner_thread      | BACKGROUND |
|        19 | thread/innodb/lru_manager_thread       | BACKGROUND |
+-----------+----------------------------------------+------------+
17 rows in set (0.00 sec)

Q: Can you give a brief explanation of how an InnoDB thread is not the same as a connection thread?

A: You create a MySQL connection thread each time a client connects to the server. Generally, the lifetime of this thread is the same as the connection (I won’t discuss the thread cache and thread pool plugin here to avoid unnecessary complexity). This way, if you have 100 connections you have 100 connection threads. But not all of these threads do something. Some are actively querying MySQL, but others are sleeping. You can find the number of threads actively doing something by examining the status variable Threads_running. InnoDB doesn’t create as many threads as there are connections to perform its job effectively. It creates fewer threads (ideally, the same as the number of CPU cores). So, for example, just 16 InnoDB threads can handle 100 and more connection threads effectively.
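
For example, you can compare the total number of connections with the number of threads actually doing work:

SHOW GLOBAL STATUS LIKE 'Threads_connected';
SHOW GLOBAL STATUS LIKE 'Threads_running';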

Q: How can we delete bulk data in Percona XtraDB Cluster?  without affecting production? nearly 6 million records worth 40 GB size table

A: You can use the utility pt-archiver. It deletes rows in chunks. While your database will still have to handle all these writes, the option --max-flow-ctl  pauses a purge job if the cluster spent too much time pausing for flow control.
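
A hedged example of such a purge job (the host, database, table, and WHERE condition are placeholders; the chunk sizes and flow-control threshold should be tuned for your cluster):

pt-archiver --source h=localhost,D=mydb,t=big_table --purge \
  --where "created_at < NOW() - INTERVAL 90 DAY" \
  --limit 1000 --txn-size 1000 --max-flow-ctl 0 --progress 100000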

Q: Why do we sometimes get “–tc-heuristic-recover” message in error logs? Especially when we recover after a crash? What does this indicate? And should we commit or rollback?

A: This means you used two transactional engines that support XA in the same transaction, and mysqld crashed in the middle of the transaction. Now mysqld cannot determine which strategy to use when recovering transactions: either COMMIT or ROLLBACK. Strangely, this option is documented as “not used”. It certainly is, however. Test case for bug #70860 proves it. I reported a documentation bug #82780.

Q: Which parameter controls the InnoDB thread count?

A: The main parameter is innodb_thread_concurrency. For fine tuning, use innodb_read_io_threads, innodb_write_io_threads, innodb_purge_threads and innodb_page_cleaners.

Q: At what frequency will the InnoDB status be dumped in a file by using innodb-status-file?

A: Approximately every 15 seconds, but it can vary slightly depending on the server load.

Q: I faced an issue where a disk got detached from a running server due to some problem on AWS EC2, and MySQL went into default mode. After MySQL was stopped and started, we observed that the slave skipped around 15 minutes of data. We noticed it because of a foreign key relationship issue. Can you please explain why the slave skipped data?

A: Amazon Aurora supports two kinds of replication: physical as implemented by Amazon (this is the default for replicas in the same region), and the regular asynchronous replication for cross-region replication. If you use the former, I cannot help you because this is a closed-source Amazon feature. You need to report a bug to Amazon. If you used the latter, this looks buggy too. According to my experience, it should not happen. With regular replication you need to check which transactions were applied (best if you use GTIDs, or at least the log-slave-updates option) and which were not. If you find a gap, report a bug at bugs.mysql.com.

Q: Can you explain more about adaptive hash index?

A: InnoDB stores its indexes on disk as a B-Tree. While B-Tree indexes are effective in general, some queries can take advantage of much simpler hash indexes. While your server is in use, InnoDB analyzes the queries it is currently processing and builds an in-memory hash index inside the buffer pool (using the prefix of the B-Tree key). The adaptive hash index generally works well: “with some workloads, the speedup from hash index lookups greatly outweighs the extra work to monitor index lookups and maintain the hash index structure.” However, it can also become a point of contention: until version 5.7.8 it was protected by a single latch, which could be a bottleneck under heavy workloads. Since 5.7.8, the adaptive hash index can be partitioned. The number of parts is controlled by the option innodb_adaptive_hash_index_parts.
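
To see how your server is configured, you can check the related variables, and the feature can be switched off at runtime if it turns out to hurt your workload:

SHOW GLOBAL VARIABLES LIKE 'innodb_adaptive_hash_index%';
SET GLOBAL innodb_adaptive_hash_index = OFF;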


xtrabackup - is it a complete replacement for innobackupex?

Latest Forum Posts - September 1, 2016 - 11:28pm
For some time, we've been using innobackupex and it works great (thanks for it). I saw that innobackupex is deprecated and should be replaced with the xtrabackup binary, but according to the documentation, that could not back up anything besides InnoDB. I was thinking some other "wrapper" would be introduced, but I didn't find anything.

Today I just blindly tried out xtrabackup, and it seems like it also backs up .frm and MyISAM files...

...
160902 06:03:38 Executing FLUSH NO_WRITE_TO_BINLOG TABLES...
160902 06:03:38 Executing FLUSH TABLES WITH READ LOCK...
...
160902 06:03:38 [01] Copying ./mysql/column_stats.MYD to /backup/mysql/column_stats.MYD
160902 06:03:38 [01] ...done
160902 06:03:38 [01] Copying ./mysql/column_stats.MYI to /backup/mysql/column_stats.MYI
160902 06:03:38 [01] ...done
160902 06:03:38 [01] Copying ./mysql/column_stats.frm to /backup/mysql/column_stats.frm
160902 06:03:38 [01] ...done
160902 06:03:38 [01] Copying ./mysql/columns_priv.MYD to /backup/mysql/columns_priv.MYD
160902 06:03:38 [01] ...done
160902 06:03:38 [01] Copying ./mysql/columns_priv.MYI to /backup/mysql/columns_priv.MYI
160902 06:03:38 [01] ...done
160902 06:03:38 [01] Copying ./mysql/columns_priv.frm to /backup/mysql/columns_priv.frm
...
160902 06:03:40 Executing UNLOCK TABLES
160902 06:03:40 All tables unlocked

Should the documentation (https://www.percona.com/doc/percona-...imitation.html) be updated then?

Thanks

innobackupex --databases restore: MariaDB 10.1.16 does not support discard tablespace

Latest Forum Posts - September 1, 2016 - 11:18pm
Hi all,

I am trying to back up only one database (--databases) of about 150GB of data for a master (single MariaDB 10.1.16) and a replication slave (one node of a Galera cluster). mysqldump is not an option as it takes too long. I successfully did the backup with the following:
innobackupex --user=root --password=password --databases="XXX" /opt/mysql/innobackupbackup/xxxbackup --slave-info --no-timestamp --parallel=4
innobackupex --apply-log --export /opt/mysql/innobackupbackup/xxxbackup

But when I restore it, MariaDB 10.1.16 does not support the statements below:
ALTER TABLE mydatabase.mytable DISCARD TABLESPACE;
ALTER TABLE mydatabase.mytable IMPORT TABLESPACE;

How can we restore it and know the binlog position of the master, so we can start up the slave?
innodb_file_per_table option is enabled in both servers.
Thanks.
Ritchie

Percona Live Europe featured talk with Manyi Lu — MySQL 8.0: what’s new in Optimizer

Latest MySQL Performance Blog posts - September 1, 2016 - 2:09pm

Welcome to a new Percona Live Europe featured talk with Percona Live Europe 2016: Amsterdam speakers! In this series of blogs, we’ll highlight some of the speakers that will be at this year’s conference. We’ll also discuss the technologies and outlooks of the speakers themselves. Make sure to read to the end to get a special Percona Live Europe registration bonus!

In this Percona Live Europe featured talk, we’ll meet Manyi Lu, Director Software Development at Oracle. Her talk will be on MySQL 8.0: what’s new in Optimizer. There are substantial improvements in the optimizer in MySQL 5.7 and MySQL 8.0. Most noticeably, users can now combine relational data with NoSQL using the new JSON features. I had a chance to speak with Manyi and learn a bit more about MySQL 8.0:

Percona: Give me a brief history of yourself: how you got into database development, where you work, what you love about it.

Manyi: Oh, my interest in database development goes way back to university almost twenty years ago. After graduation, I joined local startup Clustra and worked on the development of a highly available distributed database system for the telecom sector. Since then, I have worked on various aspects of the database, kernel, and replication. Lately I am heading the MySQL optimizer and GIS team.

What I love most about my work are the talented and dedicated people I am surrounded by, both within the team and in the MySQL community. We are passionate about building a database used by millions.

Percona: Your talk is called “MySQL 8.0: what’s new in Optimizer.” So, obvious question, what is new in the MySQL 8.0 Optimizer?

Manyi: There are a number of interesting features in 8.0. CTE, or Common Table Expression, has been one of the most demanded SQL features. MySQL 8.0 will support both the WITH and WITH RECURSIVE clauses. A recursive CTE is quite useful for producing reports based on hierarchical data. For DBAs, Invisible Index should make life easier. They can mark an index invisible to the optimizer, check the performance and then decide to either drop it or keep it. On the performance side, we have improved the performance of table scans, range scans and similar queries by batching up records read from the storage engine into the server. We have significant work happening in the cost model area. In order to produce more optimal query plans, we have started the work on adding support for histograms, and for taking into account whether data is already in memory or needs to be read from disk.
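
As a small illustration of the two features mentioned (the table, column, and index names below are invented), a recursive CTE walking a hierarchy and an invisible index could look like this in MySQL 8.0:

-- walk a hypothetical employees(id, name, manager_id) hierarchy
WITH RECURSIVE reports AS (
  SELECT id, name, manager_id, 1 AS depth
  FROM employees
  WHERE manager_id IS NULL
  UNION ALL
  SELECT e.id, e.name, e.manager_id, r.depth + 1
  FROM employees e
  JOIN reports r ON e.manager_id = r.id
)
SELECT * FROM reports;

-- hide an index from the optimizer before deciding whether to drop it
ALTER TABLE employees ALTER INDEX idx_manager INVISIBLE;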

Besides the optimizer, my team is also putting a major effort into utf8mb4 support. We have added a large set of utf8mb4 collations based on the latest Unicode standard. These collations have better support for emojis and languages. Utf8 is the dominant character encoding for the web, and this move will make life easier for the vast majority of MySQL users. We also plan to add support for accent- and case-sensitive collations.

Please keep in mind that 8.0.0 is the first milestone release. There are quite a few features in the pipeline down the road.

Percona: How are some of the bridges between relational and NoSQL environments (like JSON support) of benefit to database deployments?

Manyi: The JSON support that we introduced in 5.7 has been immensely popular because it solves some very basic day-to-day problems. A relational database forces you to have a fixed schema, and the JSON datatype gives you the flexibility to store data without a schema. In the past, people stored relational data in MySQL and had to install yet another datastore to handle unstructured or semi-structured data that is schema-less in nature. With JSON support, you can store both relational and non-relational data in the same database, which makes database deployment much simpler. And not only that, but you can also perform queries across the boundaries of relational and non-relational data.
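
A quick sketch of what that looks like in practice (the table and column names are invented):

-- relational columns and a schema-less JSON document side by side
CREATE TABLE events (
  id      BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  user_id BIGINT UNSIGNED NOT NULL,
  payload JSON
);

-- query across the relational/non-relational boundary
SELECT user_id, payload->>'$.device.os' AS os
FROM events
WHERE payload->>'$.type' = 'login';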

Clients that communicate with a MySQL Server using the newly introduced X Protocol can use the X DevAPI to develop applications. Developers do not even need to understand SQL if they do not want to. There are a number of connectors that support the X protocol, so you can use X DevApi in your preferred programming language. We have made MySQL more appealing to a larger range of developers.

Percona: What is the latest on the refactoring of the MySQL Optimizer and Parser?

Manyi: The codebase of the optimizer and parser used to be quite messy. The parsing, optimizing and execution stages were intermingled, and the code was hard to maintain. We have had a long-running effort to clean up the codebase. In 5.7, the optimization stage was separated from the execution stage. In 8.0, the focus is on refactoring the prepare stage and a complete parser rewrite.

We have already seen the benefits of the refactoring work. Development time on new features has been reduced. CTE is a good example. Without refactoring done previously, it would have taken much longer to implement CTE. With a cleaner codebase, we also managed to reduce the bug count, which means more development resources can be allocated to new features instead of maintenance.

Percona: Where do you see MySQL heading in order to deal with some of the database trends that keep you awake at night?

Manyi: One industry trend is cloud computing and Database as a Service becoming viable alternatives to in-house databases. In particular, it speeds up technology deployments and reduces initial investments for smaller organizations. MySQL, being the most popular open source database, fits well into the cloud data management trend.

What we can do is make MySQL even better in the cloud setting. E.g., better support for horizontal scaling, fail-over, sharding, cross-shard queries and the like.

Percona: What are you looking forward to the most at Percona Live Europe this year?

Manyi: I like to speak and get feedback from MySQL users. Their input has a big impact on our roadmap. I also look forward to learning more about innovations by web-scale players like Facebook, Alibaba and others. I always feel more energized after talking to people who are passionate about MySQL and databases in general.

You can learn more about Manyi and her thoughts on MySQL 8.0 here: http://mysqlserverteam.com/

Want to find out more about Manyi, MySQL and Oracle? Register for Percona Live Europe 2016, and see her talk MySQL 8.0: what’s new in Optimizer.

Use the code FeaturedTalk and receive €25 off the current registration price!

Percona Live Europe 2016: Amsterdam is the premier event for the diverse and active open source database community. The conferences have a technical focus with an emphasis on the core topics of MySQL, MongoDB, and other open source databases. Percona Live tackles subjects such as analytics, architecture and design, security, operations, scalability and performance. It also provides in-depth discussions for your high-availability, IoT, cloud, big data and other changing business needs. This conference is an opportunity to network with peers and technology professionals by bringing together accomplished DBAs, system architects and developers from around the world to share their knowledge and experience. All of these people help you learn how to tackle your open source database challenges in a whole new way.

This conference has something for everyone!

Percona Live Europe 2016: Amsterdam is October 3-5 at the Mövenpick Hotel Amsterdam City Centre.

pt-online-schema-change pausing because {something}=0, bug?

Latest Forum Posts - August 31, 2016 - 1:14am
I run pt-online-schema-change like this:
pt-online-schema-change --alter 'ADD `test` TEXT CHARACTER SET utf8 COLLATE utf8_general_ci' --charset 'utf8' --host 'localhost' --user 'root' --password 'password' --max-load wsrep_flow_control_recv=1,wsrep_local_send_queue=1 --critical-load Threads_running=400 --null-to-not-null --no-check-alter --execute --alter-foreign-keys-method none --force --statistics D=database,t=table

Command output:
No slaves found. See --recursion-method if host localhost has slaves.
Not checking slave lag because no slaves were found and --check-slave-lag was not specified.
Operation, tries, wait:
analyze_table, 10, 1
copy_rows, 10, 0.25
create_triggers, 10, 1
drop_triggers, 10, 1
swap_tables, 10, 1
update_foreign_keys, 10, 1
Not updating foreign keys because --alter-foreign-keys-method=none. Foreign keys that reference the table will no longer work.
Altering `database`.`table`...
Creating new table...
Created new table database._table_new OK.
Altering new table...
Altered `database`.`_table_new` OK.
2016-08-31T09:54:45 Creating triggers...
2016-08-31T09:54:45 Created triggers OK.
2016-08-31T09:54:45 Copying approximately 85 rows...
Pausing because wsrep_flow_control_recv=0, wsrep_local_send_queue=0.
Pausing because wsrep_flow_control_recv=0, wsrep_local_send_queue=0.
Pausing because wsrep_flow_control_recv=0, wsrep_local_send_queue=0.
Pausing because wsrep_flow_control_recv=0, wsrep_local_send_queue=0.
^C# Exiting on SIGINT.
Not dropping triggers because the tool was interrupted. To drop the triggers, execute:
DROP TRIGGER IF EXISTS `database`.`pt_osc_database_table_del`;
DROP TRIGGER IF EXISTS `database`.`pt_osc_database_table_upd`;
DROP TRIGGER IF EXISTS `database`.`pt_osc_database_table_ins`;
Not dropping the new table `database`.`_table_new` because the tool was interrupted. To drop the new table, execute:
DROP TABLE IF EXISTS `database`.`_table_new`;
# Event Count
# ====== =====
# INSERT 1
`database`.`table` was not altered. From the docs for --max-load: "pause if any status variables are higher than their thresholds" - nothing about a zero value.

So, how do you use --max-load with variables which can take a zero value in the normal state of the db?

I looked into the source code and found what I think is the reason why this happens:
https://github.com/percona/percona-t...a-change#L5188
Perl code (the forum's "PHP Code" label is wrong; pt-online-schema-change is Perl):
if ( !$val || $val >= $self->{max_val_for}->{$var} ) {
    $vals_too_high{$var} = $val;
}

$val - current value of status variable
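
If that reading is correct, the check treats a legitimate zero the same as a missing value, because !$val is true both for undef and for 0. A defined() test would separate the two cases, roughly like this (an untested illustration, not the actual patch):

# only flag the variable when it is genuinely missing or at/above its threshold
if ( !defined $val || $val >= $self->{max_val_for}->{$var} ) {
    $vals_too_high{$var} = $val;
}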


Memory leak suspected in 5.6.31-77.0

Latest Forum Posts - August 30, 2016 - 9:42pm
Hi,

On our production master servers, we have noticed that mysql is using more memory than the maximum memory calculated by mysqltuner. As a result, after three days of activity, the oom-killer is restarting the mysqld process. We have also observed that mysqld is swapping even when there is enough free memory.

MySQL Version: 5.6.31-77.0
CentOS Version: 7.2.1511
Engine: InnoDB

On one of the servers, the issue was resolved after downgrading to 5.6.28-76.1.

Is there a known issue of a memory leak in 5.6.31? If not, what relevant info do you need to investigate this?

Appreciate any help with this issue.

Thanks
Suresh

Webinar Thursday, September 1 – MongoDB Security: A Practical Approach

Latest MySQL Performance Blog posts - August 30, 2016 - 2:23pm

Please join David Murphy as he presents a webinar Thursday, September 1 at 10 am PDT (UTC-7) on MongoDB Security: A Practical Approach. (Date changed*)


This webinar will discuss the many features and options available in the MongoDB community to help secure your database environment. First, we will cover how these features work and how to fill in known gaps. Next, we will look at the enterprise-type features shared by Percona Server for MongoDB and MongoDB Enterprise. Finally, we will examine some disk and network designs that can ensure your security out of the gate – you don’t want to read about how your MongoDB database leaked hundreds of gigs of data on someone’s security blog!

We will cover the following topics:

  • Using SSL for all things
  • Limiting network access
  • Custom roles
  • The missing true SUPER user
  • Wildcarding databases and collections
  • Adding specific actions to a normal role
  • LDAP
  • Auditing
  • Disk encryption
  • Network designs

Register for the webinar here.

David Murphy, MongoDB Practice Manager: David joined Percona in October 2015 as Practice Manager for MongoDB. Prior to that, David was on the ObjectRocket by Rackspace team as the Lead DBA. With the growth involved with any recently acquired startup, David’s role covered a wide range from evangelism, research, run book development, knowledge base design, consulting, technical account management, mentoring and much more. Prior to the world of MongoDB, he was a MySQL and NoSQL architect at Electronic Arts, working with some of the largest titles in the world, like FIFA, SimCity, and Battlefield, providing tuning, design, and technology choice responsibilities. David maintains an active interest in database speaking and exploring new technologies.

Percona Live Europe Discounted Pricing and Community Dinner!

Latest MySQL Performance Blog posts - August 30, 2016 - 10:47am

Get your Percona Live Europe discounted tickets now, and sign up for the community dinner.

The countdown is on for the annual Percona Live Europe Open Source Database Conference! This year the conference will be taking place in the great city of Amsterdam October 3-5. This three-day conference will focus on the latest trends, news and best practices in MySQL, MongoDB, PostgreSQL and other open source databases, while tackling subjects such as analytics, architecture and design, security, operations, scalability and performance. Percona Live provides in-depth discussions for your high-availability, IoT, cloud, big data and other changing business needs.

With breakout sessions, tutorial sessions and keynote speakers, there will certainly be no lack of content.
Advanced Rate Registration ENDS September 5, so make sure to register now to secure the best price possible.

As it is a Percona Live Europe conference, there will certainly be no lack of FUN either!!!!

As tradition holds, there will be a Community Dinner. Tuesday night, October 4, Percona Live Diamond Sponsor Booking.com hosts the Community Dinner at their very own headquarters located in historic Rembrandt Square in the heart of the city. After breakout sessions conclude, attendees are picked up right outside of the venue and taken to booking.com’s headquarters by canal boats! This gives all attendees the opportunity to play “tourist” while viewing the beauty of Amsterdam from the water. Attendees are dropped off right next to Booking.com’s office (return trip isn’t included)! The Lightning Talks for this year’s conference will be featured at the dinner.

Come and show your support for the community while enjoying dinner and drinks! The first 50 people registered for the dinner get in the doors for €10 (after that the price goes to €15). Space is limited so make sure to sign up ASAP!

So don’t forget, register for the conference and sign up for the community dinner before space is gone! See you in Amsterdam!

Percona LDAP configuration

Latest Forum Posts - August 30, 2016 - 10:22am
Hi all,

First time posting, however I could do with any help anyone can provide. I've set up Percona Server for MongoDB inside a container, and as part of that container I've also got saslauthd running. I'm able to run the testsaslauthd check fine, and it proves my connection to our AD is working.

The container is RHEL-based and I've put my mongo.conf definition in /etc/sasl2/, however I'm unable to get LDAP authentication working. When I attempt it with the following command:

db.getSiblingDB("$external").auth( { user : "user", pwd : "pass", mechanism: "PLAIN", digestPassword: false } ) ;

I get the following error:

Error: Missing expected field "mechanism"

Despite it being referenced in the authentication string. The log file isn't yielding anything valuable either, and neither is the saslauthd log, but I suspect it isn't even getting as far as trying to talk to the saslauthd daemon.

If anyone can assist in shedding any light on this i'd be greatly appreciative.

Thanks

innobackupex using ssl doesn't work even though SSL does

Latest Forum Posts - August 30, 2016 - 5:21am
Trying to use innobackupex with SSL and cannot get it to work.


# innobackupex --version
innobackupex version 2.4.4 Linux (x86_64) (revision id: df58cf2)

# /etc/my.cnf [mysqld] block
ssl
ssl-ca=/etc/pki/MYSQL/ca-cert.pem
ssl-cert=/etc/pki/MYSQL/server-cert.pem
ssl-key=/etc/pki/MYSQL/server-key.pem

# $HOME/.my.cnf
[client]
user = __user__
password = __pass__
ssl-ca = /etc/pki/MYSQL/ca-cert.pem
ssl-cert = /etc/pki/MYSQL/client-cert.pem
ssl-key = /etc/pki/MYSQL/client-key.pem

If I use mysql on the command line to localhost, it works fine.

If I try the backup I get:

# innobackupex --defaults-file=/etc/my.cnf --slave-info --safe-slave-backup --compress --compress-threads=8 --stream=xbstream --parallel=8 ./ | nc 0.0.0.0 0000
160830 13:19:14 innobackupex: Starting the backup operation

IMPORTANT: Please check that the backup run completes successfully.
At the end of a successful backup run innobackupex
prints "completed OK!".

160830 13:19:14 version_check Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_group=xtrabackup;mysql_socket=/var/lib/mysql/mysql.sock' as '__user__' (using password: YES).
Failed to connect to MySQL server: DBI connect(';mysql_read_default_group=xtrabackup;mysql_socket=/var/lib/mysql/mysql.sock','__user__',...) failed: SSL connection error at - line 1314
160830 13:19:14 Connecting to MySQL server host: localhost, user: __user__, password: set, port: 0, socket: /var/lib/mysql/mysql.sock
Failed to connect to MySQL server: Access denied for user '__user__'@'localhost' (using password: YES).

mysql command

# mysql
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 3564711
Server version: 5.6.21-69.0-log Percona Server (GPL), Release 69.0, Revision 675

Copyright (c) 2009-2014 Percona LLC and/or its affiliates
Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

[localhost] (__user__@(none)) 13:19:58 > status
--------------
mysql Ver 14.14 Distrib 5.6.21-69.0, for Linux (x86_64) using EditLine wrapper

Connection id: 3564711
Current database:
Current user: rootsec@localhost
SSL: Cipher in use is AES256-SHA
Current pager: stdout
Using outfile: ''
Using delimiter: ;
Server version: 5.6.21-69.0-log Percona Server (GPL), Release 69.0, Revision 675
Protocol version: 10
Connection: Localhost via UNIX socket
Server characterset: latin1
Db characterset: latin1
Client characterset: utf8
Conn. characterset: utf8
UNIX socket: /var/lib/mysql/mysql.sock
Uptime: 344 days 4 hours 29 min 58 sec

Threads: 3 Questions: 60704991 Slow queries: 2439 Opens: 1353 Flush tables: 1 Open tables: 1042 Queries per second avg: 2.041
--------------

I've tried taking --defaults-file out, adding --defaults-extra-file etc. and still nothing.

How are you supposed to backup using SSL?

page_cleaner issues after upgrading to Percona XtraDB 57 from 56

Latest Forum Posts - August 30, 2016 - 4:18am
Hi There,
After upgrading from 56 to 57 I'm facing a lot of issues with page_cleaner, for example:
InnoDB: page_cleaner: 1000ms intended loop took 6834ms. The settings might not be optimal. (flushed=201, during the time.)
which after some time causes flow control, and this in turn crashes my application.

I remembered that there were a number of new options for controlling the page cleaners introduced in PXC 5.6 (https://www.percona.com/doc/percona-...rent-workloads), but to my surprise those options are gone from PXC 5.7.

Could you assist me with finding those, or with a way I can modify these options?

xtrabackup blocked in prepare stage

Latest Forum Posts - August 30, 2016 - 12:16am
Good Day Forum

I run a MariaDB backup job for my InnoDB tables with xtrabackup. The database is fairly simple (no foreign keys, no triggers, but tables are partitioned) and not excessively large (9GB). The actual backup runs fine, but the first prepare run sometimes gets stuck with message:
InnoDB: Waited for 30530 seconds for 128 pending reads

The script looks as follows:

BACKUPDIR=/data/backup/mysql-percona-backup/
EXTRAFILE=/etc/my.cnf.percona
xtrabackup --defaults-file=$EXTRAFILE --backup --no-timestamp --target_dir=$BACKUPDIR
xtrabackup --defaults-file=$EXTRAFILE --prepare --target_dir=$BACKUPDIR
xtrabackup --defaults-file=$EXTRAFILE --prepare --target_dir=$BACKUPDIR

The EXTRAFILE has the following content:
[mysqld]
innodb_log_file_size=5M
socket=/var/lib/mysql/mysql.sock

xtrabackup version: xtrabackup version 2.4.4 based on MySQL server 5.7.13 Linux (x86_64) (revision id: df58cf2)
MariaDB version: mysql Ver 15.1 Distrib 5.5.51-MariaDB, for Linux (x86_64) using readline 5.1
OS: CentOS release 6.8 (Final)

Am I missing something? How can this be prevented? Please let me know if more information is needed.

