
Webinar Thursday 3/30: MyRocks Troubleshooting

Latest MySQL Performance Blog posts - March 29, 2017 - 1:51pm

Please join Percona’s Principal Technical Services Engineer Sveta Smirnova and Senior Software Engineer George Lorch, MariaDB’s Query Optimizer Developer Sergei Petrunia, and Facebook’s Database Engineer Yoshinori Matsunobu as they present MyRocks Troubleshooting on March 30, 2017 at 11:00 am PDT / 2:00 pm EDT (UTC-7).

Register Now

MyRocks is an alternative storage engine designed for flash storage. It provides great write workload performance and space efficiency. Like any other powerful engine, it has its own specific configuration scenarios that require special troubleshooting solutions.

This webinar will discuss how to deal with:

  • Data corruption issues
  • Inconsistent data
  • Locks
  • Slow performance

We will use well-known instruments and tools, as well as MyRocks-specific tools, and demonstrate how they work with the MyRocks storage engine.

Register for this webinar here.

Percona database on SSD RAID-0

Latest Forum Posts - March 29, 2017 - 9:09am
Hello everybody.

I'm seeing very strange behavior from my database after trying to move it to a RAID-0 array of two SSD drives.

I have two "Team Group L3 EVO 120GB" drives in my PowerEdge R710 server, which uses a PERC H700 controller.
I installed the drives and created a RAID-0 array (all defaults) for more space and faster reads/writes, and because RAID-1 offers no real redundancy win with SSDs.
I created a single partition covering the whole array and formatted it as ext4.
Then I stopped Percona Server, moved the mysql folder to the SSD array, started the server, and got a lot of errors in the MySQL error log that look like:

InnoDB: Database page corruption on disk or a failed
InnoDB: file read of page 47123.
InnoDB: You may have to recover from a backup.
2017-03-16 03:38:04 7f5db15ae700 InnoDB: Page dump in ascii and hex (16384 bytes):
len 16384; hex 994f02880000b8130000000000000000000000231813383a000a0000000000000000000000cf00000f68ffffffffd29bd18bd0bbd0b0d0b9d0b4d18b2c20d0b0d188d0b0d0b4d18b2c20d196d0b7d0b4d0b5d0bdd0b5d0b4d1962e$
InnoDB: End of page dump
2017-03-16 03:38:04 7f5db15ae700 InnoDB: uncompressed page, stored checksum in field1 2572092040, calculated checksums for field1: crc32 2618809217, innodb 3732643505, none 3735928559, stored checks$
InnoDB: Page may be a BLOB page
InnoDB: Database page corruption on disk or a failed
InnoDB: file read of page 47123.
InnoDB: You may have to recover from a backup.
InnoDB: It is also possible that your operating
InnoDB: system has corrupted its own file cache
InnoDB: and rebooting your computer removes the
InnoDB: error.
InnoDB: If the corrupt page is an index page
InnoDB: you can also try to fix the corruption
InnoDB: by dumping, dropping, and reimporting
InnoDB: the corrupt table. You can use CHECK
InnoDB: TABLE to scan your table for corruption.
InnoDB: See also http://dev.mysql.com/doc/refman/5.6/...-recovery.html
InnoDB: about forcing recovery.
InnoDB: Database page corruption on disk or a failed
InnoDB: file read of page 47123.
InnoDB: You may have to recover from a backup.

...
2017-03-16 03:38:04 7f5db15ae700 InnoDB: Assertion failure in thread 140040384210688 in file buf0buf.cc line 4480
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.6/...-recovery.html
InnoDB: about forcing recovery.
21:38:04 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona Server better by reporting any
bugs at http://bugs.percona.com/

key_buffer_size=8388608
read_buffer_size=262144
max_used_connections=39
max_threads=602
thread_count=32
connection_count=32
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 19896992 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x7f5db0100000
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 7f5db15add00 thread_stack 0x40000
/usr/sbin/mysqld(my_print_stacktrace+0x2c)[0x8d7fdc]
/usr/sbin/mysqld(handle_fatal_signal+0x461)[0x65a441]
/lib64/libpthread.so.0[0x3c1720f7e0]
/lib64/libc.so.6(gsignal+0x35)[0x3c16e325e5]
/lib64/libc.so.6(abort+0x175)[0x3c16e33dc5]
/usr/sbin/mysqld[0xab3899]
/usr/sbin/mysqld[0xaca661]
/usr/sbin/mysqld[0xaad279]
/usr/sbin/mysqld[0xa8eecc]
/usr/sbin/mysqld[0xa9cae3]
/usr/sbin/mysqld[0x568b4e]
/usr/sbin/mysqld[0x568d53]
/usr/sbin/mysqld[0xa3775e]
/usr/sbin/mysqld[0x9871d9]
/usr/sbin/mysqld(_ZN7handler17ha_index_read_mapEPhPKhm16ha_rkey_function+0xb6)[0x5a3486]
/usr/sbin/mysqld[0x6bd36c]
/usr/sbin/mysqld(_Z10sub_selectP4JOINP13st_join_tableb+0xdd)[0x6ba82d]
/usr/sbin/mysqld(_ZN4JOIN4execEv+0x3e8)[0x6b9bb8]
/usr/sbin/mysqld(_Z12mysql_selectP3THDP10TABLE_LISTjR4ListI4ItemEPS4_P10SQL_I_ListI8st_orderESB_S7_yP13select_resultP18st_select_lex_unitP13st_select_lex+0x275)[0x703ec5]
/usr/sbin/mysqld(_Z13handle_selectP3THDP13select_resultm+0x195)[0x704755]
/usr/sbin/mysqld[0x55d138]
/usr/sbin/mysqld(_Z21mysql_execute_commandP3THD+0x1626)[0x6dcd36]
/usr/sbin/mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x5b8)[0x6e2a88]
/usr/sbin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0xff0)[0x6e4210]
/usr/sbin/mysqld(_Z24do_handle_one_connectionP3THD+0x1a2)[0x6b0932]
/usr/sbin/mysqld(handle_one_connection+0x40)[0x6b09d0]
/usr/sbin/mysqld(pfs_spawn_thread+0x146)[0x90f5d6]
/lib64/libpthread.so.0[0x3c17207aa1]
/lib64/libc.so.6(clone+0x6d)[0x3c16ee8aad]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (7f5dc141b010): is an invalid pointer
Connection ID (thread ID): 799
Status: NOT_KILLED


And so on. After that I used innodb_force_recovery = 1, dumped the data, dropped the database, and imported it back onto a non-SSD drive. Then I tried other options and repeated the process.

I've tried with LVM and without, with default and non-default mount options (including discard), and tried xfs instead of ext4; the result was always the same. The data itself was correct every time, so I assume the problem is with the indexes. Each time I tried to use the SSDs, random tables had problems.

My problem is actually described here: https://bugs.mysql.com/bug.php?id=69476 but that report got no answer.

Percona Server version 5.6.35-80, CentOS 6.8 x64. The database size is ~25G.

I'm looking forward to any help or advice. I'm totally stuck.

Thanks in advance.

pre-commit stage

Latest Forum Posts - March 29, 2017 - 4:34am
There is a problem with large ALTER statements, for example adding an index: the whole database enters the pre-commit stage and stays there until the statement finishes executing. This means that all other databases on the same server cannot process requests and have to wait for that statement to complete. Is there any parameter that would let us avoid blocking the databases the statement does not apply to?
I.e., while an ALTER runs on DB1, DB2 can handle requests normally and does not wait in the queue.

Find Queries with huge Result-Set

Latest Forum Posts - March 29, 2017 - 1:52am
Hello,
we are using MySQL as the DB for our CMS. Lately I noticed massive growth in traffic between our HTTP and DB servers.
Normally we see about 10 Mbit/s, but for a few days it has been over 100 Mbit/s. My first guess would be a query that delivers a really huge result set back to the HTTP server, but I'm unable to find out which query that could be (hundreds of queries per second).
I've already searched for hours for an answer with no luck so far, so I hope someone here can help me find the bogus query ... ;-)
Thank you, bye from sunny Austria
Andreas Schnederle-Wagner

pmm-server was unable to connect pmm-client to collect linux:metrics

Latest Forum Posts - March 28, 2017 - 4:28pm
pmm-server was unable to connect to pmm-client to collect linux:metrics. The following is the output of pmm-admin check-network. I checked pmm-admin list, and linux:metrics shows as running. I also verified there is no firewall issue between pmm-server and pmm-client on port 42000.

* Connection: Client <-- Server

-------------- ----------------- ------------------------- ------- ---------- ---------
SERVICE TYPE   NAME              REMOTE ENDPOINT           STATUS  HTTPS/TLS  PASSWORD
-------------- ----------------- ------------------------- ------- ---------- ---------
linux:metrics  pmm-client        client_ip_address:42000   DOWN    YES        -
mysql:metrics  dbcrpmysqlsbxha2  client_ip_address:42002   OK      YES        -

-------------- ---------------------- ----------- -------- ------------ --------
SERVICE TYPE   NAME                   LOCAL PORT  RUNNING  DATA SOURCE  OPTIONS
-------------- ---------------------- ----------- -------- ------------ --------
linux:metrics  pmm_client ip address  42000       YES      -


Thanks,
Vishnu

Troubleshooting Issues with MySQL Character Sets Q & A

Latest MySQL Performance Blog posts - March 28, 2017 - 11:36am

In this blog, I will provide answers to the Q & A for the Troubleshooting Issues with MySQL Character Sets webinar.

First, I want to thank everybody for attending the March 9 MySQL character sets troubleshooting webinar. The recording and slides for the webinar are available here. Below is the list of your questions that I wasn’t able to answer during the webinar, with responses:

Q: We’ve had some issues converting tables from utf8 to utf8mb4. Our issue was that the collation we wanted to use – utf8mb4_unicode_520_ci – did not distinguish between spaces and ideographic (Japanese) spaces, so we were getting unique constraint violations for the varchar fields when two entries had the same text with different kinds of spaces. Have you seen this problem and is there a workaround? We were wondering if this was related to the mother-child character bug with this collation.

A: Unfortunately this issue exists for many languages. For example, in Russian you cannot distinguish “е” and “ё” if you use utf8 or utf8mb4. However, there is hope for Japanese: Oracle announced that they will implement new language-specific utf8mb4 collations in MySQL 8.0. I already see 21 new collations in my 8.0.0 installation.

mysql> show collation like '%0900%';
+----------------------------+---------+-----+---------+----------+---------+
| Collation                  | Charset | Id  | Default | Compiled | Sortlen |
+----------------------------+---------+-----+---------+----------+---------+
| utf8mb4_0900_ai_ci         | utf8mb4 | 255 |         | Yes      |       8 |
| utf8mb4_cs_0900_ai_ci      | utf8mb4 | 266 |         | Yes      |       8 |
| utf8mb4_da_0900_ai_ci      | utf8mb4 | 267 |         | Yes      |       8 |
| utf8mb4_de_pb_0900_ai_ci   | utf8mb4 | 256 |         | Yes      |       8 |
| utf8mb4_eo_0900_ai_ci      | utf8mb4 | 273 |         | Yes      |       8 |
| utf8mb4_es_0900_ai_ci      | utf8mb4 | 263 |         | Yes      |       8 |
| utf8mb4_es_trad_0900_ai_ci | utf8mb4 | 270 |         | Yes      |       8 |
| utf8mb4_et_0900_ai_ci      | utf8mb4 | 262 |         | Yes      |       8 |
| utf8mb4_hr_0900_ai_ci      | utf8mb4 | 275 |         | Yes      |       8 |
| utf8mb4_hu_0900_ai_ci      | utf8mb4 | 274 |         | Yes      |       8 |
| utf8mb4_is_0900_ai_ci      | utf8mb4 | 257 |         | Yes      |       8 |
| utf8mb4_la_0900_ai_ci      | utf8mb4 | 271 |         | Yes      |       8 |
| utf8mb4_lt_0900_ai_ci      | utf8mb4 | 268 |         | Yes      |       8 |
| utf8mb4_lv_0900_ai_ci      | utf8mb4 | 258 |         | Yes      |       8 |
| utf8mb4_pl_0900_ai_ci      | utf8mb4 | 261 |         | Yes      |       8 |
| utf8mb4_ro_0900_ai_ci      | utf8mb4 | 259 |         | Yes      |       8 |
| utf8mb4_sk_0900_ai_ci      | utf8mb4 | 269 |         | Yes      |       8 |
| utf8mb4_sl_0900_ai_ci      | utf8mb4 | 260 |         | Yes      |       8 |
| utf8mb4_sv_0900_ai_ci      | utf8mb4 | 264 |         | Yes      |       8 |
| utf8mb4_tr_0900_ai_ci      | utf8mb4 | 265 |         | Yes      |       8 |
| utf8mb4_vi_0900_ai_ci      | utf8mb4 | 277 |         | Yes      |       8 |
+----------------------------+---------+-----+---------+----------+---------+
21 rows in set (0,03 sec)

In 8.0.1 they promised new case-sensitive and Japanese collations. Please see this blog post for details. The note about the planned Japanese support is at the end.

Meanwhile, I can only suggest that you implement your own collation as described here. You may use utf8_russian_ci collation from Bug #51976 as an example.

Although the user manual does not list utf8mb4 as a character set for which it’s possible to create new collations, you can actually do it. What you need to do is add a record about the character set utf8mb4 and the new collation into Index.xml, then restart the server.

<charset name="utf8mb4">
  <collation name="utf8mb4_russian_ci" id="1033">
    <rules>
      <reset>\u0415</reset><p>\u0451</p><t>\u0401</t>
    </rules>
  </collation>
</charset>

mysql> show collation like 'utf8mb4_russian_ci';
+--------------------+---------+------+---------+----------+---------+
| Collation          | Charset | Id   | Default | Compiled | Sortlen |
+--------------------+---------+------+---------+----------+---------+
| utf8mb4_russian_ci | utf8mb4 | 1033 |         |          |       8 |
+--------------------+---------+------+---------+----------+---------+
1 row in set (0,03 sec)

mysql> create table test_yo(gen varchar(100) CHARACTER SET utf8mb4, yo varchar(100) CHARACTER SET utf8mb4 collate utf8mb4_russian_ci) engine=innodb default character set=utf8mb4;
Query OK, 0 rows affected (0,25 sec)

mysql> set names utf8mb4;
Query OK, 0 rows affected (0,02 sec)

mysql> insert into test_yo values('ел', 'ел'), ('ель', 'ель'), ('ёлка', 'ёлка');
Query OK, 3 rows affected (0,05 sec)
Records: 3  Duplicates: 0  Warnings: 0

mysql> insert into test_yo values('Ел', 'Ел'), ('Ель', 'Ель'), ('Ёлка', 'Ёлка');
Query OK, 3 rows affected (0,06 sec)
Records: 3  Duplicates: 0  Warnings: 0

mysql> select * from test_yo order by gen;
+----------+----------+
| gen      | yo       |
+----------+----------+
| ел       | ел       |
| Ел       | Ел       |
| ёлка     | ёлка     |
| Ёлка     | Ёлка     |
| ель      | ель      |
| Ель      | Ель      |
+----------+----------+
6 rows in set (0,00 sec)

mysql> select * from test_yo order by yo;
+----------+----------+
| gen      | yo       |
+----------+----------+
| ел       | ел       |
| Ел       | Ел       |
| ель      | ель      |
| Ель      | Ель      |
| ёлка     | ёлка     |
| Ёлка     | Ёлка     |
+----------+----------+
6 rows in set (0,00 sec)

Q: If receiving utf8 on latin1 charset it will be corrupted. Just want to confirm that you can reformat as utf8 and un-corrupt the data? Also, is there a time limit on how quickly this needs to be done?

A: It will be corrupted only if you store utf8 data in the latin1 column. For example, if you have a table, defined as:

create table latin1( f1 varchar(100) ) engine=innodb default charset=latin1;

And then insert a word in utf8 format into it that contains characters that are not in the latin1 character set:

mysql> set names utf8;
Query OK, 0 rows affected (0,00 sec)

mysql> set sql_mode='';
Query OK, 0 rows affected, 1 warning (0,00 sec)

mysql> insert into latin1 values('Sveta'), ('Света');
Query OK, 2 rows affected, 1 warning (0,04 sec)
Records: 2  Duplicates: 0  Warnings: 1

The data in UTF8 will be corrupted and can never be recovered:

mysql> select * from latin1;
+-------+
| f1    |
+-------+
| Sveta |
| ????? |
+-------+
2 rows in set (0,00 sec)

mysql> select f1, hex(f1) from latin1;
+-------+------------+
| f1    | hex(f1)    |
+-------+------------+
| Sveta | 5376657461 |
| ????? | 3F3F3F3F3F |
+-------+------------+
2 rows in set (0,01 sec)
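The loss is easy to reproduce outside of MySQL. A minimal Python sketch (my own illustration, not server code) of the same one-way conversion:

```python
# A utf8 string stored into a latin1 column is converted first;
# characters with no latin1 equivalent become '?', and the original
# bytes are gone for good.
def to_latin1(s: str) -> bytes:
    return s.encode("latin-1", errors="replace")

print(to_latin1("Sveta"))  # b'Sveta' - pure ASCII survives
print(to_latin1("Света"))  # b'?????' - Cyrillic is lost
```

Once every character has collapsed to the same `?` byte (0x3F, matching the hex dump above), no conversion can recover the original text.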

However, if your data is stored in the UTF8 column and you use latin1 for a connection, you will only get a corrupted result set. The data itself will be left untouched:

mysql> create table utf8(f1 varchar(100)) engine=innodb character set utf8;
Query OK, 0 rows affected (0,18 sec)

mysql> insert into utf8 values('Sveta'), ('Света');
Query OK, 2 rows affected (0,15 sec)
Records: 2  Duplicates: 0  Warnings: 0

mysql> set names latin1;
Query OK, 0 rows affected (0,00 sec)

mysql> select f1, hex(f1) from utf8;
+-------+----------------------+
| f1    | hex(f1)              |
+-------+----------------------+
| Sveta | 5376657461           |
| ????? | D0A1D0B2D0B5D182D0B0 |
+-------+----------------------+
2 rows in set (0,00 sec)

mysql> set names utf8;
Query OK, 0 rows affected (0,00 sec)

mysql> select f1, hex(f1) from utf8;
+-------+----------------------+
| f1    | hex(f1)              |
+-------+----------------------+
| Sveta | 5376657461           |
| Света | D0A1D0B2D0B5D182D0B0 |
+-------+----------------------+
2 rows in set (0,00 sec)
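In other words, the bytes on disk stay correct and only their presentation changes with the connection character set. A small Python sketch (my own illustration) of that distinction:

```python
# The utf8 column stores the correct bytes no matter what the
# connection character set is; HEX(f1) returns the same value either
# way, and decoding with the right character set recovers the text.
stored = "Света".encode("utf-8")

print(stored.hex().upper())    # D0A1D0B2D0B5D182D0B0, matching HEX(f1)
print(stored.decode("utf-8"))  # decodes back to the original string
```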

Q: Can you discuss how charsets affect mysqldump? Specifically, how do we dump a database containing tables with different default charsets?

A: Yes, you can. MySQL can successfully convert data between different character sets, so your only job is to specify the option --default-character-set for mysqldump. Strings in any character set you use can then be converted to the character set specified. For example, if you use cp1251 and latin1, you may set option --default-character-set to cp1251, utf8 or utf8mb4. However, you cannot set it to latin1, because Cyrillic characters exist in the cp1251 character set but do not exist in latin1.

The default value for mysqldump is utf8. You only need to change this default if you use values that are outside of the range supported by utf8 (for example, the smileys in utf8mb4).
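A quick way to convince yourself which target character set is safe is to round-trip sample data outside MySQL. A minimal Python sketch (my own illustration, not mysqldump internals):

```python
# cp1251 text survives conversion to utf-8, but not to latin1, which
# lacks Cyrillic code points entirely - the same reason latin1 cannot
# be used as mysqldump's --default-character-set for cp1251 tables.
word = "Привет"  # sample Cyrillic string, representable in cp1251

cp1251_bytes = word.encode("cp1251")

# cp1251 -> utf8 is lossless: every cp1251 character exists in Unicode
utf8_bytes = cp1251_bytes.decode("cp1251").encode("utf-8")
assert utf8_bytes.decode("utf-8") == word

# cp1251 -> latin1 fails: Cyrillic has no latin1 representation
try:
    cp1251_bytes.decode("cp1251").encode("latin-1")
    lossless = True
except UnicodeEncodeError:
    lossless = False
print("latin1 conversion lossless?", lossless)
```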

Q: But if you use the --single-transaction option for mysqldump, you can only specify one character set in the default?

A: Yes, and this is OK: all data will be converted into this character set. Then, when you restore the dump, it will be converted back to the character set specified in the column definitions.

Q: I noticed that MySQL doesn’t support case-sensitive UTF-8 character sets. What do you recommend for implementing case-sensitive UTF-8, if it’s at all possible?

A: In the link I provided earlier, Oracle promises to implement case-sensitive collations for utf8mb4 in version 8.0.1. Until that happens, I recommend implementing your own case-sensitive collation.

Q: How are tools like pt-table-checksum affected by charsets? Is it safe to use a 4-byte charset (like utf8mb4) as the default charset for all comparisons? Assuming our tables are a mix of latin1, utf8 and utf8mb4.

A: With this combination, you won’t have any issues: pt-table-checksum uses a complicated set of functions that joins columns and calculates a crc32 checksum on them. In your case, all data will be converted to utf8mb4 and no conflicts will happen.

However, if you use incompatible character sets in a single table, you may get the error "Illegal mix of collations for operation 'concat_ws'":

mysql> create table cp1251(f1 varchar(100) character set latin1, f2 varchar(100) character set cp1251) engine=innodb;
Query OK, 0 rows affected (0,32 sec)

mysql> set names utf8;
Query OK, 0 rows affected (0,00 sec)

mysql> insert into cp1251 values('Sveta', 'Света');
Query OK, 1 row affected (0,07 sec)

sveta@Thinkie:~/build/mysql-8.0/mysql-test$ ~/build/percona-toolkit/bin/pt-table-checksum h=127.0.0.1,P=13000,u=root,D=test
Diffs cannot be detected because no slaves were found. Please read the --recursion-method documentation for information.
03-18T03:51:58 Error executing EXPLAIN SELECT COUNT(*) AS cnt, COALESCE(LOWER(CONV(BIT_XOR(CAST(CRC32(CONCAT_WS('#', `f1`, `f2`, CONCAT(ISNULL(`f1`), ISNULL(`f2`)))) AS UNSIGNED)), 10, 16)), 0) AS crc FROM `db1`.`cp1251` /*explain checksum table*/: DBD::mysql::st execute failed: Illegal mix of collations for operation 'concat_ws' [for Statement "EXPLAIN SELECT COUNT(*) AS cnt, COALESCE(LOWER(CONV(BIT_XOR(CAST(CRC32(CONCAT_WS('#', `f1`, `f2`, CONCAT(ISNULL(`f1`), ISNULL(`f2`)))) AS UNSIGNED)), 10, 16)), 0) AS crc FROM `db1`.`cp1251` /*explain checksum table*/"] at /home/sveta/build/percona-toolkit/bin/pt-table-checksum line 11351.
03-18T03:51:58 Error checksumming table db1.cp1251: Error executing checksum query: DBD::mysql::st execute failed: Illegal mix of collations for operation 'concat_ws' [for Statement "REPLACE INTO `percona`.`checksums` (db, tbl, chunk, chunk_index, lower_boundary, upper_boundary, this_cnt, this_crc) SELECT ?, ?, ?, ?, ?, ?, COUNT(*) AS cnt, COALESCE(LOWER(CONV(BIT_XOR(CAST(CRC32(CONCAT_WS('#', `f1`, `f2`, CONCAT(ISNULL(`f1`), ISNULL(`f2`)))) AS UNSIGNED)), 10, 16)), 0) AS crc FROM `db1`.`cp1251` /*checksum table*/" with ParamValues: 0='db1', 1='cp1251', 2=1, 3=undef, 4=undef, 5=undef] at /home/sveta/build/percona-toolkit/bin/pt-table-checksum line 10741.

TS             ERRORS  DIFFS  ROWS  CHUNKS  SKIPPED  TIME   TABLE
03-18T03:51:58      2      0     0       1        0  0.003  db1.cp1251
03-18T03:51:58      0      0     2       1        0  0.167  db1.latin1
03-18T03:51:58      0      0     6       1        0  0.198  db1.test_yo
...

The tool continues working, and will process the rest of your tables. I reported this behavior as Bug #1674266.
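For intuition, the checksum query boils down to a CRC32 over the concatenated column values. A simplified Python analogue (my own sketch, not the tool's actual code) shows why every column must first be convertible to one common character set:

```python
import zlib

# Simplified analogue of pt-table-checksum's per-row hash:
# CONCAT_WS('#', f1, f2, ...) followed by CRC32(). Joining the columns
# only works once all of them are expressed in a single character set
# (utf-8 here); columns with incompatible charsets are what trigger
# MySQL's "Illegal mix of collations" error above.
def row_crc(values, encoding="utf-8"):
    joined = "#".join("" if v is None else v for v in values)
    return zlib.crc32(joined.encode(encoding))

print(row_crc(["Sveta", "Света"]))  # works once both values share one charset
```

(The real query also folds in ISNULL flags and XORs the per-row values across a chunk; this sketch keeps only the concatenate-then-CRC32 core.)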

Thanks for attending the Troubleshooting Issues with MySQL Character Sets webinar.

Percona Toolkit 3.0.2 is now available

Latest Forum Posts - March 28, 2017 - 7:31am
Percona announces the availability of Percona Toolkit 3.0.2 on March 27, 2017.

Percona Toolkit is a collection of advanced command-line tools that perform a variety of MySQL and MongoDB server and system tasks too difficult or complex for DBAs to perform manually. Percona Toolkit, like all Percona software, is free and open source.

This release includes the following changes:

New Features
  • PT-73: Added support for SSL connections to pt-mongodb-summary and pt-mongodb-query-digest
  • 1642751: Enabled gathering of information about locks and transactions by pt-stalk using Performance Schema if it is enabled (Thanks, Agustin Gallego)
Bug Fixes
  • PT-74: Fixed gathering of security settings when running pt-mongodb-summary on a mongod instance that is specified as the host
  • PT-75: Changed the default sort order in pt-mongodb-query-digest output to descending
  • PT-76: Added support of & and # symbols in passwords for pt-mysql-summary
  • PT-77: Updated Makefile to support new MongoDB tools
  • PT-89: Fixed pt-stalk to run top more than once to collect useful CPU usage
  • PT-93: Fixed pt-mongodb-query-digest to make query ID match query key (Thanks, Kamil Dziedzic)
  • PT-94: Fixed pt-online-schema-change to not make duplicate rows in _t_new when updating the primary key. Also, see 1646713.
  • PT-101: Fixed pt-table-checksum to correctly use the --slave-user and --slave-password options. Also, see 1651002.
  • PT-105: Fixed pt-table-checksum to continue running if a database is dropped in the process
You can find release details in the release notes. Bugs can be reported on Toolkit’s launchpad bug tracker.

Problems after update from 5.7.17-11 to 5.7.17-12 (mysqldump: Couldn't execute 'SELECT COUNT(*) FROM INFORMATION_SCHEMA...')

Latest Forum Posts - March 28, 2017 - 6:44am
1. Update Percona Server from 5.7.17-11 to 5.7.17-12
2. Restart mysqld
3. mysqldump --all-databases > backup.sql

mysqldump: Couldn't execute 'SELECT COUNT(*) FROM INFORMATION_SCHEMA.SESSION_VARIABLES WHERE VARIABLE_NAME LIKE 'rocksdb\_skip\_fill\_cache'': The 'INFORMATION_SCHEMA.SESSION_VARIABLES' feature is disabled; see the documentation for 'show_compatibility_56' (3167)

The problem exists on Ubuntu 16.10 and CentOS 7.

Need information on Latency on Query analytics

Latest Forum Posts - March 28, 2017 - 5:28am
Hello Experts,

I am an MS SQL developer and new to MySQL. I came across the Percona Monitoring and Management tool's Query Analytics, where there are columns called "Latency" and "Load". Could any experts explain these to me, please?

Regards,
Naveen