
problem starting mysql node

Latest Forum Posts - April 20, 2018 - 11:40pm
When restoring a backup on a few of the Percona MySQL nodes, they do not start. Is there some configuration that does not allow a synchronization gap greater than 1h?

Access denied: you need at least SUPER privilege

Latest Forum Posts - April 20, 2018 - 1:32pm
I am getting this warning: error configuring MySQL: Error 1227: Access denied; you need (at least one of) the SUPER privilege(s) for this operation.

I have the slow query log turned on in Aurora. Why am I getting this message in QAN? I have "collect from" set to slow log as well. I think this was answered somewhere else, but I couldn't find it.
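
For context, the statements below are the kind of thing that triggers error 1227 on Aurora, where SUPER is not granted and global settings live in the DB cluster parameter group instead. This is a hedged illustration, not the exact statement PMM issues:

-- Changing slow-log settings at runtime needs SUPER (or equivalent); on Aurora these
-- are managed through the DB cluster parameter group, so statements like this fail with 1227.
SET GLOBAL slow_query_log = ON;
SET GLOBAL long_query_time = 0.1;   -- illustrative threshold

-- Check what the monitoring user is actually allowed to do:
SHOW GRANTS FOR CURRENT_USER();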

PMM Error: "context deadline exceeded". Need to update scrape_timeout.

Latest Forum Posts - April 20, 2018 - 12:57pm
I have a PMM Docker container deployed and I'm trying to monitor an Ubuntu server. On my Prometheus /targets page, I keep getting the error: context deadline exceeded. I would really appreciate any help.

I read in the Prometheus forums that I might need to change the "scrape_timeout" variable in Prometheus. This is set by the /etc/prometheus.yml file.
How would I edit the scrape_timeout variable?
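
A minimal sketch of the relevant part of /etc/prometheus.yml, assuming a stock PMM container named pmm-server; the values shown are illustrative, and scrape_timeout must not exceed scrape_interval:

# Edit inside the container, e.g.: docker exec -it pmm-server vi /etc/prometheus.yml
global:
  scrape_interval: 1m     # how often targets are scraped (example value)
  scrape_timeout: 30s     # raise this if exporters answer slowly and the targets
                          # page reports "context deadline exceeded"

scrape_configs:
  - job_name: linux       # a per-job override is also possible (job name assumed)
    scrape_timeout: 30s

Prometheus needs a restart (or a SIGHUP) to pick up the change, and edits made inside the container are lost if the container is re-created.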

Error #1071 restoring dump.

Latest Forum Posts - April 20, 2018 - 11:57am
Hello.
I have a 3-node XtraDB Cluster 5.7.21, and when I try to restore a dump file from MariaDB I see:

#mysql -p ocs < ocs.dump
#ERROR 1071 (42000) at line 597: Specified key was too long; max key length is 1000 bytes

Small part of the dump file:
DROP TABLE IF EXISTS `engine_mutex`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `engine_mutex` (
`NAME` varchar(255) NOT NULL DEFAULT '',
`PID` int(11) DEFAULT NULL,
`TAG` varchar(255) NOT NULL DEFAULT '',
PRIMARY KEY (`NAME`,`TAG`)
) ENGINE=MEMORY DEFAULT CHARSET=utf8;
/*!40101 SET character_set_client = @saved_cs_client */;

I changed the engine from MEMORY to InnoDB and restored it without errors.

The question is: is this engine change the correct way to avoid the error?
It's a third-party application and we cannot change the code.
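
For reference, the workaround described above amounts to the DDL below (a sketch; whether the application tolerates InnoDB semantics for this table is a separate question). With utf8, the two varchar(255) key parts need up to 2 x 255 x 3 = 1530 bytes, which is over the 1000-byte limit the server reports for that engine, while InnoDB in 5.7 allows index keys up to 3072 bytes with the default settings:

DROP TABLE IF EXISTS `engine_mutex`;
CREATE TABLE `engine_mutex` (
  `NAME` varchar(255) NOT NULL DEFAULT '',
  `PID`  int(11) DEFAULT NULL,
  `TAG`  varchar(255) NOT NULL DEFAULT '',
  PRIMARY KEY (`NAME`, `TAG`)           -- up to 1530 bytes in utf8
) ENGINE=InnoDB DEFAULT CHARSET=utf8;   -- same definition, InnoDB instead of MEMORY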

BR

XtraDB tablespace encryption setup questions

Latest Forum Posts - April 19, 2018 - 4:23pm
Warning: I'm a noob - I know next to nothing about MySQL, and I'm trying to learn as I deploy an enormously important project for my company.

Can someone help me understand the tablespace encryption setup? I'm confused about exactly what I've enabled, and what I should be enabling.

I first found this: https://www.percona.com/blog/2018/03...b-cluster-5-7/

I followed the above and have a cluster running with 'pxc-encrypt-cluster-traffic=ON' set. But does this only do cluster traffic encryption like the variable says? Or does this actually do tablespace encryption?

I searched a bit more and found this: https://www.percona.com/doc/percona-...l#installation

That seems a lot more involved - was all of that abstracted by the cluster configuration? I definitely haven't set up a vault server or anything. Can anyone tell me if I've done what's required?
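
As far as I understand it, pxc-encrypt-cluster-traffic covers replication/SST traffic only, not data at rest. A hedged sketch of what tablespace (data-at-rest) encryption additionally involves on 5.7 - a keyring plugin loaded at startup, then per-table opt-in:

-- Requires a keyring plugin loaded early at startup, e.g. in my.cnf:
--   early-plugin-load = keyring_file.so
--   keyring_file_data = /var/lib/mysql-keyring/keyring     (path is an example)
-- pxc-encrypt-cluster-traffic does not set this up for you.

CREATE TABLE t_secret (id INT PRIMARY KEY) ENGINE=InnoDB ENCRYPTION='Y';
ALTER TABLE existing_table ENCRYPTION='Y';

-- Check which tables are marked encrypted:
SELECT TABLE_SCHEMA, TABLE_NAME, CREATE_OPTIONS
FROM information_schema.TABLES
WHERE CREATE_OPTIONS LIKE '%ENCRYPTION%';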


+++++++

Bonus questions:

  1. How do I restore an XtraBackup backup on an XtraDB cluster? I can't find any documentation about this. Do I restore on one node at a time and wait for sync, or something? If I've enabled tablespace encryption on the cluster, does the database restore encrypt the tables in the restored databases? (See the restore sketch at the end of this post.)

2. (I'll ask this over in the XtraBackup forum too) If I have an XtraBackup backup that contains multiple databases, and I need to have different databases go to different clusters, do I get a choice in where the restores take place? It looks like I just copy the DB files into the MySQL data directory (/var/lib/mysql for me) - is each database its own set of files, and can I just put them on another machine as I wish?


I really appreciate any help - like I said I'm a noob and really trying hard to learn.
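
On bonus question 1, a rough sketch of the usual single-node restore flow with XtraBackup (directory paths and service names are assumptions; the other nodes would then rejoin and sync via SST/IST):

# Stop MySQL on the node being restored (service name assumed)
systemctl stop mysql

# Prepare the backup, then copy it back into an emptied datadir
xtrabackup --prepare --target-dir=/backups/full            # backup path is hypothetical
rm -rf /var/lib/mysql/*
xtrabackup --copy-back --target-dir=/backups/full --datadir=/var/lib/mysql
chown -R mysql:mysql /var/lib/mysql

# Bring the restored node up as a new cluster, then start the other nodes normally
systemctl start mysql@bootstrap.service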

Docker deployed as a kubernetes pod doesn't show the source/client on Dashboard.

Latest Forum Posts - April 19, 2018 - 10:30am
I have PMM running on AWS as a Kubernetes pod and a client connected to it, but it won't show as a source on my dashboard.

Description:

I set up PMM as a Docker container and deployed it to my Kubernetes cluster on AWS as a pod. I set up a service and endpoint for that as well. I have an in-house Linux server that I'm trying to monitor linux:metrics on. I set up a pmm-client on my server and it is at least talking to my container. I know it's not a networking issue because when I do "pmm-admin ping" on my client, I see that my qan-api.log gets a ping on my container. The pod is measuring its own pmm-server but no external source.


When I deploy the container locally using the methods in the following link, it works fine, but it doesn't work when set up as a pod using Kubernetes remotely. I could really use any insights you might have. Let me know what other details might be needed.

https://www.percona.com/doc/percona-...etting-up.html
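
A couple of client-side checks that might help narrow this down (PMM 1.x commands). The usual catch with a pod behind a Kubernetes service is the reverse direction: Prometheus on the server has to scrape back to the client's exporter ports (42000 and up by default), which a one-way service/endpoint setup doesn't cover:

pmm-admin check-network   # tests both directions, including server -> client scraping
pmm-admin list            # shows registered services and the local ports they listen on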

Customize Dashboard Layout

Latest Forum Posts - April 19, 2018 - 8:58am
We recently installed PMM Server and added ~30 DB servers to it for monitoring. The graphs are pretty and all, but the layout of the overview page could be much improved. Aside from the CPU and RAM graphs (which are unneeded), everything is just a number sitting in a box 5x the height of the text. Not very efficient at all... Only 9 lines fit on a 1920x1080 screen, so there are more than 3 screens to scroll through to see everything; very inconvenient. We would like to decrease the row height so that the basic stats for all DBs can be seen at once, allowing us to see at a glance, for example, whether a DB server has recently restarted. How can we get the row height under control? Ideally it should look like a spreadsheet.

PMM server requirements

Latest Forum Posts - April 19, 2018 - 8:20am
What are the actual system requirements to have PMM server function correctly? I expect some basic formula involving number of monitored hosts to determine the required CPU and RAM.

We tried PMM in Docker initially as a test, and while it worked for one server, when we tried to add several to be monitored it basically stopped collecting any data. It appears the Docker container was overwhelmed, and since that was only meant as a preview we decided to move on with a trial installation in a full VM.

We started the PMM Server VM with 2 CPU and 4GB RAM initially. We have 20 MySQL servers and 3 MongoDB servers that have pmm-client installed and are feeding data to this server. Again, it worked with a few hosts but was showing significant gaps in the data once we had all the hosts added. Seeing that the prometheus process was sitting at 200% CPU all the time, we increased the VM to 4 vCPU, only to see prometheus using 400% CPU all the time and still gaps in the data.

Researching requirements has not come up with much. In the FAQ we found mention that "one host requires 2GB RAM while 20 hosts should be possible with 16GB" but no actual sizing metrics. That one sentence would lead me to expect that there is insufficient RAM, except that there is only ~2.3GB used with ~1.1GB buffers, ~0.5GB free, and 0 swap use. CPU seems to be the bottleneck, but how many servers can be monitored per core, and would more RAM (and some config adjustment to use it) reduce CPU load?
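
For what it's worth, the PMM Server image documents a couple of environment variables aimed at exactly this. A hedged example of re-creating the container with more Prometheus memory and a coarser resolution (values are examples, and a pmm-data volume container is assumed to exist):

# METRICS_MEMORY is in kilobytes (8388608 KB ~ 8 GiB); METRICS_RESOLUTION trades
# graph granularity for lower CPU load on the server.
docker stop pmm-server && docker rm pmm-server
docker run -d --name pmm-server --volumes-from pmm-data \
  -e METRICS_MEMORY=8388608 \
  -e METRICS_RESOLUTION=5s \
  -p 80:80 percona/pmm-server:1.9.1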

Oh the irony.

Notes after upgrade to 2.4.10

Latest Forum Posts - April 16, 2018 - 5:06am
Hi!

I'm running a 3-node Percona XtraDB Cluster. After I upgraded XtraBackup to 2.4.10, I see this in the log file:

2018-04-16T00:03:03.580612-03:00 2 [Note] WSREP: You have configured 'xtrabackup-v2' state snapshot transfer method which cannot be performed on a running server. Wsrep provider won't be able to fall back to it if other means of state transfer are unavailable. In that case you will need to restart the server.

What does it mean?
BR

pt-archiver - Character set mismatch

Latest Forum Posts - April 16, 2018 - 2:25am
I am using pt-archiver 3.0.8 (3.0.8-1.stretch) on Debian 9 "Stretch" with MariaDB (10.1.26-0+deb9u1). When using this command:

pt-archiver --source h=dbserver.example.local,D=fabitest,t=test --dest h=dbarchiveserver.example.local,D=fabitestarchive, t=test --where 'headerid IN (SELECT id FROM test WHERE (state="ENDED" OR state="ERROR") AND time< DATE_SUB('"2018-04-16 09:31:06"', INTERVAL '40' DAY))' --user archiver --password superpassword --skip-foreign-key-checks --ignore --no-version-check

I am running into the following issue:

Character set mismatch: --source DSN uses utf8mb4, table uses latin1. You can disable this check by specifying --no-check-charset.

The character set seems the same on source and destination.
dbserver.example.local
MariaDB [information_schema]> select * from information_schema.schemata;
+--------------+--------------------+----------------------------+------------------------+----------+
| CATALOG_NAME | SCHEMA_NAME        | DEFAULT_CHARACTER_SET_NAME | DEFAULT_COLLATION_NAME | SQL_PATH |
+--------------+--------------------+----------------------------+------------------------+----------+
| def          | fabitest           | latin1                     | latin1_swedish_ci      | NULL     |
| def          | information_schema | utf8                       | utf8_general_ci        | NULL     |
| def          | mysql              | utf8mb4                    | utf8mb4_general_ci     | NULL     |
| def          | performance_schema | utf8                       | utf8_general_ci        | NULL     |
+--------------+--------------------+----------------------------+------------------------+----------+

dbarchiveserver.example.local:
MariaDB [information_schema]> select * from information_schema.schemata;
+--------------+--------------------+----------------------------+------------------------+----------+
| CATALOG_NAME | SCHEMA_NAME        | DEFAULT_CHARACTER_SET_NAME | DEFAULT_COLLATION_NAME | SQL_PATH |
+--------------+--------------------+----------------------------+------------------------+----------+
| def          | fabitestarchive    | latin1                     | latin1_swedish_ci      | NULL     |
| def          | information_schema | utf8                       | utf8_general_ci        | NULL     |
| def          | mysql              | utf8mb4                    | utf8mb4_general_ci     | NULL     |
| def          | performance_schema | utf8                       | utf8_general_ci        | NULL     |
+--------------+--------------------+----------------------------+------------------------+----------+

Can somebody explain how pt-archiver checks the character set?
I used pt-archiver on Debian 8 but with MySQL. After upgrading, my script isn't working anymore.
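
As far as I know, the check compares the character set of the DSN connection (which defaults to utf8mb4 with newer Debian/MariaDB clients) against the table's character set, not the database default shown in information_schema.schemata; the error message itself says as much. A hedged way to see both sides, with the usual workarounds being the DSN attribute A=latin1 or --no-check-charset:

-- The schemata view only shows database defaults; check the table itself:
SELECT TABLE_SCHEMA, TABLE_NAME, TABLE_COLLATION
FROM information_schema.TABLES
WHERE TABLE_NAME = 'test';

-- And see what character set the client connection negotiated:
SHOW SESSION VARIABLES LIKE 'character_set%';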

Thanks

PMM linux:metrics service causing abnormal load on client machine

Latest Forum Posts - April 14, 2018 - 4:04am
Hi,

PMM - 1.9.0
PMM linux:metrics is causing high CPU load on the client machine; however, the SAR report doesn't suggest load.
The CPU load goes up to 500-1000 (using the "w" command).

At the same time, the PMM "System Overview" "Load Average" graph is also showing the same load (900+), after which it stopped populating the "System Overview" graphs.
We have multiple machines configured for monitoring with PMM, and the problem occurs only on this one client.

The moment the PMM linux:metrics service is stopped, the CPU load comes back down to normal.
We started and stopped the PMM linux:metrics service multiple times to confirm the issue.

Attached herewith are the pmm-server.log and pmm-client.log.


PMM and Mongo sharded replica set cluster

Latest Forum Posts - April 13, 2018 - 7:27am
Hi,

I'm new to Mongo and am trying to learn the mongo-lingo, so I apologize if I use the incorrect terms.

I currently have a 7-system Mongo setup that I'm trying to monitor with PMM - named mongo-r, a1, a2, a3, b1, b2 and b3. Server mongo-r is acting as the router, servers a1, a2 and a3 are supporting replica set rs_a (with a1 set as the main server), and servers b1, b2 and b3 are supporting replica set rs_b (with b1 acting as the main server for that replica set). The database that is being sharded/replicated is named "intdata".

Servers a2, a3 and b2 are acting as configuration servers.

The listening port layouts for each system is:

mongo-r: 27017
a1: 27018
a2: 27018 (mongod) 27019 (mongoc)
a3: 27018 (mongod) 27019 (mongoc)
b1: 27018
b2: 27018 (mongod) 27019 (mongoc)
b3: 27018

I've followed the documentation that shows how to run "pmm-admin add mongodb", but I'm not sure how to run the correct commands on the router and the other systems - to properly show the statistics for the shards and replica sets.
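
If I recall the PMM 1.x flags correctly, each mongod/mongos process is added from its own host with its own port, and a shared --cluster label groups them on the cluster dashboards. A hedged sketch using the names above (instance names and the cluster label are made up here):

# On mongo-r (the mongos router)
pmm-admin add mongodb --uri mongodb://localhost:27017 --cluster intdata-cluster mongo-r

# On a1 (rs_a data-bearing member)
pmm-admin add mongodb --uri mongodb://localhost:27018 --cluster intdata-cluster a1

# On a2, which runs both a shard member and a config server: add each process separately
pmm-admin add mongodb --uri mongodb://localhost:27018 --cluster intdata-cluster a2-shard
pmm-admin add mongodb --uri mongodb://localhost:27019 --cluster intdata-cluster a2-config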

Any help in this would be greatly appreciated!!
Thanks!

Change timezone for my setup

Latest Forum Posts - April 13, 2018 - 7:05am
Hi,

All of my servers are set for EDT/EST. Our PMM virtual machine is set for UTC. Right now our servers are sending Linux and Mongo information to the PMM server. But when we look at the graphs, the activity does not show up in the right place. For instance, I'm currently seeing activity from the last 15 minutes showing up on the graph 4-5 hours ago.

How can I fix this?
Thanks!

Percona Monitoring and Management 1.9.1 Is Now Available

Latest Forum Posts - April 13, 2018 - 2:49am
Percona announces the release of Percona Monitoring and Management 1.9.1. PMM (Percona Monitoring and Management) is a free and open-source platform for managing and monitoring MySQL and MongoDB performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

This release contains bug fixes only and supersedes Percona Monitoring and Management 1.9.0. This release effectively solves the problem in QAN where the Count column actually displayed the number of queries per minute, not per second, as the user would expect. The following screenshot demonstrates the problem. The value of the Count column for the TOTAL row is 649.38 QPS (queries per second). The total number 38.96 k (38960) is only sixty times greater than the reported value of QPS. Thus, queries were counted for each minute within the selected time range of Last 1 hour.

Query Analytics in PMM version 1.9.0.

The corrected version of QAN in PMM 1.9.1 shows that queries are now counted per second. The total number of queries is 60 * 60 times greater than the value of QPS, as should be expected for the chosen time range.
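
A quick check of the arithmetic for the Last 1 hour range: 649.38 x 60 ≈ 38,963 ≈ 38.96 k (per-minute counting, the 1.9.0 bug), whereas 649.38 x 3600 ≈ 2.34 M (per-second counting, as expected in 1.9.1).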

Query Analytics in PMM version 1.9.1.

Bug fixes

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.

Percona Server for MongoDB 3.4.14-2.12 Is Now Available

Latest Forum Posts - April 13, 2018 - 2:48am
Percona announces the release of Percona Server for MongoDB 3.4.14-2.12 on April 12, 2018. Download the latest version from the Percona web site or the Percona Software Repositories.

Percona Server for MongoDB is an enhanced, open source, and highly-scalable database that is a fully-compatible, drop-in replacement for MongoDB 3.4 Community Edition. It supports MongoDB 3.4 protocols and drivers.

Percona Server for MongoDB extends MongoDB Community Edition functionality by including the Percona Memory Engine and MongoRocks storage engine, as well as several enterprise-grade features. It requires no changes to MongoDB applications or code.

This release is based on MongoDB 3.4.14 and does not include any additional changes.

The Percona Server for MongoDB 3.4.14-2.12 release notes are available in the official documentation.

USE INDEX (PRIMARY) is much faster, even when not looking for anything in that column

Latest Forum Posts - April 11, 2018 - 11:18am
Scenario:
  • `table_of_data` has 1 million rows
  • primary_col is the PRIMARY key. It is not AUTO_INCREMENT'ed
  • each column in the WHERE clause is indexed.

Trying to understand why a query that is forced to use the primary key is 3 times faster, when the query is not looking for anything in the primary key column:

SELECT SQL_NO_CACHE primary_col, secondary_col
FROM table_of_data
WHERE col_3 = 'car'
AND col_4 = 'red ferarri'
AND col_5 IN ("2 seater")
AND col_6 IN ("Active", "Transferred", "New")

This executes in 1.5 seconds on a table of 1M rows.

But this same query, forcing MySQL to use the PRIMARY key (`primary_col`) index, is 3 times faster, even though the WHERE clause isn't looking for anything in the `primary_col` column:

SELECT SQL_NO_CACHE primary_col, secondary_col
FROM table_of_data
USE INDEX (PRIMARY)
WHERE col_3 = 'car'
AND col_4 = 'red ferarri'
AND col_5 IN ("2 seater")
AND col_6 IN ("Active", "Transferred", "New")

This executes in 0.45 seconds on a table of 1M rows.
Is there some magic to the PRIMARY key that I'm not understanding?
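
Not an answer, but one way to see what is going on is to compare the two plans; a hedged sketch using the placeholder names from the post. A likely explanation is that USE INDEX (PRIMARY) leaves only the clustered index, i.e. a full table scan, which can beat merging several low-selectivity secondary indexes:

EXPLAIN SELECT SQL_NO_CACHE primary_col, secondary_col
FROM table_of_data
WHERE col_3 = 'car' AND col_4 = 'red ferarri'
  AND col_5 IN ('2 seater')
  AND col_6 IN ('Active', 'Transferred', 'New');

EXPLAIN SELECT SQL_NO_CACHE primary_col, secondary_col
FROM table_of_data USE INDEX (PRIMARY)
WHERE col_3 = 'car' AND col_4 = 'red ferarri'
  AND col_5 IN ('2 seater')
  AND col_6 IN ('Active', 'Transferred', 'New');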

Thank you
Steve

pmp-check-aws-rds - Service check timed out after 60.01 seconds

Latest Forum Posts - April 11, 2018 - 4:19am
Hi,

I'm working with the pmp-check-aws-rds.py plugin (1.1.8) on Nagios Core (4.3.2), and in the last month we have been receiving, from time to time, the service alert "(Service check timed out after 60.01 seconds)". It happens on all RDSs (60 instances over 2 regions - MySQL 5.7.17).
A quick investigation shows that those timeouts are being counted as "connection errors" on every RDS instance (SELECT * FROM performance_schema.host_cache).

Nothing was changed on the Nagios server, and I can't understand why it sometimes works and sometimes doesn't - almost 1 failure in every 10 checks.
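
A hedged way to watch those errors accumulate, and to clear them before a host gets blocked at max_connect_errors (columns as documented for performance_schema.host_cache, referenced above):

-- Connection errors accumulated per client host:
SELECT IP, HOST, SUM_CONNECT_ERRORS, COUNT_HOST_BLOCKED_ERRORS
FROM performance_schema.host_cache;

-- Reset the counters (and unblock any blocked hosts) if needed:
FLUSH HOSTS;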

Thanks.

Question about using TokuDB or InnoDB for financial data.

Latest Forum Posts - April 11, 2018 - 1:44am
Hello, dear friends,
We want to use Percona XtraDB Cluster to cluster 4 nodes (master-master), and we have a finance project which should have a rather large database. The number of tables is not more than 40, but we have about 2 tables which store financial transaction data; the data in these tables is estimated to grow at 350,000 records per day, and we need to keep it for at least 10 years to be able to run various reports.
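
For a sense of scale, the stated growth rate works out to roughly 350,000 rows/day x 365 days x 10 years ≈ 1.28 billion rows per table.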

Most of the operations (98%) on these tables are insert/read, but we can separate updatable fields into a separate table.

My questions are the following:
  1. Should I use the TokuDB storage engine for such large amounts of data, or InnoDB?
  2. What is the best solution for such a large and sensitive database?
Best Regards
Ali

Why is the default value of the rocksdb_deadlock_detect option FALSE?

Latest Forum Posts - April 9, 2018 - 5:28am
According to the MyRocks storage engine variables documentation for Percona Server for MySQL
(https://www.percona.com/doc/percona-...eadlock_detect), the rocksdb_deadlock_detect variable's default value is FALSE.

Does this mean there are no deadlock checks in the RocksDB engine?
And that we should rely on rocksdb_lock_wait_timeout?

Is there any reason deadlock detection is disabled in RocksDB?
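
Not an answer to the "why", but a hedged sketch of checking the setting and enabling it at runtime, assuming the variable is dynamic in this build:

-- Current settings:
SHOW GLOBAL VARIABLES LIKE 'rocksdb_deadlock_detect';
SHOW GLOBAL VARIABLES LIKE 'rocksdb_lock_wait_timeout';

-- Enable deadlock detection at runtime:
SET GLOBAL rocksdb_deadlock_detect = ON;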

Thanks in advance.

pt-osc does not reinstate 'set-vars' if mysql loses its connection, and reconnects

Latest Forum Posts - April 7, 2018 - 3:21pm
We actually tripped up on a weird issue with pt-osc on MySQL 5.7. We have a clustered setup, working across data centres. We are also experiencing network issues outside our control, so we were trying this:


We needed to create 2 indices on a large table of 60m rows. So we did the usual and used pt-osc. This caused some errors when used across the network on the slaves, so we went for the single-server-at-a-time approach.
We have 4 servers in AWS, and 3 at a hosting provider. We added --set-vars="SQL_LOG_BIN=OFF" to stop the commands being replicated. All worked well, except for 2 servers in the hosted area, which arbitrarily started replicating the pt-osc instructions.
After investigation, we narrowed this down to a high-load scenario. We fixed this by shifting the load to other servers, then running the script, and shifting the load back (thx HAProxy).

However, this is less than optimal. It became apparent that a timeout is kicking in, dropping the connection, and then allowing MySQL to reconnect on the next instruction. This loses the settings from --set-vars, which are session-based.

Is there a way to avoid this happening? Is it a known issue, with a fix we are unaware of? We hunted high and low and found nothing. I fully accept that keeping a session variable alive between reconnects is not simple, but I would have thought that a check on threads within pt-osc - and, if different, saving and reloading the session vars - should not be too difficult. But I could also easily be wrong.

Any ideas? Anyone else met this?
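
For reference, the shape of the invocation being discussed (host, table and index names are placeholders); everything passed via --set-vars is a plain session variable, which is why it silently disappears if the connection drops and is re-established:

pt-online-schema-change \
  --alter "ADD INDEX idx_col_a (col_a), ADD INDEX idx_col_b (col_b)" \
  --set-vars "sql_log_bin=0,wait_timeout=10000" \
  --execute \
  h=localhost,D=mydb,t=big_table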