
Percona Monitoring and Management 1.9.0 Is Now Available

Latest Forum Posts - April 4, 2018 - 11:58pm
Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring MySQL® and MongoDB® performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL® and MongoDB® servers to ensure that your data works as efficiently as possible.

There are a number of significant updates in Percona Monitoring and Management 1.9.0 that we hope you will like. Some of the key highlights include:
  • Faster loading of the index page: We have enabled performance optimizations using gzip and HTTP2.
  • AWS improvements: We have added metrics from CloudWatch RDS to 6 dashboards, as well as changed our AWS add instance workflow, and made some changes to credentials handling.
  • Percona Snapshot Server: If you are a Percona customer you can now securely share your dashboards with Percona Engineers.
  • Exporting Percona Monitoring and Management Server logs: Retrieve logs from PMM Server for troubleshooting with a single button click, avoiding the need to log in manually to the Docker container.
  • Low RAM support: We have reduced the memory requirement so PMM Server will run on systems with 512MB of RAM.
  • Dashboard improvements: We have changed MongoDB instance identification for MongoDB graphs, and set a maximum graph Y-axis on the Prometheus Exporter Status dashboard.

AWS Improvements

CloudWatch RDS metrics


Since we are already consuming Amazon CloudWatch metrics and persisting them in Prometheus, we have improved six node-specific dashboards to display Amazon RDS node-level metrics:
  • Cross_Server (Network Traffic)
  • Disk Performance (Disk Latency)
  • Home Dashboard (Network IO)
  • MySQL Overview (Disk Latency, Network traffic)
  • Summary Dashboard (Network Traffic)
  • System Overview (Network Traffic)
AWS Add Instance changes


We have changed our AWS add instance interface and workflow to make clearer what information is needed to add an Amazon Aurora MySQL or Amazon RDS MySQL instance, and we have clarified how to locate your AWS credentials.






AWS Settings


We have improved our documentation to highlight connectivity best practices, and authentication options – IAM Role or IAM User Access Key.

Enabling Enhanced Monitoring



Credentials Screen




Low RAM Support


You can now run Percona Monitoring and Management Server on instances with as little as 512MB of RAM, which means you can deploy to the free tier of many cloud providers if you want to experiment with PMM. Our memory calculation is now:

METRICS_MEMORY_MULTIPLIED=$(( (${MEMORY_AVAIABLE} - 256*1024*1024) / 100 * 40 ))
if [[ $METRICS_MEMORY_MULTIPLIED -lt $((128*1024*1024)) ]]; then
    METRICS_MEMORY_MULTIPLIED=$((128*1024*1024))
fi
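To make the arithmetic concrete, here is the same rule evaluated for a hypothetical host with 512MB of memory (illustrative only; MEMORY_AVAILABLE is an assumed input for this sketch, not a PMM variable):

```shell
# Evaluate the allocation rule above for an assumed 512 MiB host.
MEMORY_AVAILABLE=$((512 * 1024 * 1024))
# 40% of whatever remains after reserving 256 MiB for the rest of the system...
METRICS_MEMORY=$(( (MEMORY_AVAILABLE - 256 * 1024 * 1024) / 100 * 40 ))
# ...but never less than the 128 MiB floor.
if [ "$METRICS_MEMORY" -lt $((128 * 1024 * 1024)) ]; then
  METRICS_MEMORY=$((128 * 1024 * 1024))
fi
echo "$METRICS_MEMORY"   # 134217728 -- the 128 MiB floor applies on a 512 MiB host
```

On a 512MB instance only about 102MB would fall out of the percentage step, so the 128MB floor is what actually gets allocated to metrics.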
Percona Snapshot Server


Snapshots are a way of sharing PMM dashboards, via a link, with individuals who do not normally have access to your PMM Server. If you are a Percona customer, you can now securely share your dashboards with Percona Engineers. We have replaced the button that shared dashboards to the publicly hosted Grafana platform with one that shares to a platform administered by Percona. Your dashboard will be written to Percona Snapshots, and only Percona Engineers will be able to retrieve the data. Old snapshots expire automatically after 90 days, but when sharing you will have the option to configure a shorter retention period.



Export of PMM Server Logs


In this release, the logs from PMM Server can be exported with a single button click, avoiding the need to log in manually to the Docker container. This simplifies troubleshooting a PMM Server; for Percona customers in particular, it provides a more consistent way to gather data when Percona Engineers request it.

Faster Loading of the Index Page


In Percona Monitoring and Management version 1.8.0, the index page was redesigned to reveal more useful information about the performance of your hosts as well as immediate access to essential components of PMM. However, the index page had to load much of its data dynamically, resulting in a noticeably longer load time. In this release we enabled gzip and HTTP2 to improve the load time of the index page. The following screenshots demonstrate the results of our tests on webpagetest.org, where we reduced page load time by half. We will continue to look for opportunities to improve the performance of the index page, and expect another improvement when we upgrade to Prometheus 2.
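If you want to confirm the effect on your own installation, the negotiated protocol and the compression are both visible in the response headers (a sketch; pmm.example.com is a placeholder for your PMM Server address):

```shell
# Fetch only the headers; --compressed advertises gzip support to the server.
# The status line shows the negotiated protocol (e.g. HTTP/2) and
# Content-Encoding shows whether gzip was applied.
# pmm.example.com is a placeholder -- substitute your own PMM Server.
curl -skI --compressed https://pmm.example.com/ | grep -iE '^HTTP/|^content-encoding'
```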

The load time of the index page of PMM version 1.8.0



The load time of the index page of PMM version 1.9.0

Issues in this release

New Features
  • PMM-781: Plot new PXC 5.7.17, 5.7.18 status variables on new graphs for PXC Galera, PXC Overview dashboards
  • PMM-1274: Export PMM Server logs as zip file to the browser
  • PMM-2058: Percona Snapshot Server
Improvements
  • PMM-1587: Use mongodb_up variable for the MongoDB Overview dashboard to identify if a host is MongoDB.
  • PMM-1788: AWS Credentials form changes
  • PMM-1823: AWS Install wizard improvements
  • PMM-2010: System dashboards update to be compatible with RDS nodes
  • PMM-2118: Update grafana config for metric series that will not go above 1.0
  • PMM-2215: PMM Web speed improvements
  • PMM-2216: PMM can now be started on systems without memory limit capabilities in the kernel
  • PMM-2217: PMM Server can now run in Docker with 512 Mb memory
  • PMM-2252: Better handling of variables in the navigation menu
Bug fixes
  • PMM-605: pt-mysql-summary requires additional configuration
  • PMM-941: ParseSocketFromNetstat finds an incorrect socket
  • PMM-948: Wrong load reported by QAN due to mis-alignment of time intervals
  • PMM-1486: MySQL passwords containing the dollar sign ($) were not processed properly.
  • PMM-1905: In QAN, the Explain command could fail in some cases.
  • PMM-2090: Minor formatting issues in QAN
  • PMM-2214: Setting Send real query examples for Query Analytic OFF still shows the real query in example.
  • PMM-2221: no Rate of Scrapes for MySQL & MySQL Errors
  • PMM-2224: Exporter CPU Usage glitches
  • PMM-2227: Auto Refresh for dashboards
  • PMM-2243: Long host names in Grafana dashboards are not displayed correctly
  • PMM-2257: PXC/galera cluster overview Flow control paused time has a percentage glitch
  • PMM-2282: No data is displayed on dashboards for OVA images
  • PMM-2296: The mysql:metrics service will not start on Ubuntu LTS 16.04
Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.

(No such file or directory) trying to add node to cluster

Latest Forum Posts - April 4, 2018 - 4:35pm
I'm trying to build an xtradb cluster following the instructions in the Percona documentation but am running into an error trying to add a node to the cluster. Here is the error message from the joiner:

Code:
2018-04-04T23:10:36.778283Z 1 [Note] WSREP: Setting wsrep_ready to false
2018-04-04T23:10:36.778399Z 0 [Note] WSREP: Initiating SST/IST transfer on JOINER side (wsrep_sst_xtrabackup-v2 --role 'joiner' --address '10.130.35.11' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --parent '28534' --binlog 'f-tier-va-db-1-bin' )
2018-04-04T23:10:36.778948Z 0 [ERROR] WSREP: Failed to read 'ready <addr>' from: wsrep_sst_xtrabackup-v2 --role 'joiner' --address '10.130.35.11' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --parent '28534' --binlog 'f-tier-va-db-1-bin' Read: '(null)'
2018-04-04T23:10:36.778968Z 0 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup-v2 --role 'joiner' --address '10.130.35.11' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --parent '28534' --binlog 'f-tier-va-db-1-bin' : 2 (No such file or directory)
2018-04-04T23:10:36.779013Z 1 [ERROR] WSREP: Failed to prepare for 'xtrabackup-v2' SST. Unrecoverable.
2018-04-04T23:10:36.779021Z 1 [ERROR] Aborting

On the donor side there is not much:

Code:
2018-04-04T23:04:41.909344Z 0 [Note] WSREP: (4ad1cc04, 'tcp://0.0.0.0:4567') connection established to 89ef1676 tcp://10.130.35.11:4567
2018-04-04T23:04:41.909388Z 0 [Warning] WSREP: discarding established (time wait) 89ef1676 (tcp://10.130.35.11:4567)
2018-04-04T23:04:42.840177Z 0 [Note] WSREP: cleaning up 89ef1676 (tcp://10.130.35.11:4567)

My guess is that it is saying it can't find wsrep_sst_xtrabackup-v2, but that file is in /usr/bin. If I try to run it on the command line as the mysql user I get this result:

Code:
[root@f-tier-va-db-1 percona-xtradb-cluster.conf.d]# sudo su - mysql
Last login: Wed Apr 4 14:51:48 PDT 2018 on pts/0
-bash-4.2$ cd /var/lib/mysql
-bash-4.2$ wsrep_sst_xtrabackup-v2 --role 'joiner' --address '10.130.35.11' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --parent '28534' --binlog 'f-tier-va-db-1-bin'
2018-04-04T23:25:10.779548Z WSREP_SST: [DEBUG] The xtrabackup version is 2.4.10
2018-04-04T23:25:11.029867Z WSREP_SST: [DEBUG] Streaming with xbstream
2018-04-04T23:25:11.030998Z WSREP_SST: [DEBUG] Using socat as streamer
2018-04-04T23:25:11.095480Z WSREP_SST: [DEBUG] Evaluating (@ Joiner-Recv-sst-info) timeout -k 110 100 socat -u TCP-LISTEN:4444,reuseaddr,retry=30 stdio | xbstream $xbstreameopts -x; RC=( ${PIPESTATUS[@]} )
ready 10.130.35.11:4444/xtrabackup_sst//1

It looks like a good result, returning ready 10.130... etc.

I don't know how to continue debugging the problem. Other people's issues here don't seem to be quite the same as mine.

Code:
-bash-4.2$ cat mysqld.cnf wsrep.cnf
# Template my.cnf for PXC
# Edit to your requirements.
[client]
socket=/var/lib/mysql/mysql.sock
[mysqld]
server-id=2
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
log-bin
log_slave_updates
expire_logs_days=7
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
innodb_buffer_pool_size=6G
innodb_log_file_size=128M
enforce_gtid_consistency = 1
gtid_mode = ON
# Path to Galera library
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
# Cluster connection URL contains IPs of nodes
#If no IP is found, this implies that a new cluster needs to be created,
#in order to do that you need to bootstrap this node
wsrep_cluster_address=gcomm://10.131.35.11,10.130.35.11,10.132.35.11
wsrep_provider_options='gcache.size=512M'
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# Slave thread to use
wsrep_slave_threads= 8
wsrep_log_conflicts
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node IP address
wsrep_node_address=10.130.35.11
# Cluster name
wsrep_cluster_name=trans_db
#If wsrep_node_name is not specified, then system hostname will be used
wsrep_node_name=va-db-1
#pxc_strict_mode allowed values: DISABLED,PERMISSIVE,ENFORCING,MASTER
pxc_strict_mode=ENFORCING
# SST method
wsrep_sst_method=xtrabackup-v2
#Authentication for SST method
wsrep_sst_auth="sstuser:sstuser"
[sst]
inno-apply-ops="--use-memory=2G"
wsrep_debug=1
#compressor="pigz -p 1"
#decompressor="pigz -d"
Here are my packages

Code:
-bash-4.2$ rpm -qa | grep -i percona
Percona-XtraDB-Cluster-garbd-57-5.7.21-29.26.1.el7.x86_64
Percona-XtraDB-Cluster-shared-compat-57-5.7.21-29.26.1.el7.x86_64
Percona-XtraDB-Cluster-client-57-5.7.21-29.26.1.el7.x86_64
percona-xtrabackup-24-2.4.10-1.el7.x86_64
Percona-XtraDB-Cluster-57-debuginfo-5.7.21-29.26.1.el7.x86_64
Percona-XtraDB-Cluster-full-57-5.7.21-29.26.1.el7.x86_64
Percona-XtraDB-Cluster-shared-57-5.7.21-29.26.1.el7.x86_64
Percona-XtraDB-Cluster-server-57-5.7.21-29.26.1.el7.x86_64
Percona-XtraDB-Cluster-devel-57-5.7.21-29.26.1.el7.x86_64
Percona-XtraDB-Cluster-test-57-5.7.21-29.26.1.el7.x86_64
percona-release-0.1-5.noarch

pt-online-schema-change fatal error: undeclared value

Latest Forum Posts - April 4, 2018 - 4:48am
After the latest update pt-online-schema-change (version 3.0.8) isn't working anymore. Seems like a common error which is easy to fix, however for us it means we can't do any changes for the time being.

Message: Can't use an undefined value as an ARRAY reference at /usr/bin/pt-online-schema-change line 7514.

node 3 does not sync

Latest Forum Posts - April 3, 2018 - 11:31am
I had a 3-node Percona XtraDB Cluster 5.6 setup.
I had a problem on node 3 and needed to reinstall it.
My database is about 60GB in size; when I try to start node 3, it reports this error:

root@XXXXXXX:/home# /etc/init.d/mysql start
[....] Starting MySQL (Percona XtraDB Cluster) database server: mysqld . .[....] State transfer in progress, setting sleep higher: mysqld .[.[FAILThe server quit without updating PID file (/var/run/mysqld/mysqld.pid). ... failed!
failed!


I did some testing, and apparently the error is caused by the size of the database, which leads to a timeout.

Is it possible to increase this time to synchronize with other nodes?
Does anyone have a solution?

Thank you

Mongo Sharded Cluster monitoring (sanity check) and other noob q's...

Latest Forum Posts - April 3, 2018 - 10:39am
Hi
I've got PMM successfully installed across several ubuntu nodes and communicating to the docker-based pmm-server.

I've been scouring doc and haven't been able to find a how-to or blog post about monitoring a mongo sharded-cluster. There seems to be a lot of information taken-for-granted that's not spelled out for newcomers. So, after a couple days of poking around blowing stuffs up, I'm basically looking for confirmation that things here are correctly set-up while I do this product evaluation.

In my dev env, I have three machines, each running a config server and a repl-set node for one of the two shards. My primary work laptop is running the MariaDB master and the mongo-router for the sharded cluster. I have a pmm-client running on the work laptop and I've used pmm-admin to connect to the mongos (router) -- I have an accurate report under "cluster summary" in the grafana dash. I was not able to start the mongo-router with the operationProfiling section as documented, but I did add it to the shard-1.1 node and was able to start mongod with those options.

I am not able to add the local mongod instance (for the shard) using pmm-admin add however. So, my first question - is this the correct way to monitor a sharded repl-set? That you attach the client to the router instance and that's it as long as you enable the operationProfiling section on all mongod instances in the sharded repl-set?

Code:
root@gordito:~# pmm-admin add --uri mongodb://localhost:27018 mongodb
[linux:metrics] OK, already monitoring this system.
[mongodb:metrics] Cannot connect to MongoDB using uri mongodb://localhost:27018: no reachable servers
root@gordito:~# mongo --port 27018
MongoDB shell version: 3.2.19
connecting to: 127.0.0.1:27018/test
Server has startup warnings:
2018-04-02T15:39:30.280-0700 I CONTROL [initandlisten]
2018-04-02T15:39:30.280-0700 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2018-04-02T15:39:30.280-0700 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2018-04-02T15:39:30.280-0700 I CONTROL [initandlisten]
2018-04-02T15:39:30.280-0700 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2018-04-02T15:39:30.280-0700 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2018-04-02T15:39:30.280-0700 I CONTROL [initandlisten]
namasteShard1:PRIMARY>
Second question is: how do you monitor the config-servers? As they're just another repl-set serving hash-keys, don't you also want these under PMM?

Third question: Why does the repl-set option in the grafana dash show blank, as in it's not connecting or detecting the shard's replication set?

If, in the grafana dash, I switch away from the mongo overview (which, according to the instance drop-down, is linked to the laptop) and then click back to it using the drop-down menu option, I lose all data until I go back to the main window and click the system node (left column) to regenerate the page. When I request the overview page, the laptop is missing from the instance drop-down in the top left.

I think all of these questions can be answered in the appropriate how-to on setting up monitoring for a repl-set or sharded-cluster -- if such a doc exists, can someone please provide a link?

On the primary shard node, I cannot add the shard

Next, on the mysql side, when I look at the dash for mysql replication, I don't see anything other than graphs in the left side. No top-bar summary info or right-side data. I am assuming one also needs to add the repl-set using the pmm-admin tool? (I didn't find this explicitly stated so am just guessing...)

Finally, on the main dashboard, I have a correct count of the number of systems monitored, but the db count (Monitored DB Instances) is at 1 even though I can get graphs (as described above) from the mysql and mongo connections... why is this?

Ok - that's it for now, I am going to keep trying various permutations to see if I can get things to mesh on my own. However, I would be deeply grateful if someone could point out my more glaring mistakes, assumptions and errors...

Thanks!

---mike


xtrabackup restore one database to percona-server

Latest Forum Posts - April 3, 2018 - 5:56am
There is a server with several databases; cron jobs make database backups, separately for each database:
/usr/bin/xtrabackup --defaults-file=/etc/mysql/my.cnf --user='root' --password='password' --datadir=/var/lib/mysql --databases "$DATABASE" --target-dir=$BACKUP_DIR/$DATABASE --backup 2>&1

I need to restore one database without changing the others. How do I do this?
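For reference, restoring a subset of an XtraBackup backup is normally done by exporting and importing tablespaces rather than by a full restore. A sketch of that flow under stated assumptions (the per-database backup above has not yet been prepared; db1 and t1 are placeholder names, and the table definitions must already exist on the destination server):

```shell
# 1. Prepare the single-database backup with --export, so each table gets
#    the metadata files needed for tablespace import.
xtrabackup --prepare --export --target-dir=$BACKUP_DIR/db1

# 2. On the running server, for each table to restore (placeholder db1.t1):
#    discard the live tablespace, copy the backed-up files in, then import.
mysql -e "ALTER TABLE db1.t1 DISCARD TABLESPACE;"
cp $BACKUP_DIR/db1/db1/t1.ibd $BACKUP_DIR/db1/db1/t1.cfg /var/lib/mysql/db1/
chown mysql:mysql /var/lib/mysql/db1/t1.*
mysql -e "ALTER TABLE db1.t1 IMPORT TABLESPACE;"
```

This keeps the other databases on the server untouched, at the cost of repeating the discard/copy/import step per table.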

Percona XtraBackup 2.4.10 Is Now Available

Latest Forum Posts - April 3, 2018 - 3:24am
Percona announces the GA release of Percona XtraBackup 2.4.10 on March 30, 2018. This release is based on MySQL 5.7.19. You can download it from our download site and apt and yum repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, it drives down backup costs while providing unique features for MySQL backups.

As of this release, the Percona XtraBackup issue tracking system has moved from Launchpad to JIRA.

Bugs Fixed:
  • xbcrypt with the --encrypt-key-file option was failing due to a regression in Percona XtraBackup 2.4.9. Bug fixed PXB-518.
  • Simultaneous usage of both the --lock-ddl and --lock-ddl-per-table options caused Percona XtraBackup to lock up, with the backup process never completing. Bug fixed PXB-792.
  • Compilation under Mac OS X was broken. Bug fixed PXB-796.
  • A regression in the maximum number of pending reads, together with a previously unnoticed possibility of a pending-reads-related deadlock, caused Percona XtraBackup to get stuck in the prepare stage. Bug fixed PXB-1467.
  • Percona XtraBackup skipped tablespaces with a corrupted first page instead of aborting the backup. Bug fixed PXB-1497.
Other bugs fixed: PXB-513.

Release notes with all the bugfixes for version 2.4.10 are available in our online documentation. Please report any bugs to the issue tracker.

About deadlock of FLUSH TABLES WITH READ LOCK in xtrabackup

Latest Forum Posts - April 3, 2018 - 1:47am
I am using Xtrabackup.
percona-xtrabackup-24-2.4.8-1.el6.x86_64

Is there a backup method that does not use FLUSH TABLES WITH READ LOCK ?
I am using a slave with multi-threaded replication, where a deadlock bug may occur.

https://bugs.mysql.com/bug.php?id=87489

Please tell me the way to avoid this bug.

PMM Linux targets flapping state

Latest Forum Posts - April 1, 2018 - 10:08pm
Hello,

I have a PMM server (v1.7.0) installed as a Docker container, monitoring Linux and MySQL metrics of 3 PXC nodes using PMM clients of the same version. This setup was working fine without any issue until last Thursday. Since then, I am observing flapping in the state of Linux metrics from 2 nodes. pmm-admin list always shows the service as up, but when we run pmm-admin check-network, we can see the Client <-- Server status for linux:metrics as DOWN; after a couple of seconds or minutes it is back to OK for a couple of minutes before flapping again. During the same time, when checking 'prometheus/targets', the Linux metrics target is down due to the error "context deadline exceeded".

The issue affects only 2 nodes out of 3. All the nodes and the PMM server are in the same network and datacenter. It would be great to get some guidance on how to troubleshoot the issue.

With Regards
Raghupradeep

A PXC Cluster across three datacenters

Latest Forum Posts - March 30, 2018 - 8:25am
I am new to PXC. I wonder if I can deploy a cluster across three datacenters, for example: nodeA in dcA, nodeB in dcB, nodeC in dcC. The latency between dcA and dcB is low because they are in the same city; dcC is far away from them. nodeA and nodeB both serve writes and reads; nodeC only holds a full backup and votes for nodeA or nodeB to decide which one is the donor, and it never serves writes or reads. nodeA synchronizes with nodeB, while nodeC replicates asynchronously from nodeA and nodeB, and the donor can only be nodeA or nodeB. Is this possible, and how would I configure it? Thanks very much.

PMM 1.8.1 Client Install broke MySQL Performance_Schema Table

Latest Forum Posts - March 28, 2018 - 7:54am
After installing PMM clients on 5 servers in a cluster running Percona XtraDB Cluster (MySQL 5.7), none of the servers allow any queries against performance_schema tables. I get the error message "Native table 'performance_schema'.'global_variables' has the wrong structure" after installing PMM.

I have tried a process restart, and running "sudo mysql_upgrade -u root -p".

This has broken our phpMyAdmin installs, so we cannot view any variables in it, and it has made it impossible for anything other than PMM to get performance metrics from MySQL.

Anyone have any advice? Is this an issue with PMM 1.8.1? It is very concerning if it really did adjust the database structure without informing me of a change.

pt-online-schema-change - rename FK problem

Latest Forum Posts - March 27, 2018 - 12:43am
Hi everyone,

I have a question regarding the toolkit ‘pt-online-schema-change’ (percona-toolkit-3.0.5-1.el6.x86_64).
We are using this tool to OPTIMIZE tables of our production databases (MySQL 5.5.42). Until now it has worked well, except for the names of constraints (foreign keys). Whenever this tool rebuilds a MySQL table, it renames all foreign keys on that table (adding or removing an underscore in front of the FK name). I know this is done because of the MySQL restriction that does not allow two foreign keys with the same name. But does this toolkit (pt-online-schema-change) have any option to rename the foreign keys back to their previous names, like the rename of tables (_tablename_new --> tablename)?

Execution of this tool is done like below:
pt-online-schema-change -h ${MYSQL_HOST} -u ${MYSQL_USER} -p${MYSQL_PASSWORD} --execute --print --alter "ENGINE=InnoDB" --alter-foreign-keys-method=rebuild_constraints D=${DATABASE_NAME},t=$table

If anyone can help with this question, I would appreciate it.


Thanks,
Shkemb

xtrabackup using stream

Latest Forum Posts - March 26, 2018 - 6:50pm
I'm new to percona xtrabackup and I've been trying to perform a full backup stream from my local ( test server around 600GB ) to remote server.
I just have some questions and I need guidance, and I think this is the best place.

I have this command which I executed in my local
innobackupex --user=user --password=password --stream=tar /which/directory/ | pigz | ssh user@10.11.12.13 "cat - > /mybackup/backup.tar.gz"

My questions are :
  • My log scan is not changing / increasing
>> log scanned up to (270477048535)
>> log scanned up to (270477048535)
>> log scanned up to (270477048535)
>> log scanned up to (270477048535)
>> log scanned up to (270477048535)

I've read a comment before and someone says log scan will not increase if no one is using the database. ( Yes, no one is using the database )
  • It's been running for a while.
I've tried using xtrabackup against a local test server with around 1.7TB and it finished in just a few hours. Is it slow because I'm using a stream?
  • What is the purpose of "/which/directory/" in my command? Is it going to store the file in /which/directory/ first and then transfer to my remote server ? Why do I have to specify a directory?
  • No created file on my local server /which/directory/ and to my remote server /mybackup/.

Am I doing something wrong ? Is there a much easier way to perform this?
My only goal is to backup my local database to a remote server, I'm doing this stream because I don't have enough disk space to store my backup locally.

I'm using MariaDB 5.5 and Percona xtrabackup 2.2

Can anyone help me? Thanks in advance.
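For what it's worth, with --stream the directory argument is only used as a scratch location for innobackupex's temporary files; the backup itself goes to stdout. A sketch of an equivalent stream using xbstream instead of tar (same host, credentials, and paths as the command above; assumes xbstream and a decompressor are installed on the remote server):

```shell
# Stream the backup as xbstream, compress with pigz, and unpack remotely.
# /tmp here is only a scratch directory for innobackupex temporary files.
innobackupex --user=user --password=password --stream=xbstream /tmp \
  | pigz \
  | ssh user@10.11.12.13 "gunzip | xbstream -x -C /mybackup/"
```

Extracting on the remote end as the stream arrives avoids ever holding a tarball on either side; the extracted directory can then be prepared in place with innobackupex --apply-log.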

Can't set up docker volume persistence on Amazon ECS

Latest Forum Posts - March 25, 2018 - 6:51am
I'm trying to set up percona server on Amazon ECS, using CloudFormation.
If I just start percona server without setting up volumes for persistence, the container starts just fine. The problem occurs only after setting up volumes. Starting the server always fails with the following errors.

Running --initialize-insecure datadir: /var/lib/mysql/
mysqld: Can't create/write to file '/var/lib/mysql/is_writable' (Errcode: 13 - Permission denied)
2018-03-25T13:09:29.624331Z 0 [Warning] Changed limits: max_open_files: 1024 (requested 5000)
2018-03-25T13:09:29.624386Z 0 [Warning] Changed limits: table_open_cache: 431 (requested 2000)
2018-03-25T13:09:29.624523Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2018-03-25T13:09:29.625959Z 0 [ERROR] --initialize specified but the data directory exists and is not writable. Aborting.
2018-03-25T13:09:29.626041Z 0 [ERROR] Aborting

Here is my CloudFormation template to create an ECS cluster with percona-server and adminer.

https://gist.github.com/vroad/7f8217...d168e00e633253

If I use mysql:5.7 instead of percona/percona-server 5.7, the server starts fine. Somehow this problem only occurs with percona-server.
Probably I need to fix permissions in some way, but why does the same configuration not work with percona-server? (And how would I do that in ECS's task definition?)

https://github.com/docker-library/mysql/issues/219

Bug found in Percona Server Version: 5.6.34-79.1-1.jessie

Latest Forum Posts - March 24, 2018 - 7:09am
Hi,

Today we encountered a bug in Percona Server Version: 5.6.34-79.1-1.jessie.
The MySQL service was unavailable for a few seconds while the bug was being logged.

Attached mysql-error.log with bug details. Please help.

Here are few bug related logs.

key_buffer_size=33554432
read_buffer_size=131072
max_used_connections=414
max_threads=502
thread_count=81
connection_count=81
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 232495 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x36fb450
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 7fa997040e88 thread_stack 0x40000
/usr/sbin/mysqld(my_print_stacktrace+0x2c)[0x8c10bc]
/usr/sbin/mysqld(handle_fatal_signal+0x469)[0x649249]
/lib/x86_64-linux-gnu/libpthread.so.0(+0xf8d0)[0x7fb15df7d8d0]
/usr/sbin/mysqld(_ZN8MDL_lock11Ticket_list13remove_ticketEP10MDL_ticket+0x11)[0x63b981]
/usr/sbin/mysqld(_ZN8MDL_lock13remove_ticketEMS_NS_11Ticket_listEP10MDL_ticket+0x88)[0x63be48]
/usr/sbin/mysqld(_ZN11MDL_context12release_lockE17enum_mdl_durationP10MDL_ticket+0x1a)[0x63ceea]
/usr/sbin/mysqld(_ZN18Global_backup_lock18release_protectionEP3THD+0x1c)[0x7bee6c]
/usr/sbin/mysqld(_ZN13MYSQL_BIN_LOG6commitEP3THDb+0x4c5)[0x86e645]
/usr/sbin/mysqld(_Z15ha_commit_transP3THDbb+0x310)[0x591290]
/usr/sbin/mysqld(_Z17trans_commit_stmtP3THD+0x29)[0x751679]
/usr/sbin/mysqld(_Z21mysql_execute_commandP3THD+0x2eb9)[0x6c9c99]
/usr/sbin/mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x5e8)[0x6ccfe8]
/usr/sbin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0xd2b)[0x6ce3cb]
/usr/sbin/mysqld(_Z24do_handle_one_connectionP3THD+0x1a2)[0x69b1d2]
/usr/sbin/mysqld(handle_one_connection+0x40)[0x69b270]
/usr/sbin/mysqld(pfs_spawn_thread+0x146)[0x8fc596]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x80a4)[0x7fb15df760a4]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7fb15bfb662d]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (7fa86246b900): is an invalid pointer
Connection ID (thread ID): 46143427
Status: NOT_KILLED