This post is a follow-up to my November 19 webinar, “Tips from the Trenches: A Guide to Preventing Downtime for the Over-Extended DBA,” during which I described some of the most common reasons DBAs experience avoidable downtime. The session was aimed at the “over-stretched DBA”: the MySQL DBA who is short on time, or an engineer from another discipline without deep knowledge of the MySQL system. The over-stretched DBA may be prone to fundamental mistakes that cause downtime through poor response time, operations that block important data, or administrative mishaps stemming from a lack of best-practice monitoring and alerting. (You can download my slides and view the recorded webinar here.)
Monitor the things
One of the keys to keeping a system up and running is ensuring that your finger is on the pulse of the environment. Here on the Percona Managed Services team, we leverage the Percona Monitoring Plugins (open source plugins for Nagios, Cacti and Zabbix) to ensure we have visibility into our clients’ operations. Having a handle on basics such as disk space, memory usage and MySQL operational metrics ensures that we avoid trivial downtime that would affect the client’s uptime or, worse, their bottom line.
One of the most common reasons that an application is unable to serve data to its end user is that access to a table is being blocked due to another ongoing operation. This can be blamed on a variety of sources: backups, schema changes, poor configuration and long running transactions can all lend themselves to costly blocking. Understanding the impact of actions on a MySQL server can be the difference between a happy end user and a frustrated one.
During the webinar I made reference to some resources and techniques that can help the over-extended DBA avoid downtime. Here are some highlights…
Monitoring and Alerting
It’s important that you have some indications that something is reaching its capacity. It might be the disk, connections to MySQL or auto_increment limit on a highly used table. There is quite the landscape to cover but here are a handful of helpful tools:
* Percona Monitoring Plugins
* New Relic
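As a concrete example of one such capacity check, the auto_increment headroom of your tables can be read straight out of information_schema. This is a sketch that assumes unsigned INT auto_increment columns (4294967295 is the unsigned INT maximum; BIGINT or signed columns need a different limit), and the 75% threshold is arbitrary:

```sql
-- Tables whose auto_increment counter has used more than 75% of an
-- unsigned INT range; alert before inserts start failing
SELECT table_schema, table_name, auto_increment,
       ROUND(auto_increment / 4294967295 * 100, 2) AS pct_used
FROM information_schema.tables
WHERE auto_increment IS NOT NULL
  AND auto_increment / 4294967295 > 0.75
ORDER BY pct_used DESC;
```

A query like this is easy to wire into Nagios or Zabbix as a custom check.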
Poorly performing SQL can be indicative that the configuration is incorrect, that there’s a missing index or that your development team needs a quick lesson on MySQL anti-patterns. Arm yourself with proof that the SQL statements are substandard using these resources and work with the source to make things more efficient:
* Percona Cloud Tools
* pt-query-digest, explain, indexes
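To put numbers behind a “the SQL is substandard” claim, a common first pass is to aggregate the slow query log with pt-query-digest and then EXPLAIN the worst offenders. A sketch (the log path is illustrative):

```shell
# Rank statement fingerprints in the slow log; report the 10 worst
# by aggregate response time
pt-query-digest --limit 10 /var/lib/mysql/slow-query.log
```

Each fingerprint in the report includes example statements you can feed to EXPLAIN to confirm whether an index is missing.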
If you need to ensure that your application survives hiccups such as hardware failure or network impairment, a well deployed HA solution will give you the peace of mind that you can quickly mitigate bumps in the road.
* Percona XtraDB Cluster, Galera
* Percona Replication Manager
A wise man once said, “A backup today saves you tomorrow.” Covering all bases can be the difference between recovering from a catastrophic failure and job hunting. Mixing logical, physical and incremental backups, while adding in some offsite copies, can provide a safety net in the event of a small mistake like a dropped table or, worse, losing all working copies of data and backups in a SAN failure. It happens, so be prepared.
* Percona XtraBackup
* mysqlbinlog (5.6)
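As one way to cover the incremental piece, the mysqlbinlog shipped with MySQL 5.6 can stream binary logs off a remote master continuously, giving you a near-real-time offsite copy (host and credentials below are illustrative):

```shell
# Pull raw binlog files from the master and keep following new events
mysqlbinlog --read-from-remote-server --host=master.example.com \
    --user=backupuser --password \
    --raw --stop-never mysql-bin.000001
```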
We had some great questions from the attendees and regrettably were unable to answer them all, so here are some of them with my responses.
Q: I use MySQL on Amazon RDS. Aren’t many of the operations automated, or do these tips still apply?
A: It’s not completely automated. There are still challenges to address and configuration opportunities, but understanding the limitations of RDS is key. For example, the location and size of the tmpdir is something you are unable to customise on RDS. You would typically review this config in a production environment if your workload required it. Any costly queries that perform operations requiring tmp area to sort (think OLAP) might not be a good fit on RDS due to this limitation. Getting to know the limitations around hosted or DBaaS services is time well spent to avoid explaining what keeps taking the application down in peak hours.
Q: What other parts of Percona Toolkit do you recommend for MySQL operations?
A: Percona Toolkit is a well-evolved suite of tools that all MySQL DBAs should familiarize themselves with. In particular I will fit many tools into my weekly workflow:
- pt-duplicate-key-checker
The key with Percona Toolkit is that many common tasks or problems that could cause you to reinvent the wheel are covered by mature, production-ready tools. As with any tool, you should always read the label, or in this case the documentation, so you’re well aware of what the tools can do, the risks involved, and the features you can make use of.
Q: HA – are there any solutions that you would stay away from?
A: Choosing any particular HA solution is going to be another R&D exercise. You will need to understand the tradeoffs and configuration options and compare products. Some might have a higher TCO or lack functionality. Once the chosen solution is implemented, it’s important that the engineers understand the technology well enough to troubleshoot it or use its functionality when a failover needs to be instigated. I like HA solutions that are fast to fail over to; some entail starting MySQL from cold.
Q: You mentioned having tested backups. How do you perform this?
A: Percona’s method is using a dedicated host with access to the backup files. Then with a combination of mysqlsandbox and pt-table-checksum we can discover if we trust the files we capture for disaster recovery. Many people underestimate the importance of this task.
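A rough sketch of that verification flow, assuming an XtraBackup-based physical backup (paths and the sandbox details are illustrative; note that pt-table-checksum compares a master against hosts replicating from it, so the throwaway instance is attached as a temporary replica):

```shell
# 1. Make the physical backup consistent
innobackupex --apply-log /backups/latest/
# 2. Start a disposable instance (e.g. with MySQL Sandbox) on the
#    restored datadir and point it at the source as a replica
# 3. Checksum the source; differences show up per replica
pt-table-checksum h=production-master,u=checksum_user
```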
Q: Percona Cloud Tools – how much does it cost?
A: Right now it’s a free service. Visit cloud.percona.com for more information, but in a nutshell Percona Cloud Tools is a hosted service providing access to query performance insights for all MySQL users.
Q: Is there API access to Percona Cloud Tools for application integration?
A: There is currently not a public API available. It is on the roadmap, though. We’d be interested to hear more about your use case so please sign up for the service and try it out. After signing in, all pages include a Feedback link to share your thoughts and ideas such as how you’d like to use a public API.
Q: Can you use MHA with Percona XtraDB Cluster?
A: MHA is not something that can be used with Percona XtraDB Cluster (PXC). It’s common to partner PXC with HAProxy for making sure your writes are going to the appropriate node.
Q: Can MHA make automatic failover? If MHA has automatic failover, what do you recommend? Configure it for automatic failover?
A: MHA can perform an automatic failover. Personally I prefer managed failover. When working with automated failover it’s important that failback is manual to avoid “flapping.” “Split-brain” is an ailment you don’t want to suffer from either, and auto failover removes human judgment from the decision to relocate operations from a failed node onto a standby node. If you are going to opt for automatic failover, it is advisable to test all potential failure scenarios and to employ a STONITH method to really ensure that the unresponsive node is not serving read/write traffic.
Q: What is the best way to detect database blocking from DML statements? Is there a tool that will show blocking after the fact so you don’t have to catch it real-time?
A: Once again, Percona has a tool called pt-deadlock-logger that can detect and log deadlocks. Detecting locking can be achieved using “SHOW ENGINE INNODB STATUS” or utilizing the information_schema.innodb_locks table. Some engineering might be required for this to be logged but those resources exist for use.
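To see blocking (as opposed to deadlocks) in real time on InnoDB, the information_schema lock tables can be joined; logging this query’s output on a schedule gives you the after-the-fact record. The tables below are standard in MySQL 5.5/5.6 with InnoDB:

```sql
-- Which transaction is blocking which, right now
SELECT r.trx_mysql_thread_id AS waiting_thread,
       r.trx_query           AS waiting_query,
       b.trx_mysql_thread_id AS blocking_thread,
       b.trx_query           AS blocking_query
FROM information_schema.innodb_lock_waits w
JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id
JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id;
```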
Q: Since you mentioned tinkering with ELK I was wondering if you had any tips on good Kibana dashboards to build to monitor MySQL databases/clusters?
A: ELK is something that I’m looking to publish some information on soon so watch this space!
Thanks again everyone for the great questions! And as a reminder, you can download my slides and view the recorded webinar here.
The post Tips from the trenches for over-extended MySQL DBAs appeared first on MySQL Performance Blog.
I'm not sure what's causing this. My guess is it has to do with replication and a daily quorum vote, but that's just a guess. Anyone have any ideas?
I'm using Percona Server 5.6.21-70.1 on Ubuntu 12.04 LTS, and my replication broke (anonymized):
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 127.0.0.1
Master_User: schaduw
Master_Port: 3307
Connect_Retry: 10
Master_Log_File: mysql-bin.000182
Read_Master_Log_Pos: 469336089
Relay_Log_File: relay-bin.001073
Relay_Log_Pos: 408
Relay_Master_Log_File: mysql-bin.000176
Slave_IO_Running: Yes
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 1032
Last_Error: Could not execute Update_rows event on table DATABASE.TABLE; Can't find record in 'TABLE', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-bin.000176, end_log_pos 833933307
Skip_Counter: 0
Exec_Master_Log_Pos: 816823387
Relay_Log_Space: 3425877976
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 1032
Last_SQL_Error: Could not execute Update_rows event on table DATABASE.TABLE; Can't find record in 'TABLE', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-bin.000176, end_log_pos 833933307
Replicate_Ignore_Server_Ids:
Master_Server_Id: 3
Master_UUID: 497de104-1697-11e4-9b81-00163e943835
Master_Info_File: mysql.slave_master_info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State:
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp: 141201 14:35:03
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set: e3bd03cc-e98f-11e3-b5e0-00163e943836:14528432-14839710
Executed_Gtid_Set: 060affc4-2c58-11e4-a959-00163e910310:1-18,
497de104-1697-11e4-9b81-00163e943835:1-3,
7c3ce949-e7ed-11e3-ab38-00163e943835:1-21376:21393-42453,
e3bd03cc-e98f-11e3-b5e0-00163e943836:1-6153708:6153725-14528431
Auto_Position: 1
1 row in set (0.00 sec)
My MySQL setup is as follows:
- two MySQL servers in master/master replication, with a virtual IP managed by Pacemaker, so only one master is the real master
- one offsite MySQL server as a slave of one of the two servers mentioned above. This is the MySQL server with the replication problem.
Who can help me? If you need more info, please let me know.
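For reference, on a GTID-based slave like this one, sql_slave_skip_counter cannot be used to step over the failing event; the usual approach is to inject an empty transaction for the offending GTID. The values below are placeholders, and skipping an event accepts that the slave's data diverges until repaired (e.g. with pt-table-sync):

```sql
STOP SLAVE;
-- Substitute the UUID:sequence of the failing transaction
SET GTID_NEXT = '<master-uuid>:<failing-sequence-number>';
BEGIN; COMMIT;               -- empty transaction consumes that GTID
SET GTID_NEXT = 'AUTOMATIC';
START SLAVE;
```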
By default in MySQL 5.6, each time MySQL is started (regular start or crash recovery) with GTIDs not enabled, it iterates through all the binlog files. This can take a very long time if you have a large number of binary log files. MySQL and Percona Server 5.6.21+ have a fix with the simplified-binlog-gtid-recovery option. Let’s explore this issue.
Understanding the issue
It was first reported by Yoshinori @ Facebook (bug #69097).
Let’s start by looking at a MySQL 5.6 instance where binary logging is enabled but GTIDs are disabled.
If we restart MySQL with strace, we’ll see:
# strace -e open service mysql start
[...]
open("/var/lib/mysql5621/mysql-bin.000004", O_RDONLY) = 13
open("/var/lib/mysql5621/mysql-bin.000003", O_RDONLY) = 13
open("/var/lib/mysql5621/mysql-bin.000002", O_RDONLY) = 13
open("/var/lib/mysql5621/mysql-bin.000001", O_RDONLY) = 13
[...]
MySQL opens all binary log files in reverse order. This can bite if you have lots of binlog files or if the binlog files are large.
This does not happen with MySQL 5.5, so why such a change? The reason is … GTIDs. If you look at the bug report, MySQL tries to initialize a few GTID-related settings even if gtid_mode = OFF
The same kind of issue happens when you have binlog files with GTIDs and binlog files without GTIDs:
# strace -e open service mysql start
[...]
open("/var/lib/mysql5621/mysql-bin.000010", O_RDONLY) = 13
open("/var/lib/mysql5621/mysql-bin.000001", O_RDONLY) = 13
open("/var/lib/mysql5621/mysql-bin.000002", O_RDONLY) = 13
open("/var/lib/mysql5621/mysql-bin.000003", O_RDONLY) = 13
open("/var/lib/mysql5621/mysql-bin.000004", O_RDONLY) = 13
open("/var/lib/mysql5621/mysql-bin.000005", O_RDONLY) = 13
open("/var/lib/mysql5621/mysql-bin.000006", O_RDONLY) = 13 -> First binlog file with GTIDs
[...]
Actually you can see that MySQL will do 2 scans: a reverse scan and a forward scan. Here not all binlogs need to be opened: MySQL will stop scanning files as soon as it finds GTID information. But that can again bite if you have just turned on GTIDs (and therefore most binlog files do not contain any GTID information).
Now what happens if you set gtid_mode = ON from the start, or if all the binlog files without any GTID information have been removed?
# strace -e open service mysql start
[...]
open("/var/lib/mysql5621/mysql-bin.000011", O_RDONLY) = 13
open("/var/lib/mysql5621/mysql-bin.000006", O_RDONLY) = 13
[...]
Only the newest and the oldest binlog files are opened, which is expected.
The fix
The fix is easy: simply add simplified-binlog-gtid-recovery = 1 in your configuration file. When this is set, MySQL will open at most 2 binlog files: the newest one and the oldest one. See the documentation.
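In the configuration file that looks like this (my.cnf fragment):

```ini
[mysqld]
simplified-binlog-gtid-recovery = 1
```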
Let’s see what happens with our server containing some binlog files without GTID information:
# strace -e open service mysql start
[...]
open("/var/lib/mysql5621/mysql-bin.000012", O_RDONLY) = 13
open("/var/lib/mysql5621/mysql-bin.000001", O_RDONLY) = 13
[...]
What is the performance overhead of the binlog scans? Of course YMMV, but I did a quick test on my laptop by creating 80x 100MB binlog files: by default, startup takes 6s, while with simplified-binlog-gtid-recovery=1 it only takes 2s. A 3x improvement with a single setting, not bad!
Conclusion
It is good to see that regressions introduced in MySQL 5.6 are being fixed over time. This one is pretty nasty as most people not using GTIDs will never think that something related to GTIDs can actually create performance issues. Is there any drawback if you enable this setting? I can’t think of any, so I’m hoping it will be enabled by default in 5.7.
The post Faster restarts for MySQL and Percona Server 5.6.21+ appeared first on MySQL Performance Blog.
I can see this lib in /usr/local/mysql/lib, and my PATH is set correctly.
We are evaluating using Percona MySQL Server in our environment. We collect millions of rows of time series data each day from our customers, and we have background jobs that delete these rows at a 4-hour interval. We are not able to partition by 4 hours because each customer's data time is not the same.
Currently we delete 100,000 rows with each delete, but it seems this is not efficient and creates fragmentation. Also, we are not able to run OPTIMIZE TABLE on our production systems.
Do you have any recommendations for such operations in Percona Server? Is Percona more efficient than normal MySQL?
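Not Percona Server-specific, but the usual recommendation for this pattern is to purge in small, index-ordered chunks with pt-archiver from Percona Toolkit, which paces itself instead of issuing one big DELETE (database, table, and retention condition below are illustrative):

```shell
# Delete expired rows 1000 at a time, committing each chunk and
# sleeping between chunks to limit I/O and replication impact
pt-archiver --source h=localhost,D=metrics,t=samples \
    --where "created_at < NOW() - INTERVAL 4 HOUR" \
    --purge --limit 1000 --commit-each --sleep 1
```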
After a new bootstrap my error.log file on all four servers is getting entries that did not happen before.
2014-11-30 13:20:58 28579 [Warning] WSREP: Failed to report last committed 208532168, -4 (Interrupted system call)
And this is happening every other minute. Neither the configuration files nor the data files have been changed; the database and the software that uses it are the same and have not been changed.
What can I do?
I can't start MySQL after installation (both MySQL & Percona).
I received the same error that InnoDB cannot use Linux native AIO.
So I disabled it temporarily.
I've seen some articles & comments saying it'll affect DB engine performance.
Is it true? If so, by how much?
BTW, I only run 1 MySQL instance on the machine.
Sent data is 6GB per hour. I looked at SHOW PROCESSLIST and viewed the slow query log, and there seems to be no problem.
Here is my my.cnf
# I need some remote connections, but I have stopped them in crontab, then monitored again; in the processlist and slow query log, all the queries now come from localhost, but the sent traffic still looks terrible.
What caused such huge MySQL traffic? Is this normal?
I have a problem with the sync in Galera.
I have installed XtraBackup and MySQL + Galera.
I have (for testing) 2 nodes: node1 10.0.0.241, node2 10.0.0.242.
So my config on node1:
# All files in this package is subject to the GPL v2 license
# More information is in the COPYING file in the top directory of this package.
# Copyright (C) 2011 severalnines.com
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
basedir = /usr
datadir = /var/lib/mysql
### INNODB OPTIONS
## You may want to tune the below depending on number of cores and disk sub
innodb_flush_method = O_DIRECT
## avoid statistics update when doing e.g show tables
# CHARACTER SET
#collation_server = utf8_unicode_ci
#init_connect='SET NAMES utf8'
#character_set_server = utf8
# REPLICATION SPECIFIC
#server_id must be unique across all mysql servers participating in replication.
# OTHER THINGS, BUFFERS ETC
key_buffer_size = 24M
tmp_table_size = 64M
max_heap_table_size = 64M
max_allowed_packet = 512M
#sort_buffer_size = 256K
#read_buffer_size = 256K
#read_rnd_buffer_size = 512K
#myisam_sort_buffer_size = 8M
query_cache_type = 0
query_cache_size = 0
## WSREP options
# Full path to wsrep provider library or 'none'
# Provider specific configuration options
# Logical cluster name. Should be the same for all nodes.
# Group communication system handle
# Human-readable node name (non-unique). Hostname by default.
# Address for incoming client connections. Autodetect by default.
# How many threads will process writesets from other nodes
# DBUG options for wsrep provider
# Generate fake primary keys for non-PK tables (required for multi-master
# and parallel applying operation)
# Location of the directory with data files. Needed for non-mysqldump
# state snapshot transfers. Defaults to mysql_real_data_home.
# Maximum number of rows in write set
# Maximum size of write set
# to enable debug level logging, set this to 1
# convert locking sessions into transactions
# how many times to retry deadlocked autocommits
# change auto_increment_increment and auto_increment_offset automatically
# replicate myisam
# retry autoinc insert, which failed for duplicate key error
# enable "strictly synchronous" semantics for read operations
# Command to call when node status or cluster membership changes.
# Will be passed all or some of the following options:
# --status - new status of this node
# --uuid - UUID of the cluster
# --primary - whether the component is primary or not ("yes"/"no")
# --members - comma-separated list of members
# --index - index of this node in the list
## WSREP State Transfer options
# State Snapshot Transfer method
# ClusterControl currently DOES NOT support wsrep_sst_method=mysqldump
# Address on THIS node to receive SST at. DON'T SET IT TO DONOR ADDRESS!!!
# (SST method dependent. Defaults to the first IP of the first interface)
# SST authentication string. This will be used to send SST to joining nodes.
# Depends on SST method. For mysqldump method it is root:<root password>
# Desired SST donor name.
# Protocol version to use
port = 3306
max_allowed_packet = 512M
socket = /var/run/mysqld/mysqld.sock
nice = 0
Config on node 2 (same as config 1, only the following parameters were changed):
On node 1 I have done (after the MySQL server starts):
mysql -u root --execute="SET wsrep_on = OFF; GRANT ALL ON *.* TO user@'%' identified by 'pass';"
So node 1 starts without problems, but if I start node 2 I get the following error(s) on node 1:
141128 11:34:22 [ERROR] WSREP: Failed to read from: wsrep_sst_xtrabackup --role 'donor' --address '10.0.0.242:4444/xtrabackup_sst' --auth 'userass' --socket '/var/run/mysqld/mysqld.sock' --datadir '/var/lib/mysql/' --defaults-file '/etc/mysql/my.cnf' --gtid '06ab79ea-76ea-11e4-bfd5-9f746c398e32:1'
InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
and Percona LLC and/or its affiliates 2009-2013. All Rights Reserved.
This software is published under
the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.
Get the latest version of Percona XtraBackup, documentation, and help resources:
141128 11:34:22 innobackupex: Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_file=/etc/mysql/my.cnf;mysql_read_default_group=xtrabackup;mysql_socket=/var/run/mysqld/mysqld.sock' as 'user' (using password: YES).
innobackupex: Error: Failed to connect to MySQL server: DBI connect(';mysql_read_default_file=/etc/mysql/my.cnf;mysql_read_default_group=xtrabackup;mysql_socket=/var/run/mysqld/mysqld.sock','user',...) failed: Access denied for user 'root'@'localhost' (using password: YES) at /usr//bin/innobackupex line 2945
Why? I only want to connect via the replication user (user).
Can anybody help me?
[root@xxx ~]# pt-query-digest --user root --password='my123' --socket /var/lib/mysql/mysql.sock --processlist --host localhost
[root@xxx ~]# pt-query-digest --host localhost --user root --password='my123' --socket=/var/lib/mysql/mysql.sock
^C# Caught SIGINT.
^C# Exiting on SIGINT.
[root@xxx ~]# pt-query-digest --host localhost --user root --password='my123' --socket=/var/lib/mysql/mysql.sock --processlist
A connection with the Info column displaying INTERNAL DDL LOG RECOVER IN PROGRESS and the State being System lock.
After querying the performance_schema database I found there are 4 locks: 3 wait/synch/rwlock/myisam/MYISAM_SHARE::key_root_lock and one ::mmap_lock. Each has a unique OBJECT_INSTANCE_BEGIN numerical value. WRITE_LOCKED_BY_THREAD_ID is NULL for all of them and READ_LOCKED_BY_COUNT is 0.
'wait/synch/rwlock/myisam/MYISAM_SHARE::key_root_lock', '111175228706368', NULL, '0'
'wait/synch/rwlock/myisam/MYISAM_SHARE::key_root_lock', '111175228706432', NULL, '0'
'wait/synch/rwlock/myisam/MYISAM_SHARE::key_root_lock', '111175228706496', NULL, '0'
'wait/synch/rwlock/myisam/MYISAM_SHARE::mmap_lock', '111175228705272', NULL, '0'
It seems that in the performance_schema rwlock_instances table there are no locks listed now. This is after I restarted the server. Yet on the main thread I am still seeing the system lock with INTERNAL DDL LOG RECOVER IN PROGRESS.
What do I need to do to solve this issue and remove the system lock?
All nodes in the cluster fail to start now.
What are the shutdown sequence and startup sequence points to note?
Please help tell me what to do, thanks!
centos 6.5 (2.6.32-431.el6.x86_64 )
Percona XtraDB Cluster 5.6.20-68.0
Often enough I find MySQL benchmark results where the difference between results is 1% or even less and some conclusions are drawn. Now it is not that 1% is unimportant – especially when you’re developing a product you should care about those 1% improvements or regressions, because they tend to add up. However, with such a small difference it is very important to understand whether it is real or just the natural variance of your baseline test.
Take a look at this graph:
(Click the image for a larger view.)
This is the result of a simple in-memory, read-only “select by primary key” SysBench benchmark on dedicated physical hardware that is otherwise idle: a simple single-socket system. I tried to stabilize it as much as possible, for example by disabling CPU frequency scaling. But still I saw some 3% difference between “good runs” and “bad runs.”
What is the difference between those runs? Simply mysqld restarts.
Does this mean you can’t measure a smaller difference? You can, with the appropriate test plan. In some cases doing several runs makes sense; in others you need to make sure the system warms up before taking measurements, or to have benchmark runs that are long enough. Whatever method you use, it is a good idea to validate your test methodology by conducting several runs of your baseline to ensure the results are stable enough for your purpose. For example, if I do five 30-minute runs and average the results, and the runs all fall within 0.1% of each other, I will consider 0.3% differences meaningful.
Another practical trick that often helps me to separate real differences from some side effects is mixing the tests. Say if I have configurations I’m testing A and B instead of doing AAA BBB I would do ABABAB which helps with the case when there is some regression that can accumulate over time, such as with Flash.
You should also note that in modern systems there is almost always something happening in the background that can change performance – the SSD is doing garbage collection, MySQL (or the kernel) is flushing dirty pages, the CPU can even simply cool off and as a result be able to sustain Turbo Boost operation a little longer. So when you start running your benchmarks, make sure you keep the idle time between runs the same – scripting benchmarks and iterating scenarios helps here.
Happy MySQL benchmarking!
Percona is glad to announce the new release of Percona XtraDB Cluster 5.6 on November 25th, 2014. Binaries are available from the downloads area or from our software repositories. We’re also happy to announce that Ubuntu 14.10 and CentOS 7 users can download, install, and upgrade Percona XtraDB Cluster 5.6 from Percona’s software repositories.
Based on Percona Server 5.6.21-70.1 including all the bug fixes in it, Galera Replicator 3.8, and on Codership wsrep API 25.7, Percona XtraDB Cluster 5.6.21-25.8 is now the current General Availability release. All of Percona‘s software is open-source and free, and all the details of the release can be found in the 5.6.21-25.8 milestone at Launchpad.
- Galera 3.8 introduces auto-eviction for nodes in the cluster experiencing network issues like packet loss. It is off by default and is turned on with the evs.auto_evict option. This feature requires EVS protocol version (evs.version) 1. During the EVS protocol upgrade all membership changes are communicated over EVS protocol version 0 to preserve backwards compatibility; the protocol is upgraded to the highest commonly supported version when forming a new group, so if there exists a single node with an older version in the group, the group protocol version remains 0 and auto-eviction is not functional. (#1274192).
- Percona XtraDB Cluster now supports backup locks in XtraBackup SST (in the default xtrabackup-v2 wsrep_sst_method). Backup locks are used in lieu of FLUSH TABLES WITH READ LOCK on the donor during SST. This should allow for minimal disruption of existing and incoming queries, even under high load. Thus, this should allow for even faster SST and node being in ‘donor/desynced’ state. This also introduces following constraints: Percona XtraDB Cluster 5.6.21 requires Percona XtraBackup 2.2.5 or higher; An older (< 5.6.21) joiner cannot SST from a newer (>= 5.6.21) donor. This is enforced through SST versioning (sent from joiner to donor during SST) and logged to error log explicitly. (#1390552).
- Percona XtraDB Cluster is now shipped with Galera MTR test suite.
- Percona XtraDB Cluster now shows a warning in case additional utilities, like pv which may not affect critical path of SST, are not installed. Bug fixed #1248688.
- mysqldump SST can now use username/password from wsrep_sst_auth under group of [sst] in my.cnf in order not to display the credentials in the error log. Bug fixed #1293798.
- Normal shutdown under load would cause server to remain hanging because replayer failed to finish. Bug fixed #1358701.
- wsrep_causal_reads variable was not honored when declared as global. Bug fixed #1361859.
- garbd would not work when cluster address was specified without the port. Bug fixed #1365193.
- If mysqld gets killed during the SST it will leave an unclean data directory behind. This would cause Percona XtraDB Cluster to fail when the server would be started next time because the data directory would be corrupted. This was fixed by resuming the startup in case wsrep-recover failed to recover due to corrupted data directory. The old behavior is still achievable through --exit-on-recover-fail command line parameter to mysqld_safe or exit-on-recover-fail under [mysqld_safe] in my.cnf. Bug fixed #1378578.
- Percona XtraDB Cluster now reads environment variables for mysqld from following files (if present): /etc/default/mysql in Debian/Ubuntu; /etc/sysconfig/mysql in CentOS 6 or lower; /etc/sysconfig/mysql in CentOS 7 with mysql.service; /etc/sysconfig/XYZ in CentOS 7 with mysql@XYZ.service (/etc/sysconfig/bootstrap is supplied by default). Bug fixed #1381492.
- gvwstate.dat file was removed on joiner when Xtrabackup SST method was used. Bug fixed #1388059.
- Percona XtraDB Cluster now detects older joiners which don’t have the backup lock support. Bug fixed #1390552.
- Inserts to a table with auto-increment primary key could result in duplicate key error if another node joined or dropped from the cluster during the insert processing. Bug fixed #1366997.
- TOI now skips replication if all tables are temporary. Bugs fixed #11 and #13.
- Bootstrapping a node tried to resolve gcomm address list specified in wsrep-cluster-address. Bug fixed #88.
- In case stored procedure containing a non-InnoDB statement (MyISAM) performed autocommit, that commit would be entered two times: at statement end and next time at stored procedure end. Bug fixed #2.
- Two appliers conflicting with local transaction and resulting later in (acceptable) BF-BF lock conflict, would cause cluster to hang when the other BF thread would not grant the lock back after its local transaction got aborted. Bug fixed #7.
Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!
The post Percona XtraDB Cluster 5.6.21-25.8 is now available appeared first on MySQL Performance Blog.
I have a problem with load balancing with haproxy to a 3 nodes percona xtradb cluster.
I made this setup in production and everything works perfectly. But now I am doing exactly the same in a KVM environment with Debian.
Here is my haproxy.cfg
log 127.0.0.1 local0 notice
timeout connect 5000
timeout server 50000
timeout client 50000
option mysql-check user haproxy_check
server mysql1-dev 192.168.1.104:3306 check
server mysql2-dev 192.168.1.105:3306 check
server mysql3-dev 192.168.1.106:3306 check
listen 192.168.1.107 :8080
stats uri /
stats realm Strictly\ Private
stats auth haproxy_root:debian
stats refresh 5s
The issue is that if I comment out the user parameter (option mysql-check #user haproxy_check) it seems to work fine. But otherwise it doesn't work.
The log tell me this:
Nov 25 11:31:49 localhost haproxy: Server educaedu/mysql1-dev is DOWN, reason: Layer7 wrong status, code: 0, info: "Access denied for user 'haproxy_root'@'192.168.1.107' (using password: NO)", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 11:31:50 localhost haproxy: Server educaedu/mysql2-dev is DOWN, reason: Layer7 wrong status, code: 0, info: "Access denied for user 'haproxy_root'@'192.168.1.107' (using password: NO)", check duration: 1ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 11:31:50 localhost haproxy: Server educaedu/mysql3-dev is DOWN, reason: Layer7 wrong status, code: 0, info: "Access denied for user 'haproxy_root'@'192.168.1.107' (using password: NO)", check duration: 1ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 11:31:50 localhost haproxy: proxy educaedu has no server available!
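For context on how the check works: HAProxy's mysql-check logs in as the configured user with no password, so that user must exist in MySQL without a password for the HAProxy host. (Note the log above rejects 'haproxy_root' while the config names haproxy_check, which suggests the running config differs from the one shown.) Creating the passwordless check user would look like this (host illustrative):

```sql
CREATE USER 'haproxy_check'@'192.168.1.107';
```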
and telnet to ip and port tell me this:
root@haproxy-dev:~# telnet 192.168.1.107 3306
Connected to 192.168.1.107.
Escape character is '^]'.
Connection closed by foreign host.
It connects but closes the connection instantly.
When I try to connect to mysql this is the message:
root@haproxy-dev:~# mysql -u haproxy_check -P3306 -h 192.168.1.107 -pdebian
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0
The user haproxy_check has admin privileges on the databases. My HAProxy version is 1.4.25, but I have tried multiple versions.
Thanks in advance.
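For reference, HAProxy's mysql-check health check logs in as the configured user with no password, so that user must exist in MySQL and be allowed to connect from the address the checks come from. A minimal sketch, to run on each cluster node, using the user name and proxy address taken from the config and logs above:

```sql
-- Create the passwordless health-check user, allowed only from the
-- HAProxy machine (192.168.1.107 per the log lines above).
CREATE USER 'haproxy_check'@'192.168.1.107';
FLUSH PRIVILEGES;
```

Note that the denied user in the log is haproxy_root, not haproxy_check, which suggests the running configuration still references a different user than the one shown.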
I log into a lot of different servers running MySQL and one of the first things I do is create a file in my home directory called ‘.my.cnf’ with my credentials to that local mysql instance:

[client]
user=root
password=secret
This means I don’t have to type my password in every time, nor am I tempted to include it on the command line with -p and get the dreaded (but completely accurate):

Warning: Using a password on the command line interface can be insecure.
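Since a plaintext .my.cnf holds the password verbatim, it is worth making sure the file is readable only by its owner, which is also what the utility discussed next does for its own credential file. A minimal shell sketch, using a temporary file as a stand-in for ~/.my.cnf:

```shell
# Stand-in for ~/.my.cnf: a plaintext [client] section with credentials.
cnf=$(mktemp)
printf '[client]\nuser=root\npassword=secret\n' > "$cnf"

# Restrict the file to its owner so other local users cannot read the password.
chmod 600 "$cnf"
stat -c '%a' "$cnf"   # prints 600 on Linux (GNU stat)
```

The same chmod 600 applies to the real ~/.my.cnf; the mysql client will read it regardless of its permissions, so locking it down costs nothing.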
MySQL 5.6 introduces a utility to make this easier and more secure. First, let’s start with a new mysqld instance with a blank root password and make it more secure:

[vagrant@localhost ~]$ mysqladmin -u root password
New password: secret
Confirm new password: secret
[vagrant@localhost ~]$ mysql -u root
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)
Ok, so I’ve added a password, now I want to create my .my.cnf file:

[vagrant@localhost ~]$ mysql_config_editor set --user=root --password
Enter password: secret
[vagrant@localhost ~]$ mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 10
Server version: 5.6.21-70.0 Percona Server (GPL), Release 70.0, Revision 688
Copyright (c) 2009-2014 Percona LLC and/or its affiliates
Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
What did ‘mysql_config_editor set’ actually do? It creates a .mylogin.cnf file (which stands in for a .my.cnf) in my home directory that contains my credentials, just in encrypted form:

[vagrant@localhost ~]$ ls -alh .my*
-rw-------. 1 vagrant vagrant 100 Nov 19 16:20 .mylogin.cnf
-rw-------. 1 vagrant vagrant  29 Nov 19 16:20 .mysql_history
[vagrant@localhost ~]$ cat .mylogin.cnf
(unreadable encrypted binary data)
[vagrant@localhost ~]$ mysql_config_editor print
[client]
user = root
password = *****
The mysql client picks this up right away and will use it by default. This file has good default filesystem permissions, is local to my homedir, and is a whole lot better than specifying it on the command line or typing it in every time.
This utility also supports a feature called ‘login-path’ wherein you can add multiple mysql logins (perhaps to different servers) and refer to them with the --login-path option in the mysql client:

[vagrant@localhost ~]$ mysql_config_editor set --login-path=remote --host=remote --user=remote --password
Enter password: secure
[vagrant@localhost ~]$ mysql_config_editor print --all
[client]
user = root
password = *****
[remote]
user = remote
password = *****
host = remote
[vagrant@localhost ~]$ mysql --login-path=remote
ERROR 2005 (HY000): Unknown MySQL server host 'remote' (0)
The ‘remote’ host doesn’t exist here, but you get the idea. You can create as many login-paths as you want with varied hostnames, credentials and other login parameters and quickly access them with any client supporting login-path.
Now, how secure is this really? It isn’t secure against anyone who gets root on your DB server. I would say the benefits are more about reducing careless password storage and tidier management of local credentials.
Switch from “selling” to “engaging”
If you haven’t gotten the memo that engagement is the key to marketing, you haven’t become a modern marketer yet. Today’s consumers expect more than an infomercial or a blank-eyed billboard ad. They want to feel as though you care about what they think and know who they are as individuals.
John Ellett, a marketing consultant writer for Forbes, explained, “By shifting the lens of your organization from pushing products to engaging customers . . . you can become purpose-driven and stop focusing purely on product positioning.” Listen to customer feedback and strive to meet their needs, rather than trying to convince them they don’t know what they need.
Create positive brand experiences
If you’re still relying on ad campaigns with concrete beginning and ending points, you aren’t embracing the modern preferences of your customers. People don’t want to look at your brand or product; they want to experience it. Whether they have a negative or positive experience, you can be sure their friends are going to hear about it, and every good marketer knows how vital a good customer review is and how damaging a negative one can be. Making sure customer service representatives are well trained and websites are well maintained is key to ensuring your customers’ experiences are good.
Utilize personalization tools to the fullest
Personalization isn’t just inserting some code into an email that puts the customer’s name at the top of the message. Being personal means knowing your customer’s likes, dislikes, and shopping habits. Ellett explained, “Brands with which we have relationships have information about us that, when used properly, helps those brands deliver superior experiences.”
This should happen no matter what device the customer is using, a feat few companies have achieved so far. Yet modern marketers know how important it is to strive toward this goal, so that their customers consistently receive the right information about the right products at the right time.
Marketing News brought to you by ClickToCallMarket.com
Based on MySQL 5.6.21, including all the bug fixes in it, Percona Server 5.6.21-70.1 is the current GA release in the Percona Server 5.6 series. Percona Server is open-source and free. Complete details of this release can be found in the 5.6.21-70.1 milestone on Launchpad.
- A slave replicating in RBR mode would crash if the table definitions on the master and slave differed by an allowed conversion and the binary log contained a table map event followed by two row log events. This bug is an upstream regression introduced by the fix for bug #72610. Bug fixed #1380010.
- An incorrect source code function attribute would cause MySQL to crash on an InnoDB row write, if compiled with a recent GCC with certain compilation options. Bug fixed #1390695 (upstream #74842).
- MTR tests for Response Time Distribution were not packaged in binary packages. Bug fixed #1387170.
- The RPM packages provided for CentOS 5 were built using a debugging information format which is not supported in the gdb version included with CentOS 5.10. Bug fixed #1388972.
- A session on a server in mixed-mode binlogging would switch to row-based binlogging whenever a temporary table was created and then queried. This switch would last until the session ended or until all temporary tables in the session were dropped. This was unnecessarily restrictive and has been fixed so that only the statements involving temporary tables are logged in row-based format, while the rest of the statements continue to use statement-based logging. Bug fixed #1313901 (upstream #72475).