
Feed aggregator

log rotation oddly broken

Latest Forum Posts - March 2, 2015 - 12:39pm
I have log rotation setup in /etc/logrotate.d/mysql:

/var/lib/mysql/mysql*.log {
    # create 600 mysql mysql
    notifempty
    daily
    dateext
    rotate 7
    missingok
    compress
    postrotate
        # just if mysqld is really running
        if test -x /usr/bin/mysqladmin && \
           /usr/bin/mysqladmin ping &>/dev/null
        then
            /usr/bin/mysqladmin flush-logs
        fi
    endscript
}

However, this does not appear to be working correctly. On one system, log rotation is occurring as desired (new log file daily) but I have a month of log files (not 7). On another system, log rotation is occurring but writes are happening to a file labeled mysql-slow.log-20150204. Not quite sure where the breakdown is at this point; the blog has some good guidance, but maybe it's out of date?

Any help (or logrotate.d examples) would be much appreciated.
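One way to narrow this down (a quick sketch; it assumes root can authenticate to mysqladmin the same way the postrotate script does) is to run the postrotate logic by hand and then force a verbose rotation:

# what postrotate runs
/usr/bin/mysqladmin ping
/usr/bin/mysqladmin flush-logs
# force a rotation and watch each step
logrotate -f -v /etc/logrotate.d/mysql

If flush-logs fails (for example, because mysqladmin cannot authenticate), mysqld keeps writing to the old, already-renamed file - with dateext enabled, that would explain writes landing in mysql-slow.log-20150204.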

Emulating MySQL roles with the Percona PAM plugin and proxy users

Latest MySQL Performance Blog posts - March 2, 2015 - 8:50am

From time to time people wonder how to implement roles in MySQL. This can be useful for companies having to deal with many user accounts or for companies with tight security requirements (PCI or HIPAA, for instance). Roles do not exist in regular MySQL, but here is an example of how to emulate them using Percona Server, the PAM plugin and proxy users.

The goal

Say we have 2 databases: db1 and db2, and we want to be able to create 3 roles:

  • db1_dev: can read and write on db1 only.
  • db2_dev: can read and write on db2 only.
  • stats: can read on db1 and db2.

For each role, we will create one user: joe (db1_dev), mike (db2_dev) and tom (stats).

Setting up the Percona PAM plugin

The Percona PAM plugin is distributed with Percona Server 5.5 and 5.6. I will be using Percona Server 5.6 in this post and I will authenticate users with /etc/shadow. As explained here, the setup is easy:

  • Make sure /etc/shadow can be read by the mysql user:
    # chgrp mysql /etc/shadow
    # chmod g+r /etc/shadow
  • Install the plugin:
    mysql> INSTALL PLUGIN auth_pam SONAME 'auth_pam.so';
  • Create a /etc/pam.d/mysqld file containing:
    auth required pam_warn.so
    auth required pam_unix.so audit
    account required pam_unix.so audit

Tinkering with the permissions of /etc/shadow may be a security concern. Authenticating users against an LDAP server may be a better option. The configuration of the PAM plugin is similar (replace pam_unix.so with pam_ldap.so and forget the part about /etc/shadow).
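As a sketch, a /etc/pam.d/mysqld for LDAP authentication could look like this (assuming pam_ldap is installed and configured on the host):

auth required pam_ldap.so
account required pam_ldap.so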

Testing authentication with the PAM plugin

Now let’s create a user:

# adduser test_pam
# passwd test_pam
mysql> GRANT ALL PRIVILEGES ON db1.* TO test_pam@localhost IDENTIFIED WITH auth_pam;

And let’s check that the authentication is working as we expect:

mysql -utest_pam -p
Enter password:
mysql> show grants;
+-----------------------------------------------------------+
| Grants for test_pam@localhost                             |
+-----------------------------------------------------------+
| GRANT USAGE ON *.* TO 'test_pam'@'localhost'              |
| GRANT ALL PRIVILEGES ON `db1`.* TO 'test_pam'@'localhost' |
+-----------------------------------------------------------+

That works! We can delete the user and go to the next step.

Creating proxy user

The key to emulating a role is to create a MySQL account for which nobody knows the password (so nobody will be able to use it directly). Instead, we will use the PROXY privilege to map an anonymous account, which will match any incoming user, to the right MySQL user.

So the first step is to create an anonymous user:

mysql> CREATE USER ''@'' IDENTIFIED WITH auth_pam AS 'mysqld, pam_db1=db1_dev, pam_db2=db2_dev, pam_stats=stats';

The goal of this user is simply to map Unix users in the pam_db1 group to the db1_dev MySQL user, Unix users in the pam_db2 group to the db2_dev MySQL user and Unix users in the pam_stats group to the stats MySQL user.

Creating the proxied accounts

Now we can create the MySQL users corresponding to each of the roles we want to create:

mysql> GRANT SELECT, INSERT ON db1.* TO 'db1_dev'@localhost IDENTIFIED BY 'XXXXX';
mysql> GRANT PROXY ON 'db1_dev'@'localhost' TO ''@'';
mysql> GRANT SELECT, INSERT ON db2.* TO 'db2_dev'@localhost IDENTIFIED BY 'YYYYY';
mysql> GRANT PROXY ON 'db2_dev'@'localhost' TO ''@'';
mysql> GRANT SELECT ON db1.* TO 'stats'@localhost IDENTIFIED BY 'ZZZZZ';
mysql> GRANT SELECT ON db2.* TO 'stats'@localhost;
mysql> GRANT PROXY ON 'stats'@'localhost' TO ''@'';

Creating the Unix user accounts

The last step is to create the Unix users joe, mike and tom and assign them the correct group:

# useradd joe
# passwd joe
# groupadd pam_db1
# usermod -g pam_db1 joe
# useradd mike
# passwd mike
# groupadd pam_db2
# usermod -g pam_db2 mike
# useradd tom
# passwd tom
# groupadd pam_stats
# usermod -g pam_stats tom

Again you may prefer using an LDAP server to avoid creating the users at the OS level.

Testing it out!

Let’s try to connect as mike:

# mysql -umike -p
Enter password:
mysql> show grants;
+----------------------------------------------------------------------------------------------------------------+
| Grants for db2_dev@localhost                                                                                    |
+----------------------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'db2_dev'@'localhost' IDENTIFIED BY PASSWORD '*C1DDB6E980040762275B29A316FD993B4A19C108'  |
| GRANT SELECT, INSERT ON `db2`.* TO 'db2_dev'@'localhost'                                                        |
+----------------------------------------------------------------------------------------------------------------+

Not bad!
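As a quick sanity check, the proxy mapping can be confirmed explicitly (the proxy_user session variable has been available since MySQL 5.5):

mysql> SELECT USER(), CURRENT_USER(), @@proxy_user;

USER() should report mike@localhost, CURRENT_USER() should report db2_dev@localhost, and @@proxy_user should name the anonymous proxy account (''@'').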

Alternatives

The Percona PAM plugin is not the only option to use roles:

  • MariaDB 10 supports roles from version 10.0.5
  • Oracle distributes a PAM plugin for MySQL 5.5 and MySQL 5.6 as part of the MySQL Enterprise subscription
  • Securich is a set of stored procedures that has many features regarding user management
  • Google has been offering support for roles through its google-mysql-tools for a long time.

Conclusion

Even if they are not officially supported, roles can be emulated with an authentication plugin and a proxy user. Let’s hope that roles will be added in MySQL 5.7!

The post Emulating MySQL roles with the Percona PAM plugin and proxy users appeared first on MySQL Performance Blog.

Weird LSN is in the Future Error after Updating

Latest Forum Posts - March 2, 2015 - 1:47am
Hi!,
I have been running a 5-node multi-instance Galera cluster for 2 years now.
SST is encrypted xtrabackup. The donor does nothing but backups and SST.

I did a minor Update to 5.5.39-36.0-55-log / wsrep_25.11.r4023
The biggest change was the update from xtrabackup 2.1 to xtrabackup 2.2.

Normally I let the node re-sync the content after an update, so I delete the grastate.dat.
After starting, and after a successful SST, the log started to fill up with:

---
InnoDB: The log sequence number in ibdata files does not match
InnoDB: the log sequence number in the ib_logfiles!
150210 12:57:21 InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
150210 12:57:21 InnoDB: Error: page 2 log sequence number 44208161331
---
Endless scrolling until I hard-kill it.

After stopping the node, cleaning the data dir and restarting it, the SST completes fine and the error is gone.
This happens only once per instance and node, so debugging is really hard!

The same happened on the second node. First I deleted the ib_* files. The node recreated them and crashed again.
Deleting the tables' .ibd files did nothing either.
Deleting the xtrabackup_* files on the joiner fixes the error!

My current impression is that the new xtrabackup is not deleting all essential files. The old version wiped all content.
Does somebody know something about this?

Greetz

alter view privileges

Latest Forum Posts - February 28, 2015 - 3:09pm
I would like to allow a user to create/alter/drop views in a database, but not have any ability to create/alter/drop tables. I have given the user the CREATE VIEW, SHOW VIEW, and SELECT privileges. Creating a view works fine, but when the user tries to alter the view, he gets an error that the DROP command is denied. From my reading of the manual, if I give the user the DROP privilege, he'll be able to drop any table in the database, which is not wanted.

Is there a way that I can allow this user only to drop views and not tables? Or be able to edit views in some other method?

One idea I had was to restrict the DROP privilege to a list of views based on a wildcard, assuming all views he creates start with that prefix, but it seems that requires an exact list of views upfront. I tried "GRANT DROP ON test.'vw_%' TO user@localhost", but that gives a syntax error. I could list out the individual views, but that means I would have to run a new GRANT query each time he creates a view, which is not really ideal.
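For what it's worth, a table-level GRANT requires a literal object name (the _ and % wildcards are only permitted in the database part of the grant), so a per-view grant would look like this sketch (vw_sales is a hypothetical view name):

mysql> GRANT DROP ON test.vw_sales TO user@localhost;

which indeed implies running a new GRANT for each view.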

Thanks for your help.

Ryan

Disable binary logs while purging the records

Latest Forum Posts - February 27, 2015 - 1:18pm
Hi Team,

We have the parameter "--binlog-ignore-db" to disable logging for a particular database. However, if the parameter is not present in the my.cnf file, we have to add it to the configuration file and restart MySQL for the change to take effect. Below are the variables that are currently set on the DB:

+-----------------------------------------+-------+
| Variable_name | Value |
+-----------------------------------------+-------+
| binlog_cache_size | 32768 |
| binlog_direct_non_transactional_updates | OFF |
| binlog_format | ROW |
| binlog_stmt_cache_size | 32768 |
+-----------------------------------------+-------+

Is there a way I can add the parameter dynamically? We will not be able to restart the MySQL instance, as there are other DBs running on it which cannot be down for a second. Please suggest...!
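For completeness, one session-scoped sketch that is sometimes used for purges (it assumes the SUPER privilege; mydb.mytable and the WHERE clause are placeholders, and note that rows deleted this way will never reach any replication slaves):

mysql> SET SESSION sql_log_bin = 0;
mysql> DELETE FROM mydb.mytable WHERE created < '2014-01-01';
mysql> SET SESSION sql_log_bin = 1;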

Best Regards,
Krishna

Cluster crash and not restart

Latest Forum Posts - February 27, 2015 - 7:55am
Hello,
I have XtraDB Cluster 5.6 on 3 nodes with Ubuntu Server 12.04.2 LTS.
All works fine.
Today one node crashed, and when I tried to restart the service, the other 2 nodes went down.
Now, I have restarted the first node with "bootstrap-pxc" and this node is OK.
When I try to add the second node, the join fails and the first node crashes :|

On the first node, I find this in the error log:


2015-02-27 16:02:02 7f87a406c700 InnoDB: Error: page 7595 log sequence number 138747407668
InnoDB: is in the future! Current system log sequence number 119269942957.
InnoDB: Your database may be corrupt or you may have copied the InnoDB
InnoDB: tablespace but not the InnoDB log files. See
InnoDB: http://dev.mysql.com/doc/refman/5.6/...-recovery.html
InnoDB: for more information.
2015-02-27 16:02:04 15322 [Warning] Client failed to provide its character set. 'latin1' will be used as client character set.
2015-02-27 16:02:04 15322 [Warning] Client failed to provide its character set. 'latin1' will be used as client character set.
15:02:04 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona XtraDB Cluster better by reporting any
bugs at https://bugs.launchpad.net/percona-xtradb-cluster

key_buffer_size=25165824
read_buffer_size=131072
max_used_connections=33
max_threads=202
thread_count=35
connection_count=33
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 105196 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x7735e50
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
WSREP_SST: [ERROR] innobackupex finished with error: 9. Check /var/lib/mysql//innobackup.backup.log (20150227 16:02:04.738)
WSREP_SST: [ERROR] Cleanup after exit with status:22 (20150227 16:02:04.743)
WSREP_SST: [INFO] Cleaning up temporary directories (20150227 16:02:04.751)
Segmentation fault
150227 16:02:04 mysqld_safe Number of processes running now: 0
150227 16:02:04 mysqld_safe WSREP: not restarting wsrep node automatically
150227 16:02:04 mysqld_safe mysqld from pid file /var/lib/mysql/cls-mysql1-db1.pid ended



On the second node, I find this in the error log:

WSREP_SST: [INFO] Evaluating socat -u TCP-LISTEN:4444,reuseaddr stdio | xbstream -x; RC=( ${PIPESTATUS[@]} ) (20150227 16:01:03.021)
2015-02-27 16:01:03 26922 [Note] WSREP: (711054fc, 'tcp://0.0.0.0:4567') turning message relay requesting off
grep: /var/lib/mysql//xtrabackup_checkpoints: No such file or directory
WSREP_SST: [INFO] Preparing the backup at /var/lib/mysql/ (20150227 16:02:04.742)
WSREP_SST: [INFO] Evaluating innobackupex --no-version-check --apply-log $rebuildcmd ${DATA} &>${DATA}/innobackup.prepare.log (20150227 16:02:04.744)
WSREP_SST: [ERROR] Cleanup after exit with status:1 (20150227 16:02:04.965)
2015-02-27 16:02:04 26922 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup-v2 --role 'joiner' --address '10.10.20.80' --auth 'sstuser:Arght64dGyTR32P' --datadir '/var/lib/mysql/' --defaults-file '/etc/mysql/my.cnf' --parent '26922' '' : 1 (Operation not permitted)
2015-02-27 16:02:04 26922 [ERROR] WSREP: Failed to read uuid:seqno from joiner script.
2015-02-27 16:02:04 26922 [ERROR] WSREP: SST failed: 1 (Operation not permitted)
2015-02-27 16:02:04 26922 [ERROR] Aborting

2015-02-27 16:02:05 26922 [Note] WSREP: (711054fc, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://10.10.20.79:4567
2015-02-27 16:02:06 26922 [Note] WSREP: (711054fc, 'tcp://0.0.0.0:4567') reconnecting to 805bc8ba (tcp://10.10.20.79:4567), attempt 0
2015-02-27 16:02:06 26922 [Note] WSREP: Closing send monitor...
2015-02-27 16:02:06 26922 [Note] WSREP: Closed send monitor.
2015-02-27 16:02:06 26922 [Note] WSREP: gcomm: terminating thread
2015-02-27 16:02:06 26922 [Note] WSREP: gcomm: joining thread
2015-02-27 16:02:06 26922 [Note] WSREP: gcomm: closing backend
2015-02-27 16:02:09 26922 [Note] WSREP: evs::proto(711054fc, LEAVING, view_id(REG,711054fc,5)) suspecting node: 805bc8ba
2015-02-27 16:02:09 26922 [Note] WSREP: evs::proto(711054fc, LEAVING, view_id(REG,711054fc,5)) suspected node without join message, declaring inactive
2015-02-27 16:02:09 26922 [Note] WSREP: view(view_id(NON_PRIM,711054fc,5) memb {
711054fc,0
} joined {
} left {
} partitioned {
805bc8ba,0
})
2015-02-27 16:02:09 26922 [Note] WSREP: view((empty))
2015-02-27 16:02:09 26922 [Note] WSREP: New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 1
2015-02-27 16:02:09 26922 [Note] WSREP: gcomm: closed
2015-02-27 16:02:09 26922 [Note] WSREP: Flow-control interval: [16, 16]
2015-02-27 16:02:09 26922 [Note] WSREP: Received NON-PRIMARY.
2015-02-27 16:02:09 26922 [Note] WSREP: Shifting JOINER -> OPEN (TO: 2268)
2015-02-27 16:02:09 26922 [Note] WSREP: Received self-leave message.
2015-02-27 16:02:09 26922 [Note] WSREP: Flow-control interval: [0, 0]
2015-02-27 16:02:09 26922 [Note] WSREP: Received SELF-LEAVE. Closing connection.
2015-02-27 16:02:09 26922 [Note] WSREP: Shifting OPEN -> CLOSED (TO: 2268)
2015-02-27 16:02:09 26922 [Note] WSREP: RECV thread exiting 0: Success
2015-02-27 16:02:09 26922 [Note] WSREP: recv_thread() joined.
2015-02-27 16:02:09 26922 [Note] WSREP: Closing replication queue.
2015-02-27 16:02:09 26922 [Note] WSREP: Closing slave action queue.
2015-02-27 16:02:09 26922 [Note] WSREP: Service disconnected.
2015-02-27 16:02:09 26922 [Note] WSREP: rollbacker thread exiting
2015-02-27 16:02:10 26922 [Note] WSREP: Some threads may fail to exit.
2015-02-27 16:02:10 26922 [Note] Binlog end
2015-02-27 16:02:10 26922 [Note] /usr/sbin/mysqld: Shutdown complete

Error in my_thread_global_end(): 1 threads didn't exit
150227 16:02:16 mysqld_safe mysqld from pid file /var/lib/mysql/cls-mysql1-db2.pid ended

Confusion about Xtrabackup Versions

Latest Forum Posts - February 27, 2015 - 7:21am
Hi!,

I am running a 5-node Percona XtraDB Cluster 5.5.39-36.0-55-log.

The xtrabackup version installed is
percona-xtrabackup-2.2.8

I found in my logs that xtrabackup is starting the wrong version of InnoDB:
xtrabackup version 2.2.8 based on MySQL server 5.6.22

I remember that it is important to use the "right" xtrabackup binary for the server. The percona-xtrabackup-2.1.9 release contained 3 binaries, and the log said:
xtrabackup_55 version 2.1.9 for Percona Server 5.5.35

I am a little bit concerned that the 5.6 InnoDB is possibly writing datafiles that are unreadable for 5.5 InnoDB.

Can somebody explain this to me?

Greetz
Julian

Explanation of the entries in grastate.dat file

Latest Forum Posts - February 27, 2015 - 12:49am
Hello.

I'm trying to understand the entries in the grastate.dat file, particularly the seqno value. If I understand correctly, the 'seqno' value increases as the node gets written to. If all nodes are shut down, I can use 'seqno' to figure out which node was written to last - it is the one with the highest 'seqno' value.

Now, am I correct in assuming that in a working (not degraded) cluster all the seqno values should be the same? That is, if I stop the writes completely and then shut down all the units, the seqno values across the cluster should be the same? (Which, in turn, means that I can safely bootstrap from any pxc node?)

What also confuses me are the values of '0' and '-1' for the seqno. I imagine that -1 corresponds to an error of some kind. But when I bootstrap off the node which has '-1' as its seqno, and then shut it down, the seqno won't change - how am I to determine, then, whether the node has failed?
What does a seqno value of '0' represent? I was under the impression that '0' means there have been no writes to the cluster since the bootstrap, but that seems to be in contradiction with the '-1' described above.
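For reference, a grastate.dat from a cleanly shut-down node typically looks like this (the uuid below is a placeholder):

# GALERA saved state
version: 2.1
uuid:    6b1286f5-0000-0000-0000-000000000000
seqno:   4190686
cert_index:

A seqno of -1 generally means the position is unknown - the node is either still running or was not shut down cleanly - so its position has to be recovered rather than read from the file.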

Lastly, when is the grastate.dat file created, when does it get written to, and is it ever removed by pxc?
Mario

3 handy tools to remove problematic MySQL processes

Latest MySQL Performance Blog posts - February 27, 2015 - 12:00am

DBAs often encounter situations where they need to kill queries to ensure there are no long-running queries on a MySQL server that would impact performance. Long-running queries can be the result of many factors. Fortunately, Percona Server contains some handy tools to remove problematic MySQL processes. I will highlight all of the tools via some examples in this post.

pt-kill:
There have been some good posts on this blog about the pt-kill tool, like this one by Arunjith Aravindan titled “How a set of queries can be killed in MySQL using Percona Toolkit’s pt-kill.” Let’s dive into pt-kill a bit further with a few more examples. What does pt-kill do? It kills MySQL connections. Say you wanted to run pt-kill from a cronjob and then get an email for every killed process/query. Here is a typical example of that.

$ pt-kill --interval 1 --run-time 1 --busy-time 5 --log /path/to/kill_long_running_thread.log --match-info "^(select|SELECT|Select)" --kill --print --user=xxxxxx --password=xxxxxxxxxx

Assume this is running from a cronjob. When pt-kill executes, it will kill queries that have been running longer than 5 seconds. By default, pt-kill runs forever; the --run-time option tells pt-kill how long to run before exiting. If the --interval and --busy-time parameters are used together, then the explicit --interval value is used; otherwise the default interval is 30 seconds. Note: this will only kill read queries, as per the --match-info parameter.

The above command will log all killed queries in the file referenced by the --log option. If you need to be notified via email for every killed query, the script below will do it. Of course, you need to have the system configured to send e-mail.

#!/bin/bash
tail -n 0 -F /path/to/kill_long_running_thread.log | while read LOG
do
    echo "$LOG" | mail -s "pt-kill alert" sample@test.com
done

You can execute this shell script continuously in the background, within screen or with the nohup utility, to monitor the log file. It will send an email whenever a newly killed query arrives in the log file referenced in the pt-kill command. Unfortunately, there is no notify-by-email option in pt-kill at the moment, so this is sort of a workaround.
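For example, assuming the snippet above is saved as ptkill_mail.sh (a hypothetical name):

$ chmod +x ptkill_mail.sh
$ nohup ./ptkill_mail.sh &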

In order to log all killed queries into a database table, you will need to use the --log-dsn option as per the example below.

$ pt-kill --interval 1 --busy-time 1 --create-log-table --log-dsn=h=localhost,D=percona,t=kill_log --daemonize --match-info "^(select|SELECT|Select)" --kill

All killed queries will be logged into the percona.kill_log table. The --daemonize option will run this command in the background forever and will kill all SELECT queries running longer than 1 second (--busy-time 1). The --interval option instructs pt-kill to scan processes every 1 second (--interval 1).

mysql> select * from kill_log;
+---------+-----------+---------------------+-------------------------+------------+-------+------+-----------+------+---------+------+------------+-----------------+---------+
| kill_id | server_id | timestamp           | reason                  | kill_error | Id    | User | Host      | db   | Command | Time | State      | Info            | Time_ms |
+---------+-----------+---------------------+-------------------------+------------+-------+------+-----------+------+---------+------+------------+-----------------+---------+
| 17      | 1         | 2015-01-10 08:38:33 | Query matches Info spec |            | 35146 | root | localhost | NULL | Query   | 0    | User sleep | SELECT SLEEP(5) | NULL    |
| 20      | 1         | 2015-01-10 08:38:34 | Query matches Info spec |            | 35223 | root | localhost | NULL | Query   | 0    | User sleep | SELECT SLEEP(5) | NULL    |
+---------+-----------+---------------------+-------------------------+------------+-------+------+-----------+------+---------+------+------------+-----------------+---------+

By logging killed queries into a database table, you can easily get trends and statistics on killed queries via SQL.

By default the tool kills the oldest of the queries that have been running for more than a given --busy-time. If you need to kill all the threads that have been busy for more than a specified --busy-time, then this will do it:

$ pt-kill --victims=all --busy-time=60

Statement Timeout in Percona Server:
The max-statement-time feature is ported from the Twitter patches. This feature can be used to limit the query execution time by specifying the timeout value in the max_statement_time variable. When the specified number of milliseconds is reached the server aborts the statement and returns the error below to the client.

ERROR 1877 (70101): Query execution was interrupted, max_statement_time exceeded

Let me demonstrate this through another example:

mysql [localhost] {msandbox} (world) > SET max_statement_time=1;
Query OK, 0 rows affected (0.00 sec)

mysql [localhost] {msandbox} (world) > show variables like 'max_statement_time';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| max_statement_time | 1     |
+--------------------+-------+
1 row in set (0.00 sec)

mysql [localhost] {msandbox} (world) > SELECT * FROM City WHERE District = 'abcd';
ERROR 1885 (70101): Query execution was interrupted, max_statement_time exceeded
mysql [localhost] {msandbox} (world) > UPDATE City SET District='abcd' WHERE ID = 2001;
ERROR 1885 (70101): Query execution was interrupted, max_statement_time exceeded
mysql [localhost] {msandbox} (world) > ALTER TABLE City ADD INDEX district_idx (district);
ERROR 1885 (70101): Query execution was interrupted, max_statement_time exceeded

As you can see from this example, the statement timeout feature works for all statements, including SELECT/DML/DDL queries.

mysql [localhost] {msandbox} (world) > show status like 'Max_statement%';
+-------------------------------+-------+
| Variable_name                 | Value |
+-------------------------------+-------+
| Max_statement_time_exceeded   | 3     |
| Max_statement_time_set        | 19    |
| Max_statement_time_set_failed | 0     |
+-------------------------------+-------+
3 rows in set (0.00 sec)

The above status variables provide stats for the statement timeout feature. Max_statement_time_exceeded shows the total number of statements that exceeded the defined timeout. Max_statement_time_set shows the number of statements for which an execution time limit was set. You can find more details in this documentation. The statement timeout feature was introduced in Percona Server 5.6. You can check whether your specific version of Percona Server supports this feature via the have_statement_timeout variable.

mysql [localhost] {msandbox} (world) > show global variables like 'have_statement_timeout';
+------------------------+-------+
| Variable_name          | Value |
+------------------------+-------+
| have_statement_timeout | YES   |
+------------------------+-------+
1 row in set (0.00 sec)

Bugs you should be aware of:

https://bugs.launchpad.net/percona-server/+bug/1388533 -> This affects how the feature interacts with stored procedures: if you use stored procedures, max_statement_time might not behave as you expect.
https://bugs.launchpad.net/percona-server/+bug/1307432 -> This is a documentation bug. Percona Server timeouts might not be safe for some statements, like DDL, and should not be used with such queries, but the documentation does not reflect this. You should be very careful if you set a global statement timeout, as it affects data-changing queries as well. For best results, set the max_statement_time variable in a session before running queries that you want killed if they execute too long, instead of using a global variable.
https://bugs.launchpad.net/percona-server/+bug/1376934 -> This affects the statement timeout feature at the query level; you must set max_statement_time in a session or globally instead. However, this bug is fixed in the latest version, i.e. Percona Server 5.6.22-72.0.

InnoDB Kill Idle Transactions:
This feature was introduced in Percona Server 5.5. It limits the age of idle XtraDB transactions and will kill idle transactions older than the threshold specified by innodb_kill_idle_transaction. This feature is useful when autocommit is disabled on the server side, you are relying on the application to commit transactions, and you want to avoid long-running transactions that are left uncommitted. Application logic errors sometimes leave transactions uncommitted. Let me demonstrate it quickly through one example:

mysql [localhost] {msandbox} (world) > show variables like 'autocommit';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| autocommit    | OFF   |
+---------------+-------+
mysql [localhost] {msandbox} (world) > show global variables like 'innodb_kill_idle_transaction';
+------------------------------+-------+
| Variable_name                | Value |
+------------------------------+-------+
| innodb_kill_idle_transaction | 10    |
+------------------------------+-------+
mysql [localhost] {msandbox} (world) > START TRANSACTION; SELECT NOW(); INSERT INTO City_backup (Name,CountryCode,District,Population) VALUES ('Karachi','PK','Sindh','1000000');
Query OK, 0 rows affected (0.00 sec)

+---------------------+
| NOW()               |
+---------------------+
| 2015-01-31 07:11:39 |
+---------------------+
1 row in set (0.00 sec)

Query OK, 1 row affected (0.01 sec)

mysql [localhost] {msandbox} (world) > SHOW ENGINE INNODB STATUS\G
*************************** 1. row ***************************
------------
TRANSACTIONS
------------
---TRANSACTION 173076, ACTIVE 10 sec
1 lock struct(s), heap size 360, 0 row lock(s), undo log entries 1
MySQL thread id 15, OS thread handle 0x7f862e9bf700, query id 176 localhost msandbox init
SHOW ENGINE InnoDB STATUS
TABLE LOCK table `world`.`City_backup` trx id 173076 lock mode IX
----------------------------
END OF INNODB MONITOR OUTPUT
============================
mysql [localhost] {msandbox} (world) > SELECT NOW(); SELECT * FROM City_backup;
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id:    16
Current database: world

+---------------------+
| NOW()               |
+---------------------+
| 2015-01-31 07:12:06 |
+---------------------+
1 row in set (0.01 sec)

Empty set (0.00 sec)

Conclusion:
In this post, I shared some tools that can help you get rid of long-running transactions to help ensure that you don't run into performance issues. This is one of the many good reasons to use Percona Server, which has some extra features on top of the vanilla MySQL server.

The post 3 handy tools to remove problematic MySQL processes appeared first on MySQL Performance Blog.

whole cluster crashed due to table not synced

Latest Forum Posts - February 26, 2015 - 2:23am
Hi All,
We had a 7-node cluster spanning three datacenters, with 2/2/3 nodes in the respective datacenters.
We hit a problem where the whole cluster stopped functioning because a table was not synced, failing with an 11-minute delay.

The table was first created on node 1 at 2015-02-25 23:59:50, but it was not synced to the other nodes immediately, and the other nodes failed at 0:11:00 when there was some operation on the table. This made node 1 a standalone node.

We want to understand why the replication did not replicate this table. Is there any monitoring metric that could alert us in such a case?
Is there any known bug for such a case?

logs from other 6 nodes
-------------------------------------

150226 0:11:00 [Warning] WSREP: BF applier failed to open_and_lock_tables: 1146, fatal: 0 wsrep = (exec_mode: 1 conflict_state: 0 seqno: 4190686)
150226 0:11:00 [ERROR] Slave SQL: Error executing row event: 'Table 'keystone.credential' doesn't exist', Error_code: 1146
150226 0:11:00 [Warning] WSREP: RBR event 4963 Write_rows apply warning: 1146, 4190686
150226 0:11:00 [Warning] WSREP: Failed to apply app buffer: seqno: 4190686, status: 1
at galera/src/replicator_smm.cpp:apply_wscoll():57
Retrying 2th time
..........
..........
150226 0:11:04 [Warning] WSREP: BF applier failed to open_and_lock_tables: 1146, fatal: 0 wsrep = (exec_mode: 1 conflict_state: 0 seqno: 4190686)
150226 0:11:04 [ERROR] Slave SQL: Error executing row event: 'Table 'keystone.credential' doesn't exist', Error_code: 1146
150226 0:11:04 [Warning] WSREP: RBR event 4963 Write_rows apply warning: 1146, 4190686
150226 0:11:04 [ERROR] WSREP: Failed to apply trx: source: 8b8e06eb-594e-11e4-99d3-822b0d796705 version: 2 local: 0 state: APPLYING flags: 1 conn_id: 15113818 trx_id: 4453338 seqnos (l: 4198354, g: 4190686, s: 4190685, d: 4190660, ts: 1424934617949390287)
150226 0:11:04 [ERROR] WSREP: Failed to apply trx 4190686 10 times
150226 0:11:04 [ERROR] WSREP: Node consistency compromized, aborting...
150226 0:11:04 [Note] WSREP: Closing send monitor...
150226 0:11:04 [Note] WSREP: Closed send monitor.
150226 0:11:04 [Note] WSREP: gcomm: terminating thread
150226 0:11:04 [Note] WSREP: gcomm: joining thread
150226 0:11:04 [Note] WSREP: gcomm: closing backend
150226 0:11:04 [Note] WSREP: view(view_id(NON_PRIM,3630e80f-594f-11e4-9fd8-b2a0ad1b9db3,33) memb {
adb89b17-5a25-11e4-833d-c63263a4e8ac,
} joined {
} left {
} partitioned {
3630e80f-594f-11e4-9fd8-b2a0ad1b9db3,
8b8e06eb-594e-11e4-99d3-822b0d796705,
9fd32195-594f-11e4-937f-2b0004fbf107,
cd7de3db-594f-11e4-bc4d-c70bf0d0977b,
f19c53ac-9c38-11e4-a5cb-df74780de7f1,
f814b175-594e-11e4-b14e-becb87dc9620,
})
150226 0:11:04 [Note] WSREP: view((empty))
150226 0:11:04 [Note] WSREP: New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 1
150226 0:11:04 [Note] WSREP: gcomm: closed
150226 0:11:04 [Note] WSREP: Flow-control interval: [16, 16]
150226 0:11:04 [Note] WSREP: Received NON-PRIMARY.
150226 0:11:04 [Note] WSREP: Shifting SYNCED -> OPEN (TO: 4190686)
150226 0:11:04 [Note] WSREP: Received self-leave message.
150226 0:11:04 [Note] WSREP: Flow-control interval: [0, 0]
150226 0:11:04 [Note] WSREP: Received SELF-LEAVE. Closing connection.
150226 0:11:04 [Note] WSREP: Shifting OPEN -> CLOSED (TO: 4190686)
150226 0:11:04 [Note] WSREP: RECV thread exiting 0: Success
150226 0:11:04 [Note] WSREP: recv_thread() joined.
150226 0:11:04 [Note] WSREP: Closing replication queue.
150226 0:11:04 [Note] WSREP: Closing slave action queue.
150226 0:11:04 [Note] WSREP: /mysql/home/products/mysql/bin/mysqld: Terminated.

Log from node 1, which shows it can't connect to any other node after 0:11:04:
-------------------------------
150226 0:11:04 [Note] WSREP: (8b8e06eb-594e-11e4-99d3-822b0d796705, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://10.126.52.43:4567
150226 0:11:05 [Note] WSREP: (8b8e06eb-594e-11e4-99d3-822b0d796705, 'tcp://0.0.0.0:4567') reconnecting to adb89b17-5a25-11e4-833d-c63263a4e8ac (tcp://10.126.52.43:4567), attempt 0
150226 0:11:05 [Note] WSREP: declaring 3630e80f-594f-11e4-9fd8-b2a0ad1b9db3 stable
150226 0:11:05 [Note] WSREP: declaring 9fd32195-594f-11e4-937f-2b0004fbf107 stable
150226 0:11:05 [Note] WSREP: declaring cd7de3db-594f-11e4-bc4d-c70bf0d0977b stable
150226 0:11:05 [Note] WSREP: declaring f19c53ac-9c38-11e4-a5cb-df74780de7f1 stable
150226 0:11:05 [Note] WSREP: declaring f814b175-594e-11e4-b14e-becb87dc9620 stable
150226 0:11:05 [Note] WSREP: Node 3630e80f-594f-11e4-9fd8-b2a0ad1b9db3 state prim
150226 0:11:05 [Note] WSREP: declaring cd7de3db-594f-11e4-bc4d-c70bf0d0977b stable
150226 0:11:05 [Note] WSREP: declaring f19c53ac-9c38-11e4-a5cb-df74780de7f1 stable
150226 0:11:05 [Note] WSREP: Node 8b8e06eb-594e-11e4-99d3-822b0d796705 state prim
150226 0:11:05 [Warning] WSREP: 8b8e06eb-594e-11e4-99d3-822b0d796705 sending install message failed: Resource temporarily unavailable

Thanks

Worrying about the ‘InnoDB: detected cycle in LRU for buffer pool (…)’ message?

Latest MySQL Performance Blog posts - February 26, 2015 - 12:00am

If you use Percona Server 5.5 and you have configured it to use multiple buffer pool instances, then sooner or later you'll see the following lines in the server's error log, and chances are you'll be worried about them:

InnoDB: detected cycle in LRU for buffer pool 5, skipping to next buffer pool.
InnoDB: detected cycle in LRU for buffer pool 3, skipping to next buffer pool.
InnoDB: detected cycle in LRU for buffer pool 7, skipping to next buffer pool.

Worry not, as this is mostly harmless. It's becoming a February tradition for me (Fernando) to face a question about this subject (OK, maybe it's a coincidence), and this time I've teamed up with my dear colleague and software engineer George Lorch to provide you the most complete blog post ever published on this topic (with a belated thank you! to Ernie Souhrada, with whom I also discussed this same matter one year ago).

InnoDB internals: what is “LRU” ?

There’s a short and to-the-point section of the MySQL manual that explains in a clear way what is the InnoDB buffer pool, how it operates and why it plays such an important role in MySQL performance. If you’re interested in understanding InnoDB internals then that page is a good start. In this section we’ll refrain ourselves to explain what the “LRU” that shows in our subject message is so we’ll only slightly dig into InnoDB internals, enough to make for some context. Here’s a quick introduction to the buffer pool, quoting from the above manual page:

InnoDB maintains a storage area called the buffer pool for caching data and indexes in memory. (…) Ideally, you set the size of the buffer pool to as large a value as practical, leaving enough memory for other processes on the server to run without excessive paging. The larger the buffer pool, the more InnoDB acts like an in-memory database, reading data from disk once and then accessing the data from memory during subsequent reads.

In practice, however, we can rarely fit our whole dataset inside the InnoDB buffer pool so there must be a process to manage this limited pool of memory pages:

InnoDB manages the pool as a list, using a variation of the least recently used (LRU) algorithm. When room is needed to add a new block to the pool, InnoDB evicts the least recently used block and adds the new block to the middle of the list.

There you go: InnoDB employs a variation of the Least Recently Used algorithm, called the midpoint insertion strategy, to manage the pages within the buffer pool. We should mention that it does make exceptions - such as during a full table scan, when it knows the loaded pages might end up being read only a single time.
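If you are curious to see the LRU list yourself, MySQL 5.6 exposes it through information_schema; a quick sketch (note this query is expensive on a large buffer pool, so avoid it on busy production servers):

mysql> SELECT POOL_ID, LRU_POSITION, PAGE_TYPE, TABLE_NAME
       FROM information_schema.INNODB_BUFFER_PAGE_LRU LIMIT 5;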

Dumping and reloading the buffer pool

Before we can get to the main point of this article, let's first examine why you would want to dump the buffer pool to disk, which is at the core of the matter here: that's when those warning messages we're discussing may appear.

When you start a MySQL server the buffer pool is empty by default. Performance is at its worst at this point, because no data can be found in memory, so in practice each request for data results in an I/O operation to retrieve the data from disk and bring it into memory. With time the buffer pool gets filled and performance improves - more and more data can now be found in memory. With yet more time we reach a peak performance state: the buffer pool is not only full but filled with the most popular data. The time between the start of the server and reaching this optimum state is called server warm up. How long it takes depends mostly on two things: the size of the buffer pool and the level of activity of the server - the less busy it is, the fewer requests it will get, and thus the more time is needed until the popular data is fully loaded.

Now, there could be a shortcut: what if we save the buffer pool to a disk file before we stop MySQL? We could later use it to reload the buffer pool to an optimum state when we restart the server, thus decreasing the warm up period dramatically.

Percona was a pioneer in this field relative to other MySQL distributions and implemented this functionality in Percona Server 5.5. Later on, MySQL 5.6 was released with similar functionality, which also allows preloading the buffer pool for a faster warm up. Percona Server 5.6 incorporates this upstream feature, effectively replacing its own implementation.
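As a minimal configuration sketch for the 5.6 upstream implementation (innodb_buffer_pool_load_at_startup is read-only at runtime, so it goes in my.cnf):

[mysqld]
innodb_buffer_pool_dump_at_shutdown = ON
innodb_buffer_pool_load_at_startup  = ON

A dump can also be triggered on demand and its progress checked:

mysql> SET GLOBAL innodb_buffer_pool_dump_now = ON;
mysql> SHOW STATUS LIKE 'Innodb_buffer_pool_dump_status';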

“Detected cycle in LRU”

In the section above we introduced the functionality that allows dumping the contents of the buffer pool to disk so we can later reload it at server restart. What we didn't mention was that this is most useful outside of maintenance time and planned shutdowns - that is, when the server crashes. When a crash happens, it's all the more important to bring the server back to a warmed-up state soon, so it can resume serving data fast enough. And given that we cannot predict a crash, the only way we can arrange to have the latest buffer pool state on disk is by flushing it often.

While the buffer pool is divided into pages for efficiency of high-volume read operations, it is implemented as a linked list of pages for efficiency of cache management. During the process of dumping the buffer pool to disk, a mutex is acquired on the LRU list. However, this mutex is not held for the duration of the process - it is periodically released to prevent stalling the system. The problem is: in between the release of the mutex and the moment it is acquired again, the list may get reshuffled. Since the dump keeps a pointer to its position across the mutex boundary, the dump can get caught in an artificial cycle.

Let's consider a linked list:

A > B > C > D > E

where each letter corresponds to a memory page. Now let's say the initial dump was partially taken and covered the first three pages, “A > B > C”, placing a pointer on “C” before releasing the mutex. Once the mutex is reacquired, the list has been reshuffled: “A > C > B > D > E”. The junction of the partial list we have already copied and the reshuffled list now includes a loop, which would result in a cycle: “(A > B > C) > B > D > E”. When the dumping process detects a cycle in the LRU list, it stops copying from the actual buffer pool, throws in a warning message, and moves on to the next buffer pool instance - otherwise it would keep dumping in an infinite loop.

How harmless are those messages ?

It is fairly harmless, except for the fact that you will only have a partial LRU list dump for that buffer pool instance - that is, until the next dump occurs. If the server crashes or is shut down before the next dump takes place, the existing one won't be fully up to date for the server warm up - it will still be used and will still provide a partially filled, somewhat “warm” buffer pool, just not as optimal as it could have been had the last dump been taken fully.

The post Worrying about the ‘InnoDB: detected cycle in LRU for buffer pool (…)’ message? appeared first on MySQL Performance Blog.

Software is not installed

Latest Forum Posts - February 25, 2015 - 5:26am
I am trying to install the software on my VPS.
This is the server that I have:

PRETTY_NAME="Debian GNU/Linux 7 (wheezy)"
NAME="Debian GNU/Linux"
VERSION_ID="7"
VERSION="7 (wheezy)"
ID=debian

I added these lines:

deb http://repo.percona.com/apt wheezy main
deb-src http://repo.percona.com/apt wheezy main

And I run this command: apt-get update
The result:
Hit http://security.debian.org wheezy/updates Release.gpg
Hit http://security.debian.org wheezy/updates Release
Hit http://security.debian.org wheezy/updates/main Sources
Hit http://security.debian.org wheezy/updates/main amd64 Packages
Hit http://security.debian.org wheezy/updates/main Translation-en
Hit http://repo.percona.com wheezy Release.gpg
Hit http://ftp.us.debian.org wheezy Release.gpg
Hit http://repo.percona.com wheezy Release
Hit http://repo.percona.com wheezy/main Sources
Hit http://ftp.us.debian.org wheezy-updates Release.gpg
Hit http://repo.percona.com wheezy/main amd64 Packages
Hit http://ftp.us.debian.org wheezy Release
Hit http://ftp.us.debian.org wheezy-updates Release
Hit http://ftp.us.debian.org wheezy/main Sources
Ign http://repo.percona.com wheezy/main Translation-en
Hit http://ftp.us.debian.org wheezy/main amd64 Packages
Hit http://ftp.us.debian.org wheezy/main Translation-en
Hit http://ftp.us.debian.org wheezy-updates/main Sources
Hit http://ftp.us.debian.org wheezy-updates/main amd64 Packages/DiffIndex
Hit http://ftp.us.debian.org wheezy-updates/main Translation-en/DiffIndex

Then I run this command: apt-get upgrade

If I look at the folder /usr/bin, I don't see any file named innobackupex.
If I run this command: innobackupex --user=USER --password=PW /backups
I get this error:
innobackupex: command not found
Trying this command: /usr/bin/innobackupex --user=USER --password=PW /backups
doesn't work either.

It seems like the software is not installed.
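Note that apt-get update only refreshes the package lists and apt-get upgrade only upgrades packages that are already installed; the tool itself still has to be installed explicitly, e.g.:

# apt-get install percona-xtrabackup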

Nagios State Unkown

Latest Forum Posts - February 25, 2015 - 5:03am
Good day

we have a small problem with the percona monitoring plugins for nagios.

We use for a MySQL Galera Cluster several checks with pmp-check-mysql-status.

Primary Cluster
pmp-check-mysql-status -x wsrep_cluster_status -C '==' -T str -c non-Primary -L local

Node State
pmp-check-mysql-status -x wsrep_local_state_comment -C '\!=' -T str -w Synced -L local

Cluster Size
pmp-check-mysql-status -x wsrep_cluster_size -C "<=" -w 2 -c 1 -L local

Flow Control
pmp-check-mysql-status -x wsrep_flow_control_paused -w 0.1 -c 0.9 -L local

The problem is: if the database is offline or not reachable, or the login is wrong, the check changes its state to UNKNOWN. But for this case we want a CRITICAL if the check cannot be processed.

I could modify the script itself, but that is a very dirty solution, and I haven't found another solution so far.

What would be the best practice way?
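One non-invasive approach is a thin wrapper that leaves the plugin untouched and simply remaps the UNKNOWN exit code to CRITICAL - a sketch, assuming the standard Nagios exit-code convention (0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN):

#!/bin/bash
# pass all arguments through to the real check
OUTPUT=$(pmp-check-mysql-status "$@")
RC=$?
echo "$OUTPUT"
# report UNKNOWN (3) as CRITICAL (2)
[ "$RC" -eq 3 ] && exit 2
exit "$RC"

The Nagios command definition then points at the wrapper instead of pmp-check-mysql-status directly.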

Using MySQL Event Scheduler and how to prevent contention

Latest MySQL Performance Blog posts - February 25, 2015 - 3:00am

MySQL introduced the Event Scheduler in version 5.1.6. The Event Scheduler is a MySQL-level “cron job” which runs events inside MySQL. Up until now this was not a very popular feature; however, it has gotten more popular since the adoption of Amazon RDS - as well as similar MySQL database-as-a-service offerings where there is no OS-level access.

What is important to understand about the Event Scheduler is that it does not have any protection against multiple execution (neither does Linux cron). Let's imagine you have created an event that executes every 10 seconds, but the logic inside the event (i.e. queries or a stored procedure call) can take longer than 10 seconds (for example under high load), so executions can pile up. In the worst case, when an event contains a set of “insert” + “update”/“delete” statements inside a transaction, it can cause a deadlock.

Adding “get_lock” conditions inside the event will help to prevent such a situation:

If a repeating event does not terminate within its scheduling interval, the result may be multiple instances of the event executing simultaneously. If this is undesirable, you should institute a mechanism to prevent simultaneous instances. For example, you could use the GET_LOCK() function, or row or table locking. Read more at event_scheduler documentation.

The GET_LOCK() function can be used for communication between threads:

The following example illustrates the use of get_lock:

DELIMITER //
CREATE EVENT testlock_event ON SCHEDULE EVERY 2 SECOND DO
BEGIN
    DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
    BEGIN
        DO RELEASE_LOCK('testlock_event');
    END;
    IF GET_LOCK('testlock_event', 0) THEN
        -- add some business logic here, for example:
        -- insert into test.testlock_event values(NULL, NOW());
        DO SLEEP(0); -- placeholder: the IF body must contain at least one statement
    END IF;
    DO RELEASE_LOCK('testlock_event');
END;
//
DELIMITER ;

DECLARE CONTINUE HANDLER FOR SQLEXCEPTION is needed here to release the lock even if the event fails or is killed.

The above GET_LOCK / RELEASE_LOCK combination will help to prevent contention inside the MySQL Event Scheduler.
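To watch the lock from another session while the event is running, the companion functions can be used (a quick sketch):

mysql> SELECT IS_FREE_LOCK('testlock_event'), IS_USED_LOCK('testlock_event');

IS_USED_LOCK() returns the connection id of the current holder, which makes it easy to see which event execution owns the lock at any moment.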

The post Using MySQL Event Scheduler and how to prevent contention appeared first on MySQL Performance Blog.

XtraDB Initial SST fails.

Latest Forum Posts - February 25, 2015 - 1:44am
Hello,

I've been following the Ubuntu installation guide, but I have hit a few problems when trying to bring the second node into the cluster. The first node is bootstrapped and seems to be fine, but when I start the second node, the initial state transfer fails.

Packages (same on both nodes)
ii percona-xtradb-cluster-server-5.6 5.6.21-25.8-938.trusty
ii percona-xtrabackup 2.2.9-5067-1.trusty

my.cnf

# Path to Galera library
wsrep_provider=/usr/lib/libgalera_smm.so
# Cluster connection URL
wsrep_cluster_address=gcomm://10.17.40.150,10.17.40.135,10.17.40.132
#wsrep_cluster_address=gcomm://
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This changes how |InnoDB| autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Authentication for SST method
wsrep_sst_auth="sstuser:passwordhere"
# Node #1 address
wsrep_node_address=10.17.40.150
# SST method
wsrep_sst_method=xtrabackup-v2
# Cluster name
wsrep_cluster_name=brp
#innodb_buffer_pool_size=145774M
innodb_flush_log_at_trx_commit=2
innodb_file_per_table=1
innodb_data_file_path = ibdata1:100M:autoextend
## You may want to tune the below depending on number of cores and disk sub
innodb_read_io_threads=4
innodb_write_io_threads=4
innodb_io_capacity=200
innodb_doublewrite=1
innodb_log_file_size=1024M
innodb_log_buffer_size=96M
innodb_buffer_pool_instances=8
innodb_log_files_in_group=2
innodb_thread_concurrency=64
#innodb_file_format=barracuda
innodb_flush_method = O_DIRECT
innodb_autoinc_lock_mode=2
## avoid statistics update when doing e.g show tables
innodb_stats_on_metadata=0
innodb_data_home_dir=/var/lib/mysql
innodb_log_group_home_dir=/var/lib/mysql
xtrabackup would have been started with the following arguments:
--wsrep_provider=/usr/lib/libgalera_smm.so --wsrep_cluster_address=gcomm://10.17.40.150,10.17.40.135,10.17.40.132 --binlog_format=ROW --default_storage_engine=InnoDB --innodb_autoinc_lock_mode=2 --wsrep_sst_auth=sstuser:passwordhere --wsrep_node_address=10.17.40.150 --wsrep_sst_method=xtrabackup-v2 --wsrep_cluster_name=brp --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306 --basedir=/usr --datadir=/var/lib/mysql --tmpdir=/tmp --lc-messages-dir=/usr/share/mysql --skip-external-locking --bind-address=0.0.0.0 --key_buffer=16M --max_allowed_packet=16M --thread_stack=192K --thread_cache_size=8 --myisam-recover=BACKUP --innodb_flush_log_at_trx_commit=2 --innodb_file_per_table=1 --innodb_data_file_path=ibdata1:100M:autoextend --innodb_read_io_threads=4 --innodb_write_io_threads=4 --innodb_io_capacity=200 --innodb_doublewrite=1 --innodb_log_file_size=1024M --innodb_log_buffer_size=96M --innodb_buffer_pool_instances=8 --innodb_log_files_in_group=2 --innodb_thread_concurrency=64 --innodb_flush_method=O_DIRECT --innodb_autoinc_lock_mode=2 --innodb_stats_on_metadata=0 --innodb_data_home_dir=/var/lib/mysql --innodb_log_group_home_dir=/var/lib/mysql --query_cache_limit=1M --query_cache_size=16M --log_error=/var/log/mysql/error.log --expire_logs_days=10 --max_binlog_size=100M

Errors from mysql-error.log (node 2):

2015-02-25 09:32:48 26570 [Note] WSREP: State transfer required:
    Group state: 8e461731-bc35-11e4-8a30-32847f70120f:3
    Local state: 00000000-0000-0000-0000-000000000000:-1
2015-02-25 09:32:48 26570 [Note] WSREP: New cluster view: global state: 8e461731-bc35-11e4-8a30-32847f70120f:3, view# 16: Primary, number of nodes: 2, my index: 0, protocol version 3
2015-02-25 09:32:48 26570 [Warning] WSREP: Gap in state sequence. Need state transfer.
2015-02-25 09:32:48 26570 [Note] WSREP: Running: 'wsrep_sst_xtrabackup-v2 --role 'joiner' --address '10.17.40.135' --auth 'sstuser:passwordhere' --datadir '/var/lib/mysql/' --defaults-file '/etc/mysql/my.cnf' --parent '26570' '' '
WSREP_SST: [INFO] Streaming with xbstream (20150225 09:32:48.996)
WSREP_SST: [INFO] Using socat as streamer (20150225 09:32:49.000)
WSREP_SST: [INFO] Stale sst_in_progress file: /var/lib/mysql//sst_in_progress (20150225 09:32:49.395)
WSREP_SST: [INFO] Evaluating timeout -k 110 100 socat -u TCP-LISTEN:4444,reuseaddr stdio | xbstream -x; RC=( ${PIPESTATUS[@]} ) (20150225 09:32:49.465)
2015-02-25 09:32:49 26570 [Note] WSREP: Prepared SST request: xtrabackup-v2|10.17.40.135:4444/xtrabackup_sst//1
2015-02-25 09:32:49 26570 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2015-02-25 09:32:49 26570 [Note] WSREP: REPL Protocols: 6 (3, 2)
2015-02-25 09:32:49 26570 [Note] WSREP: Service thread queue flushed.
2015-02-25 09:32:49 26570 [Note] WSREP: Assign initial position for certification: 3, protocol version: 3
2015-02-25 09:32:49 26570 [Note] WSREP: Service thread queue flushed.
2015-02-25 09:32:49 26570 [Warning] WSREP: Failed to prepare for incremental state transfer: Local state UUID (00000000-0000-0000-0000-000000000000) does not match group state UUID (8e461731-bc35-11e4-8a30-32847f70120f): 1 (Operation not permitted)
    at galera/src/replicator_str.cpp:prepare_for_IST():456. IST will be unavailable.
2015-02-25 09:32:49 26570 [Note] WSREP: Member 0.0 (brppxdbl02) requested state transfer from '*any*'. Selected 1.0 (brppxdbcl01)(SYNCED) as donor.
2015-02-25 09:32:49 26570 [Note] WSREP: Shifting PRIMARY -> JOINER (TO: 3)
2015-02-25 09:32:49 26570 [Note] WSREP: Requesting state transfer: success, donor: 1
WSREP_SST: [INFO] Proceeding with SST (20150225 09:32:50.439)
WSREP_SST: [INFO] Cleaning the existing datadir (20150225 09:32:50.443)
removed ‘/var/lib/mysql/ib_logfile1’
removed ‘/var/lib/mysql/ibdata1’
removed ‘/var/lib/mysql/ib_logfile0’
removed ‘/var/lib/mysql/backup-my.cnf’
removed ‘/var/lib/mysql/auto.cnf’
WSREP_SST: [INFO] Evaluating socat -u TCP-LISTEN:4444,reuseaddr stdio | xbstream -x; RC=( ${PIPESTATUS[@]} ) (20150225 09:32:50.549)
2015-02-25 09:32:51 26570 [Note] WSREP: (426f677b, 'tcp://0.0.0.0:4567') turning message relay requesting off
grep: /var/lib/mysql//xtrabackup_checkpoints: No such file or directory
WSREP_SST: [INFO] Preparing the backup at /var/lib/mysql/ (20150225 09:33:00.698)
2015-02-25 09:33:00 26570 [Warning] WSREP: 1.0 (brppxdbcl01): State transfer to 0.0 (brppxdbl02) failed: -22 (Invalid argument)
2015-02-25 09:33:00 26570 [ERROR] WSREP: gcs/src/gcs_group.cpp:int gcs_group_handle_join_msg(gcs_group_t*, const gcs_recv_msg_t*)():722: Will never receive state. Need to abort.
2015-02-25 09:33:00 26570 [Note] WSREP: gcomm: terminating thread
2015-02-25 09:33:00 26570 [Note] WSREP: gcomm: joining thread
2015-02-25 09:33:00 26570 [Note] WSREP: gcomm: closing backend
WSREP_SST: [INFO] Evaluating innobackupex --no-version-check --apply-log $rebuildcmd ${DATA} &>${DATA}/innobackup.prepare.log (20150225 09:33:00.707)
WSREP_SST: [ERROR] Cleanup after exit with status:1 (20150225 09:33:01.308)
2015-02-25 09:33:01 26570 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup-v2 --role 'joiner' --address '10.17.40.135' --auth 'sstuser:passwordhere' --datadir '/var/lib/mysql/' --defaults-file '/etc/mysql/my.cnf' --parent '26570' '' : 1 (Operation not permitted)
2015-02-25 09:33:01 26570 [ERROR] WSREP: Failed to read uuid:seqno from joiner script.
2015-02-25 09:33:01 26570 [ERROR] WSREP: SST failed: 1 (Operation not permitted)
2015-02-25 09:33:01 26570 [ERROR] Aborting

2015-02-25 09:33:01 26570 [Note] WSREP: view(view_id(NON_PRIM,426f677b,16) memb {
    426f677b,0
} joined {
} left {
} partitioned {
    5ef44e98,0
})
2015-02-25 09:33:01 26570 [Note] WSREP: view((empty))
2015-02-25 09:33:01 26570 [Note] WSREP: gcomm: closed
2015-02-25 09:33:01 26570 [Note] WSREP: /usr/sbin/mysqld: Terminated.
Aborted
150225 09:33:01 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended

I've seen several other posts similar to this, so here are the permissions on my.cnf:
-rw-r--r-- 1 mysql root 4879 Feb 24 17:15 /etc/mysql/my.cnf
...and I do have datadir set as well, although it wasn't included in the snippet above...
datadir = /var/lib/mysql


Could I please have some suggestions here? As a newcomer it is virtually impossible to troubleshoot, and this is a fairly basic setup following a guide on your website.

Thanks,

J / robothands

Is it safe to drop page cache on ubuntu ?

Latest Forum Posts - February 24, 2015 - 5:01pm
Hello,

I am using MySQL 5.6 on Ubuntu 14.04.
The MySQL server was killed by the OS yesterday (OOM kill).

I investigated the memory usage.

$ free -m
total used free shared buffers cached
Mem: 122952 121468 1483 0 285 44494
-/+ buffers/cache: 76689 46263
Swap: 0 0 0
$

It is a dedicated database server, but there is so much "cached" memory. So I have a plan to drop the page cache:
  • To free pagecache:

    $ echo 1 > /proc/sys/vm/drop_caches
  • To free dentries and inodes:

    $ echo 2 > /proc/sys/vm/drop_caches
  • To free pagecache, dentries and inodes:

    $ echo 3 > /proc/sys/vm/drop_caches
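A commonly recommended refinement (a sketch, best done in a quiet window) is to run sync first, so dirty pages are written out before the cache is dropped:

# sync; echo 1 > /proc/sys/vm/drop_caches

Dropping the page cache is non-destructive - only clean cached pages are freed - but expect a temporary dip in read performance while the cache refills.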
Can anyone let me know whether it is safe to drop the page cache?
Can it have any side effects?

I have an Ubuntu server on which just one MySQL server is running.

Thanks.

Circular and Multi-Master Replication tools

Latest Forum Posts - February 24, 2015 - 5:24am
I am trying to find a solution/tools for multi-master replication (4 MySQL nodes) such that when one node goes offline it does not affect the replication, and when it comes back online all replication works properly again.
Please let me know if there are any tools for the above situation.

Thanks


InnoDB: Warning: you are running out of new single-table tablespace id's.

Latest Forum Posts - February 24, 2015 - 4:34am
I am looking for general information regarding this message:
InnoDB: Warning: you are running out of new single-table tablespace id's.
InnoDB: Current counter is 2152000000 and it must not exceed 4294967280!
InnoDB: To reset the counter to zero you have to dump all your tables and
InnoDB: recreate the whole InnoDB installation.
Which counter? How do you query it? Does an entire restore fix this problem?
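On MySQL/Percona Server 5.6, where the InnoDB data dictionary is exposed in information_schema, a rough way to see the highest tablespace id currently in use is the following sketch (on 5.5 and earlier this table may not be available):

mysql> SELECT MAX(SPACE) FROM information_schema.INNODB_SYS_TABLES;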

Thanks.

Is MySQL’s innodb_file_per_table slowing you down?

Latest MySQL Performance Blog posts - February 24, 2015 - 3:00am

MySQL’s innodb_file_per_table is a wonderful thing – most of the time. Having every table use its own .ibd file allows you to easily reclaim space when dropping or truncating tables. But in some use cases, it may cause significant performance issues.

Many of you in the audience are responsible for running automated tests on your codebase before deploying to production. If you are, then one of your goals is having tests run as quickly as possible so you can run them as frequently as possible. Oftentimes you can change specific settings in your test environment that don't affect the outcome of the test but do improve throughput. This post discusses how innodb_file_per_table is one of those settings.

I recently spoke with a customer whose use case involved creating hundreds of tables on up to 16 schemas concurrently as part of a Jenkins testing environment. This was not in production, so performance was far more important than durability. They’d run their tests, and then drop the schemas. This process took close to 20 minutes. They asked “How can we make this faster?”

Due to the number of tables involved innodb_file_per_table seemed a likely culprit.

It’s been noted here on the MySQL Performance Blog that innodb_file_per_table can cause table creation and drops to slow down. But what exactly is the performance hit? We wanted to find out.

The innodb_file_per_table Test:

On a test server running CentOS release 6.5, xfs filesystem, and 5.6.22-71.0-log Percona Server, I ran the following homemade benchmark bash script:

[root@host ~]# time $(for db in {1..16}; do mysql -e "create database bench$db"; $(for tb in {1..500}; do $(mysql bench$db -e "create table tab${tb} (i int) engine=innodb"); done) & done)

If you open the mysql client in another screen or terminal, you should see something like this:

...
+-------+------+-----------+---------+---------+------+----------------+------------------------------------------+-----------+---------------+
| Id    | User | Host      | db      | Command | Time | State          | Info                                     | Rows_sent | Rows_examined |
+-------+------+-----------+---------+---------+------+----------------+------------------------------------------+-----------+---------------+
| 80013 | root | localhost | NULL    | Query   |    0 | init           | show processlist                         |         0 |             0 |
| 89462 | root | localhost | bench5  | Query   |    0 | creating table | create table tab95 (i int) engine=innodb |         0 |             0 |
| 89466 | root | localhost | bench8  | Query   |    0 | creating table | create table tab81 (i int) engine=innodb |         0 |             0 |
| 89467 | root | localhost | bench1  | Query   |    0 | creating table | create table tab91 (i int) engine=innodb |         0 |             0 |
| 89468 | root | localhost | bench13 | Query   |    0 | creating table | create table tab90 (i int) engine=innodb |         0 |             0 |
| 89469 | root | localhost | bench15 | Query   |    0 | creating table | create table tab90 (i int) engine=innodb |         0 |             0 |
| 89472 | root | localhost | bench9  | Query   |    0 | creating table | create table tab86 (i int) engine=innodb |         0 |             0 |
| 89473 | root | localhost | bench10 | Query   |    0 | creating table | create table tab94 (i int) engine=innodb |         0 |             0 |
| 89474 | root | localhost | bench11 | Query   |    0 | creating table | create table tab80 (i int) engine=innodb |         0 |             0 |
| 89475 | root | localhost | bench3  | Query   |    0 | creating table | create table tab80 (i int) engine=innodb |         0 |             0 |
| 89476 | root | localhost | bench2  | Query   |    0 | creating table | create table tab82 (i int) engine=innodb |         0 |             0 |
| 89478 | root | localhost | bench4  | Query   |    0 | creating table | create table tab91 (i int) engine=innodb |         0 |             0 |
| 89479 | root | localhost | bench16 | Query   |    0 | creating table | create table tab88 (i int) engine=innodb |         0 |             0 |
| 89481 | root | localhost | bench12 | Query   |    0 | creating table | create table tab90 (i int) engine=innodb |         0 |             0 |
| 89483 | root | localhost | bench6  | Query   |    0 | creating table | create table tab96 (i int) engine=innodb |         0 |             0 |
| 89484 | root | localhost | bench14 | Query   |    0 | creating table | create table tab95 (i int) engine=innodb |         0 |             0 |
+-------+------+-----------+---------+---------+------+----------------+------------------------------------------+-----------+---------------+
...

After creating the tables, I dropped all schemas concurrently:

[root@host ~]# time $(for db in {1..16}; do mysql -e "drop database bench${db}" & done)

So what was the difference with innodb_file_per_table ON vs OFF?

  • With innodb_file_per_table=ON
    • Schema and table creation = 1m54.852s
    • Schema drops = 1m21.682s
  • With innodb_file_per_table=OFF
    • Schema and table creation = 0m59.968s
    • Schema drops = 0m54.870s

So creation time decreased by 48%, drop time decreased by 33%.

I think it's worth noting that this benchmark creates and drops empty tables. Dropping InnoDB tables created with innodb_file_per_table=ON can take much longer if they contain large amounts of data.

Please also be aware that there are always trade-offs when modifying your InnoDB settings. That is outside the scope of this post, so please research and test before making changes. The MySQL documentation discusses that here. In 5.6.6 and up, innodb_file_per_table is ON by default. MySQL 5.6 will also create temp tables as InnoDB, as noted here.

So there you have it. If your primary goal is to improve create and drop table time, turning OFF innodb_file_per_table will save significant amounts of time.
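If you want to reproduce this in your own test environment, the setting is dynamic in MySQL/Percona Server 5.6, so no restart is needed; it only affects tables created after the change:

mysql> SET GLOBAL innodb_file_per_table = OFF;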

The post Is MySQL’s innodb_file_per_table slowing you down? appeared first on MySQL Performance Blog.
