Feed aggregator

Not Connecting?

Latest Forum Posts - 4 hours 18 min ago
Hi,

I tried playback this morning; as per the displayed message it's not clear whether the connection was successful or not. It seems it's not connecting to the database, yet on the other hand I can log in to mysql from the same machine using the same address and credentials I'm providing to playback.

Any idea how to debug this, or how to check whether there is a connection issue? Below is the playback output.


percona-playback --mysql-host=<AWS ENDPOINT> --mysql-user=<some user> --mysql-password=<some password> --mysql-schema=<db name> --query-log-file=test.log
Database Plugin: libmysqlclient
Running...
Connection 0 Rows Sent: 1 != expected 0 for query: select count(*) from test;
Error Report finished

Detailed Report
----------------
SELECTs : 0 queries (0 faster, 0 slower)
INSERTs : 0 queries (0 faster, 0 slower)
UPDATEs : 0 queries (0 faster, 0 slower)
DELETEs : 0 queries (0 faster, 0 slower)
REPLACEs : 0 queries (0 faster, 0 slower)
DROPs : 0 queries (0 faster, 0 slower)


Report
------
Executed 1 queries
Spent 00:00:00.225578 executing queries versus an expected 00:00:00 time.
0 queries were quicker than expected, 0 were slower
A total of 0 queries had errors.
Expected 0 rows, got 1 (a difference of 1)
Number of queries where number of rows differed: 1.

Average of 1.00 queries per connection (1 connections).


Why does TokuDB crash when using the statement "replace into"?

Latest Forum Posts - 4 hours 48 min ago
OS: Linux TENCENT64 2.6.32.43-tlinux-1.0.9-default #1 SMP Fri Jun 27 14:18:34 CST 2014 x86_64 x86_64 x86_64 GNU/Linux (based on CentOS 6)
DB: 5.6.25-73.1-log Percona Server (GPL), Release 73.1, Revision 07b797f
TokuDB: tokudb-7.5.8

I use sysbench to test, but I modified insert.lua's "insert into" statement to "replace into". Below is my sysbench command:
#!/bin/bash
# init
table_rows=10000000
query_num=10000000
for engine in "tokudb"
do
  for test_type in "/tmp/sysbench-0.5/sysbench/tests/db/insert.lua"
  do
    # benchmark for different threads
    for threads in 24
    do
      echo ""
      echo "------------Begin test:engine--$engine ;test_type--$test_type ; thread_number--$threads ; Time:$(date)----------------------"
      echo ""
      ./sysbench \
      --test=${test_type} \
      --mysql-host=127.0.0.1 \
      --mysql-port=3306 \
      --mysql-user=sbtest \
      --mysql-password=sbtest \
      --mysql-db=sbtest_$engine \
      --mysql-table-engine=${engine} \
      --oltp-tables-count=8 \
      --oltp-test-mode=complex \
      --oltp-read-only=off \
      --oltp-reconnect=on \
      --oltp-table-size=${table_rows} \
      --num-threads=${threads} \
      --report-interval=1 \
      --report-checkpoints=10 \
      --rand-init=on \
      --max-requests=${query_num} \
      run
    done
  done
done

Then mysqld gives the error "failed: 2013 Lost connection to MySQL server during query", as below:
ALERT: mysql_drv_query() for query 'replace INTO sbtest1 (id, k, c, pad) VALUES (0, 5002186, '58536965495-78773881251-34917510647-97357903386-71273091302-82245336611-03903471871-97500059011-30392441317-37372903787', '81653820896-01377506960-65675023330-38622363197-27224403063')' failed: 2013 Lost connection to MySQL server during query
ALERT: mysql_drv_query() for query 'replace INTO sbtest8 (id, k, c, pad) VALUES (0, 5037668, '05069356639-56212159833-52658096667-80229040948-57572676893-03567184272-52225645313-98776709024-48773142329-43534595806', '54851911957-38855482592-16106245067-69454628330-96967394971')' failed: 2013 Lost connection to MySQL server during query
ALERT: mysql_drv_query() for query 'replace INTO sbtest5 (id, k, c, pad) VALUES (0, 4972552, '69055276063-43896213498-66445197428-33159881555-57475295544-23474704055-59334563380-90000749927-81616390566-51696050375', '58609786519-56043350264-46943200872-62486635192-90807996939')' failed: 2013 Lost connection to MySQL server during query
ALERT: mysql_drv_query() for query 'replace INTO sbtest6 (id, k, c, pad) VALUES (0, 5244590, '70686473544-81685735035-19257510634-93032728167-26640870795-01349760706-77384806727-47139896920-91469343596-78191787149', '06868973854-84821871579-01570156066-52552011347-66169492803')' failed: 2013 Lost connection to MySQL server during query
ALERT: mysql_drv_query() for query 'replace INTO sbtest5 (id, k, c, pad) VALUES (0, 4970284, '09493593876-68752512190-34739388240-00120715715-00726234437-44338656972-32600027230-40682733701-06922053650-59590518876', '18633596638-75136226624-24500001003-88852002762-10602199479')' failed: 2013 Lost connection to MySQL server during query
...................................................................................

And mysql.err gives the following information:
06:42:15 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.
Please help us make Percona Server better by reporting any bugs at http://bugs.percona.com/

key_buffer_size=33554432
read_buffer_size=2097152
max_used_connections=24
max_threads=1002
thread_count=24
connection_count=24
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 4150794 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x7ff2f98fe000
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 7ff75e275ce0 thread_stack 0x30000
/usr/local/Percona-Server-5.6.25-rel73.1-Linux.x86_64.ssl098e/bin/mysqld(my_print_stacktrace+0x2c)[0x8f44ec]
/usr/local/Percona-Server-5.6.25-rel73.1-Linux.x86_64.ssl098e/bin/mysqld(handle_fatal_signal+0x461)[0x67bc71]
/lib64/libpthread.so.0(+0xf4a0)[0x7ff7875134a0]
/usr/local/Percona-Server-5.6.25-rel73.1-Linux.x86_64.ssl098e/lib/mysql/plugin/ha_tokudb.so(+0x7c8c6)[0x7ff75b4ce8c6]
/usr/local/Percona-Server-5.6.25-rel73.1-Linux.x86_64.ssl098e/lib/mysql/plugin/ha_tokudb.so(_Z53toku_txn_manager_handle_snapshot_create_for_child_txnP7tokutxnP11txn_manager19__TXN_SNAPSHOT_TYPE+0x1b5)[0x7ff75b50e145]
/usr/local/Percona-Server-5.6.25-rel73.1-Linux.x86_64.ssl098e/lib/mysql/plugin/ha_tokudb.so(_Z23toku_txn_begin_with_xidP7tokutxnPS0_P10tokulogger12txnid_pair_s19__TXN_SNAPSHOT_TYPEP13__toku_db_txnbb+0x143)[0x7ff75b566d63]
/usr/local/Percona-Server-5.6.25-rel73.1-Linux.x86_64.ssl098e/lib/mysql/plugin/ha_tokudb.so(_Z14toku_txn_beginP13__toku_db_envP13__toku_db_txnPS2_j+0x30b)[0x7ff75b5671bb]
/usr/local/Percona-Server-5.6.25-rel73.1-Linux.x86_64.ssl098e/lib/mysql/plugin/ha_tokudb.so(_ZN9ha_tokudb9write_rowEPh+0x5a5)[0x7ff75b4b8c25]
/usr/local/Percona-Server-5.6.25-rel73.1-Linux.x86_64.ssl098e/bin/mysqld(_ZN7handler12ha_write_rowEPh+0x97)[0x5c6907]
/usr/local/Percona-Server-5.6.25-rel73.1-Linux.x86_64.ssl098e/bin/mysqld(_Z12write_recordP3THDP5TABLEP9COPY_INFOS4_+0x215)[0x6e5045]
/usr/local/Percona-Server-5.6.25-rel73.1-Linux.x86_64.ssl098e/bin/mysqld(_Z12mysql_insertP3THDP10TABLE_LISTR4ListI4ItemERS3_IS5_ES6_S6_15enum_duplicatesb+0x10a1)[0x6ea951]
/usr/local/Percona-Server-5.6.25-rel73.1-Linux.x86_64.ssl098e/bin/mysqld(_Z21mysql_execute_commandP3THD+0x1d73)[0x6fd803]
/usr/local/Percona-Server-5.6.25-rel73.1-Linux.x86_64.ssl098e/bin/mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x5a8)[0x702b28]
/usr/local/Percona-Server-5.6.25-rel73.1-Linux.x86_64.ssl098e/bin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x107b)[0x70433b]
/usr/local/Percona-Server-5.6.25-rel73.1-Linux.x86_64.ssl098e/bin/mysqld(_Z24do_handle_one_connectionP3THD+0x162)[0x6d1842]
/usr/local/Percona-Server-5.6.25-rel73.1-Linux.x86_64.ssl098e/bin/mysqld(handle_one_connection+0x40)[0x6d1930]
/usr/local/Percona-Server-5.6.25-rel73.1-Linux.x86_64.ssl098e/bin/mysqld(pfs_spawn_thread+0x143)[0x92bb43]
/lib64/libpthread.so.0(+0x77f1)[0x7ff78750b7f1]
/lib64/libc.so.6(clone+0x6d)[0x7ff785ccaccd]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (7ff2b1c1e010): is an invalid pointer
Connection ID (thread ID): 26
Status: NOT_KILLED

You may download the Percona Server operations manual by visiting
http://www.percona.com/software/percona-server/. You may find information
in the manual which will help you identify the cause of the crash.
150730 14:42:15 mysqld_safe Number of processes running now: 0
150730 14:42:15 mysqld_safe mysqld restarted
------------------------------------------------------------------------------------------------------------
(The section omitted here is the information from MySQL's crash restart; because of the word count limit, I left it out.)
-------------------------------------------------------------------------------------------------------------

Why base64-output=DECODE-ROWS does not print row events in MySQL binary logs

Lately I have seen many cases where users specified the option --base64-output=DECODE-ROWS to print out a statement representation of row events in MySQL binary logs, just to get nothing. The reason for this is obvious: the option --base64-output=DECODE-ROWS does not convert row events into their string representation; that is the job of the option --verbose. But why do users mix these two options so often? This blog post is the result of my investigations.

There are already two great blog posts about printing row events on the Percona blog: “Debugging problems with row based replication” by Justin Swanhart and “Identifying useful info from MySQL row-based binary logs” by Alok Pathak.

Both authors run mysqlbinlog with the options --base64-output=decode-rows -vv and demonstrate how a combination of them can produce human-readable output of row events. However, one thing which is not yet clear is what the differences between these options are. I want to underline the differences in this post.

Let’s check the user manual first.

--base64-output=value

This option determines when events should be displayed encoded as base-64 strings using BINLOG statements. The option has these permissible values (not case sensitive):

    AUTO (“automatic”) or UNSPEC (“unspecified”) displays BINLOG statements automatically when necessary (that is, for format description events and row events). If no --base64-output option is given, the effect is the same as --base64-output=AUTO.
    Note

    Automatic BINLOG display is the only safe behavior if you intend to use the output of mysqlbinlog to re-execute binary log file contents. The other option values are intended only for debugging or testing purposes because they may produce output that does not include all events in executable form.

    NEVER causes BINLOG statements not to be displayed. mysqlbinlog exits with an error if a row event is found that must be displayed using BINLOG.

    DECODE-ROWS specifies to mysqlbinlog that you intend for row events to be decoded and displayed as commented SQL statements by also specifying the --verbose option. Like NEVER, DECODE-ROWS suppresses display of BINLOG statements, but unlike NEVER, it does not exit with an error if a row event is found.

For examples that show the effect of --base64-output and --verbose on row event output, see Section 4.6.8.2, “mysqlbinlog Row Event Display”.

Literally, --base64-output=DECODE-ROWS just suppresses the BINLOG statement and does not print anything.

To test its effect I ran the command

insert into t values (2, 'bar');

on an InnoDB table while the binary log uses the ROW format. As expected, if I specify no option I receive unreadable output:

$ mysqlbinlog var/mysqld.1/data/master-bin.000002
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
/*!40019 SET @@session.max_insert_delayed_threads=0*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
# at 4
#150720 15:19:15 server id 1  end_log_pos 120 CRC32 0x3d52aee2  Start: binlog v 4, server v 5.6.25-73.1-debug-log created 150720 15:19:15
BINLOG '
Q+esVQ8BAAAAdAAAAHgAAAAAAAQANS42LjI1LTczLjEtZGVidWctbG9nAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAEzgNAAgAEgAEBAQEEgAAXAAEGggAAAAICAgCAAAACgoKGRkAAeKu
Uj0=
'/*!*/;
# at 120
#150720 15:19:21 server id 1  end_log_pos 192 CRC32 0xbebac59d  Query  thread_id=2  exec_time=0  error_code=0
SET TIMESTAMP=1437394761/*!*/;
SET @@session.pseudo_thread_id=2/*!*/;
SET @@session.foreign_key_checks=1, @@session.sql_auto_is_null=0, @@session.unique_checks=1, @@session.autocommit=1/*!*/;
SET @@session.sql_mode=1073741824/*!*/;
SET @@session.auto_increment_increment=1, @@session.auto_increment_offset=1/*!*/;
/*!C utf8 *//*!*/;
SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=8/*!*/;
SET @@session.lc_time_names=0/*!*/;
SET @@session.collation_database=DEFAULT/*!*/;
BEGIN
/*!*/;
# at 192
#150720 15:19:21 server id 1  end_log_pos 239 CRC32 0xe143838b  Table_map: `test`.`t` mapped to number 70
# at 239
#150720 15:19:21 server id 1  end_log_pos 283 CRC32 0x75523a2d  Write_rows: table id 70 flags: STMT_END_F
BINLOG '
SeesVRMBAAAALwAAAO8AAAAAAEYAAAAAAAEABHRlc3QAAXQAAgMPAv8AA4uDQ+E=
SeesVR4BAAAALAAAABsBAAAAAEYAAAAAAAEAAgAC//wCAAAAA2Jhci06UnU=
'/*!*/;
# at 283
#150720 15:19:21 server id 1  end_log_pos 314 CRC32 0xd183c769  Xid = 14
COMMIT/*!*/;
# at 314
#150720 15:19:22 server id 1  end_log_pos 362 CRC32 0x892fe43b  Rotate to master-bin.000003  pos: 4
DELIMITER ;
# End of log file
ROLLBACK /* added by mysqlbinlog */;
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;

The INSERT is here:

BINLOG '
SeesVRMBAAAALwAAAO8AAAAAAEYAAAAAAAEABHRlc3QAAXQAAgMPAv8AA4uDQ+E=
SeesVR4BAAAALAAAABsBAAAAAEYAAAAAAAEAAgAC//wCAAAAA2Jhci06UnU=
'/*!*/;

But this string is not for humans.

What will happen if I add the option --base64-output=DECODE-ROWS?

$ mysqlbinlog var/mysqld.1/data/master-bin.000002 --base64-output=DECODE-ROWS
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
/*!40019 SET @@session.max_insert_delayed_threads=0*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
# at 4
#150720 15:19:15 server id 1  end_log_pos 120 CRC32 0x3d52aee2  Start: binlog v 4, server v 5.6.25-73.1-debug-log created 150720 15:19:15
# at 120
#150720 15:19:21 server id 1  end_log_pos 192 CRC32 0xbebac59d  Query  thread_id=2  exec_time=0  error_code=0
SET TIMESTAMP=1437394761/*!*/;
SET @@session.pseudo_thread_id=2/*!*/;
SET @@session.foreign_key_checks=1, @@session.sql_auto_is_null=0, @@session.unique_checks=1, @@session.autocommit=1/*!*/;
SET @@session.sql_mode=1073741824/*!*/;
SET @@session.auto_increment_increment=1, @@session.auto_increment_offset=1/*!*/;
/*!C utf8 *//*!*/;
SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=8/*!*/;
SET @@session.lc_time_names=0/*!*/;
SET @@session.collation_database=DEFAULT/*!*/;
BEGIN
/*!*/;
# at 192
#150720 15:19:21 server id 1  end_log_pos 239 CRC32 0xe143838b  Table_map: `test`.`t` mapped to number 70
# at 239
#150720 15:19:21 server id 1  end_log_pos 283 CRC32 0x75523a2d  Write_rows: table id 70 flags: STMT_END_F
# at 283
#150720 15:19:21 server id 1  end_log_pos 314 CRC32 0xd183c769  Xid = 14
COMMIT/*!*/;
# at 314
#150720 15:19:22 server id 1  end_log_pos 362 CRC32 0x892fe43b  Rotate to master-bin.000003  pos: 4
DELIMITER ;
# End of log file
ROLLBACK /* added by mysqlbinlog */;
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;

The row event was just suppressed!

Let's now check the option --verbose:

--verbose, -v

Reconstruct row events and display them as commented SQL statements. If this option is given twice, the output includes comments to indicate column data types and some metadata.

For examples that show the effect of --base64-output and --verbose on row event output, see Section 4.6.8.2, “mysqlbinlog Row Event Display”.

Surprisingly, --base64-output=DECODE-ROWS is not needed:

$ mysqlbinlog var/mysqld.1/data/master-bin.000002 --verbose
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
/*!40019 SET @@session.max_insert_delayed_threads=0*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
# at 4
#150720 15:19:15 server id 1  end_log_pos 120 CRC32 0x3d52aee2  Start: binlog v 4, server v 5.6.25-73.1-debug-log created 150720 15:19:15
BINLOG '
Q+esVQ8BAAAAdAAAAHgAAAAAAAQANS42LjI1LTczLjEtZGVidWctbG9nAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAEzgNAAgAEgAEBAQEEgAAXAAEGggAAAAICAgCAAAACgoKGRkAAeKu
Uj0=
'/*!*/;
# at 120
#150720 15:19:21 server id 1  end_log_pos 192 CRC32 0xbebac59d  Query  thread_id=2  exec_time=0  error_code=0
SET TIMESTAMP=1437394761/*!*/;
SET @@session.pseudo_thread_id=2/*!*/;
SET @@session.foreign_key_checks=1, @@session.sql_auto_is_null=0, @@session.unique_checks=1, @@session.autocommit=1/*!*/;
SET @@session.sql_mode=1073741824/*!*/;
SET @@session.auto_increment_increment=1, @@session.auto_increment_offset=1/*!*/;
/*!C utf8 *//*!*/;
SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=8/*!*/;
SET @@session.lc_time_names=0/*!*/;
SET @@session.collation_database=DEFAULT/*!*/;
BEGIN
/*!*/;
# at 192
#150720 15:19:21 server id 1  end_log_pos 239 CRC32 0xe143838b  Table_map: `test`.`t` mapped to number 70
# at 239
#150720 15:19:21 server id 1  end_log_pos 283 CRC32 0x75523a2d  Write_rows: table id 70 flags: STMT_END_F
BINLOG '
SeesVRMBAAAALwAAAO8AAAAAAEYAAAAAAAEABHRlc3QAAXQAAgMPAv8AA4uDQ+E=
SeesVR4BAAAALAAAABsBAAAAAEYAAAAAAAEAAgAC//wCAAAAA2Jhci06UnU=
'/*!*/;
### INSERT INTO `test`.`t`
### SET
###   @1=2
###   @2='bar'
# at 283
#150720 15:19:21 server id 1  end_log_pos 314 CRC32 0xd183c769  Xid = 14
COMMIT/*!*/;
# at 314
#150720 15:19:22 server id 1  end_log_pos 362 CRC32 0x892fe43b  Rotate to master-bin.000003  pos: 4
DELIMITER ;
# End of log file
ROLLBACK /* added by mysqlbinlog */;
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;

The INSERT statement is successfully restored as:

### INSERT INTO `test`.`t`
### SET
###   @1=2
###   @2='bar'
# at 283

Why do the bloggers mentioned above suggest using --base64-output=DECODE-ROWS? Let's try to use both options:

$ mysqlbinlog var/mysqld.1/data/master-bin.000002 --base64-output=DECODE-ROWS --verbose
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
/*!40019 SET @@session.max_insert_delayed_threads=0*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
# at 4
#150720 15:19:15 server id 1  end_log_pos 120 CRC32 0x3d52aee2  Start: binlog v 4, server v 5.6.25-73.1-debug-log created 150720 15:19:15
# at 120
#150720 15:19:21 server id 1  end_log_pos 192 CRC32 0xbebac59d  Query  thread_id=2  exec_time=0  error_code=0
SET TIMESTAMP=1437394761/*!*/;
SET @@session.pseudo_thread_id=2/*!*/;
SET @@session.foreign_key_checks=1, @@session.sql_auto_is_null=0, @@session.unique_checks=1, @@session.autocommit=1/*!*/;
SET @@session.sql_mode=1073741824/*!*/;
SET @@session.auto_increment_increment=1, @@session.auto_increment_offset=1/*!*/;
/*!C utf8 *//*!*/;
SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=8/*!*/;
SET @@session.lc_time_names=0/*!*/;
SET @@session.collation_database=DEFAULT/*!*/;
BEGIN
/*!*/;
# at 192
#150720 15:19:21 server id 1  end_log_pos 239 CRC32 0xe143838b  Table_map: `test`.`t` mapped to number 70
# at 239
#150720 15:19:21 server id 1  end_log_pos 283 CRC32 0x75523a2d  Write_rows: table id 70 flags: STMT_END_F
### INSERT INTO `test`.`t`
### SET
###   @1=2
###   @2='bar'
# at 283
#150720 15:19:21 server id 1  end_log_pos 314 CRC32 0xd183c769  Xid = 14
COMMIT/*!*/;
# at 314
#150720 15:19:22 server id 1  end_log_pos 362 CRC32 0x892fe43b  Rotate to master-bin.000003  pos: 4
DELIMITER ;
# End of log file
ROLLBACK /* added by mysqlbinlog */;
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;

In this case the row event was suppressed and the statement is printed as a comment. Also, the resulting file cannot be used to re-apply events, because the statements are commented out. This is very useful when the binary log is big and you just need to investigate what it contains, not re-apply events.

This is not the main purpose of this post, but you can also find information about column metadata if you specify the option --verbose twice:

$ mysqlbinlog var/mysqld.1/data/master-bin.000002 --base64-output=DECODE-ROWS --verbose --verbose
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
/*!40019 SET @@session.max_insert_delayed_threads=0*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
# at 4
#150720 15:19:15 server id 1  end_log_pos 120 CRC32 0x3d52aee2  Start: binlog v 4, server v 5.6.25-73.1-debug-log created 150720 15:19:15
# at 120
#150720 15:19:21 server id 1  end_log_pos 192 CRC32 0xbebac59d  Query  thread_id=2  exec_time=0  error_code=0
SET TIMESTAMP=1437394761/*!*/;
SET @@session.pseudo_thread_id=2/*!*/;
SET @@session.foreign_key_checks=1, @@session.sql_auto_is_null=0, @@session.unique_checks=1, @@session.autocommit=1/*!*/;
SET @@session.sql_mode=1073741824/*!*/;
SET @@session.auto_increment_increment=1, @@session.auto_increment_offset=1/*!*/;
/*!C utf8 *//*!*/;
SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=8/*!*/;
SET @@session.lc_time_names=0/*!*/;
SET @@session.collation_database=DEFAULT/*!*/;
BEGIN
/*!*/;
# at 192
#150720 15:19:21 server id 1  end_log_pos 239 CRC32 0xe143838b  Table_map: `test`.`t` mapped to number 70
# at 239
#150720 15:19:21 server id 1  end_log_pos 283 CRC32 0x75523a2d  Write_rows: table id 70 flags: STMT_END_F
### INSERT INTO `test`.`t`
### SET
###   @1=2 /* INT meta=0 nullable=1 is_null=0 */
###   @2='bar' /* VARSTRING(255) meta=255 nullable=1 is_null=0 */
# at 283
#150720 15:19:21 server id 1  end_log_pos 314 CRC32 0xd183c769  Xid = 14
COMMIT/*!*/;
# at 314
#150720 15:19:22 server id 1  end_log_pos 362 CRC32 0x892fe43b  Rotate to master-bin.000003  pos: 4
DELIMITER ;
# End of log file
ROLLBACK /* added by mysqlbinlog */;
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;

Note that this is, again, the job of --verbose, not --base64-output=DECODE-ROWS.

To conclude:

  • If you want to see a statement representation of row events, use the option --verbose (-v)
  • If you want to see column metadata, specify --verbose twice: --verbose --verbose or -vv
  • If you want to suppress the output of row events, specify the option --base64-output=DECODE-ROWS
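
Putting these together, the invocation used throughout this post to get a human-readable view of row events (including column metadata) is:

$ mysqlbinlog --base64-output=DECODE-ROWS --verbose --verbose var/mysqld.1/data/master-bin.000002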

The post Why base64-output=DECODE-ROWS does not print row events in MySQL binary logs appeared first on MySQL Performance Blog.

Xtrabackup crash

Latest Forum Posts - July 29, 2015 - 3:46am
Hi All,

I have a FreeBSD 10.1-RELEASE-p14 server; MySQL server 5.5 and xtrabackup-2.2.10 are installed from the default packages repository. I'm using a custom backup script:
https://github.com/vitobotta/admin-s.../xtrabackup.sh

An older version of XtraBackup worked very well, but now I get a core dump:

(pts/2)[root@server:~/backup]# ./xtrabackup.sh incr
Loading configuration from /root/.xtrabackup.config.

InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
and Percona LLC and/or its affiliates 2009-2013. All Rights Reserved.

This software is published under
the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.

150729 13:38:10 innobackupex: Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_file=/etc/my.cnf;mysql_read_default_group=xtrabackup' as 'root' (using password: YES).
150729 13:38:10 innobackupex: Connected to MySQL server
150729 13:38:10 innobackupex: Executing a version check against the server...
150729 13:38:11 innobackupex: Done.
150729 13:38:11 innobackupex: Starting the backup operation

IMPORTANT: Please check that the backup run completes successfully.
At the end of a successful backup run innobackupex
prints "completed OK!".

innobackupex: Using server version 5.5.43

innobackupex: Created backup directory /usr/backup/mysql/full/2015-07-29_13-38-31

150729 13:38:32 innobackupex: Starting ibbackup with command: xtrabackup --defaults-file="/etc/my.cnf" --defaults-group="mysqld" --backup --suspend-at-end --target-dir=/usr/backup/mysql/full/2015-07-29_13-38-31 --datadir="/var/db/mysql/" --innodb_log_file_size="5242880" --innodb_data_file_path="ibdata1:10M:autoextend" --tmpdir=/tmp/ --extra-lsndir='/tmp/'
innobackupex: Waiting for ibbackup (pid=80791) to suspend
innobackupex: Suspend file '/usr/backup/mysql/full/2015-07-29_13-38-31/xtrabackup_suspended_2'

innobackupex: got a fatal error with the following stacktrace: at /usr/local/bin/innobackupex line 2703.
main::wait_for_ibbackup_file_create("/usr/backup/mysql/full/2015-07-29_13-38-31/xtrabackup_suspend"...) called at /usr/local/bin/innobackupex line 2723
main::wait_for_ibbackup_suspend("/usr/backup/mysql/full/2015-07-29_13-38-31/xtrabackup_suspend"...) called at /usr/local/bin/innobackupex line 1976
main::backup() called at /usr/local/bin/innobackupex line 1601
innobackupex: Error: The xtrabackup child process has died at /usr/local/bin/innobackupex line 2703.
ls: -t: No such file or directory
grep: /usr/backup/mysql//incr/**/backup.chain: No such file or directory
(pts/2)[root@server:~/backup]#


I would be grateful for any help.

Thanks

Multi-source replication in MySQL 5.7 vs Tungsten Replicator

Latest MySQL Performance Blog posts - July 29, 2015 - 12:00am

MySQL 5.7 comes with a new set of features, and multi-source replication is one of them. In a few words, this means that one slave can replicate from different masters simultaneously.

During the last couple of months I've been playing a lot with this, trying to analyze its potential in a real case that I've been facing while working with a customer.

This was motivated because my customer is already using multi-sourced slaves with Tungsten Replicator, and I wanted to do a side-by-side comparison between Tungsten Replicator and multi-source replication in MySQL 5.7.

Consider the following scenario:


DB1 is our main master, attending mostly writes from several applications; it also needs to serve read traffic, which is putting its capacity close to the limit. It has 6 replication slaves attached using regular replication.
A1, A2, A3, B1, B2 and DB7 are reporting slaves used to offload some reads from the master and also working on some offline ETL processes.

Since they had some idle capacity, the customer decided to go further and set up a different architecture:
A1 and B1 also became masters of other slaves using Tungsten Replicator; in this case group A is a set of servers for a statistics application and group B is attending a finance application, so A2, A3 and B2 became multi-sourced slaves.
New applications write directly to A1 and B1 without impacting the write capacity of the main master.

Pros and Cons of this approach

Pros

  • It just works. We’ve been running this way for a long time now and we haven’t suffered major issues.
  • Tungsten Replicator has some built in tools and scripts to make slave provision easy.

Cons

  • Tungsten Replicator is a great product, but bigger than needed for this architecture. In some cases we had to configure the Java Virtual Machine with 4GB of RAM to make it work properly.
  • Tungsten is a complex tool that needs some extra expertise to deploy it, make it work and troubleshoot issues when errors happen (i.e. handling duplicate key errors).

With all this in mind we moved a step forward and started to test whether we could move this architecture to use legacy replication only.

New architecture design:

We added some storage capacity to DB7 for our testing purposes, and the goal here is to replace all Tungsten-replicated slaves with a single server where all databases are consolidated.

Because of some data dependency we weren't able to completely separate the A1 and B1 servers to become master-only, so they are currently acting as masters of DB7 and slaves of DB1. By data dependency I mean DB1 replicates its schemas to all of its direct slaves, including DB7. DB7 also gets replication of the finance DB running locally on B1 and the stats DB running locally on A1.

Some details about how this was done and how multi-source replication is implemented:
  • The main difference from regular replication, as known up to version 5.6, is that now you have replication channels; each channel means a different source, in other words each master has its own replication channel.
  • Replication needs to be set up as crash safe, meaning that both the master_info_repository and relay_log_info_repository variables need to be set to TABLE (see the snippet after this list).
  • We haven't considered GTID because the servers acting as masters run different versions than our test multi-sourced slave.
  • log_slave_updates needs to be disabled on A1 and B1 to avoid having duplicate data in DB7 due to the replication flow.
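
For reference, the crash-safe setting mentioned above is a two-line my.cnf change on the slave; a minimal sketch:

[mysqld]
master_info_repository    = TABLE
relay_log_info_repository = TABLE
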
Pros and Cons of this approach

Pros

  • MySQL 5.7 can replicate from masters of different versions; we tested multi-source replication working with 5.5 and 5.6 masters simultaneously and didn't suffer problems besides the known changes with timestamp-based fields.
  • Administration becomes easier. Any DBA already familiar with legacy replication can adapt to handle multiple channels without much learning: some new variables and a couple of new tables and you're ready to go.

Cons

  • 5.7 is not production ready yet. At this point we don't have a GA release date, which means that we may expect bugs to appear in the short/mid term.
  • Multi-source is still tricky for some special cases: database and table filtering works globally (you can't set per-channel filters), and administration commands like sql_slave_skip_counter are still global, which means you can't easily skip a statement in a particular channel.
Now the funny part: The How

It was easier than you think. First of all we needed to start from a backup of the data coming from our masters. Due to the versions used in production (the main master is 5.5, A1 and B1 are 5.6) we started from a logical dump, so we avoided dealing with mysql_upgrade issues.

Disclaimer: this does not pretend to be a guide on how to set up multi-source replication.

For the matter of our case we did the backup/restore using mydumper/myloader as follows:

[root@db1]$ mydumper -l 600 -v 3 -t 8 --outputdir /mnt/backup_db1/20150708 --less-locking --regex="^(database1.|database2.|database3.)"
[root@a1]$ mydumper -l 600 -v 3 -t 8 --outputdir /mnt/backup_a1/20150708 --less-locking --regex="^(tungsten_stats.|stats.)"
[root@b1]$ mydumper -l 600 -v 3 -t 8 --outputdir /mnt/backup_b1/20150708 --less-locking --regex="^(tungsten_finance.|finance.)"

Notice each command was run on its respective master server; now the restore part:

[root@db7]$ myloader -d /mnt/backup_db1/20150708 -o -t 8 -q 10000 -h localhost
[root@db7]$ myloader -d /mnt/backup_a1/20150708 -o -t 8 -q 10000 -h localhost
[root@db7]$ myloader -d /mnt/backup_b1/20150708 -o -t 8 -q 10000 -h localhost

So at this point we have a new slave with a copy of the databases from 3 different masters. Just for context, we needed to dump/restore the tungsten* databases because they are constantly updated by the Replicator (which at this point is still in use). Pretty easy, right?

Now for the most important part of this whole process: setting up replication. The procedure is very similar to regular replication, but now we need to consider which binlog position is necessary for each replication channel; this is very easy to get from each backup, in this case by reading the metadata file created by mydumper. In the known backup methods (either logical or physical) you have a way to get binlog coordinates, for example --master-data=2 in mysqldump or the xtrabackup_binlog_info file in xtrabackup.
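
For instance, with mysqldump --master-data=2 the coordinates appear as a commented-out CHANGE MASTER TO statement near the top of the dump file, so a quick grep finds them (the file name and position below are illustrative, not from this setup):

$ grep -m 1 "CHANGE MASTER" dump.sql
-- CHANGE MASTER TO MASTER_LOG_FILE='db1-bin.000123', MASTER_LOG_POS=4;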

Once we have the replication info (and have created a replication user on each master), we only need to run the known CHANGE MASTER TO and START SLAVE commands, but here we have a new way to do it:

db7:information_schema> change master to master_host='db1', master_user='rep', master_password='rep', master_log_file='db1-bin.091487', master_log_pos=74910596 FOR CHANNEL 'main_master';
Query OK, 0 rows affected (0.02 sec)

db7:information_schema> change master to master_host='a1', master_user='rep', master_password='rep', master_log_file='a1-bin.394460', master_log_pos=56004 FOR CHANNEL 'a1_slave';
Query OK, 0 rows affected (0.02 sec)

db7:information_schema> change master to master_host='b1', master_user='rep', master_password='rep', master_log_file='b1-bin.1653245', master_log_pos=2563356 FOR CHANNEL 'b1_slave';
Query OK, 0 rows affected (0.02 sec)

Replication is set and now we are good to go:

db10:information_schema> START SLAVE FOR CHANNEL 'main_master';
Query OK, 0 rows affected (0.00 sec)

db10:information_schema> START SLAVE FOR CHANNEL 'a1_slave';
Query OK, 0 rows affected (0.00 sec)

db10:information_schema> START SLAVE FOR CHANNEL 'b1_slave';
Query OK, 0 rows affected (0.00 sec)

The new commands include the FOR CHANNEL 'channel_name' option to handle replication channels independently.

At this point we have a slave running 3 replication channels from different sources. We can check the status of replication with the known command SHOW SLAVE STATUS (TL;DR):

db10:information_schema> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: db1
Master_User: rep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: db1-bin.077011
Read_Master_Log_Pos: 15688468
Relay_Log_File: db7-relay-main_master.000500
Relay_Log_Pos: 18896705
Relay_Master_Log_File: db1-bin.076977
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table: mysql.%,temp.%
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 18896506
Relay_Log_Space: 2260203264
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 31047
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1004
Master_UUID: 65107c0c-7ab5-11e4-a85a-bc305bf01f00
Master_Info_File: mysql.slave_master_info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: System lock
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0
Replicate_Rewrite_DB:
Channel_Name: main_master
*************************** 2. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: a1
Master_User: slave
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: a1-bin.072336
Read_Master_Log_Pos: 10329256
Relay_Log_File: db7-relay-db3_slave.000025
Relay_Log_Pos: 10329447
Relay_Master_Log_File: a1-bin.072336
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table: mysql.%,temp.%
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 10329256
Relay_Log_Space: 10329697
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 4000
Master_UUID: 0f061ec4-6fad-11e4-a069-a0d3c10545b0
Master_Info_File: mysql.slave_master_info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0
Replicate_Rewrite_DB:
Channel_Name: a1_slave
*************************** 3. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: b1.las1.fanops.net
Master_User: slave
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: b1-bin.093214
Read_Master_Log_Pos: 176544432
Relay_Log_File: db7-relay-db8_slave.000991
Relay_Log_Pos: 176544623
Relay_Master_Log_File: b1-bin.093214
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table: mysql.%,temp.%
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 176544432
Relay_Log_Space: 176544870
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1001
Master_UUID:
Master_Info_File: mysql.slave_master_info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0
Replicate_Rewrite_DB:
Channel_Name: b1_slave
3 rows in set (0.00 sec)

Yeah, I know, the output is too large, and the Oracle guys noticed it too, so they have created a set of new tables in the performance_schema DB to help us retrieve this information in a friendlier manner; check this link for more information. We could also run SHOW SLAVE STATUS FOR CHANNEL 'b1_slave', for instance.
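
As a hedged sketch of how those tables can be queried (table and column names from MySQL 5.7's performance_schema; check the manual for the full column lists):

db7> SELECT CHANNEL_NAME, SERVICE_STATE FROM performance_schema.replication_connection_status;
db7> SELECT CHANNEL_NAME, SERVICE_STATE, LAST_ERROR_MESSAGE FROM performance_schema.replication_applier_status_by_worker;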

Some limitations found during tests:
  • As mentioned, some configurations are still global and can't be set per replication channel; for instance replication filters, which can be set without restarting MySQL, but which will affect all replication channels, as you can see here.
  • Replication events are somehow serialized on the slave side, just like a global counter that is not well documented yet. In reality this means that you need to be very careful when troubleshooting issues, because you may suffer unexpected behavior; for instance, if you have 2 replication channels failing with a duplicate key error, then it is not easy to predict which event you will skip when running set global sql_slave_skip_counter=1.

Conclusions
So far this new feature looks very nice and provides some extra flexibility to slaves, which helps to reduce architecture complexity when we want to consolidate databases from different sources into a single server. After some time testing it, I'd say that I prefer this type of replication over Tungsten Replicator in this kind of scenario due to its simplicity for administration; e.g. pt-table-checksum and pt-table-sync will work without the limitations they have under Tungsten.

With the exception of some limitations that need to be addressed, I believe this new feature is game changing and will definitely make DBAs' lives easier. I still have a lot to test, but that is material for a future post.

The post Multi-source replication in MySQL 5.7 vs Tungsten Replicator appeared first on MySQL Performance Blog.

MySQL QA Episode 9: Reducing Testcases for Experts: multi-threaded reducer.sh

Latest MySQL Performance Blog posts - July 28, 2015 - 3:00am

Welcome to MySQL QA Episode 9. This episode will go more in-depth into reducer.sh: Reducing Testcases for Experts: multi-threaded reducer.sh

We will explore how to use reducer.sh to do true multi-threaded testcase reduction – a world’s first.

Topics:

  1. Expert configurable variables & their default reducer.sh settings (see the sketch after this list)
    1. PQUERY_MULTI
    2. PQUERY_MULTI_THREADS
    3. PQUERY_MULTI_CLIENT_THREADS
    4. PQUERY_MULTI_QUERIES
    5. PQUERY_REVERSE_NOSHUFFLE_OPT
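
These are shell variables set near the top of reducer.sh itself. A minimal sketch of what enabling multi-threaded reduction might look like (the comments are assumptions based on the variable names, and the values are placeholders, not the defaults covered in the episode):

# Illustrative reducer.sh settings; see the episode for the real defaults
PQUERY_MULTI=1                  # enable multi-threaded (pquery-based) reduction
PQUERY_MULTI_THREADS=10         # (assumed) number of concurrent pquery runs
PQUERY_MULTI_CLIENT_THREADS=30  # (assumed) client threads per pquery run
PQUERY_MULTI_QUERIES=400000     # (assumed) queries replayed per run
PQUERY_REVERSE_NOSHUFFLE_OPT=0  # (assumed) reverses pquery's no-shuffle replay behavior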

Full-screen viewing @ 720p resolution recommended.

The post MySQL QA Episode 9: Reducing Testcases for Experts: multi-threaded reducer.sh appeared first on MySQL Performance Blog.

[Warning] WSREP: last inactive check more than PT1.5S ago, skipping check

Latest Forum Posts - July 28, 2015 - 1:59am
I have a 3-node Galera cluster running on RHEL 6.5 x64.
node1 - 10.22.5.8
node2 - 10.22.5.11
node3 - 10.22.75.185

Version on all nodes: mysqld Ver 5.5.23 for Linux on x86_64 (Percona XtraDB Cluster (GPL), wsrep_23.5.r333)

node1 /etc/my.cnf
[MYSQLD]
user=mysql
basedir=/usr/
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
pid-file=mysqld.pid
port=3306
log-error=error.log
character_set_client=utf8
character_set_server=utf8
innodb-buffer-pool-size=8964M
innodb-flush-log-at-trx_commit=2
innodb-file-per-table=1
innodb-doublewrite=0
innodb_log_file_size=256M
innodb-log-files-in-group=3
innodb-thread-concurrency=0
innodb-flush-method = O_DIRECT
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2
engine-condition-pushdown=1
binlog_format=ROW
key_buffer_size = 256M
max-allowed-packet = 128M
sort-buffer-size = 512K
read-buffer-size = 256K
read-rnd-buffer-size = 512K
myisam-sort-buffer_size = 8M
skip-name-resolve
memlock=0
sysdate-is-now=1
max-connections=512
thread-cache-size=512
query-cache-type = 0
query-cache-size = 0
table-open_cache=1024
lower-case-table-names=0
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_node_address=10.22.5.8
wsrep_provider_options="pc.ignore_sb = yes"
wsrep_cluster_name="my_wsrep_cluster"
wsrep_cluster_address="gcomm://10.22.75.185"
wsrep_node_name=10.22.5.8
wsrep_slave_threads=1
wsrep_certify_nonPK=1
wsrep_debug=0
wsrep_convert_LOCK_to_trx=0
wsrep_retry_autocommit=1
wsrep_auto_increment_control=1
wsrep_drupal_282555_workaround=1
wsrep_causal_reads=0
wsrep_sst_method=rsync
wsrep_sst_auth=rootasswd
wsrep_sst_donor=10.22.75.185
[MYSQL]
socket=/var/lib/mysql/mysql.sock
[client]
socket=/var/lib/mysql/mysql.sock
[MYSQLD_SAFE]
pid-file=mysqld.pid
log-error=error.log
basedir=/usr/
datadir=/var/lib/mysql



node2 /etc/my.cnf
[MYSQLD]
user=mysql
basedir=/usr/
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
pid-file=mysqld.pid
port=3306
log-error=error.log
innodb-buffer-pool-size=8964M
innodb-flush-log-at-trx_commit=2
innodb-file-per-table=1
innodb-doublewrite=0
innodb_log_file_size=256M
innodb-log-files-in-group=3
innodb-thread-concurrency=0
innodb-flush-method = O_DIRECT
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2
engine-condition-pushdown=1
binlog_format=ROW
key_buffer_size = 256M
max-allowed-packet = 128M
sort-buffer-size = 512K
read-buffer-size = 256K
read-rnd-buffer-size = 512K
myisam-sort-buffer_size = 8M
skip-name-resolve
memlock=0
sysdate-is-now=1
max-connections=512
thread-cache-size=512
query-cache-type = 0
query-cache-size = 0
table-open_cache=1024
lower-case-table-names=0
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_node_address=10.22.5.11
wsrep_cluster_name="my_wsrep_cluster"
wsrep_cluster_address="gcomm://10.22.75.185"
wsrep_node_name=10.22.5.11
wsrep_slave_threads=1
wsrep_certify_nonPK=1
wsrep_debug=0
wsrep_convert_LOCK_to_trx=0
wsrep_retry_autocommit=1
wsrep_auto_increment_control=1
wsrep_drupal_282555_workaround=1
wsrep_causal_reads=0
wsrep_sst_method=xtrabackup-v2
wsrep_sst_receive_address=10.22.5.11
wsrep_sst_auth=rootasswd
wsrep_sst_donor=10.22.75.185
[MYSQL]
socket=/var/lib/mysql/mysql.sock
[client]
socket=/var/lib/mysql/mysql.sock
[MYSQLD_SAFE]
pid-file=mysqld.pid
log-error=error.log
basedir=/usr/
datadir=/var/lib/mysql





node3 /etc/my.cnf
[MYSQLD]
user=mysql
basedir=/usr/
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
pid-file=mysqld.pid
port=3306
log-error=error.log
innodb-buffer-pool-size=8964M
innodb-flush-log-at-trx_commit=2
innodb-file-per-table=1
innodb-doublewrite=0
innodb_log_file_size=256M
innodb-log-files-in-group=3
innodb-thread-concurrency=0
innodb-flush-method = O_DIRECT
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2
engine-condition-pushdown=1
binlog_format=ROW
key_buffer_size = 256M
max-allowed-packet = 16M
sort-buffer-size = 512K
read-buffer-size = 256K
read-rnd-buffer-size = 512K
myisam-sort-buffer_size = 8M
skip-name-resolve
memlock=0
sysdate-is-now=1
max-connections=512
thread-cache-size=512
query-cache-type = 0
query-cache-size = 0
table-open_cache=1024
lower-case-table-names=0
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_node_address=10.22.75.185
wsrep_cluster_address="gcomm://10.22.5.11"
wsrep_node_name=10.22.75.185
wsrep_slave_threads=1
wsrep_certify_nonPK=1
wsrep_debug=0
wsrep_convert_LOCK_to_trx=0
wsrep_retry_autocommit=1
wsrep_auto_increment_control=1
wsrep_drupal_282555_workaround=1
wsrep_causal_reads=0
wsrep_sst_method=rsync
wsrep_sst_auth=rootasswd
wsrep_sst_donor=10.22.5.11
[MYSQL]
socket=/var/lib/mysql/mysql.sock
[client]
socket=/var/lib/mysql/mysql.sock
[MYSQLD_SAFE]
pid-file=mysqld.pid
log-error=error.log
basedir=/usr/
datadir=/var/lib/mysql





I shut down the entire cluster because I had to decrease 'innodb-buffer-pool-size', since mysqld was using too much swap space.
After changing this, I bootstrapped the cluster from the most advanced node, which was node1, with the command below:
service mysql start --wsrep_cluster_address="gcomm://"

Now the cluster size is 1 and I can't join the other nodes (node2, node3) to the cluster. Please observe the output below from the error.log of node2 during the 'service mysql start' process.

/var/lib/mysql/error.log
150728 10:34:14 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
150728 10:34:14 [Note] Flashcache bypass: disabled
150728 10:34:14 [Note] Flashcache setup error is : ioctl failed
150728 10:34:14 [Note] WSREP: Read nil XID from storage engines, skipping position init
150728 10:34:14 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib64/libgalera_smm.so'
150728 10:34:14 [Note] WSREP: wsrep_load(): Galera 2.1dev(r109) by Codership Oy <info@codership.com> loaded succesfully.
150728 10:34:14 [Note] WSREP: Reusing existing '/var/lib/mysql//galera.cache'.
150728 10:34:14 [Note] WSREP: Passing config to GCS: base_host = 10.22.5.11; gcache.dir = /var/lib/mysql/; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /var/lib/mysql//galera.cache; gcache.page_size = 128M; gcache.size = 128M; gcs.fc_debug = 0; gcs.fc_factor = 0.5; gcs.fc_limit = 16; gcs.fc_master_slave = NO; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; replicator.causal_read_timeout = PT30S; replicator.commit_order = 3
150728 10:34:14 [Note] WSREP: wsrep_sst_grab()
150728 10:34:14 [Note] WSREP: Start replication
150728 10:34:14 [Note] WSREP: Found saved state: 9ba720b3-8d53-11e1-0800-52407463bac7:304911180
150728 10:34:14 [Note] WSREP: Assign initial position for certification: 304911180, protocol version: -1
150728 10:34:14 [Note] WSREP: Setting initial position to 9ba720b3-8d53-11e1-0800-52407463bac7:304911180
150728 10:34:14 [Note] WSREP: protonet asio version 0
150728 10:34:14 [Note] WSREP: backend: asio
150728 10:34:14 [Note] WSREP: GMCast version 0
150728 10:34:14 [Note] WSREP: (6da55788-3503-11e5-0800-0de34d03910b, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
150728 10:34:14 [Note] WSREP: (6da55788-3503-11e5-0800-0de34d03910b, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
150728 10:34:14 [Note] WSREP: EVS version 0
150728 10:34:14 [Note] WSREP: PC version 0
150728 10:34:14 [Note] WSREP: gcomm: connecting to group 'my_wsrep_cluster', peer '10.22.75.185:'
150728 10:34:17 [Warning] WSREP: no nodes coming from prim view, prim not possible
150728 10:34:17 [Note] WSREP: view(view_id(NON_PRIM,6da55788-3503-11e5-0800-0de34d03910b,1) memb {
6da55788-3503-11e5-0800-0de34d03910b,
} joined {
} left {
} partitioned {
})
150728 10:34:17 [Note] WSREP: gcomm: connected
150728 10:34:17 [Note] WSREP: Changing maximum packet size to 64500, resulting msg size: 32636
150728 10:34:17 [Note] WSREP: Shifting CLOSED -> OPEN (TO: 0)
150728 10:34:17 [Note] WSREP: Opened channel 'my_wsrep_cluster'
150728 10:34:17 [Note] WSREP: New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 1
150728 10:34:17 [Note] WSREP: Waiting for SST to complete.
150728 10:34:17 [Note] WSREP: Flow-control interval: [8, 16]
150728 10:34:17 [Note] WSREP: Received NON-PRIMARY.
150728 10:34:17 [Note] WSREP: New cluster view: global state: 9ba720b3-8d53-11e1-0800-52407463bac7:304911180, view# -1: non-Primary, number of nodes: 1, my index: 0, protocol version -1
150728 10:34:17 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
150728 10:34:18 [Warning] WSREP: last inactive check more than PT1.5S ago, skipping check



Is my cluster back in sync?

Latest Forum Posts - July 28, 2015 - 12:33am
Hello,

I have a 2-node percona-xtradb-cluster-56 in multi-master mode. The server crashed (both nodes), then was bootstrapped, maybe in the wrong order, because the users noticed that some data wasn't up to date. That's not relevant; what I would like to know is whether it is back to normal synchronized mode right now, or still in some flaky, unpredictable state.

Here are the variables for NODE1:

+------------------------------+---------------------------------------------------+
| Variable_name | Value |
+------------------------------+---------------------------------------------------+
| wsrep_local_state_uuid | 53ef5e93-33de-11e5-adb5-1b37c6b643bd |
| wsrep_protocol_version | 6 |
| wsrep_last_committed | 912789 |
| wsrep_replicated | 0 |
| wsrep_replicated_bytes | 0 |
| wsrep_repl_keys | 0 |
| wsrep_repl_keys_bytes | 0 |
| wsrep_repl_data_bytes | 0 |
| wsrep_repl_other_bytes | 0 |
| wsrep_received | 776 |
| wsrep_received_bytes | 56548807 |
| wsrep_local_commits | 0 |
| wsrep_local_cert_failures | 0 |
| wsrep_local_replays | 0 |
| wsrep_local_send_queue | 0 |
| wsrep_local_send_queue_max | 2 |
| wsrep_local_send_queue_min | 0 |
| wsrep_local_send_queue_avg | 0.111111 |
| wsrep_local_recv_queue | 0 |
| wsrep_local_recv_queue_max | 151 |
| wsrep_local_recv_queue_min | 0 |
| wsrep_local_recv_queue_avg | 15.442010 |
| wsrep_local_cached_downto | 912023 |
| wsrep_flow_control_paused_ns | 458 |
| wsrep_flow_control_paused | 0.000000 |
| wsrep_flow_control_sent | 1 |
| wsrep_flow_control_recv | 1 |
| wsrep_cert_deps_distance | 24.422425 |
| wsrep_apply_oooe | 0.000000 |
| wsrep_apply_oool | 0.000000 |
| wsrep_apply_window | 1.000000 |
| wsrep_commit_oooe | 0.000000 |
| wsrep_commit_oool | 0.000000 |
| wsrep_commit_window | 1.000000 |
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| wsrep_cert_index_size | 7 |
| wsrep_causal_reads | 0 |
| wsrep_cert_interval | 0.234681 |
| wsrep_incoming_addresses | <removed> |
| wsrep_evs_delayed | |
| wsrep_evs_evict_list | |
| wsrep_evs_repl_latency | 0.000237055/0.000330571/0.00051501/8.63108e-05/13 |
| wsrep_evs_state | OPERATIONAL |
| wsrep_gcomm_uuid | fa66fe2c-34f5-11e5-8c84-035250e696ad |
| wsrep_cluster_conf_id | 4 |
| wsrep_cluster_size | 2 |
| wsrep_cluster_state_uuid | 53ef5e93-33de-11e5-adb5-1b37c6b643bd |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| wsrep_local_bf_aborts | 0 |
| wsrep_local_index | 1 |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy <info@codership.com> |
| wsrep_provider_version | 3.8(rf6147dd) |
| wsrep_ready | ON |
+------------------------------+---------------------------------------------------+

Attempting to restore table from xtrabackup (v2.2.11)

Latest Forum Posts - July 27, 2015 - 4:16am
So I have a backup I'm trying to restore into a new environment, but every time I attempt to prepare the backup for restoration, the defaults file in the backup location is ignored, and there is an error that the innodb_data_file_path parameter is not there. I figured this was related to permissions, as it is world-writable on a CIFS share, and so copied it off to my pwd. However, the error is the same, except that it now shows the new defaults file name in it instead. The file I set as the new defaults file apparently doesn't have the innodb_data_file_path entry in it (I am 100% certain that it does, I am looking at it right now!!).

The file permissions on the new file are 644. I have no idea why this is happening. I suspect this may be related to the fact that the backup is on a CIFS share, but I need to be sure this is the case; however, we don't have any spare disks to put the backup on before restoring. See the output from the innobackupex command below:



# innobackupex --defaults-file ./backup-my.cnf --apply-log --export /backups/20150723

InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
and Percona LLC and/or its affiliates 2009-2013. All Rights Reserved.

This software is published under
the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.

Get the latest version of Percona XtraBackup, documentation, and help resources:
http://www.percona.com/xb/p

150727 11:59:19 innobackupex: Starting the apply-log operation

IMPORTANT: Please check that the apply-log run completes successfully.
At the end of a successful apply-log run innobackupex
prints "completed OK!".

Warning: World-writable config file '/mnt/backup/mysql/20150723/backup-my.cnf' is ignored
innobackupex: got a fatal error with the following stacktrace: at /usr/bin/innobackupex line 4545
main::get_option('innodb_data_file_path') called at /usr/bin/innobackupex line 2631
main::apply_log() called at /usr/bin/innobackupex line 1578
innobackupex: Error: no 'innodb_data_file_path' option in group 'mysqld' in server configuration file '/root/backup-my.cnf' at /usr/bin/innobackupex line 4545.

MySQL 5.7 errors, need suggestion for heavy-load MySQL server

Latest Forum Posts - July 26, 2015 - 10:46pm
Hi all, I got the following error on MySQL 5.7 (error reading communication packets): 2015-07-25T22:25:36.782743Z 3314 [Note] Aborted connection 3314 to db: 'royzez' user: 'root' host: 'localhost' (Got an ). Can anyone please suggest which main variables should be changed for a heavy-load MySQL database? I have 32 GB of RAM and an 8-core dedicated CPU, and more than 10000 concurrent users in 10 sec. Please suggest what I should change to achieve this much concurrency; all tables are InnoDB.

Getting duplicate entries after restore

Latest Forum Posts - July 26, 2015 - 2:38pm
Hi,

We've recently upgraded a MySQL 5.0 master-master setup to Percona 5.6. Slaving went b0rken due to some failures on our side, but we thought we could simply fix it by using xtrabackup to create a backup from the running server and importing it to the slave. I've been trying to do that this entire weekend (partly because it's a huge database with an insane number of databases and tables), but to no avail. Can someone shed some light on what I might be doing wrong here?

First, I run the following on the current in production master:

ulimit -n 409600
innobackupex --defaults-file=/etc/mysql/debian.cnf /mnt

When that is done, I copy the resulting directory to the other server and run:

innobackupex --use-memory=4G --apply-log /srv/restore

It exits with an OK message eventually. Now I restore it to the database with:

innobackupex --move-back /srv/restore

All goes well and I can start MySQL again (after I chown the /srv/mysql directory, which is our datadir). Data is in there and the database is running fine. Now I start slaving on this database:

/usr/bin/mysql --defaults-file=/etc/mysql/debian.cnf -e "CHANGE MASTER TO MASTER_HOST='10.x.x.x', MASTER_USER='replication', MASTER_PASSWORD='verysecret', MASTER_AUTO_POSITION=1; START SLAVE"

Slaving starts but immediately stops due to a 1062 error. After investigation, I find out that the entry it's trying to apply was added on the master db right after I started the backup. I can fix that, but I immediately get a new error.

To me, it seems like the backup did not contain all the latest GTIDs, only the ones that were available at the start of the backup? I thought this was exactly what XtraBackup was supposed to fix? I see no alternative now to making sure no writes are done on the database during a backup. Am I doing something wrong here? Is this supposed to happen?

Running on Debian Wheezy with all the latest patches.
Server version: 5.6.25-73.1-log Percona Server (GPL), Release 73.1, Revision 07b797f

$ innobackupex --version
InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
and Percona LLC and/or its affiliates 2009-2013. All Rights Reserved.

$ xtrabackup --version
xtrabackup version 2.2.11 based on MySQL server 5.6.24 Linux (x86_64) (revision id: )

Any help would be appreciated!

InnoDB vs TokuDB in LinkBench benchmark

Latest MySQL Performance Blog posts - July 24, 2015 - 7:12am

Previously I tested Tokutek’s Fractal Trees (TokuMX & TokuMXse) as MongoDB storage engines – today let’s look into the MySQL area.

I am going to use a modified LinkBench under a heavy IO load.

I compared InnoDB without compression, InnoDB with 8k compression, and TokuDB with quicklz compression.
The uncompressed data size is 115GiB; the cache size is 12GiB for InnoDB and 8GiB plus 4GiB of OS cache for TokuDB.

It is important to note that I used tokudb_fanout=128, which is only available in our latest Percona Server release.
I will write more on Fractal Tree internals and what tokudb_fanout means later. For now let's just say it changes the shape of the fractal tree (compared to the default tokudb_fanout=16).

I am using two storage options:

  • Intel P3600 PCIe SSD 1.6TB (marked as “i3600” on charts) – as a high end performance option
  • Crucial M500 SATA SSD 900GB (marked as “M500” on charts) – as a low end SATA SSD

The full results and engine options are available here.

Results on Crucial M500 (throughput, more is better)

    Engine Throughput [ADD_LINK/10sec]

  • InnoDB: 6029
  • InnoDB 8K: 6911
  • TokuDB: 14633

Here TokuDB outperforms InnoDB by almost two times, but it also shows great variance in results, which I attribute to checkpoint activity.

Results on Intel P3600 (throughput, more is better)

    Engine Throughput [ADD_LINK/10sec]

  • InnoDB: 27739
  • InnoDB 8K: 9853
  • TokuDB: 20594

To understand why InnoDB shines on fast storage, let's review the IO usage of all engines.
The following chart shows the reads in KiB that each engine, on average, performs per client request.

The following chart shows the writes in KiB that each engine, on average, performs per client request.

Here we can make an interesting observation: TokuDB on average performs half as many writes as InnoDB, and this is what allows TokuDB to be better on slower storage. On fast storage, where there is no performance penalty for many writes, InnoDB is able to get ahead, as InnoDB is still better at using CPUs.

Though, it is worth remembering that:

  • On fast, expensive storage, TokuDB provides better compression, which allows you to store more data in a limited capacity
  • TokuDB still writes half as much as InnoDB, which means twice the lifetime for a (still expensive) SSD.

Also, looking at the results, I conclude that InnoDB compression is inefficient in its implementation, as it is not able to get benefits: first, from doing fewer reads (well, it helps it do better than uncompressed InnoDB, but not by much); and second, from fast storage.

The post InnoDB vs TokuDB in LinkBench benchmark appeared first on MySQL Performance Blog.

How to let mydumper piped to ssh?

Lastest Forum Posts - July 24, 2015 - 3:54am
From https://www.percona.com/blog/2015/07...l-environment/, Mydumper can be piped, but how do you pipe mydumper with ssh?
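One hedged approach, since mydumper writes a directory of files rather than a single stream: run it on the remote box over ssh and send the output directory back through the pipe (host, credentials and paths below are made up for illustration):

ssh user@dbhost "mydumper --user backup --password secret --outputdir /tmp/dump && tar -C /tmp -cz dump" > dump.tar.gz

Alternatively, mydumper can connect to the remote server directly via its --host option, with no ssh involved.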

pt-online-schema-change exited with error "...uninitialized value $mode..."

Lastest Forum Posts - July 24, 2015 - 1:48am
Hi

I used pt-online-schema-change to alter a table to ROW_FORMAT=COMPRESSED. Ultimately, it ran successfully, but it did exit with an error. Perhaps this was because it could not connect to the slave server. (Note: the ALTER TABLE was replicated to the slave without any problems.)

I will probably use pt-online-schema-change in the future so I would like to eliminate this potential bug or, if the fault is mine, get some confirmation so I can RTFM again and do it correctly.

Here's what I ran:

ryant@wgcptdb3:~$ time pt-online-schema-change --recursion-method processlist --alter-foreign-keys-method auto --execute --alter "ROW_FORMAT=COMPRESSED" D=wfs,t=gateway_audit_trail,h=localhost,u=xxx,p=xxx
Cannot connect to D=wfs,h=SLAVEIP,p=...,u=xxx
No slaves found. See --recursion-method if host YYYYY has slaves.
Not checking slave lag because no slaves were found and --check-slave-lag was not specified.
Operation, tries, wait:
copy_rows, 10, 0.25
create_triggers, 10, 1
drop_triggers, 10, 1
swap_tables, 10, 1
update_foreign_keys, 10, 1
Child tables:
`wfs`.`auth_audit_trail` (approx. 428865 rows)
`wfs`.`product_audit_trail` (approx. 1268930 rows)
Will automatically choose the method to update foreign keys.
Altering `wfs`.`gateway_audit_trail`...
Creating new table...
Created new table wfs._gateway_audit_trail_new OK.
Altering new table...
Altered `wfs`.`_gateway_audit_trail_new` OK.
2015-07-22T22:53:16 Creating triggers...
2015-07-22T22:53:17 Created triggers OK.
2015-07-22T22:53:17 Copying approximately 9862776 rows...
Copying `wfs`.`gateway_audit_trail`: 0% 01:35:09 remain
Copying `wfs`.`gateway_audit_trail`: 0% 01:42:34 remain
Copying `wfs`.`gateway_audit_trail`: 1% 01:48:35 remain
Copying `wfs`.`gateway_audit_trail`: 1% 01:51:39 remain
Copying `wfs`.`gateway_audit_trail`: 2% 01:57:06 remain
Copying `wfs`.`gateway_audit_trail`: 2% 01:58:26 remain
...
...
...
Copying `wfs`.`gateway_audit_trail`: 99% 00:37 remain
Copying `wfs`.`gateway_audit_trail`: 99% 00:06 remain
2015-07-23T04:11:35 Copied rows OK.
2015-07-23T04:11:35 Max rows for the rebuild_constraints method: 1316
Determining the method to update foreign keys...
2015-07-23T04:11:35 `wfs`.`auth_audit_trail`: too many rows: 428945; must use drop_swap
2015-07-23T04:11:35 Drop-swapping tables...
2015-07-23T04:12:09 Dropped and swapped tables OK.
Not dropping old table because --no-drop-old-table was specified.
2015-07-23T04:12:09 Dropping triggers...
2015-07-23T04:12:09 Dropped triggers OK.
Successfully altered `wfs`.`gateway_audit_trail`.
# Exiting on SIGPIPE.
(in cleanup) Use of uninitialized value $mode in numeric eq (==) at /usr/bin/pt-online-schema-change line 7845.

real 318m57.881s
user 0m57.836s
sys 0m6.304s



Versions:

ryant@wgcptdb3:~$ pt-online-schema-change --version
pt-online-schema-change 2.2.14


ryant@wgcptdb3:~$ mysqladmin version
mysqladmin Ver 8.42 Distrib 5.5.34, for debian-linux-gnu on x86_64
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Server version 5.5.34-0ubuntu0.12.04.1-log
Protocol version 10
Connection Localhost via UNIX socket
UNIX socket /var/run/mysqld/mysqld.sock
Uptime: 637 days 3 hours 54 min 4 sec


Thanks,
Ryan
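A hedged workaround for the slave-connection error above, assuming you do not need the tool to throttle on slave lag: --recursion-method=none disables slave discovery altogether, so no connection to the slave is attempted (the DSN mirrors the one in the post, credentials masked):

pt-online-schema-change --recursion-method=none --alter-foreign-keys-method auto --execute --alter "ROW_FORMAT=COMPRESSED" D=wfs,t=gateway_audit_trail,h=localhost,u=xxx,p=xxx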

Bootstrapping behavior with pc.wait_prim=false ?

Lastest Forum Posts - July 23, 2015 - 9:30am
I have a quick question about the pc.wait_prim=false behavior.

According to:
https://bugs.launchpad.net/percona-x...r/+bug/1413258

There's a bug that was verified that you can't use pc.wait_prim=false as it just hangs.

If I'm using this option, mysqld should fire up so that I can connect to it and run 'SET GLOBAL wsrep_provider_options="pc.bootstrap=1";' if I'm using the latest code, correct?


I ask because what's currently in the Percona repos doesn't allow it for 5.6, though it does at least allow you to kill the daemon to make it shut down gracefully, at least on RHEL 6.5.

Any help appreciated..
-b


Steps I used to test:

sudo yum install http://pkgs.repoforge.org/socat/soca....rf.x86_64.rpm
sudo yum install http://www.percona.com/downloads/per...1-3.noarch.rpm
sudo yum install Percona-XtraDB-Cluster-56.x86_64


Config file - the bare minimum to get this to start (DNS is valid and resolves in my env):

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0

innodb_autoinc_lock_mode=2

#==============================
# Galera
#==============================
# Automatically adjust auto_increment_increment and auto_increment_offset system variables when cluster membership changes.
wsrep_auto_increment_control = on

# This variable is used to control if Foreign Key checking is done for applier threads.
# wsrep_slave_FK_checks

# Enable debug log output
wsrep_debug = off

# An option to explicitly specify the network address of the node if autoguessing for some reason does not produce
# desirable results (multiple network interfaces, NAT, etc.) If not explicitly overridden by
# wsrep_sst_receive_address, the <address> part will be used to listen for SST (the port is still SST method-
# dependent). And the whole <address>[:port] string will be passed to wsrep provider to be used as a base address
# in its communications.
# wsrep_node_address

# A name (given in wsrep_node_name) of the server that should be used as a source for state transfer.
# If not specified, Galera will choose the most appropriate one.
# wsrep_sst_donor

# A string of provider options passed directly to provider.
# See http://www.codership.com/wiki/doku.p...parameters_0.8
wsrep_provider_options = gmcast.listen_addr=tcp://0.0.0.0:13306;evs.keepalive_period=PT3S;evs.inactive_check_period=PT10S;evs.suspect_timeout=PT30S;evs.inactive_timeout=PT1M;evs.install_timeout=PT1M;ist.recv_addr=tcp://springsing.test:13307

# This variable can be used to reject queries for that node.
wsrep_reject_queries = NONE

# This variable is used to control if Unique Key checking is done for applier threads.
# wsrep_slave_UK_checks

# Logical node name - for convenience. (Defaults to hostname)
# wsrep_node_name

# Maximum allowed writeset size. Currently it limits supported size of transaction and LOAD DATA statement.
wsrep_max_ws_size = 1073741824

# A string with authentication information for state snapshot transfer. It depends on the state transfer method.
# For mysqldump state transfer it should be <user>:<password> where user has root privileges on this server.
# For rsync method it is ignored.
wsrep_sst_auth = sstauth:sstauth

# When enabled this option will use new, transparent handling of preordered replication events (like replication from traditional master).
wsrep_preordered = off

# Logical cluster name. If a node tries to connect to a cluster with a different name, connection fails
wsrep_cluster_name = tester

# A path to wsrep provider to load. If not specified, all calls to wsrep provider will be bypassed.
wsrep_provider = /usr/lib64/libgalera_smm.so

# This variable exists for the sole purpose of notifying joining node about state transfer completion.
# See http://www.codership.com/wiki/doku.p...pshot_transfer
# wsrep_start_position

# If an autocommit query fails due to cluster-wide conflict we can retry it without returning error to client.
# This sets how many times to retry.
wsrep_retry_autocommit = 1

# This variable is used to send the DBUG option to the wsrep provider.
wsrep_dbug_option = off

# This variable controls how many replication events will be grouped together. This implementation is still experimental.
wsrep_mysql_replication_bundle = 0

# The address at which this node expects to receive state transfer. Depends on state transfer method. E.g. for
# mysqldump state transfer it is the address and the port on which this server listens.
wsrep_sst_receive_address = springsing.test:13308

# In some cases master may apply event faster than a slave, which can cause master and slave being out-of-sync for a brief moment.
# When this variable is set to ON slave will wait till that event is applied before doing any other queries.
wsrep_causal_reads = off

# Address to connect to cluster. Provider specific. Galera takes addresses in URL format:
# <backend schema>://<cluster address>[?option1=value1[&option2=value2]]
# e.g. gcomm://192.168.0.1:4567?gmcast.listen_addr=0.0.0.0:5678
wsrep_cluster_address = gcomm://springsing.test:13306,toldcontrolled.test:13306?pc.wait_prim=false

# Maximum number of rows allowed in the writeset. Currently it limits supported size of transaction and LOAD DATA statement.
wsrep_max_ws_rows = 131072

# How many threads to use for applying slave writesets.
wsrep_slave_threads = 4

# Generate primary keys for rows without ones for the purpose of certification. This is required for
# parallel applying. We don't recommend using tables without primary keys.
wsrep_certify_nonPK = on

# A command to run when cluster membership or state of this node changes.
# wsrep_notify_cmd

# This variable is used to control whether sole cluster conflicts should be logged. When enabled details of conflicting InnoDB lock will be logged.
wsrep_log_conflicts = off

# Online schema upgrade method (MySQL >= 5.5.17).
# See http://www.codership.com/wiki/doku.p...schema_upgrade
wsrep_OSU_method = TOI

# Use wsrep replication. When turned off, no changes made in this session will be replicated.
wsrep_on = on

# When this variable is enabled SST donor node will not accept incoming queries, instead it will reject queries with UNKNOWN COMMAND error code.
# This can be used to signal load-balancer that the node isn’t available.
wsrep_sst_donor_rejects_queries = off

# Address at which server expects client connections. Intended for integration with load balancers.
# Not used for now.
# wsrep_node_incoming_address

# A method to use for state snapshot transfer. wsrep_sst_<wsrep_sst_method> command will be called with
# arguments from http://www.codership.com/wiki/doku.p...pshot_transfer
wsrep_sst_method = xtrabackup-v2




[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
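
For reference, a minimal sketch of the manual bootstrap step described above, assuming mysqld came up reachable in a non-primary state:

SHOW STATUS LIKE 'wsrep_cluster_status';              -- expect non-Primary before bootstrapping
SET GLOBAL wsrep_provider_options = 'pc.bootstrap=1';
SHOW STATUS LIKE 'wsrep_cluster_status';              -- should now report Primary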




The Q&A: Creating best-in-class backup solutions for your MySQL environment

Latest MySQL Performance Blog posts - July 23, 2015 - 6:55am

Thank you for attending my July 15 webinar, “Creating Best in Class Backup solutions for your MySQL environment.” Due to the amount of content we discussed and some minor technical difficulties faced near the end of the webinar, we have decided to cover the final two slides of the presentation, along with the questions asked by attendees, via this blog post.

The slides are available for download. And you can watch the webinar in its entirety here.

The final two slides were about our tips for having a good backup and recovery strategy. Let's go through the bullet points along with what would have been their explanation during the webinar:

  • Use the three types of backups
    • Binary for full restores, new slaves
      • Binary backups are easy to restore and take the least amount of time to restore. The mean time to recover is mostly bound by the time needed to transfer the backup to the appropriate target server.
    • Logical for partial restores
      • Logical backups, especially when done table-wise, come in handy when you want to restore one or a few smaller tables.
    • Binlog for point in time recovery
      • Very often the need is for point-in-time recovery. A full backup of any type (logical or binary) is only half the story; we still need the DML statements processed on the server to bring it to the latest state, and that's where binary log (binlog) backups come into the picture (see the sketch after this list).
  • Store on more than one server and off-site
    • Store your backups in more than one location: what if the backup server goes down? Offsite storage like Amazon S3 and Glacier, with weekly or monthly backup retention, can be a cheaper option.
  • Test your backups!!!!
    • Testing your backups is very important; it's always good to know that backups are recoverable and not corrupted. Spin off an EC2 instance if you want, copy and restore the backup there, and roll forward a day's worth of binlogs just to be sure.
  • Document restore procedures, script them and test them!!!
    • Also, when you test your backups, make sure to document the steps to restore the backup, to avoid last-minute hassle over which commands to use.
  • If taking backups from a slave, run pt-table-checksum
    • Backups are mostly taken from slaves; as such, make sure to checksum the data regularly. You don't want to back up inconsistent data.
  • Configuration files, scripts
    • Data is not the only thing you should be backing up: back up your config files, scripts and user grants to a secure location.
  • Do you need to back up everything every day?
    • For very large instances a logical backup is tough. In such cases evaluate your backup needs: do you want to back up all the tables? Most of the time the smaller tables are the more important ones and are the ones needing partial restores, so back up only those.
  • Hardlinking backups can save a lot of disk space in some circumstances
    • Some schemas contain only a few high-activity tables; the rest are probably updated once a week or by an archiver job that runs monthly. Make sure to hardlink the unchanged files with the previous backup; it can save a good amount of space in such scenarios.
  • Monitor your backups
    • Lastly, monitor your backups. You do not want to discover that your backups had been failing the whole time. Even a simple email notification from your backup scripts can help reduce the chance of failure.
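
A minimal point-in-time recovery sketch for the binlog bullet above, assuming a restored full backup plus the binlog files written after it (file names, start position and stop time are made up):

mysqlbinlog --start-position=1234 --stop-datetime="2015-07-20 10:00:00" mysql-bin.000042 mysql-bin.000043 | mysql -u root -p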

Now let's try to answer some of the questions asked during the webinar:

Q: --use-memory=2G, is that pretty standard? If we have more memory, should we use a higher value?
Usually we would evaluate the value based on the size of xtrabackup_logfile (the amount of transactions to apply). If you have more free memory, feel free to provide it to --use-memory; you don't want memory to be a bottleneck in the restore process.

Q: Which is the best backup option for an 8TB DB?
It would usually depend on the type of data you have and the business requirements for the backups. For example, a full xtrabackup plus incrementals on the weekdays would be a good idea (a sketch follows below). The time required for backups plays an important role here: backing up to a slow NAS share can be time-consuming, and it will make xtrabackup record a lot of transactions, which will further increase your restore time. Also look into backing up very important small and medium-size tables via logical backups.
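
A minimal sketch of that weekly-full-plus-incremental cycle with innobackupex (directories and timestamps are made up):

innobackupex /backups/full                                                                         # Sunday: full backup
innobackupex --incremental /backups/inc1 --incremental-basedir=/backups/full/2015-07-26_01-00-00   # Monday: delta since the full
innobackupex --incremental /backups/inc2 --incremental-basedir=/backups/inc1/2015-07-27_01-00-00   # Tuesday: delta since inc1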

Q: I'm not sure if this will be covered, but if you have a 3-node master-master-master cluster using haproxy, is it recommended to run the backup from the haproxy server or directly on a specific server? Would it be wise to have a 4th server which would be part of the cluster, but not read from, to perform the backups?
I am assuming this is a Galera cluster setup, in which case you can take backups locally on any of the nodes using tools like Percona XtraBackup; however, the best solution would be spinning off a slave from one of the nodes and running backups there.

Q: With Mydumper, can we stream the data over SSH or netcat to another server? Or would one have to use something like NFS? I've used mysqldump and piped it over netcat before... curious if we can do that with Mydumper?
Mydumper is similar in nature to other MySQL client tools: it can be run remotely (--host option), which means you can run mydumper from another server to back up the master or a slave (a sketch follows below). Mydumper's output can be piped as well.
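
A minimal sketch of that remote invocation (host, credentials and output path are assumptions):

mydumper --host=master.example.com --user=backup --password=secret --threads=4 --compress --outputdir=/backups/mydumper/2015-07-26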

Q: Is Mydumper still maintained? It hasn't had a release since March of last year.
Indeed, Max Bubenick from Percona is currently maintaining the project. He has actually added new features to the tool which make it more comprehensive and feature-rich. He is planning the next release soon; stay tuned for the blog post.

Q: Is MyDumper open source? Are prepare and restore the same?
Absolutely. Right now we need to download the source and compile it; however, very soon we will have packages built for it too. Prepare and restore are common terminologies in backup lingo: in the webinar, restore means copying the backup files back from their storage location to the destination location, whereas prepare means applying the transactions to the backup and making it ready to restore.

Q: Is binlog mirroring needed on Galera (PXC)?
It is a good idea to keep binlog mirroring. Even though IST and SST will do their job to join the node, the binlogs could play a role in case you wanted to roll forward a particular schema on a slave or QA instance.

Q: As we know, Percona XtraBackup takes full & incremental backups. Does MyDumper help in taking incremental backups?
At this moment we do not have the ability to take incremental backups with mydumper or with any other logical backup tool. However, weekly full backups (logical) and daily binlog backups can serve as the same strategy as other incremental backup solutions, plus they are easy to restore (a sketch of the binlog leg follows below).
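
A minimal sketch of the daily binlog leg, using the MySQL 5.6 mysqlbinlog remote/raw mode to stream binlogs continuously from the master (host, user and starting file are assumptions):

mysqlbinlog --read-from-remote-server --host=master.example.com --user=backup --password --raw --stop-never mysql-bin.000042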

Q: Is it possible to encrypt the output file? What would be the best methodology to back up data with a database size of 7 to 8 GB that increases 25% each day? What is the difference between innobackupex and mydumper?
Indeed it's possible to encrypt the backup files; as a matter of fact, we encrypt backup files with GPG keys before uploading them to offsite storage. The best method to back up a 7 to 8GB instance would be implementing all three types of backup we discussed in the webinar; your scenario requires planning for the future, so it's always best to have different solutions available as the data grows. Innobackupex is part of the Percona XtraBackup toolkit and is a script which does binary backups of databases; MyDumper, on the other hand, is a logical backup tool which creates backups as text files.

Q: How can I optimize a MySQL dump of a large database? The main bottleneck while dumping a large database is that if any table is found to be corrupted, the dump never gets past it by temporarily skipping the corrupted table. Can we back up a large database without a locking mechanism, i.e. does someone know how to make the backup without locking the tables? Are there any tools which would be faster for backup and restoration, or how can we use a MySQL dump to work around this kind of issue during crash recovery?
Mysqldump is a logical backup tool, and as such it executes full table scans to back up the tables and write them down in the output file; hence it's very difficult to improve the performance of mysqldump (query-wise). Assuming you're referring to corruption of MyISAM tables, it is highly recommended you repair them before backing up; also, to make sure mysqldump doesn't fail due to an error on such a corrupt table, try its --force option. If you're using MyISAM tables, the first recommendation would be to switch to InnoDB: with most tables on InnoDB, locking can be greatly reduced, actually to the point where it is negligible; look into --single-transaction (a sketch follows below). Faster backup and recovery can be achieved with binary backups; look into the Percona XtraBackup tool, we have comprehensive documentation to get you started.
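
A minimal sketch of such a non-locking InnoDB dump, with --force so one corrupt table does not abort the whole run (the database name is made up):

mysqldump --single-transaction --force --routines --triggers mydb > mydb.sql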

Hope this was a good webinar and we have answered most of your questions. Stay tuned for more such webinars from Percona.

The post The Q&A: Creating best-in-class backup solutions for your MySQL environment appeared first on MySQL Performance Blog.

missing Percona-server-devel for version 5.6 in debian apt

Lastest Forum Posts - July 23, 2015 - 4:47am
Hi

I have been struggling to install the devel package but with no luck. Kindly advise.

Logs

root@instance-5:/usr/lib/x86_64-linux-gnu# apt-get install Percona-Server-devel*
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package Percona-Server-devel*
E: Couldn't find any package by regex 'Percona-Server-devel*'
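
One thing worth checking: package names in the Debian/Ubuntu repositories are lower-case, so a wildcard on Percona-Server-devel matches nothing. A hedged way to locate the right package (the -dev package name below is an assumption for the 5.6-era repo):

apt-cache search percona | grep -i dev
sudo apt-get install libperconaserverclient18.1-dev   # assumed name of the 5.6 devel package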

MySQL QA Episode 8: Reducing Testcases for Engineers: tuning reducer.sh

Latest MySQL Performance Blog posts - July 23, 2015 - 3:00am

Welcome to MySQL QA Episode 8: Reducing Testcases for Engineers: tuning reducer.sh

  1. Advanced configurable variables & their default/vanilla reducer.sh settings
    1. FORCE_SKIPV
    2. FORCE_SPORADIC
    3. TIMEOUT_COMMAND & TIMEOUT_CHECK
    4. MULTI_THREADS
    5. MULTI_THREADS_INCREASE
    6. QUERYTIMEOUT
    7. STAGE1_LINES
    8. SKIPSTAGE
    9. FORCE_KILL
  2. Some examples
    1. FORCE_SKIPV/FORCE_SPORADIC
    2. TIMEOUT_COMMAND/TIMEOUT_CHECK

Full-screen viewing @ 720p resolution recommended.

The post MySQL QA Episode 8: Reducing Testcases for Engineers: tuning reducer.sh appeared first on MySQL Performance Blog.

Index not working for date field used in between query

Lastest Forum Posts - July 22, 2015 - 10:51pm
Hello,

We have MySQL 5.6.21 Enterprise Edition.

For the query below we added an index on the createDate field, but even after adding the index the query plan shows that it is not used.

Query: select iName from db1.User where iName like '0012%' and createDate between '2000-05-11 00:00:00' and '2015-07-01 23:59:59';

Added index on: createDate

Table structure:
CREATE TABLE `User` (
`uuid_` varchar(75) DEFAULT NULL,
`userId` bigint(20) NOT NULL,
`companyId` bigint(20) DEFAULT NULL,
`createDate` datetime DEFAULT NULL,
`modifiedDate` datetime DEFAULT NULL,
`defaultUser` tinyint(4) DEFAULT NULL,
`contactId` bigint(20) DEFAULT NULL,
`password_` varchar(75) DEFAULT NULL,
`passwordEncrypted` tinyint(4) DEFAULT NULL,
`passwordReset` tinyint(4) DEFAULT NULL,
`passwordModifiedDate` datetime DEFAULT NULL,
`digest` varchar(255) DEFAULT NULL,
`reminderQueryQuestion` varchar(75) DEFAULT NULL,
`reminderQueryAnswer` varchar(75) DEFAULT NULL,
`graceLoginCount` int(11) DEFAULT NULL,
`iName` varchar(75) DEFAULT NULL,
`emailAddress` varchar(75) DEFAULT NULL,
`facebookId` bigint(20) DEFAULT NULL,
`openId` varchar(1024) DEFAULT NULL,
`portraitId` bigint(20) DEFAULT NULL,
`languageId` varchar(75) DEFAULT NULL,
`timeZoneId` varchar(75) DEFAULT NULL,
`greeting` varchar(255) DEFAULT NULL,
`comments` longtext,
`firstName` varchar(75) DEFAULT NULL,
`middleName` varchar(75) DEFAULT NULL,
`lastName` varchar(75) DEFAULT NULL,
`jobTitle` varchar(100) DEFAULT NULL,
`loginDate` datetime DEFAULT NULL,
`loginIP` varchar(75) DEFAULT NULL,
`lastLoginDate` datetime DEFAULT NULL,
`lastLoginIP` varchar(75) DEFAULT NULL,
`lastFailedLoginDate` datetime DEFAULT NULL,
`failedLoginAttempts` int(11) DEFAULT NULL,
`lockout` tinyint(4) DEFAULT NULL,
`lockoutDate` datetime DEFAULT NULL,
`agreedToTermsOfUse` tinyint(4) DEFAULT NULL,
`active_` tinyint(4) DEFAULT NULL,
PRIMARY KEY (`userId`),
UNIQUE KEY `IX_C5806019` (`companyId`,`iName`),
UNIQUE KEY `IX_9782AD88` (`companyId`,`userId`),
UNIQUE KEY `IX_5ADBE171` (`contactId`),
KEY `IX_3A1E834E` (`companyId`),
KEY `IX_5204C37B` (`companyId`,`active_`),
KEY `IX_6EF03E4E` (`companyId`,`defaultUser`),
KEY `IX_1D731F03` (`companyId`,`facebookId`),
KEY `IX_89509087` (`companyId`,`openId`(767)),
KEY `IX_762F63C6` (`emailAddress`),
KEY `IX_A18034A4` (`portraitId`),
KEY `IX_E0422BDA` (`uuid_`),
KEY `IX_CDATE` (`createDate`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1

QUERY PLAN after adding index:

*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: User_
type: ALL
possible_keys: IX_CDATE
key: NULL
key_len: NULL
ref: NULL
rows: 1118483
Extra: Using where


I believe this was a bug in earlier versions that was fixed in 5.5 and above. Please suggest.
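
One hedged line of investigation: the 15-year BETWEEN range likely matches most of the ~1.1M rows, so the optimizer estimates a full scan as cheaper than secondary-index lookups. A sketch of things to try (the composite index name is made up):

-- how selective is the range really?
SELECT COUNT(*) FROM db1.User WHERE createDate BETWEEN '2000-05-11 00:00:00' AND '2015-07-01 23:59:59';

-- a composite index covering both predicates; the leading iName column can use
-- the '0012%' prefix, and since both columns are in the index the query can be
-- resolved from the index alone:
ALTER TABLE db1.User ADD KEY IX_INAME_CDATE (iName, createDate);

-- or test the planner's estimate by forcing the existing index:
SELECT iName FROM db1.User FORCE INDEX (IX_CDATE) WHERE iName LIKE '0012%' AND createDate BETWEEN '2000-05-11 00:00:00' AND '2015-07-01 23:59:59';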
