Feed aggregator

Percona 5.6.21 crashing frequently

Latest Forum Posts - March 16, 2015 - 1:56pm
Repeatedly on my slave server, I am getting assertion failures after 7-8 hrs because a semaphore wait has lasted for more than 600 seconds.
General load on the server: jobs running on this server create temporary tables.
MySQL Version - 5.6.21-70.1
OS - Linux

----------------------------
END OF INNODB MONITOR OUTPUT
============================
InnoDB: ###### Diagnostic info printed to the standard error stream
InnoDB: Error: semaphore wait has lasted > 600 seconds
InnoDB: We intentionally crash the server, because it appears to be hung.
2015-03-16 08:24:21 7fd529214700 InnoDB: Assertion failure in thread 140553494808320 in file srv0srv.cc line 2137
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.6/...-recovery.html
InnoDB: about forcing recovery.
15:24:21 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona Server better by reporting any
bugs at http://bugs.percona.com/

key_buffer_size=8388608
read_buffer_size=16777216
max_used_connections=60
max_threads=602
thread_count=31
connection_count=29
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 19743388 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x40000
/usr/sbin/mysqld(my_print_stacktrace+0x2c)[0x8bdcec]
/usr/sbin/mysqld(handle_fatal_signal+0x461)[0x6512a1]
/lib64/libpthread.so.0[0x3bdc60f710]
/lib64/libc.so.6(gsignal+0x35)[0x3bdbe32625]
/lib64/libc.so.6(abort+0x175)[0x3bdbe33e05]
/usr/sbin/mysqld[0x999852]
/lib64/libpthread.so.0[0x3bdc6079d1]
/lib64/libc.so.6(clone+0x6d)[0x3bdbee88fd]
You may download the Percona Server operations manual by visiting
http://www.percona.com/software/percona-server/. You may find information
in the manual which will help you identify the cause of the crash.
150316 08:30:00 mysqld_safe Number of processes running now: 0
150316 08:30:00 mysqld_safe mysqld restarted
2015-03-16 08:30:00 0 [Warning] Using unique option prefix character_set_client instead of character-set-client-handshake is deprecated and will be removed in a future release. Please use the full name instead.
2015-03-16 08:30:00 0 [Warning] /usr/sbin/mysqld: ignoring option '--character-set-client-handshake' due to invalid value 'utf8'
2015-03-16 08:30:00 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2015-03-16 08:30:00 31543 [Note] Plugin 'FEDERATED' is disabled.
2015-03-16 08:30:00 7f0306f237e0 InnoDB: Warning: Using innodb_additional_mem_pool_size is DEPRECATED. This option may be removed in future releases, together with the option innodb_use_sys_malloc and with the InnoDB's internal memory allocator.
2015-03-16 08:30:00 31543 [Note] InnoDB: Using atomics to ref count buffer pool pages
2015-03-16 08:30:00 31543 [Note] InnoDB: The InnoDB memory heap is disabled
2015-03-16 08:30:00 31543 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2015-03-16 08:30:00 31543 [Note] InnoDB: Memory barrier is not used
2015-03-16 08:30:00 31543 [Note] InnoDB: Compressed tables use zlib 1.2.3
2015-03-16 08:30:00 31543 [Note] InnoDB: Using Linux native AIO
2015-03-16 08:30:00 31543 [Note] InnoDB: Using CPU crc32 instructions
2015-03-16 08:30:00 31543 [Note] InnoDB: Initializing buffer pool, size = 96.0G
2015-03-16 08:30:06 31543 [Note] InnoDB: Completed initialization of buffer pool
2015-03-16 08:30:13 31543 [Note] InnoDB: Highest supported file format is Barracuda.
2015-03-16 08:30:13 31543 [Note] InnoDB: Log scan progressed past the checkpoint lsn 27626464771723
2015-03-16 08:30:13 31543 [Note] InnoDB: Database was not shutdown normally!
2015-03-16 08:30:13 31543 [Note] InnoDB: Starting crash recovery.
2015-03-16 08:30:13 31543 [Note] InnoDB: Reading tablespace information from the .ibd files...
2015-03-16 08:30:18 31543 [Note] InnoDB: Restoring possible half-written data pages
2015-03-16 08:30:18 31543 [Note] InnoDB: from the doublewrite buffer...
InnoDB: Doing recovery: scanned up to log sequence number 27626470014464
InnoDB: Doing recovery: scanned up to log sequence number 27626475257344
InnoDB: Doing recovery: scanned up to log sequence number 27626480500224
InnoDB: Doing recovery: scanned up to log sequence number 27626485743104
InnoDB: Doing recovery: scanned up to log sequence number 27626490985984
InnoDB: Doing recovery: scanned up to log sequence number 27626496228864
InnoDB: Doing recovery: scanned up to log sequence number 27626501471744
InnoDB: Doing recovery: scanned up to log sequence number 27626506714624
InnoDB: Doing recovery: scanned up to log sequence number 27626511957504
InnoDB: Doing recovery: scanned up to log sequence number 27626517200384
InnoDB: Doing recovery: scanned up to log sequence number 27626522443264
InnoDB: Doing recovery: scanned up to log sequence number 27626527686144
InnoDB: Doing recovery: scanned up to log sequence number 27626532929024
InnoDB: Doing recovery: scanned up to log sequence number 27626538171904
InnoDB: Doing recovery: scanned up to log sequence number 27626543414784
InnoDB: Doing recovery: scanned up to log sequence number 27626548657664
InnoDB: Doing recovery: scanned up to log sequence number 27626553900544
InnoDB: Doing recovery: scanned up to log sequence number 27626559143424
InnoDB: Doing recovery: scanned up to log sequence number 27626564386304
InnoDB: Doing recovery: scanned up to log sequence number 27626569629184
InnoDB: Doing recovery: scanned up to log sequence number 27626571649635
InnoDB: 2 transaction(s) which must be rolled back or cleaned up
InnoDB: in total 7 row operations to undo
InnoDB: Trx id counter is 14572560384
2015-03-16 08:31:37 31543 [Note] InnoDB: Starting an apply batch of log records to the database...
InnoDB: Progress in percent: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
InnoDB: Apply batch completed

Vagrant+Vbox, first steps to XtraDB. Bootstrap first node shows an error

Latest Forum Posts - March 16, 2015 - 4:51am
Hi

I've deployed a small test environment with Vagrant and VirtualBox, with 3 nodes (Ubuntu LTS).

IPs are on a private network, one per node:

192.168.33.50
192.168.33.51
192.168.33.52

1 CPU and 1 GB RAM per node as well
No firewall inside Ubuntu
Each node can ping the others, and all know the hostnames of the others

I've read some guides about XtraDB Cluster, and finally I chose:

http://redcrackle.com/blog/how-set-p...cluster-ubuntu

but when I want to restart the first node with bootstrap, it finishes with an error:

2015-03-16 12:13:59 0 [Warning] Using unique option prefix key_buffer instead of key_buffer_size is deprecated and will be removed in a future release. Please use the full name instead.
2015-03-16 12:13:59 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2015-03-16 12:13:59 10647 [Note] WSREP: Read nil XID from storage engines, skipping position init
2015-03-16 12:13:59 10647 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib/libgalera_smm.so'
2015-03-16 12:13:59 10647 [Note] WSREP: wsrep_load(): Galera 3.9(r93aca2d) by Codership Oy <info@codership.com> loaded successfully.
2015-03-16 12:13:59 10647 [Note] WSREP: CRC-32C: using "slicing-by-8" algorithm.
2015-03-16 12:13:59 10647 [Note] WSREP: Found saved state: 00000000-0000-0000-0000-000000000000:-1
2015-03-16 12:13:59 10647 [Note] WSREP: Passing config to GCS: base_dir = /var/lib/mysql/; base_host = 192.168.33.50; base_port = 4567; cert.log_conflicts = no; debug = no; evs.auto_evict = 0; evs.delay_margin = PT1S; evs.delayed_keep_period = PT30S; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.join_retrans_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 4; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.user_send_window = 2; evs.view_forget_timeout = PT24H; gcache.dir = /var/lib/mysql/; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /var/lib/mysql//galera.cache; gcache.page_size = 128M; gcache.size = 128M; gcs.fc_debug = 0; gcs.fc_factor = 1.0; gcs.fc_limit = 16; gcs.fc_master_slave = no; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = no; gmcast.segment = 0; gmcast.version = 0; pc.announce_timeout = PT3S; pc.checksum = false; pc.ignore_quorum = false; pc.ignore_sb = false
2015-03-16 12:13:59 10647 [Note] WSREP: Service thread queue flushed.
2015-03-16 12:13:59 10647 [Note] WSREP: Assign initial position for certification: -1, protocol version: -1
2015-03-16 12:13:59 10647 [Note] WSREP: wsrep_sst_grab()
2015-03-16 12:13:59 10647 [Note] WSREP: Start replication
2015-03-16 12:13:59 10647 [Note] WSREP: Setting initial position to 00000000-0000-0000-0000-000000000000:-1
2015-03-16 12:13:59 10647 [Note] WSREP: protonet asio version 0
2015-03-16 12:13:59 10647 [Note] WSREP: Using CRC-32C for message checksums.
2015-03-16 12:13:59 10647 [Note] WSREP: backend: asio
2015-03-16 12:13:59 10647 [Note] WSREP: restore pc from disk successfully
2015-03-16 12:13:59 10647 [Note] WSREP: GMCast version 0
2015-03-16 12:13:59 10647 [Note] WSREP: (673c2adf, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
2015-03-16 12:13:59 10647 [Note] WSREP: (673c2adf, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
2015-03-16 12:13:59 10647 [Note] WSREP: EVS version 0
2015-03-16 12:13:59 10647 [Note] WSREP: gcomm: bootstrapping new group 'PXC'
2015-03-16 12:13:59 10647 [Note] WSREP: start_prim is enabled, turn off pc_recovery
2015-03-16 12:13:59 10647 [Note] WSREP: Node 673c2adf state prim
2015-03-16 12:13:59 10647 [Note] WSREP: view(view_id(PRIM,673c2adf,3) memb {
673c2adf,0
} joined {
} left {
} partitioned {
})
2015-03-16 12:13:59 10647 [Note] WSREP: save pc into disk
2015-03-16 12:13:59 10647 [Note] WSREP: discarding pending addr without UUID: tcp://192.168.33.50:4567
2015-03-16 12:13:59 10647 [Note] WSREP: discarding pending addr proto entry 0x20e1d60
2015-03-16 12:13:59 10647 [Note] WSREP: discarding pending addr without UUID: tcp://192.168.33.51:4567
2015-03-16 12:13:59 10647 [Note] WSREP: discarding pending addr proto entry 0x20ea4f0
2015-03-16 12:13:59 10647 [Note] WSREP: discarding pending addr without UUID: tcp://192.168.33.52:4567
2015-03-16 12:13:59 10647 [Note] WSREP: discarding pending addr proto entry 0x20f2ce0
2015-03-16 12:13:59 10647 [Note] WSREP: clear restored view
2015-03-16 12:13:59 10647 [ERROR] WSREP: failed to open gcomm backend connection: 11: : 11 (Resource temporarily unavailable)
at gcs/src/gcs_gcomm.cpp:connect():253
2015-03-16 12:13:59 10647 [ERROR] WSREP: gcs/src/gcs_core.cpp:long int gcs_core_open(gcs_core_t*, const char*, const char*, bool)():206: Failed to open backend connection: -11 (Resource temporarily unavailable)
2015-03-16 12:13:59 10647 [ERROR] WSREP: gcs/src/gcs.cpp:long int gcs_open(gcs_conn_t*, const char*, const char*, bool)():1379: Failed to open channel 'PXC' at 'gcomm://192.168.33.50,192.168.33.51,192.168.33.52': -11 (Resource temporarily unavailable)
2015-03-16 12:13:59 10647 [ERROR] WSREP: gcs connect failed: Resource temporarily unavailable
2015-03-16 12:13:59 10647 [ERROR] WSREP: wsrep::connect() failed: 7
2015-03-16 12:13:59 10647 [ERROR] Aborting

2015-03-16 12:13:59 10647 [Note] WSREP: Service disconnected.
2015-03-16 12:14:00 10647 [Note] WSREP: Some threads may fail to exit.
2015-03-16 12:14:00 10647 [Note] Binlog end
2015-03-16 12:14:00 10647 [Note] /usr/sbin/mysqld: Shutdown complete

150316 12:14:00 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended


It's very hard for me to debug what is happening... any ideas?

Thanks!

Deep dive into MySQL’s innochecksum tool

Latest MySQL Performance Blog posts - March 16, 2015 - 12:00am

One of our Percona Support customers recently reported that Percona XtraBackup failed with a page corruption error on an InnoDB table. The customer thought it was a problem or bug in the Percona XtraBackup tool. After investigation we found that an InnoDB page was actually corrupted and that Percona XtraBackup caught the error as expected, hence the backup job failed.

I thought this would be an interesting topic and worthy of a blog post. In this article I will describe the innochecksum tool, when and how to use it, and the possible fixes if an InnoDB table suffers from page corruption.

The innochecksum tool is an offline tool that prints checksums for InnoDB files. This tool reads an InnoDB tablespace file, calculates the checksum for each page and reports mismatches, if any. A checksum mismatch is an indication of corrupt pages. Being an offline tool, innochecksum can't be used on a tablespace file that a MySQL server is currently using, hence you need to shut down the server prior to running the innochecksum tool. If you run innochecksum against a running MySQL server, there is a chance that it crashes or reports a bad checksum for a good page, resulting in false positives: when a tablespace file is open by mysqld, pages may be dirty and not yet checksummed by the InnoDB storage engine itself.

The point: don’t run innochecksum against a running server.

InnoDB corruption can be caused by many factors (e.g. power loss, faulty hardware, bugs). When reading pages from a tablespace on disk, the InnoDB storage engine compares the calculated checksum of each page to the checksum stored in the page. If InnoDB finds a page checksum mismatch, it will force down the MySQL server.

Let me show you a page corruption error identified by Percona XtraBackup during a backup run, after which the backup failed.

[01] xtrabackup: Database page corruption detected at page 25413, retrying...
[01] xtrabackup: Database page corruption detected at page 25413, retrying...
[01] xtrabackup: Database page corruption detected at page 25413, retrying...

First, we need to identify whether the tablespace is really corrupted for that particular table. I do that with the help of the innochecksum utility, as shown below. As I mentioned earlier, make sure to shut down MySQL before using the innochecksum tool.

$ innochecksum -p 25413 /path/to/datadir/database_name/table_name.ibd

I passed the -p (page) flag so that innochecksum only checks the specific page that Percona XtraBackup reported as corrupt. Without any options, the innochecksum tool checks the entire tablespace for corruption, which requires additional server downtime. The innochecksum tool also supports the -d (debug) option to print the checksum for each page and the -v (verbose) option to print a progress indicator. You can find more details in the manual. If the tool reports page corruption, then the table really is corrupted, as below.

page 25413 invalid (fails log sequence number check)
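
As a rough sketch of the commands above (paths and the page number mirror the example; the init-script path and exact output vary by system and innochecksum version):

# innochecksum must not run against a live server, so stop MySQL first
$ /etc/init.d/mysql stop
# Print checksum details for the one suspect page
$ innochecksum -d -p 25413 /path/to/datadir/database_name/table_name.ibd
# Or scan the whole tablespace with a progress indicator
$ innochecksum -v /path/to/datadir/database_name/table_name.ibd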

In order to fix this issue, the first thing you should try is to mysqldump the corrupted table. This test works because the mysqldump utility doesn't touch the indexes: the indexes are created only after all rows are inserted, when the dump is reloaded.
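
A minimal sketch of that test (database, table and dump-file names are placeholders):

# Dump only the suspect table; success means the clustered index is readable
$ mysqldump database_name table_name > /tmp/table_name.sql
# Reloading the dump drops and recreates the table, rebuilding all indexes
$ mysql database_name < /tmp/table_name.sql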

If mysqldump succeeds, then the problem is confined to the secondary indexes of that tablespace. I would suggest the following options to fix the corruption (example commands follow the list).

— Execute OPTIMIZE TABLE on the table, which rebuilds the indexes. Prior to MySQL 5.6.17 the table is locked for the duration of the operation; since MySQL 5.6.17, OPTIMIZE TABLE is an online operation.
— Rebuild the table with the pt-online-schema-change tool from Percona Toolkit. This gives the same result as OPTIMIZE TABLE in a non-blocking way, as pt-online-schema-change is an online schema change tool.
— Drop all secondary indexes and then recreate them. The table will be locked during that operation for writes only. Again, you can use the pt-online-schema-change tool for this purpose without sacrificing reads or writes on the table while the indexes are dropped and recreated.
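
Hedged sketches of the three options above (database, table, index and column names are placeholders; the pt-online-schema-change invocation follows its documented DSN syntax):

# Option 1: rebuild the table and all of its indexes in place
$ mysql -e "OPTIMIZE TABLE database_name.table_name;"
# Option 2: the same rebuild, done online with Percona Toolkit
$ pt-online-schema-change --alter "ENGINE=InnoDB" D=database_name,t=table_name --execute
# Option 3: drop and recreate a corrupt secondary index
$ mysql -e "ALTER TABLE database_name.table_name DROP INDEX idx_name, ADD INDEX idx_name (col_name);"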

Finally, I would suggest re-running the innochecksum tool to verify the table's integrity, to make sure there is no remaining page corruption. In this case we found that the table was indeed corrupted; fixing the corruption by dumping and reloading the table solved the problem, and Percona XtraBackup ran fine during the next run.

It is possible for mysqldump to crash a MySQL server on a corrupted table. Fortunately, Percona Server contains the innodb_corrupt_table_action variable, which you can enable. The variable is dynamic, which means enabling it doesn't require a MySQL server restart. Prior to Percona Server 5.6, innodb_corrupt_table_action was known as innodb_pass_corrupt_table. Once you enable this option, you can try mysqldump again. If you are using Oracle MySQL, I would suggest trying innodb_force_recovery in case mysqldump fails to dump the table contents.
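
A minimal sketch, assuming the value names documented for Percona Server 5.6 (verify against your version's manual):

# Tell the server to salvage what it can instead of crashing on a corrupt page
$ mysql -e "SET GLOBAL innodb_corrupt_table_action = 'salvage';"
# Then retry the dump
$ mysqldump database_name table_name > /tmp/table_name.sql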

As a side note, if a backup with Percona XtraBackup completes without any errors, your InnoDB tables don't have any page checksum mismatches or corruption: Percona XtraBackup validates page checksums and, in case of errors, it logs the error and exits, as I mentioned above.

There is also a modified version of innochecksum, made available by Facebook's Mark Callaghan, which can be found in this bug report and which provides extra stats on tablespace undo blocks. There is another tool, by Google's Jeremy Cole, known as the InnoDB Ruby Tool, for examining the internals of InnoDB.

LIMITATIONS:

  • Innochecksum is an offline InnoDB checksum tool, which means you must stop the MySQL server first; otherwise (since MySQL 5.7) it produces an "Unable to lock file" error.
  • Old versions of innochecksum only support files up to 2GB in size. Since MySQL 5.6, innochecksum supports files larger than 2GB.
  • The Percona Server variable innodb_corrupt_table_action is supported only for tables that live in their own tablespace (i.e. innodb_file_per_table).
  • If you are using compressed tables (ROW_FORMAT=COMPRESSED), you must use innochecksum from MySQL 5.7.2 or greater, as earlier versions of innochecksum don't support compressed tables. Check this bug for details.

New Features for the innochecksum tool from MySQL 5.7:

  • As I mentioned above, since MySQL 5.7 innochecksum supports file sizes greater than 2GB.
  • Since MySQL 5.7 you can log the output with the --log option (see the usage sketch after this list).
  • The --page-type-summary option was added for page type summaries.
  • MySQL 5.7 also includes another nice option, --page-type-dump, which dumps the details of each page to standard output (STDOUT) or standard error (STDERR).
  • Since MySQL 5.7, innochecksum can be executed on multiple user-defined tablespace files.
  • Since MySQL 5.7, innochecksum can be executed on multiple files of the system tablespace.
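
A usage sketch based on the options listed above (file paths are placeholders; check the MySQL 5.7 manual for the exact syntax):

$ innochecksum --log=/tmp/innochecksum.log /path/to/datadir/db/table.ibd
$ innochecksum --page-type-summary /path/to/datadir/db/table.ibd
$ innochecksum --page-type-dump=/tmp/page_dump.txt /path/to/datadir/db/table.ibd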

You can read more about this in the MySQL 5.7 manual page for innochecksum.

Conclusion:
In this post, we identified InnoDB page corruption using the logs generated by Percona XtraBackup and fixed it by dumping and reloading the table with mysqldump. But again, unfortunately, Percona XtraBackup will not always fail in the same way when it finds corruption, so in some cases it's not easy to tell whether Percona XtraBackup failed due to a bad checksum or due to a bug of its own. In most cases, though, page corruption is the culprit if Percona XtraBackup fails to complete.

To summarize, I would say that Percona XtraBackup is a good way of verifying whether or not InnoDB pages are corrupted, and you can also verify the same thing via the mysqldump utility.

The post Deep dive into MySQL’s innochecksum tool appeared first on MySQL Performance Blog.

Production Percona DB crashing repeatedly

Latest Forum Posts - March 15, 2015 - 11:35am
Hi all, I've got a production box that crashes repeatedly with the following:
Mar 15 12:13:01 box863 mysqld: 18:13:01 UTC - mysqld got signal 11 ;
Mar 15 12:13:01 box863 mysqld: This could be because you hit a bug. It is also possible that this binary
Mar 15 12:13:01 box863 mysqld: or one of the libraries it was linked against is corrupt, improperly built,
Mar 15 12:13:01 box863 mysqld: or misconfigured. This error can also be caused by malfunctioning hardware.
Mar 15 12:13:01 box863 mysqld: We will try our best to scrape up some info that will hopefully help
Mar 15 12:13:01 box863 mysqld: diagnose the problem, but since we have already crashed,
Mar 15 12:13:01 box863 mysqld: something is definitely wrong and this may fail.
Mar 15 12:13:01 box863 mysqld: Please help us make Percona Server better by reporting any
Mar 15 12:13:01 box863 mysqld: bugs at http://bugs.percona.com/
Mar 15 12:13:01 box863 mysqld:
Mar 15 12:13:01 box863 mysqld: key_buffer_size=268435456
Mar 15 12:13:01 box863 mysqld: read_buffer_size=4194304
Mar 15 12:13:01 box863 mysqld: max_used_connections=0
Mar 15 12:13:01 box863 mysqld: max_threads=1502
Mar 15 12:13:01 box863 mysqld: thread_count=0
Mar 15 12:13:01 box863 mysqld: connection_count=0
Mar 15 12:13:01 box863 mysqld: It is possible that mysqld could use up to
Mar 15 12:13:01 box863 mysqld: key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 12585173 K bytes of memory
Mar 15 12:13:01 box863 mysqld: Hope that's ok; if not, decrease some variables in the equation.
Mar 15 12:13:01 box863 mysqld:
Mar 15 12:13:01 box863 mysqld: Thread pointer: 0x0
Mar 15 12:13:01 box863 mysqld: Attempting backtrace. You can use the following information to find out
Mar 15 12:13:01 box863 mysqld: where mysqld died. If you see no messages after this, something went
Mar 15 12:13:01 box863 mysqld: terribly wrong...
Mar 15 12:13:01 box863 mysqld: stack_bottom = 0 thread_stack 0x40000
Mar 15 12:13:01 box863 mysqld: /usr/sbin/mysqld(my_print_stacktrace+0x35)[0x7cbfc5]
Mar 15 12:13:01 box863 mysqld: /usr/sbin/mysqld(handle_fatal_signal+0x4b4)[0x6a0ec4]
Mar 15 12:13:01 box863 mysqld: /lib64/libpthread.so.0[0x344aa0f710]
Mar 15 12:13:01 box863 mysqld: /usr/sbin/mysqld(_ZN3THD15raise_conditionEjPKcN11MYSQL_ERROR18enum_warning_levelES1_+0x3b)[0x56ed6b]
Mar 15 12:13:01 box863 mysqld: /usr/sbin/mysqld(_Z19push_warning_printfP3THDN11MYSQL_ERROR18enum_warning_levelEjPKcz+0xdd)[0x57c08d]
Mar 15 12:13:01 box863 mysqld: /usr/sbin/mysqld(ib_warn_row_too_big+0x86)[0x8348a6]
Mar 15 12:13:01 box863 mysqld: /usr/sbin/mysqld[0x8e7753]
Mar 15 12:13:01 box863 mysqld: /usr/sbin/mysqld[0x8f3c70]
Mar 15 12:13:01 box863 mysqld: /usr/sbin/mysqld[0x8f1c52]
Mar 15 12:13:01 box863 mysqld: /usr/sbin/mysqld[0x8f2785]
Mar 15 12:13:01 box863 mysqld: /usr/sbin/mysqld[0x96f9f5]
Mar 15 12:13:01 box863 mysqld: /usr/sbin/mysqld[0x963d27]
Mar 15 12:13:01 box863 mysqld: /usr/sbin/mysqld[0x882cf7]
Mar 15 12:13:01 box863 mysqld: /usr/sbin/mysqld[0x877ebc]
Mar 15 12:13:01 box863 mysqld: /lib64/libpthread.so.0[0x344aa079d1]
Mar 15 12:13:01 box863 mysqld: /lib64/libc.so.6(clone+0x6d)[0x344a2e88fd]
Mar 15 12:13:01 box863 mysqld: You may download the Percona Server operations manual by visiting
Mar 15 12:13:01 box863 mysqld: http://www.percona.com/software/percona-server/. You may find information
Mar 15 12:13:01 box863 mysqld: in the manual which will help you identify the cause of the crash.
Mar 15 12:13:04 box863 mysqld_safe: mysqld from pid file /var/lib/mysql/box863.bluehost.com.pid ended

Percona Xtrabackup for MySQL 5.7 (Developer Version)

Latest Forum Posts - March 15, 2015 - 10:06am
Percona Xtrabackup and MySQL 5.7 Compatibility


I am trying to install Percona XtraBackup on Debian Wheezy and back up a MySQL 5.7.6 m16 (Developer Release) instance. I installed the MySQL Community edition from source with the standard procedure and the server works fine.

I then tried to install Percona XtraBackup by downloading the deb bundle or by adding the Percona repository to my sources list. After the installation is done I can't stop, start or restart the MySQL instance, as the binaries for the mysqld service are not there anymore. Re-installing the instance with make install does not help either; the service remains inaccessible. I noticed that during the Percona installation apt installs the mysql-common package, which should already have been installed during the source-code build. After doing a little research I noticed that although Percona mentions MySQL 5.7 in bug fixes, the documentation does not mention MySQL 5.7 as a supported version.

Is MySQL 5.7 supported yet?
How can I compile Percona XtraBackup for MySQL 5.7?

Thanks in advance!
Stefan

Q&A: Multi-threaded Replication in MySQL 5.6 and MySQL 5.7

Latest MySQL Performance Blog posts - March 13, 2015 - 8:45am

My webinar “Multi-threaded Replication in MySQL 5.6 and 5.7” on February 25 generated several excellent questions following the presentation (available here for playback along with the slides). I didn't have time to answer many of the questions during the session, so in this post I answer all of them. Thanks to everyone who attended!

Q: What do you expect from MTS with logical clock? Do you think performance would be as good as with the per-database scheme?
A: MTS in 5.6 is not usable if you have a single database. I do not have numbers, but single-database setups are quite frequent. With 5.7, everyone should be able to benefit from multi-threaded replication.

Q: When MySQL 5.6 was released, performance of MTS was lower than in 5.5, for example. Is this addressed now?
A: I am not sure which specific issue or bug you are referring to, but if your data is spread across several databases

Q: How does Percona XtraBackup work with MTS? What are the changes in mysqldump?
A: As long as you are using GTIDs, you can safely take a backup from a slave using multi-threaded replication: with XtraBackup, add the --slave-info option as usual when taking a backup from a slave and with mysqldump, use --master-data instead of --dump-slave.
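
A hedged sketch of both commands (the backup directory and credentials are placeholders):

# XtraBackup: record the slave's replication position alongside the backup
$ innobackupex --slave-info /backups
# mysqldump: with GTIDs, record master coordinates instead of --dump-slave
$ mysqldump --all-databases --single-transaction --master-data=2 > /backups/dump.sql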

Q: For the checkpoint position, what if an MTS thread applies an INSERT before the table it inserts into has been created? How will the MTS checkpoint keep track of transactions applied by different threads on the slave?
A: The worker threads track all execution gaps to make sure that out-of-order execution is safe and to be able to replay all events without forgetting any of them. So it is not possible for a worker thread to insert data into a table that has not been created yet.

Q: Can you use MTS with all binlog_format options?
A: Yes

Q: Is there any way to have the threads work so that no database contention happens?
A: The short answer is no: the goal of the worker threads is to execute the incoming transactions as fast as possible. If that results in database contention, you should probably decrease the number of worker threads.

Q: Why doesn’t multi-threaded replication perform well on a single DB?
A: With 5.6, parallelization is based on isolating the transactions going to each database. If you only have a single DB, no parallelization is possible. You should look at 5.7 and the logical clock algorithm.

Q: Are there any implications with regards to GTIDs and Multi-Threaded replication when running a Master-to-Master setup?
A: I cannot think of any, however I am not sure master-master replication is still very relevant when using GTIDs.

Q: Is there any inconvenience with memory or cache when using more workers than the number of databases?
A: If the number of workers is just a bit higher than the number of databases (like 5 workers for 3 databases), there should not be any issue. However with ridiculously high numbers (500 workers for 2 databases), there might be performance degradation. I have not tested such cases, so I cannot give a good answer there. However, the idea is that the number of workers should be close to the number of databases and should not exceed the number of cores on the server.

Q: Is there multi-threaded replication in MySQL 5.7?
A: Yes, multi-threaded replication is available in MySQL 5.7 and offers improvements compared to MySQL 5.6 (mainly the parallelization with logical clock).
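
A minimal sketch of enabling it on a MySQL 5.7 slave (the worker count of 4 is an arbitrary example; on 5.6 only the per-database policy exists, so only slave_parallel_workers applies there):

$ mysql -e "STOP SLAVE;
            SET GLOBAL slave_parallel_type = 'LOGICAL_CLOCK';
            SET GLOBAL slave_parallel_workers = 4;
            START SLAVE;"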

Q: Have you used DIM_STAT to create load and measure slave lag? Any interesting takeaways from that effort?
A: I used sysbench to generate load and Seconds_Behind_Master from SHOW SLAVE STATUS to measure slave lag. That mainly shows that if your workload is a good fit for MTS (multiple databases and load shared evenly across them), performance benefits can be significant.

Q: Does multi-threaded replication also work with Percona XtraDB Cluster/Percona Server?
A: Percona Server 5.6 is based on MySQL 5.6, so you can use multi-threaded replication exactly as you would use it on MySQL 5.6.

On Percona XtraDB Cluster, it is a bit different: replication inside the cluster uses Galera replication, which has nothing to do with MySQL replication. Note that Galera has offered parallel replication from the beginning (parallel applying of the replicated writesets to be accurate). However if you are using asynchronous replicas, these replicas can benefit from multi-threaded replication if they are running on MySQL/Percona Server 5.6.

Q: What happens to cross-db transactions? Do they not replicate?
A: These transactions will replicate, but they will have to wait until all preceding transactions have been executed. Stated differently, cross-db transactions introduce serialization, so you should avoid them as much as possible if you want to benefit from parallel applying.

To be accurate, if you have db1, db2 and db3, and you execute a transaction involving db1 and db2, transactions on db3 can still be applied in parallel. So if you have many databases, cross-db transactions may not be that bad.

Q: When using MTS without GTIDs, is “Seconds_Behind_Master” from SHOW SLAVE STATUS valid?
A: Seconds_Behind_Master is based on Exec_Master_Log_Pos. And with MTS, Exec_Master_Log_Pos is not reliable as it indicates the position of the latest checkpoint and not the position of the latest executed transaction. However in practice, checkpoints will happen at least every 300ms by default, so Seconds_Behind_Master is still a good indication of the replication lag. Of course you should keep in mind the usual limitations, such as with multi-tiered replication (if the setup is A->B->C, C will report its lag against B, not against A) or when there is a replication error (then Seconds_Behind_Master is NULL).
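
For reference, both fields mentioned above come straight out of SHOW SLAVE STATUS:

$ mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Seconds_Behind_Master|Exec_Master_Log_Pos'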

Q: How can all the servers be realistically restarted at the same time? There could be a few sec intervals if you have multiple servers [That was when I explained how to enable GTID replication].
A: With MySQL 5.6, the requirements are pretty strict when it comes to enabling GTIDs: all servers must be restarted at the same point in time in the replication stream. As you mention, it is difficult if you have several servers, so the only viable solution is: stop the writes on the master, wait until replication has caught up on all slaves, stop all servers, change the configuration, restart all servers.

What it means is that there is a time range when all servers are down. This is a showstopper for many people, and that’s why Percona Server 5.6 now includes a patch from Facebook that allows an online migration to GTIDs and that’s why MySQL 5.7.6 also offers this option.

* * *

Thanks for all of the great questions – and I hope to see you next month at OpenStack Live and the Percona Live MySQL Conference and Expo 2015 (April 13-16) – both at the Santa Clara conference center in sunny Silicon Valley. Get more info here.

The post Q&A: Multi-threaded Replication in MySQL 5.6 and MySQL 5.7 appeared first on MySQL Performance Blog.

MySQL and geospatial programming: An introduction to GIS

Latest MySQL Performance Blog posts - March 13, 2015 - 3:00am

Geographic information systems (GIS) are used by application developers to incorporate geographic information into their datasets to create apps with cool features, such as finding the address of the best steakhouse in town or the locations of local points of historical interest (the list is endless). In a nutshell, GIS captures, stores, checks and displays data related to positions on the Earth's surface.

Next Wednesday I'm going to provide an introduction to GIS functionality in MySQL, along with an overview of recent changes. I invite you to register now for this free webinar titled “MySQL and Geospatial Programming.” It starts at 10 a.m. Pacific time on March 18.

In this talk I’ll discuss:

  • The various tasks that deal with geocoding
  • How MySQL can solve problems such as finding points within a radius (e.g., “Find the 10 closest coffee shops”), among others (see the query sketch after this list)
  • Highlights from some of the new features bundled in the upcoming 5.7 release, and the benefits they can bring to your applications
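
As a taste of the radius problem from the list above, here is a hedged sketch using the classic Haversine formula in plain SQL (the shops table, its lat/lng columns and the test coordinates are hypothetical):

SELECT name,
       6371 * 2 * ASIN(SQRT(
           POW(SIN(RADIANS(lat - 37.35) / 2), 2) +
           COS(RADIANS(37.35)) * COS(RADIANS(lat)) *
           POW(SIN(RADIANS(lng - (-121.95)) / 2), 2)
       )) AS distance_km          -- 6371 km = Earth's mean radius
FROM shops
HAVING distance_km <= 5           -- shops within a 5 km radius
ORDER BY distance_km
LIMIT 10;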

I’m happy to field questions in advance in the comments section below. This webinar, like all Percona webinars, will be recorded. In addition to the video, my slides will also be available for download.

Register now and I’ll talk to you next week!

The post MySQL and geospatial programming: An introduction to GIS appeared first on MySQL Performance Blog.

mysql.sock: broken pipe

Latest Forum Posts - March 12, 2015 - 4:44pm
My percona-agent.log is full of this:
[MySQL] 2015/03/12 20:42:13 packets.go:118: write unix /var/lib/mysql/mysql.sock: broken pipe

precise slow-query.log for use with playback

Latest Forum Posts - March 12, 2015 - 8:09am
Hi everyone

I am quite new to databases, and so far I really like working on this "technology", so excuse me in advance if I ask something obvious.

What I wish to achieve is to replay traffic-load from one database server (live data) to another (dev), and so far what I have found out by researching is to:
1) Get a full backup
2) Capture the traffic with the slow-query option (long_query_time set to 0)
3) Replay the traffic with percona-playback

I have done the above and all worked flawlessly including the playback.

My problem is that due to the database schema, a lot of queries fail or just hang, probably because some queries are missing: there is a gap between the full backup and the start of the slow-query logging.

Is there a way to get a full backup using xtrabackup and then to make a slow-query log where the first query of the slow-query.log is the exact query executed in real life after the full backup has finished?

For example, let's assume that a full backup is taken where the state of the database includes the modifications made by the queries:
query-1-Insert
query-2-Insert
query-3-update

and the slow-query log should contain:
query-4-delete
query-5-update


I assume that there is some sort of timestamp ordering of the queries executed in the database, right?
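
One hedged way to line the two up (assuming binary logging is enabled; directory names are illustrative, and the percona-playback flags should be verified against its --help): start capturing the slow log before the backup runs, then use the binlog coordinates that xtrabackup records at the snapshot point to decide which captured queries came afterwards.

# 1. Capture every query while the backup runs
$ mysql -e "SET GLOBAL slow_query_log = ON; SET GLOBAL long_query_time = 0;"
# 2. Take the backup; the binlog position of the snapshot is recorded
$ innobackupex /backups
$ cat /backups/<timestamp>/xtrabackup_binlog_info
# 3. Replay the captured log against the restored dev server
$ percona-playback --mysql-host=dev-host --query-log-file=/var/lib/mysql/slow.log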

Thanks

percona toolkit 2.2.13 - pt-online-schema-change bus error (core dumped)

Latest Forum Posts - March 12, 2015 - 4:19am
Hi,

I'm using pt-online-schema-change to alter a table on MySQL 5.6 on Ubuntu 14.10, but with no success.

After "2015-03-12T15:08:06 Dropped new table OK." I get "bus error (core dumped)".

This is the command I'm running:
pt-online-schema-change --version-check --user root --password xxxx --database test h=127.0.0.1,t=order_item --alter-foreign-keys-method "rebuild_constraints" --alter "ADD COLUMN refunded_by varchar(255) DEFAULT NULL" --execute

and this is the output:
➜ pt-online-schema-change --version-check --user root --password xxxx --database test h=127.0.0.1,t=order_item --alter-foreign-keys-method "rebuild_constraints" --alter "ADD COLUMN refund_by varchar(255) DEFAULT NULL" --execute
No slaves found. See --recursion-method if host cirpux has slaves.
Not checking slave lag because no slaves were found and --check-slave-lag was not specified.
Operation, tries, wait:
copy_rows, 10, 0.25
create_triggers, 10, 1
drop_triggers, 10, 1
swap_tables, 10, 1
update_foreign_keys, 10, 1
Child tables:
`test`.`sales_order_item_status_history` (approx. 1 rows)
`test`.`sales_return` (approx. 1 rows)
Will use the rebuild_constraints method to update foreign keys.
Altering `test`.`sales_order_item`...
Creating new table...
Created new table test._order_item_new OK.
Altering new table...
Altered `test`.`_order_item_new` OK.
2015-03-12T15:12:41 Creating triggers...
2015-03-12T15:12:41 Created triggers OK.
2015-03-12T15:12:41 Copying approximately 1671767 rows...
Copying `test`.`sales_order_item`: 44% 00:37 remain
Copying `test`.`sales_order_item`: 79% 00:15 remain
2015-03-12T15:14:00 Dropping triggers...
2015-03-12T15:14:00 Dropped triggers OK.
2015-03-12T15:14:00 Dropping new table...
2015-03-12T15:14:00 Dropped new table OK.
`test`.`sales_order_item` was not altered.
[1] 20420 bus error (core dumped) pt-online-schema-change --version-check --user root --password root --databas


Do you have any suggestions on how to get more information on what's going on? Or even a solution :P


thanks

Easy query metrics with MySQL Performance Schema

Latest MySQL Performance Blog posts - March 12, 2015 - 12:00am

The MySQL Performance Schema exposes so much data that it's not trivial to learn, configure, and use. With the recently released Percona Agent 1.0.11 you can get query metrics – like min, max, and average query execution time – with a few clicks:

Click “Apply” and about two minutes later you’ll have query metrics from Performance Schema, collected and sent every minute.

Percona Cloud Tools (PCT) and Percona Agent handle all the details. You’ll need MySQL (or Percona Server) 5.6 and Percona Agent 1.0.11 or newer. One caveat at the moment: it only works for local MySQL instances (so not Amazon RDS). This limitation should be fixed soon; we’re already working on it.

Why use Performance Schema? We prefer Query Analytics with MySQL slow logs (especially Percona Server slow logs, which expose more metrics), because the slow log exposes the most metrics compared to other sources. But sometimes the slow log just isn't an option, so Performance Schema is the next best choice. That choice means tradeoffs, though. For example, Performance Schema does not expose actual query examples (just fingerprints), so EXPLAIN does not work.
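
For a sense of the raw data involved, the per-query metrics come from the events_statements_summary_by_digest table; the timer columns are in picoseconds, hence the division (a hypothetical ad-hoc query, not what the agent itself runs):

SELECT DIGEST_TEXT,
       COUNT_STAR,
       MIN_TIMER_WAIT / 1e12 AS min_sec,
       AVG_TIMER_WAIT / 1e12 AS avg_sec,
       MAX_TIMER_WAIT / 1e12 AS max_sec
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 5;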

For those who’ve been following PCT development, you know that Percona Agent 1.0.5 first introduced support for Performance Schema. What’s new in 1.0.11 is everything – we completely rewrote this part of the agent. It’s so much better that it’s now the required minimum version for using Query Analytics with Performance Schema. Upgrading is really easy: just run the single command line you used to install the agent and it will auto-update.

MySQL Performance Schema exposes a lot of data and insights into the server, but query metrics are perhaps the most important because the primary job of your MySQL database is to execute queries. That’s why Percona Cloud Tools makes Query Analytics with Performance Schema (and slow logs) so easy: to help you focus on the essential and leave the details to the tools.

Percona Cloud Tools is in beta, so it’s still free to sign up and free to use all the tools and features.

The post Easy query metrics with MySQL Performance Schema appeared first on MySQL Performance Blog.

interpreting mysql graphs for cacti

Latest Forum Posts - March 11, 2015 - 9:23am
I am monitoring my webserver with cacti using these plugins. The graph "Percona MySQL Connections GT" shows the connections increasing every day by 10/20. When I log into the server and check using the CLI, that is not what I see:

#mysqladmin --p extended-status | grep -wi 'threads_connected\|threads_running' | awk '{ print $2,$4}'
Threads_connected 3
Threads_running 1

mysql> show processlist;
+-------+------+-----------+------+---------+------+-------+------------------+
| Id    | User | Host      | db   | Command | Time | State | Info             |
+-------+------+-----------+------+---------+------+-------+------------------+
| 58625 | root | localhost | NULL | Query   | 0    | NULL  | show processlist |
+-------+------+-----------+------+---------+------+-------+------------------+
1 row in set (0.00 sec)

Replication Warning when replicating with replicate-ignore-db.

Latest Forum Posts - March 11, 2015 - 1:18am
When trying to replicate from 5.5(Percona-Server-5.5.28-rel29.2-360.) to 5.6 (Percona-Server-5.6.22-rel71.0-726 ) and having some
replicate-wild-ignore-table
replicate-ignore-db

rules configured, the replication is giving warnings and filling up the MySQL error log, even after starting the MySQL server with the parameter slave_skip_errors = 1237:

2015-03-11 09:10:49 19579 [Warning] Slave SQL: Could not execute Query event. Detailed error: Slave SQL thread ignored the query because of replicate-*-table rules; Error log throttle is enabled. This error will not be displayed for next 60 secs. It will be suppressed, Error_code: 1237

Is this a bug? And how can I get rid of these warnings?

Br
Singh.

Wsrep breakdown WSREP: fork() failed: 12 (Cannot allocate memory)

Latest Forum Posts - March 10, 2015 - 10:04am
We use Percona cluster on Debian Wheezy. The last cluster was built two weeks ago and worked properly.
Today we tried starting up a new one and got memory errors from wsrep.


dpkg -l|grep galera
ii percona-xtradb-cluster-galera-3 3.9.3494.wheezy amd64 Metapackage for latest version of galera3.
ii percona-xtradb-cluster-galera-3.x 3.9.3494.wheezy amd64 Galera components of Percona XtraDB Cluster


150310 17:48:52 mysqld_safe Starting mysqld_using_numa daemon with databases from /data/mysql
150310 17:48:52 mysqld_safe WSREP: Running position recovery with --log_error='/data/mysql/wsrep_recovery.35GNao' --pid-file='/data/mysql/galeranode-02-recover.pid'
2015-03-10 17:48:55 0 [Warning] Using unique option prefix sort_buffer instead of sort_buffer_size is deprecated and will be removed in a future release. Please use the full
name instead.
2015-03-10 17:48:55 0 [Warning] Using unique option prefix thread_cache instead of thread_cache_size is deprecated and will be removed in a future release. Please use the fu
ll name instead.
2015-03-10 17:48:55 0 [Warning] 'THREAD_CONCURRENCY' is deprecated and will be removed in a future release.
2015-03-10 17:48:55 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more d
etails).
150310 17:48:58 mysqld_safe WSREP: Recovered position fafe51f0-c743-11e4-9db3-765772f3dde5:15
Log of wsrep recovery (--wsrep-recover):
2015-03-10 17:48:55 15254 [Warning] Using unique option prefix myisam_recover instead of myisam-recover-options is deprecated and will be removed in a future release. Please
use the full name instead.
2015-03-10 17:48:55 15254 [Warning] option 'innodb-log-files-in-group': unsigned value 1 adjusted to 2
InnoDB: Warning: innodb_log_block_size has been changed from default value 512. (###EXPERIMENTAL### operation)
InnoDB: The log block size is set to 4096.
2015-03-10 17:48:55 7f563d5f5720 InnoDB: Warning: Using innodb_additional_mem_pool_size is DEPRECATED. This option may be removed in future releases, together with the optio
n innodb_use_sys_malloc and with the InnoDB's internal memory allocator.
2015-03-10 17:48:55 7f563d5f5720 InnoDB: Warning: Using innodb_locks_unsafe_for_binlog is DEPRECATED. This option may be removed in future releases. Please use READ COMMITTE
D transaction isolation level instead, see http://dev.mysql.com/doc/refman/5.6/...ansaction.html.
2015-03-10 17:48:55 15254 [Note] InnoDB: Using atomics to ref count buffer pool pages
2015-03-10 17:48:55 15254 [Note] InnoDB: The InnoDB memory heap is disabled
2015-03-10 17:48:55 15254 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2015-03-10 17:48:55 15254 [Note] InnoDB: Memory barrier is not used
2015-03-10 17:48:55 15254 [Note] InnoDB: Compressed tables use zlib 1.2.7
2015-03-10 17:48:55 15254 [Note] InnoDB: Using Linux native AIO
2015-03-10 17:48:55 15254 [Note] InnoDB: Not using CPU crc32 instructions
2015-03-10 17:48:55 15254 [Note] InnoDB: Initializing buffer pool, size = 5.5G
InnoDB: HugeTLB: Warning: Failed to allocate 601882624 bytes. errno 22
InnoDB HugeTLB: Warning: Using conventional memory pool
InnoDB: HugeTLB: Warning: Failed to allocate 601882624 bytes. errno 22
InnoDB HugeTLB: Warning: Using conventional memory pool
InnoDB: HugeTLB: Warning: Failed to allocate 601882624 bytes. errno 22
InnoDB HugeTLB: Warning: Using conventional memory pool
InnoDB: HugeTLB: Warning: Failed to allocate 601882624 bytes. errno 22
InnoDB HugeTLB: Warning: Using conventional memory pool
InnoDB: HugeTLB: Warning: Failed to allocate 601882624 bytes. errno 22
InnoDB HugeTLB: Warning: Using conventional memory pool
InnoDB: HugeTLB: Warning: Failed to allocate 601882624 bytes. errno 22
InnoDB HugeTLB: Warning: Using conventional memory pool
InnoDB: HugeTLB: Warning: Failed to allocate 601882624 bytes. errno 22
InnoDB HugeTLB: Warning: Using conventional memory pool
InnoDB: HugeTLB: Warning: Failed to allocate 601882624 bytes. errno 22
InnoDB HugeTLB: Warning: Using conventional memory pool
InnoDB: HugeTLB: Warning: Failed to allocate 601882624 bytes. errno 22
InnoDB HugeTLB: Warning: Using conventional memory pool
InnoDB: HugeTLB: Warning: Failed to allocate 601882624 bytes. errno 22
InnoDB HugeTLB: Warning: Using conventional memory pool
2015-03-10 17:48:55 15254 [Note] InnoDB: Completed initialization of buffer pool
2015-03-10 17:48:55 15254 [Note] InnoDB: Highest supported file format is Barracuda.
2015-03-10 17:48:55 15254 [Note] InnoDB: 128 rollback segment(s) are active.
2015-03-10 17:48:55 15254 [Note] InnoDB: Waiting for purge to start
2015-03-10 17:48:55 15254 [Note] InnoDB: Percona XtraDB (http://www.percona.com) 5.6.22-72.0 started; log sequence number 1647917
2015-03-10 17:48:55 15254 [Warning] InnoDB: Skipping buffer pool dump/restore during wsrep recovery.
2015-03-10 17:48:55 15254 [Warning] /usr/sbin/mysqld: unknown variable 'loose_handlersocket_port=9998'
2015-03-10 17:48:55 15254 [Warning] /usr/sbin/mysqld: unknown variable 'loose_handlersocket_port_wr=9999'
2015-03-10 17:48:55 15254 [Warning] /usr/sbin/mysqld: unknown variable 'loose_handlersocket_threads=16'
2015-03-10 17:48:55 15254 [Warning] /usr/sbin/mysqld: unknown variable 'loose_handlersocket_threads_wr=1'
2015-03-10 17:48:56 15254 [Note] RSA private key file not found: /data/mysql//private_key.pem. Some authentication plugins will not work.
2015-03-10 17:48:56 15254 [Note] RSA public key file not found: /data/mysql//public_key.pem. Some authentication plugins will not work.
2015-03-10 17:48:56 15254 [Note] Server hostname (bind-address): '*'; port: 33306
2015-03-10 17:48:56 15254 [Note] IPv6 is available.
2015-03-10 17:48:56 15254 [Note] - '::' resolves to '::';
2015-03-10 17:48:56 15254 [Note] Server socket created on IP: '::'.
2015-03-10 17:48:56 15254 [Note] WSREP: Recovered position: fafe51f0-c743-11e4-9db3-765772f3dde5:15
2015-03-10 17:48:56 15254 [Note] Binlog end
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'partition'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'BLACKHOLE'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'INNODB_CHANGED_PAGES'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'INNODB_SYS_DATAFILES'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'INNODB_SYS_TABLESPACES'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN_COLS'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'INNODB_SYS_FIELDS'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'INNODB_SYS_COLUMNS'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'INNODB_SYS_INDEXES'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'INNODB_SYS_TABLESTATS'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'INNODB_SYS_TABLES'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'INNODB_FT_INDEX_TABLE'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'INNODB_FT_INDEX_CACHE'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'INNODB_FT_CONFIG'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'INNODB_FT_BEING_DELETED'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'INNODB_FT_DELETED'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'INNODB_FT_DEFAULT_STOPWORD'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'INNODB_METRICS'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'INNODB_BUFFER_POOL_STATS'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE_LRU'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX_RESET'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'INNODB_CMPMEM'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'INNODB_CMP_RESET'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'INNODB_CMP'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'INNODB_LOCKS'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'INNODB_TRX'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'XTRADB_RSEG'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'XTRADB_INTERNAL_HASH_TABLES'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'XTRADB_READ_VIEW'
2015-03-10 17:48:56 15254 [Note] Shutting down plugin 'InnoDB'
2015-03-10 17:48:56 15254 [Note] InnoDB: FTS optimize thread exiting.
2015-03-10 17:48:56 15254 [Note] InnoDB: Starting shutdown...
2015-03-10 17:48:58 15254 [Note] InnoDB: Shutdown completed; log sequence number 1647927
2015-03-10 17:48:58 15254 [Note] Shutting down plugin 'ARCHIVE'
2015-03-10 17:48:58 15254 [Note] Shutting down plugin 'MEMORY'
2015-03-10 17:48:58 15254 [Note] Shutting down plugin 'MyISAM'
2015-03-10 17:48:58 15254 [Note] Shutting down plugin 'CSV'
2015-03-10 17:48:58 15254 [Note] Shutting down plugin 'MRG_MYISAM'
2015-03-10 17:48:58 15254 [Note] Shutting down plugin 'sha256_password'
2015-03-10 17:48:58 15254 [Note] Shutting down plugin 'mysql_old_password'
2015-03-10 17:48:58 15254 [Note] Shutting down plugin 'mysql_native_password'
2015-03-10 17:48:58 15254 [Note] Shutting down plugin 'wsrep'
2015-03-10 17:48:58 15254 [Note] Shutting down plugin 'binlog'
2015-03-10 17:48:58 15254 [Note] /usr/sbin/mysqld: Shutdown complete

Percona XtraBackup without root

Latest Forum Posts - March 10, 2015 - 9:16am
Sorry, duplicated post. Got an unexpected error on last post

Hit an old bug? (InnoDB data dictionary != tablespace)

Latest Forum Posts - March 10, 2015 - 3:57am
Hello,

I'm facing a problem with a backup that I've tried to prepare.

This is the situation:

Host to backup: Debian Squeeze, MariaDB 5.2.4, xtrabackup version 2.2.9
New host: Debian Wheezy, MariaDB 5.5.42, xtrabackup version 2.2.9

My backup finished with:

150307 02:24:08 innobackupex: completed OK!

There were some warnings at the beginning:

150306 19:17:08 innobackupex: Warning: FLUSH ENGINE LOGS is not supported by the server. Data may be inconsistent with binary log coordinates!

But it failed when I tried to prepare the backup:

......
InnoDB: Trx id counter is 8397891584
InnoDB: Starting an apply batch of log records to the database...
InnoDB: Progress in percent: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
InnoDB: Apply batch completed
InnoDB: Last MySQL binlog file position 0 1868814, file name /home/mysql/log/mariadb-bin.037316
2015-03-10 10:36:40 64bfd7fe7720 InnoDB: Error: table 'test_tmp/CP_VO_2013_03_26'
InnoDB: in InnoDB data dictionary has tablespace id 101137,
InnoDB: but the tablespace with that id has name STRUCTURE/CP_VO_2013_03_26.
..... etc...
InnoDB: Table test_tmp/Q_FIELD_11142_1 in the InnoDB data dictionary has tablespace id 100386, but tablespace with that id or name does not exist. Have you deleted or moved .ibd files? This may also be a table created with CREATE TEMPORARY TABLE whose .ibd and .frm files MySQL automatically removed, but the table still exists in the InnoDB internal data dictionary.
InnoDB: It will be removed from the data dictionary.
..... etc...
2015-03-10 10:36:43 64bfc9e16700 InnoDB: Rollback of non-prepared transactions completed
notice (again)
If you use binary log and don't use any hack of group commit, the binary log position seems to be:
InnoDB: Last MySQL binlog file position 0 1868814, file name /home/mysql/log/mariadb-bin.037316
xtrabackup: ########################################################
xtrabackup: # !!WARNING!!                                          #
xtrabackup: # The transaction log file is corrupted.               #
xtrabackup: # The log was not applied to the intended LSN!         #
xtrabackup: ########################################################
xtrabackup: The intended lsn is 10338665879040
xtrabackup: starting shutdown with innodb_fast_shutdown = 1
InnoDB: FTS optimize thread exiting.
InnoDB: Starting shutdown...
InnoDB: Shutdown completed; log sequence number 10338665891853
150310 10:37:18 innobackupex: Restarting xtrabackup with command: xtrabackup --defaults-file="/home/backup/backup-my.cnf" --defaults-group="mysqld" --prepare --target-dir=/home/backup for creating ib_logfile*
....
InnoDB: Renaming log file ./ib_logfile101 to ./ib_logfile0
InnoDB: New log files created, LSN=10338665891853
InnoDB: Highest supported file format is Barracuda.
2015-03-10 10:37:59 6ad7ba69e720 InnoDB: Operating system error number 2 in a file operation.
InnoDB: The error means the system cannot find the path specified.
InnoDB: If you are installing InnoDB, remember that you must create
InnoDB: directories yourself, InnoDB does not create them.
InnoDB: Could not find a valid tablespace file for 'test_tmp/CP_VO_2013_03_26'. See http://dev.mysql.com/doc/refman/5.6/en/innodb-troubleshooting-datadict.html for how to resolve the issue.
InnoDB: It will be removed from the data dictionary.
.... etc....
InnoDB: 1 rollback segment(s) are active.
InnoDB: Waiting for purge to start
InnoDB: 5.6.22 started; log sequence number 10338665892364
InnoDB: Error: tablespace id is 100384 in the data dictionary
InnoDB: but in file ./test_tmp/FIELD_LIST_FOR_TABLE_10313.ibd it is 167234!
2015-03-10 10:38:04 6ad7ba69e720 InnoDB: Assertion failure in thread 117474778015520 in file fil0fil.cc line 582
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.6/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
09:38:04 UTC - xtrabackup got signal 6 ;
This could be because you hit a bug or data is corrupted. This error
can also be caused by malfunctioning hardware. We will try our best to
scrape up some info that will hopefully help diagnose the problem, but
since we have already crashed, something is definitely wrong and this
may fail.
Thread pointer: 0x345fe20
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x10000
xtrabackup(my_print_stacktrace+0x2b) [0x8ed17b]
xtrabackup(handle_fatal_signal+0x252) [0x7991a2]
/lib/x86_64-linux-gnu/libpthread.so.0(+0xf0a0) [0x6ad7ba2800a0]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35) [0x6ad7b889b165]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x180) [0x6ad7b889e3e0]
xtrabackup() [0x608571]
xtrabackup() [0x608649]
xtrabackup(fil_io(unsigned long, bool, unsigned long, unsigned long, unsigned long, unsigned long, unsigned long, void*, void*)+0x147) [0x6104b7]
xtrabackup() [0x6e57fd]
xtrabackup(buf_read_page(unsigned long, unsigned long, unsigned long)+0x44) [0x6e5ea4]
xtrabackup(buf_page_get_gen(unsigned long, unsigned long, unsigned long, unsigned long, buf_block_t*, unsigned long, char const*, unsigned long, mtr_t*)+0x22e) [0x67d71e]
xtrabackup(main+0x1dc1) [0x5c0931]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd) [0x6ad7b8887ead]
xtrabackup() [0x5d195d]
Please report a bug at https://bugs.launchpad.net/percona-xtrabackup
innobackupex: got a fatal error with the following stacktrace: at /usr/bin/innobackupex line 2654
main::apply_log() called at /usr/bin/innobackupex line 1570
innobackupex: Error: innobackupex: xtrabackup (2nd execution) failed at /usr/bin/innobackupex line 2654.

Has anyone else faced the same problem?

Do you think it could be related to this bug? https://bugs.launchpad.net/percona-x...up/+bug/722638


Best regards,

Advanced JSON for MySQL

Latest MySQL Performance Blog posts - March 10, 2015 - 3:00am
What is JSON

JSON is a text-based, human-readable format for transmitting data between systems, for serializing objects, and for storing document-store data where each document may have different attributes/schema. Popular document store databases use JSON (and the related BSON) for storing and transmitting data.

Problems with JSON in MySQL

It is difficult to interoperate between MySQL and MongoDB (or other document databases) because JSON has traditionally been very difficult to work with. Up until recently, JSON was just a TEXT document. I said up until recently, so what has changed? The biggest thing is that there are new JSON UDFs by Sveta Smirnova, which are part of the MySQL 5.7 Labs releases. Currently the JSON UDFs are up to version 0.0.4. While these new UDFs are a welcome addition to the MySQL database, they don't solve the really tough JSON problems we face.

Searching

The JSON UDFs provide a number of functions that make working with JSON easier, including the ability to extract portions of a document or search a document for a particular key. That said, you can't effectively use JSON_EXTRACT() or JSON_SEARCH() in the WHERE clause, because doing so initiates a dreaded full table scan (what MongoDB would call a full collection scan). This is a big problem, and the common wisdom is that JSON can't be indexed for efficient WHERE clauses, especially for sub-documents like arrays or objects within the JSON.
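To make the problem concrete, a query along these lines forces MySQL to run the UDF against every row (a hypothetical sketch: the docs table and the 'name' key are illustrative, and it assumes the labs JSON UDFs are installed):

-- The UDF result is not indexed, so MySQL must parse every
-- document in the table: a full table scan.
SELECT count(*)
FROM docs
WHERE json_extract(doc, 'name') = 'Cake';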

Actually, however, I've come up with a technique to effectively index JSON data in MySQL (to any depth). The key lies in transforming the JSON from a format that is not easily indexed into one that is. When you think of indexes you probably think of B-TREE or HASH indexes (or bitmap indexes), but MySQL also supports FULLTEXT indexes.

A FULLTEXT index is an inverted index where words (tokens) point to documents. While text indexes are great, they aren't normally usable for JSON, because MySQL splits words on whitespace and non-alphanumeric characters, so the name of a field (the key) can no longer be associated with its value. But what if we transform the JSON? You can "flatten" the JSON down into key/value pairs and use a text index to associate the key/value pairs with the document. I created a UDF called RAPID_FLATTEN_JSON using the C++ RapidJSON library. The UDF flattens JSON documents down into key/value pairs for the specific purpose of indexing.

Here is an example JSON document:

{ "id": "0001", "type": "donut", "name": "Cake", "ppu": 0.55, "batters": { "batter": [ { "id": "1001", "type": "Regular" }, { "id": "1002", "type": "Chocolate" }, { "id": "1003", "type": "Blueberry" }, { "id": "1004", "type": "Devil's Food" } ] }, "topping": [ { "id": "5001", "type": "None" }, { "id": "5002", "type": "Glazed" }, { "id": "5005", "type": "Sugar" }, { "id": "5007", "type": "Powdered Sugar" }, { "id": "5006", "type": "Chocolate with Sprinkles" }, { "id": "5003", "type": "Chocolate" }, { "id": "5004", "type": "Maple" } ] }

Flattened:

mysql> select RAPID_FLATTEN_JSON(load_file('/tmp/doc.json'))\G
*************************** 1. row ***************************
RAPID_FLATTEN_JSON(load_file('/tmp/doc.json')): id=0001 type=donut name=Cake ppu=0.55 id=1001 type=Regular id=1002 type=Chocolate id=1003 type=Blueberry id=1004 type=Devil's Food type=Devil's type=Food id=5001 type=None id=5002 type=Glazed id=5005 type=Sugar id=5007 type=Powdered Sugar type=Powdered type=Sugar id=5006 type=Chocolate with Sprinkles type=Chocolate type=with type=Sprinkles id=5003 type=Chocolate id=5004 type=Maple
1 row in set (0.00 sec)

Obviously this is useful, because our keys are now attached to our values in an easily searchable way. All you need to do is store the flattened version of the JSON in another field (or another table) and index it with a FULLTEXT index to make it searchable. But wait, there is one more big problem: MySQL will split words on the equals sign. We don't want this, as it removes the locality of the keyword and the value. To fix this problem you'll have to undertake the (actually quite easy) step of adding a new collation to MySQL (I called mine ft_kvpair_ci). I added the equals sign (=) to the list of lowercase characters as described in the manual. You just have to change two text files; there is no need to recompile the server or anything, and as I said, it is pretty easy. Let me know if you get stuck on this step and I can show you the 5.6.22 files I modified.

By the way, I used a UDF because MySQL FULLTEXT indexes don't support pluggable parsers for InnoDB until 5.7. This will be much cleaner in 5.7 with a parser plugin, and there will be no need to maintain an extra column.

Using the solution:
Given a table full of complex json:

create table json2(id int auto_increment primary key, doc mediumtext);

Add a column for the index data and FULLTEXT index it:

alter table json2 add flat mediumtext character set latin1 collate ft_kvpair_ci, FULLTEXT(flat);

Then populate the index. Note that you can create a trigger to keep the second column in sync (I leave that as an exercise for the reader, though a sketch follows below), or you can use Flexviews to maintain a copy in a second table automatically.

mysql> update json2 set flat=RAPID_FLATTEN_JSON(doc);
Query OK, 18801 rows affected (26.34 sec)
Rows matched: 18801  Changed: 18801  Warnings: 0
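Here is a minimal sketch of such sync triggers (the trigger names are mine, and this assumes the RAPID_FLATTEN_JSON UDF is installed; treat it as a starting point, not a tested implementation):

DELIMITER //
-- Re-flatten the document whenever a row is inserted...
CREATE TRIGGER json2_flat_bi BEFORE INSERT ON json2
FOR EACH ROW SET NEW.flat = RAPID_FLATTEN_JSON(NEW.doc);
//
-- ...or updated, so the FULLTEXT index never goes stale.
CREATE TRIGGER json2_flat_bu BEFORE UPDATE ON json2
FOR EACH ROW SET NEW.flat = RAPID_FLATTEN_JSON(NEW.doc);
//
DELIMITER ;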

Using the index:

mysql> select count(*) from json2 where match(flat) against ('last_name=Vembu');
+----------+
| count(*) |
+----------+
|        3 |
+----------+
1 row in set (0.00 sec)

The documents I searched for that example are very complex and highly nested. Check out the full matching documents for the query here.

If you want to index only a subportion of the document, use the MySQL UDF JSON_EXTRACT to extract the portion you want to index, and flatten only that.
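For example, to index only the topping sub-document from the sample above, something like this should work (a sketch; it assumes the labs JSON_EXTRACT UDF accepts the key name as shown):

-- Flatten and index just the 'topping' sub-document instead of the whole doc
update json2 set flat = RAPID_FLATTEN_JSON(json_extract(doc, 'topping'));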

Aggregating

JSON documents may contain sub-documents as mentioned a moment ago. JSON_EXTRACT can extract a portion of a document, but it is still a text document. There is no function that can extract ALL of a particular key (like invoice_price) and aggregate the results. So, if you have a document called orders which contains a varying number of items and their prices, it is very difficult (if not impossible) to use the JSON UDF to aggregate a “total sales” figure from all the order documents.

To solve this problem, I created another UDF called RAPID_EXTRACT_ALL(json, ‘key’). This UDF will extract all the values for the given key. For example, if there are 10 line items with invoice_id: 30, it will extract the value (30 in this case) for each item. This UDF returns each item separated by newline. I created a few stored routines called jsum, jmin, jmax, jcount, and javg. They can process the output of rapid_extract_all and aggregate it. If you want to only RAPID_EXTRACT_ALL from a portion of a document, extract that portion with the MySQL UDF JSON_EXTRACT first, then process that with RAPID_EXTRACT_ALL.

For example:

mysql> select json_extract_all(doc,'id') ids, jsum(json_extract_all(doc,'id')) from json2 limit 1\G
*************************** 1. row ***************************
                             ids: 888
889
2312
5869
8702
jsum(json_extract_all(doc,'id')): 18660.00000
1 row in set (0.01 sec)

Aggregating all of the id values in the entire collection:

mysql> select sum( jsum(json_extract_all(doc,'id')) ) from json2 ;
+-----------------------------------------+
| sum( jsum(json_extract_all(doc,'id')) ) |
+-----------------------------------------+
|                         296615411.00000 |
+-----------------------------------------+
1 row in set (2.90 sec)

Of course you could extract other fields and sort and group on them.
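For example, a sketch along these lines (the 'type' and 'ppu' keys come from the sample document above; jsum and json_extract_all are the routines described earlier) would aggregate per document type:

mysql> select json_extract(doc, 'type') as doc_type,
    ->        sum( jsum(json_extract_all(doc, 'ppu')) ) as total_ppu
    -> from json2
    -> group by doc_type
    -> order by total_ppu desc;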

Where to get the tools:
You can find the UDFs in the swanhart-tools GitHub repo. I think you will find these tools very useful for working with JSON documents in MySQL.

(This post was originally posted on my personal blog: swanhart.livejournal.com, but is reposted here for wider distribution)

The post Advanced JSON for MySQL appeared first on MySQL Performance Blog.

mysqldump with --innodb-optimize-keys causes un-restorable dumps in some conditions

Latest Forum Posts - March 10, 2015 - 1:38am
I've found a bug in '--innodb-optimize-keys' that can produce dumps which cannot be restored.

This is related to FK constraints, which FOREIGN_KEY_CHECKS=0 does not cause to be ignored under certain circumstances:

SET FOREIGN_KEY_CHECKS = 0;

# Doesn't fail, as expected #
CREATE TABLE a(
    val INT,
    FOREIGN KEY testfk ( val ) REFERENCES b( val )
);

# Fails, unexpectedly (on the first statement) #
CREATE TABLE b( val INT );
ALTER TABLE b ADD INDEX( val );

This is the typical structure of a dump created with --innodb-optimize-keys.

percona-toolkit 2.2.13: damaged data on master after pt-table-sync

Latest Forum Posts - March 9, 2015 - 11:58pm
Hi!

OS: Ubuntu 12.04.5 LTS
DB server: mariadb-server-5.5 5.5.42+maria-1~precise
Percona-toolkit: 2.2.13

We have two MariaDB servers (we also tried mysql-server-5.5 (5.5.41-0ubuntu0.12.04.1)) with master->slave replication. Recently the replication broke, so I decided to use pt-table-sync to fix it.

First I used pt-table-checksum to identify which tables are out of sync:

pt-table-checksum --chunk-size 500000 --empty-replicate-table --replicate=tracker.checksum -d tracker --tables tracker.chatlogs h=mysql-master,u=root,p=passw0rd

And then I used pt-table-sync:

pt-table-sync --execute --replicate tracker.checksum -d tracker --tables tracker.chatlogs h=mysql-master.site,u=root,p=passw0rd

Everything ran without any problems, except that some data containing Cyrillic symbols got damaged on the master server. I saw '?' signs instead of normal Cyrillic letters on both master and slave. That means pt-table-sync somehow managed to damage the data on the master DB.
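(One guess, unverified: this kind of '?' substitution usually appears when a client connects with a latin1 session character set and writes utf8 data back. Both pt-table-checksum and pt-table-sync accept a --charset option, so a re-run forcing utf8 along these lines might avoid it:)

pt-table-sync --execute --replicate tracker.checksum --charset utf8 \
  -d tracker --tables tracker.chatlogs h=mysql-master.site,u=root,p=passw0rd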

MariaDB [tracker]> describe chatlogs;
+-------------+-----------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------+-----------------------+------+-----+---------+----------------+
| id | mediumint(8) unsigned | NO | PRI | NULL | auto_increment |
| user_id | mediumint(8) | NO | MUL | NULL | |
| username | varchar(20) | NO | MUL | | |
| username_to | varchar(20) | NO | MUL | | |
| IP | varchar(8) | NO | | | |
| time | int(11) | NO | MUL | NULL | |
| message | varchar(1000) | NO | | NULL | |
| blocked | varchar(20) | NO | MUL | | |
+-------------+-----------------------+------+-----+---------+----------------+
8 rows in set (0.00 sec)


MariaDB [tracker]> show create table chatlogs\G
*************************** 1. row ***************************
Table: chatlogs
Create Table: CREATE TABLE `chatlogs` (
`id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
`user_id` mediumint(8) NOT NULL,
`username` varchar(20) NOT NULL DEFAULT '',
`username_to` varchar(20) NOT NULL DEFAULT '',
`IP` varchar(8) NOT NULL DEFAULT '',
`time` int(11) NOT NULL,
`message` varchar(1000) NOT NULL,
`blocked` varchar(20) NOT NULL DEFAULT '',
PRIMARY KEY (`id`),
KEY `user_id` (`user_id`) USING BTREE,
KEY `time` (`time`) USING BTREE,
KEY `username_to_name` (`username_to`,`username`) USING BTREE,
KEY `id_blocked` (`id`,`blocked`) USING BTREE,
KEY `username_to` (`username_to`) USING BTREE,
KEY `username` (`username`) USING BTREE,
KEY `blocked` (`blocked`) USING BTREE,
KEY `username_id` (`username`,`id`) USING BTREE,
KEY `id_username` (`id`,`username`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=156476 DEFAULT CHARSET=utf8 ROW_FORMAT=COMPACT
1 row in set (0.00 sec)


Before pt-table-sync (on master):

MariaDB [tracker]> SELECT id, FROM_UNIXTIME(time) FROM chatlogs WHERE message LIKE '%? ?%' OR blocked LIKE '%? ?%' ORDER BY time DESC;
Empty set (0.17 sec)


After pt-table-sync (on master):

MariaDB [tracker]> SELECT id, FROM_UNIXTIME(time) FROM chatlogs WHERE message LIKE '%? ?%' OR blocked LIKE '%? ?%' ORDER BY time DESC;
+--------+---------------------+
| id | FROM_UNIXTIME(time) |
+--------+---------------------+
| 156475 | 2015-02-23 12:48:15 |
| 156474 | 2015-02-23 12:47:47 |
| 156472 | 2015-02-23 10:55:57 |
| 156471 | 2015-02-23 08:04:15 |
| 156469 | 2015-02-23 07:34:57 |
| 156468 | 2015-02-23 07:34:29 |
| 156467 | 2015-02-23 05:57:14 |
| 156466 | 2015-02-23 05:51:29 |
| 156465 | 2015-02-23 05:45:44 |
| 156464 | 2015-02-23 05:43:37 |
| 156463 | 2015-02-23 05:21:37 |
| 156462 | 2015-02-23 05:15:51 |
| 156461 | 2015-02-23 05:15:30 |
| 156460 | 2015-02-22 21:18:18 |
| 156459 | 2015-02-22 21:16:30 |
+--------+---------------------+
15 rows in set (0.17 sec)

I can provide dumps for this table for both master and slave before sync so that it would be possible to reproduce the issue.

Could anybody please tell me whether I'm doing anything wrong when syncing the slave DB with the master, and why this sort of thing could happen?

Thanks in advance.

--
Best regards,
Sergey Arlashin

Percona XtraBackup without root

Latest Forum Posts - March 9, 2015 - 1:01pm
Is there any way to use Percona XtraBackup on a server where I don't have root access?
