
use innobackupex to backup a specific database

Latest Forum Posts - May 5, 2016 - 12:11am
I want to create a slave to sync a specific database which is used for reads only, so I used innobackupex to back up the database and used it to initialize my slave.
The backup command was Code: innobackupex --user=xxx --password=xxx --databases=rpt_version /mnt/bak and it completed OK. However, the apply-log stage printed some errors, although it still said "completed OK!" at the end.
The apply command was Code: innobackupex --user=xxx --password=xxx --databases=rpt_version --apply-log /mnt/bak .
The output of Code: xtrabackup --print-defaults is
HTML Code:
--log-error=/var/log/mysql/error.log
--log-output=FILE
--general_log=0
--general_log_file=/infobright_datadir/datadir_new/general_query.log
--slow_query_log=1
--long_query_time=5
--slow_query_log_file=/infobright_datadir/datadir_new/slow_query.log
--log_error_verbosity=2
--server-id=1
--binlog_format=ROW
--log-bin=/infobright_datadir/datadir_new/mysql-bin.log
--binlog_do_db=dj_version
--binlog_do_db=out_version
--binlog_do_db=rpt_version
--expire_logs_days=7
--collation-server=utf8_unicode_ci
--init-connect=SET NAMES utf8
--character-set-server=utf8
--secure-file-priv=
--sql_mode=ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
--symbolic-links=0
--innodb_buffer_pool_size=2048M
--innodb_read_io_threads=20
--query_cache_size=128M
--tmp_table_size=128M
--key_buffer_size=128M
--max_allowed_packet=256M
--innodb_file_per_table=1
--innodb_data_home_dir=/infobright_datadir/datadir_new
--innodb_log_group_home_dir=/infobright_datadir/datadir_new
and the output of the apply-log stage is:
HTML Code: 160505 14:34:04 innobackupex: Starting the apply-log operation IMPORTANT: Please check that the apply-log run completes successfully. At the end of a successful apply-log run innobackupex prints "completed OK!". innobackupex version 2.4.2 based on MySQL server 5.7.11 Linux (x86_64) (revision id: 8e86a84) xtrabackup: cd to /infobright_datadir/bak/2016-05-05_12-26-48 xtrabackup: This target seems to be already prepared. InnoDB: Number of pools: 1 xtrabackup: notice: xtrabackup_logfile was already used to '--prepare'. xtrabackup: using the following InnoDB configuration for recovery: xtrabackup: innodb_data_home_dir = . xtrabackup: innodb_data_file_path = ibdata1:12M:autoextend xtrabackup: innodb_log_group_home_dir = . xtrabackup: innodb_log_files_in_group = 2 xtrabackup: innodb_log_file_size = 50331648 xtrabackup: using the following InnoDB configuration for recovery: xtrabackup: innodb_data_home_dir = . xtrabackup: innodb_data_file_path = ibdata1:12M:autoextend xtrabackup: innodb_log_group_home_dir = . xtrabackup: innodb_log_files_in_group = 2 xtrabackup: innodb_log_file_size = 50331648 xtrabackup: Starting InnoDB instance for recovery. xtrabackup: Using 104857600 bytes for buffer pool (set by --use-memory parameter) InnoDB: PUNCH HOLE support available InnoDB: Mutexes and rw_locks use GCC atomic builtins InnoDB: Uses event mutexes InnoDB: GCC builtin __sync_synchronize() is used for memory barrier InnoDB: Compressed tables use zlib 1.2.3.4 InnoDB: Number of pools: 1 InnoDB: Using CPU crc32 instructions InnoDB: Initializing buffer pool, total size = 100M, instances = 1, chunk size = 100M InnoDB: Completed initialization of buffer pool InnoDB: page_cleaner coordinator priority: -20 InnoDB: Highest supported file format is Barracuda. InnoDB: Removed temporary tablespace data file: "ibtmp1" InnoDB: Creating shared tablespace for temporary tables InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ... 
InnoDB: File './ibtmp1' size is now 12 MB. InnoDB: 96 redo rollback segment(s) found. 1 redo rollback segment(s) are active. InnoDB: 32 non-redo rollback segment(s) are active. InnoDB: 5.7.11 started; log sequence number 1301619091050 InnoDB: Failed to find tablespace for table `mysql`.`innodb_table_stats` in the cache. Attempting to load the tablespace with space id 321 InnoDB: Operating system error number 2 in a file operation. InnoDB: The error means the system cannot find the path specified. InnoDB: Cannot open datafile for read-only: './mysql/innodb_table_stats.ibd' OS error: 71 InnoDB: Operating system error number 2 in a file operation. InnoDB: The error means the system cannot find the path specified. InnoDB: Could not find a valid tablespace file for `mysql/innodb_table_stats`. Please refer to http://dev.mysql.com/doc/refman/5.7/en/innodb-troubleshooting-datadict.html for how to resolve the issue. InnoDB: Failed to find tablespace for table `mysql`.`innodb_index_stats` in the cache. Attempting to load the tablespace with space id 320 InnoDB: Operating system error number 2 in a file operation. InnoDB: The error means the system cannot find the path specified. InnoDB: Cannot open datafile for read-only: './mysql/innodb_index_stats.ibd' OS error: 71 InnoDB: Operating system error number 2 in a file operation. InnoDB: The error means the system cannot find the path specified. InnoDB: Could not find a valid tablespace file for `mysql/innodb_index_stats`. Please refer to http://dev.mysql.com/doc/refman/5.7/en/innodb-troubleshooting-datadict.html for how to resolve the issue. InnoDB: xtrabackup: Last MySQL binlog file position 8202245, file name mysql-bin.000122 xtrabackup: starting shutdown with innodb_fast_shutdown = 1 InnoDB: FTS optimize thread exiting. InnoDB: Starting shutdown... 
InnoDB: Shutdown completed; log sequence number 1301619092641 InnoDB: Number of pools: 1 xtrabackup: using the following InnoDB configuration for recovery: xtrabackup: innodb_data_home_dir = . xtrabackup: innodb_data_file_path = ibdata1:12M:autoextend xtrabackup: innodb_log_group_home_dir = . xtrabackup: innodb_log_files_in_group = 2 xtrabackup: innodb_log_file_size = 50331648 InnoDB: PUNCH HOLE support available InnoDB: Mutexes and rw_locks use GCC atomic builtins InnoDB: Uses event mutexes InnoDB: GCC builtin __sync_synchronize() is used for memory barrier InnoDB: Compressed tables use zlib 1.2.3.4 InnoDB: Number of pools: 1 InnoDB: Using CPU crc32 instructions InnoDB: Initializing buffer pool, total size = 100M, instances = 1, chunk size = 100M InnoDB: Completed initialization of buffer pool InnoDB: page_cleaner coordinator priority: -20 InnoDB: Highest supported file format is Barracuda. InnoDB: Removed temporary tablespace data file: "ibtmp1" InnoDB: Creating shared tablespace for temporary tables InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ... InnoDB: File './ibtmp1' size is now 12 MB. InnoDB: 96 redo rollback segment(s) found. 1 redo rollback segment(s) are active. InnoDB: 32 non-redo rollback segment(s) are active. InnoDB: 5.7.11 started; log sequence number 1301619092641 xtrabackup: starting shutdown with innodb_fast_shutdown = 1 InnoDB: FTS optimize thread exiting. InnoDB: Starting shutdown... InnoDB: Failed to find tablespace for table `mysql`.`innodb_index_stats` in the cache. Attempting to load the tablespace with space id 320 InnoDB: Operating system error number 2 in a file operation. InnoDB: The error means the system cannot find the path specified. InnoDB: Cannot open datafile for read-only: './mysql/innodb_index_stats.ibd' OS error: 71 InnoDB: Operating system error number 2 in a file operation. InnoDB: The error means the system cannot find the path specified. 
InnoDB: Could not find a valid tablespace file for `mysql/innodb_index_stats`. Please refer to http://dev.mysql.com/doc/refman/5.7/en/innodb-troubleshooting-datadict.html for how to resolve the issue. InnoDB: Failed to find tablespace for table `mysql`.`innodb_table_stats` in the cache. Attempting to load the tablespace with space id 321 InnoDB: Operating system error number 2 in a file operation. InnoDB: The error means the system cannot find the path specified. InnoDB: Cannot open datafile for read-only: './mysql/innodb_table_stats.ibd' OS error: 71 InnoDB: Operating system error number 2 in a file operation. InnoDB: The error means the system cannot find the path specified. InnoDB: Could not find a valid tablespace file for `mysql/innodb_table_stats`. Please refer to http://dev.mysql.com/doc/refman/5.7/en/innodb-troubleshooting-datadict.html for how to resolve the issue. InnoDB: Shutdown completed; log sequence number 1301619094187 160505 14:34:08 completed OK!
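The "Could not find a valid tablespace" messages appear because --databases=rpt_version excluded the mysql schema, so its .ibd files are absent from the backup; xtrabackup still finishes with "completed OK!". For a partial backup intended to seed a slave, the documented route is to prepare it with --export so each table can be imported individually. A hedged sketch, reusing the backup path from the log above:

```shell
# Sketch only: prepare a partial backup for per-table import on the slave.
# --export generates the per-table metadata needed by
# ALTER TABLE ... IMPORT TABLESPACE on the destination server.
innobackupex --apply-log --export /infobright_datadir/bak/2016-05-05_12-26-48
```

After this, each table in rpt_version can be imported on the slave via DISCARD/IMPORT TABLESPACE rather than copying the whole datadir.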

Getting error while preparing backup

Latest Forum Posts - May 4, 2016 - 10:22pm
I have taken a full backup using innobackupex. While trying to prepare the backup using --apply-log, I am getting the error "Segmentation fault (core dumped)".

I went through many forums but didn't find any solution. Is this a bug?

Unsafe statement written to the binary log using statement format since BINLOG_FORMAT = STATEMENT...

Latest Forum Posts - May 4, 2016 - 6:51pm
Hi, all,

There are many warning messages like the one below when we use pt-table-checksum to check the data on the 3 nodes of our PXC cluster. Why does it occur, and is there any way to correct it?
In the my.cnf file, BINLOG_FORMAT is set to ROW.

thanks in advance.



2016-05-05 03:37:41 24482 [Warning] Unsafe statement written to the binary log using statement format since BINLOG_FORMAT = STATEMENT. REPLACE... SELECT is unsafe because the order in which rows are retrieved by the SELECT determines which (if any) rows are replaced. This order cannot be predicted and may differ on master and the slave. Statement: REPLACE INTO `percona`.`checksums` (db, tbl, chunk, chunk_index, lower_boundary, upper_boundary, this_cnt, this_crc) SELECT /*!99997*/ 'sp2p', 't_wsapi_token', '2', 'PRIMARY', '694425', '1004712', COUNT(*) AS cnt, COALESCE(LOWER(CONV(BIT_XOR(CAST(CRC32(CONCAT_WS(' #', `ws_api_token_id`, `fk_ws_api_user_id`, `token`, `comment`, UNIX_TIMESTAMP(`create_time`), UNIX_TIMESTAMP(`update_time`), `create_by`, `update_by`, `record_status`, CONCAT(ISNULL(`comment`), ISNULL(`create_time`), ISNULL(`update_time`), ISNULL(`create_by`), ISNULL(`update_by`)))) AS UNSIGNED)), 10, 16)), 0) AS crc FROM `sp2p`.`t_wsapi_token` FORCE INDEX(`PRIMARY`) WHERE ((`ws_api_token_id` >= '694425')) AND ((`ws_api_token_id` <= '100471
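This warning is expected: pt-table-checksum deliberately switches its own session to binlog_format=STATEMENT so that the checksum queries are re-executed on each replica rather than replicated as row images; the global ROW setting is untouched. A hedged way to confirm this while the tool runs (assumes a local client with credentials configured):

```shell
# Sketch only: the global setting stays ROW; only the tool's own session changes.
mysql -e "SHOW GLOBAL VARIABLES LIKE 'binlog_format'"
```

If the warnings are noisy, they can be accepted as benign while the tool is running; they do not indicate data drift by themselves.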


Percona XtraDB Cluster 5.6 does not auto-sync mysql.proc

Latest Forum Posts - May 4, 2016 - 6:37pm
There are 3 nodes in our cluster. I updated the field "definer" in the mysql.proc table, then found that the values on the other 2 nodes were still the old ones. Does PXC not auto-sync that kind of data?


Is there any way to get PXC to auto-sync the values in mysql.proc when the "definer" is updated?
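Galera replicates DDL statements (via total order isolation), but direct DML against the MyISAM system tables in the mysql schema is not replicated by default, which is why an UPDATE on mysql.proc only changes the local node. A hedged workaround is to change the definer through replicated DDL by recreating the routine; the database, routine, and definer names below are examples:

```shell
# Sketch only: recreate the routine so the change replicates as DDL.
mysql -e "DROP PROCEDURE IF EXISTS mydb.myproc"
# myproc.sql would contain: CREATE DEFINER='newuser'@'%' PROCEDURE myproc ...
mysql mydb < myproc.sql
```

Both statements are DDL and therefore propagate to all nodes, unlike an UPDATE of mysql.proc.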

Deadlock with foreign key constraints

Latest Forum Posts - May 4, 2016 - 5:02pm
We are using MariaDB 10.1 and facing some weird deadlock issues. While investigating, we ran a sample program to test the foreign keys and found a deadlock error. Can someone please explain this in a bit of detail so we understand how locking works in Galera?

We used a 2-node cluster for this testing.

Session 1 (Node1):
create table dl1(pk int primary key, data varchar(100));
create table dl2(pk int primary key, pk1 int not null, constraint dl2_fk foreign key(pk1) references dl1(pk), data varchar(100));
set autocommit=off;
insert into dl1 values(1, 'a');

Session 2 (Node2):
set autocommit=off;
insert into dl2(pk, pk1, data) values(10, 1, 'a0');
ERROR 1452 (23000): Cannot add or update a child row: a foreign key constraint fails (`demoApp`.`dl2`, CONSTRAINT `dl2_fk` FOREIGN KEY (`pk1`) REFERENCES `dl1` (`pk`)) -- as expected
select * from dl1;
Empty set (0.00 sec) -- as expected

Session 1 (Node1):
commit;

Session 2 (Node2):
select * from dl1;
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction --- strange

Once I got this error I ran 'SHOW ENGINE INNODB STATUS\G' on Node2; here is the truncated output.
------------------------
LATEST FOREIGN KEY ERROR
------------------------
2016-05-04 17:46:34 7ff98ead8b00 Transaction:
TRANSACTION 3852, ACTIVE 0 sec inserting, thread declared inside InnoDB 5000
mysql tables in use 1, locked 1
3 lock struct(s), heap size 1184, 1 row lock(s), undo log entries 1 --------------------> (LOOK HERE)
MySQL thread id 2507, OS thread handle 0x7ff98ead8b00, query id 73514 localhost root update
insert into dl2(pk, pk1, data) values(10, 1, 'a0')
Foreign key constraint fails for table `test`.`dl2`:
,
CONSTRAINT `dl2_fk` FOREIGN KEY (`pk1`) REFERENCES `dl1` (`pk`)
Trying to add in child table, in index `dl2_fk` tuple:
DATA TUPLE: 2 fields;
0: len 4; hex 80000001; asc ;;
1: len 4; hex 8000000a; asc ;;
But in parent table `test`.`dl1`, in index `PRIMARY`,
the closest match we can find is record:
PHYSICAL RECORD: n_fields 1; compact format; info bits 0
0: len 8; hex 696e66696d756d00; asc infimum ;;
--------------
ROW OPERATIONS
--------------
0 queries inside InnoDB, 0 queries in queue
0 read views open inside InnoDB
1 RW transactions active inside InnoDB ------------------> (LOOK HERE)
0 RO transactions active inside InnoDB
1 out of 1000 descriptors used
Main thread process no. 1242, id 140706194630400, state: sleeping
Number of rows inserted 0, updated 0, deleted 0, read 0
0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
Number of system rows inserted 0, updated 0, deleted 0, read 0
0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
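What the poster observes is consistent with a Galera brute-force (BF) abort rather than a classic InnoDB deadlock: when Node1's commit replicates to Node2, it conflicts with the open transaction there and aborts it, and the next statement in that session (even a SELECT) returns error 1213. The standard client-side response is to retry the transaction; a hedged sketch, with the database and table names taken from the example above and an arbitrary retry count:

```shell
# Sketch only: retry a transaction aborted with error 1213 (BF abort).
for attempt in 1 2 3; do
    if mysql demoApp -e "BEGIN; SELECT * FROM dl1; COMMIT;"; then
        break                  # transaction succeeded
    fi
    echo "attempt ${attempt} failed (possibly BF-aborted), retrying"
    sleep 1
done
```

In Galera, first-committer-wins certification means any multi-statement transaction can be BF-aborted at any point, so application-level retry logic is generally recommended.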

ETA on Percona Server packages for Ubuntu 16.04 LTS (Xenial Xerus)?

Latest Forum Posts - May 4, 2016 - 8:31am
I tried installing the available ones, but it doesn't end up in a usable state. If anyone has got it to work, I'd love to see your notes.

MySQL High Availability: The Road Ahead for Percona and XtraDB Cluster

Latest MySQL Performance Blog posts - May 4, 2016 - 7:07am

This blog post discusses what is going on in the MySQL high availability market, and what Percona’s plans are for helping customers with high availability solutions.

One thing I like to tell people is that you shouldn’t view Percona as a “software” company, but as a “solution” company. Our goal has always been to provide the best solution that meets each customer’s situation, rather than push our own software, regardless of whether it is the best fit or not. As a result, we have customers running all kinds of MySQL “flavors”: MySQL, MariaDB, Percona Server, Amazon RDS and Google Cloud SQL. We’re happy to help customers be successful with the technology of their choice, and advise them on alternatives when we see a better fit.

One area where I have been increasingly uneasy is our advanced high availability support with Percona XtraDB Cluster and other Galera-based technologies. In 2011, when we started working on Percona XtraDB Cluster together with Codership, we needed to find a way to arrange investment into the development of Galera technology to bring it to market. So we made a deal, which, while providing needed development resources, also required us to price Percona XtraDB Cluster support as a very expensive add-on option. While this made sense at the time, it also meant few companies could afford XtraDB Cluster support from Percona, especially at large scale.

As a few years passed, the Galera technology became the mainstream high-end high availability option. In addition to being available in Percona XtraDB Cluster, it has been included in MariaDB, as well as Galera Cluster for MySQL. Additionally, the alternative technology to solve the same problem – MySQL Group Replication – started to be developed by the MySQL Team at Oracle. With all these changes, it was impossible for us to provide affordable support for Percona XtraDB Cluster due to our previous commercial agreement with Codership, which reflected a very different market situation than the one we now find ourselves facing.

As a result, over a year ago we exited our support partnership agreement with Codership and moved the support and development function in-house. These changes have proven to be positive for our customers, allowing us to better focus on their priorities and provide better response time for issues, as these no longer require partner escalation.

Today we’re taking the next natural step – we will no longer require customers to purchase Percona XtraDB Cluster as a separate add-on. Percona will include support for XtraDB Cluster and other Galera-based replication technologies in our Enterprise and Premier support levels, as well as our Percona Care and Managed Services subscriptions. Furthermore, we are going to support Oracle’s MySQL Group Replication technology at no additional cost too, once it becomes generally available, so our customers have access to the best high availability technology for their deployment.

As part of this change, you will also see us focusing on hardening XtraDB Cluster and Galera technology, making it better suited for demanding business workloads, as well as more secure and easier to use. All of our changes will be available as 100% open source solutions and will also be contributed back to the Galera development team to incorporate into their code base if they wish.

I believe making the Galera code better is the most appropriate action for us at this point!


Partition support in Percona XtraDB Cluster 5.6

Latest Forum Posts - May 3, 2016 - 3:09pm
Hello Percona community! As far as I can tell, it seems the support for partitioning is limited to the hash scheme. We are running an application that uses range partitions and they fail to be created. I am not finding any documentation to support or reject this idea, however, and wanted to confirm it.

MySQL 5.7: initial flushing analysis and why Performance Schema data is incomplete

Latest MySQL Performance Blog posts - May 3, 2016 - 12:22pm

In this post, we’ll examine why in an initial flushing analysis we find that Performance Schema data is incomplete.

Having shown the performance impact of Percona Server 5.7 patches, we can now discuss their technical reasoning and details. Let’s revisit the MySQL 5.7.11 performance schema synch wait graph from the previous post, for the case of unlimited InnoDB concurrency:

First of all, this graph is a little “nicer” than reality, which limits its diagnostic value. There are two reasons for this. The first one is that page cleaner worker threads are invisible to Performance Schema (see bug 79894). This alone limits PFS value in 5.7 if, for example, one tries to select only the events in the page cleaner threads, or monitors low concurrency where the cleaner thread count is a non-negligible part of the total threads.

To understand the second reason, let’s look into PMP for the same setting. Note that selected intermediate stack frames were removed for clarity, especially in the InnoDB mutex implementation.

660 pthread_cond_wait,enter(ib0mutex.h:850),buf_dblwr_write_single_page(ib0mutex.h:850),buf_flush_write_block_low(buf0flu.cc:1096),buf_flush_page(buf0flu.cc:1096),buf_flush_single_page_from_LRU(buf0flu.cc:2217),buf_LRU_get_free_block(buf0lru.cc:1401),...
631 pthread_cond_wait,buf_dblwr_write_single_page(buf0dblwr.cc:1213),buf_flush_write_block_low(buf0flu.cc:1096),buf_flush_page(buf0flu.cc:1096),buf_flush_single_page_from_LRU(buf0flu.cc:2217),buf_LRU_get_free_block(buf0lru.cc:1401),...
337 pthread_cond_wait,PolicyMutex<TTASEventMutex<GenericPolicy>(ut0mutex.ic:89),get_next_redo_rseg(trx0trx.cc:1185),trx_assign_rseg_low(trx0trx.cc:1278),trx_set_rw_mode(trx0trx.cc:1278),lock_table(lock0lock.cc:4076),...
324 libaio::??(libaio.so.1),LinuxAIOHandler::collect(os0file.cc:2448),LinuxAIOHandler::poll(os0file.cc:2594),...
241 pthread_cond_wait,PolicyMutex<TTASEventMutex<GenericPolicy>(ut0mutex.ic:89),trx_write_serialisation_history(trx0trx.cc:1578),trx_commit_low(trx0trx.cc:2135),...
147 pthread_cond_wait,enter(ib0mutex.h:850),trx_undo_assign_undo(ib0mutex.h:850),trx_undo_report_row_operation(trx0rec.cc:1918),...
112 pthread_cond_wait,mtr_t::s_lock(sync0rw.ic:433),btr_cur_search_to_nth_level(btr0cur.cc:1008),...
83 poll(libc.so.6),Protocol_classic::get_command(protocol_classic.cc:965),do_command(sql_parse.cc:935),handle_connection(connection_handler_per_thread.cc:301),...
64 pthread_cond_wait,Per_thread_connection_handler::block_until_new_connection(thr_cond.h:136),...

The top wait in both PMP and the graph is the 660 samples of enter mutex in buf_dblwr_write_single_page, which is the doublewrite mutex. Now try to find the nearly as hot 631 samples of event wait in buf_dblwr_write_single_page in the PFS output. You won’t find it, because InnoDB OS event waits are not annotated in Performance Schema. In most cases this is correct, as OS event waits tend to be used when there is no work to do. The thread waits for work to appear, or for time to pass. But in the report above, the waiting thread is blocked from proceeding with useful work (see bug 80979).

Now that we’ve shown the two reasons why PFS data is not telling the whole server story, let’s take PMP data instead and consider how to proceed. Those top two PMP waits suggest 1) the server is performing a lot of single page flushes, and 2) those single page flushes have their concurrency limited by the eight doublewrite single-page flush slots available, and that the wait for a free slot to appear is significant.

Two options become apparent at this point: either make the single-page flush doublewrite more parallel or reduce the single-page flushing in the first place. We’re big fans of the latter option since version 5.6 performance work, where we configured Percona Server to not perform single-page flushes at all by introducing the innodb_empty_free_list_algorithm option, with the “backoff” default.
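The innodb_empty_free_list_algorithm option mentioned above is a Percona Server variable; a hedged way to check its current value on a running instance (assumes a local Percona Server 5.6+ with a configured client):

```shell
# Sketch only: inspect the single-page-flush strategy on Percona Server.
# "backoff" is the default the post describes; "legacy" restores upstream behavior.
mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_empty_free_list_algorithm'"
```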

The next post in the series will describe how we removed single-page flushing in 5.7.

Best Practices for Configuring Optimal MySQL Memory Usage

Latest MySQL Performance Blog posts - May 3, 2016 - 7:26am

In this blog post, we’ll discuss some of the best practices for configuring optimal MySQL memory usage.

Correctly configuring the use of available memory resources is one of the most important things you have to get right with MySQL for optimal performance and stability. As of MySQL 5.7, the default configuration uses a very limited amount of memory – leaving defaults is one of the worst things you can do. But configuring it incorrectly can result in even worse performance (or even crashes).

The first rule of configuring MySQL memory usage is you never want your MySQL to cause the operating system to swap. Even minor swapping activity can dramatically reduce MySQL performance. Note the keyword “activity” here. It is fine to have some used space in your swap file, as there are probably parts of the operating system that are unused when MySQL is running, and it’s a good idea to swap them out. What you don’t want is constant swapping going on during the operation, which is easily seen in the “si” and “so” columns in the vmstat output.

Example: No Significant Swapping

Example: Heavy Swapping Going On

If you’re running Percona Monitoring and Management, you can also look into the Swap Activity graph in System Overview Dashboard.

If you have spikes of more than 1MB/sec, or constant swap activity, you might need to revisit your memory configuration.

MySQL Memory allocation is complicated. There are global buffers, per-connection buffers (which depend on the workload), and some uncontrolled memory allocations (i.e., inside Stored Procedures), all contributing to difficulties in computing how much memory MySQL will really use for your workload. It is better to check it by looking at the virtual memory size (VSZ) that MySQL uses. You can get it from “top”, or by running ps aux | grep mysqld.

mysql     3939 30.3 53.4 11635184 8748364 ?    Sl   Apr08 9106:41 /usr/sbin/mysqld

The 5th column here shows VSZ usage (about 11GB).

Note that the VSZ is likely to change over time. It is often a good idea to plot it in your monitoring system and set an alert to ping you when it hits a specified threshold. Don't let the mysqld process VSZ exceed 90% of the system memory (and less if you're running more than just MySQL on the system).
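The 90% threshold check above is simple enough to script; a hedged sketch where both values are hard-coded examples (in practice vsz_kb would come from `ps -o vsz= -C mysqld` and mem_kb from MemTotal in /proc/meminfo):

```shell
# Sketch only: flag when mysqld VSZ exceeds 90% of system memory.
vsz_kb=11635184                       # VSZ from the ps line above (~11GB)
mem_kb=16777216                       # example: a 16GB host
limit_kb=$(( mem_kb * 90 / 100 ))
if [ "$vsz_kb" -gt "$limit_kb" ]; then
    echo "ALERT: mysqld VSZ ${vsz_kb} kB exceeds 90% of memory"
else
    echo "OK: mysqld VSZ within limits"
fi
```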

It’s a good idea to start on the safe side by conservatively setting your global and per-connection buffers, and then increase them as you go. Many can be set online, including innodb_buffer_pool_size in MySQL 5.7.

So how do you decide how much memory to allocate to MySQL versus everything else? In most cases you shouldn’t commit more than 90% of your physical memory to MySQL, as you need to have some reserved for the operating system and things like caching binary log files, temporary sort files, etc.

There are cases when MySQL should use significantly less than 90% of memory:

  • If there are other important processes running on the same server, either all the time or periodically. If you have heavy batch jobs run from cron, which require a lot of memory, you’ll need to account for that.
  • If you want to use OS caching for some storage engines. With InnoDB, we recommend innodb_flush_method=O_DIRECT  in most cases, which won’t use Operating System File Cache. However, there have been cases when using buffered IO with InnoDB made sense. If you’re still running MyISAM, you will need OS cache for the “data” part of your tables. With TokuDB, using OS cache is also a good idea for some workloads.
  • If your workload places significant demands on the operating system cache: MyISAM on-disk temporary tables, sort files, and some other temporary files which MySQL creates need to be well-cached for optimal performance.

Once you know how much memory you want the MySQL process to have as a whole, you’ll need to think about for what purpose the memory should be used inside MySQL. The first part of memory usage in MySQL is workload related – if you have many connections active at the same time that run heavy selects using a lot of memory for sorting or temporary tables, you might need a lot of memory (especially if Performance Schema is enabled). In other cases this amount of memory is minimal. You’ll generally need somewhere between 1 and 10GB for this purpose.

Another thing you need to account for is memory fragmentation. Depending on the memory allocation library you’re using (glibc, TCMalloc, jemalloc, etc.), the operating system settings such as Transparent Huge Pages (THP) and workload may show memory usage to grow over time (until it reaches some steady state). Memory fragmentation can also account for 10% or more of additional memory usage.

Finally, let’s think about various global buffers and caching. In typical cases, you mainly have innodb_buffer_pool_size to worry about. But you might also need to consider key_buffer_size, tokudb_cache_size and query_cache_size, as well as table_cache and table_open_cache. These are also responsible for global memory allocation, even though they are not counted in bytes. Performance Schema may also take a lot of memory, especially if you have a large number of connections or tables in the system.

When you specify the size of the buffers and caches, you should determine what you’re specifying. For innodb_buffer_pool_size, remember there is another 5-10% of memory that is allocated for additional data structures – and this number is larger if you’re using compression or set innodb_page_size smaller than 16K. For tokudb_cache_size, it’s important to remember that the setting specified is a guide, not a “hard” limit: the cache size can actually grow slightly larger than the specified limit.

For systems with large amounts of memory, the database cache is going to be by far the largest memory consumer, and you’re going to allocate most of your memory to it. When you add extra memory to the system, it is typically to increase the database cache size.

Let’s do some math for a specific example. Assume you have a system (physical or virtual) with 16GB of memory. We are only running MySQL on this system, with the InnoDB storage engine and innodb_flush_method=O_DIRECT, so we can allocate 90% (or 14.4GB) of memory to MySQL. For our workload, we assume connection handling and other MySQL connection-based overhead will take up 1GB (leaving 13.4GB). 0.4GB is likely to be consumed by various other global buffers (innodb_log_buffer_size, table caches, other miscellaneous needs, etc.), which now leaves 13GB. Considering the 5-7% overhead that the InnoDB Buffer Pool has, a sensible setting is innodb_buffer_pool_size=12G – which we very commonly see working well for systems with 16GB of memory.
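The arithmetic above can be sketched in a few lines of shell; all numbers come from the 16GB example in the text:

```shell
# Sketch only: the memory-budget math for a 16GB MySQL-only host.
total_mb=16384                        # 16GB system
mysql_mb=$(( total_mb * 90 / 100 ))   # 90% to MySQL -> 14745 MB (~14.4GB)
conn_mb=1024                          # connection handling overhead (1GB)
misc_mb=410                           # other global buffers (~0.4GB)
pool_budget=$(( mysql_mb - conn_mb - misc_mb ))
echo "buffer pool budget: ${pool_budget} MB"   # ~13GB before the 5-7% overhead
```

After subtracting the buffer pool's own 5-7% bookkeeping overhead from that ~13GB budget, you land on the innodb_buffer_pool_size=12G figure from the post.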

Now that we have configured MySQL memory usage, we also should look at the OS configuration. The first question to ask is: if we don’t want MySQL to swap, should we even have the swap file enabled? In most cases, the answer is yes – you want to have the swap file enabled (strive for 4GB minimum, and no less than 25% of memory installed) for two reasons:

  • The operating system is quite likely to have some portions that are unused when it is running as a database server. It is better to let it swap those out instead of forcing it to keep it in memory.
  • If you’ve made a mistake in the MySQL configuration, or you have some rogue process taking much more memory than expected, it is usually a much better situation to lose performance due to a swap then to kill MySQL with an out of memory (OOM) error – potentially causing downtime.

As we only want the swap file used in emergencies, such as when there is no memory available or to swap out idle processes, we want to reduce the operating system’s tendency to swap (echo 1 > /proc/sys/vm/swappiness). Without this configuration setting you might find the OS swapping out portions of MySQL just because it feels it needs to increase the amount of available file cache (which is almost always a wrong choice for MySQL).
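The echo shown above takes effect immediately but is lost on reboot; a hedged sketch of making it persistent (both commands require root, and /etc/sysctl.conf is the conventional location on most distributions):

```shell
# Sketch only: apply the swappiness setting now and persist it across reboots.
sysctl -w vm.swappiness=1
echo 'vm.swappiness = 1' >> /etc/sysctl.conf
```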

The next thing when it comes to OS configuration is the Out Of Memory (OOM) killer. You may have seen a message like this in your kernel log file:

Apr 24 02:43:18 db01 kernel: Out of memory: Kill process 22211 (mysqld) score 986 or sacrifice child

When MySQL itself is at fault, it’s a pretty rational thing to do. However, it’s also possible the real problem was one of the batch activities you’re running: scripts, backups, etc. In this case, you probably want those processes to be terminated if the system does not have enough memory, rather than MySQL.

To make MySQL a less likely candidate to be killed by the OOM killer, you can adjust the behavior to make MySQL less preferable with the following:

echo '-800' > /proc/$(pidof mysqld)/oom_score_adj

This will make the Linux kernel prefer killing other heavy memory consumers first.

Finally, on a system with more than one CPU socket, you should care about NUMA when it comes to MySQL memory allocation. In newer MySQL versions, you want to enable innodb_numa_interleave=1. In older versions you can either manually run numactl --interleave=all before you start the MySQL server, or use the numa_interleave configuration option in Percona Server.

 



3-node setup failure

Latest Forum Posts - May 2, 2016 - 2:32pm
I am trying to set up a 3-node cluster on Ubuntu; it seems my first node is bootstrapped:
mysql> show status like 'wsrep%';
+------------------------------+--------------------------------------+
| Variable_name | Value |
+------------------------------+--------------------------------------+
| wsrep_local_state_uuid | 713e12a9-10a8-11e6-a7a6-02ef683208fc |
| wsrep_protocol_version | 7 |
| wsrep_last_committed | 0 |
| wsrep_replicated | 0 |
| wsrep_replicated_bytes | 0 |
| wsrep_repl_keys | 0 |
| wsrep_repl_keys_bytes | 0 |
| wsrep_repl_data_bytes | 0 |
| wsrep_repl_other_bytes | 0 |
| wsrep_received | 17 |
| wsrep_received_bytes | 1487 |
| wsrep_local_commits | 0 |
| wsrep_local_cert_failures | 0 |
| wsrep_local_replays | 0 |
| wsrep_local_send_queue | 0 |
| wsrep_local_send_queue_max | 2 |
| wsrep_local_send_queue_min | 0 |
| wsrep_local_send_queue_avg | 0.500000 |
| wsrep_local_recv_queue | 0 |
| wsrep_local_recv_queue_max | 2 |
| wsrep_local_recv_queue_min | 0 |
| wsrep_local_recv_queue_avg | 0.058824 |
| wsrep_local_cached_downto | 0 |
| wsrep_flow_control_paused_ns | 0 |
| wsrep_flow_control_paused | 0.000000 |
| wsrep_flow_control_sent | 0 |
| wsrep_flow_control_recv | 0 |
| wsrep_cert_deps_distance | 0.000000 |
| wsrep_apply_oooe | 0.000000 |
| wsrep_apply_oool | 0.000000 |
| wsrep_apply_window | 0.000000 |
| wsrep_commit_oooe | 0.000000 |
| wsrep_commit_oool | 0.000000 |
| wsrep_commit_window | 0.000000 |
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| wsrep_cert_index_size | 0 |
| wsrep_cert_bucket_count | 22 |
| wsrep_gcache_pool_size | 1707 |
| wsrep_causal_reads | 0 |
| wsrep_cert_interval | 0.000000 |
| wsrep_incoming_addresses | 10.132.18.1:3306 |
| wsrep_evs_delayed | |
| wsrep_evs_evict_list | |
| wsrep_evs_repl_latency | 0/0/0/0/0 |
| wsrep_evs_state | OPERATIONAL |
| wsrep_gcomm_uuid | 713daa05-10a8-11e6-bb61-eaab06c0d826 |
| wsrep_cluster_conf_id | 7 |
| wsrep_cluster_size | 1 |
| wsrep_cluster_state_uuid | 713e12a9-10a8-11e6-a7a6-02ef683208fc |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| wsrep_local_bf_aborts | 0 |
| wsrep_local_index | 0 |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy <info@codership.com> |
| wsrep_provider_version | 3.14(r53b88eb) |
| wsrep_ready | ON |
+------------------------------+--------------------------------------+

But when I try to start the second node, it fails:
 * Starting MySQL (Percona XtraDB Cluster) database server mysqld
 * State transfer in progress, setting sleep higher mysqld
 * The server quit without updating PID file (/var/lib/mysql/node-44a84216b229.pid).

The MySQL logs are empty... Any idea what the problem could be here, and how I can troubleshoot this issue?
Thanks

Master-Master replication between two PXC clusters

Latest Forum Posts - May 1, 2016 - 12:05pm
How do you achieve conflict resolution in master-master replication between two 3-node PXC clusters? Since wsrep_auto_increment_control=ON does not honor manually set auto-increment-increment and auto-increment-offset values, are there any other options to differentiate the data originating in each of the clusters, other than capturing some local cluster identifier in each transaction (hostname:timestamp or something like that)?
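For what it's worth, one hedged option: Galera only overrides auto_increment_increment/offset while wsrep_auto_increment_control is ON, so turning it OFF on every node should let you pin per-cluster offsets manually (values below are illustrative):

```ini
# my.cnf on every node of cluster A (use auto_increment_offset = 2 on cluster B)
[mysqld]
wsrep_auto_increment_control = OFF   # stop Galera from managing the values
auto_increment_increment     = 2     # two clusters writing
auto_increment_offset        = 1
```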

Throw light on InnoDB locking scenario

Latest Forum Posts - April 30, 2016 - 9:46am
Hello,

Say, I have a table

CREATE TABLE `numbers` (
`id` int(10) unsigned NOT NULL,
`f1` int(11) NOT NULL,
`f2` int(11) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

And populate this table with 2097152 or so entries.

Start Transaction T1 in one terminal, >> delete from numbers where id < 2097119;

do not commit yet.
Start Transaction T2 in another terminal >> insert into numbers (f1, f2) values (100,200);
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction

The second transaction times out.

The question is: why does the second transaction need to wait for a lock?

Both transactions are in repeatable-read isolation level.

In another terminal, running

>> show engine innodb status

reveals
--TRANSACTION 11DF9B, ACTIVE 931 sec
4205 lock struct(s), heap size 620984, 2101322 row lock(s), undo log entries 2097118

I notice that the undo log entries match the number of rows deleted, but there are more row locks than undo entries.

Hoping someone can help with this.
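For readers reproducing this: in MySQL 5.5-5.7 the waiting lock can be inspected from a third session via the INFORMATION_SCHEMA lock tables (in 8.0 these moved to performance_schema.data_locks), e.g.:

```sql
-- Which transaction blocks which, and on what lock:
SELECT r.trx_id AS waiting_trx, b.trx_id AS blocking_trx,
       l.lock_mode, l.lock_type, l.lock_index
FROM information_schema.innodb_lock_waits w
JOIN information_schema.innodb_trx   r ON r.trx_id  = w.requesting_trx_id
JOIN information_schema.innodb_trx   b ON b.trx_id  = w.blocking_trx_id
JOIN information_schema.innodb_locks l ON l.lock_id = w.requested_lock_id;
```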


Percona 5.7 w/ TokuDB - "Got Error 1152 from storage engine" ErrorNr 1030

Latest Forum Posts - April 29, 2016 - 5:27pm
Hi - we recently moved one of our databases to its own server and migrated from an InnoDB table structure to TokuDB to test a number of purported improvements. The older server was a MySQL 5.6.x version, and we upgraded to 5.7 when loading in the data from a flat-file dump. All of the standard B-tree indexes were recreated on the load-in, and we opted for a pretty standard my.cnf setup.

The server itself has an upgraded I/O subsystem (Dell R910 H730p with SATA SSDs - EVO 850s), and there is no contention for memory either (500GB of RAM).

Since migrating this database, we've had nothing but trouble. Query performance compared to the prior database has been poor, with inexplicable index choices, and queries that took under a second to run on large tables now take upwards of 20 minutes.

Today, we got the error in the subject line, "Got Error 1152 from storage engine", for which I can find hardly any reference to help track down the problem.

At this point I'm kind of at a loss as to where to begin - I've changed the hardware (which should be for the better, but might not be, I suppose), upgraded MySQL, and changed the storage engine.

Here is an example of my perplexity with what was going on prior to this new 1152 error - this is my post on Stack Exchange:

http://dba.stackexchange.com/questio...d-index-chosen

I'm at a loss because we have a 5.6 InnoDB Percona slave that consistently outperforms a vanilla Oracle distribution with the exact same data and setup. Where do I begin to debug this nightmare?

Below is the my.cnf

[mysqld]
pid-file=/var/run/mysqld/mysqld.pid
log-error=/var/log/mysqld.log
datadir=/bigguy/mysql1
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
validate_password_policy = LOW


innodb_file_per_table=1
max_allowed_packet=500M
innodb_buffer_pool_size = 100G
innodb_buffer_pool_dump_at_shutdown = 1 # dump cache pool for re-warming on restart
innodb_buffer_pool_load_at_startup = 1 # reload cache pool on startup

innodb_flush_log_at_trx_commit=2 # reset to 0 when done import
innodb_doublewrite=0 # remove when done import
innodb_log_file_size = 1024M
innodb_flush_method = O_DIRECT
innodb_buffer_pool_instances=8
innodb_thread_concurrency=8

tokudb_directio=1

query_cache_type=2 # on-demand only
query_cache_size=67108864
query_cache_limit=4194304
max_connections=2048
max_allowed_packet=500M

port = 3306
server_id=1021220
relay-log-recovery=1 # http://dev.mysql.com/doc/refman/5.6/...y-log-recovery
master-info-repository=TABLE # http://dev.mysql.com/doc/refman/5.6/...nfo-repository
relay-log-info-repository=TABLE
sync_master_info=10000
slave_net_timeout=30 # shorten the connection retry to 1 minute from the default 1 HOUR!!!
slave-skip-errors = 1062 # ignore duplicate key insertions
expire_logs_days = 14 # expunge logs every 2 weeks (set lower if your db is busy or free space is small)
performance_schema=ON # reset to ON when done import
slow_query_log=0
slow_query_log_file=slow.log
long_query_time=10

log_bin=/bigguy/mysql1/binlogs
sync_binlog=1
max_binlog_size=500M
log_slave_updates = 1
relay_log = mysql-relay-bin
binlog_format = ROW
log_bin_trust_function_creators=TRUE
gtid_mode=OFF


max-connections = 1000
thread-cache-size = 100
open-files-limit = 65535
table-definition-cache = 4096
table-open-cache = 10240

join_buffer_size = 1M
sort_buffer_size = 1M
read_rnd_buffer_size = 8M
bulk_insert_buffer_size=512M
sql_mode=NO_ENGINE_SUBSTITUTION,NO_UNSIGNED_SUBTRACTION

[mysqld_safe]
thp-setting=never
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid


---------------

The database is about 100GB total. There are several tables with many indexes and about 7-10m rows.

nodes on different hosts?

Latest Forum Posts - April 28, 2016 - 3:04pm
Is it possible to have the nodes of an XtraDB Cluster on different hosts/IPs? So far I have only seen examples of deployment on the same host...

