

Feed aggregator

Percona XtraDB Cluster 5.6.24-25.11 is now available

Latest MySQL Performance Blog posts - June 3, 2015 - 1:19pm

Percona is glad to announce the new release of Percona XtraDB Cluster 5.6 on June 3rd, 2015. Binaries are available from the downloads area or from our software repositories.

Based on Percona Server 5.6.24-72.2 including all the bug fixes in it, Galera Replicator 3.11, and on Codership wsrep API 25.11, Percona XtraDB Cluster 5.6.24-25.11 is now the current General Availability release. All of Percona’s software is open-source and free, and all the details of the release can be found in the 5.6.24-25.11 milestone at Launchpad.

New Features:

  • Percona XtraDB Cluster now allows reads in the non-primary state via a new session variable, wsrep_dirty_reads. This variable is boolean and OFF by default. When set to ON, a Percona XtraDB Cluster node accepts queries that read, but do not modify, data even if the node is in the non-PRIM state (#1407770).
  • Percona XtraDB Cluster now allows queries against INFORMATION_SCHEMA and PERFORMANCE_SCHEMA even with the wsrep_ready and wsrep_dirty_reads variables set to OFF. This allows monitoring applications to monitor the node even when it is in non-PRIM state (#1409618).
  • The wsrep_replicate_myisam variable is now both a global and a session variable (#1280280).
  • Percona XtraDB Cluster now uses getifaddrs for node address detection (#1252700).
  • Percona XtraDB Cluster has implemented two new status variables, wsrep_cert_bucket_count and wsrep_gcache_pool_size, for better instrumentation of Galera memory usage. wsrep_cert_bucket_count shows the number of cells in the certification index hash-table, and wsrep_gcache_pool_size shows the size of the page pool and/or dynamic memory allocated for gcache (in bytes).

Bugs Fixed:

  • Using concurrent REPLACE, LOAD DATA REPLACE or INSERT ON DUPLICATE KEY UPDATE statements in the READ COMMITTED isolation level or with the innodb_locks_unsafe_for_binlog option enabled could lead to a unique-key constraint violation. Bug fixed #1308016.
  • Due to a conflict with wsrep_desync, using the Rolling Schema Upgrade as a schema upgrade method would allow only one ALTER TABLE to run at a time. Bugs fixed #1330944 and #1330941.
  • SST would resume the donor even when it was already detected to be in SYNCED state. This happened when wsrep_desync was manually set to OFF, which caused a conflict and resumed the donor too soon. Bug fixed #1288528.
  • A TOI DDL statement would fail on a node if one of the nodes had the table locked. Bug fixed #1376747.
  • The xinetd mysqlchk service file was missing type = UNLISTED, so it did not work out of the box. Bug fixed #1418614.
  • Conflict between enforce_storage_engine and wsrep_replicate_myisam for CREATE TABLE has been fixed. Bug fixed #1435482.
  • A specific trigger execution on the master server could cause a slave assertion error under row-based replication. The trigger would satisfy the following conditions: 1) it sets a savepoint; 2) it declares a condition handler which releases this savepoint; 3) the trigger execution passes through the condition handler. Bug fixed #1438990.
  • Percona XtraDB Cluster Debian init script was testing connection with wrong credentials. Bug fixed #1439673.
  • Race condition between IST and SST in xtrabackup-v2 SST has been fixed. Bugs fixed #1441762, #1443881, and #1451524.
  • SST will now fail when move-back fails instead of continuing and failing at the next step. Bug fixed #1451670.
  • Percona XtraDB Cluster .deb binaries were built without fast mutexes. Bug fixed #1457118.
  • The error message text returned to the client in the non-primary mode is now more descriptive ("WSREP has not yet prepared node for application use"), instead of "Unknown command" returned previously. Bug fixed #1426378.
  • Out-of-bound memory access issue in the seqno_reset() function has been fixed.
  • wsrep_local_cached_downto would underflow when the node on which it is queried has no writesets in gcache.

Other bugs fixed: #1290526.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

The post Percona XtraDB Cluster 5.6.24-25.11 is now available appeared first on MySQL Performance Blog.

Optimizing Percona XtraDB Cluster for write hotspots

Latest MySQL Performance Blog posts - June 3, 2015 - 6:50am

Some applications have a heavy write workload on a few records – for instance when incrementing a global counter: this is called a write hotspot. Because you cannot update the same row simultaneously from multiple threads, this can lead to performance degradation. When using Percona XtraDB Cluster (PXC), some users try to solve this specific issue by writing on multiple nodes at the same time. Good idea or bad idea? Read on!

Simultaneous writes on a standalone InnoDB server

Say you have these 3 transactions being run simultaneously (id is the primary key of the table):

# T1
UPDATE t SET ... WHERE id = 100

# T2
UPDATE t SET ... WHERE id = 100

# T3
UPDATE t SET ... WHERE id = 101

All transactions will require a row lock on the record they want to modify. So T3 can commit at the same time as T1 and/or T2, because it will not lock the same record as T1 and T2.

But T1 and T2 cannot execute simultaneously because they need to set a lock on the same record. Let’s assume T2 is executed by InnoDB first: how long does T1 need to wait? Essentially, the time needed for T2 to execute.
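The serialization described above can be sketched with per-row locks. This is a toy Python model (not InnoDB internals): writers to the same primary-key value serialize, while a writer to a different row proceeds independently.

```python
import threading

# One lock per primary-key value, standing in for InnoDB row locks.
row_locks = {100: threading.Lock(), 101: threading.Lock()}
commit_order = []

def run_transaction(name, row_id):
    # A transaction holds the row lock for the record it updates until
    # it commits, so writers to the same row serialize; T3 (row 101)
    # never waits for T1 or T2 (row 100).
    with row_locks[row_id]:
        commit_order.append(name)

threads = [
    threading.Thread(target=run_transaction, args=("T1", 100)),
    threading.Thread(target=run_transaction, args=("T2", 100)),
    threading.Thread(target=run_transaction, args=("T3", 101)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All three eventually commit; T1 and T2 just did so one after the other.
print(sorted(commit_order))  # → ['T1', 'T2', 'T3']
```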

Simultaneous writes on multiple nodes (PXC)

Now is it any different if you have a 3-node PXC cluster and you want to run T1 on node1 and T2 on node2? Let’s review step by step how the cluster will execute these queries:

1. T1 starts executing on node1: a row lock is set on the record where id=100. T2 also starts executing on node2 and also sets a row lock on the record where id=100. How is it possible that 2 transactions set the same lock on the same record? Remember that locking in Galera is optimistic, meaning that a transaction can only set locks on the node where it is executing, but never on the remote nodes: here, T1 sets a lock on node1 and T2 sets a lock on node2.

2. Let’s assume T2 reaches commit before T1. T2 is then synchronously replicated on all nodes and gets a global sequence number. At this point, a certification test is run on node2 to determine whether there are any write conflicts between T2 and other “in-flight” transactions in the cluster. Go to the next section if you want some clarification about “in-flight” transactions and the certification test.

Back to our transactions: T2 is the first transaction to try to commit, so no other transaction is “in-flight”: the certification test will succeed and InnoDB will be able to apply the transaction on node2. On the other nodes, the same certification test will be run and T2 will be put in the apply queue (and it will be applied at some point in the near future).

Ok, wait a minute! No other transaction is “in-flight”, really? What about T1 on node1? Actually T1 is simply a local transaction on node1 and it is not known by the cluster yet. Therefore it is not what I called an “in-flight” transaction and it does not play any role in the certification test.

3. Now T1 reaches commit on node1. It is then synchronously replicated to all nodes and a certification test will run on node1. If T1 and T2 did commit simultaneously, there is a good chance that when T1 starts committing, T2 is still in the apply queue of node1. In this case, the certification test for T1 will fail. Why? To make sure that T2 will commit on all nodes no matter what, any other transaction that wants to set a lock on the record where id=100 will be rejected.

Then if the certification test for T1 fails, T1 is rolled back. The only option to commit T1 is to retry executing it.

Let’s assume that this second try is successful. How long did T1 have to wait before committing successfully? Essentially we had to execute T1 twice, so we had to replicate it twice (each replication taking one network RTT), we had to roll T1 back on node1 (rollback is expensive in InnoDB), and the application had to decide that T1 had to be executed a second time. That is a lot more work and waiting compared to the scenario on a single server.

So where is the fundamental problem when we tried to write on several nodes? Galera uses optimistic locking, so we had to go very far into the execution of T1 before realizing that the query would not succeed. Multi-node writing is therefore not a good solution at all when the system sees write hotspots.

“In-flight” transactions and certification test

“In-flight” transactions are transactions that have already been applied on at least one node of the cluster but not on all nodes. Remember that even though replicating transactions is synchronous, applying committed transactions on remote nodes is not. So a transaction Tx can be committed and applied on node1 but not on node2: on node2, Tx will simply sit in an apply queue, waiting to be executed. Tx is then an “in-flight” transaction.

The goal of the certification test is to make sure that no transaction can prevent Tx from executing on node2: as Tx has already been applied on node1 and as we want data consistency, we must make sure that Tx will execute successfully no matter what happens. And the magic of Galera is that the test is deterministic, so no group communication is necessary when a node runs a certification test.
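The first-committer-wins behavior described above can be sketched as a tiny certification check. This is a hypothetical simplification in Python: real Galera certification works on writeset key sets and global seqnos, but the deterministic rule is the same — a new writeset fails if it touches a row written by an in-flight transaction.

```python
def certify(writeset_keys, in_flight):
    """Fail certification if the new writeset touches any row written by
    an in-flight (replicated but not yet applied everywhere) transaction.
    Purely local and deterministic: no group communication needed."""
    for ws in in_flight:
        if writeset_keys & ws:
            return False
    return True

in_flight = []

# T2 replicates first: nothing is in flight, so it certifies and commits;
# its writeset is now in flight on the other nodes' apply queues.
t2_keys = {("t", 100)}
assert certify(t2_keys, in_flight)
in_flight.append(t2_keys)

# T1 replicates next: it wrote the same row (id=100), so certification
# fails and T1 is rolled back on its origin node.
t1_keys = {("t", 100)}
print(certify(t1_keys, in_flight))  # → False
```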

Conclusion

So what are your options with PXC when the workload has write hotspots? The most obvious one is to write on a single node: then you will have the same locking model as with a standalone InnoDB server. Performance will not be as good as with a standalone server as you will have to pay for synchronous replication, but you will avoid the very expensive write conflicts.

Another option would be to rearchitect your application to write less often. For instance, for a global counter, you could maintain it with Redis and only periodically flush the value to PXC.
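The counter pattern above can be sketched in a few lines. This is an in-process Python stand-in (the post suggests Redis; the class name and flush threshold here are illustrative), showing how N hot-row increments collapse into one periodic write to PXC.

```python
class BatchedCounter:
    """Accumulate increments in memory and flush the running total to the
    database only periodically, turning N hot-row writes into one."""

    def __init__(self, flush_every=100):
        self.pending = 0
        self.flush_every = flush_every
        self.flushed_total = 0  # stand-in for the counter row in PXC

    def incr(self, n=1):
        self.pending += n
        if self.pending >= self.flush_every:
            self.flush()

    def flush(self):
        # In a real setup this would be a single UPDATE against PXC.
        self.flushed_total += self.pending
        self.pending = 0

c = BatchedCounter(flush_every=100)
for _ in range(250):
    c.incr()
c.flush()  # final flush of the remainder
print(c.flushed_total)  # → 250
```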

And if you want to understand more about the benefits and drawbacks of writing on multiple nodes of a PXC cluster, you can read these two posts.

The post Optimizing Percona XtraDB Cluster for write hotspots appeared first on MySQL Performance Blog.

Is 80% of RAM how you should tune your innodb_buffer_pool_size?

Latest MySQL Performance Blog posts - June 2, 2015 - 3:00am

It seems these days if anyone knows anything about tuning InnoDB, it’s that you MUST tune your innodb_buffer_pool_size to 80% of your physical memory. This advice is so prolific that it seems ingrained in many a DBA’s mind.  The MySQL manual to this day refers to this rule, so who can blame the DBA?  The question is: does it make sense?

What uses the memory on your server?

Before we question such advice, let’s consider what can take up RAM in a typical MySQL server, by broad category.  This list isn’t necessarily complete, but I think it outlines the large areas where a MySQL server can consume memory.

  • OS Usage: Kernel, running processes, filesystem cache, etc.
  • MySQL fixed usage: query cache, InnoDB buffer pool size, mysqld rss, etc.
  • MySQL workload based usage: connections, per-query buffers (join buffer, sort buffer, etc.)
  • MySQL replication usage:  binary log cache, replication connections, Galera gcache and cert index, etc.
  • Any other services on the same server: Web server, caching server, cronjobs, etc.

There’s no question that for tuning InnoDB, the innodb_buffer_pool_size is the most important variable.  It’s expected to occupy most of the RAM on a dedicated MySQL/Innodb server, but of course other local services may affect how it is tuned.  If it (and other memory consumption on the server) is too large, swapping can kick in and degrade your performance rapidly.

Further, the workload of the MySQL server itself may cause a lot of variation.  Does the server have a lot of open connections and active query workload consuming memory?  The memory consumption caused by this can be dramatically different server to server.

Finally, replication mechanisms like Galera have their own memory usage pattern and can require some adjustments to your buffer pool.

We can see clearly that the 80% rule isn’t as nuanced as reality.

A rule of thumb

However, for the sake of argument, let’s say the 80% rule is a starting point.  A rule of thumb to help us get a quick tuning number to get the server running.  Assuming we don’t know anything really about the workload on the system yet, but we know that the system is dedicated to InnoDB, how might our 80% rule play out?

Total Server RAM | Buffer pool with 80% rule | Remaining RAM
1G               | 800MB                     | 200MB
16G              | 13G                       | 3G
32G              | 26G                       | 6G
64G              | 51G                       | 13G
128G             | 102G                      | 26G
256G             | 205G                      | 51G
512G             | 409G                      | 103G
1024G            | 819G                      | 205G

At lower numbers, our 80% rule looks pretty reasonable.  However, as we get into larger servers, it starts to seem less sane.  For the rule to hold true, workload memory consumption would have to increase in proportion to the needed size of the buffer pool, but that usually isn’t the case.  Our server with 1TB of RAM likely doesn’t need 205G of it to handle things like connections and queries (likely MySQL couldn’t handle that many active connections and queries anyway).

So, if you really just spent all that money on a beefy server, do you really want to pay a 20% tax on that resource because of this rule of thumb?
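To make the tax concrete, here is a small sketch comparing the 80% rule against a capped-headroom alternative (reserve a fixed amount for everything that isn't the buffer pool). The 16G cap is an arbitrary illustration, not a recommendation from the post.

```python
def eighty_percent_rule(total_gb):
    """The classic rule: buffer pool = 80% of RAM, the rest left over."""
    pool = total_gb * 0.8
    return pool, total_gb - pool

def capped_headroom(total_gb, headroom_gb=16):
    """Alternative sketch: reserve at most a fixed headroom, falling back
    to the 80% rule on small machines (16G cap is illustrative only)."""
    return max(total_gb - headroom_gb, total_gb * 0.8)

for total in (16, 64, 256, 1024):
    pool, remaining = eighty_percent_rule(total)
    print(f"{total:>5}G RAM: 80% rule reserves {remaining:.0f}G, "
          f"capped headroom would allow a {capped_headroom(total):.0f}G pool")
```

On a 1024G box the 80% rule reserves ~205G, while a fixed cap would hand another ~189G back to the buffer pool.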

The origins of the rule

At one of my first MySQL conferences, probably around 2006-2007 when I worked at Yahoo, I attended an InnoDB tuning talk hosted by Heikki Tuuri (the original author of InnoDB) and Peter Zaitsev.  I distinctly remember asking about the 80% rule because at the time Yahoo had some beefy 64G servers and the rule wasn’t sitting right with me.

Heikki’s answer stuck with me.  He said something to the effect of (not a direct quote): “Well, the server I was testing on had 1GB of RAM and 80% seemed about right”.  He then, if memory serves, clarified it and said it would not apply similarly to larger servers.

How should you tune?

80% is maybe a great start and rule of thumb.  You do want to be sure the server has plenty of free RAM for the OS and the usually unknown workload.  However, as we can see above, the larger the server, the more likely the rule will wind up wasting RAM.   I think for most people it starts and ends at the rule of thumb, mostly because changing the InnoDB buffer pool requires a restart in current releases.

So what’s a better rule of thumb?  My rule is that you tune the innodb_buffer_pool_size as large as possible without using swap when the system is running the production workload.  This sounds good in principle, but again, it requires a bunch of restarts and may be easier said than done.

Fortunately, MySQL 5.7 and its online buffer pool resize feature should make this an easier principle to follow.  Seeing lots of free RAM (and/or filesystem cache usage)?  Turn the buffer pool up dynamically.  Seeing some swap activity?  Just turn it down, no restart required.   In practice, I suspect there will be some performance-related hiccups when using this feature, but it is at least a big step in the right direction.
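That tune-up/tune-down loop can be sketched as a simple feedback function. The thresholds and step size below are invented for illustration only; in practice you would feed it real free-RAM and swap figures and apply the result via the 5.7 dynamic variable.

```python
def next_pool_size(current_gb, free_ram_gb, swap_used_gb, step_gb=4):
    """Sketch of the tuning loop described above: grow the buffer pool
    while RAM sits idle, shrink it as soon as swap is touched.
    Thresholds are illustrative, not recommendations."""
    if swap_used_gb > 0:
        return current_gb - step_gb   # back off: we are swapping
    if free_ram_gb > 2 * step_gb:
        return current_gb + step_gb   # plenty of idle RAM: grow
    return current_gb                 # steady state

print(next_pool_size(100, free_ram_gb=20, swap_used_gb=0))  # → 104
print(next_pool_size(104, free_ram_gb=1, swap_used_gb=2))   # → 100
```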

The post Is 80% of RAM how you should tune your innodb_buffer_pool_size? appeared first on MySQL Performance Blog.

Percona TokuMXse 3.0.3-1.0-rc6 is now available

Latest MySQL Performance Blog posts - June 1, 2015 - 10:54am

Percona is glad to announce the release of Percona TokuMXse 3.0.3-1.0-rc6 on June 1st, 2015. TokuMXse is a TokuMX storage engine for MongoDB 3.0.3. Downloads are available from our download site here. Packages for this release candidate will be available in our Ubuntu Trusty testing and CentOS 7 testing repositories.

Based on MongoDB 3.0.3, including all the bug fixes in it, Percona TokuMXse 3.0.3-1.0 is the current release candidate.

This release contains minor changes to the Fractal Tree, including:

  • Improved tokuftdump information.
  • Fixed sporadic recovery issue due to rare race between transaction rollback and logging.
  • Report capped boolean for uncapped collections when using tokuft storage engine.
Getting started

After installation, you can start mongod with tokuft as the storage engine, with:

$ mongod --storageEngine tokuft

Note: Transparent huge pages must be turned off for the fractal tree engine to work properly. If you attempt to run mongod with them enabled, an error informing you of this will be printed to the output of the mongod process and it will fail to start.

You can check whether transparent huge pages are enabled with:

$ cat /sys/kernel/mm/transparent_hugepage/enabled

[always] madvise never

You can disable the transparent huge pages by running the following command as root (NOTE: Setting this will last only until the server is rebooted):

echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
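A startup script might sanity-check that sysfs file before launching mongod. Here is a minimal sketch, assuming the bracketed-value format shown above (the function name is illustrative):

```python
def thp_enabled(sysfs_value):
    """Parse the bracketed selection from
    /sys/kernel/mm/transparent_hugepage/enabled; transparent huge pages
    are effectively off only when 'never' is the selected mode."""
    selected = sysfs_value.split("[", 1)[1].split("]", 1)[0]
    return selected != "never"

print(thp_enabled("[always] madvise never"))  # → True
print(thp_enabled("always madvise [never]"))  # → False
```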

Percona TokuMXse currently supports various tuning parameters to pass to the mongod instance. The full list can be read by running mongod --help (this will print options for all available storage engines, including tokuft).

The post Percona TokuMXse 3.0.3-1.0-rc6 is now available appeared first on MySQL Performance Blog.

New PERFORMANCE_SCHEMA defaults in MySQL 5.7.7

Latest MySQL Performance Blog posts - June 1, 2015 - 7:27am

I thought it was worth a moment to reiterate the new Performance Schema-related defaults that MySQL 5.7.7 brings to the table, for various reasons.

For one, most of you might have noticed that profiling was marked as deprecated in MySQL 5.6.7. So it is expected that you invest in learning more about Performance Schema (and Mark’s sys schema!).

Second, there are lots of virtual environments and appliances out there running Community Edition MySQL where Performance Schema can be a useful tool for analyzing performance. Thus, expect to see more articles about using PERFORMANCE_SCHEMA and SYS_SCHEMA from us!

Third, we have more and more junior readers who might benefit from light reads such as this.

The new defaults that I wanted to highlight are mentioned in the MySQL 5.7.7 release notes:
– The MySQL sys schema is now installed by default during data directory installation.
– The events_statements_history and events_transactions_history consumers now are enabled by default.

Note that if you are upgrading from an earlier version of MySQL to 5.7.7 to get these goodies you will need to run mysql_upgrade and restart the database for the above changes to take effect.

So what do these mean?

If you haven’t had a chance to dig into PERFORMANCE_SCHEMA, check out the quick start guide here. PERFORMANCE_SCHEMA is a nifty tool (implemented as a union of a storage engine and a schema in MySQL) to monitor MySQL server execution at a low level, with a focus on performance metrics. It monitors for events that have been “instrumented”, such as function calls, OS wait times, synchronization calls, etc. In performance nomenclature, “instruments” are essentially “probes”. The events that the instruments generate can be processed by consumers. Note that not all instruments or consumers are enabled by default.
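The instrument → event → consumer pipeline can be modeled in a few lines. This is a toy Python sketch; the instrument and consumer names mimic real ones, but these are not actual PERFORMANCE_SCHEMA setup-table rows.

```python
# Enable/disable flags, standing in for setup_instruments / setup_consumers.
instruments = {"statement/sql/select": True, "wait/io/file": False}
consumers = {"events_statements_history": True}

history = []  # stand-in for the events_statements_history table

def record_event(instrument, payload):
    # An event is produced only if its instrument is enabled, and it is
    # stored only if the relevant consumer is enabled.
    if not instruments.get(instrument, False):
        return
    if consumers.get("events_statements_history", False):
        history.append((instrument, payload))

record_event("statement/sql/select", "SELECT 1")
record_event("wait/io/file", "open ibdata1")  # instrument disabled: dropped
print(len(history))  # → 1
```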

Some would say that the structure of PERFORMANCE_SCHEMA may be complex and may not be very DBA-friendly. This is what led to the birth of SYS_SCHEMA. For those who are not familiar with Mark Leith’s SYS_SCHEMA and prefer TL;DR – it provides human friendly views, functions and procedures that can help you analyze database usage using PERFORMANCE_SCHEMA. If you haven’t had a chance to check it out yet you might want to read Miguel’s article on using the sys schema or Alexander Rubin’s article about using it in multitenant environments and give it a spin!

I welcome the fact that the events_statements_history and events_transactions_history consumers are enabled by default in MySQL 5.7.7, as it means that we get some handy performance details available out of the box in vanilla MySQL. Note that these are per-thread tables and by default the history length (the number of entries retained; more on those variables here) is automatically sized, thus you may need to increase it.

What details do you get off the bat with them?

Consider the following example:

mysql> select * from performance_schema.events_statements_history where event_id=353\G
*************************** 1. row ***************************
              THREAD_ID: 20
               EVENT_ID: 353
           END_EVENT_ID: 456
             EVENT_NAME: statement/sql/select
                 SOURCE: mysqld.cc:963
            TIMER_START: 1818042501405000
              TIMER_END: 1818043715449000
             TIMER_WAIT: 1214044000
              LOCK_TIME: 67000000
               SQL_TEXT: select * from imdb.title limit 100
                 DIGEST: ec93c38ab021107c2160259ddee31faa
            DIGEST_TEXT: SELECT * FROM `imdb` . `title` LIMIT ?
         CURRENT_SCHEMA: performance_schema
            OBJECT_TYPE: NULL
          OBJECT_SCHEMA: NULL
            OBJECT_NAME: NULL
  OBJECT_INSTANCE_BEGIN: NULL
            MYSQL_ERRNO: 0
      RETURNED_SQLSTATE: NULL
           MESSAGE_TEXT: NULL
                 ERRORS: 0
               WARNINGS: 0
          ROWS_AFFECTED: 0
              ROWS_SENT: 100
          ROWS_EXAMINED: 100
CREATED_TMP_DISK_TABLES: 0
     CREATED_TMP_TABLES: 0
       SELECT_FULL_JOIN: 0
 SELECT_FULL_RANGE_JOIN: 0
           SELECT_RANGE: 0
     SELECT_RANGE_CHECK: 0
            SELECT_SCAN: 1
      SORT_MERGE_PASSES: 0
             SORT_RANGE: 0
              SORT_ROWS: 0
              SORT_SCAN: 0
          NO_INDEX_USED: 1
     NO_GOOD_INDEX_USED: 0
       NESTING_EVENT_ID: NULL
     NESTING_EVENT_TYPE: NULL
1 row in set (0.00 sec)

As you can see above, you get data similar to what you are used to seeing in EXPLAIN output and slow query logs, such as query run time, locking time, rows sent/examined, etc. For instance, in the above output my query returned 100 rows (lines 26-27), avoided creating temp tables (lines 28-29), didn’t have to sort (lines 36-38), used no index (line 39) and ran for about 1.2 ms (TIMER_END-TIMER_START; the timer columns are measured in picoseconds). The list of details provided is not as abundant as it could be, but I imagine the list may grow with newer releases.
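Since the timer columns are reported in picoseconds, converting the sample row's values to milliseconds is simple arithmetic:

```python
# Values taken from the sample events_statements_history row above;
# Performance Schema timer columns are in picoseconds.
TIMER_START = 1818042501405000
TIMER_END = 1818043715449000
LOCK_TIME = 67000000

elapsed_ms = (TIMER_END - TIMER_START) / 1_000_000_000  # ps -> ms
lock_ms = LOCK_TIME / 1_000_000_000                     # ps -> ms

print(f"query time: {elapsed_ms:.3f} ms, lock time: {lock_ms:.3f} ms")
# → query time: 1.214 ms, lock time: 0.067 ms
```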

If you want to read on and are curious about how to use Performance Schema for profiling check out Jervin’s great article here!

The post New PERFORMANCE_SCHEMA defaults in MySQL 5.7.7 appeared first on MySQL Performance Blog.

Error 1025 (errno -1 unknown error -1) on Alter Table Command

Latest Forum Posts - May 31, 2015 - 9:59pm
Hi, I've been trying to alter a table named import_permit, adding about 12 new columns to it, using these ALTER statements:

ALTER TABLE eips2live.import_permit ADD `closed_by_id` bigint(20) DEFAULT NULL;
ALTER TABLE eips2live.import_permit ADD `date_closed` datetime DEFAULT NULL;
ALTER TABLE eips2live.import_permit ADD `date_disapproved` datetime DEFAULT NULL;
ALTER TABLE eips2live.import_permit ADD `date_for_cancellation` datetime DEFAULT NULL;
ALTER TABLE eips2live.import_permit ADD `disapproved_by_id` bigint(20) DEFAULT NULL;
ALTER TABLE eips2live.import_permit ADD `for_cancellation_by_id` bigint(20) DEFAULT NULL;
ALTER TABLE eips2live.import_permit ADD `for_cancellation_by_etrade` varchar(255) DEFAULT NULL;
ALTER TABLE eips2live.import_permit ADD `reason_for_disapproval` varchar(255) DEFAULT NULL;
ALTER TABLE eips2live.import_permit ADD `remarks_for_cancellation` varchar(255) DEFAULT NULL;
ALTER TABLE eips2live.import_permit ADD `remarks_for_close` varchar(255) DEFAULT NULL;
ALTER TABLE eips2live.import_permit ADD `remarks_for_closed` varchar(255) DEFAULT NULL;
ALTER TABLE eips2live.import_permit ADD `date_opened` datetime DEFAULT NULL;

but they always end with this error:

Code: 1025. Error on rename of './eips2live/#sql-5e74_42' to './eips2live/import_permit' (errno: -1 - Unknown error -1)

Checking my mysql-error.log, I can see this entry from about the same time the error appears in Workbench:

InnoDB: Error: 'eips2live/import_permit' is already in tablespace memory cache

I have been trying to find this errno -1 everywhere, but the closest I got was errno 150, which seems to be related to constraints; I still have to read more about it. Another one was a MySQL bug that reports the same "errno: -1" and "Unknown error -1", but I do not know what to make of it, i.e. how to avoid/resolve this issue.

Hoping someone can give me a heads up on what to do/where to look for next. thanks.

Here is my my.cnf for reference.
[mysql]

# CLIENT #
#port = 3306

[mysqld]

# GENERAL #
server-id = 2002
user = mysql
default-storage-engine = InnoDB
socket = /srv/data/mysql/mysql.sock
pid-file = /srv/data/mysql/mysql.pid
bind-address = 0.0.0.0

# MyISAM #
key-buffer-size = 32M
myisam-recover = FORCE,BACKUP

# SAFETY #
max-allowed-packet = 16M
max-connect-errors = 1000000
#sql-mode = STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_AUTO_VALUE_ON_ZERO,NO_ENGINE_SUBSTITUTION,NO_ZERO_DATE,NO_ZERO_IN_DATE
sysdate-is-now = 1
innodb = FORCE

# DATA STORAGE #
datadir = /srv/data/mysql/

# BINARY LOGGING #
#log-bin = /srv/data/mysql/mysql-bin
binlog-format = row
log-bin = slavedb-bin
expire-logs-days = 14
sync-binlog = 1

# CACHES AND LIMITS #
#tmp-table-size = 32M
#max-heap-table-size = 32M
query-cache-type = 0
query-cache-size = 0
max-connections = 500
thread-cache-size = 50
open-files-limit = 65535
#table-definition-cache = 4096
#table-open-cache = 4096

# INNODB #
innodb-flush-method = O_DIRECT
innodb-log-files-in-group = 2
#innodb-log-file-size = 512M
innodb-log-file-size = 5M
innodb-flush-log-at-trx-commit = 1
innodb-file-per-table = 1
innodb-buffer-pool-size = 26G

# LOGGING #
log-error = /srv/data/mysql/mysql-error.log
log-queries-not-using-indexes = 1
slow-query-log = 1
slow-query-log-file = /srv/data/mysql/mysql-slow.log

After upgrading from MySQL 5.0 to Percona 5.5 P99 is much slower

Latest Forum Posts - May 31, 2015 - 2:18pm
After upgrading from MySQL 5.0 to MySQL Percona 5.5 my P99 on many queries has gotten significantly worse.

Here is the mysqlslap command I ran:
mysqlslap --concurrency=50 --iterations=500 --query=test.sql --create-schema=my_schema --user my_user --password

Here is an example of one such query:
cat test.sql
SELECT aaa.big_int_val FROM aaa INNER JOIN bbb ON aaa.bbb_id = bbb.id WHERE aaa.big_int_val > 8206166128 AND aaa.big_int_val <= 8206166260 ORDER BY aaa.big_int_val;

I captured the 5.5 queries in the slow query log with profiling on, and the major time difference is in Profile_Sending_data.
Here is the time spent for a slow query and for a fast query in Percona Server 5.5:
Profile_Sending_data: 0.099546
Profile_Sending_data: 0.000031

When run on MySQL 5.5 (Percona)
# mysqlslap --concurrency=50 --iterations=500 --query=test3.sql
# --create-schema=nm --user root --password
Enter password:
Benchmark
Average number of seconds to run all queries: 0.003 seconds
Minimum number of seconds to run all queries: 0.002 seconds
Maximum number of seconds to run all queries: 0.106 seconds
Number of clients running queries: 50
Average number of queries per client: 1

When run on MySQL 5.0
# mysqlslap --concurrency=50 --iterations=500 --query=test2.sql
# --create-schema=nm --user root --password
Enter password:
Benchmark
Average number of seconds to run all queries: 0.001 seconds
Minimum number of seconds to run all queries: 0.001 seconds
Maximum number of seconds to run all queries: 0.005 seconds
Number of clients running queries: 50
Average number of queries per client: 1


select COUNT(*) from aaa;
1849233
select COUNT(*) from bbb;
1202229
SELECT aaa.big_int_val FROM aaa INNER JOIN bbb ON aaa.bbb_id = bbb.id WHERE aaa.big_int_val > 8206166128 AND aaa.big_int_val <= 8206166260 ORDER BY aaa.big_int_val;
0

mysql> show create table aaa \G
*************************** 1. row ***************************
Table: aaa
Create Table: CREATE TABLE `aaa` (
`aaa_id` varchar(255) NOT NULL,
`bbb_id` varchar(255) NOT NULL,
`other_id` varchar(255) DEFAULT NULL,
`int_1` int(10) unsigned NOT NULL,
`enum_val` enum('network','subnet') NOT NULL,
`big_int_val` bigint(20) unsigned NOT NULL,
`deleted_flag` tinyint(1) NOT NULL DEFAULT '0',
`creation_big_int_val` bigint(20) unsigned NOT NULL,
`deletion_big_int_val` bigint(20) unsigned DEFAULT NULL,
`creation_timestamp` timestamp NULL DEFAULT NULL,
`deletion_timestamp` timestamp NULL DEFAULT NULL,
PRIMARY KEY (`aaa_id`),
UNIQUE KEY `big_int_val` (`big_int_val`),
KEY `bbb_id` (`bbb_id`)

) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

mysql> show create table bbb \G
*************************** 1. row ***************************
Table: bbb
Create Table: CREATE TABLE `bbb` (
`id` varchar(255) NOT NULL,
`other_id_2` varchar(255) NOT NULL,
`name_of_something` varchar(255) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `other_id_2` (`other_id_2`),
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

I captured the 5.5 queries in the slow query log with profiling enabled in
order to compare a slow query to a fast query.

Here is a slow query:
# Time: 150531 19:02:33.447599
# User@Host: root[root] @ localhost []
# Thread_id: 2607933 Schema: nm Last_errno: 0 Killed: 0
# Query_time: 0.100234 Lock_time: 0.000325 Rows_sent: 0 Rows_examined: 0
# Rows_affected: 0 Rows_read: 0
# Bytes_sent: 131
Profile_starting: 0.000118
Profile_starting_cpu: 0.000099
Profile_checking_permissions: 0.000020
Profile_checking_permissions_cpu: 0.000020
Profile_checking_permissions: 0.000019
Profile_checking_permissions_cpu: 0.000019
Profile_Opening_tables: 0.000142
Profile_Opening_tables_cpu: 0.000128
Profile_System_lock: 0.000013
Profile_System_lock_cpu: 0.000013
Profile_mysql_lock_tables()_unlocking_tables_II: 0.000018
Profile_mysql_lock_tables()_unlocking_tables_II_cp u: 0.000018
Profile_exit_mysqld_lock_tables(): 0.000017
Profile_exit_mysqld_lock_tables()_cpu: 0.000013
Profile_init: 0.000017
Profile_init_cpu: 0.000017
Profile_optimizing: 0.000034
Profile_optimizing_cpu: 0.000027
Profile_statistics: 0.000145
Profile_statistics_cpu: 0.000124
Profile_preparing: 0.000041
Profile_preparing_cpu: 0.000031
Profile_executing: 0.000024
Profile_executing_cpu: 0.000019
Profile_Sorting_result: 0.000033
Profile_Sorting_result_cpu: 0.000024
*************************************** Profile_Sending_data: 0.099546
Profile_Sending_data_cpu: 0.000084
Profile_end: 0.000005
Profile_end_cpu: 0.000005
Profile_query_end: 0.000015
Profile_query_end_cpu: 0.000014
Profile_closing_tables: 0.000019
Profile_closing_tables_cpu: 0.000019
Profile_freeing_items: 0.000026
Profile_freeing_items_cpu: 0.000026
Profile_logging_slow_query: 0.000031
Profile_logging_slow_query_cpu: 0.000018
Profile_total: 0.100281
Profile_total_cpu: 0.000719
SET timestamp=1433091753;

Here is a fast query:
# Time: 150531 19:02:33.449295
# User@Host: root[root] @ localhost []
# Thread_id: 2607952 Schema: nm Last_errno: 0 Killed: 0
# Query_time: 0.000198 Lock_time: 0.000073 Rows_sent: 0 Rows_examined: 0
# Rows_affected: 0 Rows_read: 0
# Bytes_sent: 131
Profile_starting: 0.000028
Profile_starting_cpu: 0.000028
Profile_checking_permissions: 0.000003
Profile_checking_permissions_cpu: 0.000003
Profile_checking_permissions: 0.000003
Profile_checking_permissions_cpu: 0.000003
Profile_Opening_tables: 0.000036
Profile_Opening_tables_cpu: 0.000036
Profile_System_lock: 0.000004
Profile_System_lock_cpu: 0.000004
Profile_mysql_lock_tables()_unlocking_tables_II: 0.000002
Profile_mysql_lock_tables()_unlocking_tables_II_cp u: 0.000002
Profile_exit_mysqld_lock_tables(): 0.000004
Profile_exit_mysqld_lock_tables()_cpu: 0.000004
Profile_init: 0.000013
Profile_init_cpu: 0.000013
Profile_optimizing: 0.000006
Profile_optimizing_cpu: 0.000006
Profile_statistics: 0.000030
Profile_statistics_cpu: 0.000030
Profile_preparing: 0.000012
Profile_preparing_cpu: 0.000011
Profile_executing: 0.000002
Profile_executing_cpu: 0.000001
Profile_Sorting_result: 0.000002
Profile_Sorting_result_cpu: 0.000002
*************************************** Profile_Sending_data: 0.000031
Profile_Sending_data_cpu: 0.000031
Profile_end: 0.000003
Profile_end_cpu: 0.000003
Profile_query_end: 0.000005
Profile_query_end_cpu: 0.000005
Profile_closing_tables: 0.000005
Profile_closing_tables_cpu: 0.000005
Profile_freeing_items: 0.000012
Profile_freeing_items_cpu: 0.000012
Profile_logging_slow_query: 0.000002
Profile_logging_slow_query_cpu: 0.000002
Profile_total: 0.000203
Profile_total_cpu: 0.000202
SET timestamp=1433091753;

The problem of migration from MySQL 5.5 to Percona 5.6

Latest Forum Posts - May 31, 2015 - 7:50am
I read about how wonderful Percona Server is and decided to migrate to it,
from MySQL Server 5.5.43 to Percona Server 5.6.24-72.2 on CentOS 6.6 x86 (Linux 2.6.32-042stab108.2 on i686).
But osTicket and Piwik stopped working, showing this error:
ERROR 1071 (42000) at line 49: Specified key was too long; max key length is 768 bytes
I read that the following parameters should help, but to no avail:
innodb_large_prefix = 1 & innodb_file_format = Barracuda & innodb_file_per_table = 1
The full configuration file is in the attachment.
Please help, I do not want to go back to MySQL.
Best regards, Ivan
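For context, this error is about InnoDB index key limits: under the default Antelope/COMPACT row format an index key may be at most 767 bytes, and multi-byte character sets multiply the per-character cost, which is why innodb_large_prefix plus Barracuda (with ROW_FORMAT=DYNAMIC or COMPRESSED on the affected tables) is the usual remedy. A back-of-the-envelope check (the column width and charsets are illustrative, not taken from the osTicket schema):

```python
# Index key size = characters * bytes-per-character for the column's charset.
# InnoDB's COMPACT/REDUNDANT row formats cap index keys at 767 bytes;
# innodb_large_prefix with Barracuda DYNAMIC/COMPRESSED raises this to 3072.
BYTES_PER_CHAR = {"latin1": 1, "utf8": 3, "utf8mb4": 4}

def index_key_bytes(chars, charset):
    """Worst-case bytes an index on a VARCHAR(chars) column needs."""
    return chars * BYTES_PER_CHAR[charset]

# A hypothetical VARCHAR(255) column, as commonly found in web-app schemas:
print(index_key_bytes(255, "utf8"))     # 765 -> fits under 767
print(index_key_bytes(255, "utf8mb4"))  # 1020 -> exceeds 767, triggers ERROR 1071
```

Note that innodb_large_prefix only helps tables that actually use ROW_FORMAT=DYNAMIC or COMPRESSED; tables created or imported with the default row format keep the old limit.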

Node Crash and Replication

Latest Forum Posts - May 30, 2015 - 3:46am
Hello everyone. I've got a problem here.


I have a 3-node Percona cluster, and one of its nodes acts as a MASTER for a MySQL MASTER - SLAVE replication.

The problem is that if this node goes down for a few hours and I then restart it, an entire new copy of /var/lib/mysql is created, losing all the binlogs generated previously.

In that case I have to re-create all my replication from scratch.

Is there something I can do to avoid this?

Thanks

Tips for Decorating Small Spaces

Latest Forum Posts - May 28, 2015 - 10:51pm
One of the hardest parts about living in a small home, condo or apartment is the lack of ways to decorate without taking up valuable space. Most people who reside in these smaller homes simply live with the notion that they cannot have a beautifully decorated home and the storage space they need. The truth is that you definitely can have the best of both worlds; the trick is knowing how to do both, and here are some tips to help you achieve that.

Utilizing and creating storage space

Traditional storage methods such as bookshelves and cabinets used in larger homes don’t work in smaller homes, but that does not mean you should give up on the idea of shelving and cabinets. Listotic suggests that the vertical space above furniture is the perfect place for built in shelving that hangs from the ceiling and leaves wall space to hang pictures and lighting, giving you both storage space and room to decorate. Furniture can take up a lot of room, so consider condensing furniture to create more space by using a console table as a dining room table. Another way to create storage space is by installing small closets in between studs to contain cleaning products, linens and anything else that you have a hard time storing elsewhere.

Tricks for decorating without cluttering

Corners are great places for shelving because you can use both sides of the wall for shelves and ultimately create double the décor space without cluttering up the walls. If you are working with small bedrooms and bathrooms, avoid taking up wall space and creating a cluttered feel by installing coat or towel racks on the back of the doors. Furniture takes up a lot of space, and can make a room feel cluttered, so consider using shelves as nightstands that take up less room and still allow you to keep books and lighting close by. If you do not have room for towel racks or dressers, or even shelving for décor, consider installing shelves above the door to hold towels, décor, clothing and other items.

Giving the illusion of space

If living in a small space is not ideal for you, give the illusion of a larger space by strategically placing large mirrors above the mantle or sofa to make the room seem bigger. Instead of using a dresser, consider using bins that can easily slide under the bed in order to open up the room. Turn space under the stairs into storage space or even just a nook for more seating. Lastly, place dressers inside closets if possible to give more space in the bedrooms. With these tips and other tricks, living in a small home might not be so bad. You just need to know how to utilize the space you do have.

Interior decorating and remodeling news brought to you by baseboardradiatorcover.com

Source:
listotic.com/29-small-space-hacks/14/

Tags: home remodeling, home renovation, home décor, interior decorating, tips for decorating a home, decorating on a budget, diy décor, decorating small spaces

MongoDB Community Open House: June 1 in NYC

Latest MySQL Performance Blog posts - May 28, 2015 - 6:13am

If you can make it to Manhattan next Monday, please join me at the MongoDB Community Open House. The June 1 event is free and open to all. It runs from 3:30-6:30 p.m. just across the road from MongoDB World.

The MongoDB Community Open House, held at the New York Hilton Midtown, will feature technical presentations and sessions from key members of the MongoDB open source community. A reception will be held afterward featuring plenty of food, drink and fun. Everyone who attends will get a cool t-shirt, too. Space is limited, though, so I suggest registering now to reserve your spot.

The talks include:
  • “MATH is Hard: TTL Index Configuration and Considerations,” by Kim Wilkins of Rackspace
  • “Implementing MongoDB 3.0 Storage Engine,” with Facebook’s Igor Canadi and Christian Rober of Percona
  • “Is it Fast: Measuring MongoDB Performance,” by Tim Callaghan of Acme Benchmarking
  • “MongoDB for MySQL Users,” by Percona’s Alexander Rubin
  • “Rolling out RocksDB in Production,” by Charity Majors of Facebook
  • “Percona TokuMX and Percona TokuMXSE Performance Benefits,” by Percona’s Jon Tobin

In addition to the free t-shirts, food and drinks, we’ll also be raffling off some prizes, including a full-access pass to Percona Live Amsterdam, to be held this coming September 21-23. Our autumn conference, moved to a new month and venue by popular demand, will be bigger and better than ever with great speakers, tutorials and sessions around MySQL, NoSQL and data in the cloud.

The MongoDB Community Open House will be especially valuable for those unable to attend MongoDB World for budgetary reasons. Our intent is to make MongoDB World even better by adding more technical content. And if you are attending MongoDB World, I invite you to drop in even if only for a session or two.

I hope to see you Monday in NYC!

The post MongoDB Community Open House: June 1 in NYC appeared first on MySQL Performance Blog.

How to verify data consistency for replication(master:window OS, slave:Ubuntu OS)?

Latest Forum Posts - May 28, 2015 - 4:36am
Hi,

My MySQL 5.5 replication setup uses Windows 8 as the master and Ubuntu as the slave. How can I verify my slave's data consistency, since the Percona tools seem to need to verify data from the master (on Linux)?

Chern

Query Analytics USE is highest entry

Latest Forum Posts - May 28, 2015 - 1:10am
I need to find the source of USE (as in USE databasename), which is showing up as the top entry in the Query profile. It contains many of the metric values but doesn't appear to contain any code to tune.
Based on the fact that I can see around the same number of SELECTs (possibly SELECT 1) from the application servers, I am thinking these come from the application servers creating connections.

Is there an ignore list or some way to verify that these don't have some other underlying code issue? I want to remove the entry if it is superfluous noise, so I don't have to explain it to my manager.

Percona XtraBackup 2.2.11 is now available

Latest MySQL Performance Blog posts - May 28, 2015 - 12:59am

Percona is glad to announce the release of Percona XtraBackup 2.2.11 on May 28, 2015. Downloads are available from our download site or Percona Software Repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backups.

New Features:

  • Percona XtraBackup has been rebased on MySQL 5.6.24.

Bugs Fixed:

  • Version check would crash innobackupex and abort the backup on CentOS 5. Bug fixed #1255451.
  • Percona XtraBackup could crash when preparing the backup taken on MySQL/Percona Server 5.5 if there were open temporary tables during the backup. Bug fixed #1399471 (Fungo Wang).
  • Percona XtraBackup would fail to prepare the backup if the xtrabackup_logfile was larger than 512GB. Bug fixed #1425269.
  • The fix for bug #1403237 was incomplete: due to a wrong offset, the last copied batch of log records was copied from the wrong location. Bug fixed #1448447.
  • Percona XtraBackup now executes an extra FLUSH TABLES before executing FLUSH TABLES WITH READ LOCK to potentially lower the impact from FLUSH TABLES WITH READ LOCK. Bug fixed #1277403.
  • Regression introduced by fixing #1436793 in Percona XtraBackup 2.2.10 caused an error when taking an incremental backup from MariaDB 10. Bug fixed #1444541.
  • Percona XtraBackup now prints and stores the file-based binlog coordinates in xtrabackup_binlog_info even when GTID is enabled. Bug fixed #1449834.
  • Percona XtraBackup doesn’t print warnings anymore during the prepare phase about missing tables when a filtering option (--databases, --tables, etc.) is provided. Bug fixed #1454815 (Davi Arnaut).

Other bugs fixed: #1415191.

Release notes with all the bugfixes for Percona XtraBackup 2.2.11 are available in our online documentation. Bugs can be reported on the launchpad bug tracker. Percona XtraBackup is an open source, free MySQL hot backup software that performs non-blocking backups for InnoDB and XtraDB databases.

The post Percona XtraBackup 2.2.11 is now available appeared first on MySQL Performance Blog.

MySQL 5.7 key features

Latest MySQL Performance Blog posts - May 27, 2015 - 12:00am

The other day I was discussing new features of MySQL 5.7 with a Percona Support customer. After that conversation, I thought it would be a good idea to compile a list of important features of MySQL 5.7. The latest MySQL 5.7.6 release candidate (RC) is out and is packed with nice features. Here’s a list of some MySQL 5.7 key features.

Replication Enhancements:

  • One of the top features in MySQL 5.7 is multi-source replication. With multi-source replication you can point multiple masters to a single slave, so the limitation of a slave having only one master is lifted. There is a nice blog post on multi-source replication written by my colleague that you will find useful.
  • SHOW SLAVE STATUS is non-blocking since MySQL 5.7: it returns immediately without waiting for STOP SLAVE to finish, which can be blocked by a long-running SQL query in the replication SQL_THREAD. As a side note, the lock-free SHOW SLAVE STATUS feature was first implemented in Percona Server 5.5.
  • Now you can get all the information from SHOW SLAVE STATUS via Performance Schema tables. More details here from the manual.
  • With the new CHANGE REPLICATION FILTER command you can now modify replication filter rules without bouncing MySQL servers.
  • Since MySQL 5.7 you can perform CHANGE MASTER TO without first stopping the slave via the STOP SLAVE command. For further details check the manual.
  • There is now a different method for parallel replication. With the new implementation, the slave can apply transactions in parallel even within a single database/schema. Check slave_parallel_type for details.
  • Global Transaction Identifiers (GTID) is a feature that automatically tracks the replication position in the replication stream, and since MySQL 5.7 gtid_mode is a dynamic variable, which means you can enable/disable GTID in a replication topology without synchronizing and restarting the entire set of MySQL servers. As a side note, an online GTID deployment feature was added in Percona Server 5.6. With this feature you can deploy GTID on existing replication setups without marking the master read_only and stopping all slaves in the replication chain. My colleague Stephane wrote a nice blog post on performing the online migration without master downtime.

InnoDB Enhancements:

  • Now you can resize the InnoDB buffer pool online. Since MySQL 5.7 innodb_buffer_pool_size is a dynamic variable, which provides the ability to resize the buffer pool without restarting the MySQL server.
  • From MySQL 5.7, online ALTER TABLE also supports a RENAME INDEX clause to rename an index. This change takes place without a table-copy operation.
  • InnoDB supports the Transportable Tablespace feature for partitioned InnoDB tables. I wrote a blog post on Transportable Tablespaces that you will find useful.
  • The innochecksum utility is enhanced with new options. I also wrote a recent blog post on this same topic.
  • As of MySQL 5.7, InnoDB supports spatial indexes, and it also supports online DDL to add them, i.e. ALTER TABLE .. ALGORITHM=INPLACE.
  • Improved InnoDB buffer pool dump/reload operations: a new system variable, innodb_buffer_pool_dump_pct, allows you to specify the percentage of most recently used pages in each buffer pool to read out and dump.

Triggers:

  • As per the SQL standard, MySQL 5.7 now supports multiple triggers per table for the same trigger event (DML) and timing (BEFORE, AFTER), e.g. multiple triggers on the INSERT action.

Performance Improvements:

  • Bulk data load is improved in InnoDB in MySQL 5.7: InnoDB performs a bulk load when creating or rebuilding indexes. This method, known as a sorted index build, speeds up index creation and also benefits FULLTEXT indexes.
  • Previously there was a single page cleaner thread responsible for flushing dirty pages from the buffer pool(s). In MySQL 5.7, InnoDB parallel flushing was implemented: there is a separate background flushing thread per buffer pool instance, handling both the flush list and the LRU list. It's worth mentioning that two-threaded flushing was implemented in Percona Server 5.6.

Optimizer Improvements:

  • EXPLAIN FOR CONNECTION will let you run EXPLAIN for an already running query. This may yield important information for query optimization.
  • In MySQL 5.7 the optimizer avoids creating a temporary table for the result of UNION ALL queries, which helps reduce disk I/O and disk space when a UNION yields a large result set. I found Morgan Tocker's post on this informative.
  • JSON format for EXPLAIN, first introduced in MySQL 5.6, produces extended information. It is enhanced in version 5.7 by printing the total query cost, which makes it easier to see the difference between good and bad execution plans.
  • MySQL 5.7 now supports generated columns, also known as virtual columns, as a new feature. My colleague Alexander explained this really well in his blog post.

MySQL Test Suite Enhancements:

  • The MySQL test suite now uses InnoDB as its default storage engine. Along with that, many new tests were added and existing tests enhanced, including a test suite for replication with GTID.

Security Enhancements:

  • Since MySQL 5.7 there is a password expiration policy in place. Any user that connects to a MySQL server goes through a password expiration life cycle and must change the password. More from the manual here.
  • Database administrators can now lock/unlock user accounts. Check details here.
  • As of MySQL 5.7, installation creates only one 'root'@'localhost' user account, with a random password that is marked as expired. Installation no longer creates anonymous-user accounts, and there is no test database. MySQL generates the root password during data directory initialization, marks it as expired, and writes a message to stdout displaying the password.

Conclusion:
This is only a short list of new features in MySQL 5.7. Please feel free to add your favorite features in the comments section. Along with new features, there are quite a few deprecated/removed features in MySQL 5.7. You can get full list from the manual.

The post MySQL 5.7 key features appeared first on MySQL Performance Blog.

WSREP: last inactive check more than PT1.5S ago (PT2.61027S), skipping check

Latest Forum Posts - May 26, 2015 - 10:59pm
I have 3 nodes running on Google Cloud.

The server version is 5.6, the Galera version 3.8, running on Debian 7.

Today I had a second occurrence of a catastrophic problem that causes the nodes to crash. First one node crashes, then after some time a second node crashes, and then the whole system is down.

I took a look at the error log and found these messages right before the first node crashed:

2015-05-26 05:41:11 18566 [Warning] WSREP: last inactive check more than PT1.5S ago (PT1.58995S), skipping check
2015-05-26 05:41:13 18566 [Warning] WSREP: last inactive check more than PT1.5S ago (PT2.81387S), skipping check
2015-05-26 05:41:15 18566 [Note] WSREP: (21052d7d, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://10.240.31.226:4567
2015-05-26 05:41:16 18566 [Note] WSREP: (21052d7d, 'tcp://0.0.0.0:4567') reconnecting to 2cf899b9 (tcp://10.240.31.226:4567), attempt 0
2015-05-26 05:41:19 18566 [Warning] WSREP: last inactive check more than PT1.5S ago (PT6.08689S), skipping check
2015-05-26 05:41:22 18566 [Warning] WSREP: last inactive check more than PT1.5S ago (PT2.68869S), skipping check
2015-05-26 05:41:25 18566 [Note] WSREP: (21052d7d, 'tcp://0.0.0.0:4567') reconnecting to 39e0cac1 (tcp://10.240.135.197:4567), attempt 0
Killed
150526 05:41:28 mysqld_safe Number of processes running now: 0
150526 05:41:28 mysqld_safe WSREP: not restarting wsrep node automatically
150526 05:41:28 mysqld_safe mysqld from pid file /var/lib/mysql/mysql.pid ended


This log is from the bootstrap node, it was the first one to crash.

Can anyone tell me what is happening?

Thanks
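As background, the PT… values in those warnings are ISO-8601 durations: PT2.81387S means 2.8 seconds elapsed since the last liveness check, against an expected 1.5 seconds, which usually points to the node being stalled (I/O saturation, swapping, or VM CPU steal) rather than a pure network fault. A small illustrative script (the sample lines are copied from the post; the function name is made up) pulls the gap sizes out of an error log to gauge how severe the stalls were:

```python
import re

# Matches the elapsed duration in lines like:
#   [Warning] WSREP: last inactive check more than PT1.5S ago (PT2.81387S), skipping check
GAP_RE = re.compile(r"last inactive check more than PT[\d.]+S ago \(PT([\d.]+)S\)")

def wsrep_gaps(log_lines):
    """Return the elapsed seconds reported by each 'last inactive check' warning."""
    gaps = []
    for line in log_lines:
        m = GAP_RE.search(line)
        if m:
            gaps.append(float(m.group(1)))
    return gaps

log = [
    "2015-05-26 05:41:11 18566 [Warning] WSREP: last inactive check more than PT1.5S ago (PT1.58995S), skipping check",
    "2015-05-26 05:41:19 18566 [Warning] WSREP: last inactive check more than PT1.5S ago (PT6.08689S), skipping check",
]
print(wsrep_gaps(log))  # [1.58995, 6.08689]
```

A 6-second gap, as in the second line, is long enough for the other nodes to declare the peer dead, which fits the relay/reconnect messages that follow in the log.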

Nodes don't join after upgrade to XtraDB-Cluster-5.6.22-72.0 and xtrabackup-2.2.10

Latest Forum Posts - May 26, 2015 - 8:12am
Hi all,

below you will find the my.cnf and extracts of the logs from the joiner and donor. Unfortunately, I can't figure out the problem, so I need your help. I started the first node with /etc/init.d/mysql bootstrap-pxc.

Thank you in advance.
Cemil


DONOR - my.cnf


[mysqld]

binlog_format = ROW
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2
innodb_locks_unsafe_for_binlog = 1
innodb_log_file_size = 512M
innodb_data_file_path = ibdata1:24M:autoextend
innodb_buffer_pool_size = 12G
#innodb_flush_log_at_trx_commit = 2
query_cache_size = 0
query_cache_type = 0
bind-address = 0.0.0.0

log_slave_updates

##
## WSREP options
##
wsrep_provider = /services/mysql/lib/libgalera_smm.so
wsrep_provider_options = "gcache.size=256M"
wsrep_cluster_name = mysql_cluster
wsrep_cluster_address = gcomm://192.168.128.196,192.168.128.197,192.168.128.198
wsrep_node_name = db1-private
wsrep_node_address = 192.168.128.196
wsrep_sst_method = xtrabackup-v2
wsrep_sst_auth = puser:secret

# Optional wsrep Provider Settings

wsrep_node_incoming_address = 192.168.128.196
wsrep_sst_receive_address = 192.168.128.196
wsrep_slave_threads = 8
#wsrep_replicate_myisam = 1



JOINER - my.cnf

[mysqld]

binlog_format = ROW
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2
innodb_locks_unsafe_for_binlog = 1
innodb_log_file_size = 512M
innodb_data_file_path = ibdata1:24M:autoextend
innodb_buffer_pool_size = 12G
#innodb_flush_log_at_trx_commit = 2
query_cache_size = 0
query_cache_type = 0
bind-address = 0.0.0.0

log_slave_updates

##
## WSREP options
##
wsrep_provider = /services/mysql/lib/libgalera_smm.so
wsrep_provider_options = "gcache.size=256M"
wsrep_cluster_name = mysql_cluster
wsrep_cluster_address = gcomm://192.168.128.196,192.168.128.197,192.168.128.198
wsrep_node_name = db2-private
wsrep_node_address = 192.168.128.197
wsrep_sst_method = xtrabackup-v2
wsrep_sst_auth = puser:secret

# Optional wsrep Provider Settings

wsrep_node_incoming_address = 192.168.128.197
wsrep_sst_receive_address = 192.168.128.197
wsrep_slave_threads = 8
#wsrep_replicate_myisam = 1



JOINER - ERROR-LOG

WSREP_SST: [INFO] Waiting for SST streaming to complete! (20150526 16:15:27.N)
xbstream: Can't create/write to file '././backup-my.cnf' (Errcode: 2 - No such file or directory)
xbstream: failed to create file.
2015/05/26 16:15:38 socat[82119] E write(1, 0x802451000, 41): Broken pipe
WSREP_SST: [ERROR] Error while getting data from donor node: exit codes: 1 1 (20150526 16:15:38.N)
WSREP_SST: [ERROR] Cleanup after exit with status:32 (20150526 16:15:38.N)
2015-05-26 16:15:38 81866 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup-v2 --role 'joiner' --address '192.168.128.197' --auth 'puser:secret' --datadir '/services/mysql/data/' --defaults-file '/services/mysql/my.cnf' --parent '81866' '' : 32 (Broken pipe)
2015-05-26 16:15:38 81866 [ERROR] WSREP: Failed to read uuid:seqno from joiner script.
2015-05-26 16:15:38 81866 [ERROR] WSREP: SST failed: 32 (Broken pipe)
2015-05-26 16:15:38 81866 [ERROR] Aborting

2015-05-26 16:15:38 81866 [Warning] WSREP: 0.0 (db1-private): State transfer to 1.0 (db2-private) failed: -22 (Invalid argument)
2015-05-26 16:15:38 81866 [ERROR] WSREP: gcs/src/gcs_group.cpp:int gcs_group_handle_join_msg(gcs_group_t*, const gcs_recv_msg_t*)():731: Will never receive state. Need to abort.
2015-05-26 16:15:38 81866 [Note] WSREP: gcomm: terminating thread
2015-05-26 16:15:38 81866 [Note] WSREP: gcomm: joining thread
2015-05-26 16:15:38 81866 [Note] WSREP: gcomm: closing backend
2015-05-26 16:15:39 81866 [Note] WSREP: view(view_id(NON_PRIM,6641ad6c,18) memb {
a60e3d2b,0
} joined {
} left {
} partitioned {
6641ad6c,0
})
2015-05-26 16:15:39 81866 [Note] WSREP: view((empty))
2015-05-26 16:15:39 81866 [Note] WSREP: gcomm: closed
2015-05-26 16:15:39 81866 [Note] WSREP: mysqld: Terminated.
....



DONOR - ERROR-LOG
....
WSREP_SST: [INFO] Streaming with xbstream (20150526 16:15:27.N)
WSREP_SST: [INFO] Using socat as streamer (20150526 16:15:27.N)
WSREP_SST: [INFO] Using /tmp/tmp.4O9pcm11 as xtrabackup temporary directory (20150526 16:15:27.N)
WSREP_SST: [INFO] Using /tmp/tmp.2zGAaXD6 as innobackupex temporary directory (20150526 16:15:27.N)
WSREP_SST: [INFO] Streaming GTID file before SST (20150526 16:15:27.N)
WSREP_SST: [INFO] Evaluating xbstream -c ${INFO_FILE} | socat -u stdio TCP:192.168.128.197:4444; RC=( ${PIPESTATUS[@]} ) (20150526 16:15:27.N)
WSREP_SST: [INFO] Sleeping before data transfer for SST (20150526 16:15:27.N)
WSREP_SST: [INFO] Streaming the backup to joiner at 192.168.128.197 4444 (20150526 16:15:37.N)
WSREP_SST: [INFO] Evaluating innobackupex --defaults-file=/services/mysql/my.cnf --no-version-check $tmpopts $INNOEXTRA --galera-info --stream=$sfmt $itmpdir 2>${DATA}/innobackup.backup.log | socat -u stdio TCP:192.168.128.197:4444; RC=( ${PIPESTATUS[@]} ) (20150526 16:15:37.N)
2015/05/26 16:15:38 socat[25151] E write(5, 0x802451000, 8192): Broken pipe
WSREP_SST: [ERROR] innobackupex finished with error: 1. Check /services/mysql/data//innobackup.backup.log (20150526 16:15:38.N)
WSREP_SST: [ERROR] Cleanup after exit with status:22 (20150526 16:15:38.N)
WSREP_SST: [INFO] Cleaning up temporary directories (20150526 16:15:38.N)
2015-05-26 16:15:38 57661 [ERROR] WSREP: Failed to read from: wsrep_sst_xtrabackup-v2 --role 'donor' --address '192.168.128.197:4444/xtrabackup_sst//1' --auth 'puser:secret' --socket '/tmp/mysql.sock' --datadir '/services/mysql/data/' --defaults-file '/services/mysql/my.cnf' '' --gtid '8e7b1114-c27b-11e4-a13f-f6862ce824b1:4625349'
2015-05-26 16:15:38 57661 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup-v2 --role 'donor' --address '192.168.128.197:4444/xtrabackup_sst//1' --auth 'puser:secret' --socket '/tmp/mysql.sock' --datadir '/services/mysql/data/' --defaults-file '/services/mysql/my.cnf' '' --gtid '8e7b1114-c27b-11e4-a13f-f6862ce824b1:4625349': 22 (Invalid argument)
2015-05-26 16:15:38 57661 [ERROR] WSREP: Command did not run: wsrep_sst_xtrabackup-v2 --role 'donor' --address '192.168.128.197:4444/xtrabackup_sst//1' --auth 'puser:secret' --socket '/tmp/mysql.sock' --datadir '/services/mysql/data/' --defaults-file '/services/mysql/my.cnf' '' --gtid '8e7b1114-c27b-11e4-a13f-f6862ce824b1:4625349'
2015-05-26 16:15:38 57661 [Warning] WSREP: 0.0 (db1-private): State transfer to 1.0 (db2-private) failed: -22 (Invalid argument)
...




DONOR - innobackup.backup.log


150526 16:15:37 innobackupex: Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_file=/services/mysql/my.cnf;mysql_read_default_group=xtrabackup;mysql_socket=/tmp/mysql.sock' as 'puser' (using password: YES).
150526 16:15:37 innobackupex: Connected to MySQL server
150526 16:15:37 innobackupex: Starting the backup operation

innobackupex: Using server version 5.6.22-72.0
innobackupex: Created backup directory /tmp/tmp.2zGAaXD6

150526 16:15:37 innobackupex: Starting ibbackup with command: xtrabackup --defaults-file="/services/mysql/my.cnf" --defaults-group="mysqld" --backup --suspend-at-end --target-dir=/tmp/tmp.4O9pcm11 --tmpdir=/tmp/tmp.4O9pcm11 --extra-lsndir='/tmp/tmp.4O9pcm11' --stream=xbstream
innobackupex: Waiting for ibbackup (pid=25154) to suspend
innobackupex: Suspend file '/tmp/tmp.4O9pcm11/xtrabackup_suspended_2'

xtrabackup version 2.2.10 based on MySQL server 5.6.22 FreeBSD10.1 (amd64) (revision id: )
xtrabackup: uses posix_fadvise().
xtrabackup: cd to /services/mysql/data
xtrabackup: open files limit requested 0, set to 942318
xtrabackup: using the following InnoDB configuration:
xtrabackup: innodb_data_home_dir = ./
xtrabackup: innodb_data_file_path = ibdata1:24M:autoextend
xtrabackup: innodb_log_group_home_dir = ./
xtrabackup: innodb_log_files_in_group = 2
xtrabackup: innodb_log_file_size = 536870912
>> log scanned up to (58176722719)
xtrabackup: Generating a list of tablespaces
[01] Streaming ./ibdata1
^Gxtrabackup: Error writing file 'UNOPENED' (Errcode: 32 - Broken pipe)
xb_stream_write_data() failed.
^Gxtrabackup: Error writing file 'UNOPENED' (Errcode: 32 - Broken pipe)
[01] xtrabackup: Error: xtrabackup_copy_datafile() failed.
[01] xtrabackup: Error: failed to copy datafile.
innobackupex: got a fatal error with the following stacktrace: at /services/mysql/bin/innobackupex line 2703.
main::wait_for_ibbackup_file_create('/tmp/tmp.4O9pcm11/xtrabackup_suspended_2') called at /services/mysql/bin/innobackupex line 2723
main::wait_for_ibbackup_suspend('/tmp/tmp.4O9pcm11/xtrabackup_suspended_2') called at /services/mysql/bin/innobackupex line 1976
main::backup() called at /services/mysql/bin/innobackupex line 1601
innobackupex: Error: The xtrabackup child process has died at /services/mysql/bin/innobackupex line 2703.

Storing time-series data with MongoDB and TokuMX

Latest MySQL Performance Blog posts - May 26, 2015 - 7:12am

Storing time-series data is a frequent pattern for databases – be it for logs or for any kind of monitoring. Such data has the following properties: records are inserted but never updated, the insertion rate can be high, and records are likely to expire after some time. MongoDB and TokuMX are both good fits because of their flexible schema feature. But how can we handle data expiration efficiently? Several options are available: capped collections, TTL collections and partitioning (TokuMX only), but they all have different features and performance profiles.

Summary
  • Capped collections: very good insert performance, but not eligible for sharding and hard to predict when documents will expire.
  • TTL collections: expiration date is easy to enforce and compatible with sharding, but the purge of records is inefficient.
  • Partitioning: very good performance for inserts and purge, but not compatible with sharding and only available with TokuMX.
Capped collections

To create an app_stats capped collection with a size of 1GB, use the following command:

> db.createCollection( "app_stats", { capped: true, size: 1024*1024*1024 } )

MongoDB will write to this collection in a circular fashion: once the allocated files are full, data at the beginning of the first file is overwritten. This guarantees your collection will never exceed the size you set. However, predicting the size you will need to store 6 months of data can be tricky.
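That prediction is just arithmetic: multiply the expected insert rate by the average document size and the retention window, plus some headroom. A rough sizing helper (the rates, document sizes and headroom below are invented for illustration):

```python
def capped_size_bytes(docs_per_sec, avg_doc_bytes, retention_days, headroom_pct=20):
    """Estimate the `size` argument for createCollection, with safety headroom.

    Integer arithmetic throughout, so the result is exact and reproducible.
    """
    raw = docs_per_sec * avg_doc_bytes * retention_days * 86400
    return raw * (100 + headroom_pct) // 100

# e.g. 50 docs/s of ~300-byte stats documents kept for ~180 days:
size = capped_size_bytes(50, 300, 180)
print(size / 1024**3)  # ~260 GiB -- hard to guess precisely up front
```

The catch the post alludes to is that if any of these inputs drifts (traffic doubles, documents grow), a capped collection silently shortens your retention window instead of erroring out.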

Another benefit of capped collections is that they keep data in insertion order, so you don’t need to add an extra index to sort data by insertion date: this helps write performance.

Can you update records in a capped collection? Yes, but only as long as the update doesn’t increase the original size of the document; still, it is recommended to be light on updates, as you might experience strange errors with secondaries in some rare cases.

Another limitation is that sharding is not supported.

TTL collections

TTL collections take a different approach: they are normal collections, but they have an index with a special option. For instance:

> db.logs.createIndex({insertDate:1}, {expireAfterSeconds: 86400})

This index will make sure that records will be expired automatically after one day. Pretty neat!

And as a TTL collection is a regular collection, you can shard it if you need.

However the main limitation is the purge process: every minute, a background thread will look for documents that are expired and if it finds some, it will remove them. This is not really different from the application running a cron job every minute to remove old documents: it adds a constant purge workload that can be detrimental to insertion performance.
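Conceptually, that background purge behaves like the following sketch (plain Python over an in-memory list, not MongoDB's actual implementation):

```python
from datetime import datetime, timedelta

def purge_expired(docs, ttl_seconds, now=None):
    """Keep only documents whose insertDate is within the TTL window.

    Mirrors what the TTL monitor does each minute: scan the TTL index,
    delete everything older than now - ttl_seconds.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(seconds=ttl_seconds)
    return [d for d in docs if d["insertDate"] >= cutoff]

now = datetime(2015, 5, 26, 12, 0, 0)
docs = [
    {"_id": 1, "insertDate": datetime(2015, 5, 25, 11, 0, 0)},  # > 1 day old
    {"_id": 2, "insertDate": datetime(2015, 5, 26, 11, 0, 0)},  # 1 hour old
]
print(purge_expired(docs, 86400, now))  # only _id 2 survives
```

The cost problem is visible in the sketch: every pass touches the data to find expired documents, so the purge workload scales with the insert rate and competes with it for I/O.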

Note that TTL indexes do not work with capped collections.

Partitioning (TokuMX)

With TokuMX you can partition your data like you would with MySQL. For instance, if you want to create a collection partitioned on an insertDate field, you would use these commands:

> db.createCollection('stats_part',{primaryKey:{insertDate:1,_id:1}, partitioned: true})
> db.stats_part.addPartition({insertDate:ISODate("2015-05-31T23:59:59")})
> db.stats_part.addPartition({insertDate:ISODate("2015-06-30T23:59:59")})
[...]

The main benefit of partitioning is that expiring data is extremely simple and fast: remove the corresponding partition(s) with the dropPartition() function. Another nice property is that you can be very flexible regarding how large your partitions can grow. This could deserve a whole blog post, but let me give a quick example.

Suppose you want to expire data after 6 months. With regular MongoDB, indexes use B-trees, so insertion is only fast if the indexes fit in memory. So if you could create a partitioned collection, insertions would only be fast if the partition you are writing to (the last one) fit in memory, and you would probably have to create a partition for every day. With TokuMX there is no such limitation, and insertion performance will be just as good with larger partitions, like one partition per week or per month.
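The application-side bookkeeping for rolling partitions is plain date arithmetic. A sketch of computing the month-end boundaries to feed into addPartition() (the helper names are made up; only the boundary math is shown):

```python
import calendar
from datetime import datetime

def month_end(year, month):
    """Last second of the given month -- a natural addPartition() boundary."""
    last_day = calendar.monthrange(year, month)[1]
    return datetime(year, month, last_day, 23, 59, 59)

def next_boundaries(start_year, start_month, n):
    """Month-end boundaries for n consecutive months, for rolling partitions."""
    out = []
    y, m = start_year, start_month
    for _ in range(n):
        out.append(month_end(y, m))
        m += 1
        if m > 12:
            y, m = y + 1, 1
    return out

print(next_boundaries(2015, 5, 3))
# [2015-05-31 23:59:59, 2015-06-30 23:59:59, 2015-07-31 23:59:59]
```

A periodic job would then create the next boundary ahead of time and call dropPartition() on anything older than the retention window.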

Any drawback of partitioning? It involves some application overhead as you need to write the logic to periodically drop old partitions and create new ones. And sharding is not supported.

Conclusion

There is no one-size-fits-all solution when it comes to storing time-series data in MongoDB and TokuMX, but several options with their own benefits and trade-offs. Note that if your collection needs to be sharded, a TTL index is the only option. And if you are using TokuMX, partitioning is something to look at if you were planning a capped collection.

The post Storing time-series data with MongoDB and TokuMX appeared first on MySQL Performance Blog.

What are the implications of changing repl.commit_order?

Latest Forum Posts - May 26, 2015 - 12:32am
I'm currently doing some trials of Galera Cluster with Percona 5.6. I've progressed to testing a 2 node cluster on baremetal. With repl.commit_order set to 0 (off) the TPS performance as measured with sysbench is comparable to that of a standalone Percona instance.

When repl.commit_order is set to 3 (the default), TPS performance is around 10% compared to that of a standalone Percona instance.

Sysbench command in each instance is:

sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua --oltp-table-size=5000000 --oltp-tables-count=1 --num-threads=32 --mysql-host=127.0.0.1 --mysql-user=root --mysql-password=**** --mysql-port=3306 --report-interval=1 --max-requests=0 --max-time=0 --mysql-db=sbtest run

Firstly: what are the implications of the different commit_order modes?

Secondly: Can anyone suggest any other tweaks that will improve performance?
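For anyone reproducing the test above: repl.commit_order is a Galera provider option, so it is passed through wsrep_provider_options rather than set as a plain server variable. A minimal my.cnf fragment (a sketch, assuming an otherwise standard PXC setup; 0 bypasses commit ordering entirely and 3, the default, enforces strict commit order):

```ini
[mysqld]
# Hypothetical fragment: toggle Galera commit ordering for benchmarking.
wsrep_provider_options="repl.commit_order=0"
```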

Row size too large (> 8126)

Latest Forum Posts - May 25, 2015 - 3:50am
Apologies if this is the wrong thread for this question, but I feel it gives context as to what we are running.

Basically I am running a 6-node cluster which works beautifully; however, I have hit a snag and I am not 100% sure how to resolve it, as I have conflicting documentation and ideas.

On my initial import from a dump I changed the engine from MyISAM to InnoDB; the import worked without fault, and so does the website reading the data.

Now fast forward a few days. A user does an update via the CMS and gets this error:

Row size too large (> 8126). Changing some columns to TEXT or BLOB or using ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED may help. In current row format, BLOB prefix of 768 bytes is stored inline.

So I start work on fixing the error. The most common fix is to use the Barracuda file format, so off to the non-production machine I go.

Fresh install Percona-Server
Add:
innodb_file_format = barracuda
innodb_file_per_table = 1

run mysql_install_db

start mysql

Confirm innodb options:

+--------------------------+-----------+
| Variable_name | Value |
+--------------------------+-----------+
| innodb_file_format | Barracuda |
| innodb_file_format_check | ON |
| innodb_file_format_max | Antelope |
+--------------------------+-----------+

Now I import the database table I am having issues with, using row_format = dynamic, but I see:

ERROR 1118 (42000) at line 25: Row size too large (> 8126). Changing some columns to TEXT or BLOB may help. In current row format, BLOB prefix of 0 bytes is stored inline.

Okay, fine, let's try row_format = compact: that works.

But wait, why do the docs say to use Barracuda when you hit this issue, when in fact it does not help? It seems like it's the other way around. I don't know, maybe I am missing something here.

The table I am working on has over 600 fields, all text, tinytext, float, int, or varchar. Before you flame me about changing the varchars to text: I have already tried that, with the same result.
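For context on the steps described above, one thing worth checking is that the row format is requested explicitly per table: the global innodb_file_format setting alone does not change the format of tables created without a ROW_FORMAT clause. A minimal DDL sketch (table and column names are made up; this assumes innodb_file_format=Barracuda and innodb_file_per_table=1 are set as shown earlier in the post):

```sql
-- Hypothetical wide table; names are illustrative only.
-- Under Barracuda, DYNAMIC stores long TEXT/BLOB values fully
-- off-page (0-byte inline prefix), which is what the error text
-- is hinting at.
CREATE TABLE wide_demo (
  id INT NOT NULL PRIMARY KEY,
  c1 TEXT,
  c2 TEXT
) ENGINE=InnoDB ROW_FORMAT=DYNAMIC;

-- Or convert an existing table in place:
ALTER TABLE wide_demo ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;
```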

