
Want to understand server and storage engine thread management.

Latest Forum Posts - March 29, 2016 - 11:14pm
Hi All,

Thanks for the wonderful articles. I am new to MySQL and about to use MySQL in my production environment.

I'd like to know how threads are managed by the MySQL server.

- What happens when a thread fails? (Does MySQL try to restart it, or does the server crash?)

- Which threads are necessary for MySQL to run?

These may be silly questions, but coming from an Oracle DBA background I have many questions about this.
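One hedged way to start exploring this is the performance_schema.threads table (available in MySQL 5.6 and later), which lists every background and foreground thread the server runs:

SELECT NAME, TYPE, PROCESSLIST_ID
FROM performance_schema.threads
ORDER BY TYPE, NAME;

Background threads (TYPE = 'BACKGROUND') belong to the server and storage engine internals; foreground threads map to client connections.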

Read-write split routing performance in MaxScale

Latest MySQL Performance Blog posts - March 29, 2016 - 5:42pm

In this blog post, we’ll discuss read-write split routing performance in MaxScale.

The two previous posts have shown how to set up high availability (HA) with MaxScale using asynchronous replication, and how we monitor replication.

Now let’s focus on the routing module performing read-write splits.

This is our current configuration:

[Splitter Service]
type=service
router=readwritesplit
servers=percona1, percona2
max_slave_replication_lag=30
user=maxscale
passwd=264D375EC77998F13F4D0EC739AABAD4

This router module is designed to spread the read queries across multiple servers (slaves by default), and send the write queries to a single server: the master.

This module is intended to work with asynchronous master-slave replication, but it also works with Galera replication if you plan to write to a single node.

So what is routed to the Master?

  • Write statements
  • All statements within an open transaction, even if this transaction is read only
  • Stored procedure and user-defined function calls
  • DDL statements
  • Execution of prepared statements (EXECUTE)
  • All statements using temporary tables

Example:

    • percona1: master
    • percona2 and percona3: slaves

Let’s connect to MaxScale with the MySQL interactive client:

mysql> select @@hostname;
+------------+
| @@hostname |
+------------+
| percona2   |
+------------+
mysql> start transaction;
mysql> select @@hostname;
+------------+
| @@hostname |
+------------+
| percona1   |
+------------+
mysql> rollback;
mysql> select @@hostname;
+------------+
| @@hostname |
+------------+
| percona2   |
+------------+

Now let’s try with a READ ONLY transaction:

mysql> start transaction read only;
mysql> select @@hostname;
+------------+
| @@hostname |
+------------+
| percona1   |
+------------+

As we can see, MaxScale doesn’t support READ ONLY transactions. It considers them the same as any transaction. This means they are routed to the master as a WRITE.

We’ve already seen the max_slave_replication_lag optional parameter, but there are some others:

      • max_slave_connections: defines the maximum number of slaves a router session uses; the default is to use all available slaves
      • use_sql_variables_in: defines where queries reading session variables should be routed. Valid values are master and all (the latter being the default)
      • weightby: defines the name of the parameter used to calculate server weights

Now let’s play with weightby. In this configuration, we will target 10% of the reads to percona2, and 90% to percona3:

[Splitter Service]
type=service
router=readwritesplit
servers=percona1, percona2, percona3
weightby=myweight
...

[percona2]
type=server
address=192.168.90.3
port=3306
protocol=MySQLBackend
myweight=1

[percona3]
type=server
address=192.168.90.4
port=3306
protocol=MySQLBackend
myweight=9

We restart MaxScale, and verify the settings of the service:

# maxadmin -pmariadb show service "Splitter Service"
Service 0x363b460
        Service:                                Splitter Service
        Router:                                 readwritesplit (0x7fe7f1e88540)
        State:                                  Started
        Number of router sessions:              0
        Current no. of router sessions:         0
        Number of queries forwarded:            0
        Number of queries forwarded to master:  0
        Number of queries forwarded to slave:   0
        Number of queries forwarded to all:     0
        Master/Slave percentage:                0.00%
        Connection distribution based on myweight server parameter.
                Server               Target %    Connections  Operations
                                                 Global  Router
                percona3             90.0%       0       0       0
                percona2             10.0%       0       0       0
                percona1             100.0%      0       0       0
        Started:                Wed Feb 24 22:39:27 2016
        Root user access:       Disabled
        Backend databases
                192.168.90.4:3306  Protocol: MySQLBackend
                192.168.90.3:3306  Protocol: MySQLBackend
                192.168.90.2:3306  Protocol: MySQLBackend
        Routing weight parameter:       myweight
        Users data:             0x36397c0
        Total connections:      2
        Currently connected:    2
        SSL:                    Disabled

The target % seems correct, let’s test it!

for i in `seq 1 10`; do
    mysql -h 192.168.90.5 -BN -umanager -ppercona -e "select @@hostname; select sleep(10)" 2>/dev/null &
done
percona2
percona2
percona2
percona2
percona2
percona2
percona3
percona3
percona3
percona3

That doesn’t seem good! Let’s check the service again:

Service 0x363b460
        Service:                                Splitter Service
        Router:                                 readwritesplit (0x7fe7f1e88540)
        State:                                  Started
        Number of router sessions:              10
        Current no. of router sessions:         10
        Number of queries forwarded:            30
        Number of queries forwarded to master:  0
        Number of queries forwarded to slave:   30
        Number of queries forwarded to all:     0
        Master/Slave percentage:                0.00%
        Connection distribution based on myweight server parameter.
                Server               Target %    Connections  Operations
                                                 Global  Router
                percona3             90.0%       10      10      5
                percona2             10.0%       10      10      5
                percona1             100.0%      10      10      0
        Started:                Wed Feb 24 22:39:27 2016
        Root user access:       Disabled
        Backend databases
                192.168.90.4:3306  Protocol: MySQLBackend
                192.168.90.3:3306  Protocol: MySQLBackend
                192.168.90.2:3306  Protocol: MySQLBackend
        Routing weight parameter:       myweight
        Users data:             0x36397c0
        Total connections:      12
        Currently connected:    12
        SSL:                    Disabled

Five operations each . . . this looks like a normal load balancer: 50%-50%.

So that doesn’t work as we expected. Let’s have a look at other router options:

      • slave_selection_criteria. Controls how the router chooses the slaves and how it load balances the sessions. There are some parameter options:
        • LEAST_GLOBAL_CONNECTIONS. Slave with least connections from MaxScale, not on the server itself
        • LEAST_ROUTER_CONNECTIONS. Slave with least connections from this service
        • LEAST_BEHIND_MASTER. Slave with smallest replication lag
        • LEAST_CURRENT_OPERATIONS. Slave with least active operations (this is the default)
      • master_accept_reads. Uses the master for reads

There are some others; please check the online manual for:

      • max_sescmd_history
      • disable_sescmd_history

That explains the behavior we just observed. But what if we want to use the weight setting, and not spread the reads evenly across the slaves?

I found the answer on IRC in the #maxscale channel on freenode. Markus Makela (markusjm) explained to me that the default configuration in 1.3.0 is to use all the slaves and load balance the actual statements. So to achieve what we want, we need to use these options in the service section:

router_options=slave_selection_criteria=LEAST_GLOBAL_CONNECTIONS
max_slave_connections=1

Let’s test it:

for i in `seq 1 10`; do
    mysql -h 192.168.90.5 -BN -umanager -ppercona -e "select @@hostname; select sleep(10)" 2>/dev/null &
done
percona3
percona3
percona3
percona3
percona3
percona2
percona3
percona3
percona3
percona3

Service 0x1d88560
        Service:                                Splitter Service
        Router:                                 readwritesplit (0x7f9c018c3540)
        State:                                  Started
        Number of router sessions:              10
        Current no. of router sessions:         10
        Number of queries forwarded:            30
        Number of queries forwarded to master:  0
        Number of queries forwarded to slave:   30
        Number of queries forwarded to all:     0
        Master/Slave percentage:                0.00%
        Connection distribution based on myweight server parameter.
                Server               Target %    Connections  Operations
                                                 Global  Router
                percona3             90.0%       9       9       9
                percona2             10.0%       1       1       1
                percona1             100.0%      10      10      0
        Started:                Wed Feb 24 22:58:21 2016
        Root user access:       Disabled
        Backend databases
                192.168.90.4:3306  Protocol: MySQLBackend
                192.168.90.3:3306  Protocol: MySQLBackend
                192.168.90.2:3306  Protocol: MySQLBackend
        Routing weight parameter:       myweight
        Users data:             0x1d8a480
        Total connections:      12
        Currently connected:    12
        SSL:                    Disabled

Yes! It worked as expected!

max_slave_connections sets the maximum number of slaves a router session uses at any moment. The default is to use all available slaves. When we set it to 1, we get one master and one slave connection per client, and the connections are balanced according to the server weights. The new mechanism uses statements instead of connections for load balancing (see MXS-588).

Finally, this routing module also supports routing hints. I’ll cover them in my next MaxScale post.

More information: https://github.com/mariadb-corporation/MaxScale/blob/develop/Documentation/Routers/ReadWriteSplit.md
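As a teaser, here is a minimal, hedged sketch of what a routing hint looks like, assuming a hintfilter is attached to the service (filters=Hint in the service section); check the MaxScale manual for the exact setup:

-- Route this single statement to the master, even though it is a read:
SELECT @@hostname; -- maxscale route to master

-- Or pin a statement to a named server from the configuration:
SELECT @@hostname; -- maxscale route to server percona3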

Lower Case Table Names

Latest Forum Posts - March 29, 2016 - 2:24am
I want to force Percona Server 5.7 (running on Ubuntu 15.10) to default to lowercase table names.
I hoped to get this working by inserting "lower-case-table-names=1" in my.cnf.

This setting is being ignored for some reason - any ideas what the problem could be?

thanks
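One quick, hedged check: the option must sit under the [mysqld] section of my.cnf, and the server needs a restart to pick it up. To verify what the server actually read:

SHOW VARIABLES LIKE 'lower_case_table_names';

If this still returns 0, the server never saw the option; a wrong section header ([mysql] or [client] instead of [mysqld]) or a stray character on the option line are common causes.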

Errors on restoration

Latest Forum Posts - March 29, 2016 - 1:57am
Hello,

I encountered some strange errors when trying to restore a MySQL backup made by my hosting provider.
I haven't managed to solve the error, even with a new MySQL config file.
Perhaps you can help? I am on Mac OS X, mysql Ver 14.14 Distrib 5.7.11.

Code:
sh-3.2# ./Downloads/xtrabackup-1.6.2/bin/innobackupex ./../Documents/chs-web-02

InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
and Percona Inc 2009-2011. All Rights Reserved.

This software is published under the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.

160329 10:23:05 innobackupex: Starting mysql with options: --unbuffered --
160329 10:23:05 innobackupex: Connected to database with mysql child process (pid=23088)
160329 10:23:12 innobackupex: Connection to database server closed
IMPORTANT: Please check that the backup run completes successfully.
           At the end of a successful backup run innobackupex
           prints "completed OK!".

innobackupex: Using mysql Ver 14.14 Distrib 5.7.11, for osx10.9 (x86_64) using EditLine wrapper
innobackupex: Using mysql server version Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Use of uninitialized value $option_ibbackup_binary in concatenation (.) or string at ./Downloads/xtrabackup-1.6.2/bin/innobackupex line 1971.
Can't exec "--print-param": No such file or directory at ./Downloads/xtrabackup-1.6.2/bin/innobackupex line 1972.
innobackupex: fatal error: no 'mysqld' group in MySQL options
innobackupex: fatal error: OR no 'datadir' option in group 'mysqld' in MySQL options

Thanks

Duplicates when sorting by date

Latest Forum Posts - March 29, 2016 - 12:02am
Duplicate rows appear when running a SELECT query on a table sorted by date, with a LIMIT set for pagination.

http://joxi.ru/GrqWO1SNJWgd2z

http://joxi.ru/Rmznp9FWJkobmO

SELECT `iblock`.`ID` AS `ID`, iblock.TIMESTAMP_X FROM `b_iblock_element` `iblock` WHERE `iblock`.`ACTIVE` = 'Y' AND `iblock`.`ACTIVE_TO` > '2016-03-29 10:54:00' AND `iblock`.`IBLOCK_ID` = 39 ORDER BY `iblock`.`TIMESTAMP_X` DESC LIMIT 20, 10;
SELECT `iblock`.`ID` AS `ID`, iblock.TIMESTAMP_X FROM `b_iblock_element` `iblock` WHERE `iblock`.`ACTIVE` = 'Y' AND `iblock`.`ACTIVE_TO` > '2016-03-29 10:54:00' AND `iblock`.`IBLOCK_ID` = 39 ORDER BY `iblock`.`TIMESTAMP_X` DESC LIMIT 30, 10;
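A likely cause: TIMESTAMP_X is not unique, so rows that share a timestamp have no deterministic order between executions, and consecutive LIMIT pages can overlap. A hedged fix is to add a unique tie-breaker (here the primary key ID, assuming it is unique) to the ORDER BY:

SELECT `iblock`.`ID` AS `ID`, iblock.TIMESTAMP_X
FROM `b_iblock_element` `iblock`
WHERE `iblock`.`ACTIVE` = 'Y'
  AND `iblock`.`ACTIVE_TO` > '2016-03-29 10:54:00'
  AND `iblock`.`IBLOCK_ID` = 39
ORDER BY `iblock`.`TIMESTAMP_X` DESC, `iblock`.`ID` DESC
LIMIT 20, 10;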

Percona Server 5.7 Multi-source Replication

Latest Forum Posts - March 28, 2016 - 10:48am
Does Percona Server 5.7 support Multi-source Replication as supported in Oracle MySQL 5.7?
Are there any differences between Multi-source Replication in Percona Server 5.7 vs. Oracle MySQL 5.7?
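Assuming Percona Server 5.7 follows the upstream 5.7 feature set here (it is based on the upstream codebase), multi-source replication is configured per channel; a hedged sketch with placeholder host and credentials:

-- Multi-source requires master_info_repository=TABLE and relay_log_info_repository=TABLE
CHANGE MASTER TO
    MASTER_HOST = 'source1.example.com',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'repl_password',
    MASTER_AUTO_POSITION = 1
FOR CHANNEL 'source1';

START SLAVE FOR CHANNEL 'source1';
SHOW SLAVE STATUS FOR CHANNEL 'source1'\G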

Clone a DB

Latest Forum Posts - March 28, 2016 - 9:48am
I have a database text_production and I want to create a database called txt_development with the same information. I have a 3-node Galera cluster with Percona 5.6, and I don't want to cause any Galera locks, so I don't think I can just dump the data from the production DB right into the development DB. Is there an easy way to do this? The DB is ~53GB.

Any ideas?

MySQL 5.7 primary key lookup results: is it really faster?

Latest MySQL Performance Blog posts - March 28, 2016 - 8:51am

This blog examines MySQL 5.7’s primary key lookup results, and determines whether MySQL 5.7 is really faster than its earlier versions.

MySQL 5.7 was released some time ago, and now that the dust has settled it’s a good time to review its performance improvements.

I’m not doing this just to satisfy my own curiosity! Many customers still running MySQL 5.6 (or even MySQL 5.5) often ask: “How much of a performance gain can we expect by switching to 5.7? Or will it actually be a performance hit, especially after Peter’s report here: https://www.percona.com/blog/2013/02/18/is-mysql-5-6-slower-than-mysql-5-5/?”

To determine the answer, we’ll look at some statistics. There are a variety of workloads to consider, and we will start with the simplest one: MySQL primary key lookups for data that fits into memory. This workload does not involve transactions and is fully CPU-bound.

The full results, scripts and configurations can be found on our GitHub page.
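For context, the statement shape of this workload is the classic sysbench point select: a single-row lookup by primary key. The table and column names below are sysbench defaults, assumed here rather than taken from the post:

SELECT c FROM sbtest1 WHERE id = 4242;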

For this test, my server is a 56-logical-thread system (2 sockets / 14 cores each / 2 hyper-threads each) powered by “Intel(R) Xeon(R) E5-2683 v3 @ 2.00GHz” CPUs.

These are the primary results:

Up to 20 threads, MySQL 5.5 clearly outperforms MySQL 5.7. After 20 threads, however, it hits scalability issues and starts struggling with throughput. MySQL 5.6 is a different story – it outperforms 5.7 up to 120 threads. Beyond 120 threads, MySQL 5.7 again scales much better, and it can maintain throughput all the way to 1000 threads.

The above results are from a setup where the client and server run on the same machine. To verify them, I also ran the test with the client and server located on different servers, connected via a 10Gb network.

Here are the results from that setup:

In this case, we pushed more load on the server (since the client does not share resources with MySQL), and we can see that MySQL 5.7 outperformed MySQL 5.6 after 68 threads (with MySQL 5.6 showing scalability problems even sooner).

There is another way to improve MySQL 5.6 results on large numbers of threads: good old innodb-thread-concurrency. Let’s see the MySQL 5.6 results after setting innodb-thread-concurrency=64:

We can see that using innodb-thread-concurrency improves MySQL 5.6 results when getting into hundreds of threads.
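innodb-thread-concurrency is dynamic, so it is cheap to experiment with; a minimal sketch (64 matches the value used above; 0 restores the unlimited default):

SET GLOBAL innodb_thread_concurrency = 64;
SHOW GLOBAL VARIABLES LIKE 'innodb_thread_concurrency';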

While investigating ways to improve overall throughput, I found disabling PERFORMANCE_SCHEMA during MySQL startup is a good option. The numbers got better after doing so. Below are the numbers for 5.6 and 5.7 with PERFORMANCE_SCHEMA disabled.

For MySQL 5.6:

For MySQL 5.7:

For MySQL 5.7, PERFORMANCE_SCHEMA’s overhead is quite visible.
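Note that PERFORMANCE_SCHEMA can only be toggled at startup (performance_schema = OFF under [mysqld] in my.cnf); at runtime you can only verify what the server was started with:

SHOW GLOBAL VARIABLES LIKE 'performance_schema';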

Conclusions

I can say that Oracle clearly did a good job with MySQL 5.7, but they focused on primary key lookups. They wanted to report 1.6M QPS.

I was not able to get to 1.6M; the best I could achieve was 470K QPS (with PERFORMANCE_SCHEMA disabled). Full disclosure: I used sysbench 0.5 with Lua scripts and no prepared statements during this test. Oracle used the older sysbench 0.4 (with prepared statements), and their system had 144 logical threads.

MySQL 5.7, however, continues the tradition of slowing down at low thread counts: MySQL 5.6 was slower than MySQL 5.5, and MySQL 5.7 is slower than MySQL 5.6.

PRIMARY KEY lookups aren’t the only workload type – there are many cases, some much more interesting! I will show the performance metrics for other workloads in upcoming posts.

MySQL replication error Got timeout reading communication packets

Latest Forum Posts - March 28, 2016 - 6:42am
We have a simple master-slave replication setup. On the master we create FEDERATED tables and query them to bulk insert into a local table. On the master server we are not getting any issues, but on the slave side we are getting: SQL Error: Got timeout reading communication packets on query. Default database: xyz. Query: 'INSERT into emp'

We have set max_allowed_packet to 200MB on both slave and master; ibdata is 4GB fixed. On the server we have enabled innodb_file_per_table.

Is this because max_allowed_packet is too low? Or is there a particular variable we have to take care of when using FEDERATED tables on the master and replicating them?
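A hedged place to start is the slave's network timeouts rather than the packet size, since the error text describes a read timing out; checking both is cheap (120 below is an arbitrary example value):

SHOW GLOBAL VARIABLES
WHERE Variable_name IN ('max_allowed_packet', 'slave_net_timeout', 'net_read_timeout');

-- Example only: raise the slave's read timeout, then restart the slave threads
SET GLOBAL slave_net_timeout = 120;
STOP SLAVE;
START SLAVE;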

With Regards
Raghupradeep

change in command line?

Latest Forum Posts - March 26, 2016 - 5:20pm
I had been using this in cron for full and incremental backups

FULL
innobackupex --parallel=4 --rsync --socket /var/lib/mysql/mysql.sock /backup/sql
INCREMENTAL
innobackupex --incremental --parallel=4 --rsync --socket /var/lib/mysql/mysql.sock /backup/sql

The 2nd line, for incrementals, used to work with xtrabackup version 2.2.
It would put both full and incremental backups in the same directory, which is what I want.

I have now upgraded to
percona-xtrabackup-2.3.4

And the incremental cron job above also creates FULL backups!

What am I doing wrong??

OS is CentOS 6.7

Thanks

upgrade percona-server-server-5.6 to 5.7

Latest Forum Posts - March 26, 2016 - 11:26am
Hello!
My task is to upgrade Percona to 5.7, on jessie 8.2.

That's what I'm doing:
apt-get remove percona*
apt-get install percona-server-server-5.7


But Percona 5.7 did not manage to start; here is the output:
2016-03-26T17:57:40.972545Z 0 [Warning] Could not increase number of max_open_files to more than 65536 (request: 81925)
2016-03-26T17:57:40.972803Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2016-03-26T17:57:40.974366Z 0 [ERROR] --initialize specified but the data directory has files in it. Aborting.
2016-03-26T17:57:40.974396Z 0 [ERROR] Aborting

dpkg: error processing package percona-server-server-5.7 (--configure):
subprocess installed post-installation script returned error exit status 1
Processing triggers for libc-bin (2.19-18+deb8u2) ...
Processing triggers for systemd (215-17+deb8u3) ...
Errors were encountered while processing:
percona-server-server-5.7
E: Sub-process /usr/bin/dpkg returned an error code (1)


2016-03-26T17:59:44.496118Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2016-03-26T17:59:44.853137Z 0 [Warning] InnoDB: New log files created, LSN=45790
2016-03-26T17:59:44.918997Z 0 [Warning] InnoDB: Creating foreign key constraint system tables.
2016-03-26T17:59:44.986867Z 0 [Warning] No existing UUID has been found, so we assume that this is the first time that this server has been started. Generating a new UUID: 85bc3d55-f37c-11e5-b0d8-901b0e8b1229.
2016-03-26T17:59:44.992031Z 0 [Warning] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
2016-03-26T17:59:45.154714Z 0 [Warning] CA certificate ca.pem is self signed.
2016-03-26T17:59:45.334111Z 1 [Warning] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
2016-03-26T17:59:50.139410Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2016-03-26T17:59:50.289171Z 0 [Warning] CA certificate ca.pem is self signed.
dpkg: error processing package percona-server-server-5.7 (--configure):
subprocess installed post-installation script returned error exit status 1
E: Sub-process /usr/bin/dpkg returned an error code (1)
W: Operation was interrupted before it could finish

dpkg -l | grep percona
iF percona-server-server-5.7 5.7.11-4-1.jessie

When I upgraded from 5.5 to 5.6 I did not have such problems.

Please tell me what the problem could be?

upgrade 5.5 to 5.6 will not startup ERROR Aborting last message in error.log WHY?

Latest Forum Posts - March 25, 2016 - 11:57am
Why can I not start up 5.6.29 on a CentOS 7 host where Percona 5.5.28 runs without problems?

I stopped the slave and shut down 5.5.28 cleanly, then removed the old symlink to the MySQL 5.5.28 binaries.

NEXT yum install Percona-Server-shared-56-5.6.29-rel76.2.el6.x86_64.rpm
yum install Percona-Server-client-56-5.6.29-rel76.2.el6.x86_64.rpm
yum install Percona-Server-server-56-5.6.29-rel76.2.el6.x86_64.rpm

NEXT
I want to start up MySQL for the first time and run the mysql_upgrade script:

# /usr/sbin/mysqld --skip-grant-tables --user=mysql &
[1] 2051
# 2016-03-25 11:33:59 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2016-03-25 11:33:59 0 [Note] /usr/sbin/mysqld (mysqld 5.6.29-76.2-log) starting as process 2051 ...
2016-03-25 11:33:59 2051 [ERROR] Can't read from messagefile '/usr/share/mysql/english/errmsg.sys'

BUT it never starts up, and there is no real info in the error.log; it only shows ERROR Aborting ... why not?

2016-03-25 11:12:30 1825 [Note] InnoDB: Waiting for purge to start
2016-03-25 11:12:30 1825 [Note] InnoDB: Percona XtraDB (http://www.percona.com) 5.6.29-76.2 started; log sequence number 23270845141317
2016-03-25 11:12:30 1825 [ERROR] /usr/sbin/mysqld: unknown variable 'table_cache=4096'
2016-03-25 11:12:30 1825 [ERROR] Aborting

2016-03-25 11:12:30 1825 [Note] Binlog end
2016-03-25 11:12:30 1825 [Note] Shutting down
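The log above already points at the likely cause: MySQL 5.6 no longer accepts table_cache (it was renamed table_open_cache), so an old my.cnf carrying table_cache=4096 aborts startup. A hedged fix is to rename the option in my.cnf, then verify after a successful start:

-- After changing table_cache=4096 to table_open_cache=4096 in my.cnf:
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';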

Memory usage in percona 5.7

Latest Forum Posts - March 25, 2016 - 3:37am
I just upgraded from Percona 5.6 to 5.7 on my dev machine, and then noticed the memory usage is kind of crazy. MySQL is using over 900M of memory (and over 800M with performance schema disabled). Any ideas?

Below is my.cnf

[mysql]

# CLIENT #
port = 3306
socket = /var/run/mysqld/mysqld.sock
default-character-set = utf8

[mysqld]

# GENERAL #
user = mysql
default-storage-engine = InnoDB
socket = /var/run/mysqld/mysqld.sock
pid-file = /var/lib/mysql/mysql.pid

server-id = 1

# MyISAM #
key-buffer-size = 32M
myisam-recover-options = FORCE,BACKUP

# SAFETY #
max-allowed-packet = 16M
max-connect-errors = 1000000
skip-name-resolve
sysdate-is-now = 1
innodb = FORCE
innodb-strict-mode = 1

# DATA STORAGE #
datadir = /var/lib/mysql/

# BINARY LOGGING #
log-bin = /var/lib/mysql/mysql-bin
expire-logs-days = 14
sync-binlog = 1

# CACHES AND LIMITS #
tmp-table-size = 16M
max-heap-table-size = 16M
query-cache-type = 0
query-cache-size = 0
max-connections = 500
thread-cache-size = 50
open-files-limit = 1024
table-definition-cache = 512
table-open-cache = 10

#performance_schema = off
sql_mode = NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION

# INNODB #
innodb-flush-method = O_DIRECT
innodb-log-files-in-group = 2
innodb-log-file-size = 64M
innodb-flush-log-at-trx-commit = 1
innodb-file-per-table = 1
innodb-buffer-pool-size = 64M
innodb_ft_total_cache_size = 32M

# LOGGING #
log-error = /var/log/mysql/mysql-error.log
slow-query-log = 1
slow-query-log-file = /var/log/mysql/mysql-slow.log
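One hedged way to see where 5.7 thinks that memory goes is the memory instrumentation exposed through the sys schema that ships with 5.7 (this requires performance_schema to be ON, and it only accounts for allocations made after the instruments are enabled):

UPDATE performance_schema.setup_instruments
SET ENABLED = 'YES'
WHERE NAME LIKE 'memory/%';

SELECT * FROM sys.memory_global_by_current_bytes LIMIT 10;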

Percona How To: Field Names and Document Size in MongoDB

Latest MySQL Performance Blog posts - March 24, 2016 - 2:28pm

In this blog post, we’ll discuss how shorter field names impact performance and document size in MongoDB.

The MongoDB Manual Developer Notes state:

Shortening field names reduces expressiveness and does not provide considerable benefit for larger documents and where document overhead is not of significant concern. Shorter field names do not lessen the size of indexes, because indexes have a predefined structure. In general, it is not necessary to use short field names.

This is a pretty one-sided statement, and we should be careful not to fall into this trap. At first glance, you might think “Oh that makes sense due to compression!” However, compression is only one part of the story. When we consider the size of a single document, we need to consider several things:

  • Size of the data in the application memory
  • Size over the network
  • Size in the replication log
  • Size in memory in the cache
  • Amount of data being sent to the compressor
  • Size on disk*
  • Size in the journal files*

As you can see, this is a pretty expansive list, and this is just for consideration on field naming – we haven’t even gotten to using the right data types for the value yet.

Further, only the last two items in the list (“*” starred) represent any part of the system that has compression (to date). Put another way, the conversation about compression only covers about 25% of the discussion about field names. MongoDB Inc’s comment is trying to sidestep nearly 75% of the rest of the conversation.

To ensure an even debate, I want to break size down into two major areas: Field Optimization and Value Optimization. They both touch on all of the areas listed above except for sorting, which is only about value optimization.

Field Optimization

When we talk about field optimization, we are purely considering smaller field names. This might seem obvious, but when your database field names become object properties in your application code, the developers want them to be expressive (i.e., longer and space-intensive).

Consider the following:

locations = [];
for (i = 1; i <= 1000; i++) {
    locations.push({ longitude: 28.2211, latitude: 128.2828 })
}
devices = [];
for (i = 1; i <= 10; i++) {
    devices.push({
        name: "iphone6",
        last_ping: ISODate(),
        version: 8.1,
        security_pass: true,
        last_10_locations: locations.slice(10, 20)
    })
}
x = {
    _id: ObjectId(),
    first_name: "David",
    last_name: "Murphy",
    birthdate: "Aug 16 2080",
    address: "123 nowhere drive Nonya, TX, USA , 78701",
    phone_number1: "512-555-5555",
    phone_number2: "512-555-5556",
    known_locations: locations,
    last_checkin: ISODate(),
    devices: devices
}
> Object.bsonsize(x)
54879

Seems pretty standard, but wow! That’s 54.8k per document! Now let’s consider another format:

locations2 = [];
for (i = 1; i <= 1000; i++) {
    locations2.push({ lon: 28.2211, lat: 128.2828 })
}
devices2 = [];
for (i = 1; i <= 10; i++) {
    devices2.push({
        n: "iphone6",
        lp: ISODate(),
        v: 8.1,
        sp: true,
        l10: locations.slice(10, 20)
    })
}
y = {
    _id: ObjectId(),
    fn: "David",
    ln: "Murphy",
    bd: "Aug 16 2080",
    a: "123 nowhere drive Nonya, TX, USA , 78701",
    pn1: "512-555-5555",
    pn2: "512-555-5556",
    kl: locations2,
    lc: ISODate(),
    d: devices2
}
> Object.bsonsize(y)
41392
> Object.bsonsize(y)/Object.bsonsize(x)
0.754241148708978

This minor change shrinks the document by about 25%, without changing any actual data. I know you can already see things like kl or l10 and are wondering, “What the heck is that?!” This is where some clever tricks in the application code come in.

You can keep a mapping collection in MongoDB, or keep the mapping in your application code – so in the code, self.l10 is exposed as self.last_10_locations. Some people go so far as to use constants – for example, “self.LAST_10_LOCATIONS” with “self.l10 = self.get_value(LAST_10_LOCATIONS)” – to reduce the field size.

Value Optimization

Using the same example, let’s assume we now want to improve the values. We know we will always pull a user by their _id, or the most recent people to check in. To help optimize this further, let us assume “x” is still our main document:

locations = [];
for (i = 1; i <= 1000; i++) {
    locations.push({ longitude: 28.2211, latitude: 128.2828 })
}
devices = [];
for (i = 1; i <= 10; i++) {
    devices.push({
        name: "iphone6",
        last_ping: ISODate(),
        version: 8.1,
        security_pass: true,
        last_10_locations: locations.slice(10, 20)
    })
}
x = {
    _id: ObjectId(),
    first_name: "David",
    last_name: "Murphy",
    birthdate: "Aug 16 2080",
    address: "123 nowhere drive Nonya, TX, USA , 78701",
    phone_number1: "512-555-5555",
    phone_number2: "512-555-5556",
    known_locations: locations,
    last_checkin: ISODate(),
    devices: devices
}
> Object.bsonsize(x)
54879

But now, instead of optimizing field names, we want to optimize the values:

locations = [];
for (i = 1; i <= 1000; i++) {
    locations.push({ longitude: 28.2211, latitude: 128.2828 })
}
devices = [];
for (i = 1; i <= 10; i++) {
    devices.push({
        name: "iphone6",
        last_ping: ISODate(),
        version: 8.1,
        security_pass: true,
        last_10_locations: locations.slice(10, 20)
    })
}
z = {
    _id: ObjectId(),
    first_name: "David",
    last_name: "Murphy",
    birthdate: ISODate("2080-08-16T00:00:00Z"),
    address: "123 nowhere drive Nonya, TX, USA , 78701",
    phone_number1: 5125555555,
    phone_number2: 5125555556,
    known_locations: locations,
    last_checkin: ISODate(),
    devices: devices
}
> Object.bsonsize(z)
54853

In this example, we changed the phone numbers to integers and used the Date type for the dates (as already done in the devices documents). The savings were much smaller than earlier, coming in at only 26 bytes, but this could have a significant impact when multiplied out over many fields and documents. If we had started this example quoting the floats, as many people do, we would see more of a difference. But always watch out for numbers and dates stored as strings: these almost always waste space.

When you combine both sets of savings, you have:

54879 - 13487 - 26 = 41366

That’s right: the document is 13,513 bytes, or about 24.6%, smaller on the network and for the application to parse with its CPU! Easy wins to reduce your resource needs, and to make the COO happier.

pt-table-checksum breaks MySQL replication with MariaDB 10.1.12

Latest Forum Posts - March 24, 2016 - 1:18pm
I have two Galera clusters, C1 and C2, each with two nodes. I have async replication set up between node 'C1N1' of cluster C1 and node 'C2N1' of cluster C2 (C1N1 ---> C2N1).
The binlog format for both C1 and C2 is ROW. I have tested the replication with basic create table and insert commands, and it works as expected.

As soon as I run the pt-table-checksum command on C1N1, the replication link C1N1 ---> C2N1 breaks. Running 'show slave status\G' on C2N1 shows this error message:
Last_SQL_Errno: 1047
Last_SQL_Error: Node has dropped from cluster

pt-table-checksum was run with the following parameters:
./pt-table-checksum h=10.65.99.221 --user=root --password=pass123 --replicate mydb.checksums --no-check-binlog-format --ignore-databases=mysql

Any pointers, please?

FreeBSD 10.2 Installation of Percona XtraDB Cluster

Latest Forum Posts - March 24, 2016 - 9:06am
I have a fresh installation of FreeBSD 10.2 and I have been trying to install Percona XtraDB Cluster. When I try to compile it from source I keep getting this error:

Code:
/usr/local/src/Percona-XtraDB-Cluster-5.6.28-76.1/mysys/posix_timers.c:44:23: error: unknown type name 'sigval_t'
timer_notify_function(sigval_t sigev_value)
                      ^
/usr/local/src/Percona-XtraDB-Cluster-5.6.28-76.1/mysys/posix_timers.c:71:30: error: use of undeclared identifier 'SYS_gettid'
thread_id= (pid_t) syscall(SYS_gettid);
                           ^
2 errors generated.
*** Error code 1
Stop.
make[2]: stopped in /usr/local/src/Percona-XtraDB-Cluster-5.6.28-76.1
*** Error code 1
Stop.
make[1]: stopped in /usr/local/src/Percona-XtraDB-Cluster-5.6.28-76.1
*** Error code 1
Stop.
make: stopped in /usr/local/src/Percona-XtraDB-Cluster-5.6.28-76.1

Would anyone know how to correct this issue?

Thanks.

Want to be a superhero? Join the Database Performance Team!

Latest MySQL Performance Blog posts - March 24, 2016 - 7:39am

Admit it, you’ve always wanted to fight danger, win battles and save the day! Who doesn’t? Do you want to be a superhero? Percona can show you how!

We don’t have any radioactive spiders or billionaire gadgets, but we do have our own team of superheroes dedicated to protecting your database performance: The Database Performance Team!

The Database Performance Team comprises our services experts, who work tirelessly every day to guarantee the performance of your database. Percona’s database services are some of our most valuable customer resources – besides the software itself. Whether it’s support, consulting, technical account managers, or remote DBAs, our team is made up of superheroes who make sure your database is running at peak performance.

We want you to join us in the fight against poor performance. Join our Database Performance Team crew as part of the Database Street Team!

We’ll be introducing the members of our super group in the coming weeks. As we introduce the Database Performance Team (the “characters” below), we want you! We’ll be offering “missions” for you to complete: challenges, puzzles, or actions that get you prizes for success!

Your first mission: guess the identities of our secret team before we reveal them!

Mystery Character 1
Hint: Funny, friendly, quick-witted, supporting, fast and courteous – but still able to get the job done with amazing competence. Who am I?

Mystery Character 2
Hint: Computer-like smarts, instant recall, a counselor, able to understand a problem and the solution quickly. Who am I?

Mystery Character 3
Hint: Technical, with clairvoyant foresight, with the knowledge and statistics to account for all issues, manages problems before they happen. Who am I?

Mystery Character 4
Hint: Remotely all-seeing, a director, good at multi-tasking, adapts-on-the-fly, cool in a crisis. Who am I?

Mystery Character 5
Hint: Insanely strong, can’t be stopped, hard to knock down, the product of rigorous testing, unlimited endurance. Who am I?

Follow @Percona on Twitter and use the hashtag #DatabasePerformanceTeam to cast your guess on who any mystery character is. Correctly guess any of their names or roles, and the lucky winner gets their choice of our mystery T-shirt in either men’s or women’s style.

Stay tuned, as we reveal the identities of the Database Performance Team over the coming weeks!

Join the ranks of the Database Street Team! Fun games, cool prizes – more info is coming soon!

Some facts:*

Gartner has estimated the average cost of downtime at $5,000 per minute!

Join The Database Performance Team today!


*Source: http://data-informed.com/key-challenges-facing-the-modern-database-administrator/

2 nodes + 1 arbitrator switched to non-primary when I shut down 1 node

Latest Forum Posts - March 23, 2016 - 9:19pm
Hello,

I run a Percona cluster on 2 nodes, 10.0.0.91 (pxc-01) and 10.0.0.92 (pxc-02), plus 1 arbitrator (garb) on 10.0.0.10.
Today, I shut down the 10.0.0.92 (pxc-02) machine.
10.0.0.91 (pxc-01) became non-primary.

I thought the arbitrator was supposed to take over and declare 10.0.0.91 (pxc-01) primary to keep it working?

After 1 minute I had to run "SET GLOBAL wsrep_provider_options='pc.bootstrap=true';" on 10.0.0.91 to resolve the situation.
Any ideas why?

Here are the logs from the arbitrator:

Code:
2016-03-24 05:39:35.073  INFO: (b23eccb0, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://10.0.0.92:4567
2016-03-24 05:39:36.573  INFO: (b23eccb0, 'tcp://0.0.0.0:4567') reconnecting to c2c3d173 (tcp://10.0.0.92:4567), attempt 0
2016-03-24 05:39:36.972  INFO: evs::proto(b23eccb0, OPERATIONAL, view_id(REG,0a880fcf,7)) suspecting node: c2c3d173
2016-03-24 05:39:36.972  INFO: evs::proto(b23eccb0, OPERATIONAL, view_id(REG,0a880fcf,7)) suspected node without join message, declaring inactive
2016-03-24 05:39:44.472  WARN: evs::proto(b23eccb0, GATHER, view_id(REG,0a880fcf,7)) install timer expired
2016-03-24 05:39:44.472  INFO: no install message received
2016-03-24 05:39:44.472  INFO: view(view_id(NON_PRIM,0a880fcf,7) memb { b23eccb0,0 } joined { } left { } partitioned { 0a880fcf,2 c2c3d173,2 })
2016-03-24 05:39:44.472  INFO: view(view_id(NON_PRIM,b23eccb0,8) memb { b23eccb0,0 } joined { } left { } partitioned { 0a880fcf,2 c2c3d173,2 })
2016-03-24 05:39:44.472  INFO: New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 1
2016-03-24 05:39:44.472  INFO: Flow-control interval: [9999999, 9999999]
2016-03-24 05:39:44.472  INFO: Received NON-PRIMARY.
2016-03-24 05:39:44.472  INFO: Shifting SYNCED -> OPEN (TO: 154125793)
2016-03-24 05:39:44.472  INFO: New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 1
2016-03-24 05:39:44.472  INFO: Flow-control interval: [9999999, 9999999]
2016-03-24 05:39:44.472  INFO: Received NON-PRIMARY.
2016-03-24 05:40:03.978  INFO: declaring 0a880fcf at tcp://10.0.0.91:4567 stable
2016-03-24 05:40:03.979  INFO: view(view_id(NON_PRIM,0a880fcf,9) memb { 0a880fcf,2 b23eccb0,0 } joined { } left { } partitioned { c2c3d173,2 })
2016-03-24 05:40:03.979  INFO: New COMPONENT: primary = no, bootstrap = no, my_idx = 1, memb_num = 2
2016-03-24 05:40:03.979  INFO: Flow-control interval: [9999999, 9999999]
2016-03-24 05:40:03.979  INFO: Received NON-PRIMARY.
2016-03-24 05:40:50.271  INFO: view(view_id(PRIM,0a880fcf,9) memb { 0a880fcf,2 b23eccb0,0 } joined { } left { } partitioned { c2c3d173,2 })
2016-03-24 05:40:50.271  INFO: save pc into disk
2016-03-24 05:40:50.271  WARN: open file(./gvwstate.dat.tmp) failed(Permission denied)
2016-03-24 05:40:50.271  INFO: forgetting c2c3d173 (tcp://10.0.0.92:4567)
2016-03-24 05:40:50.271  INFO: New COMPONENT: primary = yes, bootstrap = yes, my_idx = 1, memb_num = 2
2016-03-24 05:40:50.271  INFO: (b23eccb0, 'tcp://0.0.0.0:4567') turning message relay requesting off
2016-03-24 05:40:50.271  INFO: STATE EXCHANGE: Waiting for state UUID.
2016-03-24 05:40:50.272  INFO: STATE EXCHANGE: sent state msg: d2125aae-f169-11e5-a54f-2f2fb5523976
2016-03-24 05:40:50.272  INFO: STATE EXCHANGE: got state msg: d2125aae-f169-11e5-a54f-2f2fb5523976 from 0 (pxc-01)
2016-03-24 05:40:50.272  INFO: STATE EXCHANGE: got state msg: d2125aae-f169-11e5-a54f-2f2fb5523976 from 1 (garb)
2016-03-24 05:40:50.272  WARN: Quorum: No node with complete state:
2016-03-24 05:40:50.272  INFO: Partial re-merge of primary b28e0200-efcc-11e5-894c-7bf586b54a55 found: 2 of 3.
2016-03-24 05:40:50.272  INFO: Quorum results: version = 3, component = PRIMARY, conf_id = 7, members = 2/2 (joined/total), act_id = 154125793, last_appl. = 154125696, protocols = 0/7/3 (gcs/repl/appl), group UUID = 7f16b5ae-d1f6-11e5-9382-4ec823ecea7a
2016-03-24 05:40:50.272  INFO: Flow-control interval: [9999999, 9999999]
2016-03-24 05:40:50.272  INFO: Restored state OPEN -> SYNCED (154125793)
2016-03-24 05:40:53.076  INFO: cleaning up c2c3d173 (tcp://10.0.0.92:4567)

Here are the logs from the remaining node (pxc-01):

Code:
2016-03-24 05:39:35 1811 [Note] WSREP: (0a880fcf, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://10.0.0.92:4567
2016-03-24 05:39:36 1811 [Note] WSREP: (0a880fcf, 'tcp://0.0.0.0:4567') reconnecting to c2c3d173 (tcp://10.0.0.92:4567), attempt 0
2016-03-24 05:40:01 1811 [Note] WSREP: evs::proto(0a880fcf, GATHER, view_id(REG,0a880fcf,7)) suspecting node: c2c3d173
2016-03-24 05:40:01 1811 [Note] WSREP: evs::proto(0a880fcf, GATHER, view_id(REG,0a880fcf,7)) suspected node without join message, declaring inactive
2016-03-24 05:40:02 1811 [Note] WSREP: view(view_id(NON_PRIM,0a880fcf,7) memb { 0a880fcf,2 } joined { } left { } partitioned { b23eccb0,0 c2c3d173,2 })
2016-03-24 05:40:02 1811 [Note] WSREP: view(view_id(NON_PRIM,0a880fcf,8) memb { 0a880fcf,2 } joined { } left { } partitioned { b23eccb0,0 c2c3d173,2 })
2016-03-24 05:40:02 1811 [Note] WSREP: New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 1
2016-03-24 05:40:02 1811 [Note] WSREP: Flow-control interval: [16, 16]
2016-03-24 05:40:02 1811 [Note] WSREP: Received NON-PRIMARY.
2016-03-24 05:40:02 1811 [Note] WSREP: Shifting SYNCED -> OPEN (TO: 154125793)
2016-03-24 05:40:02 1811 [Note] WSREP: New cluster view: global state: 7f16b5ae-d1f6-11e5-9382-4ec823ecea7a:154125793, view# -1: non-Primary, number of nodes: 1, my index: 0, protocol version 3
2016-03-24 05:40:02 1811 [Note] WSREP: New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 1
2016-03-24 05:40:02 1811 [Note] WSREP: Flow-control interval: [16, 16]
2016-03-24 05:40:02 1811 [Note] WSREP: Received NON-PRIMARY.
2016-03-24 05:40:03 1811 [Warning] WSREP: Send action {(nil), 1119, TORDERED} returned -107 (Transport endpoint is not connected)
2016-03-24 05:40:03 1811 [Warning] WSREP: Send action {(nil), 336, TORDERED} returned -107 (Transport endpoint is not connected)
2016-03-24 05:40:03 1811 [Warning] WSREP: Send action {(nil), 1816, TORDERED} returned -107 (Transport endpoint is not connected)
2016-03-24 05:40:03 1811 [Warning] WSREP: Send action {(nil), 516, TORDERED} returned -107 (Transport endpoint is not connected)
2016-03-24 05:40:03 1811 [Warning] WSREP: Send action {(nil), 504, TORDERED} returned -107 (Transport endpoint is not connected)
2016-03-24 05:40:03 1811 [Warning] WSREP: Send action {(nil), 504, TORDERED} returned -107 (Transport endpoint is not connected)
2016-03-24 05:40:03 1811 [Warning] WSREP: Send action {(nil), 528, TORDERED} returned -107 (Transport endpoint is not connected)
2016-03-24 05:40:03 1811 [Warning] WSREP: Send action {(nil), 504, TORDERED} returned -107 (Transport endpoint is not connected)
2016-03-24 05:40:03 1811 [Warning] WSREP: Send action {(nil), 504, TORDERED} returned -107 (Transport endpoint is not connected)
2016-03-24 05:40:03 1811 [Warning] WSREP: Send action {(nil), 914, TORDERED} returned -107 (Transport endpoint is not connected)
2016-03-24 05:40:03 1811 [Warning] WSREP: Send action {(nil), 336, TORDERED} returned -107 (Transport endpoint is not connected)
2016-03-24 05:40:03 1811 [Warning] WSREP: Send action {(nil), 873, TORDERED} returned -107 (Transport endpoint is not connected)
2016-03-24 05:40:03 1811 [Warning] WSREP: Send action {(nil), 336, TORDERED} returned -107 (Transport endpoint is not connected)
2016-03-24 05:40:03 1811 [Warning] WSREP: Send action {(nil), 347, TORDERED} returned -107 (Transport endpoint is not connected)
2016-03-24 05:40:03 1811 [Warning] WSREP: Send action {(nil), 347, TORDERED} returned -107 (Transport endpoint is not connected)
2016-03-24 05:40:03 1811 [Warning] WSREP: Send action {(nil), 1570, TORDERED} returned -107 (Transport endpoint is not connected)
2016-03-24 05:40:03 1811 [Warning] WSREP: Send action {(nil), 771, TORDERED} returned -107 (Transport endpoint is not connected)
2016-03-24 05:40:03 1811 [Warning] WSREP: Send action {(nil), 783, TORDERED} returned -107 (Transport endpoint is not connected)
2016-03-24 05:40:03 1811 [Warning] WSREP: Send action {(nil), 843, TORDERED} returned -107 (Transport endpoint is not connected)
2016-03-24 05:40:03 1811 [Warning] WSREP: Send action {(nil), 440, TORDERED} returned -107 (Transport endpoint is not connected)
2016-03-24 05:40:03 1811 [Warning] WSREP: Send action {(nil), 440, TORDERED} returned -107 (Transport endpoint is not connected)
2016-03-24 05:40:03 1811 [Warning] WSREP: Send action {(nil), 409, TORDERED} returned -107 (Transport endpoint is not connected)
2016-03-24 05:40:03 1811 [Warning] WSREP: Send action {(nil), 440, TORDERED} returned -107 (Transport endpoint is not connected)
2016-03-24 05:40:03 1811 [Warning] WSREP: Send action {(nil), 415, TORDERED} returned -107 (Transport endpoint is not connected)
2016-03-24 05:40:03 1811 [Warning] WSREP: Send action {(nil), 509, TORDERED} returned -107 (Transport endpoint is not connected)
2016-03-24 05:40:03 1811 [Warning] WSREP: Send action {(nil), 500, TORDERED} returned -107 (Transport endpoint is not connected)
2016-03-24 05:40:03 1811 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2016-03-24 05:40:03 1811 [Warning] WSREP: Send action {(nil), 538, TORDERED} returned -107 (Transport endpoint is not connected)
2016-03-24 05:40:03 1811 [Note] WSREP: New cluster view: global state: 7f16b5ae-d1f6-11e5-9382-4ec823ecea7a:154125793, view# -1: non-Primary, number of nodes: 1, my index: 0, protocol version 3
2016-03-24 05:40:03 1811 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2016-03-24 05:40:03 1811 [Warning] WSREP: Send action {(nil), 805, TORDERED} returned -107 (Transport endpoint is not connected)
2016-03-24 05:40:03 1811 [Warning] WSREP: Send action {(nil), 538, TORDERED} returned -107 (Transport endpoint is not connected)
2016-03-24 05:40:03 1811 [Note] WSREP: declaring b23eccb0 at tcp://10.0.0.10:4567 stable
2016-03-24 05:40:03 1811 [Note] WSREP: view(view_id(NON_PRIM,0a880fcf,9) memb { 0a880fcf,2 b23eccb0,0 } joined { } left { } partitioned { c2c3d173,2 })
2016-03-24 05:40:03 1811 [Note] WSREP: New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 2
2016-03-24 05:40:03 1811 [Note] WSREP: Flow-control interval: [23, 23]
2016-03-24 05:40:03 1811 [Note] WSREP: Received NON-PRIMARY.
2016-03-24 05:40:03 1811 [Note] WSREP: New cluster view: global state: 7f16b5ae-d1f6-11e5-9382-4ec823ecea7a:154125793, view# -1: non-Primary, number of nodes: 2, my index: 0, protocol version 3
2016-03-24 05:40:03 1811 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2016-03-24 05:40:50 1811 [Note] WSREP: view(view_id(PRIM,0a880fcf,9) memb { 0a880fcf,2 b23eccb0,0 } joined { } left { } partitioned { c2c3d173,2 })
2016-03-24 05:40:50 1811 [Note] WSREP: save pc into disk
2016-03-24 05:40:50 1811 [Note] WSREP: forgetting c2c3d173 (tcp://10.0.0.92:4567)
2016-03-24 05:40:50 1811 [Note] WSREP: deleting entry tcp://10.0.0.92:4567
2016-03-24 05:40:50 1811 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = yes, my_idx = 0, memb_num = 2
2016-03-24 05:40:50 1811 [Note] WSREP: (0a880fcf, 'tcp://0.0.0.0:4567') turning message relay requesting off
2016-03-24 05:40:50 1811 [Note] WSREP: STATE_EXCHANGE: sent state UUID: d2125aae-f169-11e5-a54f-2f2fb5523976
2016-03-24 05:40:50 1811 [Note] WSREP: STATE EXCHANGE: sent state msg: d2125aae-f169-11e5-a54f-2f2fb5523976
2016-03-24 05:40:50 1811 [Note] WSREP: STATE EXCHANGE: got state msg: d2125aae-f169-11e5-a54f-2f2fb5523976 from 0 (pxc-01)
2016-03-24 05:40:50 1811 [Note] WSREP: STATE EXCHANGE: got state msg: d2125aae-f169-11e5-a54f-2f2fb5523976 from 1 (garb)
2016-03-24 05:40:50 1811 [Warning] WSREP: Quorum: No node with complete state:
        Version      : 3
        Flags        : 0x7
        Protocols    : 0 / 7 / 3
        State        : NON-PRIMARY
        Prim state   : SYNCED
        Prim UUID    : b28e0200-efcc-11e5-894c-7bf586b54a55
        Prim seqno   : 7
        First seqno  : 154012102
        Last seqno   : 154125793
        Prim JOINED  : 3
        State UUID   : d2125aae-f169-11e5-a54f-2f2fb5523976
        Group UUID   : 7f16b5ae-d1f6-11e5-9382-4ec823ecea7a
        Name         : 'pxc-01'
        Incoming addr: '10.0.0.91:3306'

        Version      : 3
        Flags        : 0xe
        Protocols    : 0 / 127 / 127
        State        : NON-PRIMARY
        Prim state   : SYNCED
        Prim UUID    : b28e0200-efcc-11e5-894c-7bf586b54a55
        Prim seqno   : 7
        First seqno  : -1
        Last seqno   : 154125793
        Prim JOINED  : 3
        State UUID   : d2125aae-f169-11e5-a54f-2f2fb5523976
        Group UUID   : 7f16b5ae-d1f6-11e5-9382-4ec823ecea7a
        Name         : 'garb'
        Incoming addr: ''

2016-03-24 05:40:50 1811 [Note] WSREP: Partial re-merge of primary b28e0200-efcc-11e5-894c-7bf586b54a55 found: 2 of 3.
2016-03-24 05:40:50 1811 [Note] WSREP: Quorum results: version = 3, component = PRIMARY, conf_id = 7, members = 2/2 (joined/total), act_id = 154125793, last_appl. = 154125672, protocols = 0/7/3 (gcs/repl/appl), group UUID = 7f16b5ae-d1f6-11e5-9382-4ec823ecea7a
2016-03-24 05:40:50 1811 [Note] WSREP: Flow-control interval: [23, 23]
2016-03-24 05:40:50 1811 [Note] WSREP: Restored state OPEN -> SYNCED (154125793)
2016-03-24 05:40:50 1811 [Note] WSREP: New cluster view: global state: 7f16b5ae-d1f6-11e5-9382-4ec823ecea7a:154125793, view# 8: Primary, number of nodes: 2, my index: 0, protocol version 3
2016-03-24 05:40:50 1811 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2016-03-24 05:40:50 1811 [Note] WSREP: REPL Protocols: 7 (3, 2)
2016-03-24 05:40:50 1811 [Note] WSREP: Service thread queue flushed.
2016-03-24 05:40:50 1811 [Note] WSREP: Assign initial position for certification: 154125793, protocol version: 3
2016-03-24 05:40:50 1811 [Note] WSREP: Service thread queue flushed.
2016-03-24 05:40:50 1811 [Note] WSREP: Synchronized with group, ready for connections
2016-03-24 05:40:50 1811 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2016-03-24 05:40:53 1811 [Note] WSREP: cleaning up c2c3d173 (tcp://10.0.0.92:4567)
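When reproducing scenarios like this, a hedged pair of checks on the surviving node makes the state transitions easy to follow from the SQL side:

SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';  -- Primary / non-Primary
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';    -- members in the current component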

"TokuDB: Auto scheduling background analysis" continuously appearing in log file

Latest Forum Posts - March 23, 2016 - 8:43pm
I recently upgraded a replication slave from Percona Server 5.6 to the latest version of Percona Server 5.7.

We are using the TokuDB engine for many tables.

Now, the mysql log file is getting lots of entries like this:

2016-03-24T03:29:59.950572Z 41 [Note] TokuDB: Auto scheduling background analysis for ./diggers_db/tblcurrent, delta_activity 1 is greater than 30 percent of 4 rows. - succeeded.
2016-03-24T03:29:59.960814Z 41 [Note] TokuDB: Auto scheduling background analysis for ./tilttray_db/tblcurrent, delta_activity 1 is greater than 30 percent of 3 rows. - succeeded.
2016-03-24T03:29:59.989689Z 41 [Note] TokuDB: Auto scheduling background analysis for ./luxury_db/tblcurrent, delta_activity 3 is greater than 30 percent of 13 rows. - succeeded.
2016-03-24T03:29:59.999439Z 41 [Note] TokuDB: Auto scheduling background analysis for ./valley_db/tblcurrent, delta_activity 2 is greater than 30 percent of 8 rows. - succeeded.
2016-03-24T03:30:00.054944Z 41 [Note] TokuDB: Auto scheduling background analysis for ./slab_db/tblcurrent, delta_activity 1 is greater than 30 percent of 5 rows. - succeeded.

The same database names are appearing again and again. I note that most of these tables are tiny, with between 1 and 20 rows. It's been over a day since the upgrade and it's still happening.

Perhaps this is a bug in the TokuDB analyze code causing it to schedule unnecessary analyze operations?

I see there are many configuration options to control the behaviour of TokuDB analyze, but I'm not sure if it is necessary to tweak these - I'd rather not stab in the dark.

Would appreciate expert advice. Thanks.
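For reference, the "30 percent" in those log lines matches the default of the tokudb_auto_analyze threshold in Percona Server 5.7; a hedged sketch of how to inspect the relevant knobs and, if needed, disable the feature (verify the exact variable names against the TokuDB documentation):

SHOW GLOBAL VARIABLES LIKE 'tokudb%analyze%';
-- Assumption: a threshold of 0 disables the automatic scheduling
SET GLOBAL tokudb_auto_analyze = 0;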

Percona Live featured talk with Stewart Smith: Why Would I Run MySQL or MariaDB on POWER Anyway?

Latest MySQL Performance Blog posts - March 23, 2016 - 12:07pm

Welcome to the next installment of our talks with Percona Live Data Performance Conference 2016 speakers! In this series of blogs, we’ll highlight some of the speakers at this year’s conference, and discuss their technologies and outlooks. Make sure to read to the end to get a special Percona Live registration bonus! This is our Percona Live featured talk with Stewart Smith.

In this installment, we’ll meet Stewart Smith, OPAL Architect at IBM. His talk is Why Would I Run MySQL or MariaDB on POWER Anyway? and will present the technical reasons why you might want to consider the POWER architecture for MySQL/MariaDB production workloads.

I got a chance to discuss Stewart’s talk with him:

Percona: Give me a brief history of yourself: how you got into database development, where you work, what you love about it.

Stewart: While at the AUUG (Australian Unix Users Group) Conference in Melbourne way back in 2004, I mentioned to Arjen Lentz (MySQL employee #25) that I was starting to look for my next adventure. Shortly after, at lunchtime, instead of going to eat lunch Arjen handed me his cell phone; on the other end of the line was Brian Aker, then Director of Architecture at MySQL AB. That was my first interview.

So in 2004, I started working on MySQL Cluster (NDB). Four years later, having encountered pretty much all of the warts in the MySQL Server while working on MySQL Cluster, I became the second person to commit code to Drizzle – a fork of MySQL where we tried to rethink everything and reshape the code base to be a) modern, efficient and scalable on modern multi-core machines, and b) designed for large scale web applications. Luckily – instead of firing us all for forking the product in secret – Sun moved us to the CTO group and we got to work on Drizzle full time.

From 2011 to 2014 I was Director of Server Development at Percona where I continued my focus on iterative change and automated QA.

All of these were amazing learning experiences and I’m really proud of what we achieved during these times, the impact of which is still being felt in the database world.

Ultimately, though, it was time for a change – and in my role at IBM as OPAL Architect, I work on the OpenPower Abstraction Layer (OPAL), the completely Open Source firmware for OpenPOWER systems. Of course, at some point, somebody discovered I knew something about MySQL, and I managed to (as a side project) port MySQL to POWER and get a world record of 1 million queries per second. This led to MariaDB being officially supported on POWER8 processors and IBM investing more time in the reliability and performance of MySQL on POWER.

So while my day job is no longer database internals, I stick my head in occasionally as there’s still some part of me that enjoys it.

Percona: Your talk is going to be on “Why would I run MySQL or MariaDB on POWER anyway?” So does that mean you’re firmly on the side of increasing HW power before optimization to achieve performance? If so, why? And for what workload types?

Stewart: We’ve always been scaling both up and out with MySQL. It used to be that scaling up was going from a single processor with a single core to two processors. Now, a cell phone with fewer than four cores is low end.

Of course, POWER CPUs have more benefits than just more cores and threads. There’s a long history of building POWER CPUs to be reliable, with an ethos of *never* acting on bad data, so there’s lots of error checking throughout the CPU itself in addition to ECC memory.

So while POWER can bring raw computing performance to the table, it can also bring reliability, and our density with virtual machines can be really interesting.

Percona: Virtualization, SDN, cloud deployments – all of these use distributed resources. How does this affect the use of MySQL/MariaDB on POWER? And how can these types of setups affect application performance – positively and negatively?

Stewart: We’re lucky on POWER in that our hardware has been designed to last a great many years with the idea of partitioning it out to multiple operating systems.

The big advantage for cloud deployments is isolation between tenants, and if we can do this with minimal or zero performance impact, that’s a win for everyone. A challenge to cloud environments is always IO. Databases love lots of low latency IOPs and too often, adding virtualization adds latency, reducing density (tenants per physical machine).

Percona: What do you see as an issue that we, the open source database community, need to be on top of with regard to white box development? What keeps you up at night about the future of white box deployments?

Stewart: I think there are a few big challenges to address for today’s hardware, the main one being scaling to the number of CPU cores/threads we have now, as well as to the number of IOPs we have now. These are, however, not new problems – they’re ones that MySQL and InnoDB have been struggling with for over a decade.

Other open source databases (e.g., MongoDB) have re-learned the lesson the hard way: big global locks don’t scale. With storage backends coming to Mongo such as TokuDB and WiredTiger, it has the opportunity to become an interesting player.

Non-volatile memory has the opportunity to change things more than cheap, high-performance SSDs have. When the unit of persistence is a byte, and not a 512/4096-byte block or multi-kilobyte/megabyte erase block, things get *different*. Engines such as MyRocks may fare a lot better than more traditional engines like InnoDB – but more than likely there are new designs yet to exist.

I think the biggest challenge is going to be creating a vibrant and innovative development community around an SQL front-end – an open source project (rather than an open source product) where new ideas and experimentation can flourish.

Percona: What are you most looking forward to at Percona Live Data Performance Conference 2016?

Stewart: I’m really looking forward to some of the internals talks. I’m a deeply technical person and I want the deep dive into the interesting details of how things work. Of special interest are the MyRocks and TokuDB sessions, as well as how InnoDB is evolving.

I’ll likely poke my head into several sessions around managing MySQL deployments in order to keep up to date on how people are deploying and using relational databases today.

These days, I’m the old gray-beard of the MySQL world inside IBM, where I try to stay mostly advisory while focusing on my primary role, which is open source firmware for POWER.

Also, over the years working on databases, I’ve made a great many friends who I don’t get to see often enough. I’m really looking forward to catching up with friends and former colleagues – and this is one of the things I treasure about this community.

You can read more about POWER and Stewart’s thoughts at his personal blog.

To see Stewart’s talk, register for Percona Live Data Performance Conference 2016. Use the code “FeaturedTalk” and receive $100 off the current registration price!

The Percona Live Data Performance Conference is the premier open source event for the data performance ecosystem. It is the place to be for the open source community as well as businesses that thrive in the MySQL, NoSQL, cloud, big data and Internet of Things (IoT) marketplaces. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Data Performance Conference will be April 18-21 at the Hyatt Regency Santa Clara & The Santa Clara Convention Center.


