
Feed aggregator

Percona SSH-based configuration failing

Latest Forum Posts - April 20, 2015 - 8:46am


Hi,

su - www-data -c 'env -i php /usr/share/cacti/site/scripts/ss_get_by_ssh.php --type memory --host 10.78.203.141 --items gu,gv'

gu:-1 gv:-1 (not returning proper values)


I am able to do:
root@CACTI:/usr/share/cacti/site/scripts# ssh -i /etc/cacti/id_rsa cacti@10.78.203.141 echo "it works"
it works


I am not sure why I am not getting the output.

handshake with remote endpoint failed, but endpoint is not part of config (any more)

Latest Forum Posts - April 20, 2015 - 3:57am
I set up a cluster of 3 database nodes using a Percona 5.6 cluster. Everything works; all nodes are replicated. I configured SSL for state transfer. The cluster topology changed recently because we had to migrate 2 nodes to new machines. The cluster was completely restarted recently with configuration files that only contain the 3 current cluster nodes node3, node8, nodeA (don't mind the naming). Earlier we also had node1, but this machine left the cluster and no longer runs MySQL.

Now the problem: in the log files of node3 and node8, every couple of seconds I find an error entry that looks like this:

2015-04-20 11:06:24 22546 [ERROR] WSREP: handshake with remote endpoint ssl://<PUBLIC.node1.IP>:57953 failed: 1: 'End of file.' ( )

where <PUBLIC.node1.IP> is the address of the old node1, which is by no means part of the cluster any more. I looked through the my.cnf config files of all cluster nodes, but all traces of node1 are gone from them. How can it be that node3 and node8 try to contact the outdated node? Note that this is after a complete bootstrap of the cluster starting from nodeA, which does not show the strange error messages in its log.

I found an error description of a very similar error here: https://mariadb.com/kb/en/mariadb/st...-in-error-log/

There it was suggested that this may be due to a broken network configuration where nodes could only ping in one direction but not the other. That reason/solution does not fit my case, since all nodes can ping each other fine. Also, it does not explain at all why my cluster nodes try to contact a node that is no longer part of the cluster.

Here is the relevant part of the configuration file (from node8):


### PERCONA CLUSTER STUFF
wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_cluster_address=gcomm://<PUBLIC.nodeA.IP>,<PUBLIC.node3.IP>,<PUBLIC.node8.IP>
#wsrep_cluster_address=gcomm://
wsrep_node_address=<PUBLIC.node8.IP>
wsrep_slave_threads=8
wsrep_sst_method=xtrabackup-v2
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
wsrep_cluster_name=betdata_cluster
wsrep_provider_options=gcache.size=3G;socket.ssl_cert=/etc/mysql/cert.pem;socket.ssl_key=/etc/mysql/key.pem
wsrep_sst_donor=node3,nodeA
wsrep_node_name=node8
wsrep_sst_auth=sstuser:blahblubb
wsrep_sst_receive_address=<PUBLIC.node8.IP>:14444
query_cache_size=0
query_cache_type=0
Any Ideas?

Note: this is a cross-posting from http://dba.stackexchange.com/questio...ndpoint-is-not

MySQL ORDER BY on a huge joined dataset is taking too long

Latest Forum Posts - April 17, 2015 - 3:25am
I have to join many tables, each of which holds a huge volume of data, and on the resulting joined dataset I have to apply ORDER BY, which takes a very long time.

disconnecting clients

Latest Forum Posts - April 17, 2015 - 1:14am
2015-04-17 10:15:15 1226 [Note] Forcefully disconnecting 5 remaining clients
2015-04-17 10:15:15 1226 [Warning] /usr/sbin/mysqld: Forcing close of thread 1229 user: 'b_user'
2015-04-17 10:15:15 1226 [Warning] /usr/sbin/mysqld: Forcing close of thread 1231 user: 'b_user'
2015-04-17 10:15:15 1226 [Warning] /usr/sbin/mysqld: Forcing close of thread 1228 user: 'b_user'
2015-04-17 10:15:15 1226 [Warning] /usr/sbin/mysqld: Forcing close of thread 1273 user: 'b_user'
2015-04-17 10:15:15 1226 [Warning] /usr/sbin/mysqld: Forcing close of thread 1270 user: 'b_user'

Why is this happening?

Percona XtraDB Cluster crashes periodically

Latest Forum Posts - April 16, 2015 - 2:23pm
Hello,


I am running a 3-node PXC cluster and am experiencing frequent crashes. I already reported a bug on Percona Launchpad.

https://bugs.launchpad.net/percona-server/+bug/1436320

Do you have any idea what could be the source of these frequent crashes? Or any way to find out what is happening?

Kind regards.

Profiling MySQL queries from Performance Schema

Latest MySQL Performance Blog posts - April 16, 2015 - 10:49am

When optimizing queries and investigating performance issues, MySQL comes with built-in support for profiling queries, aka SET profiling = 1;. This is already awesome and simple to use, but why the PERFORMANCE_SCHEMA alternative?

Because profiling will be removed soon (it is already deprecated in MySQL 5.6 and 5.7). Also, the built-in profiling capability can only be enabled per session, which means that you cannot capture profiling information for queries running from other connections. If you are using Percona Server, the profiling option for log_slow_verbosity is a nice alternative; unfortunately, not everyone is using Percona Server.

Now, for a quick demo: I execute a simple query and profile it below. Note that all of these commands are executed from a single session to my test instance.
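For completeness, the session setup that produced the output below (not captured in it) is just the classic profiling switch followed by the query being profiled:

mysql> SET profiling = 1;
mysql> SELECT * FROM sysbench.sbtest1 LIMIT 1;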

mysql> SHOW PROFILES;
+----------+------------+----------------------------------------+
| Query_ID | Duration   | Query                                  |
+----------+------------+----------------------------------------+
|        1 | 0.00011150 | SELECT * FROM sysbench.sbtest1 LIMIT 1 |
+----------+------------+----------------------------------------+
1 row in set, 1 warning (0.00 sec)

mysql> SHOW PROFILE SOURCE FOR QUERY 1;
+----------------------+----------+-----------------------+------------------+-------------+
| Status               | Duration | Source_function       | Source_file      | Source_line |
+----------------------+----------+-----------------------+------------------+-------------+
| starting             | 0.000017 | NULL                  | NULL             |        NULL |
| checking permissions | 0.000003 | check_access          | sql_parse.cc     |        5797 |
| Opening tables       | 0.000021 | open_tables           | sql_base.cc      |        5156 |
| init                 | 0.000009 | mysql_prepare_select  | sql_select.cc    |        1050 |
| System lock          | 0.000005 | mysql_lock_tables     | lock.cc          |         306 |
| optimizing           | 0.000002 | optimize              | sql_optimizer.cc |         138 |
| statistics           | 0.000006 | optimize              | sql_optimizer.cc |         381 |
| preparing            | 0.000005 | optimize              | sql_optimizer.cc |         504 |
| executing            | 0.000001 | exec                  | sql_executor.cc  |         110 |
| Sending data         | 0.000025 | exec                  | sql_executor.cc  |         190 |
| end                  | 0.000002 | mysql_execute_select  | sql_select.cc    |        1105 |
| query end            | 0.000003 | mysql_execute_command | sql_parse.cc     |        5465 |
| closing tables       | 0.000004 | mysql_execute_command | sql_parse.cc     |        5544 |
| freeing items        | 0.000005 | mysql_parse           | sql_parse.cc     |        6969 |
| cleaning up          | 0.000006 | dispatch_command      | sql_parse.cc     |        1874 |
+----------------------+----------+-----------------------+------------------+-------------+
15 rows in set, 1 warning (0.00 sec)

To demonstrate how we can achieve the same with Performance Schema, we first identify our current connection id. In the real world, you might want to get the connection/processlist id of the thread you want to watch, i.e. from SHOW PROCESSLIST.

mysql> SELECT THREAD_ID INTO @my_thread_id
    -> FROM threads WHERE PROCESSLIST_ID = CONNECTION_ID();
Query OK, 1 row affected (0.00 sec)

Next, we identify the bounding EVENT_IDs for the statement stages. We look for the statement we want to profile using the query below against the events_statements_history_long table. Your LIMIT clause may vary depending on how many queries the server is receiving.

mysql> SELECT THREAD_ID, EVENT_ID, END_EVENT_ID, SQL_TEXT, NESTING_EVENT_ID
    -> FROM events_statements_history_long
    -> WHERE THREAD_ID = @my_thread_id
    ->   AND EVENT_NAME = 'statement/sql/select'
    -> ORDER BY EVENT_ID DESC LIMIT 3 \G
*************************** 1. row ***************************
       THREAD_ID: 13848
        EVENT_ID: 419
    END_EVENT_ID: 434
        SQL_TEXT: SELECT THREAD_ID INTO @my_thread_id FROM threads WHERE PROCESSLIST_ID = CONNECTION_ID()
NESTING_EVENT_ID: NULL
*************************** 2. row ***************************
       THREAD_ID: 13848
        EVENT_ID: 374
    END_EVENT_ID: 392
        SQL_TEXT: SELECT * FROM sysbench.sbtest1 LIMIT 1
NESTING_EVENT_ID: NULL
*************************** 3. row ***************************
       THREAD_ID: 13848
        EVENT_ID: 353
    END_EVENT_ID: 364
        SQL_TEXT: select @@version_comment limit 1
NESTING_EVENT_ID: NULL
3 rows in set (0.02 sec)

From the results above, we are mostly interested in the EVENT_ID and END_EVENT_ID values from the second row; these bound the stage events of this particular query in the events_stages_history_long table.

mysql> SELECT EVENT_NAME, SOURCE, (TIMER_END-TIMER_START)/1000000000 as 'DURATION (ms)'
    -> FROM events_stages_history_long
    -> WHERE THREAD_ID = @my_thread_id AND EVENT_ID BETWEEN 374 AND 392;
+--------------------------------+----------------------+---------------+
| EVENT_NAME                     | SOURCE               | DURATION (ms) |
+--------------------------------+----------------------+---------------+
| stage/sql/init                 | mysqld.cc:998        |        0.0214 |
| stage/sql/checking permissions | sql_parse.cc:5797    |        0.0023 |
| stage/sql/Opening tables       | sql_base.cc:5156     |        0.0205 |
| stage/sql/init                 | sql_select.cc:1050   |        0.0089 |
| stage/sql/System lock          | lock.cc:306          |        0.0047 |
| stage/sql/optimizing           | sql_optimizer.cc:138 |        0.0016 |
| stage/sql/statistics           | sql_optimizer.cc:381 |        0.0058 |
| stage/sql/preparing            | sql_optimizer.cc:504 |        0.0044 |
| stage/sql/executing            | sql_executor.cc:110  |        0.0008 |
| stage/sql/Sending data         | sql_executor.cc:190  |        0.0251 |
| stage/sql/end                  | sql_select.cc:1105   |        0.0017 |
| stage/sql/query end            | sql_parse.cc:5465    |        0.0031 |
| stage/sql/closing tables       | sql_parse.cc:5544    |        0.0037 |
| stage/sql/freeing items        | sql_parse.cc:6969    |        0.0056 |
| stage/sql/cleaning up          | sql_parse.cc:1874    |        0.0006 |
+--------------------------------+----------------------+---------------+
15 rows in set (0.01 sec)

As you can see, the results are pretty close; not exactly the same, but close. SHOW PROFILE shows Duration in seconds, while the results above are in milliseconds.
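If you want numbers that compare directly with SHOW PROFILE, a small variation of the query above converts the Performance Schema timers (which are reported in picoseconds) straight to seconds:

mysql> SELECT EVENT_NAME, SOURCE, (TIMER_END-TIMER_START)/1000000000000 as 'DURATION (s)'
    -> FROM events_stages_history_long
    -> WHERE THREAD_ID = @my_thread_id AND EVENT_ID BETWEEN 374 AND 392;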

Some limitations to this method though:

  • As we’ve seen, it takes a few extra steps to dig out the information we need. Because we have to identify the statement to profile manually, this procedure may not be easy to port into tools like the sys schema or pstop.
  • Only possible if Performance Schema is enabled (by default it’s enabled since MySQL 5.6.6, yay!)
  • Does not cover all metrics compared to the native profiling, e.g. CONTEXT SWITCHES, BLOCK IO, SWAPS
  • Depending on how busy the server you are testing is, the history tables may be too small; you then either have to increase them or you lose the history too early (see the performance_schema_events_stages_history_long_size variable and the quick check sketched right after this list). Using ps_history might help in this case, though with a little modification to the queries.
  • The resulting Duration per event may vary. I would think this may be due to the additional timer overhead described in the performance_timers table. In any case, we hope to get this cleared up when this bug is fixed.
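On the history-table sizing point above: performance_schema_events_stages_history_long_size can only be set at server startup, so a quick check of the current value plus a my.cnf change (the 100000 below is just an example) would look roughly like this:

mysql> SHOW GLOBAL VARIABLES LIKE 'performance_schema_events_stages_history_long_size';

# my.cnf
[mysqld]
performance_schema_events_stages_history_long_size = 100000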

The post Profiling MySQL queries from Performance Schema appeared first on MySQL Performance Blog.

io buffer sizes

Latest Forum Posts - April 16, 2015 - 9:14am
So doing some research I found a bunch of old 2007-2010 articles talking about I/O buffers (read_buffer_size and others) and the performance change around 128K when they switched from malloc to mmap. I was wondering if there is any updated info on this, and whether this changes at all in virtualized environments, etc. I would love to see a new blog post that deep dives into this.

http://inaugust.com/post/15
http://www.percona.com/blog/2007/09/...ue-is-optimal/
http://www.percona.com/blog/2010/10/...size-in-mysql/
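For anyone wanting to re-run that kind of test today, read_buffer_size can be changed per session, so it is easy to try values on both sides of the 128K boundary (the sizes below are just examples):

mysql> SET SESSION read_buffer_size = 131072;   -- 128K
mysql> SET SESSION read_buffer_size = 2097152;  -- 2M
mysql> SHOW SESSION VARIABLES LIKE 'read_buffer_size';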

xtrabackup_binlog_info recording only GTID and not log pos anymore

Latest Forum Posts - April 15, 2015 - 2:49pm
Hi,

We're using Percona's innobackupex version v1.5.1-xtrabackup with xtrabackup version 2.2.8.

At some point, the xtrabackup_binlog_info file from restored backups started to contain only the GTID and not the binary log filename and position as it used to:

backups-01:/restore/full# cat xtrabackup_binlog_info
62397e9e-981c-11e3-a2bf-02011441b742:1-435590

This makes it a lot harder to restore to a point in time using mysqlbinlog, because this utility requires the --start-position parameter to be a binary log position from which to begin replaying the SQL statements.

The only workaround that I found is to use mysqlbinlog to dump the initial binary log file and manually search for the next GTID, taking its starting log position.
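One possible alternative, assuming the binary logs were written with GTIDs enabled: mysqlbinlog in 5.6 can filter on GTID sets, so you can replay everything that is not already contained in the backup instead of hunting for a file/position. A rough sketch, with made-up binary log file names:

mysqlbinlog --exclude-gtids='62397e9e-981c-11e3-a2bf-02011441b742:1-435590' \
    mysql-bin.000001 mysql-bin.000002 | mysql -u root -p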

Is this expected, or should I file a bug report? Is there a way to make point-in-time recovery straightforward with only the starting GTID at hand?

Thanks in advance,

Diego

Checking table definition consistency with mysqldiff

Latest MySQL Performance Blog posts - April 15, 2015 - 1:45pm

Data inconsistencies in replication environments are pretty common. There are lots of posts that explain how to fix those using pt-table-checksum and pt-table-sync. Usually we only care about the data, but from time to time we receive this question in support:

How can I check the table definition consistency between servers?

Replication also allows us to have different table definitions between master and slaves. For example, there are some cases where you need indexes on slaves for querying purposes that are not really needed on the master. There are other cases where those differences are just a mistake that needs to be fixed.

mysqldiff, included in Oracle’s MySQL Utilities, can help us find those differences and get the information we need to fix them. In this post I’m going to show you how to use it with an example.

Find table definition inconsistencies

mysqldiff allows us to find those inconsistencies by checking the differences between tables on the same server (different databases) or on different servers (also possibly in different databases). In this example I’m going to search for differences in table definitions between two different servers, server1 and server2.

The command line is pretty simple. This is used to compare the tables on “test” database:

mysqldiff --server1=user@host1 --server2=user@host2 test:test

If the database name is different:

mysqldiff --server1=user@host1 --server2=user@host2 testdb:anotherdb

If the table name is different:

mysqldiff --server1=user@host1 --server2=user@host2 testdb.table1:anotherdb.anothertable

Now I want to check the table definition consistency between two servers. The database’s name is “employees”:

# mysqldiff --force --server1=root:msandbox@127.0.0.1:21489 --server2=root:msandbox@127.0.0.1:21490 employees:employees
# WARNING: Using a password on the command line interface can be insecure.
# server1 on 127.0.0.1: ... connected.
# server2 on 127.0.0.1: ... connected.
# Comparing `employees` to `employees`                                [PASS]
# Comparing `employees`.`departments` to `employees`.`departments`    [FAIL]
# Object definitions differ. (--changes-for=server1)
#
--- `employees`.`departments`
+++ `employees`.`departments`
@@ -1,6 +1,6 @@
 CREATE TABLE `departments` (
   `dept_no` char(4) NOT NULL,
-  `dept_name` varchar(40) NOT NULL,
+  `dept_name` varchar(256) DEFAULT NULL,
   PRIMARY KEY (`dept_no`),
   UNIQUE KEY `dept_name` (`dept_name`)
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1
# Comparing `employees`.`dept_emp` to `employees`.`dept_emp`          [PASS]
# Comparing `employees`.`dept_manager` to `employees`.`dept_manager`  [PASS]
# Comparing `employees`.`employees` to `employees`.`employees`        [FAIL]
# Object definitions differ. (--changes-for=server1)
#
--- `employees`.`employees`
+++ `employees`.`employees`
@@ -5,5 +5,6 @@
   `last_name` varchar(16) NOT NULL,
   `gender` enum('M','F') NOT NULL,
   `hire_date` date NOT NULL,
-  PRIMARY KEY (`emp_no`)
+  PRIMARY KEY (`emp_no`),
+  KEY `last_name` (`last_name`,`first_name`)
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1
# Comparing `employees`.`salaries` to `employees`.`salaries`          [PASS]
# Comparing `employees`.`titles` to `employees`.`titles`              [PASS]
Compare failed. One or more differences found.

There are at least two differences, one in the departments table and another in the employees table. The output is similar to diff. By default the tool stops after finding the first difference; that is why we use --force, to tell the tool to continue checking all the tables.

It shows us that on departments, dept_name is varchar(40) on server1 and varchar(256) on server2. For the employees table, server2 has a KEY (last_name, first_name) that is not present on server1. Why is it taking server2 as the reference? Because of this line:

# Object definitions differ. (--changes-for=server1)

So, the changes shown in the diff are for server1. If you want server2 to be the one to be changed and server1 used as the reference, then --changes-for=server2 would be needed.
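In other words, the reversed comparison would be the same command with the reference flipped, roughly:

# mysqldiff --force --changes-for=server2 --server1=root:msandbox@127.0.0.1:21489 --server2=root:msandbox@127.0.0.1:21490 employees:employees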

In some cases the diff output is not really useful. We actually need the SQL statements to make the changes on the server. We just need to add --difftype=sql to the command line:

# mysqldiff --force --difftype=sql --server1=root:msandbox@127.0.0.1:21489 --server2=root:msandbox@127.0.0.1:21490 employees:employees
[...]
# Comparing `employees`.`departments` to `employees`.`departments`    [FAIL]
# Transformation for --changes-for=server1:

ALTER TABLE `employees`.`departments` DROP INDEX dept_name, ADD UNIQUE INDEX dept_name (dept_name), CHANGE COLUMN dept_name dept_name varchar(256) NULL;

[...]
# Comparing `employees`.`employees` to `employees`.`employees`        [FAIL]
# Transformation for --changes-for=server1:

# ALTER TABLE `employees`.`employees` DROP PRIMARY KEY, ADD PRIMARY KEY(`emp_no`), ADD INDEX last_name (last_name,first_name);

As we can see, the tool is not perfect. There are two problems here:

1- On the “departments” table it drops a UNIQUE key that is present on both servers, only to add it again. A waste of time and resources.

2- On the “employees” table it drops and recreates the PRIMARY KEY, again something that is not needed at all.

I have created a bug report, but this also teaches us a good lesson: don’t just copy and paste commands without first double-checking them.
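For comparison, assuming server2 holds the desired definitions, the minimal statements actually needed on server1 would be something along these lines (hand-written, not mysqldiff output):

ALTER TABLE `employees`.`departments`
  MODIFY COLUMN `dept_name` varchar(256) DEFAULT NULL;

ALTER TABLE `employees`.`employees`
  ADD INDEX `last_name` (`last_name`, `first_name`);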

What does mysqldiff run under the hood?

Mostly queries on INFORMATION_SCHEMA. These are the ones used to check inconsistencies on departments:

SHOW CREATE TABLE `departments`;

SELECT TABLE_SCHEMA, TABLE_NAME, ENGINE, AUTO_INCREMENT, AVG_ROW_LENGTH, CHECKSUM,
       TABLE_COLLATION, TABLE_COMMENT, ROW_FORMAT, CREATE_OPTIONS
  FROM INFORMATION_SCHEMA.TABLES
 WHERE TABLE_SCHEMA = 'employees' AND TABLE_NAME = 'departments';

SELECT ORDINAL_POSITION, COLUMN_NAME, COLUMN_TYPE, IS_NULLABLE, COLUMN_DEFAULT,
       EXTRA, COLUMN_COMMENT, COLUMN_KEY
  FROM INFORMATION_SCHEMA.COLUMNS
 WHERE TABLE_SCHEMA = 'employees' AND TABLE_NAME = 'departments';

SELECT PARTITION_NAME, SUBPARTITION_NAME, PARTITION_ORDINAL_POSITION,
       SUBPARTITION_ORDINAL_POSITION, PARTITION_METHOD, SUBPARTITION_METHOD,
       PARTITION_EXPRESSION, SUBPARTITION_EXPRESSION, PARTITION_DESCRIPTION
  FROM INFORMATION_SCHEMA.PARTITIONS
 WHERE TABLE_SCHEMA = 'employees' AND TABLE_NAME = 'departments';

SELECT CONSTRAINT_NAME, COLUMN_NAME, REFERENCED_TABLE_SCHEMA, REFERENCED_TABLE_NAME,
       REFERENCED_COLUMN_NAME
  FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
 WHERE TABLE_SCHEMA = 'employees' AND TABLE_NAME = 'departments'
   AND REFERENCED_TABLE_SCHEMA IS NOT NULL;

-- The TABLES, COLUMNS and PARTITIONS queries above are then repeated for the second server.

As a summary, it checks partitions, row format, collation, constraints and so on.

Conclusion

There are different tools for different purposes. We can check data consistency with pt-table-checksum/pt-table-sync, but also the table definitions with mysqldiff.

The post Checking table definition consistency with mysqldiff appeared first on MySQL Performance Blog.

Missing my.cnf in default install

Latest Forum Posts - April 15, 2015 - 12:08pm
In the past I have installed several PXC setups in different environments and OSes.
Each installation always came with a default my.cnf for easy modification and tuning, and it worked as expected.
But now I am wondering about the missing my.cnf in a clean install.

Percona Team, you OK?

Full SST after clean shutdown of all nodes in the cluster and bootstrapping one node

Latest Forum Posts - April 15, 2015 - 11:43am
Hello,

We are running a 3-node PXC cluster on 5.6.21. After gracefully shutting down all nodes in the cluster, we bootstrapped the node with the most advanced seqno in its grastate.dat file (/etc/init.d/mysql bootstrap-pxc). When starting up the rest of the nodes (service mysql start), this triggers a full SST, which takes a long time. We are wondering how we can avoid the full SST and whether IST is possible after bootstrapping a node in the cluster. And what is the best way to restart a cluster after gracefully shutting down all nodes?
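For reference, the sequence described above written out as commands (the datadir path assumes the default /var/lib/mysql):

# On each node, check the recovered position and note the seqno:
grep seqno /var/lib/mysql/grastate.dat

# Bootstrap the node with the most advanced seqno:
/etc/init.d/mysql bootstrap-pxc

# Start the remaining nodes normally:
service mysql start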

Any help is appreciated.

how does --kill-long-query-type affect slave sql thread

Latest Forum Posts - April 15, 2015 - 5:01am
Hi,

I'm taking backups on a slave server,
and I had a problem when innobackupex issued FLUSH TABLES WITH READ LOCK: it was blocked because of a long-running query,
as described in http://www.percona.com/doc/percona-x...ved_ftwrl.html

If, for example, I set --kill-long-queries-timeout=20 and --kill-long-query-type=all,
will it also kill INSERT/UPDATE/DDL queries that are run by the slave SQL thread?
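For context, a full invocation with those options would look roughly like this (the target directory is made up):

innobackupex --kill-long-queries-timeout=20 --kill-long-query-type=all /backups/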

Thanks for the help

Missing .frm files

Latest Forum Posts - April 15, 2015 - 1:09am
Let me first explain my scenario,

I have two MySQL instances on my local Linux box, running on ports 3306 and 3307 respectively. I want to take an incremental backup
of 3 databases from 3306 and restore it on 3307.

The problem is that when I complete the backup & restore process (using innobackupex), the data directory doesn't contain any .frm files.

Below are the steps which I followed:

********************************************************
Taking, Preparing and restoring multiple databases:-

Complete backup:-
innobackupex --defaults-file=/opt/mysql/etc/my.cnf --user=root --password=xxxxx --port=3306 --host=127.0.0.1 --include=test,test_1 /root/sunil/jimit/Percona/data/backups/multiple_db_backups

First Incremental backup:-
-> Create some new tables and add data in those tables:-
innobackupex --defaults-file=/opt/mysql/etc/my.cnf --user=root --password=xxxxx --port=3306 --host=127.0.0.1 --include=test,test_1 --incremental /root/sunil/jimit/Percona/data/backups/multiple_db_incr_backups/first_incremental/ --incremental-basedir=/root/sunil/jimit/Percona/data/backups/multiple_db_backups/2015-03-26_06-27-14


Second Incremental Backup:-
-> Create some new tables and add data in those tables:-
innobackupex --defaults-file=/opt/mysql/etc/my.cnf --user=root --password=xxxxx --port=3306 --host=127.0.0.1 --include=test,test_1 --incremental /root/sunil/jimit/Percona/data/backups/multiple_db_incr_backups/second_incremental/ --incremental-basedir=/root/sunil/jimit/Percona/data/backups/multiple_db_incr_backups/first_incremental/2015-03-20_07-44-43



Preparing multiple backups in order to restore:-

-> Applying logs to the main complete backup:-
innobackupex --defaults-file=/opt/mysql/etc/my.cnf --user=root --password=xxxxx --port=3306 --host=127.0.0.1 --include=test,test_1 --apply-log --redo-only /root/sunil/jimit/Percona/data/backups/multiple_db_backups/2015-03-20_07-30-31


-> Applying logs of the first incremental backup to the main complete backup:-
Applying first incremental backup:-
innobackupex --defaults-file=/opt/mysql/etc/my.cnf --user=root --password=xxxxx --port=3306 --host=127.0.0.1 --include=test,test_1 --apply-log /root/sunil/jimit/Percona/data/backups/multiple_db_backups/2015-03-20_07-30-31 --incremental-dir=/root/sunil/jimit/Percona/data/backups/multiple_db_incr_backups/first_incremental/2015-03-20_07-44-43


-> Applying logs of the second incremental backup to the main complete backup:-
Applying second incremental backup:-
innobackupex --defaults-file=/opt/mysql/etc/my.cnf --user=root --password=xxxxx --port=3306 --host=127.0.0.1 --include=test,test_1 --apply-log /root/sunil/jimit/Percona/data/backups/multiple_db_backups/2015-03-20_07-30-31 --incremental-dir=/root/sunil/jimit/Percona/data/backups/multiple_db_incr_backups/second_incremental/2015-03-20_07-55-05



Restoring Complete backups for multiple databases prepared with first and second incremental ones:-
-> Stop Mysql service
/etc/init.d/mysqld-5.6.20 stop

-> Update the data directory setting in the my.cnf file, pointing it to an empty data directory folder (the path appears in 3 places).
-> Make the change wherever a data directory path appears in the my.cnf file.

Path is /data/multiple_db_datadir/

innobackupex --copy-back --defaults-file=/opt/mysql/etc/my.cnf /root/sunil/jimit/Percona/data/backups/multiple_db_backups/2015-03-20_07-30-31


-> Give ownership to the mysql user with chown
chown -R mysql:mysql /data/multiple_db_datadir/mysql/

-> Copy mysql database to new data directory folder where test database resides.
cp -r /data/mysql-5.6-16/mysql /data/multiple_db_datadir/

Again give permission:-
chown -R mysql:mysql /data/multiple_db_datadir/mysql/


-> Start Mysql service and cross check tables of test database.
/etc/init.d/mysqld-5.6.20 start
/opt/mysql/bin/mysql -uroot -pxxxxx -P3307

Team Tokutek is proud to join Team Percona!

Latest MySQL Performance Blog posts - April 14, 2015 - 11:18am

If you haven’t already heard, on the Tuesday morning of the 2015 Percona Live MySQL Conference and Expo it was announced that Tokutek is now part of the Percona family. This means TokuDB® for MySQL and TokuMX™ for MongoDB are now Percona products, and the Tokutek team is now part of the Percona team.

Percona’s well-deserved reputation for unparalleled customer service and support in the MySQL market makes them the perfect home for Tokutek’s ground-breaking products.  And with the Tokutek acquisition, Percona can expand and extend their activities and offerings into the MongoDB market.

This is a win/win for NoSQL and MySQL fans alike.

More About Tokutek

Tokutek is the company that productized a new and revolutionary form of database indexing designed specifically for modern, Big Data applications.  Based on data science research on new methods for high-performance data processing for data sets that no longer fit in memory, Fractal Tree® indexing is the secret sauce inside TokuDB and TokuMX.

Unlike the 40-year-old B-tree indexing found in other MySQL and MongoDB solutions, Fractal Tree indexing enables: up to 50x better performance; as much as 90% data compression; and 95% better write-optimization.  That translates into significant customer satisfaction gains as well as major cost savings.

In addition, drawing upon their experience in the MySQL world, Tokutek developers introduced full ACID and MVCC transaction compliance, better concurrency, and an improved failover protocol to the MongoDB marketplace with TokuMX. And that means better reliability for mission-critical big data applications built with MongoDB.

Next Steps

The Tokutek team is very excited to be joining the Percona team as we move into the next phase of growth in the MySQL and NoSQL markets.

For now, if you want to learn more about TokuDB and TokuMX please visit www.tokutek.com.  (In the coming weeks, the Tokutek site will be folded into the Percona site.)

If you want to strike up a conversation about enterprise subscriptions for either product drop us a line at tokutek@percona.com.

Regards,
Craig Clark
Vice President, Percona Sales

The post Team Tokutek is proud to join Team Percona! appeared first on MySQL Performance Blog.

Tokutek now part of the Percona family

Latest MySQL Performance Blog posts - April 14, 2015 - 11:13am

It is my pleasure to announce that Percona has acquired Tokutek and will take over development and support for TokuDB® and TokuMX™ as well as the revolutionary Fractal Tree® indexing technology that enables those products to deliver improved performance, reliability and compression for modern Big Data applications.

At Percona we have been working with the Tokutek team since 2009, helping to improve performance and scalability. The TokuDB storage engine has been available for Percona Server for about a year, so joining forces is quite a natural step for us.

Fractal Tree indexing technology—developed by years of data science research at MIT, Stony Brook University and Rutgers University—is the new generation data structure which, for many workloads, leapfrogs traditional B-tree technology which was invented in 1972 (over 40 years ago!).  It is also often superior to LSM indexing, especially for mixed workloads.

But as we all know in software engineering, an idea alone is not enough.  There are hundreds of databases which have data structures based on essentially the same B-Tree idea, but their performance and scalability differs dramatically. The Tokutek engineering team has spent more than 50 man years designing, implementing and polishing this technology, which resulted  (in my opinion) in the only production-ready Open Source transactional alternative to the InnoDB storage engine in the MySQL space – TokuDB; and the only viable alternative distribution of MongoDB  – TokuMX.

Designed for the Modern World – TokuDB and TokuMX were designed with modern database workloads, modern hardware and modern operating system properties in mind, which allowed for a much cleaner and more scalable architecture, leading to great performance and scalability.

Compression at Speed – Compression was an early part of the design, so a very high level of compression can be achieved with low performance overhead. In fact, chances are that with fast compression you will get better performance with compression enabled.
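As an illustration (my own example, not from the original announcement), the compression algorithm in TokuDB can be chosen per table through the row format:

CREATE TABLE metrics (
  id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  ts DATETIME NOT NULL,
  payload VARCHAR(255)
) ENGINE=TokuDB ROW_FORMAT=TOKUDB_ZLIB;  -- e.g. TOKUDB_QUICKLZ (faster) or TOKUDB_LZMA (smaller)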

Great Read/Write Balance – Databases (or storage engines) are often classified as read-optimized or write-optimized, and even though you have most likely heard about the much better insert speed with Fractal Tree indexing, both for MySQL and MongoDB, you may not know that this is achieved with read performance in the same ballpark or better for many workloads. The difference is just not so drastic.

Multiple Clustered Keys – This is a great feature which, together with compression and low-cost index maintenance, allows TokuDB and TokuMX to reach much better performance for performance-critical queries by clustering the data needed by such queries together.

Messages – When we speak about a conventional data structure such as a B-tree or a hash table, it is essentially a way data is stored and operations are performed on it. Fractal Tree indexing operates with a different paradigm, focused around “messages” being delivered towards the data to perform the operations in question. This allows it to do a lot of clever stuff, such as implementing more complex operations with the same message, merging multiple messages together to optimize performance, and using messages for internal purposes such as low-overhead online optimization, table structure changes, etc.

Low Overhead Maintenance – One of the obvious uses of such messages is low-overhead maintenance. The InnoDB storage engine allows you to add a column “online,” which internally requires a full table rebuild, requiring a lot of time and resources to copy the table. TokuDB, however, can use a “broadcast message” to add the column, which becomes available almost immediately and gradually propagates physically as data is modified. It is quite a difference!

Smart No-Read Updates – Messages allow you to do smart complex updates without reading the data, dramatically improving performance. For example, this is used to implement “Read Free Replication”.

Optimized In-Memory Data Structures – You may have heard a lot about in-memory databases, which are faster because they use data structures optimized for the properties of memory rather than just caching pages from disk, as, for example, MyISAM and InnoDB do. TokuDB and TokuMX offer you the best of both worlds by using memory-optimized data structures for resident data and disk-optimized data structures when data is pushed to disk.

Optimized IO  –  Whether you’re using legacy spinning media or Solid State Storage you will appreciate TokuDB having optimized IO – doing less and more sequential IO which helps spinning media performance, as well as dramatically reducing wear on flash, so you can improve longevity for your media or use lower cost storage.

Between the Tokutek engineering team and Percona we have a lot of ideas on how to take this technology even further, so it is the technology of choice for large portions of modern database workloads in the MySQL and MongoDB space. We are committed to working together to advance the limits of Open Source databases (relational or not)!

Interested to check out whether TokuDB or TokuMX is right for your application? Please contact us at tokutek@percona.com.

The post Tokutek now part of the Percona family appeared first on MySQL Performance Blog.

need HA and failover

Latest Forum Posts - April 14, 2015 - 11:13am

We have a MySQL production database in AWS EC2 and we would like to implement a full HA solution so that we have no downtime and can fail over. Can you send me a detailed solution? Much appreciated.

Percona Toolkit 2.2.14 is now available

Latest MySQL Performance Blog posts - April 14, 2015 - 9:33am

Percona is pleased to announce the availability of Percona Toolkit 2.2.14.  Released April 14, 2015. Percona Toolkit is a collection of advanced command-line tools to perform a variety of MySQL server and system tasks that are too difficult or complex for DBAs to perform manually. Percona Toolkit, like all Percona software, is free and open source.

This release is the current GA (Generally Available) stable release in the 2.2 series. It includes multiple bug fixes for pt-table-checksum with better support for Percona XtraDB Cluster, various other fixes, as well as continued preparation for MySQL 5.7 compatibility. Full details are below. Downloads are available here and from the Percona Software Repositories.

New Features:

  • pt-slave-find can now resolve the IP address and show the slave’s hostname. This can be done with the new --resolve-address option.
  • pt-table-sync can now ignore tables whose names match a specific Perl regex with the new --ignore-tables-regex option (a short usage sketch of both options follows below).
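A quick usage sketch of both new options (hosts, credentials and the table pattern below are made up):

# Show slave hostnames instead of bare IP addresses:
pt-slave-find --resolve-address h=master.example.com,u=percona,p=secret

# Sync a slave from its master while skipping tables named tmp_*:
pt-table-sync --execute --sync-to-master --ignore-tables-regex '^tmp_' h=slave1.example.com,u=percona,p=secret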

Bugs Fixed:

  • Fixed bug 925781: Inserting non-BMP characters into a column with utf8 charset would cause the Incorrect string value error when running pt-table-checksum.
  • Fixed bug 1368244: pt-online-schema-change --alter-foreign-keys-method=drop-swap was not atomic and thus could be interrupted. Fixed by disabling common interrupt signals during the critical drop-rename phase.
  • Fixed bug 1381280: pt-table-checksum was failing on BINARY field in Primary Key. Fixed by implementing new --binary-index flag to optionally create checksum table using BLOB data type.
  • Fixed bug 1421405: Running pt-upgrade against a log with many identical (or similar) queries was producing repeated sections with the same fingerprint.
  • Fixed bug 1402730: pt-duplicate-key-checker was not checking for duplicate keys when --verbose option was set.
  • Fixed bug 1406390: A race condition was causing pt-heartbeat to crash with sleep argument error.
  • Fixed bug 1417558: pt-stalk when used along with --collect-strace didn’t write the strace output to the expected destination file.
  • Fixed bug 1421025: Missing dependency for perl-TermReadKey RPM package was causing toolkit commands to fail when they were run with --ask-pass option.
  • Fixed bug 1421781: pt-upgrade would fail when log contained SELECT...INTO queries. Fixed by ignoring/skipping those queries.
  • Fixed bug 1425478: pt-stalk was removing non-empty files that were starting with an empty line.
  • Fixed bug 1419098: Fixed bad formatting in the pt-table-checksum documentation.

Details of the release can be found in the release notes and the 2.2.14 milestone at Launchpad. Bugs can be reported on the Percona Toolkit launchpad bug tracker.

The post Percona Toolkit 2.2.14 is now available appeared first on MySQL Performance Blog.

MySQL: Avoid filesort when using ORDER BY LIMIT

Latest Forum Posts - April 14, 2015 - 8:53am

CREATE TABLE table1 (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `kid` int(10) unsigned NOT NULL,
  `table_group` varchar(100) COLLATE utf8_bin NOT NULL
) ENGINE=InnoDB;

I have the following indexes on this table:

PRIMARY KEY (`id`),
KEY `index1` (`kid`,`table_group`)

The table has 5 million rows matching the WHERE clause. When I run EXPLAIN on the query below it uses a filesort, and with the LIMIT the query takes about 10 seconds, which is very high.

mysql> explain select * from db1.table1 FORCE INDEX(index1)
    -> where kid=187 and table_group in ('NOT_PRESENT', 'NOTHING', 'PERROR')
    -> order by id limit 200\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: table1
         type: range
possible_keys: index1
          key: index1
      key_len: 306
          ref: NULL
         rows: 1052764
        Extra: Using index condition; Using filesort

I want to avoid this filesort. Please help.
