
Feed aggregator

Can only get a Single Node in CentOS 7

Latest Forum Posts - December 24, 2014 - 11:16am
Can't seem to get hosts talking to each other. Cluster size is always 1 and, even though I can ping boxes from each other, they won't join.

Followed the guide here: http://www.percona.com/doc/percona-x...ots_howto.html
with these packages:

Percona-XtraDB-Cluster-56-debuginfo.x86_64 1:5.6.21-25.8.940.el7 @percona
Percona-XtraDB-Cluster-client-56.x86_64 1:5.6.21-25.8.940.el7 @percona
Percona-XtraDB-Cluster-full-56.x86_64 1:5.6.21-25.8.940.el7 @percona
Percona-XtraDB-Cluster-galera-3.x86_64 3.8-1.3390.rhel7 @percona
Percona-XtraDB-Cluster-galera-3-debuginfo.x86_64 3.8-1.3390.rhel7 @percona
Percona-XtraDB-Cluster-garbd-3.x86_64 3.8-1.3390.rhel7 @percona
Percona-XtraDB-Cluster-server-56.x86_64 1:5.6.21-25.8.940.el7 @percona
Percona-XtraDB-Cluster-shared-56.x86_64 1:5.6.21-25.8.940.el7 @percona
Percona-XtraDB-Cluster-test-56.x86_64 1:5.6.21-25.8.940.el7 @percona
percona-release.x86_64 0.0-1 installed
percona-xtrabackup.x86_64 2.2.7-5050.el7 @percona

Host 1:
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| wsrep_incoming_addresses | 192.168.197.115:3306 |
| wsrep_evs_state | OPERATIONAL |
| wsrep_gcomm_uuid | eace2c17-8b9a-11e4-a4e2-375c869d986a |
| wsrep_cluster_conf_id | 3 |
| wsrep_cluster_size | 1 |
| wsrep_cluster_state_uuid | 98047d41-8b9a-11e4-9b38-37945e1e5709 |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |


[root@localhost ~]# cat /etc/my.cnf
[mysqld]
# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
#wsrep_cluster_address=gcomm://192.168.197.115,192.168.136.56,192.168.134.12
wsrep_cluster_address=gcomm://
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node address
wsrep_node_address=192.168.197.115
#
wsrep_cluster_name=vfcluster1
wsrep_sst_method=xtrabackup
wsrep_sst_auth="sstuser:sstpass"
# LOGGING #
log-bin = /var/lib/mysql/bin.log
log_error = /var/log/mysql/mysql-error.log
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 1

[root@localhost ~]# garbd -a gcomm://192.168.197.115,192.168.136.56,192.168.134.12?pc.wait_prim=no -g vfcluster1
2014-12-24 14:10:10.615 INFO: CRC-32C: using hardware acceleration.
2014-12-24 14:10:10.615 INFO: Read config:
        daemon:  0
        name:    garb
        address: gcomm://192.168.197.115,192.168.136.56,192.168.134.12?pc.wait_prim=no
        group:   vfcluster1
        sst:     trivial
        donor:
        options: gcs.fc_limit=9999999; gcs.fc_factor=1.0; gcs.fc_master_slave=yes
        cfg:
        log:
2014-12-24 14:10:10.617 INFO: protonet asio version 0
2014-12-24 14:10:10.617 INFO: Using CRC-32C for message checksums.
2014-12-24 14:10:10.618 INFO: backend: asio
2014-12-24 14:10:10.619 WARN: access file(gvwstate.dat) failed(No such file or directory)
2014-12-24 14:10:10.619 INFO: restore pc from disk failed
2014-12-24 14:10:10.620 INFO: GMCast version 0
terminate called after throwing an instance of 'gu::NotSet'
Aborted
Host 2:
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| wsrep_incoming_addresses | 192.168.197.115:3306 |
| wsrep_evs_state | OPERATIONAL |
| wsrep_gcomm_uuid | 6acc4184-8b9d-11e4-be4b-f666cd9bd1e5 |
| wsrep_cluster_conf_id | 1 |
| wsrep_cluster_size | 1 |
| wsrep_cluster_state_uuid | 389ea7e8-8b9d-11e4-9dda-e6c798b477e1 |
| wsrep_cluster_status | Primary |


[root@localhost ~]# cat /etc/my.cnf
[mysqld]
# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://192.168.197.115,192.168.136.56,192.168.134.12
#wsrep_cluster_address=gcomm://
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node address
wsrep_node_address=192.168.197.115
#
wsrep_cluster_name=vfcluster1
wsrep_sst_method=xtrabackup
wsrep_sst_auth="sstuser:sstpass"
# LOGGING #
log-bin = /var/lib/mysql/bin.log
log_error = /var/log/mysql/mysql-error.log
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 1

[root@localhost ~]# garbd -a gcomm://192.168.197.115,192.168.136.56,192.168.134.12?pc.wait_prim=no -g vfcluster1
2014-12-24 13:55:55.622 INFO: CRC-32C: using hardware acceleration.
2014-12-24 13:55:55.622 INFO: Read config:
        daemon:  0
        name:    garb
        address: gcomm://192.168.197.115,192.168.136.56,192.168.134.12?pc.wait_prim=no
        group:   vfcluster1
        sst:     trivial
        donor:
        options: gcs.fc_limit=9999999; gcs.fc_factor=1.0; gcs.fc_master_slave=yes
        cfg:
        log:
2014-12-24 13:55:55.624 INFO: protonet asio version 0
2014-12-24 13:55:55.624 INFO: Using CRC-32C for message checksums.
2014-12-24 13:55:55.625 INFO: backend: asio
2014-12-24 13:55:55.625 WARN: access file(gvwstate.dat) failed(No such file or directory)
2014-12-24 13:55:55.626 INFO: restore pc from disk failed
2014-12-24 13:55:55.627 INFO: GMCast version 0
terminate called after throwing an instance of 'gu::NotSet'
Aborted
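One thing worth double-checking in a setup like this (purely a hedged guess, not something confirmed above): CentOS 7 enables firewalld by default, and if the Galera ports are closed each node bootstraps on its own and reports wsrep_cluster_size = 1 even though the hosts can ping each other. A minimal sketch of opening them on every node:

for port in 3306 4444 4567 4568; do
    firewall-cmd --permanent --add-port=${port}/tcp   # MySQL, SST, group communication, IST
done
firewall-cmd --reload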

InnoDB crash recovery speed in MySQL 5.6

Latest MySQL Performance Blog posts - December 24, 2014 - 6:43am

It has been a while since I have looked at InnoDB crash recovery. A lot has changed in the last few years: we have serious crash recovery performance improvements in MySQL 5.5 and MySQL 5.6, we have solid state drives rising as the typical high-performance IO subsystem, and we also have the ability to set much larger log files, often paired with a much larger InnoDB Buffer Pool to work with.

First let me revisit the challenge we have with InnoDB configuration. For write-intensive workloads it is extremely important to size innodb_log_file_size for good performance; however, the larger the log files, the longer you might have to wait for InnoDB to complete crash recovery, which impacts your recovery strategy.

How much can innodb_log_file_size impact performance? Massively! Doing intensive writes to a database that fits well in memory, I’d say there’s a 10x difference between a combined log file size of 32GB vs 64MB.

Before we look at some performance numbers let’s talk about what InnoDB Crash Recovery time depends on:

Combined InnoDB Log File Size – innodb_log_file_size*innodb_log_files_in_group is what really matters. It does not really matter which of those two you change. I prefer to keep innodb_log_files_in_group at the default and only work with innodb_log_file_size. The larger the size you have allocated, the longer recovery will take.

innodb_checkpoint_age – The combined size of the InnoDB log files defines how many changes not yet reflected in the tablespace we may have, while innodb_checkpoint_age shows how many changes we actually have at the current moment, and that is the actual driving factor of recovery time. If you have very large log files allocated but innodb_checkpoint_age stays low for your workload, chances are recovery will be quick. Be careful however – intensive writes can push innodb_checkpoint_age much higher than the average for your workload, making recovery from a crash at that moment much longer (a quick way to measure innodb_checkpoint_age is shown after this list).

InnoDB Buffer Pool Size – This is another very important factor. During recovery, InnoDB has to redo changes to the unflushed/dirty pages in the buffer pool, the number of which is obviously limited by the buffer pool size. This also means innodb_max_dirty_pages_pct can be used to impact recovery speed, since the number of dirty pages is the true driving factor. With a small buffer pool, or a workload that keeps the number of dirty pages limited, innodb_checkpoint_age might never go high even if you have allowed for large log space.

Data Structure matters a lot for recovery speed. Generally, updating shorter rows means longer recovery time for the same log file size. This should make sense, as shorter row changes produce less log space for the same number of page changes. If you do a lot of blob writes, InnoDB crash recovery can be short even with relatively large log files.

Access Pattern is another key factor – the more “random” the access, the more distinct pages you will have touched for the same innodb_checkpoint_age, and the longer recovery can take.

Hardware – Better hardware means recovery goes faster, which is obvious. More specifically, you will be looking for storage performance at low concurrency (both reads and writes are important) as well as fast CPU cores – crash recovery at this point is not able to use multiple cores effectively.
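Several of the factors above revolve around innodb_checkpoint_age. If you want to see how large it gets for your own workload, you can compute it from SHOW ENGINE INNODB STATUS (on Percona Server you may also find it exposed directly as the Innodb_checkpoint_age status counter). A minimal sketch, assuming a local server and that the LSN is the last field on those status lines:

mysql -e "SHOW ENGINE INNODB STATUS\G" | \
  awk '/Log sequence number/ {lsn=$NF} /Last checkpoint at/ {cp=$NF} END {printf "checkpoint age: %.1f MB\n", (lsn-cp)/1024/1024}'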

Let’s now look at the test….
I am running Sysbench on an 11GB table, designed to fit in the 12GB buffer pool. Here is the exact command:

sysbench --tx-rate=4000 --num-threads=64 --report-interval=10 --max-time=0 --max-requests=0 --rand-type=uniform --oltp-table-size=40000000 --mysql-user=root --mysql-password=password --test=/usr/share/doc/sysbench/tests/db/update_index.lua run

The box has a rather low-end i3-4010U (4 threads) CPU with a Samsung EVO 840GB SSD, so numbers are expected to be higher on real server hardware.

In my first test I’m injecting 4000 updates/sec, which is about half of what the box can do at sustained load. I do this to illustrate a more common load scenario, as we rarely run systems at their saturation point in the real world. The uniform distribution should mean a worst-case scenario for in-memory workloads, though I think recovery would be slower still with random writes to a database much larger than the amount of memory.

At this workload I’m getting an innodb_checkpoint_age of 15GB even though the total log file size is 32GB. Crashing the system makes for about 40 minutes of recovery time, so the log was processed at about 6.25MB/sec.

Here are some interesting graphs:

As you can see, recovery essentially runs close to a single core. It can also be CPU bound at times (and will be more so with faster storage) – at certain stages of recovery, when logs are being scanned, it can be completely CPU bound (see how IO wait essentially goes to zero at some points).

Over time as recovery progresses more and more blocks become cached, so they do not have to be read from the disk for log records to be applied, meaning the workload becomes more and more write bound.

This is an unweighted IO utilization graph where 1000 corresponds to 100% of time where at least one IO request was outstanding. As you can see from this and the previous graph, InnoDB does not keep the IO well saturated all the time during crash recovery.

In addition to the first sysbench crash test I did two more. One was run with the system completely saturated with updates; this pushed innodb_checkpoint_age as high as 23.2GB and crash recovery took 1h 20 minutes, showing some 4.8MB/sec. The thing to consider in this case is that MySQL was not able to keep up with purging the history, so it was growing quickly, meaning crash recovery had to cover a lot of undo space modifications.

Finally, I also did a run with a more skewed Pareto distribution, which resulted in a 9.8GB innodb_checkpoint_age, a 33min crash recovery time and 4.94MB/sec of log processing speed.

As I explained above there are a lot of moving parts so your numbers are likely to be quite different, yet I hope this can provide some reasonable baseline you can use for calculation.

Note also that waiting for the server to recover from the crash is only one way to deal with recovery. Even if you size log files to be very small, you will likely need to deal with operating system boot and warmup, which will take a few minutes. It is often much better to use a different primary method of crash recovery, such as failover to a MySQL replication slave or using Percona XtraDB Cluster. If you use these methods you can often use quite a large combined InnoDB log file size to optimize for performance.

Final Thoughts: Even though InnoDB Crash Recovery has improved in MySQL 5.5 and MySQL 5.6, there is still room to improve it even more. As we see from the resource usage graphs during recovery, there is an opportunity both to use multiple CPU cores more effectively and to drive the IO subsystem with higher concurrency and in a more sustained fashion.

The post InnoDB crash recovery speed in MySQL 5.6 appeared first on MySQL Performance Blog.

XtraBackup fails instantly

Latest Forum Posts - December 24, 2014 - 1:51am
Hi there,

XtraBackup fails on me when backing up our database. The error is the infamous:

xtrabackup: error: log block numbers mismatch:
xtrabackup: error: expected log block no. 1012092717, but got no. 1020481321 from the log file.

So I googled and read the posts on this forum. At first I thought the logfiles were too small. Although 3 logfiles of 1GB each sounds pretty big, I checked the writing speed and it was around 300MB per minute. Checked with:

mysql -uUSER -pPASSWD -e "show engine innodb status\G select sleep(60); show engine innodb status\G" \
  | grep sequence \
  | awk '{ if(N==0) { N=$(NF); } else { printf("%.2f MB/m\n", ($(NF) - N)/1024.0/1024.0); } }'

With 300 MB/minute, I would need 18GB of logfiles to keep an hour's worth of transactions. So although the manual states the logfiles should not exceed 4GB in total, I increased the sizes to 3x 5GB. There was no warning or error in the logs and the databases kept working OK.

However, this did not work for the backups. Most of the time XtraBackup fails within seconds, and I can't believe the log has wrapped that fast. If only our I/O really was that fast...

Then I even increased to 2x 10GB to comply with the 18GB needed for an hour, but still to no avail.

Running XtraBackup directed to >/dev/null also fails within seconds, so I don't suspect the I/O is too slow.

Sometimes it seems to really try to copy the logfiles and then it takes a couple of minutes before it crashes, but most of the time it's within seconds.

I guess I did something wrong; could someone help me out?
Should I decrease the logfile sizes back to small values, even though the transaction log then won't hold much more than a couple of minutes' worth of transactions?


Is this a bug?

Latest Forum Posts - December 24, 2014 - 12:51am

I have hit a replication error:

The slave server crashed.
When it started again,
I executed the command "show slave status":

*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: xx.xx.xx.xx
Master_User: repl
Master_Port: xxxx
Connect_Retry: 60
Master_Log_File: mysql-bin.000858
Read_Master_Log_Pos: 379908871
Relay_Log_File: relay-bin.000620
Relay_Log_Pos: 724847546
Relay_Master_Log_File: mysql-bin.000806
Slave_IO_Running: Yes
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 1062
Last_Error: Error 'Duplicate entry '1544846670' for key 'PRIMARY'' on query. Default database: 'xxxxxx_databasename'. Query: 'INSERT INTO xxx_tabname (......) VALUES ( 1165305, 17120165, 10, 100301, 81003010003, 2005, 1005, 50, 0, NULL, '2013-04-28 18:24:48', '2013-07-28 23:59:59', 1000, 1000, NULL, 1000, NULL, NULL, 0, NULL,
Skip_Counter: 0
Exec_Master_Log_Pos: 724847400
Relay_Log_Space: 56238889769
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 1062
Last_SQL_Error: Error 'Duplicate entry '1544846670' for key 'PRIMARY'' on query. Default database: 'xxxxxx_databasename'. Query: 'INSERT INTO xxx_tabname (......) VALUES ( 1165305, 17120165, 10, 100301, 81003010003, 2005, 1005, 50, 0, NULL, '2013-04-28 18:24:48', '2013-07-28 23:59:59', 1000, 1000, NULL, 1000, NULL, NULL, 0, NULL,
Replicate_Ignore_Server_Ids:
Master_Server_Id: 4
1 row in set (0.00 sec)

But I found that the slave did not contain a row with id = 1544846670.

Then I executed 'start slave'
and ran 'show slave status' again:
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: xx.xx.xx.xx
Master_User: repl
Master_Port: xxxx
Connect_Retry: 60
Master_Log_File: mysql-bin.000858
Read_Master_Log_Pos: 379935147
Relay_Log_File: relay-bin.000620
Relay_Log_Pos: 724847546
Relay_Master_Log_File: mysql-bin.000806
Slave_IO_Running: Yes
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 1062
Last_Error: Error 'Duplicate entry '1544846671' for key 'PRIMARY'' on query. Default database: 'xxxxxx_databasename'. Query: 'INSERT INTO xxx_tabname (......) VALUES ( 1165306, 14765802804742, 10, 100301, 81003010003, 2005, 1000, 50, 0, NULL, '2013-04-28 17:45:12', '2014-04-27 23:59:59', 6000, 5000, NULL, 5000, NULL, NULL, 0, N
Skip_Counter: 0
Exec_Master_Log_Pos: 724847400
Relay_Log_Space: 56238916045
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 1062
Last_SQL_Error: Error 'Duplicate entry '1544846671' for key 'PRIMARY'' on query. Default database: 'dxxxxxx_databasename'. Query: 'INSERT INTO xxx_tabname (......) VALUES ( 1165306, 14765802804742, 10, 100301, 81003010003, 2005, 1000, 50, 0, NULL, '2013-04-28 17:45:12', '2014-04-27 23:59:59', 6000, 5000, NULL, 5000, NULL, NULL, 0, N
Replicate_Ignore_Server_Ids:
Master_Server_Id: 4
1 row in set (0.00 sec)


The error record became the next one,
but I did nothing else except the "start slave" command.
Then I found that the record (id='1544846671') did not exist either.


The slave has now been recreated.




mysql-error.log:

141211 18:56:11 [ERROR] Slave SQL: Error 'Duplicate entry '1544846670' for key 'PRIMARY'' on query. Default database: 'xxxx_databasename'. Query: 'INSERT INTO tabname (......) VALUES ( 1165305, 17120165, 10, 100301, 81003010003, 2005, 1005, 50, 0, NULL, '2013-04-28 18:24:48', '2013-07-28 23:59:59', 1000, 1000, NULL, 1000, NULL, NULL,
141211 18:56:11 [Warning] Slave: Duplicate entry '1544846670' for key 'PRIMARY' Error_code: 1062
141211 18:56:11 [ERROR] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'mysql-bin.000806' position 724847400
141212 9:35:01 [Note] Slave SQL thread initialized, starting replication in log 'mysql-bin.000806' at position 724847400, relay log '/var/lib/mysql/relay-bin.000620' position: 724847546
141212 9:35:01 [ERROR] Slave SQL: Error 'Duplicate entry '1544846671' for key 'PRIMARY'' on query. Default database: 'xxxx_databasename'. Query: 'INSERT INTO tabname (......) VALUES ( 1165306, 14765802804742, 10, 100301, 81003010003, 2005, 1000, 50, 0, NULL, '2013-04-28 17:45:12', '2014-04-27 23:59:59', 6000, 5000, NULL, 5000, NULL, N
141212 9:35:01 [Warning] Slave: Duplicate entry '1544846671' for key 'PRIMARY' Error_code: 1062
141212 9:35:01 [ERROR] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'mysql-bin.000806' position 724847400
141212 9:35:56 [Note] Slave SQL thread initialized, starting replication in log 'mysql-bin.000806' at position 724847400, relay log '/var/lib/mysql/relay-bin.000620' position: 724847546
141212 9:35:56 [ERROR] Slave SQL: Error 'Duplicate entry '1544846672' for key 'PRIMARY'' on query. Default database: 'xxxx_databasename'. Query: 'INSERT INTO tabname (......) VALUES ( 1165306, 36659631, 10, 100301, 81003010003, 2005, 1000, 50, 0, NULL, '2013-04-28 17:45:15', '2014-04-27 23:59:59', 6000, 5000, NULL, 5000, NULL, NULL,
141212 9:35:56 [Warning] Slave: Duplicate entry '1544846672' for key 'PRIMARY' Error_code: 1062
141212 9:35:56 [ERROR] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'mysql-bin.000806' position 724847400

Percona Configuration Wizard

Latest Forum Posts - December 23, 2014 - 10:37pm
Hi,

OS: CentOS7
DB: MariaDB
Storage: InnoDB

I've used Percona Configuration Wizard to create my.cnf

I found the following optimization parameters on a few forums, but they are not among the parameters created by the Percona Configuration Wizard.

max_user_connections = 60
max_allowed_packet = 1M
max_connect_errors = 999999
table_cache = 1200 ## max user connections x number of JOINs your heaviest SQL query contains

Trying to understand why the above parameters were ignored.

Advice or pointers are appreciated.

Thanks!

File carving methods for the MySQL DBA

Latest MySQL Performance Blog posts - December 23, 2014 - 8:31am

This is a long overdue blog post from London’s 44con Cyber Security conference back in September. A lot of old memories were brought to the front as it were; the one I’m going to cover in this blog post is: file carving.

So what is file carving? Despite the terminology it’s not going to be a full roast dinner; unless you have an appetite for data, which, as you’re here, I’m assuming you have.

The TL;DR of “what is file carving” is taking a target blob of data (often a multi-GB / TB file) and reducing it to targeted pieces of data. This could be, for instance, grabbing all the jpeg images in a packet capture / mysqldump, or pulling that single table/schema out of a huge mysqldump taken with --all-databases (if you’re not using mydumper you really should be; it avoids issues like this!), aka “sorting the wheat from the chaff”.

Let’s take an example: at the time of writing this post I am looking to extract a single schema out of one such mysqldump --all-databases file of around 2GB (2GB of course isn’t large, however it’s large enough to give a practical example; the methods for larger files are of course the same). So where to start?

You’ll need the following tools installed:

  1. xxd (you can substitute xxd for od, hexer or any other hex editing / viewing tool you are comfortable with, just make sure it can handle very large files)
  2. grep

Let’s carve out the mysql schema

dbusby@kali:~$ xxd yourdumpfile.sql | grep 'mysql' -B5 | grep 'ASE' -A2 -B2
00003c0: 6e74 2044 6174 6162 6173 653a 2060 6d79 nt Database: my
00003d0: 7371 6c60 0a2d 2d0a 0a43 5245 4154 4520 sql.--..CREATE
00003e0: 4441 5441 4241 5345 202f 2a21 3332 3331 DATABASE /*!3231
00003f0: 3220 4946 204e 4f54 2045 5849 5354 532a 2 IF NOT EXISTS*
0000400: 2f20 606d 7973 716c 6020 2f2a 2134 3031 / mysql /*!40

Wonderful, so we have some hex representation of the sql dumpfile. Why on earth do we want the hex? We need to define our offsets. In short, our offsets are the positions of the start and end of the chunk we intend to carve from the file.

From the above, our start offset is 00003d9, at the start of CREATE DATABASE. For those unfamiliar with hexdump output, I recommend looking at hexer, a vi-like tool: press v to enter visual selection mode, select a few characters, and you’ll note something as follows: “visual selection:  0x000003d9 – …”.

You can of course work out the range visually from the above: 00003d0 is the start of the line, each alphanumeric pair is a single byte, and the byte offset notation is hexadecimal (0,1,2,3,4,5,6,7,8,9,a,b,c,d,e,f).

00003d0: 7371 6c60 0a2d 2d0a 0a43 5245 4154 4520 sql.--..CREATE
00003d0 == s, 00003d1 == q, 00003d2 == l, and so on. We can easily verify this using xxd:
dbusby@kali:~$ xxd -s 0x3d9 yourdumpfile.sql | head -n3
00003d9: 4352 4541 5445 2044 4154 4142 4153 4520 CREATE DATABASE
00003e9: 2f2a 2133 3233 3132 2049 4620 4e4f 5420 /*!32312 IF NOT
00003f9: 4558 4953 5453 2a2f 2060 6d79 7371 6c60 EXISTS*/ mysql

Right, so now we need the end offset. As above, we establish a search pattern: since the schema data we're carving is in the midst of a larger file, we can look for the start of the dump of the next schema.


dbusby@kali:~$ xxd -s 0x3d9 yourdumpfile.sql | grep '--' -A5 | grep C -A2 -B2 | less
...
0083b19: 2043 7572 7265 6e74 2044 6174 6162 6173 Current Databas
0083b29: 653a 2060 7065 7263 6f6e 6160 0a2d 2d0a e: nextschema`.--.
...

I’ve piped into less here as there were many matches to the grep patterns.

From the above we can see a potential offset of 0x83b19, however we want to “backtrack” a few bytes to before the -- comment start.


dbusby@kali:~$ xxd -s 0x83b14 yourdumpfile.sql | head -n1
0083b14: 2d2d 0a2d 2d20 4375 7272 656e 7420 4461 --.-- Current Da

Excellent, we have our offsets: starting at 0x3d9 and ending at 0x83b14. We now need to convert base16 (hexadecimal) into base10. Fortunately we can do this very easily using the bc utility, however we will need to fully expand our offsets and make them upper case.


dbusby@kali:~$ echo 'ibase=16;00003D9' | bc
985
dbusby@kali:~$ echo 'ibase=16;0083B14' | bc
539412
dbusby@kali:~$ echo '539412-985' | bc
538427
dbusby@kali:~$ dd if=yourdumpfile.sql of=mysql.sql skip=985 bs=1 count=538427
538427+0 records in
538427+0 records out
538427 bytes (538 kB) copied, 1.08998 s, 494 kB/s

Let’s discuss this a little. What we have done here is convert our start offset to a base10 count of bytes to skip when using dd (skip=985); we then convert the end offset to its base10 byte position, and subtracting the start offset's base10 value gives us the size of the chunk we are carving.

We now put this into a dd command line, and voila! We have a mysql.sql file which contains only the mysql schema's portion of the dump.
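To tie the steps together, here is a minimal sketch of the whole carve as a small script. The offsets are the ones found above with xxd and grep; for any other file you would substitute your own:

#!/bin/bash
# Carve a byte range out of a large dump file (sketch; offsets are examples).
DUMP=yourdumpfile.sql
OUT=mysql.sql
START_HEX=3D9                              # start offset found with xxd/grep (upper case for bc)
END_HEX=83B14                              # end offset found the same way
START=$(echo "ibase=16;$START_HEX" | bc)   # 985
END=$(echo "ibase=16;$END_HEX" | bc)       # 539412
dd if="$DUMP" of="$OUT" bs=1 skip="$START" count=$((END - START))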

I hope this post helps somewhat to demystify file carving; the above techniques can be applied to any form of file carving need and are not limited to mysql files.

The post File carving methods for the MySQL DBA appeared first on MySQL Performance Blog.

Help on pt-table-sync

Latest Forum Posts - December 22, 2014 - 8:16am
Getting the below error.

[root@cloud ~]# pt-table-sync --execute h=master, d=dummy, t=sax, h=slave
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LC_CTYPE = "UTF-8",
LANG = "en_US.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
Unknown DSN option 'd' in 'd=dummy,'. For more details, please use the --help option, or try 'perldoc /usr/bin/pt-table-sync' for complete documentation. at /usr/bin/pt-table-sync line 2090.
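For reference, Percona Toolkit DSN options are comma-separated with no spaces, and the key for the default database is an uppercase D; a hedged guess at the intended invocation would therefore look like:

pt-table-sync --execute h=master,D=dummy,t=sax h=slave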

Testing backup locks during Xtrabackup SST on Percona XtraDB Cluster

Latest MySQL Performance Blog posts - December 22, 2014 - 5:00am
Background on Backup Locks

I was very excited to see Backup locks support in release notes for the latest Percona XtraDB Cluster 5.6.21 release. For those who are not aware, backup locks offer an alternative to FLUSH TABLES WITH READ LOCK (FTWRL) in Xtrabackup. While Xtrabackup can hot-copy Innodb, everything else in MySQL must be locked (usually briefly) to get a consistent snapshot that lines up with Innodb. This includes all other storage engines, but also things like table schemas (even on Innodb) and async replication binary logs. You can skip this lock, but it isn’t generally considered a ‘safe’ backup in every case.

Until recently, Xtrabackup (like most other backup tools) used FTWRL to accomplish this. This worked great, but had the unfortunate side-effect of locking every single table, even the Innodb ones.  This functionally meant that even a hot-backup tool for Innodb had to take a (usually short) global lock to get a consistent backup with MySQL overall.

Backup locks change that by introducing a new locking command on Percona Server called ‘LOCK TABLES FOR BACKUP’.  This works by locking writes to non-transactional tables, as well as locking DDL on all tables (including Innodb).  If Xtrabackup (of a recent vintage) detects that it’s backing up a Percona Server (also of recent vintage), it will automatically use LOCK TABLES FOR BACKUP instead of FLUSH TABLES WITH READ LOCK.

The TL;DR of this is that you can keep on modifying your Innodb data through the entire backup, since we don’t need to use FTWRL any longer.

This feature was introduced in Percona Server 5.6.16-64.0 and Percona XtraBackup 2.2.  I do not believe you will find it in any other MySQL variant, though I could be corrected.
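A quick way to see whether a given server offers the feature is to check for it and try the statement by hand. A minimal sketch, assuming a Percona Server 5.6 instance reachable through the default socket and a user with sufficient privileges:

mysql -e "SHOW GLOBAL VARIABLES LIKE 'have_backup_locks'"            # YES if backup locks are available
mysql -e "LOCK TABLES FOR BACKUP; SELECT SLEEP(1); UNLOCK TABLES;"   # errors out on servers without the feature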

What this means for Percona XtraDB Cluster (PXC)

The most common (and logical) SST method for Percona XtraDB Cluster is using Xtrabackup. This latest release of PXC includes support for backup locks, meaning that Xtrabackup donor nodes will no longer need to get a global lock. Practically for PXC users, this means that your Donor nodes can stay in rotation without causing client interruptions due to FTWRL.

Seeing it in action

To test this out, I spun up a 3-node cluster on AWS and fired up a sysbench run on the first node. I forced an SST on the node. Here is a snippet of the innobackup.backup.log (generated by all Xtrabackup donors in Percona XtraDB Cluster):

InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy and Percona LLC and/or its affiliates 2009-2013. All Rights Reserved. This software is published under the GNU GENERAL PUBLIC LICENSE Version 2, June 1991. Get the latest version of Percona XtraBackup, documentation, and help resources: http://www.percona.com/xb/p 141218 19:22:01 innobackupex: Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_file=/etc/my.cnf;mysql_read_default_group=xtraback up;mysql_socket=/var/lib/mysql/mysql.sock' as 'sst' (using password: YES). 141218 19:22:01 innobackupex: Connected to MySQL server 141218 19:22:01 innobackupex: Starting the backup operation IMPORTANT: Please check that the backup run completes successfully. At the end of a successful backup run innobackupex prints "completed OK!". innobackupex: Using server version 5.6.21-70.1-56 innobackupex: Created backup directory /tmp/tmp.Rm0qA740U3 141218 19:22:01 innobackupex: Starting ibbackup with command: xtrabackup --defaults-file="/etc/my.cnf" --defaults-group="mysqld" --backup --suspe nd-at-end --target-dir=/tmp/tmp.dM03LgPHFY --innodb_data_file_path="ibdata1:12M:autoextend" --tmpdir=/tmp/tmp.dM03LgPHFY --extra-lsndir='/tmp/tmp.dM 03LgPHFY' --stream=xbstream innobackupex: Waiting for ibbackup (pid=21892) to suspend innobackupex: Suspend file '/tmp/tmp.dM03LgPHFY/xtrabackup_suspended_2' xtrabackup version 2.2.7 based on MySQL server 5.6.21 Linux (x86_64) (revision id: ) xtrabackup: uses posix_fadvise(). xtrabackup: cd to /var/lib/mysql xtrabackup: open files limit requested 0, set to 5000 xtrabackup: using the following InnoDB configuration: xtrabackup: innodb_data_home_dir = ./ xtrabackup: innodb_data_file_path = ibdata1:12M:autoextend xtrabackup: innodb_log_group_home_dir = ./ xtrabackup: innodb_log_files_in_group = 2 xtrabackup: innodb_log_file_size = 1073741824 xtrabackup: using O_DIRECT >> log scanned up to (10525811040) xtrabackup: Generating a list of tablespaces [01] Streaming ./ibdata1 >> log scanned up to (10529368594) >> log scanned up to (10532685942) >> log scanned up to (10536422820) >> log scanned up to (10539562039) >> log scanned up to (10543077110) [01] ...done [01] Streaming ./mysql/innodb_table_stats.ibd [01] ...done [01] Streaming ./mysql/innodb_index_stats.ibd [01] ...done [01] Streaming ./mysql/slave_relay_log_info.ibd [01] ...done [01] Streaming ./mysql/slave_master_info.ibd [01] ...done [01] Streaming ./mysql/slave_worker_info.ibd [01] ...done [01] Streaming ./sbtest/sbtest1.ibd >> log scanned up to (10546490256) >> log scanned up to (10550321726) >> log scanned up to (10553628936) >> log scanned up to (10555422053) [01] ...done ... [01] Streaming ./sbtest/sbtest17.ibd >> log scanned up to (10831343724) >> log scanned up to (10834063832) >> log scanned up to (10837100278) >> log scanned up to (10840243171) [01] ...done xtrabackup: Creating suspend file '/tmp/tmp.dM03LgPHFY/xtrabackup_suspended_2' with pid '21892' >> log scanned up to (10843312323) 141218 19:24:06 innobackupex: Continuing after ibbackup has suspended 141218 19:24:06 innobackupex: Executing LOCK TABLES FOR BACKUP... 
141218 19:24:06 innobackupex: Backup tables lock acquired 141218 19:24:06 innobackupex: Starting to backup non-InnoDB tables and files innobackupex: in subdirectories of '/var/lib/mysql/' innobackupex: Backing up files '/var/lib/mysql//mysql/*.{frm,isl,MYD,MYI,MAD,MAI,MRG,TRG,TRN,ARM,ARZ,CSM,CSV,opt,par}' (74 files) >> log scanned up to (10846683627) >> log scanned up to (10847773504) innobackupex: Backing up files '/var/lib/mysql//sbtest/*.{frm,isl,MYD,MYI,MAD,MAI,MRG,TRG,TRN,ARM,ARZ,CSM,CSV,opt,par}' (21 files) innobackupex: Backing up file '/var/lib/mysql//test/db.opt' innobackupex: Backing up files '/var/lib/mysql//performance_schema/*.{frm,isl,MYD,MYI,MAD,MAI,MRG,TRG,TRN,ARM,ARZ,CSM,CSV,opt,par}' (53 files) >> log scanned up to (10852976291) 141218 19:24:09 innobackupex: Finished backing up non-InnoDB tables and files 141218 19:24:09 innobackupex: Executing LOCK BINLOG FOR BACKUP... 141218 19:24:09 innobackupex: Executing FLUSH NO_WRITE_TO_BINLOG ENGINE LOGS... 141218 19:24:09 innobackupex: Waiting for log copying to finish >> log scanned up to (10856996124) xtrabackup: The latest check point (for incremental): '9936050111' xtrabackup: Stopping log copying thread. .>> log scanned up to (10856996124) xtrabackup: Creating suspend file '/tmp/tmp.dM03LgPHFY/xtrabackup_log_copied' with pid '21892' 141218 19:24:10 innobackupex: Executing UNLOCK BINLOG 141218 19:24:10 innobackupex: Executing UNLOCK TABLES 141218 19:24:10 innobackupex: All tables unlocked 141218 19:24:10 innobackupex: Waiting for ibbackup (pid=21892) to finish xtrabackup: Transaction log of lsn (9420426891) to (10856996124) was copied. innobackupex: Backup created in directory '/tmp/tmp.Rm0qA740U3' 141218 19:24:30 innobackupex: Connection to database server closed 141218 19:24:30 innobackupex: completed OK!

We can see the LOCK TABLES FOR BACKUP issued at 19:24:06 and unlocked at 19:24:10. Let’s see Galera apply stats from this node during that time:

mycluster / ip-10-228-128-220 (idx: 0) / Galera 3.8(rf6147dd) Wsrep Cluster Node Repl Queue Ops Bytes Conflct Gcache Window Flow time P cnf # Stat Laten Up Dn Up Dn Up Dn lcf bfa ist idx dst appl comm p_ms 19:23:55 P 5 3 Dono 698µs 0 72 0 5418 0.0 3.5M 0 0 187k 94 3k 3 2 0 19:23:56 P 5 3 Dono 701µs 0 58 0 5411 0.0 3.5M 0 0 188k 229 3k 3 2 0 19:23:57 P 5 3 Dono 701µs 0 2 0 5721 0.0 3.7M 0 0 188k 120 3k 3 2 0 19:23:58 P 5 3 Dono 689µs 0 5 0 5643 0.0 3.6M 0 0 188k 63 3k 3 2 0 19:23:59 P 5 3 Dono 679µs 0 55 0 5428 0.0 3.5M 0 0 188k 115 3k 3 2 0 19:24:01 P 5 3 Dono 681µs 0 1 0 4623 0.0 3.0M 0 0 188k 104 3k 3 2 0 19:24:02 P 5 3 Dono 690µs 0 0 0 4301 0.0 2.7M 0 0 188k 141 3k 3 2 0 19:24:03 P 5 3 Dono 688µs 0 2 0 4907 0.0 3.1M 0 0 188k 227 3k 3 2 0 19:24:04 P 5 3 Dono 692µs 0 44 0 4894 0.0 3.1M 0 0 188k 116 3k 3 2 0 19:24:05 P 5 3 Dono 706µs 0 0 0 5337 0.0 3.4M 0 0 188k 63 3k 3 2 0

Initially the node is keeping up ok with replication. The Down Queue (wsrep_local_recv_queue) is sticking around 0. We’re applying 4-5k transactions per second (Ops Dn). When the backup lock kicks in, we do see an increase in the queue size, but note that transactions are still applying on this node:

19:24:06 P 5 3 Dono 696µs 0 170 0 5671 0.0 3.6M 0 0 187k 130 3k 3 2 0 19:24:07 P 5 3 Dono 695µs 0 2626 0 3175 0.0 2.0M 0 0 185k 2193 3k 3 2 0 19:24:08 P 5 3 Dono 692µs 0 1248 0 6782 0.0 4.3M 0 0 186k 1800 3k 3 2 0 19:24:09 P 5 3 Dono 693µs 0 611 0 6111 0.0 3.9M 0 0 187k 651 3k 3 2 0 19:24:10 P 5 3 Dono 708µs 0 93 0 5316 0.0 3.4M 0 0 187k 139 3k 3 2 0

So this node isn’t locked from innodb write transactions, it’s just suffering a bit of IO load while the backup finishes copying its files and such. After this, the backup finished up and the node goes back to a Synced state pretty quickly:

19:24:11 P 5 3 Dono 720µs 0 1 0 4486 0.0 2.9M 0 0 188k 78 3k 3 2 0 19:24:12 P 5 3 Dono 715µs 0 0 0 3982 0.0 2.5M 0 0 188k 278 3k 3 2 0 19:24:13 P 5 3 Dono 1.2ms 0 0 0 4337 0.0 2.8M 0 0 188k 143 3k 3 2 0 19:24:14 P 5 3 Dono 1.2ms 0 1 0 4901 0.0 3.1M 0 0 188k 130 3k 3 2 0 19:24:16 P 5 3 Dono 1.1ms 0 0 0 5289 0.0 3.4M 0 0 188k 76 3k 3 2 0 19:24:17 P 5 3 Dono 1.1ms 0 42 0 4998 0.0 3.2M 0 0 188k 319 3k 3 2 0 19:24:18 P 5 3 Dono 1.1ms 0 15 0 3290 0.0 2.1M 0 0 188k 75 3k 3 2 0 19:24:19 P 5 3 Dono 1.1ms 0 0 0 4124 0.0 2.6M 0 0 188k 276 3k 3 2 0 19:24:20 P 5 3 Dono 1.1ms 0 4 0 1635 0.0 1.0M 0 0 188k 70 3k 3 2 0 19:24:21 P 5 3 Dono 1.1ms 0 0 0 5026 0.0 3.2M 0 0 188k 158 3k 3 2 0 19:24:22 P 5 3 Dono 1.1ms 0 20 0 4100 0.0 2.6M 0 0 188k 129 3k 3 2 0 19:24:23 P 5 3 Dono 1.1ms 0 0 0 5412 0.0 3.5M 0 0 188k 159 3k 3 2 0 19:24:24 P 5 3 Dono 1.1ms 0 315 0 4567 0.0 2.9M 0 0 187k 170 3k 3 2 0 19:24:25 P 5 3 Dono 1.0ms 0 24 0 5535 0.0 3.5M 0 0 188k 131 3k 3 2 0 19:24:26 P 5 3 Dono 1.0ms 0 0 0 5427 0.0 3.5M 0 0 188k 71 3k 3 2 0 19:24:27 P 5 3 Dono 1.0ms 0 1 0 5221 0.0 3.3M 0 0 188k 256 3k 3 2 0 19:24:28 P 5 3 Dono 1.0ms 0 0 0 5317 0.0 3.4M 0 0 188k 159 3k 3 2 0 19:24:29 P 5 3 Dono 1.0ms 0 1 0 5491 0.0 3.5M 0 0 188k 163 3k 3 2 0 19:24:30 P 5 3 Sync 1.0ms 0 0 0 5540 0.0 3.5M 0 0 188k 296 3k 3 2 0 19:24:31 P 5 3 Sync 992µs 0 106 0 5594 0.0 3.6M 0 0 187k 130 3k 3 2 0 19:24:33 P 5 3 Sync 984µs 0 19 0 5723 0.0 3.7M 0 0 188k 275 3k 3 2 0 19:24:34 P 5 3 Sync 976µs 0 0 0 5508 0.0 3.5M 0 0 188k 182 3k 3 2 0

Compared to Percona XtraDB Cluster 5.5

The Backup Locking is only a feature of Percona XtraDB Cluster 5.6, so if we repeat the experiment on 5.5, we can see a more severe lock:

141218 20:31:19 innobackupex: Executing FLUSH TABLES WITH READ LOCK... 141218 20:31:19 innobackupex: All tables locked and flushed to disk 141218 20:31:19 innobackupex: Starting to backup non-InnoDB tables and files innobackupex: in subdirectories of '/var/lib/mysql/' innobackupex: Backing up files '/var/lib/mysql//sbtest/*.{frm,isl,MYD,MYI,MAD,MAI,MRG,TRG,TRN,ARM,ARZ,CSM,CSV,opt,par}' (21 files) innobackupex: Backing up files '/var/lib/mysql//mysql/*.{frm,isl,MYD,MYI,MAD,MAI,MRG,TRG,TRN,ARM,ARZ,CSM,CSV,opt,par}' (72 files) >> log scanned up to (6633554484) innobackupex: Backing up file '/var/lib/mysql//test/db.opt' innobackupex: Backing up files '/var/lib/mysql//performance_schema/*.{frm,isl,MYD,MYI,MAD,MAI,MRG,TRG,TRN,ARM,ARZ,CSM,CSV,opt,par}' (18 files) 141218 20:31:21 innobackupex: Finished backing up non-InnoDB tables and files 141218 20:31:21 innobackupex: Executing FLUSH NO_WRITE_TO_BINLOG ENGINE LOGS... 141218 20:31:21 innobackupex: Waiting for log copying to finish xtrabackup: The latest check point (for incremental): '5420681649' xtrabackup: Stopping log copying thread. .>> log scanned up to (6633560488) xtrabackup: Creating suspend file '/tmp/tmp.Cq5JRZEFki/xtrabackup_log_copied' with pid '23130' 141218 20:31:22 innobackupex: All tables unlocked

Our lock lasts from 20:31:19 until 20:31:21, so it’s fairly short. Note that with larger databases with more schemas and tables, this can be quite a bit longer. Let’s see the effect on the apply rate for this node:

mycluster / ip-10-229-68-156 (idx: 0) / Galera 2.11(r318911d) Wsrep Cluster Node Repl Queue Ops Bytes Conflct Gcache Window Flow time P cnf # Stat Laten Up Dn Up Dn Up Dn lcf bfa ist idx dst appl comm p_ms 20:31:13 P 5 3 Dono N/A 0 73 0 3493 0.0 1.8M 0 0 1.8m 832 746 2 2 0.0 20:31:14 P 5 3 Dono N/A 0 29 0 3578 0.0 1.9M 0 0 1.8m 850 749 3 2 0.0 20:31:15 P 5 3 Dono N/A 0 0 0 3513 0.0 1.8M 0 0 1.8m 735 743 2 2 0.0 20:31:16 P 5 3 Dono N/A 0 0 0 3651 0.0 1.9M 0 0 1.8m 827 748 2 2 0.0 20:31:17 P 5 3 Dono N/A 0 27 0 3642 0.0 1.9M 0 0 1.8m 840 762 2 2 0.0 20:31:18 P 5 3 Dono N/A 0 0 0 3840 0.0 2.0M 0 0 1.8m 563 776 2 2 0.0 20:31:19 P 5 3 Dono N/A 0 0 0 4368 0.0 2.3M 0 0 1.8m 823 745 2 1 0.0 20:31:20 P 5 3 Dono N/A 0 3952 0 339 0.0 0.2M 0 0 1.8m 678 751 1 1 0.0 20:31:21 P 5 3 Dono N/A 0 7883 0 0 0.0 0.0 0 0 1.8m 678 751 0 0 0.0 20:31:22 P 5 3 Dono N/A 0 4917 0 5947 0.0 3.1M 0 0 1.8m 6034 3k 7 6 0.0 20:31:24 P 5 3 Dono N/A 0 10 0 8238 0.0 4.3M 0 0 1.8m 991 1k 7 6 0.0 20:31:25 P 5 3 Dono N/A 0 0 0 3016 0.0 1.6M 0 0 1.8m 914 754 2 1 0.0 20:31:26 P 5 3 Dono N/A 0 0 0 3253 0.0 1.7M 0 0 1.8m 613 766 1 1 0.0 20:31:27 P 5 3 Dono N/A 0 1 0 3600 0.0 1.9M 0 0 1.8m 583 777 2 1 0.0 20:31:28 P 5 3 Dono N/A 0 0 0 3640 0.0 1.9M 0 0 1.8m 664 750 2 2 0.0

The drop here is more severe and the apply rate hits 0 (and stays there for the duration of the FTWRL).

Implications

Obviously Xtrabackup running on a PXC node will cause some load on the node itself, so there may still be good reasons to keep a Donor node out of rotation from your application.  However, this is less of an issue than it was in the past, where writes would definitely stall on a Donor node and present potentially intermittent stalls on the application.

How you allow applications to start using a Donor node automatically (or not) depends on how you have set up HA between the application and the cluster.  If you use HAproxy or similar with clustercheck, you can either modify the script itself or change a command line argument. The node is in the Donor/Desynced state below:

[root@ip-10-229-64-35 ~]# /usr/bin/clustercheck clustercheckuser clustercheckpassword!
HTTP/1.1 503 Service Unavailable
Content-Type: text/plain
Connection: close
Content-Length: 44

Percona XtraDB Cluster Node is not synced.
[root@ip-10-229-64-35 ~]# /usr/bin/clustercheck clustercheckuser clustercheckpassword! 1
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 40

Percona XtraDB Cluster Node is synced.

For those doing their own custom health checking, you basically just need to pass nodes that have a wsrep_local_state_comment of either ‘Synced’ or ‘Donor/Desynced’.
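A minimal sketch of such a custom check, assuming a local client login that can read the wsrep status variables (the messages and exit codes are just illustrative):

#!/bin/bash
# Pass nodes that are Synced or Donor/Desynced; fail everything else (sketch).
STATE=$(mysql -N -e "SHOW STATUS LIKE 'wsrep_local_state_comment'" | awk '{print $2}')
case "$STATE" in
    Synced|Donor/Desynced) echo "node usable ($STATE)";     exit 0 ;;
    *)                     echo "node not usable ($STATE)"; exit 1 ;;
esac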

The post Testing backup locks during Xtrabackup SST on Percona XtraDB Cluster appeared first on MySQL Performance Blog.

Where do I ask about tcprstat?

Latest Forum Posts - December 22, 2014 - 3:26am

Hi all,


I want to compile tcprstat and am encountering errors. None of the
forums seem to cover this utility.

This seems to be the closest that I could find. Is it OK to ask here
or should I go elsewhere?


Paul...

MySQL Design Question: How to store a uuid + <char string> column in the database

Latest Forum Posts - December 20, 2014 - 8:38am
One of the key application tables has a column adid (Varchar(50)) as a unique key, which has the following format (uuid + a character string). This column is used for heavy batch updates (millions of updates; 100 updates are done with one commit). This causes slave lag, and it is also a scalability concern for the future.

0000015c-aae9-46f0-ac23-f3d188ca1686-0-us_gse
00000161-cefd-4c56-bd99-d3ba716d6acf-0-cts_gsb

Table has auto-increment primary key.

Is there a more efficient way to store this data? For a uuid it is recommended to convert it into binary(16)? What can we do here? Can this string also be converted into binary? Or is there any other design idea?
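One hedged sketch of a possible direction, with made-up table and column names: keep the auto-increment primary key, split adid into a BINARY(16) UUID part plus a short suffix, and make the pair unique.

mysql your_database <<'SQL'
-- Hypothetical names; splits the 36-char UUID prefix and the trailing string.
ALTER TABLE ads
  ADD COLUMN adid_uuid   BINARY(16),
  ADD COLUMN adid_suffix VARCHAR(20);
UPDATE ads SET
  adid_uuid   = UNHEX(REPLACE(LEFT(adid, 36), '-', '')),
  adid_suffix = SUBSTRING(adid, 38);
ALTER TABLE ads
  ADD UNIQUE KEY uk_adid_parts (adid_uuid, adid_suffix);
SQL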

CentOS 6.5 And Percona XtraDB Cluster Just Doesn't Work?

Latest Forum Posts - December 19, 2014 - 1:55pm
I can easily get Ubuntu/Debian clustering working. However, on CentOS I get nothing but errors:
141219 14:42:55 mysqld_safe mysqld from pid file /var/lib/mysql/perconadb2.hsmovedev.com.pid ended
141219 14:43:30 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
141219 14:43:30 mysqld_safe WSREP: Running position recovery with --log_error='/var/lib/mysql/wsrep_recovery.WnB5jJ' --pid-file='/var/lib/mysql/perconadb2.hsmovedev.com-recover.pid'
141219 14:43:32 mysqld_safe WSREP: Recovered position cab4275d-87c6-11e4-8794-87a6df8cd654:0
141219 14:43:32 [Note] WSREP: wsrep_start_position var submitted: 'cab4275d-87c6-11e4-8794-87a6df8cd654:0'
141219 14:43:32 [Note] WSREP: Read nil XID from storage engines, skipping position init
141219 14:43:32 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib64/libgalera_smm.so'
141219 14:43:32 [Note] WSREP: wsrep_load(): Galera 2.11(r318911d) by Codership Oy <info@codership.com> loaded successfully.
141219 14:43:32 [Note] WSREP: Found saved state: cab4275d-87c6-11e4-8794-87a6df8cd654:-1
141219 14:43:32 [Note] WSREP: Reusing existing '/var/lib/mysql//galera.cache'.
141219 14:43:32 [Note] WSREP: Passing config to GCS: base_host = 192.168.102.210; base_port = 4567; cert.log_conflicts = no; debug = no; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.join_retrans_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 4; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.user_send_window = 2; evs.view_forget_timeout = PT24H; gcache.dir = /var/lib/mysql/; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /var/lib/mysql//galera.cache; gcache.page_size = 128M; gcache.size = 128M; gcs.fc_debug = 0; gcs.fc_factor = 1.0; gcs.fc_limit = 16; gcs.fc_master_slave = no; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = no; gmcast.version = 0; pc.announce_timeout = PT3S; pc.checksum = false; pc.ignore_quorum = false; pc.ignore_sb = false; pc.npvo = false; pc.version = 0; pc.wait_prim = true; pc.wait_prim_timeout = P30S; pc.weight = 1; protonet.backend = asio;
141219 14:43:32 [Note] WSREP: Assign initial position for certification: 0, protocol version: -1
141219 14:43:32 [Note] WSREP: wsrep_sst_grab()
141219 14:43:32 [Note] WSREP: Start replication
141219 14:43:32 [Note] WSREP: Setting initial position to cab4275d-87c6-11e4-8794-87a6df8cd654:0
141219 14:43:32 [Note] WSREP: protonet asio version 0
141219 14:43:32 [Note] WSREP: backend: asio
141219 14:43:32 [Note] WSREP: GMCast version 0
141219 14:43:32 [Note] WSREP: (14146b78, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
141219 14:43:32 [Note] WSREP: (14146b78, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
141219 14:43:32 [Note] WSREP: EVS version 0
141219 14:43:32 [Note] WSREP: PC version 0
141219 14:43:32 [Note] WSREP: gcomm: connecting to group 'my_centos_cluster', peer '192.168.102.209:,192.168.102.210:,192.168.102.211 :'
141219 14:43:32 [Warning] WSREP: (14146b78, 'tcp://0.0.0.0:4567') address 'tcp://192.168.102.210:4567' points to own listening address, blacklisting
141219 14:43:35 [Warning] WSREP: no nodes coming from prim view, prim not possible
141219 14:43:36 [Warning] WSREP: last inactive check more than PT1.5S ago (PT3.50768S), skipping check
141219 14:44:05 [ERROR] WSREP: failed to open gcomm backend connection: 110: failed to reach primary view: 110 (Connection timed out)
at gcomm/src/pc.cpp:connect():141
141219 14:44:05 [ERROR] WSREP: gcs/src/gcs_core.cpp:long int gcs_core_open(gcs_core_t*, const char*, const char*, bool)():204: Failed to open backend connection: -110 (Connection timed out)
141219 14:44:05 [ERROR] WSREP: gcs/src/gcs.cpp:long int gcs_open(gcs_conn_t*, const char*, const char*, bool)():1303: Failed to open channel 'my_centos_cluster' at 'gcomm://192.168.102.209,192.168.102.210,192.168.102.211': -110 (Connection timed out)
141219 14:44:05 [ERROR] WSREP: gcs connect failed: Connection timed out
141219 14:44:05 [ERROR] WSREP: wsrep::connect() failed: 7
141219 14:44:05 [ERROR] Aborting

141219 14:44:05 [Note] WSREP: Service disconnected.
141219 14:44:06 [Note] WSREP: Some threads may fail to exit.
141219 14:44:06 [Note] /usr/sbin/mysqld: Shutdown complete

141219 14:44:06 mysqld_safe mysqld from pid file /var/lib/mysql/perconadb2.hsmovedev.com.pid ended

I followed the guide to the letter, and I can telnet to the ports just fine on the master. What is the issue?

Store UUID in an optimized way

Latest MySQL Performance Blog posts - December 19, 2014 - 6:00am

A few years ago Peter Zaitsev, in a post titled “To UUID or not to UUID,” wrote: “There is timestamp based part in UUID which has similar properties to auto_increment and which could be used to have values generated at same point in time physically local in BTREE index.”

For this post I’ve rearranged the timestamp part of UUID (Universal Unique Identifier) and did some benchmarks.

Many people store UUIDs as char(36) and use them as the row identity value (PRIMARY KEY) because a UUID is unique across every table, every database and every server, and allows easy merging of records from different databases. But here comes the problem: using it as a PRIMARY KEY causes the problems described below.

Problems with UUID
  • UUID has 36 characters which makes it bulky.
  • InnoDB stores data in PRIMARY KEY order, and all the secondary keys also contain the PRIMARY KEY. So having a UUID as PRIMARY KEY makes the indexes bigger, to the point where they may not fit in memory.
  • Inserts are random and the data is scattered.

Despite these problems, people still prefer UUIDs because they are UNIQUE across every table and can be generated anywhere. In this blog, I will explain how to store a UUID in an efficient way by re-arranging the timestamp part of the UUID.

Structure of UUID

MySQL uses UUID version 1, which is a 128-bit number represented by a utf8 string of five hexadecimal numbers:

  • The first three numbers are generated from a timestamp.
  • The fourth number preserves temporal uniqueness in case the timestamp value loses monotonicity (for example, due to daylight saving time).
  • The fifth number is an IEEE 802 node number that provides spatial uniqueness. A random number is substituted if the latter is not available (for example, because the host computer has no Ethernet card, or we do not know how to find the hardware address of an interface on your operating system). In this case, spatial uniqueness cannot be guaranteed. Nevertheless, a collision should have very low probability.

The timestamp is mapped as follows:
When the timestamp has the (60 bit) hexadecimal value 1d8eebc58e0a7d7, the following parts of the UUID are set: 58e0a7d7-eebc-11d8-9669-0800200c9a66. The 1 before the most significant digits of the timestamp (in 11d8) indicates the UUID version; for time-based UUIDs this is 1.

The fourth and fifth parts will be mostly constant if the UUID is generated from a single server. The first three numbers are based on the timestamp, so they will be monotonically increasing. Let's rearrange the total sequence to make the UUID closer to sequential. This makes inserts and recent-data lookups faster. Dashes ('-') make no sense here, so let's remove them.
58e0a7d7-eebc-11d8-9669-0800200c9a66 => 11d8eebc58e0a7d796690800200c9a66

Benchmarking

I created three tables:

  • events_uuid – UUID binary(16) PRIMARY KEY
  • events_int – Additional BIGINT auto increment column and made it as primary key and index on UUID column
  • events_uuid_ordered – Rearranged UUID binary(16) as PRIMARY KEY

I created three stored procedures which insert 25K random rows at a time into the respective tables. There are three more stored procedures which call the random-insert stored procedures in a loop and also calculate the time taken to insert 25K rows, plus the data and index size after each loop. In total I inserted 25M records.

    • Data Size
      Horizontal Axis – Number of inserts x 25,000
      Vertical Axis – Data Size in MB

      The data size for the UUID table is larger than for the other two tables.
    • Index Size
      Horizontal axis – Number of inserts x 25,000
      Vertical axis – Index Size in MB
    • Total Size
      Horizontal Axis – Number of inserts x 25,000
      Vertical Axis – Total Size in MB
    • Time taken
      Horizontal axis – Number of inserts x 25,000
      Vertical axis – Time Taken in seconds

For the table with UUID as PRIMARY KEY, you can notice that as the table grows big, the time taken to insert rows increases almost linearly, whereas for the other tables the time taken is almost constant.

The size of the UUID table is almost 50% bigger than the Ordered UUID table and 30% bigger than the table with BIGINT as PRIMARY KEY. Comparing the Ordered UUID table and the BIGINT table, the time taken to insert rows and the sizes are almost the same, but they may vary slightly based on the index structure.

root@localhost:~# ls -lhtr /media/data/test/ | grep ibd
-rw-rw---- 1 mysql mysql  13G Jul 24 15:53 events_uuid_ordered.ibd
-rw-rw---- 1 mysql mysql  20G Jul 25 02:27 events_uuid.ibd
-rw-rw---- 1 mysql mysql  15G Jul 25 07:59 events_int.ibd

Table Structure

#1 events_int
CREATE TABLE `events_int` (
  `count` bigint(20) NOT NULL AUTO_INCREMENT,
  `id` binary(16) NOT NULL,
  `unit_id` binary(16) DEFAULT NULL,
  `event` int(11) DEFAULT NULL,
  `ref_url` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
  `campaign_id` binary(16) COLLATE utf8_unicode_ci DEFAULT '',
  `unique_id` binary(16) COLLATE utf8_unicode_ci DEFAULT NULL,
  `user_agent` varchar(100) COLLATE utf8_unicode_ci DEFAULT NULL,
  `city` varchar(80) COLLATE utf8_unicode_ci DEFAULT NULL,
  `country` varchar(80) COLLATE utf8_unicode_ci DEFAULT NULL,
  `demand_partner_id` binary(16) DEFAULT NULL,
  `publisher_id` binary(16) DEFAULT NULL,
  `site_id` binary(16) DEFAULT NULL,
  `page_id` binary(16) DEFAULT NULL,
  `action_at` datetime DEFAULT NULL,
  `impression` smallint(6) DEFAULT NULL,
  `click` smallint(6) DEFAULT NULL,
  `sold_impression` smallint(6) DEFAULT NULL,
  `price` decimal(15,7) DEFAULT '0.0000000',
  `actioned_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
  `unique_ads` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
  `notification_url` text COLLATE utf8_unicode_ci,
  PRIMARY KEY (`count`),
  KEY `id` (`id`),
  KEY `index_events_on_actioned_at` (`actioned_at`),
  KEY `index_events_unit_demand_partner` (`unit_id`,`demand_partner_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

#2 events_uuid
CREATE TABLE `events_uuid` (
  `id` binary(16) NOT NULL,
  `unit_id` binary(16) DEFAULT NULL,
~
~
  PRIMARY KEY (`id`),
  KEY `index_events_on_actioned_at` (`actioned_at`),
  KEY `index_events_unit_demand_partner` (`unit_id`,`demand_partner_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

#3 events_uuid_ordered
CREATE TABLE `events_uuid_ordered` (
  `id` binary(16) NOT NULL,
  `unit_id` binary(16) DEFAULT NULL,
~
~
  PRIMARY KEY (`id`),
  KEY `index_events_on_actioned_at` (`actioned_at`),
  KEY `index_events_unit_demand_partner` (`unit_id`,`demand_partner_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

Conclusions

 

    • Create function to rearrange UUID fields and use it

DELIMITER //
CREATE DEFINER=`root`@`localhost` FUNCTION `ordered_uuid`(uuid BINARY(36))
RETURNS binary(16)
DETERMINISTIC
RETURN UNHEX(CONCAT(SUBSTR(uuid, 15, 4),SUBSTR(uuid, 10, 4),SUBSTR(uuid, 1, 8),SUBSTR(uuid, 20, 4),SUBSTR(uuid, 25)));
//
DELIMITER ;

Inserts

INSERT INTO events_uuid_ordered VALUES (ordered_uuid(uuid()),'1','M',....);

Selects

SELECT HEX(uuid),is_active,... FROM events_uuid_ordered ;

    • Define UUID as binary(16) as binary does not have any character set

 


The post Store UUID in an optimized way appeared first on MySQL Performance Blog.

wsrep/xtrabackup-v2 fails when having a dedicated binary logfile directory

Latest Forum Posts - December 19, 2014 - 1:24am
I've already set up the donor and now want to start the second node.

rpm -qa | grep -i percona
Percona-XtraDB-Cluster-client-56-5.6.21-25.8.938.el6.x86_64
percona-xtrabackup-2.2.7-5050.el6.x86_64
Percona-XtraDB-Cluster-shared-56-5.6.21-25.8.938.el6.x86_64
Percona-XtraDB-Cluster-server-56-5.6.21-25.8.938.el6.x86_64
percona-release-0.1-3.noarch
Percona-XtraDB-Cluster-galera-3-3.8-1.3390.rhel6.x86_64

sst-method is set to 'xtrabackup-v2'.
With a dedicated binlog directory '/db03/mysql56-01/binlogs', wsrep fails with the following messages:

WSREP_SST: [INFO] Cleaning the existing datadir and innodb-data/log directories (20141219 09:03:29.673)
removed `/db02/mysql56-01/innodata/undo001'
removed `/db02/mysql56-01/innodata/ibdata2'
removed `/db02/mysql56-01/innodata/auto.cnf'
removed `/db02/mysql56-01/innodata/ibdata1'
removed `/db04/mysql56-01/innologs/ib_logfile1'
removed `/db04/mysql56-01/innologs/ib_logfile0'
dirname: extra operand `/db03/mysql56-01/binlogs/mysql56-01-bin'
Try `dirname --help' for more information.
WSREP_SST: [ERROR] Cleanup after exit with status:1 (20141219 09:03:29.770)
2014-12-19 09:03:29 17264 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup-v2 --role 'joiner' --address '192.168.137.171' --auth 'root:mysql' --datadir '/db02/mysql56-01/innodata/' --defaults-file '/db05/mysql56-01/admin/startup/my.cnf' --parent '17264' '' : 1 (Operation not permitted)
2014-12-19 09:03:29 17264 [ERROR] WSREP: Failed to read uuid:seqno from joiner script.
2014-12-19 09:03:29 17264 [ERROR] WSREP: SST failed: 1 (Operation not permitted)
2014-12-19 09:03:29 17264 [ERROR] Aborting

The 'dirname' command is somehow called with the wrong number of arguments, I guess.

The whole err-log is attached.

Kind regards,
Wolfsrudel

Need to understand the log cleanup in mysql

Lastest Forum Posts - December 18, 2014 - 10:53pm
Hi,

My my.cnf is

innodb_log_file_size = 100M

And

mysql> SHOW GLOBAL VARIABLES LIKE 'innodb_log_file_size';
+----------------------+-----------+
| Variable_name | Value |
+----------------------+-----------+
| innodb_log_file_size | 104857600 |
+----------------------+-----------+
1 row in set (0.02 sec)

Also /var/lib/mysql

-rw-rw---- 1 mysql mysql 104857600 Nov 6 12:03 ib_logfile0
-rw-rw---- 1 mysql mysql 104857600 May 21 2012 ib_logfile1


I understand the log file has reached its maximum allocated size. I want to know whether simply deleting it with "rm -rf ib_logfile0" and "rm -rf ib_logfile1" will have any impact on MySQL functionality. Kindly advise.

I am very new to MySQL and stuck on one issue: our SOS Berlin JobScheduler is not running due to a MySQL issue. Please help me.

Thanks in Advance!
Bala..

Why does pt-kill hang when connecting to MySQL?

Lastest Forum Posts - December 18, 2014 - 6:58pm
I could connect to local MySQL successfully:
: gf@li741-164 ~> mysql -uroot -S /run/mysqld/mysqld.sock
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 202
Server version: 5.6.21-70.1 Source distribution
Copyright (c) 2009-2014 Percona LLC and/or its affiliates
Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>

But when using pt-kill, it seems to hang while connecting to MySQL. I waited for 5 minutes, but it still hangs:
: gf@li741-164 ~> pt-kill --busy-time 6 --user root --print --socket /run/mysqld/mysqld.sock

Making HAProxy 1.5 replication lag aware in MySQL

Latest MySQL Performance Blog posts - December 18, 2014 - 7:48am

HAProxy is frequently used as a software load balancer in the MySQL world. Peter Boros, in a past post, explained how to set it up with Percona XtraDB Cluster (PXC) so that it only sends queries to available nodes. The same approach can be used in a regular master-slaves setup to spread the read load across multiple slaves. However with MySQL replication, another factor comes into play: replication lag. In this case the approach mentioned for Percona XtraDB Cluster does not work that well as the check we presented only returns ‘up’ or ‘down’. We would like to be able to tune the weight of a replica inside HAProxy depending on its replication lag. This is what we will do in this post using HAProxy 1.5.

Agent checks in HAProxy

HAProxy 1.5 allows us to run an agent check, which is a check that can be added to a regular health check. The benefit of agent checks is that the return value can be ‘up’ or ‘down’, but also a weight.

What is an agent? It is simply a program that can be accessed from a TCP connection on a given port. So if we want to run an agent on a MySQL server that will:

  • Mark the server as down in HAProxy if replication is not working
  • Set the weight to 100% if the replication lag is < 10s
  • Set the weight to 50% if the replication lag is >= 10s and < 60s
  • Set the weight to 5% in all other situations

We can use a script like this:

$ less agent.php
<?php
# Head of the script reconstructed (the feed stripped it); names and values are illustrative.
$port       = $argv[1];
$mysql_port = $argv[2];
$mysql      = "/usr/bin/mysql";
$user       = "haproxy";
$password   = "haproxy_pwd";
$query      = "SHOW SLAVE STATUS";
function set_weight($lag) {
    # Seconds_Behind_Master is NULL (or missing) when replication is not working
    if (!is_numeric($lag)) { return "down"; }
    elseif ($lag < 10) { return "up 100%"; }
    elseif ($lag >= 10 && $lag < 60) { return "up 50%"; }
    else return "up 5%";
}
set_time_limit(0);
$socket = stream_socket_server("tcp://127.0.0.1:$port", $errno, $errstr);
if (!$socket) {
    echo "$errstr ($errno)\n";
} else {
    while ($conn = stream_socket_accept($socket, 9999999999999)) {
        $cmd = "$mysql -h127.0.0.1 -u$user -p$password -P$mysql_port -Ee \"$query\" | grep Seconds_Behind_Master | cut -d ':' -f2 | tr -d ' '";
        exec("$cmd", $lag);
        $weight = set_weight($lag[0]);
        unset($lag);
        fputs($conn, $weight);
        fclose($conn);
    }
    fclose($socket);
}
?>

If you want the script to be accessible from port 6789 and connect to a MySQL instance running on port 3306, run:

$ php agent.php 6789 3306

You will also need a dedicated MySQL user:

mysql> GRANT REPLICATION CLIENT ON *.* TO 'haproxy'@'127.0.0.1' IDENTIFIED BY 'haproxy_pwd';

When the agent is started, you can check that it is working properly:

# telnet 127.0.0.1 6789
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
up 100%
Connection closed by foreign host.

Assuming it is run locally on the app server, that 2 replicas are available (192.168.10.2 and 192.168.10.3) and that the application will send all reads on port 3307, you will define a frontend and a backend in your HAProxy configuration like this:

frontend read_only-front
    bind *:3307
    mode tcp
    option tcplog
    log global
    default_backend read_only-back

backend read_only-back
    mode tcp
    balance leastconn
    server slave1 192.168.10.2 weight 100 check agent-check agent-port 6789 inter 1000 rise 1 fall 1 on-marked-down shutdown-sessions
    server slave2 192.168.10.3 weight 100 check agent-check agent-port 6789 inter 1000 rise 1 fall 1 on-marked-down shutdown-sessions

Demo

Now that everything is set up, let’s see how HAProxy can dynamically change the weight of the servers depending on the replication lag.

No lag

# Slave1
$ mysql -Ee "show slave status" | grep Seconds_Behind_Master
    Seconds_Behind_Master: 0
# Slave2
$ mysql -Ee "show slave status" | grep Seconds_Behind_Master
    Seconds_Behind_Master: 0
# HAProxy
$ echo "show stat" | socat stdio /run/haproxy/admin.sock | cut -d ',' -f1,2,18,19
# pxname,svname,status,weight
read_only-front,FRONTEND,OPEN,
read_only-back,slave1,UP,100
read_only-back,slave2,UP,100
read_only-back,BACKEND,UP,200

Slave1 lagging

# Slave1
$ mysql -Ee "show slave status" | grep Seconds_Behind_Master
    Seconds_Behind_Master: 25
# Slave2
$ mysql -Ee "show slave status" | grep Seconds_Behind_Master
    Seconds_Behind_Master: 0
# HAProxy
$ echo "show stat" | socat stdio /run/haproxy/admin.sock | cut -d ',' -f1,2,18,19
# pxname,svname,status,weight
read_only-front,FRONTEND,OPEN,
read_only-back,slave1,UP,50
read_only-back,slave2,UP,100
read_only-back,BACKEND,UP,150

Slave2 down

# Slave1
$ mysql -Ee "show slave status" | grep Seconds_Behind_Master
    Seconds_Behind_Master: 0
# Slave2
$ mysql -Ee "show slave status" | grep Seconds_Behind_Master
    Seconds_Behind_Master: NULL
# HAProxy
$ echo "show stat" | socat stdio /run/haproxy/admin.sock | cut -d ',' -f1,2,18,19
# pxname,svname,status,weight
read_only-front,FRONTEND,OPEN,
read_only-back,slave1,UP,100
read_only-back,slave2,DOWN (agent),100
read_only-back,BACKEND,UP,100

Conclusion

Agent checks are a nice addition in HAProxy 1.5. The setup presented above is a bit simplistic though: for instance, if HAProxy fails to connect to the agent, it will not mark the corresponding server as down. It is therefore recommended to keep a regular health check along with the agent check.

Astute readers will also notice that in this configuration, if replication is broken on all nodes, HAProxy will stop sending reads entirely. This may not be the best behavior. Possible options are to stop the agent and mark the servers as UP using the stats socket, or to add the master as a backup server, as sketched below.
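
A minimal sketch of that last option, assuming the master sits at 192.168.10.1 (an address not given above): with the backup keyword, the master only receives reads once both replicas are marked down.

backend read_only-back
    mode tcp
    balance leastconn
    server slave1 192.168.10.2 weight 100 check agent-check agent-port 6789 inter 1000 rise 1 fall 1 on-marked-down shutdown-sessions
    server slave2 192.168.10.3 weight 100 check agent-check agent-port 6789 inter 1000 rise 1 fall 1 on-marked-down shutdown-sessions
    server master 192.168.10.1 check backup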

And as a final note, you can edit the code of the agent so that replication lag is measured with Percona Toolkit’s pt-heartbeat instead of Seconds_Behind_Master.
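
For example, the $cmd line in the agent could be replaced with a pt-heartbeat call; this is just a sketch and assumes pt-heartbeat is already updating the percona.heartbeat table on the master:

$cmd = "pt-heartbeat --check --host 127.0.0.1 --port $mysql_port --user $user --password $password --database percona --table heartbeat";

Since pt-heartbeat --check prints the measured delay in seconds (it does not return NULL when the SQL thread is stopped, the delay simply keeps growing), the 'down' branch of set_weight() would need a different trigger, for instance a very large delay.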

The post Making HAProxy 1.5 replication lag aware in MySQL appeared first on MySQL Performance Blog.

replication error

Lastest Forum Posts - December 17, 2014 - 11:14pm
Hi all.

My replication stopped with error Last_SQL_Errno: 1032 or Last_SQL_Errno: 1062
example error:
Last_SQL_Error: Could not execute Delete_rows event on table cportal.b_search_content_stem; Can't find record in 'b_search_content_stem', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mk-db1-master-bin.000488, end_log_pos 74862564

I found this issue http://www.percona.com/blog/2013/09/...ith-mysql-5-6/
but it is not working for me.
My version is Percona-Server-server-56-5.6.21-rel69.0.el6.x86_64.

my slave cnf http://pastebin.com/wTdcbTwU
my master cnf http://pastebin.com/q2uJAynU

thanks for help

Why does pt-align discard the first line automatically?

Lastest Forum Posts - December 17, 2014 - 11:10pm
When I type the following command:
: gf@li741-164 ~> mysql -uroot -h 127.0.0.1 -Be "select * from mysql.user limit 2"

the result contains one column-name header line and two host lines:

: Host User Password Select_priv Insert_priv Update_priv Delete_priv Create_priv Drop_priv Reload_priv Shutdown_priv Process_priv File_priv Grant_priv References_priv Index_priv Alter_priv Show_db_priv Super_priv Create_tmp_table_priv Lock_tables_priv Execute_priv Repl_slave_priv Repl_client_priv Create_view_priv Show_view_priv Create_routine_priv Alter_routine_priv Create_user_priv Event_priv Trigger_priv Create_tablespace_priv ssl_type ssl_cipher x509_issuer x509_subject max_questions max_updates max_connections max_user_connections plugin authentication_string password_expired
localhost root Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y 0 0 0 0 N
li741-164.members.linode.com root Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y 0 0 0 0 N

But after using pt-align:
: gf@li741-164 ~> mysql -uroot -h 127.0.0.1 -Be "select * from mysql.user limit 2" | pt-align

the column-name header line is gone and only the two host lines are left:

: localhost root Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y 0 0 0 0 N
li741-164.members.linode.com root Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y 0 0 0 0 N

Why does this happen?

pt-table-checksum details

Lastest Forum Posts - December 17, 2014 - 12:51pm
Hi Team,

We have run pt-table-checksum on a specific server and understand that a database named percona gets created, under which a checksums table is added once pt-table-checksum runs.

There are different fields in the checksums table. Could you please clarify the actual meaning of the fields below:
this_crc,
this_cnt,
master_crc,
master_cnt,
chunk,
chunk_index,
lower_boundary,
upper_boundary.
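
For context, these are the columns used by the diff query suggested in the pt-table-checksum documentation; a rough sketch of it, with the meaning of each field as I understand it in the comments:

SELECT db, tbl, SUM(this_cnt) AS total_rows, COUNT(*) AS chunks
FROM percona.checksums
WHERE (
  master_cnt <> this_cnt       -- row count computed on the master vs. on this server, per chunk
  OR master_crc <> this_crc    -- checksum computed on the master vs. on this server, per chunk
  OR ISNULL(master_crc) <> ISNULL(this_crc))
GROUP BY db, tbl;

chunk is the chunk number, chunk_index is the index used to split the table into chunks, and lower_boundary/upper_boundary are the index values delimiting each chunk.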

Best Regards,
Krishna
