

Feed aggregator

creators discuss its neon

Latest Forum Posts - March 26, 2015 - 2:20am
http://www.youtube.com/watch?v=YZN5Xa-0iCA
Hotline Miami makes you feel things that you don't want to feel. It shows you a veritable smorgasbord of ugly

binlog positions "off" a bit after restore

Latest Forum Posts - March 25, 2015 - 8:52am
I have a few shards in an application that are approaching 600-800G on-disk, but aren't heavily used. I'm spinning up new off-site backup & reporting copies of all shards, and last night I let four streaming xtrabackup runs go. I have a script that I use frequently to clone out new slaves. Two of the shards, with active customer bases, started right up as normal (150-200G). The two shards with larger data sizes appear to have slightly wrong (behind) master coordinates in xtrabackup_slave_info.

So, when I do a streaming backup from an existing slave, at the end I get:
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.002238', MASTER_LOG_POS=263779110

Overnight, the master moved on to:
| mysql-bin.002239 | 30261034 |

So it passes a quick sanity check. I built a CHANGE MASTER statement with the correct IP/user/etc., fired it off, and started replication. Duplicate key error. Oops!
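
For context, the statement assembled from those coordinates looked along these lines (host, user and password here are placeholders, not the real values):

CHANGE MASTER TO
MASTER_HOST='10.0.0.5', MASTER_USER='repl', MASTER_PASSWORD='xxxx',
MASTER_LOG_FILE='mysql-bin.002238', MASTER_LOG_POS=263779110;
START SLAVE;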

I verified from the error log that the expected slave statement was issued:
Slave SQL thread initialized, starting replication in log 'mysql-bin.002238' at position 263779110, relay log '/mysql/binlog/mysqld-relay-bin.000001' position: 4. I get:

Last_SQL_Error: Error 'Duplicate entry '68407820' for key 'PRIMARY'' on query.... This is against a Rails sessions table.
mysql> select id, created_at from sessions where id = '68407821';

+----------+---------------------+
| id       | created_at          |
+----------+---------------------+
| 68407821 | 2015-03-24 03:54:15 |
+----------+---------------------+

Using mysqlbinlog, I found that insert in the original binlogs. It seems to be halfway through a transaction for thread 841292:


#150325 2:37:43 server id 130161118 end_log_pos 263780113 CRC32 0xa869770f Query thread_id=841292 exec_time=0 error_code=0
SET TIMESTAMP=1427265463/*!*/;
INSERT INTO `sessions` (--- redacted --- )
/*!*/;
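
For reference, a mysqlbinlog invocation along these lines will land on that event (the binlog path is hypothetical):

mysqlbinlog --verbose --base64-output=decode-rows --start-position=263779110 /mysql/binlog/mysql-bin.002238 | head -n 50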

Thus, the xtrabackup_slave_info position should have been *at least* the next one:

#150325 2:37:43 server id 130161118 end_log_pos 263780693 CRC32 0x5b59698f Query thread_id=841292 exec_time=0 error_code=0

but as noted this appears to be splitting a transaction. So, the entire transaction for thread 841292 was committed to disk (I verified on the restore that the data is correct for the entire transaction) AND data from the next few transactions is present.

Info:

Source Slave:
:~$ dpkg -l|grep percona
ii libperconaserverclient18.1 5.6.19-67.0-618.wheezy amd64 Percona Server database client library
ii libperconaserverclient18.1-dev 5.6.19-67.0-618.wheezy amd64 Percona Server database development files
ii percona-server-client-5.6 5.6.19-67.0-618.wheezy amd64 Percona Server database client binaries
ii percona-server-common-5.6 5.6.19-67.0-618.wheezy amd64 Percona Server database common files (e.g. /etc/mysql/my.cnf)
ii percona-server-server 5.6.19-67.0-618.wheezy amd64 Percona Server database server
ii percona-server-server-5.6 5.6.19-67.0-618.wheezy amd64 Percona Server database server binaries
ii percona-xtrabackup 2.2.9-5067-1.wheezy amd64 Open source backup tool for InnoDB and XtraDB

Destination:
ii libperconaserverclient18.1 5.6.22-71.0-726.wheezy amd64 Percona Server database client library
ii libperconaserverclient18.1-dev 5.6.22-71.0-726.wheezy amd64 Percona Server database development files
ii percona-server-client-5.6 5.6.22-71.0-726.wheezy amd64 Percona Server database client binaries
ii percona-server-common-5.6 5.6.22-71.0-726.wheezy amd64 Percona Server database common files (e.g. /etc/mysql/my.cnf)
ii percona-server-server 5.6.22-71.0-726.wheezy amd64 Percona Server database server
ii percona-server-server-5.6 5.6.22-71.0-726.wheezy amd64 Percona Server database server binaries
ii percona-xtrabackup 2.2.9-5067-1.wheezy amd64 Open source backup tool for InnoDB and XtraDB


I feel like I'm missing something obvious here, like a failed roll-back or something. If it hadn't happened on 2/4 of the servers overnight, I probably wouldn't bother posting. What have I done wrong/misunderstood?

Thanks!

Wes

Yelp IT! A talk with 3 Yelp MySQL DBAs on Percona Live & more

Latest MySQL Performance Blog posts - March 25, 2015 - 3:00am

Founded in 2004 to help people find great local businesses, Yelp has some 135 million monthly unique visitors. With those traffic volumes Yelp’s 300+ engineers are constantly working to keep things moving smoothly – and when you move that fast you learn many things.

Fortunately for the global MySQL community, three Yelp DBAs will be sharing what they’ve learned at the annual Percona Live MySQL Conference and Expo this April 13-16 in Santa Clara, California.

Say “hello” to Susanne Lehmann, Jenni Snyder and Josh Snyder! I chatted with them over email about their presentations, on how MySQL is used at Yelp, and about the shortage of women in MySQL.

***

Tom: Jenni, you and Josh will be co-presenting “Next generation monitoring: moving beyond Nagios” on April 14.

You mentioned that Yelp’s databases scale dynamically, and so does your monitoring of those databases. And to minimize human intervention, you’ve created a Puppet and Sensu monitoring ensemble… because “if it’s not monitored, it’s not in production.” Talk to me more about Yelp’s philosophy of “opt-out monitoring.” What does that entail? How does that help Yelp?

Jenni: Before we moved to Sensu, our Nagios dashboards were a sea of red, muted, acknowledged, or disabled service checks. In fact, we even had a cluster check to make sure that we never accidentally put a host into use that was muted or marked for downtime. It was possible for a well-meaning operator to acknowledge checks on a host and forget about it, and I certainly perpetrated a couple of instances of disks filling up after acknowledging a 3am “warning” page that I’d rather forget about. With Sensu, hosts and services come out of the downtime/acknowledgement state automatically after a number of days, ensuring that we’re kept honest and stay on top of issues that need to be addressed.

Monitoring is also deployed with a node, not as separate monitoring configuration. Outside of a grace period we employ when a host is first provisioned or rebooted, if a host is up, it’s being monitored and alerting. Alerting doesn’t always mean paging, though: we also use IRC and file tickets directly into our tracking system when we don’t need eyes on a problem right away.

Tom: Susanne, in your presentation, titled “insert cassandra into prod where use_case=?;” you’ll discuss the situations you’ve encountered where MySQL just wasn’t the right tool for the job.

What led up to that discovery and how did you come up with finding the right tools (and what were they) to run alongside and support MySQL?

Susanne: Our main force behind exploring other datastores alongside MySQL was that Yelp is growing outside the US market a lot. Therefore we wanted the data to be nearer to the customer and needed multi-master writes.

Also, we saw use cases where our application data was organized in a very key-value-like, non-relational way, which made it a better fit for a NoSQL solution.

We decided to use Cassandra as a datastore and I plan to go more into detail why during my talk. Now we offer developers more choices on how to store our application data, but we also believe in the “right tool for the job” philosophy and might add more solutions to the mix in the future.

Tom: Jenni, you’ll also be presenting “Schema changes multiple times a day? OK!” I know that you and your fellow MySQL DBAs are always improving and also finding better ways of supporting new and existing features for Yelp users like me. Delivering on such a scale must entail some unique processes and tools. Does this involve a particular mindset among your fellow DBAs? Also, what are some of those key tools – and processes and how are they used?

Jenni: Yelp prizes the productivity of our developers and our ability to iterate and develop new features quickly. In order to do that, we need to be able to not only create new database tables, but also modify existing ones, many of which are larger than MySQL can alter without causing considerable replication delay. The first step is to foster a culture of automated testing, monitoring, code reviews, and partnership between developers and DBAs to ensure that we can quickly & safely roll out schema changes. In my talk, I’ll be describing tools that we’ve talked about before, like our Gross Query Checker, as well as the way the DBA team works with developers while still getting the rest of our work done. The second, easy part is using a tool like pt-online-schema-change to run schema changes online without causing replication delay or degrading performance.

Tom: Josh, you’ll also be speaking on “Bootstrapping databases in a single command: elastic provisioning for the win.” What is “elastic provisioning” and how are you using it for Yelp’s tooling?

Josh: When I say that we use elastic provisioning, I mean that we can reliably and consistently build a database server from scratch, with minimal human involvement. The goal is to encompass every aspect of the provisioning task, including configuration, monitoring, and even load balancing, in a single thoroughly automated process. With this process in place, we’ve found ourselves able to quickly allocate and reallocate resources, both in our datacenters and in the cloud. Our tools for implementing the above goals give us greater confidence in our infrastructure, while avoiding single points of failure and achieving the maximum possible level of performance. We had a lot of fun building this system, and we think that many of the components involved are relevant to others in the field.

Tom: Susanne and Jenni, last year at Percona Live there was a BoF session titled “MySQL and Women (or where are all the women?).” The idea was to discuss why there are “just not enough women working on the technology side of tech.” In a nutshell, the conversation focused on why there are not more women in MySQL and why so relatively few attend MySQL conferences like Percona Live.

The relative scarcity of women in technical roles was also the subject of an article published in the August 2014 issue of Forbes, citing a recent industry report.

Why, in your (respective) views, do you (or don’t) think that there are so few women in MySQL? And how can this trend be reversed?

Susanne: I think there are few women in MySQL and the reasons are manifold. Of course there is the pipeline problem. Then there is the problem, widely discussed right now, that women who enter STEM jobs are less likely to stay in them. These are reasons not specific to MySQL jobs, but to STEM in general. What is more specific to database/MySQL jobs is, in my opinion, that oftentimes DBAs need to be on call, and they need to stay in the office if things go sideways. Database problems often tend to be problems that can’t wait till the next morning. That makes it more demanding when you have a family, for example (which is true for men as well of course, but seems still to be more of a problem for women).

As for how to reverse the trend, I liked this Guardian article because it covers a lot of important points. There is no easy solution.

I like that more industry leaders and technology companies are discussing what they can do to improve diversity these days. In general, it really helps to have a great professional (female) support system. At Yelp, we have AWE, the Awesome Women in Engineering group, in which Jenni and I are both active. We participate in welcoming women to Yelp engineering, speaking at external events and workshops to help other women present their work, mentoring, and a book club.

Jenni: I’m sorry that I missed Percona Live and this BoF last year; I was out on maternity leave. I believe that tech/startup culture is a huge reason that fewer women are entering and staying these days, but a quick web search will lead you to any number of articles debating the subject. I run into quite a few women working with MySQL; its large, open community and generally collaborative and supportive nature are very welcoming. As the article you linked to suggests, MySQL has a broad audience. It’s easy to get started with and pull into any project, and as a result, most software professionals have worked with it at some time or another.

On another note, I’m happy to see that Percona Live has a Code of Conduct. I hope that Percona and/or MySQL will consider adopting a Community Code of Conduct like Python, Puppet, and Ubuntu. Doing so raises the bar for all participants, without hampering collaboration and creativity!

* * *

Thanks very much, Susanne, Jenni and Josh! I look forward to seeing you next month at the conference. And readers, if you’d like to attend Percona Live, use the promo code Yelp15 for 15% off! Just enter that during registration. If you’re already attending, be sure to tweet about your favorite sessions using the hashtag #PerconaLive. And if you need to find a great place to eat while attending Percona Live, click here for excellent Yelp recommendations.

The post Yelp IT! A talk with 3 Yelp MySQL DBAs on Percona Live & more appeared first on MySQL Performance Blog.

HaProxy configuration to prevent multiple node writing with Percona Cluster

Latest Forum Posts - March 24, 2015 - 2:16pm
So, as recommended with Percona Cluster, we are (trying) to write to only one cluster node at a time. The application is using JDBC connection pooling. Whenever there is a flap in the service, it seems we end up with multiple nodes being written to, and then cluster deadlocks / local certification errors. We've improved this a little by changing our configuration to 'stick on dst' instead of 'stick on src'.
Below is the configuration. Any suggestions? Should we not be using sticky sessions?
global
log 127.0.0.1 local0
maxconn 4960
#debug
#quiet
user haproxy
group haproxy
stats socket /var/run/haproxy-stats uid haproxy mode 770
stats maxconn 10
noepoll

defaults
log global
option dontlognull
retries 2
option redispatch
maxconn 2000
timeout connect 4s
timeout client 1800s
timeout server 1800s

peers hapeers
peer xxxxxxx yyyyyy:1024
peer aaaaaa bbbbbb:1024

frontend percona_cluster
bind 0.0.0.0:3306
default_backend percona_cluster

backend percona_cluster
mode tcp
option tcpka
option tcplog
option mysql-check
stick-table type ip size 1 peers hapeers nopurge
stick on dst
server ec2-xxxxxxxxx.compute-1.amazonaws.com xxxxxxx:3306 maxconn 2500 check port 9200 inter 12000 rise 3 fall 3
server ec2-xxxxxxxxx.compute-1.amazonaws.com xxxxxxx:3306 maxconn 2500 check port 9200 inter 12000 rise 3 fall 3
server ec2-xxxxxxxxx.compute-1.amazonaws.com xxxxxxx:3306 maxconn 2500 check port 9200 inter 12000 rise 3 fall 3
option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www

# set up application listeners configured via json
listen ssl_cert
bind 0.0.0.0:443 ssl crt /etc/haproxy/haproxy.pem no-sslv3
balance roundrobin
stick-table type ip size 200k peers hapeers expire 30m
mode http
stats enable
stats scope .
stats uri /haproxy-hp?stats
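
For what it's worth, a pattern often used for single-writer Percona Cluster setups is HAProxy's backup keyword rather than stick tables: the first server receives all connections, and the others take over only when its health check fails. A sketch with hypothetical addresses:

backend percona_cluster_writer
mode tcp
option tcpka
server node1 10.0.0.1:3306 check port 9200 inter 12000 rise 3 fall 3
server node2 10.0.0.2:3306 check port 9200 inter 12000 rise 3 fall 3 backup
server node3 10.0.0.3:3306 check port 9200 inter 12000 rise 3 fall 3 backup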


Restoring XtraBackup files onto a Windows located Database

Latest Forum Posts - March 24, 2015 - 9:36am
I currently work in a MySQL-on-Windows environment (mainly 5.5), but we are gradually moving more and more to MySQL on Linux (in fact we now have more Linux servers than Windows). At present we use mysqldump for our backups, but this is getting increasingly time consuming and unwieldy, so I am looking at alternatives.

If we use XtraBackup to take backups from one of our Linux DBs, can we restore that file onto a Windows machine if required?

Second node won't join cluster/SST fails

Latest Forum Posts - March 24, 2015 - 5:25am
Hello,

I've had a 3-node cluster online for a few days, and I tried to take the second node down to change the tmpdir in my.cnf (the disk was getting full). When I start MySQL now, the node will not receive an SST and fails with some frustrating error messages.

Packages (same on both nodes)
ii percona-xtradb-cluster-server-5.6 5.6.21-25.8-938.trusty
ii percona-xtrabackup 2.2.9-5067-1.trusty

my.cnf

# Path to Galera library
wsrep_provider=/usr/lib/libgalera_smm.so
# Cluster connection URL
wsrep_cluster_address=gcomm://1.1.1.1,2.2.2.2,3.3.3.3
#wsrep_cluster_address=gcomm://
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Authentication for SST method
wsrep_sst_auth="sstuser:passwordhere"
# Node #1 address
wsrep_node_address=2.2.2.2
# SST method
wsrep_sst_method=xtrabackup-v2
# Cluster name
wsrep_cluster_name=brp
#innodb_buffer_pool_size=145774M
innodb_flush_log_at_trx_commit=2
innodb_file_per_table=1
innodb_data_file_path = ibdata1:100M:autoextend
## You may want to tune the below depending on number of cores and disk sub
innodb_read_io_threads=4
innodb_write_io_threads=4
innodb_io_capacity=200
innodb_doublewrite=1
innodb_log_file_size=1024M
innodb_log_buffer_size=96M
innodb_buffer_pool_instances=8
innodb_log_files_in_group=2
innodb_thread_concurrency=64
#innodb_file_format=barracuda
innodb_flush_method = O_DIRECT
innodb_autoinc_lock_mode=2
## avoid statistics update when doing e.g. show tables
innodb_stats_on_metadata=0
innodb_data_home_dir=/var/lib/mysql
innodb_log_group_home_dir=/var/lib/mysql
innobackup.prepare.log (JOINER)
InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
and Percona LLC and/or its affiliates 2009-2013. All Rights Reserved.
This software is published under the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.
Get the latest version of Percona XtraBackup, documentation, and help resources:
http://www.percona.com/xb/p
150324 12:01:13 innobackupex: Starting the apply-log operation
IMPORTANT: Please check that the apply-log run completes successfully.
At the end of a successful apply-log run innobackupex prints "completed OK!".
150324 12:01:13 innobackupex: Starting ibbackup with command: xtrabackup --defaults-file="/var/lib/mysql/.sst/backup-my.cnf" --defaults-group="mysqld" --prepare --target-dir=/var/lib/mysql/.sst
xtrabackup version 2.2.9 based on MySQL server 5.6.22 Linux (x86_64) (revision id: )
xtrabackup: cd to /var/lib/mysql/.sst
xtrabackup: Error: cannot open ./xtrabackup_checkpoints
xtrabackup: error: xtrabackup_read_metadata()
xtrabackup: This target seems not to have correct metadata...
2015-03-24 12:01:13 7fc3e1ed3780 InnoDB: Operating system error number 2 in a file operation.
InnoDB: The error means the system cannot find the path specified.
xtrabackup: Warning: cannot open ./xtrabackup_logfile. will try to find.
2015-03-24 12:01:13 7fc3e1ed3780 InnoDB: Operating system error number 2 in a file operation.
InnoDB: The error means the system cannot find the path specified.
xtrabackup: Fatal error: cannot find ./xtrabackup_logfile.
xtrabackup: Error: xtrabackup_init_temp_log() failed.
innobackupex: got a fatal error with the following stacktrace: at /usr//bin/innobackupex line 2642.
main::apply_log() called at /usr//bin/innobackupex line 1570
innobackupex: Error: innobackupex: ibbackup failed at /usr//bin/innobackupex line 2642.

innobackup.backup.log (DONOR)
InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
and Percona LLC and/or its affiliates 2009-2013. All Rights Reserved.
This software is published under the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.
Get the latest version of Percona XtraBackup, documentation, and help resources:
http://www.percona.com/xb/p
150324 11:58:24 innobackupex: Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_file=/etc/mysql/my.cnf;mysql_read_default_group=xtrabackup;mysql_socket=/var/run/mysqld/mysqld.sock' as 'sstuser' (using password: YES).
150324 11:58:24 innobackupex: Connected to MySQL server
150324 11:58:24 innobackupex: Starting the backup operation
IMPORTANT: Please check that the backup run completes successfully.
At the end of a successful backup run innobackupex prints "completed OK!".
innobackupex: Using server version 5.6.21-70.1-56
innobackupex: Created backup directory /tmp/tmp.EZdqYbIZoL
150324 11:58:24 innobackupex: Starting ibbackup with command: xtrabackup --defaults-file="/etc/mysql/my.cnf" --defaults-group="mysqld" --backup --suspend-at-end --target-dir=/tmp --tmpdir=/tmp --extra-lsndir='/tmp' --stream=xbstream
innobackupex: Waiting for ibbackup (pid=20892) to suspend
innobackupex: Suspend file '/tmp/xtrabackup_suspended_2'
xtrabackup version 2.2.9 based on MySQL server 5.6.22 Linux (x86_64) (revision id: )
xtrabackup: uses posix_fadvise().
xtrabackup: cd to /var/lib/mysql
xtrabackup: open files limit requested 1024000, set to 1024000
xtrabackup: using the following InnoDB configuration:
xtrabackup: innodb_data_home_dir = /var/lib/mysql
xtrabackup: innodb_data_file_path = ibdata1:100M:autoextend
xtrabackup: innodb_log_group_home_dir = /var/lib/mysql
xtrabackup: innodb_log_files_in_group = 2
xtrabackup: innodb_log_file_size = 1073741824
xtrabackup: using O_DIRECT
>> log scanned up to (1374935811)
xtrabackup: Generating a list of tablespaces
2015-03-24 11:58:24 7f9248add780 InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
innobackupex: got a fatal error with the following stacktrace: at /usr//bin/innobackupex line 2704.
main::wait_for_ibbackup_file_create('/tmp/xtrabackup_suspended_2') called at /usr//bin/innobackupex line 2724
main::wait_for_ibbackup_suspend('/tmp/xtrabackup_suspended_2') called at /usr//bin/innobackupex line 1977
main::backup() called at /usr//bin/innobackupex line 1601
innobackupex: Error: The xtrabackup child process has died at /usr//bin/innobackupex line 2704.

Now, there are two errors that look like permissions/OS errors: OS error 13 (permission denied) and OS error 2 (file not found). The file-not-found error relates to the missing xtrabackup_checkpoints, I think, but I have no idea whether I have to fix that or not.

The permission-denied errors make no sense to me; here are some permissions from my setup.

(JOINER)
-rw-r--r-- 1 mysql root 5164 Mar 24 11:56 /etc/mysql/my.cnf <--- my.cnf
drwxrwxrwt 5 root root 4096 Mar 24 12:20 tmp/ <----tmpdir
drwxr-xr-x 3 mysql mysql 12288 Mar 24 12:08 mysql/ <--- datadir

(DONOR)
-rw-r--r-- 1 mysql root 5164 Mar 24 11:56 my.cnf <--- my.cnf
drwxrwxrwt 4 root root 4096 Mar 24 12:20 tmp/ <-- tmpdir
drwxr-xr-x 20 mysql mysql 12288 Mar 24 12:09 mysql/ <----datadir



Any help is appreciated. I've gone round in circles for hours and hours checking the basic config and it all seems OK.

new percona cluster with existing data

Latest Forum Posts - March 24, 2015 - 2:43am
Hi everyone

I am building up a staging Percona cluster with three servers from a test database taken with xtrabackup (including grastate files, etc.). The relevant database is about 30G; what I did was copy the data files to the /var/lib/mysql/ directory on all servers and change ownership to mysql.

I bootstrap the first server, and then when I start the second server it begins the SST process.

How can I avoid the SST process?
Also, since the data are exactly the same, why does the joiner start SST?
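
One thing worth checking here (a sketch, assuming the default datadir): whether the grastate.dat copied to each node still carries the same cluster UUID and a valid seqno, since a joiner whose uuid:seqno does not match the donor's state will request a full SST.

cat /var/lib/mysql/grastate.dat
# expect the same uuid on every node and a seqno other than -1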

Thanks

Calling all MySQL DBAs: How do you use Percona Toolkit?

Latest MySQL Performance Blog posts - March 24, 2015 - 12:00am

Percona Toolkit is one of our most mature open source applications. Derived from Maatkit and Aspersa, Percona Toolkit has evolved significantly over the years. The software now contains 32 tools, over 4,000 tests, and has been downloaded over 250,000 times. Anyone who manages a database – from DBAs to system administrators to even software developers – benefits from Percona Toolkit’s ability to perform a variety of MySQL server and system tasks that are too difficult or complex to perform manually.

We continue to make Percona Toolkit better each month. Over the last 9 months alone Percona has had 6 releases and resolved nearly 50 issues.

While Percona team members in Support, Consulting, and Managed Services are big drivers of identifying bugs and new features (driven mostly by Percona customer needs), the community of Percona Toolkit users plays a significant role in making the open source software what it is today.

We’d like to learn how we can make Percona Toolkit even better for your needs. Please take a brief survey so we can learn how you actually use the software. As a thank you for taking the survey, we are randomly giving away five $50 Amazon.com gift cards to participants. It’s a small token but one that we hope you’ll appreciate.

Recent additions to Percona Toolkit have included better Percona XtraDB Cluster support as well as multiple fixes and improvements to pt-online-schema-change, pt-kill, pt-query-digest, pt-stalk, and preparation for the MySQL 5.7 GA. Help us continue to improve Percona Toolkit by taking part in our survey. If you use Percona Toolkit and are attending Percona Live next month, please keep a look out for me. I’d like to hear about your experiences.

The post Calling all MySQL DBAs: How do you use Percona Toolkit? appeared first on MySQL Performance Blog.

Bootstrap cluster when DNS entry is empty?

Latest Forum Posts - March 23, 2015 - 6:53pm
Hi, I'm using XtraDB Cluster 5.6 with consul for service discovery via DNS, and the cluster fails to bootstrap when no nodes are yet available.

In my.cnf I have this set up on all nodes, and it works great when the cluster is bootstrapped on one node:

wsrep_cluster_address=gcomm://db.service.consul

With the above, I can automatically add any number of new nodes to my cluster without intervention.
But when all three of my nodes are down, I get this in my error.log and the cluster fails to bootstrap: "Failed to resolve tcp://db.service.consul:4567"

I guess this isn't totally unexpected behavior, but if I leave my cluster_address as an empty gcomm://, the cluster bootstraps as expected; and since DNS is returning 0 node addresses, I would think the node would bootstrap itself.


Can anyone think of a workaround for this, or is this a bug?
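
One workaround that might be worth trying (a sketch, untested): test the DNS name at startup and bootstrap explicitly when it resolves to nothing, e.g. in a wrapper script:

if getent hosts db.service.consul >/dev/null; then
    service mysql start
else
    service mysql bootstrap-pxc
fi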

Thanks for your time and input!

Percona Master-Master lag issues

Latest Forum Posts - March 23, 2015 - 7:53am
I have 2 servers with master-master replication using Percona MySQL 5.6, and I am seeing replication lag on the second server, db-02; db-01 has never had a lag situation.

What could be causing this lag? It sometimes rises to >1000 seconds of lag and then suddenly falls to 0. I have checked it with Percona's pt-heartbeat tool and it is real lag.
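
For reference, the kind of measurement described (a sketch; the heartbeat schema name is hypothetical):

pt-heartbeat --update -D percona h=db-01 # on the writer
pt-heartbeat --monitor -D percona h=db-02 # lag as observed on the second server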

These are the server configurations; both have 8 cores, 32GB RAM and SSD disks:

DB-01 (Xeon E3-1245 V2 @ 3.40GHz)
[mysqld]
server-id = 1
log_bin = /var/lib/mysql/mysql-bin.log
binlog-ignore-db = mysql
binlog-ignore-db = test
datadir = /var/lib/mysql
expire-logs-days = 10
auto_increment_increment= 2
auto_increment_offset = 1
max_connections=3000
query_cache_size=0
query_cache_type=0
innodb_file_per_table
innodb_file_format=barracuda
innodb_flush_method=O_DIRECT
innodb_log_file_size=128M
innodb_thread_concurrency=0
innodb_flush_log_at_trx_commit=1
sync_binlog=1
# ~90% of memory
innodb_buffer_pool_size=27000M
# 4 * # of CPU cores
innodb_read_io_threads=32
innodb_write_io_threads=32
innodb_io_capacity = 5000
DB-02 (i7-4770 CPU @ 3.40GHz)
// same as db-01, but with these options different:
server-id = 2
auto_increment_offset = 2

The output of SHOW SLAVE STATUS on each server:

DB-01
Slave_IO_State: Waiting for master to send event
Master_Host: db-02
Master_Log_File: mysql-bin.000022
Read_Master_Log_Pos: 3348962
Relay_Log_File: db-01-relay-bin.000042
Relay_Log_Pos: 3349125
Relay_Master_Log_File: mysql-bin.000022
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 3348962
Relay_Log_Space: 3349462
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Seconds_Behind_Master: 0
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 2
Master_Info_File: /var/lib/mysql/master.info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
Master_Retry_Count: 86400
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:

DB-02
Slave_IO_State: Waiting for master to send event
Master_Host: db-01
Master_Log_File: mysql-bin.000019
Read_Master_Log_Pos: 38905060
Relay_Log_File: db-02-relay-bin.000048
Relay_Log_Pos: 17834447
Relay_Master_Log_File: mysql-bin.000019
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 28196753
Relay_Log_Space: 28542928
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Seconds_Behind_Master: 310
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1
Master_Info_File: /var/lib/mysql/master.info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
Master_Retry_Count: 86400
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:

pt-online-schema-change concepts

Latest Forum Posts - March 23, 2015 - 2:59am
Hi

I am reading the doc for the relevant tool in order to use it. Actually, I have already performed a test with great success, but when I came across tables with foreign keys I got into trouble. So I have a few questions on the subject which I don’t understand:

1) "When the tool renames the original table to let the new one take its place, the foreign keys “follow” the renamed table, and must be changed to reference the new table instead." I dont understand this. I mean, why does the child table "follows" the old table? The way I think of it , the child table will look for the foreign keys according to the table name (right?), and since the new table (the altered) was renamed and has the proper name, how come the child table tries to reference the foreign keys to the old one. Also, as it is stated on the manual, the old table is dropped which means the child table tries to reference a table that doesn’t exist?

What change will be made to the foreign keys on the child table? I mean, the table name and the rows are the same...
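
For reference, the tool's foreign-key handling is selected with --alter-foreign-keys-method; a hypothetical invocation (database, table and ALTER are made up):

pt-online-schema-change --execute \
  --alter "ADD COLUMN note VARCHAR(64)" \
  --alter-foreign-keys-method=rebuild_constraints \
  D=mydb,t=parent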

I am not a MySQL expert, so excuse me in advance if I ask something obvious.

Recovering data from a corrupt embedded MySQL Server.

Latest Forum Posts - March 22, 2015 - 10:30pm
Dear all, I have inherited an embedded database from a customer, which seems to be encrypted. You cannot read the data from the FRM, MYD & MYI files by copying them into the mysql directory, since we get an 'Incorrect information from .... table_name.frm' error. I have worked on this full-time for 3 days and I'm getting nowhere. Is there a possibility to extract data from the MYD file alone? I know the FRM file is encrypted, and the former software can still read the data from the database. Any pointers will be highly welcome.

Thanks

Cannot find MYSQL extension

Latest Forum Posts - March 22, 2015 - 6:39am
Hello,

I installed Percona following the directions at https://documentation.cpanel.net/dis...L+with+Percona. However, now my website says: "Your PHP installation appears to be missing the MySQL extension which is required by WordPress." What is up?

Thanks!

Failing to connect 2nd node

Latest Forum Posts - March 21, 2015 - 9:45pm
I'm trying to bring up an XtraDB cluster of 2 nodes running on two separate Ubuntu 12.04 servers. I was able to install Percona on the 1st node and get the service to start; however, when I try to start the service on the 2nd node, it always fails.

In the error.log below for the joiner, I'm getting some errors which I can't figure out how to solve. I can see that the joiner node (dg-mysql02) does know about the other node in the cluster (dg-mysql01), but it seems like there is an issue when they try to sync.

Any help is appreciated.
WSREP_SST: [INFO] WARNING: Stale temporary SST directory: /var/lib/mysql//.sst from previous state transfer (20150322 04:38:24.233)
WSREP_SST: [INFO] Evaluating socat -u TCP-LISTEN:4444,reuseaddr stdio | xbstream -x; RC=( ${PIPESTATUS[@]} ) (20150322 04:38:24.239)
WSREP_SST: [INFO] Proceeding with SST (20150322 04:38:24.255)
WSREP_SST: [INFO] Cleaning the existing datadir and innodb-data/log directories (20150322 04:38:24.258)
WSREP_SST: [INFO] Cleaning the binlog directory /var/log/mysql as well (20150322 04:38:24.275)
rm: cannot remove `/var/log/mysql/*.index': No such file or directory
WSREP_SST: [INFO] Waiting for SST streaming to complete! (20150322 04:38:24.283)
2015-03-22 04:38:25 31140 [Note] WSREP: (4535124a, 'tcp://0.0.0.0:4567') turning message relay requesting off
WSREP_SST: [ERROR] xtrabackup_checkpoints missing, failed innobackupex/SST on donor (20150322 04:38:34.422)
WSREP_SST: [ERROR] Cleanup after exit with status:2 (20150322 04:38:34.425)
2015-03-22 04:38:34 31140 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup-v2 --role 'joiner' --address 'dg-mysql02' --auth 'sstuser:s3cretPass' --datadir '/var/lib/mysql/' --defaults-file '/etc/mysql/my.cnf' --parent '31140' --binlog '/var/log/mysql/mysql-bin' : 2 (No such file or directory)
2015-03-22 04:38:34 31140 [ERROR] WSREP: Failed to read uuid:seqno from joiner script.
2015-03-22 04:38:34 31140 [ERROR] WSREP: SST failed: 2 (No such file or directory)
2015-03-22 04:38:34 31140 [ERROR] Aborting

2015-03-22 04:38:34 31140 [Warning] WSREP: 1.1 (dg-mysql01.reston.visp.verizon.com): State transfer to 0.1 (dg-mysql02.reston.visp.verizon.com) failed: -22 (Invalid argument)
2015-03-22 04:38:34 31140 [ERROR] WSREP: gcs/src/gcs_group.cpp:int gcs_group_handle_join_msg(gcs_group_t*, const gcs_recv_msg_t*)():731: Will never receive state. Need to abort.
2015-03-22 04:38:34 31140 [Note] WSREP: gcomm: terminating thread
2015-03-22 04:38:34 31140 [Note] WSREP: gcomm: joining thread
2015-03-22 04:38:34 31140 [Note] WSREP: gcomm: closing backend
2015-03-22 04:38:35 31140 [Note] WSREP: view(view_id(NON_PRIM,4535124a,12) memb {
4535124a,1
} joined {
} left {
} partitioned {
65bd84ba,1
})
2015-03-22 04:38:35 31140 [Note] WSREP: view((empty))
2015-03-22 04:38:35 31140 [Note] WSREP: gcomm: closed
2015-03-22 04:38:35 31140 [Note] WSREP: /usr/sbin/mysqld: Terminated.
Aborted
150322 04:38:35 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended

MySQL fragmentation(reorg)

Latest Forum Posts - March 20, 2015 - 1:53am
Hello
This is my first question. ^^

I have a question about the fragmentation (reorg) in mysql.

Q. If a table sees frequent DELETE/INSERT statements, should OPTIMIZE TABLE be run on it periodically?
Because the random I/O is bad (not the table size). Is that right?
Q. Is there a way to query table fragmentation, like "Innodb_scan_pages_contiguous, Innodb_scan_pages_jumpy"?
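
For context, one rough way to gauge free (reclaimable) space per table from information_schema (a sketch; the schema name is hypothetical):

mysql -e "SELECT table_name, data_free/1024/1024 AS free_mb FROM information_schema.tables WHERE table_schema='mydb' ORDER BY data_free DESC"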


Got error 5 during commit, wsrep_max_ws_size limit

Latest Forum Posts - March 19, 2015 - 12:12pm
I'm running Percona XtraDB Cluster 5.6 (5.6.22-72.0).

So I'm having a problem importing a mysqldump of an InnoDB database that's rather large. The error I get from the client is "Got error 5 during commit".

Some google-fu says this nondescript error usually comes from an insert going over wsrep_max_ws_size.

I've tried adjusting this value higher, but the limit seems to be 4GB (4294901759 bytes). My insert for one table is quite a bit larger than this. Is this limit my actual problem, and if so, why can't I increase it beyond 4GB?
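
For what it's worth, one common workaround (a sketch, assuming the dump can be regenerated) is to bound the size of each extended INSERT so that no single statement approaches that ceiling:

mysqldump --net_buffer_length=16384 bigdb bigtable > bigtable.sql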

xtradb galera cluster direct ethernet connection

Latest Forum Posts - March 19, 2015 - 10:25am
Hi

We are planning to set up a Percona/Galera cluster for master-master sync replication between 2 servers.

Is it possible to use a direct Ethernet connection over the gigabit Ethernet ports on the 2 servers?

Are there any settings in the cluster that specify the IP address? Or will the Galera cluster use the default port (which might be connected to an overloaded switch)?
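
For reference, Galera traffic can be pinned to the direct link in each node's my.cnf (a sketch; the crossover addresses are hypothetical):

wsrep_node_address=192.168.100.1
wsrep_cluster_address=gcomm://192.168.100.1,192.168.100.2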

Thanks

Percona MySQL Server source code

Latest Forum Posts - March 19, 2015 - 3:26am
Hi All,

As far as I know, Percona Server 5.6 is optimized and based on Oracle's MySQL 5.6 Community Edition.

Will future Percona Server releases be optimized and based on the MariaDB source, please?

Thanks

Choosing a good sharding key in MongoDB (and MySQL)

Latest MySQL Performance Blog posts - March 19, 2015 - 12:00am

MongoDB 3.0 was recently released. Instead of focusing on what’s new – that is so easy to find, let’s rather talk about something that has not changed a lot since the early MongoDB days. This topic is sharding and most specifically: how to choose a good sharding key. Note that most of the discussion will also apply to MySQL, so if you are more interested in sharding than in MongoDB, it could still be worth reading.

When do you want to shard?

In general sharding is recommended with MongoDB as soon as any of these conditions is met:

  • #1: A single server can no longer handle the write workload.
  • #2: The working set no longer fits in memory.
  • #3: The dataset is too large to easily fit in a single server.

Note that #1 and #2 are by far the most common reasons why people need sharding. Also note that in the MySQL world, #2 does not imply that you need sharding.

What are the properties of a good sharding key?

The starting point is that a cross-shard query is very expensive in a sharded environment. It is easy to understand why: the query has to be executed independently on several shards, and then results have to be merged.

With MongoDB, mongos will transparently route queries to the right shards and will automatically merge the results: this is very handy but the hidden complexity can also make you forget that you have executed a very poorly optimized query.

This is where the choice of a sharding key is so critical: choose the right key and most queries will be simple and efficient, choose a wrong one and you’ll have ugly and slow queries.

Actually a good sharding key does not need tens of properties, but only two:

  • Insertions should be as balanced as possible across all shards.
  • Each query should be able to be executed by retrieving data from as few shards as possible (ideally 1 shard).

Sounds quite easy, right? However depending on your use case, it may be quite difficult to find a good sharding key. Let’s look at a few examples.

Social Network

Say we have users who can be connected to other users, who can read or write posts and who have their own wall.

All 3 collections can become very large, so sharding all of them will be necessary over time.

For the user and wall collections, the user_id field is an obvious choice, and it actually meets both criteria for a good sharding key.

For the post collection, user_id also looks like an obvious choice, but if you think about how the collection is accessed for reads, you will realize that you will need to fetch posts by post_id, not by user_id (simply because a user can have multiple posts). If you shard by user_id, any read of a single post will be broadcast to all shards: this is clearly not a good option.

So using post_id is a better choice. However it only meets criterion #2: most posts are never updated, so all the writes are insertions that will go to a single shard. That said, the traffic on the post collection is strongly in favor of reads, so being able to speed up reads while not slowing down writes is probably an acceptable tradeoff.
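
In the mongo shell, those choices translate into something like this (database, collection and field names are hypothetical):

sh.enableSharding("social")
sh.shardCollection("social.users", { user_id: 1 })
sh.shardCollection("social.wall", { user_id: 1 })
sh.shardCollection("social.posts", { post_id: 1 })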

Access Logs

The workload here is very specific: write-intensive and append-only.

Sharding by ObjectId is definitely a bad idea: while data can be easily spread across all shards, all writes will go to only one shard, so you will have no benefit compared to a non-sharded setup when it comes to scaling the writes.

A better solution is to use a hash of the ObjectId: that way data AND writes will be spread across all shards.

Another good option would be to use another field in your documents that you know is evenly distributed across the whole dataset. Such a field may not exist though; that's why hashing the ObjectId is a more generic solution.
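
In the mongo shell this is a one-liner (the namespace is hypothetical):

sh.shardCollection("logs.access", { _id: "hashed" })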

Ecommerce

MongoDB can be a good option to store a product catalog: being schemaless, it can easily store products with very different attributes.

To be usable such a catalog must be searchable. This means that many indexes need to be added, and the working set will probably grow very quickly. In this case your main concern is probably not to scale the writes, but to make reads as efficient as possible.

Sharding can be an option because, if done properly, each shard will act as a coarse-grained index. Now the issue is to find which field(s) will evenly distribute the dataset. Most likely a single field will not be enough; you will have to use a compound sharding key.

Here I would say that there is no generic solution, but if the products are for instance targeted at kids, women or men, and if you have several categories of products, a potential sharding key would be (target, category, sku).
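
Expressed in the mongo shell (namespace hypothetical):

sh.shardCollection("catalog.products", { target: 1, category: 1, sku: 1 })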

Note that in this case, reading from secondaries may be enough to ensure good scalability.

Conclusion

As you can see, choosing a correct sharding key is not always easy: do not assume that because some apps are sharded by some fields, you should do the same. You may need a totally different strategy, or sharding may not even be the right solution for your issues.

If you are interested in learning more about MongoDB, I will be presenting a free webinar on March 25 at 10 a.m. Pacific time. It will be an introduction to MongoDB for MySQL DBAs and developers. Register here if you are interested.

The post Choosing a good sharding key in MongoDB (and MySQL) appeared first on MySQL Performance Blog.

pt-table-sync password in the old format please change the password to the new format

Latest Forum Posts - March 18, 2015 - 11:06pm
Hi All,

I'm looking at doing an upgrade from 5.5 to 5.6. I have 5.6 as a slave to 5.5, and then I'll promote 5.6 to master later on.

I want to ensure data integrity by running pt-table-sync, but I'm getting this error:

DBI connect(';host=172.16.1.154;mysql_read_default_group=client','myadmin',...) failed: Server is running in --secure-auth mode, but 'myadmin'@'172.16.1.154' has a password in the old format; please change the password to the new format at /usr/bin/pt-table-sync line 2220

But on the master that user is already using the new format:

select User,Host,Password from user where user='myadmin' and Host='172.16.1.154';
+------------+--------------+-------------------------------------------+
| User       | Host         | Password                                  |
+------------+--------------+-------------------------------------------+
| myadmin    | 172.16.1.154 | *F1234567893F0BBAFABBA0A6EE6AE9987654321A |
+------------+--------------+-------------------------------------------+
1 row in set (0.00 sec)
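
For what it's worth, one quick check is to compare hash lengths on both servers (a new-format hash is 41 characters starting with '*'; an old-format one is 16). A sketch:

mysql -e "SELECT User, Host, LENGTH(Password) FROM mysql.user WHERE User='myadmin'"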

May I know if anyone has encountered and fixed said issue? Thank you in advance.

Nhadie
