

script with pt-heartbeat becomes zombie

Latest Forum Posts - May 25, 2016 - 7:13am
Hi,

I'm currently testing replication of a MySQL DB. MySQL is version 5.5.49, and the server is SLES 11 SP4. I have installed percona-toolkit 2.2.16-1.
I'm currently testing pt-heartbeat. I have a script which starts pt-heartbeat:

Code:
sunhb65278:~ # cat /root/skripte/heartbeat.sh
#!/bin/bash
pidof perl /usr/bin/pt-heartbeat > /dev/null
rueck=$?
if [ $rueck -ne 0 ]; then
    pt-heartbeat -h 127.0.0.1 --user checksum --password checksum --update --database percona --daemonize
    sleep 5
fi
temp=$(pt-heartbeat -h sunhb58820-2 --user checksum --password checksum --check --database percona --master-server-id 10352397)
diff1=${temp#*.} # part after the decimal point
diff2=${temp%.*} # part before the decimal point
if [ $diff1 -gt 0 ]; then
    mail -s "pt-heartbeat on $HOSTNAME failed" bernd.lentes@helmholtz-muenchen.de << EOT
Warning! Slave is $temp seconds behind the master!
EOT
    exit
fi
if [ $diff2 -gt 0 ]; then
    mail -s "pt-heartbeat on $HOSTNAME failed" bernd.lentes@helmholtz-muenchen.de << EOT
Warning! Slave is $temp seconds behind the master!
EOT
    exit
fi

The script is called by cron every minute.
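As a side note, the two parameter expansions in the script split the decimal delay value that pt-heartbeat --check prints. A standalone sketch of how they behave (the sample value "1.25" is made up, not real pt-heartbeat output):

```shell
# Standalone sketch of the parameter expansions used in heartbeat.sh.
# "1.25" is a made-up sample value, not real pt-heartbeat output.
temp="1.25"

# ${temp%.*} removes the shortest suffix matching ".*": the part BEFORE the dot.
whole=${temp%.*}

# ${temp#*.} removes the shortest prefix matching "*.": the part AFTER the dot.
frac=${temp#*.}

echo "whole=$whole frac=$frac"   # prints: whole=1 frac=25
```

Note that in bash, `#*.` strips the prefix (leaving the fractional part) and `%.*` strips the suffix (leaving the whole part), so it is easy to swap the two comments by accident.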

As you can see, the script first checks whether pt-heartbeat is already running, and starts it if not.
If I watch the processes, this is what happens:

Code:
TIME:15:51:01
root 31532 0.0 0.0 4552 548 pts/1 S+ 15:51 0:00 grep heartbeat
TIME:15:51:02
root 31535 0.0 0.0 11320 1400 ? Ss 15:51 0:00 /bin/bash /root/skripte/heartbeat.sh
root 31539 0.0 0.0 76732 15008 ? Ss 15:51 0:00 perl /usr/bin/pt-heartbeat -h 127.0.0.1 --user checksum --password checksum --update --database percona --daemonize
root 31546 0.0 0.0 4552 544 pts/1 S+ 15:51 0:00 grep heartbeat
TIME:15:51:03
root 31535 0.0 0.0 11320 1400 ? Ss 15:51 0:00 /bin/bash /root/skripte/heartbeat.sh
root 31539 0.0 0.0 76732 15312 ? Ss 15:51 0:00 perl /usr/bin/pt-heartbeat -h 127.0.0.1 --user checksum --password checksum --update --database percona --daemonize
root 31553 0.0 0.0 4552 544 pts/1 S+ 15:51 0:00 grep heartbeat
TIME:15:51:04
root 31535 0.0 0.0 11320 1400 ? Ss 15:51 0:00 /bin/bash /root/skripte/heartbeat.sh
root 31539 0.0 0.0 76732 15312 ? Ss 15:51 0:00 perl /usr/bin/pt-heartbeat -h 127.0.0.1 --user checksum --password checksum --update --database percona --daemonize
root 31560 0.0 0.0 4552 548 pts/1 S+ 15:51 0:00 grep heartbeat
TIME:15:51:05
root 31535 0.0 0.0 11320 1400 ? Ss 15:51 0:00 /bin/bash /root/skripte/heartbeat.sh
root 31539 0.0 0.0 76732 15312 ? Ss 15:51 0:00 perl /usr/bin/pt-heartbeat -h 127.0.0.1 --user checksum --password checksum --update --database percona --daemonize
root 31567 0.0 0.0 4552 548 pts/1 S+ 15:51 0:00 grep heartbeat
TIME:15:51:06
root 31535 0.0 0.0 11320 1400 ? Ss 15:51 0:00 /bin/bash /root/skripte/heartbeat.sh
root 31539 0.0 0.0 76732 15312 ? Ss 15:51 0:00 perl /usr/bin/pt-heartbeat -h 127.0.0.1 --user checksum --password checksum --update --database percona --daemonize
root 31574 0.0 0.0 4552 548 pts/1 S+ 15:51 0:00 grep heartbeat
TIME:15:51:07
root 31535 0.0 0.0 11320 1400 ? Ss 15:51 0:00 /bin/bash /root/skripte/heartbeat.sh
root 31539 0.0 0.0 76732 15312 ? Ss 15:51 0:00 perl /usr/bin/pt-heartbeat -h 127.0.0.1 --user checksum --password checksum --update --database percona --daemonize
root 31576 33.3 0.0 83044 17924 ? S 15:51 0:00 perl /usr/bin/pt-heartbeat -h sunhb58820-2 --user checksum --password checksum --check --database percona --master-server-id 10352397
root 31582 0.0 0.0 4552 548 pts/1 S+ 15:51 0:00 grep heartbeat
TIME:15:51:08
root 31535 0.0 0.0 0 0 ? Zs 15:51 0:00 [heartbeat.sh] <defunct>
root 31539 0.0 0.0 76732 15312 ? Ss 15:51 0:00 perl /usr/bin/pt-heartbeat -h 127.0.0.1 --user checksum --password checksum --update --database percona --daemonize
root 31589 0.0 0.0 4552 548 pts/1 S+ 15:51 0:00 grep heartbeat
TIME:15:51:09
root 31535 0.0 0.0 0 0 ? Zs 15:51 0:00 [heartbeat.sh] <defunct>
root 31539 0.0 0.0 76732 15312 ? Ss 15:51 0:00 perl /usr/bin/pt-heartbeat -h 127.0.0.1 --user checksum --password checksum --update --database percona --daemonize
root 31596 0.0 0.0 4552 544 pts/1 S+ 15:51 0:00 grep heartbeat

First no process is running. Then cron starts the script. Why is the script /root/skripte/heartbeat.sh (pid 31535) becoming a zombie at 15:51:08?

Do you have any idea ?

Thanks.

Bernd
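As an aside to the script above: one common way to harden this kind of cron-driven starter against duplicate runs is an flock(1) guard instead of the pidof check. This is a sketch, not the poster's script, and the lock path is hypothetical:

```shell
# Sketch (not the poster's script, lock path hypothetical): serialize cron
# runs with flock(1) so a second invocation exits instead of double-starting.
lockfile=/tmp/heartbeat.lock

exec 9>"$lockfile"           # keep fd 9 open for the life of the script
if flock -n 9; then
    echo "lock acquired, safe to start pt-heartbeat"
else
    echo "another instance holds the lock, exiting"
    exit 0
fi

# While we hold the lock, a contender opening its own descriptor is refused:
( exec 8>"$lockfile"; flock -n 8 && echo "second acquired" || echo "second blocked" )
```

The last line demonstrates the guard: because flock treats separately opened descriptors as independent lock holders, the subshell's non-blocking attempt fails while the main script holds fd 9.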


mysql works but systemd status shows it failed after bootstrapping

Latest Forum Posts - May 24, 2016 - 10:02pm
Hello

Recently I noticed mysql failed after bootstrapping on a fresh cluster, so I checked /etc/init.d/mysql status:

Code:
● mysql.service - LSB: Start and stop the mysql (Percona XtraDB Cluster) daemon
   Loaded: loaded (/etc/init.d/mysql)
   Active: failed (Result: exit-code) since Tue 2016-05-24 22:37:02 IRDT; 14h ago
  Process: 2313 ExecStart=/etc/init.d/mysql start (code=exited, status=1/FAILURE)

May 24 22:37:02 1-dbpool-a01 mysql[2313]: Starting MySQL (Percona Xtr...
May 24 22:37:02 1-dbpool-a01 mysql[2313]: failed!
May 24 22:37:02 1-dbpool-a01 systemd[1]: mysql.service: control proc...1
May 24 22:37:02 1-dbpool-a01 systemd[1]: Failed to start LSB: Start ....
May 24 22:37:02 1-dbpool-a01 systemd[1]: Unit mysql.service entered ....
Hint: Some lines were ellipsized, use -l to show in full.

I checked the error log and no error was detected, and I logged in to mysql and everything works with no errors.

So what's wrong with the init script? Where is the problem? How can I find out why the service failed?

Code:
Distributor ID: Debian
Description: Debian GNU/Linux 8.4 (jessie)
Release: 8.4
Codename: jessie
Linux 1-vm-dbpool-a01 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-1 (2016-03-06) x86_64 GNU/Linux
mysql Ver 14.14 Distrib 5.6.29-76.2, for debian-linux-gnu (x86_64) using 6.3
ii percona-release 0.1-3.jessie all Package to install Percona gpg key and APT repo
ii percona-xtrabackup 2.3.4-1.jessie amd64 Open source backup tool for InnoDB and XtraDB
ii percona-xtradb-cluster-56 5.6.29-25.15-1.jessie amd64 Percona XtraDB Cluster with Galera
ii percona-xtradb-cluster-client-5.6 5.6.29-25.15-1.jessie amd64 Percona XtraDB Cluster database client binaries
ii percona-xtradb-cluster-common-5.6 5.6.29-25.15-1.jessie amd64 Percona XtraDB Cluster database common files (e.g. /etc/mysql/my.cnf)
ii percona-xtradb-cluster-galera-3 3.15-1.jessie amd64 Metapackage for latest version of galera3.
ii percona-xtradb-cluster-galera-3.x 3.15-1.jessie amd64 Galera components of Percona XtraDB Cluster
ii percona-xtradb-cluster-server-5.6 5.6.29-25.15-1.jessie amd64 Percona XtraDB Cluster database server binaries

If you need further information, let me know.
thank you
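The truncated status output above already hints at the next step ("use -l to show in full"). A hedged debugging sketch, assuming a standard systemd/journald setup with the unit name from the post:

```shell
# Hedged debugging sketch: ask systemd/journald for the unit's full history
# instead of the ellipsized status output (unit name taken from the post).
systemctl status -l mysql.service
journalctl -u mysql.service --since "2016-05-24 22:30"
```

This should show whether the init script itself reported a reason for the status=1 exit even though mysqld ended up running.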

xtrabackup only performs a full backup if it is executed with superuser privileges

Latest Forum Posts - May 24, 2016 - 6:54pm
I executed this command to backup my database:
Code:
xtrabackup --backup --databases='database' --target-dir=/home/user/backups --datadir=/var/lib/mysql/

But I get the following error:

Code:
160520 02:00:54 version_check Done.
160520 02:00:54 Connecting to MySQL server host: localhost, user: root, password: set, port: 0, socket: /var/lib/mysql/mysql.sock
Using server version 5.5.44-MariaDB
xtrabackup version 2.4.2 based on MySQL server 5.7.11 Linux (x86_64) (revision id: 8e86a84)
xtrabackup: uses posix_fadvise().
xtrabackup: cd to /var/lib/mysql/
xtrabackup: open files limit requested 0, set to 1024
xtrabackup: using the following InnoDB configuration:
xtrabackup: innodb_data_home_dir = .
xtrabackup: innodb_data_file_path = ibdata1:10M:autoextend
xtrabackup: innodb_log_group_home_dir = .
xtrabackup: innodb_log_files_in_group = 2
xtrabackup: innodb_log_file_size = 5242880
InnoDB: Number of pools: 1
InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to the directory.

I solved it by running the same command with sudo. The problem is that the backup directory gets created as root, so my user doesn't have access to it, and I always have to change the ownership recursively before I can read it. This method isn't very efficient for me.

- Is there any other alternative to do this?
- Do I always have to execute this command with sudo?
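One workaround consistent with what the poster describes, offered only as a sketch: keep running under sudo (since reading the datadir requires the mysql user's privileges) and immediately return ownership of the target directory. Paths are taken from the post; whether this fits your setup is an assumption:

```shell
# Sketch only: run the backup as root (needed to read /var/lib/mysql/),
# then hand the backup files back to the regular user in one step.
sudo xtrabackup --backup --databases='database' \
     --target-dir=/home/user/backups --datadir=/var/lib/mysql/
sudo chown -R "$USER": /home/user/backups
```

Wrapping both commands in one script at least removes the manual ownership fix-up after every backup.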

If I do a partial backup my functions are lost!

Latest Forum Posts - May 24, 2016 - 6:46pm
I'm following these steps to back up only one database:

To back up the database I do:

Code:
innobackupex --user=root --databases='database' /home/user/innobackupex

Preparing the backup:

Code:
innobackupex --apply-log --export /home/user/innobackupex

Then, to copy all the files to the data directory, I do:

Code:
rsync -avrP /home/user/innobackupex /var/lib/mysql/

My database got restored, but when I try to list my functions with SHOW FUNCTION STATUS, they are gone!

Now, if I do a full backup, this doesn't happen.

Any idea why this is happening?
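One possible explanation, offered as an assumption rather than a confirmed diagnosis: in MySQL 5.x, stored functions live in the mysql system schema (mysql.proc), not in the backed-up database's own files, so a partial backup of just one database won't carry them. A hedged workaround is to dump the routines separately with mysqldump and replay them after the restore (credentials and file names below are illustrative):

```shell
# Sketch: export only the stored routines of 'database'
# (database name taken from the post; credentials illustrative).
mysqldump --user=root --password \
    --no-data --no-create-info --no-create-db \
    --routines database > database-routines.sql

# After restoring the partial backup, replay the routine definitions:
mysql --user=root --password database < database-routines.sql
```

With `--no-data --no-create-info`, the dump skips table data and table definitions, leaving essentially the routine definitions enabled by `--routines`.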

Server not working after VM image copy.

Latest Forum Posts - May 24, 2016 - 10:31am
This is what happens at the console:

show dbs;
Tue May 24 14:15:32.727 listDatabases failed:{
"errmsg" : "exception: dictionary XXXXXXXXXXXXXX should exist, but we got ENOENT",
"code" : 16988,
"ok" : 0
} at /mnt/workspace/percona-tokumx-2.0-debian-binary/label_exp/vps-ubuntu-trusty-x64-04/tokumx-enterprise-2.0.2/src/mongo/shell/mongo.js:46

and the log is showing this directly:


Tue May 24 14:24:43.031 [conn4529] Assertion: 16988:dictionary XXXXXXXXXXXXXX should exist, but we got ENOENT
0xb0c9f6 0x99affc 0x99b09c 0x7fa85a 0x7fbc1e 0x83f534 0x8428d0 0x859cf7 0x85a07e 0x8f8ca9 0x8f935f 0x855522 0x862de8 0x8108ce 0x81511c 0x81730d 0x6ea86b 0x7d8e12 0x7fcd58720182 0x7fcd56a8847d
/usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x26) [0xb0c9f6]
/usr/bin/mongod(_ZN5mongo11msgassertedEiPKc+0x9c) [0x99affc]
/usr/bin/mongod() [0x99b09c]
/usr/bin/mongod(_ZN5mongo14IndexInterface4openEbb+0x35a) [0x7fa85a]
/usr/bin/mongod(_ZN5mongo14IndexInterface4makeERKNS_7BSONObjEbb+0x1fe) [0x7fbc1e]
/usr/bin/mongod(_ZN5mongo14CollectionBaseC2ERKNS_7BSONObjEPb+0x2e4) [0x83f534]
/usr/bin/mongod(_ZN5mongo17IndexedCollectionC1ERKNS_7BSONObjEPb+0x10) [0x8428d0]
/usr/bin/mongod(_ZN5mongo10CollectionC1ERKNS_7BSONObjEb+0xa57) [0x859cf7]
/usr/bin/mongod(_ZN5mongo10Collection4makeERKNS_7BSONObjEb+0x3e) [0x85a07e]
/usr/bin/mongod(_ZN5mongo13CollectionMap7open_nsERKNS_10StringDataEb+0x3d9) [0x8f8ca9]
/usr/bin/mongod(_ZN5mongo13CollectionMap13getCollectionERKNS_10StringDataE+0x4f) [0x8f935f]
/usr/bin/mongod(_ZN5mongo21getOrCreateCollectionERKNS_10StringDataEb+0x12) [0x855522]
/usr/bin/mongod(_ZN5mongo13insertObjectsEPKcRKSt6vectorINS_7BSONObjESaIS3_EEbmbb+0x538) [0x862de8]
/usr/bin/mongod() [0x8108ce]
/usr/bin/mongod(_ZN5mongo14receivedInsertERNS_7MessageERNS_5CurOpE+0x4fc) [0x81511c]
/usr/bin/mongod(_ZN5mongo16assembleResponseERNS_7MessageERNS_10DbResponseERKNS_11HostAndPortE+0x12fd) [0x81730d]
/usr/bin/mongod(_ZN5mongo16MyMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortEPNS_9LastErrorE+0xbb) [0x6ea86b]
/usr/bin/mongod(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x452) [0x7d8e12]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x8182) [0x7fcd58720182]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7fcd56a8847d]
Tue May 24 14:24:43.039 [conn4529] insert xxxxxxxxxxl.system.indexes keyUpdates:0 exception: dictionary XXXXXXXXXXXXXX should exist, but we got ENOENT code:16988 locks(micros) r:7876 7ms


The files it complains about (XXXXXXXX) do exist; the only difference is that the name in the message contains "-", while the files use "_".

Any idea what I can do?

Webinar Thursday May 26: Troubleshooting MySQL hardware resource usage

Latest MySQL Performance Blog posts - May 24, 2016 - 8:46am

Join Sveta on Thursday, May 26, 2016, at 10 am PDT (UTC-7) for her webinar Troubleshooting MySQL hardware resource usage.

MySQL does not just run on its own. It stores data on disk, and stores data and temporary results in memory. It uses CPU resources to perform operations, and a network to communicate with its clients.

In this webinar, we’ll discuss common resource usage issues, how they affect MySQL Server performance, and methods to find out how resources are being used. We will employ both OS-level tools, and new features in Performance Schema that provide detailed information on what exactly is happening inside MySQL Server.

Register for the webinar here.

Sveta Smirnova, Principal Technical Services Engineer

Sveta joined Percona in 2015.

Her main professional interests are problem-solving, working with tricky issues and bugs, finding patterns that can solve typical issues more quickly, and teaching others how to deal with MySQL issues, bugs, and gotchas effectively. Before joining Percona, Sveta worked as a Support Engineer in the MySQL Bugs Analysis Support Group at MySQL AB, Sun, and Oracle.

She is the author of the book MySQL Troubleshooting and JSON UDF Functions for MySQL.

cannot join cluster after node upgrade.

Latest Forum Posts - May 24, 2016 - 6:39am
Hello,

I have upgraded one node of the existing cluster, which was running Debian wheezy with Percona XtraDB 5.6 and the following package versions:

Code:
percona-xtrabackup 2.2.12-1.wheezy
percona-xtradb-cluster-5.6-dbg 5.6.25-25.12-1.wheezy
percona-xtradb-cluster-client-5.6 5.6.25-25.12-1.wheezy
percona-xtradb-cluster-common-5.6 5.6.25-25.12-1.wheezy
percona-xtradb-cluster-full-56 5.6.25-25.12-1.wheezy
percona-xtradb-cluster-galera-3 3.9.3494.wheezy
percona-xtradb-cluster-galera-3.x 3.9.3494.wheezy
percona-xtradb-cluster-galera-3.x-dbg 3.9.3494.wheezy
percona-xtradb-cluster-galera3-dbg 3.9.3494.wheezy
percona-xtradb-cluster-garbd-3 3.9.3494.wheezy
percona-xtradb-cluster-garbd-3.x 3.9.3494.wheezy
percona-xtradb-cluster-garbd-3.x-dbg 3.9.3494.wheezy
percona-xtradb-cluster-server-5.6 5.6.25-25.12-1.wheezy
percona-xtradb-cluster-server-debug-5.6 5.6.25-25.12-1.wheezy
percona-xtradb-cluster-test-5.6 5.6.25-25.12-1.wheezy

to Debian jessie with Percona XtraDB 5.6 and the following package versions:
Code:
percona-release 0.1-3.jessie
percona-xtrabackup 2.3.4-1.jessie
percona-xtradb-cluster-5.6-dbg 5.6.29-25.15-1.jessie
percona-xtradb-cluster-client-5.6 5.6.29-25.15-1.jessie
percona-xtradb-cluster-common-5.6 5.6.29-25.15-1.jessie
percona-xtradb-cluster-full-56 5.6.29-25.15-1.jessie
percona-xtradb-cluster-galera-3 3.15-1.jessie
percona-xtradb-cluster-galera-3.x 3.15-1.jessie
percona-xtradb-cluster-galera-3.x-dbg 3.15-1.jessie
percona-xtradb-cluster-galera3-dbg 3.15-1.jessie
percona-xtradb-cluster-garbd-3 3.15-1.jessie
percona-xtradb-cluster-garbd-3.x 3.15-1.jessie
percona-xtradb-cluster-garbd-3.x-dbg 3.15-1.jessie
percona-xtradb-cluster-server-5.6 5.6.29-25.15-1.jessie
percona-xtradb-cluster-server-debug-5.6 5.6.29-25.15-1.jessie
percona-xtradb-cluster-test-5.6 5.6.29-25.15-1.jessie

When I start the node to join the cluster, I get the following errors in the log files:

Code:
2016-05-24 12:58:13 8254 [Note] WSREP: Read nil XID from storage engines, skipping position init
2016-05-24 12:58:13 8254 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib/galera3/libgalera_smm.so'
2016-05-24 12:58:13 8254 [Note] WSREP: wsrep_load(): Galera 3.9(r93aca2d) by Codership Oy <info@codership.com> loaded successfully.
2016-05-24 12:58:13 8254 [Note] WSREP: CRC-32C: using hardware acceleration.
2016-05-24 12:58:13 8254 [Warning] WSREP: Could not open saved state file for reading: /var/lib/client.sql/test-cluster//grastate.dat
2016-05-24 12:58:13 8254 [Note] WSREP: Found saved state: 00000000-0000-0000-0000-000000000000:-1
2016-05-24 12:58:13 8254 [Note] WSREP: Passing config to GCS: base_dir = /var/lib/client.sql/test-cluster/; base_host = 10.21.97.98; base_port = 14039; cert.log_conflicts = no; debug = no; evs.auto_evict = 0; evs.delay_margin = PT1S; evs.delayed_keep_period = PT30S; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.join_retrans_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 4; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.user_send_window = 2; evs.view_forget_timeout = PT24H; gcache.dir = /var/lib/client.sql/test-cluster/; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /var/lib/client.sql/test-cluster//galera.cache; gcache.page_size = 128M; gcache.size = 128M; gcs.fc_debug = 0; gcs.fc_factor = 1.0; gcs.fc_limit = 16; gcs.fc_master_slave = no; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = no; gmcast.segment = 0; gmcast.version = 0; pc.announce_timeout = PT3S; pc.checksum =
2016-05-24 12:58:13 8254 [Note] WSREP: Service thread queue flushed.
2016-05-24 12:58:13 8254 [Note] WSREP: Assign initial position for certification: -1, protocol version: -1
2016-05-24 12:58:13 8254 [Note] WSREP: wsrep_sst_grab()
2016-05-24 12:58:13 8254 [Note] WSREP: Start replication
2016-05-24 12:58:13 8254 [Note] WSREP: Setting initial position to 00000000-0000-0000-0000-000000000000:-1
2016-05-24 12:58:13 8254 [Note] WSREP: protonet asio version 0
2016-05-24 12:58:13 8254 [Note] WSREP: Using CRC-32C for message checksums.
2016-05-24 12:58:13 8254 [Note] WSREP: backend: asio
2016-05-24 12:58:13 8254 [Warning] WSREP: access file(/var/lib/client.sql/test-cluster//gvwstate.dat) failed(No such file or directory)
2016-05-24 12:58:13 8254 [Note] WSREP: restore pc from disk failed
2016-05-24 12:58:13 8254 [Note] WSREP: GMCast version 0
2016-05-24 12:58:13 8254 [Note] WSREP: (2ccf1242, 'tcp://0.0.0.0:14039') listening at tcp://0.0.0.0:14039
2016-05-24 12:58:13 8254 [Note] WSREP: (2ccf1242, 'tcp://0.0.0.0:14039') multicast: , ttl: 1
2016-05-24 12:58:13 8254 [Note] WSREP: EVS version 0
2016-05-24 12:58:13 8254 [Note] WSREP: gcomm: connecting to group 'test-cluster', peer '10.21.97.98:,10.254.60.210:,10.48.49.211:'
2016-05-24 12:58:13 8254 [Warning] WSREP: (2ccf1242, 'tcp://0.0.0.0:14039') address 'tcp://10.21.97.98:14039' points to own listening address, blacklisting
2016-05-24 12:58:16 8254 [Warning] WSREP: no nodes coming from prim view, prim not possible
2016-05-24 12:58:16 8254 [Note] WSREP: view(view_id(NON_PRIM,2ccf1242,1) memb { 2ccf1242,0 } joined { } left { } partitioned { })
2016-05-24 12:58:17 8254 [Warning] WSREP: last inactive check more than PT1.5S ago (PT3.53097S), skipping check
2016-05-24 12:58:46 8254 [Note] WSREP: view((empty))
2016-05-24 12:58:46 8254 [ERROR] WSREP: failed to open gcomm backend connection: 110: failed to reach primary view: 110 (Connection timed out) at gcomm/src/pc.cpp:connect():162
2016-05-24 12:58:46 8254 [ERROR] WSREP: gcs/src/gcs_core.cpp:long int gcs_core_open(gcs_core_t*, const char*, const char*, bool)():206: Failed to open backend connection: -110 (Connection timed out)
2016-05-24 12:58:46 8254 [ERROR] WSREP: gcs/src/gcs.cpp:long int gcs_open(gcs_conn_t*, const char*, const char*, bool)():1379: Failed to open channel 'test-cluster' at 'gcomm://10.21.97.98,10.254.60.210,10.48.49.211': -110 (Connection timed out)
2016-05-24 12:58:46 8254 [ERROR] WSREP: gcs connect failed: Connection timed out
2016-05-24 12:58:46 8254 [ERROR] WSREP: wsrep::connect(gcomm://10.21.97.98,10.254.60.210,10.48.49.211) failed: 7
2016-05-24 12:58:46 8254 [ERROR] Aborting
2016-05-24 12:58:46 8254 [Note] WSREP: Service disconnected.
2016-05-24 12:58:47 8254 [Note] WSREP: Some threads may fail to exit.
2016-05-24 12:58:47 8254 [Note] Binlog end
2016-05-24 12:58:47 8254 [Note] mysqld: Shutdown complete

Any idea how to fix this issue and achieve a fully working upgrade? Note that eventually I want to upgrade all the nodes to the same OS/Percona XtraDB version.

Thanks in advance

MaxScale master master

Latest Forum Posts - May 24, 2016 - 5:40am
Hello. I have a test environment with three MariaDB servers with Galera and master-master replication. Everything works fine.
I'm trying to configure MaxScale for HA.
This is the MaxScale config:

Code:
[maxscale]
threads=4

[server1]
type=server
address=10.10.15.30
port=3306
protocol=MySQLBackend
myweight=2

[server2]
type=server
address=10.10.15.31
port=3306
protocol=MySQLBackend
myweight=5

[server3]
type=server
address=10.10.15.32
port=3306
protocol=MySQLBackend
myweight=3

[Multi-Master Monitor]
type=monitor
module=mmmon
servers=server1,server2,server3
user=maxscale
passwd=fYwUnK5w
detect_stale_master=true

[Read-Only Service]
type=service
router=readconnroute
router_options=synced
servers=server1,server2,server3
user=maxscale
passwd=fYwUnK5w
enable_root_user=1
weightby=myweight

[Read-Write Service]
type=service
router=readwritesplit
servers=server1,server2,server3
user=maxscale
passwd=fYwUnK5w
enable_root_user=1

[MaxAdmin Service]
type=service
router=cli

[Read-Only Listener]
type=listener
service=Read-Only Service
protocol=MySQLClient
port=4008

[Read-Write Listener]
type=listener
service=Read-Write Service
protocol=MySQLClient
port=3306

[MaxAdmin Listener]
type=listener
service=MaxAdmin Service
protocol=maxscaled
port=6603

For the test I use this script:

Code:
#!/bin/bash
mysqlslap \
    --user=root \
    --password=123 \
    --host=10.10.15.33 \
    --concurrency=20 \
    --number-of-queries=10000 \
    --create-schema=employees \
    --query="./select.sql" \
    --delimiter=";" \
    --verbose \
    --iterations=2 \
    --debug-info

Code:
cat select.sql
insert into employees (d,c) values (2,3);

and I see this error:

Code:
./mysqlslap.sh
mysqlslap: Cannot run query insert into employees (d,c) values (2,3) ERROR : Lost connection to MySQL server during query

In the log file:

Code:
May 24 18:37:21 ubuntuMaxScale maxscale[25118]: Server at 10.10.15.30:3306 should be master but is RUNNING MASTER instead and can't be chosen to master.
May 24 18:37:21 ubuntuMaxScale maxscale[25118]: Routing the query failed. Session will be closed.
May 24 18:37:21 ubuntuMaxScale maxscale[25118]: Server at 10.10.15.30:3306 should be master but is RUNNING MASTER instead and can't be chosen to master.
May 24 18:37:21 ubuntuMaxScale maxscale[25118]: Routing the query failed. Session will be closed.

If I do the same request manually, everything is fine:

Code:
Server version: 10.0.0 1.4.3-maxscale mariadb.org binary distribution

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> insert into employees.employees (d,c) values (2,3);
Query OK, 1 row affected (0.00 sec)

MySQL [(none)]> insert into employees.employees (d,c) values (2,3);
Query OK, 1 row affected (0.00 sec)


pt-archiver - multitable delete or delete from with join

Latest Forum Posts - May 24, 2016 - 12:21am
Hi everybody.
Can we use pt-archiver for archiving or purging data from tables with FKs using a join? I did not find any examples or information about this in the documentation.
Can you advise?

Example delete query:

delete testdb1.tbl1 from testdb1.tbl1
inner join testdb1.tbl2 on testdb1.tbl1.tbl1_id=testdb1.tbl2.tbl2_id
where testdb1.tbl2.updated=?
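pt-archiver operates on a single table via --source, so a multi-table DELETE like the one above cannot be expressed directly. One commonly suggested workaround, sketched here under assumptions and not verified against the poster's schema, is to push the join condition into --where as a subquery:

```shell
# Hypothetical sketch: purge rows of tbl1 whose id matches rows in tbl2.
# Table and column names come from the post; host, credentials, and the
# '2016-01-01' placeholder for the bound 'updated' value are made up.
pt-archiver \
    --source h=localhost,D=testdb1,t=tbl1 \
    --where "tbl1_id IN (SELECT tbl2_id FROM testdb1.tbl2 WHERE updated = '2016-01-01')" \
    --purge --limit 1000 --commit-each
```

Whether the optimizer handles the subquery efficiently depends on the MySQL version and indexes, so it is worth testing with --dry-run first.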

Can a Percona 5.6 SLAVE talk to a 5.6 MySQL Community Server MASTER

Latest Forum Posts - May 23, 2016 - 3:09pm
I want to set up a local Percona SLAVE
talking to an AWS MySQL Community MASTER.

Is this possible? I don't see why not,
or do I have to find a MySQL Community server?

My goal is to transfer MySQL AWS data to a local Percona DB:
make it a slave to AWS and let it replicate until we are ready to cut over,
then make the Percona slave the new MASTER and delete AWS.
thx

SLAVE :
mysql> show variables like '%vers%' ;
+---------------------------+------------------------------------------------------+
| Variable_name | Value |
+---------------------------+------------------------------------------------------+
| innodb_version | 5.6.29-76.2 |
| protocol_version | 10 |
| slave_type_conversions | |
| thread_pool_oversubscribe | 3 |
| version | 5.6.29-76.2-log |
| version_comment | Percona Server (GPL), Release 76.2, Revision ddf26fe |
| version_compile_machine | x86_64 |
| version_compile_os | Linux |
+---------------------------+------------------------------------------------------+
8 rows in set (0.00 sec)


MASTER :
mysql> show variables like '%vers%' ;
+-------------------------+------------------------------+
| Variable_name | Value |
+-------------------------+------------------------------+
| innodb_version | 5.6.19 |
| protocol_version | 10 |
| slave_type_conversions | |
| version | 5.6.19-log |
| version_comment | MySQL Community Server (GPL) |
| version_compile_machine | x86_64 |
| version_compile_os | Linux |
+-------------------------+------------------------------+
7 rows in set (0.02 sec)
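For what it's worth, Percona Server is built as a drop-in replacement for MySQL of the same major version, so a 5.6 Community master feeding a 5.6 Percona slave is a standard topology. A hedged sketch of pointing the slave at the master; every coordinate below (host, user, binlog file, position) is a placeholder to be replaced with real values from the dump or from SHOW MASTER STATUS on the master:

```shell
# Hypothetical coordinates only: substitute the host, replication user,
# and the binlog file/position recorded when the dump was taken.
mysql -u root -p <<'SQL'
CHANGE MASTER TO
  MASTER_HOST='aws-master.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='repl_password',
  MASTER_LOG_FILE='mysql-bin.000042',
  MASTER_LOG_POS=120;
START SLAVE;
SHOW SLAVE STATUS\G
SQL
```

Once the cut-over happens, promoting the Percona slave is a matter of stopping writes on the old master, letting the slave catch up, and running STOP SLAVE / RESET SLAVE ALL on it.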

pt-online-schema-change crashes instance by allocating all available memory.

Latest Forum Posts - May 23, 2016 - 2:28pm
When I use pt-online-schema-change it crashes my EC2 instance (a micro with 1 GB of RAM; I even tried a small with 2 GB). It gets so bad that I don't even have enough memory to reboot (not enough memory to sudo, and I have to stop the instance).

Is there a way to limit the maximum memory usage for this tool? It's basically unusable for certain large tables.

Cannot start slave to AWS Master: 1298 Error Unknown or incorrect time zone: 'UTC'

Latest Forum Posts - May 23, 2016 - 2:27pm
We are moving off AWS to host and store all DBs in our own data center.
I took a mysql dump from AWS (with binlog name and position).
I imported it on a new Percona 5.6 DB, but I get an error while trying to start the slave.
Setting up CHANGE MASTER works, but START SLAVE fails.

When checking AWS, it shows:
system_time_zone = UTC
time_zone = UTC

My new 5.6 Percona shows:
system_time_zone = PDT
time_zone = SYSTEM

I had to change my unix host to UTC and bounce mysql, since it reads the host time zone upon startup.

I also set this in my.cnf and bounced mysql:
[mysqld_safe]
timezone='UTC'
default-time-zone='UTC'

mysql> show variables where variable_name like '%zone%' ;
system_time_zone UTC
time_zone SYSTEM

It still shows time_zone = SYSTEM.
HOW DO I CHANGE THIS to UTC? (I want the same as AWS.)
Or how else can I start slave replication from AWS to a slave on premises?

TRIED :
mysql> SET @@global.time_zone = UTC ;
ERROR 1298 (HY000): Unknown or incorrect time zone: 'UTC'
mysql> SET @@global.time_zone = 'UTC' ;
ERROR 1298 (HY000): Unknown or incorrect time zone: 'UTC'
mysql> SET time_zone = 'UTC' ;
ERROR 1298 (HY000): Unknown or incorrect time zone: 'UTC'
mysql> SET time_zone = UTC ;
ERROR 1298 (HY000): Unknown or incorrect time zone: 'UTC'
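The usual cause of ERROR 1298 for named zones such as 'UTC' is that the mysql.time_zone% tables are empty on a fresh install, so the server cannot resolve zone names. They can be populated from the OS zoneinfo database; the path below is the common Linux location and may differ on your system:

```shell
# Load the OS time zone database into MySQL's time zone tables; afterwards
# named zones such as 'UTC' become valid values for the time_zone variable.
mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u root -p mysql
```

After loading, `SET GLOBAL time_zone = 'UTC';` should succeed, and `default-time-zone='UTC'` belongs under the `[mysqld]` section of my.cnf (not `[mysqld_safe]`) to survive restarts.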


Take Percona’s one-click high availability poll

Latest MySQL Performance Blog posts - May 23, 2016 - 1:47pm

Wondering what high availability (HA) solutions are most popular? Take our high availability poll below!

HA is always a hot topic. The reality is that if your data is not available, your customers cannot do business with you. In fact, estimates show the average cost of downtime is about $5K per minute. With an average outage taking 40 minutes to correct, you could be looking at a potential cost of $200K if your MySQL instance goes down. Whether your database is on premise, or in public or private clouds, it is critical that your database deployment does not have a potentially devastating single point of failure.

Please take a few seconds and answer the following poll. It will help the community get an idea of how companies are approaching HA in their critical database environments.

If you’re using other solutions or have specific issues, feel free to comment below. We’ll post a follow-up blog with the results!

Note: There is a poll embedded within this post, please visit the site to participate in this post's poll.

