
Feed aggregator

Error starting PXC with 1 node

Latest Forum Posts - January 14, 2015 - 2:15am
Hi, I can't start PXC in bootstrap mode on CentOS 7 with a single node.

This is the config file my.cnf:

[mysql]
# CLIENT #
port = 3306
socket = /var/lib/mysql/mysql.sock

[mysqld]
# GENERAL #
user = mysql
default-storage-engine = InnoDB
socket = /var/lib/mysql/mysql.sock
pid-file = /var/lib/mysql/mysql.pid

# Cluster Config #
wsrep_cluster_address=gcomm://
binlog_format=ROW
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_name=AirePXC
wsrep_sst_method=mysqldump
#wsrep_sst_auth="sstuser:s3cr3t"
wsrep_node_name=PXC1
wsrep_node_address=127.0.0.1
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2

# MyISAM #
key-buffer-size = 32M
myisam-recover = FORCE,BACKUP

# SAFETY #
max-allowed-packet = 16M
max-connect-errors = 1000000
sql-mode = STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_AUTO_VALUE_ON_ZERO,NO_ENGINE_SUBSTITUTION,NO_ZERO_DATE,NO_ZERO_IN_DATE,ONLY_FULL_GROUP_BY
innodb-strict-mode = 1

# DATA STORAGE #
datadir = /var/lib/mysql/

# BINARY LOGGING #
log-bin = /var/lib/mysql/mysql-bin
expire-logs-days = 14
sync-binlog = 1

# CACHES AND LIMITS #
tmp-table-size = 32M
max-heap-table-size = 32M
query-cache-type = 0
query-cache-size = 0
max-connections = 500
thread-cache-size = 50
open-files-limit = 65535
table-definition-cache = 1024
table-open-cache = 2048

# INNODB #
innodb-flush-method = O_DIRECT
innodb-log-files-in-group = 2

Startup PXC:

systemctl start mysql@bootstrap.service

And the error output:

Job for mysql@bootstrap.service failed. See 'systemctl status mysql@bootstrap.service' and 'journalctl -xn' for details.

Error Details:

ene 14 11:03:19 localhost.localdomain mysql-systemd[25304]: ERROR! mysql pid file /var/lib/mysql/mysql.pid empty or not readable
ene 14 11:03:19 localhost.localdomain mysql-systemd[25304]: WARNING: mysql may be already dead
ene 14 11:03:19 localhost.localdomain systemd[1]: Failed to start Percona XtraDB Cluster with config /etc/sysconfig/mysql.bootstrap.
-- Subject: Unit mysql@bootstrap.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mysql@bootstrap.service has failed.
--
-- The result is failed.
ene 14 11:03:19 localhost.localdomain systemd[1]: Unit mysql@bootstrap.service entered failed state.
ene 14 11:03:20 localhost.localdomain sshd[25272]: Failed password for root from 103.41.124.18 port 34714 ssh2
ene 14 11:03:20 localhost.localdomain sshd[25272]: Received disconnect from 103.41.124.18: 11: [preauth]
ene 14 11:03:20 localhost.localdomain sshd[25272]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.41.124.18 user=root
ene 14 11:03:22 localhost.localdomain unix_chkpwd[25333]: password check failed for user (root)
ene 14 11:03:22 localhost.localdomain sshd[25331]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.41.124.18 user=root
ene 14 11:03:22 localhost.localdomain sshd[25331]: pam_succeed_if(sshd:auth): requirement "uid >= 1000" not met by user "root"
[root@localhost data]#

Any ideas?
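For failures like this, the systemd message about the pid file is usually only a symptom. A hedged first step (paths assumed from the datadir in the config above, adjust as needed) is to read the full unit journal and mysqld's own error log, which normally contains the real reason the bootstrap aborted:

journalctl -u mysql@bootstrap.service --no-pager | tail -n 100   # full log for the bootstrap attempt
sudo tail -n 200 /var/lib/mysql/*.err                            # mysqld error log; exact file name depends on hostname/log-error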

Benchmark with sysbench 0.5

Latest Forum Posts - January 13, 2015 - 6:49pm
When I ran a test on Percona Server 5.5.35 with sysbench 0.5,
I found that many threads lasted a long time:
root>show processlist;
+------+------+-----------------+--------+---------+------+----------+------------------------------------------------------------------------------------------------------+-----------+---------------+-----------+
| Id | User | Host | db | Command | Time | State | Info | Rows_sent | Rows_examined | Rows_read |
+------+------+-----------------+--------+---------+------+----------+------------------------------------------------------------------------------------------------------+-----------+---------------+-----------+
| 6535 | test | 127.0.0.1:62338 | sbtest | Query | 76 | Updating | UPDATE sbtest2 SET k=k+1 WHERE id=2643761 | 0 | 1 | 1 |
| 6536 | test | 127.0.0.1:62341 | sbtest | Query | 76 | update | INSERT INTO sbtest12 (id, k, c, pad) VALUES (4388288, 729827, '11814348648-19862261356-49042326463-7 | 0 | 0 | 0 |
| 6537 | test | 127.0.0.1:62339 | sbtest | Query | 76 | Updating | UPDATE sbtest13 SET k=k+1 WHERE id=1353444 | 0 | 1 | 1 |
| 6538 | test | 127.0.0.1:62340 | sbtest | Query | 76 | update | INSERT INTO sbtest7 (id, k, c, pad) VALUES (2586203, 4762462, '03409980337-30835298673-87271499567-0 | 0 | 0 | 0 |
| 6539 | test | 127.0.0.1:62342 | sbtest | Query | 76 | updating | DELETE FROM sbtest5 WHERE id=2586943 | 0 | 1 | 1 |
| 6540 | test | 127.0.0.1:62343 | sbtest | Query | 76 | Updating | UPDATE sbtest10 SET k=k+1 WHERE id=2425865 | 0 | 1 | 1 |
| 6541 | test | 127.0.0.1:62345 | sbtest | Query | 76 | update | INSERT INTO sbtest13 (id, k, c, pad) VALUES (2020063, 3216557, '92518267352-69783502520-55744444753- | 0 | 0 | 0 |
| 6542 | test | 127.0.0.1:62344 | sbtest | Query | 76 | Updating | UPDATE sbtest7 SET k=k+1 WHERE id=3771602 | 0 | 1 | 1 |
| 6543 | test | 127.0.0.1:62347 | sbtest | Query | 76 | Updating | UPDATE sbtest13 SET k=k+1 WHERE id=4793540 | 0 | 1 | 1 |
| 6544 | test | 127.0.0.1:62348 | sbtest | Query | 76 | updating | DELETE FROM sbtest3 WHERE id=4068758 | 0 | 1 | 1 |
| 6545 | test | 127.0.0.1:62353 | sbtest | Query | 76 | Updating | UPDATE sbtest8 SET k=k+1 WHERE id=1162365 | 0 | 1 | 1 |
| 6546 | test | 127.0.0.1:62354 | sbtest | Query | 76 | updating | DELETE FROM sbtest9 WHERE id=1937572 | 0 | 1 | 1 |
| 6547 | test | 127.0.0.1:62349 | sbtest | Query | 76 | updating | DELETE FROM sbtest8 WHERE id=4261626 | 0 | 1 | 1 |
| 6548 | test | 127.0.0.1:62350 | sbtest | Query | 76 | update | INSERT INTO sbtest7 (id, k, c, pad) VALUES (1489188, 2729824, '43180915332-55361622607-08953597333-2 | 0 | 0 | 0 |
| 6549 | test | 127.0.0.1:62352 | sbtest | Query | 76 | update | INSERT INTO sbtest5 (id, k, c, pad) VALUES (729929, 2956969, '03814822121-48667714674-14509476873-54 | 0 | 0 | 0 |
| 6550 | test | 127.0.0.1:62356 | sbtest | Query | 76 | Updating | UPDATE sbtest9 SET k=k+1 WHERE id=2232675 | 0 | 1 | 1 |
| 6553 | root | 127.0.0.1:62424 | NULL | Query | 0 | NULL | show processlist | 0 | 0 | 0 |
+------+------+-----------------+--------+---------+------+----------+------------------------------------------------------------------------------------------------------+-----------+---------------+-----------+


The test script:
sysbench --db-driver=mysql --test=/work/soft/sysbench/sysbench/tests/db/oltp.lua \
--mysql-host=127.0.0.1 --mysql-port=3306 \
--mysql-user=test --mysql-password=test --mysql-db=sbtest \
--mysql-table-engine=innodb --mysql-engine-trx=yes \
--oltp-test-mode=complex \
--oltp-read-only=off \
--oltp-reconnect-mode=random \
--oltp-table-size=5000000 \
--max-time=60 \
--max-requests=0 \
--num-threads=16 \
--report-interval=1 \
--rand-init=on --oltp_tables_count=16 --rand-type=uniform \
run


(CPU:16 cores
mem: 64G (BP=25G)
disk: SAS)
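When statements sit in the Updating/update state for over a minute like this, it usually helps to see what the open transactions are actually waiting on. A hedged sketch, using the credentials from the sysbench command above:

# Open InnoDB transactions and the statement each one is running
mysql -utest -ptest -e "SELECT trx_mysql_thread_id, trx_state, trx_started, trx_query FROM information_schema.INNODB_TRX ORDER BY trx_started"

# The SEMAPHORES and FILE I/O sections often explain long write stalls on slow disks
mysql -utest -ptest -e "SHOW ENGINE INNODB STATUS\G"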

Zabbix template on latest zabbix server

Latest Forum Posts - January 13, 2015 - 4:10pm
Hi,

The current Zabbix template documentation says testing was done on Zabbix version 2.0.9.

Are there known problems if I install the template on a more recent version of Zabbix?

thank you.

Percona Live 2015 conference sessions announced!

Latest MySQL Performance Blog posts - January 13, 2015 - 9:01am

Today we announced the full conference sessions schedule for April’s Percona Live MySQL Conference & Expo 2015 and this year’s event, once again at the Hyatt Regency Santa Clara and Santa Clara Convention Center, looks to be the biggest yet with networking and learning opportunities for MySQL professionals and enthusiasts at all levels.

Conference sessions will run April 14-16 following each morning’s keynote addresses (the keynotes have yet to be announced). The 2015 conference features a variety of formal tracks and sessions related to high availability, DevOps, programming, performance optimization, replication and backup. They’ll also cover MySQL in the cloud, MySQL and NoSQL, MySQL case studies, security (a very hot topic), and “what’s new” in MySQL.

The sessions will be delivered by top MySQL practitioners at some of the world’s leading MySQL vendors and users, including Oracle, Facebook, Google, LinkedIn, Twitter, Yelp, Percona and MariaDB.

Sessions will include:

  • “Better DevOps with MySQL and Docker,” Sunny Gleason, Founder, SunnyCloud
  • “Big Transactions on Galera Cluster,” Seppo Jaakola, CEO, Codership
  • “Database Defense in Depth,” Geoffrey Anderson, Database Operations Engineer, Box, Inc.
  • “The Database is Down, Now What?” Jeremy Tinley, Senior MySQL Operations Engineer, Etsy.com
  • “Encrypting MySQL data at Google,” Jeremy Cole, Sr. Systems Engineer, and Jonas Oreland, Software Developer, Google
  • “High-Availability using MySQL Fabric,” Mats Kindahl, Senior Principal Software Developer, MySQL Group, Oracle
  • “High Performance MySQL choices in Amazon Web Services: Beyond RDS,” Andrew Shieh, Director of Operations, SmugMug
  • “How to Analyze and Tune MySQL Queries for Better Performance,” Øystein Grøvlen, Senior Principal Software Engineer, Oracle
  • “InnoDB: A journey to the core III,” Davi Arnaut, Sr. Software Engineer, LinkedIn, and Jeremy Cole, Sr. Systems Engineer, Google, Inc.
  • “Meet MariaDB 10.1,” Sergei Golubchik, Chief Architect, MariaDB
  • “MySQL 5.7 Performance: Scalability & Benchmarks,” Dimitri Kravtchuk, MySQL Performance Architect, Oracle
  • “MySQL at Twitter – 2015,” Calvin Sun, Sr. Engineering Manager, and Inaam Rana, Staff Software Engineer, Twitter
  • “MySQL Automation at Facebook Scale,” Shlomo Priymak, MySQL Database Engineer, Facebook
  • “MySQL Cluster Performance Tuning – The 7.4.x Talk,” Johan Andersson, CTO, and Alex Yu, Vice President of Products, Severalnines AB
  • “MySQL for Facebook Messenger,” Domas Mituzas, Database Engineer, Facebook
  • “MySQL Indexing, How Does It Really Work?” Tim Callaghan, Vice President of Engineering, Tokutek
  • “MySQL in the Hosted Cloud,” Colin Charles, Chief Evangelist, MariaDB
  • “MySQL Security Essentials,” Ronald Bradford, Founder & CEO, EffectiveMySQL
  • “Scaling MySQL in Amazon Web Services,” Mark Filipi, MySQL Team Lead, Pythian
  • “Online schema changes for maximizing uptime,” David Turner, DBA, Dropbox, and Ben Black, DBA, Tango
  • “Upgrading to MySQL 5.6 @ scale,” Tom Krouper, Staff Database Administrator, Twitter

Of course Percona Live 2015 will also include several hours of hands-on, intensive tutorials – led by some of the top minds in MySQL. We had a post talking about the tutorials in more detail last month. Since then we added two more: “MySQL devops: initiation on how to automate MySQL deployment” and “Deploying MySQL HA with Ansible and Vagrant.” And of course Dimitri Vanoverbeke, Liz van Dijk and Kenny Gryp will once again this year host the ever-popular “Operational DBA in a Nutshell! Hands On Tutorial!”

Yahoo, VMWare, Box and Yelp are among the industry leaders sponsoring the event, and additional sponsorship opportunities are still available.

Worldwide interest in Percona Live continues to soar, and this year, for the first time, the conference will run in parallel with OpenStack Live 2015, a new Percona conference scheduled for April 13 and 14. That event will be a unique opportunity for OpenStack users and enthusiasts to learn from leading OpenStack experts in the field about top cloud strategies, improving overall cloud performance, and operational best practices for managing and optimizing OpenStack and its MySQL database core.

Best of all, your full Percona Live ticket gives you access to the OpenStack Live conference! So why not save some $$? Early Bird registration discounts are available through Feb. 1, 2015 at 11:30 p.m. PST.

I hope to see you in April!

The post Percona Live 2015 conference sessions announced! appeared first on MySQL Performance Blog.

Percona XtraDB Cluster recommendation?

Latest Forum Posts - January 13, 2015 - 2:27am
Hello, we are building a project that will receive about 400,000 rows daily. We have seen PXC benchmarks, and we are unsure whether it will keep up with the write statements.

Does XtraDB still have InnoDB's problem with ibdata1 growth?

The servers are two virtualized Dell R620s, with RAID 10 and 128GB RAM.

Is our project feasible with PXC?

Thanks!
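On the ibdata1 question: XtraDB behaves like InnoDB here, and the usual mitigation is innodb_file_per_table, which keeps table data in per-table .ibd files instead of growing the shared ibdata1 (it is on by default in the 5.6 series). A quick, hedged check:

mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_file_per_table'"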

Percona Server 5.6.22-71.0 is now available

Latest MySQL Performance Blog posts - January 12, 2015 - 10:07am

Percona is glad to announce the release of Percona Server 5.6.22-71.0 on January 12, 2015. Download the latest version from the Percona web site or from the Percona Software Repositories.

Based on MySQL 5.6.22, including all the bug fixes in it, Percona Server 5.6.22-71.0 is the current GA release in the Percona Server 5.6 series. Percona Server is open-source and free – and this is the latest release of our enhanced, drop-in replacement for MySQL. Complete details of this release can be found in the 5.6.22-71.0 milestone on Launchpad.

New Features:

  • Percona Server has implemented improved slow log reporting for queries in stored procedures.
  • The TokuDB storage engine package has been updated to version 7.5.4. Percona Server with an older version of TokuDB could hit an early scaling limit when the binary log was enabled. TokuDB 7.5.4 fixes this problem by using the hints supplied by the binary log group commit algorithm to avoid fsync’ing its recovery log during the commit phase of the two-phase commit algorithm that MySQL uses for transactions when the binary log is enabled.

Bugs Fixed:

  • Debian and Ubuntu init scripts no longer have a hardcoded server startup timeout. This has been done to accommodate situations where server startup takes a very long time, for example, due to a crash recovery or buffer pool dump restore. Bugs fixed #1072538 and #1328262.
  • A read-write workload on compressed InnoDB tables might have caused an assertion error. Bug fixed #1268656.
  • Selecting from the GLOBAL_TEMPORARY_TABLES table while running an online ALTER TABLE in parallel could lead to a server crash. Bug fixed #1294190.
  • A wrong stack size calculation could lead to a server crash when Performance Schema tables were storing a large amount of data or when the server was under a highly concurrent load. Bug fixed #1351148 (upstream #73979).
  • A query on an empty table with a BLOB column may crash the server. Bug fixed #1384568 (upstream #74644).
  • A read-write workload on compressed InnoDB tables might have caused an assertion error. Bug fixed #1395543.
  • If HandlerSocket was enabled, the server would hang during shutdown. Bug fixed #1397859.
  • The default MySQL configuration file, my.cnf, was not installed during a new installation on CentOS. Bug fixed #1405667.
  • The query optimizer did not pick a covering index for some ORDER BY queries. Bug fixed #1394967 (upstream #57430).
  • SHOW ENGINE INNODB STATUS was displaying two identical TRANSACTIONS sections. Bug fixed #1404565.
  • A race condition in Multiple user level locks per connection implementation could cause a deadlock. Bug fixed #1405076.

Other bugs fixed: #1394357, #1337251, #1399174, #1396330 (upstream #74987), and #1401776 (upstream #75189).

Known Issues:
If you’re upgrading the TokuDB package on CentOS 5/6, you’ll need to restart the MySQL service after the upgrade; otherwise the TokuDB storage engine won’t be initialized.
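A quick, hedged way to confirm the engine came back after that restart (a generic check, not taken from the release notes):

mysql -e "SHOW ENGINES" | grep -i tokudb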

Release notes for Percona Server 5.6.22-71.0 are available in the online documentation. Please report any bugs on the Launchpad bug tracker.

The post Percona Server 5.6.22-71.0 is now available appeared first on MySQL Performance Blog.

Percona Server 5.5.41-37.0 is now available

Latest MySQL Performance Blog posts - January 12, 2015 - 9:43am


Percona is glad to announce the release of Percona Server 5.5.41-37.0 on January 9, 2015. Based on MySQL 5.5.41, including all the bug fixes in it, Percona Server 5.5.41-37.0 is now the current stable release in the 5.5 series.

Percona Server is open-source and free. Details of the release can be found in the 5.5.41-37.0 milestone on Launchpad. Downloads are available here and from the Percona Software Repositories.

New Features:

Bugs Fixed:

  • Debian and Ubuntu init scripts no longer have a hardcoded server startup timeout. This has been done to accommodate situations where server startup takes a very long time, for example, due to a crash recovery or buffer pool dump restore. Bugs fixed #1072538 and #1328262.
  • If HandlerSocket was enabled, the server would hang during shutdown. Bug fixed #1319904.
  • A wrong stack size calculation could lead to a server crash when Performance Schema tables were storing a large amount of data or when the server was under a highly concurrent load. Bug fixed #1351148 (upstream #73979).
  • Values of IP and DB fields in the Audit Log Plugin were incorrect. Bug fixed #1379023.
  • Percona Server 5.5 would fail to build with GCC 4.9.1 (such as bundled with Ubuntu Utopic) in debug configuration. Bug fixed #1396358 (upstream #75000).
  • The default MySQL configuration file, my.cnf, was not installed during a new installation on CentOS. Bug fixed #1405667.
  • A session on a server in mixed-mode binlogging would switch to row-based binlogging whenever a temporary table was created and then queried. This switch would last until the session ended or until all temporary tables in the session were dropped. This was unnecessarily restrictive and has been fixed so that only the statements involving temporary tables are logged in row-based format, while the rest of the statements continue to use statement-based logging. Bug fixed #1313901 (upstream #72475).
  • Purging bitmaps exactly up to the last tracked LSN would abort XtraDB changed page tracking. Bug fixed #1382336.
  • mysql_install_db script would silently ignore any mysqld startup failures. Bug fixed #1382782 (upstream #74440).

Other bugs fixed: #1067103, #1394357, #1282599, #1335590, #1369950, #1401791 (upstream #73281), and #1396330 (upstream #74987).

(Please also note that the Percona Server 5.6 series is the latest General Availability series and the current GA release is 5.6.22-71.0.)

Release notes for Percona Server 5.5.41-37.0 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

The post Percona Server 5.5.41-37.0 is now available appeared first on MySQL Performance Blog.

percona-server-server tries to downgrade from 5.6 to 5.5

Latest Forum Posts - January 12, 2015 - 6:23am
Hi Folks,

I am seeing strange behaviour when trying a dist-upgrade. It seems that percona-server-server wants to go from 5.6 to 5.5:

# apt-get dist-upgrade -uV
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following NEW packages will be installed:
   libperconaserverclient18 (5.5.41-rel37.0-727.precise)
   percona-server-common-5.5 (5.5.41-rel37.0-727.precise)
The following packages have been kept back:
   percona-server-server (5.6.21-70.1-698.precise => 5.5.41-rel37.0-727.precise)
0 upgraded, 2 newly installed, 0 to remove and 1 not upgraded.
Need to get 0 B/795 kB of archives.
After this operation, 3,152 kB of additional disk space will be used.

# dpkg -l | grep perco
ii  libperconaserverclient18.1   5.6.21-70.1-698.precise   Percona Server database client library
ii  percona-server-client-5.6    5.6.21-70.1-698.precise   Percona Server database client binaries
ii  percona-server-common-5.6    5.6.21-70.1-698.precise   Percona Server database common files (e.g. /etc/mysql/my.cnf)
ii  percona-server-server        5.6.21-70.1-698.precise   Percona Server database server
ii  percona-server-server-5.6    5.6.21-70.1-698.precise   Percona Server database server binaries
ii  percona-xtrabackup           2.2.7-5050-1.precise      Open source backup tool for InnoDB and XtraDB
ii  xtrabackup                   2.2.7-5050-1.precise      Transitional package for percona-xtrabackup

# apt-cache show percona-server-server
Package: percona-server-server
Source: percona-server-5.5
Version: 1:5.5.41-rel37.0-727.precise
Architecture: amd64
Maintainer: Percona Server Development Team <mysql-dev@percona.com>
Installed-Size: 56
Depends: percona-server-server-5.5
Homepage: http://www.percona.com/software/percona-server/
Priority: extra
Section: database
Filename: pool/main/p/percona-server-5.5/percona-server-server_5.5.41-rel37.0-727.precise_amd64.deb
Size: 11898
SHA256: f9dd741ee0fa5e2692acad665292f71dbcf4d37a19a172c67a7e654365528b07
SHA1: bd6fa791bb34190d7e8b7678657abc012289cb14
MD5sum: 54af128d137d306d9c3a50692758147b
Description: Percona Server database server (metapackage depending on the latest version)
 This is an empty package that depends on the current "best" version of
 percona-server-server (currently percona-server-server-5.5), as determined by
 the Percona Server maintainers. Install this package if in doubt about which
 Percona Server version you need. That will install the version recommended by
 the package maintainers.
 .
 Percona Server is a fast, stable and true multi-user, multi-threaded SQL
 database server. SQL (Structured Query Language) is the most popular database
 query language in the world. The main goals of Percona Server are speed,
 robustness and ease of use.

Package: percona-server-server
Status: install ok installed
Priority: extra
Section: database
Installed-Size: 56
Maintainer: Percona Server Development Team <mysql-dev@percona.com>
Architecture: amd64
Source: percona-server-5.6
Version: 5.6.21-70.1-698.precise
Depends: percona-server-server-5.6
Description: Percona Server database server (metapackage depending on the latest version)
 This is an empty package that depends on the current "best" version of
 percona-server-server (currently percona-server-server-5.6), as determined by
 the Percona Server maintainers. Install this package if in doubt about which
 Percona Server version you need. That will install the version recommended by
 the package maintainers.
 .
 Percona Server is a fast, stable and true multi-user, multi-threaded SQL
 database server. SQL (Structured Query Language) is the most popular database
 query language in the world. The main goals of Percona Server are speed,
 robustness and ease of use.
Homepage: http://www.percona.com/software/percona-server/

Any idea about this issue?

Thanks in advance.
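One detail visible in the apt-cache output above: the 5.5 metapackage carries an epoch (1:5.5.41-rel37.0-727.precise), which apt sorts higher than the epoch-less 5.6.21-70.1-698.precise, so the 5.5 build of the metapackage looks "newer". A hedged workaround is to check where each candidate comes from and pin the metapackage to the 5.6 version (the file name and priority below are only examples):

apt-cache policy percona-server-server percona-server-server-5.6

cat <<'EOF' | sudo tee /etc/apt/preferences.d/percona-server-56
Package: percona-server-server
Pin: version 5.6*
Pin-Priority: 1001
EOF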

Where to put mongoDB credentials?

Latest Forum Posts - January 12, 2015 - 5:43am
Hello,

I'm trying to monitor a MongoDB instance using the MongoDB Monitoring Template for Cacti, but I don't know where to enter the username and password for Mongo. I'm already using the OpenVZ template and everything works. I couldn't find anything in the install notes, and in ss_get_by_ssh.php there is nothing for Mongo.

Any advice?

Segmentation fault when running XtraBackup version 2.2.6 and 2.2.7

Latest Forum Posts - January 11, 2015 - 12:18am
I'm getting a segmentation fault when running XtraBackup on MySQL 5.5 with both v2.2.6 and v2.2.7 on CentOS. I tried a tried-and-true script and it failed; I used another and it failed; I tried running it manually and it failed.

I've not experienced this before.

Segmentation fault when running XtraBackup version 2.2.6 and 2.2.7

Latest Forum Posts - January 11, 2015 - 12:15am
I am used to using the innobackupex-runner.sh script, and it segfaulted when running a full backup. So I searched for another script and found run-innobackupex.sh; again, it segfaulted in the same place.
Basically I'm running (I tried this part manually)
innobackupex --user=myuser --password=<secret> --include=.*[.].* /opt/backup/xtrabackup/base > /tmp/innobackupex-runner.3671.tmp
I even tried just running this:
innobackupex --user=myuser --password=<secret> /opt/backup/xtrabackup/base
The segfault number changes. (I do have permission to write to this directory.)
./run-innobackupex.sh: line 91: 3911 Segmentation fault innobackupex $USEROPTIONS $FILTERTABLES $BASEBACKDIR > $TMPFILE 2>&1
failed:
---------- ERROR OUTPUT from ----------
InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
and Percona LLC and/or its affiliates 2009-2013. All Rights Reserved.
This software is published under
the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.
Get the latest version of Percona XtraBackup, documentation, and help resources:
http://www.percona.com/xb/p
150111 00:08:33 innobackupex: Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_group=xtrabackup' as 'myuser' (using password: YES).
150111 00:08:33 innobackupex: Connected to MySQL server
150111 00:08:33 innobackupex: Executing a version check against the server...
----------------------------------------------
Details: Percona server 5.5.37
Percona Server (GPL), Release 35.1, Revision 666
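Since the output stops right after the version check starts, one hedged narrowing step (a guess at where it dies, not a confirmed fix) is to skip that check, and to allow a core dump so a backtrace can be attached to a bug report:

ulimit -c unlimited    # allow a core dump for a backtrace
innobackupex --no-version-check --user=myuser --password=<secret> /opt/backup/xtrabackup/base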

pt-table-checksum binlog_format=ROW issue

Latest Forum Posts - January 9, 2015 - 9:59am
Hi all,

Please help, I am not able to make the pt-table-checksum tool work.
I have a simple master-slave environment and I am trying to check data consistency between them.

I am issuing the following command:

pt-table-checksum --user=xxxxxx --password=xxxxx

And I get the following error message:

01-09T11:54:08 Failed to /*!50108 SET @@binlog_format := 'STATEMENT'*/: DBD::mysql::db do failed: Variable 'binlog_format' can't be set to the value of 'STATEMENT' [for Statement "/*!50108 SET @@binlog_format := 'STATEMENT'*/"] at /usr/bin/pt-table-checksum line 9148.
This tool requires binlog_format=STATEMENT, but the current binlog_format is set to ROW and an error occurred while attempting to change it. If running MySQL 5.1.29 or newer, setting binlog_format requires the SUPER privilege. You will need to manually set binlog_format to 'STATEMENT' before running this tool.
[root@hades admin]#
[SSH] ERROR: An existing connection was forcibly closed by the remote host.
[SSH] FAIL: Write failed: An existing connection was forcibly closed by the remote host.

Our master is a cluster running under MariaDB Galera 10 with binlog_format = ROW, and I cannot change that because it is mandatory, as you can read here: https://mariadb.com/kb/en/mariadb/do...alera-cluster/

I have been googling how to run this command, without any success. Some people recommend disabling the binlog format check with --nocheck-binlog-format, but that didn't help:

# pt-table-checksum --user=xxxxx --password=xxxxxx --nocheck-binlog-format
01-09T12:52:26 Failed to /*!50108 SET @@binlog_format := 'STATEMENT'*/: DBD::mysql::db do failed: Variable 'binlog_format' can't be set to the value of 'STATEMENT' [for Statement "/*!50108 SET @@binlog_format := 'STATEMENT'*/"] at /usr/bin/pt-table-checksum line 9148.
This tool requires binlog_format=STATEMENT, but the current binlog_format is set to ROW and an error occurred while attempting to change it. If running MySQL 5.1.29 or newer, setting binlog_format requires the SUPER privilege. You will need to manually set binlog_format to 'STATEMENT' before running this tool.

Before giving up, I would like to know if anybody has faced the same issue and how they solved it.
Thanks

More info
I am using Percona Toolkit version 2.2.12
Redhat 6.5
MariaDB 10

Managing data using open source technologies? Learn what’s hot in 2015!

Latest MySQL Performance Blog posts - January 8, 2015 - 12:17pm

Whether you’re looking at the overall MySQL ecosystem or the all-data management landscape, the choice of technologies has never been larger than it is in 2015.

Having so many options is great but it also can be very hard to make a selection. I’m going to help narrow the list next week during a Webinar titled, “Open Source Technologies you should evaluate in 2015,” January 14 at 10 a.m. PST.

During the hour I’ll share which technologies I think are worthy of consideration in 2015 – open source and proprietary technologies that allow you to manage your data more easily, increase development pace, scale better and improve availability and security. I’ll also discuss recent developments in MySQL, NoSQL and NewSQL, Cloud and general advances in hardware.

Specifically, some of the areas I’ll address will include:

  • Cloud-based Database as a Service (DBaaS) such as Amazon RDS for MySQL, Amazon RDS for Aurora, Google Cloud, and OpenStack Trove
  • MySQL 5.7
  • Hybrid database environments with MySQL plus MongoDB or other NoSQL solutions
  • Advanced Monitoring capabilities such as Percona Cloud Tools
  • Other performance enhancements such as solid state devices (SSD) and the TokuDB storage engine

I hope to see you next week! (Register now to reserve your spot!)

The post Managing data using open source technologies? Learn what’s hot in 2015! appeared first on MySQL Performance Blog.

local restore

Latest Forum Posts - January 8, 2015 - 10:18am
Hi all,
I am very new to Percona and MySQL in general. I have a backup directory "2015-01-08 00-00-00" inside of which I see directories of all local database backups. I need to be able to restore one of the databases into another. How do I go about it?

Thanks,

Double MySQL issue

Latest Forum Posts - January 8, 2015 - 6:31am
Hello,

After installing Percona, it seems that "service mysql start" and /etc/init.d/mysql are separate. "service mysql start" starts the old MySQL instance, and /etc/init.d/mysql starts the Percona instance. I verified that no mysql-server package is installed, so it's weird behaviour.

root@MP2:/etc/init.d# ./mysql status
* Percona XtraDB Cluster up and running
root@MP2:/etc/init.d# service mysql status
mysql stop/waiting
root@MP2:/etc/init.d#

root@MP2:/etc/init# apt-get remove mysql
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package mysql
root@MP2:/etc/init# apt-get remove mysql-server
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package 'mysql-server' is not installed, so not removed

This is on Ubuntu 14.04, and we experience the same on both cluster members. Can somebody advise?

Thanks
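A hedged thing to check: on Ubuntu 14.04 the service command dispatches to an Upstart job in /etc/init/ before falling back to the SysV script in /etc/init.d/, so a leftover mysql Upstart job from a previous installation would produce exactly this split behaviour:

ls -l /etc/init/mysql.conf /etc/init.d/mysql   # is there still an Upstart job?
initctl status mysql 2>/dev/null               # what Upstart thinks "mysql" is
dpkg -S /etc/init/mysql.conf 2>/dev/null       # which package, if any, owns it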

Datafiles missing and inconsistent size on Slave host

Latest Forum Posts - January 8, 2015 - 4:11am
Hi Team,

We have two-way replication enabled on two hosts running MySQL 5.5.23, and replication is working fine. However, as a precautionary measure, we have tried to replicate the database onto two other hosts using the native replication method (mixed).

121G Master DB
121G Slave I (master master enabled DB)
58G Slave-II
69G Slave-III

Master host files:
-rw-r--r-- 1 mysql mysql 37G Jan 8 11:09 abc.ibd
-rw-r--r-- 1 mysql mysql 50G Jan 8 11:09 xyz.ibd

Slave-I files:
-rw-rw---- 1 mysql mysql 37G Jan 8 11:14 abc.ibd
-rw-rw---- 1 mysql mysql 50G Jan 8 11:14 xyz.ibd

Slave-II files:
-rw-rw---- 1 mysql mysql 37G Jan 8 11:18 abc.ibd
NO xyz.ibd file

Slave-III files:
-rw-rw---- 1 mysql mysql 12G Jan 8 11:21 abc.ibd
-rw-rw---- 1 mysql mysql 26G Jan 8 11:21 xyz.ibd

We observed that on one of the slave hosts the .ibd file for a table of around 50GB is not showing up, but when we check the record count, the records in the table are all there.

Similarly, on the second slave the .ibd file sizes do not match the master's, but the strange thing is that the row counts of both huge tables match the master.

We need to understand what went wrong on the slave hosts that causes the files not to show up, and also the inconsistency in file size.

Best Regards,
Krishna
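One hedged explanation for a missing xyz.ibd with intact rows is that the table on that slave lives in the shared ibdata1 tablespace (for example, it was created there while innodb_file_per_table was off), while smaller but consistent .ibd files on another slave can simply mean less fragmentation after the table was rebuilt. A quick comparison across hosts (datadir and database name below are placeholders):

mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_file_per_table'"
ls -lh /var/lib/mysql/<dbname>/ | grep -Ei 'abc|xyz'
ls -lh /var/lib/mysql/ibdata1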

Commit in slow log, please help!

Latest Forum Posts - January 7, 2015 - 6:18pm
Hi,

I checked the MySQL slow log,
and COMMIT commands appear in it:

# User@Host: xxxxxxx @ [127.0.0.1]
# Query_time: 1.195461 Lock_time: 0.000000 Rows_sent: 0 Rows_examined: 0
SET timestamp=1420594562;
commit;
# User@Host: xxxxxxx @ [127.0.0.1]
# Query_time: 1.189559 Lock_time: 0.000000 Rows_sent: 0 Rows_examined: 0
SET timestamp=1420594562;
commit;
# User@Host: xxxxxxx @ [127.0.0.1]
# Query_time: 1.123265 Lock_time: 0.000000 Rows_sent: 0 Rows_examined: 0
SET timestamp=1420594562;
commit;
# User@Host: xxxxxxx @ [127.0.0.1]
# Query_time: 1.034187 Lock_time: 0.000000 Rows_sent: 0 Rows_examined: 0
SET timestamp=1420594562;
commit;
# User@Host: xxxxxxx @ [127.0.0.1]
# Query_time: 1.007584 Lock_time: 0.000000 Rows_sent: 0 Rows_examined: 0
SET timestamp=1420594562;
commit;

MySQL version: 5.5
All tables use the InnoDB engine.
my.cnf:
sync_binlog 0
innodb_buffer_pool_size 4294967296
innodb_commit_concurrency 0
innodb_concurrency_tickets 500
innodb_file_per_table OFF
innodb_flush_log_at_trx_commit 2
innodb_flush_method
innodb_io_capacity 500
innodb_log_file_size 1073741824
innodb_read_io_threads 32


Data file size: 5G

Centos 5.6, SSD , ext4
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 205G 37G 158G 19% /
tmpfs 3.9G 0 3.9G 0% /dev/shm
/dev/sda1 477M 46M 406M 11% /boot



[123@123:~]$ free -m
total used free shared buffers cached
Mem: 7827 7551 275 26 137 1975
-/+ buffers/cache: 5438 2388
Swap: 15999 81 15918


Can anyone help me?

Thanks,
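COMMIT time is mostly time spent flushing the transaction log, so a hedged first look is at the redo-log flush counters, interpreted against the innodb_flush_log_at_trx_commit=2 and sync_binlog=0 settings listed above:

mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_log_waits'"           # waits caused by a too-small log buffer
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_os_log_fsyncs'"       # how often the redo log is fsynced
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_data_pending_fsyncs'" # outstanding data-file fsyncs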

Percona XtraDB Cluster susceptible to disk-bound nodes

Latest Forum Posts - January 7, 2015 - 11:08am
We have a three node cluster but for the purposes of experimentation I'm only sending traffic to one node (which I'll call the active node, the other two being inactive).

If I perform disk-heavy operations on the inactive nodes, the active node gets bogged down behind a lot of pending WSREP commits.

All three nodes are on RAID10 EBS volumes; however, only one of them is running with Provisioned IOPS. I'm going to replace the storage on the other two nodes so that they're also using Provisioned IOPS and repeat the experiments, but I was wondering if there is something else I should be looking into?

Cheers
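This is consistent with Galera flow control: a node whose receive/apply queue grows past gcs.fc_limit asks the rest of the cluster to pause, so a disk-bound "inactive" node can throttle the active one. Counters worth watching on each node (a hedged sketch):

mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_flow_control%'"     # paused fraction and flow-control messages sent/received
mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_recv_queue%'" # queue depth on the slow node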

Does pt-table-checksum work in circular replication environments?

Latest Forum Posts - January 7, 2015 - 7:47am
Hello there,

I have 3 masters in circular replication. I run pt-table-checksum in one of the masters only.
A->B->C->A

The queries do get replicated to the "next" server, B, but I don't think they are being replicated "properly" to the third server, C. The checksums are always identical to the B server's.

Thanks,
Rodrigo.
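A hedged first thing to verify: for A's checksum statements to travel A -> B -> C, the intermediate master B must write the events it applies from A to its own binlog, i.e. log_slave_updates must be enabled on B, otherwise they never reach C (the hostname below is a placeholder):

mysql -h b.example.com -e "SHOW GLOBAL VARIABLES LIKE 'log_slave_updates'"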

Django with time zone support and MySQL

Latest MySQL Performance Blog posts - January 7, 2015 - 3:00am

This is yet another story of the Django web framework with time zone support and the pain of dealing with Python datetimes and MySQL on the backend. In other words, offset-naive vs. offset-aware datetimes.

Briefly, more about the problem: after reading the official documentation about time zones, it becomes clear that in order to display a Python datetime in the desired time zone you need to make it tz-aware first and then render it in that time zone.

Here is the first issue: tz-aware in what time zone? MySQL stores timestamps in UTC and converts from/to the current time zone on storage/retrieval. By default, the current time zone is the server’s, and it can be changed in MySQL globally, per connection, etc. So it is not obvious what the tz of a value originally was before it was stored in UTC, and if you later change the server or session tz, it only gets messier. Unlike MySQL, PostgreSQL has a timestamp with time zone data type, so Django can auto-detect the tz and make datetimes tz-aware automatically via the tzinfo attribute.

There are many solutions on the web… for example, an extension that detects the tz in the UI with JavaScript and passes it back to the backend, allowing you to work with tz-aware data. However, I need something simpler, and mainly with fewer changes to the existing code base.

Here is my case. The server and MySQL are on UTC. That’s cool, and it removes the first barrier. I store Python datetimes in MySQL timestamp columns, also in UTC per the database time. In any case, it is a best practice to store time in UTC. I have some read-only pages on the web app and want to show datetimes according to the user’s tz. It looks like a simple task, but with MySQL on the backend, all my data coming from models has a naive datetime type assigned. So I need to find a way to easily make all my DateTimeField fields UTC-aware (add the tzinfo attribute) and some convenient method of showing datetimes in the user’s tz, while still having access to UTC or naive datetimes for calculations on the backend. Therefore, I will still be doing all the calculations in UTC and showing tz-aware values to users only on the UI side.

This is an example of middleware that gets user’s tz from the database and sets in the session, so it can be retrieved anywhere using get_current_timezone():

from django.utils.timezone import activate
from myapp.models import UserInfo

class TimezoneMiddleware(object):
    """Middleware that is run on each request before the view is executed.
    Activate user's timezone for further retrieval by get_current_timezone()
    or creating tz-aware datetime objects beforehand.
    """
    def process_request(self, request):
        session_tz = request.session.get('timezone')
        # If tz has been already set in session, let's activate it
        # and avoid an SQL query to retrieve it on each request.
        if session_tz:
            activate(session_tz)
        else:
            try:
                # Get user's tz from the database.
                uinfo = UserInfo.objects.get(user_id=request.user.id,
                                             user_id__isnull=False)
                if uinfo.timezone:
                    # If tz is configured by the user, let's set it for the session.
                    request.session['timezone'] = uinfo.timezone
                    activate(uinfo.timezone)
            except UserInfo.DoesNotExist:
                pass

This is an excerpt from models.py:

import datetime  # needed for strftime below

from django.db import models
from django.utils.timezone import get_current_timezone, make_aware, utc

def localize_datetime(dtime):
    """Makes DateTimeField value UTC-aware and returns datetime string
    localized in user's timezone in ISO format.
    """
    tz_aware = make_aware(dtime, utc).astimezone(get_current_timezone())
    return datetime.datetime.strftime(tz_aware, '%Y-%m-%d %H:%M:%S')

class Messages(models.Model):
    id = models.AutoField(primary_key=True)
    body = models.CharField(max_length=160L)
    created = models.DateTimeField(auto_now_add=True)

    @property
    def created_tz(self):
        return localize_datetime(self.created)
...

The “Messages” model has a “created” field (a timestamp in MySQL) and a property “created_tz”. That property reflects “created” in the user’s tz using the function localize_datetime(), which makes naive datetimes tz(UTC)-aware, converts them into the user’s tz set at the session level, and returns a string in ISO format. In my case, I don’t need the default RFC format that includes the +00:00 tz portion of a datetime with the tzinfo attribute, or even tz-aware datetimes to operate with. In the same way, I can have similar properties in all needed models, knowing they can be accessed by the same name with the “_tz” suffix.

Taking into account the above, I reference “created” for calculations in views or controllers and “created_tz” in templates or for JSON output. This way I don’t need to change all references of “created” to something like “make_aware(created, utc)”, or datetime.datetime.utcnow() to datetime.datetime.utcnow().replace(tzinfo=pytz.utc), across the code. The code changes in my app will be minimal: I introduce a custom property in the model and continue operating with UTC at the raw level:

# views.py
# Operating in UTC
msgs = Messages.objects.filter(
    created__gt=datetime.datetime.now() - datetime.timedelta(hours=24))

<!-- HTML template -->
{% for m in msgs %}
  {{ m.id }}. {{ m.body }} (added on {{ m.created_tz }})
{% endfor %}
* All times in user's tz.

I hope this article helps with your own explorations.
Happy New Year across all time zones!

The post Django with time zone support and MySQL appeared first on MySQL Performance Blog.
