
Feed aggregator

Segmentation fault when running XtraBackup version 2.2.6 and 2.2.7

Latest Forum Posts - January 11, 2015 - 12:18am
I'm getting a segmentation fault when running XtraBackup on MySQL 5.5 with both v2.2.6 and v2.2.7 on CentOS. I tried a tried-and-true script and it failed; I used another and it failed; I tried running it manually and it failed too.

I've not experienced this before.

Segmentation fault when running XtraBackup version 2.2.6 and 2.2.7

Latest Forum Posts - January 11, 2015 - 12:15am
I am used to using the innobackupex-runner.sh script, and it seg faulted when running a full backup. So I searched for another script and found run-innobackupex.sh; again it seg faulted in the same place.
Basically I'm running this (I tried this part manually):
innobackupex --user=myuser --password=<secret> --include=.*[.].* /opt/backup/xtrabackup/base > /tmp/innobackupex-runner.3671.tmp
I even tried just running this:
innobackupex --user=myuser --password=<secret> /opt/backup/xtrabackup/base
The seg fault number changes (I do have permission to write to this directory):
./run-innobackupex.sh: line 91: 3911 Segmentation fault innobackupex $USEROPTIONS $FILTERTABLES $BASEBACKDIR > $TMPFILE 2>&1
failed:
---------- ERROR OUTPUT from ----------
InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
and Percona LLC and/or its affiliates 2009-2013. All Rights Reserved.
This software is published under
the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.
Get the latest version of Percona XtraBackup, documentation, and help resources:
http://www.percona.com/xb/p
150111 00:08:33 innobackupex: Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_group=xtrabackup' as 'myuser' (using password: YES).
150111 00:08:33 innobackupex: Connected to MySQL server
150111 00:08:33 innobackupex: Executing a version check against the server...
----------------------------------------------
Details: Percona server 5.5.37
Percona Server (GPL), Release 35.1, Revision 666

pt-table-checksum binlog_format=ROW issue

Latest Forum Posts - January 9, 2015 - 9:59am
Hi all,

Please help, I am not able to make the pt-table-checksum tool work.
I have a simple master-slave environment and I am trying to check data consistency between them.

I am issuing the following command:

pt-table-checksum --user=xxxxxx --password=xxxxx

and I get the following error message:

01-09T11:54:08 Failed to /*!50108 SET @@binlog_format := 'STATEMENT'*/: DBD::mysql::db do failed: Variable 'binlog_format' can't be set to the value of 'STATEMENT' [for Statement "/*!50108 SET @@binlog_format := 'STATEMENT'*/"] at /usr/bin/pt-table-checksum line 9148.
This tool requires binlog_format=STATEMENT, but the current binlog_format is set to ROW and an error occurred while attempting to change it. If running MySQL 5.1.29 or newer, setting binlog_format requires the SUPER privilege. You will need to manually set binlog_format to 'STATEMENT' before running this tool.
[root@hades admin]# [SSH] ERROR: An existing connection was forcibly closed by the remote host.
[SSH] FAIL: Write failed: An existing connection was forcibly closed by the remote host.

Our master is a cluster running MariaDB Galera 10 with binlog_format = ROW, and I cannot change that because it is mandatory, as you can read here: https://mariadb.com/kb/en/mariadb/do...alera-cluster/

I have been googling how to run this command without any success. Some people recommend disabling the binlog format check with --nocheck-binlog-format, but that didn't do anything:

pt-table-checksum --user=xxxxx --password=xxxxxx --nocheck-binlog-format

01-09T12:52:26 Failed to /*!50108 SET @@binlog_format := 'STATEMENT'*/: DBD::mysql::db do failed: Variable 'binlog_format' can't be set to the value of 'STATEMENT' [for Statement "/*!50108 SET @@binlog_format := 'STATEMENT'*/"] at /usr/bin/pt-table-checksum line 9148.
This tool requires binlog_format=STATEMENT, but the current binlog_format is set to ROW and an error occurred while attempting to change it. If running MySQL 5.1.29 or newer, setting binlog_format requires the SUPER privilege. You will need to manually set binlog_format to 'STATEMENT' before running this tool.

Before giving up, I would like to know if anybody has faced the same issue and how they solved it.
Thanks

More info
I am using Percona Toolkit version 2.2.12
Redhat 6.5
MariaDB 10

Managing data using open source technologies? Learn what’s hot in 2015!

Latest MySQL Performance Blog posts - January 8, 2015 - 12:17pm

Whether you’re looking at the overall MySQL ecosystem or the all-data management landscape, the choice of technologies has never been larger than it is in 2015.

Having so many options is great but it also can be very hard to make a selection. I’m going to help narrow the list next week during a Webinar titled, “Open Source Technologies you should evaluate in 2015,” January 14 at 10 a.m. PST.

During the hour I'll share which technologies I think are worthy of consideration in 2015 – open source and proprietary technologies that allow you to manage your data more easily, increase development pace, scale better and improve availability and security. I'll also discuss recent developments in MySQL, NoSQL and NewSQL, the cloud and general advances in hardware.

Specifically, some of the areas I’ll address will include:

  • Cloud-based Database as a Service (DBaaS) such as Amazon RDS for MySQL, Amazon RDS for Aurora, Google Cloud, and OpenStack Trove
  • MySQL 5.7
  • Hybrid database environments with MySQL plus MongoDB or other NoSQL solutions
  • Advanced Monitoring capabilities such as Percona Cloud Tools
  • Other performance enhancements such as solid state devices (SSD) and the TokuDB storage engine

I hope to see you next week! (Register now to reserve your spot!)

The post Managing data using open source technologies? Learn what’s hot in 2015! appeared first on MySQL Performance Blog.

local restore

Latest Forum Posts - January 8, 2015 - 10:18am
Hi all,
I am very new to Percona and MySQL in general. I have a backup directory "2015-01-08 00-00-00" inside of which I see directories of all local database backups. I need to be able to restore one of the databases into another. How do I go about it?

Thanks,

Double MySQL issue

Latest Forum Posts - January 8, 2015 - 6:31am
Hello,

After installing Percona, it seems that "service mysql start" and /etc/init.d/mysql are separate: "service mysql start" initiates the old MySQL instance and /etc/init.d/mysql starts the Percona instance. I verified that there is no mysql-server package installed, so it's weird behaviour.

root@MP2:/etc/init.d# ./mysql status
* Percona XtraDB Cluster up and running
root@MP2:/etc/init.d# service mysql status
mysql stop/waiting
root@MP2:/etc/init.d#

root@MP2:/etc/init# apt-get remove mysql
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package mysql
root@MP2:/etc/init# apt-get remove mysql-server
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package 'mysql-server' is not installed, so not removed

This is on Ubuntu 14.04 and on both cluster members we experience the same. Can somebody advise?

Thanks

Datafiles missing and inconsistent size on Slave host

Latest Forum Posts - January 8, 2015 - 4:11am
Hi Team,

We have two-way replication enabled on 2 hosts running MySQL 5.5.23, and that replication is working absolutely fine. However, as a precautionary measure, we have tried to replicate the database onto two other hosts using the native replication method (mixed format).

121G Master DB
121G Slave I (master master enabled DB)
58G Slave-II
69G Slave-III

Master host files:
-rw-r--r-- 1 mysql mysql 37G Jan 8 11:09 abc.ibd
-rw-r--r-- 1 mysql mysql 50G Jan 8 11:09 xyz.ibd

Slave-I files:
-rw-rw---- 1 mysql mysql 37G Jan 8 11:14 abc.ibd
-rw-rw---- 1 mysql mysql 50G Jan 8 11:14 xyz.ibd

Slave-II files:
-rw-rw---- 1 mysql mysql 37G Jan 8 11:18 abc.ibd
NO xyz.ibd file

Slave-III files:
-rw-rw---- 1 mysql mysql 12G Jan 8 11:21 abc.ibd
-rw-rw---- 1 mysql mysql 26G Jan 8 11:21 xyz.ibd

We observed that on one of the slave hosts the .ibd file for a table of around 50GB is not showing up, but when we check the record count the records in the table are all there.

Similarly, on the second slave the .ibd file sizes do not match the master's, but the strange thing is that the row counts of both huge tables match the master.

We need to understand what went wrong on the slave hosts that caused the missing files and the inconsistent file sizes.

Best Regards,
Krishna

Commit in slow log, please help!

Latest Forum Posts - January 7, 2015 - 6:18pm
Hi,

I checked the MySQL slow log, and COMMIT commands appear in it:

# User@Host: xxxxxxx @ [127.0.0.1]
# Query_time: 1.195461 Lock_time: 0.000000 Rows_sent: 0 Rows_examined: 0
SET timestamp=1420594562;
commit;
# User@Host: xxxxxxx @ [127.0.0.1]
# Query_time: 1.189559 Lock_time: 0.000000 Rows_sent: 0 Rows_examined: 0
SET timestamp=1420594562;
commit;
# User@Host: xxxxxxx @ [127.0.0.1]
# Query_time: 1.123265 Lock_time: 0.000000 Rows_sent: 0 Rows_examined: 0
SET timestamp=1420594562;
commit;
# User@Host: xxxxxxx @ [127.0.0.1]
# Query_time: 1.034187 Lock_time: 0.000000 Rows_sent: 0 Rows_examined: 0
SET timestamp=1420594562;
commit;
# User@Host: xxxxxxx @ [127.0.0.1]
# Query_time: 1.007584 Lock_time: 0.000000 Rows_sent: 0 Rows_examined: 0
SET timestamp=1420594562;
commit;

mysql version 5.5
all table engine type is InnoDB
my.cnf:
sync_binlog 0
innodb_buffer_pool_size 4294967296
innodb_commit_concurrency 0
innodb_concurrency_tickets 500
innodb_file_per_table OFF
innodb_flush_log_at_trx_commit 2
innodb_flush_method
innodb_io_capacity 500
innodb_log_file_size 1073741824
innodb_read_io_threads 32


Data file size: 5G

CentOS 5.6, SSD, ext4
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 205G 37G 158G 19% /
tmpfs 3.9G 0 3.9G 0% /dev/shm
/dev/sda1 477M 46M 406M 11% /boot



[123@123:~]$ free -m
total used free shared buffers cached
Mem: 7827 7551 275 26 137 1975
-/+ buffers/cache: 5438 2388
Swap: 15999 81 15918


Can anyone help me?

Thanks,

Percona XtraDB Cluster susceptible to disk-bound nodes

Latest Forum Posts - January 7, 2015 - 11:08am
We have a three node cluster but for the purposes of experimentation I'm only sending traffic to one node (which I'll call the active node, the other two being inactive).

If I perform disk-heavy operations on the inactive nodes, the active node gets bogged down behind a lot of pending WSREP commits.

All three nodes are on RAID10 EBS volumes; however, only one of them is running with Provisioned IOPS. I'm going to replace the storage on the other two nodes so that they're using Provisioned IOPS as well and repeat the experiments, but I was wondering if there is something else I should be looking into?

Cheers

Does pt-table-checksum work in circular replication environments?

Latest Forum Posts - January 7, 2015 - 7:47am
Hello there,

I have 3 masters in circular replication. I run pt-table-checksum in one of the masters only.
A->B->C->A

The queries do get replicated to the "next" server B, but I don't think they are being replicated properly to the third server C. The checksums are always identical to the B server's.

Thanks,
Rodrigo.

Django with time zone support and MySQL

Latest MySQL Performance Blog posts - January 7, 2015 - 3:00am

This is yet another story of Django web-framework with time zone support and pain dealing with python datetimes and MySQL on the backend. In other words, offset-naive vs offset-aware datetimes.

In short, more about the problem: after reading the official documentation about time zones, it becomes clear that in order to render a Python datetime in the desired time zone you need to make it tz-aware first and then display it in that time zone.

Here is the first issue: tz-aware in what time zone? MySQL stores timestamps in UTC and converts them for storage/retrieval from/to the current time zone. By default, the current time zone is the server's time zone, and it can be changed in MySQL globally, per connection, etc. So it is not obvious what the tz of the value was originally, before it was stored in UTC. If you later change the server or session tz, it leads to even more mess. Unlike MySQL, PostgreSQL has a timestamp with time zone data type, so Django can auto-detect the tz and make datetimes tz-aware automatically via the tzinfo attribute.
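
To see why "tz-aware in what time zone?" is a real question, here is a tiny illustration of MySQL's TIMESTAMP conversion on retrieval. It is my own sketch with a hypothetical table, not part of the original post:

-- Hypothetical table, for illustration only.
CREATE TABLE tz_demo (created TIMESTAMP);

SET time_zone = '+00:00';
INSERT INTO tz_demo VALUES (NOW());
SELECT created FROM tz_demo;   -- returned in UTC

SET time_zone = '+03:00';
SELECT created FROM tz_demo;   -- same stored value, now shifted to +03:00 on retrieval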

There are many solutions on the web… for example, an extension that detects the tz in the UI with JavaScript and passes it back to the backend, allowing you to work with tz-aware data. However, I need something simpler, mainly with fewer changes to the existing code base.

Here is my case. The server and MySQL are on UTC. That's cool and it removes the first barrier. I store Python datetimes in MySQL timestamp columns, also in UTC per the database time. In any case, it is a best practice to store time in UTC. I have some read-only pages on the web app and want to show datetimes according to the user's tz. It looks like a simple task, but dealing with MySQL on the backend, all my data coming from models has a naive datetime type assigned. So I need to find a way to easily make all my DateTimeField fields UTC-aware (add the tzinfo attribute) and some convenient method of showing datetimes in the user's tz, while still having access to UTC or naive datetimes for calculations on the backend. Therefore, I will still be doing all the calculations in UTC and showing tz-aware values to users only on the UI side.

This is an example of middleware that gets the user's tz from the database and sets it in the session, so it can be retrieved anywhere using get_current_timezone():

from django.utils.timezone import activate
from myapp.models import UserInfo

class TimezoneMiddleware(object):
    """Middleware that is run on each request before the view is executed.
    Activate user's timezone for further retrieval by get_current_timezone()
    or creating tz-aware datetime objects beforehand.
    """
    def process_request(self, request):
        session_tz = request.session.get('timezone')
        # If tz has been already set in session, let's activate it
        # and avoid SQL query to retrieve it on each request.
        if session_tz:
            activate(session_tz)
        else:
            try:
                # Get user's tz from the database.
                uinfo = UserInfo.objects.get(user_id=request.user.id, user_id__isnull=False)
                if uinfo.timezone:
                    # If tz is configured by user, let's set it for the session.
                    request.session['timezone'] = uinfo.timezone
                    activate(uinfo.timezone)
            except UserInfo.DoesNotExist:
                pass

This is an excerpt from models.py:

import datetime

from django.db import models
from django.utils.timezone import get_current_timezone, make_aware, utc

def localize_datetime(dtime):
    """Makes DateTimeField value UTC-aware and returns datetime string
    localized in user's timezone in ISO format.
    """
    tz_aware = make_aware(dtime, utc).astimezone(get_current_timezone())
    return datetime.datetime.strftime(tz_aware, '%Y-%m-%d %H:%M:%S')

class Messages(models.Model):
    id = models.AutoField(primary_key=True)
    body = models.CharField(max_length=160L)
    created = models.DateTimeField(auto_now_add=True)

    @property
    def created_tz(self):
        return localize_datetime(self.created)
...

The “Messages” model has a “created” field (a timestamp in MySQL) and a property “created_tz”. That property renders “created” in the user's tz using the function localize_datetime(), which makes naive datetimes tz(UTC)-aware, converts them into the user's tz set at the session level, and returns a string in ISO format. In my case, I don't want the default RFC format that includes the +00:00 tz portion of a datetime with a tzinfo attribute, nor do I even need tz-aware datetimes to operate with. In the same way I can have similar properties in all the models that need them, knowing they can be accessed by the same name with the “_tz” suffix.

Taking into account the above, I reference “created” for calculations in views or controllers and “created_tz” in templates or for JSON output. This way I don't need to change all references of “created” to something like “make_aware(created, utc)”, or datetime.datetime.utcnow() to datetime.datetime.utcnow().replace(tzinfo=pytz.utc), across the code. The code changes in my app are minimal: introduce a custom property in the model and continue operating with UTC at the raw level:

# views.py
# Operating in UTC
msgs = Messages.objects.filter(
    created__gt=datetime.datetime.now() - datetime.timedelta(hours=24))

<!-- HTML template -->
{% for m in msgs %}
  {{ m.id }}. {{ m.body }} (added on {{ m.created_tz }})
{% endfor %}
* All times in user's tz.

I hope this article helps you in your own projects.
Happy New Year across all time zones!

The post Django with time zone support and MySQL appeared first on MySQL Performance Blog.

CentOS 7

Latest Forum Posts - January 6, 2015 - 4:04pm
How do you bootstrap with systemctl on CentOS 7? I get the following error:

systemctl start mysqld --wsrep-cluster-address="gcomm://"

systemctl: unrecognized option '--wsrep-cluster-address=gcomm://'

Server not recognised as percona server

Latest Forum Posts - January 6, 2015 - 7:47am
Hello,

We have just installed Percona Cloud Tools; we're using Percona Server, but Cloud Tools complains that:

Filesort, Filesort on disk, Full join, Full scan, Query cache hits, Temporary tables, Temporary tables on disk are available only for Percona Server.

We're using Percona Server installed from the Percona repository on Ubuntu 12.04. I'm not sure what I'm missing that would help PCT recognise that we definitely are using Percona Server.

Cheers,

Steve

Problem backing up/streaming 2000+ databases, waiting for global read lock

Latest Forum Posts - January 6, 2015 - 6:41am
Happy New Year everyone,

I'm hoping someone can help me out. I'm trying to use XtraBackup to stream about 2500 production databases from the server to a backup storage server (i.e. not a MySQL server, just storage). We've been doing this for a while using NFS, and it's no longer a viable solution because of the write speed to NFS and the load NFS imposes on the system. So I'm attempting to switch from using xtrabackup_56 directly to innobackupex, streaming with xbstream to my storage server.

On my development box it works great. On production, not so much. After a few hours of running, MySQL stops accepting connections, and when I do a processlist every query is in a state of "Waiting for global read lock" and the process that I think is from xtrabackup is in a sleep state. Any suggestions on fixing this? From what I've read, I'm thinking that --lock-wait-query-type=update would be the way to go? If I kill off the root process that's sleeping in MySQL, things are all good again. Any thoughts? I'm running xtrabackup 2.1.9 (yes, I need to upgrade it, but I can't right now unless it's absolutely required).

This is the xtrabackup command that's running on the system:

xtrabackup_56 --defaults-group=mysqld --backup --suspend-at-end --target-dir=/tmp --tmpdir=/tmp --throttle=16 --parallel=4 --stream=xbstream

Thanks
Chad

Change IP subnet of cluster

Latest Forum Posts - January 6, 2015 - 5:17am
We would like to move our cluster into a new subnet. What is the best way of doing it?

We can have downtime and shut down all servers at the same time. But we shouldn't lose any data.

I would have shut down all the servers, moved them into the new subnet, and then started it up as if it were a new cluster, deleting any references to the old IP addresses at the end. But I am not sure if this is the best way.

Getting mutex information from MySQL’s performance_schema

Latest MySQL Performance Blog posts - January 6, 2015 - 12:00am

We have been using the SHOW ENGINE INNODB MUTEX command for years. It shows us mutex and rw-lock information that can be useful during service troubleshooting in case of performance problems. As Morgan Tocker announced in his blog post, the command will be removed from MySQL 5.7 and we have to use performance_schema to get that information.

The MySQL documentation also says that most of the command's output has been removed from 5.6 and that we can find similar info in performance_schema. It doesn't show any examples of how to use performance_schema or what query we need to use from now on. It is also important to mention that 5.6 doesn't show any warning about the feature being deprecated.

This is a short blog post to show how to configure performance_schema and get the info we need. Hoping it will end up in the official documentation in some way.

The instruments we need are not enabled by default. Those are in wait/synch/mutex/% so the config line we need to use is:

performance-schema-instrument='wait/synch/mutex/innodb/%=ON'
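
If restarting is not convenient, the same instruments can usually be switched on at runtime in 5.6 by updating performance_schema.setup_instruments directly. This is a minimal sketch of my own, not from the original post, and runtime changes only affect events collected from that point on:

-- Enable timing for the InnoDB mutex instruments without a restart.
UPDATE performance_schema.setup_instruments
   SET ENABLED = 'YES', TIMED = 'YES'
 WHERE NAME LIKE 'wait/synch/mutex/innodb/%';

-- Verify what is now enabled.
SELECT NAME, ENABLED, TIMED
  FROM performance_schema.setup_instruments
 WHERE NAME LIKE 'wait/synch/mutex/innodb/%'
 LIMIT 5;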

Then, just compare the results from an idle Percona Server 5.6. First the output of SHOW ENGINE…

mysql> show engine innodb mutex;
+--------+------------------------------+------------+
| Type   | Name                         | Status     |
+--------+------------------------------+------------+
| InnoDB | &buf_pool->flush_state_mutex | os_waits=1 |
| InnoDB | &log_sys->checkpoint_lock    | os_waits=2 |
+--------+------------------------------+------------+

Now the results from the query that gets us the mutex information from performance_schema:

mysql> SELECT EVENT_NAME, SUM_TIMER_WAIT/1000000000 WAIT_MS, COUNT_STAR
         FROM performance_schema.events_waits_summary_global_by_event_name
        WHERE SUM_TIMER_WAIT > 0
          AND EVENT_NAME LIKE 'wait/synch/mutex/innodb/%'
        ORDER BY SUM_TIMER_WAIT DESC, COUNT_STAR DESC;
+----------------------------------------------------+---------+------------+
| EVENT_NAME                                          | WAIT_MS | COUNT_STAR |
+----------------------------------------------------+---------+------------+
| wait/synch/mutex/innodb/log_sys_mutex               | 11.1054 |      28279 |
| wait/synch/mutex/innodb/buf_pool_flush_state_mutex  |  9.7611 |      94095 |
| wait/synch/mutex/innodb/os_mutex                    |  5.3339 |      58515 |
| wait/synch/mutex/innodb/dict_sys_mutex              |  2.4108 |       4033 |
| wait/synch/mutex/innodb/flush_list_mutex            |  2.3688 |       8036 |
| wait/synch/mutex/innodb/lock_wait_mutex             |  2.2412 |       4016 |
| wait/synch/mutex/innodb/buf_pool_LRU_list_mutex     |  2.1912 |       4182 |
| wait/synch/mutex/innodb/fil_system_mutex            |  0.9789 |       5060 |
| wait/synch/mutex/innodb/mutex_list_mutex            |  0.1723 |       8523 |
| wait/synch/mutex/innodb/rw_lock_list_mutex          |  0.1706 |       8245 |
| wait/synch/mutex/innodb/srv_innodb_monitor_mutex    |  0.0102 |         65 |
| wait/synch/mutex/innodb/recv_sys_mutex              |  0.0050 |        146 |
| wait/synch/mutex/innodb/buf_pool_free_list_mutex    |  0.0048 |        165 |
| wait/synch/mutex/innodb/trx_mutex                   |  0.0020 |        105 |
| wait/synch/mutex/innodb/srv_sys_mutex               |  0.0012 |         11 |
| wait/synch/mutex/innodb/trx_sys_mutex               |  0.0010 |         29 |
| wait/synch/mutex/innodb/lock_mutex                  |  0.0008 |         26 |
| wait/synch/mutex/innodb/innobase_share_mutex        |  0.0004 |          5 |
| wait/synch/mutex/innodb/buf_dblwr_mutex             |  0.0003 |          4 |
| wait/synch/mutex/innodb/file_format_max_mutex       |  0.0003 |          6 |
| wait/synch/mutex/innodb/rseg_mutex                  |  0.0002 |          7 |
| wait/synch/mutex/innodb/recv_writer_mutex           |  0.0001 |          1 |
| wait/synch/mutex/innodb/ut_list_mutex               |  0.0001 |          1 |
| wait/synch/mutex/innodb/ibuf_mutex                  |  0.0001 |          2 |
| wait/synch/mutex/innodb/log_flush_order_mutex       |  0.0000 |          1 |
+----------------------------------------------------+---------+------------+
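
To measure a specific workload window rather than counters accumulated since startup, the summary table can be reset first. Again, this is my own sketch rather than part of the original post; truncating a performance_schema summary table only zeroes its counters:

TRUNCATE TABLE performance_schema.events_waits_summary_global_by_event_name;
-- ...run the workload of interest, then re-run the SELECT above.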

The difference is clear. We get much more information from Performance Schema. In my personal opinion, despite the extra resources needed by Performance Schema, the change is for the better.

The post Getting mutex information from MySQL’s performance_schema appeared first on MySQL Performance Blog.

Using YUM to install specific MySQL/Percona Server versions

Latest MySQL Performance Blog posts - January 5, 2015 - 6:52am

Sometimes it is desirable to use particular software versions in production, and not necessarily the latest ones. There may be several reasons for that; I think the most common is when a new version should spend some time in a testing or staging environment before getting to production. In theory each new version is supposed to be better, as it usually contains a handful of bug fixes and even new or improved functionality. However, there is also a risk of a regression or a new bug introduced as a side effect of code changes.

Quite often DBAs want the same MySQL version to be installed on all database instances, regardless of what the latest version available in the software provider's repository actually is. There are several ways to achieve this:
* download the specific version's packages manually and then install them,
* maintain a custom local repository mirror where you decide when and which version gets there, and just update from there using yum/apt,
* have database instance images with all software prepared,
* point to a particular version using just the default package-management utility.

My idea was to remind readers about this last method, as it is maybe the least known one.
In this article I will focus on YUM, as it seems to be the only one currently offering multiple versions from the official repositories of the Oracle and Percona MySQL variants. APT is theoretically also able to install older versions, but the command "apt-cache madison …" returns only the latest one for me. For example, using the Oracle repo:

root@ubuntu-14:~# apt-cache madison mysql-community-server
mysql-community-server | 5.6.22-2ubuntu14.04 | http://repo.mysql.com/apt/ubuntu/ trusty/mysql-5.6 amd64 Packages
mysql-community        | 5.6.22-2ubuntu14.04 | http://repo.mysql.com/apt/ubuntu/ trusty/mysql-5.6 Sources

So let's see how it looks for YUM repositories. I have installed repositories from Oracle, MariaDB and Percona on a CentOS 6 test machine. This is what they offer for the main server package versions:

[root@localhost ~]# yum repolist
repo id                        repo name                                 status
base                           CentOS-6 - Base                            6,518
extras                         CentOS-6 - Extras                             36
mariadb                        MariaDB                                       17
mysql-connectors-community     MySQL Connectors Community                    12
mysql-tools-community          MySQL Tools Community                         18
mysql56-community              MySQL 5.6 Community Server                   112
percona-release-noarch         Percona-Release YUM repository - noarch       26
percona-release-x86_64         Percona-Release YUM repository - x86_64      432
updates                        CentOS-6 - Updates                           530
repolist: 7,701

[root@localhost ~]# yum -q list available --showduplicates mysql-community-server.x86_64
Available Packages
mysql-community-server.x86_64    5.6.15-1.el6    mysql56-community
mysql-community-server.x86_64    5.6.16-1.el6    mysql56-community
mysql-community-server.x86_64    5.6.17-4.el6    mysql56-community
mysql-community-server.x86_64    5.6.19-2.el6    mysql56-community
mysql-community-server.x86_64    5.6.20-4.el6    mysql56-community
mysql-community-server.x86_64    5.6.21-2.el6    mysql56-community
mysql-community-server.x86_64    5.6.22-2.el6    mysql56-community

[root@localhost ~]# yum -q list available --showduplicates MariaDB-server.x86_64
Available Packages
MariaDB-server.x86_64    10.0.15-1.el6    mariadb

[root@localhost ~]# yum -q list available --showduplicates Percona-Server-server-56.x86_64
Available Packages
Percona-Server-server-56.x86_64    5.6.13-rel61.0.461.rhel6    percona-release-x86_64
Percona-Server-server-56.x86_64    5.6.14-rel62.0.483.rhel6    percona-release-x86_64
Percona-Server-server-56.x86_64    5.6.15-rel63.0.519.rhel6    percona-release-x86_64
Percona-Server-server-56.x86_64    5.6.16-rel64.0.el6          percona-release-x86_64
Percona-Server-server-56.x86_64    5.6.16-rel64.1.el6          percona-release-x86_64
Percona-Server-server-56.x86_64    5.6.16-rel64.2.el6          percona-release-x86_64
Percona-Server-server-56.x86_64    5.6.17-rel65.0.el6          percona-release-x86_64
Percona-Server-server-56.x86_64    5.6.17-rel66.0.el6          percona-release-x86_64
Percona-Server-server-56.x86_64    5.6.19-rel67.0.el6          percona-release-x86_64
Percona-Server-server-56.x86_64    5.6.20-rel68.0.el6          percona-release-x86_64
Percona-Server-server-56.x86_64    5.6.21-rel69.0.el6          percona-release-x86_64
Percona-Server-server-56.x86_64    5.6.21-rel70.0.el6          percona-release-x86_64
Percona-Server-server-56.x86_64    5.6.21-rel70.1.el6          percona-release-x86_64

So at least for both Oracle and Percona packages we can use yum to install several versions back (12 in case of Percona Server 5.6).

How can we do that? Let's install Percona Server version 5.6.19. To get a full package name with its version, we join the name with the version, but the CPU family part needs to be removed or moved to the end. So Percona-Server-server-56.x86_64 + 5.6.19-rel67.0.el6 -> Percona-Server-server-56-5.6.19-rel67.0.el6 or Percona-Server-server-56-5.6.19-rel67.0.el6.x86_64:

[root@localhost ~]# yum -q install Percona-Server-server-56-5.6.19-rel67.0.el6 Percona-Server-client-56-5.6.19-rel67.0.el6 Percona-Server-shared-56-5.6.19-rel67.0.el6
=============================================================================================================
 Package                     Arch        Version                 Repository                 Size
=============================================================================================================
Installing:
 Percona-Server-client-56    x86_64      5.6.19-rel67.0.el6      percona-release-x86_64     6.8 M
 Percona-Server-server-56    x86_64      5.6.19-rel67.0.el6      percona-release-x86_64      19 M
 Percona-Server-shared-56    x86_64      5.6.19-rel67.0.el6      percona-release-x86_64     721 k
Transaction Summary
=============================================================================================================
Install       3 Package(s)
Is this ok [y/N]: y
(...)
[root@localhost ~]# rpm -qa|grep Percona
Percona-Server-server-56-5.6.19-rel67.0.el6.x86_64
Percona-Server-client-56-5.6.19-rel67.0.el6.x86_64
Percona-Server-shared-56-5.6.19-rel67.0.el6.x86_64

But what if it turns out that we have to revert to the previous version? We can actually do that with YUM very quickly:

[root@localhost ~]# service mysql status
SUCCESS! MySQL (Percona Server) running (1998)
[root@localhost ~]# service mysql stop
Shutting down MySQL (Percona Server).. SUCCESS!
[root@localhost ~]# yum -q downgrade Percona-Server-server-56.x86_64 Percona-Server-client-56.x86_64 Percona-Server-shared-56.x86_64
=============================================================================================================
 Package                     Arch        Version                 Repository                 Size
=============================================================================================================
Downgrading:
 Percona-Server-client-56    x86_64      5.6.17-rel66.0.el6      percona-release-x86_64     6.8 M
 Percona-Server-server-56    x86_64      5.6.17-rel66.0.el6      percona-release-x86_64      19 M
 Percona-Server-shared-56    x86_64      5.6.17-rel66.0.el6      percona-release-x86_64     720 k
Transaction Summary
=============================================================================================================
Downgrade     3 Package(s)
Is this ok [y/N]: y
Giving mysqld 5 seconds to exit nicely
(...)
[root@localhost ~]# rpm -qa|grep Percona
Percona-Server-shared-56-5.6.17-rel66.0.el6.x86_64
Percona-Server-server-56-5.6.17-rel66.0.el6.x86_64
Percona-Server-client-56-5.6.17-rel66.0.el6.x86_64
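
Once the service is started again after a downgrade like this, it is worth confirming from the SQL side that the running binaries match the package you expect. This is just a quick sanity check of my own, not part of the original walkthrough:

SELECT VERSION();
SHOW VARIABLES LIKE 'version%';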

What if we want to downgrade, but let's say a few versions down? It is also possible with a single YUM command. By the way, after the last step we are on Percona Server 5.6.17 rel66.0, and YUM nicely shows that when listing all available packages.

So let’s downgrade to Percona Server 5.6.15 rel63.0 – it’s actually very easy as the “downgrade” option also understands the version attribute:

[root@localhost ~]# yum -q downgrade Percona-Server-server-56-5.6.15-rel63.0.519.rhel6 Percona-Server-client-56-5.6.15-rel63.0.519.rhel6 Percona-Server-shared-56-5.6.15-rel63.0.519.rhel6
=============================================================================================================
 Package                     Arch        Version                       Repository                 Size
=============================================================================================================
Downgrading:
 Percona-Server-client-56    x86_64      5.6.15-rel63.0.519.rhel6      percona-release-x86_64     6.5 M
 Percona-Server-server-56    x86_64      5.6.15-rel63.0.519.rhel6      percona-release-x86_64      18 M
 Percona-Server-shared-56    x86_64      5.6.15-rel63.0.519.rhel6      percona-release-x86_64     691 k
Transaction Summary
=============================================================================================================
Downgrade     3 Package(s)
Is this ok [y/N]: y
Giving mysqld 5 seconds to exit nicely
(...)

We can do the same, or more complicated package operations, using the YUM transaction feature:

[root@localhost ~]# yum shell
Loaded plugins: fastestmirror, security
Setting up Yum Shell
> remove Percona-Server-shared-56 Percona-Server-server-56 Percona-Server-client-56
Setting up Remove Process
> install Percona-Server-server-56-5.6.15-rel63.0.519.rhel6 Percona-Server-client-56-5.6.15-rel63.0.519.rhel6 Percona-Server-shared-56-5.6.15-rel63.0.519.rhel6
(...)
Setting up Install Process
> run
--> Running transaction check
---> Package Percona-Server-client-56.x86_64 0:5.6.15-rel63.0.519.rhel6 will be installed
---> Package Percona-Server-client-56.x86_64 0:5.6.17-rel66.0.el6 will be erased
---> Package Percona-Server-server-56.x86_64 0:5.6.15-rel63.0.519.rhel6 will be installed
---> Package Percona-Server-server-56.x86_64 0:5.6.17-rel66.0.el6 will be erased
---> Package Percona-Server-shared-56.x86_64 0:5.6.15-rel63.0.519.rhel6 will be obsoleting
---> Package Percona-Server-shared-56.x86_64 0:5.6.17-rel66.0.el6 will be erased
(...)
=============================================================================================================
 Package                     Arch        Version                        Repository                   Size
=============================================================================================================
Installing:
 Percona-Server-client-56    x86_64      5.6.15-rel63.0.519.rhel6       percona-release-x86_64       6.5 M
 Percona-Server-server-56    x86_64      5.6.15-rel63.0.519.rhel6       percona-release-x86_64        18 M
 Percona-Server-shared-51    x86_64      5.1.73-rel14.12.624.rhel6      percona-release-x86_64       2.1 M
     replacing  mysql-libs.x86_64 5.1.73-3.el6_5
 Percona-Server-shared-56    x86_64      5.6.15-rel63.0.519.rhel6       percona-release-x86_64       691 k
     replacing  mysql-libs.x86_64 5.1.73-3.el6_5
Removing:
 Percona-Server-client-56    x86_64      5.6.17-rel66.0.el6             @percona-release-x86_64       33 M
 Percona-Server-server-56    x86_64      5.6.17-rel66.0.el6             @percona-release-x86_64       86 M
 Percona-Server-shared-56    x86_64      5.6.17-rel66.0.el6             @percona-release-x86_64      3.4 M
Transaction Summary
=============================================================================================================
Install       4 Package(s)
Remove        3 Package(s)
Total download size: 27 M
Is this ok [y/N]: y
(...)
Removed:
  Percona-Server-client-56.x86_64 0:5.6.17-rel66.0.el6
  Percona-Server-server-56.x86_64 0:5.6.17-rel66.0.el6
  Percona-Server-shared-56.x86_64 0:5.6.17-rel66.0.el6
Installed:
  Percona-Server-client-56.x86_64 0:5.6.15-rel63.0.519.rhel6
  Percona-Server-server-56.x86_64 0:5.6.15-rel63.0.519.rhel6
  Percona-Server-shared-51.x86_64 0:5.1.73-rel14.12.624.rhel6
  Percona-Server-shared-56.x86_64 0:5.6.15-rel63.0.519.rhel6
Replaced:
  mysql-libs.x86_64 0:5.1.73-3.el6_5
Finished Transaction
> quit
Leaving Shell
[root@localhost ~]# rpm -qa|grep Percona
Percona-Server-shared-56-5.6.15-rel63.0.519.rhel6.x86_64
Percona-Server-client-56-5.6.15-rel63.0.519.rhel6.x86_64
Percona-Server-shared-51-5.1.73-rel14.12.624.rhel6.x86_64
Percona-Server-server-56-5.6.15-rel63.0.519.rhel6.x86_64

The fact that we can use a single command or a single YUM transaction, instead of a manual rpm download and installation, is not the only advantage of this method. The other very important one is that we don't have to worry about breaking package dependencies when we uninstall the current version manually in order to install a different one, as there may be many packages that depend on MySQL.

However, if anyone does "yum update" on this system, our packages will be upgraded to the latest version, which we probably don't want to happen yet, and surely not without our supervision:

[root@localhost ~]# yum -q update
=============================================================================================================
 Package                     Arch        Version                 Repository                 Size
=============================================================================================================
Updating:
 Percona-Server-client-56    x86_64      5.6.21-rel70.1.el6      percona-release-x86_64     6.4 M
 Percona-Server-server-56    x86_64      5.6.21-rel70.1.el6      percona-release-x86_64      19 M
 Percona-Server-shared-56    x86_64      5.6.21-rel70.1.el6      percona-release-x86_64     721 k
Transaction Summary
=============================================================================================================
Upgrade       3 Package(s)
Is this ok [y/N]: N
Exiting on user Command

To prevent that, we should lock our packages at the current version. To achieve that, we need the yum-plugin-versionlock package, which allows us to do this:

[root@localhost ~]# yum versionlock Percona-Server-server-56 Percona-Server-client-56 Percona-Server-shared-56
Loaded plugins: fastestmirror, security, versionlock
Adding versionlock on: 0:Percona-Server-server-56-5.6.15-rel63.0.519.rhel6
Adding versionlock on: 0:Percona-Server-client-56-5.6.15-rel63.0.519.rhel6
Adding versionlock on: 0:Percona-Server-shared-56-5.6.15-rel63.0.519.rhel6
versionlock added: 3
[root@localhost ~]# yum update
Loaded plugins: fastestmirror, security, versionlock
Setting up Update Process
(...)
No Packages marked for Update

Now these packages cannot be updated unless you clear the locks, either all of them with:

yum versionlock clear

or particular locks:

[root@localhost ~]# yum -q versionlock list
0:Percona-Server-server-56-5.6.15-rel63.0.519.rhel6.*
0:Percona-Server-client-56-5.6.15-rel63.0.519.rhel6.*
0:Percona-Server-shared-56-5.6.15-rel63.0.519.rhel6.*
[root@localhost ~]# yum versionlock delete '0:Percona-Server-client-56-5.6.15-rel63.0.519.rhel6.*'
Loaded plugins: fastestmirror, security, versionlock
Deleting versionlock for: 0:Percona-Server-client-56-5.6.15-rel63.0.519.rhel6.*
versionlock deleted: 1
[root@localhost ~]# yum -q versionlock list
0:Percona-Server-server-56-5.6.15-rel63.0.519.rhel6.*
0:Percona-Server-shared-56-5.6.15-rel63.0.519.rhel6.*
[root@localhost ~]# yum -q update
=============================================================================================================
 Package                     Arch        Version                 Repository                 Size
=============================================================================================================
Updating:
 Percona-Server-client-56    x86_64      5.6.21-rel70.1.el6      percona-release-x86_64     6.4 M
Transaction Summary
=============================================================================================================
Upgrade       1 Package(s)
Is this ok [y/N]:

I think the fact that you can install particular MySQL and Percona Server versions using YUM lets you simplify recipes for software management tools like Chef, Puppet or Ansible. For example, Chef's yum_package resource has a version attribute, and an example recipe for a Percona Server 5.6.20 installation may look like this:

pkgs = ["Percona-Server-client-56","Percona-Server-shared-56","Percona-Server-server-56","Percona-Server-56-debuginfo"]
pkgs.each do |pkg|
  yum_package pkg do
    version "5.6.20-rel68.0.el6"
    allow_downgrade true
  end
end

Hopefully we will see similar functionality from APT MySQL repositories eventually.

The post Using YUM to install specific MySQL/Percona Server versions appeared first on MySQL Performance Blog.

mysql broken, please help!

Latest Forum Posts - January 4, 2015 - 1:13am
Hello,

I spent the last 6+ hours researching and am now posting for some help, as nothing in the online discussions has helped...

This is a new dedicated server.
I see that mysql is running on the server
[root@host2 /]# service mysql start
Starting MySQL (Percona Server).. [ OK ]
[root@host2 /]# service mysql status
MySQL (Percona Server) running (18295) [ OK ]
[root@host2 /]#
However, no php applications can connect to it.
I am getting errors like:
Fatal error: Call to undefined function mysql_connect() in
Your PHP installation appears to be missing the MySQL extension which is required by WordPress.
It was working before, until I updated the Apache config to PHP 5.4.
PLEASE HELP.

thanks

The MySQL Query Cache: How it works, plus workload impacts (good and bad)

Latest MySQL Performance Blog posts - January 2, 2015 - 12:00am

Query caching is one of the prominent features in MySQL and a vital part of query optimization. It is important to know how it works as it has the potential to cause significant performance improvements – or a slowdown – of your workload.

The MySQL query cache is a global cache shared among sessions. It caches a SELECT query along with its result set, which enables identical SELECTs to execute faster because the data is fetched from memory. It is important that everything be identical: no added comments or spaces, and most significantly no differences in the WHERE clause. Basically, when you issue a SELECT query, if it is available in the cache it is fetched from there; otherwise the query is treated as a new one and goes to the parser.
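
To make the "identical" requirement concrete, here is a small illustrative pair of statements of my own (reusing the d.t1 table that appears later in this post): the second one misses the cache entry created by the first simply because the query text differs.

SELECT * FROM d.t1 WHERE id = 88995159;               -- cacheable; stored on first execution
SELECT /* retry */ * FROM d.t1 WHERE id = 88995159;   -- different text, so it is parsed as a new query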

Even though it has some nice advantages, the MySQL query cache has its own downsides too. Well, let's think about this: if you are frequently updating the table, you are then invalidating the query cache for ALL queries cached for that table. So really, anytime you have a "frequently updated table," you're probably not going to get any sort of good usage from the query cache. See the example below.

mysql> SHOW STATUS LIKE "qcache%";
+-------------------------+---------+
| Variable_name           | Value   |
+-------------------------+---------+
| Qcache_free_blocks      | 1       |
| Qcache_free_memory      | 1031320 |
| Qcache_hits             | 0       |
| Qcache_inserts          | 0       |
| Qcache_lowmem_prunes    | 0       |
| Qcache_not_cached       | 1       |
| Qcache_queries_in_cache | 0       |
| Qcache_total_blocks     | 1       |
+-------------------------+---------+
8 rows in set (0.06 sec)

mysql> select * from d.t1;
405 rows in set (0.05 sec)

mysql> select * from d.t1 where id=88995159;
1 row in set (0.00 sec)

mysql> SHOW STATUS LIKE "qcache%";
+-------------------------+---------+
| Variable_name           | Value   |
+-------------------------+---------+
| Qcache_free_blocks      | 1       |
| Qcache_free_memory      | 1020600 |
| Qcache_hits             | 0       |
| Qcache_inserts          | 2       |
| Qcache_lowmem_prunes    | 0       |
| Qcache_not_cached       | 1       |
| Qcache_queries_in_cache | 2       |
| Qcache_total_blocks     | 6       |
+-------------------------+---------+
8 rows in set (0.00 sec)

From the above we are sure the queries are cached. Let us try an insert and check the status again; it will invalidate the query cache and reclaim the memory.

mysql> insert into d.t1 (data) value('Welcome');
Query OK, 1 row affected (0.05 sec)

mysql> SHOW STATUS LIKE "qcache%";
+-------------------------+---------+
| Variable_name           | Value   |
+-------------------------+---------+
| Qcache_free_blocks      | 1       |
| Qcache_free_memory      | 1031320 |
| Qcache_hits             | 0       |
| Qcache_inserts          | 2       |
| Qcache_lowmem_prunes    | 0       |
| Qcache_not_cached       | 1       |
| Qcache_queries_in_cache | 0       |
| Qcache_total_blocks     | 1       |
+-------------------------+---------+
8 rows in set (0.00 sec)

Now let us think about how to decide the query cache size:

To exemplify: I have a MySQL instance with two tables, "t" and "t1". Table "t" has numerous records and "t1" has fewer records. Let's restart MySQL and look at the query cache details.

mysql> show variables like 'query_cache_size';
+------------------+---------+
| Variable_name    | Value   |
+------------------+---------+
| query_cache_size | 1048576 |
+------------------+---------+
1 row in set (0.00 sec)

mysql> SHOW STATUS LIKE "qcache%";
+-------------------------+---------+
| Variable_name           | Value   |
+-------------------------+---------+
| Qcache_free_blocks      | 1       |
| Qcache_free_memory      | 1031320 |
| Qcache_hits             | 0       |
| Qcache_inserts          | 0       |
| Qcache_lowmem_prunes    | 0       |
| Qcache_not_cached       | 1       |
| Qcache_queries_in_cache | 0       |
| Qcache_total_blocks     | 1       |
+-------------------------+---------+
8 rows in set (0.01 sec)

From the above status, note the following four points:

1) There is around 1 MB of free space in the Qcache.

2) The number of queries in the Qcache is zero.

3) There are no Qcache hits.

4) Qcache_lowmem_prunes is zero.

mysql> select * from d.t1;
405 rows in set (0.03 sec)

mysql> SHOW STATUS LIKE "qcache%";
+-------------------------+---------+
| Variable_name           | Value   |
+-------------------------+---------+
| Qcache_free_blocks      | 1       |
| Qcache_free_memory      | 1021624 |
| Qcache_hits             | 0       |
| Qcache_inserts          | 1       |
| Qcache_lowmem_prunes    | 0       |
| Qcache_not_cached       | 1       |
| Qcache_queries_in_cache | 1       |
| Qcache_total_blocks     | 4       |
+-------------------------+---------+
8 rows in set (0.01 sec)

From the above status it is clear the query has been cached; it should execute much faster on the next try and increase the Qcache_hits status variable by one.

mysql> select * from d.t1;
405 rows in set (0.00 sec)

mysql> SHOW STATUS LIKE "%Qcache_hits%";
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Qcache_hits   | 1     |
+---------------+-------+
1 row in set (0.00 sec)

Now let us see how data gets pruned from the Qcache. For this I will execute a select on table "t", which has a massive number of records.

mysql> select * from d.t where id > 78995159;

mysql> SHOW STATUS LIKE "Qcache_lowmem_prunes";
+----------------------+-------+
| Variable_name        | Value |
+----------------------+-------+
| Qcache_lowmem_prunes | 1     |
+----------------------+-------+
1 row in set (0.00 sec)

mysql> select * from d.t1;
405 rows in set (0.02 sec)

mysql> SHOW STATUS LIKE "qcache%";
+-------------------------+---------+
| Variable_name           | Value   |
+-------------------------+---------+
| Qcache_free_blocks      | 1       |
| Qcache_free_memory      | 1021624 |
| Qcache_hits             | 1       |
| Qcache_inserts          | 2       |
| Qcache_lowmem_prunes    | 1       |
| Qcache_not_cached       | 6       |
| Qcache_queries_in_cache | 1       |
| Qcache_total_blocks     | 4       |
+-------------------------+---------+
8 rows in set (0.01 sec)

Qcache_lowmem_prunes is the status variable that indicates how many times MySQL had to prune data from the Qcache to make space for the output of other queries. We need to observe the Qcache_lowmem_prunes status variable and adjust the size of the cache until we get a very low ratio for the variable.
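
As a rough, workload-dependent aid for that tuning (my own sketch, not from the original post), the hit ratio can be estimated from the status counters, which on 5.5/5.6 are also exposed in information_schema.GLOBAL_STATUS:

-- Approximate query cache hit ratio: hits versus hits plus SELECTs executed normally.
SELECT h.VARIABLE_VALUE / (h.VARIABLE_VALUE + s.VARIABLE_VALUE) AS qcache_hit_ratio
  FROM information_schema.GLOBAL_STATUS h
  JOIN information_schema.GLOBAL_STATUS s
    ON h.VARIABLE_NAME = 'Qcache_hits'
   AND s.VARIABLE_NAME = 'Com_select';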

It is also undesirable to set the query cache to a relatively high value such as 256 MB, as Qcache invalidation becomes costly. For details, Peter Zaitsev wrote about this a few years ago in a post that's still relevant today titled, "Beware large Query_Cache sizes."

Contention often makes the query cache a bottleneck rather than a help when you have many CPU cores. Generally, the query cache should be off unless proven useful for your workload. So it is important to know your environment well before enabling the query cache and deciding what the query cache size should be.

There will also be circumstances where there is no chance of identical SELECTs, and in this case it is important to set the query_cache_size and query_cache_type variables to zero. The query_cache_type variable controls the query cache, and setting it to zero removes significant overhead from query execution. In a highly concurrent environment there is contention on the query cache mutex, which may become the source of a bottleneck. Setting query_cache_type to zero at startup avoids the query cache mutex entirely, as the query cache then cannot be enabled at runtime, which reduces the overhead in query execution. Please go through the details of the QUERY CACHE ENHANCEMENTS in Percona Server.
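
For reference, here is a minimal sketch of my own (not from the original post) for turning the cache off on a running 5.5/5.6 server; to avoid the query cache mutex entirely, the same settings belong in my.cnf (query_cache_type=0, query_cache_size=0) followed by a restart:

SET GLOBAL query_cache_type = OFF;   -- stop caching new result sets
SET GLOBAL query_cache_size = 0;     -- free the memory currently allocated to the cache

SHOW GLOBAL VARIABLES LIKE 'query_cache%';   -- confirm the new values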

The post The MySQL Query Cache: How it works, plus workload impacts (good and bad) appeared first on MySQL Performance Blog.

Problem with master info when creating a slave from an existing slave with Xtrabackup

Latest Forum Posts - January 1, 2015 - 6:23pm
A couple of days ago we tried to create a slave using an existing slave as the source to copy from; we are using MariaDB 10.0.14.

We tried a couple of ways to point that slave to the master, and in the end had to use the original slave as the master for this slave, creating a daisy chain instead of two slaves pointing directly at the one master.
That slave has log_bin enabled, and the log_bin file numbers on it are higher than on the current master. This occurred because this slave had previously been a master and we did not disable log_bin when restarting it after switching masters.
Using the notes at http://www.percona.com/doc/percona-x...plication.html to create a slave from an existing slave left us a little short, and we had to work a couple of things out.
The xtrabackup_slave_info file contained "change master to slave_pos" for GTID; not being fully across GTID, I wasn't sure how the slave was supposed to sync first with a file and position number.
There are a few steps missing for MariaDB in that note pertaining to how MariaDB GTID is implemented.
My question is: what are the steps to take this from a copied slave to getting it to sync with the master, rather than with the other slave as its master in a daisy chain?
Should I enable GTID, as we are not currently using it?
Was it just a case of entering "Change master to master_host, Master_user, Master_password" without the master log file and position,
followed by "Change master to slave_pos",
and then Slave Start?
Would that have allowed that slave to join the actual master and rectify the log_bin file numbers and position being different?
master mysql-bin.000018
slave 1 mysql-bin.000021


Should I now enable GTID, allowing me to switch away from slave 1 as the master for slave 2, or is this problem of the position number being higher a real problem?
I do have the option of turning log_bin off on slave 1, forcing a change in behaviour, but perhaps others don't have that option, and I may not at some stage in the future.

Any assistance is greatly appreciated

Peter
