

Feed aggregator

Bizarre apt-get behavior - restarts Percona server install on a production server

Latest Forum Posts - May 12, 2015 - 7:55am
This happens trying to apt-get mailutils

Ubuntu 14.04 LTS happily running Percona Server (GPL), Release 72.1, Revision 0503478. When I run "apt-get install mailutils" (to be able to send out email messages when space is low, etc.), it restarts the Percona Server install (with the pink full-window dialog asking me to enter a new root password).

dpkg --get-selections shows the Percona packages installed normally.

Any idea what might be going on?
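One thing worth checking (an assumption on my part, not a confirmed diagnosis) is whether debconf is re-triggering the Percona Server configuration step during the mailutils install. Running apt-get with the noninteractive frontend suppresses debconf prompts for packages that are already configured; a minimal sketch, with the actual install commented out:

```shell
# Set debconf's noninteractive frontend so apt-get does not re-prompt
# for configuration (e.g. the Percona root-password dialog).
export DEBIAN_FRONTEND=noninteractive
echo "debconf frontend: ${DEBIAN_FRONTEND}"
# sudo apt-get install -y mailutils   # the actual install, left commented out here
```

Note this only hides the prompt; it does not explain why dpkg re-ran the configure step in the first place.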

Performance tab and Queries tab empty

Latest Forum Posts - May 11, 2015 - 4:18pm
Hi,

The Performance tab and Queries tab are empty; the Metrics tab is OK.
I guess I have to set up a MySQL instance.
Since we have tight firewall rules, could you please provide the IPs used by the PCT service?

10x

Higher CPU utilization after 5.6 Percona server upgrade

Latest Forum Posts - May 11, 2015 - 4:05pm
We are in the process of upgrading our Percona MySQL 5.5 servers to MySQL 5.6. We have upgraded all our slaves except one.
However, we have observed that for the same workload, the 5.6 slaves are consuming more user CPU.

CPU utilization on the 5.5 slave continues to be lower. For comparison, we have set up an additional slave on Oracle's Community MySQL 5.6, where CPU utilization is also lower.

Does Percona MySQL 5.6 consume more CPU than the previous version (5.5) or Oracle Community 5.6? Has anyone else observed similar behavior? In our environment this observation is consistent and reproducible. Let me know if anyone has a solution.


Slave upgraded to Percona 5.6 (EST Timezone)


01:20:01 PM CPU %user %nice %system %iowait %steal %idle
01:20:01 PM all 10.97 0.00 0.05 0.19 0.00 88.79
01:30:01 PM all 11.20 0.00 0.06 0.01 0.00 88.73
01:40:01 PM all 9.48 0.00 0.08 0.02 0.00 90.41


Slave still on 5.5:

10:20:01 AM CPU %user %nice %system %iowait %steal %idle

10:20:01 AM all 0.26 0.00 0.04 0.39 0.00 99.31
10:30:01 AM all 0.23 0.00 0.10 0.08 0.00 99.58
10:40:01 AM all 0.32 0.00 0.05 0.17 0.00 99.47



Slave upgraded to Oracle Mysql 5.6:

10:20:01 AM CPU %user %nice %system %iowait %steal %idle

10:20:01 AM all 0.26 0.00 0.04 0.39 0.00 99.31
10:30:01 AM all 0.23 0.00 0.10 0.08 0.00 99.58
10:40:01 AM all 0.32 0.00 0.05 0.17 0.00 99.47

Installation issue on Ubuntu 12.04 LTS

Latest Forum Posts - May 11, 2015 - 10:28am
Hi all,

I'm trying to install Percona 5.5 on an Ubuntu 12.04 LTS and I get this error:

root@test:/etc/apt# apt-get install percona-server-server-5.5 percona-server-client-5.5
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
percona-server-client-5.5 : Depends: libdbi-perl but it is not installable
percona-server-server-5.5 : Depends: libdbi-perl but it is not installable
Depends: libdbd-mysql-perl but it is not installable
Depends: libaio1 (>= 0.3.93) but it is not installable
E: Unable to correct problems, you have held broken packages.

I googled this and found a bug report from 2013 that was supposed to be fixed, but nothing recent.

Any help will be appreciated.

Thanks,

Serge

How Percona Support handles bugs

Latest MySQL Performance Blog posts - May 11, 2015 - 3:00am

One of the great values of a Percona Support contract is that we provide bug fixes for covered software, not just advice on how to use it. This is the skill most likely to be missing in-house for most customers, as it requires a team with code knowledge plus build and test infrastructure – something only a few companies can afford to invest in.

There is a lot of misunderstanding about bugs. What is a bug? What is a feature? What is a repeatable bug? How will Percona troubleshoot the bug? In this post I will answer some of the questions about this.

Bugs vs. Features ⎼ One thing a lot of people have a hard time understanding is the difference between a bug and a feature, or when software was designed to work a certain way which might be unwelcome. There is a gray line here, but you need to expect that some of the things you consider to be bugs will be seen as behavior-change features and will be considered as such.

Unfixable Bugs ⎼ There are some behaviors that any sane person would call a bug, but which arise from design limitations or oversight that are impossible to fix in the current GA version without introducing changes that would destabilize it. Such bugs will need to be fixed in the next major GA release or sometimes even further in the future. Some bugs are not bugs at all but rather design tradeoffs made. These can’t be “fixed” unless different design tradeoffs are chosen.

Workaround ⎼ There are going to be unexpected behaviors, unfixable bugs and bugs that take awhile to fix, so your first practical response to running into the bug is often finding a workaround which does not expose it. The Percona Support team will help find a workaround that causes minimal impact to your business, but be prepared: changes to the application, deployed version, schema or configuration will often be required.

Emergencies ⎼ When you have an emergency, our focus is to restore the system to working order. In a complex system a bug fix can often not be delivered in a short period of time, which typically means finding a workaround.

Bug Turnaround ⎼ It is not possible to guarantee the turnaround on a bug fix, as all bugs are different. Some bugs are rather trivial and we might be able to provide a hotfix 24 hours after we have a repeatable test case. In other cases the bug might be complicated and take weeks of engineering to fix, or might even be impossible to fix in the current GA version.

Verified Bug Fixes ⎼ When you submit a bug, we have to verify whether it is actually a bug. In many cases it might be intended behavior; in others, a user mistake. It is also possible that the behavior happened once and can’t be repeated. Having a repeatable test case that reveals the bug is the best way to get a bug fixed quickly. You might be able to create a repeatable test case yourself, or our support team might be able to help you create one.

Sporadic Bugs ⎼ These are a very hard type of bug: they happen sporadically over a period of time. For example, you might have a system crash once every 3 months with no way to repeat it. The cause of such bugs can be very complicated; for example, a buffer overrun in one piece of code can cause corruption and a crash in another place hours later. A number of diagnostic tools exist for such bugs, but they generally take quite a while to resolve. In addition, without a repeatable test case, it is often impossible to verify that a proposed fix actually resolves the bug.

Environmental Bugs ⎼ Some bugs are caused by what can be called your environment. It could be some hardware bugs or incompatibilities, a build not quite compatible with your version of  operating system, operating system bugs, etc. In some cases we can very clearly point to the environment problems. In others we can suspect the environment is an issue and we may ask you to see if the bug also happens in another environment, such as different hardware or OS installation.

Hot Fixes ⎼ As our default policy, we fix bugs in the next release of our software so the fix can go through the full QA cycle, be properly documented, etc. If you have implemented a workaround and you can wait until the next release, this is the best choice. If not, with the Percona Platinum Support contract, we can provide you with a hotfix that is a special build containing the version of the software you’re running, and with only the bug fix of interest applied. Hotfixes are especially helpful if you’re not looking to do a full software upgrade – requiring several revisions – but want to validate the fix with the minimum possible changes. Hotfixes might also be different from the final bug fix that goes into the GA release. With hotfixes, our goal is to provide a working solution for you faster. Afterward we may optimize or re-architect the code, come up with better option names, etc.

Bug Diagnostics ⎼ Depending on the nature of the bug, there are multiple tools our support team will use for diagnostics – finding a way to fix the bug. To set expectations correctly: it can be a very involved process, where you might need to provide a lot of information or try things on your system, such as:

  • Test case. If you have a test case that can be repeated by the Percona team to trigger the bug, the diagnostic problem is solved from the customer side. Internal debugging starts at this point. It might not be easy to get to that.
  • If we have a crash that we can’t repeat on our system, we will often ask you to enable a “core” file, or to run the program under a debugger, so we can get more information when the crash happens.
  • If the problem is related to performance, you should be ready to gather both MySQL information such as EXPLAIN, status counters, information from performance schema, etc., along with system level information such as pt-pmp output,  pt-stalk,  oprofile, perf, etc.
  • If the problem is a “deadlock,” we often need information from gdb about the full state of the system. Information from processlist, performance_schema, SHOW ENGINE INNODB STATUS can also be helpful.
  • It can be very helpful when you have a test system on which you can repeat the problem in your environment and where you can experiment without impacting production. It is not possible in all cases, but is very helpful.
  • Sometimes, for hard-to-repeat bugs, we will need to run a special diagnostics build that provides us with additional debug information. In others, we might need to run a debug build or do a run under valgrind or other software designed to catch bugs. It often has a large performance impact, so it is good to see how your workload can be scaled down in order for this to be feasible.
  • Depending on your environment we might need to login to troubleshoot your bug or might request that you upload the data needed to repeat the bug in our lab (assuming it is not too sensitive). In cases where direct login is not possible, we can help you to come to a repeatable test case via phone, chat, or email. Using screen sharing can also be very helpful.

Bugs and Non-Percona Software ⎼ Percona Support covers some software not produced by Percona. For open source software, if it is not exempt from bug fix support, we will provide the custom build with a bug fix as well as provide the suggested fix to the software maintainer for its possible inclusion in the next release. For example, if we find a bug in the MySQL Community Edition, we will pass our suggested fix to the MySQL Engineering team at Oracle. For other software that is not open source, such as Amazon RDS, we can help to facilitate creation and submission of a repeatable test case and workaround, but we can’t provide a fix as we do not have access to the source code.

In Conclusion ⎼ When I think about software bugs, I find some good parallels with human “bugs” (diseases). Some issues are trivial to diagnose and the fix is obvious. Others might be very hard to diagnose. I guess many of us have been in a situation where you visit doctor after doctor and tell them your symptoms, and they run tests but still can’t figure out what’s wrong. Once a diagnosis is made, though, a “fix” is not always available or feasible, and while a complete solution is preferred, sometimes we have to settle for “managing” the disease, which is our parallel to implementing changes and settling for a workaround. So, in the same way as human doctors, we can’t guarantee we will get to the root of every problem, or, if we do, that we will be able to fix every one of them. However, as with having good doctors, having us on your team will maximize the chance of a successful resolution.

The post How Percona Support handles bugs appeared first on MySQL Performance Blog.

select from table while inserting data

Latest Forum Posts - May 10, 2015 - 2:54pm
Hi all,
I'm using MySQL 5.5 on Amazon RDS, with the InnoDB engine on all tables.
Lately, when I run 'show full processlist', I see a lot of queries 'waiting for metadata lock'.
I understand this happens when table X is being read by a SELECT query (S lock) and, before the SELECT finishes, an INSERT/UPDATE executes against the same table, so the latter waits.
The second case I see is when an INSERT/UPDATE is running on table X (X lock) and then a SELECT tries to read data from the same table.
I simulated these cases and tried some solutions, like 'SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED' (dirty read) before the SELECT, but it seems that it doesn't solve the problem.

What more can I try? Is there any way to run a SELECT (even if the data is not 100% accurate) while an INSERT/UPDATE is already running on the table?
Or is a read replica the only solution?


Thank you!!

Percona Server 5.6.24-72.2 is now available

Latest MySQL Performance Blog posts - May 8, 2015 - 1:55pm

Percona is glad to announce the release of Percona Server 5.6.24-72.2 on May 8, 2015. Download the latest version from the Percona web site or from the Percona Software Repositories.

Based on MySQL 5.6.24, including all the bug fixes in it, Percona Server 5.6.24-72.2 is the current GA release in the Percona Server 5.6 series. Percona Server is open-source and free – and this is the latest release of our enhanced, drop-in replacement for MySQL. Complete details of this release can be found in the 5.6.24-72.2 milestone on Launchpad.

New Features:

  • TokuDB storage engine package has been updated to version 7.5.7.

Bugs Fixed:

  • A server binary as distributed in binary tarballs could fail to load on different systems due to an unsatisfied libssl.so.6 dynamic library dependency. This was fixed by replacing the single binary tarball with multiple tarballs depending on the OpenSSL library available in the distribution: 1) ssl100 – for all Debian/Ubuntu versions except Squeeze/Lucid (libssl.so.1.0.0 => /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0 (0x00007f2e389a5000)); 2) ssl098 – only for Debian Squeeze and Ubuntu Lucid (libssl.so.0.9.8 => /usr/lib/libssl.so.0.9.8 (0x00007f9b30db6000)); 3) ssl101 – for CentOS 6 and CentOS 7 (libssl.so.10 => /usr/lib64/libssl.so.10 (0x00007facbe8c4000)); 4) ssl098e – to be used only for CentOS 5 (libssl.so.6 => /lib64/libssl.so.6 (0x00002aed5b64d000)). Bug fixed #1172916.
  • Executing a stored procedure containing a subquery would leak memory. Bug fixed #1380985 (upstream #76349).
  • A slave server restart could cause a 1755 slave SQL thread error if multi-threaded slave was enabled. This is a regression introduced by the fix for bug #1331586 in 5.6.21-70.0. Bug fixed #1380985.
  • A string literal containing an invalid UTF-8 sequence could be treated as falsely equal to a UTF-8 column value with no invalid sequences. This could cause invalid query results. Bug fixed #1247218 by a fix ported from MariaDB (MDEV-7649).
  • Percona Server .deb binaries were built without fast mutexes. Bug fixed #1433980.
  • Installing or uninstalling the Audit Log Plugin would crash the server if the audit_log_file variable was pointing to an inaccessible path. Bug fixed #1435606.
  • The audit_log_file would point to random memory area if the Audit Log Plugin was not loaded into server, and then installed with INSTALL PLUGIN, and my.cnf contained audit_log_file setting. Bug fixed #1437505.
  • A specific trigger execution on the master server could cause a slave assertion error under row-based replication. The trigger would satisfy the following conditions: 1) it sets a savepoint; 2) it declares a condition handler which releases this savepoint; 3) the trigger execution passes through the condition handler. Bug fixed #1438990 (upstream #76727).
  • Percona Server client packages were built with EditLine instead of Readline. This caused the history file produced by the client to no longer be easy to read. Further, a client built with EditLine could display incorrectly in a PuTTY SSH session after a window resize. Bugs fixed #1266386, #1296192 and #1332822 (upstream #63130, upstream #72108 and #69991).
  • Unlocking a table while holding the backup binlog lock would cause an implicit erroneous backup lock release, and a subsequent server crash or hang at the later explicit backup lock release request. Bug fixed #1371827.
  • Initializing slave threads or executing CHANGE MASTER TO statement would crash a debug build if autocommit was disabled and at least one of slave info tables were configured as tables. Bug fixed #1393682.

Other bugs fixed: #1372263 (upstream #72080), #1436138 (upstream #76505), #1182949 (upstream #69453), #1111203 (upstream #68291), and #1384566 (upstream #74615).
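As a practical aside on the tarball-per-OpenSSL mapping in the first bullet above (the library paths below are assumptions that vary by distribution, not something from the release notes), you can check which libssl your system provides before picking a tarball:

```shell
# List the OpenSSL shared libraries present on this system; the soname found
# (libssl.so.1.0.0, libssl.so.0.9.8, libssl.so.10, or libssl.so.6) indicates
# which of the ssl100/ssl098/ssl101/ssl098e tarballs matches.
ls /usr/lib*/libssl.so* /usr/lib/x86_64-linux-gnu/libssl.so* /lib64/libssl.so* 2>/dev/null \
  || echo "no libssl found in the usual locations"
```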

Release notes for Percona Server 5.6.24-72.2 are available in the online documentation. Please report any bugs on the launchpad bug tracker .

The post Percona Server 5.6.24-72.2 is now available appeared first on MySQL Performance Blog.

Percona Server 5.5.43-37.2 is now available

Latest MySQL Performance Blog posts - May 8, 2015 - 1:55pm


Percona is glad to announce the release of Percona Server 5.5.43-37.2 on May 8, 2015. Based on MySQL 5.5.43, including all the bug fixes in it, Percona Server 5.5.43-37.2 is now the current stable release in the 5.5 series.

Percona Server is open-source and free. Details of the release can be found in the 5.5.43-37.2 milestone on Launchpad. Downloads are available here and from the Percona Software Repositories.

Bugs Fixed:

  • A server binary as distributed in binary tarballs could fail to load on different systems due to an unsatisfied libssl.so.6 dynamic library dependency. This was fixed by replacing the single binary tarball with multiple tarballs depending on the OpenSSL library available in the distribution: 1) ssl100 – for all Debian/Ubuntu versions except Squeeze/Lucid (libssl.so.1.0.0 => /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0 (0x00007f2e389a5000)); 2) ssl098 – only for Debian Squeeze and Ubuntu Lucid (libssl.so.0.9.8 => /usr/lib/libssl.so.0.9.8 (0x00007f9b30db6000)); 3) ssl101 – for CentOS 6 and CentOS 7 (libssl.so.10 => /usr/lib64/libssl.so.10 (0x00007facbe8c4000)); 4) ssl098e – to be used only for CentOS 5 (libssl.so.6 => /lib64/libssl.so.6 (0x00002aed5b64d000)). Bug fixed #1172916.
  • mysql_install_db would make the server produce an “Error in my_thread_global_end(): 1 threads didn't exit” error message. While this error does not prevent mysql_install_db from completing successfully, its presence might cause any mysql_install_db-calling script to return an error as well. This is a regression introduced by backporting the fix for bug #1319904. Bug fixed #1402074.
  • A string literal containing an invalid UTF-8 sequence could be treated as falsely equal to a UTF-8 column value with no invalid sequences. This could cause invalid query results. Bug fixed #1247218 by a fix ported from MariaDB (MDEV-7649).
  • Percona Server .deb binaries were built without fast mutexes. Bug fixed #1433980.
  • Installing or uninstalling the Audit Log Plugin would crash the server if the audit_log_file variable was pointing to an inaccessible path. Bug fixed #1435606.
  • The audit_log_file variable would point to random memory area if the Audit Log Plugin was not loaded into server, and then installed with INSTALL PLUGIN, and my.cnf contained audit_log_file setting. Bug fixed #1437505.
  • Percona Server client .deb packages were built with EditLine instead of Readline. Further, a client built with EditLine could display incorrectly in a PuTTY SSH session after a window resize. Bugs fixed #1266386 and #1332822 (upstream #63130 and #69991).

Other bugs fixed: #1436138 (upstream #76505).

(Please also note that the Percona Server 5.6 series is the latest General Availability series and the current GA release is 5.6.24-72.2.)

Release notes for Percona Server 5.5.43-37.2 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

The post Percona Server 5.5.43-37.2 is now available appeared first on MySQL Performance Blog.

MongoDB with Percona TokuMXse – experimental build RC5 is available!

Latest MySQL Performance Blog posts - May 8, 2015 - 9:43am

While our engineering team is working on finalizing the TokuMXse storage engine, I want to provide an experimental build that you can use to try and test MongoDB 3.0 with our storage engine.

It is available here
percona.com/downloads/TESTING/Percona-TokuMXse-rc5/percona-tokumxse-3.0.3pre-rc5.tar.gz

To start MongoDB with the TokuMXse storage engine, use:

mongod --storageEngine=tokuft

I am looking for your feedback!

The post MongoDB with Percona TokuMXse – experimental build RC5 is available! appeared first on MySQL Performance Blog.

MySQL indexing 101: a challenging single-table query

Latest MySQL Performance Blog posts - May 8, 2015 - 12:00am

We discussed in an earlier post how to design indexes for many types of queries using a single table. Here is a real-world example of the challenges you will face when trying to optimize queries: two similar queries, but one is performing a full table scan while the other one is using the index we specially created for these queries. Bug or expected behavior? Read on!

Our two similar queries

# Q1
mysql> explain select col1, col2 from t where ts >= '2015-04-30 00:00:00';
+----+-------------+-------+------+---------------+------+---------+------+---------+-------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows    | Extra       |
+----+-------------+-------+------+---------------+------+---------+------+---------+-------------+
|  1 | SIMPLE      | t     | ALL  | ts            | NULL | NULL    | NULL | 4111896 | Using where |
+----+-------------+-------+------+---------------+------+---------+------+---------+-------------+

# Q2
mysql> explain select count(*) from t where ts >= '2015-04-30 00:00:00';
+----+-------------+-------+-------+---------------+------+---------+------+---------+--------------------------+
| id | select_type | table | type  | possible_keys | key  | key_len | ref  | rows    | Extra                    |
+----+-------------+-------+-------+---------------+------+---------+------+---------+--------------------------+
|  1 | SIMPLE      | t     | range | ts            | ts   | 5       | NULL | 1809458 | Using where; Using index |
+----+-------------+-------+-------+---------------+------+---------+------+---------+--------------------------+

Q1 runs a full-table scan while Q2 is using the index on ts, which, by the way, is covering – see Using index in the Extra field. Why such different execution plans?

Let’s try to understand what happens with Q1.

This is a query with a single inequality on the ts field, and we have an index on ts. The optimizer tries to see if this index is usable (the possible_keys field); this is all very logical. Now if we look at the rows field for Q1 and Q2, we can see that the index would allow us to read only 45% of the records (1.8M out of 4.1M). Granted, this is not excellent, but it should be much better than a full table scan anyway, right?

If you think so, read carefully what’s next. Because this assumption is simply not correct!

Estimating the cost of an execution plan (simplified)

First of all, the optimizer does not know if data or indexes are in memory or need to be read from disk; it will simply assume everything is on disk. What it does know, however, is that sequential reads are much faster than random reads.

So let’s execute Q1 with the index on ts. Step 1 is to perform a range scan on this index to identify the 1.8M records that match the condition: this is a sequential read, so this is quite fast. However now step 2 is to get the col1 and col2 fields for each record that match the condition. The index provides the primary key value for each matching record so we will have to run a primary key lookup for each matching record.

Here is the issue: 1.8M primary key lookups is equivalent to 1.8M random reads, therefore this will take a lot of time. Much more time than sequentially reading the full table (which means doing a full scan of the primary key because we are using InnoDB here).

Contrast that with how Q2 can be executed with the index on ts. Step 1 is the same: identify the 1.8M matching records. But the difference is: there’s no step 2! That’s why we call this index a ‘covering index’: we don’t need to run point queries on the primary key to get extra fields. So this time, using the index on ts is much more efficient than reading the full table (which again would mean a full scan of the primary key).

Now there is one more thing to understand: a full-table scan is a sequential operation when you think about it from a logical point of view; however, the InnoDB pages are certainly not stored sequentially on disk. So at the disk level, a full table scan is more like multiple random reads than a single large sequential read.

However it is still much faster than a very large number of point queries, and it’s easy to understand why: when you read a 16KB page for a full table scan, all records will be used, while when you read a 16KB page for a random read, you might only use a single record. So in the worst case, reading 1.8M records will require 1.8M random reads, while reading the full table with 4M records will only require 100K random reads – the full table scan is still an order of magnitude faster.
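The arithmetic in that last comparison can be sketched as a quick back-of-envelope calculation. Note the ~40 rows per 16KB page figure below is an assumption implied by the numbers in the text (4M rows, 100K page reads), not a measured value:

```shell
rows_matching=1800000   # rows selected by the range condition on ts
rows_total=4000000      # rows in the table
rows_per_page=40        # assumed rows per 16KB InnoDB page

# Index plan: one random primary-key lookup per matching row.
index_plan_reads=$rows_matching
# Full scan: one (mostly random) page read per 16KB page of the table.
full_scan_reads=$((rows_total / rows_per_page))

echo "index plan: ${index_plan_reads} random reads"
echo "full scan:  ${full_scan_reads} page reads"
```

This prints 1800000 random reads for the index plan versus 100000 page reads for the full scan, which is the order-of-magnitude gap the text describes.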

Optimizing our query

Now that we’ve understood why the optimizer chose a full table scan for Q1, is there a way to make it run faster by using an index? If we can create a covering index, we will no longer need the expensive primary key lookups. Then the optimizer is very likely to choose this index over a full table scan. Creating such a covering index is easy:

ALTER TABLE t ADD INDEX idx_ts_col1_col2 (ts, col1, col2);

Some of you may object that because we have an inequality on ts, the other columns cannot be used. This would be true if we had conditions on col1 or col2 in the WHERE clause, but that does not apply here since we’re only adding these extra columns to get a covering index.

Conclusion

Understanding how indexes can be used to filter, sort or cover is paramount to being able to optimize queries, even simple ones. Understanding (even approximately) how a query is run according to a given execution plan is also very useful; otherwise you will sometimes be puzzled by the decisions made by the optimizer.

Also note that beginning in MySQL 5.7, the cost model can be tuned. This can help the optimizer make better decisions: for instance random reads are far cheaper on fast storage than on regular disks.

The post MySQL indexing 101: a challenging single-table query appeared first on MySQL Performance Blog.

Percona and Debian 8 Jessie

Latest Forum Posts - May 7, 2015 - 4:47pm
Hello everyone,

I was wondering if we could expect a Debian 8 (jessie) apt repository with Percona software? It has been a couple of weeks since Debian jessie was officially released and is the current stable version so I assume more and more people will be interested in seeing Percona's software working on Jessie.

Thank you,
Rumen Telbizov

Xtrabackup_56 exit code of 141

Latest Forum Posts - May 7, 2015 - 12:57pm
Greetings,

I'm using xtrabackup 2.1.9 to back up one of my database servers. The last few days, when running the prepare process, it exits with a code of 141. I can't find anything in the logs (or online) that would suggest what this means or what the problem is. Can anyone shed some light on this?

The final message in the log file is

xtrabackup: starting shutdown with innodb_fast_shutdown = 1

It looks like the prepare worked but didn't finish. Any insight would be great.

Chad
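For what it's worth (this is general shell knowledge, not something specific to xtrabackup): an exit status above 128 means the process was terminated by a signal, and 141 = 128 + 13, where 13 is SIGPIPE. That typically happens when a process writes to a pipe whose reader has already exited, e.g. when a backup script pipes output through another command. A minimal demonstration:

```shell
# A process killed by SIGPIPE (signal 13) is reported by the shell
# as exit status 128 + 13 = 141.
sh -c 'kill -s PIPE $$'
echo "exit status: $?"   # prints: exit status: 141
```

If the prepare step is wrapped in a pipeline (for logging, tee, etc.), checking what sits on the other end of the pipe would be a reasonable first step.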

unexpected high memory usage using ndbmtd MySQL Cluster ndbcluster

Latest Forum Posts - May 7, 2015 - 10:53am
Hello,
I am experiencing higher system memory usage than expected when using ndbmtd instead of ndbd. By my count from config.ini each node should be using around 13GB of memory, but looking at the ndbmtd parent process it has consumed around 36GB instead. I know that the RedoBuffer is a "per LDM" value and that each LDM thread is using the value from RedoBuffer, but are there other values that this is true for? It's consuming more than twice what I expect. I've used ndb_config to make sure that the "actual live" values match config.ini values. Perhaps it has something to do with my ThreadConfig and extra threads consuming memory? Please school me!

version: mysql-5.6.24 ndb-7.4.6

# ps aux |grep ndbmtd
root 2031 28.1 47.4 38620704 35290504 ? Sl May06 253:19 /usr/bin/ndbmtd --ndb-nodeid=11

# free
total used free shared buffers cached
Mem: 74377352 38038176 36339176 604 216968 1508032
-/+ buffers/cache: 36313176 38064176
Swap: 3997692 0 3997692

from config.ini

NoOfReplicas = 2
DataMemory = 6G
IndexMemory = 1G
SharedGlobalMemory = 2G
DiskPageBufferMemory = 2G
RedoBuffer = 512M
FragmentLogFileSize = 64M
NoOfFragmentLogFiles = 128
ThreadConfig=ldm={count=4,cpubind=1,2,3,4},main={count=1,cpuset=5,13},io={count=1,cpuset=5,13},rep={count=1,cpuset=5,13},tc={count=2,cpuset=6,7,14,15},send={count=1,cpuset=6,7,14,15},recv={count=1,cpuset=6,7,14,15}
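The poster's ~13GB estimate can be reconstructed from these config.ini values, counting RedoBuffer once per LDM thread (4 LDMs in ThreadConfig). This is only a sketch of the poster's own arithmetic; whether other buffers are also multiplied per-thread is exactly the open question:

```shell
# Rough per-node memory estimate from the config.ini values above (in MB).
data_memory=6144; index_memory=1024; shared_global=2048
disk_page_buffer=2048; redo_buffer=512; ldm_threads=4

total_mb=$((data_memory + index_memory + shared_global + disk_page_buffer + redo_buffer * ldm_threads))
echo "expected: ${total_mb} MB"   # 13312 MB, i.e. the ~13GB counted in the post
```

The observed ~36GB resident size is nearly three times this figure, which is the discrepancy being asked about.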


Using Innobackupex to manage Fabric servers

Latest Forum Posts - May 7, 2015 - 7:37am
Sorry about the duplicates. Every time I post I get an error message about 'Subscriptions' & stay on the edit page, so I thought posts were not working...

Using Innobackupex to manage Fabric servers

Latest Forum Posts - May 7, 2015 - 7:28am
Managing a MySQL Fabric setup requires making new replicants fairly frequently. As I set up for production with a 300G database, mysqldump is not a very practical method for creating replicants because, even on 20 cores & 96GB RAM, it takes basically forever - about 16 hours - to make a new replicant (not to mention the additional time tacked on while the new slave catches up to a master that has been updating aggressively for those 16 hours). On the other hand, if I use innobackupex snapshots to create replicants and then add them to the Fabric group, it only takes me about 1 hour total. Pretty sweet.

The problem I'm having is that when the databases are configured for Fabric (which mostly means with sync-binlog = 1 and GTIDs for everything), I can only create a new slave by taking an innobackupex snapshot from the master. It seems to me that slave snapshots should be functionally identical, and are more desirable for obvious practical reasons.

So here, for jrivera, who generously offered to have a look at the problem when I posted about it on Reddit, are some details:

1. All Percona servers in this configuration are Linodes of varying sizes set up as Ubuntu 14.04LTS with Percona 72.1 R 0503478.
2. they all run the same my.cnf (enclosed) with the innodb buffer pool changed to match the particular server, and the server id changed.
3. I'm taking all snapshots using the same script, enclosed.
4. When I set up a new server, I simply scp -rp the entire snapshot into a clean /var/lib/mysql, then chown -R mysql:mysql /var/lib/mysql, start the server, and once running I reset master on the new slave and then set gtid_purged to the appropriate value (from xtrabackup_info).

Then, I go to my Fabric controller and add the server. If the snapshot was taken from a slave, I get error 1872: Slave failed to initialize relay log info structure from the repository. On the Fabric controller, this error is reported as:
ServerError: Error trying to configure server (0773b58e-f44e-11e4-8143-f23c91185b4c) as slave: Command (START SLAVE , ()) failed accessing (mynewslave:3306). 1872 (HY000): Slave failed to initialize relay log info structure from the repository..

So naturally I try logging on to mynewslave from the Fabric controller with the fabric mysql account, and it works fine. Then I go to mynewslave, repeat the reset master & set gtid_purged operation, restart the slave locally, and get the exact same error.

By contrast, if I do the exact same operation, except taking the innobackupex snapshot from the master, the whole procedure goes off without a hitch:

$ mysqlfabric group add mygroup mynewslave
Fabric UUID: 5ca1ab1e-a007-feed-f00d-cab3fe13249e
Time-To-Live: 1

uuid finished success result
------------------------------------ -------- ------- ------
52602f60-eae6-42c6-b89a-48377124c75a 1 1 1

state success when description
----- ------- ------------- ------------------------------------------------------------------
3 2 1430962750.62 Triggered by <mysql.fabric.events.Event object at 0x7f8037995fd0>.
4 2 1430962750.62 Executing action (_add_server).
5 2 1430962751.72 Executed action (_add_server).

Oh - and, in case it helps, when the new slave fails, this is what its status looks like:
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State:
Master_Host: myoldmaster
Master_User: fabric
Master_Port: 3306
Connect_Retry: 60
Master_Log_File:
Read_Master_Log_Pos: 4
Relay_Log_File: relay-bin.000010
Relay_Log_Pos: 603177759
Relay_Master_Log_File:
Slave_IO_Running: No
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 1872
Last_Error: Slave failed to initialize relay log info structure from the repository
Skip_Counter: 0
Exec_Master_Log_Pos: 0
Relay_Log_Space: 0
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 1872
Last_SQL_Error: Slave failed to initialize relay log info structure from the repository
Replicate_Ignore_Server_Ids:
Master_Server_Id: 0
Master_UUID: f43a8a6b-c136-11e4-b41c-f23c9133c356
Master_Info_File: mysql.slave_master_info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State:
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp: 150506 19:18:58
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set: f43a8a6b-c136-11e4-b41c-f23c9133c356:1-150207153
Auto_Position: 1
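For anyone hitting the same wall, here is one hedged guess (not a confirmed diagnosis): a snapshot taken from a slave carries that slave's relay log info (stored in TABLE repositories with this my.cnf), which points at relay logs that do not exist on the new machine, and error 1872 is the server failing to read them. If that is the cause, RESET SLAVE should discard the stale state before replication is reconfigured. Sketched as a function that only emits the SQL; the host and user names follow the post, and the GTID set is whatever xtrabackup_info recorded:

```shell
# Speculative workaround for error 1872, not a confirmed fix.
recovery_sql() {    # recovery_sql <master-host> <gtid-set>
    cat <<SQL
STOP SLAVE;
RESET SLAVE;
RESET MASTER;
SET GLOBAL gtid_purged = '$2';
CHANGE MASTER TO MASTER_HOST = '$1', MASTER_USER = 'fabric',
                 MASTER_AUTO_POSITION = 1;
START SLAVE;
SQL
}

# Usage (review the SQL before piping it anywhere):
# recovery_sql myoldmaster 'f43a8a6b-c136-11e4-b41c-f23c9133c356:1-150207153' | mysql -u root -p
```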


===============
The attachment tool (paper clip) does not appear to be working in Safari 7.5.1 under OSX 10.9.5, so here's the relevant configurations in-line:

#========= my.cnf:=========#
[mysql]

# CLIENT #
port = 3306
socket = /var/lib/mysql/mysql.sock

[mysqld]

# GENERAL #
user = mysql
default-storage-engine = InnoDB
socket = /var/lib/mysql/mysql.sock
pid-file = /var/lib/mysql/mysql.pid

# MyISAM #
key-buffer-size = 32M
myisam-recover = FORCE,BACKUP

# SAFETY #
max-allowed-packet = 16M
max-connect-errors = 1000000
sysdate-is-now = 1
innodb = FORCE

# DATA STORAGE #
datadir = /var/lib/mysql/

# BINARY LOGGING #
log-bin = /var/lib/mysql/mysql-bin
expire-logs-days = 14
sync-binlog = 1

# REPLICATION #
server-id = 33
read-only = 0
#skip-slave-start = 1
log-slave-updates = 1
relay-log = /var/lib/mysql/relay-bin
slave-net-timeout = 60
sync-master-info = 1
sync-relay-log = 1
sync-relay-log-info = 1

#binlog-do-db = serpsalpha
binlog-format = 'ROW'

#Fabric:
#skip-slave-start (=0), binlog-do-db (empty), & binlog-format (ROW) above are changed for Fabric.
#additionally, sync-master-info must be on for Fabric, and the following:
gtid-mode = 'ON'
enforce-gtid-consistency = 'ON'
simplified_binlog_gtid_recovery = 'ON' #Note: this variable has been changing name a lot. It can fail under special circumstances. See details at http://v.gd/2XcOhf
master-info-repository = TABLE
relay-log-info-repository = TABLE
#report-host= might want to set this for fabric
#report-port #no need to set, as it's reported correctly as of 5.5 or 5.6...

# CACHES AND LIMITS #
tmp-table-size = 32M
max-heap-table-size = 32M
query-cache-type = 0
query-cache-size = 0
max-connections = 500
thread-cache-size = 50
open-files-limit = 65535
table-definition-cache = 1024
table-open-cache = 55

# INNODB #
innodb-flush-method = O_DIRECT
innodb-log-files-in-group = 2
innodb-log-file-size = 512M
innodb-flush-log-at-trx-commit = 2
innodb-file-per-table = 1
innodb-buffer-pool-size = 14G
innodb-rollback-on-timeout = ON
transaction-isolation = READ-COMMITTED
low-priority-updates = 1

# LOGGING #
log-error = /var/log/mysql/mysql-error.log
log-queries-not-using-indexes = 0
slow-query-log = 1
slow-query-log-file = /var/log/mysql/mysql-slow.log
long-query-time = 10

#========= snapshot.sh:=========#
#!/bin/bash

if [ $SHLVL -eq 1 ]
then
    echo "Please run $0 from within a screen... it just makes sense."
    exit 1
fi

cd /var/backups/mysql/daily_snapshot
echo
echo "Please enter mysql root password:"
read -s p
echo
mysql -h 127.0.0.1 -u root -p$p -ABNe "select NULL;" 2>/dev/null >/dev/null
if [ $? -ne 0 ]
then
    echo "Wrong root password."
    exit 2
fi

echo
echo "Starting at $(date)"
#snapshot
SECONDS=0
sdate=$(date +%Y-%m-%d_%H)
date | tee -a /var/log/daily_snapshot_$sdate.log

innobackupex --host=127.0.0.1 --user=root --password=$p \
    --rsync --safe-slave-backup --slave-info /var/backups/mysql/daily_snapshot \
    2>&1 | egrep "innobackupex|xtrabackup" | tee -a /var/log/daily_snapshot_$sdate.log

sdir=$(ls -1 | egrep "^$sdate")
innobackupex --host=127.0.0.1 --user=root --password=$p \
    --apply-log /var/backups/mysql/daily_snapshot/$sdir \
    2>&1 | egrep "innobackupex|xtrabackup" | tee -a /var/log/daily_snapshot_$sdate.log
echo "daily snapshot complete at $(date) after $SECONDS sec" | tee -a /var/log/daily_snapshot_$sdate.log

Installing and running XtraBackup

Latest Forum Posts - May 7, 2015 - 3:22am
Hello all,

First of all, I would like to let you know that I have little experience with MySQL, so my questions could sound stupid to some of you, but I need your help.
- Installed using #rpm -Uvh percona-release-0.1-3.noarch.rpm under a temp user I created.
Question: How can I start testing xtrabackup in order to perform a test backup of a MySQL DB? Is there a console I have to access, like mysql, or should just issuing the command below work? (It doesn't, so maybe I did something wrong during installation.)
$ xtrabackup --backup --datadir=/var/lib/mysql/ --target-dir=/data/backups/mysql/
Thank you, and looking forward to your replies.
George
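There is no separate console: xtrabackup is a standalone command-line tool that reads the server's datadir directly, so it normally runs as root (or a user that can read /var/lib/mysql). A hedged sketch of a first test cycle, using the paths from the question; whether your package provides the xtrabackup binary directly or the innobackupex wrapper depends on the version installed, so check both:

```shell
# Hedged first-run sketch, kept as a function so the steps stay together.
test_backup() {
    mkdir -p /data/backups/mysql
    # 1. Take the backup from the running server (needs read access to datadir):
    xtrabackup --backup --datadir=/var/lib/mysql/ --target-dir=/data/backups/mysql/
    # 2. Prepare it so the data files are consistent and restorable:
    xtrabackup --prepare --target-dir=/data/backups/mysql/
}
```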

Introducing email reports for Percona Cloud Tools!

Latest MySQL Performance Blog posts - May 7, 2015 - 3:00am

For those PCT users who forget to regularly check query performance, you can now get server query reports delivered to your email daily or weekly. The report contains a digest of the most critical performance information related to a particular MySQL instance, enabling you to act on any recent issues. There are convenient links that open the corresponding view of the PCT Query Analytics module.

Server summary snapshot

The report also provides a list of the ten slowest queries and the ten queries with the highest load during the period. It is important to compare these lists, because some of the slowest queries may occur only several times during the period (so they probably do not overload the server), while seemingly quick queries may run millions of times and add a lot of unnecessary overhead, which you could investigate and fix.

List of slowest queries

List of queries with highest load

Finally, the report also provides information about new queries that you add to MySQL. This enables you to estimate the efficiency of a new query and see how it compares to the worst performing ones.

List of new queries

Reports are enabled by default for some of our customers. We will gradually enable them for more PCT users every week to make sure that the new feature rolls out smoothly. You can manage reports in the PCT web UI under Configure > Reports at https://cloud.percona.com/org/user-reports-settings.

By default, you will receive a separate weekly report for each MySQL instance in your infrastructure. If you add a new MySQL instance, weekly reports will be automatically enabled for it.

To disable all reports, clear the Enable server query reports check box. If you do not want to enable reports for new MySQL instances that you add, clear the Automatically receive reports for new MySQL instances check box.

You can enable reports for specific MySQL instances, as well as select between daily and weekly period. Disable reports for servers that do not require regular attention and enable weekly reports for more critical servers. Use daily reports for the most active servers, where you expect to have the most amount of change and load.

If you do not have a Percona Cloud Tools account yet, register at cloud.percona.com and get regular query reports by email!

The post Introducing email reports for Percona Cloud Tools! appeared first on MySQL Performance Blog.

Restoring a single database using innobackupex

Latest Forum Posts - May 7, 2015 - 1:31am
Hi,

I think I am trying to do quite a simple thing, but I am hitting a block. I want to take a backup of a single database schema from a server and apply it in another instance of (Percona) MySQL 5.6.

I have followed carefully the instructions here: https://www.percona.com/doc/percona-x...artial-backups

The part that does not work for me is:
"It can also be done by copying back the prepared backup to a “clean” datadir (in that case, make sure to include the mysql database). System database can be created with:
$ sudo mysql_install_db --user=mysql"

I am able to copy back OK, but I am unable to create the system databases properly and start the MySQL service. What does "make sure to include the mysql database" actually mean? Just take a copy from the remote server, or move the local database out of the datadir, copy back, then move it back in again? Everything I have tried has not worked; the MySQL service will not start and the logs state:

/etc/init.d/mysql[11410]: #007/usr/bin/mysqladmin: connect to server at 'localhost' failed
/etc/init.d/mysql[11410]: error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)'
/etc/init.d/mysql[11410]: Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!
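One plausible reading of the quoted instructions (an interpretation only, verified against nothing here, so test it on a scratch VM first): start from an empty datadir, let mysql_install_db recreate the system tables, and then copy the prepared partial backup on top of them. Sketched as a function, run as root:

```shell
# Hedged interpretation of the docs quoted above, not a verified recipe.
restore_partial() {    # restore_partial <prepared-partial-backup-dir>
    service mysql stop
    rm -rf /var/lib/mysql/*
    mysql_install_db --user=mysql          # recreate the system tables
    cp -rp "$1"/* /var/lib/mysql/          # prepared backup over the top
    chown -R mysql:mysql /var/lib/mysql
    service mysql start
}
```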

The restoring individual tables method is not really open to me as the schema has nearly 300 tables.

This is a trivial operation using mysqldump but it is proving hard to get working using innobackupex.

Any advice welcome.

Thanks,
Stephen

Failing @ Percona

Latest Forum Posts - May 6, 2015 - 2:01pm
Good afternoon,

I'm fairly new to using Percona's XtraBackup. Our databases have grown to a size where mysqldump is taking up to 12 hours to back up an entire server, and causes locking issues. This obviously is no longer acceptable.

What I like about mysqldump is that I can directly back up and restore individual db's. My understanding (from a face-to-face conversation with the CEO of Percona) is that XtraBackup is not so easy in this aspect. There was some talk about doing a backup and then restoring the data elsewhere and copying back only what we need. I'm having a heck of a time finding good documentation for entry level Percona skills.

I feel as though the Percona instructions (PDF) were written for someone with years of experience and I am spending a lot of time doing trial and error on a test VM.

Here is what I would like to do:

1. Backup all the databases on a server, and if something becomes corrupt, I want to be able to restore a single database or a table, etc. I'm not a fan of having to restore the entire thing when only portions are needed. Does this make sense? Thus far the copy-back seems to require EVERYTHING.

2. I would like to be able to exclude a database in my backup. We have a tmp db that is constantly changing as temp data and tables are written. This causes a failure on the full backup because XtraBackup will attempt to write files it has already written - I'm guessing because the tables were deleted and recreated during the backup process. I get an error like:

xtrabackup_56: Can't create/write to file '/media/Synology/2015-05-06_14-36-47/tmp/wrk_ReportStores_StoreList.ibd.pmap' (Errcode: 17 - File exists)
innobackupex: Error: The xtrabackup child process has died at /usr/bin/innobackupex line 2622.

If I scroll up in the SSH screen I see the file was created maybe 5 minutes earlier.

EXAMPLE:

[01] Copying ./tmp/wrk_reportstores_storelist.ibd to /media/Synology/2015-05-06_14-36-47/tmp/wrk_reportstores_storelist.ibd
[01] ...done


This aborts the backup all together. Is there an exclude option?

3. The apply-log switch... do you do that AFTER you create the backup or BEFORE you do a restore? I want to make sure we aren't writing to the production DB. I thought it was taking the log it captured and applying it to the backup files, but I want to be sure.



Here is the command I am running for the full backup:

sudo innobackupex --user=backupuser --password=backuppassword --compact /media/Synology/


Any help would be greatly appreciated!
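On questions 2 and 3, a couple of hedged notes, sketched as shell (check `innobackupex --help` on your installed version before relying on any flag). On apply-log: it runs against the backup copy, after the backup completes and before any restore, and never touches the running production datadir. On excluding a database: I'm not aware of an exclude flag in this era of the tool; the usual workaround is to list the databases you do want (include mysql so the set is restorable):

```shell
# Q3: backup first, then prepare the *copy* with --apply-log.
full_backup_cycle() {    # full_backup_cycle <backup-root> <timestamp-dir>
    innobackupex --user=backupuser --password=backuppassword "$1"
    innobackupex --apply-log "$1/$2"       # $2 = directory the backup created
}

# Q2: work around the missing exclude by listing what you want.
partial_backup() {       # partial_backup <backup-root>
    innobackupex --user=backupuser --password=backuppassword \
        --databases="db1 db2 mysql" "$1"   # db1/db2 are placeholder names
}
```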
