Feed aggregator

Can't connect after upgrade to openssl-1.0.1e-30.el6.9

Latest Forum Posts - June 10, 2015 - 10:58am
Hi All,

I upgraded one of our XtraDB Cluster nodes and a client system on CentOS to openssl-1.0.1e-30.el6.9 and can no longer connect to any mysql node from the CLI client or PHP when using SSL, even on the same host (connecting via 127.0.0.1) when SSL is required on the user account. Galera replication works fine between the nodes, though, and it is SSL encrypted.

This is the error from the mysql (percona) cli client:
ERROR 2026 (HY000): SSL connection error: error:00000001:lib(0):func(0):reason(1)

Downgrading to the previous CentOS release of openssl (1.0.1e-30.el6.8) resolves the problem.

One change in this release of openssl that could be related is that the client will reject DH keys below 768 bits. Does anyone know the DH key size used in XtraDB Cluster?

We're running 5.6.21-70.1-56.

Thanks.
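
If the new DH key-size check is indeed the culprit, one hedged workaround (an assumption, not a confirmed fix for this build) is to force a cipher suite that does not use ephemeral DH at all; the cipher name and user below are only examples, pick a cipher your OpenSSL build actually supports:

# server side, in my.cnf (restart required)
[mysqld]
ssl-cipher = AES256-SHA

# or test from the client side only
mysql --ssl --ssl-cipher=AES256-SHA -h 127.0.0.1 -u ssluser -p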

Installing percona-xtrabackup on RHEL6.2

Latest Forum Posts - June 10, 2015 - 10:37am
When installing percona-xtrabackup on RHEL 6.2 I get a dependency error:

--> Finished Dependency Resolution
Error: Package: percona-xtrabackup-2.2.11-1.el6.i686 (/percona-xtrabackup-2.2.11-1.el6.i686)
Requires: libz.so.1(ZLIB_1.2.0)
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest

zlib is installed:

[mark@db1 ~]$ locate libz.so.1
/lib64/libz.so.1
/lib64/libz.so.1.2.3
/opt/ohai-solo/embedded/lib/libz.so.1
/opt/ohai-solo/embedded/lib/libz.so.1.2.6

Has anyone else tried Percona XtraBackup on RHEL 6.2?
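
A hedged observation: the package being installed is the i686 build, so it needs a 32-bit libz that provides the ZLIB_1.2.0 symbol version, while the zlib shown by locate is 64-bit only. On an x86_64 host the simpler route is usually the x86_64 package; the commands below are a sketch:

uname -m                                   # confirm the host architecture
rpm -q --qf '%{NAME}-%{VERSION}.%{ARCH}\n' zlib

# 64-bit host: install the matching xtrabackup build instead of the i686 one
yum install percona-xtrabackup-2.2.11-1.el6.x86_64.rpm

# or, if the 32-bit build is really required, add 32-bit zlib first
yum install zlib.i686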

Restore one table

Latest Forum Posts - June 10, 2015 - 8:18am
Hello everybody,

I'm new to XtraBackup and we have to test it before using it in production.

I have a problem when I try a restore. I created an "aaa" table with 2 records inside the "test" database.

The backup is done successfully to another server like this:
innobackupex --user=user_backup --password=user_password --stream=tar ./ | ssh root@remote_host "cat - > /var/tmp/backup.tar"

Then I would like to try a restore of the test table named "aaa".

So here are my commands:
tar -xif backup.tar
innobackupex --apply-log --export /var/tmp/backup

Then I go to my MySQL VM and do this:

I remove and create again the table with the same structure.

mysql> use test;
mysql> set FOREIGN_KEY_CHECKS=0;
mysql> ALTER TABLE aaa DISCARD TABLESPACE;

Then I rsync the .ibd file from the extracted backup to the MySQL VM:
rsync -avz aaa.ibd root@mysql_vm:/var/lib/mysql/data/test/

On the MySQL VM again:
chown mysql: aaa.ibd

mysql> use test;
mysql> set FOREIGN_KEY_CHECKS=0;
mysql> ALTER TABLE aaa IMPORT TABLESPACE;

But after the IMPORT command I get this error:
#1030 - Got error -1 from storage engine

I tried the same process adding the .exp file, but I got the same error.

I use MySQL 5.5.41.

I'm using these links for help:
https://www.percona.com/doc/percona-...ables_ibk.html
https://www.percona.com/blog/2012/01...a-full-backup/

Has somebody had the same problem before?

I can't find why this happens or how to solve the error...

It seems easy when restoring a complete database, but sometimes we have to restore only one table, not the complete database...

Thanks a lot if you can help me!
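
One hedged hint, based on the linked documentation rather than on this exact setup: stock MySQL 5.5 cannot import an .ibd file created on another server (full transportable tablespaces only arrived in MySQL 5.6). The per-table restore described there relies on Percona Server's XtraDB import feature plus the .exp file, roughly:

-- on the destination, assuming it runs Percona Server 5.5 (not stock MySQL)
SET GLOBAL innodb_import_table_from_xtrabackup = 1;

ALTER TABLE test.aaa DISCARD TABLESPACE;
-- now copy BOTH aaa.ibd and aaa.exp (produced by --apply-log --export)
-- into /var/lib/mysql/data/test/ and chown them to mysql
ALTER TABLE test.aaa IMPORT TABLESPACE;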

Read: ''rsync' not found in PATH', issue with rsync

Latest Forum Posts - June 10, 2015 - 6:34am
Hi dear experts,
I have installed the latest PXC from the repo on CentOS 7, as well as rsync. But when I started the second node to connect to the primary, it gave an error indicating that rsync is not installed.

In fact rsync is installed, as a dependency package from the Percona repo.

[root@centos7-node2 ~]# rsync --version
rsync version 3.0.9 protocol version 30
Copyright (C) 1996-2011 by Andrew Tridgell, Wayne Davison, and others.
Web site: http://rsync.samba.org/
Capabilities:
    64-bit files, 64-bit inums, 64-bit timestamps, 64-bit long ints,
    socketpairs, hardlinks, symlinks, IPv6, batchfiles, inplace, append,
    ACLs, xattrs, iconv, symtimes
rsync comes with ABSOLUTELY NO WARRANTY. This is free software, and you are
welcome to redistribute it under certain conditions. See the GNU General
Public Licence for details.

[root@centos7-node2 ~]# which rsync
/usr/bin/rsync

my.cnf settings:

wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address="gcomm://192.168.1.99,192.168.1.88,192.168.1.77"
wsrep_cluster_name='TestingCluster'
wsrep_node_address='192.168.1.88'
wsrep_node_name='node1'
wsrep_sst_method=rsync
wsrep_sst_auth=sstuser:12345
bind-address=0.0.0.0
Portion from error log:

2015-06-10 18:23:12 8378 [Note] WSREP: New cluster view: global state: 609ea286-0f5b-11e5-97a6-4342c2282781:6, view# 8: Primary, number of nodes: 2, my index: 1, protocol version 3
2015-06-10 18:23:12 8378 [Warning] WSREP: Gap in state sequence. Need state transfer.
which: no rsync in (/usr/sbin:/sbin:/usr//bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)
2015-06-10 18:23:13 8378 [ERROR] WSREP: Failed to read 'ready <addr>' from: wsrep_sst_rsync --role 'joiner' --address '192.168.1.88' --auth 'sstuser:12345' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --parent '8378' ''
        Read: ''rsync' not found in PATH'
2015-06-10 18:23:13 8378 [ERROR] WSREP: Process completed with error: wsrep_sst_rsync --role 'joiner' --address '192.168.1.88' --auth 'sstuser:12345' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --parent '8378' '' : 2 (No such file or directory)
2015-06-10 18:23:13 8378 [ERROR] WSREP: Failed to prepare for 'rsync' SST. Unrecoverable.
2015-06-10 18:23:13 8378 [ERROR] Aborting
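
One thing worth checking (a guess, not a confirmed diagnosis): the PATH shown in the error line is the one seen by the mysqld process, which can differ from your interactive shell's. A sketch for comparing the two and, if mysqld is managed by systemd, overriding it (the unit name may differ on your install):

# what PATH does the running mysqld actually see?
tr '\0' '\n' < /proc/$(pidof mysqld)/environ | grep '^PATH='

# systemd-managed service: force a sane PATH via a drop-in
mkdir -p /etc/systemd/system/mysql.service.d
cat > /etc/systemd/system/mysql.service.d/path.conf <<'EOF'
[Service]
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
EOF
systemctl daemon-reload && systemctl restart mysql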

Percona Live Europe 2015! Call for speakers; registration now open

Latest MySQL Performance Blog posts - June 10, 2015 - 12:00am

Percona Live is moving from London to Amsterdam this year and the event is also expanding to three full days. Percona Live Europe 2015, September 21-23, will be at the Mövenpick Hotel Amsterdam City Centre. The call for speakers and Super Saver registration are now open. Hurry though because the deadline for submitting a speaking proposal is June 21st and Super Saver registration ends July 5th!

This year’s conference will feature one day of tutorials and two days of keynote talks and breakout sessions related to MySQL, NoSQL and Data in the Cloud. You’ll get briefed on the hottest topics, learn about operating a high-performing deployment and hear from top-industry leaders describe the future of the ecosystem – encompassing MySQL, MariaDB, Percona Server, MongoDB (and more). Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

Have something to say? Why not lead a breakout session or a tutorial?

Breakout sessions are 50 minutes including a Q&A. Tutorial sessions focus on an immediate and practical application of in-depth MySQL and NoSQL knowledge. Tutorial speakers should assume that attendees will have laptops to work through detailed and potentially hands-on presentations. Tutorials are typically three hours long including a Q&A, however, if you have content for a full day, submissions for 6-hour-long tutorials are also being accepted. If your tutorial or breakout session is approved, you’ll receive a complimentary full-conference pass.

Huge thanks to our Conference Committee!

There would be no Percona Live without the hard work of our conference committees. Meet this year’s Percona Live Europe 2015 Conference Committee – a dedicated group of experts in MySQL, NoSQL and Data in the Cloud:

  • Erik Beebe – Founder and CTO, ObjectRocket
  • Luis Motta Campos – Database Administrator, eBay Classifieds Group
  • Colin Charles – Chief Evangelist, MariaDB
  • César Trigo Esteban – Development Director, Gigigo
  • Kenny Gorman – Chief Technologist; Data. Office of the CTO, Rackspace
  • Amrith Kumar – Founder and CTO, Tesora
  • Giuseppe Maxia – Quality Assurance Architect, VMware
  • Shlomi Noach – Senior Systems Engineer, Booking.com
  • Konstantin Osipov – Engineering Manager, Mail.Ru
  • Morgan Tocker – MySQL Community Manager, Oracle
  • Art van Scheppingen – Head of Database Engineering, Spil Games
  • Charity Majors – Production Engineering Manager, Facebook
  • Peter Zaitsev – Co-founder and CEO, Percona
Sponsorships

Sponsorship opportunities for Percona Live Europe 2015 are now available. Sponsors become part of a dynamic and fast-growing ecosystem and interact with hundreds of DBAs, sysadmins, developers, CTOs, CEOs, business managers, technology evangelists, solution vendors, and entrepreneurs who typically attend the event. This year’s conference will feature expanded accommodations and turnkey kiosks.

Planning to attend?

Super Saver registration discounts for Percona Live Europe 2015 are available through July 5th (at 11:30 p.m. CEST). Visit the Percona Live Europe 2015 website for more information about the conference. Interested community members can also register to receive email updates about Percona Live Europe 2015.

Percona has also negotiated a special hotel rate at the Mövenpick Hotel Amsterdam City Centre. If you book your hotel before July 6th your delicious breakfast is included.

I hope to see you in Amsterdam!

The post Percona Live Europe 2015! Call for speakers; registration now open appeared first on MySQL Performance Blog.

Does pt-table-checksum follow slaves in a 2-hop ("chained") replication?

Latest Forum Posts - June 9, 2015 - 9:46am
We've been using pt-table-checksum for several years to check replication consistency. Just recently we started getting errors for slaves that are 2 hops away from the master. I was under the impression that pt-table-checksum only checks a master and its IMMEDIATE (i.e. 1 hop) slaves. I looked through the documentation and googled for a while but couldn't find anything saying whether pt-table-checksum finds all slaves in a chained replication topology. Can someone clarify this issue? btw, we are using the "hosts" method of slave discovery in case that makes any difference.
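
For what it's worth, pt-table-checksum does recurse through slaves of slaves by default; the --recurse option limits the depth and --recursion-method controls how slaves are found (the "hosts" method only sees servers that set report_host). A hedged sketch of two ways to pin down exactly which slaves get checked (host names are placeholders):

# only check the master's direct (1-hop) slaves
pt-table-checksum --recurse 1 --recursion-method hosts h=master_host

# or list the slaves explicitly in a DSN table (see the tool's documentation
# for the table layout) so discovery is fully deterministic
pt-table-checksum --recursion-method dsn=h=master_host,D=percona,t=dsns h=master_host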

Auditing MySQL with McAfee and MongoDB

Latest MySQL Performance Blog posts - June 9, 2015 - 6:00am

Greetings everyone! Let’s discuss a 3rd Party auditing solution to MySQL and how we can leverage MongoDB to make sense out of all of that data.

The McAfee MySQL Audit plugin does a great job of capturing, at low level, activities within a MySQL server. It does this through some non-standard APIs which is why installing and configuring the plugin can be a bit difficult. The audit information is stored in JSON format, in a text file, by default.

There is 1 JSON object for each action that takes place within MySQL. If a user logs in, there’s an object. If that user queries a table, there’s an object. Imagine 1000 active connections from an application, each doing 2 queries per second. That’s 2000 JSON objects per second being written to the audit log. After 24 hours, that would be almost 173,000,000 audit entries!

How does one make sense of that many JSON objects? One option would be to write your own parser in $YOUR_FAVORITE_LANGUAGE and convert the JSON to INSERT statements and write the data back to MySQL (Note: If you do this, you can whitelist this table within the plugin so that these INSERTs are not re-audit logged). Or, we can use a system designed to import, store and query JSON objects, such as MongoDB.

Install McAfee Audit Plugin

First we need to download the source code for the plugin and download the source code for the specific MySQL version you are running. This is not a complete step-by-step HOWTO on installing this plugin; just some high-level points.

My client for this exercise is still on Percona Server 5.1.73, so we need the source for that EXACT version from percona.com.

We can clone the mcafee/mysql-audit using git.

Unzip the MySQL source and compile it; just don’t do “make install”, only “./configure” and “make” are necessary.

Now compile the plugin. You may want to read the detailed instructions.

This next step is tricky and really only necessary if you are not using vanilla MySQL. It is a required step to allow the plugin to use those non-standard API’s I mentioned earlier. You need to extract the offsets for the plugin to work. Follow the instructions given.

Once that is all done, you can:

INSTALL PLUGIN AUDIT SONAME 'libaudit_plugin.so';

If the plugin fails to load, check MySQL’s error log for the reason why and confer with the plugin documentation on how to resolve.

We now need to enable audit logging because nothing is enabled by default.

SET GLOBAL audit_record_cmds = "select,insert,update,delete";
SET GLOBAL audit_json_file = ON;
SET GLOBAL audit_record_objs = "*.*,{}";
SET GLOBAL audit_force_record_logins = ON;

Look inside @@datadir and you should see a file called mysql-audit.json. You can tail -f this file if you’d like to watch it to make sure data is being written.

If you’d like some more background reading on the audit plugin, check out Fernando’s post on Experiences with McAfee Audit Plugin.

Setting Up MongoDB

Let me begin by stating this is my first time really dealing with MongoDB in any real sense. I spun up an EC2 instance in AWS (m3.large, CentOS 6) and installed MongoDB using yum and the Mongo repositories.

As the ephemeral storage for my instance had been mounted at /opt, I changed just this one option in the supplied /etc/mongod.conf and restarted mongo (service mongod restart).

dbpath=/opt/mongo

I then copied the mysql-audit.json from the MySQL host using SSH:

[percona@mysql-host ~]$ scp -i .ssh/amazon.pem /data/mysql/mysql-audit.json root@54.177.22.22:/tmp/

Then I imported this JSON file directly into MongoDB:

[root@ip-10-255-8-15 ~]# mongoimport --db test --collection audit --drop --file /tmp/mysql-audit.json

The above mongoimport command specifies the database to import into (test) and the collection (audit). I also specify --drop, which drops the collection before importing. The drop is necessary because the Audit Plugin appends to the JSON file, so if we repeated these import steps without --drop we would be duplicating data.

If there is enough interest, via the comments below, I will investigate the potential of using the socket functionality of the Audit Plugin to have the events stream directly into mongo.

For now though, it’s a wash-rinse-repeat cycle; though there is the ability to rotate the JSON audit log after a certain amount of time and import each file on a daily basis.
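
As a sketch of that wash-rinse-repeat cycle, a daily cron job on the MongoDB host could pull the rotated file and append it; host names and paths below are hypothetical:

#!/bin/sh
# import yesterday's rotated audit log; no --drop here because each rotated
# file contains only new events, so appending does not duplicate data
TODAY=$(date +%F)
scp percona@mysql-host:/data/mysql/mysql-audit.json.1 /tmp/mysql-audit-$TODAY.json
mongoimport --db test --collection audit --file /tmp/mysql-audit-$TODAY.json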

Making Data Make Sense

Here is a sample “document” (ie: audit event) that is created by the Audit Plugin.

{ "_id" : ObjectId("5571ea51b1e714b8d6d804c8"), "msg-type" : "activity", "date" : "1433438419388", "thread-id" : "10214180", "query-id" : "295711011", "user" : "activebatchSVC", "priv_user" : "activebatchSVC", "host" : "ecn.corp", "ip" : "10.2.8.9", "cmd" : "select", "objects" : [ { "db" : "", "name" : "*", "obj_type" : "TABLE" }, { "db" : "risque", "name" : "markets_source_tfutvol_eab", "obj_type" : "VIEW" }, { "db" : "historical", "name" : "futureopt", "obj_type" : "TABLE" }, { "db" : "risque", "name" : "securities_futures_optdef", "obj_type" : "TABLE" }, { "db" : "risque", "name" : "markets_source_tfutvol_eab", "obj_type" : "VIEW" }, { "db" : "historical", "name" : "futureopt", "obj_type" : "TABLE" }, { "db" : "risque", "name" : "securities_futures_optdef", "obj_type" : "TABLE" } ], "query" : "SELECT far, bar, baz FROM mytable" }

!! MongoDB BUG !!

Notice that the last field in the document is named “query.” When I attempted some basic aggregate() functions on this field, I received errors about bad syntax. After much frustration, lots of Googling and repeated testing, I came to the conclusion that “query” is a reserved word in MongoDB. There is little to no documentation on this, aside from an almost 3-year-old bug report that simply helped confirm my suspicion.

To work around this issue, let’s rename all of the “query” fields to “qry”:

db.audit.update({}, { $rename: { "query": "qry"} }, false, true);

Now we can begin.

Basic Command Counters

Using any of the “top level” fields in each document, we can run reports (called aggregates in Mongo). So an easy one is to get a list of all unique “commands” and how many times they occurred.

> db.audit.aggregate([ { $group: { "_id": "$cmd", "count": { $sum: 1 } } } ]); { "_id" : "Failed Login", "count" : 2 } { "_id" : "select", "count" : 458366 } { "_id" : "Connect", "count" : 455090 } { "_id" : "insert", "count" : 2 } { "_id" : "Quit", "count" : 445025 } { "_id" : null, "count" : 1 }

Breaking down the command above, we are grouping all values in the “cmd” field and counting them up. The SQL equivalent would be:

SELECT cmd, count(cmd) FROM audit GROUP BY cmd;

User Counts

Let’s get a list and count of all user activities. This will include any of the commands listed in the previous aggregate.

> db.audit.aggregate([ { $group: { "_id": "$user", "count": { $sum: 1 } } } ]); { "_id" : "baw", "count" : 1883 } { "_id" : "eq_shrd", "count" : 1 } { "_id" : "reski", "count" : 3452 } { "_id" : "alin", "count" : 1 } { "_id" : "oey", "count" : 62 } { "_id" : "dule", "count" : 380062 } { "_id" : "ashi", "count" : 802 } { "_id" : "tech_shrd", "count" : 392464 }

A couple of interesting things come out here. First, the tech_shrd user performs the most ‘activities’ of all users. Is this expected? Is this normal? Your environment will determine that.

Specific User Activities

Let’s pick a specific user and get their activity counts to make sure they aren’t doing anything weird.

> db.audit.aggregate([ ... { $match: { "user": "tech_shrd" } }, ... { $group: { "_id": "$cmd", "count": { $sum: 1 } } } ... ]); { "_id" : "select", "count" : 132970 } { "_id" : "Connect", "count" : 133919 } { "_id" : "Quit", "count" : 125575 }

The SQL equivalent:

SELECT cmd, count(cmd) FROM audit WHERE user = 'tech_shrd' GROUP BY cmd;

Activities By User

We saw above that there were 2 insert commands. Who ran those?

> db.audit.aggregate([ ... { $match: { "cmd": "insert" } }, ... { $group: { "_id": "$user", "count": { $sum: 1 } } } ... ]); { "_id" : "graz", "count" : 2 }

More simply, we could have just done this to see the entire document/record which would include the SQL that the user executed, timestamp, hostname, etc.

> db.audit.find({ "cmd": "insert" });

The SQL equivalents:

SELECT user, count(user) FROM audit WHERE cmd = 'insert' GROUP BY user;
SELECT * FROM audit WHERE cmd = 'insert';

Table Activity

The most complex example I could come up with was trying to find out how many times each table was referenced. In theory, with weeks or even months of audit data, we could decide which tables aren’t needed any longer by the application.

> db.audit.aggregate(
...     { $unwind: "$objects" },
...     { $group: { _id : "$objects.name", count: { $sum: 1 } } },
...     { $sort: { "count": -1 } }
... );
{ "_id" : "*", "count" : 17359 }
{ "_id" : "swaps", "count" : 4392 }
{ "_id" : "futureopt", "count" : 3666 }
...(more)

You’ll notice in the sample document above that “objects” is an array of objects with 1 element for each table/view referenced in the ‘qry’ field. We need to “unwind” this array into single elements before we can count them. If someone knows a better way, please let me know. The Audit Plugin uses “*” to represent a derived table from a sub-SELECT, which has no proper name. We can remove all of these using:

> db.audit.update({ }, { $pull: { "objects": { "name": "*" } } }, false, true);

Audit Plugin Caveat: The ‘objects’ array is not a distinct list of the tables involved. For example, a SELECT statement that self-joins twice would produce 3 identical elements in the ‘objects’ array for that audit record. This may skew results. If anyone knows a cool Mongo trick to remove duplicates, please share in the comments.
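
One possible way (untested against this data set) to count each table at most once per audit record is to collapse the array with $addToSet before counting:

> db.audit.aggregate([
...     { $unwind: "$objects" },
...     { $group: { _id: "$_id", tables: { $addToSet: "$objects.name" } } },
...     { $unwind: "$tables" },
...     { $group: { _id: "$tables", count: { $sum: 1 } } },
...     { $sort: { count: -1 } }
... ]);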

Conclusion

For a quick wrap-up, we installed the McAfee Audit Plugin, exported some audit data, set up a MongoDB instance in AWS and imported the audit data. As you can see, the possibilities are plentiful on what kind of information you can gather. Feel free to comment on an aggregation you’d like to see if we were running this type of audit on your system.

Cheers,
Matthew

The post Auditing MySQL with McAfee and MongoDB appeared first on MySQL Performance Blog.

only want to do healthcheck

Latest Forum Posts - June 8, 2015 - 11:03pm
What sort of configuration do I need in order to know which cluster nodes are up or down?

Any configuration sample?
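
A hedged starting point: each node's health can be read from the wsrep status variables, and PXC also ships a clustercheck script (normally wired up as the xinetd mysqlchk service) that load balancers can poll:

SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';      -- Primary = node is in the working component
SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'; -- Synced = node is healthy
SHOW GLOBAL STATUS LIKE 'wsrep_ready';               -- ON = node accepts queries

# from the shell, for a load-balancer style check:
/usr/bin/clustercheck    # returns HTTP 200 when the node is usable, 503 otherwise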

Why pt-table-checksum diffs 0 while my master and slave are certainly different?

Latest Forum Posts - June 8, 2015 - 8:02pm
pt-table-checksum: 2.2.14
MySQL: 5.6.19
Binlog: row

Master:
mysql> select * from test.gf;
+------+
| id   |
+------+
|    2 |
|    5 |
|    7 |
+------+

Slave:
mysql> select * from test.gf;
+------+
| id   |
+------+
|    7 |
+------+

So master and slave are certainly different, but running pt-table-checksum shows no difference:

./pt-table-checksum h=localhost,u=root,p=PASSWORD,P=3306
...... other table checking results ......
06-09T10:55:31  0  0  3  1  0  0.003  test.gf

And the percona.checksums table on the slave shows a different crc:

select * from percona.checksums;
...... other output rows ......
| test | gf | 1 | 0.000625 | NULL | NULL | NULL | 46156dca | 1 | 435a7b4f | 3 | 2015-06-09 10:55:31 |

My question is: since the percona.checksums table on the slave shows that the crc differs between master and slave, why does pt-table-checksum's DIFFS column show 0?
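
A hedged guess at the cause: DIFFS is only non-zero for slaves the tool actually discovered and compared, so if this slave was not found by the default discovery the column stays 0 even though percona.checksums disagrees. Two things worth trying:

# re-check the stored checksums without recomputing them
./pt-table-checksum --replicate-check-only h=localhost,u=root,p=PASSWORD,P=3306

# make sure the slave is really reached, e.g. via an explicit DSN table
./pt-table-checksum --recursion-method dsn=h=localhost,D=percona,t=dsns h=localhost,u=root,p=PASSWORD,P=3306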

VM vs Dedicated

Latest Forum Posts - June 8, 2015 - 9:27am
Hi,

My organization decided to deploy applications in a VM environment, one VM per application, to control the resource (CPU, RAM, I/O, disk space) usage of each application. We have some options for the MySQL databases: the same VM as the application, a separate VM (one VM per database related to each application), or dedicated servers (all databases on the same servers). I'm looking for the pros and cons of running MySQL on a VM (same vs. separate) versus a dedicated server environment. I don't have knowledge of resource controlling in MySQL (if any exists).

Best regards,

MaxScale: A new tool to solve your MySQL scalability problems

Latest MySQL Performance Blog posts - June 8, 2015 - 6:00am

Ever since MySQL replication has existed, people have dreamed of a good solution to automatically split read from write operations, sending the writes to the MySQL master and load balancing the reads over a set of MySQL slaves. While it may seem easy to solve at first, the reality is far more complex.

First, the tool needs to make sure it parses and analyses correctly all the forms of SQL MySQL supports in order to sort writes from reads, something that is not as easy as it seems. Second, it needs to take into account if a session is in a transaction or not.

While in a transaction, the default transaction isolation level in InnoDB, Repeatable-read, and the MVCC framework ensure that you’ll get a consistent view for the duration of the transaction. That means all statements executed inside a transaction must run on the master, but when the transaction commits or rolls back, the following select statements on the session can again be load balanced to the slaves, if the session is in autocommit mode of course.

Then, what do you do with sessions that set variables? Do you restrict those sessions to the master or do you replay them to the slaves? If you replay the set variable commands, you need to associate the client connection with a set of MySQL backend connections, made of at least a master and a slave. What about temporary objects like “create temporary table…”? How do you deal with a slave that lags behind, or worse, with broken replication? Those are just a few of the challenges you face when you want to build a tool to perform read/write splitting.

Over the last few years, a few products have tried to tackle the read/write split challenge. The MySQL_proxy was the first attempt I am aware of at solving this problem but it ended up with many limitations. ScaleARC does a much better job and is very usable but it still has some limitations. The latest contender is MaxScale from MariaDB and this post is a road story of my first implementation of MaxScale for a customer.

Let me first introduce what MaxScale is exactly. MaxScale is an open source project, developed by MariaDB, that aims to be a modular proxy for MySQL. Most of the functionality in MaxScale is implemented as modules, including, for example, modules for the MySQL protocol, client side and server side.

Other families of available modules are routers, monitors and filters. Routers are used to determine where to send a query; Read/Write splitting is accomplished by the readwritesplit router. The readwritesplit router uses an embedded MySQL server to parse the queries… quite clever and hard to beat in terms of query parsing.

There are other routers available, the readconnrouter is basically a round-robin load balancer with optional weights, the schemarouter is a way to shard your data by schema and the binlog router is useful to manage a large number of slaves (have a look at Booking.com’s Jean-François Gagné’s talk at PLMCE15 to see how it can be used).

Monitors are modules that maintain information about the backend MySQL servers. There are monitors for a replicating setup, for Galera and for NDB cluster. Finally, the filters are modules that can be inserted in the software stack to manipulate the queries and the resultsets. All those modules have well-defined APIs and thus writing a custom module is rather easy, even for a non-developer like me; basic C skills are needed though. All event handling in MaxScale uses epoll and it supports multiple threads.

Over the last few months I worked with a customer having a challenging problem. On a PXC cluster, they have more than 30k queries/s and, because of their write pattern and to avoid certification issues, they want the ability to write to a single node and to load balance the reads. The application is not able to do the Read/Write splitting, so without a tool to do the splitting, only one node can be used for all the traffic. Of course, to make things easy, they use a lot of Java code that sets tons of session variables. Furthermore, for ISO 27001 compliance, they want to be able to log all the queries for security analysis (and also for performance analysis, why not?). So, high query rate, Read/Write splitting and full query logging; like I said, a challenging problem.

We experimented with a few solutions. One was a hardware load balancer that failed miserably – the implementation was just too simple, using only regular expressions. Another solution we tried was ScaleArc but it needed many rules to whitelist the set session variables and to repeat them to multiple servers. ScaleArc could have done the job but all the rules increase the CPU load and the cost is per CPU. The queries could have been sent to rsyslog and aggregated for analysis.

Finally, the HA implementation is rather minimalist and we had some issues with it. Then, we tried MaxScale. At the time, it was not GA and was (is still) young. Nevertheless, I wrote a query logging filter module to send all the queries to a Kafka cluster and we gave it a try. Kafka is extremely well suited to record a large flow of queries like that. In fact, at 30k qps, the 3 Kafka nodes are barely moving with cpu under 5% of one core. Although we encountered some issues, remember MaxScale is very young, it appeared to be the solution with the best potential and so we moved forward.

The folks at MariaDB behind MaxScale have been very responsive to the problems we encountered, we finally got to a very usable point, and the test in the pilot environment was successful. The solution is now being deployed in the staging environment and if all goes well, it will be in production soon. The following figure is a simplified view of the internals of MaxScale as configured for the customer:

The blocks in the figure are nearly all defined in the configuration file. We define a TCP listener using the MySQL protocol (client side) which is linked with a router, either the readwritesplit router or the readconn router.

The first step when routing a query is to assign the backends. This is where the read/write splitting decision is made. Also, as part of the steps required to route a query, 2 filters are called, regexp (optional) and Genlog. The regexp filter may be used to hot patch a query and the Genlog filter is the logging filter I wrote for them. The Genlog filter will send a json string containing roughly what can be found in the MySQL general query log plus the execution time.
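
To make that wiring more concrete, here is an illustrative MaxScale 1.x configuration fragment along those lines; server addresses, credentials, the regexp pattern and the custom Genlog filter module name are placeholders, not the customer's actual setup:

[server1]
type=server
address=10.0.0.1
port=3306
protocol=MySQLBackend

[MySQL Monitor]
type=monitor
module=mysqlmon
servers=server1
user=maxscale
passwd=maxscale_pw

[regexp]
type=filter
module=regexfilter
match=old_hint
replace=new_hint

[genlog]
type=filter
module=genlogfilter      # placeholder name for the custom query-logging filter

[RW Split Service]
type=service
router=readwritesplit
servers=server1
user=maxscale
passwd=maxscale_pw
filters=regexp|genlog

[RW Split Listener]
type=listener
service=RW Split Service
protocol=MySQLClient
port=4006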

Authentication attempts are also logged but the process is not illustrated in the figure. A key point to note: the authentication information is cached by MaxScale and is refreshed upon authentication failure; the refresh process is throttled to avoid overloading the backend servers. The servers are continuously monitored, the interval is adjustable, and the server statuses are used when deciding which backend to assign to a query.

In terms of HA, I wrote a simple Pacemaker resource agent for MaxScale that does a few fancy things like load balancing with IPTables (I’ll talk about that in a future post). With Pacemaker, we have a full-fledged HA solution with quorum and fencing on which we can rely.

Performance-wise, it is very good – a single core in a virtual environment was able to read/write split and log to Kafka about 10k queries per second. Although MaxScale supports multiple threads, we are still using a single thread per process, simply because it yields a slightly higher throughput and the custom Pacemaker agent deals with the use of a clone set of MaxScale instances. Remember we started early using MaxScale and the beta versions were not dealing gracefully with threads, so we built around multiple single-threaded instances.

So, since a conclusion is needed, MaxScale has proven to be a very useful and flexible tool that allows you to elaborate solutions to problems that were very hard to tackle before. In particular, if you need to perform read/write splitting, try MaxScale; it is the best solution for that purpose I have found so far. Keep in touch, I’ll surely write other posts about MaxScale in the near future.

The post MaxScale: A new tool to solve your MySQL scalability problems appeared first on MySQL Performance Blog.

fail joining cluster

Latest Forum Posts - June 5, 2015 - 1:20pm
Hello, I am trying to set up an XtraDB cluster on CentOS 7.

I have successfully bootstrapped the first node. When I try to join another node to the cluster, I cannot see any error on the donor; its logs are here:

http://notepad.cc/share/euJkcdxNIw

but on the joiner there are errors:

http://notepad.cc/share/6LxB3Fhoc5

High availability using MySQL in the cloud

Latest MySQL Performance Blog posts - June 4, 2015 - 12:00am

Next Wednesday (June 10) I’ll be co-presenting a webinar on using MySQL in the cloud for High Availability (HA). Joining me will be 451 Research analyst Jason Stamper and together we’ll talk about the realities of HA using MySQL in the cloud and how vendors are responding to changing application requirements with new developments that can enhance your deployment.

We’ll also present a comparison of available solutions along with key best practices you can follow for successfully attaining HA in the cloud with MySQL. The webinar is scheduled for June 10 at 10 a.m. Pacific. Register here.

Together we’ll cover:

  • What do HA MySQL deployments in the cloud look like today?
  • What are the developing requirements of applications based on future growth and scalability needs?
  • How are key vendors responding to these needs with new features and solution offerings, including those from OpenStack, Amazon, and others?
  • A high level, more technical comparison of the solutions
  • Keys to a successful HA MySQL deployment, including scaling from a single-node application to a cluster of MySQL instances

At the end of this webinar, you will have a good understanding of the options available for High Availability using MySQL in the cloud and how your current HA MySQL deployment in the cloud compares. You’ll also learn the tradeoffs you face depending on your HA solution and be able to identify which vendors and technologies are best suited for your needs.

This webinar, as usual, is free. Register now to reserve your spot and I hope to see you next Wednesday!

The post High availability using MySQL in the cloud appeared first on MySQL Performance Blog.

Xtradb cluster plus master/slave replication

Latest Forum Posts - June 3, 2015 - 11:54pm
Hi everybody,

we currently have a 3-node XtraDB cluster. In order not to "disturb" these nodes with specific tasks like backups
or heavy calculations, we want to set up slave nodes, with the XtraDB cluster as the master.

Do you think it is a good architecture? By good, I mean: robust, scalable, maintainable.

Thanks very much for any help,
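
For what it's worth, a commonly cited prerequisite (a general sketch, adjust to your versions, not a full design review) is that every cluster node which may act as an async master needs binary logging, log_slave_updates and a unique server-id; GTID makes repointing the slave to another node much easier:

# my.cnf fragment for each PXC node that can serve as an async master
[mysqld]
server-id                = 1        # unique per node
log-bin                  = mysql-bin
log-slave-updates                   # so Galera-applied writes reach the binlog
binlog-format            = ROW
# optional (MySQL/PXC 5.6+), eases failover of the async slave between nodes
gtid-mode                = ON
enforce-gtid-consistency = ON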

Percona XtraDB Cluster 5.6.24-25.11 is now available

Latest MySQL Performance Blog posts - June 3, 2015 - 1:19pm

Percona is glad to announce the new release of Percona XtraDB Cluster 5.6 on June 3rd 2015. Binaries are available from the downloads area or from our software repositories.

Based on Percona Server 5.6.24-72.2 including all the bug fixes in it, Galera Replicator 3.11, and on Codership wsrep API 25.11, Percona XtraDB Cluster 5.6.24-25.11 is now the current General Availability release. All of Percona’s software is open-source and free, and all the details of the release can be found in the 5.6.24-25.11 milestone at Launchpad.

New Features:

  • Percona XtraDB Cluster now allows reads in non-primary state by introducing a new session variable wsrep_dirty_reads (see the example after this list). This variable is boolean and is OFF by default. When set to ON, a Percona XtraDB Cluster node accepts queries that only read, but do not modify, data even if the node is in the non-PRIM state (#1407770).
  • Percona XtraDB Cluster now allows queries against INFORMATION_SCHEMA and PERFORMANCE_SCHEMA even with variables wsrep_ready and wsrep_dirty_reads set to OFF. This allows monitoring applications to monitor the node even when it is in non-PRIM state (#1409618).
  • The wsrep_replicate_myisam variable is now both a global and a session variable (#1280280).
  • Percona XtraDB Cluster now uses getifaddrs for node address detection (#1252700).
  • Percona XtraDB Cluster has implemented two new status variables: wsrep_cert_bucket_count and wsrep_gcache_pool_size for better instrumentation of Galera memory usage. Variable wsrep_cert_bucket_count shows the number of cells in the certification index hash-table and variable wsrep_gcache_pool_size shows the size of the page pool and/or dynamic memory allocated for gcache (in bytes).
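
An illustrative session showing the new wsrep_dirty_reads variable on a node that has fallen out of the primary component (the table and values are placeholders):

SET SESSION wsrep_dirty_reads = ON;                  -- boolean, OFF by default
SELECT id, status FROM app.orders WHERE id = 42;     -- read-only queries now succeed
UPDATE app.orders SET status = 'paid' WHERE id = 42; -- writes are still rejected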

Bugs Fixed:

  • Using concurrent REPLACE, LOAD DATA REPLACE or INSERT ON DUPLICATE KEY UPDATE statements in the READ COMMITTED isolation level or with the innodb_locks_unsafe_for_binlog option enabled could lead to a unique-key constraint violation. Bug fixed #1308016.
  • Using the Rolling Schema Upgrade as a schema upgrade method would allow only one ALTER TABLE to run concurrently, due to a conflict with wsrep_desync. Bugs fixed #1330944 and #1330941.
  • SST would resume even when the donor was already detected as being in SYNCED state. This was caused when wsrep_desync was manually set to OFF which caused the conflict and resumed the donor sooner. Bug fixed #1288528.
  • DDL would fail on a node when running a TOI DDL if one of the nodes had the table locked. Bug fixed #1376747.
  • The xinetd mysqlchk file was missing type = UNLISTED, which it needs to work out of the box. Bug fixed #1418614.
  • Conflict between enforce_storage_engine and wsrep_replicate_myisam for CREATE TABLE has been fixed. Bug fixed #1435482.
  • A specific trigger execution on the master server could cause a slave assertion error under row-based replication. The trigger would satisfy the following conditions: 1) it sets a savepoint; 2) it declares a condition handler which releases this savepoint; 3) the trigger execution passes through the condition handler. Bug fixed #1438990.
  • Percona XtraDB Cluster Debian init script was testing connection with wrong credentials. Bug fixed #1439673.
  • Race condition between IST and SST in xtrabackup-v2 SST has been fixed. Bugs fixed #1441762, #1443881, and #1451524.
  • SST will now fail when move-back fails instead of continuing and failing at the next step. Bug fixed #1451670.
  • Percona XtraDB Cluster .deb binaries were built without fast mutexes. Bug fixed #1457118.
  • The error message text returned to the client in the non-primary mode is now more descriptive ("WSREP has not yet prepared node for application use"), instead of "Unknown command" returned previously. Bug fixed #1426378.
  • An out-of-bound memory access issue in the seqno_reset() function has been fixed.
  • wsrep_local_cached_downto would underflow when the node on which it is queried has no writesets in gcache.

Other bugs fixed: #1290526.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

The post Percona XtraDB Cluster 5.6.24-25.11 is now available appeared first on MySQL Performance Blog.

Optimizing Percona XtraDB Cluster for write hotspots

Latest MySQL Performance Blog posts - June 3, 2015 - 6:50am

Some applications have a heavy write workload on a few records – for instance when incrementing a global counter: this is called a write hotspot. Because you cannot update the same row simultaneously from multiple threads, this can lead to performance degradation. When using Percona XtraDB Cluster (PXC), some users try to solve this specific issue by writing on multiple nodes at the same time. Good idea or bad idea? Read on!

Simultaneous writes on a standalone InnoDB server

Say you have these 3 transactions being run simultaneously (id is the primary key of the table):

# T1
UPDATE t SET ... WHERE id = 100
# T2
UPDATE t SET ... WHERE id = 100
# T3
UPDATE t SET ... WHERE id = 101

All transactions will require a row lock on the record they want to modify. So T3 can commit at the same time as T1 and/or T2, because it will not lock the same record as T1 and T2.

But T1 and T2 cannot execute simultaneously because they need to set a lock on the same record. Let’s assume T2 is executed by InnoDB first, how long does T1 need to wait? It is essentially the time needed for T2 to execute.

Simultaneous writes on multiple nodes (PXC)

Now is it any different if you have a 3-node PXC cluster and you want to run T1 on node1 and T2 on node2? Let’s review step by step how the cluster will execute these queries:

1. T1 starts executing on node1: a row lock is set on the record where id=100. T2 also starts executing on node2 and also sets a row lock on the record where id=100. How is it possible that 2 transactions set the same lock on the same record? Remember that locking in Galera is optimistic, meaning that a transaction can only set locks on the node where it is executing, but never on the remote nodes: here, T1 sets a lock on node1 and T2 sets a lock on node2.

2. Let’s assume T2 reaches commit before T1. T2 is then synchronously replicated on all nodes and it gets a global sequence number. At this point, a certification test is run on node2 to determine whether there is any write conflicts between T2 and other “in-flight” transactions in the cluster. Go to the next section if you want some clarification about “in-flight” transactions and the certification test.

Back to our transactions: T2 is the first transaction to try to commit, so no other transaction is “in-flight”: the certification test will succeed and InnoDB will be able to commit the transaction on node2. On node1, the same certification test will be run and T2 will be put in the apply queue (and it will be applied at some point in the near future).

Ok, wait a minute! No other transaction is “in-flight”, really? What about T1 on node1? Actually T1 is simply a local transaction on node1 and it is not known by the cluster yet. Therefore it is not what I called an “in-flight” transaction and it does not play any role in the certification test.

3. Now T1 reaches commit on node1. It is then synchronously replicated to all nodes and a certification test will run on node1. If T1 and T2 did commit simultaneously, there is a good chance that when T1 starts committing, T2 is still in the apply queue of node1. In this case, the certification test for T1 will fail. Why? To make sure that T2 will commit on all nodes no matter what, any other transaction that wants to set a lock on the record where id=100 will be rejected.

Then if the certification test for T1 fails, T1 is rolled back. The only option to commit T1 is to retry executing it.

Let’s assume that this second try is successful. How long did T1 have to wait before being executed? Essentially we had to execute T1 twice, so we had to replicate it twice, each replication taking 1 network RTT; we had to roll T1 back on node1 (rollback is expensive in InnoDB) and the application had to decide that T1 had to be executed a second time. That is a lot more work and waiting compared to the scenario on a single server.
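
In practice the application sees the failed certification as a deadlock (error 1213), so the usual deadlock-retry logic also covers certification conflicts. A minimal sketch, with a placeholder column name and host:

#!/bin/sh
# retry the hotspot update a few times when certification fails
for attempt in 1 2 3; do
    mysql -h node1 -e "UPDATE t SET counter = counter + 1 WHERE id = 100" && break
    echo "attempt $attempt failed (possible certification conflict), retrying" >&2
    sleep 0.1
done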

So where is the fundamental problem when we tried to write on several nodes? Galera uses optimistic locking, so we had to go very far in the execution of T1 before realizing that the query would not succeed. Multi-node writing is therefore not a good solution at all when the system sees write hotspots.

“In-flight” transactions and certification test

“In-flight” transactions are transactions that have already been applied on at least one node of the cluster but not on all nodes. Remember that even if replicating transactions is synchronous, applying committed transactions on remote nodes is not. So a transaction Tx can be committed and applied on node1 but not on node2: on node2, Tx will simply sit in an apply queue, waiting to be executed. Tx is then an “in-flight” transaction.

The goal of the certification test is to make sure that no transaction can prevent Tx from executing on node2: as Tx is already on node1 and as we want data consistency, we must make sure that Tx will execute successfully no matter what happens. And the magic of Galera is that the test is deterministic, so group communication is not necessary when a node runs a certification test.

Conclusion

So what are your options with PXC when the workload has write hotspots? The most obvious one is to write on a single node: then you will have the same locking model as with a standalone InnoDB server. Performance will not be as good as with a standalone server as you will have to pay for synchronous replication, but you will avoid the very expensive write conflicts.

Another option would be to rearchitect your application to write less often. For instance, for a global counter, you could maintain it with Redis and only periodically flush the value to PXC.

And if you want to understand more about the benefits and drawbacks of writing on multiple nodes of a PXC cluster, you can read these two posts.

The post Optimizing Percona XtraDB Cluster for write hotspots appeared first on MySQL Performance Blog.

Is 80% of RAM how you should tune your innodb_buffer_pool_size?

Latest MySQL Performance Blog posts - June 2, 2015 - 3:00am

It seems these days if anyone knows anything about tuning InnoDB, it’s that you MUST tune your innodb_buffer_pool_size to 80% of your physical memory. This is such prolific tuning advice, it seems ingrained in many a DBA’s mind.  The MySQL manual to this day refers to this rule, so who can blame the DBA?  The question is: does it make sense?

What uses the memory on your server?

Before we question such advice, let’s consider what can take up RAM in a typical MySQL server in their broad categories.  This list isn’t necessarily complete, but I think it outlines the large areas a MySQL server could consume memory.

  • OS Usage: Kernel, running processes, filesystem cache, etc.
  • MySQL fixed usage: query cache, InnoDB buffer pool size, mysqld rss, etc.
  • MySQL workload based usage: connections, per-query buffers (join buffer, sort buffer, etc.)
  • MySQL replication usage:  binary log cache, replication connections, Galera gcache and cert index, etc.
  • Any other services on the same server: Web server, caching server, cronjobs, etc.

There’s no question that for tuning InnoDB, the innodb_buffer_pool_size is the most important variable.  It’s expected to occupy most of the RAM on a dedicated MySQL/Innodb server, but of course other local services may affect how it is tuned.  If it (and other memory consumption on the server) is too large, swapping can kick in and degrade your performance rapidly.

Further, the workload of the MySQL server itself may cause a lot of variation.  Does the server have a lot of open connections and active query workload consuming memory?  The memory consumption caused by this can be dramatically different server to server.

Finally, replication mechanisms like Galera have their own memory usage pattern and can require some adjustments to your buffer pool.

We can see clearly that the 80% rule isn’t as nuanced as reality.

A rule of thumb

However, for the sake of argument, let’s say the 80% rule is a starting point.  A rule of thumb to help us get a quick tuning number to get the server running.  Assuming we don’t know anything really about the workload on the system yet, but we know that the system is dedicated to InnoDB, how might our 80% rule play out?

Total Server RAM    Buffer pool with 80% rule    Remaining RAM
1G                  800MB                        200MB
16G                 13G                          3G
32G                 26G                          6G
64G                 51G                          13G
128G                102G                         26G
256G                205G                         51G
512G                409G                         103G
1024G               819G                         205G

At lower numbers, our 80% rule looks pretty reasonable.  However, as we get into large servers, it starts to seem less sane.  For the rule to hold true, it must mean that workload memory consumption increases in proportion to the needed size of the buffer pool, but that usually isn’t the case.  Our server that has 1TB of RAM likely doesn’t need 205G of that to handle things like connections and queries (likely MySQL couldn’t handle that many active connections and queries anyway).

So, if you really just spent all that money on a beefy server do you really want to pay a 20% tax on that resource because of this rule of thumb?

The origins of the rule

At one of my first MySQL conferences, probably around 2006-2007 when I worked at Yahoo, I attended an InnoDB tuning talk hosted by Heikki Tuuri (the original author of InnoDB) and Peter Zaitsev.  I distinctly remember asking about the 80% rule because at the time Yahoo had some beefy 64G servers and the rule wasn’t sitting right with me.

Heikki’s answer stuck with me.  He said something to the effect of (not a direct quote): “Well, the server I was testing on had 1GB of RAM and 80% seemed about right”.  He then, if memory serves, clarified it and said it would not apply similarly to larger servers.

How should you tune?

80% is maybe a great start and rule of thumb.  You do want to be sure the server has plenty of free RAM for the OS and the usually unknown workload.  However, as we can see above, the larger the server, the more likely the rule will wind up wasting RAM.   I think for most people it starts and ends at the rule of thumb, mostly because changing the InnoDB buffer pool requires a restart in current releases.

So what’s a better rule of thumb?  My rule is that you tune the innodb_buffer_pool_size as large as possible without using swap when the system is running the production workload.  This sounds good in principle, but again, it requires a bunch of restarts and may be easier said than done.

Fortunately, MySQL 5.7 and its online buffer pool resize feature should make this an easier principle to follow.  Seeing lots of free RAM (and/or filesystem cache usage)?  Turn the buffer pool up dynamically.  Seeing some swap activity?  Just turn it down with no restart required.  In practice, I suspect there will be some performance-related hiccups when using this feature, but it is at least a big step in the right direction.
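
A small sketch of what that looks like in MySQL 5.7 (the target size is just an example; resizing happens in chunks):

SET GLOBAL innodb_buffer_pool_size = 48 * 1024 * 1024 * 1024;   -- 48G, no restart needed
SHOW STATUS LIKE 'Innodb_buffer_pool_resize_status';            -- watch the progress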

The post Is 80% of RAM how you should tune your innodb_buffer_pool_size? appeared first on MySQL Performance Blog.

Percona TokuMXse 3.0.3-1.0-rc6 is now available

Latest MySQL Performance Blog posts - June 1, 2015 - 10:54am

Percona is glad to announce the release of Percona TokuMXse 3.0.3-1.0rc6 on June 1st 2015. TokuMXse is a TokuMX storage engine for MongoDB 3.0.3. Downloads are available from our download site here. Packages for this Release Candidate will be available in our Ubuntu Trusty testing and CentOS 7 testing repositories.

Based on MongoDB 3.0.3, including all the bug fixes in it, Percona TokuMXse 3.0.3-1.0 is the current release candidate.

This release contains minor changes to the Fractal Tree, including:

  • Improved tokuftdump information.
  • Fixed sporadic recovery issue due to rare race between transaction rollback and logging.
  • Report capped boolean for uncapped collections when using tokuft storage engine.
Getting started

After installation, you can start mongod with tokuft as the storage engine, with:

$ mongod --storageEngine tokuft

Note: Transparent huge pages must be turned off for the fractal tree engine to work properly. If you attempt to run mongod with that option enabled, an error informing you of this will be printed to the output of the mongod process and it will fail to start.

You can check if the Transparent huge pages are enabled with:

$ cat /sys/kernel/mm/transparent_hugepage/enabled

[always] madvise never

You can disable the transparent huge pages by running the following command as root (NOTE: Setting this will last only until the server is rebooted):

echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
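
To keep the setting across reboots, one option (an assumption, not part of the release notes) is to re-apply it at boot, for example from /etc/rc.local:

cat >> /etc/rc.local <<'EOF'
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
EOF
chmod +x /etc/rc.local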

Percona TokuMXse currently supports various tuning parameters to pass to the mongod instance. Full list can be read by running mongod --help (this will print options for all available storage engines, including tokuft).

The post Percona TokuMXse 3.0.3-1.0-rc6 is now available appeared first on MySQL Performance Blog.

New PERFORMANCE_SCHEMA defaults in MySQL 5.7.7

Latest MySQL Performance Blog posts - June 1, 2015 - 7:27am

I thought it was worth a moment to reiterate the new Performance Schema related defaults that MySQL 5.7.7 brings to the table, for various reasons.

For one, most of you might have noticed that profiling was marked as deprecated in MySQL 5.6.7. So it is expected that you invest into learning more about Performance Schema (and Mark’s sys schema!).

Second, there are lots of virtual environments and appliances out there running Community Edition MySQL where Performance Schema can be a useful tool for analyzing performance. Thus, expect to see more articles about using PERFORMANCE_SCHEMA and SYS_SCHEMA from us!

Third, we have more and more junior readers who might benefit from light reads such as this.

The new defaults that I wanted to highlight are mentioned in the MySQL 5.7.7 release notes:
– The MySQL sys schema is now installed by default during data directory installation.
– The events_statements_history and events_transactions_history consumers now are enabled by default.

Note that if you are upgrading from an earlier version of MySQL to 5.7.7 to get these goodies you will need to run mysql_upgrade and restart the database for the above changes to take effect.

So what do these mean?

If you haven’t had a chance to dig into PERFORMANCE_SCHEMA, check out the quick start guide here. PERFORMANCE_SCHEMA is a nifty tool (implemented as a union of a storage engine and a schema in MySQL) to monitor MySQL server execution at a low level, with a focus on performance metrics. It monitors for events that have been “instrumented”, such as function calls, OS wait times, synchronization calls, etc. In performance nomenclature, “instruments” are essentially “probes”. The events that the instruments generate can be processed by consumers. Note that not all instruments or consumers are enabled by default.

Some would say that the structure of PERFORMANCE_SCHEMA may be complex and may not be very DBA-friendly. This is what led to the birth of SYS_SCHEMA. For those who are not familiar with Mark Leith’s SYS_SCHEMA and prefer TL;DR – it provides human friendly views, functions and procedures that can help you analyze database usage using PERFORMANCE_SCHEMA. If you haven’t had a chance to check it out yet you might want to read Miguel’s article on using the sys schema or Alexander Rubin’s article about using it in multitenant environments and give it a spin!

I welcome the fact that the events_statements_history and events_transactions_history consumers are enabled by default in MySQL 5.7.7, as it means that we get some handy performance details available to us out of the box in vanilla MySQL. Note that these are per-thread tables and by default the history length (the number of entries kept; more on those variables here) is automatically sized, so you may need to increase it.
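
A quick way to verify what you got out of the box and, on older setups, to enable the consumers yourself (standard PERFORMANCE_SCHEMA tables and variables):

SELECT NAME, ENABLED FROM performance_schema.setup_consumers
 WHERE NAME LIKE 'events_statements_history%'
    OR NAME LIKE 'events_transactions_history%';

SHOW VARIABLES LIKE 'performance_schema_events_statements_history_size';

UPDATE performance_schema.setup_consumers
   SET ENABLED = 'YES'
 WHERE NAME = 'events_statements_history';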

What details do you get off the bat with them?

Consider the following example:

mysql> select * from performance_schema.events_statements_history where event_id=353\G
*************************** 1. row ***************************
THREAD_ID: 20
EVENT_ID: 353
END_EVENT_ID: 456
EVENT_NAME: statement/sql/select
SOURCE: mysqld.cc:963
TIMER_START: 1818042501405000
TIMER_END: 1818043715449000
TIMER_WAIT: 1214044000
LOCK_TIME: 67000000
SQL_TEXT: select * from imdb.title limit 100
DIGEST: ec93c38ab021107c2160259ddee31faa
DIGEST_TEXT: SELECT * FROM `imdb` . `title` LIMIT ?
CURRENT_SCHEMA: performance_schema
OBJECT_TYPE: NULL
OBJECT_SCHEMA: NULL
OBJECT_NAME: NULL
OBJECT_INSTANCE_BEGIN: NULL
MYSQL_ERRNO: 0
RETURNED_SQLSTATE: NULL
MESSAGE_TEXT: NULL
ERRORS: 0
WARNINGS: 0
ROWS_AFFECTED: 0
ROWS_SENT: 100
ROWS_EXAMINED: 100
CREATED_TMP_DISK_TABLES: 0
CREATED_TMP_TABLES: 0
SELECT_FULL_JOIN: 0
SELECT_FULL_RANGE_JOIN: 0
SELECT_RANGE: 0
SELECT_RANGE_CHECK: 0
SELECT_SCAN: 1
SORT_MERGE_PASSES: 0
SORT_RANGE: 0
SORT_ROWS: 0
SORT_SCAN: 0
NO_INDEX_USED: 1
NO_GOOD_INDEX_USED: 0
NESTING_EVENT_ID: NULL
NESTING_EVENT_TYPE: NULL
1 row in set (0.00 sec)

As you can see from above, you get data similar to what you are used to seeing in EXPLAINs and slow query logs, such as query run time, locking time, rows sent/examined, etc. For instance, in the above output my query returned 100 rows (lines 26-27), avoided creating temp tables (lines 28-29), didn’t have to sort (lines 36-38), used no index (line 39) and ran for about 1.2 ms (TIMER_END-TIMER_START, with timer values reported in picoseconds). The list of details provided is not as abundant as it could be, but I imagine that with newer releases the list may grow.

If you want to read on and are curious about how to use Performance Schema for profiling check out Jervin’s great article here!

The post New PERFORMANCE_SCHEMA defaults in MySQL 5.7.7 appeared first on MySQL Performance Blog.

Error 1025 (errno -1 unknown error -1) on Alter Table Command

Latest Forum Posts - May 31, 2015 - 9:59pm
Hi, I've been trying to alter a table named import_permit, adding about 12 new columns to it, using these ALTER statements:
ALTER TABLE eips2live.import_permit ADD `closed_by_id` bigint(20) DEFAULT NULL;
ALTER TABLE eips2live.import_permit ADD `date_closed` datetime DEFAULT NULL;
ALTER TABLE eips2live.import_permit ADD `date_disapproved` datetime DEFAULT NULL;
ALTER TABLE eips2live.import_permit ADD `date_for_cancellation` datetime DEFAULT NULL;
ALTER TABLE eips2live.import_permit ADD `disapproved_by_id` bigint(20) DEFAULT NULL;
ALTER TABLE eips2live.import_permit ADD `for_cancellation_by_id` bigint(20) DEFAULT NULL;
ALTER TABLE eips2live.import_permit ADD `for_cancellation_by_etrade` varchar(255) DEFAULT NULL;
ALTER TABLE eips2live.import_permit ADD `reason_for_disapproval` varchar(255) DEFAULT NULL;
ALTER TABLE eips2live.import_permit ADD `remarks_for_cancellation` varchar(255) DEFAULT NULL;
ALTER TABLE eips2live.import_permit ADD `remarks_for_close` varchar(255) DEFAULT NULL;
ALTER TABLE eips2live.import_permit ADD `remarks_for_closed` varchar(255) DEFAULT NULL;
ALTER TABLE eips2live.import_permit ADD `date_opened` datetime DEFAULT NULL;

but they always end with this error:

Code: 1025. Error on rename of './eips2live/#sql-5e74_42' to './eips2live/import_permit' (errno: -1 - Unknown error -1)

Checking my mysql-error.log, I can see this entry at about the same time the error appears in Workbench:

InnoDB: Error: 'eips2live/import_permit' is already in tablespace memory cache

I have been trying to find this errno -1 everywhere, but the closest I got was errno 150, which seems to be related to constraints, and I still have to read more about it. Another one was a MySQL bug showing the same errno: -1 and Unknown error -1, but I do not know what to make of it, i.e. how to avoid/resolve this issue.

Hoping someone can give me a heads-up on what to do / where to look next. Thanks.

Here is my my.cnf for reference.
[mysql]

# CLIENT #
#port                          = 3306

[mysqld]

# GENERAL #
server-id                      = 2002
user                           = mysql
default-storage-engine         = InnoDB
socket                         = /srv/data/mysql/mysql.sock
pid-file                       = /srv/data/mysql/mysql.pid
bind-address                   = 0.0.0.0

# MyISAM #
key-buffer-size                = 32M
myisam-recover                 = FORCE,BACKUP

# SAFETY #
max-allowed-packet             = 16M
max-connect-errors             = 1000000
#sql-mode                      = STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_AUTO_VALUE_ON_ZERO,NO_ENGINE_SUBSTITUTION,NO_ZERO_DATE,NO_ZERO_IN_DATE
sysdate-is-now                 = 1
innodb                         = FORCE

# DATA STORAGE #
datadir                        = /srv/data/mysql/

# BINARY LOGGING #
#log-bin                       = /srv/data/mysql/mysql-bin
binlog-format                  = row
log-bin                        = slavedb-bin
expire-logs-days               = 14
sync-binlog                    = 1

# CACHES AND LIMITS #
#tmp-table-size                = 32M
#max-heap-table-size           = 32M
query-cache-type               = 0
query-cache-size               = 0
max-connections                = 500
thread-cache-size              = 50
open-files-limit               = 65535
#table-definition-cache        = 4096
#table-open-cache              = 4096

# INNODB #
innodb-flush-method            = O_DIRECT
innodb-log-files-in-group      = 2
#innodb-log-file-size          = 512M
innodb-log-file-size           = 5M
innodb-flush-log-at-trx-commit = 1
innodb-file-per-table          = 1
innodb-buffer-pool-size        = 26G

# LOGGING #
log-error                      = /srv/data/mysql/mysql-error.log
log-queries-not-using-indexes  = 1
slow-query-log                 = 1
slow-query-log-file            = /srv/data/mysql/mysql-slow.log
