

Feed aggregator

2 questions about innodb_buffer_pool_size parameter on RDS

Latest Forum Posts - February 6, 2015 - 8:36am
Hi all!
I am using MySQL 5.5.40, running on Amazon RDS (instance: m1.large).
I ran the following query, which is supposed to give the recommended value for this parameter:

SELECT CEILING(Total_InnoDB_Bytes*1.6/POWER(1024,3)) RIBPS
FROM (SELECT SUM(data_length+index_length) Total_InnoDB_Bytes
      FROM information_schema.tables WHERE engine='InnoDB') A

and I got 85 GB.
The current size is 5.3 GB ({DBInstanceClassMemory*3/4}).
I have 2 questions:
1. Is this query reliable? Should I upgrade the instance?
2. If I understand correctly, in this situation most of the queries do disk I/O. Is there any way to check whether a query went to disk?


Thank you!!
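For readers wanting to sanity-check the arithmetic, the heuristic in the query above can be restated in a few lines of Python; the 1.6 headroom factor comes from the query itself, and the sample byte count below is purely illustrative:

```python
import math

def recommended_buffer_pool_gb(total_innodb_bytes, headroom=1.6):
    """Mirror of CEILING(Total_InnoDB_Bytes*1.6/POWER(1024,3)) from the post."""
    return math.ceil(total_innodb_bytes * headroom / 1024 ** 3)

# ~53.1 GiB of InnoDB data+index suggests an 85 GB pool, as in the post:
print(recommended_buffer_pool_gb(53.1 * 1024 ** 3))  # 85
```

As for question 2: a common way to gauge how often reads miss the buffer pool and hit disk is to compare the Innodb_buffer_pool_reads status counter (reads that went to disk) against Innodb_buffer_pool_read_requests (logical reads) via SHOW GLOBAL STATUS.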

pt-table-checksums with interrupted connection

Latest Forum Posts - February 6, 2015 - 12:46am
My scenario:
A master server with a database named "management" and a table "teamdriveinvitation".
And a slave server with row-based replication enabled. All logins and passwords are the same.

I find that the update of the checksum table is NOT done by the logged-in session of the Percona tool; instead, the newly generated checksums are transported via the replicated statements.
Is this intended?

I logged this on the slave with the mysql.log:
root@EUHQTMSQL001-1A:/usr/local/scripts# tail -n1 -f mysql.log |grep -i teamdriveinvitation
17137 Query SHOW TABLES FROM `management` LIKE 'teamdriveinvitation'
17137 Query SHOW CREATE TABLE `management`.`teamdriveinvitation`
17137 Query EXPLAIN SELECT * FROM `management`.`teamdriveinvitation` WHERE 1=1
3 Query REPLACE INTO `percona`.`euirtmsql001_1a` (db, tbl, chunk, chunk_index, lower_boundary, upper_boundary, this_cnt, this_crc) SELECT 'management', 'teamdriveinvitation', '1', NULL, NULL, NULL, COUNT(*) AS cnt, COALESCE(LOWER(CONV(BIT_XOR(CAST(CRC32(CONCAT_WS(' #', `id`, `teamdriveid`, `invitedby`, `invited`, `message`, `dateinvited`, `role`, CONCAT(ISNULL(`invitedby`), ISNULL(`message`)))) AS UNSIGNED)), 10, 16)), 0) AS crc FROM `management`.`teamdriveinvitation` /*checksum table*/
3 Query UPDATE `percona`.`euirtmsql001_1a` SET chunk_time = '0.001044', master_crc = '91204457', master_cnt = '32' WHERE db = 'management' AND tbl = 'teamdriveinvitation' AND chunk = '1'
17137 Query SELECT MAX(chunk) FROM `percona`.`euirtmsql001_1a` WHERE db='management' AND tbl='teamdriveinvitation' AND master_crc IS NOT NULL
17137 Query SELECT CONCAT(db, '.', tbl) AS `table`, chunk, chunk_index, lower_boundary, upper_boundary, COALESCE(this_cnt-master_cnt, 0) AS cnt_diff, COALESCE(this_crc <> master_crc OR ISNULL(master_crc) <> ISNULL(this_crc), 0) AS crc_diff, this_cnt, master_cnt, this_crc, master_crc FROM `percona`.`euirtmsql001_1a` WHERE (master_cnt <> this_cnt OR master_crc <> this_crc OR ISNULL(master_crc) <> ISNULL(this_crc)) AND (db='management' AND tbl='teamdriveinvitation')

As you can see, connection "17137" was recently opened by the Percona tool. Then suddenly the update of the checksum table is done from connection "3", which is the normal replication stream. The tool then switches back to connection "17137" to check for any differences.
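For context, this matches how pt-table-checksum is documented to work: the tool connects to the master, switches its own session to statement-based binary logging, and issues REPLACE ... SELECT checksum statements there. Because those replicate as statements, each replica recomputes the checksum against its own copy of the data, which is exactly why the checksum-table writes arrive via the replication connection ("3") rather than the tool's session on the slave. A simplified sketch of the pattern (checksum table and column list reduced from the log above):

```sql
-- Run on the master; statement-based logging makes each replica
-- recompute the checksum against its own copy of the data.
SET SESSION binlog_format = 'STATEMENT';
REPLACE INTO percona.checksums (db, tbl, chunk, this_cnt, this_crc)
SELECT 'management', 'teamdriveinvitation', 1, COUNT(*),
       COALESCE(LOWER(CONV(BIT_XOR(CAST(CRC32(CONCAT_WS('#', id)) AS UNSIGNED)), 10, 16)), 0)
FROM management.teamdriveinvitation;
```

So yes, this is intended behavior; it is what lets the tool detect replicas whose data has drifted from the master.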

mariadb, galera cluster, haproxy, keepalived

Latest Forum Posts - February 4, 2015 - 7:37am
Hello guys,

Nice to be part of the community.

I recently set up a Galera cluster with HAProxy/keepalived as follows:

* 3-node setup
Node1 acts as the writing node.
Nodes 2 and 3 are for reading (if node1 fails, node2 becomes the writing node).

* I have haproxy/keepalived on all 3 machines.
* I have a VIP pointing at node1, but the reading sequence configured in HAProxy is node2, node3, node1.

My problem appeared when I was testing the failover scenario.
I shut down node1 and everything moved to nodes 2/3 (node2 took over as the writing node).

After bringing node1 back, MySQL was not able to start because it could not sync from the other 2 nodes.


my configuration on 3 nodes is as following

#node1
binlog_format = ROW
wsrep_provider = /usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name = "cluster"
wsrep_cluster_address = gcomm://
wsrep_node_address = 192.168.116.68
wsrep_node_name = 'MariaDBNode1'
wsrep_node_incoming_address = 192.168.116.68
wsrep_sst_method = rsync
wsrep_sst_donor = 192.168.116.69,192.168.116.70

#node2
binlog_format = ROW
wsrep_provider = /usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name = "cluster"
wsrep_cluster_address = gcomm://192.168.116.68,192.168.116.70
wsrep_node_address = 192.168.116.69
wsrep_node_name = 'MariaDBNode2'
wsrep_node_incoming_address = 192.168.116.69
wsrep_sst_method = rsync

#node3

binlog_format = ROW
wsrep_provider = /usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name = "cluster"
wsrep_cluster_address = gcomm://192.168.116.68,192.168.116.69
wsrep_node_address = 192.168.116.70
wsrep_node_name = 'MariaDBNode3'
wsrep_node_incoming_address = 192.168.116.70
wsrep_sst_method = rsync


Please note that before my final testing I had
wsrep_cluster_address = gcomm://192.168.116.68,192.168.116.69,192.168.116.70
on all nodes.

I changed it to what is shown in the configuration above.

I would appreciate it if you could help me.
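A note on the configuration above, offered as standard Galera practice rather than a certain diagnosis: an empty gcomm:// URL tells a node to bootstrap a brand-new cluster, so leaving it permanently in node1's config means a restarted node1 tries to form its own cluster instead of rejoining nodes 2 and 3. The usual recommendation is the same full member list on every node, with bootstrapping done explicitly only once:

```ini
# Sketch: identical cluster address on all three nodes (addresses from the post).
# Bootstrap the very first node once with --wsrep-new-cluster (or the
# distribution's bootstrap command) instead of a permanent empty gcomm://.
wsrep_cluster_address = gcomm://192.168.116.68,192.168.116.69,192.168.116.70
```

Also worth checking: wsrep_sst_donor expects wsrep_node_name values (e.g. MariaDBNode2), not IP addresses.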

Does Percona XtraDB support partitioning with foreign keys?

Latest Forum Posts - February 4, 2015 - 2:04am
Does Percona XtraDB support partitioning with foreign keys?
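For the record, XtraDB follows stock InnoDB semantics here, and MySQL (through the 5.6 era) does not allow foreign keys on partitioned tables in either direction. DDL like the following fails with error 1506, "Foreign keys are not yet supported in conjunction with partitioning" (table names illustrative):

```sql
-- Fails: a partitioned InnoDB/XtraDB table cannot define (or be the
-- target of) a foreign key.
CREATE TABLE child (
  id INT PRIMARY KEY,
  parent_id INT,
  FOREIGN KEY (parent_id) REFERENCES parent (id)
) ENGINE=InnoDB
PARTITION BY HASH (id) PARTITIONS 4;
```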

The future of MySQL quality assurance: Introducing pquery

Latest MySQL Performance Blog posts - February 4, 2015 - 12:00am

Being a QA Engineer, how would you feel if you had access to a framework which can generate 80+ crashes – a mix of hitting developer-introduced assertions (situations that should not happen) and serious unforeseen binary crashes – for the world’s most popular open source database software – each and every hour? What if you could do this running on a medium-spec machine – even a laptop?

The seniors amongst you may object “But… generating a crash or assertion is one thing – creating a repeatable testcase for the same is quite another.”

Introducing pquery, mtr_to_sql, reducer.sh (the pquery-enabled version), and more:

80+ coredumps per hour. Fully automatic testcase creation. Near-100% testcase reproducibility. C++ core. 15 Seconds run time per trial. Up to 20-25k lines of SQL executed per trial. CLI testcases. Compatible with sporadic issues. High-end automation of many aspects.

It all started when we realized how slow RQG really is. The Random Query Generator by Philip Stoev is a fantastic tool, and it has been greatly expanded over the years, but though it is a Swiss army knife in what you can do with it, when it comes to speed it is not the fastest. The Perl backend – combined with much randomness – has made the system slow. At least when compared with the ultra-fast execution of C++ code.

I discussed things with Alexey Bychko, and before long pquery was born. The C++ core code of pquery is Alexey’s creation. It easily executes 20k random lines of SQL in 15-20 seconds, with full logging (including errors) turned on. Though the tool looks fairly simple when reviewing the code, it is incredibly powerful.

Now, one thing which people being introduced to QA for MySQL (or any other large software with many features/functions/options etc.) have to grasp is “random spread testing”. If your random spread (and the amount of samples) is large enough (read: ‘sufficient’), it is relatively easy to get a good “overall quality” estimate of your software by doing a few runs (taking usually around 1-3 days – or longer if various options are being tested).

So, we now had speed (pquery) and near-perfect testcase simplification/reduction & reproducibility (the new pquery adaptation of reducer.sh) – but we needed one thing more: SQL which would cover… well… every area of mysqld. A fast framework without matching grammars is not worth much…

Converting the grammars from RQG seemed like a mammoth task – and we would really just be re-writing RQG in C. And creating a new SQL generator was an almost hopeless venture (I gave it a short try) – given the huge variety and complexity when constructing SQL statements.

I took the cheeky road. And it paid off. mtr_to_sql.sh was born.

The MTR testcases included (and delivered) with the MySQL server download contain individual tests for nearly every SQL syntax possible, including ones that – ought to – error out (but not crash). Not only that, there are specific MTR testcases for each feature, not to mention the many MTR testcases that were added to cover bug regression testing. Where there is smoke…

[roel@localhost 5.6]$ pwd
/bzr/5.6
[roel@localhost 5.6]$ find . | grep ".test$" | wc -l
3867
[roel@localhost 5.6]$ cat ~/percona-qa/mtr_to_sql.sh | grep BZR_PATH | head -n1
BZR_PATH="/bzr/5.6/mysql-test"
[roel@localhost 5.6]$ time ~/percona-qa/mtr_to_sql.sh
Done! Generated /tmp/mtr_to_sql.yy for RQG, and /tmp/mtr_to_sql.sql for pquery (SQL is indentical in both, but first is formatted for use with RQG)

real    0m20.150s
user    1m2.951s
sys     0m1.214s
[roel@localhost 5.6]$ cat /tmp/mtr_to_sql.sql | wc -l
107541

107.5K lines of high-quality SQL. Covering every possible functionality and buggy area out there. Free.

Let the testing begin!

I was quite dumbfounded when (after further tuning and scripting) we started seeing 50+, then 80+ cores per hour. Sig11’s (crashes), Sig6’s (asserts), server hangs, character set issues, error log errors and so on. Many crashes and issues in optimized code. Fun.

Our best weeks yet?

Pquery update: 220+ bugs logged, of which 140+ in MySQL, 60+ in Percona, 10+ in TokuTek (with limited TokuTek testing) #mysql #percona #qa

— Roel Van de Paar (@RoelVandePaar) November 21, 2014

Last week @rameshvs02 & @RoelVandePaar logged 70+ bugs. Today reducer.sh was updated w/ pquery functionality. Testcases on their way! #mysql

— Roel Van de Paar (@RoelVandePaar) October 27, 2014

All of the above can be done on commodity hardware, running a single server, running single-threaded SQL (single client) and with no special mysqld options activated.

Compare this to RQG. Even with combinations.pl running hundreds if not thousands of mysqld option combinations, and with nearly-everything-is-random-sql, it still comes nowhere near even one tenth of that number/speed. And this is even when using high-end hardware, 8 simultaneous mysqld servers, up to 20-25 client threads and at times special grammar-mix tools like MaxiGen etc.

In preparation for the Twitter week mentioned above we started running 4-5 simultaneous pquery runs (5x mysqld, still all single-threaded; a single client per mysqld) in different shell screen sessions, controlled by cron jobs.

A whole set of automation scripts was quickly added to handle the huge influx of bugs (you can get them all for free (GPLv2) at $ bzr branch lp:percona-qa – see the pquery*.sh files), and now you can quickly review a list of issues pquery discovered. For writing this article, I started a run and in its first hour it found exactly 85 crashes. Here is a report from around the two-hour mark:

[roel@localhost 830147]$ ~/percona-qa/pquery-results.sh
================ Sorted unique issue strings (Approx 439 trials executed, 167 remaining reducer scripts)
btr0btr.ic line 143 (Seen 31 times: reducers 123,124,135,150,159,164,16,173,175,178,179,18,196,199,224,22,238,245,286,310,319,324,366,388,38,401,67,73,78,88,9)
btr0btr.ic line 169 (Seen 1 times: reducers 158)
btr0cur.cc line 769 (Seen 1 times: reducers 304)
buf0buf.cc line 2738 (Seen 2 times: reducers 113,257)
fsp0fsp.cc line 1899 (Seen 5 times: reducers 145,174,409,69,85)
. is_set (Seen 32 times: reducers 112,165,170,192,203,218,231,249,24,253,259,273,278,280,289,329,331,333,336,338,363,371,373,379,384,398,404,44,47,6,72,82)
.length % 4 (Seen 4 times: reducers 169,220,307,80)
m_lock .= __null .... thd->mdl_context.is_lock_owner.m_namespace, ...., ...., MDL_SHARED (Seen 3 times: reducers 297,403,86)
row0quiesce.cc line 683 (Seen 1 times: reducers 97)
row0umod.cc line 338 (Seen 1 times: reducers 357)
.slen % 2 (Seen 21 times: reducers 106,122,131,144,221,250,251,252,275,282,296,316,318,32,359,375,39,405,407,43,46)
.slen % 4 (Seen 5 times: reducers 103,382,76,7,81)
sort_field->length >= length (Seen 1 times: reducers 138)
timer == __null (Seen 36 times: reducers 133,139,149,160,161,181,183,184,185,20,212,227,229,234,244,260,266,274,292,294,295,298,301,308,326,327,330,343,346,364,367,400,48,50,59,99)
.tlen % 2 (Seen 8 times: reducers 117,119,200,205,213,217,285,35)
.tlen % 4 (Seen 3 times: reducers 25,355,365)
trx0roll.cc line 264 (Seen 1 times: reducers 40)
Z21mysql_execute_commandP3THD (Seen 4 times: reducers 182,237,291,393)
ZN13Bounded_queueIhhE4pushEPh (Seen 3 times: reducers 101,118,214)
ZN8Protocol13end_statementEv (Seen 4 times: reducers 211,410,42,61)
================

For these (standard by now) pquery runs, we use pquery-run.sh. It starts a server, executes and monitors the pquery binary, and then checks on the outcome:

[roel@localhost percona-qa]$ ./pquery-run.sh
[07:23:21] [0] Workdir: /sda/459689 | Rundir: /dev/shm/459689 | Basedir: /sda/Percona-Server-5.6.21-rel69.0-687.Linux.x86_64-debug
[07:23:21] [0] mysqld Start Timeout: 60 | Client Threads: 1 | Queries/Thread: 25000 | Trials: 100000 | Save coredump trials only: TRUE
[07:23:21] [0] Pquery timeout: 15 | SQL file used: /home/roel/percona-qa/pquery/main-new.sql
[07:23:21] [0] MYSAFE: --maximum-bulk_insert_buffer_size=1M --maximum-join_buffer_size=1M --maximum-max_heap_table_size=1M --maximum-max_join_size=1M --maximum-myisam_max_sort_file_size=1M --maximum-myisam_mmap_size=1M --maximum-myisam_sort_buffer_size=1M --maximum-optimizer_trace_max_mem_size=1M --maximum-preload_buffer_size=1M --maximum-query_alloc_block_size=1M --maximum-query_prealloc_size=1M --maximum-range_alloc_block_size=1M --maximum-read_buffer_size=1M --maximum-read_rnd_buffer_size=1M --maximum-sort_buffer_size=1M --maximum-tmp_table_size=1M --maximum-transaction_alloc_block_size=1M --maximum-transaction_prealloc_size=1M --log-output=none --sql_mode=ONLY_FULL_GROUP_BY
[07:23:21] [0] Archiving a copy of this script (/home/roel/percona-qa/pquery-run.sh) in the workdir (/sda/459689) for later reference...
[07:23:21] [0] Archiving a copy of the infile used for this run (/home/roel/percona-qa/pquery/main-new.sql) in the workdir (/sda/459689) for later reference...
[07:23:21] [0] Generating datadir template (using mysql_install_db)...
[07:23:29] [0] Making a copy of mysqld used to /sda/459689/mysqld (handy for core file analysis and manual bundle creation)...
[07:23:29] [0] Storing a copy of ldd files for mysqld in same directory also...
[07:23:29] [0] Starting pquery testing iterations...
[07:23:29] [0] ====== TRIAL #1 ======
[07:23:29] [0] Ensuring there are no relevant servers running...
[07:23:29] [0] Clearing rundir...
[07:23:29] [0] Generating new workdir /dev/shm/459689/1...
[07:23:29] [0] Copying datadir from template...
[07:23:29] [0] Starting mysqld. Error log is stored at /dev/shm/459689/1/log/master.err
[07:23:29] [0] Waiting for mysqld (pid: 10879) to fully start...
[07:23:31] [0] Server started ok. Client: /sda/Percona-Server-5.6.21-rel69.0-687.Linux.x86_64-debug/bin/mysql -uroot -S/dev/shm/459689/1/socket.sock
[07:23:31] [0] Starting pquery (log stored in /dev/shm/459689/1/pquery.log)...
[07:23:31] [0] pquery running...
./pquery-run.sh: line 150: 10879 Aborted (core dumped) $CMD > ${RUNDIR}/${TRIAL}/log/master.err 2>&1 <---- success!
[07:23:32] [0] Cleaning up...
[07:23:36] [0] pquery summary: 578/769 queries failed (24.84% were successful)
[07:23:36] [0] mysqld core detected at /dev/shm/459689/1/data/core.10879.1000.1000.6.1414700611.mysqld
[07:23:36] [1] Copying rundir from /dev/shm/459689/1 to /sda/459689/1
[07:23:36] [1] ====== TRIAL #2 ======
[...]

(The MYSAFE settings are generic settings specifically suited for QA testing – kindly provided by Shane Bester.)

Many QA goodies are already built into the scripts: automated gdb query extraction from the core file and the error log (each added 3x to the end of the SQL trace to ensure maximum reproducibility), unique issue classification using bug-relevant strings, /dev/shm execution for optimal speed, etc. – it is all based on, and built from, many years of mysqld QA experience.

If you can’t wait to spin off some I-crashed-mysqld bugs of your own (it’s easy, you know…), download the tools at lp:percona-qa ($ bzr branch lp:percona-qa) and check out the many pquery* scripts, some of which are shown in use above.

And, when you’re ready for slightly more serious feature testing: add whatever statements your feature uses to the SQL file (or even replace it), and add the relevant mysqld options/parameters to pquery-run.sh (the MYEXTRA string). You can use sql-interleave.sh/sql-interleave-switch.sh, available in the pquery/ directory of the percona-qa branch mentioned above, to interleave new SQL into the main SQL file(s). Soon you will see fireworks.

Enjoy the show!

The post The future of MySQL quality assurance: Introducing pquery appeared first on MySQL Performance Blog.

Async replication from XtraDB cluster?

Latest Forum Posts - February 3, 2015 - 7:45am
Hi,
I have an XtraDB cluster (4 nodes). On each node of the cluster gtid_mode is on, along with all the other settings needed to enable GTID. I have another standalone instance (GTID enabled) that replicates in a master/master setup with one node of the XtraDB cluster. Basically, data flows from the XtraDB cluster node to the standalone node and the other way around. It works fine until the XtraDB node goes down. At that point I would like to point the standalone instance at another node. If I just run CHANGE MASTER TO master_host='second_node', MASTER_AUTO_POSITION=1, it may fail because it doesn't find all the binary logs. I could always run RESET MASTER on the standalone instance (the one replicating from the XtraDB cluster) and set GTID_PURGED to whatever gtid_executed shows on the new master node.
But this would break replication from the standalone node back to the XtraDB cluster, as the cluster node would not find the binary logs on the standalone node.
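Spelled out as statements, the failover described above looks roughly like this (the host name and GTID set are placeholders; as the post notes, the RESET MASTER route discards the standalone server's own binlog/GTID state and therefore breaks the reverse channel back to the cluster):

```sql
-- On the standalone replica, repoint at a surviving cluster node:
STOP SLAVE;
CHANGE MASTER TO MASTER_HOST='second_node', MASTER_AUTO_POSITION=1;
START SLAVE;

-- Fallback only if the new master has purged the needed binlogs:
-- RESET MASTER;                                  -- wipes local GTID state
-- SET GLOBAL gtid_purged = '<gtid_executed of the new master>';
```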

Has anyone tried this setup?

Thank you,
Liviu Lintes

Percona Live 2015 Lightning Talks, BoF submission deadline Feb. 13! And introducing “MySQL 101” program

Latest MySQL Performance Blog posts - February 3, 2015 - 5:00am

It’s hard to believe that the Percona Live MySQL Conference and Expo is just over two months away (April 13-16 in Santa Clara, California). So if you’ve been thinking about submitting a proposal for the popular “Lightning Talks” and/or “Birds of a Feather” sessions, it’s time to get moving because the deadline to do so is February 13.

Lightning Talks provide an opportunity for attendees to propose, explain, exhort, or rant on any MySQL-related topic for five minutes. Topics might include a new idea, successful project, cautionary story, quick tip, or demonstration. All submissions will be reviewed, and the top 10 will be selected to present during the one-hour Lightning Talks session on Wednesday (April 15) during the Community Networking Reception. Lighthearted, fun or otherwise entertaining submissions are highly welcome. Submit your proposal here.

“MySQL 101” is coming to Percona Live 2015.

Birds of a Feather (BoF) sessions enable attendees with interests in the same project or topic to enjoy some quality face time. BoFs can be organized for individual projects or broader topics (e.g., best practices, open data, standards). Any attendee or conference speaker can propose and moderate an engaging BoF. Percona will post the selected topics and moderators online and provide a meeting space and time. The BoF sessions will be held Tuesday night (April 14) from 6-7 p.m. Submit your BoF proposal here.

This year we’re also adding a new program for MySQL “newbies.” It’s called “MySQL 101,” and the motto of this special two-day program is: “You send us developers and admins, and we will send you back MySQL DBAs.” The two days of practical training will include everything they need to know to handle day-to-day MySQL DBA tasks.

“MySQL 101,” which is not included in regular Percona Live registration, will cost $400. However, the first 101 tickets are just $101 if you use the promo code “101” during checkout.

New: 25-Minute Sessions
On the first day of the conference, Percona is now offering 25-minute talks that pack tons of great information into a shorter format to allow for a wider range of topics. The 25-minute sessions include:

I also wanted to give another shout-out to Percona Live 2015’s awesome sponsors, which include: VMware, Yahoo, Deep Information Sciences, Pythian, Codership, Machine Zone, Box, Yelp, MariaDB, SpringbokSQL, Tesora, BlackMesh, SolidFire, Severalnines, Tokutek, VividCortex, FoundationDB, ScaleArc, Walmart eCommerce and more. (Sponsorship opportunities are still available.)

The great thing about Percona Live conferences is that there is something for everyone within the MySQL ecosystem – veterans and newcomers alike. And for the first time this year, that community expands to encompass OpenStack. Percona Live attendees can also attend OpenStack Live events. Those events run April 13-14, also at the Hyatt Regency Santa Clara and Santa Clara Convention Center.

OpenStack Live 2015’s awesome sponsors include: PMC Sierra and Nimble Storage!

With so much on offer this year, there are several more ticket options than before. Click the image below for a detailed view of what access each ticket type provides.

Register here for the Percona Live MySQL Conference and Expo.

Register here for the OpenStack Live Conference and Expo.

For full conference schedule details please visit the Percona Live MySQL Conference and Expo website and the OpenStack Live Conference Website!

I hope to see you in Santa Clara in a couple months!

 

The post Percona Live 2015 Lightning Talks, BoF submission deadline Feb. 13! And introducing “MySQL 101” program appeared first on MySQL Performance Blog.

Can a PK UPDATE be slow?

Latest Forum Posts - February 3, 2015 - 2:30am
Hello,

Can I ask a question regarding an update error?

I use MySQL 5.6 Community Edition.

I have a table like the one below.

create table t1
(
  col01 varchar(32) primary key,
  ...
  col20 varchar(32)
) engine=innodb;

The table has about 500,000 rows.

I ran 16 update statements in series. Each update statement is different, and each updates about 90% of the data.
When the eighth update statement was executed, it died with an error: "Statement cancelled due to timeout or client request".
The first seven were successful and took 2 or 3 minutes each.
The update statements look like:

update t1 set col01 = '#uid', col02='...', ... , col20='...' where col01='#uid';
commit;

They commit row by row.

Can anyone let me know the cause of the error?
Could the cause be the PK update?

Thanks

Percona MYSQL Memory Tuning and Monitoring

Latest Forum Posts - February 3, 2015 - 1:52am
Hi,

This question has probably been asked 100 times, but here goes. We are seeing strange memory issues on our Percona Galera cluster. Every so often we see chunks of memory being consumed. We have extremely high-spec servers (32 cores, 32GB RAM), so there is ample capacity. What I would like to do is get a better idea of what is consuming the memory; it could be a bad query, but nothing out of the ordinary is running.

We currently have the InnoDB buffer pool set to 4GB; we originally had it at 20GB but saw the same issues.

Any help or advice would be appreciated.

Thanks

Faster fingerprints and Go packages for MySQL

Latest MySQL Performance Blog posts - February 2, 2015 - 7:31am

I’m happy to announce Go packages for MySQL. Particularly exciting is a new query fingerprint function which is very fast and efficient, but I’ll talk about that later. First, go-mysql is currently three simple Go packages for parsing and aggregating MySQL slow logs. If you’ve been following Percona development, you’ve no doubt heard of Percona Cloud Tools (PCT), a somewhat new performance management web service for MySQL.

One tool in PCT is “Query Analytics” which continuously analyzes query metrics from the slow log. The slow log provides the most metrics and therefore the most performance insight into MySQL. percona-agent, the open-source agent for PCT, uses go-mysql to parse and analyze the slow log, so the code has both heavy formal testing and heavy real-world testing. If you’re working with Go, MySQL, and MySQL slow logs, we invite you to try go-mysql.

Last October we implemented a completely new query fingerprint function. (See “Fingerprints” in the pt-query-digest doc for a background on query fingerprints.) Since mysqldumpslow, the very first slow log parser circa 2000, fingerprints have been accomplished with regular expressions. This approach is normally fine, but percona-agent needs to be faster and more efficient than normal to reduce the cost of observation. Regex patterns are like little state machines. One regex can be very fast, but several are required to produce a good fingerprint. Therefore, the regex approach requires processing the same query several times to produce a fingerprint. Even worse: a regex can backtrack, which means a single logical pass through the query can result in several physical passes. In short: regular expressions are a quick and easy solution, but they are very inefficient.

Several years ago, a former colleague suggested a different approach: a single-pass, purpose-built, character-level state machine. The resulting code is rather complicated, but the resulting performance is a tremendous improvement: 3-5x faster in informal benchmarks on my machine, and it handles more edge cases. In simplest terms: the new fingerprint function does more with less, which makes percona-agent and Query Analytics better.
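To make the idea concrete, here is a toy single-pass, character-level fingerprinter in Python. It is an illustration of the technique only, not the actual go-mysql implementation, and it ignores many cases the real code handles (hex literals, comments, digits inside identifiers, collapsing IN-lists):

```python
def fingerprint(query: str) -> str:
    """Walk the query once, replacing literals with '?' and collapsing
    whitespace, without any regular expressions."""
    out = []
    i, n = 0, len(query)
    while i < n:
        c = query[i]
        if c in ("'", '"'):                 # string literal -> ?
            quote = c
            i += 1
            while i < n and query[i] != quote:
                i += 2 if query[i] == '\\' else 1   # skip escaped chars
            i += 1                          # skip closing quote
            out.append('?')
        elif c.isdigit():                   # numeric literal -> ?
            while i < n and (query[i].isdigit() or query[i] == '.'):
                i += 1
            out.append('?')
        elif c.isspace():                   # collapse whitespace runs
            while i < n and query[i].isspace():
                i += 1
            if out and out[-1] != ' ':
                out.append(' ')
        else:                               # normalize case, single pass
            out.append(c.lower())
            i += 1
    return ''.join(out).strip()

print(fingerprint("SELECT * FROM t  WHERE id = 42 AND name = 'bob'"))
# select * from t where id = ? and name = ?
```

The point of the design is that every character is examined exactly once, so cost grows linearly with query length, whereas a stack of regexes re-scans the query (and may backtrack) for each pattern.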

Check out github.com/percona/go-mysql, browse the percona-agent source code if you wish, and try Percona Cloud Tools for free.

The post Faster fingerprints and Go packages for MySQL appeared first on MySQL Performance Blog.

Failed to connect to MySQL server: DBI connect

Latest Forum Posts - February 2, 2015 - 7:14am
After upgrading to CentOS 6.6, our xtrabackup backup has stopped working. When I attempt to run the backup from the command line, it fails with: "Access denied for user 'mysql'@'localhost' (using password: NO) at /usr/bin/innobackupex line 2978".

The complete command line and traceback is at: http://pastebin.com/vEu7jMpR

Here are a few notes about my setup:
# mysql --version
mysql Ver 14.14 Distrib 5.1.73, for redhat-linux-gnu (x86_64) using readline 5.1
# uname -a
Linux hostname.domain.tld 2.6.32-504.8.1.el6.x86_64 #1 SMP Wed Jan 28 21:11:36 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
# cat /etc/redhat-release
CentOS release 6.6 (Final)
# yum list perl-DBD-MySQL
Loaded plugins: versionlock
Installed Packages
perl-DBD-MySQL.x86_64 4.013-3.el6 @base

Thanks very much for your help.
-bsquared


Cluster not starting when mysqld_multi is used

Latest Forum Posts - February 1, 2015 - 11:11pm
I have a Percona cluster.
For a new application I needed to run a non-clustered instance of mysqld on one node.
So I created a separate configuration in my.cnf, and everything worked great.

However, I decided to check whether it comes back up after a reboot, and found out that it does not.
mysqld starts, but it doesn't try to join the cluster, even though all the settings for the clustered instance remained the same.

Here is the startup log:

150202 10:47:48 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
150202 10:47:48 mysqld_safe Skipping wsrep-recover for 646956ec-a7c6-11e4-a49a-0eafe484ab9f:35 pair
150202 10:47:48 mysqld_safe Assigning 646956ec-a7c6-11e4-a49a-0eafe484ab9f:35 to wsrep_start_position
150202 10:47:48 [Note] WSREP: wsrep_start_position var submitted: '646956ec-a7c6-11e4-a49a-0eafe484ab9f:35'
150202 10:47:48 [Note] Plugin 'FEDERATED' is disabled.
150202 10:47:48 InnoDB: The InnoDB memory heap is disabled
150202 10:47:48 InnoDB: Mutexes and rw_locks use GCC atomic builtins
150202 10:47:48 InnoDB: Compressed tables use zlib 1.2.3
150202 10:47:48 InnoDB: Using Linux native AIO
150202 10:47:48 InnoDB: Initializing buffer pool, size = 128.0M
150202 10:47:49 InnoDB: Completed initialization of buffer pool
150202 10:47:49 InnoDB: highest supported file format is Barracuda.
150202 10:47:49 InnoDB: Waiting for the background threads to start
150202 10:47:50 Percona XtraDB (http://www.percona.com) 5.5.39-36.0 started; log sequence number 1600827
150202 10:47:50 [Note] Event Scheduler: Loaded 0 events
150202 10:47:50 [Note] WSREP: Initial position: 646956ec-a7c6-11e4-a49a-0eafe484ab9f:35
150202 10:47:50 [Note] WSREP: wsrep_load(): loading provider library 'none'
150202 10:47:50 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.39-36.0-55' socket: '/var/lib/mysql/mysql.sock' port: 3306 Percona XtraDB Cluster (GPL), Release rel36.0, Revision 824, WSREP version 25.11, wsrep_25.11.r4023

If I go back to the single-instance my.cnf, this node joins the cluster again.
Does mysqld_multi support clustered instances at all? And if it does, which option should I add?
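One avenue worth checking, offered as an assumption rather than a confirmed diagnosis: the startup log's line "wsrep_load(): loading provider library 'none'" means the server never received a wsrep_provider setting. mysqld_multi launches each server with the options of its own numbered group, so wsrep settings that live only in a section the multi-managed instance does not read are silently dropped. A sketch of how the groups might be laid out (paths and addresses illustrative):

```ini
# Assumption: each mysqld_multi group must carry its own wsrep options.
[mysqld1]                     # clustered instance
socket                = /var/lib/mysql/mysql.sock
datadir               = /var/lib/mysql
wsrep_provider        = /usr/lib64/libgalera_smm.so
wsrep_cluster_address = gcomm://node1,node2,node3

[mysqld2]                     # standalone instance for the new application
socket         = /var/lib/mysql2/mysql.sock
datadir        = /var/lib/mysql2
wsrep_provider = none
```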

One node crashing

Latest Forum Posts - February 1, 2015 - 10:07pm
I have a 4 node cluster, where 3 of the nodes are VPSes in the same data centre, and the fourth node is in the office, which is connected to the internet by fibre, and to the rest of the cluster via OpenVPN.

The VPSes have quad E5620 CPUs, 32GB memory and 2TB disk. The dedicated office server has an 8-thread i7-4820K CPU and 48GB memory, and runs 3 KVM VPSes as well. All have on the order of 10GB of memory unused.

All are running Ubuntu 14.04 LTS, and Percona 5.6.21 (the latest GHOST vulnerability has been patched on all servers).

MySQL on the dedicated node has crashed twice over the weekend with the message below.

Could anyone provide any pointers on where to look while I'm working through the manual, or in the meantime indicate how to reverse the message '150201 06:11:25 mysqld_safe WSREP: not restarting wsrep node automatically' so that it does restart?

Cheers,


Steve


7:11:24 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona XtraDB Cluster better by reporting any
bugs at https://bugs.launchpad.net/percona-xtradb-cluster

key_buffer_size=134217728
read_buffer_size=2097152
max_used_connections=2
max_threads=153
thread_count=3
connection_count=1
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 603396 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
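The "could use up to" figure in that header is straightforward arithmetic over the variables printed above it; here is a sketch (sort_buffer_size is not shown in this report, so the 2 MiB value below is purely an assumption, which is why the result differs from the 603396 K in the log):

```python
# Recompute mysqld's worst-case memory estimate, in KiB:
# key_buffer_size + (read_buffer_size + sort_buffer_size) * max_threads
def mysqld_worst_case_kib(key_buffer, read_buffer, sort_buffer, max_threads):
    return (key_buffer + (read_buffer + sort_buffer) * max_threads) // 1024

# key_buffer_size and read_buffer_size taken from the report above;
# sort_buffer_size (not printed there) assumed to be 2 MiB for illustration.
print(mysqld_worst_case_kib(134217728, 2097152, 2 * 1024 ** 2, 153))
```

Note this is a theoretical ceiling (every thread simultaneously using full buffers), not what the server actually allocates.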

Thread pointer: 0x49f4130
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 7fe1d403ee20 thread_stack 0x30000
/usr/sbin/mysqld(my_print_stacktrace+0x2c)[0x923b1c]
/usr/sbin/mysqld(handle_fatal_signal+0x352)[0x67cac2]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x10340)[0x7fe209627340]
/usr/sbin/mysqld[0xb42ab0]
/usr/sbin/mysqld(_ZN11MDL_context12release_lockEP10MDL_ticket+0x4e)[0x66e29e]
/usr/sbin/mysqld(_ZN18Global_backup_lock7releaseEP3THD+0x1c)[0x66127c]
/usr/sbin/mysqld(_ZN3THD7cleanupEv+0xee)[0x6ccb5e]
/usr/sbin/mysqld(_ZN3THD17release_resourcesEv+0x298)[0x6cf988]
/usr/sbin/mysqld(_Z29one_thread_per_connection_endP3THDb+0x2a)[0x5920fa]
/usr/sbin/mysqld(_Z24do_handle_one_connectionP3THD+0xf0)[0x6d7840]
/usr/sbin/mysqld(handle_one_connection+0x39)[0x6d7ad9]
/usr/sbin/mysqld(pfs_spawn_thread+0x140)[0xb41570]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x8182)[0x7fe20961f182]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7fe208b2c00d]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0): is an invalid pointer
Connection ID (thread ID): 196
Status: KILL_CONNECTION

You may download the Percona XtraDB Cluster operations manual by visiting
http://www.percona.com/software/percona-xtradb-cluster/. You may find information
in the manual which will help you identify the cause of the crash.




Moving data directory to another location

Latest Forum Posts - January 31, 2015 - 4:45pm
Recently I installed a fresh copy of Percona MySQL 5.6 on a server. I created my.cnf with the following parameters, but when I run "/etc/init.d/mysql start" I get "Starting MySQL (Percona Server). ERROR! The server quit without updating PID file (/home/databases/server.pid)."

Note: It works properly when I move /etc/my.cnf to another location.

[mysqld]
datadir=/home/databases
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0

thread-cache-size=50
join-buffer-size=4M
max_allowed_packet=64M
table-cache=1024
tmp-table-size=64M
max-heap-table-size=64M
key-buffer-size=256M

innodb_buffer_pool_size=512M
innodb_log_buffer_size=8M
#innodb_log_file_size=64M
innodb_log_file_size = 5242880
innodb_flush_log_at_trx_commit=2
innodb_thread_concurrency=8
innodb_flush_method=O_DIRECT
innodb_file_per_table
innodb_file_format=barracuda
innodb_lazy_drop_table=1

[mysqld_safe]
log-error=/home/databases/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid


What's wrong with this configuration file?
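Two settings in this file are likely suspects, though this is a guess from the symptoms rather than a confirmed diagnosis: `table-cache` (the old name for `table_open_cache`, no longer recognized by 5.6) and `innodb_lazy_drop_table` (a Percona Server 5.5 feature removed in 5.6). mysqld exits immediately on an unknown variable, which matches the "quit without updating PID file" error. A minimal, self-contained sketch of how to flag them (the temp file below just mirrors the two suspect lines from the posted my.cnf):

```shell
# Hypothetical check: copy the two suspect lines from the posted my.cnf into a
# temp file, then flag options that MySQL/Percona Server 5.6 may reject at
# startup with "unknown variable" (table-cache was renamed table_open_cache;
# innodb_lazy_drop_table was a 5.5-era Percona Server option).
cat > /tmp/my.cnf.suspect <<'EOF'
table-cache=1024
innodb_lazy_drop_table=1
EOF
for opt in table-cache innodb_lazy_drop_table; do
  grep -q "^$opt" /tmp/my.cnf.suspect && echo "suspect option: $opt"
done
```

The definitive answer is in the error log named under `[mysqld_safe]` (`/home/databases/log/mysqld.log`): look for `[ERROR] ... unknown variable` lines there.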


Thanks & Regards

I am getting a segmentation fault while running xtrabackup with MySQL 5.6

Latest Forum Posts - January 30, 2015 - 9:37pm
Here are the details of the RPMs and the Percona XtraBackup versions I am using on my server:

[mysql@trprtelgldb3 db01]$ innobackupex --user=backup --password=******** --use-memory=500M /db01/

InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
and Percona LLC and/or its affiliates 2009-2013. All Rights Reserved.

This software is published under
the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.

Get the latest version of Percona XtraBackup, documentation, and help resources:
http://www.percona.com/xb/p

150130 22:57:50 innobackupex: Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_group=xtrabackup' as 'backup' (using password: YES).
150130 22:57:50 innobackupex: Connected to MySQL server
150130 22:57:50 innobackupex: Executing a version check against the server...
Segmentation fault
[mysql@trprtelgldb3 db01]$ rpm -qa | grep -i mysql
MySQL-server-5.6.19-1.rhel5
MySQL-shared-5.6.19-1.rhel5
perl-DBD-MySQL-3.0007-2.el5
MySQL-shared-compat-5.6.19-1.rhel5
MySQL-client-5.6.19-1.rhel5
[mysql@trprtelgldb3 db01]$ uname -a
Linux trprtelgldb3.intra.searshc.com 2.6.18-274.18.1.el5xen #1 SMP Thu Feb 9 13:27:02 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
[mysql@trprtelgldb3 db01]$ rpm -qa | grep -i percona
percona-xtrabackup-2.2.8-5059.el5




****************************************************************************

[mysql@trprtelgldb3 db01]$ rpm -qa | grep -i mysql
MySQL-server-5.6.19-1.rhel5
MySQL-shared-5.6.19-1.rhel5
perl-DBD-MySQL-3.0007-2.el5
MySQL-shared-compat-5.6.19-1.rhel5
MySQL-client-5.6.19-1.rhel5
[mysql@trprtelgldb3 db01]$ uname -a
Linux trprtelgldb3.intra.searshc.com 2.6.18-274.18.1.el5xen #1 SMP Thu Feb 9 13:27:02 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
[mysql@trprtelgldb3 db01]$ rpm -qa | grep -i percona
percona-xtrabackup-2.1.7-721.rhel5




[mysql@trprtelgldb3 ~]$ innobackupex --user=backup --password=******** --use-memory=500M /db01/

InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
and Percona LLC and/or its affiliates 2009-2013. All Rights Reserved.

This software is published under
the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.

Get the latest version of Percona XtraBackup, documentation, and help resources:
http://www.percona.com/xb/p

150130 23:01:23 innobackupex: Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_group=xtrabackup' as 'backup' (using password: YES).
150130 23:01:23 innobackupex: Connected to MySQL server
150130 23:01:23 innobackupex: Executing a version check against the server...
Segmentation fault

Please let me know how to fix this issue.
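Not part of the original post, but since the segfault happens in both 2.1.7 and 2.2.8 right after "Executing a version check against the server...", one thing worth trying is skipping that step with innobackupex's `--no-version-check` option (present in the 2.x releases):

```shell
innobackupex --no-version-check --user=backup --password=******** --use-memory=500M /db01/
```

If the backup then completes, the crash is isolated to the version-check code and that detail belongs in the bug report.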

pt-archiver writing to STDOUT

Latest Forum Posts - January 30, 2015 - 4:03pm
For the next version of pt-archiver, I would like to suggest adding the ability to stream directly to STDOUT. That way, we could more easily pump MySQL data into our favorite object store. What's the best way to suggest this to the tool developers?
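Until such a feature lands, a named pipe can approximate STDOUT streaming: point pt-archiver's existing `--file` option at a FIFO and have a reader pump it onward. A minimal sketch of the pattern, with `printf` standing in for pt-archiver as the writer so the example is self-contained:

```shell
# FIFO streaming pattern: a writer process fills a named pipe while a reader
# streams it onward. With pt-archiver, the writer role would be taken by
# `pt-archiver --file "$fifo" ...` (hypothetical usage); `printf` stands in here.
fifo=/tmp/archiver.fifo
rm -f "$fifo"
mkfifo "$fifo"
printf 'row1\nrow2\n' > "$fifo" &   # writer (stand-in for pt-archiver)
rows=$(cat "$fifo")                 # reader: replace `cat` with your uploader
wait
echo "$rows"
```

In practice the reader end would be the object-store upload command (a hypothetical `uploader < "$fifo"`), so rows never touch a regular file.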

Thanks,
Jeffrey

Problem PREPARING incremental backup

Latest Forum Posts - January 30, 2015 - 1:00pm
I'm testing out Percona XtraBackup 2.2 and ran into a problem the first time I tried to restore an incremental backup. I assume I've done something wrong; hopefully someone can tell me what that is. I've included all the details below.

Executive Summary:

The PREPARE of the first incremental backup fails with the following error:
2015-01-30 14:33:35 b735b700 InnoDB: Error: space id and page n stored in the page
InnoDB: read in are 0:524293, should be 0:5!

Operating System = Ubuntu 14.04.01 (Bitnami Tomcat Stack)
Database = MySQL 5.5.38

Backup Steps
============
1) Start up a virtual machine (V1) from image (V) and log in
2) Create a full backup (B) of the database:
/usr/bin/innobackupex --defaults-file=/opt/bitnami/mysql/my.cnf --user=root --password=xxxx /mnt/mySQLDumps/BackupTesting
==> Successful, creates directory 2015-01-30_10-44-37

3) Add a few records to the database
4) Create an incremental backup (b1) of the database
/usr/bin/innobackupex --defaults-file=/opt/bitnami/mysql/my.cnf --incremental --user=root --password=xxxx /mnt/mySQLDumps/BackupTesting --incremental-basedir=/mnt/mySQLDumps/BackupTesting/2015-01-30_10-44-37
==> Successful, creates directory 2015-01-30_10-59-30

5) Add a few more records to the database
6) Create an incremental backup (b2) of the database
/usr/bin/innobackupex --defaults-file=/opt/bitnami/mysql/my.cnf --incremental --user=root --password=xxxx /mnt/mySQLDumps/BackupTesting --incremental-basedir=/mnt/mySQLDumps/BackupTesting/2015-01-30_10-59-30
==> Successful, creates directory 2015-01-30_11-06-54

7) Disconnect the disk (D) with these backups on it from V1.


Restore Steps
=============
My first test case was a point-in-time restore of the database on a new virtual machine (V2) to its state after the first incremental backup (b1).

8) Start up another virtual machine (V2) from image (V) and log in
9) Connect the disk (D) to (V2).
10) Prepare the base backup (B)
/usr/bin/innobackupex --defaults-file=/opt/bitnami/mysql/my.cnf --user=root --password=xxxx --apply-log --redo-only /mnt/mySQLDumps/BackupTesting/2015-01-30_10-44-37
==> Works OK

11) Prepare the incremental backup (b1)
/usr/bin/innobackupex --defaults-file=/opt/bitnami/mysql/my.cnf --user=root --password=xxxx --apply-log /mnt/mySQLDumps/BackupTesting/2015-01-30_10-44-37 --incremental-dir=/mnt/mySQLDumps/BackupTesting/2015-01-30_10-59-30
==> At this point I get an error:

xtrabackup version 2.2.8 based on MySQL server 5.6.22 Linux (i686) (revision id: )
incremental backup from 19492498688 is enabled.
xtrabackup: cd to /mnt/mySQLDumps/BackupTesting/2015-01-30_10-44-37
xtrabackup: This target seems to be already prepared.
xtrabackup: xtrabackup_logfile detected: size=2097152, start_lsn=(19492581719)
xtrabackup: using the following InnoDB configuration for recovery:
xtrabackup: innodb_data_home_dir = ./
xtrabackup: innodb_data_file_path = ibdata1:10M:autoextend
xtrabackup: innodb_log_group_home_dir = /mnt/mySQLDumps/BackupTesting/2015-01-30_10-59-30
xtrabackup: innodb_log_files_in_group = 1
xtrabackup: innodb_log_file_size = 2097152
xtrabackup: Generating a list of tablespaces
xtrabackup: page size for /mnt/mySQLDumps/BackupTesting/2015-01-30_10-59-30/ibdata1.delta is 16384 bytes
Applying /mnt/mySQLDumps/BackupTesting/2015-01-30_10-59-30/ibdata1.delta to ./ibdata1...
xtrabackup: using the following InnoDB configuration for recovery:
xtrabackup: innodb_data_home_dir = ./
xtrabackup: innodb_data_file_path = ibdata1:10M:autoextend
xtrabackup: innodb_log_group_home_dir = /mnt/mySQLDumps/BackupTesting/2015-01-30_10-59-30
xtrabackup: innodb_log_files_in_group = 1
xtrabackup: innodb_log_file_size = 2097152
xtrabackup: Starting InnoDB instance for recovery.
xtrabackup: Using 104857600 bytes for buffer pool (set by --use-memory parameter)
InnoDB: Using atomics to ref count buffer pool pages
InnoDB: The InnoDB memory heap is disabled
InnoDB: Mutexes and rw_locks use GCC atomic builtins
InnoDB: Memory barrier is not used
InnoDB: Compressed tables use zlib 1.2.8
InnoDB: Not using CPU crc32 instructions
InnoDB: Initializing buffer pool, size = 100.0M
InnoDB: Completed initialization of buffer pool
2015-01-30 14:33:35 b735b700 InnoDB: Error: space id and page n stored in the page
InnoDB: read in are 0:524293, should be 0:5!
2015-01-30 14:33:35 b735b700 InnoDB: Assertion failure in thread 3073750784 in file srv0start.cc line 1373
InnoDB: Failing assertion: prev_space_id + 1 == undo_tablespace_ids[i]
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.6/...-recovery.html
InnoDB: about forcing recovery.
19:33:35 UTC - xtrabackup got signal 6 ;
This could be because you hit a bug or data is corrupted.
This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.

Thread pointer: 0xab7ff60
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x10000
xtrabackup(my_print_stacktrace+0x2d) [0x85a011d]
xtrabackup(handle_fatal_signal+0x270) [0x8439be0]
[0xb7760400]
[0xb7760424]
/lib/i386-linux-gnu/libc.so.6(gsignal+0x47) [0xb738b827]
/lib/i386-linux-gnu/libc.so.6(abort+0x143) [0xb738ec53]
xtrabackup(srv_undo_tablespaces_init(unsigned long, unsigned long, unsigned long, unsigned long*)+0x6a8) [0x8207ff8]
xtrabackup(innobase_start_or_create_for_mysql()+0xd86) [0x8208eb6]
xtrabackup() [0x81adde7]
xtrabackup(main+0x173f) [0x8197f8f]
/lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0xb7376a83]
xtrabackup() [0x81ab151]

Please report a bug at https://bugs.launchpad.net/percona-xtrabackup
innobackupex: got a fatal error with the following stacktrace: at /usr/bin/innobackupex line 2633.
main::apply_log() called at /usr/bin/innobackupex line 1561
innobackupex: Error:
innobackupex: ibbackup failed at /usr/bin/innobackupex line 2633.
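For what it's worth, the steps above do appear to follow the sequence documented in the XtraBackup manual (`--redo-only` on the base, then the final incremental applied without it), so the assertion failure may be a bug in this 32-bit build rather than a procedural mistake. For reference, the documented order when several incrementals are merged is (directory names abbreviated):

```shell
# --redo-only on the base and on every incremental except the LAST one applied
innobackupex --apply-log --redo-only BASE-DIR
innobackupex --apply-log --redo-only BASE-DIR --incremental-dir=INC-1
innobackupex --apply-log BASE-DIR --incremental-dir=INC-LAST   # no --redo-only
```

For a restore only up to b1, applying b1 without `--redo-only` (as in step 11) matches this pattern.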

No metrics after master slave switch

Latest Forum Posts - January 30, 2015 - 5:59am
I had percona-agent running and reporting on a set of master and slave DBs. Everything was working fine until I promoted the slaves to masters and then rejoined the old masters as slaves. Once the switch was finished, my new masters were reporting correctly, but the slaves were not. The agents are connected, but I'm no longer getting any metrics. On one slave, the log is clean. On the other slave, I keep getting this error: write unix /var/run/mysqld/mysqld.sock: broken pipe. Why would switching roles break percona-agent? Any help is appreciated.

OpenStack Live 2015: FAQs on the who, what, where, when, why & how

Latest MySQL Performance Blog posts - January 30, 2015 - 5:00am

This April 13-14 Percona is introducing an annual conference called OpenStack Live. I’ve seen a few questions about the new event so decided to help clarify what this show is about and who should attend.

Unlike OpenStack Summits, held twice a year and dedicated primarily to developers, OpenStack Live is an opportunity for OpenStack evaluators and users of all levels to learn from experts on topics such as how to deploy, optimize, and manage OpenStack, and the role of MySQL as a crucial technology in this free and open-source cloud computing software platform. A full day of hands-on tutorials will also focus on making OpenStack users more productive and confident with this emerging technology.

Still confused about OpenStack Live 2015? Fear not! Here are the answers to commonly asked questions.

Q: Who should attend?
A: You should attend…

  • if you are currently using OpenStack and want to improve your skills and knowledge
  • if you are evaluating or considering using it
  • if you are a solutions provider – this is your opportunity to show the world your contributions and services

Q: Percona Live has a conference committee. Does OpenStack Live have one, too?
A: Yes, and it’s a completely different committee, composed of:

  • Mark Atwood, Director of Open Source Engagement at HP (Conference Chairman)
  • Rich Bowen, OpenStack Community Liaison at Red Hat
  • Jason Rouault, Senior Director OpenStack Cloud at Time Warner Cable
  • Peter Boros, Principal Architect at Percona

Q: Are the tutorials really “hands-on”?
A: Yes and most are at least 3-hours long. So you’ll need your laptop and power cord. Here’s a look at all of the OpenStack tutorials.

Q: How meaty are the sessions?
A: Very meaty indeed! Here’s a sample:

Q: I am going to attend the Percona Live MySQL Conference and Expo. Will my pass also include OpenStack Live 2015?
A: Yes, your Percona Live pass will be honored at the OpenStack Live conference. OpenStack Live attendees will also have access to the Percona Live/OpenStack Live Exhibit hall, keynotes, receptions and FUN activities April 13 and 16, allowing them to dive deeper into MySQL topics such as high availability, security, performance optimization, and much more. However, the OpenStack Live pass does not allow access to Percona Live breakout sessions or tutorials.

Q: Where can I register?
A: You can register here and take advantage of Early Bird discounts but those end Feb. 1 at 11:30 p.m. PST, so hurry!

The post OpenStack Live 2015: FAQs on the who, what, where, when, why & how appeared first on MySQL Performance Blog.
