
Percona Live Crash Courses: for MySQL and MongoDB!

Latest MySQL Performance Blog posts - February 2, 2016 - 11:45am


The database community constantly tells us how hard it is to find someone with MySQL and MongoDB DBA skills who can help with the day-to-day management of their databases. This is especially difficult when companies don’t have a full-time requirement for a DBA. Developers, system administrators and IT staff spend too much time trying to solve basic database problems that keep them from doing their day job. Eventually the little problems or performance inefficiencies that start to pile up  lead to big problems.  

In answer to this growing need, Percona Live is once again hosting Crash Courses for developers, systems administrators, and other technical resources. This year, we’ve compacted the training into a single day, and are offering two options: MySQL 101 and MongoDB 101!

Don’t let the name fool you: these courses are led by Percona experts who will show you the fundamentals of MySQL or MongoDB tools and techniques.

And it’s not just for DBAs: developers are encouraged to attend to hone their database skills. Developers who create code that can scale to match the demands of the online community are both a resource and an investment.

Below is a list of the topics covered in each course:

MySQL 101 Topics

  • Schema Review 101: How and What You Should Be Looking at…
  • Choosing a MySQL High Availability Solution Today
  • MySQL Performance Troubleshooting Best Practices
  • Comparing Synchronous Replication Solutions in the Cloud
  • Cost Optimizations Through MySQL Performance Optimizations

MongoDB 101 Topics

  • SQL with MySQL or NoSQL with MongoDB?
  • MongoDB for MySQL DBA’s
  • MongoDB Storage Engine Comparison
  • MongoDB 3.2: New Features Overview

 

Attendees will return ready to quickly and correctly take care of the day-to-day and week-to-week management of their MySQL or MongoDB environment.

The schedule and non-conference cost for the 101 courses are:

  • MySQL 101: Tuesday April 19th ($400)
  • MongoDB 101: Wednesday April 20th ($400)
  • Both MySQL and MongoDB 101 sessions ($700)

(Tickets to the 101 sessions do not grant access to the main Percona Live breakout sessions. Full Percona Live conference passes will grant admission to the 101 sessions. 101 Crash Course attendees will have full access to the Percona Live keynote speakers, the exhibit hall, and receptions.)

As a special promo, the first 101 people to purchase the 101 talks receive a $299.00 discount off the ticket price! Each session only costs $101! Get both sessions for a mere $202! Register now, and use the following codes for your first 101 discount:

  • Single101= $299 off of either the MySQL or MongoDB tickets
  • Double101= $498 off of the combined MySQL/MongoDB ticket

Sign up now for special track pricing. Click here to register.

Birds of a Feather

Birds of a Feather (BOF) sessions enable attendees with interests in the same project or topic to enjoy some quality face time. BOFs can be organized for individual projects or broader topics (e.g., best practices, open data, standards). Any attendee or conference speaker can propose and moderate an engaging BOF. Percona will post the selected topics and moderators online and provide a meeting space and time. The BOF sessions will be held Tuesday, April 19, 2016 at 6:00 p.m. The deadline for BOF submissions is February 7.

Lightning Talks

Lightning Talks provide an opportunity for attendees to propose, explain, exhort, or rant on any MySQL, NoSQL or Data in the Cloud-related topic for five minutes. Topics might include a new idea, successful project, cautionary story, quick tip, or demonstration. All submissions will be reviewed, and the top 10 will be selected to present during one of the scheduled breakout sessions during the week. Lighthearted, fun or otherwise entertaining submissions are highly welcome. The deadline for submitting a Lightning Talk topic is February 7, 2016.

Experimental Percona Docker images for Percona Server

Latest MySQL Performance Blog posts - February 2, 2016 - 9:02am

Docker is an incredibly popular tool for deploying software, so we decided to provide Percona Docker images for both Percona Server (MySQL) and Percona Server for MongoDB.

We want to create an easy way to try our products.

There are actually some images available from https://hub.docker.com/_/percona/, but these images are provided by Docker itself, not by Percona.

In our images, we provide all the varieties of storage engines available in Percona Server (MySQL/MongoDB).

Our images are available from https://hub.docker.com/r/percona/.

The simplest way to get going is to run the following:

docker run --name ps -e MYSQL_ROOT_PASSWORD=secret -d percona/percona-server:latest

for Percona Server/MySQL, and:

docker run --name psmdb -d percona/percona-server-mongodb:latest

for Percona Server/MongoDB.
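
Once a container is up, you can sanity-check it from the host with docker exec. This is a minimal sketch, assuming the container names used above and that the client binaries ship inside the images:

# MySQL container started above; the root password comes from MYSQL_ROOT_PASSWORD.
docker exec -it ps mysql -uroot -psecret -e "SELECT VERSION();"
# MongoDB container: open the bundled mongo shell and print the server version.
docker exec -it psmdb mongo --eval "db.version()"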

It is very easy to try the different storage engines that come with Percona Server for MongoDB. For example, to use RocksDB, run:

docker run --name psmdbrocks -d percona/percona-server-mongodb:latest --storageEngine=RocksDB

or PerconaFT:

docker run --name psmdbperconaft -d percona/percona-server-mongodb:latest --storageEngine=PerconaFT

We are looking for any feedback you’d like to provide: whether this is useful, and what improvements we could make.

Deadlock Encountered when using pt-online-schema-change

Latest Forum Posts - February 2, 2016 - 7:54am
I have a shell script looping 1000 times, doing inserts into a database table.
I am running pt-online-schema-change, creating a unique index on a column in that table.
pt-online-schema-change --alter-foreign-keys-method=auto --alter="ADD UNIQUE INDEX UQ_E_ID (E_ID)" --execute h=localhost,u=xx,p=**,D=db,t=t1

My shell script reports
./do_sql.sh
ERROR 1213 (40001) at line 1: Deadlock found when trying to get lock; try restarting transaction


The deadlock reported is
pt-deadlock-logger h=localhost,u=xx,p=**,D=db,t=t1
server ts thread txn_id txn_time user hostname ip db tbl idx lock_type lock_mode wait_hold victim query
localhost 2016-02-02T10:40:06 13616 0 0 root localhost db t1 PRIMARY RECORD S w 0 INSERT LOW_PRIORITY IGNORE INTO `db.`_t1_new` (`uid`, `name`, `accountid`, `deleted`, `e_id`, `parent_id`, `classification`) SELECT `uid`, `name`, `accountid`, `deleted`, `e_id`, `parent_id`, `classification` FROM `db`.`t1` LOCK IN SHARE MODE /*pt-online-schema-change 23347 copy table*/
localhost 2016-02-02T10:40:06 13847 0 0 root localhost db _t1_new TABLE AUTO-INC w 1 REPLACE INTO `db`.`_t1_new` (`uid`, `name`, `accountid`, `deleted`, `e_id`, `parent_id`, `classification`) VALUES (NEW.`uid`, NEW.`name`, NEW.`accountid`, NEW.`deleted`, NEW.`e_id`, NEW.`parent_id`, NEW.`classification`)


We are seriously considering using this tool to prevent downtime in our application when we need to modify our database schema, but I am quite concerned by this finding. Can anyone suggest workarounds, or point out problems with my commands?
Thanks in advance!

Older versions of XtraDB Cluster via apt-get

Latest Forum Posts - February 1, 2016 - 10:33am
Hello,

As the title suggests, I am trying to get an older version of XtraDB Cluster using apt-get but I am not having any luck using sudo apt-get install percona-xtradb-cluster-56=5.6.27-25.13.wheezy . Is this version no longer available via apt-get or am I missing something here?

Thanks for any help.
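
For reference, a general way to see which package versions the configured repositories actually expose (standard apt commands, not specific to this post) is:

# List the versions apt knows about, and where each one comes from.
apt-cache policy percona-xtradb-cluster-56
apt-cache madison percona-xtradb-cluster-56
# Then pin the install to one of the version strings shown in that output.
sudo apt-get install percona-xtradb-cluster-56=<version-string-from-the-list>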

InnoDB and TokuDB on AWS

Latest MySQL Performance Blog posts - February 1, 2016 - 8:38am

In a recent post, Vadim compared the performance of Amazon Aurora and Percona Server on AWS. This time, I am comparing write throughput for InnoDB and TokuDB, using the same workload (sysbench oltp/update/update_non_index) and a similar set-up (r3.xlarge instance, with general purpose ssd, io2000 and io3000 volumes) to his experiments.

All the runs used 16 threads for sysbench, and the following MySQL configuration files for InnoDB and TokuDB respectively:

[mysqld]
table-open-cache-instances=32
table_open_cache=8000
innodb-flush-method=O_DIRECT
innodb-log-files-in-group=2
innodb-log-file-size=16G
innodb-flush-log-at-trx-commit=1
innodb_log_compressed_pages=0
innodb-file-per-table=1
innodb-buffer-pool-size=20G
innodb_write_io_threads=8
innodb_read_io_threads=32
innodb_open_files=1024
innodb_old_blocks_pct=10
innodb_old_blocks_time=2000
innodb_checksum_algorithm=crc32
innodb_file_format=Barracuda
innodb_io_capacity=1500
innodb_io_capacity_max=2000
metadata_locks_hash_instances=256
innodb_max_dirty_pages_pct=90
innodb_flush_neighbors=1
innodb_buffer_pool_instances=8
innodb_lru_scan_depth=4096
innodb_sync_spin_loops=30
innodb-purge-threads=16

[mysqld]
tokudb_read_block_size=16K
tokudb_fanout=128
table-open-cache-instances=32
table_open_cache=8000
metadata_locks_hash_instances=256

[mysqld_safe]
thp-setting=never
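
For reference, a run of this kind is usually driven with something like the following. This is a rough sketch: the lua script location, connection options, table size and duration are assumptions; only the 16 threads and the workload names come from the post.

# Prepare the test table, then run the update_non_index workload for 10 minutes.
# Swap update_non_index.lua for oltp.lua or update_index.lua for the other workloads.
sysbench --test=/usr/share/doc/sysbench/tests/db/update_non_index.lua \
  --mysql-host=127.0.0.1 --mysql-user=sbtest --mysql-password=sbtest --mysql-db=sbtest \
  --oltp-table-size=50000000 --num-threads=16 prepare
sysbench --test=/usr/share/doc/sysbench/tests/db/update_non_index.lua \
  --mysql-host=127.0.0.1 --mysql-user=sbtest --mysql-password=sbtest --mysql-db=sbtest \
  --oltp-table-size=50000000 --num-threads=16 --max-time=600 --max-requests=0 run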

You can see the full set of graphs here, and the complete results here.

Let me start illustrating the results with this summary graph for the io2000 volume, showing how write throughput varies over time, per engine and workload (for all graphs, size is in 1k rows, so 1000 is actually 1M):

We can see a few things already:

  • InnoDB has better throughput for smaller table sizes.
  • The reverse is true as size becomes big enough (after 10M rows here).
  • TokuDB’s advantage is not noticeable on the oltp workload, though it is on the update workloads.

Let’s dig in a bit more and look at the extreme ends in terms of table size, starting with 1M rows:

and ending in 50M:

In the first case, we can see that not only does InnoDB show better write throughput, it also shows less variance. In the second case, we can confirm that the difference does not seem significant for oltp, but it is for the other workloads.

This should come as no surprise, as one of the big differences between TokuDB’s Fractal Trees and InnoDB’s B-tree implementation is the addition of message buffers to nodes to handle writes (the other big difference, for me, is node size). For write-intensive workloads, TokuDB needs to do a lot less tree traversing than InnoDB; in fact, traversal is done only to validate uniqueness constraints when required. Otherwise, writes are just injected into the message buffer, and the buffer is flushed to lower levels of the tree asynchronously (I refer you to this post for more details).

For oltp, InnoDB is at an advantage at smaller table sizes, as it does not need to scan message buffers all across the search path when reading (nothing is free in life, and this is the cost of TokuDB’s advantage for writes). I suspect this advantage is lost at high enough table sizes because, at that point, either engine will be I/O bound anyway.

My focus here was write throughput, but as a small example see how this is reflected on response time if we pick the 50M table size and drop oltp from the mix:

At this point, you may be wondering why I focused on the io2000 results (and if you’re not, bear with me please!). The reason is the results for io3000 and the general purpose ssd showed characteristics that I attribute to latency on the volumes. You can see what I mean by looking at the io3000 graph:

I say “I attribute” because, unfortunately, I do not have any metrics other than sysbench’s output to go with (an error I will amend on future benchmarks!). I have seen the same pattern while working on production systems on AWS, and in those cases I was able to correlate it with increases in stime and/or qtime on diskstats. The fact that this is seen on the lower and higher capacity volumes for the same workload, but not the io2000 one, increases my confidence in this assumption.

Conclusion

I would not consider TokuDB a general purpose replacement for InnoDB, by which I mean I would never blindly suggest that someone migrate from one to the other, as the performance characteristics are different enough to make this risky without a proper assessment.

That said, I believe TokuDB has great advantages for the right scenarios, and this test highlights some of its strengths:

  • It has a significant advantage over InnoDB on slower devices and bigger data sets.
  • For big enough data sets, this is even the case on fast devices and write-intensive workloads, as the B-tree becomes I/O bound much faster.

Other advantages of TokuDB over InnoDB, not directly evidenced from these results, are:

  • Better compression (helped by the much larger block size).
  • Better SSD lifetime, due to fewer and more sequential writes (sequential writes have, in theory at least, no write amplification compared to random ones, so even though the sequential/random difference should not matter for SSD performance, it does matter for lifetime).

Backup is aborting with corrupt page in innodb system tablespace

Latest Forum Posts - January 29, 2016 - 12:31pm
We are using MySQL Community Server 5.5.29 and percona-xtrabackup-2.2.5-5027.el6.x86_64
with InnoDB (innodb_file_per_table).

The backup is aborting after a few minutes with this error:

InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
and Percona LLC and/or its affiliates 2009-2013. All Rights Reserved.

This software is published under
the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.

Get the latest version of Percona XtraBackup, documentation, and help resources:
http://www.percona.com/xb/p

160119 13:10:43 innobackupex: Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_group=xtrabackup' as 'root' (using password: YES).
160119 13:10:43 innobackupex: Connected to MySQL server
160119 13:10:43 innobackupex: Executing a version check against the server...
160119 13:10:43 innobackupex: Done.
IMPORTANT: Please check that the backup run completes successfully.
At the end of a successful backup run innobackupex
prints "completed OK!".

innobackupex: Using mysql server version 5.5.29

innobackupex: Created backup directory /data/mysqlbackup/2016-01-19_13-10-43

160119 13:10:43 innobackupex: Starting ibbackup with command:
xtrabackup
--defaults-group="mysqld"
--backup
--suspend-at-end
--target-dir=/data/mysqlbackup/2016-01-19_13-10-43
--tmpdir=/var/lib/mysql/tmp
--use-memory=10G
--extra-lsndir='/var/lib/mysql/tmp'

innobackupex: Waiting for ibbackup (pid=25998) to suspend
innobackupex: Suspend file '/data/mysqlbackup/2016-01-19_13-10-43/xtrabackup_suspended_2'

xtrabackup version 2.2.5 based on MySQL server 5.6.21 Linux (x86_64) (revision id: )
xtrabackup: uses posix_fadvise().
xtrabackup: cd to /var/lib/mysql
xtrabackup: open files limit requested 102400, set to 102400
xtrabackup: using the following InnoDB configuration:
xtrabackup: innodb_data_home_dir = /var/lib/mysql/idbdata
xtrabackup: innodb_data_file_path = idbdata01:2G:autoextend
xtrabackup: innodb_log_group_home_dir = /var/lib/mysql/idblog
xtrabackup: innodb_log_files_in_group = 2
xtrabackup: innodb_log_file_size = 1073741824
xtrabackup: using O_DIRECT
>> log scanned up to (9868489235053)
[01] Copying /var/lib/mysql/idbdata/idbdata01 to /data/mysqlbackup/2016-01-19_13-10-43/idbdata01
>> log scanned up to (9868489235073)
>> log scanned up to (9868489235073)
>> log scanned up to (9868489235368)
>> log scanned up to (9868489235368)
>> log scanned up to (9868489235834)
>> log scanned up to (9868489235834)
>> log scanned up to (9868489235834)
>> log scanned up to (9868491216496)
>> log scanned up to (9868493327763)
>> log scanned up to (9868493608898)
>> log scanned up to (9868493645151)
>> log scanned up to (9868493646385)
>> log scanned up to (9868493647052)
>> log scanned up to (9868493647052)
>> log scanned up to (9868493647517)
>> log scanned up to (9868493647517)
>> log scanned up to (9868493647517)
>> log scanned up to (9868493647767)
>> log scanned up to (9868493647767)
>> log scanned up to (9868493649035)
>> log scanned up to (9868493649035)
>> log scanned up to (9868496525927)
>> log scanned up to (9868497839778)
>> log scanned up to (9868498830477)
>> log scanned up to (9868500494321)
>> log scanned up to (9868500494382)
>> log scanned up to (9868500494382)
>> log scanned up to (9868500494615)
>> log scanned up to (9868500494615)
>> log scanned up to (9868500495081)
>> log scanned up to (9868500495091)
>> log scanned up to (9868500495101)
>> log scanned up to (9868501246079)
>> log scanned up to (9868503846069)
>> log scanned up to (9868504726785)
>> log scanned up to (9868504726785)
>> log scanned up to (9868504798588)
>> log scanned up to (9868504799081)
>> log scanned up to (9868504799081)
>> log scanned up to (9868504799564)
>> log scanned up to (9868504799564)
>> log scanned up to (9868504799564)
>> log scanned up to (9868504799813)
>> log scanned up to (9868504799813)
>> log scanned up to (9868504800296)
>> log scanned up to (9868504800296)
>> log scanned up to (9868504800296)
>> log scanned up to (9868508666113)
>> log scanned up to (9868511227997)
>> log scanned up to (9868511228480)
>> log scanned up to (9868511228480)
>> log scanned up to (9868511228480)
>> log scanned up to (9868511228713)
>> log scanned up to (9868511228713)
>> log scanned up to (9868511229179)
>> log scanned up to (9868511229179)
>> log scanned up to (9868511229179)
[01] xtrabackup: Database page corruption detected at page 206334, retrying...
[01] xtrabackup: Database page corruption detected at page 206334, retrying...
[01] xtrabackup: Database page corruption detected at page 206334, retrying...
[01] xtrabackup: Database page corruption detected at page 206334, retrying...
>> log scanned up to (9868511229421)
[01] xtrabackup: Database page corruption detected at page 206334, retrying...
[01] xtrabackup: Database page corruption detected at page 206334, retrying...
[01] xtrabackup: Database page corruption detected at page 206334, retrying...
[01] xtrabackup: Database page corruption detected at page 206334, retrying...
[01] xtrabackup: Database page corruption detected at page 206334, retrying...
[01] xtrabackup: Error: failed to read page after 10 retries. File /var/lib/mysql/idbdata/idbdata01 seems to be corrupted.
[01] xtrabackup: Error: xtrabackup_copy_datafile() failed.
[01] xtrabackup: Error: failed to copy datafile.
innobackupex: Error: The xtrabackup child process has died at /usr/bin/innobackupex line 2681.


innochecksum shows a problem in the log sequence number check:

[root@atfkmysql04 idbdata]# innochecksum -p 206334 /var/lib/mysql/idbdata/idbdata01 -v -d
file /var/lib/mysql/idbdata/idbdata01 = 3833593856 bytes (233984 pages)...
checking pages in range 206334 to 206334
page 206334: log sequence number: first = 2722042417; second = 0 <<<<<<<<<<<<<<<<<
page 206334 invalid (fails log sequence number check)

I am currently running innochecksum on all ibd files, but so far no other files are affected. We are using the innodb_file_per_table parameter, so there should not be any user data in the InnoDB system tablespace.

How can I repair this corruption?

Best regards,
Martin




EXPLAIN FORMAT=JSON knows everything about UNIONs: union_result and query_specifications

Latest MySQL Performance Blog posts - January 29, 2016 - 11:09am

Ready for another post in the EXPLAIN FORMAT=JSON is Cool series! Great! This post will discuss how to see all the information that is contained in optimized queries with UNION, using the union_result and query_specifications elements.

 

When optimizing complicated queries with UNION, it is easy to get lost in the regular EXPLAIN  output trying to identify which part of the output belongs to each part of the UNION.

Let’s consider the following example:

mysql> explain -> select emp_no, last_name, 'low_salary' from employees -> where emp_no in (select emp_no from salaries -> where salary < (select avg(salary) from salaries)) -> union -> select emp_no, last_name, 'high salary' from employees -> where emp_no in (select emp_no from salaries -> where salary >= (select avg(salary) from salaries))G *************************** 1. row *************************** id: 1 select_type: PRIMARY table: employees partitions: NULL type: ALL possible_keys: PRIMARY key: NULL key_len: NULL ref: NULL rows: 299778 filtered: 100.00 Extra: NULL *************************** 2. row *************************** id: 1 select_type: PRIMARY table: salaries partitions: NULL type: ref possible_keys: PRIMARY,emp_no key: PRIMARY key_len: 4 ref: employees.employees.emp_no rows: 9 filtered: 33.33 Extra: Using where; FirstMatch(employees) *************************** 3. row *************************** id: 3 select_type: SUBQUERY table: salaries partitions: NULL type: ALL possible_keys: NULL key: NULL key_len: NULL ref: NULL rows: 2557022 filtered: 100.00 Extra: NULL *************************** 4. row *************************** id: 4 select_type: UNION table: employees partitions: NULL type: ALL possible_keys: PRIMARY key: NULL key_len: NULL ref: NULL rows: 299778 filtered: 100.00 Extra: NULL *************************** 5. row *************************** id: 4 select_type: UNION table: salaries partitions: NULL type: ref possible_keys: PRIMARY,emp_no key: PRIMARY key_len: 4 ref: employees.employees.emp_no rows: 9 filtered: 33.33 Extra: Using where; FirstMatch(employees) *************************** 6. row *************************** id: 6 select_type: SUBQUERY table: salaries partitions: NULL type: ALL possible_keys: NULL key: NULL key_len: NULL ref: NULL rows: 2557022 filtered: 100.00 Extra: NULL *************************** 7. row *************************** id: NULL select_type: UNION RESULT table: <union1,4> partitions: NULL type: ALL possible_keys: NULL key: NULL key_len: NULL ref: NULL rows: NULL filtered: NULL Extra: Using temporary 7 rows in set, 1 warning (0.00 sec) Note (Code 1003): /* select#1 */ select `employees`.`employees`.`emp_no` AS `emp_no`,`employees`.`employees`.`last_name` AS `last_name`,'low_salary' AS `low_salary` from `employees`.`employees` semi join (`employees`.`salaries`) where ((`employees`.`salaries`.`emp_no` = `employees`.`employees`.`emp_no`) and (`employees`.`salaries`.`salary` < (/* select#3 */ select avg(`employees`.`salaries`.`salary`) from `employees`.`salaries`))) union /* select#4 */ select `employees`.`employees`.`emp_no` AS `emp_no`,`employees`.`employees`.`last_name` AS `last_name`,'high salary' AS `high salary` from `employees`.`employees` semi join (`employees`.`salaries`) where ((`employees`.`salaries`.`emp_no` = `employees`.`employees`.`emp_no`) and (`employees`.`salaries`.`salary` >= (/* select#6 */ select avg(`employees`.`salaries`.`salary`) from `employees`.`salaries`)))

While we can guess that subquery 3 belongs to the first query of the union, and subquery 6 belongs to the second (which has number 4 for some reason), we have to be very careful (especially in our case) when queries use the same tables in both parts of the UNION.

The main issue with the regular EXPLAIN for UNION is that it has to re-present the hierarchical structure as a table. The same issue occurs when you want to store objects created in a programming language, such as Java, in the database.

EXPLAIN FORMAT=JSON, on the other hand, has hierarchical structure and more clearly displays how UNION was optimized:

mysql> explain format=json select emp_no, last_name, 'low_salary' from employees where emp_no in (select emp_no from salaries where salary < (select avg(salary) from salaries)) union select emp_no, last_name, 'high salary' from employees where emp_no in (select emp_no from salaries where salary >= (select avg(salary) from salaries))G *************************** 1. row *************************** EXPLAIN: { "query_block": { "union_result": { "using_temporary_table": true, "table_name": "<union1,4>", "access_type": "ALL", "query_specifications": [ { "dependent": false, "cacheable": true, "query_block": { "select_id": 1, "cost_info": { "query_cost": "921684.48" }, "nested_loop": [ { "table": { "table_name": "employees", "access_type": "ALL", "possible_keys": [ "PRIMARY" ], "rows_examined_per_scan": 299778, "rows_produced_per_join": 299778, "filtered": "100.00", "cost_info": { "read_cost": "929.00", "eval_cost": "59955.60", "prefix_cost": "60884.60", "data_read_per_join": "13M" }, "used_columns": [ "emp_no", "last_name" ] } }, { "table": { "table_name": "salaries", "access_type": "ref", "possible_keys": [ "PRIMARY", "emp_no" ], "key": "PRIMARY", "used_key_parts": [ "emp_no" ], "key_length": "4", "ref": [ "employees.employees.emp_no" ], "rows_examined_per_scan": 9, "rows_produced_per_join": 299778, "filtered": "33.33", "first_match": "employees", "cost_info": { "read_cost": "302445.97", "eval_cost": "59955.60", "prefix_cost": "921684.48", "data_read_per_join": "4M" }, "used_columns": [ "emp_no", "salary" ], "attached_condition": "(`employees`.`salaries`.`salary` < (/* select#3 */ select avg(`employees`.`salaries`.`salary`) from `employees`.`salaries`))", "attached_subqueries": [ { "dependent": false, "cacheable": true, "query_block": { "select_id": 3, "cost_info": { "query_cost": "516948.40" }, "table": { "table_name": "salaries", "access_type": "ALL", "rows_examined_per_scan": 2557022, "rows_produced_per_join": 2557022, "filtered": "100.00", "cost_info": { "read_cost": "5544.00", "eval_cost": "511404.40", "prefix_cost": "516948.40", "data_read_per_join": "39M" }, "used_columns": [ "salary" ] } } } ] } } ] } }, { "dependent": false, "cacheable": true, "query_block": { "select_id": 4, "cost_info": { "query_cost": "921684.48" }, "nested_loop": [ { "table": { "table_name": "employees", "access_type": "ALL", "possible_keys": [ "PRIMARY" ], "rows_examined_per_scan": 299778, "rows_produced_per_join": 299778, "filtered": "100.00", "cost_info": { "read_cost": "929.00", "eval_cost": "59955.60", "prefix_cost": "60884.60", "data_read_per_join": "13M" }, "used_columns": [ "emp_no", "last_name" ] } }, { "table": { "table_name": "salaries", "access_type": "ref", "possible_keys": [ "PRIMARY", "emp_no" ], "key": "PRIMARY", "used_key_parts": [ "emp_no" ], "key_length": "4", "ref": [ "employees.employees.emp_no" ], "rows_examined_per_scan": 9, "rows_produced_per_join": 299778, "filtered": "33.33", "first_match": "employees", "cost_info": { "read_cost": "302445.97", "eval_cost": "59955.60", "prefix_cost": "921684.48", "data_read_per_join": "4M" }, "used_columns": [ "emp_no", "salary" ], "attached_condition": "(`employees`.`salaries`.`salary` >= (/* select#6 */ select avg(`employees`.`salaries`.`salary`) from `employees`.`salaries`))", "attached_subqueries": [ { "dependent": false, "cacheable": true, "query_block": { "select_id": 6, "cost_info": { "query_cost": "516948.40" }, "table": { "table_name": "salaries", "access_type": "ALL", "rows_examined_per_scan": 2557022, "rows_produced_per_join": 2557022, "filtered": 
"100.00", "cost_info": { "read_cost": "5544.00", "eval_cost": "511404.40", "prefix_cost": "516948.40", "data_read_per_join": "39M" }, "used_columns": [ "salary" ] } } } ] } } ] } } ] } } } 1 row in set, 1 warning (0.00 sec) Note (Code 1003): /* select#1 */ select `employees`.`employees`.`emp_no` AS `emp_no`,`employees`.`employees`.`last_name` AS `last_name`,'low_salary' AS `low_salary` from `employees`.`employees` semi join (`employees`.`salaries`) where ((`employees`.`salaries`.`emp_no` = `employees`.`employees`.`emp_no`) and (`employees`.`salaries`.`salary` < (/* select#3 */ select avg(`employees`.`salaries`.`salary`) from `employees`.`salaries`))) union /* select#4 */ select `employees`.`employees`.`emp_no` AS `emp_no`,`employees`.`employees`.`last_name` AS `last_name`,'high salary' AS `high salary` from `employees`.`employees` semi join (`employees`.`salaries`) where ((`employees`.`salaries`.`emp_no` = `employees`.`employees`.`emp_no`) and (`employees`.`salaries`.`salary` >= (/* select#6 */ select avg(`employees`.`salaries`.`salary`) from `employees`.`salaries`)))

First, it puts the union_result member in the query_block at the very top level:

EXPLAIN: { "query_block": { "union_result": {

The union_result object contains information about how the result set of the UNION was processed:

"using_temporary_table": true, "table_name": "<union1,4>", "access_type": "ALL",

It also contains the query_specifications array, which holds all the details about the queries in the UNION:

"query_specifications": [ { "dependent": false, "cacheable": true, "query_block": { "select_id": 1, <skipped> { "dependent": false, "cacheable": true, "query_block": { "select_id": 4,

This representation is much clearer, and also contains all the details which the regular EXPLAIN misses.
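
Because the output is plain JSON, it is also easy to consume programmatically. As a rough sketch (the jq tool and the mysql client options are assumptions, not part of the original post; the employees sample database is the one used above), you could pull out the select_id of each UNION member like this:

# -N skips column names, -B/-r give raw batch output so the JSON keeps real newlines.
mysql -N -B -r employees -e "EXPLAIN FORMAT=JSON
  SELECT emp_no FROM employees WHERE emp_no < 10100
  UNION
  SELECT emp_no FROM employees WHERE emp_no > 499900" \
  | jq '.query_block.union_result.query_specifications[].query_block.select_id'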

Conclusion: EXPLAIN FORMAT=JSON not only contains additional optimization information for each query in the UNION, but also has a hierarchical structure that is more suitable for the hierarchical nature of the UNION.

building Percona server on osx

Latest Forum Posts - January 29, 2016 - 7:26am
I'm trying to build Percona server on osx, using 'cmake' (something I'm not familiar with),
and having issues.

using the command line specified in the documentation:

cmake . -DCMAKE_BUILD_TYPE=RelWithDebInfo -DBUILD_CONFIG=mysql_release -DFEATURE_SET=community

I get the following errors from the cmake phase:

-- Performing Test HAVE_C_-Wno-error=tautological-constant-out-of-range-compare - Failed
-- Performing Test HAVE_CXX_-Wno-error=tautological-constant-out-of-range-compare - Failed
-- Performing Test HAVE_C_-Wno-error=extern-c-compat - Failed
-- Performing Test HAVE_CXX_-Wno-error=extern-c-compat - Failed


and I get the following errors when attempting to build (there are more, but I'm just posting
a few since they're similar):

/Users/bobmeyer/Devel/tools/percona/percona-server-5.6.28-76.1/storage/tokudb/tokudb_thread.h:211:13: error: use of
undeclared identifier 'pthread_mutex_timedlock'
int r = pthread_mutex_timedlock(&_mutex, &waittime);

/Users/bobmeyer/Devel/tools/percona/percona-server-5.6.28-76.1/storage/tokudb/tokudb_thread.h:260:17: error: use of
undeclared identifier 'pthread_rwlock_timedrdlock'
while ((r = pthread_rwlock_timedrdlock(&_rwlock, &waittime)) != 0) {

In file included from /Users/bobmeyer/Devel/tools/percona/percona-server-5.6.28-76.1/storage/tokudb/ha_tokudb.cc:31:
In file included from /Users/bobmeyer/Devel/tools/percona/percona-server-5.6.28-76.1/storage/tokudb/ha_tokudb.h:31:
/Users/bobmeyer/Devel/tools/percona/percona-server-5.6.28-76.1/storage/tokudb/tokudb_background.h:30:10: fatal error:
'atomic' file not found

#include <atomic>
^
7 warnings and 6 errors generated.
make[2]: *** [storage/tokudb/CMakeFiles/tokudb.dir/ha_tokudb.cc.o] Error 1
make[1]: *** [storage/tokudb/CMakeFiles/tokudb.dir/all] Error 2
make: *** [all] Error 2

I'm running osx 10.7.5, with xcode 4.6.3, and gcc 4.9.1

thanks in advance for any assistance.

rm.

searching forums?

Latest Forum Posts - January 29, 2016 - 7:09am
I'm new to this forum, so sorry if this is a dumb question, but I can't find
any way to search the forums for specific topics...

Percona XtraDB Cluster 5.6.28-25.14 is now available

Latest MySQL Performance Blog posts - January 29, 2016 - 5:34am

Percona is glad to announce the new release of Percona XtraDB Cluster 5.6 on January 29, 2016. Binaries are available from the downloads area or from our software repositories.

Percona XtraDB Cluster 5.6.28-25.14 is now the current release, based on the following:

All Percona software is open source and free, and all the details of the release can be found in the 5.6.28-25.14 milestone at Launchpad.

For more information about relevant Codership releases, see this announcement.

Bugs Fixed:

  • 1494399: Fixed an issue caused by replication of events on certain system tables (for example, mysql.slave_master_info, mysql.slave_relay_log_info). Replication in the Galera eco-system is now avoided when bin-logging is disabled for said tables.
    NOTE: As part of this fix, when bin-logging is enabled, replication in the Galera eco-system will happen only if BINLOG_FORMAT is set to either ROW or STATEMENT. The recommended format is ROW, while STATEMENT is required only for the pt-table-checksum tool to operate correctly. If BINLOG_FORMAT is set to MIXED, replication of events in the Galera eco-system tables will not happen even with bin-logging enabled for those tables. (A quick way to check and change the current format is sketched after this list.)
  • 1522385: Fixed GTID holes caused by skipped replication. A slave might ignore an event replicated from master, if the same event has already been executed on the slave. Such events are now propagated in the form of special GTID events to maintain consistency.
  • 1532857: The installer now creates a /var/lib/galera/ directory (assigned to user nobody), which can be used by garbd in the event it is started from a directory that garbd cannot write to.
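
The BINLOG_FORMAT requirement mentioned in the note above is easy to verify on a running node. A minimal sketch, not taken from the release notes:

# Check the current binlog format and, if needed, switch to the recommended ROW format.
mysql -e "SHOW GLOBAL VARIABLES LIKE 'binlog_format';"
mysql -e "SET GLOBAL binlog_format = 'ROW';"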

Known Issues:

  • 1531842: Two instances of garbd cannot be started from the same working directory. This happens because each instance creates a state file (gvwstate.dat) in the current working directory by default. Although garbd is configured to use the base_dir variable, it was not registered due to a bug. Until garbd is fixed, you should start each instance from a separate working directory.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

XtraBackup for single database

Latest Forum Posts - January 29, 2016 - 2:22am
We have a production MySQL installation with about 20 databases containing 100+ tables each. The sizes of the databases vary between 5 and 30 GB of tablespace. We have to copy individual production databases to test servers regularly, which we have been doing with mysqldump so far, but that takes several days in some cases.

I read a lot of documentation on XtraBackup, innobackupex, transportable tablespaces, etc., but still haven't found a suitable solution. Our main concern is restore time, not backup time. The production system runs a dedicated backup slave that can be taken off replication and even powered off if needed.

I read in the manual (https://www.percona.com/doc/percona-...obackupex.html) that XtraBackup has the --databases option, which we could use, but do I understand correctly that to restore the database(s), you have to do it individually for each table instead of using --copy-back? With the number of tables in our databases, I don't think that's feasible, particularly if we have to manually create tables on the target systems before restoring them from the backup. But on the other hand, transferring full backups of the entire set of databases to the test systems is not what is desired either; it has to happen independently. Any ideas?
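
For reference, the partial-backup workflow being discussed generally looks something like this (the database name, paths and timestamped directory below are placeholders, not from the post):

# Take a partial backup of a single database, then prepare it for per-table export.
innobackupex --databases="mydb" /data/backups/
innobackupex --apply-log --export /data/backups/<timestamp>/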

Percona command found error when i use it

Latest Forum Posts - January 28, 2016 - 8:00pm
When I use the Percona command below, I get an error. Can anyone tell me the reason? Thanks



Vote Percona Server in LinuxQuestions.org Members Choice Awards

Latest MySQL Performance Blog posts - January 28, 2016 - 1:13pm

Percona is calling on you! Vote Percona for Database of the Year in LinuxQuestions.org Members Choice Awards 2015. Help our Percona Server get recognized as one of the best database options for data performance. Percona Server is a free, fully compatible, enhanced, open source drop-in replacement for MySQL® that provides superior performance, scalability and instrumentation.

LinuxQuestions.org, or LQ for short, is a community-driven, self-help web site for Linux users. Each year, LinuxQuestions.org holds an annual competition to recognize the year’s best-in-breed technologies. The winners of each category are determined by the online Linux community!

You can vote now for your favorite products of 2015 (Percona, of course!). This is your chance to be heard!

Voting ends on February 10th, 2016. You must be a registered member of LinuxQuestions.org with at least one post on their forums to vote.

Setup a MongoDB replica/sharding set in seconds

Latest MySQL Performance Blog posts - January 28, 2016 - 11:09am

In the MySQL world, we’re used to playing in the MySQL Sandbox. It allows us to deploy a testing replication environment in seconds, without a great deal of effort or navigating multiple virtual machines. It is a tool that we couldn’t live without in Support.

In this post I am going to walk through the different ways we have to deploy a MongoDB replica/sharding set test in a similar way. It is important to mention that this is not intended for production, but to be used for troubleshooting, learning or just playing around with replication.

Replica Set regression test’s diagnostic commands

MongoDB includes a .js file that allows us to deploy a replica set from the MongoDB shell. Just run the following:

# mongo --nodb > var rstest = new ReplSetTest( { name: 'replicaSetTest', nodes: 3 } ) > rstest.startSet() ReplSetTest Starting Set ReplSetTest n is : 0 ReplSetTest n: 0 ports: [ 31000, 31001, 31002 ] 31000 number { "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31000, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "replicaSetTest", "dbpath" : "$set-$node", "restart" : undefined, "pathOpts" : { "node" : 0, "set" : "replicaSetTest" } } ReplSetTest Starting.... [...]

At some point our mongod daemons will be running, each with its own data directory and port:

2133 pts/0 Sl+ 0:01 mongod --oplogSize 40 --port 31000 --noprealloc --smallfiles --rest --replSet replicaSetTest --dbpath /data/db/replicaSetTest-0 --setParameter enableTestCommands=1 2174 pts/0 Sl+ 0:01 mongod --oplogSize 40 --port 31001 --noprealloc --smallfiles --rest --replSet replicaSetTest --dbpath /data/db/replicaSetTest-1 --setParameter enableTestCommands=1 2213 pts/0 Sl+ 0:01 mongod --oplogSize 40 --port 31002 --noprealloc --smallfiles --rest --replSet replicaSetTest --dbpath /data/db/replicaSetTest-2 --setParameter enableTestCommands=1

Perfect. Now we need to initialize the replicaset:

> rstest.initiate() { "replSetInitiate" : { "_id" : "replicaSetTest", "members" : [ { "_id" : 0, "host" : "debian:31000" }, { "_id" : 1, "host" : "debian:31001" }, { "_id" : 2, "host" : "debian:31002" } ] } } m31000| 2016-01-24T10:42:36.639+0100 I REPL [ReplicationExecutor] Member debian:31001 is now in state SECONDARY m31000| 2016-01-24T10:42:36.639+0100 I REPL [ReplicationExecutor] Member debian:31002 is now in state SECONDARY [...]

and it is done!

> rstest.status() { "set" : "replicaSetTest", "date" : ISODate("2016-01-24T09:43:41.261Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "debian:31000", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 329, "optime" : Timestamp(1453628552, 1), "optimeDate" : ISODate("2016-01-24T09:42:32Z"), "electionTime" : Timestamp(1453628554, 1), "electionDate" : ISODate("2016-01-24T09:42:34Z"), "configVersion" : 1, "self" : true }, { "_id" : 1, "name" : "debian:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 68, "optime" : Timestamp(1453628552, 1), "optimeDate" : ISODate("2016-01-24T09:42:32Z"), "lastHeartbeat" : ISODate("2016-01-24T09:43:40.671Z"), "lastHeartbeatRecv" : ISODate("2016-01-24T09:43:40.677Z"), "pingMs" : 0, "configVersion" : 1 }, { "_id" : 2, "name" : "debian:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 68, "optime" : Timestamp(1453628552, 1), "optimeDate" : ISODate("2016-01-24T09:42:32Z"), "lastHeartbeat" : ISODate("2016-01-24T09:43:40.672Z"), "lastHeartbeatRecv" : ISODate("2016-01-24T09:43:40.690Z"), "pingMs" : 0, "configVersion" : 1 } ], "ok" : 1 }

There are many more commands you can run, just type rstest. and then press Tab twice to get the list. Follow this link if you need more info:

http://api.mongodb.org/js/current/symbols/_global_.html#ReplSetTest

What about sharding? Pretty similar:

> var shtest = new ShardingTest({ shards: 2, mongos: 1 })

This is the documentation link if you need more info:

http://api.mongodb.org/js/current/symbols/_global_.html#ShardingTest

It is important to mention that if you close the mongo shell where you ran the commands, then all the spawned mongod processes will also shut down.

Mtools

mtools is a collection of tools and scripts that make MongoDB DBAs’ lives much easier. It includes mlaunch, which can be used to start replica sets and sharded systems for testing.

https://github.com/rueckstiess/mtools

The mlaunch tool requires pymongo, so you need to install it:

# pip install pymongo

You can also use pip to install mtools:

# pip install mtools

Then, we can just start our replica set. In this case, with two nodes and one arbiter:

# mlaunch --replicaset --nodes 2 --arbiter --name "replicaSetTest" --port 3000 launching: mongod on port 3000 launching: mongod on port 3001 launching: mongod on port 3002 replica set 'replicaSetTest' initialized. # ps -x | grep mongod 10246 ? Sl 0:03 mongod --replSet replicaSetTest --dbpath /root/data/replicaSetTest/rs1/db --logpath /root/data/replicaSetTest/rs1/mongod.log --port 3000 --logappend --fork 10257 ? Sl 0:03 mongod --replSet replicaSetTest --dbpath /root/data/replicaSetTest/rs2/db --logpath /root/data/replicaSetTest/rs2/mongod.log --port 3001 --logappend --fork 10274 ? Sl 0:03 mongod --replSet replicaSetTest --dbpath /root/data/replicaSetTest/arb/db --logpath /root/data/replicaSetTest/arb/mongod.log --port 3002 --logappend --fork

Done. You can also deploy a sharded cluster, or a sharded replica set. More information in the following link:

https://github.com/rueckstiess/mtools/wiki/mlaunch
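
mlaunch can also manage the environments it creates; for example (subcommand names are from mtools and may vary by version, so treat this as an assumption):

mlaunch list    # show the spawned nodes and their state
mlaunch stop    # shut down the whole test environment
mlaunch start   # bring it back up with the same configuration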

Ognom Toolkit

“It is a set of utilities, functions and tests with the goal of making the life of MongoDB/TokuMX administrators easier.”

This toolkit has been created by Fernando Ipar and Sveta Smirnova, and includes a set of scripts that allow us to deploy a testing environment for both sharding and replication configurations. The main difference is that you can specify which storage engine will be the default, something you cannot do with the other two methods.

https://github.com/Percona-Lab/ognom-toolkit

We have the tools we need under the “lab” directory. Most of the names are pretty self-explanatory:

~/ognom-toolkit/lab# ls README.md start_multi_dc_simulation start_sharded_test stop_all_mongo stop_sharded_test common.sh start_replica_set start_single stop_replica_set stop_single

So, let’s say we want a replication cluster with four nodes that will use the PerconaFT storage engine. We have to do the following:

Set a variable with the storage engine we want to use:

# export MONGODB_ENGINE=PerconaFT

Specify where our mongod binary is:

# export MONGOD=/usr/bin/mongod

Start our 4-node replica set:

# ./start_replica_set Starting 4 mongod instances 2016-01-25T12:36:04.812+0100 I STORAGE Compression: snappy 2016-01-25T12:36:04.812+0100 I STORAGE MaxWriteMBPerSec: 1024 2016-01-25T12:36:04.813+0100 I STORAGE Crash safe counters: 0 about to fork child process, waiting until server is ready for connections. forked process: 1086 child process started successfully, parent exiting [...] MongoDB shell version: 3.0.8 connecting to: 127.0.0.1:27001/test { "set" : "rsTest", "date" : ISODate("2016-01-25T11:36:09.039Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "debian:27001", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 5, "optime" : Timestamp(1453721767, 5), "optimeDate" : ISODate("2016-01-25T11:36:07Z"), "electionTime" : Timestamp(1453721767, 2), "electionDate" : ISODate("2016-01-25T11:36:07Z"), "configVersion" : 4, "self" : true }, { "_id" : 1, "name" : "debian:27002", "health" : 1, "state" : 5, "stateStr" : "STARTUP2", "uptime" : 1, "optime" : Timestamp(0, 0), "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2016-01-25T11:36:07.991Z"), "lastHeartbeatRecv" : ISODate("2016-01-25T11:36:08.093Z"), "pingMs" : 0, "configVersion" : 2 }, { "_id" : 2, "name" : "debian:27003", "health" : 1, "state" : 0, "stateStr" : "STARTUP", "uptime" : 1, "optime" : Timestamp(0, 0), "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2016-01-25T11:36:07.991Z"), "lastHeartbeatRecv" : ISODate("2016-01-25T11:36:08.110Z"), "pingMs" : 2, "configVersion" : -2 }, { "_id" : 3, "name" : "debian:27004", "health" : 1, "state" : 0, "stateStr" : "STARTUP", "uptime" : 1, "optime" : Timestamp(0, 0), "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2016-01-25T11:36:08.010Z"), "lastHeartbeatRecv" : ISODate("2016-01-25T11:36:08.060Z"), "pingMs" : 18, "configVersion" : -2 } ], "ok" : 1 }

Now, just start using it:

rsTest:PRIMARY> db.names.insert({ "a" : "Miguel"}) rsTest:PRIMARY> db.names.stats() { "ns" : "mydb.names", "count" : 1, "size" : 36, "avgObjSize" : 36, "storageSize" : 16384, "capped" : false, "PerconaFT" : { [...]
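
When you are finished, the matching stop script from the directory listing above presumably tears the replica set down again (a guess based on the script names; this step is not shown in the original post):

# Tear down the test replica set started by start_replica_set.
./stop_replica_set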

Conclusion

When dealing with bugs, troubleshooting, or testing some application that needs a complex MongoDB infrastructure, these processes can save us a lot of time. There is no need to set up multiple virtual machines, deal with networking, or worry about human mistakes. Just say “I want a sharded cluster, do it for me.”

Cluster crashing quite often

Latest Forum Posts - January 28, 2016 - 6:30am
In preparation for migrating our TokuMX prod cluster to a mongo3 cluster, and before our Percona visit in a couple of weeks, we decided to stand up a dev and a QA cluster. This was done this week, and they are quite unstable, crashing several times a day.

Both environments have 3 machines. Generally the same two will crash (the primary and one secondary) and the third will remain in a secondary state.

Machines are Red Hat Enterprise Linux VMs with 8 CPUs and 16 GB RAM. The data directory is on a SAN.

This is the log entry during the crash. I looked on both machines and it appears to be the exact same trace dump:

2016-01-27T18:36:10.526-0600 F - Got signal: 6 (Aborted).

0x10b6cd2 0x10b6583 0x10b694a 0x7f22377286a0 0x7f2237728625 0x7f2237729e05 0x15c0a23 0x161a714 0x15ddf3f 0x15df70d 0x161fe60 0x1644f36 0x7f2238c8ca51 0x7f22377de93d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"400000","o":"CB6CD2"},{"b":"400000","o":"CB6 583"},{"b":"400000","o":"CB694A"},{"b":"7F22376F60 00","o":"326A0"},{"b":"7F22376F6000","o":"32625"}, {"b":"7F22376F6000","o":"33E05"},{"b":"400000","o" :"11C0A23"},{"b":"400000","o":"121A714"},{"b":"400 000","o":"11DDF3F"},{"b":"400000","o":"11DF70D"},{ "b":"400000","o":"121FE60"},{"b":"400000","o":"124 4F36"},{"b":"7F2238C85000","o":"7A51"},{"b":"7F223 76F6000","o":"E893D"}],"processInfo":{ "mongodbVersion" : "3.0.8", "gitVersion" : "nogitversion", "uname" : { "sysname" : "Linux", "release" : "2.6.32-573.7.1.el6.x86_64", "version" : "#1 SMP Thu Sep 10 13:42:16 EDT 2015", "machine" : "x86_64" }, "somap" : [ { "elfType" : 2, "b" : "400000", "buildId" : "7E06EF067281BA0E4AB5A7FDD89C759DFE5CEB71" }, { "b" : "7FFD648EF000", "elfType" : 3, "buildId" : "2426D85978796C7ED259CDC601A7C310C339A21C" }, { "b" : "7F22392C9000", "path" : "/usr/lib64/libsasl2.so.2", "elfType" : 3, "buildId" : "E0AEE889D5BF1373F2F9EE0D448DBF3F5B5113F0" }, { "b" : "7F22390B3000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "D053BB4FF0C2FC983842F81598813B9B931AD0D1" }, { "b" : "7F2238EA2000", "path" : "/lib64/libbz2.so.1", "elfType" : 3, "buildId" : "1250B1D041DD7552F0C870BB188DC3A34DF2651D" }, { "b" : "7F2238C85000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "D467973C46E563CDCF64B5F12B2D6A50C7A25BA1" }, { "b" : "7F2238A19000", "path" : "/usr/lib64/libssl.so.10", "elfType" : 3, "buildId" : "93610457BCF424BEBBF1F3FB44E51B51B50F2B55" }, { "b" : "7F2238636000", "path" : "/usr/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "06DDBB192AF74F99DB58F2150BFB83F42F5EBAD3" }, { "b" : "7F223842E000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "58C5A5FF5C82D7BE3113BE36DD87C7004E3C4DB1" }, { "b" : "7F223822A000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "B5AE05CEDC0CE917F50A3A468CFA2ACD8592E8F6" }, { "b" : "7F2237F24000", "path" : "/usr/lib64/libstdc++.so.6", "elfType" : 3, "buildId" : "28AF9321EBEA9D172CA43E11A60E02D0F7014870" }, { "b" : "7F2237CA0000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "989FE3A42CA8CEBDCC185A743896F23A0CF537ED" }, { "b" : "7F2237A8A000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "2AC15B051D1B8B53937E3341EA931D0E96F745D9" }, { "b" : "7F22376F6000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "A6D15926E61580E250ED91F84FF7517F3970CD83" }, { "b" : "7F22394E3000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "04202A4A8BE624D2193E812A25589E2DD02D5B5C" }, { "b" : "7F22374DC000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "F704FA7D21D05EF31E90FB4890FCA7F3D91DA138" }, { "b" : "7F22372A5000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "128802B73016BE233837EA9F2DCBC2153ACC2D6A" }, { "b" : "7F2237061000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "0C72521270790A1BD52C8F6B989EEA5A575085BF" }, { "b" : "7F2236D7A000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "DC11D5D89BDC77FF242481122D51E5A08DB60DA8" }, { "b" : "7F2236B76000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "13FFCD68952B7715DDF34C9321D82E3041EA9006" }, { "b" : "7F223694A000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "15782495E3AF093E67DDAE9A86436FFC6B3CC4D3" }, { "b" : "7F2236747000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "58BAC04A1DB3964A8F594EFFBE4838AD01214EDC" }, { "b" : "7F223653C000", "path" : "/lib64/libkrb5support.so.0", 
"elfType" : 3, "buildId" : "44A3A1C1891B4C8170C3DB80E7117A022E5EECD0" }, { "b" : "7F2236339000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "3BCCABE75DC61BBA81AAE45D164E26EF4F9F55DB" }, { "b" : "7F223611A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "2D0F26E648D9661ABD83ED8B4BBE8F2CFA50393B" } ] }}
mongod(_ZN5mongo15printStackTraceERSo+0x32) [0x10b6cd2]
mongod(+0xCB6583) [0x10b6583]
mongod(+0xCB694A) [0x10b694a]
libc.so.6(+0x326A0) [0x7f22377286a0]
libc.so.6(gsignal+0x35) [0x7f2237728625]
libc.so.6(abort+0x175) [0x7f2237729e05]
mongod(_Z23toku_ftnode_pf_callbackPvS_S_iP11pair_attr_s+0xAC3) [0x15c0a23]
mongod(_Z30toku_cachetable_pf_pinned_pairPvPFiS_S_S_iP11pair_attr_sES_P9cachefile10blocknum_sj+0x104) [0x161a714]
mongod(_Z24toku_ft_flush_some_childP2ftP6ftnodeP14flusher_advice+0x23F) [0x15ddf3f]
mongod(_Z28toku_ftnode_cleaner_callbackPv10blocknum_sjS_+0x1DD) [0x15df70d]
mongod(_ZN7cleaner11run_cleanerEv+0x270) [0x161fe60]
mongod(+0x1244F36) [0x1644f36]
libpthread.so.0(+0x7A51) [0x7f2238c8ca51]
libc.so.6(clone+0x6D) [0x7f22377de93d]
----- END BACKTRACE -----

Can't access cloud.percona.com

Latest Forum Posts - January 28, 2016 - 1:32am
I have tried many times to access cloud.percona.com, but I still cannot enter the website. Is there any other way to access it? Do any of you have the same problem as me?




Percona TokuDB cannot build on OS X El Capitan

Latest Forum Posts - January 28, 2016 - 1:20am
The Percona TokuDB plugin cannot build on OS X El Capitan; it shows this info:

Scanning dependencies of target tokudb_static_conv
[ 61%] Building CXX object storage/tokudb/PerconaFT/src/CMakeFiles/tokudb_static_conv.dir/ydb.cc.o
In file included from /Users/Eric/Downloads/percona-server-5.6.28-76.1/storage/tokudb/PerconaFT/src/ydb.cc:52:
In file included from /Users/Eric/Downloads/percona-server-5.6.28-76.1/storage/tokudb/PerconaFT/ft/ft-flusher.h:41:
In file included from /Users/Eric/Downloads/percona-server-5.6.28-76.1/storage/tokudb/PerconaFT/ft/ft-internal.h:49:
In file included from /Users/Eric/Downloads/percona-server-5.6.28-76.1/storage/tokudb/PerconaFT/ft/node.h:40:
In file included from /Users/Eric/Downloads/percona-server-5.6.28-76.1/storage/tokudb/PerconaFT/ft/bndata.h:40:
In file included from /Users/Eric/Downloads/percona-server-5.6.28-76.1/storage/tokudb/PerconaFT/util/dmt.h:679:
/Users/Eric/Downloads/percona-server-5.6.28-76.1/storage/tokudb/PerconaFT/util/dmt.cc:873:9: error:
nonnull parameter 'outlen' will evaluate to 'true' on first encounter
[-Werror,-Wpointer-bool-conversion]
if (outlen) {
~~ ^~~~~~
/Users/Eric/Downloads/percona-server-5.6.28-76.1/storage/tokudb/PerconaFT/util/dmt.cc:883:9: error:
nonnull parameter 'outlen' will evaluate to 'true' on first encounter
[-Werror,-Wpointer-bool-conversion]
if (outlen) {
~~ ^~~~~~
/Users/Eric/Downloads/percona-server-5.6.28-76.1/storage/tokudb/PerconaFT/util/dmt.cc:893:9: error:
nonnull parameter 'outlen' will evaluate to 'true' on first encounter
[-Werror,-Wpointer-bool-conversion]
if (outlen) {
~~ ^~~~~~
/Users/Eric/Downloads/percona-server-5.6.28-76.1/storage/tokudb/PerconaFT/util/dmt.cc:903:9: error:
nonnull parameter 'outlen' will evaluate to 'true' on first encounter
[-Werror,-Wpointer-bool-conversion]
if (outlen) {
~~ ^~~~~~
4 errors generated.
make[2]: *** [storage/tokudb/PerconaFT/src/CMakeFiles/tokudb_static_conv.dir/ydb.cc.o] Error 1
make[1]: *** [storage/tokudb/PerconaFT/src/CMakeFiles/tokudb_static_conv.dir/all] Error 2
make: *** [all] Error 2



Can't add Google account linked users

Latest Forum Posts - January 27, 2016 - 5:40pm

I have some users who signed up with their Google account, and I wanted to add them to a Percona Cloud organization. But I can't add the users, and I get the error: "A user does not exist for (email)". Are there any other ways to add the users?

Thanks



MongoDB revs you up: What storage engine is right for you? (Part 4)

Latest MySQL Performance Blog posts - January 27, 2016 - 12:13pm
Differentiating Between MongoDB Storage Engines: PerconaFT

In this series of posts, we discussed what a storage engine is, and how you can determine the characteristics of one versus the other:

“A database storage engine is the underlying software that a DBMS uses to create, read, update and delete data from a database. The storage engine should be thought of as a “bolt on” to the database (server daemon), which controls the database’s interaction with memory and storage subsystems.”

Generally speaking, it’s important to understand what type of work environment the database is going to interact with, and to select a storage engine that is tailored to that environment.

The first post looked at MMAPv1, the original default engine for MongoDB (through release 3.0). The second post examined WiredTiger, the new default MongoDB engine. The third post reviewed RocksDB, an engine developed for the Facebook environment.

This post will cover PerconaFT. PerconaFT was developed out of Percona’s acquisition of Tokutek, from their TokuDB product.

PerconaFT

Find it in: Percona Builds

PerconaFT is the newest version of the Fractal Tree storage engine that was designed and implemented by Tokutek, which was acquired by Percona in April of 2015. Designed at MIT, SUNY Stony Brook and Rutgers, the Fractal Tree is a data structure that aimed to remove disk bottlenecks from databases that were using the B-tree with datasets that were several times larger than cache.

PerconaFT is arguably the most “mature” storage engine for MongoDB, with support for document level concurrency and compression. The Fractal Tree was first commercially implemented in June of 2013 in TokuMX, a fork of MongoDB, with an advanced feature set.

As described previously, the Fractal Tree (which is available for MongoDB in the PerconaFT storage engine) is a write-optimized data structure utilizing many log-like “queues” called message buffers, but has an arrangement like that of a read-optimized data structure. With the combination of these properties, PerconaFT can provide high performance for applications with high insert rates, while providing very efficient lookups for update/query-based applications. This will theoretically provide very predictable and consistent performance as the database grows. Furthermore, PerconaFT typically provides, comparatively, the deepest compression rates of any of the engines we’ve discussed in this series.

An ideal fit for the PerconaFT storage engine is a system with varied workloads, where predictable vertical scaling is required in addition to the horizontal scaling provided by MongoDB. Furthermore, the ability of PerconaFT to maintain performance while compressing – along with support for multiple compression algorithms (snappy, quicklz, zlib and lzma) – makes it one of the best options for users looking to optimize their data footprint.
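
If you want to try PerconaFT quickly, the Percona Docker image mentioned earlier in this digest is probably the easiest route. A sketch reusing the image and --storageEngine flag shown in that post; the serverStatus() call is a standard mongo shell check:

docker run --name psmdbft -d percona/percona-server-mongodb:latest --storageEngine=PerconaFT
# Confirm which engine the running server is actually using.
docker exec -it psmdbft mongo --eval "printjson(db.serverStatus().storageEngine)"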

Conclusion

Most people don’t know that they have a choice when it comes to storage engines, and that the choice should be based on what the database workload will look like. Percona’s Vadim Tkachenko performed an excellent benchmark test comparing the performances of PerconaFT and WiredTiger to help specifically differentiate between these engines.

Part 1: Intro and the MMAPv1 storage engine.

Part 2: WiredTiger storage engine.

Part 3: RocksDB storage engine.

Percona CEO Peter Zaitsev discusses working remotely with Fortune Magazine

Latest MySQL Performance Blog posts - January 27, 2016 - 11:33am

As a company that believes in and supports the open source community, embracing innovation and change is par for the course at Percona. We wouldn’t be the company we are today without fostering a culture that rewards creative thinking and rapid evolution.

Part of this culture is making sure that Percona is a place where people love to work, and can transmit their passion for technology into tangible rewards – both personally and financially. One of the interesting facts about Percona’s culture is that almost 95 percent of its employees are working remotely. Engineers, support, marketing, even executive staff – most of these people interact daily via electronic medium rather than in person. Percona’s staff is worldwide across 29 countries and 19 U.S. states. How does that work? How do you make sure that the staff is happy, committed, and engaged enough to stay on? How do you attract prospective employees with this unusual model?

It turns out that not only does it work, but it works very well. It can be challenging to manage the needs of such a geographically diverse group, but the rewards (and the results) outweigh the effort.

The secret is, of course, good communication and an environment of respect and personal empowerment.

Percona’s CEO Peter Zaitsev recently provided some of his thoughts to Fortune magazine about how our business model helps to not only to foster incredible dedication and innovation, but create a work environment that encourages passion, commitment and teamwork.

Read about his ideas on Percona’s work model here.

Oh, and by the way, Percona is currently hiring! Perhaps a career here might fit in with your plans . . .


