
libperconaserverclient

Latest Forum Posts - September 25, 2016 - 4:43am
Hello,

After installing Percona Server 5.7, I am having problems connecting my apps to MySQL through the socket. I get this odd error:
MySQL Error Message: Can't connect to local MySQL server through socket '' (111)



My /etc/mysql/my.cnf is

# Generated by Percona Configuration Wizard (http://tools.percona.com/) version REL5-20120208
# Configuration name server-9 generated for gunzo@gunzo.eu at 2016-09-25 08:23:14

[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock



[mysql]

# CLIENT #
port = 3306
socket = /var/run/mysqld/mysqld.sock

[mysqld]

# GENERAL #
user = mysql
default-storage-engine = InnoDB
socket = /var/run/mysqld/mysqld.sock
pid-file = /var/lib/mysql/mysql.pid

# MyISAM #
key-buffer-size = 32M
# myisam-recover = FORCE,BACKUP

# SAFETY #
max-allowed-packet = 16M
max-connect-errors = 1000000

# DATA STORAGE #
datadir = /var/lib/mysql/

# CACHES AND LIMITS #
tmp-table-size = 32M
max-heap-table-size = 32M
query-cache-type = 0
query-cache-size = 0
max-connections = 500
thread-cache-size = 50
open-files-limit = 65535
table-definition-cache = 1024
table-open-cache = 2048

# INNODB #
innodb-flush-method = O_DIRECT
innodb-log-files-in-group = 2
innodb-log-file-size = 2G
innodb-flush-log-at-trx-commit = 2
innodb-file-per-table = 1
innodb-buffer-pool-size = 20G
innodb_doublewrite=1
innodb_flush_log_at_trx_commit=1
innodb_buffer_pool_instances=6
innodb_change_buffering=none
innodb_adaptive_hash_index=OFF
innodb_flush_method=O_DIRECT
innodb_flush_neighbors=0
innodb_read_io_threads=6
innodb_write_io_threads=6
innodb_lru_scan_depth=8192
innodb_io_capacity=15000
innodb_io_capacity_max=25000
loose-innodb-page-cleaners=4
table_open_cache_instances=64
table_open_cache=5000
loose-innodb-log_checksum-algorithm=crc32
loose-innodb-checksum-algorithm=strict_crc32
max_connections=50000
skip_name_resolve=ON
loose-performance_schema=ON
loose-performance-schema-instrument='wait/synch/%=ON'

# innodb-flush-neighbor_pages = 0
# innodb-adaptive-flushing_method = keep_average

# LOGGING #
log-error = /var/lib/mysql/mysql-error.log
log-queries-not-using-indexes = 0
slow-query-log = 0
slow-query-log-file = /var/lib/mysql/mysql-slow.log
sync_binlog = 0

sql_mode = ""

I would appreciate some help. Thank you!
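
The empty socket path ('') in the error usually means the client is not reading this my.cnf at all; a couple of checks worth trying, assuming the paths from the config above:

$ # Ask the running server which socket it actually listens on
$ mysqladmin variables | grep -w socket
$ # Show which option-file defaults the mysql client picks up
$ mysql --print-defaults
$ # Confirm the socket file exists where the config says it should
$ ls -l /var/run/mysqld/mysqld.sock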




pt-table-checksum can't complete from slave to slave

Latest Forum Posts - September 23, 2016 - 12:21pm
We are trying to migrate from MySQL 5.5 to MySQL 5.6. Since the master is under really heavy load, we could only validate the data between one MySQL 5.5 slave and one MySQL 5.6 slave.

But when I run pt-table-checksum, it shows the following waiting message for several hours:

Waiting to check replicas for differences: 0% 00:00 remain

Waiting to check replicas for differences: 0% 00:00 remain

Waiting to check replicas for differences: 0% 00:00 remain

[... the same line repeats for several hours ...]


My Command is:

pt-table-checksum --socket=/apollo/env/AWSManagementDatastore/var/mysql/state/mysql.sock -u "root" -p "password" -h "localhost" -P 8895 --recursion-method=dsn=D=test,t=dsns --nocheck-replication-filters --no-check-binlog-format --databases=management --replicate=management.liranchecksums

And I set up the dsns table like below:
*************************** 1. row ***************************

id: 1

parent_id: NULL

dsn: h=mdb-gamma.cluster-cmur5xirflse.us-east-1.rds.amazonaws.com,P=8895,u=root,p=password
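
For reference, a minimal sketch of the dsns table that --recursion-method=dsn reads, matching the row above (column layout as described in the pt-table-checksum documentation):

$ mysql -e "CREATE TABLE test.dsns (
      id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
      parent_id INT DEFAULT NULL,
      dsn VARCHAR(255) NOT NULL
  )"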

Mongodb monitoring

Latest Forum Posts - September 23, 2016 - 1:40am
Hi,
I have installed pmm-server on my local machine:

osmanf ~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8247c5b8c07c percona/pmm-server:1.0.4 "/opt/entrypoint.sh" 38 minutes ago Up 38 minutes 0.0.0.0:80->80/tcp, 443/tcp pmm-server

And installed pmm-client on a server running mongodb:

[root@textserver log]# pmm-admin list
pmm-admin 1.0.4

PMM Server | x.x.x.x
Client Name | testserver
Client Address | y.y.y.y
Service manager | unix-systemv

---------------- ------- ------------ -------- ---------------- --------
SERVICE TYPE NAME CLIENT PORT RUNNING DATA SOURCE OPTIONS
---------------- ------- ------------ -------- ---------------- --------
linux:metrics testserver 42000 YES -
mongodb:metrics testserver 42003 YES localhost:27017

But from the web interface (metrics monitor), I could not see the mongo client.
How can I debug the problem?

Thanks,
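
A couple of first diagnostic steps one might try here, assuming PMM 1.0.x defaults: check connectivity between the client's exporters and the server, and look for scrape errors on the server side.

$ # On the client: verify the PMM server can reach each registered exporter
$ pmm-admin check-network
$ # On the PMM server host: inspect the container logs for errors
$ docker logs pmm-server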

COMMIT command on pt-query-digest

Latest Forum Posts - September 22, 2016 - 7:00pm
Hello everyone,

First of all thanks for your time.

I have been using pt-query-digest to optimize some queries in my app and, after the latest changes, I ended up with the following results:

Code:

# 1565.7s user time, 5.7s system time, 173.72M rss, 325.88M vsz
# Current date: Thu Sep 22 20:04:56 2016
# Hostname: centos6-64
# Files: /var/lib/mysql/centos6-64-slow.log
# Overall: 5.50M total, 2.67k unique, 63.62 QPS, 0.20x concurrency _______
# Time range: 2016-09-21 19:30:00 to 2016-09-22 19:29:59
# Attribute          total     min     max     avg     95%  stddev  median
# ============     ======= ======= ======= ======= ======= ======= =======
# Exec time         16960s     1us     12s     3ms    16ms    36ms   224us
# Lock time           520s       0   743ms    94us   176us     1ms    44us
# Rows sent          4.04M       0  28.58k    0.77    0.99   24.78       0
# Rows examine       1.47G       0   2.61M  287.82  174.84   7.36k       0
# Query size         1.40G       6  14.64k  272.73  755.64  647.70   76.28

# Profile
# Rank Query ID           Response time    Calls  R/Call V/M   Item
# ==== ================== ================ ====== ====== ===== ==========
#    1 0x813031B8BBC3B329 10221.3411 60.3% 378094 0.0270  0.03 COMMIT
#    2 0xE604EE106818FA0B   745.0663  4.4%    344 2.1659  1.05 SELECT ...

# Query 1: 4.38 QPS, 0.12x concurrency, ID 0x813031B8BBC3B329 at byte 2083111980
# This item is included in the report because it matches --limit.
# Scores: V/M = 0.03
# Time range: 2016-09-21 19:30:00 to 2016-09-22 19:29:59
# Attribute    pct   total     min     max     avg     95%  stddev  median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count          6  378094
# Exec time     60  10221s    11us      2s    27ms    61ms    29ms    18ms
# Lock time      0       0       0       0       0       0       0       0
# Rows sent      0       0       0       0       0       0       0       0
# Rows examine   0       0       0       0       0       0       0       0
# Query size     0   2.16M       6       6       6       6       0       6
# String:
# Databases    XXXX... (41949/11%)... 43 more
# Hosts        localhost
# Users        XXXX
# Query_time distribution
#   1us
#  10us  #
# 100us  #
#   1ms  ###
#  10ms  ################################################################
# 100ms  #
#   1s   #
#  10s+
commit\G
I was wondering if anyone could enlighten me as to why COMMIT accounts for over 60% of the response time here.

Thanks in advance!
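
A common starting point for a COMMIT-dominated profile like this one: COMMIT latency is usually governed by the settings that control fsync-on-commit, so inspecting them is a reasonable first step.

$ mysql -e "SHOW GLOBAL VARIABLES WHERE Variable_name IN
    ('innodb_flush_log_at_trx_commit', 'sync_binlog')"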

Percona Live Europe featured talk with Anthony Yeh — Launching Vitess: How to run YouTube’s MySQL sharding engine

Latest MySQL Performance Blog posts - September 22, 2016 - 1:59pm

Welcome to another Percona Live Europe featured talk with Percona Live Europe 2016: Amsterdam speakers! In this series of blogs, we’ll highlight some of the speakers that will be at this year’s conference. We’ll also discuss the technologies and outlooks of the speakers themselves. Make sure to read to the end to get a special Percona Live Europe registration bonus!

In this Percona Live Europe featured talk, we’ll meet Anthony Yeh, Software Engineer, Google. His talk will be on Launching Vitess: How to run YouTube’s MySQL sharding engine. Vitess is YouTube’s solution for scaling MySQL horizontally through sharding, built as a general-purpose, open-source project. Now that Vitess 2.0 has reached general availability, they’re moving beyond “getting started” guides and working with users to develop and document best practices for launching Vitess in their own production environments.

I had a chance to speak with Anthony and learn a bit more about Vitess:

Percona: Give me a brief history of yourself: how you got into database development, where you work, what you love about it.

Anthony: Before joining YouTube as a software engineer, I worked on photonic integrated circuits as a graduate student researcher at U.C. Berkeley. So I guess you could say I took a rather circuitous path to the database field. My co-presenter Dan and I have that in common. If you see him at the conference, I recommend asking him about his story.

I don't actually think of myself as being in database development though; that's probably more Sugu's area. I treat Vitess as just another distributed system, and my job is to make it more automated, more reliable, and easier to administer. My favorite part of this job is when open-source contributors send us new features and plug-ins, and all I have to do is review them. Keep those pull requests coming!

Percona: Your talk is going to be on “Launching Vitess: How to run YouTube’s MySQL sharding engine.” How has Vitess moved from a YouTube fix to a viable enterprise data solution?

Anthony: I joined Vitess a little over two years ago, right when they decided to expand the team’s focus to include external usability as a key goal. The idea was to transform Vitess from a piece of YouTube infrastructure that happens to be open-source, into an open-source solution that YouTube happens to use.

At first, the biggest challenge was getting people to tell us what they needed to make Vitess work well in their environments. Attending Percona Live is a great way to keep a pulse on how the industry uses MySQL, and talk with exactly the people who can give us that feedback. Progress really picked up early this year when companies like Flipkart and Pixel Federation started not only trying out Vitess on their systems, but contributing back features, plug-ins, and connectors.

My half of the talk will summarize all the things we’ve learned from these early adopters about migrating to Vitess and running it in various environments. We also convinced one of our Site Reliability Engineers to give the second half of the talk, to share firsthand what it’s like to run Vitess in production.

Percona: What new features and fixes can people look forward to in the latest release?

Anthony: The biggest new feature in Vitess 2.0 is something that was codenamed “V3” (sorry about the naming confusion). In a nutshell, this completes the transition of all sharding logic from the app into Vitess: at first you had to give us a shard name, then you just had to tell us the sharding key value. Now you just send a regular query and we do the rest.

To make this possible, Vitess has to parse and analyze the query, for which it then builds a distributed execution plan. For queries served by a single shard, the plan collapses to a simple routing decision without extra processing. But for things like cross-shard joins, Vitess will generate new queries and combine results from multiple shards for you, in much the same way your app would otherwise do it.

Percona: Why is sharding beneficial to databases? Are there pros and cons to sharding?

Anthony: The main pro for sharding is horizontal scalability, the holy grail of distributed databases. It offers the promise of a magical knob that you simply turn up when you need more capacity. The biggest cons have usually been that it’s a lot of work to make your app handle sharding, and it multiplies the operational overhead as you add more and more database servers.

The goal of Vitess is to create a generalized solution to these problems, so we can all stop building one-off sharding layers within our apps, and replace a sea of management scripts with a holistic, self-healing distributed database.

Percona: Vitess is billed as being for web applications based in cloud and dedicated hardware infrastructures. Was it designed specifically for one or the other, and does it work better for certain environments?

Anthony: Vitess started out on dedicated YouTube hardware and later moved into Borg, which is Google’s internal precursor to Kubernetes. So we know from experience that it works in both types of environments. But like any distributed system, there are lots of benefits to running Vitess under some kind of cluster orchestration system. We provide sample configs to get you started on Kubernetes, but we would love to also have examples for other orchestration platforms like Mesos, Swarm, or Nomad, and we’d welcome contributions in this area.

Percona: What are you most looking forward to at Percona Live Data Performance Conference 2016?

Anthony: I hope to meet people who have ideas about how to make Vitess better, and I look forward to learning more about how others are solving similar problems.

You can read more about Anthony and Vitess on the Vitess blog.

Want to find out more about Anthony, Vitess, YouTube and sharding? Register for Percona Live Europe 2016, and come see his talk Launching Vitess: How to run YouTube’s MySQL sharding engine.

Use the code FeaturedTalk and receive €25 off the current registration price!

Percona Live Europe 2016: Amsterdam is the premier event for the diverse and active open source database community. The conferences have a technical focus with an emphasis on the core topics of MySQL, MongoDB, and other open source databases. Percona Live tackles subjects such as analytics, architecture and design, security, operations, scalability and performance. It also provides in-depth discussions for your high-availability, IoT, cloud, big data and other changing business needs. This conference is an opportunity to network with peers and technology professionals by bringing together accomplished DBAs, system architects and developers from around the world to share their knowledge and experience. All of these people help you learn how to tackle your open source database challenges in a whole new way.

This conference has something for everyone!

Percona Live Europe 2016: Amsterdam is October 3-5 at the Mövenpick Hotel Amsterdam City Centre.

Amsterdam eWeek

Percona Live Europe 2016 is part of Amsterdam eWeek. Amsterdam eWeek provides a platform for national and international companies that focus on online marketing, media and technology and for business managers and entrepreneurs who use them, whether it comes to retail, healthcare, finance, game industry or media. Check it out!

Percona XtraDB Cluster 5.5.41-25.11.1 is now available

Latest MySQL Performance Blog posts - September 22, 2016 - 10:56am

Percona announces the new release of Percona XtraDB Cluster 5.5.41-25.11.1 (rev. 855) on September 22, 2016. Binaries are available from the downloads area or our software repositories.

Bugs Fixed:
  • For security reasons, ld_preload libraries can now only be loaded from the system directories (/usr/lib64, /usr/lib) and the MySQL installation base directory. This fix also addresses an issue where limiting didn’t work correctly for relative paths. Bug fixed #1624247.
  • Fixed possible privilege escalation that could be used when running REPAIR TABLE on a MyISAM table. Bug fixed #1624397.
  • The general query log and slow query log cannot be written to files ending in .ini and .cnf anymore. Bug fixed #1624400.
  • Implemented restrictions on symlinked files (error_log, pid_file) that can’t be used with mysqld_safe. Bug fixed #1624449.

Other bugs fixed: #1553938.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Sixth Annual Percona Live Open Source Database Conference 2017 Call for Speakers Now Open

Latest MySQL Performance Blog posts - September 22, 2016 - 9:44am

The Call for Speakers for Percona Live Open Source Database Conference 2017 is open and accepting proposals through Oct. 31, 2016.

The Percona Live Open Source Database Conference 2017 is the premier event for the diverse and active open source community, as well as businesses that develop and use open source software. Topics for the event will focus on three key areas – MySQL, MongoDB and Open Source Databases – and the conference sessions will feature a range of in-depth discussions and hands-on tutorials.

The 2017 conference will feature four formal tracks – Developer, Operations, Business/Case Studies, and Wildcard – that will explore a variety of new and trending topics, including big data, IoT, analytics, security, scalability and performance, architecture and design, operations and management and development. Speaker proposals are welcome on these topics as well as on a variety of related technologies, including MySQL, MongoDB, Amazon Web Services (AWS), OpenStack, Redis, Docker and many more. The conference will also feature sponsored talks.

Percona Live Open Source Database Conference 2017 will take place April 24-27, 2017 at The Hyatt Regency Santa Clara and Santa Clara Convention Center. Sponsorship opportunities are still available, and Super Saver Registration Discounts can be purchased through Nov. 13, 2016 at 11:30 p.m. PST.

Click here to see all the submission criteria, and to submit your talk.

Sponsorships

Sponsorship opportunities for Percona Live Open Source Database Conference 2017 are available and offer the opportunity to interact with more than 1,000 DBAs, sysadmins, developers, CTOs, CEOs, business managers, technology evangelists, solution vendors, and entrepreneurs who typically attend the event.

Planning to Attend?

Super Saver Registration Discounts for Percona Live Open Source Database Conference 2017 are available through Nov. 13, 2016 at 11:30 p.m. PST.

Visit the Percona Live Open Source Database Conference 2017 website for more information about the conference. Interested community members can also register to receive email updates about Percona Live Open Source Database Conference 2017.

Percona Server 5.7 - slave lagging behind master with GTID replication and MTS

Latest Forum Posts - September 22, 2016 - 5:48am
Hello guys,

We have a MySQL master-slave setup (one master and one slave) running "5.7.14-7-log Percona Server (GPL), Release 7, Revision 083e298" on both master and slave. We use GTID and MTS. The following are the replication and InnoDB settings on the master and slave:


- on master

#replication
server_id=1
log_bin
gtid_mode=ON
enforce_gtid_consistency=ON

#Innodb
innodb_buffer_pool_size = 48G
innodb_buffer_pool_instances = 48
innodb_log_file_size = 4G
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
innodb_thread_concurrency = 0
innodb_read_io_threads=48
innodb_write_io_threads=48
innodb_flush_neighbors = 0
innodb_io_capacity = 10000
innodb_buffer_pool_dump_pct = 100
binlog_cache_size = 32M


- on slave

#replication
server_id=2
gtid_mode=ON
enforce_gtid_consistency=ON
slave_parallel_type = LOGICAL_CLOCK
slave_parallel_workers = 8
slave_pending_jobs_size_max = 128M
slave_preserve_commit_order = On
log_bin = ON
log_slave_updates = ON

#Innodb
innodb_buffer_pool_size = 48G
innodb_buffer_pool_instances = 48
innodb_log_file_size = 4G
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
innodb_thread_concurrency = 0
innodb_read_io_threads=32
innodb_write_io_threads=32
innodb_flush_neighbors = 0
innodb_io_capacity = 10000
innodb_buffer_pool_dump_pct = 100
binlog_cache_size = 32M


When we check the slave status (show slave status \G), the slave is lagging behind the master by a few seconds (1-5 secs) most of the time, and Slave_SQL_Running_State is 'Waiting for dependent transaction to commit'.

Example:

=======================

Slave_IO_State: Waiting for master to send event
Master_Host: x.x.x.x
Master_User: xxxxx
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: db-bin.000062
Read_Master_Log_Pos: 114817922
Relay_Log_File: dbslv-relay-bin.000213
Relay_Log_Pos: 114256657
Relay_Master_Log_File: db-bin.000062
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 114814928
Relay_Log_Space: 114818844
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 4
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1
Master_UUID: b7a72845-68e9-11e6-9fcc-00269e9bc0c4
Master_Info_File: /var/lib/mysql/master.info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Waiting for dependent transaction to commit
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set: b7a72845-68e9-11e6-9fcc-00269e9bc0c4:81-7675670
Executed_Gtid_Set: b7a72845-68e9-11e6-9fcc-00269e9bc0c4:1-7675662
Auto_Position: 1
Replicate_Rewrite_DB:
Channel_Name:
Master_TLS_Version:

=======================

When we were using the plain old 'Binary Log File Position Based Replication', we did not have this problem and Seconds_Behind_Master was rarely greater than 0 seconds.

Could this be due to slave_preserve_commit_order = On?

Anyone experienced this problem with GTID replication and MTS?

Thank you!
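
One way to see what the parallel applier is actually doing, assuming performance_schema is enabled (the default in 5.7), is the per-worker status table; it shows whether all eight workers stay busy or whether commit ordering serializes them:

$ mysql -e "SELECT WORKER_ID, SERVICE_STATE, LAST_SEEN_TRANSACTION
    FROM performance_schema.replication_applier_status_by_worker"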

Incremental cluster update

Latest Forum Posts - September 22, 2016 - 3:47am
Hi All

We have a Percona XtraDB Cluster where one of the nodes has crashed multiple times without any obvious reason. I've given the server a thorough check over, and hardware, filesystems, etc. all look healthy.

It's running quite an old version of Percona XtraDB Cluster 5.6.x, so I'm looking at upgrading it. I just want to check that there are no problems with running different minor versions in the same cluster, so that I can upgrade the nodes incrementally. I assume it follows the usual MySQL rules, so as long as it's still 5.6.xx there should be no issues?

Current version: Percona-XtraDB-Cluster-client-56-5.6.15-25.4.731.rhel6
Latest version: Percona-XtraDB-Cluster-client-56-5.6.30-25.16.3.el6.x86_64

Thanks
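
During a rolling upgrade it also helps to confirm which Galera library and wsrep protocol version each node is actually running; a quick check on any node:

$ mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_provider_version'"
$ mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_protocol_version'"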

How to remove Galera and revert to plain Percona

Latest Forum Posts - September 22, 2016 - 3:10am
How do I remove the Galera plugin and revert back to Percona Server?
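
A rough sketch of the usual approach on an RPM-based system (package names are assumptions for a 5.6 install; take a full backup and remove the wsrep_* settings from my.cnf first):

$ yum remove Percona-XtraDB-Cluster-server-56 Percona-XtraDB-Cluster-galera-3
$ yum install Percona-Server-server-56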

Percona XtraDB Cluster 5.6.30-25.16.3 is now available

Latest MySQL Performance Blog posts - September 21, 2016 - 11:22am

Percona announces the new release of Percona XtraDB Cluster 5.6 on September 21, 2016. Binaries are available from the downloads area or our software repositories.

Percona XtraDB Cluster 5.6.30-25.16.3 is now the current release, based on the following:

  • Percona Server 5.6.30-76.3
  • Galera Replication library 3.16
  • Codership wsrep API version 25
Bugs Fixed:
  • Limiting ld_preload libraries to be loaded from specific directories in mysqld_safe didn’t work correctly for relative paths. Bug fixed #1624247.
  • Fixed possible privilege escalation that could be used when running REPAIR TABLE on a MyISAM table. Bug fixed #1624397.
  • The general query log and slow query log cannot be written to files ending in .ini and .cnf anymore. Bug fixed #1624400.
  • Implemented restrictions on symlinked files (error_log, pid_file) that can’t be used with mysqld_safe. Bug fixed #1624449.

Other bugs fixed: #1553938.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Percona Server 5.7.14-8 is now available

Latest MySQL Performance Blog posts - September 21, 2016 - 11:11am

Percona announces the GA release of Percona Server 5.7.14-8 on September 21, 2016. Download the latest version from the Percona web site or the Percona Software Repositories.

Based on MySQL 5.7.14, including all the bug fixes in it, Percona Server 5.7.14-8 is the current GA release in the Percona Server 5.7 series. Percona provides completely open-source and free software. Find release details in the 5.7.14-8 milestone at Launchpad.

Bugs Fixed:
  • Limiting ld_preload libraries to be loaded from specific directories in mysqld_safe didn’t work correctly for relative paths. Bug fixed #1624247.
  • Fixed possible privilege escalation that could be used when running REPAIR TABLE on a MyISAM table. Bug fixed #1624397.
  • The general query log and slow query log cannot be written to files ending in .ini and .cnf anymore. Bug fixed #1624400.
  • Implemented restrictions on symlinked files (error_log, pid_file) that can’t be used with mysqld_safe. Bug fixed #1624449.

Other bugs fixed: #1553938.

The release notes for Percona Server 5.7.14-8 are available in the online documentation. Please report any bugs on the launchpad bug tracker.

Percona Server 5.6.32-78.1 is now available

Latest MySQL Performance Blog posts - September 21, 2016 - 11:04am

Percona announces the release of Percona Server 5.6.32-78.1 on September 21st, 2016. Download the latest version from the Percona web site or the Percona Software Repositories.

Based on MySQL 5.6.32, including all the bug fixes in it, Percona Server 5.6.32-78.1 is the current GA release in the Percona Server 5.6 series. Percona Server is open-source and free – this is the latest release of our enhanced, drop-in replacement for MySQL. Complete details of this release are available in the 5.6.32-78.1 milestone on Launchpad.

Bugs Fixed:
  • Limiting ld_preload libraries to be loaded from specific directories in mysqld_safe didn’t work correctly for relative paths. Bug fixed #1624247.
  • Fixed possible privilege escalation that could be used when running REPAIR TABLE on a MyISAM table. Bug fixed #1624397.
  • The general query log and slow query log cannot be written to files ending in .ini and .cnf anymore. Bug fixed #1624400.
  • Implemented restrictions on symlinked files (error_log, pid_file) that can’t be used with mysqld_safe. Bug fixed #1624449.

Other bugs fixed: #1553938.

Release notes for Percona Server 5.6.32-78.1 are available in the online documentation. Please report any bugs on the launchpad bug tracker.

Percona Server 5.5.51-38.2 is now available

Latest MySQL Performance Blog posts - September 21, 2016 - 10:58am

Percona announces the release of Percona Server 5.5.51-38.2 on September 21, 2016. Based on MySQL 5.5.51, including all the bug fixes in it, Percona Server 5.5.51-38.2 is now the current stable release in the 5.5 series.

Percona Server is open-source and free. You can find release details of the release in the 5.5.51-38.2 milestone on Launchpad. Downloads are available here and from the Percona Software Repositories.

Bugs Fixed:
  • Limiting ld_preload libraries to be loaded from specific directories in mysqld_safe didn’t work correctly for relative paths. Bug fixed #1624247.
  • Fixed possible privilege escalation that could be used when running REPAIR TABLE on a MyISAM table. Bug fixed #1624397.
  • The general query log and slow query log cannot be written to files ending in .ini and .cnf anymore. Bug fixed #1624400.
  • Implemented restrictions on symlinked files (error_log, pid_file) that can’t be used with mysqld_safe. Bug fixed #1624449.

Other bugs fixed: #1553938.

Find the release notes for Percona Server 5.5.51-38.2 in our online documentation. Report bugs on the launchpad bug tracker.

Regular Expressions Tutorial

Latest MySQL Performance Blog posts - September 21, 2016 - 6:48am

This blog post highlights a video on how to use regular expressions.

It’s been a while since I did the MySQL QA and Bash Training Series. The 13 episodes were quite enjoyable to make, and a lot of people watched the videos and provided great feedback.

In today’s new video, I’d like to briefly go over regular expressions. The session will cover the basics of regular expressions, and then some. I’ll follow up later with a more advanced regex session too.

Regular expressions are very versatile, and once you know how to use them – especially as a script developer or software coder – you will return to them again and again. Enjoy!

Presented by Roel Van de Paar. Full-screen viewing @ 720p resolution recommended
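
For a small taste of the material, a basic illustrative example of shell-side pattern matching (the sample string is made up):

$ # Extract a version string like 5.7.14 from a line of text
$ echo "mysqld 5.7.14 started" | grep -oE '[0-9]+\.[0-9]+\.[0-9]+'
5.7.14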


Webinar Thursday September 22 – Black Friday and Cyber Monday: How to Avoid an E-Commerce Disaster

Latest MySQL Performance Blog posts - September 21, 2016 - 6:11am

Join Percona’s Sr. Technical Operations Architect, Tim Vaillancourt, on Thursday, September 22, at 10 am PDT (UTC-7) for the webinar Black Friday and Cyber Monday: How to Avoid an E-Commerce Disaster. This webinar will provide some best practices to ensure the performance of your system under high-traffic conditions.

Can your retail site handle the traffic deluge on the busiest shopping day of the year?

Black Friday and Cyber Monday are mere months away. Major retailers have already begun stress-testing their e-commerce sites to make sure they can handle the load. Failure to accommodate the onslaught of post-Thanksgiving shoppers might result in both embarrassing headlines and millions of dollars in lost revenue. Our advice to retailers: September stress tests are essential to a glitch-free Black Friday.

This webinar will cover:

  • Tips to avoid bottlenecks in data-driven apps
  • Techniques to allow an app to grow and shrink for large events/launches
  • Solutions to alleviate load on an app’s database
  • Developing and testing scalable apps
  • Deployment strategies to avoid downtime
  • Creating lighter, faster user-facing requests

For more ideas on how to optimize your E-commerce database, read Tim’s blog post here.

Please register here.

Timothy Vaillancourt, Senior Technical Operations Architect

Tim joined Percona in 2016 as Sr. Technical Operations Architect for MongoDB with a goal to make the operations of MongoDB as smooth as possible. With experience operating infrastructures in industries such as government, online marketing/publishing, SaaS and gaming, combined with experience tuning systems from the hard disk all the way up to the end-user, Tim has spent time in nearly every area of the modern IT stack with many lessons learned.

Tim is based in Amsterdam, NL and enjoys traveling, coding and music. Before Percona, Tim was the Lead MySQL DBA of Electronic Arts’ DICE studios, helping some of the largest games in the world (“Battlefield” series, “Mirror’s Edge” series, “Star Wars: Battlefront”) launch and operate smoothly while also leading the automation of MongoDB deployments for EA systems. Before the role of DBA at EA’s DICE studio, Tim served as a subject matter expert in NoSQL databases, queues and search on the Online Operations team at EA SPORTS. Before moving to the gaming industry, Tim served as a Database/Systems Admin operating a large MySQL-based SaaS infrastructure at AbeBooks/Amazon Inc.

Using pt-online-schema-change on Master (MySQL 5.5) - Slave (Amazon Aurora) setup

Latest Forum Posts - September 20, 2016 - 8:09pm
I want to alter the schema of a table which has around 200 million rows. The change involves adding an existing column to the primary key. The data present in this table is fully compatible with the new schema.

In the current setup, Master is using MySQL 5.5 database. There are 10 slaves which are replicating from the Master. Out of which 5 slaves are on Amazon Aurora which means they are using MySQL 5.6 and the rest are on MySQL 5.5. I am using statement-based-replication to replicate data to the slaves.

Given this setup where the Master is using MySQL 5.5 and slaves are using MySQL 5.6, can I use pt-online-schema-change on the Master to make the schema change? Is this operation safe, considering the different versions? What are the important things to consider in such a setup?
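
Whatever the answer on mixing versions, a dry run is a prudent first step; a sketch with placeholder database, table, and column names (note that the tool's --check-alter safety check warns about DROP PRIMARY KEY, so read its output carefully):

$ pt-online-schema-change \
    --alter "DROP PRIMARY KEY, ADD PRIMARY KEY (id, other_col)" \
    --dry-run \
    D=mydb,t=mytable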

MongoDB point-in-time backups made easy

Latest MySQL Performance Blog posts - September 20, 2016 - 4:03pm

In this blog post we’ll look at MongoDB point-in-time backups, and work with them.

Mongodump is the base logical backup tool included with MongoDB. It takes a full BSON copy of database/collections, and optionally includes a log of changes during the backup used to make it consistent to a point in time. Mongorestore is the tool used to restore logical backups created by Mongodump. I’ll use these tools in the steps in this article to restore backed-up data. This article assumes a mongodump-based backup that was taken consistently with oplog changes (by using the command flag “–oplog”), and the backup is being restored to a MongoDB instance.

In this example, a mongodump backup is gathered and restored for the base collection data, and separately the oplogs/changes necessary to restore the data to a particular point-in-time are collected and applied to this data.

Note: Percona developed a backup tool named mongodb_consistent_backup, which is a wrapper for ‘mongodump’ with added cluster-wide backup consistency. The backups created by mongodb_consistent_backup (in Dump/Mongodump mode) can be restored using the same steps as a regular “mongodump” backup.

Stages

Stage 1: Get a Mongodump Backup

Mongodump Command Flags

--host/--port (and --user/--password)

Required, even if you’re using the default host/port (localhost:27017). If authorization is enabled, add --user/--password flags also.

--oplog

Required for any replica set member! Causes “mongodump” to capture the oplog change log during the backup, so the backup is consistent to a single point in time.

--gzip

Optional. For mongodump >= 3.2, enables inline compression on the backup files.

Steps
  1. Get a mongodump backup via (pick one):
    • Running “mongodump” with the correct flags/options to take a backup (w/oplog) of the data:

      $ mongodump --host localhost --port 27017 --oplog --gzip
      2016-08-15T12:32:28.930+0200 writing wikipedia.pages to
      2016-08-15T12:32:31.932+0200 [#########...............] wikipedia.pages 674/1700 (39.6%)
      2016-08-15T12:32:34.931+0200 [####################....] wikipedia.pages 1436/1700 (84.5%)
      2016-08-15T12:32:37.509+0200 [########################] wikipedia.pages 2119/1700 (124.6%)
      2016-08-15T12:32:37.510+0200 done dumping wikipedia.pages (2119 documents)
      2016-08-15T12:32:37.521+0200 writing captured oplog to
      2016-08-15T12:32:37.931+0200 [##......................] .oplog 44/492 (8.9%)
      2016-08-15T12:32:39.648+0200 [########################] .oplog 504/492 (102.4%)
      2016-08-15T12:32:39.648+0200 dumped 504 oplog entries
    • Use the latest daily automatic backup, if it exists.
Stage 2: Restore the Backup Data

Steps
  1. Locate the shard PRIMARY member.
  2. Triple check you’re restoring the right backup to the right shard/host!
  3. Restore a mongodump-based backup to the PRIMARY node using the steps in this article: Restore a Mongodump Backup.
  4. Check for errors.
  5. Check that all SECONDARY members are in sync with the PRIMARY.
Stage 3: Get Oplogs for Point-In-Time-Recovery

In this stage, we will gather the changes needed to roll the data forward from the time of backup to the time/oplog-position to which we would like to restore.

In this example below, let’s pretend someone accidentally deleted an entire collection at oplog timestamp “Timestamp(1470923942, 3)” and we want to fix it. If we decrement the Timestamp increment (2nd number) of “Timestamp(1470923942, 3)”, we will have the last change before the accidental command, which in this case is “Timestamp(1470923942, 2)”. Using the timestamp, we can capture and replay the oplogs from when the backup occurred to just before the issue/error.
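
As an illustrative sketch of finding such a timestamp, one might inspect the oplog entry for the bad operation from the mongo shell (the timestamp is the example value above):

$ mongo local --eval 'db.oplog.rs.find({ "ts": Timestamp(1470923942, 3) }).forEach(printjson)'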

A start and end timestamp are required to get the oplog data. In all cases, this will need to be gathered manually, case-by-case.

Helper Script

#!/bin/bash
#
# This tool will dump out a BSON file of MongoDB oplog changes based on a range of Timestamp() objects.
# The captured oplog changes can be applied to a host using 'mongorestore --oplogReplay --dir /path/to/dump'.

set -e

TS_START=$1
TS_END=$2
MONGODUMP_EXTRA=$3

function usage_exit() {
  echo "Usage $0: [Start-BSON-Timestamp] [End-BSON-Timestamp] [Extra-Mongodump-Flags (in quotes for multiple)]"
  exit 1
}

function check_bson_timestamp() {
  local TS=$1
  # Test inside the 'if' so a non-matching grep doesn't trip 'set -e'
  if ! echo "$TS" | grep -qP '^Timestamp\(\d+,\s\d+\)$'; then
    echo "ERROR: Both timestamp fields must be in BSON Timestamp format, eg: 'Timestamp(########, #)'!"
    usage_exit
  fi
}

if [ -z "$TS_START" ] || [ -z "$TS_END" ]; then
  usage_exit
else
  check_bson_timestamp "$TS_START"
  check_bson_timestamp "$TS_END"
fi

MONGODUMP_QUERY='{ "ts" : { "$gte" : '$TS_START' }, "ts" : { "$lte" : '$TS_END' } }'
MONGODUMP_FLAGS='--db=local --collection=oplog.rs'
[ ! -z "$MONGODUMP_EXTRA" ] && MONGODUMP_FLAGS="$MONGODUMP_FLAGS $MONGODUMP_EXTRA"

if [ -d dump ]; then
  echo "'dump' subdirectory already exists! Exiting!"
  exit 1
fi

echo "# Dumping oplogs from '$TS_START' to '$TS_END'..."
mkdir dump
mongodump $MONGODUMP_FLAGS --query "$MONGODUMP_QUERY" --out - >dump/oplog.bson

if [ -f dump/oplog.bson ]; then
  echo "# Done!"
else
  echo "ERROR: Cannot find oplog.bson file! Exiting!"
  exit 1
fi

Script Usage:

$ ./dump_oplog_range.sh
Usage ./dump_oplog_range.sh: [Start-BSON-Timestamp] [End-BSON-Timestamp] [Extra-Mongodump-Flags (in quotes for multiple)]

Steps
  1. Find the PRIMARY member that contains the oplogs needed for the PITR restore.
  2. Determine the “end” Timestamp() needed to restore to. This oplog time should be before the problem occurred.
  3. Determine the “start” Timestamp() from right before the backup was taken.
    1. This timestamp doesn’t need to be exact, so something like a Timestamp() object equal to “a few minutes before the backup started” is fine, but the more accurate you are, the fewer changes you’ll need to re-apply (which saves on restore time).
  4. Use the MongoToolsAndSnippets script “dump_oplog_range.sh” (above in “Helper Script”) to dump the oplog time-ranges you need to restore to your chosen point-in-time. In this example I am gathering the oplog between two points in time (also passing in --username/--password flags in quotes as the 3rd parameter):
    1. The starting timestamp: the BSON timestamp from before the mongodump backup in “Stage 2: Restore Collection Data” was taken, in this example. “Timestamp(1470923918, 0)” is a time a few seconds before my mongodump was taken (does not need to be exact).
    2. The end timestamp: the end BSON Timestamp to restore to, in this example. “Timestamp(1470923942, 2)” is the last oplog-change BEFORE the problem occurred.

    Example:

    $ wget -q https://raw.githubusercontent.com/percona/MongoToolsAndSnippets/master/rdba/dump_oplog_range.sh
    $ bash ./dump_oplog_range.sh 'Timestamp(1470923918, 0)' 'Timestamp(1470923942, 2)' '--username=secret --password=secret --host=mongo01.example.com --port=27024'
    # Dumping oplogs from 'Timestamp(1470923918, 0)' to 'Timestamp(1470923942, 2)'...
    2016-08-12T13:11:17.676+0200    writing local.oplog.rs to stdout
    2016-08-12T13:11:18.120+0200    dumped 22 documents
    # Done!

    Note: all additional mongodump flags (optional 3rd field) must be in quotes!

  5. Double check it worked by looking for the 'oplog.bson' file and checking that the file has some data in it (168MB in the example below):

    $ ls -alh dump/oplog.bson
    -rw-rw-r--. 1 tim tim 168M Aug 12 13:11 dump/oplog.bson

Stage 4: Apply Oplogs for Point in Time Recovery (PITR)

In this stage, we apply the time-range-based oplogs gathered in Stage 3 to the restored data set to bring it from the time of the backup to a particular point in time before a problem occurred.

Mongorestore Command Flags

--host/--port (and --user/--password)

Required, even if you’re using the default host/port (localhost:27017). If authorization is enabled, add --user/--password flags also.

--oplogReplay

Required. This is needed to replay the oplogs in this step.

--dir

Required. The path to the mongodump data.

Steps
  1. Copy the “dump” directory containing only the “oplog.bson” file (captured in Stage 3) to the host that needs the oplog changes applied (the restore host).
  2. Run “mongorestore” on the “dump” directory to replay the oplogs into the instance. Make sure the “dump” dir contains only “oplog.bson”!

    $ mongorestore --host localhost --port 27017 --oplogReplay --dir ./dump
    2016-08-12T13:12:28.105+0200    building a list of dbs and collections to restore from dump dir
    2016-08-12T13:12:28.106+0200    replaying oplog
    2016-08-12T13:12:31.109+0200    oplog   80.0 MB
    2016-08-12T13:12:34.109+0200    oplog   143.8 MB
    2016-08-12T13:12:35.501+0200    oplog   167.8 MB
    2016-08-12T13:12:35.501+0200    done
  3. Validate the data was restored with the customer or using any means possible (examples: .count() queries, some random .find() queries, etc.).
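
For the validation step, a couple of simple spot checks from the mongo shell (database and collection names are placeholders):

$ mongo mydb --eval 'db.mycollection.count()'
$ mongo mydb --eval 'printjson(db.mycollection.findOne())'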

Figuring out which version of TokuDB I'm running

Latest Forum Posts - September 20, 2016 - 3:50pm
It is surprisingly hard to find the pertinent version number to reference https://www.percona.com/doc/percona-...ase-notes.html
The Percona Server version numbers are completely different from those in the change log and I can't find any mapping between the two.

I have been handed a Percona Server installation, specifically Percona-Server-5.7.11-4-Linux.x86_64.ssl101
Running SELECT @@tokudb_version returns "5.7.11-4", and @@version is "5.7.11-4-log".
I know the TokuDB version is at least 5.0.3 because I can hot-add columns. On the other hand I cannot hot-expand a varchar column, so I'm below version 6.5.0.
How do I go about finding where I am relative to the release notes from https://www.percona.com/doc/percona-...ase-notes.html?

Thanks in advance for any help.
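
One more place worth checking: since TokuDB in recent Percona Server builds is versioned with the server itself, the plugin table may be the closest thing to a TokuDB-specific version string.

$ mysql -e "SELECT PLUGIN_NAME, PLUGIN_VERSION, PLUGIN_LIBRARY_VERSION
    FROM information_schema.PLUGINS WHERE PLUGIN_NAME LIKE 'Toku%'"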

Percona Live Europe featured talk with Marc Berhault — Inside CockroachDB’s Survivability Model

Latest MySQL Performance Blog posts - September 20, 2016 - 9:41am

Welcome to another Percona Live Europe featured talk with Percona Live Europe 2016: Amsterdam speakers! In this series of blogs, we’ll highlight some of the speakers that will be at this year’s conference. We’ll also discuss the technologies and outlooks of the speakers themselves. Make sure to read to the end to get a special Percona Live Europe registration bonus!

In this Percona Live Europe featured talk, we’ll meet Marc Berhault, Engineer at Cockroach Labs. His talk will be on Inside CockroachDB’s Survivability Model. This talk takes a deep dive into CockroachDB, a database whose “survive and thrive” model aims to bring the best aspects of Google’s next generation database, Spanner, to the rest of the world via open source.

I had a chance to speak with Marc and learn a bit more about these questions:

Percona: Give me a brief history of yourself: how you got into database development, where you work, what you love about it.

Marc: I started out as a Site Reliability Engineer managing Google’s storage infrastructure (GFS). Back in those days, keeping a cluster up and running mostly meant worrying about the masters.

I then switched to a developer role on Google’s next-generation storage system, which replaced the single write master with sharded metadata handlers. This increased the reliability of the entire system considerably, allowing for machine and network failures. SRE concerns gradually shifted away from machine reliability towards more interesting problems, such as multi-tenancy issues (quotas, provisioning, isolation) and larger scale failures.

After leaving Google, I found myself back in a world where one had to worry about a single machine all over again – at least when running your own infrastructure. I kept hearing the same story: a midsize company starts to grow out of its single-machine database and starts trimming the edges. This means moving tables to other hosts, shrinking schemas, etc., in order to avoid the dreaded “great sharding of the monolithic table,” often accompanied by its friends: cross-shard coordination layer and production complexity.

This was when I joined Cockroach Labs, a newly created startup with the goal of bringing a large-scale, transactional, strongly consistent database to the world at large. After contributing to various aspects of the projects, I switched my focus to production: adding monitoring, working on deployment, and of course rolling out our test clusters.

Percona: Your talk is called “Inside CockroachDB’s Survivability Model.” Define “survivability model”, and why it is important to database environments.

Marc: The survivability model in CockroachDB is centered around data redundancy. By default, all data is replicated three times (this is configurable) and is only considered written if a quorum exists. When a new node holding one of the copies of the data becomes unavailable, a node is picked and given a snapshot of the data.

This redundancy model has been widely used in distributed systems, but rarely with strongly consistent databases. CockroachDB’s approach provides strong consistency as well as transactions across the distributed data. We see this as a critical component of modern databases: allowing scalability while guaranteeing consistency.

Percona: What are the workloads and database environments that are best suited for a CockroachDB deployment? Do you see an expansion of the solution to encompass other scenarios?

Marc: CockroachDB is a beta product and is still in development. We expect to be out of beta by the end of 2016. Ideal workloads are those requiring strong consistency – those applications that manage critical data. However, strong consistency comes at a cost, usually directly proportional to latency between nodes and replication factor. This means that a widely distributed CockroachDB cluster (e.g., across multiple regions) will incur high write latencies, making it unsuitable for high-throughput operations, at least in the near term.

Percona: What is changing in the way businesses use databases that keeps you awake at night? How do you think CockroachDB is addressing those concerns?

Marc: In recent years, more and more businesses have been reaching the limits of what their single-machine databases can handle. This has forced many to implement their own transactional layers on top of disjoint databases, at the cost of longer development time and correctness.

CockroachDB attempts to find a solution to this problem by allowing a strongly consistent, transactional database to scale arbitrarily.

Percona: What are you looking forward to the most at Percona Live Europe this year?

Marc: This will be my first time at a Percona Live conference, so I’m looking forward to hearing from other developers and learning what challenges other architects and DBAs are facing in their own work.

You can read more about Marc’s thoughts on CockroachDB at their blog.

Want to find out more about Marc, CockroachDB and survivability? Register for Percona Live Europe 2016, and come see his talk Inside CockroachDB’s Survivability Model.

Use the code FeaturedTalk and receive €25 off the current registration price!

Percona Live Europe 2016: Amsterdam is the premier event for the diverse and active open source database community. The conferences have a technical focus with an emphasis on the core topics of MySQL, MongoDB, and other open source databases. Percona Live tackles subjects such as analytics, architecture and design, security, operations, scalability and performance. It also provides in-depth discussions for your high-availability, IoT, cloud, big data and other changing business needs. This conference is an opportunity to network with peers and technology professionals by bringing together accomplished DBAs, system architects and developers from around the world to share their knowledge and experience. All of these people help you learn how to tackle your open source database challenges in a whole new way.

This conference has something for everyone!

Percona Live Europe 2016: Amsterdam is October 3-5 at the Mövenpick Hotel Amsterdam City Centre.

Amsterdam eWeek

Percona Live Europe 2016 is part of Amsterdam eWeek. Amsterdam eWeek provides a platform for national and international companies that focus on online marketing, media and technology and for business managers and entrepreneurs who use them, whether it comes to retail, healthcare, finance, game industry or media. Check it out!


