
What’s Next for SQL Databases?

In this blog, I’ll go over my thoughts on what we can expect in the world of SQL databases.

After reading Baron’s prediction on databases, here:

https://www.xaprb.com/blog/defining-moments-in-database-history/

I want to provide my own view on what's coming up next for SQL databases. I think we live in interesting times: we are seeing the beginning of the next generation of RDBMSs.

These databases share several defining characteristics:

  1. Auto-scaling. The ability to add and use resources depending on the current load and database size. This is done transparently for users and DBAs.
  2. Auto-healing. The automatic handling of node failures.
  3. Multi-regional, cloud-agnostic, geo-distributed. The ability to support multiple data centers and multiple clouds, in different parts of the world.
  4. Transactional. All the above, with the ability to support multi-statements transactional workloads.
  5. Strong consistency. The full definition of strong consistency is pretty involved. For simplicity, let's say it means that reads (in the absence of ongoing writes) return the same data, regardless of which region or data center you read from. A simple counter-example is MySQL's famous asynchronous replication, where (with slave delay) reading from a slave can return very outdated data. I focus on reads because, in a distributed environment, it is consistent read performance that suffers: network latency (often bounded by the speed of light) defines it.
  6. SQL language. SQL, despite being old and widely criticized, is not going anywhere. This is a universal language for app developers to access data.
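The read-consistency difference in point 5 can be shown with a toy model. This is purely illustrative pseudocode, not any real system's API: a primary keeps an ordered log of committed writes, while an asynchronous replica has applied only a prefix of that log.

```python
# Toy model of strong vs. eventually consistent reads (illustrative only).
# The primary's log holds committed writes in order; the async replica has
# applied only the first `replica_applied` entries (replication lag).

log = ["x=1", "x=2", "x=3"]   # committed writes, in order
replica_applied = 1           # replica has applied only log[:1]

def read_primary():
    """A strongly consistent read sees the latest committed write."""
    return log[-1]

def read_async_replica():
    """An async-replica read can return stale data, as with slave delay."""
    return log[replica_applied - 1]

assert read_primary() == "x=3"
assert read_async_replica() == "x=1"  # outdated
```

A strongly consistent system has to do extra work (routing to a leader, or a quorum read) so that clients never observe the stale case shown here.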

With this in mind, I see the following interesting projects:

  • Google Cloud Spanner (https://cloud.google.com/spanner/). Recently announced and still in Beta. Definitely an interesting project, with the obvious limitation of running only in Google Cloud.
  • FaunaDB (https://fauna.com/). Also very recently announced, so it is hard to say how it performs. The major downside I see is that it does not provide SQL access, but uses a custom language.
  • Two open source projects:
    • CockroachDB (https://www.cockroachlabs.com/). This is still in the Beta stage, but definitely an interesting project to follow. Initially, the project planned to support only key-value access, but later they made a very smart decision to provide SQL access via a PostgreSQL-compatible protocol.
    • TiDB (https://github.com/pingcap/tidb). Right now in RC stages, and the target is to provide SQL access over a MySQL compatible protocol (and later PostgreSQL protocol).

Protocol compatibility is a wise approach, although not strictly necessary. It lowers the entry barrier for existing applications.

Both CockroachDB and TiDB, at the time of this writing, still have rough edges and (from my experience) can't yet be used in serious deployments. I expect both projects to make big progress in 2017.

What shared characteristics can we expect from these systems?

As I mentioned above, we may see read performance degrade (as latency increases), and it will often be defined more by network performance than anything else. Storage IO and CPU cycles will be secondary factors. There will be more work on understanding and tuning network traffic.

We may need to get used to the fact that point selects and small range selects become much slower. Right now, we see very fast point selects from traditional RDBMSs (MySQL, PostgreSQL, etc.).
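A back-of-envelope calculation shows why. The numbers below are illustrative assumptions (not measurements): a sub-millisecond local point select versus a consistent read that needs at least one cross-region round trip.

```python
# Rough, illustrative latency budget for a single point select.
# All numbers are assumptions for the sake of arithmetic, not benchmarks.

local_point_select_ms = 0.5    # typical single-node MySQL/PostgreSQL lookup
cross_region_rtt_ms = 60.0     # e.g., a coast-to-coast network round trip

# A consistent read in a geo-distributed store may need at least one
# round trip to a quorum/leader in another region, plus the local work:
distributed_point_select_ms = cross_region_rtt_ms + local_point_select_ms

slowdown = distributed_point_select_ms / local_point_select_ms
print(round(slowdown))  # → 121, i.e. roughly two orders of magnitude
```

Storage IO barely figures in this budget; the speed-of-light term dominates, which is exactly the shift described above.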

Heavy writes will be problematic. The problem is that all writes will need to go through the consistency protocol. Write-optimized storage engines will help (both CockroachDB and TiDB use RocksDB in the storage layer).

Long transactions (say, changing 100,000 or more rows) will also be problematic. There are simply too many network round trips and too much housekeeping work on each node, making long transactions an issue for distributed systems.
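One common mitigation is to split a huge change into smaller transactions, committing each chunk separately. Here is a sketch; the `execute`/`commit` calls are stand-ins for whatever database driver you use, and the table is made up.

```python
# Sketch: instead of one 100,000-row distributed transaction, commit in
# chunks so each transaction stays small. The commented-out calls are
# placeholders for a real driver's API (e.g., a DB-API cursor).

def chunks(ids, size):
    """Yield consecutive slices of `ids`, each at most `size` long."""
    for i in range(0, len(ids), size):
        yield ids[i:i + size]

def update_in_batches(ids, batch_size=1000):
    committed = 0
    for batch in chunks(ids, batch_size):
        # cursor.execute("UPDATE t SET flag = 1 WHERE id IN (%s)", batch)
        # connection.commit()
        committed += len(batch)
    return committed

assert update_in_batches(list(range(100_000))) == 100_000
```

The trade-off is that the overall change is no longer atomic: a failure mid-way leaves some chunks committed, so the job must be restartable.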

Another shared property (at least between CockroachDB and TiDB) is the active use of the Raft protocol to achieve consistency. So it will be important to understand how this protocol works to use it effectively. You can find a good overview of the Raft protocol here: http://container-solutions.com/raft-explained-part-1-the-consenus-problem/.
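The part of Raft that most affects performance is its commit rule: a log entry is committed only once a majority of replicas have acknowledged it. The sketch below shows just that quorum idea (it deliberately omits elections, terms, and log matching, so it is not the full protocol).

```python
# Simplified sketch of Raft's commit rule: an entry at a given log index
# is committed once it is replicated on a majority of nodes.

def majority(n_nodes):
    """Quorum size for an n-node cluster."""
    return n_nodes // 2 + 1

def is_committed(follower_match_index, n_nodes, index):
    """follower_match_index: highest replicated log index reported by each
    follower. The leader always counts itself as having the entry."""
    acks = 1 + sum(1 for m in follower_match_index if m >= index)
    return acks >= majority(n_nodes)

# 5-node cluster: leader plus 4 followers at these replication positions.
followers = [7, 7, 3, 0]
assert is_committed(followers, 5, 7) is True   # leader + 2 followers = quorum of 3
assert is_committed(followers, 5, 8) is False  # only the leader has index 8
```

This is why every write pays at least one network round trip to a majority, tying back to the latency concerns above.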

There are probably more NewSQL technologies than I have mentioned here, but I do not think any of them has captured critical market share or mind share. So we are at the beginning of interesting times . . .

What about MySQL? Can MySQL become the database that provides all these characteristics? It is possible, but I do not think it will happen anytime soon. MySQL would need to provide automatic sharding to do this, which will be very hard to implement given its current internal design. It may happen in the future, though it will require a lot of engineering effort to make it work properly.

Percona Toolkit 3.0.2 is now available

Latest MySQL Performance Blog posts - 11 hours 40 min ago

Percona announces the availability of Percona Toolkit 3.0.2 on March 27, 2017.

Percona Toolkit is a collection of advanced command-line tools that perform a variety of MySQL and MongoDB server and system tasks too difficult or complex for DBAs to perform manually. Percona Toolkit, like all Percona software, is free and open source.

This release includes the following changes:

New Features
  • PT-73: Added support for SSL connections to pt-mongodb-summary and pt-mongodb-query-digest
  • 1642751: Enabled gathering of information about locks and transactions by pt-stalk using Performance Schema if it is enabled (Thanks, Agustin Gallego)
Bug Fixes
  • PT-74: Fixed gathering of security settings when running pt-mongodb-summary on a mongod instance that is specified as the host
  • PT-75: Changed the default sort order in pt-mongodb-query-digest output to descending
  • PT-76: Added support of & and # symbols in passwords for pt-mysql-summary
  • PT-77: Updated Makefile to support new MongoDB tools
  • PT-89: Fixed pt-stalk to run top more than once to collect useful CPU usage
  • PT-93: Fixed pt-mongodb-query-digest to make query ID match query key (Thanks, Kamil Dziedzic)
  • PT-94: Fixed pt-online-schema-change to not make duplicate rows in _t_new when updating the primary key. Also, see 1646713.
  • PT-101: Fixed pt-table-checksum to correctly use the --slave-user and --slave-password options. Also, see 1651002.
  • PT-105: Fixed pt-table-checksum to continue running if a database is dropped in the process

You can find release details in the release notes. Bugs can be reported on Toolkit’s launchpad bug tracker.

$5 Percona Live keynote passes and $101 101 crash courses

Latest MySQL Performance Blog posts - March 24, 2017 - 2:49pm

The Percona Live Open Source Database Conference 2017 in Santa Clara, California is just around the corner: April 24-27, 2017. We’re busy getting things ready to make sure everybody gets the most out of their time there. As part of that, we have some news and a couple of outstanding offers for you!

$5 Percona Live Keynote Passes and
$101 101 Crash Courses

Keynote Speakers
We are extremely pleased with our keynote speakers this year. We have a wide assortment of industry leaders and experts presenting a great set of topics, with more to come! Below are some of our keynote presentations:

Day 1

Day 2

Day 3

  • 9:00 AM – Peter Zaitsev, Percona: Closing Keynote
  • 9:30 AM – Jean Francois Gagne, Booking.com LT: Bookings-per-seconds
  • 9:50 AM – Community Award Ceremony

$5 Keynote Passes
To help make the keynotes and the community events accessible to the greatest number of community members, we are once again offering $5 Percona Live keynote passes for the Percona Live Open Source Database Performance Conference 2017.

A keynote pass provides access to the keynote addresses, Birds of a Feather sessions, the exhibit floor and the Community Networking Reception on Wednesday night. The first 100 people who register for an Expo-Only pass (new registrations only) using the discount code “KEY” will be able to register for just $5.

$101 101 Passes

For a limited time, you can get access to the 101 Crash Courses for only $101! Percona Live is once again hosting Crash Courses for developers, systems administrators, and other technical resources.

The MySQL 101 will be on Tuesday, April 25 and MongoDB 101 will be on Wednesday, April 26.

To learn more about our crash courses, read this blog. Register now using the following codes for your discount:

  • 101: $299 off either the MySQL or MongoDB tickets
  • 202: $498 off the combined MySQL/MongoDB ticket

This deal expires soon, so reserve your spot now!

All Percona Live registration options can be found here. Register now! See you at Percona Live Open Source Database Performance Conference 2017!

Make sure to follow us on Twitter using #PerconaLive, and to visit our conference website to stay up-to-date with the exciting announcements yet to come! We hope to see you in Santa Clara, CA in April!

Want to sponsor Percona Live? Booth selection for our limited sponsorship opportunities is on a first-come, first-served basis. Download the sponsorship prospectus.

Percona Server for MySQL 5.7.17-12 is Now Available

Latest MySQL Performance Blog posts - March 24, 2017 - 10:43am

Percona announces the GA release of Percona Server for MySQL 5.7.17-12 on March 24, 2017. Download the latest version from the Percona web site or the Percona Software Repositories. You can also run Docker containers from the images in the Docker Hub repository.

Based on MySQL 5.7.17, including all the bug fixes in it, Percona Server for MySQL 5.7.17-12 is the current GA release in the Percona Server for MySQL 5.7 series. Percona provides completely open source and free software. Find release details in the 5.7.17-12 milestone at Launchpad.

New Features:
  • Percona Server has implemented a new mysqldump --order-by-primary-desc option. This feature tells mysqldump to take the backup in descending primary key order (PRIMARY KEY DESC), which can be useful if the storage engine uses a reverse-order column family for the primary key.
  • mysqldump will now detect when MyRocks is installed and available by checking for a session variable named rocksdb_skip_fill_cache, and will set it to 1 if it exists.
  • mysqldump will now automatically enable the session variable rocksdb_bulk_load if it is supported by the target server.
Bugs Fixed:
  • If the variable thread_handling was set to pool-of-threads in the MySQL configuration file, the server couldn’t be gracefully shut down. Bug fixed #1537554.
  • When innodb_ft_result_cache_limit was exceeded by internal memory allocated by InnoDB during the FT scan, not all memory was released, which could lead to a server assertion. Bug fixed #1634932 (upstream #83648).
  • Executing the FLUSH LOGS on a read-only slave with a user that doesn’t have the SUPER privilege would result in Error 1290. Bug fixed #1652852 (upstream #84350).
  • FLUSH LOGS was disabled with read_only and super_read_only variables. Bug fixed #1654682 (upstream #84437).
  • If a SHOW BINLOGS or PERFORMANCE_SCHEMA.GLOBAL_STATUS query ran in parallel with a transaction commit, they could deadlock. Bug fixed #1657128.
  • A long-running binary log commit would block SHOW STATUS, which in turn could block a number of other operations such as client connects and disconnects. Bug fixed #1646100.
  • Log tracking initialization did not find the last valid bitmap data correctly. Bug fixed #1658055.
  • A query using range scan with a complex range condition could lead to a server crash. Bug fixed #1660591 (upstream #84736).
  • Race condition between buffer pool page optimistic access and eviction could lead to a server crash. Bug fixed #1664280.
  • If the Audit Log Plugin was unable to create the file pointed to by audit_log_file, the server would crash during startup. Bug fixed #1666496.
  • A DROP TEMPORARY TABLE ... for a table created by a CREATE TEMPORARY TABLE ... SELECT ... would get logged in the binary log on a disconnect with mixed mode replication. Bug fixed #1671013.
  • TokuDB did not use an index even if cardinality was good. Bug fixed #1671152.
  • Row-based replication events were not reflected in Rows_updated fields in the User Statistics INFORMATION_SCHEMA tables. Bug fixed #995624.
  • When DuplicateWeedout strategy was used for joins, use was not reported in the query plan info output extension for the slow query log. Bug fixed #1592694.
  • It was impossible to use column compression dictionaries with partitioned InnoDB tables. Bug fixed #1653104.
  • Diagnostics for OpenSSL errors have been improved. Bug fixed #1660339 (upstream #75311).

Other bugs fixed: #1665545, #1650321, #1654501, #1663251, #1659548, #1663452, #1670834, #1672871, #1626545, #1658006, #1658021, #1659218, #1659746, #1660239, #1660243, #1660348, #1662163 (upstream #81467), #1664219, #1664473, #1671076, and #1671123.

The release notes for Percona Server for MySQL 5.7.17-12 are available in the online documentation. Please report any bugs on the launchpad bug tracker.

Percona Server 5.6.35-81.0 is Now Available

Latest MySQL Performance Blog posts - March 24, 2017 - 10:25am

Percona announces the release of Percona Server 5.6.35-81.0 on March 24, 2017. Download the latest version from the Percona web site or the Percona Software Repositories. You can also run Docker containers from the images in the Docker Hub repository.

Based on MySQL 5.6.35, and including all the bug fixes in it, Percona Server 5.6.35-81.0 is the current GA release in the Percona Server 5.6 series. Percona Server is open-source and free – this is the latest release of our enhanced, drop-in replacement for MySQL. Complete details of this release are available in the 5.6.35-81.0 milestone on Launchpad.

New Features:
  • Percona Server has implemented a new mysqldump --order-by-primary-desc option. This feature tells mysqldump to take the backup in descending primary key order (PRIMARY KEY DESC), which can be useful if the storage engine uses a reverse-order column for the primary key.
Bugs Fixed:
  • When innodb_ft_result_cache_limit was exceeded by internal memory allocated by InnoDB during the FT scan, not all memory was released, which could lead to a server assertion. Bug fixed #1634932 (upstream #83648).
  • Log tracking initialization did not find the last valid bitmap data correctly, potentially resulting in needless redo log retracking or a hole in the tracked LSN range. Bug fixed #1658055.
  • If the Audit Log Plugin was unable to create the file pointed to by audit_log_file, the server would crash during startup. Bug fixed #1666496.
  • A DROP TEMPORARY TABLE ... for a table created by a CREATE TEMPORARY TABLE ... SELECT ... would get logged in the binary log on a disconnect with mixed mode replication. Bug fixed #1671013.
  • TokuDB did not use an index even if cardinality was good. Bug fixed #1671152.
  • Row-based replication events were not reflected in Rows_updated fields in the User Statistics INFORMATION_SCHEMA tables. Bug fixed #995624.
  • A long-running binary log commit would block SHOW STATUS, which in turn could block a number of other operations such as client connects and disconnects. Bug fixed #1646100.
  • It was impossible to use column compression dictionaries with partitioned InnoDB tables. Bug fixed #1653104.
  • Diagnostics for OpenSSL errors have been improved. Bug fixed #1660339 (upstream #75311).
  • When DuplicateWeedout strategy was used for joins, use was not reported in the query plan info output extension for the slow query log. Bug fixed #1592694.

Other bugs fixed: #1650321, #1650322, #1654501, #1663251, #1666213, #1652912, #1659548, #1663452, #1670834, #1672871, #1626545, #1644174, #1658006, #1658021, #1659218, #1659746, #1660239, #1660243, #1660255, #1660348, #1662163 (upstream #81467), #1664219, #1664473, #1671076, and #1671123.

Release notes for Percona Server 5.6.35-81.0 are available in the online documentation. Please report any bugs on the launchpad bug tracker.

Percona Server for MySQL 5.5.54-38.7 is Now Available

Latest Forum Posts - March 24, 2017 - 5:51am
Percona announces the release of Percona Server for MySQL 5.5.54-38.7 on March 22, 2017. Based on MySQL 5.5.54, including all the bug fixes in it, Percona Server for MySQL 5.5.54-38.7 is now the current stable release in the 5.5 series.

Percona Server for MySQL is open-source and free. You can find release details in the 5.5.54-38.7 milestone on Launchpad. Downloads are available here and from the Percona Software Repositories. Bugs Fixed:

  • Log tracking initialization did not find the last valid bitmap data correctly, potentially resulting in needless redo log retracking or a hole in the tracked LSN range. Bug fixed #1658055.
Other bugs fixed: #1652912, and #1655587.

Find the release notes for Percona Server for MySQL 5.5.54-38.7 in our online documentation. Report bugs on the launchpad bug tracker.

Percona Live Featured Session with Luís Soares: The New MySQL Replication Features in MySQL 8

Latest MySQL Performance Blog posts - March 23, 2017 - 6:15pm

Welcome to another post in the series of Percona Live featured session blogs! In these blogs, we’ll highlight some of the session speakers that will be at this year’s Percona Live conference. We’ll also discuss how these sessions can help you improve your database environment. Make sure to read to the end to get a special Percona Live 2017 registration bonus!

In this Percona Live featured session, we’ll meet Luís Soares, Principal Software Engineer at Oracle. His session is The New MySQL Replication Features in MySQL 8 (with fellow presenter Lars Thalmann, Development Director at Oracle). The most popular high availability (HA) techniques deployed are based on making services redundant, in particular by means of replication. This fits quite naturally in the MySQL universe, as MySQL server has provided a mature replication solution for over a decade now. Moreover, the new replication developments (and their roadmap) show that MySQL is also catering for the requirements posed by popular environments such as the cloud.

I had a chance to speak with Luís about MySQL 8.0 replication:

Percona: How did you get into database technology? What do you love about it?

Luís: My background is in distributed systems, particularly in database replication, message passing technologies and fault-tolerance. It all started while I was taking my BSc in computer science. As I finished it, I felt very much drawn towards these subjects. That alone made me enroll in a Master’s course that allowed me to focus almost full time on database replication using group communication technologies. I continued to study this field for years, and further deepened my knowledge on this area. That was great fun and quite a learning experience!

Years went by, and eventually I ended up working at MySQL on the replication team. This happened after I came to a MySQL user conference to present some of the work that I was doing at the time.

These are very fond memories! But I digress!

Back to the point. In general, the thing I love about working on database replication is that I am constantly facing new and interesting problems. Data replication in itself is hard. Add to that the semantics and requirements of a database server, and complexity increases quite a bit. Also, building a generic database replication service that fits in a large set of use cases requires a lot of discipline and careful thinking when designing new features. And let’s not forget the Web itself, which is constantly changing. New technologies come and go at a fast pace. The volume of data that has to be handled, year after year, increases considerably. This poses scalability and integration challenges that need to be addressed.

All in all, these are very exciting times to work with high availability, data replication and data integration.

Now specifically about MySQL, I love the fact that I work on a popular database technology that embraced replication very early in its life cycle. Replication awareness runs deep in the product and in its ecosystem. Consequently, MySQL has an extensive user base exploring many different use case scenarios around replication. And this is extremely motivating, rewarding and exciting. I can honestly say that my day-to-day work is never boring!

Percona: Your talk is called The New MySQL Replication Features in MySQL 8. What are the key replication features in MySQL 8.0, and why are they important?

Luís: It was a huge amount of work to get the MySQL Group Replication plugin out with MySQL 5.7.17. Group Replication is a new plugin that gives the user some nice replication properties by resorting to group communication and state machine replication. This makes the system able to protect data against split-brain situations, enables fault tolerance and high availability, and provides coordination between servers committing transactions that change the data.

In addition to Group Replication, the team has also invested quite a bit on core replication features. Some of these features were already released, and others will be released at some point in time in a MySQL DMR.

In the first 8.0 DMR (MySQL 8.0.0), replication has better instrumentation for row-based replication: the user can observe the row-based replication applier's progress by querying Performance Schema tables. There is also enhanced global transaction identifier (GTID) history management, as the user can now set the variable GTID_PURGED in scenarios other than those where the server has an empty GTID execution history. And the user can now specify the stop condition when starting the relay log applier, even if multiple applier threads are started.

All these features combined are of great help, since they reduce operations overhead through automation, better observability and coordination between servers.

Work continues on many fronts: performance, availability, scalability, efficiency and observability. Stay tuned!

Percona: How do these features make DBAs lives easier? What problems do they solve?

Luís: As mentioned above, the features in MySQL 8.0.0 take some of the operations burden from the DBA. Moreover, they allow the user to better observe what is happening inside the replication pipeline. This alone is quite interesting, since DBAs need to make decisions both when designing new deployments and when tackling issues, possibly having to meet very tight deadlines.

Simply put, these features will help DBAs to diagnose and fix problems faster.

Percona: What do you want attendees to take away from your session? Why should they attend?

Luís: Our session is primarily about the shiny new replication features already in MySQL 8. This is the first takeaway. To know, first hand, what is in MySQL 8 replication-wise. But there is another takeaway, and quite an interesting one. Part of the session is dedicated to presenting the overall ideas around MySQL replication. So attendees will get an overview of the roadmap, and will be able to participate and provide feedback along the way. They will learn more about the big picture, and we will bring together some of the hot MySQL technologies that we keep hearing about nowadays: Group Replication, InnoDB Clusters, Multi-Threaded Replication and more!

It will be fun.

Percona: What are you most looking forward to at Percona Live 2017?

Luís: As a conference participant, I look forward to doing some networking with the vibrant MySQL community. I must say that I really enjoy engaging in nice technical discussions about my favorite topics: fault-tolerance, replication, dependability and distributed systems overall. The conference gives me a great opportunity to do this.

As a MySQL developer, and one that has been developing MySQL replication for quite some time now, I look forward to talking about the recent work that my team has done and getting all the feedback I can.

As a bystander, conferences like Percona Live make me realize how much MySQL has grown, and how much it has evolved. Replication, for instance, has had so many interesting features, release after release over the last eight or nine years. The community has embraced and deployed them, often worked/interacted with the developers to improve them by providing feedback, feature requests or contributions. And this means that they are part of the story too!

These conferences are always a great learning experience! After spending a week with the MySQL community, I always feel refreshed, energized, extra motivated and with lots of food for thought when I get back home.

Go MySQL!

Register for Percona Live Data Performance Conference 2017, and see Luís present his session on The New MySQL Replication Features in MySQL 8 (with fellow presenter Lars Thalmann, Development Director at Oracle). Use the code FeaturedTalk and receive $100 off the current registration price!

Percona Live Data Performance Conference 2017 is the premier open source event for the data performance ecosystem. It is the place to be for the open source community, as well as businesses that thrive in the MySQL, NoSQL, cloud, big data and Internet of Things (IoT) marketplaces. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Data Performance Conference will be April 24-27, 2017 at the Hyatt Regency Santa Clara & The Santa Clara Convention Center.

Running Percona XtraDB Cluster on Windows … in Docker

Latest MySQL Performance Blog posts - March 23, 2017 - 11:03am

In this blog post, we’ll look at how to run Percona XtraDB Cluster on Windows using Docker.

This is a follow-up to my previous post on Percona XtraBackup on Windows. The fact is that with Docker you can now run a variety of software applications on Windows that previously were available only for Linux platforms.

For evaluation and testing purposes, we can run several nodes of Percona XtraDB Cluster on a single Windows box.

The steps for this are:

  1. Setup Docker on the Windows box.
    https://docs.docker.com/docker-for-windows/install
  2. Create a Docker network.
    docker network create net1
  3. Bootstrap the cluster.
    docker run -e MYSQL_ROOT_PASSWORD=test -e CLUSTER_NAME=cl1 --name=node1 --net=net1 perconalab/percona-xtradb-cluster
  4. Join the nodes.
    docker run -e MYSQL_ROOT_PASSWORD=test -e CLUSTER_NAME=cl1 -e CLUSTER_JOIN=node1 --net=net1 perconalab/percona-xtradb-cluster

Repeat step 4 as many times as you want.
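Once a few nodes have joined, a quick way to confirm they actually formed a cluster is to check Galera's view of the membership from inside the bootstrap container. A hedged sketch, assuming the container name (node1) and root password (test) from the steps above:

```shell
# wsrep_cluster_size should equal the number of containers you started.
docker exec -it node1 mysql -uroot -ptest \
  -e "SHOW STATUS LIKE 'wsrep_cluster_size'"
```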

The result: the cluster is now running on Windows!

The Puzzling Performance of the Samsung 960 Pro

Latest MySQL Performance Blog posts - March 22, 2017 - 5:14pm

In this blog post, I’ll take a look at the performance of the Samsung 960 Pro NVMe SSD.

First, I know the Samsung 960 Pro is a consumer NVMe SSD, not intended for sustained data center workloads. But the AnandTech review looked good enough that I decided to take it for a test spin to see if it would work well with MySQL benchmarks.

Before that, I decided to do a simple sysbench file IO test to see how the drives handled sustained workloads, and if it would start acting up.

My expectation for a consumer SSD drive is that its write consistency will suffer. Many of those drives can sustain high bursts for short periods of time, but have to slow down to keep up with wear leveling (and the other internal activities SSDs must do). This is not what I saw, however.

I ran a benchmark on an E5-2630L V3 with 64GB of RAM, Ubuntu 16.04 LTS, an XFS filesystem and a Samsung 960 Pro 512GB (FW: 1B6QCXP7):

sysbench --num-threads=64 --max-time=86400 --max-requests=0 --test=fileio --file-num=1 --file-total-size=260G --file-io-mode=async --file-extra-flags=direct --file-test-mode=rndrd run

Note: I used asynchronous direct IO to keep it close to how MySQL (InnoDB) submits IO requests.

This is what the “Read Throughput” graph looks like in Percona Monitoring and Management (PMM):

As you can see, in addition to some reasonable ebbs and flows we have some major dips from about 1.5GB/sec of random reads to around 800MB/sec. This almost halves the performance. We can clearly see two of those dips, with the third one starting when the test ended.  

What is really interesting is that as I did a read-write test, it performed much more uniformly:

sysbench --num-threads=64 --max-time=86400 --max-requests=0 --test=fileio --file-num=1 --file-total-size=260G --file-io-mode=async --file-extra-flags=direct --file-test-mode=rndrw run

Any ideas on what the cause of such strange periodic IO performance regression for reads could be?

This does not look like throttling due to overheating. It is much too regular for that (and I checked the temperature – it wasn’t any different during these performance regressions).

One theory I have is “read disturb management”: could the SSD need to rewrite the data after so many reads? By my calculations, every cell is read some 166 times during the eight hours between those gaps. This doesn’t sound like a lot.
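The estimate is easy to reproduce. A rough sketch of the arithmetic, assuming an average random-read throughput of about 1.5GB/sec (from the graph), the 260GB test file, and roughly eight hours between the dips:

```python
# Back-of-the-envelope reads-per-cell estimate between the dips.
# Assumptions: ~1.5 GB/s average random-read throughput, a 260 GB
# test file, and ~8 hours between the dips.
throughput_gb_per_s = 1.5
file_size_gb = 260
interval_s = 8 * 3600

total_read_gb = throughput_gb_per_s * interval_s
reads_per_cell = total_read_gb / file_size_gb
print(round(reads_per_cell))  # roughly 166
```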

What are your thoughts?

Percona Server for MySQL 5.5.54-38.7 is Now Available

Latest MySQL Performance Blog posts - March 22, 2017 - 10:26am

Percona announces the release of Percona Server for MySQL 5.5.54-38.7 on March 22, 2017. Based on MySQL 5.5.54, including all the bug fixes in it, Percona Server for MySQL 5.5.54-38.7 is now the current stable release in the 5.5 series.

Percona Server for MySQL is open-source and free. You can find release details in the 5.5.54-38.7 milestone on Launchpad. Downloads are available here and from the Percona Software Repositories.

Bugs Fixed:
  • Log tracking initialization did not find the last valid bitmap data correctly, potentially resulting in needless redo log retracking or a hole in the tracked LSN range. Bug fixed #1658055.

Other bugs fixed: #1652912, and #1655587.

Find the release notes for Percona Server for MySQL 5.5.54-38.7 in our online documentation. Report bugs on the launchpad bug tracker.

Dropping the Foreign Key Constraint Using pt-online-schema-change

Latest MySQL Performance Blog posts - March 21, 2017 - 3:35pm

In this blog post, we’ll look at how to get rid of the unused Foreign Key (FK) constraint and/or related columns/keys with the help of pt-online-schema-change and the power of its plugins.

Before we proceed, here is a useful blog post written by Peter Zaitsev on Hijacking Innodb Foreign Keys.

If you are trying to get rid of an unused foreign key (FK) constraint and related columns on versions older than MySQL 5.6, or on tables where ALTER TABLE ... ALGORITHM=INPLACE cannot be used because of the limitations mentioned here (specifically, tables with 5.5 TIMESTAMP formats), you can use pt-online-schema-change.

Executing DROP FOREIGN KEY constraint_name with pt-online-schema-change requires specifying _constraint_name rather than the real constraint_name. This is due to a limitation in MySQL: pt-online-schema-change adds a leading underscore to foreign key constraint names when creating the new table. Here is a simple example of one such case:

CREATE TABLE `test3` (
  `Id` int(11) NOT NULL DEFAULT '0',
  `Firstname` varchar(32) DEFAULT NULL,
  `City` varchar(32) DEFAULT NULL,
  PRIMARY KEY (`Id`),
  CONSTRAINT `FKID` FOREIGN KEY (`Id`) REFERENCES `test4` (`Id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
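If you are not sure what the constraint is actually called, you can look it up in information_schema before running the tool. A sketch, assuming the schema name (apps02) used later in this example:

```sql
SELECT CONSTRAINT_NAME
FROM information_schema.TABLE_CONSTRAINTS
WHERE TABLE_SCHEMA = 'apps02'
  AND TABLE_NAME = 'test3'
  AND CONSTRAINT_TYPE = 'FOREIGN KEY';
```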

To drop the constraint, we are supposed to add an underscore prior to constraint_name FKID:

[root@siddhant ~]# pt-online-schema-change --user=root --execute --set-vars=foreign_key_checks=0 --alter-foreign-keys-method=rebuild_constraints --alter="DROP FOREIGN KEY _FKID" D=apps02,t=test3 --socket=/tmp/mysql-master5520.sock
Operation, tries, wait:
  analyze_table, 10, 1
  copy_rows, 10, 0.25
...
Altering `apps02`.`test3`...
Creating new table...
Created new table apps02._test3_new OK.
Altering new table...
...
2017-02-11T12:45:12 Dropped old table `apps02`.`_test3_old` OK.
2017-02-11T12:45:12 Dropping triggers...
2017-02-11T12:45:12 Dropped triggers OK.
Successfully altered `apps02`.`test3`.

Below is one case where this fails: if, for some reason, you already have an FK constraint that starts with an underscore, the above method of adding one more underscore to the already underscored _FK will fail with an error while dropping it:

Error altering new table `apps02`.`_test3_new`: DBD::mysql::db do failed: Error on rename of './apps02/_test3_new' to './apps02/#sql2-697-19' (errno: 152) [for Statement "ALTER TABLE `apps02`.`_test3_new` DROP FOREIGN KEY ___FKID"] at /usr/bin/pt-online-schema-change line 9069.

In such cases, we have to make use of the --plugin option, along with a file that subclasses the pt_online_schema_change_plugin class and uses the after_alter_new_table hook to drop the FK constraint. For example, a table with an underscored FK constraint is:

CREATE TABLE `test` (
  `Id` int(11) NOT NULL DEFAULT '0',
  `Firstname` varchar(32) DEFAULT NULL,
  `City` varchar(32) DEFAULT NULL,
  PRIMARY KEY (`Id`),
  CONSTRAINT `___fkId` FOREIGN KEY (`Id`) REFERENCES `test2` (`Id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1

Here we have a table with the foreign key ___fkId, which uses three underscores. Our plugin for dropping the constraint should be as follows:

[root@siddhant ~]# cat ptosc_plugin_drop_fk.pl
package pt_online_schema_change_plugin;
use strict;

sub new {
   my ($class, %args) = @_;
   my $self = { %args };
   return bless $self, $class;
}

sub after_alter_new_table {
   my ($self, %args) = @_;
   my $new_tbl = $args{new_tbl};
   my $dbh     = $self->{cxn}->dbh;
   my $sth     = $dbh->prepare("ALTER TABLE $new_tbl->{name} DROP FOREIGN KEY __fkId");
   $sth->execute();
}

1;

NOTE: the DROP FOREIGN KEY statement in the plugin has one underscore less than the original foreign key constraint (__fkId vs. ___fkId). Also, the alter statement will be a no-op (i.e., --alter="ENGINE=INNODB").

Here is the pt-online-schema-change execution example with the plugin.

[root@siddhant ~]# pt-online-schema-change --user=root --execute --set-vars=foreign_key_checks=0 --alter-foreign-keys-method=rebuild_constraints --alter="ENGINE=INNODB" --plugin=/root/ptosc_plugin_drop_fk.pl D=apps01,t=test --socket=/tmp/mysql-master5520.sock
Created plugin from /root/ptosc_plugin_drop_fk.pl.
Operation, tries, wait:
  analyze_table, 10, 1
  copy_rows, 10, 0.25
  create_triggers, 10, 1
  drop_triggers, 10, 1
  swap_tables, 10, 1
  update_foreign_keys, 10, 1
Altering `apps01`.`test`...
Creating new table...
Created new table apps01._test_new OK.
Altering new table...
Altered `apps01`.`_test_new` OK.
2017-02-11T11:26:14 Creating triggers...
2017-02-11T11:26:14 Created triggers OK.
2017-02-11T11:26:14 Copied rows OK.
2017-02-11T11:26:14 Swapping tables...
2017-02-11T11:26:14 Swapped original and new tables OK.
2017-02-11T11:26:14 Dropping old table...
2017-02-11T11:26:14 Dropped old table `apps01`.`_test_old` OK.
2017-02-11T11:26:14 Dropping triggers...
2017-02-11T11:26:14 Dropped triggers OK.
Successfully altered `apps01`.`test`.

.exp files not deleted with drop database

Latest Forum Posts - March 21, 2017 - 2:53pm
The .exp files created when a database is backed up with XtraBackup do not get deleted when the database is dropped. This causes the error:
ERROR 1010 (HY000): Error dropping database (can't rmdir './zfoo/', errno: 17)

>ls -l /data/mysql/zfoo
-rwxrwxr-x 1 mysql mysql 953 Mar 8 16:16 ztest.cfg
-rwxrwxr-x 1 mysql mysql 16384 Mar 8 16:16 ztest.exp
-rwxrwxr-x 1 mysql mysql 8948 Mar 8 16:15 ztest.frm
-rwxrwxr-x 1 mysql mysql 98304 Mar 8 16:01 ztest.ibd

mysql> drop database zfoo;
ERROR 1010 (HY000): Error dropping database (can't rmdir './zfoo/', errno: 17)

>ls -l /data/mysql/zfoo
-rwxrwxr-x 1 mysql mysql 16384 Mar 8 16:16 ztest.exp

Any ideas how to fix this?
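For what it's worth, errno 17 on DROP DATABASE generally means the database directory is not empty: the server refuses to rmdir it while the orphaned .exp file is still in it. A hedged workaround sketch, using the paths from the listing above:

```shell
# Remove the orphaned .exp file by hand, then retry the drop.
rm /data/mysql/zfoo/ztest.exp
mysql -u root -p -e "DROP DATABASE zfoo;"
```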

*** buffer overflow detected *** trying to query a PXC XtraDB cluster 5.7

Latest Forum Posts - March 21, 2017 - 8:52am
hi,

I have a PXC 5.7 cluster running with ProxySQL.
The setup was made using the Percona Docker images.

`show status like '%wsrep%';` shows that all nodes are up and synced.

When I try to access one node from another host on the same network using the mysql client, I get these messages:

from the client:
Warning: Using a password on the command line interface can be insecure.
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading authorization packet', system error: 2

from the node:

2017-03-21T15:30:53.784961Z 0 [Note] mysqld: ready for connections.
Version: '5.7.17-11-57'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  Percona XtraDB Cluster (GPL), Release rel11, Revision e2a7fdd, WSREP version 27.20, wsrep_27.20
2017-03-21T15:35:23.062096Z 0 [Note] WSREP: (4d3c097c, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://10.20.2.3:4567
*** buffer overflow detected ***: mysqld terminated
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x731af)[0x7f6f4a7c51af]
/lib/x86_64-linux-gnu/libc.so.6(__fortify_fail+0x37)[0x7f6f4a84aaa7]
/lib/x86_64-linux-gnu/libc.so.6(+0xf6cc0)[0x7f6f4a848cc0]
mysqld(_Z19find_or_create_hostP10PFS_threadPKcj+0x395)[0x1264035]
mysqld(_Z22find_or_create_accountP10PFS_threadPKcjS2_j+0x3d2)[0x12ae8f2]
mysqld(_Z18set_thread_accountP10PFS_thread+0x36)[0x126bde6]
mysqld(pfs_set_thread_account_v1+0xa0)[0x124df40]
mysqld(_Z16acl_authenticateP3THD19enum_server_commandb+0xc52)[0x7c4882]
mysqld[0xc4a054]
mysqld(_Z22thd_prepare_connectionP3THDb+0x5a)[0xc4b8ba]
mysqld(handle_connection+0x30d)[0xd590bd]
mysqld(pfs_spawn_thread+0x1b4)[0x124d874]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x8064)[0x7f6f4c7fa064]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f6f4a83a62d]
======= Memory map: ========
***snip***
15:35:23 UTC - mysqld got signal 6 ;
***snip***
key_buffer_size=8388608
read_buffer_size=131072
max_used_connections=1
max_threads=152
thread_count=4
connection_count=1
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 68407 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
Thread pointer: 0x7f6f1c2afae0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 7f6f187f7ab0 thread_stack 0x30000
mysqld(my_print_stacktrace+0x2c)[0xebe56c]
mysqld(handle_fatal_signal+0x479)[0x7a4b89]
/lib/x86_64-linux-gnu/libpthread.so.0(+0xf890)[0x7f6f4c801890]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0x37)[0x7f6f4a787067]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x148)[0x7f6f4a788448]
/lib/x86_64-linux-gnu/libc.so.6(+0x731b4)[0x7f6f4a7c51b4]
/lib/x86_64-linux-gnu/libc.so.6(__fortify_fail+0x37)[0x7f6f4a84aaa7]
/lib/x86_64-linux-gnu/libc.so.6(+0xf6cc0)[0x7f6f4a848cc0]
mysqld(_Z19find_or_create_hostP10PFS_threadPKcj+0x395)[0x1264035]
mysqld(_Z22find_or_create_accountP10PFS_threadPKcjS2_j+0x3d2)[0x12ae8f2]
mysqld(_Z18set_thread_accountP10PFS_thread+0x36)[0x126bde6]
mysqld(pfs_set_thread_account_v1+0xa0)[0x124df40]
mysqld(_Z16acl_authenticateP3THD19enum_server_commandb+0xc52)[0x7c4882]
mysqld[0xc4a054]
mysqld(_Z22thd_prepare_connectionP3THDb+0x5a)[0xc4b8ba]
mysqld(handle_connection+0x30d)[0xd590bd]
mysqld(pfs_spawn_thread+0x1b4)[0x124d874]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x8064)[0x7f6f4c7fa064]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f6f4a83a62d]
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0): is an invalid pointer
Connection ID (thread ID): 60
Status: NOT_KILLED
You may download the Percona XtraDB Cluster operations manual by visiting http://www.percona.com/software/percona-xtradb-cluster/. You may find information in the manual which will help you identify the cause of the crash.
What the heck is going on? How come a simple connection breaks a MySQL node?

any clues appreciated

Regards,

Webinar Wednesday March 22, 2017: TokuDB Troubleshooting

Latest MySQL Performance Blog posts - March 21, 2017 - 7:02am

Please join Percona’s Principal Technical Services Engineer, Sveta Smirnova, Senior Software Engineer, George Lorch and Software Engineer, Vlad Lesin as they present TokuDB Troubleshooting on March 22, 2017 at 11:00 am PDT / 2:00 pm EDT (UTC-7).

TokuDB is an alternative storage engine, designed for big data applications. It provides great write workload scalability. While its feature set is similar to InnoDB’s, this engine has its own specific configuration settings and troubleshooting instruments. This webinar will look at how to use them for optimal performance.

We will discuss how to deal with:

  • Data corruption issues
  • Inconsistent data
  • Locks
  • Slow performance

We will use well-known instruments and tools, and show how they work with the TokuDB storage engine.

Register for the webinar here.

Vladislav Lesin, Software Engineer

Vladislav Lesin is a software engineer at Percona, joining in April 2012. Before coming to Percona, he worked on improving the performance and reliability of high-load projects with LAMP architectures. His work consisted of developing fast servers and modules in C and C++, monitoring project state, finding bottlenecks, and patching open source projects including nginx, memcache, sphinx, php and ejabberd. He took part in developing not only server-side applications, but desktop and mobile ones too. He also has experience in project/product management, hiring, and partner negotiations.

Before that, he worked in several IT companies, where he developed desktop applications in C++ for areas such as industrial automation, parallel computing and media production. He holds a Master’s Degree in Technique and Technology from Tula State University. He now lives in Tula with his wife and daughter.

Sveta Smirnova, Principal Technical Services Engineer

Sveta joined Percona in 2015. Her main professional interests are problem solving, working with tricky issues and bugs, finding patterns that can quickly solve typical issues, and teaching others how to deal with MySQL issues, bugs and gotchas effectively. Before joining Percona, Sveta worked as a Support Engineer in the MySQL Bugs Analysis Support Group at MySQL AB (later Sun, then Oracle).

She is the author of the book “MySQL Troubleshooting” and JSON UDF functions for MySQL.

George Lorch, Software Engineer

George joined the Percona development team in April 2012. George has over 20 years of experience in software support, development, architecture and project management. Prior to joining Percona, George was focused on Windows-based enterprise application server development and network protocol classification and optimization, with heavy doses of database schema design, architecture and tuning.

pmm_rep - A script to generate a weekly query analytics email report

Latest Forum Posts - March 21, 2017 - 5:25am
Hi,

I couldn't find a tool to generate a weekly query analytics email report when using an on-premises PMM installation.

I then wrote pmm_rep, which fetches data from PMM's qan-api and generates a multipart email report containing last week's top 10:

https://arch-ed.dk/pmm_query_analytics_reporting
https://github.com/RasmusEdgar/pmm_rep

Perhaps, some of you might find it useful.

Br,
Rasmus Edgar

Running Percona XtraBackup on Windows … in Docker

Latest MySQL Performance Blog posts - March 20, 2017 - 4:06pm

In this blog, we’ll look at running Percona XtraBackup on Windows via a Docker container.

The question whether Percona XtraBackup is available for Windows comes up every so often. While we are not planning to provide regular releases for Windows, I decided to share a way to run Percona XtraBackup in a Docker container (especially since Docker support for Windows has become more and more stable).

For this exercise, I created a playground Docker image: perconalab/percona-xtrabackup.

First, we need to prepare a few things to make it work:

  1. Install Docker on Windows (the current version I am running is 17.03)
  2. Enable the sharing of disk C in Docker settings
  3. Find out the IP address MySQL is running on (192.168.1.122 in my case)
  4. Grant the privileges required for backups to the xtrabackup user:

GRANT RELOAD,PROCESS,LOCK TABLES,REPLICATION CLIENT ON *.* TO 'xtrabackup'@'192.%' IDENTIFIED by 'xtrapassword'

Now, let’s assume our datadir is in C:/mysqldata, and we want to back up to C:/mysqlbackup. Needless to say, the XtraBackup image must run on the same server as MySQL’s datadir (since XtraBackup needs to access the data to copy it).

Now, to take a backup, we execute:

docker run --rm -it -v //C/mysqldata:/var/lib/mysql -v //C/mysqlbackup:/xtrabackup_backupfiles perconalab/percona-xtrabackup --backup --host=192.168.1.122 --user=xtrabackup --password=xtrapassword

We find our backup in C:/mysqlbackup when it is done.
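Note that a backup taken this way still needs the usual prepare step before it can be restored. A hedged sketch, reusing the same image and bind mount (it assumes the playground image forwards its arguments to xtrabackup the same way as above):

```shell
# Prepare (apply the redo log to) the backup taken above, so it is
# consistent and ready for restore.
docker run --rm -it -v //C/mysqlbackup:/xtrabackup_backupfiles \
  perconalab/percona-xtrabackup --prepare --target-dir=/xtrabackup_backupfiles
```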

Enjoy!

Running Docker pxc container joiner node always fails

Latest Forum Posts - March 20, 2017 - 8:05am
I am trying to set this up without a discovery service, just two nodes connected across the network. Can someone check my work and let me know if you have any ideas on what is going wrong? I do not have any InnoDB logs (I see those requested every time someone has an issue), and the mysql.log on the donor is all but empty. Not sure if the container is set up to save the logs elsewhere, but I have spent a good amount of time in that container looking for them.

Docker Run command for Donor:
docker run -d --name="xtradb-cluster-master" --restart=unless-stopped -v /data/xtradb:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=secret -e CLUSTER_NAME=XtraDBCluster -p 3306:3306 -p 4567-4568:4567-4568 percona/percona-xtradb-cluster

Docker Run command for joiner:

docker run -i -t --name="xtradb-cluster-joiner" --restart=unless-stopped -v /data/xtradb:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=secret -e CLUSTER_NAME=XtraDBCluster -e CLUSTER_JOIN=10.21.1.34 -p 3306:3306 -p 4567-4568:4567-4568 percona/percona-xtradb-cluster

I run the joiner with -i -t so I can see the logs. Here are the logs for that joiner:

Log coming in the next post because of the limit on characters used.

"MySQL General" -> "Process States" show "No datapoints"

Latest Forum Posts - March 19, 2017 - 5:16am
2. "MySQL General" -> "Process States" show "No datapoints" for bad server, for other servers there is data.