
Slave info shows GTID information, not binlog file & position

Latest Forum Posts - September 29, 2017 - 12:57am
I'm running MySQL 5.7. GTID is currently turned off; in the past it has passed through a few OFF_PERMISSIVE states, but GTID has never been fully enabled (ON).

I'm using innobackupex to try to rebuild a new slave, but whenever I use --slave-info I get a GTID rather than the expected binlog file and position.

SET GLOBAL gtid_purged='44c0008b-4453-11e6-9065-000f53396de0:1-8';
CHANGE MASTER TO MASTER_AUTO_POSITION=1

The following files are generated:
  • xtrabackup_binlog_info
  • xtrabackup_checkpoints
  • xtrabackup_info
  • xtrabackup_logfile
  • xtrabackup_slave_info
Master and slaves all show similar output:

> show variables like "%gtid%";
+----------------------------------+-----------+
| Variable_name                    | Value     |
+----------------------------------+-----------+
| binlog_gtid_simple_recovery      | ON        |
| enforce_gtid_consistency         | OFF       |
| gtid_executed_compression_period | 1000      |
| gtid_mode                        | OFF       |
| gtid_next                        | AUTOMATIC |
| gtid_owned                       |           |
| gtid_purged                      |           |
| session_track_gtids              | OFF       |
+----------------------------------+-----------+
8 rows in set (0.00 sec)

The GTID set is present in gtid_executed and in slave_master_info on the slaves.

Is there something else I need to do to purge/remove these entries from the tables, or somewhere else, so the backup provides the standard binlog file and position?
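
If the goal is a classic file/position slave, one approach (a hedged sketch: RESET MASTER also deletes the restored server's own binary logs, so only run it on a server not yet in service; the host name is a placeholder) is to clear the leftover GTID state and point replication at the coordinates recorded in xtrabackup_binlog_info rather than xtrabackup_slave_info:

```shell
# Hedged sketch: clear stray GTID state on the freshly restored slave,
# then use the file/position from xtrabackup_binlog_info.
mysql -e "RESET MASTER"   # on 5.7 this empties gtid_executed and the
                          # mysql.gtid_executed table, and wipes this
                          # server's own binlogs

# xtrabackup_binlog_info holds: <file> <position> [<gtid set>]
read BINLOG_FILE BINLOG_POS _ < xtrabackup_binlog_info

mysql -e "CHANGE MASTER TO
            MASTER_HOST='master.example.com',
            MASTER_LOG_FILE='${BINLOG_FILE}',
            MASTER_LOG_POS=${BINLOG_POS},
            MASTER_AUTO_POSITION=0;
          START SLAVE;"
```

On 5.7, mysql.gtid_executed is where the stray GTID set is persisted, which is why it survives restarts even with gtid_mode=OFF.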

PMM Version 1.3.0 and mongodb

Latest Forum Posts - September 28, 2017 - 5:13pm
I've upgraded to PMM 1.3.0 (clients and servers). I was wondering how adding mongodb:queries has changed? Mongo is running. mongodb:metrics is running.
But it barfs on sudo pmm-admin add mongodb:queries, and after it barfed I had to run a pmm-admin repair.
pmm-admin check-network shows that everything is OK.
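
For what it's worth, a typical recovery sequence for a half-added service looks like this (a sketch following the PMM 1.x pmm-admin syntax; the URI is a placeholder, and mongodb:queries additionally requires profiling to be enabled in MongoDB):

```shell
# Hedged sketch of re-adding the service after a failed attempt.
sudo pmm-admin rm mongodb:queries     # drop the half-added service, if any
sudo pmm-admin repair                 # clean up orphaned service entries

# mongodb:queries needs MongoDB profiling enabled; URI is an example.
sudo pmm-admin add mongodb:queries --uri mongodb://localhost:27017

sudo pmm-admin list                   # confirm the service shows as running
```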

Percona Live Europe Session Interview: MySQL on Docker – Containerizing the Dolphin

Latest MySQL Performance Blog posts - September 28, 2017 - 11:04am

One of the widely discussed technologies at Percona Live Europe was the logistics of running MySQL in containers. Containers – particularly Docker – have become a hot topic, so there was a good-sized crowd on day one of the conference for Ashraf Sharif, Senior Support Engineer with Severalnines. He presented his talk “MySQL on Docker: Containerizing the Dolphin”. 

During his presentation, Ashraf shared some recommendations for best practices when setting out on containerizing MySQL. He sees the trend of moving to containers as a progression from the use of virtual hosts.

After his talk on day one of the Percona Live Europe conference, I caught up with Ashraf and asked about his presentation. I was interested in which concepts are most important for ensuring a smoother implementation.

If you enjoy this brief presentation and would like to find out more, then you might like to subscribe to Ashraf’s blog on the Severalnines website where he regularly posts insights on his special interests of system scalability and high availability.

Percona Monitoring and Management 1.3.0 Query Analytics Support for MongoDB

Latest MySQL Performance Blog posts - September 28, 2017 - 10:27am

Percona is pleased to announce the General Availability of Query Analytics (QAN) in Percona Monitoring and Management 1.3.0 (PMM). This new release introduces support for MongoDB.

In general, the purpose of QAN is to help detect queries that consume the most time inside your database server. It provides detailed real-time analysis of queries so that your application can work with data efficiently. In the Percona Monitoring and Management 1.3.0 release, QAN adds support for MongoDB.

MongoDB is conceptually different from relational database management systems, such as MySQL or MariaDB. Relational database management systems store data in separate tables that represent single entities, and you may need to link records from multiple tables to represent a complex object. MongoDB, on the other hand, allows a more flexible approach to data storage and stores all essential information pertaining to a complex object together.

In QAN, the difference between the monitored systems is transparent, and you can analyze queries in the same way regardless of the technology used in the database engine. QAN presents the monitored data in both visual and numeric form. The performance-related characteristics appear as plotted graphics.

To start working with QAN, click the Query Analytics button on the Percona Monitoring and Management 1.3.0 home page. Select a MongoDB database from the list of available database instances at the top of the page. The list of the top ten queries opens below. These are the queries that take the longest time to run. To view more queries, click the Load next 10 queries button below the list.

You can limit the list of available queries to only those that you are interested in by using the Query Filter field next to the database selection button.

In the Query Filter field, you can enter a query ID or its fingerprint. The ID is a unique signature of a query. A fingerprint is a simplified form of your query: it replaces all specific values with placeholders. You can enter only a fragment of the fingerprint to make the search less restrictive.
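
As a rough illustration of the idea (not QAN's actual fingerprinting algorithm), replacing literal values with placeholders can be sketched like this:

```shell
# Rough approximation of query fingerprinting: replace string and
# numeric literals with "?" placeholders.
fingerprint() {
  printf '%s' "$1" | sed -E "s/'[^']*'/?/g; s/[0-9]+/?/g"
}

fingerprint "db.people.find({age: 33, city: 'London'})"
# -> db.people.find({age: ?, city: ?})
```

Two queries that differ only in their literal values share the same fingerprint, which is what makes a partial fingerprint a useful, less restrictive search key.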

The queries that match your criterion appear below the Query Filter field in a summary table.

The summary table represents each query as a row, with each column referring to an essential attribute of queries. The Load, Count, and Latency columns visualize their values graphically, along with summaries in numeric form.

The load attribute is the percentage of time that the MongoDB server spent executing a specific query. The count attribute shows how often the given query appeared in the query traffic. The latency attribute is the amount of time that it takes to run the query and return its result.
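
For intuition, load can be thought of as total execution time over the observation window. An illustrative calculation (not QAN's internal formula; the numbers are made up):

```shell
# Illustrative load calculation: a query executed 1200 times at an
# average latency of 0.05s during a 600s window keeps the server busy
# for 60s out of 600s, i.e. a load of 10%.
count=1200; avg_latency=0.05; window=600
awk -v c="$count" -v l="$avg_latency" -v w="$window" \
    'BEGIN { printf "load = %.1f%%\n", c * l / w * 100 }'
# -> load = 10.0%
```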

If you hover the cursor over one of these attributes in a query, you can see a concrete value appear over your cursor. Move the cursor along the plotted line to watch how the value is changing. Click one of the queries to select it. QAN displays detailed information about the query. The detailed information includes the metrics specific to the query type. It also contains details about the database and tables that the query uses.

Hope this helps you explore your MongoDB queries and get better performance from them!

pmm 1.3.0 docker image gaps in graphs

Latest Forum Posts - September 28, 2017 - 7:03am
Hi,

I just updated pmm to the 1.3.0 docker image and am seeing gaps in the graphs. 1.2.2 is fine.

I tried doubling the METRICS_MEMORY to 4194304 but it did not help.

Does 1.3.0 require much more memory than 1.2.2 or could something else be up?

thx
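
For reference, METRICS_MEMORY is applied when (re)creating the container. A hedged sketch (container/volume names, image tag, and the value are examples; the unit is kilobytes per the PMM docs):

```shell
# Hedged sketch: recreate the pmm-server container with more memory
# for Prometheus (METRICS_MEMORY is in kilobytes).
docker stop pmm-server && docker rm pmm-server
docker run -d \
  -e METRICS_MEMORY=4194304 \
  --volumes-from pmm-data \
  --name pmm-server \
  -p 80:80 \
  percona/pmm-server:1.3.0
```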

Percona Live Europe Session Interview: Building Multi-Petabyte Data Warehouses with ClickHouse

Latest MySQL Performance Blog posts - September 27, 2017 - 5:23pm

Percona Live Europe provides open source professionals with an opportunity to discuss how various technologies get used in order to solve database problems. Alexander Zaitsev of LifeStreet/Altinity gave one such session: Building Multi-Petabyte Data Warehouses with ClickHouse.

LifeStreet needed to scale their real-time ad analytics platform to multiple petabytes. They evaluated and used a number of open source and commercial solutions, but most were either not efficient enough or too expensive. When Yandex released ClickHouse as open source, LifeStreet quickly realized its potential and started an implementation project. It took a bit of time and effort, but it finally worked out and became an excellent way to address scale in LifeStreet’s database environment.

In this presentation, LifeStreet/Altinity Director of Engineering Alexander Zaitsev talked about their experiences from an application developer’s viewpoint: what worked well and not so well, what challenges they had to overcome as well as share the best practices for building large-scale platforms based on ClickHouse.

I got a chance to talk with Alexander in the video below. Check it out!

Percona Live Europe Session Interviews with Yandex ClickHouse: A DBMS for Interactive Analytics at Scale and Quick Tour of ClickHouse Internals

Latest MySQL Performance Blog posts - September 27, 2017 - 4:55pm

Percona Live Europe 2017 keeps providing excellent sessions with useful information on great open source database technologies. Yandex’s ClickHouse was one technology that was well covered at the conference, featuring in several talks this year. I was able to attend two of them.

The first was a discussion of ClickHouse internals.

ClickHouse is an open source DBMS for high-performance analytics, originally developed at Yandex for the needs of the Yandex.Metrica web analytics system. It is capable of storing petabytes of data and processing billions of rows per second per server, all while ingesting new data in real time. In his talk A Quick Tour of ClickHouse Internals, Yandex’s Alex Zatelepin discussed the architectural decisions made by ClickHouse, their consequences from the point of view of an application developer, and how to determine whether ClickHouse is a good fit for a particular use case.

He covered the following topics:

  • Overview of storage engine and query execution engine.
  • Data distribution and distributed query processing.
  • Replication and where it sits on the consistency-availability spectrum.

In a second Percona Live Europe talk, Aleksei Milovidov of Yandex presented ClickHouse: A DBMS for Interactive Analytics at Scale. In this presentation, Aleksei walked through Yandex’s development of ClickHouse, and how its iterative approach to organizing data storage resulted in a powerful and extremely fast open source system.

You can see my chat with both of these presenters in the video below. Check it out!

PXC scale

Latest Forum Posts - September 27, 2017 - 4:54pm
Hello.

How big can PXC scale?
I mean, is PXC the right DBMS to deal with PB-sized data, like HBase?

Also, is there any use case of PXC with large-scale data?

Percona Server for MongoDB 3.2.16-3.7 Is Now Available

Latest MySQL Performance Blog posts - September 27, 2017 - 11:41am

Percona announces the release of Percona Server for MongoDB 3.2.16-3.7 on September 27, 2017. Download the latest version from the Percona web site or the Percona Software Repositories.

Percona Server for MongoDB is an enhanced, open-source, fully compatible, highly scalable, zero-maintenance downtime database that supports the MongoDB v3.2 protocol and drivers. It extends MongoDB with the MongoRocks, Percona Memory Engine, and PerconaFT storage engines, as well as enterprise-grade features like External Authentication, Audit Logging, Profiling Rate Limiting, and Hot Backup at no extra cost. The software requires no changes to MongoDB applications or code.

NOTE: The PerconaFT storage engine is deprecated as of 3.4. It is no longer supported and isn’t available in higher version releases.

This release is based on MongoDB 3.2.16 and includes the following additional changes:

  • #PSMDB-164: Fixed MongoRocks failure to repair if database metadata is inconsistent with dropped collections and indexes.
  • Added packages for Debian 9 (“stretch”).

The Percona Server for MongoDB 3.2.16-3.7 release notes are available in the official documentation.

Percona Live Europe Session Interview: Spatial Data in MySQL 8.0 with Norvald Ryeng (Oracle)

Latest MySQL Performance Blog posts - September 27, 2017 - 10:56am

Day one of the Percona Live Europe Conference was a huge hit. People spoke on many different open source database topics, and talks were well-attended.

One such talk I got to sit in was on Spatial Data in MySQL 8.0, given by Norvald Ryeng of Oracle.

MySQL 8.0 is still in development, but we already see a lot of improvement in GIS support. The latest development release comes with support for around 5000 different spatial reference systems, improved standard compliance and a lot of new functionality. How does it all work, and how can it be used to build applications? 

This talk started with the basics of GIS and spatial data in MySQL: geometries, data types, functions, storage and indexes. Then Norvald walked through a demo of how all the parts fit together to support a GIS web application. We also got a sneak peek into the future, including what to do right now to prepare for the upgrade to MySQL 8.0.

Whether you’re currently storing or planning to store spatial data in MySQL, this talk was for you. It covers the topics in a way that is accessible to both novices and more advanced GIS users.

After the talk, I had a chance to interview Norvald, and here is the video:

Percona Live Europe 2017 Keynotes Day 2

Latest MySQL Performance Blog posts - September 27, 2017 - 8:15am

Black coffee was flowing this morning for the day two Percona Live Europe 2017 keynotes, after many of the delegates had spent a good few hours the night before enjoying Irish hospitality at the Community Dinner.

So today Laurie Coffin, Chief Marketing Officer for Percona, introduced proceedings for day two and later also took to the stage for a Q&A session with authors Laine Campbell and Charity Majors. More on that later…

State of the Dolphin

Geir Høydalsvik, Development Director for MySQL at Oracle, delivers his keynote “State of the Dolphin”

First up, Geir Høydalsvik, Development Director for MySQL at Oracle, delivered juicy tidbits of what to expect in MySQL 8.0 (beyond what you see in the current Developer Milestone Releases). He gave a comprehensive overview of plans and current developments to what had become an almost full house, despite the previous night's revelries.

Many Faces of Continuent Tungsten

M C Brown, VP Products at Continuent, delivers his keynote “Many Faces of Continuent Tungsten”

MC Brown brought the conference up to date with the latest Tungsten developments, as well as some thoughts for the future. He described the wide-ranging deployments of Tungsten out in the field and his thoughts on how it might look going forward.

Database Reliability Engineering

Laine Campbell, Charity Majors are quizzed by Laurie Coffin

Laurie Coffin took to the stage to quiz Laine Campbell, Senior Director Production Engineering at OpsArtisan, and Charity Majors, CEO of Honeycomb, in a Q&A about the newly released O’Reilly title: Database Reliability Engineering. The book focuses on designing and operating resilient database systems and uses open-source engines such as MySQL, PostgreSQL, MongoDB, and Cassandra as examples throughout.

Database Performance in High Traffic Environments

Pavel Genov, Head of Software Development at Pepper, delivers his keynote “Database Performance in High Traffic Environments”

Pepper.com is purposely different than other platforms that list daily deals. Around the clock, the community seeks and finds the best offers in fashion, electronics, traveling and much more. Pavel described how Pepper optimizes their database performance to make sure their web applications remain responsive and meet users’ expectations.

InnoDB: Waiting for page_cleaner to finish flushing of buffer pool keep cleaning

Latest Forum Posts - September 27, 2017 - 3:55am
I have a very small Percona Server 5.7 test server. I wanted to enable the slow query log. After changing the .cnf I tried to restart Percona Server, but in the logs I keep seeing this:


2017-09-27T10:21:12.196823Z 0 [Note] InnoDB: FTS optimize thread exiting.
2017-09-27T10:21:12.197154Z 0 [Note] InnoDB: Starting shutdown...
2017-09-27T10:21:12.297422Z 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
2017-09-27T10:21:12.309798Z 0 [Note] InnoDB: Buffer pool(s) dump completed at 170927 12:21:12
2017-09-27T10:21:12.497668Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-27T10:22:12.692242Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-27T10:23:12.883339Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-27T10:24:13.076417Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-27T10:25:13.296141Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-27T10:26:13.476340Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-27T10:27:13.665939Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-27T10:28:13.862782Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-27T10:29:14.063633Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-27T10:30:14.255680Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-27T10:31:14.446311Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-27T10:32:14.638242Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-27T10:33:14.832268Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-27T10:34:15.020485Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-27T10:35:15.215550Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-27T10:36:15.408604Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-27T10:37:15.597909Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-27T10:38:15.792235Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-27T10:39:15.991048Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-27T10:40:16.184162Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-27T10:41:16.379047Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-27T10:42:16.570736Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-27T10:43:16.763312Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-27T10:44:16.952856Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-27T10:45:17.138069Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-27T10:46:17.330885Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-27T10:47:17.517536Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-27T10:48:17.707317Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-27T10:49:17.901309Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-27T10:50:18.092379Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-09-27T10:51:18.286690Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool

What is the fastest way to recover from this?
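
One commonly suggested mitigation (a hedged sketch, not a guaranteed fix) is to flush dirty pages ahead of the shutdown so the page cleaner has little left to do:

```shell
# Hedged sketch: before the restart, push InnoDB to flush its dirty
# pages so shutdown does not sit in the page_cleaner wait loop.
mysql -e "SET GLOBAL innodb_max_dirty_pages_pct = 0;"

# Watch the dirty-page count fall; shut down once it is near zero.
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';"
```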

Pepper Turns to Percona to Ensure a Great Customer Experience at Pepper.com

Latest MySQL Performance Blog posts - September 27, 2017 - 12:06am

Pepper.com, the world’s largest community deal platform, has selected Percona to manage its open source database performance.

Pepper.com’s around-the-clock community seeks and finds the best offers in fashion, electronics, traveling and much more. With 500 million page views, over 25 million users and over 65,000 user-submitted deals per month across communities in America, Europe and Asia, Pepper has quickly risen to be the largest community deal platform worldwide.

When Pepper.com’s primary MySQL database administrator left the company, Pepper decided to shift to a managed service to maintain uptime and responsiveness. Having previously attended Percona Live Europe, the premier European open source database conference, as well as being avid readers of the Percona Database Performance Blog, the Pepper team turned to Percona for open source database remote managed service expertise.

“Guaranteeing database performance is key to making sure our web applications are responsive and up-to-date,” said Pavel Genov, Head of Software Development at Pepper.com. “Percona Care Ultimate helps us to achieve these objectives.”

Pepper was already using Percona Server for MySQL. Following a Percona Database Performance Audit to review the Pepper.com environment, architecture and setup, Percona XtraBackup was deployed to provide online non-blocking, tightly compressed, highly secure backups.

Check out the case study on Pepper.com and Percona’s engagement to improve and manage Pepper’s database environment.

pt-table-checksum recursion method

Latest Forum Posts - September 26, 2017 - 12:28pm
All,

I am trying to use the hosts recursion method with pt-table-checksum. The documentation says, "The hosts method requires replicas to be configured with report_host, report_port, etc".

Where do I define these parameters, and what do I do if there is more than one slave? Please help me if anyone has done this, with some detailed steps.

FYI, I have tungsten replication between the servers.

Thanks.
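
For reference, a hedged sketch of the usual setup (host names are placeholders; note that report_host only registers replicas connected via native MySQL replication, so Tungsten replicas may not show up this way):

```shell
# Hedged sketch: report_host/report_port go into each replica's my.cnf
# and take effect after a mysqld restart.
#
#   [mysqld]
#   report_host = replica1.example.com   # one stanza per replica, with
#   report_port = 3306                   # that replica's own host name
#
# Verify registration on the master, then run the checksum:
mysql -h master.example.com -e "SHOW SLAVE HOSTS"
pt-table-checksum --recursion-method=hosts h=master.example.com
```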

delete table space

Latest Forum Posts - September 26, 2017 - 12:21pm
Hi All,
Kindly advise on the issue below.

I have a table more than 1TB in size. I tried to drop it, but it didn't free space in the file system.
What should I do if I need to free up this table's space?

I have Percona-Server-5.6.21-70.1-r698-el6-x86_64.
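
A common cause (a hedged sketch, assuming the table was created with innodb_file_per_table disabled) is that the table's pages live in the shared ibdata1 file, which never shrinks: dropping the table frees space inside ibdata1 for reuse, not back to the filesystem.

```shell
# Hedged sketch: check whether tables get their own .ibd files.
mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_file_per_table'"

# Reclaiming disk from a shared ibdata1 generally means rebuilding the
# instance: dump everything, enable innodb_file_per_table, remove
# ibdata1/ib_logfile* with mysqld stopped, then restore.
mysqldump --all-databases > all.sql   # illustrative only; plan a window
```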

Percona Live Europe 2017 Keynotes Day One

Latest MySQL Performance Blog posts - September 26, 2017 - 11:01am

After yesterday’s very successful tutorial day, everyone looked forward to an inspiring first day of the conference and the Percona Live Europe 2017 keynotes. There were some fantastic keynotes delivered, and some excellent sessions scheduled.

Note. These videos are as shot, and the slides will be superimposed very soon so you can enjoy the full conference experience!

Laurie Coffin, Chief Marketing Officer of Percona, opened proceedings with a welcome address where she paid tribute to Jaako Pesonen: a true champion of open source and friend to our community who passed away just this month. He will be missed.

Championing Open Source Databases


Peter Zaitsev delivers his keynote “Championing Open Source Databases”

Laurie then introduced Peter Zaitsev, CEO of Percona, who delivered his keynote “Championing Open Source Databases.” He reiterated Percona’s commitment to remaining an unbiased champion of the open source database ecosystem.

At Percona, we see a lot of compelling open source projects and trends that we think the community will find interesting, and following Peter’s keynote there was a round of lightning talks from projects that we think are stellar and deserve to be highlighted.

Percona Monitoring and Management Demo


Michael Coburn delivers his keynote “Percona Monitoring and Management Demo”

The second keynote was by Percona Product Manager Michael Coburn on Percona Monitoring and Management. How can you optimize database performance if you can’t see what’s happening? Percona Monitoring and Management (PMM) is a free, open source platform for managing and monitoring MySQL, MariaDB, MongoDB and ProxySQL performance. PMM uses Metrics Monitor (Grafana + Prometheus) for visualization of data points, along with Query Analytics, to help identify and quantify non-performant queries and provide thorough time-based analysis to ensure that your data works as efficiently as possible. Michael provided a brief demo of PMM.

MySQL as a Layered Service: How to use Proxy SQL to Control Traffic and Scale-Out

René Cannaò delivers his keynote “MySQL as a Layered Service: How to use Proxy SQL to Control Traffic and Scale-Out”

The next keynote was from René Cannaò, Founder at ProxySQL. The inability to control the traffic sent to MySQL is one of the worst nightmares for a DBA. Scaling out and high availability are only buzzwords if the application doesn’t support such architectures. ProxySQL creates an abstraction layer between the application and the database: controlling traffic at this layer hides the complexity of the database infrastructure from the application, allowing both HA and scale-out. The same layer protects the database infrastructure from abusive traffic, acting as a firewall and cache, and rewriting queries.

Realtime DNS Analytics at Cloudflare with ClickHouse


Tom Arnfeld delivers his keynote “Realtime DNS Analytics at Cloudflare with ClickHouse” 

Cloudflare operates multiple DNS services that handle over 100 billion queries per day for over 6 million internet properties, collecting and aggregating logs for customer analytics, DDoS attack analysis and ad-hoc debugging. Tom Arnfeld, Systems Engineer at Cloudflare, talks briefly in his keynote on how Cloudflare securely and reliably ingests these log events, and uses ClickHouse as an OLAP system to both serve customer real-time analytics and other queries.

Why Open Sourcing our Database Tooling was a Smart Decision


Shlomi Noach delivers his keynote “Why Open Sourcing our Database Tooling was a Smart Decision” 

Drawing from his experience at GitHub, Senior Infrastructure Engineer Shlomi Noach argues in his keynote that open sourcing your database infrastructure/tooling is not only a good but a smart business decision that may reward you in unexpected ways. Here are his observations.

MyRocks at Facebook and a Roadmap


Yoshinori Matsunobu delivers his keynote “MyRocks at Facebook and a Roadmap”

A major objective of creating MyRocks at Facebook was replacing InnoDB as the main storage engine, with more space optimisations, and without big migration pains. They have made good progress and extended their goals to cover more use cases. In this keynote, Yoshinori Matsunobu, Production Engineer at Facebook, shares MyRocks production deployment status and MyRocks development plans.

Prometheus for Monitoring Metrics


Brian Brazil, CEO of Prometheus, delivers his keynote “Prometheus for Monitoring Metrics”

From its humble beginnings in 2012, the Prometheus monitoring system has grown a substantial community with a comprehensive set of integrations. Brian Brazil, CEO of Prometheus, provides an overview of the core ideas behind Prometheus and its feature set.

That sums up today’s keynotes. Stay tuned for the next set tomorrow!

Percona Monitoring and Management 1.3.0 Is Now Available

Latest MySQL Performance Blog posts - September 26, 2017 - 12:09am

Percona announces the release of Percona Monitoring and Management 1.3.0 on September 26, 2017.

Percona Monitoring and Management 1.3.0 introduces basic support for the MyRocks storage engine. There is a special dashboard in Metrics Monitor that presents the essential metrics of MyRocks as separate graphs. Also, Metrics Monitor graphs now feature on-demand descriptions that remain visible as long as you hover over them.

For example, this graph helps you visualize the MyRocks Next and Seek database operations:

There are many improvements to QAN (Query Analytics) both in the user interface design and in its capabilities. In this release, QAN starts supporting all types of MongoDB queries. For example, if you need to limit the list of available queries to only those that you are interested in, use the Query Filter field next to the database selection button:

Orchestrator is not enabled by default because leaving it in a non-configured state was confusing to users. It is still possible to enable it via the docker run command.
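
A sketch of enabling it at container start (the variable name follows the PMM 1.x documentation; image tag and container names are examples):

```shell
# Hedged sketch: enable Orchestrator when starting the pmm-server
# container.
docker run -d \
  -e ORCHESTRATOR_ENABLED=true \
  --volumes-from pmm-data \
  --name pmm-server \
  -p 80:80 \
  percona/pmm-server:1.3.0
```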

For install and upgrade instructions, see Deploying Percona Monitoring and Management.

New Features
  • PMM-1290: Basic support for the metrics of the MyRocks storage engine in MySQL via the mysqld-exporter.
  • PMM-1312: Metrics Monitor now features a MyRocks dashboard.
  • PMM-1330: Basic telemetry data are collected from PMM Servers.
  • PMM-1417: A new dashboard in Metrics Monitor designed to enable exploring any data in Prometheus.
  • PMM-1437: pmm-admin allows passing parameters to exporters.
  • PMM-685: The EXPLAIN command is now supported in QAN.
Improvements
  • PMM-1262: The system checks for updates much faster.
  • PMM-1015: QAN now shows all collections from a mongod instance. Make sure that profiling is enabled in MongoDB.
  • PMM-1057: QAN supports all MongoDB query types.
  • PMM-1270: In Metrics Monitor, the dashboard filter displays only MariaDB hosts.
  • PMM-1287: In pmm-admin mongodb:queries is not experimental anymore and the dev-enable option is no longer needed.
  • PMM-1446: In Metrics Monitor, the MySQL Active Threads graph displays data more accurately.
  • PMM-1455: Metrics Monitor now features descriptions of graphs.
  • PMM-1476: QAN2 is used by default on pmmdemo.percona.com.
  • PMM-1479: It is now possible to go to QAN directly from Metrics Monitor.
  • PMM-515: Orchestrator is disabled by default. It is possible to enable it when running your docker container.
Bug fixes
  • PMM-1298: In QAN, the query abstract could be empty for low-ranking queries on MySQL hosts. The query abstract now contains Low Ranking Queries as its value in such cases.
  • PMM-1314: The selected time range in QAN could be applied incorrectly. This bug is now fixed.
  • PMM-1398: Prometheus memory was not updated after PMM upgrade. This bug is now fixed.
  • PMM-1427: The CPU Usage/Load graph in the MySQL Overview dashboard was displayed with slightly incorrect dimensions. This bug is now solved.
  • PMM-1439: If the EXPLAIN command was not supported for the selected query, a JavaScript error could appear. This bug is now fixed.
  • PMM-1472: Monitoring of queries for MongoDB with replication could fail to be enabled. This bug is now fixed.
  • PMM-943: InnoDB AHI Usage Graph had incorrect naming and hit ratio computation.

Avoid Shared Locks from Subqueries When Possible

Latest MySQL Performance Blog posts - September 25, 2017 - 5:50pm

In this blog post, we’ll look at how to avoid shared locks from subqueries.

I’m pretty sure most of you have seen an UPDATE statement matching rows returned from a SELECT query:

update ibreg set k=1 where id in (select id from ibcmp where id > 90000);

This query, when executed with autocommit=1, is normally harmless. However, this can have bad effects when combined with other statements in the same transaction that result in holding the shared locks from the SELECT query. But first, let me explain why the SELECT query would hold locks in the first place.

Due to InnoDB’s ACID properties, to make sure that the outer UPDATE statement has a consistent view of the matching rows from the SELECT query, the server has to acquire a shared lock on those rows. No other thread may modify those matching rows, which maintains consistency within the transaction. To demonstrate, let’s take two transactions executed in the specific order below:

mysql1> begin;
mysql1> update ibreg set k=1 where id in (select id from ibcmp where id > 90000);
mysql2> begin;
mysql2> delete from ibcmp where id > 90000;

By the time the second session executes, it will be in a LOCK WAIT state (as confirmed from INFORMATION_SCHEMA):

mysql1> select * from information_schema.innodb_trx\G
*************************** 1. row ***************************
                    trx_id: 3932449
                 trx_state: LOCK WAIT
               trx_started: 2017-09-06 00:20:05
     trx_requested_lock_id: 3932449:13:1354:31
          trx_wait_started: 2017-09-06 00:20:05
                trx_weight: 2
       trx_mysql_thread_id: 9
                 trx_query: delete from test.ibcmp where id > 90000
       trx_operation_state: starting index read
...

mysql1> select * from information_schema.innodb_locks\G
*************************** 1. row ***************************
    lock_id: 3932449:13:1354:31
lock_trx_id: 3932449
  lock_mode: X
  lock_type: RECORD
 lock_table: `test`.`ibcmp`
 lock_index: PRIMARY
 lock_space: 13
  lock_page: 1354
   lock_rec: 31
  lock_data: 90001
*************************** 2. row ***************************
    lock_id: 3932174:13:1354:31
lock_trx_id: 3932174
  lock_mode: S
  lock_type: RECORD
 lock_table: `test`.`ibcmp`
 lock_index: PRIMARY
 lock_space: 13
  lock_page: 1354
   lock_rec: 31
  lock_data: 90001

INFORMATION_SCHEMA.INNODB_LOCKS confirms that the first transaction holds a shared (S) lock on the rows matched by the SELECT subquery, while the second transaction waits for an exclusive (X) lock on the same rows. This can be bad for a number of reasons:

  1. As the number of rows that matches the SELECT grows, DEADLOCK and lock wait timeouts can become more frequent
  2. As a consequence of this, ROLLBACKs would also increase (and are expensive operations)
  3. Your users can become unhappy, especially if it is not handled gracefully from the application

If you really need a consistent view between the table being read and the table being updated, the lock is necessary and unavoidable. In that case deadlocks and lock wait timeouts can be minimized, but not totally avoided.
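When the locks cannot be avoided, the application should at least handle deadlocks and lock wait timeouts gracefully by retrying the transaction. Here is a minimal sketch of such a retry loop; `DeadlockError`, `with_retries`, and `flaky_txn` are hypothetical names standing in for a real MySQL driver error (such as error 1213) and your own transaction code:

```python
import random
import time

class DeadlockError(Exception):
    """Stands in for a driver error such as MySQL error 1213 (deadlock)."""

def with_retries(txn, max_attempts=3, base_delay=0.05):
    """Run txn(); on deadlock (the server has already rolled back),
    retry with a small randomized backoff instead of failing the user."""
    for attempt in range(1, max_attempts + 1):
        try:
            return txn()
        except DeadlockError:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * attempt * random.random())

# Example: a transaction that deadlocks twice, then succeeds.
attempts = {"n": 0}
def flaky_txn():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise DeadlockError("simulated lock conflict")
    return "committed"

print(with_retries(flaky_txn))  # prints "committed" on the third attempt
```

The randomized backoff matters: if both conflicting sessions retry immediately, they tend to deadlock again in the same order.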

On the other hand, if you’re not worried about view consistency, there are two ways you can avoid such problems: by using a user variable, or by making the SELECT a transient read inside the transaction (e.g., by dumping the results into an OUTFILE).

mysql1> begin;
mysql1> select group_concat(id) into @ids from ibcmp where id > 90000;
mysql1> update ibreg set k=1 where find_in_set(id, @ids);

mysql2> begin;
mysql2> delete from ibcmp where id > 90000;

(FIND_IN_SET is needed here because @ids is a single comma-separated string; id IN (@ids) would compare id against the whole string and match only its first value.)

The first method is bound by the group_concat_max_len variable. If you think you will only have a few resulting IDs that fit into group_concat_max_len, this is a good solution.
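To see the comma-separated-string mechanics concretely without a running MySQL server, here is a small sketch using SQLite's group_concat, which behaves the same way for this purpose; the table and column names mirror the example above. In MySQL itself you would check the limit with SHOW VARIABLES and raise it via SET SESSION group_concat_max_len if the id list might grow large:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table ibcmp (id integer primary key)")
con.executemany("insert into ibcmp values (?)",
                [(i,) for i in range(89998, 90004)])

# Collapse the matching ids into one comma-separated string, as
# SELECT GROUP_CONCAT(id) INTO @ids does in the MySQL example above.
(ids,) = con.execute(
    "select group_concat(id) from ibcmp where id > 90000"
).fetchone()

id_set = set(ids.split(","))  # {'90001', '90002', '90003'}

# In MySQL the string would be silently truncated once it exceeds
# group_concat_max_len bytes (default 1024), dropping ids from the
# list -- which is exactly the bound discussed above.
```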

mysql1> begin;
mysql1> select id into outfile '/tmp/id.csv' from ibcmp where id > 90000;
mysql1> create temporary table t (id int unsigned not null) engine=innodb;
mysql1> load data infile '/tmp/id.csv' into table t;
mysql1> update ibreg inner join t on ibreg.id = t.id set ibreg.k=1;

mysql2> begin;
mysql2> delete from ibcmp where id > 90000;

The second approach is only meant to overcome the limitation of the GROUP_CONCAT method.
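The essence of the second approach, materialize the matching ids first, then update against that snapshot rather than the live table, can be sketched outside MySQL too. The following uses SQLite purely as an illustration: a plain INSERT ... SELECT stands in for the OUTFILE/LOAD DATA round trip (SQLite has no INTO OUTFILE), and an IN subquery stands in for UPDATE ... JOIN, which SQLite lacks:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    create table ibcmp (id integer primary key);
    create table ibreg (id integer primary key, k integer default 0);
""")
rows = [(i,) for i in range(89998, 90004)]
con.executemany("insert into ibcmp values (?)", rows)
con.executemany("insert into ibreg (id) values (?)", rows)

# Materialize the matching ids into a snapshot table first (the
# OUTFILE + LOAD DATA steps in the MySQL example serve this purpose).
con.execute("create temporary table t (id integer not null)")
con.execute("insert into t select id from ibcmp where id > 90000")

# Update ibreg from the snapshot table t, not from ibcmp itself,
# so the UPDATE no longer depends on (or locks) rows in ibcmp.
con.execute("update ibreg set k=1 where id in (select id from t)")

updated = [r[0] for r in
           con.execute("select id from ibreg where k=1 order by id")]
print(updated)  # [90001, 90002, 90003]
```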

Again, these two approaches only work if you do not care if the result of the SELECT queries changes on the other table between the BEGIN statement and UPDATE within the transaction.

Percona Live Europe: Tutorials Day

Latest MySQL Performance Blog posts - September 25, 2017 - 11:25am

Welcome to the first day of the Percona Live Open Source Database Conference Europe 2017: Tutorials day! Technically the first day of the conference, this day focused on providing hands-on tutorials for people interested in learning directly how to use open source tools and technologies.

Today attendees went to training sessions taught by open source database experts and got first-hand experience configuring, working with, and experimenting with various open source technologies and software.

The first full day (which includes opening keynote speakers and breakout sessions) starts Tuesday 9/26 at 9:15 am.

Some of the tutorial topics covered today were:

Monitoring MySQL Performance with Percona Monitoring and Management (PMM)

Michael Coburn, Percona

This was a hands-on tutorial covering how to set up monitoring for MySQL database servers using the Percona Monitoring and Management (PMM) platform. PMM is an open-source collection of tools for managing and monitoring MySQL and MongoDB performance. It provides thorough time-based analysis for database servers to ensure that they work as efficiently as possible.

We learned about:

  • The best practices on MySQL monitoring
  • Metrics and time series
  • Data collection, management and visualization tools
  • Monitoring deployment
  • How to use graphs to spot performance issues
  • Query analytics
  • Alerts
  • Trending and capacity planning
  • How to monitor HA

Hands-on ProxySQL

Rene Cannao, ProxySQL

ProxySQL is an open source proxy for MySQL that can provide HA and high performance with no changes in the application, using several built-in features and integration with clustering software. Those were only a few of the features we learned about in this hands-on tutorial.

MongoDB: Sharded Cluster Tutorial

Jason Terpko, ObjectRocket
Antonios Giannopoulos, ObjectRocket

This tutorial guided us through the many considerations when deploying a sharded cluster. It covered the services that make up a sharded cluster, configuration recommendations for these services, shard key selection, use cases, and how data is managed within a sharded cluster. Maintaining a sharded cluster also has its challenges. We reviewed these challenges and how you can prevent them with proper design or ways to resolve them if they exist today.

InnoDB Architecture and Performance Optimization

Peter Zaitsev, Percona

InnoDB is the most commonly used storage engine for MySQL and Percona Server for MySQL. It is the focus of most of the storage engine development by the MySQL and Percona Server for MySQL development teams.

In this tutorial, we looked at the InnoDB architecture, including new feature developments for InnoDB in MySQL 5.7 and Percona Server for MySQL 5.7. Peter explained how to use InnoDB in a database environment to get the best application performance and provide specific advice on server configuration, schema design, application architecture and hardware choices.

Peter updated this tutorial from previous versions to cover new MySQL 5.7 and Percona Server for MySQL 5.7 InnoDB features.

Join us tomorrow for the first full day of the Percona Live Open Source Database Conference Europe 2017!

5.5 to 5.6 slave upgrade issue

Latest Forum Posts - September 25, 2017 - 2:21am
Hi,

We are in the process of a 5.5 to 5.6 upgrade, and after successfully updating the master, I had updated the first slave, and attempted to do the restart with the "--skip-grant-tables" option as per the upgrade guide.

There are 4 files in /home/mysqltmp, which the startup appears to have an issue with, and I'm currently unable to get the server to come up.

Even with an explicit definition of tmpdir, the command fails as follows.

[root@h77-245-67-66 tmp]# /usr/sbin/mysqld --skip-grant-tables --user=mysql &
2017-09-25 09:30:16 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2017-09-25 09:30:16 0 [Note] /usr/sbin/mysqld (mysqld 5.6.37-82.2-log) starting as process 9158 ...
2017-09-25 09:30:16 9158 [Note] Plugin 'FEDERATED' is disabled.
2017-09-25 09:30:16 9158 [Note] InnoDB: Using atomics to ref count buffer pool pages
2017-09-25 09:30:16 9158 [Note] InnoDB: The InnoDB memory heap is disabled
2017-09-25 09:30:16 9158 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2017-09-25 09:30:16 9158 [Note] InnoDB: Memory barrier is not used
2017-09-25 09:30:16 9158 [Note] InnoDB: Compressed tables use zlib 1.2.3
2017-09-25 09:30:16 9158 [Note] InnoDB: Using Linux native AIO
2017-09-25 09:30:16 9158 [Note] InnoDB: Using CPU crc32 instructions
2017-09-25 09:30:16 9158 [Note] InnoDB: Initializing buffer pool, size = 24.0G
2017-09-25 09:30:16 9158 [Note] InnoDB: Completed initialization of buffer pool
2017-09-25 09:30:16 9158 [Note] InnoDB: Highest supported file format is Barracuda.
2017-09-25 09:30:17 7f7f1fa2a7e0 InnoDB: Operating system error number 2 in a file operation.
InnoDB: The error means the system cannot find the path specified.
InnoDB: If you are installing InnoDB, remember that you must create
InnoDB: directories yourself, InnoDB does not create them.
2017-09-25 09:30:17 9158 [ERROR] InnoDB: Could not find a valid tablespace file for 'mysqltmp/#sqld17_dd9ab8_6592'. See http://dev.mysql.com/doc/refman/5.6/...-datadict.html for how to resolve the issue.
2017-09-25 09:30:17 9158 [ERROR] InnoDB: Tablespace open failed for '"mysqltmp"."#sqld17_dd9ab8_6592"', ignored.
2017-09-25 09:30:17 7f7f1fa2a7e0 InnoDB: Operating system error number 2 in a file operation.
InnoDB: The error means the system cannot find the path specified.
InnoDB: If you are installing InnoDB, remember that you must create
InnoDB: directories yourself, InnoDB does not create them.
2017-09-25 09:30:17 9158 [ERROR] InnoDB: Could not find a valid tablespace file for 'mysqltmp/#sqld17_dd9ab8_6593'. See http://dev.mysql.com/doc/refman/5.6/...-datadict.html for how to resolve the issue.
2017-09-25 09:30:17 9158 [ERROR] InnoDB: Tablespace open failed for '"mysqltmp"."#sqld17_dd9ab8_6593"', ignored.
2017-09-25 09:30:17 7f7f1fa2a7e0 InnoDB: Assertion failure in thread 140183968327648 in file pars0pars.cc line 865
InnoDB: Failing assertion: sym_node->table != NULL
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.6/...-recovery.html
InnoDB: about forcing recovery.
08:30:17 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona Server better by reporting any
bugs at http://bugs.percona.com/

key_buffer_size=67108864
read_buffer_size=4194304
max_used_connections=0
max_threads=4098
thread_count=0
connection_count=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 33693261 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x30000
/usr/sbin/mysqld(my_print_stacktrace+0x2c)[0x8da27c]
/usr/sbin/mysqld(handle_fatal_signal+0x461)[0x65b061]
/lib64/libpthread.so.0[0x3e9b40f7e0]
/lib64/libc.so.6(gsignal+0x35)[0x3e9ac325e5]
/lib64/libc.so.6(abort+0x175)[0x3e9ac33dc5]
/usr/sbin/mysqld[0x9b4669]
/usr/sbin/mysqld(_Z7yyparsev+0xe1b)[0xaf722b]
/usr/sbin/mysqld[0x9b5f71]
/usr/sbin/mysqld[0x9b9f56]
/usr/sbin/mysqld[0x9e121c]
/usr/sbin/mysqld[0x9e333d]
/usr/sbin/mysqld[0x98e23e]
/usr/sbin/mysqld[0xa0b543]
/usr/sbin/mysqld[0x94767c]
/usr/sbin/mysqld(_Z24ha_initialize_handlertonP13st_plugin_int+0x48)[0x59bc58]
/usr/sbin/mysqld[0x6e9ee1]
/usr/sbin/mysqld(_Z11plugin_initPiPPci+0x982)[0x6f04b2]
/usr/sbin/mysqld[0x5944fd]
/usr/sbin/mysqld(_Z11mysqld_mainiPPc+0x40d)[0x59557d]
/lib64/libc.so.6(__libc_start_main+0xfd)[0x3e9ac1ed1d]
/usr/sbin/mysqld[0x586e8d]
You may download the Percona Server operations manual by visiting
http://www.percona.com/software/percona-server/. You may find information
in the manual which will help you identify the cause of the crash.

[1]+ Exit 1 /usr/sbin/mysqld --skip-grant-tables --user=mysql

Any suggestions, we've run out of ideas.

Mike

