
Cluster down on all nodes: how to configure auto start?

Latest Forum Posts - September 15, 2017 - 2:09am
I have two nodes:
IP node 1: 192.168.2.1
IP node 2: 192.168.2.2
I have completed the Percona XtraDB Cluster configuration.
Then I shut down node 1 and then node 2.
But the PXC service did not start when I restarted them.
Please show me how to configure it to start automatically.
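For reference, on systemd-based distributions enabling the service at boot is usually enough for a node that rejoins a running cluster (a sketch; the exact unit name can differ between distributions and PXC versions):

shell> systemctl enable mysql

If every node was shut down, as described above, the cluster also has to be bootstrapped again from the node that stopped last. A hedged outline, assuming PXC 5.7 with systemd:

shell> grep safe_to_bootstrap /var/lib/mysql/grastate.dat   # should be 1 on the last node that stopped
shell> systemctl start mysql@bootstrap.service              # bootstrap that node first
shell> systemctl start mysql                                # then start the remaining nodes normally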

Percona Live Europe Featured Talks: Monitoring Open Source Databases with Icinga with Bernd Erk

Latest MySQL Performance Blog posts - September 14, 2017 - 3:31pm

Welcome to another post in our series of interview blogs for the upcoming Percona Live Europe 2017 in Dublin. This series highlights a number of talks that will be at the conference and gives a short preview of what attendees can expect to learn from the presenter.

This blog post is with Bernd Erk, CEO of Icinga. His talk is titled Monitoring Open Source Databases with Icinga. Icinga is a popular open source successor of Nagios that checks hosts and services, and notifies you of their statuses. But you also need metrics for performance and growth to deal with your scaling needs. Adding conditional behaviors and configuration in Icinga is not just intuitive, but also intelligently adaptive at runtime. In our conversation, we discussed how to intelligently monitor open source databases:

Percona: How did you get into database technology? What do you love about it?

Bernd: I started in a position as a junior systems engineer at a large German mail order company. They were totally committed to Oracle databases and the tool stack around them. As Linux gained more and more attention, we became aware of MySQL very early and were fascinated by the simplicity of installation and administration. There were of course so many things Oracle had in those days that MySQL didn’t have, but most of our use cases also didn’t require those extra (and of course expensive) features.

Percona: You’re presenting a session called “Monitoring Open Source Databases with Icinga”. Why is monitoring databases important, and what sort of things need to be monitored?

Bernd: Usually databases are a very important part of an IT infrastructure, and need to be online 24/7. I have also had the personal experience of database downtime putting a lot of pressure on both the organization in general and the team in charge. Since most open source databases provide very good interfaces, it is not so hard to figure out if they are up and running. As in many monitoring arenas, knowing what to monitor is the key piece of information.

In addition to the basic local and remote availability checks, monitoring database replication is very important. We often see environments where the standby slave is outdated by years, or not able to keep up with the incoming load. From there you can go into database and application metrics to learn more about performance and I/O behavior.
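For reference, the replication check mentioned here usually boils down to something like this on the replica (a minimal sketch; the alert threshold is only illustrative):

mysql> SHOW SLAVE STATUS\G
-- watch Seconds_Behind_Master, Slave_IO_Running and Slave_SQL_Running;
-- alert if the lag exceeds, say, 300 seconds or if either thread is not running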

Percona: Why are you using Icinga specifically? What value does it provide above other monitoring solutions?

Bernd: I’ve been involved with Icinga from the beginning, so it is my number one choice in open source monitoring. In my opinion, the great advantage of Icinga 2 is that it keeps the simplicity of legacy systems like Nagios (or Icinga 1) while also supporting complex environments (such as application-based clustering). There is also the live configuration of the Icinga 2 monitoring core through our REST API. With all the supported tools for metrics, logs and management around it, for me Icinga 2 is the best match for open source monitoring.

Percona: What do you want attendees to take away from your session? Why should they attend?

Bernd: Attendees will get a short overview on Icinga 2, and why it is different to Nagios (Icinga 1). I will also guide them through practical monitoring examples and show implemented checks in a live demo. After my talk, they should be able to adapt and extend on-premise or cloud monitoring with Icinga 2 using the default open source plugins.

Percona: What are you most looking forward to at Percona Live Europe 2017?

Bernd: Getting together with the great database community in all aspects, and going to Dublin (to be honest). I have never been there, and so it is my first time.

Want to find out more about Bernd and database monitoring? Register for Percona Live Europe 2017, and see his talk Monitoring Open Source Databases with Icinga. Register now to get the best price! Use discount code SeeMeSpeakPLE17 to get 10% off your registration.

Percona Live Open Source Database Conference Europe 2017 in Dublin is the premier European open source event for the data performance ecosystem. It is the place to be for the open source community as well as businesses that thrive in the MySQL, MariaDB, MongoDB, time series database, cloud, big data and Internet of Things (IoT) marketplaces. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Open Source Database Conference Europe will be September 25-27, 2017 at the Radisson Blu Royal Hotel, Dublin.

Percona Server for MongoDB 3.4.7-1.8 is Now Available

Latest MySQL Performance Blog posts - September 14, 2017 - 11:08am

Percona announces the release of Percona Server for MongoDB 3.4.7-1.8 on September 14, 2017. Download the latest version from the Percona web site or the Percona Software Repositories.

Percona Server for MongoDB is an enhanced, open source, fully compatible, highly scalable, zero-maintenance-downtime database supporting the MongoDB v3.4 protocol and drivers. It extends MongoDB with the Percona Memory Engine and MongoRocks storage engine, as well as several enterprise-grade features.

Percona Server for MongoDB requires no changes to MongoDB applications or code.

This release is based on MongoDB 3.4.7 and includes the following additional change:

  • Added packages for Debian 9 (“stretch”)

WSREP: cluster conflict due to certification failure for threads:

Latest Forum Posts - September 14, 2017 - 10:36am
Hi All,

We are using an online application built on the Liferay framework, with MySQL on Ubuntu as the database. We use multi-threaded calls in the application, and it used to work fine with a standalone server (dev) and 2-node clusters for UAT and Prod. We have now upgraded the database to Percona and are struggling with multiple issues, such as pxc_strict_mode set to ENFORCING causing deadlocks and requiring a primary key on all tables. We have addressed these two issues; however, we are now facing "WSREP: cluster conflict due to certification failure for threads" errors. We are going crazy trying to get this resolved.

Kindly request any of the experts to help us in this regard.

Thanks
Paramesh
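A hedged starting point for diagnosing this kind of certification conflict is to check the standard Galera/PXC counters on each node (a sketch, assuming PXC 5.7):

mysql> SHOW GLOBAL VARIABLES LIKE 'pxc_strict_mode';
mysql> SHOW GLOBAL STATUS LIKE 'wsrep_local_cert_failures';
mysql> SHOW GLOBAL STATUS LIKE 'wsrep_local_bf_aborts';
-- a steadily growing wsrep_local_cert_failures count means transactions touching the same rows
-- reach different nodes; sending all writes to one node, or retrying failed transactions in the
-- application, are the usual workarounds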

Percona Live Europe Featured Talks: Visualize Your Data with Grafana Featuring Daniel Lee

Latest MySQL Performance Blog posts - September 13, 2017 - 9:47am

Welcome to another post in our series of interview blogs for the upcoming Percona Live Europe 2017 in Dublin. This series highlights a number of talks that will be at the conference and gives a short preview of what attendees can expect to learn from the presenter.

This blog post is with Daniel Lee, a software developer at Grafana. His tutorial is Visualize Your Data With Grafana. This presentation teaches you how to create dashboards and graphs in Grafana and how to use them to gain insight into the behavior of your systems. In our conversation, we discussed how data visualization could benefit your database environment:

Percona: How did you get into database technology? What do you love about it?

Daniel: I’m a developer and my first job was working on a transport logistics system, which was mostly composed of Stored Procedures in SQL Server 2000. Today, I would not build a system with all the logic in Stored Procedures – but that database knowledge is the foundation that I built everything else on. Databases and their data flows will always be the core of most interesting systems. More recently, I have switched from Windows to working with MariaDB on Linux. Grafana Labs uses Percona Server for MySQL for most of our internal applications (worldPing and Hosted Grafana). Working with Grafana also means working with time series databases like Graphite, which is also very interesting.

I enjoy working with data as it is one of the ways to learn how users use a system. Design decisions are theories until you have data to either back them up or disprove them.

Percona: You’re presenting a session called “Visualize Your Data With Grafana”. How does monitoring make a DBA’s life easier, and how do graphs make this information easier for DBAs to apply?

Daniel: Good monitoring provides top-level metrics (throughput, number of errors, performance) for alerting, and other lower-level metrics to allow you to dig into the details and quickly diagnose and resolve an outage. Monitoring also helps you find any constraints (for example, finding bottlenecks for query performance: CPU, row locks, disk, buffer pool size, etc.). Performance monitoring allows you to see trends and lets you know when it is time to scale out or purchase more hardware.

Monitoring can also be used to communicate with business people. It is a way of translating lots of different system metrics into a measurable user experience. Visualizing your data with graphs is a very good way to communicate that information, both within your team and with your business stakeholders. Building dashboards with the metrics that are important to you rather than just the standard checklists (CPU, disk, network etc.) allows you to measure the user experience for your application and to see long-term trends.

Percona: Why Grafana? What does Grafana do better than other monitoring solutions?

Daniel: Grafana is the de facto standard in open source for visualizing time series data. It comes with tons of different ways to visualize your data (graphs, heat maps, gauges). Each data source comes with its own custom query editor that simplifies writing complex queries, and it is easy to create dynamic dashboards that look great on a TV.

Being open source, it can be connected to any data source/database, which makes it easy to unify different data sources in the same dashboard (for example, Prometheus or Graphite data combined with MySQL data). This also means your data is not subject to vendor lock-in like it is in other solutions. Grafana has a large and very active community that creates plugins and dashboards that extend Grafana into lots of niches, as well as providing ways to quickly get started with whatever you want to monitor.

Percona: What do you want attendees to take away from your session? Why should they attend?

Daniel: I want them to know that you can make the invisible visible and, with that knowledge, start to make better decisions based on data. I hope that my session helps someone take the first step toward being more proactive in their monitoring by showing them what can be done with Grafana and other tools in the monitoring space.

In my session, I will give an overview of monitoring and metrics, followed by an intro to Grafana. I plan to show how to monitor MySQL and finish off with a quick look at the new MySQL data source for Grafana.

Percona: What are you most looking forward to at Percona Live Europe 2017?

Daniel: Firstly, it is always great to have an excuse to visit Ireland (I’m an Irishman living in Sweden). I’m also looking forward to getting feedback from the community on Grafana’s new MySQL data source plugin, as well as just talking to people and hearing about their experiences with database monitoring.

Want to find out more about Daniel and data visualization? Register for Percona Live Europe 2017, and see his talk Visualize Your Data With Grafana. Register now to get the best price! Use discount code SeeMeSpeakPLE17 to get 10% off your registration.

Percona Live Open Source Database Conference Europe 2017 in Dublin is the premier European open source event for the data performance ecosystem. It is the place to be for the open source community as well as businesses that thrive in the MySQL, MariaDB, MongoDB, time series database, cloud, big data and Internet of Things (IoT) marketplaces. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Open Source Database Conference Europe will be September 25-27, 2017 at the Radisson Blu Royal Hotel, Dublin.

Does Percona integrate with System Center Operations Manager?

Latest Forum Posts - September 13, 2017 - 4:36am
I am writing to ask if there is any way Prometheus can relay data to Microsoft's System Center Operations Manager.

Massive Parallel Log Processing with ClickHouse

Latest MySQL Performance Blog posts - September 13, 2017 - 1:17am

In this blog, I’ll look at how to use ClickHouse for parallel log processing.

Percona is known primarily for our expertise in MySQL and MongoDB (at this time), but neither is quite suitable for heavy analytical workloads. There is a need to analyze data sets, and a very popular task is crunching log files. Below I’ll show how ClickHouse can be used to perform this task efficiently. ClickHouse is attractive because it has multi-core parallel query processing, and it can even execute a single query using multiple CPUs in the background.

I am going to check how ClickHouse utilizes multiple CPU cores and threads. I will use a server with two sockets, equipped with “Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz” in each. That gives a total of 28 CPU cores / 56 CPU threads.

To analyze workload, I’ll use an Apache log file from one of Percona’s servers. The log has 1.56 billion rows, and uncompressed it takes 274G. When inserted into ClickHouse, the table on disk takes 9G.

How do we insert the data into ClickHouse? There are a lot of scripts to transform the Apache log format to CSV, which ClickHouse can accept. As a base, I used this one:

https://gist.github.com/sepehr/fff4d777509fa7834531

and you can find my modification here:

https://github.com/vadimtk/clickhouse-misc/blob/master/apachelog-to-csv.pl

The ClickHouse table definition:

CREATE TABLE default.apachelog (
  remote_host String,
  user String,
  access_date Date,
  access_time DateTime,
  timezone String,
  request_method String,
  request_uri String,
  status UInt32,
  bytes UInt32,
  referer String,
  user_agent String
) ENGINE = MergeTree(access_date, remote_host, 8192)
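With the table in place, the converted log can be streamed straight into ClickHouse (a sketch, assuming the conversion script writes CSV to stdout):

shell> cat access.log | ./apachelog-to-csv.pl | clickhouse-client --query="INSERT INTO default.apachelog FORMAT CSV"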

To test how ClickHouse scales on multiple CPU cores/threads, I will execute the same query by allocating from 1 to 56 CPU threads for ClickHouse processes. This can be done as:

ps -eLo cmd,tid | grep clickhouse-server | perl -pe 's/.* (\d+)$/\1/' | xargs -n 1 taskset -cp 0-$i

where $i is (N CPUs-1).
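As an aside, instead of pinning the server process with taskset, you can also cap per-query parallelism from the client side with the max_threads setting (not what was used for the measurements below, just a sketch):

shell> clickhouse-client --max_threads=8 --query="SELECT count(*) FROM default.apachelog"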

We must also take into account that not all queries are equal. Some are easier to execute in parallel than others. So I will test three different queries. In the end, we can’t get around Amdahl’s Law!

The first query should be easy to execute in parallel:

select extract(request_uri,'(\w+)$') p, sum(bytes) sm, count(*) c from apachelog group by p order by c desc limit 100

Speedup:

CPUs  Time, sec  Speedup to 1 CPU
1     823.646    1
2     413.832    1.990291
3     274.548    3.000007
4     205.961    3.999039
5     164.997    4.991885
6     137.455    5.992114
7     118.079    6.975381
8     103.015    7.995399
9     92.01      8.951701
10    82.853     9.941052
11    75.334     10.93326
12    69.23      11.89724
13    63.848     12.90011
14    59.388     13.8689
15    55.433     14.85841
16    52.158     15.79136
17    49.054     16.7906
18    46.331     17.77743
19    43.985     18.72561
20    41.795     19.70681
21    39.763     20.71388
22    38.031     21.65723
23    36.347     22.66063
24    34.917     23.58868
25    33.626     24.49432
26    32.42      25.40549
27    31.21      26.39045
28    30.135     27.33187
29    29.947     27.50346
30    29.709     27.72379
31    29.283     28.1271
32    28.979     28.42217
33    28.807     28.59187
34    28.477     28.9232
35    28.146     29.26334
36    27.921     29.49916
37    27.613     29.8282
38    27.366     30.09742
39    27.06      30.43777
40    26.817     30.71358
41    26.644     30.913
42    26.394     31.2058
43    26.215     31.41888
44    25.994     31.686
45    25.762     31.97135
46    25.554     32.23159
47    25.243     32.62869
48    25.102     32.81197
49    24.946     33.01716
50    24.668     33.38925
51    24.537     33.56751
52    24.278     33.92561
53    24.035     34.26861
54    23.839     34.55036
55    23.734     34.70321
56    23.587     34.91949

 

It’s much more interesting to chart these results:

From the chart, we can see that the query scales linearly up to 28 cores. After that, it continues to scale up to 56 threads (but with a lesser slope). I think this is related to the CPU architecture (remember we have 28 physical cores and 56 CPU “threads”). Let’s look at the results again. With one available CPU, the query took 823.6 sec to execute. With all available CPUs, it took 23.6 sec. So the total speedup is 34.9 times.

But let’s consider a query that allows a lesser degree of parallelism. For example, this one:

select access_date c2, count(distinct request_uri) cnt from apachelog group by c2 order by c2 limit 300

This query uses aggregation that counts unique URIs, which I am sure limits the counting process to a single shared structure. So some part of the execution is limited to a single process. I won’t show the full results for all 1 to 56 CPUs, but for one CPU the execution time is 177.715 sec, and for 56 CPUs the execution time is 11.564 sec. The total speedup is 15.4 times.

The speedup chart looks like this:

As we suspected, this query allows less parallelism. What about even heavier queries? Let’s consider this one:

SELECT y, request_uri, cnt FROM (SELECT access_date y, request_uri, count(*) AS cnt FROM apachelog GROUP BY y, request_uri ORDER BY y ASC ) ORDER BY y,cnt DESC LIMIT 1 BY y

In that query, we build a derived table (to resolve the subquery) and I expect it will limit the parallelism even further. And it does: with one CPU the query takes 183.063 sec to execute. With 56 CPUs it takes 28.572 sec. So the speedup is only 6.4 times.

The chart is:

Conclusions

ClickHouse can capably utilize multiple CPU cores available on the server, and query execution is not limited by a single CPU (as it is in MySQL). The degree of parallelism is defined by the complexity of the query, and in the best case scenario we see linear scalability with the number of CPU cores. For scaling across multiple servers, see my previous post:

https://www.percona.com/blog/2017/06/22/clickhouse-general-analytical-workload-based-star-schema-benchmark/

However, if query execution is serial, it limits the speedup (as described in Amdahl’s Law).

Still, as this example with a 1.5 billion record Apache log shows, ClickHouse can execute complex analytical queries within tens of seconds.

cscope: Searching Code Efficiently

Latest MySQL Performance Blog posts - September 12, 2017 - 10:58am

In this post, we will discuss how to search code with the help of cscope. Let’s begin by checking its description and capabilities (quoting directly from http://cscope.sourceforge.net/):

Cscope is a developer’s tool for browsing source code.

  • Allows searching code for:
    • all references to a symbol
    • global definitions
    • functions called by a function
    • functions calling a function
    • text string
    • regular expression pattern
    • a file
    • files including a file
  • Curses based (text screen)
  • An information database is generated for faster searches and later reference
  • The fuzzy parser supports C, but is flexible enough to be useful for C++ and Java, and for use as a generalized ‘grep database’ (use it to browse large text documents!)

Of course, developers aren’t the only ones browsing the code (as implied by the tool’s description). In the Support team, we often find ourselves having to check code. This tool is a great aid in doing so. As you can imagine, it can replace find and grep -R "<keyword(s)>" *, and even adds more functionality! On top of that, our searches run faster (since they are indexed).

The main focus of this post is to explore cscope’s searching capabilities regarding code, but note that you can also use it for text searches that aren’t linked to function names or symbols (supporting regular expressions) and for file searches. This also means that even if the tool doesn’t recognize a function name, you can still use the text search as a fallback.

There is an online manual page, for quick reference:

http://cscope.sourceforge.net/cscope_man_page.html

To install it under RHEL/CentOS, simply issue:

shell> yum install cscope

You can use cscope with MySQL, Percona Server for MySQL or MariaDB code alike. In my case, I had a VM with Percona Server for MySQL 5.7.18 already available, so I’ve used that for demonstration purposes.

We should first get the source code for the exact version we are working with, and build the cscope database (used by the tool to perform searches):

shell> wget https://www.percona.com/downloads/Percona-Server-LATEST/Percona-Server-5.7.18-15/source/tarball/percona-server-5.7.18-15.tar.gz
shell> tar xzf percona-server-5.7.18-15.tar.gz
shell> cd percona-server-5.7.18-15
shell> cscope -bR

-b will build the database only, without accessing the CLI; -R will recursively build the symbol database from the directory where it’s executed, downward. We can also add -q for fast symbol lookup, at the expense of a larger database (we’ll check how much larger below).

Now that we have built the cscope database, we will see a new file created: cscope.out. If we used -q, we will also see: cscope.in.out and cscope.po.out. Their sizes depend on the size of the codebase in question. Here are the sizes before and after building the cscope database (with -q):

shell> du -d 1 -h ..
615M    ../percona-server-5.7.18-15
shell> cscope -bqR
shell> du -h cscope.*
8.2M    cscope.in.out
69M     cscope.out
103M    cscope.po.out
shell> du -d 1 -h ..
794M    ../percona-server-5.7.18-15

This gives around 30% increase in size while using -q, and around 10% increase without it. Your mileage may vary: be aware of this if you are using it on a test server with many different versions, or if the project size is considerably larger. It shouldn’t be much of a problem, but it’s something to take into account.

Ok, enough preamble already, let’s see it in action! To access the CLI, we can use cscope -d.

A picture is worth a thousand words. The following output corresponds to searching for the MAX_MAX_ALLOWED_PACKET symbol:

If there are multiple potential matches, the tool lists them for our review. If there is only one match, it will automatically open the file, with the cursor at the appropriate position. To check a match, either select it with the arrow keys and hit enter, or use the number/letter listed. When you are done and need to get back to cscope to continue checking other matches, simply exit the text editor (which can be defined by using CSCOPE_EDITOR). To get back to the main menu to modify the search, press CTRL-f. To exit the tool press CTRL-d. Lastly, CTRL-c toggles case insensitive mode on and off.
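When you want results in a pipeline rather than the curses UI, cscope also has a line-oriented mode (a quick sketch; -L combined with -0 through -8 selects the search type, mirroring the menu entries):

shell> cscope -dL -0 MAX_MAX_ALLOWED_PACKET   # all references to the symbol
shell> cscope -dL -3 printf | head            # functions calling printf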

To show how the tool displays searches with many hits, let’s search for functions that call printf:

We can now see that letters are also used to list options, and that we can hit space to page down for more matches (from a total of 4508).

Lastly, as mentioned before, if everything else fails and you are not able to find the function or symbol you need (due to limitations or bugs), you can use the “Find this text string” and “Find this egrep pattern” functionality.

I hope this brief tour of cscope has been useful, and that it helps you get started using it. Note that you can use it for other projects, and it can be handy if you need to dive into the Linux kernel too.

Addendum

For even more power, you can read this vim tutorial (http://cscope.sourceforge.net/cscope_vim_tutorial.html), or set up ctags (http://ctags.sourceforge.net/) along with cscope.

Upcoming Webinar September 14, 2017: Supercharge Your Analytics with ClickHouse

Latest MySQL Performance Blog posts - September 12, 2017 - 9:03am

Join Percona’s CTO Vadim Tkachenko @VadimTk and Altinity’s Co-Founder, Alexander Zaitsev as they present Supercharge Your Analytics with ClickHouse on Thursday, September 14, 2017, at 10:00 am PDT / 1:00 pm EDT (UTC-7).

Reserve Your Spot

 

ClickHouse is a real-time analytical database system. Even though it is only celebrating its first year as open source software, it has already proven itself ready for serious workloads.

We will talk about ClickHouse in general, some of its internals and why it is so fast. ClickHouse works in conjunction with MySQL – traditionally weak for analytical workloads – and this presentation demonstrates how to make the two systems work together.

There will also be an in-person presentation on How to Build Analytics for 100bn Logs a Month with ClickHouse at the meetup Wednesday, September 13, 2017. RSVP here.

Alexander Zaitsev will also be speaking at Percona Live Europe 2017 on Building Multi-Petabyte Data Warehouses with ClickHouse on Wednesday, September 27 at 11:30 am. Use the promo code “SeeMeSpeakPLE17” for 15% off.

Alexander Zaitsev
Altinity’s Co-Founder
Alexander is a co-founder of Altinity. He has 20 years of engineering and engineering management experience at several international companies. Alexander is an expert in the design and implementation of high-scale analytics systems. He has designed and deployed petabyte-scale data warehouses, including one of the earliest ClickHouse deployments outside of Yandex.

Vadim Tkachenko
CTO
Vadim Tkachenko co-founded Percona in 2006 and serves as its Chief Technology Officer. Vadim leads Percona Labs, which focuses on technology research and performance evaluations of Percona’s and third-party products. Percona Labs designs no-gimmick tests of hardware, filesystems, storage engines, and databases that surpass the standard performance and functionality scenario benchmarks. Vadim’s expertise in LAMP performance and multi-threaded programming helps optimize MySQL and InnoDB internals to take full advantage of modern hardware. Oracle Corporation and its predecessors have incorporated Vadim’s source code patches into the mainstream MySQL and InnoDB products.

He also co-authored the book High Performance MySQL: Optimization, Backups, and Replication 3rd Edition. Previously, he founded a web development company in his native Ukraine and spent two years in the High Performance Group within the official MySQL support team. Vadim received a BS in Economics and an MS in computer science from the National Technical University of Ukraine. He now lives in California with his wife and two children.

PXC 5.7 - does each node need to have the same innodb_log_file_size?

Latest Forum Posts - September 12, 2017 - 3:35am
I have a 3 node PXC 5.7 cluster which has been running for some time and recently needed to increase the innodb_log_file_size. I gracefully stopped node 3 and applied the change to innodb_log_file_size then started it back up. It successfully resized the log files when it restarted (thanks to the automatic resize feature in 5.6) and rejoined the cluster after performing an IST.

I do plan to make the change on the other 2 nodes at some point, but are there any risks in running with mismatched values for innodb_log_file_size? For example, if a node crashes and requires an SST from a donor node, will the mismatched innodb_log_file_size cause any problems? I've tried to search for any mention of this sort of configuration but didn't find anything for or against it.

Thanks
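For what it's worth, a quick first step is simply confirming which nodes still differ (a sketch):

mysql> SHOW GLOBAL VARIABLES LIKE 'innodb_log_file_size';
mysql> SHOW GLOBAL VARIABLES LIKE 'innodb_log_files_in_group';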

Database memory keeps increasing and decreasing

Latest Forum Posts - September 11, 2017 - 9:41pm
Max 250 queries every 5 mins
Max 50 connections
Max 1 concurrent connection every 5 sec
Avg 1.5k Read/Fetch Rows
24GB allocated for Innodb Buffer Pool
32GB Server RAM

I have attached my memory usage graph. There is something wrong with it: usage should not keep increasing until it reaches 97%. I think something is not right with the configuration.
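If this is MySQL 5.7 with the Performance Schema memory instrumentation enabled, the sys schema can show where the memory is actually going (a sketch):

mysql> SELECT * FROM sys.memory_global_total;
mysql> SELECT event_name, current_alloc FROM sys.memory_global_by_current_bytes LIMIT 10;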

Updating InnoDB Table Statistics Manually

Latest MySQL Performance Blog posts - September 11, 2017 - 12:00pm

In this post, we will discuss how to fix cardinality for InnoDB tables manually.

As a support engineer, I often see situations when the cardinality of a table is not correct. When InnoDB calculates the cardinality of an index, it does not scan the full table by default. Instead it looks at random pages, as determined by options innodb_stats_sample_pages, innodb_stats_transient_sample_pages and innodb_stats_persistent_sample_pages, or by the CREATE TABLE option STATS_SAMPLE_PAGES. The default value for persistent statistics is 20. This approach works fine when the number of unique values in your secondary key grows in step with the size of the table. But what if you have a column that has a comparatively small number of unique values? This could be a common service, many-to-many relationship table, for example, or just a table containing a list of sell orders that belong to one of a dozen shops owned by the company. Such tables could grow up to billions of rows with a small (less than 100) number of unique shop IDs.
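For reference, these sampling options can be inspected and raised like this (a sketch; the value is only illustrative):

mysql> SHOW GLOBAL VARIABLES LIKE 'innodb_stats%sample_pages';
mysql> SET GLOBAL innodb_stats_persistent_sample_pages = 256;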

At some point, InnoDB will report the wrong values for such indexes. Really! If 20 pages have 100 unique shop IDs, how many unique shop IDs would 20000 pages have? 100 times 1000? This seems logical, and after a certain number of rows such indexes will have extraordinarily large cardinality values.

ANALYZE TABLE will not help, because it uses the same algorithm. Increasing the number of “stats” sample pages would help, but it has its own downside: the more pages you have to examine, the slower ANALYZE TABLE runs. While this command is not blocking, it still creates side effects as described in this blog post. And the longer it runs, the less control you have.

Another issue with InnoDB statistics: even if they are persistent and STATS_AUTO_RECALC is set to 0, InnoDB still adds values for secondary indexes, as shown in lp:1538765. Eventually, after you insert millions of rows, your statistics get corrupted. ANALYZE TABLE can fix it only if you specify a very large number of “stats” sample pages.

Can we do anything about it?

InnoDB stores statistics in the “mysql” database, in the tables innodb_table_stats and innodb_index_stats. Since they are regular MySQL tables, privileged users can access them. We can update them and modify statistics as we like. And these statistics are used by the Optimizer!

I created a small example showing how to do this trick. I used Percona Server for MySQL version 5.7.19, but the trick will work on any supported MySQL and Percona Server for MySQL version.

First, let’s create test tables. The first table has shops, with a few shop profiles with the shop ID and name:

create table shops( shop_id int not null auto_increment primary key, name varchar(32) ) engine=innodb;

The second table refers to the “shops” table:

create table goods( id int not null auto_increment primary key, shop_id int not null, name varchar(32), create_date datetime DEFAULT NULL, key (shop_id, create_date) ) engine=innodb;

Let’s check how many unique shops we have:

mysql> select count(distinct shop_id) from shops; +-------------------------+ | count(distinct shop_id) | +-------------------------+ | 100 | +-------------------------+ 1 row in set (0.02 sec)

With 100 distinct shops, and a key on (shop_id, create_date), we expect cardinality in table goods to be not much different than this query result:

mysql> select count(distinct id) as `Cardinality for PRIMARY`, -> count(distinct shop_id) as `Cardinality for shop_id column in index shop_id`, -> count(distinct shop_id, create_date) as `Cardinality for create_date column in index shop_id` -> from goods *************************** 1. row *************************** Cardinality for PRIMARY: 8000000 Cardinality for shop_id column in index shop_id: 100 Cardinality for create_date column in index shop_id: 169861 1 row in set (2 min 8.74 sec)

However, SHOW INDEX returns dramatically different values for the column shop_id:

mysql> show index from goods; +-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+ | Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment | +-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+ | goods | 0 | PRIMARY | 1 | id | A | 7289724 | NULL | NULL | | BTREE | | | | goods | 1 | shop_id | 1 | shop_id | A | 13587 | NULL | NULL | | BTREE | | | | goods | 1 | shop_id | 2 | create_date | A | 178787 | NULL | NULL | YES | BTREE | | | +-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+ 3 rows in set (0.09 sec)

ANALYZE TABLE does not help:

mysql> analyze table goods; +------------+---------+----------+----------+ | Table | Op | Msg_type | Msg_text | +------------+---------+----------+----------+ | test.goods | analyze | status | OK | +------------+---------+----------+----------+ 1 row in set (0.88 sec) mysql> show index from goods; +-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+ | Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment | +-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+ | goods | 0 | PRIMARY | 1 | id | A | 7765796 | NULL | NULL | | BTREE | | | | goods | 1 | shop_id | 1 | shop_id | A | 14523 | NULL | NULL | | BTREE | | | | goods | 1 | shop_id | 2 | create_date | A | 168168 | NULL | NULL | YES | BTREE | | | +-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+ 3 rows in set (0.00 sec)

As a result, if we join the two tables, Optimizer chooses the wrong JOIN order and query execution plan:

mysql> explain select goods.* from goods join shops using(shop_id) where create_date BETWEEN CONVERT_TZ('2015-11-01 00:00:00', 'MET','GMT') AND CONVERT_TZ('2015-11-07 23:59:59', 'MET','GMT') and goods.shop_id in(4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,401,402,403,404,405,406,407,408,409,410,411,412,413,414,415,417,418,419,420,421,422,423,424,425,426,427,428,429,430,431,432,433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448,449,450,451,452,453,454,455,456,457,458,459,460,461,462,463,464,465,466,467,468,470,471,472,473,474,475,476,477,478,479,480,481,482,483,484,485,486); +----+-------------+-------+------------+-------+---------------+---------+---------+--------------------+------+----------+--------------------------+ | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+-------------+-------+------------+-------+---------------+---------+---------+--------------------+------+----------+--------------------------+ | 1 | SIMPLE | shops | NULL | index | PRIMARY | PRIMARY | 4 | NULL | 100 | 100.00 | Using where; Using index | | 1 | SIMPLE | goods | NULL | ref | shop_id | shop_id | 4 | test.shops.shop_id | 534 | 11.11 | Using index condition | +----+-------------+-------+------------+-------+---------------+---------+---------+--------------------+------+----------+--------------------------+ 2 rows in set, 1 warning (0.13 sec) mysql> P md5sum PAGER set to 'md5sum' mysql> select goods.* from goods join shops using(shop_id) where create_date BETWEEN CONVERT_TZ('2015-11-01 00:00:00', 'MET','GMT') AND CONVERT_TZ('2015-11-07 23:59:59', 'MET','GMT') and goods.shop_id in(4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,401,402,403,404,405,406,407,408,409,410,411,412,413,414,415,417,418,419,420,421,422,423,424,425,426,427,428,429,430,431,432,433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448,449,450,451,452,453,454,455,456,457,458,459,460,461,462,463,464,465,466,467,468,470,471,472,473,474,475,476,477,478,479,480,481,482,483,484,485,486); 4a94dabc4bfbfb7dd225bcb50278055b - 31896 rows in set (43.32 sec)

If compared to STRAIGHT_JOIN order:

mysql> explain select goods.* from goods straight_join shops on(goods.shop_id = shops.shop_id) where create_date BETWEEN CONVERT_TZ('2015-11-01 00:00:00', 'MET','GMT') AND CONVERT_TZ('2015-11-07 23:59:59', 'MET','GMT') and goods.shop_id in(4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,401,402,403,404,405,406,407,408,409,410,411,412,413,414,415,417,418,419,420,421,422,423,424,425,426,427,428,429,430,431,432,433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448,449,450,451,452,453,454,455,456,457,458,459,460,461,462,463,464,465,466,467,468,470,471,472,473,474,475,476,477,478,479,480,481,482,483,484,485,486); +----+-------------+-------+------------+--------+---------------+---------+---------+--------------------+-------+----------+-----------------------+ | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+-------------+-------+------------+--------+---------------+---------+---------+--------------------+-------+----------+-----------------------+ | 1 | SIMPLE | goods | NULL | range | shop_id | shop_id | 10 | NULL | 31997 | 100.00 | Using index condition | | 1 | SIMPLE | shops | NULL | eq_ref | PRIMARY | PRIMARY | 4 | test.goods.shop_id | 1 | 100.00 | Using index | +----+-------------+-------+------------+--------+---------------+---------+---------+--------------------+-------+----------+-----------------------+ 2 rows in set, 1 warning (0.14 sec) mysql> P md5sum PAGER set to 'md5sum' mysql> select goods.* from goods straight_join shops on(goods.shop_id = shops.shop_id) where create_date BETWEEN CONVERT_TZ('2015-11-01 00:00:00', 'MET','GMT') AND CONVERT_TZ('2015-11-07 23:59:59', 'MET','GMT') and goods.shop_id in(4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,401,402,403,404,405,406,407,408,409,410,411,412,413,414,415,417,418,419,420,421,422,423,424,425,426,427,428,429,430,431,432,433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448,449,450,451,452,453,454,455,456,457,458,459,460,461,462,463,464,465,466,467,468,470,471,472,473,474,475,476,477,478,479,480,481,482,483,484,485,486); 4a94dabc4bfbfb7dd225bcb50278055b - 31896 rows in set (7.94 sec)

The time difference for a small 8M row table is around six times! For a big table with many columns, it would be even larger.

Is STRAIGHT_JOIN the only solution for this case?

No! It’s also not a great solution because if the query is complicated and involves more than two tables, it may be affected by bug fixes and improvements in the Optimizer code. Then the query order might not be optimal for new versions and updates. Therefore, you’ll need to test such queries at each upgrade, including minor ones.

So why does ANALYZE TABLE not work? Because the default number of pages it uses to calculate statistics is too small to capture the real distribution. You can increase the table option STATS_SAMPLE_PAGES until you find a suitable value. The drawback is that the greater you set STATS_SAMPLE_PAGES, the longer ANALYZE TABLE takes to finish. Also, if you update a large portion of the table, you are often affected by lp:1538765. At some point, the statistics will again be inaccurate.
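In practice that means something like the following, repeated with larger values until SHOW INDEX reports sane numbers (a sketch; 2000 pages is only an example):

mysql> ALTER TABLE goods STATS_SAMPLE_PAGES=2000;
mysql> ANALYZE TABLE goods;
mysql> SHOW INDEX FROM goods;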

Now let’s try our manual statistics update trick

InnoDB stores its persistent statistics in the tables mysql.innodb_table_stats  and mysql.innodb_index_stats:

mysql> alter table goods stats_persistent=1, stats_auto_recalc=0; Query OK, 0 rows affected (0.11 sec) Records: 0 Duplicates: 0 Warnings: 0 +---------------+------------+---------------------+---------+----------------------+--------------------------+ | database_name | table_name | last_update | n_rows | clustered_index_size | sum_of_other_index_sizes | +---------------+------------+---------------------+---------+----------------------+--------------------------+ | test | goods | 2017-09-05 00:21:12 | 7765796 | 34624 | 17600 | +---------------+------------+---------------------+---------+----------------------+--------------------------+ 1 row in set (0.00 sec) mysql> select * from mysql.innodb_index_stats where table_name='goods'; +---------------+------------+------------+---------------------+--------------+------------+-------------+-----------------------------------+ | database_name | table_name | index_name | last_update | stat_name | stat_value | sample_size | stat_description | +---------------+------------+------------+---------------------+--------------+------------+-------------+-----------------------------------+ | test | goods | PRIMARY | 2017-09-05 00:21:12 | n_diff_pfx01 | 7765796 | 20 | id | | test | goods | PRIMARY | 2017-09-05 00:21:12 | n_leaf_pages | 34484 | NULL | Number of leaf pages in the index | | test | goods | PRIMARY | 2017-09-05 00:21:12 | size | 34624 | NULL | Number of pages in the index | | test | goods | shop_id | 2017-09-05 00:21:12 | n_diff_pfx01 | 14523 | 20 | shop_id | | test | goods | shop_id | 2017-09-05 00:21:12 | n_diff_pfx02 | 168168 | 20 | shop_id,create_date | | test | goods | shop_id | 2017-09-05 00:21:12 | n_diff_pfx03 | 8045310 | 20 | shop_id,create_date,id | | test | goods | shop_id | 2017-09-05 00:21:12 | n_leaf_pages | 15288 | NULL | Number of leaf pages in the index | | test | goods | shop_id | 2017-09-05 00:21:12 | size | 17600 | NULL | Number of pages in the index | +---------------+------------+------------+---------------------+--------------+------------+-------------+-----------------------------------+ 8 rows in set (0.00 sec)

And we can update these tables directly:

mysql> update mysql.innodb_table_stats set n_rows=8000000 where table_name='goods';
Query OK, 1 row affected (0.18 sec)
Rows matched: 1  Changed: 1  Warnings: 0
mysql> update mysql.innodb_index_stats set stat_value=8000000 where stat_description in('id', 'shop_id,create_date,id') and table_name='goods';
Query OK, 2 rows affected (0.08 sec)
Rows matched: 2  Changed: 2  Warnings: 0
mysql> update mysql.innodb_index_stats set stat_value=100 where stat_description in('shop_id') and table_name='goods';
Query OK, 1 row affected (0.09 sec)
Rows matched: 1  Changed: 1  Warnings: 0
mysql> update mysql.innodb_index_stats set stat_value=169861 where stat_description in('shop_id,create_date') and table_name='goods';
Query OK, 1 row affected (0.08 sec)
Rows matched: 1  Changed: 1  Warnings: 0

I took index values from earlier, as calculated by this query:

select count(distinct id) as `Cardinality for PRIMARY`, count(distinct shop_id) as `Cardinality for shop_id column in index shop_id`, count(distinct shop_id, create_date) as `Cardinality for create_date column in index shop_id` from goods;

mysql> select * from mysql.innodb_table_stats where table_name='goods'; +---------------+------------+---------------------+---------+----------------------+--------------------------+ | database_name | table_name | last_update | n_rows | clustered_index_size | sum_of_other_index_sizes | +---------------+------------+---------------------+---------+----------------------+--------------------------+ | test | goods | 2017-09-05 00:47:45 | 8000000 | 34624 | 17600 | +---------------+------------+---------------------+---------+----------------------+--------------------------+ 1 row in set (0.00 sec) mysql> select * from mysql.innodb_index_stats where table_name='goods'; +---------------+------------+------------+---------------------+--------------+------------+-------------+-----------------------------------+ | database_name | table_name | index_name | last_update | stat_name | stat_value | sample_size | stat_description | +---------------+------------+------------+---------------------+--------------+------------+-------------+-----------------------------------+ | test | goods | PRIMARY | 2017-09-05 00:48:32 | n_diff_pfx01 | 8000000 | 20 | id | | test | goods | PRIMARY | 2017-09-05 00:21:12 | n_leaf_pages | 34484 | NULL | Number of leaf pages in the index | | test | goods | PRIMARY | 2017-09-05 00:21:12 | size | 34624 | NULL | Number of pages in the index | | test | goods | shop_id | 2017-09-05 00:49:13 | n_diff_pfx01 | 100 | 20 | shop_id | | test | goods | shop_id | 2017-09-05 00:49:26 | n_diff_pfx02 | 169861 | 20 | shop_id,create_date | | test | goods | shop_id | 2017-09-05 00:48:32 | n_diff_pfx03 | 8000000 | 20 | shop_id,create_date,id | | test | goods | shop_id | 2017-09-05 00:21:12 | n_leaf_pages | 15288 | NULL | Number of leaf pages in the index | | test | goods | shop_id | 2017-09-05 00:21:12 | size | 17600 | NULL | Number of pages in the index | +---------------+------------+------------+---------------------+--------------+------------+-------------+-----------------------------------+ 8 rows in set (0.00 sec)

Now the statistics are up to date, but not used:

mysql> explain select goods.* from goods join shops using(shop_id) where create_date BETWEEN CONVERT_TZ('2015-11-01 00:00:00', 'MET','GMT') AND CONVERT_TZ('2015-11-07 23:59:59', 'MET','GMT') and goods.shop_id in(4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,401,402,403,404,405,406,407,408,409,410,411,412,413,414,415,417,418,419,420,421,422,423,424,425,426,427,428,429,430,431,432,433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448,449,450,451,452,453,454,455,456,457,458,459,460,461,462,463,464,465,466,467,468,470,471,472,473,474,475,476,477,478,479,480,481,482,483,484,485,486); +----+-------------+-------+------------+-------+---------------+---------+---------+--------------------+------+----------+--------------------------+ | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+-------------+-------+------------+-------+---------------+---------+---------+--------------------+------+----------+--------------------------+ | 1 | SIMPLE | shops | NULL | index | PRIMARY | PRIMARY | 4 | NULL | 100 | 100.00 | Using where; Using index | | 1 | SIMPLE | goods | NULL | ref | shop_id | shop_id | 4 | test.shops.shop_id | 534 | 11.11 | Using index condition | +----+-------------+-------+------------+-------+---------------+---------+---------+--------------------+------+----------+--------------------------+ 2 rows in set, 1 warning (0.04 sec)

To finalize the changes, we need to run FLUSH TABLE goods:

mysql> FLUSH TABLE goods; Query OK, 0 rows affected (0.00 sec) mysql> explain select goods.* from goods join shops using(shop_id) where create_date BETWEEN CONVERT_TZ('2015-11-01 00:00:00', 'MET','GMT') AND CONVERT_TZ('2015-11-07 23:59:59', 'MET','GMT') and goods.shop_id in(4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,401,402,403,404,405,406,407,408,409,410,411,412,413,414,415,417,418,419,420,421,422,423,424,425,426,427,428,429,430,431,432,433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448,449,450,451,452,453,454,455,456,457,458,459,460,461,462,463,464,465,466,467,468,470,471,472,473,474,475,476,477,478,479,480,481,482,483,484,485,486); +----+-------------+-------+------------+--------+---------------+---------+---------+--------------------+-------+----------+-----------------------+ | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+-------------+-------+------------+--------+---------------+---------+---------+--------------------+-------+----------+-----------------------+ | 1 | SIMPLE | goods | NULL | range | shop_id | shop_id | 10 | NULL | 31997 | 100.00 | Using index condition | | 1 | SIMPLE | shops | NULL | eq_ref | PRIMARY | PRIMARY | 4 | test.goods.shop_id | 1 | 100.00 | Using index | +----+-------------+-------+------------+--------+---------------+---------+---------+--------------------+-------+----------+-----------------------+ 2 rows in set, 1 warning (0.28 sec) mysql> P md5sum PAGER set to 'md5sum' mysql> select goods.* from goods join shops using(shop_id) where create_date BETWEEN CONVERT_TZ('2015-11-01 00:00:00', 'MET','GMT') AND CONVERT_TZ('2015-11-07 23:59:59', 'MET','GMT') and goods.shop_id in(4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,401,402,403,404,405,406,407,408,409,410,411,412,413,414,415,417,418,419,420,421,422,423,424,425,426,427,428,429,430,431,432,433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448,449,450,451,452,453,454,455,456,457,458,459,460,461,462,463,464,465,466,467,468,470,471,472,473,474,475,476,477,478,479,480,481,482,483,484,485,486); 4a94dabc4bfbfb7dd225bcb50278055b - 31896 rows in set (7.79 sec)

Now everything is good.

But FLUSH TABLE is a blocking operation, right? Won’t it block queries and create a worse scenario than described for ANALYZE TABLE in this post?

At first glance this is true. But we can use the same trick Percona Toolkit uses: set lock_wait_timeout to 1 and call FLUSH in a loop. To demonstrate how it works, I use a similar scenario as described in the ANALYZE TABLE blog post.

First, let’s reset the statistics to ensure our FLUSH works as expected:

mysql> analyze table goods; +------------+---------+----------+----------+ | Table | Op | Msg_type | Msg_text | +------------+---------+----------+----------+ | test.goods | analyze | status | OK | +------------+---------+----------+----------+ 1 row in set (0.38 sec) mysql> show indexes from goods; +-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+ | Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment | +-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+ | goods | 0 | PRIMARY | 1 | id | A | 7765796 | NULL | NULL | | BTREE | | | | goods | 1 | shop_id | 1 | shop_id | A | 14523 | NULL | NULL | | BTREE | | | | goods | 1 | shop_id | 2 | create_date | A | 168168 | NULL | NULL | YES | BTREE | | | +-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+ 3 rows in set (0.00 sec)

Then update the mysql.innodb_*_stats tables manually, and check that the Optimizer still sees the outdated statistics:

mysql> update mysql.innodb_table_stats set n_rows=8000000 where table_name='goods'; Query OK, 1 row affected (0.09 sec) Rows matched: 1 Changed: 1 Warnings: 0 mysql> update mysql.innodb_index_stats set stat_value=8000000 where stat_description in('id', 'shop_id,create_date,id') and table_name='goods'; Query OK, 2 rows affected (0.09 sec) Rows matched: 2 Changed: 2 Warnings: 0 mysql> update mysql.innodb_index_stats set stat_value=100 where stat_description in('shop_id') and table_name='goods'; Query OK, 1 row affected (0.11 sec) Rows matched: 1 Changed: 1 Warnings: 0 mysql> update mysql.innodb_index_stats set stat_value=169861 where stat_description in('shop_id,create_date') and table_name='goods'; Query OK, 1 row affected (0.10 sec) Rows matched: 1 Changed: 1 Warnings: 0 mysql> show indexes from goods; +-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+ | Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment | +-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+ | goods | 0 | PRIMARY | 1 | id | A | 7765796 | NULL | NULL | | BTREE | | | | goods | 1 | shop_id | 1 | shop_id | A | 14523 | NULL | NULL | | BTREE | | | | goods | 1 | shop_id | 2 | create_date | A | 168168 | NULL | NULL | YES | BTREE | | | +-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+ 3 rows in set (0.00 sec)

Now let’s start a long running query in one session that blocks our FLUSH TABLE command:

mysql> select sleep(1) from goods limit 1000, 300;

And let’s run FLUSH TABLE in a loop:

sveta@Thinkie:~/build/ps-5.7/mysql-test$ until (`mysqlmtr -P13001 -e "set lock_wait_timeout=1; flush table goods;" test`); do sleep 1; done
ERROR 1205 (HY000) at line 1: Lock wait timeout exceeded; try restarting transaction
ERROR 1205 (HY000) at line 1: Lock wait timeout exceeded; try restarting transaction
ERROR 1205 (HY000) at line 1: Lock wait timeout exceeded; try restarting transaction
...

Now let’s ensure we can access the table:

mysql> select * from goods order by id limit 10; ^C

We cannot! We cannot even connect to the database where the table is stored:

sveta@Thinkie:~/build/ps-5.7/mysql-test$ mysqlmtr -P13001 test Reading table information for completion of table and column names You can turn off this feature to get a quicker startup with -A ^C

The reason is that while the FLUSH TABLE command was killed due to the metadata lock wait timeout, it had also requested a table lock for flushing, which blocked other incoming queries.

But we can enclose FLUSH TABLE in LOCK TABLE ... WRITE; ... UNLOCK TABLES; operations. In this case, the LOCK TABLE command is blocked until all queries release the metadata lock on the table. Then it exclusively locks the table, FLUSH TABLE runs, and the script immediately unlocks the table. Since closing the session causes an implicit unlock, I used a PHP one-liner to have everything in a single session:

$ php -r ' > $link = new mysqli("127.0.0.1", "root", "", "test", 13001); > $link->query("set lock_wait_timeout=1"); > while(!$link->query("lock table goods write")) {sleep(1);} > $link->query("flush table goods"); > $link->query("unlock tables");'

We can confirm that a parallel session can access the table:

mysql> select * from goods order by id limit 10; +----+---------+----------------------------------+---------------------+ | id | shop_id | name | create_date | +----+---------+----------------------------------+---------------------+ | 1 | 58 | 5K0z2sHTgjWKKdryTaniQdZmjGjA9wls | 2015-09-19 00:00:00 | | 2 | 17 | xNll02kgUTWAFURj6j5lL1zXAubG0THG | 2013-10-19 00:00:00 | | 3 | 30 | clHX7uQopKmoTtEFH5LYBgQncsxRtTIB | 2017-08-01 00:00:00 | | 4 | 93 | bAzoQTN98AmFjPOZs7PGfbiGfaf9Ye4b | 2013-02-24 00:00:00 | | 5 | 20 | rQuTO5GHjP60kDbN6WoPpE2S8TtMbrVL | 2017-08-05 00:00:00 | | 6 | 37 | WxqxA5tBHxikaKbuvbIF84H9QuaCnqQ3 | 2013-10-18 00:00:00 | | 7 | 13 | DoYnFpQZSVV8UswBsWklgGBUc8zW9mVW | 2017-02-06 00:00:00 | | 8 | 81 | dkNxMQyZNZuTrONEX4gxRLa0DOedatIs | 2015-07-05 00:00:00 | | 9 | 12 | Z0t2uQ9itexpPf01KUpa7qBWlT5fBmXR | 2014-06-25 00:00:00 | | 10 | 90 | 6urABBQyaUVVyxljvd11D3kUxbdDRPRV | 2013-10-23 00:00:00 | +----+---------+----------------------------------+---------------------+ 10 rows in set (0.00 sec) mysql> update goods set name='test' where id=100; Query OK, 1 row affected (0.08 sec) Rows matched: 1 Changed: 1 Warnings: 0

After the PHP script finishes its job, statistics are corrected:

mysql> show index from goods;
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| goods |          0 | PRIMARY  |            1 | id          | A         |     8000000 |     NULL | NULL   |      | BTREE      |         |               |
| goods |          1 | shop_id  |            1 | shop_id     | A         |         100 |     NULL | NULL   |      | BTREE      |         |               |
| goods |          1 | shop_id  |            2 | create_date | A         |      169861 |     NULL | NULL   | YES  | BTREE      |         |               |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
3 rows in set (0.00 sec)

Conclusion

We can manually update persistent InnoDB statistics to fix Optimizer plans for our queries, with almost no impact on a live server.

Upcoming Webinar Tuesday September 12: Differences between MariaDB® and MySQL®

Latest MySQL Performance Blog posts - September 11, 2017 - 8:28am

Join Percona’s Chief Evangelist, Colin Charles (@bytebot) as he presents Differences Between MariaDB and MySQL on Tuesday, September 12, 2017, at 7:00 am PDT / 10:00 am EDT (UTC-7).


Are they syntactically similar? Where do these two query languages differ? Why would I use one over the other?

MariaDB is on the path to gradually diverge from MySQL. One obvious example is the internal data dictionary currently under development for MySQL 8. This is a major change to the way metadata is stored and used within the server, and MariaDB doesn’t have an equivalent feature. Implementing this feature could mark the end of datafile-level compatibility between MySQL and MariaDB.

There are also non-technical differences between MySQL and MariaDB, including:

  • Licensing: MySQL offers their code as open-source under the GPL, and provides the option of non-GPL commercial distribution in the form of MySQL Enterprise. MariaDB can only use the GPL because their work is derived from the MySQL source code under the terms of that license.
  • Support services: Oracle provides technical support, training, certification and consulting for MySQL, while MariaDB has their own support services. Some people will prefer working with smaller companies, as traditionally it affords them more leverage as a customer.
  • Community contributions: MariaDB touts the fact that they accept more community contributions than Oracle. Part of the reason for this disparity is that developers like to contribute features, bug fixes and other code without a lot of paperwork overhead (and they complain about the Oracle Contributor Agreement). However, MariaDB has its own MariaDB Contributor Agreement — which more or less serves the same purpose.

Colin will take a look at some of the differences between MariaDB and MySQL and help answer some of the common questions our Database Performance Experts get about the two databases.

You can register for the webinar here.

Colin Charles, Percona Chief Evangelist

Colin Charles is the Chief Evangelist at Percona. He was previously on the founding team of MariaDB Server in 2009, had worked at MySQL since 2005, and has been a MySQL user since 2000. Before joining MySQL, he worked actively on the Fedora and OpenOffice.org projects. He’s well known within open source communities in APAC and has spoken at many conferences.

Cluster down > BF-BF X lock conflict

Latest Forum Posts - September 11, 2017 - 3:33am
Hello,

today we had a strange problem with our PXC (5.5.41-25.12-855.trusty). All 3 nodes were offline.

I found something in the error logs:

BF-BF X lock conflict
RECORD LOCKS space id 1965277 page no 68 n bits 120 index `PRIMARY` of table `data_mdt`.`V30_55_044` trx id A72E0B521 lock_mode X locks rec but not gap

07:13:22 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona XtraDB Cluster better by reporting any
bugs at https://bugs.launchpad.net/percona-xtradb-cluster

key_buffer_size=8388608
read_buffer_size=131072
max_used_connections=123
max_threads=153
thread_count=25
connection_count=8
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 343066 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x7f2f9c000990
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 7f3d08332a58 thread_stack 0x40000
/usr/sbin/mysqld(my_print_stacktrace+0x20)[0x7a4db0]
/usr/sbin/mysqld(handle_fatal_signal+0x36f)[0x693b0f]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x10340)[0x7f3d3a67c340]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0x39)[0x7f3d39698cc9]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x148)[0x7f3d3969c0d8]
/usr/sbin/mysqld[0x50d0c4]
/usr/sbin/mysqld[0x8ed07a]
/usr/sbin/mysqld[0x8f223d]
/usr/sbin/mysqld[0x8f2be3]
/usr/sbin/mysqld[0x8304ea]
/usr/sbin/mysqld[0x834454]
/usr/sbin/mysqld[0x80744d]
/usr/sbin/mysqld[0x80ca53]
/usr/sbin/mysqld(_ZN14Rows_log_event8find_rowEPK14Relay_log_info+0x64)[0x72cfa4]
/usr/sbin/mysqld(_ZN21Update_rows_log_event11do_exec_rowEPK14Relay_log_info+0x89)[0x72d769]
/usr/sbin/mysqld(_ZN14Rows_log_event14do_apply_eventEPK14Relay_log_info+0x1e5)[0x723b55]
/usr/sbin/mysqld(_Z14wsrep_apply_cbPvPKvmjPK14wsrep_trx_meta+0x6ab)[0x653bab]
/usr/lib/libgalera_smm.so(+0x196a96)[0x7f3d1ca68a96]
/usr/lib/libgalera_smm.so(_ZN6galera13ReplicatorSMM9apply_trxEPvPNS_9TrxHandleE+0xce)[0x7f3d1ca6aabe]
/usr/lib/libgalera_smm.so(_ZN6galera13ReplicatorSMM11process_trxEPvPNS_9TrxHandleE+0x10e)[0x7f3d1ca6d72e]
/usr/lib/libgalera_smm.so(_ZN6galera15GcsActionSource8dispatchEPvRK10gcs_actionRb+0x1b9)[0x7f3d1ca4d979]
/usr/lib/libgalera_smm.so(_ZN6galera15GcsActionSource7processEPvRb+0x4c)[0x7f3d1ca4e86c]
/usr/lib/libgalera_smm.so(_ZN6galera13ReplicatorSMM10async_recvEPv+0x6b)[0x7f3d1ca6d94b]
/usr/lib/libgalera_smm.so(galera_recv+0x18)[0x7f3d1ca7e9c8]
/usr/sbin/mysqld[0x6544a8]
/usr/sbin/mysqld(start_wsrep_THD+0x34b)[0x5261bb]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x8182)[0x7f3d3a674182]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f3d3975c47d]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0): is an invalid pointer
Connection ID (thread ID): 14
Status: NOT_KILLED

You may download the Percona XtraDB Cluster operations manual by visiting
http://www.percona.com/software/percona-xtradb-cluster/. You may find information
in the manual which will help you identify the cause of the crash.

170911 09:13:22 mysqld_safe Number of processes running now: 0
170911 09:13:22 mysqld_safe WSREP: not restarting wsrep node automatically
170911 09:13:22 mysqld_safe mysqld from pid file /var/lib/mysql/nsearch11.pid ended
Two questions:

- Why did our PXC stop?
- I think the error occurred at 07:13:22 am, but why did the nodes stop at 09:13:22?

Any ideas? :-/

best regards
Patrick

Percona server 5.5.57 generating lots of small mysql-bin.xxxx files

Latest Forum Posts - September 11, 2017 - 1:59am
Hi Community,

I've been using Percona server for a while now without any trouble at all ever :-)
Now I'm running into my first issue so I thought I'd join the community to seek some advice.

Recently I replaced a Percona Server 5.5 instance running on CentOS 6.9 with Percona Server 5.5 on CentOS 7.3.

I copied the full MySQL data directory over to the new system and set the permissions.

MySQL seems to be working fine (other than users reporting intermittent disconnects).

My problem is really that I seem to get a new mysql-bin.xxxxx file every 10 minutes.

No entries in the error log. I'm at a loss :-(

Anyone ever come across this issue?

Thanks in advance for any help, much appreciated!

Regards,

K Faber

Percona Live Europe Featured Talks: MongoDB – Sharded Cluster Tutorial with Jason Terpko and Antonios Giannopoulos

Latest MySQL Performance Blog posts - September 8, 2017 - 11:32am

Welcome to another post in our series of interview blogs for the upcoming Percona Live Europe 2017 in Dublin. This series highlights a number of talks that will be at the conference and gives a short preview of what attendees can expect to learn from the presenter.

This blog post is with Jason Terpko and Antonios Giannopoulos, DBAs at ObjectRocket. Their tutorial is MongoDB – Sharded Cluster Tutorial. This tutorial guides you through the many considerations when deploying a sharded cluster. The talk covers the services that make up a sharded cluster, configuration recommendations for these services, shard key selection, use cases and how to manage data within a sharded cluster. In our conversation, we discussed how using a sharded cluster can benefit your database environment:

Percona: How did you get into database technology? What do you love about it?

Jason: Nowadays, a DBA must also be part sysadmin and part developer (and always awesome). Being a DBA gives me the opportunity to deal with the entire stack. I never get bored.

Antonios: I agree with Jason, and I have to add that today there are probably more databases than programming languages. Designing an application often involves using more than one database technology. The challenge is choosing the right ones each time. Honestly, how can you get bored with that?

Percona: You’re presenting a session called “MongoDB – Sharded Cluster Tutorial.” What is a MongoDB sharded cluster, and how is it useful in databases?

Jason: Scaling is one of the biggest challenges in the database world. A database can scale either vertically or horizontally. Vertical scaling means increasing resources to deal with database underperformance: doubling the RAM, or increasing speed through faster CPUs. Unfortunately, doubling capacity doesn’t necessarily mean doubling the performance. There is a breaking point beyond which adding resources no longer improves performance. With horizontal scaling, we distribute the database workload among multiple servers. Each of the instances serves only a portion of the workload. If the database is underperforming, we simply add more servers. It’s faster and cheaper compared to vertical scaling, and at the same time the capacity-to-performance ratio is much higher with horizontal scaling. Sharding is MongoDB’s horizontal scaling mechanism.

Percona: How can a sharded cluster affect MongoDB performance (both negatively and positively)?

Antonios: Sharded clusters can have an immediate positive impact on application performance when the collection has been pre-distributed with a hashed shard key or manual splitting. These approaches allow your application to make use of all shards and resources from the start. For some newly deployed applications, this throughput is a requirement for a successful release.

The distribution of data, with its positive impact on performance, can also have a negative effect. Even with an evenly distributed collection, hot spotting can occur. This causes degradation for both targeted and broadcast operations. Also, write operations cause overhead when you need to move or balance this distributed data. This overhead can impact some applications when added to their workload at specific times.

Percona: What are some of the things you need to watch out for when deploying a sharded cluster?

Jason: First of all, there is the shard key selection. Choosing the right shard key makes your application rock (and you might earn the employee of the month award). Selecting a poor shard key may have a catastrophic effect on your business (and get you a much different type of company notice).

Secondly, after sharding a collection any existing applications must continue to work error free. Familiarizing yourself with shard key limitations and what operations may not work on a sharded collection is very important. Doing the research beforehand will prevent issues later.

Thirdly, you need to monitor the resources you have deployed your sharded cluster on. Whether it is physical, virtualized or containerized, all components should have a similar performance profile and reliable communication. For broadcast operations, your operation is only as fast as the slowest shard. And if internal traffic is not reliable, your cluster can be prone to issues.

Percona: What do you want attendees to take away from your session? Why should they attend?

Antonios: Our attendees will feel like Hamlet: “To shard or not to shard?”. At the end of the session, every attendee will be able to set up, maintain and troubleshoot a sharded cluster. Additionally, they will get their hands dirty in our labs. Don’t get me wrong, our slides are great! But sharded cluster mastery requires practice. Finally, we encourage discussion during the tutorial, so please come and raise your hand and ask us sharding-related questions. We would love to learn about your use cases and help you in any way.

Percona: What are you most looking forward to at Percona Live Europe 2017?

Both: The free drinks of course! But seriously, Percona Live Europe is how community events should be. It’s our fourth time attending, and every single time we meet people who share our passion for databases. They are open to conversation, and everywhere we discover new ways to solve complex problems, new technologies to look at and innovative ideas to try out. The Percona Live conferences are getting better and better every year, and we are 100% sure that Percona Live Europe 2017 will be a success.

Want to find out more about Jason, Antonios and sharded clusters? Register for Percona Live Europe 2017, and see their talk MongoDB – Sharded Cluster Tutorial. Register now to get the best price! Use discount code SeeMeSpeakPLE17 to get 10% off your registration.

Percona Live Open Source Database Conference Europe 2017 in Dublin is the premier European open source event for the data performance ecosystem. It is the place to be for the open source community as well as businesses that thrive in the MySQL, MariaDB, MongoDB, time series database, cloud, big data and Internet of Things (IoT) marketplaces. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Open Source Database Conference Europe will be September 25-27, 2017 at the Radisson Blu Royal Hotel, Dublin.

This Week in Data with Colin Charles #5: db tech showcase and Percona Live Europe

Latest MySQL Performance Blog posts - September 8, 2017 - 11:25am

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

Percona Live Europe 2017 in Dublin

Have you registered for Percona Live Europe Dublin? We have announced some awesome keynotes, and our sponsor list is growing (and we’re always looking for more!).

There will also be a community dinner (Tuesday, September 26, 2017), so definitely watch the announcement that will be on the blog, and I’m sure on Twitter. Besides being fun, the Lightning Talks will happen during that time.

Releases

Link List

db tech showcase Tokyo, Japan

The annual db tech showcase Tokyo 2017 took place this week from 5-7 September. It was a fun event as always, with capacity for 800 people per day. The event grows larger each year, and reminds me of the heyday of the MySQL Conference & Expo.

The db tech showcase is a five-parallel-track show, with each talk approximately 50 minutes. The event started with a keynote by Richard Hipp, creator of SQLite (if you were a Percona Live Santa Clara 2017 attendee, you’d have also seen him there). The rest of the event is a mix between Japanese language content and English language content. The sponsor list is lengthy, and if you walk the floor you could collect a lot of datasheets.

One thing I really liked? At some talks, you’d get a clear folder with a contact form as well as the printed slide deck. This is a great way to let the speaker’s company contact you. It’s a common issue that I (and others) speak to large numbers of people and have no idea who’s in the talk. I can only imagine our marketing and sales teams being much happier if they could get access to an attendee list! I wonder if this will work in other markets?

It’s interesting to see that there is a Japan MariaDB User Group now. It’s clear the MySQL user group needs a revival! I saw a talk from Toshiba on performance tests using MariaDB Server, but not with MySQL (a little odd?). The MongoDB content was pretty light, which is unsurprising because we don’t see a huge MongoDB uptake or community in Japan (or South Korea for that matter).

Will I go back? Absolutely. I’ve been going for a few years, and it’s a great place for people who are crazy about database technology. You really get a spectrum of database presentations, and I expect most people go back with many ideas of what they might want to evaluate for production.

I spoke about the Engineering that goes into Percona Server for MySQL 5.6 and 5.7, with a hint of MongoDB. The slides are in a mix of Japanese and English. The Japanese translation: Percona ServerをMySQL 5.6と5.7用に作るエンジニアリング(そしてMongoDBのヒント).

Upcoming Appearances

Percona’s website keeps track of community events, so check there to see where to listen to a Perconian speak. My upcoming appearances are:

Feedback

Did you try replication-manager last week? Guillaume Lefranc, the lead developer, writes in to talk about the new features such as support for MySQL 5.7, Binlog Flashback, multi-cluster mode and various stability fixes.

I look forward to feedback/tips via e-mail at colin.charles@percona.com or on Twitter @bytebot.

Re-enable fast upsert support for TokuDB

Latest Forum Posts - September 8, 2017 - 3:00am
Percona-TokuDB used to support NOAR (No Affected Rows) fast upserts (https://www.percona.com/blog/2013/02...s-with-tokudb/ https://www.percona.com/blog/2013/03...ate-benchmark/)

There is already a feature request to re-enable it in modern Percona Server, and there are branches ready for merging! (https://jira.percona.com/browse/TDB-63 https://tokutek.atlassian.net/browse/DB-871)

Please please please get this feature back in! Users could really use it!

I am seriously regretting going from an old manual tokudb patch to using Percona Server because upsert performance has tanked. NOAR has a really big impact on upsert-heavy workloads.

Always Verify Examples When Comparing DB Products (PostgreSQL and MySQL)

Latest MySQL Performance Blog posts - September 7, 2017 - 8:45am

In this blog post, I’ll look at a comparison of PostgreSQL and MySQL.

I came across a post from Hans-Juergen Schoenig, a Postgres consultant at Cybertec. In it, he dismissed MySQL and showed Postgres as better. While his post ignores most of the reasons why MySQL is better, I will focus on where his post is less than accurate. Testing for MySQL was done with Percona Server 5.7, defaults.

Mr. Schoenig complains that MySQL changes data types automatically. He claims inserting 1234.5678 into a numeric(4, 2) column on Postgres produces an error, and that MySQL just rounds the number to fit. In my testing I found this to be a false claim:

mysql> CREATE TABLE data (
    ->   id integer NOT NULL,
    ->   data numeric(4, 2));
Query OK, 0 rows affected (0.07 sec)

mysql> INSERT INTO data VALUES (1, 1234.5678);
ERROR 1264 (22003): Out of range value for column 'data' at row 1

His next claim is that MySQL allows updating a key column to NULL and silently changes it to 0. This is also false:

mysql> INSERT INTO data VALUES (1, 12);
Query OK, 1 row affected (0.00 sec)

mysql> UPDATE data SET id = NULL WHERE id = 1;
ERROR 1048 (23000): Column 'id' cannot be null

In the original post, we never see the warnings, so we don’t have the full details of his environment. Since he didn’t specify which version he was testing, I will point out that MySQL 5.7 does a far better job of handling your data out of the box than 5.6 does, and SQL Mode has existed in MySQL for ages. Any user could set it to STRICT_ALL_TABLES or STRICT_TRANS_TABLES and get the behavior that is now the default in 5.7.
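If you want to reproduce the older, permissive behavior he may have been seeing, toggling the SQL mode in a session is enough to show the difference. This is a sketch against the data table created above; the exact warning text can vary by version:

mysql> SET SESSION sql_mode = '';                     -- permissive, pre-5.7-style behavior
mysql> INSERT INTO data VALUES (2, 1234.5678);
Query OK, 1 row affected, 1 warning (0.00 sec)        -- value is clipped to fit numeric(4, 2), with a warning

mysql> SET SESSION sql_mode = 'STRICT_ALL_TABLES';    -- strict mode (the 5.7 defaults include STRICT_TRANS_TABLES)
mysql> INSERT INTO data VALUES (3, 1234.5678);
ERROR 1264 (22003): Out of range value for column 'data' at row 1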

The author is also focusing on a narrow issue, using it to say Postgres is better. I feel this is misleading. I could point out factors in MySQL that are better than in Postgres as well.

This is another case of “don’t necessarily take our word for it”. A simple test of what you see on a blog can help you understand how things work in your environment and why.

many holes in pmm grafana (pmm-admin check-network --no-emoji error sometimes)

Latest Forum Posts - September 7, 2017 - 2:50am
Hello,

I use PMM (version 1.0.6) for monitoring MySQL/MariaDB + the Linux OS.
When I add more than 20 client servers for monitoring, I sometimes get holes in the PMM Grafana graphs, and at the same time pmm-admin check-network --no-emoji reports errors like the following on all the clients:

* Client <-- Server
--------------  ----------------  ----------------------  --------
SERVICE TYPE    NAME              REMOTE ENDPOINT         STATUS
--------------  ----------------  ----------------------  --------
linux:metrics   dl360s-62         10.198.234.139:42000    PROBLEM
mysql:metrics   MAD_FUNDTP2_PRD   10.198.234.139:42002    PROBLEM

These holes (errors) can last from one minute to thirty minutes, even though everything is fine on the app server, the network, the firewall and all the clients.
Then everything works again, and an hour (or a few dozen minutes) later the error (hole) comes back, and so on.

I didn't have this problem when I was monitoring fewer client servers.

Does anyone know what's the matter with PMM?

Thanks a lot
Regards
Pierre