
This Week in Data with Colin Charles 27: Percona Live Tutorials Released and a Comprehensive Review of the FOSDEM MySQL DevRoom

Latest MySQL Performance Blog posts - February 9, 2018 - 5:19am

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

Percona Live Santa Clara 2018 update: tutorials have been announced. The committee rated over 300 talks, and easily 70% of the schedule should go live next week as well. In practice, then, you should see about 50 talks announced next week. There’s been great competition: we only have 70 slots in total, so about 1 in 5 talks get picked — talk about a competitive ratio.

FOSDEM

FOSDEM was truly awesome last week. From a Percona standpoint, we had a lot of excellent booth traffic (being outside of the PostgreSQL room on Saturday, and not too far from the MySQL room on Sunday). We gave away bottle openers (useful in Brussels with all the beer; we tried a new design with a magnet so you can attach it to your fridge), stickers, and some brochures, but most of all we had plenty of great conversations. There was quite a crowd from Percona, and it was excellent to see the MySQL & Friends DevRoom almost constantly full! A few of us brave souls managed to stay there the whole day, with barely any breaks, so as to enjoy all the talks.

I found the quality of the talks to be extremely high. When it comes to a community-run event, with all content picked by an independent program committee, FOSDEM really sets the bar. There is plenty of competition to get a good talk in, and I enjoyed everything we picked (yes, I was on the committee too). We’ve had plenty of events in the ecosystem that sort of had “MySQL” or related days, but FOSDEM might be the only one that has really survived. I understand we will have a day of some sort at SCALE16x, but even that has been scaled down. So if you care about the MySQL ecosystem, you will really want to ensure that you are at FOSDEM next year.

This year, we started with the usual MySQL Day on Friday. I could not be present, as I was at the CentOS Dojo, giving a presentation. So, the highlight of Friday for me? The community dinner. Over 80 people showed up, I know there was a waiting list, and lots of people were trying to get tickets at the last minute. Many missed out too; sorry, and better luck next year; hopefully, we will get a larger venue going forward. I really thank the organizers for this — we affectionately refer to them as the Belconians (i.e. a handful of Perconians based in Belgium). The conversation, the food, the drink — they were all excellent. It’s good to see representation from all parts of the community: MySQL, Percona, MariaDB, Pythian, and others. So thank you again, Liz, Dimitri, Tom, and Kenny in absentia. I think Tjerk also deserves special mention for always helping (this year with the drinks).

As for FOSDEM itself, beyond the booth, I think the most interesting stuff was the talks. There are video recordings and slides of pretty much all talks, but I will also give you the “Cliff’s Notes” of them here.

MySQL DevRoom talk quick summaries

Beyond WHERE and GROUP BY – Sergei Golubchik
  • EXCEPT is in MariaDB Server 10.3
  • recursive CTEs are good for hierarchical data, graphs, data generation, and are Turing complete (you can even use them to solve Sudoku); see the sketch after this list
  • non-recursive CTEs can be an alternative syntax for subqueries in the FROM clause
  • Window functions:
    • Normal: one result per row, depend on that row only
    • Aggregate: one result per group, depending on the whole group
    • Window: one result per row, depending on the whole group
  • System versioned tables with AS OF
  • Aggregate stored functions
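To make the CTE and window-function bullets concrete, here is a minimal SQL sketch against a hypothetical employees(id, name, manager_id, salary) table (the table and column names are assumptions; the syntax works on MariaDB Server 10.2+ and MySQL 8.0):

-- Recursive CTE: walk a reporting hierarchy starting from employee 1
WITH RECURSIVE reports AS (
    SELECT id, name, manager_id FROM employees WHERE id = 1
    UNION ALL
    SELECT e.id, e.name, e.manager_id
    FROM employees e JOIN reports r ON e.manager_id = r.id
)
SELECT * FROM reports;

-- Window function: one result per row, computed over the whole group
SELECT name, salary,
       AVG(salary) OVER (PARTITION BY manager_id) AS team_avg
FROM employees;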
MySQL 8.0 Performance: InnoDB Re-Design – Dimitri Kravtchuk
  • Contention-Aware Transaction Scheduling (CATS), since 8.0.3. Not all transactions are equal; FIFO may not be optimal, so unblock the most blocking transactions first
  • CATS (VATS) had a few issues, and there were bugs (they thought everything worked since MariaDB Server had implemented it). They spent about 9 months before fixing everything.
  • Where does CATS help? Workloads hitting row lock contentions. You can monitor via SHOW ENGINE INNODB MUTEX.
  • The main problem stems from repeatable read versus read committed transaction isolation on the same workload. You really need to understand your workload when it comes to VATS.
MySQL 8.0 Roles – Giuseppe Maxia
  • Created like a user, granted like privileges. You need to activate them to use them.
  • Before roles, you created a user, then grant, grant, and more grants… Add another user? Same deal. Lots of repetitive work and a lot of chances to make mistakes.
  • Faster user administration – define a role, assign it many times. Centralized grant handling – grant and revoke privileges to roles, add/edit all user profiles.
  • You need to remember to set the default role.
  • A user can have many roles; default role can be a list of roles.
  • Roles are users without a login – roles are saved in user tables. This is useful from an account lock/unlock perspective.
  • You can grant a user to a user
  • SET ROLE is for session management; SET DEFAULT ROLE is a permanent assignment of a role for a user. SET ROLE DEFAULT means assign the default role for this user for this session (see the sketch after this list)
  • The role_edges table reports which roles are assigned to which users. default_roles keeps track of the current default roles assigned to users. A default role may not exist.
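A minimal sketch of the role workflow described above (the role, user, and schema names are assumptions):

CREATE ROLE app_read;
GRANT SELECT ON app_db.* TO app_read;

CREATE USER 'alice'@'%' IDENTIFIED BY 'secret';
GRANT app_read TO 'alice'@'%';
SET DEFAULT ROLE app_read TO 'alice'@'%';  -- permanent assignment

-- in alice's session:
SET ROLE DEFAULT;        -- activate the default role for this session
SELECT CURRENT_ROLE();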
Histogram support in MySQL 8.0 – Øystein Grøvlen
  • You can now do ANALYZE TABLE table UPDATE HISTOGRAM ON column WITH n BUCKETS;
  • New storage engine API for sampling (default implementation is full table scan even when sampling)
  • The histogram is stored in a JSON column in the data dictionary. Grab it from the INFORMATION_SCHEMA (see the sketch after this list).
  • Histograms are useful for columns that are not the first column of any index, and used in WHERE conditions of JOIN queries, queries with IN-subqueries, ORDER BY … LIMIT queries. Best fit: low cardinality columns (e.g. gender, orderStatus, dayOfWeek, enums), columns with uneven distribution (skew), stable distribution (do not change much over time)
  • How many buckets? With equi-height histograms, 100 buckets should be enough.
  • Histograms are stored in the data dictionary, so will persist over restarts of course.
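A minimal sketch using a hypothetical orders table with a low-cardinality orderStatus column (both names are assumptions):

ANALYZE TABLE orders UPDATE HISTOGRAM ON orderStatus WITH 100 BUCKETS;

-- inspect the stored histogram via the data dictionary view
SELECT HISTOGRAM->>'$."histogram-type"' AS type,
       JSON_LENGTH(HISTOGRAM->'$.buckets') AS buckets
FROM INFORMATION_SCHEMA.COLUMN_STATISTICS
WHERE TABLE_NAME = 'orders' AND COLUMN_NAME = 'orderStatus';

-- drop it when it is no longer useful
ANALYZE TABLE orders DROP HISTOGRAM ON orderStatus;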
Let’s talk database optimizers – Vicențiu Ciorbaru
TLS for MySQL at Large Scale – Jaime Crespo
  • Literally took 3 lines in the my.cnf to turn on TLS (see the sketch after this list)
  • https://dbtree.wikimedia.org
  • They wanted to do a data centre failover and wanted to ensure replication would be encrypted.
  • They didn’t have proper orchestration in place (MySQL could have this too). Every time OpenSSL or MySQL had to be upgraded, the daemon needed restarting. If there was an incompatible change, you had to sync master/replicas too.
  • The automation and orchestration that Wikipedia uses: https://fosdem.org/2018/schedule/event/cumin_automation/ (it is called Cumin: https://wikitech.wikimedia.org/wiki/Cumin)
  • Server-side support was poor – OpenSSL – so they had to deploy their own wmf-mysql and wmf-mariadb packages
  • Currently using MariaDB 10.0, and looking to migrate to MariaDB 10.1
  • Client library pain they’ve had
  • TLSv1.2 from the beginning (2015).
  • 20-50x slower for actual connecting; the impact is less than 5% for the actual query performance. Just fix client libraries, make them use persistent connections. They are now very interested in ProxySQL for this purpose.
  • https://grafana.wikimedia.org/?orgId=1
  • Monty asks, would a double certificate help? Jaime says sure, but he may not actually use double certificates; it might not solve the CA issues, and the goal is not to restart the server.
  • Monty wonders why not upgrade to 10.2? “Let’s talk outside because it’s a much larger question.”
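For reference, the “3 lines” typically look like this in my.cnf (the certificate paths are assumptions):

[mysqld]
ssl-ca   = /etc/mysql/ssl/ca.pem
ssl-cert = /etc/mysql/ssl/server-cert.pem
ssl-key  = /etc/mysql/ssl/server-key.pem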
MySQL InnoDB Cluster – Miguel Araújo
  • group replication: update everywhere (multi-master), virtually synchronous replication, automatic server failover, distributed recovery, group reconfiguration, GCS (implementation of Paxos – group communication system). HA is a critical factor.
  • mysqlsh: interactive and batch operations. Document store (CRUD and relational access)
  • admin API in mysqlsh: create & manage clusters, hide complexity of configuration/provisioning/orchestration of the InnoDB clusters. Works with JavaScript and Python
  • Usability. HA out of the box.
  • It’s easy to join a new node; new node goes into recovery mode (and as long as you have all the binary logs, this is easy; otherwise start from a backup)
  • SET PERSIST – run a command remotely, and the configuration is persisted in the server
  • Network flapping? Group replication will just reject the node from the cluster if it’s flapping too often
Why we’re excited about MySQL 8 – Peter Zaitsev
  • Native data dictionary – atomic, crash safe, DDLs, no more MyISAM system table requirements
  • Fast INFORMATION_SCHEMA
  • utf8mb4 as default character set
  • Security: roles, breakdown of SUPER privileges, password history, faster cached SHA-2 authentication (default), builds using OpenSSL (like Percona Server), skip-grant-tables blocks remote connections, logs now encrypted when tablespace encryption enabled
  • Persistent AUTO_INCREMENT
  • auto-managed undo tablespaces – do not use system table space for undo space. Automatically reclaim space on disks.
  • Self-tuning, limited to InnoDB (innodb_dedicated_server to auto-tune)
  • partial in-place update for JSON – update a field in a JSON object without a full rewrite. Good for counters/statuses/timestamps. Update/removal of an element is supported
  • Invisible indexes – test impact of dropping indexes before actually dropping them. Maintained but unused by the optimizer. If not needed or used, then drop away.
  • TmpTable Storage Engine – more efficient storage engine for internal temporary tables. Efficient storage for VARCHAR and VARBINARY columns. Good for GROUP BY queries. Doesn’t support BLOB/TEXT columns yet (this reverts to InnoDB temp table now)
  • Backup locks – prevent operations which may result in inconsistent backups. LOCK INSTANCE FOR BACKUP (something Percona Server has had before)
  • Optimizer histograms – detailed statistics on columns, not just indexes
  • improved cost model for the optimizer – www.unofficialmysqlguide.com
  • Performance Schema – faster (via “fake” indexes), error instrumentation, response time histograms (global & per query), digest summaries
  • select * from sys.session – fast potential replacement for show processlist
  • RESTART (command)
  • SET PERSIST – e.g., change the buffer pool size, and the setting survives a restart (see the sketch after this list)
  • assumes default storage is SSD now
  • binary log on by default, log_slave_updates enabled by default, and log expires after 30 days by default
  • query cache removed. Look at ProxySQL or some other caching solution
  • native partitioning only – remove partitions from MyISAM or convert to InnoDB
  • resource groups – isolation and better performance (map queries to specific CPU cores; can jail your costly queries, like analytical queries)
  • Feature Requests: better single thread performance, no parallel query support
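A minimal sketch of three of the features above (the table and index names are assumptions):

-- invisible index: hide it from the optimizer before actually dropping it
ALTER TABLE orders ALTER INDEX idx_created_at INVISIBLE;
-- ...observe the workload, then drop it or make it visible again
ALTER TABLE orders ALTER INDEX idx_created_at VISIBLE;

-- SET PERSIST: the change survives a restart
SET PERSIST innodb_buffer_pool_size = 8 * 1024 * 1024 * 1024;

-- RESTART: restart the server from the SQL prompt
RESTART;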
MySQL Test Framework for Support and Bugs Work – Sveta Smirnova
  • MTR allows you to add multiple connections
  • has commands for flow control (see the sketch after this list)
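A minimal sketch of an MTR (mysqltest) test showing both capabilities; the test content itself is an assumption:

# open two client connections
connect (conn1, localhost, root,,);
connect (conn2, localhost, root,,);

connection conn1;
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES (1);

connection conn2;
SELECT * FROM t1;

# flow control: a simple counted loop
let $i = 3;
while ($i)
{
  --echo iteration $i
  dec $i;
}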
ProxySQL – GTID Consistent Reads – René Cannaò, Nick Vyzas
  • threshold is configurable in increments of 1 second. Replication lag can be monitored with ProxySQL. Want to ensure you don’t have stale reads.
  • Why is GTID important? To guarantee consistency. Auto-positioning for restructuring topologies.
  • --session-track-gtids is an important feature which allows sending the GTID for a transaction on the OK packet. Not available in MariaDB.
  • There is a ProxySQL Binlog Reader now – it streams GTID information from a MySQL server to all connected ProxySQL instances. It is a lightweight process that runs on your MySQL server.
  • ProxySQL can be configured to enforce GTID consistency for reads on any hostgroup/replication hostgroup.
  • Live demo by René
Turbocharging MySQL with Vitess – Sugu Sougoumarane
  • trend for the cloud: container instances, short-lived containers, tolerate neighbors, discoverability. No good tools yet for Kubernetes.
  • non-ideal options: application sharding, NoSQL, paid solutions, NewSQL (CockroachDB, TiDB, Yugabyte)
  • Vitess: leverage MySQL at massive scale, opensource, 8+ years of work, and multiple production examples
  • Square uses Vitess for Square Cash application.
  • Can MySQL run on Docker? Absolutely, many of the companies do huge QPS on Docker.
  • YouTube does a major re-shard every 2-3 months. No one notices nowadays when that happens.
  • the app server connects to vtgate, and underneath it’s a bunch of smaller databases with vttablet + mysqld. The lockserver is what makes it run well in the cloud.
  • pluggable architecture with no compromise on performance: monitoring, health check, ACLs, tracing, more.
  • at most, it adds about 2ms overhead to connections
  • Go coding standards are enforced, unit tests with strict coverage requirements, end-to-end tests, Travis, CodeClimate and Netlify. Readability is king.
  • On February 5, 2018, it became a CNCF project, after one year of due diligence. They said there was nothing to compare it with; they looked at maturity and contributors. It’s becoming a truly community-owned project! (“CNCF to Host Vitess” is already live as of now)
  • roadmap: full cross-shard queries, migration tools, simplify configurability, documentation.
  • full MySQL protocol, but a limited query set – they want to get it to a point where it accepts a full MySQL query.
Orchestrator on Raft – Shlomi Noach
  • Raft: a guaranteed in-order replication log with an increasing index. This is how nodes choose a leader: based on who has the highest index. Get periodic snapshots (the node runs a full backup).
  • HashiCorp raft, a Golang raft implementation, used by Consul
  • orchestrator manages topology for HA topologies; also want orchestrator to be highly available. Now with orchestrator/raft, remove the MySQL backend dependency, and you can have data center fencing too. Now you get: better cross-DC deploys, DC-local KV control, and also Kubernetes friendly.
  • n orchestrator nodes; each node still runs its own backend (either MySQL or SQLite). Orchestrator provides the communication for SQLite between the nodes. Only one (the Raft leader) will handle failovers
  • implementation & deployment @ Github – one node per DC (deployed at 3 different DCs). 1-second raft polling interval. 2 major DCs, one in the cloud. Step-down, raft-yield, SQLite-backed log store, and still a MySQL backend (SQLite backend use case is in the works)
  • They patched the HashiCorp raft library. The library doesn’t care about the identity of nodes, with Github they do want to control the identity of the leader. There is an “active” data center, and locality is important. This is what they mean by raft-yield (picking a candidate leader).
  • The ability for a leader to step down is also something they had to patch.
  • HashiCorp Raft only supports LMDB and another database, so the replication log is now kept in a relational SQLite backed log store. Another patch.
  • once orchestrator can’t run its own self-health check, it recognizes this. The application can tell raft now that it’s stepping down. Takes 5 seconds to step down, and raft then promotes another orchestrator node to be the leader. This is their patch.
  • can also grab leadership
  • DC fencing handles network partitioning.
  • orchestrator is Consul-aware. Upon failover, orchestrator updates Consul KV with the identity of the promoted master.
  • considerations to watch out for: what happens if, upon replay of the Raft log, you hit two failovers for the same cluster? NOW() and otherwise time-based assumptions. Reapplying snapshot/log upon startup
  • roadmap: use Kubernetes (cluster IP based configuration in progress, already container friendly via auto-re-provisioning of nodes via Raft)
MyRocks Roadmaps – Yoshinori Matsunobu
  • Facebook has a large User Database (UDB). Social graph, massively sharded, low latency, automated operations, pure flash storage (constrained by space, not CPU/IOPS)
  • They have a record cache in front of MySQL – TAO for reads. If the cache misses, then it hits the database. All write requests go through MySQL. UDB has to be fast to ensure a good user experience.
  • At Facebook they also run 2 instances of MySQL on the same machine, because CPU utilization wasn’t huge, but the space savings were awesome.
  • design decisions: clustered index (same as InnoDB), slower for reads, faster for writes (bloom filters, column families), support for transactions including consistency between the binlog and MyRocks. Faster data loading/deletes/replication, dynamic options (instead of having to restart mysqld), TTL (comparable to the HBase TTL feature: specify the TTL, and any data older than that can be removed), online logical (for recovery purposes) & binary backup (for creating replicas)
  • Pros: smaller space, better cache hit rate, writes are faster so you get faster replication, much smaller bytes written
  • Cons: no statement-based replication, gap locks, foreign keys, full-text indexes, or spatial index support. Need to use case-sensitive collations for performance. Reads are slower, especially if the data fits in memory. Dependent on file system and OS; lack of solid direct I/O (uses buffered I/O). You need a kernel newer than 4.6. Too many tuning options beyond the buffer pool, such as bloom filters, compactions, etc.
  • https://twitter.com/deniszh/status/960163082642382849
  • Completed InnoDB to MyRocks migration. Saved 50% space in UDB compared to compressed InnoDB.
  • Roadmaps: getting into MariaDB and Percona Server for MySQL. Read Mark’s blog for matching read performance vs. InnoDB. Supporting mixed engines. Better replication and bigger instance sizes.
  • mixed engines: InnoDB and MyRocks on the same instance, though a single transaction cannot span engines. Plan to extend xtrabackup to integrate myrocks_hotbackup. Backport gtid_pos_auto_engines from MariaDB?
  • Removing the engine log. Overhead comes from having both a binlog and an engine log, which requires 2PC and ordered commits. Use one log? Either the binlog, a binlog-like service, or the RocksDB WAL? They rely on the binlog now (semi-sync, binlog consumers), and need to determine how much performance is gained by not writing to the WAL.
  • Parallel replication apply is important in MySQL 8
  • support bigger instance sizes: a shared-nothing database is not a general-purpose database. Today you can get 256GB+ RAM and 10TB+ flash on commodity servers. Why not run one big instance and put everything there? Bigger instances may help general-purpose small-to-mid applications. Then you don’t have to worry about sharding. Atomic transactions, joins and secondary keys will just work. Amazon Aurora today supports a 60TB instance!
  • today: you can start deploying slaves with consistency check. Many status counters for instance monitoring.
ProxySQL internals – René Cannaò
  • reduce latency, scale, maximize throughput. A single instance can handle hundreds of thousands of connections and thousands of backend servers.
  • threading models: one thread per connection (blocking I/O), thread pooling (non-blocking I/O, scalable).
  • ProxySQL thread pool implementation: known as “MySQL threads”, a fixed number of worker threads (configurable), all threads listen on the same port(s), client connections are not shared between threads, all threads perform their own network I/O, and it uses poll() (does that scale? there is a reason poll() was chosen over epoll(); see below)
  • threads never share client connections – no need for synchronization, thread contention is reduced, each thread calls poll(). Possibly imbalanced load as a con (one thread can have way more connections than another). Is it really a problem? Most of the time, no; connections will automatically balance.
  • poll() is O(N), epoll() is O(1). poll() is faster than epoll() for fewer connections (around 1,000), but performance degrades when there are a lot of connections. That is why ProxySQL uses poll() by default; at around 50,000 connections performance degrades badly, so ProxySQL has auxiliary threads.
  • MySQL_Session() is implemented as a state machine. Stores metadata associated with the client session (running timers, default hostgroup, etc.)
MySQL Point-in-time recovery like a rockstar – Frederic Descamps

Releases
  • Percona Monitoring and Management 1.7.0 (PMM) – This release features improved support for external services, which enables a PMM Server to store and display metrics for any available Prometheus exporter. For example, you could deploy the postgres_exporter and use PMM’s external services feature to store PostgreSQL metrics in PMM. Immediately, you’ll see these new metrics in the Advanced Data Exploration dashboard. Then you could leverage many of the pre-developed PostgreSQL dashboards available on Grafana.com, and with a minimal amount of edits have a working PostgreSQL dashboard in PMM!
  • MariaDB Server 10.1.31 – usual updates to storage engines, and a handful of bug fixes.
Link List

Upcoming appearances
  • SCALE16x – Pasadena, California, USA – March 8-11 2018
Feedback

I look forward to feedback/tips via e-mail at colin.charles@percona.com or on Twitter @bytebot.

'Command Not Found' during installation process

Latest Forum Posts - February 9, 2018 - 4:16am
I am new to Linux and generally use Windows, but this service is only for Linux, so perhaps someone can help me figure out what I am doing wrong in this scenario.

I have followed the guide and instructions for installing Percona XtraDB Cluster to a tee. However, I get hung up when I get to bootstrapping the first node. According to the documentation, it says to do this:

Instead of changing the configuration, start the first node using the following command:

[root@pxc1 ~]# /etc/init.d/mysql bootstrap-pxc

I input that information into my terminal and I just get: Command Not Found.

$ [root@pxc1 ~]# /etc/init.d/mysql bootstrap-pxc1
[root@pxc1: command not found

What am I doing wrong? I copied and pasted it, so it can't be a misspelling. Googling the issue, I didn't come up with any results.

Fsync Performance on Storage Devices

Latest MySQL Performance Blog posts - February 8, 2018 - 7:53am

While preparing a post on the design of ZFS based servers for use with MySQL, I stumbled on the topic of fsync call performance. The fsync call is very expensive, but it is essential to databases as it allows for durability (the “D” of the ACID acronym).

Let’s first review the type of disk IO operations executed by InnoDB in MySQL. I’ll assume the default InnoDB variable values.

The first and most obvious type of IO is page reads and writes in the tablespaces. Pages are most often read one at a time, as 16KB random read operations. Writes to the tablespaces are also typically 16KB random operations, but they are done in batches. After every batch, fsync is called on the tablespace file handle.

To avoid partially written pages in the tablespaces (a source of data corruption), InnoDB performs a doublewrite. During a doublewrite operation, a batch of dirty pages, from 1 to about 100 pages, is first written sequentially to the doublewrite buffer and fsynced. The doublewrite buffer is a fixed area of the ibdata1 file, or a specific file with the latest Percona Server for MySQL 5.7. Only then do the writes to the tablespaces of the previous paragraph occur.

That leaves us with the writes to the InnoDB log files. During those writes, the transaction information — a kind of binary diff of the affected pages — is written to the log files and then the log file is fsynced. The duration of the fsync call can be a major contributor to the COMMIT latency.

Because the fsync call takes time, it greatly affects the performance of MySQL; this is why there are so many status variables that relate to fsyncs. To overcome the inherent limitations of the storage devices, group commit allows multiple simultaneous transactions to fsync the log file once for all the transactions waiting for the fsync. There is no need for a transaction to call fsync for a write operation that another transaction already forced to disk. Note, however, that a series of write transactions sent over a single database connection cannot benefit from group commit.

Fsync Results

In order to evaluate the fsync performance, I used the following Python script:

#!/usr/bin/python
import os, sys, mmap

# Open a file with O_DIRECT (which requires an aligned buffer, hence mmap)
fd = os.open("testfile", os.O_RDWR | os.O_CREAT | os.O_DIRECT)
m = mmap.mmap(-1, 512)

for i in range(1, 1000):
    os.lseek(fd, 0, os.SEEK_SET)  # rewind to the start of the file
    m[1] = "1"
    os.write(fd, m)
    os.fsync(fd)

# Close the opened file
os.close(fd)

The script opens a file with the O_DIRECT flag, writes to it and fsyncs it 1000 times, and closes the file. I added O_DIRECT after an internal discussion with my colleagues, but it doesn’t change the results and it doesn’t remove the need for calling fsync. We’ll discuss the impacts of O_DIRECT in more detail after we review the results. The script is called with the time command like below:

root@lab:/tmp/testfsync# time python /root/fsync.py

real    0m18.320s
user    0m0.060s
sys     0m0.096s

In the above example, using a 7.2k RPM drive, the fsync rate is about 56/s for a latency of 18ms. A 7.2k RPM drive performs 120 rotations per second, so on average the fsyncs require a bit more than two rotations to complete. The filesystem appears to make very little difference: ext4 and XFS show similar results. That means if MySQL uses such storage devices for the InnoDB log files, the latency of each transaction is at least 18ms. If the application workload requires 200 write transactions per second, they’ll need to be executed using at least four database connections.

So, let’s begin with rotational devices. These are becoming a bit less common now with databases, especially without a raid controller. I could only find a few.

Drive               RPM   Rate  Latency  Notes
WDC WD2500BJKT      5400  22/s  45 ms    Laptop SATA from 2009
ST2000LM003         5400  15/s  66 ms    USB-3 portable drive
ST3750528AS         7200  40/s  25 ms    Desktop grade SATA
WD2502ABYS-18B7A0   7200  56/s  18 ms    Desktop grade SATA
HUA723020ALA641     7200  50/s  20 ms    Enterprise grade SATA, md mirror
Dell SAS unknown    7200  58/s  17 ms    Behind Perc ctrl but no write cache
HDWE150             7200  43/s  23 ms    Recent Desktop grade SATA, 5TB

I unfortunately didn’t have access to any 10k or 15k RPM drives that were not behind a RAID controller with a write cache. If you have access to such drives, run the above script a few times and send me your results; that would help create a more complete picture! So, we can see a correlation between the rotational speed and the fsync rate, which makes sense. The faster a disk turns, the faster it can fsync. The fsync call saves the data and then updates the metadata, so the heads need to move; that’s probably the main explanation for the remaining disparity. A good point: all drives appear to fully comply with the SATA flush command, even though they all have an enabled write cache. Disabling the drives’ write caches made no difference.

With the above numbers, the possible transaction rates in fully ACID mode are pretty depressing. But those were rotating drives; what about SSDs? SSDs are memory devices and are much faster for random IO operations. They are extremely fast for reads, and good for writes. But as you will see below, not that great for fsyncs.

Drive                  Rate    Latency  Notes
SAMSUNG MZ7LN512       160/s   6.3 ms   Consumer grade SATA
Crucial_CT480M500SSD1  108/s   9.3 ms   Consumer grade SATA
Intel 520              2031/s  0.49 ms  Consumer grade SATA
SAMSUNG MZVPV512HDGL   104/s   9.6 ms   Consumer grade NVMe
Samsung SSD 960 PRO    267/s   3.8 ms   High-end consumer grade NVMe
Intel PC-3100          1274/s  0.79 ms  Low-end consumer grade NVMe (cheat?)
Intel 750              2038/s  0.49 ms  High-end consumer grade NVMe
Intel PC-3700          7380/s  0.14 ms  High-end enterprise-grade NVMe

Again, this is a small sample of the devices I have access to. All SSD/flash devices have write caches, but only the high-end devices have capacitors to flush their write cache to the flash upon a loss of power. The PC-3100 device is actually in my home server, and it is obviously cheating. If you look at the card specs on the Intel website, it doesn’t have the “Enhanced Power Loss Data Protection” and “End-to-End Data Protection” features. The much more expensive PC-3700 does. I use the PC-3100 as a ZFS L2ARC device, so I am good. In general, the performance of a flash device varies a bit more than that of rotational devices, since factors like the number of recent writes and the filling factor come into play.

Even when using a high-end NVMe device like the PC-3700, you can’t reach 10k fully ACID transactions per second at low thread concurrency. How do you reach the higher levels? The answer here is the good old raid controller with a protected write cache. The write cache is basically using DRAM memory protected from power loss by a battery. SAN controllers have similar caches. The writes to the InnoDB log files are sequential writes interleaved with fsyncs. The raid controller concatenates the sequential writes, eventually writing one big chunk on disk and… ignoring the fsyncs. Here’s the result from the only device I had access to:

Drive               Rate     Latency  Notes
Dell Perc with BBU  23000/s  0.04 ms  Array of 7.2k rpm drives

That’s extremely fast but, of course, it is memory. I modified the script to loop 10k times instead of 1k. In theory, something like a single slave thread doing simple transactions could reach a rate of 20k/s or more while being fully ACID.

Discussion

We must always consider the results of the previous section in the context of a given application. For example, a server using an Intel PC-3700 NVMe card can do more than 7000 fully ACID transactions per second while remaining fully durable, provided those transactions are issued by a sufficient number of threads. Adding threads will not allow infinite scaling, though. At some point, other bottlenecks like mutex contention or page flushing will dominate.

We often say that Galera-based cluster solutions like Percona XtraDB Cluster (PXC) add latency to the transactions, since they involve communication over the network. With the Galera protocol, a commit operation returns only when all the nodes have received the data. Thus, it is a good practice to relax the local durability and set innodb_flush_log_at_trx_commit to 0 or 2. On a local network, the ping time is always below 1ms and often below 0.1ms. As a result, the transaction latency is often smaller.
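A minimal my.cnf sketch of that practice on a PXC node (relaxing local durability is acceptable here only because Galera replication, not the local disk, provides the durability guarantee):

[mysqld]
# flush the InnoDB log to disk about once per second instead of at every commit
innodb_flush_log_at_trx_commit = 2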

About fdatasync

The fsync system call is not the only one that persists data to disk. There is also the fdatasync call. fdatasync persists the data to disk but does not update metadata information like the file size and last update time. Said otherwise, it performs one write operation instead of two. In the Python script, I replaced os.fsync with os.fdatasync; the loop body then differs by a single call:
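    os.write(fd, m)
    os.fdatasync(fd)  # instead of os.fsync(fd): the metadata-only flush is skipped

Here are the results for a subset of devices: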

Drive                  RPM   Rate   Latency  Notes
ST2000LM003            5400  72/s   13 ms    USB-3 portable drive
WD2502ABYS-18B7A0      7200  118/s  8.5 ms   Desktop grade SATA
SAMSUNG MZ7LN512       N/A   333/s  3.0 ms   Consumer grade SATA
Crucial_CT480M500SSD1  N/A   213/s  4.7 ms   Consumer grade SATA
Samsung SSD 960 PRO    N/A   714/s  1.4 ms   High-end consumer grade NVMe

In all cases, the resulting rates more than doubled. The fdatasync call has a troubled history, as there were issues with it many years ago. Because of those issues, InnoDB never uses fdatasync, only fsync. You can find the following comment in the InnoDB source file os/os0file.cc:

/* We let O_SYNC only affect log files; note that we map O_DSYNC to O_SYNC because the datasync options seemed to corrupt files in 2001 in both Linux and Solaris */

2001 is a long time ago. Given the above results, maybe we should reconsider the use of fdatasync. In the Linux man page for fdatasync, you find:

fdatasync() is similar to fsync(), but does not flush modified metadata unless that metadata is needed in order to allow a subsequent data retrieval to be correctly handled. For example, changes to st_atime or st_mtime (respectively, time of last access and time of last modification; see stat(2)) do not require flushing because they are not necessary for a subsequent data read to be handled correctly. On the other hand, a change to the file size (st_size, as made by say ftruncate(2)), would require a metadata flush.

So, even with fdatasync, operations like extending an InnoDB tablespace will update the metadata correctly. This appears to be interesting low-hanging fruit in terms of MySQL performance. In fact, WebScaleSQL already has fdatasync available.

O_DIRECT

Why do we need an fsync or fdatasync with O_DIRECT? With O_DIRECT, the OS is not buffering anything along the way, so the data should be persisted, right? Actually, the OS is not buffering, but the device very likely is. Here are a few results to highlight the point, using a 7.2k RPM SATA drive:

Test                                     Rate    Latency
O_DIRECT, drive write cache enabled      4651/s  0.22 ms
O_DIRECT, drive write cache disabled     101/s   9.9 ms
ASYNC + fdatasync, write cache enabled   119/s   8.4 ms
ASYNC + fdatasync, write cache disabled  117/s   8.5 ms

The drive write cache was enabled/disabled using the hdparm command. Clearly, there’s no way the drive can persist 4651 writes per second: O_DIRECT doesn’t send the SATA flush command to the disk, so we are only writing to the drive write cache. If the drive write cache is disabled, the rate falls to a more reasonable value of 101/s. What is interesting — and I don’t really understand why — is that opening the file in async mode and performing fdatasync is significantly faster. As expected, the presence of the drive write cache has no impact on ASYNC + fdatasync: when the fdatasync call occurs, the data is still in the OS file cache.
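For reference, the drive write cache is typically toggled like this with hdparm (the device name is an assumption):

# disable the drive write cache
hdparm -W0 /dev/sda
# re-enable it
hdparm -W1 /dev/sda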

If you want to rely on O_DIRECT alone, you should make sure all the storage write caches are crash safe. That’s why MySQL adds an fsync call after a write to a file opened with O_DIRECT.

ZFS

These days, I find it difficult to write a blog post without talking about ZFS. How does ZFS handle fsyncs and fdatasyncs? ZFS, like a database, performs write-ahead logging in the ZIL. That means calls like fsync and fdatasync return when the data has been persisted to the ZIL, and not to the actual filesystem. The real write operation is done a few seconds later by a background thread. That means the added write for the metadata does not impact performance right away. My home server uses ZFS over a pair of 7.2k RPM drives and doesn’t have a SLOG device. The ZIL is thus stored on the 7.2k RPM drives. The results are the following:

Test           RPM   Rate   Latency
ZFS fsync      7200  104/s  9.6 ms
ZFS fdatasync  7200  107/s  9.3 ms

Remember that with ZFS, you need to disable the O_DIRECT mode. The fdatasync rate appears to be slightly faster, but it is not really significant. With ZFS, the fsync/fdatasync performance relates to where the ZIL is stored. If there is no SLOG device, the ZIL is stored with the data, and thus the persistence performance of the devices used for the data matters. If there is a SLOG device, the persistence performance is governed by the specs of the device(s) on which the SLOG is located. That’s a very important aspect we have to consider when designing a MySQL server that will use ZFS. The design of such a server will be discussed in more detail in a future post.

pt-archiver does not honor original where clause when used with Index hint

Latest Forum Posts - February 8, 2018 - 7:20am
I'm purging a table using pt-archiver, using a where clause of "deleted=1 and last_activity <= date_sub(now(), interval 30 day)". This was very slow, as the select statement was using the unique key index on the table. I tested the performance of the select query by passing an index hint for idx_deleted, which is an index on the deleted column, and this was much faster. So I made use of the -i parameter to use the index on the deleted column. The archiver job was running much faster and not showing up in the slow query log at all. However, I noticed that the actual statement running was "DELETE from table where deleted=1 LIMIT 1". And on checking the data, I see that this statement was actually deleting records without honoring the last_activity date!! Is this a bug? The version I'm using is pt-archiver 2.2.5.

Tutorial Schedule for Percona Live 2018 Is Live

Latest MySQL Performance Blog posts - February 8, 2018 - 4:58am

Percona has revealed the line-up of in-depth tutorials for the Percona Live 2018 Open Source Database Conference, taking place April 23-25, 2018 at the Santa Clara Convention Center in Santa Clara, Calif. Secure your spot now with Advanced Registration prices (available until March 4, 2018). Sponsorship opportunities for the conference are still available.

Percona Live 2018 Open Source Database Conference is the premier open source database event. The theme for the upcoming conference is “Championing Open Source Databases,” with a range of topics on MySQL, MongoDB and other open source databases, including time series databases, PostgreSQL and RocksDB. Session tracks include Developers, Operations and Business/Case Studies. 

Tutorials take place throughout the day on April 23, 2018. Tutorials provide practical, in-depth knowledge of critical open source database issues. Topics include:

Hyatt Regency Santa Clara & The Santa Clara Convention Center

Percona Live 2018 Open Source Database Conference will be held at the Hyatt Regency Santa Clara & The Santa Clara Convention Center, at 5101 Great America Parkway Santa Clara, CA 95054.

The Hyatt Regency Santa Clara & The Santa Clara Convention Center is a prime location in the heart of Silicon Valley. Enjoy this spacious venue with complimentary wifi, on-site expert staff and three great restaurants. You can reserve a room by booking through the Hyatt’s dedicated Percona Live reservation site.

Book your hotel using Percona’s special room block rate!

Sponsorships

Sponsorship opportunities for Percona Live 2018 Open Source Database Conference are available and offer the opportunity to interact with the DBAs, sysadmins, developers, CTOs, CEOs, business managers, technology evangelists, solution vendors, and entrepreneurs who typically attend the event. Contact live@percona.com for sponsorship details.

  • Diamond Sponsors – Continuent, VividCortex
  • Gold Sponsors – Facebook, Grafana
  • Bronze Sponsors – SolarWinds, TwinDB, Yelp
  • Media Sponsors – Datanami, EnterpriseTech, HPCWire, ODBMS.org

Percona MySQL 5.7.20-19 default config file override

Latest Forum Posts - February 8, 2018 - 4:14am
I have installed a fresh install of Percona MySQL 5.7.20-19 on my Ubuntu 16.04. I have created an override config file initially in /etc/mysql/conf.d named 000-mysqld.cnf with the following options:

[mysqld]

bind-address = 127.0.0.1
datadir = /home/mysql/data
tmpdir = /dev/shm
sql_mode = IGNORE_SPACE,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION

The only option that gets applied after a mysql service restart is bind-address; the rest are ignored, and the defaults from /etc/mysql/percona-server.conf.d/mysqld.cnf are used.

I tried to move my override file to the /etc/mysql/percona-server.conf.d folder and tried to rename it, but nothing helped; the defaults from mysqld.cnf were still used. If I comment out those options in the default mysqld.cnf file, only then do the options from my override file get applied. My question is: how can I override the default mysqld.cnf file? Is that even possible in Percona Server 5.7?

Slave behavior in case of autocommit set to off

Latest Forum Posts - February 8, 2018 - 4:06am
Hello Guys,

Our master server is configured with autocommit off, as per client requirements. Just wanted to know: if we keep autocommit enabled on the slave server (i.e., different from the master), will the slave still behave as if autocommit is off, because it copies binary logs from the master?

unmet dependencies on ubuntu 16.04

Latest Forum Posts - February 8, 2018 - 3:00am
Hi, I want to use pt-online-schema-change on a remote server; hence, I am trying to install percona-toolkit. However, both using the deb file and apt install report unmet dependencies. Do you have any suggestions about this? I am a bit worried about the unmet dependencies because I don't want to break dependencies accidentally.


I follow the document to update apt repo:

wget https://repo.percona.com/apt/percona-release_0.1-4.$(lsb_release -sc)_all.deb
sudo dpkg -i percona-release_0.1-4.$(lsb_release -sc)_all.deb
sudo apt update
sudo apt-cache search percona
sudo apt-get install percona-toolkit

The unmet dependencies message when installing percona-toolkit:

Reading package lists... Done
Building dependency tree
Reading state information... Done
percona-toolkit is already the newest version (3.0.6-1.xenial).
You might want to run 'apt-get -f install' to correct these:
The following packages have unmet dependencies:
percona-toolkit : Depends: libdbi-perl (>= 1.13) but it is not going to be installed
Depends: libdbd-mysql-perl but it is not going to be installed or
libdbd-mysql-5.1-perl but it is not installable
Depends: libterm-readkey-perl (>= 2.10) but it is not going to be installed
Depends: libio-socket-ssl-perl but it is not going to be installed
E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

Timestamp "invalid default value" - doesn't work on Percona server only

Latest Forum Posts - February 8, 2018 - 1:01am
Is there any other option that I'm missing? Both systems are on GMT time-zone.

Percona Server 5.7.20-19:

mysql> SELECT @@version, @@version_comment;
+---------------+--------------------------------------------------------+
| @@version     | @@version_comment                                      |
+---------------+--------------------------------------------------------+
| 5.7.20-19-log | Percona Server (GPL), Release 19, Revision 3c5d3e5d53c |
+---------------+--------------------------------------------------------+
1 row in set (0.00 sec)

mysql> SELECT @@sql_mode;
+-------------------------------------------------------------------------------------------------------------------------------------------+
| @@sql_mode                                                                                                                                  |
+-------------------------------------------------------------------------------------------------------------------------------------------+
| ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION  |
+-------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

mysql> SELECT @@global.time_zone, @@session.time_zone;
+--------------------+---------------------+
| @@global.time_zone | @@session.time_zone |
+--------------------+---------------------+
| SYSTEM             | SYSTEM              |
+--------------------+---------------------+
1 row in set (0.00 sec)

mysql> CREATE TABLE tbl1 (col1 TIMESTAMP DEFAULT '1970-01-01 00:00:01');
ERROR 1067 (42000): Invalid default value for 'col1'

MySQL 5.7.21:

mysql> SELECT @@version, @@version_comment;
+-----------+------------------------------+
| @@version | @@version_comment            |
+-----------+------------------------------+
| 5.7.21    | MySQL Community Server (GPL) |
+-----------+------------------------------+
1 row in set (0.00 sec)

mysql> SELECT @@sql_mode;
+-------------------------------------------------------------------------------------------------------------------------------------------+
| @@sql_mode                                                                                                                                  |
+-------------------------------------------------------------------------------------------------------------------------------------------+
| ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION  |
+-------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

mysql> SELECT @@global.time_zone, @@session.time_zone;
+--------------------+---------------------+
| @@global.time_zone | @@session.time_zone |
+--------------------+---------------------+
| SYSTEM             | SYSTEM              |
+--------------------+---------------------+
1 row in set (0.00 sec)

mysql> CREATE TABLE tbl1 (col1 TIMESTAMP DEFAULT '1970-01-01 00:00:01');
Query OK, 0 rows affected (0.01 sec)

mysql> SHOW CREATE TABLE tbl1\G
*************************** 1. row ***************************
       Table: tbl1
Create Table: CREATE TABLE `tbl1` (
  `col1` timestamp NOT NULL DEFAULT '1970-01-01 00:00:01'
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

ProxySQL Query Cache: What It Is, How It Works

Latest MySQL Performance Blog posts - February 7, 2018 - 10:40am

In this blog post, I’ll present the ProxySQL query cache functionality. This is a query caching mechanism on top of ProxySQL. As there are already many how-tos regarding the ProxySQL prerequisites and installation process, we are going to skip these steps. For those who are already familiar with ProxySQL query cache configuration, let’s go directly to the query rules and the performance results.

Before talking about the ProxySQL query cache, let’s take a look at other caching mechanisms available for MySQL environments.

MySQL query cache is a query caching mechanism – deprecated as of MySQL 5.7.20 and removed in MySQL 8.0 – on top of MySQL itself (based on the official MySQL documentation).

The MySQL query cache stores the text of a SELECT statement together with the corresponding result sent to the client. If an identical statement is received later, the server retrieves the results from the query cache rather than parsing and executing the statement again. The query cache is shared among sessions, so a result set generated by one client can be sent in response to the same query issued by another client.

Although MySQL query cache is supposed to improve performance, there are cases where MySQL query cache is not scaling well and can degrade performance due to its locking and invalidation algorithms.

You can find a really interesting post regarding MySQL query cache here.

There is also another method to cache results in a MySQL environment: external caching (e.g., Memcached, Redis), but this also has some drawbacks. Introducing such a mechanism requires some changes on the application side.

But what is ProxySQL query cache if there are already the MySQL query cache and other external caching mechanisms? At the moment, although we’ve done some tests, we are not going to compare ProxySQL query cache performance against other caching mechanisms. We’ll address this in a future blog post. We will only focus on ProxySQL itself.

What is ProxySQL Query Cache

ProxySQL query cache is an in-memory key-value storage that uses:

  • as key: a combination of username, schema and query text. It is a hash derived from the username, schema name and the query itself. Combining these ensures that users access only their own resultsets, and for the correct schema.
  • as value: the resultset returned by the backend (mysqld or another proxy).

There is some more metadata stored for each resultset:

  • length: length of the resultset
  • expire_ms: defines when the entry will expire
  • access_ms: records the last time an entry was accessed
  • ref_count: a reference count to identify resultsets currently in use

Based on the configuration, the resultsets are cached on the wire while queries are executed, and the resultset is returned to the application. If the application re-executes the same query within the time slot defined by “expire_ms”, the resultset is returned by the embedded ProxySQL query cache.

The only way to invalidate entries from the ProxySQL query cache is through a time-to-live in milliseconds. This is in contrast to MySQL query cache, where the query cache gets invalidated each time a table gets updated. At the moment, it is only possible to tune the total amount of memory used by the query cache, using the variable “mysql-query_cache_size_MB”. The current implementation of mysql-query_cache_size_MB doesn’t impose a hard limit. Instead, it is used as an argument by the purging thread.
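As a sketch, tuning that variable from the ProxySQL admin interface looks like this (256 MB is an arbitrary value):

Admin> UPDATE global_variables SET variable_value='256' WHERE variable_name='mysql-query_cache_size_MB';
Admin> LOAD MYSQL VARIABLES TO RUNTIME;
Admin> SAVE MYSQL VARIABLES TO DISK;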

It’s obvious that it’s not easy to directly compare these two cache mechanisms, as each of them has its own way to invalidate results. Please also note a significant difference between MySQL and ProxySQL when query cache is enabled. ProxySQL query cache may serve stale resultsets due to the way it invalidates cached data (cached data are valid for the interval specified by “cache_ttl”, while MySQL’s cached data get invalidated each time data change). Every query that is cached may return stale data, and this may or may not be acceptable by the application.

How it Works

Before doing any benchmarks, I will try to give you a short description of how ProxySQL query cache gets enabled. Unlike MySQL query cache, where a common caching space exists for all tables, in ProxySQL query cache we have to define what traffic gets cached. This is done by defining query rules that match the traffic to be cached and setting a “cache_ttl” for the cached results. There are many ways to define matches for incoming traffic, either by query or by digest using patterns. All we need to cache a resultset is to define the matching criteria and the TTL. If a query passes the matching criteria, the resultset is cached, so subsequent requests are served directly from the ProxySQL instance instead of querying the hostgroup DB nodes (as long as cache_ttl has not expired).

Let’s use an example to make it clear how ProxySQL query cache is enabled.

In our setup we have three backend DB servers in a master-slave topology, with Percona Server for MySQL 5.7 and a ProxySQL ver. 1.4.3 instance with sysbench 1.0 installed. Backend servers are within the same reader hostgroup, and traffic is balanced among these servers using the same priority.

As I’ve already said, we won’t look at the ProxySQL installation. There are many topologies you can implement: deploying ProxySQL on each application server thus removing the “single point of failure” weakness, for example. But in our case, we will just present the ProxySQL query cache having a single instance. In general, you would expect to have better performance with the ProxySQL instance closer to the application.

Configuration

With the ProxySQL instance up and running, let’s confirm that all servers are OK. Querying ProxySQL admin shows that all servers are ONLINE:

Admin> select * from mysql_servers;
+--------------+-----------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname  | port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+-----------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 2            | 10.0.2.12 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 10.0.2.11 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 10.0.2.13 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 10.0.2.11 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+-----------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+

As you can see, there are two hostgroups: the “1” used for WRITES and the “2” used for READS.

Some random connects prove that traffic is correctly routed to the DB backends:

[RDBA] percona@atsaloux-proxysql: ~ $ mysql -h 127.0.0.1 -P 6033 -e "select @@hostname"
+--------------+
| @@hostname   |
+--------------+
| db1-atsaloux |
+--------------+
[RDBA] percona@atsaloux-proxysql: ~ $ mysql -h 127.0.0.1 -P 6033 -e "select @@hostname"
+--------------+
| @@hostname   |
+--------------+
| db2-atsaloux |
+--------------+
[RDBA] percona@atsaloux-proxysql: ~ $ mysql -h 127.0.0.1 -P 6033 -e "select @@hostname"
+--------------+
| @@hostname   |
+--------------+
| db1-atsaloux |
+--------------+
[RDBA] percona@atsaloux-proxysql: ~ $ mysql -h 127.0.0.1 -P 6033 -e "select @@hostname"
+--------------+
| @@hostname   |
+--------------+
| db3-atsaloux |
+--------------+

Let’s first take a look at some statistics. Before using sysbench, the “stats_mysql_query_digest” table (where digests are stored) is empty:

Admin> SELECT count_star,sum_time,hostgroup,digest,digest_text FROM stats_mysql_query_digest_reset ORDER BY sum_time DESC;
Empty set (0.00 sec)

The “stats_mysql_query_digest” table contains statistics related to the queries routed through the ProxySQL server. How many times each query was executed and the total execution time are two of the several statistics provided.

Before doing any benchmarks, I had to create some data. The following sysbench commands were used for selects by PK or by RANGE. For simplicity, we are not going to execute benchmarks inside transactions (although ProxySQL query cache is effective there as well). --threads was adjusted for each benchmark:

sysbench --threads=16 --max-requests=0 --time=60 --mysql-user=percona --mysql-password=percona --mysql-db=sbtest --mysql-host=127.0.0.1 --mysql-port=6033 --oltp-table-size=1000000 --oltp-read-only=on --oltp-skip-trx=on --oltp-test-mode=simple --oltp-sum-ranges=0 --oltp-order-ranges=0 --oltp-distinct-ranges=0 --oltp-point-selects=1 --oltp-simple-ranges=0 /usr/share/sysbench/tests/include/oltp_legacy/oltp.lua run

sysbench --threads=16 --max-requests=0 --time=60 --mysql-user=percona --mysql-password=percona --mysql-db=sbtest --mysql-host=127.0.0.1 --mysql-port=6033 --oltp-table-size=1000000 --oltp-read-only=on --oltp-skip-trx=on --oltp-test-mode=simple --oltp-sum-ranges=0 --oltp-order-ranges=0 --oltp-distinct-ranges=0 --oltp-point-selects=0 --oltp-simple-ranges=1 /usr/share/sysbench/tests/include/oltp_legacy/oltp.lua run

Before running the full benchmark, a simple sysbench was run to get the queries digests that are used for the ProxySQL query cache configuration.

After running the first benchmark with ProxySQL query cache disabled, I queried the “stats_mysql_query_digest” table again, which logs all executed queries, and got the following results:

Admin> SELECT count_star,sum_time,hostgroup,digest,digest_text FROM stats_mysql_query_digest_reset ORDER BY sum_time DESC;
+------------+-------------+-----------+--------------------+------------------------------------------------+
| count_star | sum_time    | hostgroup | digest             | digest_text                                    |
+------------+-------------+-----------+--------------------+------------------------------------------------+
| 301536     | 20962929791 | 2         | 0xBF001A0C13781C1D | SELECT c FROM sbtest1 WHERE id=?               |
| 3269       | 30200073    | 2         | 0x290B92FD743826DA | SELECT c FROM sbtest1 WHERE id BETWEEN ? AND ? |
+------------+-------------+-----------+--------------------+------------------------------------------------+
2 rows in set (0.01 sec)

Add mysql_query_rules To Be Cached

The output above provides all the information needed in order to enable ProxySQL query cache. What we need to do now is add query rules that match the queries whose results should be cached. In this case, we match by digest and set a cache_ttl of 5000ms. Taking this into consideration, we added the following rules:

Admin> INSERT INTO mysql_query_rules (rule_id,active,digest,cache_ttl,apply) VALUES (1,1,'0xBF001A0C13781C1D',5000,1);
Query OK, 1 row affected (0.00 sec)

Admin> INSERT INTO mysql_query_rules (rule_id,active,digest,cache_ttl,apply) VALUES (2,1,'0x290B92FD743826DA',5000,1);
Query OK, 1 row affected (0.00 sec)

We shouldn’t forget that we must load the query rules at runtime. If we don’t want to lose these rules (i.e., after a ProxySQL restart), we should also save them to disk:

Admin> LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;
Query OK, 0 rows affected (0.00 sec)

Now let’s reset the stats_mysql_query_digest results:

Admin> SELECT 1 FROM stats_mysql_query_digest_reset LIMIT 1; -- we reset the counters
+---+
| 1 |
+---+
| 1 |
+---+
1 row in set (0.01 sec)

And re-run the benchmarks with query cache enabled. To confirm what traffic was cached, we have to query the stats_mysql_query_digest once again:

Admin> SELECT count_star,sum_time,hostgroup,digest,digest_text FROM stats_mysql_query_digest ORDER BY sum_time DESC;
+------------+------------+-----------+--------------------+------------------------------------------------+
| count_star | sum_time   | hostgroup | digest             | digest_text                                    |
+------------+------------+-----------+--------------------+------------------------------------------------+
| 108681     | 6304585632 | 2         | 0x290B92FD743826DA | SELECT c FROM sbtest1 WHERE id BETWEEN ? AND ? |
| 343277     | 0          | -1        | 0x290B92FD743826DA | SELECT c FROM sbtest1 WHERE id BETWEEN ? AND ? |
+------------+------------+-----------+--------------------+------------------------------------------------+
2 rows in set (0.00 sec)

Admin> SELECT count_star,sum_time,hostgroup,digest,digest_text FROM stats_mysql_query_digest ORDER BY sum_time DESC;
+------------+-----------+-----------+--------------------+----------------------------------+
| count_star | sum_time  | hostgroup | digest             | digest_text                      |
+------------+-----------+-----------+--------------------+----------------------------------+
| 79629      | 857050510 | 2         | 0xBF001A0C13781C1D | SELECT c FROM sbtest1 WHERE id=? |
| 441194     | 0         | -1        | 0xBF001A0C13781C1D | SELECT c FROM sbtest1 WHERE id=? |
+------------+-----------+-----------+--------------------+----------------------------------+
2 rows in set (0.00 sec)

Cached queries are the ones marked with the special hostgroup -1 (meaning these queries were not sent to any backend hostgroup), and their total execution time is 0 (meaning the request was served within the same event loop).
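For a global view of cache efficiency, ProxySQL also exposes Query_Cache_* counters (gets, hits, bytes in/out, purges) in its stats_mysql_global table. A minimal sketch, assuming the stats schema of ProxySQL 1.4:

Admin> SELECT * FROM stats_mysql_global WHERE variable_name LIKE 'Query_Cache%';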

Below you can see the benchmark results. Let’s look at what happens for selects by PK and selects by RANGE:

[Benchmark result charts: selects by PK and selects by RANGE]

Points of Interest
  • In all cases, as threads and backend servers increase, ProxySQL performs better. This is achieved thanks to its connection pooling and multiplexing capabilities.
  • Enabling ProxySQL query cache provides a significant performance boost.
  • ProxySQL query cache can achieve a ~2X performance boost at a minimum.
  • This boost can be especially valuable in cases where MySQL performance may fall to 50% (e.g., selects by RANGE).
  • We shouldn’t forget that results are affected by hardware specs as well, but it’s clear that ProxySQL with the query cache enabled delivers very high throughput.
ProxySQL Query Cache Limitations

Current known limitations:

  • It is not possible to define query cache invalidation other than with cache_ttl.
  • There is no way to enforce query cache purge.
  • mysql-query_cache_size_MB is not strictly enforced; it is only used as a metric to trigger automatic purging of expired entries (see the sketch after this list).
  • Although access_ms is recorded, it is not used as a metric to evict unused entries when mysql-query_cache_size_MB is reached.
  • Query cache does not support prepared statements.
  • Query cache may serve stale data.
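Given these limitations, the main knob available is the cache size itself. A minimal sketch of raising it via the standard admin-variable workflow (the 256 MB value is purely illustrative):

Admin> UPDATE global_variables SET variable_value='256' WHERE variable_name='mysql-query_cache_size_MB';
Admin> LOAD MYSQL VARIABLES TO RUNTIME;
Admin> SAVE MYSQL VARIABLES TO DISK;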
Conclusion

ProxySQL is generally a very powerful and easy-to-use tool.

With regard to the query cache, it seems to scale very well and achieves a significant performance boost. Although complex configurations (not only for the query cache) can add some management overhead, it remains easy to maintain.

cache_ttl can be a limitation, but if it is configured correctly in conjunction with max_replication_lag when setting up nodes in hostgroups, it does not add any significant drawback. In any case, it depends on whether this is acceptable to the application.
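For reference, max_replication_lag is configured per backend server in the mysql_servers table. A minimal sketch (the hostname and the 10-second threshold are illustrative):

Admin> UPDATE mysql_servers SET max_replication_lag=10 WHERE hostname='replica1';
Admin> LOAD MYSQL SERVERS TO RUNTIME;
Admin> SAVE MYSQL SERVERS TO DISK;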

PMM and postgres RDS

Latest Forum Posts - February 7, 2018 - 5:12am
I want to add a Postgres RDS instance to PMM. Please let me know if this is possible, and help me with the steps to configure it.

Announcing Experimental Percona Monitoring and Management (PMM) Functionality via Percona Labs

Latest MySQL Performance Blog posts - February 6, 2018 - 12:28pm

In this blog post, we’ll introduce how you can look at some experimental Percona Monitoring and Management (PMM) features using Percona Labs builds on GitHub.

Note: PerconaLabs and Percona-QA are open source GitHub repositories for unofficial scripts and tools created by Percona staff. The software builds located in these repositories are not officially released software and aren’t covered by Percona support or services agreements, but these handy utilities can help you save time and effort.

Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring MySQL® and MongoDB® performance. You can run PMM in your environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

This month we’re announcing access to Percona Labs builds of Percona Monitoring and Management so that you can experiment with new functionality that’s not yet in our mainline product. You can identify the unique builds at:

https://hub.docker.com/r/perconalab/pmm-server/tags/

Most of the entries here are the pre-release candidate images we use for QA, and they follow a format of all integers (for example “201802061627”). You’re fine to use these images, but they aren’t the ones that have the experimental functionality.

Today we have two builds of note (these DO have the experimental functionality):

  • 1.6.0-prom2.1
  • 1.5.3-prometheus2

We’re highlighting Prometheus 2.1 on top of our January 1.6 release (1.6.0-prom2.1), available in Docker format. Some of the Prometheus 2 benefits that might lead you to deploy this experimental build are:

  • Reduced CPU usage by Prometheus, meaning you can add more hosts to your PMM Server
  • Performance improvements, meaning dashboards load faster
  • Reduced disk I/O, disk space usage

Please keep in mind that this is a Percona Labs build (see our note above), so note the following two caveats:

  • Support is available from our Percona Monitoring and Management Forums
  • Upgrades might not work – don’t count on upgrading out of this version to a newer release (although it’s not guaranteed to block upgrades)
How to Deploy an Experimental Build from Percona Labs

The great news is that you can follow our Deployment Instructions for Docker; the only change is specifying a different Docker image to pull. For example, the standard way to deploy the latest stable PMM Server release with Docker is:

docker pull percona/pmm-server:latest

To use the Percona Labs build 1.6.0-prom2.1 with Prometheus 2.1, execute the following:

docker pull perconalab/pmm-server:1.6.0-prom2.1
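If you are deploying from scratch, the rest of the procedure mirrors the standard Docker instructions, just with the Percona Labs image. A minimal sketch assuming the usual PMM data-container layout (verify the paths against the current Deployment Instructions):

docker create -v /opt/prometheus/data -v /opt/consul-data -v /var/lib/mysql -v /var/lib/grafana --name pmm-data perconalab/pmm-server:1.6.0-prom2.1 /bin/true
docker run -d -p 80:80 --volumes-from pmm-data --name pmm-server --restart always perconalab/pmm-server:1.6.0-prom2.1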

Please share your feedback on this build on our Percona Monitoring and Management Forums.

If you’re looking to deploy Percona’s officially released PMM Server (not the Percona Labs release, but our mainline version which currently is release 1.7) into a production environment, I encourage you to consider a Percona Support contract, which includes PMM at no additional charge!

InnoDB: Unsupported redo log format

Latest Forum Posts - February 6, 2018 - 11:37am
I'm trying to use xtrabackup-v2 in a Galera cluster; this is what innobackup.backup.log gives me when a peer tries to join. The host is just a new MariaDB server with only an extra user added to give xtrabackup access:

180206 20:26:40 innobackupex: Starting the backup operation

IMPORTANT: Please check that the backup run completes successfully.
At the end of a successful backup run innobackupex
prints "completed OK!".

180206 20:26:40 Connecting to MySQL server host: localhost, user: xtrabackup, password: set, port: not set, socket: /var/lib/mysql/mysql.sock
Using server version 10.3.4-MariaDB
innobackupex version 2.4.9 based on MySQL server 5.7.13 Linux (x86_64) (revision id: a467167cdd4)
xtrabackup: uses posix_fadvise().
xtrabackup: cd to /var/lib/mysql/
xtrabackup: open files limit requested 0, set to 16364
xtrabackup: using the following InnoDB configuration:
xtrabackup: innodb_data_home_dir = .
xtrabackup: innodb_data_file_path = ibdata1:12M:autoextend
xtrabackup: innodb_log_group_home_dir = ./
xtrabackup: innodb_log_files_in_group = 2
xtrabackup: innodb_log_file_size = 209715200
InnoDB: Number of pools: 1
InnoDB: Unsupported redo log format. The redo log was created with MariaDB 10.3.4. Please follow the instructions at http://dev.mysql.com/doc/refman/5.7/...wngrading.html


Since everything is a clean install, I have no clue what is going on. The MariaDB server on the joiner doesn't start. I'm starting to think the packages I use are too new?

Percona-qan-api not starting in docker container

Latest Forum Posts - February 6, 2018 - 8:28am
I followed the guide for setting up a docker container here: https://www.percona.com/doc/percona-...on/docker.html

After setting up, I proceeded to set up clients, and the metrics monitor is working fine. However, the query monitor is not. The output of pmm-admin check-network shows that the Query Analytics API is down.

* System Time
PMM Server                          | 2018-02-06 16:25:34 +0000 GMT
PMM Client                          | 2018-02-06 16:25:39 +0000 GMT
PMM Client to PMM Server Time Drift | OK

* Connection: Client --> Server
-------------------- -------
SERVER SERVICE       STATUS
-------------------- -------
Consul API           OK
Prometheus API       OK
Query Analytics API  DOWN

Connection duration | 249.326µs
Request duration    | 565.454µs
Full round trip     | 814.78µs

* Connection: Client <-- Server
-------------- ----- ----------------- ------- --------- ---------
SERVICE TYPE   NAME  REMOTE ENDPOINT   STATUS  HTTPS/TLS PASSWORD
-------------- ----- ----------------- ------- --------- ---------
mysql:metrics  test  10.22.3.20:42003  OK      YES       -

After delving into the docker container itself and trying to start /usr/share/percona-qan-api manually, I get the following output (I found the command below by running ps ax):

[root@42d4d3569b29 bin]# /usr/sbin/percona-qan-api -srcPath /usr/share/percona-qan-api/src -importPath github.com/percona/qan-api -runMode prod
ERROR: logging before flag.Parse: I0206 16:19:13.471093 8014 logutil.go:31] log: Config dev.conf not found in paths:
panic: Config dev.conf not found in paths:
goroutine 1 [running]:
log.Panic(0xc420173ea8, 0x1, 0x1)
    /usr/lib/golang/src/log/log.go:326 +0xc0
github.com/percona/qan-api/config.init.0()
    /builddir/build/BUILD/qan-api-1.7.0/src/github.com/percona/qan-api/config/config.go:50 +0xd2
github.com/percona/qan-api/config.init()
    <autogenerated>:1 +0xdd
github.com/percona/qan-api/app/db/mysql.init()
    <autogenerated>:1 +0x62
github.com/percona/qan-api/app/db.init()
    <autogenerated>:1 +0x49
github.com/percona/qan-api/app/agent.init()
    <autogenerated>:1 +0x7f
github.com/percona/qan-api/app.init()
    <autogenerated>:1 +0x75
main.init()
    <autogenerated>:1 +0x5a

I'm not sure what to do from this point, as I don't seem to be able to start the QAN API in the docker container. The container has more than enough space, and this is a very recent clean install.

Percona Monitoring Plugins 1.1.8 Release Is Now Available

Latest MySQL Performance Blog posts - February 5, 2018 - 12:58pm

Percona announces the release of Percona Monitoring Plugins 1.1.8.

Changelog

  • Added MySQL 5.7 support
  • Changed a canary check to use timestamp.now() and return timedelta.seconds
  • Removed an additional condition for the Dictionary memory allocated
  • Fixed a false-positive problem when the calculated delay was less than 0 and the -m option was not set
  • Fixed the problem where slaves would alert due to deadlocks on the master
  • If using pt-heartbeat, get_slave_status is now only called when the -s option is set to MASTER
  • Disabled UNK alerts by default (it is possible to enable them explicitly)
  • Added a fix for MySQL Multi-Source replication
  • The graph Percona InnoDB Memory Allocation showed zeroes for the
    metrics Total memory (data source item nl) and Dictionary memory
    (data source item nm) when used for MySQL 5.7.18, because the syntax
    of SHOW ENGINE INNODB STATUS has changed in MySQL 5.7 (see https://dev.mysql.com/doc/refman/5.7/en/innodb-standard-monitor.html).
  • The graph Percona InnoDB I/O Pending showed NaN for the metrics
    Pending Log Writes (data source item hn) and Pending Chkp Writes
    (data source item hk) when used for MySQL 5.7.18, because the syntax
    of SHOW ENGINE INNODB STATUS has changed in MySQL 5.7 (see https://dev.mysql.com/doc/refman/5.7/en/innodb-standard-monitor.html).
  • Added server @@hostname as a possible match to avoid DNS lookups while allowing hostname-match.

A new tarball is available from the downloads area, or in packages from our software repositories. The plugins are fully supported for customers with a Percona Support contract, and free installation services are provided as part of some contracts. You can find links to the documentation, forums, and more at the project homepage.

About Percona Monitoring Plugins
Percona Monitoring Plugins are monitoring and graphing components designed to integrate seamlessly with widely deployed solutions such as Nagios, Cacti and Zabbix.

Four Ways MySQL Executes GROUP BY

Latest MySQL Performance Blog posts - February 5, 2018 - 9:37am

In this blog post, I’ll look into four ways MySQL executes GROUP BY. 

In my previous blog post, we learned that indexes or other means of finding data might not be the most expensive part of query execution. For example, MySQL GROUP BY could potentially be responsible for 90% or more of the query execution time. 

The main complexity when MySQL executes GROUP BY is computing aggregate functions in a GROUP BY statement. How this works is shown in the documentation for UDF Aggregate Functions. As we see, the requirement is that UDF functions get all values that constitute a single group one after another. That way, the function can compute the aggregate value for one group before moving on to the next.

The problem, of course, is that in most cases the source data values aren’t grouped. Values coming from a variety of groups follow one another during processing. As such, we need a special step to handle MySQL GROUP BY.

Let’s look at the same table we looked at before:

mysql> show create table tbl\G
*************************** 1. row ***************************
       Table: tbl
Create Table: CREATE TABLE `tbl` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `k` int(11) NOT NULL DEFAULT '0',
  `g` int(10) unsigned NOT NULL,
  PRIMARY KEY (`id`),
  KEY `k` (`k`)
) ENGINE=InnoDB AUTO_INCREMENT=2340933 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

And the same GROUP BY statements executed in different ways:

1: Index Ordered GROUP BY in MySQL

mysql> select k, count(*) c from tbl group by k order by k limit 5;
+---+---+
| k | c |
+---+---+
| 2 | 3 |
| 4 | 1 |
| 5 | 2 |
| 8 | 1 |
| 9 | 1 |
+---+---+
5 rows in set (0.00 sec)

mysql> explain select k, count(*) c from tbl group by k order by k limit 5\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: tbl
   partitions: NULL
         type: index
possible_keys: k
          key: k
      key_len: 4
          ref: NULL
         rows: 5
     filtered: 100.00
        Extra: Using index
1 row in set, 1 warning (0.00 sec)

In this case, we have an index on the column we use for GROUP BY. This way, we can just scan data group by group and perform GROUP BY on the fly (inexpensively).

It works especially well when we use LIMIT to restrict the number of groups we retrieve or when a “covering index” is in use, as a sequential index-only scan is a very fast operation.

If you have a small number of groups, though, and no covering index, an index-order scan can cause a lot of IO, so this might not be the optimal plan.

2: External Sort GROUP BY in MySQL

mysql> explain select SQL_BIG_RESULT g, count(*) c from tbl group by g limit 5\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: tbl
   partitions: NULL
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 998490
     filtered: 100.00
        Extra: Using filesort
1 row in set, 1 warning (0.00 sec)

mysql> select SQL_BIG_RESULT g, count(*) c from tbl group by g limit 5;
+---+---+
| g | c |
+---+---+
| 0 | 1 |
| 1 | 2 |
| 4 | 1 |
| 5 | 1 |
| 6 | 2 |
+---+---+
5 rows in set (0.88 sec)

If we do not have an index that allows us to scan the data in group order, we can instead get data sorted through an external sort (also referred to as “filesort” in MySQL).

You may notice I’m using an SQL_BIG_RESULT hint here to get this plan. Without it, MySQL won’t choose this plan in this case.

In general, MySQL prefers to use this plan only if we have a large number of groups, because in this case sorting is more efficient than having a temporary table (which we will talk about next).

3: Temporary Table GROUP BY in MySQL

mysql> explain select g, sum(g) s from tbl group by g order by null limit 5\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: tbl
   partitions: NULL
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 998490
     filtered: 100.00
        Extra: Using temporary
1 row in set, 1 warning (0.00 sec)

mysql> select g, sum(g) s from tbl group by g order by null limit 5;
+---+------+
| g | s    |
+---+------+
| 0 |    0 |
| 1 |    2 |
| 4 |    4 |
| 5 |    5 |
| 6 |   12 |
+---+------+
5 rows in set (7.75 sec)

In this case, MySQL also does a full table scan. But instead of running additional sort passes, it creates a temporary table. This temporary table contains one row per group, and for each incoming row the value for the corresponding group is updated (see the conceptual sketch below). Lots of updates! While this might be reasonable in memory, it becomes very expensive if the resulting table is so large that updates cause a lot of disk IO. In this case, external sort plans are usually better.
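A conceptual sketch of what this plan maintains for select g, sum(g) s from tbl group by g (illustrative SQL only; MySQL does this internally rather than issuing such statements):

CREATE TEMPORARY TABLE grp (g INT UNSIGNED NOT NULL PRIMARY KEY, s BIGINT NOT NULL);
-- conceptually, for every row read from tbl (here a row with g = 5):
INSERT INTO grp (g, s) VALUES (5, 5) ON DUPLICATE KEY UPDATE s = s + VALUES(s);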

Note that while MySQL selects this plan by default for this use case, if we do not supply any hints it is almost 10x slower than the plan we get using the SQL_BIG_RESULT hint.

You may notice I added ORDER BY NULL to this query. This is to show you the “clean” temporary-table-only plan. Without it, we get this plan:

mysql> explain select g, sum(g) s from tbl group by g limit 5\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: tbl
   partitions: NULL
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 998490
     filtered: 100.00
        Extra: Using temporary; Using filesort
1 row in set, 1 warning (0.00 sec)

In it, we get the “worst of both worlds”: Using temporary and Using filesort.

MySQL 5.7 always returns GROUP BY results sorted in group order, even if the query doesn’t require it (which can then require an expensive additional sort pass). ORDER BY NULL signals that the application doesn’t need this.

You should note that in some cases – such as JOIN queries with aggregate functions accessing columns from different tables – using temporary tables for GROUP BY might be the only option.

If you want to force MySQL to use a plan that does temporary tables for GROUP BY, you can use the SQL_SMALL_RESULT  hint.
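For example, a minimal sketch against the same tbl table (whether this plan actually wins depends on your data distribution):

mysql> select SQL_SMALL_RESULT g, count(*) c from tbl group by g limit 5;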

4: Index Skip-Scan-Based GROUP BY in MySQL

The previous three GROUP BY execution methods apply to all aggregate functions. Some of them, however, have a fourth method.

mysql> explain select k, max(id) from tbl group by k\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: tbl
   partitions: NULL
         type: range
possible_keys: k
          key: k
      key_len: 4
          ref: NULL
         rows: 2
     filtered: 100.00
        Extra: Using index for group-by
1 row in set, 1 warning (0.00 sec)

mysql> select k, max(id) from tbl group by k;
+---+---------+
| k | max(id) |
+---+---------+
| 0 | 2340920 |
| 1 | 2340916 |
| 2 | 2340932 |
| 3 | 2340928 |
| 4 | 2340924 |
+---+---------+
5 rows in set (0.00 sec)

This method applies only to very special aggregate functions: MIN() and MAX(). These do not need to go through all the rows in the group to compute the value; they can jump directly to the minimum or maximum value in each group (provided there is a suitable index).

How can you find the MAX(id) value for each group if the index is built only on the (k) column? This is an InnoDB table, and InnoDB effectively appends the PRIMARY KEY to all secondary indexes: (k) becomes (k,id), allowing us to use the skip-scan optimization for this query.
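Conceptually, the skip-scan plan behaves as if MySQL performed one cheap index lookup per distinct k value. A sketch of the equivalent manual query for a single group (k = 2 is just an example constant):

mysql> select max(id) from tbl where k = 2;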

This optimization is only enabled if there is a large number of rows per group. Otherwise, MySQL prefers more conventional means to execute this query (like Index Ordered GROUP BY detailed in approach #1).

While we’re on MIN()/MAX() aggregate functions, other optimizations apply to them as well. For example, if you have an aggregate function with no GROUP BY (effectively one group for the whole table), MySQL fetches those values from indexes during the statistics analysis phase and avoids reading the tables during the execution stage altogether:

mysql> explain select max(k) from tbl\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: NULL
   partitions: NULL
         type: NULL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: NULL
     filtered: NULL
        Extra: Select tables optimized away
1 row in set, 1 warning (0.00 sec)

Filtering and Group By

We have looked at four ways MySQL executes GROUP BY.  For simplicity, I used GROUP BY on the whole table, with no filtering applied. The same concepts apply when you have a WHERE clause:

mysql> explain select g, sum(g) s from tbl where k>4 group by g order by NULL limit 5\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: tbl
   partitions: NULL
         type: range
possible_keys: k
          key: k
      key_len: 4
          ref: NULL
         rows: 1
     filtered: 100.00
        Extra: Using index condition; Using temporary
1 row in set, 1 warning (0.00 sec)

In this case, we use the range on the k column for data filtering/lookup and perform the GROUP BY using a temporary table.

In some cases, the methods do not conflict. In others, however, we have to choose either to use one index for GROUP BY or another index for filtering:

mysql> alter table tbl add key(g);
Query OK, 0 rows affected (4.17 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> explain select g, sum(g) s from tbl where k>1 group by g limit 5\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: tbl
   partitions: NULL
         type: index
possible_keys: k,g
          key: g
      key_len: 4
          ref: NULL
         rows: 16
     filtered: 50.00
        Extra: Using where
1 row in set, 1 warning (0.00 sec)

mysql> explain select g, sum(g) s from tbl where k>4 group by g limit 5\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: tbl
   partitions: NULL
         type: range
possible_keys: k,g
          key: k
      key_len: 4
          ref: NULL
         rows: 1
     filtered: 100.00
        Extra: Using index condition; Using temporary; Using filesort
1 row in set, 1 warning (0.00 sec)

Depending on specific constants used in this query, we can see that we either use an index ordered scan for GROUP BY (and  “give up”  benefiting from the index to resolve the WHERE clause), or use an index to resolve the WHERE clause (but use a temporary table to resolve GROUP BY).

In my experience, this is where MySQL GROUP BY does not always make the right choice. You might need to use FORCE INDEX to execute queries the way you want them to.
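For instance, to force the index-ordered GROUP BY plan even with the more selective WHERE clause, a minimal sketch (whether it is actually faster depends on your data):

mysql> select g, sum(g) s from tbl force index (g) where k>4 group by g limit 5;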

Summary

I hope this article provides a good overview of how MySQL executes GROUP BY.  In my next blog post, we will look into techniques you can use to optimize GROUP BY queries.

config file does not work as expected

Latest Forum Posts - February 5, 2018 - 5:48am
It seems that Percona does not apply the config as I expected, even though I already set the options in mysqld.cnf under the percona-xtradb-cluster.conf.d folder.

This is the Percona cluster version:

Server version: 5.7.20-18-57-log Percona XtraDB Cluster (GPL), Release rel18, Revision 4a4da7e, WSREP version 29.24, wsrep_29.24

/etc/mysql/my.cnf:

!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/percona-xtradb-cluster.conf.d/

/etc/mysql/percona-xtradb-cluster.conf.d/mysqld.cnf:

# Template my.cnf for PXC
# Edit to your requirements.
[mysqld]
server-id=1
datadir=/var/lib/mysql
socket=/var/run/mysqld/mysqld.sock
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
log-bin
log_slave_updates
expire_logs_days=7
wait_timeout=86400
innodb_buffer_pool_size = 300M # (adjust value here, 50%-70% of total RAM)
innodb_log_file_size = 64M
innodb_flush_log_at_trx_commit = 1 # may change to 2 or 0
innodb_flush_method = O_DIRECT
slow_query_log=1
slow_query_log_always_write_time=1
#log_queries_not_using_indexes=1
#log_output='FILE'
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0

The wait_timeout is not 86400 as expected:

mysql> show variables like "%wait_time%";
+-------------------------------------+----------+
| Variable_name                       | Value    |
+-------------------------------------+----------+
| innodb_lock_wait_timeout            | 50       |
| innodb_print_lock_wait_timeout_info | OFF      |
| lock_wait_timeout                   | 31536000 |
| wait_timeout                        | 28800    |
+-------------------------------------+----------+
4 rows in set (0.00 sec)

I tried to use systemctl to reload and restart mysql, but nothing changed.
Is there anything wrong with the config?

Killed by OOM Killer when '--lock-ddl-per-table' option is specified

Latest Forum Posts - February 5, 2018 - 2:16am
I faced a problem where xtrabackup executed with the `--lock-ddl-per-table` option is killed by the OOM killer. It always happens when backing up the largest table in my database.
According to the xtrabackup STDOUT log, as soon as xtrabackup takes the MDL lock on the large table, disk I/O jumps to an abnormal level and memory usage also increases. After that, swapping occurs, and the process is finally terminated by the OOM killer.
The large table has more than 200 million rows and is partitioned.

This problem does not occur when running without the `--lock-ddl-per-table` option.
Does anyone know how to fix it?



My command:
xtrabackup --backup --user=${my_user} --password=${my_pass} --check-privileges --stream=xbstream --parallel=4 --compress --compress-threads=2 --slave-info --target-dir="${bk_work}" --extra-lsndir="${bk_work}" --lock-ddl-per-table

version:
xtrabackup version 2.4.9 based on MySQL server 5.7.13 Linux (x86_64) (revision id: a467167cdd4)

OOM log:
See oom.txt(attached).

SST donor being removed from cluster by donee?

Latest Forum Posts - February 3, 2018 - 1:32am
I want to check whether this is a bug to report via the Issues, or something I have misconfigured.

I got into a situation where I had one server up (A) and needed to do SSTs to the other two (B, C) in the cluster. A was under a fair amount of load as the only server left running. Repeatedly, when trying to serve an SST to B, A entered the "WSREP has not yet prepared node for application use" state. C was down throughout.

This meant that the donor, and hence the live system, was unusable, so initially I interrupted the SST. Later I left the system down and let the SST continue, but it eventually failed, also with:

180202 05:53:11 [01] ...done
180202 05:53:11 Executing LOCK TABLES FOR BACKUP...
Error: failed to execute query LOCK TABLES FOR BACKUP: WSREP has not yet prepared node for application use
180202 05:53:11 >> log scanned up to (10865793398401)

Eventually I took other steps to reduce the load on A to the point where the SST could complete.

What I surmise is happening here is that A was under so much load that it wasn't responsive, and then we had a split brain between A and B. I don't know the logic well, so this may be simplistic or wrong. If so, though, it seems wrong: both A and B know that A has a working copy of the database and that A is serving it to B, which doesn't.

I have a logfile from A for this (attached); an example is around 2018-02-02T03:10:47.399900Z

how to modify a table.

Latest Forum Posts - February 2, 2018 - 8:31am
How do I modify a table? I need to set a column to allow NULLs, but I am not finding any information. I see the --alter option, but nothing about how to use it for this.
pt-online-schema-change --modfiy mycolumn default null d=database, t=table