
MySQL 8.0 Window Functions: A Quick Taste

Latest MySQL Performance Blog posts - December 5, 2017 - 10:21am

In this post, we’ll briefly look at window functions in MySQL 8.0.

One of the major features coming in MySQL 8.0 is support for window functions. The detailed documentation is already available in the Window Functions section of the manual. I wanted to take a quick look at the cases where window functions help.

Probably one of the most frequent limitations of MySQL SQL syntax was analyzing a dataset. I tried to find the answer to the following question: “Find the Top N entries for each group in a grouped result.”

To give an example, I will refer to this request on Stack Overflow. While there is a solution, it is hardly intuitive or portable.

This is a popular problem, so databases without window function support try to solve it in different ways. For example, ClickHouse introduced a special extension for LIMIT: you can use LIMIT n BY column to return the first n entries for each distinct value of column.
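To illustrate the ClickHouse extension, here is a minimal sketch against a hypothetical pages table (the table and column names are assumptions for illustration, not part of the IMDB dataset used below):

-- ClickHouse: keep the two most-visited pages per domain
SELECT domain, page, hits
FROM pages
ORDER BY domain, hits DESC
LIMIT 2 BY domain;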

This is a case where window functions come in handy.

As an example, I will take the IMDB database and find the top 10 movies per century (well, the previous 20th and the current 21st). To download the IMDB dataset, you need to have an AWS account and download the data from S3 storage (the details are provided on the IMDB page).

I will use the following query with MySQL 8.0.3:

SELECT primaryTitle, century*100, rating, genres, rn AS `Rank`
FROM (
    SELECT primaryTitle, startYear DIV 100 AS century, rating, genres,
           RANK() OVER (PARTITION BY startYear DIV 100 ORDER BY rating DESC) rn
    FROM title, ratings
    WHERE title.tconst = ratings.tconst
      AND titleType = 'movie'
      AND numVotes > 100000
) t1
WHERE rn <= 10
ORDER BY century, rating DESC;

The main part of this query is RANK() OVER (PARTITION BY startYear DIV 100 ORDER BY rating DESC), which is the window function mentioned above. PARTITION BY divides the rows into groups, ORDER BY specifies the order within each group, and RANK() calculates the rank using that order.

The result is:

+---------------------------------------------------+-------------+--------+----------------------------+------+
| primaryTitle                                      | century*100 | rating | genres                     | Rank |
+---------------------------------------------------+-------------+--------+----------------------------+------+
| The Shawshank Redemption                          |        1900 |    9.3 | Crime,Drama                |    1 |
| The Godfather                                     |        1900 |    9.2 | Crime,Drama                |    2 |
| The Godfather: Part II                            |        1900 |      9 | Crime,Drama                |    3 |
| 12 Angry Men                                      |        1900 |    8.9 | Crime,Drama                |    4 |
| The Good, the Bad and the Ugly                    |        1900 |    8.9 | Western                    |    4 |
| Schindler's List                                  |        1900 |    8.9 | Biography,Drama,History    |    4 |
| Pulp Fiction                                      |        1900 |    8.9 | Crime,Drama                |    4 |
| Star Wars: Episode V - The Empire Strikes Back    |        1900 |    8.8 | Action,Adventure,Fantasy   |    8 |
| Forrest Gump                                      |        1900 |    8.8 | Comedy,Drama,Romance       |    8 |
| Fight Club                                        |        1900 |    8.8 | Drama                      |    8 |
| The Dark Knight                                   |        2000 |      9 | Action,Crime,Drama         |    1 |
| The Lord of the Rings: The Return of the King     |        2000 |    8.9 | Adventure,Drama,Fantasy    |    2 |
| The Lord of the Rings: The Fellowship of the Ring |        2000 |    8.8 | Adventure,Drama,Fantasy    |    3 |
| Inception                                         |        2000 |    8.8 | Action,Adventure,Sci-Fi    |    3 |
| The Lord of the Rings: The Two Towers             |        2000 |    8.7 | Action,Adventure,Drama     |    5 |
| City of God                                       |        2000 |    8.7 | Crime,Drama                |    5 |
| Spirited Away                                     |        2000 |    8.6 | Adventure,Animation,Family |    7 |
| Interstellar                                      |        2000 |    8.6 | Adventure,Drama,Sci-Fi     |    7 |
| The Intouchables                                  |        2000 |    8.6 | Biography,Comedy,Drama     |    7 |
| Gladiator                                         |        2000 |    8.5 | Action,Adventure,Drama     |   10 |
| Memento                                           |        2000 |    8.5 | Mystery,Thriller           |   10 |
| The Pianist                                       |        2000 |    8.5 | Biography,Drama,Music      |   10 |
| The Lives of Others                               |        2000 |    8.5 | Drama,Thriller             |   10 |
| The Departed                                      |        2000 |    8.5 | Crime,Drama,Thriller       |   10 |
| The Prestige                                      |        2000 |    8.5 | Drama,Mystery,Sci-Fi       |   10 |
| Like Stars on Earth                               |        2000 |    8.5 | Drama,Family               |   10 |
| Whiplash                                          |        2000 |    8.5 | Drama,Music                |   10 |
+---------------------------------------------------+-------------+--------+----------------------------+------+
27 rows in set (0.19 sec)

The previous century was dominated by “The Godfather” and the current one by “The Lord of the Rings”. While we may or may not agree with the results, this is what the IMDB rating tells us.

If we look at the result set, we can see that there are actually more than ten movies per century, but this is how the RANK() function works: it assigns the same rank to rows with an identical rating, and if multiple rows share a rating, all of them are included in the result set.
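If you need exactly ten rows per group regardless of ties, ROW_NUMBER() can be swapped in for RANK(); a minimal sketch against the same tables (ties are then broken arbitrarily):

SELECT primaryTitle, century*100, rating, genres, rn AS `Rank`
FROM (
    SELECT primaryTitle, startYear DIV 100 AS century, rating, genres,
           ROW_NUMBER() OVER (PARTITION BY startYear DIV 100
                              ORDER BY rating DESC) rn
    FROM title, ratings
    WHERE title.tconst = ratings.tconst
      AND titleType = 'movie'
      AND numVotes > 100000
) t1
WHERE rn <= 10
ORDER BY century, rating DESC;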

I welcome the addition of window functions to MySQL 8.0. They definitely simplify some complex analytical queries. Unfortunately, complex queries will still be single-threaded, which is a performance-limiting factor. Hopefully, we will see multi-threaded query execution in future MySQL releases.

Webinar Wednesday, December 6, 2017: Gain a MongoDB Advantage with the Percona Memory Engine

Latest MySQL Performance Blog posts - December 5, 2017 - 9:06am

Join Percona CTO Vadim Tkachenko as he presents Gain a MongoDB Advantage with the Percona Memory Engine on Wednesday, December 6, 2017, at 11:00 am PST / 2:00 pm EST (UTC-8).

Experience: Entry Level to Intermediate

Tags: Developer, DBAs, Operations

Looking for the performance of Redis or Memcached, the expressiveness of the MongoDB query language, and simple high availability and sharding? Percona Memory Engine, available as part of Percona Server for MongoDB, has it all!

In this webinar, Vadim explains the architecture of the MongoDB In-Memory storage engine. He’ll also show some benchmarks compared to disk-based storage engines and other in-memory technologies.

Vadim will share specific use cases where Percona Memory Engine for MongoDB excels, such as:

  • Caching documents
  • Highly volatile data
  • Workloads with predictable response time requirements

Register for the webinar now.

Vadim Tkachenko, CTO

Vadim Tkachenko co-founded Percona in 2006 and serves as its Chief Technology Officer. Vadim leads Percona Labs, which focuses on technology research and performance evaluations of Percona’s and third-party products. Percona Labs designs no-gimmick tests of hardware, filesystems, storage engines, and databases that surpass the standard performance and functionality scenario benchmarks. Vadim’s expertise in LAMP performance and multi-threaded programming helps optimize MySQL and InnoDB internals to take full advantage of modern hardware. Oracle Corporation and its predecessors have incorporated Vadim’s source code patches into the mainstream MySQL and InnoDB products. He also co-authored the book High Performance MySQL: Optimization, Backups, and Replication, 3rd Edition. Previously, he founded a web development company in his native Ukraine and spent two years in the High-Performance Group within the official MySQL support team. Vadim received a BS in Economics and an MS in Computer Science from the National Technical University of Ukraine.

 

Internal Temporary Tables in MySQL 5.7

Latest MySQL Performance Blog posts - December 4, 2017 - 6:51am

In this blog post, I investigate a case of spiking InnoDB Rows inserted in the absence of a write query, and find internal temporary tables to be the culprit.

Recently I was investigating an interesting case for a customer. We could see regular spikes on a graph depicting the “InnoDB rows inserted” metric (jumping from 1K/sec to 6K/sec); however, we were not able to correlate those spikes with other activity. The innodb_row_inserted graph (picture from the PMM demo) looked similar to this (but on a much larger scale):

Other graphs (Com_*, Handler_*) did not show any similar spikes. I examined the logs (we were not able to enable the general log or change the threshold of the slow log), performance_schema, triggers, stored procedures, and prepared statements, and even reviewed the binary logs. However, I was not able to find any single write query that could have caused the spike to 6K rows inserted.

Finally, I figured out that I was focusing on the wrong queries. I was trying to correlate the spikes on the InnoDB Rows inserted graph to the DML queries (writes). However, the spike was caused by SELECT queries! But why would SELECT queries cause the massive InnoDB insert operation? How is this even possible?

It turned out that this is related to temporary tables on disk. In MySQL 5.7, the default setting for internal_tmp_disk_storage_engine is InnoDB. That means that if a SELECT needs to create a temporary table on disk (e.g., for GROUP BY), it will use the InnoDB storage engine.
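A quick way to see this in action (a sketch; t1 and c1 are hypothetical names for any table large enough that the GROUP BY spills to disk):

-- InnoDB is the 5.7 default for on-disk internal temporary tables
SHOW GLOBAL VARIABLES LIKE 'internal_tmp_disk_storage_engine';
SHOW SESSION STATUS LIKE 'Created_tmp_disk_tables';

-- A read-only query that may need an on-disk temporary table
SELECT c1, COUNT(*) FROM t1 GROUP BY c1;

-- If this counter incremented, the SELECT built an on-disk temporary table,
-- and with the InnoDB engine its rows show up as "InnoDB rows inserted"
SHOW SESSION STATUS LIKE 'Created_tmp_disk_tables';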

Is that bad? Not necessarily. Krunal Bauskar published a blog post about InnoDB intrinsic table performance in MySQL 5.7. InnoDB internal temporary tables are not redo/undo logged, so in general performance is better. However, here is what we need to watch out for:

  1. A change of the place where MySQL stores temporary tables. InnoDB temporary tables are stored in the ibtmp1 tablespace file. There are a number of challenges with that:
    • Location of the ibtmp1 file. By default it is located inside the InnoDB datadir. Originally, MyISAM temporary tables were stored in tmpdir. We can configure the size of the file, but the location is always relative to the InnoDB datadir, so to move it to tmpdir we need something like this: innodb_temp_data_file_path=../../../tmp/ibtmp1:12M:autoextend
    • Like other tablespaces, it never shrinks back (though it is truncated on restart). A huge temporary table can fill the disk and hang MySQL (a bug is open for this). One way to fix that is to set a maximum size for the ibtmp1 file: innodb_temp_data_file_path=ibtmp1:12M:autoextend:max:1G
    • Like other InnoDB tables, it has all the InnoDB limitations, i.e., InnoDB row or column limits. If a temporary table exceeds these, it will return “Row size too large” or “Too many columns” errors. The workaround is to set internal_tmp_disk_storage_engine to MYISAM.
  2. When all temp tables go to InnoDB, it may increase the total engine load as well as affect other queries. For example, if originally all datasets fit in the buffer pool and temporary tables were created outside of InnoDB, they did not affect the InnoDB memory footprint. Now, if a huge temporary table is created as an InnoDB table, it will use the InnoDB buffer pool and may “evict” existing pages, so that other queries may perform slower.
Conclusion

Beware of this new change in MySQL 5.7: internal temporary tables (those created for SELECTs when a temporary table is needed) are stored in the InnoDB ibtmp1 file. In most cases this is faster. However, it can change the original behavior. If needed, you can switch the creation of internal temp tables back to MyISAM: SET GLOBAL internal_tmp_disk_storage_engine=MYISAM

percona proxysql 1.4.3 packages logrotate leaves deleted files attached to process

Latest Forum Posts - December 2, 2017 - 1:14am
While troubleshooting a disk space issue on a proxysql node I noticed the following:

Code:
sudo lsof +L1 | grep -E "COMMAND|proxysql"
COMMAND    PID     USER FD TYPE DEVICE   SIZE/OFF NLINK NODE NAME
proxysql  5403 proxysql 2w  REG  253,3 1212984784     0 2424 /var/lib/proxysql/proxysql.log-20171101 (deleted)

This means that deleted files are kept open by the process and the disk space is still claimed.

/etc/logrotate.d/proxysql-logrotate looks like this:
Code:
/var/lib/proxysql/*.log {
    missingok
    daily
    notifempty
    compress
    create 0600 proxysql proxysql
    rotate 5
}

It should probably include something like the following:

Code:
/var/lib/proxysql/*.log {
    missingok
    daily
    notifempty
    compress
    create 0600 proxysql proxysql
    rotate 5
    sharedscripts
    postrotate
        /etc/init.d/proxysql reload >/dev/null 2>&1 || true
    endscript
}

I will test this Monday.

Can't import dump file to Percona Cluster with 3 nodes (actually 1 node is active)

Latest Forum Posts - December 1, 2017 - 11:35pm
Hi friends,

I have just a 40 MB database dump file from MySQL 5.7 and want to import it into Percona Cluster. I can't achieve this and can't find any reason on the net.

The error is

ERROR 1213 (40001) at line 69: WSREP detected deadlock/conflict and aborted the transaction. Try restarting the transaction

My config file is as follows:

*****************
[mysqld]

datadir=/var/lib/mysql
user=mysql
socket=/var/lib/mysql/mysql.sock
port=3306
sql_mode="STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"

skip-external-locking
key_buffer_size = 128M
skip-name-resolve
tmp_table_size=50M
max_allowed_packet = 1M
table_open_cache = 512
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size = 32M
slow_query_log=1
event_scheduler=on

# Replication Master Server (default)
# binary logging is required for replication
log-bin=mysql-bin

# required unique id between 1 and 2^32 - 1
# defaults to 1 if master-host is not set but will not function as a master if omitted
server-id= 1

# Uncomment the following if you are using InnoDB tables
innodb_buffer_pool_size = 384M

log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

#pxc-encrypt-cluster-traffic=ON

wsrep_cluster_name=pxc-cluster
wsrep_cluster_address=gcomm://XX.XX.XX.XX,YY.YY.YY.YY,ZZ.ZZ.ZZ.ZZ
wsrep_node_name=call1
wsrep_node_address=XX.XX.XX.XX

wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=sstuser:kurulum

pxc_strict_mode=DISABLED #PERMISSIVE #DISABLED #ENFORCING
wsrep_log_conflicts=ON
wsrep_debug=ON

#Binary logging format - mixed recommended
wsrep_forced_binlog_format=MIXED
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
[mysqldump]
quick
max_allowed_packet = 16M
[mysql]
no-auto-rehash
[myisamchk]
key_buffer_size = 256M
sort_buffer_size = 256M
read_buffer = 2M
write_buffer = 2M

[mysqlhotcopy]
interactive-timeout

***********************

Output of log is
..........................
2017-12-02T07:10:24.241542Z 461 [Note] WSREP: Cleaning up wsrep-transaction for local query: LOCK TABLES `agents_realtime_statuses` WRITE
2017-12-02T07:10:24.241634Z 461 [Note] WSREP: set_query_id(), assigned new next trx id: 1418
2017-12-02T07:10:24.241672Z 461 [Note] WSREP: Cleaning up wsrep-transaction for local query: /*!40000 ALTER TABLE `agents_realtime_statuses` DISABLE KEYS */
2017-12-02T07:10:24.241738Z 461 [Note] WSREP: Thread holds MDL locks at TOI begin: /*!40000 ALTER TABLE `agents_realtime_statuses` DISABLE KEYS */ 461
2017-12-02T07:10:24.241759Z 461 [Note] WSREP: Executing Query (/*!40000 ALTER TABLE `agents_realtime_statuses` DISABLE KEYS */) with write-set (-1) and exec_mode: LOCAL_STATE in TO Isolation mode
2017-12-02T07:10:24.241897Z 461 [Note] WSREP: Query (/*!40000 ALTER TABLE `agents_realtime_statuses` DISABLE KEYS */) with write-set (3156) and exec_mode: TOTAL_ORDER replicated in TO Isolation mode
2017-12-02T07:10:24.241914Z 461 [Note] WSREP: wsrep: initiating TOI for write set (3156)
2017-12-02T07:10:24.241979Z 461 [Note] WSREP: wsrep: completed TOI write set (3156)
2017-12-02T07:10:24.241999Z 461 [Note] WSREP: Setting WSREPXid (InnoDB): f0062cd7-b62d-11e7-a967-c75103ad8859:3156
2017-12-02T07:10:24.242852Z 461 [Note] WSREP: Completed query (/*!40000 ALTER TABLE `agents_realtime_statuses` DISABLE KEYS */) replication with write-set (3156) and exec_mode: TOTAL_ORDER in TO Isolation mode
2017-12-02T07:10:24.243000Z 461 [Note] WSREP: set_query_id(), assigned new next trx id: 1419
2017-12-02T07:10:24.245458Z 461 [Note] WSREP: wsrep: replicating commit (-1)
2017-12-02T07:10:24.245508Z 461 [Warning] WSREP: SQL statement (INSERT INTO `agents_realtime_statuses` VALUES ('logoff',10,'2017-12-01 19:40:09',NULL,NULL,'cybernet-ministry-yardim',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,0, 144,15,2581,9,6,0,0,NULL,'2017-12-01 18:40:02',0,0,0,0,0,NULL),('logoff',8,'2017-12-01 16:56:22',NULL,NULL,'cybernet-ministry-yardim',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,8, 31,6,709,5,1,0,0,NULL,'2017-12-01 16:50:02',0,0,0,370,2,NULL),('logoff',7,'2017-12-01 19:46:04',NULL,NULL,'cybernet-ministry-yardim',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,3, 373,46,1202,19,25,0,0,NULL,'2017-12-01 18:40:00',0,0,0,59,46,NULL),('logoff',5,'2017-12-01 14:56:20',NULL,NULL,'cybernet-ministry-yardim',NULL,NULL,NULL,NULL,NULL,'1512125646.28799 ',NULL,NULL,0,20,3,0,0,2,0,0,NULL,'2017-12-01 14:42:34',0,0,0,0,20,NULL),('logoff',6,'2017-12-01 19:03:36',NULL,NULL,'cybernet-ministry-yardim',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,0, 50,5,0,0,5,0,0,NULL,'2017-12-01 19:00:18',0,0,0,0,0,NULL),('logoff',11,'2017-12-01 21:00:48',NULL,NULL,'cybernet-ministry-yardim',
2017-12-02T07:10:24.245533Z 461 [Note] WSREP: commit action failed for reason: WSREP_TRX_FAIL THD: 461 Query: INSERT INTO `agents_realtime_statuses` VALUES ('logoff',10,'2017-12-01 19:40:09',NULL,NULL,'cybernet-ministry-yardim',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,0, 144,15,2581,9,6,0,0,NULL,'2017-12-01 18:40:02',0,0,0,0,0,NULL),('logoff',8,'2017-12-01 16:56:22',NULL,NULL,'cybernet-ministry-yardim',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,8, 31,6,709,5,1,0,0,NULL,'2017-12-01 16:50:02',0,0,0,370,2,NULL),('logoff',7,'2017-12-01 19:46:04',NULL,NULL,'cybernet-ministry-yardim',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,3, 373,46,1202,19,25,0,0,NULL,'2017-12-01 18:40:00',0,0,0,59,46,NULL),('logoff',5,'2017-12-01 14:56:20',NULL,NULL,'cybernet-ministry-yardim',NULL,NULL,NULL,NULL,NULL,'1512125646.28799 ',NULL,NULL,0,20,3,0,0,2,0,0,NULL,'2017-12-01 14:42:34',0,0,0,0,20,NULL),('logoff',6,'2017-12-01 19:03:36',NULL,NULL,'cybernet-ministry-yardim',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,0, 50,5,0,0,5,0,0,NULL,'2017-12-01 19:00:18',0,0,0,0,0,NULL),('logoff',11,'2017-12-0
2017-12-02T07:10:24.245543Z 461 [Note] WSREP: conflict state: NO_CONFLICT
2017-12-02T07:10:24.245551Z 461 [Note] WSREP: --------- CONFLICT DETECTED --------
2017-12-02T07:10:24.245558Z 461 [Note] WSREP: cluster conflict due to certification failure for threads:

2017-12-02T07:10:24.245584Z 461 [Note] WSREP: Victim thread:
THD: 461, mode: local, state: executing, conflict: cert failure, seqno: -1
SQL: INSERT INTO `agents_realtime_statuses` VALUES ('logoff',10,'2017-12-01 19:40:09',NULL,NULL,'cybernet-ministry-yardim',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,0, 144,15,2581,9,6,0,0,NULL,'2017-12-01 18:40:02',0,0,0,0,0,NULL),('logoff',8,'2017-12-01 16:56:22',NULL,NULL,'cybernet-ministry-yardim',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,8, 31,6,709,5,1,0,0,NULL,'2017-12-01 16:50:02',0,0,0,370,2,NULL),('logoff',7,'2017-12-01 19:46:04',NULL,NULL,'cybernet-ministry-yardim',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,3, 373,46,1202,19,25,0,0,NULL,'2017-12-01 18:40:00',0,0,0,59,46,NULL),('logoff',5,'2017-12-01 14:56:20',NULL,NULL,'cybernet-ministry-yardim',NULL,NULL,NULL,NULL,NULL,'1512125646.28799 ',NULL,NULL,0,20,3,0,0,2,0,0,NULL,'2017-12-01 14:42:34',0,0,0,0,20,NULL),('logoff',6,'2017-12-01 19:03:36',NULL,NULL,'cybernet-ministry-yardim',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,0, 50,5,0,0,5,0,0,NULL,'2017-12-01 19:00:18',0
2017-12-02T07:10:24.245793Z 461 [Note] WSREP: Cleaning up wsrep-transaction for local query: INSERT INTO `agents_realtime_statuses` VALUES ('logoff',10,'2017-12-01 19:40:09',NULL,NULL,'cybernet-ministry-yardim',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,0, 144,15,2581,9,6,0,0,NULL,'2017-12-01 18:40:02',0,0,0,0,0,NULL),('logoff',8,'2017-12-01 01 14:56:20',NULL,NULL,'cybernet-ministry-yardim',NULL,NULL,NULL,NULL,NULL,'1512125646.28799 ',NULL,NULL,0,20,3,0,0,2,0,0,NULL,'2017-12-01 14:42:34',0,0,0,0,20,NULL),('logoff',6,'2017-12-01 19:03:36',NULL,NULL,'cybernet-ministry-yardim',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,0, 50,5,0,0,5,0,0,NULL,'2017-12-01 19:00:18',0,0,0,0,0,NULL),('logoff',11,'2017-12-01 21:00:48',NULL,
2017-12-02T07:10:24.245841Z 461 [Note] WSREP: Retrying auto-commit query (on abort): INSERT INTO `agents_realtime_statuses` VALUES ('logoff',10,'2017-12-01 19:40:09',NULL,NULL,'cybernet-ministry-yardim',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,0, 144,15,2581,9,6,0,0,NULL,'2017-12-01 18:40:02',0,0,0,0,0,NULL),('logoff',8,'2017-12-01 01 14:56:20',NULL,NULL,'cybernet-ministry-yardim',NULL,NULL,NULL,NULL,NULL,'1512125646.28799 ',NULL,NULL,0,20,3,0,0,2,0,0,NULL,'2017-12-01 14:42:34',0,0,0,0,20,NULL),('logoff',6,'2017-12-01 19:03:36',NULL,NULL,'cybernet-ministry-yardim',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,0, 50,5,0,0,5,0,0,NULL,'2017-12-01 19:00:18',0,0,0,0,0,NULL),('logoff',11,'2017-12-01 21:00:48',NULL,NULL,'cy
2017-12-02T07:10:24.245863Z 461 [Note] WSREP: Assigned new trx id to retry auto-commit query: 1419
2017-12-02T07:10:24.246301Z 461 [Note] WSREP: wsrep: replicating commit (-1)
2017-12-02T07:10:24.246336Z 461 [Warning] WSREP: SQL statement (INSERT INTO `agents_realtime_statuses` VALUES ('logoff',10,'2017-12-01 19:40:09',NULL,NULL,'cybernet-ministry-yardim',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,0, 144,15,2581,9,6,0,0,NULL,'2017-12-01 18:40:02',0,0,0,0,0,NULL),('logoff',8,'2017-12-01 16:56:22',NULL,NULL,'cybernet-ministry-ministry-yardim',NULL,NULL,NULL,NULL,NULL,'1512125646.28799 ',NULL,NULL,0,20,3,0,0,2,0,0,NULL,'2017-12-01 14:42:34',0,0,0,0,20,NULL),('logoff',6,'2017-12-01 19:03:36',NULL,NULL,'cybernet-ministry-yardim',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,0, 50,5,0,0,5,0,0,NULL,'2017-12-01 19:00:18',0,0,0,0,0,NULL),('logoff',11,'2017-12-01 21:00:48',NULL,NULL,'cybernet-ministry-yardim',
2017-12-02T07:10:24.246353Z 461 [Note] WSREP: commit action failed for reason: WSREP_TRX_FAIL THD: 461 Query: INSERT INTO `agents_realtime_statuses` VALUES ('logoff',10,'2017-

....................................

I can't find the reason.


PLEASE HELP ME. IT IS URGENT!!!!


Thanks!!!


Percona Monitoring and Management 1.5: QAN in Grafana Interface

Latest MySQL Performance Blog posts - December 1, 2017 - 1:21pm

In this post, we’ll examine how we’ve improved the GUI layout for Percona Monitoring and Management 1.5 by moving the Query Analytics (QAN) functions into the Grafana interface.

For Percona Monitoring and Management users, you might notice that QAN appears a little differently in our 1.5 release. We’ve taken steps to unify the PMM interface so that it feels more natural to move from reviewing historical trends in Metrics Monitor to examining slow queries in QAN.  Most significantly:

  1. QAN moves from a stand-alone application into Metrics Monitor as a dashboard application
  2. We updated the color scheme of QAN to match Metrics Monitor (but you can toggle a button if you prefer to still see QAN in white!)
  3. Date picker and host selector now use the same methods as Metrics Monitor

Starting from the PMM landing page, you still see two buttons – one for Metrics Monitor and another for Query Analytics (this hasn’t changed):

Once you select Query Analytics on the left, you see the new Metrics Monitor dashboard page for PMM Query Analytics. It is now hosted as a Metrics Monitor dashboard, and notice the URL is no longer /qan:

Another advantage of the Metrics Monitor dashboard integration is that the QAN inherits the host selector from Grafana, which supports partial string matching. This makes it simpler to find the host you’re searching for if you have more than a handful of instances:

The last feature enhancement worth mentioning is the native Grafana time selector, which lets you select time frames down to one-minute resolution. This was a frequent source of feature requests: previously, PMM limited you to our pre-defined default ranges. Keep in mind that QAN has an internal archiving job that caps QAN history at eight days.

Last but not least is the ability to toggle between the default dark interface and the optional white. Look for the small lightbulb icon at the bottom left of any QAN screen and give it a try!

We hope you enjoy the new interface, and we look forward to your feedback on these improvements!

What status variables to monitor when running a large ALTER TABLE

Latest Forum Posts - December 1, 2017 - 8:08am
Hi,

First post so apologies for any failure to adhere to community standards.

I am not a MySQL (or any kind of DB) expert and have just recently discovered the pt-online-schema-change tool, which is right now looking like a life saver. My problem is just that I am unsure how I can effectively use the --max-load option to ensure the DB is not overloaded during the change. The DB is written to by a single thread, so the default behaviour of monitoring active threads is not a great way for me to measure the load. I would prefer something CPU-based, as CPU is generally the constraining resource on our server. Is there anything that would offer this kind of functionality here? Alternatively, if it is possible to tell how long queries are taking to execute, that might work. I am open to other suggestions, but CPU is the obvious best choice to me.

Thanks
Joe

This Week in Data with Colin Charles 17: AWS Re:Invent, a New Book on MySQL Cluster and Another Call Out for Percona Live 2018

Latest MySQL Performance Blog posts - December 1, 2017 - 6:58am

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

The CFP for Percona Live Santa Clara 2018 closes December 22, 2017: please consider submitting as soon as possible. We want to make an early announcement of talks, so we’ll definitely do a first pass even before the CFP date closes. Keep in mind the expanded view of what we are after: it’s more than just MySQL and MongoDB. And don’t forget that with one day less, there will be intense competition to fit all the content in.

A new book on MySQL Cluster is out: Pro MySQL NDB Cluster by Jesper Wisborg Krogh and Mikiya Okuno. At 690 pages, it is a weighty tome, and something I fully plan on reading, considering I haven’t played with NDBCLUSTER for quite some time.

Did you know that since MySQL 5.7.17, connection control plugins are included? They help DBAs introduce an increasing delay in server responses to clients after a certain number of consecutive failed connection attempts. Read more in the connection control plugins documentation.
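For reference, enabling them looks roughly like this (a sketch following the MySQL 5.7 manual; adjust the thresholds to your needs):

-- Load the plugins (assumes connection_control.so is in plugin_dir)
INSTALL PLUGIN CONNECTION_CONTROL SONAME 'connection_control.so';
INSTALL PLUGIN CONNECTION_CONTROL_FAILED_LOGIN_ATTEMPTS SONAME 'connection_control.so';

-- Start delaying responses after three consecutive failed attempts,
-- beginning with a 1000 ms minimum delay
SET GLOBAL connection_control_failed_connections_threshold = 3;
SET GLOBAL connection_control_min_connection_delay = 1000;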

While there are a tonne of announcements coming out of the Amazon re:Invent 2017 event, I highly recommend also reading Some data of interest as AWS reinvent 2017 ramps up by James Governor. Telemetry data from Sumo Logic's 1,500 largest customers suggests that NoSQL database usage has overtaken relational database workloads! Read The State of Modern Applications in the Cloud. Page 8 tells us that MySQL is the #1 database on AWS (I don't see MariaDB Server being mentioned, which is odd; did they lump it in together?), and MySQL, Redis & MongoDB account for 40% of database adoption on AWS. In other news, Andy Jassy also mentions that less than 1.5 months after hitting 40,000 database migrations, they've gone past 45,000 over the Thanksgiving holiday last week. Have you started using the AWS Database Migration Service?

Releases

Link List

Upcoming appearances

  • ACMUG 2017 gathering – Beijing, China, December 9-10, 2017 – it was very exciting being there in 2016, and I can only imagine it's going to be bigger and better in 2017, since it is now two days long!
Feedback

I look forward to feedback/tips via e-mail at colin.charles@percona.com or on Twitter @bytebot.

Percona Server for MongoDB 3.4.10-2.10 Is Now Available

Latest MySQL Performance Blog posts - November 30, 2017 - 1:50pm

Percona announces the release of Percona Server for MongoDB 3.4.10-2.10 on November 30, 2017. Download the latest version from the Percona web site or the Percona Software Repositories.

Percona Server for MongoDB is an enhanced, open source, fully compatible, highly-scalable, zero-maintenance downtime database supporting the MongoDB v3.4 protocol and drivers. It extends MongoDB with the Percona Memory Engine and MongoRocks storage engines, as well as several enterprise-grade features.

Percona Server for MongoDB requires no changes to MongoDB applications or code.

This release is based on MongoDB 3.4.10 and includes the following additional change:

  • oplog searches have been optimized in MongoRocks, which should also increase overall performance.


Severalnines ClusterControl Security

Latest Forum Posts - November 30, 2017 - 5:49am
Just looking to see if anyone had insight. The ClusterControl software requires an OS system user to have super user privileges. This seems very risky. I contacted Severalnines asking if they had a complete list of commands the super user might run, and they stated that it would be quite difficult to compile a list.

Has anyone had issues using CC with Percona XtraDB Cluster? Any insight? Thanks!

timestamp issue when migrating from MySQL to Percona

Latest Forum Posts - November 30, 2017 - 5:15am
We are trying to migrate a database used by a webapp from a very old version of MySQL (5.0.95) to Percona (5.6.37) and ran into a problem. In the old database, we can create a table and insert into it with:

CREATE TABLE `test_table` (
`id` int(10) unsigned NOT NULL auto_increment,
`name` varchar(30) default NULL,
`last_modified` timestamp NOT NULL
default CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP,
PRIMARY KEY (`id`));

INSERT INTO test_table
SET name = 'test1';

INSERT INTO test_table
SET name = 'test2',
last_modified = NULL;

and both of the inserts work and we end up with two rows. In Percona, the second insert fails and produces the error:

ERROR 1048 (23000): Column 'last_modified' cannot be null

Of course, the webapp (a PHP program written by several people who are no longer here) produces the second insert, and so the insert fails. Unfortunately, because the SQL is automatically generated by the webapp (the table actually has MANY columns), untangling it all will not be trivial. So, before I undertake that, I want to confirm that the second insert is explicitly no longer allowed. If by some chance this is a bug in our version of Percona, I'll attempt a workaround, but if this insert is no longer allowed, I'll need to figure out a way to fix the PHP.

Thanks for any insights.

how to upgrade Grafana version in pmm docker container?

Latest Forum Posts - November 29, 2017 - 10:43pm
hi,


I want to upgrade the Grafana version in the PMM Docker container.


I tried to execute it like this:

rpm -Uvh <rpm file>

but I couldn't update it, because it conflicts with files from the percona-grafana package.


How can we upgrade the Grafana version?


Thanks.

Percona Monitoring and Management 1.5.1 Is Now Available

Latest MySQL Performance Blog posts - November 29, 2017 - 11:12am

Percona announces the release of Percona Monitoring and Management 1.5.1. This release contains fixes for bugs found after Percona Monitoring and Management 1.5.0 was released.

Bug fixes
  • PMM-1771: When upgrading PMM to 1.5.0 using Docker commands, the PMM System Summary, PMM Add Instance, and PMM Query Analytics dashboards were not available.
  • PMM-1761: The PMM Query Analytics dashboard did not display the list of hosts correctly.
  • PMM-1769: It was possible to add an Amazon RDS instance by providing invalid credentials on the PMM Add Instance dashboard.

Other bug fixes: PMM-1767, PMM-1762

Percona XtraBackup 2.3.10 Is Now Available

Latest MySQL Performance Blog posts - November 29, 2017 - 11:00am

Percona announces the release of Percona XtraBackup 2.3.10 on November 29, 2017. Downloads are available from our download site or Percona Software Repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, it drives down backup costs while providing unique features for MySQL backups.

This release is the current GA (Generally Available) stable release in the 2.3 series.

New Features
  • Packages are now available for Ubuntu 17.10 (Artful).
  • xbcrypt now can decrypt files in parallel by specifying the number of threads with the --encrypt-threads option.
  • Percona XtraBackup --copy-back option can now be used with --parallel option to copy the user data files in parallel (redo logs and system tablespaces are copied in the main thread).
Bugs Fixed:
  • Percona XtraBackup failed to build with GCC 7. Bug fixed #1681721.
  • Percona XtraBackup would crash while preparing the 5.5 backup with utf8_general50_ci collation. Bug fixed #1533722 (Fungo Wang).
  • Percona XtraBackup would crash if --throttle was used while preparing backups. Fixed by making this option available only during the backup process. Bug fixed #1691093.
  • Percona XtraBackup could get stuck if backups are taken with --safe-slave-backup option, while there were long-running queries. Bug fixed #1717158.

Release notes with all the bugfixes for version 2.3.10 are available in our online documentation. Report bugs on the launchpad bug tracker.

Percona XtraBackup 2.4.9 Is Now Available

Latest MySQL Performance Blog posts - November 29, 2017 - 10:55am

Percona announces the GA release of Percona XtraBackup 2.4.9 on November 29, 2017. You can download it from our download site and apt and yum repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, it drives down backup costs while providing unique features for MySQL backups.

New features:
  • Packages are now available for Ubuntu 17.10 (Artful).
  • xbcrypt now can decrypt files in parallel by specifying the number of threads with the --encrypt-threads option.
  • --copy-back option can now be used with --parallel option to copy the user data files in parallel (redo logs and system tablespaces are copied in the main thread).
Bugs Fixed:
  • Percona XtraBackup would fail to backup large databases on 32-bit platforms. Bug fixed #1602537.
  • Percona XtraBackup failed to build with GCC 7. Bug fixed #1681721.
  • Percona XtraBackup would hang during the prepare phase if there was not enough room in log buffer to accommodate checkpoint information at the end of the crash recovery process. Bug fixed #1705383.
  • When backup was streamed in tar format with the --slave-info option, output file xtrabackup_slave_info did not contain the slave information. Bug fixed #1707918.
  • If --slave-info option was used while backing up 5.7 instances, the master binary log coordinates were not properly displayed in the logs. Bug fixed #1711010.
  • innobackupex --slave-info would report a single m instead of slave info in the standard output. Bug fixed #1727920.
  • Percona XtraBackup would crash while preparing the 5.5 backup with utf8_general50_ci collation. Bug fixed #1533722 (Fungo Wang).
  • Percona XtraBackup would crash if --throttle option was used while preparing backups. Fixed by making this option available only during the backup process. Bug fixed #1691093.
  • Percona XtraBackup could get stuck if backups are taken with --safe-slave-backup option, while there were long-running queries. Bug fixed #1717158.

Other bugs fixed: #1678838, #1727922, and #1729241.

Release notes with all the bugfixes for version 2.4.9 are available in our online documentation. Please report any bugs to the launchpad bug tracker.

pt-table-checksum stuck on "Waiting to check replicas for differences"

Latest Forum Posts - November 29, 2017 - 5:38am
When I run pt-table-checksum, it keeps printing:

Waiting to check replicas for differences: 0% 00:00 remain

Waiting to check replicas for differences: 0% 00:00 remain

Waiting to check replicas for differences: 0% 00:00 remain

Waiting to check replicas for differences: 0% 00:00 remain

Waiting to check replicas for differences: 0% 00:00 remain



and I checked the log on the slave; it also loops:
171129 8:34:10 88 Query SELECT MAX(chunk) FROM `percona`.`checksums` WHERE db='test' AND tbl='_logs' AND master_crc IS NOT NULL

171129 8:34:11 88 Query SELECT MAX(chunk) FROM `percona`.`checksums` WHERE db='test' AND tbl='_logs' AND master_crc IS NOT NULL

171129 8:34:12 88 Query SELECT MAX(chunk) FROM `percona`.`checksums` WHERE db='test' AND tbl='_logs' AND master_crc IS NOT NULL

171129 8:34:13 88 Query SELECT MAX(chunk) FROM `percona`.`checksums` WHERE db='test' AND tbl='_logs' AND master_crc IS NOT NULL

171129 8:34:15 88 Query SELECT MAX(chunk) FROM `percona`.`checksums` WHERE db='test' AND tbl='_logs' AND master_crc IS NOT NULL

171129 8:34:16 88 Query SELECT MAX(chunk) FROM `percona`.`checksums` WHERE db='test' AND tbl='_logs' AND master_crc IS NOT NULL

........

MySQL Table Statistics not populating for 2TB Database

Latest Forum Posts - November 29, 2017 - 3:45am
Hi, I have set up PMM as a Docker container and am testing with database servers hosting databases of varying sizes. The MySQL table statistics panel is quite a useful feature, but it doesn't seem to be populating for the largest databases; it does work fine, however, for the smaller databases that are less than half a terabyte in size. So I am guessing some discovery queries are timing out for the larger ones.

Could someone please point out if there is a tweakable element here?

Thanks

Restart Node fails (ubuntu-16.04.3, percona-xtradb-cluster 5.7.19-29.22-3.xenial)

Latest Forum Posts - November 29, 2017 - 2:33am
Hi everyone,

I am currently testing a 2-node + arbitrator setup and am running into some problems when restarting a single node.
After all nodes have joined the cluster, if I restart a single one, it fails to start again and only shows

" WSREP: Failed to recover position:"

After cleaning out the whole datadir it will rejoin the cluster just fine.


wsrep.conf
Code:
[mysqld]
# if cluster is shutdown, and restarted in reverse, try to use IST instead of full SST
# https://www.percona.com/blog/2016/11/30/galera-cache-gcache-finally-recoverable-restart/
#wsrep_provider_options="gcache.size=3G"
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links = 0
# Path to Galera library
wsrep_provider = /usr/lib/galera3/libgalera_smm.so
# In order for Galera to work correctly binlog format should be ROW
binlog_format = ROW
# MyISAM storage engine has only experimental support
default_storage_engine = InnoDB
# Slave thread to use
#wsrep_slave_threads = 16
wsrep_log_conflicts
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode = 2
# Node IP address
wsrep_node_address = 10.10.0.131
# Cluster name
wsrep_cluster_name = percona_cluster_fra
# If wsrep_node_name is not specified, then system hostname will be used
wsrep_node_name = db03
# pxc_strict_mode allowed values: DISABLED,PERMISSIVE,ENFORCING,MASTER
# TODO get devs to fix stuff and switch to ENFORCING
pxc_strict_mode = ENFORCING
# SST method
wsrep_sst_method = xtrabackup-v2
# Authentication for SST method
wsrep_sst_auth = "xtrabackup:password"
# Cluster connection URL contains IPs of nodes
# If no IP is found, this implies that a new cluster needs to be created,
# in order to do that you need to bootstrap this node
wsrep_cluster_address = gcomm://10.10.0.131,10.10.0.132,10.10.0.101
#wsrep_notify_cmd = /usr/local/bin/galeranotify.py

my.cnf
Code:
# Ansible managed
#
# change mysql-prompt
[mysql]
prompt = \u@db03:[\d]>\_

# Template my.cnf for PXC
# Edit to your requirements.
[mysqld]
user = mysql
server-id = 3
datadir = /data/mysql
tmpdir = /tmp
socket = /var/run/mysqld/mysqld.sock
log-error = /var/log/mysqld.log
pid-file = /var/run/mysqld/mysqld.pid
skip-name-resolve
# deactivated bc/keepalived
# bind_address = 10.10.0.131
enforce_gtid_consistency = 1
gtid_mode = on
# set buffer to 70%
innodb_buffer_pool_size = 90069M
innodb_file_per_table = ON
innodb_flush_log_at_trx_commit = 2

# Logging
log-bin = mysql-bin
max_binlog_size = 300000000
log_slave_updates
slow-query-log = true
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 1
log_error_verbosity = 2
expire_logs_days = 4
log_output = file
slow_query_log = ON
long_query_time = 1
log_slow_rate_limit = 100
#log_slow_rate_type = query
log_slow_verbosity = full
log_slow_admin_statements = ON
log_slow_slave_statements = ON
slow_query_log_always_write_time = 1
slow_query_log_use_global_control = all
innodb_monitor_enable = all
userstat = 1
explicit_defaults_for_timestamp = 1

event_scheduler = 1
max_connect_errors = 16385     # block server after this many unsuccessful connections

# slave-replication
#slave_net_timeout = 60
#binlog_cache_size = 2M
#binlog_stmt_cache_size = 2M

# Threading / Processes
thread_cache_size = 1024
max_connections = 8192
back_log = 512                 # default 50 (max. = net.ipv4.tcp_max_syn_backlog = 2048)

# ThreadPool
thread_handling = pool-of-threads
thread_pool_size = 26          # default # of CPUs
thread_pool_stall_limit = 500  # default 500 (ms)
# thread_pool_max_threads = 500  # default 500
# thread_pool_idle_timeout = 60  # default 60 (s)

# Query cache
query_cache_limit = 16M
query_cache_size = 1M

# All storage engines
tmp_table_size = 8192M
max_heap_table_size = 8192M
table_open_cache = 4000

Anyone know what the error might be?

regards,
Roman