
Percona Live Europe Social

Latest MySQL Performance Blog posts - October 4, 2017 - 2:03pm
One for the road…

The social events at Percona Live Europe provide the community with more time to catch up with old friends and make new contacts. The formal sessions provided lots of opportunities for exchanging notes, experiences and ideas. Lunches and coffee breaks proved to be busy too. Even so, what’s better than chilling out over a beer or two (we were in Dublin after all) and enjoying the city nightlife in good company?

Percona Live Europe made it easy for us to get together each evening. A welcome reception (after tutorials) at Sinnott's Pub in the heart of the city hosted a lively crowd. The Community Dinner at the Mercantile Bar, another lively city-center hostelry, was a sell-out. Our closing reception was held at the conference venue, which had proven to be an excellent base.

Many delegates took the chance to enjoy the best of Dublin's hospitality late into the night. It's a credit to their stamina – and the fantastic conference agenda – that the opening keynotes on both Tuesday and Wednesday were very well attended.

In case you think we might have been prioritizing the Guinness, though, there was the little matter of the lightning talks at the Community Dinner. Seven community-minded generous souls gave up some of their valuable socializing time to share insights into matters open source. Thank you again to Renato Losio of Funambol, Anirban Rahut of Facebook, Federico Razzoli of Catawiki, Dana Van Aken of Carnegie Mellon University, Toshaan Bharvani of VanTosh, Balys Kriksciunas of Hostinger International and Vishal Loel of Lazada.

More about the lightning talks can be seen on the Percona Live Europe website.

And so Percona Live Europe drew to a close. Delegates from 43 countries headed home armed with new knowledge, new ideas and new friends. I've put together this video to give a taste of the Percona Live social meetups. Tempted to join us in 2018?

Sláinte!

ClickHouse MySQL Silicon Valley Meetup Wednesday, October 25 at Uber Engineering with Percona’s CTO Vadim Tkachenko

Latest MySQL Performance Blog posts - October 4, 2017 - 1:41pm

I will be presenting at the ClickHouse MySQL Silicon Valley Meetup on Wednesday, October 25, 2017, at 6:30 PM.

ClickHouse is a real-time analytical database system. Even though it is only celebrating its first year as open source software, it has already proved itself ready for serious workloads. We will talk about ClickHouse in general, some of its internals, and why it is so fast. ClickHouse can work in conjunction with MySQL – traditionally weak for analytical workloads – and this presentation demonstrates how to make the two systems work together.

My talk will cover how we can improve the experience with real-time analytics using ClickHouse, and how we can integrate ClickHouse with MySQL.
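As a small taste of that integration, recent ClickHouse versions can read rows straight out of a MySQL server through the mysql() table function (a minimal sketch; the host, schema, table and credentials below are hypothetical):

-- From the ClickHouse side: pull data from a MySQL table on the fly
-- (connection details are placeholders)
SELECT customer_id, count() AS orders
FROM mysql('mysql-host:3306', 'shop', 'orders', 'ch_reader', 'secret')
GROUP BY customer_id
ORDER BY orders DESC
LIMIT 10;

For a permanent mapping, the same connection parameters can be used with the MySQL table engine instead of the table function.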

I want to thank our friends at Uber Engineering who agreed to host this event.

Please join us here: https://www.meetup.com/San-Francisco-Bay-Area-ClickHouse-Meetup/events/243887397/.

Vadim Tkachenko, Percona CTO

Vadim Tkachenko co-founded Percona in 2006 and serves as its Chief Technology Officer. Vadim leads Percona Labs, which focuses on technology research and performance evaluations of Percona’s and third-party products. Percona Labs designs no-gimmick tests of hardware, filesystems, storage engines, and databases that surpass the standard performance and functionality scenario benchmarks.

Vadim's expertise in LAMP performance and multi-threaded programming helps optimize MySQL and InnoDB internals to take full advantage of modern hardware. Oracle Corporation and its predecessors have incorporated Vadim's source code patches into the mainstream MySQL and InnoDB products.

He also co-authored the book High Performance MySQL: Optimization, Backups, and Replication 3rd Edition. Previously, he founded a web development company in his native Ukraine and spent two years in the High Performance Group within the official MySQL support team.

InnoDB: Operating system error number 24 in a file operation... again

Latest Forum Posts - October 4, 2017 - 5:11am
Hi Guys.

I've seen this asked many times before and have followed the advice and guidance, but I'm at a complete loss with this. I have followed the settings to the letter as set out at https://www.percona.com/blog/2016/12...number-tables/

Running 'find /var/lib/mysql/ -name "*.ibd" | wc -l' returns 8338, so setting my limit to 2000000 seems really excessive.

my.cnf (wsrep_cluster_address, wsrep_sst_auth and wsrep_node_address are starred out)

binlog_format = ROW
innodb_buffer_pool_size = 2G
innodb_flush_log_at_trx_commit = 0
innodb_flush_method = O_DIRECT
innodb_log_files_in_group = 2
innodb_log_file_size = 20M
innodb_file_per_table = 1
transaction-isolation = READ-COMMITTED
max_connections = 100

wsrep_cluster_address = ********************
wsrep_provider = /usr/lib64/libgalera_smm.so
wsrep_sst_auth = ***********************
wsrep_slave_threads = 8
wsrep_cluster_name = pptdrupalcluster
wsrep_node_address = *********************
wsrep_node_name = Node2
wsrep_sst_method = xtrabackup-v2
default_storage_engine = InnoDB
datadir = /var/lib/mysql
open_files_limit = 2000000

log_slave_updates
innodb_locks_unsafe_for_binlog = 1
innodb_autoinc_lock_mode = 2

[mysqld_safe]
pid-file = /run/mysqld/mysql.pid
syslog


My systems open file hard limit is set to 2000000 as well.

Excerpt of /etc/security/limits.conf

mysql hard nofile 2000000
mysql soft nofile 2000000

ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 31209
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 31209
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
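Note that ulimit output like the above reflects the current shell, not necessarily the running mysqld process. The limits that actually apply to mysqld can be checked directly (a minimal sketch, assuming a single mysqld instance and a working client login):

# Limits applied to the running mysqld process
grep "open files" /proc/$(pidof mysqld)/limits

# The value the server negotiated at startup
mysql -e "SHOW GLOBAL VARIABLES LIKE 'open_files_limit';"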

Is there something else I'm missing with this?

Thanks
AST

Performance problems in Percona MySQL 5.6 migrating from Ubuntu 14.04LTS to 16.04LTS

Latest Forum Posts - October 3, 2017 - 7:54pm
So the big upgrade from 14.04 to 16.04 came up a little unexpectedly two weeks ago. There were many small package changes, and we went from Percona 5.6.33-79.0-log (on 14.04) to 5.6.37-82.2-log (on 16.04) – and performance has really not been very good since the upgrade. In fact, it fails to meet our minimum requirements. A mass insert process that, triggers and all included, used to run at ~7 r/s now barely musters 0.7 r/s, a figure improved (though not enough to let the daily process complete in under 24h) to 1.5 r/s by setting sync_master_info and sync_relay_log_info to 0.
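For reference, the settings mentioned above correspond to these my.cnf entries (a sketch of the values described in the post, not a general recommendation, since 0 trades crash safety of the replication metadata for speed):

[mysqld]
# flush master.info / relay-log.info to disk only when the OS decides to,
# rather than after every event
sync_master_info    = 0
sync_relay_log_info = 0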

Has anyone else experienced this performance regression issue upgrading from Trusty to Xenial, or have any ideas where I might look to see what is going on?

MyRocks Metrics Now in PMM 1.3.0

Latest MySQL Performance Blog posts - October 3, 2017 - 3:26pm

One of the most exciting features shipped in the Percona Monitoring and Management 1.3.0 (PMM) release is support for MyRocks metrics via a new Metrics Monitor dashboard titled MySQL MyRocks Metrics. The support in PMM follows the recent Percona Server for MySQL release 5.7.19 from September 6, where Percona delivered an EXPERIMENTAL version of MyRocks for non-Production usage.

The MyRocks storage engine from Facebook is based on RocksDB, a persistent key-value store for fast storage environments. MyRocks is optimized for fast storage and combines outstanding space and write efficiency with acceptable read performance. As a result, MyRocks has the following advantages compared to other storage engines (if your workload uses fast storage, such as SSD):

  • Requires less storage space
  • Provides more storage endurance
  • Ensures better IO capacity
MyRocks Database Operations

This graph will help you visualize MyRocks database operations of Next and Seek attributes:

MyRocks Cache Activity

We also have a graph to help you visualize the count of Hits and Misses on the MyRocks cache:

MyRocks Cache Data Bytes Read/Write

Finally, another important MyRocks graph will help you understand the volume of data read and written to the MyRocks cache:

Please note that the MyRocks storage engine is not suitable (yet) for production workloads, but if you are testing this technology take a moment to install PMM in order to take advantage of our new MySQL MyRocks Metrics dashboard!

In PMM, you can view the metrics provided by the information schema, as well as various data reported by the RocksDB engine status for your MySQL database instance.
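If you want to look at the raw counters behind those graphs, a minimal sketch (assuming a Percona Server 5.7.19 build with the MyRocks plugin enabled):

-- Full RocksDB status dump (compaction, cache and memtable details)
SHOW ENGINE ROCKSDB STATUS;

-- Aggregated counters exposed through the information schema
SELECT * FROM information_schema.ROCKSDB_DBSTATS;
SELECT * FROM information_schema.ROCKSDB_CFSTATS;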

PMM Grafana Dashboard doesn't show monitored host

Latest Forum Posts - October 3, 2017 - 2:01pm
I installed PMM server in one AWS EC2 instance and PMM client on another EC2 instance. I configured http, https and ssh on both servers. However, after I started the data collection, I only got QAN information, but not system metrics because the client host is not listed under PMM server. I checked the network status and it shows the below:

[ec2-user@ip-172-31-26-150 ~]$ sudo pmm-admin check-network
PMM Network Status

Server Address | 172.31.16.149
Client Address | 172.31.26.150

* System Time
NTP Server (0.pool.ntp.org) | 2017-10-03 20:54:18 +0000 UTC
PMM Server | 2017-10-03 20:54:18 +0000 GMT
PMM Client | 2017-10-03 20:54:18 +0000 UTC
PMM Server Time Drift | OK
PMM Client Time Drift | OK
PMM Client to PMM Server Time Drift | OK

* Connection: Client --> Server
-------------------- -------
SERVER SERVICE STATUS
-------------------- -------
Consul API OK
Prometheus API OK
Query Analytics API OK

Connection duration | 313.172µs
Request duration | 574.062µs
Full round trip | 887.234µs


* Connection: Client <-- Server
-------------- ----------------- -------------------- ------- ---------- ---------
SERVICE TYPE NAME REMOTE ENDPOINT STATUS HTTPS/TLS PASSWORD
-------------- ----------------- -------------------- ------- ---------- ---------
linux:metrics ip-172-31-26-150 172.31.26.150:42000 DOWN YES YES
mysql:metrics ip-172-31-26-150 172.31.26.150:42002 DOWN YES YES

[ec2-user@ip-172-31-26-150 ~]$ sudo pmm-admin list
pmm-admin 1.3.1

PMM Server | 172.31.16.149 (password-protected)
Client Name | ip-172-31-26-150
Client Address | 172.31.26.150
Service Manager | unix-systemv

-------------- ----------------- ----------- -------- ----------------------------------------- ---------------------------------------------
SERVICE TYPE NAME LOCAL PORT RUNNING DATA SOURCE OPTIONS
-------------- ----------------- ----------- -------- ----------------------------------------- ---------------------------------------------
mysql:queries ip-172-31-26-150 - YES root:***@unix(/var/lib/mysql/mysql.sock) query_source=perfschema, query_examples=true
linux:metrics ip-172-31-26-150 42000 YES -
mysql:metrics ip-172-31-26-150 42002 YES root:***@unix(/var/lib/mysql/mysql.sock)
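For reference, since the Client <-- Server checks report the exporters as DOWN while they are running locally, one quick test is whether the PMM server host can actually reach the client's exporter ports (a hedged sketch run from the PMM server host; a 401 response still proves the port is reachable, since the exporters here are password-protected):

# Run from the PMM server host: can it reach the client-side exporters?
curl -k -o /dev/null -w "%{http_code}\n" https://172.31.26.150:42000/metrics
curl -k -o /dev/null -w "%{http_code}\n" https://172.31.26.150:42002/metrics

If these time out, the usual suspect on EC2 is a security group or firewall that does not allow the server to reach ports 42000-42002 on the client.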

How can I fix this?

Thanks,
Jie

Webinar October 4, 2017: Databases in the Hosted Cloud

Latest MySQL Performance Blog posts - October 3, 2017 - 10:33am

Join Percona’s Chief Evangelist, Colin Charles as he presents Databases in the Hosted Cloud on Wednesday, October 4, 2017, at 7:00 am PDT / 10:00 am EDT (UTC-7).

Register Now

Today you can use hosted MySQL/MariaDB/Percona Server for MySQL/PostgreSQL in several “cloud providers” as a database as a service (DBaaS). Learn the differences, the access methods and the level of control you have for the various public databases in the hosted cloud offerings:

  • Amazon RDS including Aurora
  • Google Cloud SQL
  • Rackspace OpenStack DBaaS
  • Oracle Cloud’s MySQL Service

The administration tools and ideologies behind each are completely different, and you are in a “locked-down” environment. Some considerations include:

  • Different backup strategies
  • Planning for multiple data centers for availability
  • Where do you host your application?
  • How do you get the most performance out of the solution?
  • What does this all cost?
  • Monitoring

Growth topics include:

  • How do you move from one DBaaS to another?
  • How do you move from a DBaaS to your own hosted platform?

Register for the webinar here.

Colin Charles, Chief Evangelist

Colin Charles is the Chief Evangelist at Percona. He was previously on the founding team of MariaDB Server in 2009, has worked on MySQL since 2005, and has been a MySQL user since 2000. Before joining MySQL, he worked actively on the Fedora and OpenOffice.org projects. He's well known within many open source communities and has spoken on the conference circuit.

 

Batch deletes from big table causing wsrep system lock

Latest Forum Posts - October 3, 2017 - 1:36am
On our production server we have a big table which runs to 450 GB. The table is partitioned by time into monthly partitions.

The delete is performed using a stored procedure. When we execute the SP, we get the following warning in the server logs on all the nodes, and at the same time all other transactions wait for this SP to complete.

2017-10-03 09:43:10 12355 [Warning] WSREP: Failed to report last committed 33482996, -4 (Interrupted system call)
2017-10-03 09:43:12 12355 [Warning] WSREP: Failed to report last committed 33482997, -4 (Interrupted system call)
2017-10-03 09:43:20 12355 [Warning] WSREP: Failed to report last committed 33482999, -4 (Interrupted system call)
2017-10-03 09:43:33 12355 [Warning] WSREP: Failed to report last committed 33483002, -4 (Interrupted system call)
2017-10-03 09:43:50 12355 [Warning] WSREP: Failed to report last committed 33483007, -4 (Interrupted system call)
2017-10-03 09:44:25 12355 [Warning] WSREP: Failed to report last committed 33483018, -4 (Interrupted system call)

The delete is performed in a while loop and each batch is limited to 10K rows.

Below is the SP, used for the delete operation.

DELIMITER $$
CREATE PROCEDURE `spc_Util_DeleteTsPassages`(p_tsId INT, p_startDate DATE, p_endDate DATE)
BEGIN
tspassages_Block:BEGIN
DECLARE v_tsId, v_deleteStartDate, v_deleteEndDate INT;
DECLARE v_Process VARCHAR(256);
DECLARE v_count BIGINT;

SET v_count = 0;
SET v_tsId = p_tsId;
SET v_Process = 'tspassages - Data Deletion';
SET v_deleteStartDate = UNIX_TIMESTAMP(p_startDate);
SET v_deleteEndDate = UNIX_TIMESTAMP(p_endDate);

-- select v_tsId, v_deleteStartDate, v_deleteEndDate;

INSERT INTO dataDeleteAuditLogs(PROCESS,LogDate,Descriptions)
VALUES(v_Process,NOW(),CONCAT('Deletion started for tsId : ', v_tsId, '; FromDate : ',p_startDate, ' ToDate : ', p_endDate));

WHILE EXISTS(SELECT passageTime FROM tspassages WHERE passageTime >= v_deleteStartDate AND passageTime < v_deleteEndDate AND tsId = v_tsId LIMIT 1)
DO
-- Deleting from tspassages
DELETE FROM tspassages
WHERE passageTime >= v_deleteStartDate AND passageTime < v_deleteEndDate AND tsId = v_tsId
LIMIT 10000;
SET v_count = v_count + ROW_COUNT();
END WHILE;
INSERT INTO dataDeleteAuditLogs(PROCESS,LogDate,Descriptions)
VALUES('tspassages - Data Deletion',NOW(),CONCAT('Deletion process completed for tsId : ', v_tsId, '; Total records deleted is : ',v_count));
END;
END$$

DELIMITER ;


Table definition

CREATE TABLE `tspassages` (
`rowId` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`tsId` mediumint(8) unsigned NOT NULL,
`camId` tinyint(3) unsigned NOT NULL,
`passageTime` int(11) unsigned NOT NULL,
`passageMs` smallint(3) unsigned NOT NULL,
`passageId` smallint(5) unsigned NOT NULL,
`lP` varchar(15) NOT NULL,
`isCalib` tinyint(1) unsigned NOT NULL,
`Confidence` smallint(5) unsigned NOT NULL,
`RDWType` tinyint(2) unsigned NOT NULL,
PRIMARY KEY (`rowId`,`passageTime`),
UNIQUE KEY `uidx_tspassages` (`tsId`,`camId`,`passageTime`,`passageMs`),
KEY `Idx_tspassage_passagetime` (`passageTime`),
KEY `idx_tspassage_tsId` (`tsId`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
PARTITION BY RANGE (PassageTime)
(PARTITION 2017JAN VALUES LESS THAN (1485903600) ENGINE = InnoDB,
PARTITION 2017FEB VALUES LESS THAN (1488322800) ENGINE = InnoDB,
PARTITION 2017MAR VALUES LESS THAN (1490997600) ENGINE = InnoDB,
PARTITION 2017APR VALUES LESS THAN (1493589600) ENGINE = InnoDB,
PARTITION 2017MAY VALUES LESS THAN (1496268000) ENGINE = InnoDB,
PARTITION 2017JUN VALUES LESS THAN (1498860000) ENGINE = InnoDB,
PARTITION 2017JUL VALUES LESS THAN (1501538400) ENGINE = InnoDB,
PARTITION 2017AUG VALUES LESS THAN (1504216800) ENGINE = InnoDB,
PARTITION 2017SEP VALUES LESS THAN (1506808800) ENGINE = InnoDB,
PARTITION 2017OCT VALUES LESS THAN (1509490800) ENGINE = InnoDB,
PARTITION 2017NOV VALUES LESS THAN (1512082800) ENGINE = InnoDB,
PARTITION 2017DEC VALUES LESS THAN (1514761200) ENGINE = InnoDB)


Our production setup has 3 nodes running Server version 5.6.35-80.0-56-log.

Why am I getting a WSREP system lock for a batch delete operation and how can I mitigate this issue ?


With Regards,
Raghupradeep

Sleeping Connection and Records_Read > 0

Latest Forum Posts - October 2, 2017 - 7:52pm
Hi, we are troubleshooting a web application where a form takes a long time to load: some 8 seconds. We used SHOW PROCESSLIST and can see that the web server's Java application opens a connection. The connection command is always Sleep. It stays in that state for about 8 seconds. The Info field shows up as NULL and Rows_read shows up as 199. Once the form has loaded on the web server, the connection is dropped in MySQL. I'm a little puzzled to explain why the connection has a command of Sleep and doesn't show what SQL it is executing in the Info field, yet shows that it read 199 records. Or could this be a pooled connection, with Rows_read reflecting the rows that were read through this connection over its lifetime? Thanks for any pointers.
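One way to see what a sleeping connection executed just before it went idle is the Performance Schema statement history (a minimal sketch; it assumes the events_statements_history consumer is enabled, and 1234 stands in for the connection's processlist Id):

SELECT h.event_id, h.sql_text, h.rows_examined
FROM performance_schema.events_statements_history h
JOIN performance_schema.threads t USING (thread_id)
WHERE t.processlist_id = 1234
ORDER BY h.event_id;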

Big Dataset: All Reddit Comments – Analyzing with ClickHouse

Latest MySQL Performance Blog posts - October 2, 2017 - 5:11pm

In this blog, I’ll use ClickHouse and Tabix to look at a new very large dataset for research.

It is hard to come across interesting datasets, especially big ones (and by big I mean one billion rows or more). Previously, I've used the on-time airline performance data available from the Bureau of Transportation Statistics. Another recent example is the NYC Taxi and Uber Trips data, with over one billion records.

However, today I want to mention an interesting dataset I found recently that has been available since 2015. This is Reddit's comments and submissions dataset, made possible thanks to Reddit's generous API. The dataset was first mentioned in "I have every publicly available Reddit comment for research," and currently you can find it at pushshift.io. However, there is no guarantee that pushshift.io will provide this dataset in the future. I think it would be valuable if Amazon or another cloud provider made this dataset available for researchers, just as Amazon provides https://aws.amazon.com/public-datasets/.

The dataset contains 2.86 billion records through the end of 2016 and is 709GB in size, uncompressed. It is valuable for a variety of research scenarios, from simple stats to natural language processing and machine learning.

Now let's see what simple info we can collect from this dataset using ClickHouse and https://tabix.io/, a GUI tool for ClickHouse. In this first round, we'll figure out some basic stats, like the number of comments per month, the number of authors per month and the number of subreddits. I've also added how many comments, on average, are left per post.

Queries to achieve this:

SELECT toYYYYMM(created_date) dt, count(*) comments FROM commententry1 GROUP BY dt ORDER BY dt;

SELECT toYYYYMM(created_date) dt, count(DISTINCT author) authors FROM commententry1 GROUP BY dt ORDER BY dt;

SELECT toYYYYMM(created_date) dt, count(DISTINCT subreddit) subreddits FROM commententry1 GROUP BY dt ORDER BY dt;

SELECT toYYYYMM(created_date) dt, count(*)/count(DISTINCT link_id) comments_per_post FROM commententry1 GROUP BY dt ORDER BY dt;

And the graphical result:


It's impressive to see the constant growth in comments (up to 70 million per month by the end of 2016) and authors (up to 3.5 million for the same time period). There is something interesting happening with subreddits, which jump up and down. It's also interesting to see that the average count of comments per post stays stable, with a slight decline to 13 comments per post by the end of 2016.

Now let’s check most popular subreddits:

SELECT subreddit, count(*) cnt
FROM commententry1
GROUP BY subreddit
ORDER BY cnt DESC
LIMIT 100
DRAW_TREEMAP
{
  path:'subreddit.cnt'
}

and using a treemap (available in Tabix.io):

We can measure subreddits that get the biggest increase in comments in 2016 compared to 2015:

SELECT subreddit, cntnew - cntold diff
FROM
  (SELECT subreddit, count(*) cntnew FROM commententry1 WHERE toYear(created_date)=2016 GROUP BY subreddit)
ALL INNER JOIN
  (SELECT subreddit, count(*) cntold FROM commententry1 WHERE toYear(created_date)=2015 GROUP BY subreddit)
USING (subreddit)
ORDER BY diff DESC
LIMIT 50
DRAW_TREEMAP
{
  path:'subreddit.diff'
}

Obviously, Reddit was affected by the United States Presidential Election 2016, but not just that. The gaming community saw an increase in Overwatch, PokemonGO and Dark Souls 3.

Now we can try to run our own DB-Ranking, but only based on Reddit comments. This is how I can do this for MySQL, PostgreSQL and MongoDB:

SELECT toStartOfQuarter(created_date) Quarter,
  sum(if(positionCaseInsensitive(body, 'mysql') > 0, 1, 0)) mysql,
  sum(if(positionCaseInsensitive(body, 'postgres') > 0, 1, 0)) postgres,
  sum(if(positionCaseInsensitive(body, 'mongodb') > 0, 1, 0)) mongodb
FROM commententry1
GROUP BY Quarter
ORDER BY Quarter;

I would say the result is aligned with https://db-engines.com/en/ranking, where MySQL is the most popular among the three, followed by PostgreSQL and then MongoDB. There is an interesting spike for PostgreSQL in the second quarter of 2015, caused by a bot in "leagueoflegends" tournament threads. The bot was actively announcing in the comments that it is powered by PostgreSQL, like this: http://reddit.com/r/leagueoflegends/comments/37cvc3/c/crln2ef.

To highlight more ClickHouse features: along with standard SQL functions, it provides a variety of statistical functions (for example, Quantile calculations). We can try to see the distribution of the number of comments left by authors:

SELECT
  quantileExact(0.1)(cnt),
  quantileExact(0.2)(cnt),
  quantileExact(0.3)(cnt),
  quantileExact(0.4)(cnt),
  quantileExact(0.5)(cnt),
  quantileExact(0.6)(cnt),
  quantileExact(0.7)(cnt),
  quantileExact(0.8)(cnt),
  quantileExact(0.9)(cnt),
  quantileExact(0.99)(cnt)
FROM
(
  SELECT author, count(*) AS cnt
  FROM commententry1
  WHERE author != '[deleted]'
  GROUP BY author
)

The result is:

quantileExact(0.1)(cnt)  - 1
quantileExact(0.2)(cnt)  - 1
quantileExact(0.3)(cnt)  - 1
quantileExact(0.4)(cnt)  - 2
quantileExact(0.5)(cnt)  - 4
quantileExact(0.6)(cnt)  - 7
quantileExact(0.7)(cnt)  - 16
quantileExact(0.8)(cnt)  - 42
quantileExact(0.9)(cnt)  - 160
quantileExact(0.99)(cnt) - 2271

Which means that 30% of authors left only one comment, and 50% of authors left four comments or less.

In general, ClickHouse was a pleasure to use when running analytical queries. However, I should note that the missing support for window functions is a huge limitation. Even MySQL 8.0, which was recently released as an RC, provides support for window functions. I hope ClickHouse will implement them as well.

Percona Monitoring and Management 1.3.1 Is Now Available

Latest Forum Posts - October 2, 2017 - 5:21am
Percona announces the release of Percona Monitoring and Management 1.3.1. This release only contains bug fixes related to usability.

For install and upgrade instructions, see Deploying Percona Monitoring and Management.

Bug fixes

  • PMM-1271: In QAN, when the user selected a database host with no queries, the query monitor could still show metrics.
  • PMM-1512: When clicking the QAN in Grafana, QAN would open the home page. Now, QAN opens and automatically selects the database host and time range active in Grafana.
  • PMM-1523: User-defined Prometheus memory settings were not honored, potentially causing performance issues in high load environments.
Other bug fixes in this release: PMM-1452, PMM-1515.

About Percona Monitoring and Management

Percona Monitoring and Management (PMM) is an open-source platform for managing and monitoring MySQL and MongoDB performance. Percona developed it in collaboration with experts in the field of managed database services, support and consulting.

PMM is a free and open-source solution that you can run in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

A live demo of PMM is available at pmmdemo.percona.com.

We’re always happy to help! Please provide your feedback and questions on the PMM forum.

If you would like to report a bug or submit a feature request, please use the PMM project in JIRA.

Percona XtraDB Cluster 5.7.19-29.22 is now available

Latest Forum Posts - October 2, 2017 - 5:19am
Percona announces the release of Percona XtraDB Cluster 5.7.19-29.22 on September 22, 2017. Binaries are available from the downloads section or our software repositories.

NOTE: You can also run Docker containers from the images in the Docker Hub repository.

Percona XtraDB Cluster 5.7.19-29.22 is now the current release, based on the following:

All Percona software is open-source and free.

Upgrade Instructions

After you upgrade each node to Percona XtraDB Cluster 5.7.19-29.22, run the following command on one of the nodes:
$ mysql -uroot -p < /usr/share/mysql/pxc_cluster_view.sql

Then restart all nodes, one at a time:

$ sudo service mysql restart

New Features

  • Introduced the pxc_cluster_view table to get a unified view of the cluster. This table is exposed through the performance schema.

    mysql> select * from pxc_cluster_view;
    +-----------+--------------------------------------+--------+-------------+---------+
    | HOST_NAME | UUID                                 | STATUS | LOCAL_INDEX | SEGMENT |
    +-----------+--------------------------------------+--------+-------------+---------+
    | n1        | b25bfd59-93ad-11e7-99c7-7b26c63037a2 | DONOR  | 0           | 0       |
    | n2        | be7eae92-93ad-11e7-88d8-92f8234d6ce2 | JOINER | 1           | 0       |
    +-----------+--------------------------------------+--------+-------------+---------+
    2 rows in set (0.01 sec)
  • PXC-803: Added support for new features in Percona XtraBackup 2.4.7:
    • wsrep_debug enables debug logging
    • encrypt_threads specifies the number of threads that XtraBackup should use for encrypting data (when encrypt=1). This value is passed using the --encrypt-threads option in XtraBackup.
    • backup_threads specifies the number of threads that XtraBackup should use to create backups. See the --parallel option in XtraBackup.
Improvements

  • PXC-835: Limited wsrep_node_name to 64 bytes.
  • PXC-846: Improved logging to report reason of IST failure.
  • PXC-851: Added version compatibility check during SST with XtraBackup:
    • If a donor is 5.6 and a joiner is 5.7: A warning is printed to perform mysql_upgrade.
    • If a donor is 5.7 and a joiner is 5.6: An error is printed and SST is rejected.
Fixed Bugs

  • PXC-825: Fixed script for SST with XtraBackup (wsrep_sst_xtrabackup-v2) to include the --defaults-group-suffix when logging to syslog. For more information, see #1559498.
  • PXC-826: Fixed multi-source replication to PXC node slave. For more information, see #1676464.
  • PXC-827: Fixed handling of different binlog names between donor and joiner nodes when GTID is enabled. For more information, see #1690398.
  • PXC-830: Rejected the RESET MASTER operation when wsrep provider is enabled and gtid_mode is set to ON. For more information, see #1249284.
  • PXC-833: Fixed connection failure handling during SST by making the donor retry connection to joiner every second for a maximum of 30 retries. For more information, see #1696273.
  • PXC-839: Fixed GTID inconsistency when setting gtid_next.
  • PXC-840: Fixed typo in alias for systemd configuration.
  • PXC-841: Added check to avoid replication of DDL if sql_log_bin is disabled. For more information, see #1706820.
  • PXC-842: Fixed deadlocks during Load Data Infile (LDI) with log-bin disabled by ensuring that a new transaction (of 10 000 rows) starts only after the previous one is committed by both wsrep and InnoDB. For more information, see #1706514.
  • PXC-843: Fixed situation where the joiner hangs after SST has failed by dropping all transactions in the receive queue. For more information, see #1707633.
  • PXC-853: Fixed cluster recovery by enabling wsrep_ready whenever nodes become PRIMARY.
  • PXC-862: Fixed script for SST with XtraBackup (wsrep_sst_xtrabackup-v2) to use the ssl-dhparams value from the configuration file.
Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

One Million Tables in MySQL 8.0

Latest MySQL Performance Blog posts - October 1, 2017 - 7:26pm

In my previous blog post, I talked about new general tablespaces in MySQL 8.0. Recently MySQL 8.0.3-rc was released, which includes a new data dictionary. My goal is to create one million tables in MySQL and test the performance.

Background questions

Q: Why a million tables in MySQL? Is it even realistic? How does this happen?

Usually, millions of tables in MySQL is a result of “a schema per customer” Software as a Service (SaaS) approach. For the purposes of customer data isolation (security) and logical data partitioning (performance), each “customer” has a dedicated schema. You can think of a WordPress hosting service (or any CMS based hosting) where each customer has their own dedicated schema. With 10K customers per MySQL server, we could end up with millions of tables.

Q: Should you design an application with >1 million tables?

Having separate tables is one of the easiest designs for a multi-tenant or SaaS application, and makes it easy to shard and re-distribute your workload between servers. In fact, the table-per-customer or schema-per-customer design has the quickest time-to-market, which is why we see it a lot in consulting. In this post, we are not aiming to cover the merits of whether you should do this (if your application has high churn or millions of free users, for example, it might not be a good idea). Instead, we will focus on whether the new data dictionary provides relief for a historical pain point.

Q: Why is one million tables a problem?

The main issue results from the fact that MySQL needs to open (and eventually close) the table structure file (the FRM file). With one million tables, we are talking about at least one million files. MySQL originally addressed this with table_open_cache and table_definition_cache. However, the maximum value for table_open_cache is 524288. In addition, it is split into 16 partitions by default (to reduce contention), so it is not ideal. MySQL 8.0 has removed FRM files for InnoDB, and will now allow you to create general tablespaces. I've demonstrated how we can create a tablespace per customer in MySQL 8.0, which is ideal for the "schema-per-customer" approach (we can move/migrate one customer's data to a new server by importing/exporting the tablespace).
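As a quick reference, both caches can be inspected at runtime, and a cache that keeps overflowing shows up as steadily growing miss and overflow counters (a minimal sketch; only standard variable and status names are used):

-- Current cache sizing
SHOW GLOBAL VARIABLES LIKE 'table_open_cache%';
SHOW GLOBAL VARIABLES LIKE 'table_definition_cache';

-- How the caches are doing: misses and overflows should stay low
SHOW GLOBAL STATUS LIKE 'Table_open_cache_%';
SHOW GLOBAL STATUS LIKE 'Opened_table%';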

One million tables in MySQL 5.7

Recently, I created a test with one million tables. The test creates 10K databases, and each database contains 100 tables. To use a standard benchmark, I employed the sysbench table structure.

mysql> select count(*) from information_schema.schemata where schema_name like 'test_sbtest%';
+----------+
| count(*) |
+----------+
|    10000 |
+----------+
1 row in set (0.01 sec)

mysql> select count(*) from information_schema.tables where table_schema like 'test_sbtest%';
+----------+
| count(*) |
+----------+
|  1000000 |
+----------+
1 row in set (4.61 sec)

This also creates a huge overhead: with one million tables we have ~two million files. The .frm and .ibd files together add up to 175G:

# du -sh /ssd/mysql_57
175G    /ssd/mysql_57

Now I've used a sysbench Lua script to insert one row randomly into one table:

pathtest = "/usr/share/sysbench/tests/include/oltp_legacy/"
if pathtest then
   dofile(pathtest .. "common.lua")
else
   require("common")
end

function event()
   local table_name
   local i
   local c_val
   local k_val
   local pad_val
   local oltp_tables_count = 100
   local oltp_db_count = 10000
   -- pick a random database and a random table within it
   table_name = "test_sbtest_" .. sb_rand_uniform(1, oltp_db_count) .. ".sbtest" .. sb_rand_uniform(1, oltp_tables_count)
   k_val = sb_rand(1, oltp_table_size)
   c_val = sb_rand_str([[ ###########-###########-###########-###########-###########-###########-###########-###########-###########-###########]])
   pad_val = sb_rand_str([[ ###########-###########-###########-###########-###########]])
   rs = db_query("INSERT INTO " .. table_name .. " (id, k, c, pad) VALUES " .. string.format("(%d, %d, '%s', '%s')", i, k_val, c_val, pad_val))
end

With:

local oltp_tables_count = 100
local oltp_db_count = 10000

Sysbench will choose one table randomly out of one million. With oltp_tables_count = 1 and oltp_db_count = 100, it will only choose the first table (sbtest1) out of the first 100 databases (randomly).

As expected, MySQL 5.7 shows a huge performance degradation when going across one million tables. When running a script that only inserts data into 100 random tables, we see ~150K transactions per second. When the data is inserted into one million tables (chosen randomly), performance drops to 2K (!) transactions per second:

Insert into 100 random tables:

SQL statistics:
    queries performed:
        read:    0
        write:   16879188
        other:   0
        total:   16879188
    transactions:   16879188 (140611.72 per sec.)
    queries:        16879188 (140611.72 per sec.)
    ignored errors: 0        (0.00 per sec.)
    reconnects:     0        (0.00 per sec.)

Insert into one million random tables:

SQL statistics:
    queries performed:
        read:    0
        write:   243533
        other:   0
        total:   243533
    transactions:   243533 (2029.21 per sec.)
    queries:        243533 (2029.21 per sec.)
    ignored errors: 0      (0.00 per sec.)
    reconnects:     0      (0.00 per sec.)

This is expected. Here I'm testing the worst-case scenario, where we can't keep all table open handlers and table definitions in cache (memory), since table_open_cache and table_definition_cache both have a limit of 524288.

Also, we can normally expect a huge skew in access across the tables. There might be only 20% active customers (the 80-20 rule), meaning that we can expect active access to only 2K databases. In addition, there will be old or unused tables, so we can expect around 100K or fewer active tables.

Hardware and config files

The above results are from this server:

Processors   | 64xGenuine Intel(R) CPU @ 2.00GHz
Memory Total | 251.8G
Disk         | Samsung 950 Pro PCIE SSD (nvme)

Sysbench script:

sysbench $conn --report-interval=1 --num-threads=32 --max-requests=0 --max-time=600 --test=/root/drupal_demo/insert_custom.lua run

My.cnf:

innodb_buffer_pool_size = 100G
innodb_io_capacity = 20000
innodb_flush_log_at_trx_commit = 0
innodb_log_file_size = 2G
innodb_flush_method = O_DIRECT_NO_FSYNC
skip-log-bin
open_files_limit = 1000000
table_open_cache = 524288
table_definition_cache = 524288

One million tables in MySQL 8.0 + general tablespaces

In MySQL 8.0 it is easy and logical to create one general tablespace per schema (it will host all tables in that schema). In MySQL 5.7, general tablespaces are also available – but there are still .frm files.

I’ve used the following script to create 100 tables in one schema all in one tablespace:

mysql test -e "CREATE TABLESPACE t ADD DATAFILE 't.ibd' engine=InnoDB;" for i in {1..10000} do mysql test -e "create table ab$i(i int) tablespace t" done

The new MySQL 8.0.3-rc also uses the new data dictionary, so all MyISAM tables in the mysql schema are removed and all metadata is stored in the additional mysql.ibd file.

Creating one million tables

Creating InnoDB tables fast enough can be a task by itself. Stewart Smith published a blog post a while ago where he focused on optimizing time to create 30K tables in MySQL.

The problem is that after creating an .ibd file, MySQL needs to “fsync” it. However, when creating a table inside the tablespace, there is no fsync. I’ve created a simple script to create tables in parallel, one thread per database:

#!/bin/bash
# one thread per database: create the schema, its tablespace, and 100 sysbench-style tables
function do_db {
  mysql -vvv -e "create database $db";
  mysql -vvv $db -e "CREATE TABLESPACE $db ADD DATAFILE '$db.ibd' engine=InnoDB;"
  for i in {1..100}
  do
    table="CREATE TABLE sbtest$i (
      id int(10) unsigned NOT NULL AUTO_INCREMENT,
      k int(10) unsigned NOT NULL DEFAULT '0',
      c varchar(120) NOT NULL DEFAULT '',
      pad varchar(60) NOT NULL DEFAULT '',
      PRIMARY KEY (id),
      KEY k_1 (k)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 tablespace $db;"
    mysql $db -e "$table"
  done
}
c=0
for m in {1..100}
do
  for i in {1..100}
  do
    let c=$c+1
    echo $c
    db="test_sbtest_$c"
    do_db &
  done
  wait
done

That script works perfectly in MySQL 8.0.1-dmr and creates one million tables in 25 minutes and 28 seconds (1528 seconds). That is ~654 tables per second: significantly faster than the ~30 tables per second in Stewart's original test, and 2x faster than a test where all fsyncs were artificially disabled using the libeatmydata library.

Unfortunately, in MySQL 8.0.3-rc a regression was introduced. In MySQL 8.0.3-rc I can see heavy mutex contention, and the table creation time grew from 25 minutes to ~280 minutes. I've filed a bug report: performance regression: "create table" speed and scalability in 8.0.3.

Size on disk

With general tablespaces and no .frm files, the size on disk decreased:

# du -h -d1 /ssd/
147G    /ssd/mysql_801
119G    /ssd/mysql_803
175G    /ssd/mysql_57

Please note, though, that in MySQL 8.0.3-rc, with the new native data dictionary, the size on disk increases, as the server needs to write additional information (Serialized Dictionary Information, SDI) to the tablespace files:

InnoDB: Serialized Dictionary Information (SDI) is now present in all InnoDB tablespace files except for temporary tablespace and undo tablespace files. SDI is serialized metadata for schema, table, and tablespace objects. The presence of SDI data provides metadata redundancy. ...

The inclusion of SDI data in tablespace files increases tablespace file size. An SDI record requires a single index page, which is 16k in size by default. However, SDI data is compressed when it is stored to reduce the storage footprint.

The general mysql data dictionary in MySQL 8.0.3 is 6.6Gb:

6.6G /ssd/mysql/mysql.ibd
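If you are curious what that serialized metadata looks like, MySQL 8.0 ships an ibd2sdi utility that dumps the SDI stored in a tablespace file as JSON (a minimal sketch; the output path is arbitrary, and availability of the tool in the 8.0.3-rc packages is an assumption):

# Dump the serialized dictionary information from the data dictionary tablespace
ibd2sdi /ssd/mysql/mysql.ibd > mysql_sdi.json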

Benchmarking the insert speed in MySQL 8.0 

I've repeated the same test I did for MySQL 5.7 in MySQL 8.0.3-rc (and in 8.0.1-dmr), but using general tablespaces. I created 10K databases (= 10K tablespace files); each database has 100 tables and each database resides in its own tablespace.

There are two new tablespace level caches we can use in MySQL 8.0: tablespace_definition_cache and schema_definition_cache:

tablespace_definition_cache = 15000
schema_definition_cache = 524288

Unfortunately, with one million random table accesses in MySQL 8.0 (both 8.0.1 and 8.0.3), we can still see that it stalls on opening tables (even with no .frm files and general tablespaces):

mysql> select conn_id, current_statement, state, statement_latency, lock_latency from sys.processlist where current_statement is not null and conn_id <> CONNECTION_ID(); +---------+-------------------------------------------------------------------+----------------+-------------------+--------------+ | conn_id | current_statement | state | statement_latency | lock_latency | +---------+-------------------------------------------------------------------+----------------+-------------------+--------------+ | 199 | INSERT INTO test_sbtest_9749.s ... 8079-53209333270-93105555128') | Opening tables | 4.45 ms | 0 ps | | 198 | INSERT INTO test_sbtest_1863.s ... 9574-29782886623-39251573705') | Opening tables | 9.95 ms | 5.67 ms | | 189 | INSERT INTO test_sbtest_3948.s ... 9365-63502117132-66650064067') | Opening tables | 16.29 ms | 15.38 ms | | 190 | INSERT INTO test_sbtest_6885.s ... 8436-41291265610-60894472357') | Opening tables | 13.78 ms | 9.52 ms | | 191 | INSERT INTO test_sbtest_247.sb ... 7467-89459234028-92064334941') | Opening tables | 8.36 ms | 3.18 ms | | 192 | INSERT INTO test_sbtest_9689.s ... 8058-74586985382-00185651578') | Opening tables | 6.89 ms | 0 ps | | 193 | INSERT INTO test_sbtest_8777.s ... 1900-02582963670-01868315060') | Opening tables | 7.09 ms | 5.70 ms | | 194 | INSERT INTO test_sbtest_9972.s ... 9057-89011320723-95018545652') | Opening tables | 9.44 ms | 9.35 ms | | 195 | INSERT INTO test_sbtest_6977.s ... 7902-29158428721-66447528241') | Opening tables | 7.82 ms | 789.00 us | | 196 | INSERT INTO test_sbtest_129.sb ... 2091-86346366083-87657045906') | Opening tables | 13.01 ms | 7.30 ms | | 197 | INSERT INTO test_sbtest_1418.s ... 6581-90894769279-68213053531') | Opening tables | 16.35 ms | 10.07 ms | | 208 | INSERT INTO test_sbtest_4757.s ... 4592-86183240946-83973365617') | Opening tables | 8.66 ms | 2.84 ms | | 207 | INSERT INTO test_sbtest_2152.s ... 5459-55779113235-07063155183') | Opening tables | 11.08 ms | 3.89 ms | | 212 | INSERT INTO test_sbtest_7623.s ... 0354-58204256630-57234862746') | Opening tables | 8.67 ms | 2.80 ms | | 215 | INSERT INTO test_sbtest_5216.s ... 9161-37142478639-26288001648') | Opening tables | 9.72 ms | 3.92 ms | | 210 | INSERT INTO test_sbtest_8007.s ... 2999-90116450579-85010442132') | Opening tables | 1.33 ms | 0 ps | | 203 | INSERT INTO test_sbtest_7173.s ... 2718-12894934801-25331023143') | Opening tables | 358.09 us | 0 ps | | 209 | INSERT INTO test_sbtest_1118.s ... 8361-98642762543-17027080501') | Opening tables | 3.32 ms | 0 ps | | 219 | INSERT INTO test_sbtest_5039.s ... 1740-21004115002-49204432949') | Opening tables | 8.56 ms | 8.44 ms | | 202 | INSERT INTO test_sbtest_8322.s ... 8686-46403563348-31237202393') | Opening tables | 1.19 ms | 0 ps | | 205 | INSERT INTO test_sbtest_1563.s ... 6753-76124087654-01753008993') | Opening tables | 9.62 ms | 2.76 ms | | 213 | INSERT INTO test_sbtest_5817.s ... 2771-82142650177-00423653942') | Opening tables | 17.21 ms | 16.47 ms | | 216 | INSERT INTO test_sbtest_238.sb ... 5343-25703812276-82353892989') | Opening tables | 7.24 ms | 7.20 ms | | 200 | INSERT INTO test_sbtest_2637.s ... 8022-62207583903-44136028229') | Opening tables | 7.52 ms | 7.39 ms | | 204 | INSERT INTO test_sbtest_9289.s ... 2786-22417080232-11687891881') | Opening tables | 10.75 ms | 9.01 ms | | 201 | INSERT INTO test_sbtest_6573.s ... 0106-91679428362-14852851066') | Opening tables | 8.43 ms | 7.03 ms | | 217 | INSERT INTO test_sbtest_1071.s ... 
9465-09453525844-02377557541') | Opening tables | 8.42 ms | 7.49 ms | | 206 | INSERT INTO test_sbtest_9588.s ... 8804-20770286377-79085399594') | Opening tables | 8.02 ms | 7.50 ms | | 211 | INSERT INTO test_sbtest_4657.s ... 4758-53442917995-98424096745') | Opening tables | 16.62 ms | 9.76 ms | | 218 | INSERT INTO test_sbtest_9672.s ... 1537-13189199316-54071282928') | Opening tables | 10.01 ms | 7.41 ms | | 214 | INSERT INTO test_sbtest_1391.s ... 9241-84702335152-38653248940') | Opening tables | 21.34 ms | 15.54 ms | | 220 | INSERT INTO test_sbtest_6542.s ... 7778-65788940102-87075246009') | Opening tables | 2.96 ms | 0 ps | +---------+-------------------------------------------------------------------+----------------+-------------------+--------------+ 32 rows in set (0.11 sec)

And the transactions per second drops to ~2K.

Here I expected different behavior. With the .frm files gone and with tablespace_definition_cache set to more than 10K (we have only 10K tablespace files), I expected that MySQL would not have to open and close files. It looks like this is not the case.

I can also see the table openings (since the server started):

mysql> show global status like '%open%';
+----------------------------+-----------+
| Variable_name              | Value     |
+----------------------------+-----------+
| Com_ha_open                | 0         |
| Com_show_open_tables       | 0         |
| Innodb_num_open_files      | 10040     |
| Open_files                 | 0         |
| Open_streams               | 0         |
| Open_table_definitions     | 524288    |
| Open_tables                | 499794    |
| Opened_files               | 22        |
| Opened_table_definitions   | 1220904   |
| Opened_tables              | 2254648   |
| Slave_open_temp_tables     | 0         |
| Table_open_cache_hits      | 256866421 |
| Table_open_cache_misses    | 2254643   |
| Table_open_cache_overflows | 1254766   |
+----------------------------+-----------+

This is easier to see on the graphs from PMM. Insert per second for the two runs (both running 16 threads):

  1. The first run is 10K random databases/tablespaces and one table (sysbench is choosing table#1 from a randomly chosen list of 10K databases). This way there is also no contention on the tablespace file.
  2. The second run is a randomly chosen table from a list of one million tables.

As we can see, the first run is doing 50K-100K inserts per second. The second run is limited to only ~2.5K inserts per second:

“Table open cache misses” grows significantly after the start of the second benchmark run:

As we can see, MySQL performs ~1.1K table definition openings per second and has ~2K table cache misses due to the overflow:

When inserting against only 1K random tables (one specific table in a random database, that way we almost guarantee that one thread will always write to a different tablespace file), the table_open_cache got warmed up quickly. After a couple of seconds, the sysbench test starts showing > 100K tps. The processlist looks much better (compare the statement latency and lock latency to the above as well):

mysql> select conn_id, current_statement, state, statement_latency, lock_latency from sys.processlist where current_statement is not null and conn_id <> CONNECTION_ID(); +---------+-------------------------------------------------------------------+----------------+-------------------+--------------+ | conn_id | current_statement | state | statement_latency | lock_latency | +---------+-------------------------------------------------------------------+----------------+-------------------+--------------+ | 253 | INSERT INTO test_sbtest_3293.s ... 2282-95400708146-84684851551') | starting | 22.72 us | 0 ps | | 254 | INSERT INTO test_sbtest_3802.s ... 4030-35983148190-23616685226') | update | 62.88 us | 45.00 us | | 255 | INSERT INTO test_sbtest_5290.s ... 2361-58374942527-86207214617') | Opening tables | 36.07 us | 0 ps | | 256 | INSERT INTO test_sbtest_5684.s ... 4717-34992549120-04746631452') | Opening tables | 37.61 us | 37.00 us | | 257 | INSERT INTO test_sbtest_5088.s ... 5637-75275906887-76112520982') | starting | 22.97 us | 0 ps | | 258 | INSERT INTO test_sbtest_1375.s ... 8592-24036624620-65536442287') | query end | 98.66 us | 35.00 us | | 259 | INSERT INTO test_sbtest_8764.s ... 8566-02569157908-49891861265') | Opening tables | 47.13 us | 37.00 us | | 260 | INSERT INTO test_sbtest_560.sb ... 2605-08226572929-25889530906') | query end | 155.64 us | 38.00 us | | 261 | INSERT INTO test_sbtest_7776.s ... 0243-86335905542-37976752368') | System lock | 46.68 us | 32.00 us | | 262 | INSERT INTO test_sbtest_6551.s ... 5496-19983185638-75401382079') | update | 74.07 us | 40.00 us | | 263 | INSERT INTO test_sbtest_7765.s ... 5428-29707353898-77023627427') | update | 71.35 us | 45.00 us | | 265 | INSERT INTO test_sbtest_5771.s ... 7065-03531013976-67381721569') | query end | 138.42 us | 39.00 us | | 266 | INSERT INTO test_sbtest_8603.s ... 7158-66470411444-47085285977') | update | 64.00 us | 36.00 us | | 267 | INSERT INTO test_sbtest_3983.s ... 5039-55965227945-22430910215') | update | 21.04 ms | 39.00 us | | 268 | INSERT INTO test_sbtest_8186.s ... 5418-65389322831-81706268892') | query end | 113.58 us | 37.00 us | | 269 | INSERT INTO test_sbtest_1373.s ... 1399-08304962595-55155170406') | update | 131.97 us | 59.00 us | | 270 | INSERT INTO test_sbtest_7624.s ... 0589-64243675321-62971916496') | query end | 120.47 us | 38.00 us | | 271 | INSERT INTO test_sbtest_8201.s ... 6888-31692084119-80855845726') | query end | 109.97 us | 37.00 us | | 272 | INSERT INTO test_sbtest_7054.s ... 3674-32329064814-59707699237') | update | 67.99 us | 35.00 us | | 273 | INSERT INTO test_sbtest_3019.s ... 1740-35410584680-96109859552') | update | 5.21 ms | 33.00 us | | 275 | INSERT INTO test_sbtest_7657.s ... 4985-72017519764-59842283878') | update | 88.91 us | 48.00 us | | 274 | INSERT INTO test_sbtest_8606.s ... 0580-38496560423-65038119567') | freeing items | NULL | 37.00 us | | 276 | INSERT INTO test_sbtest_9349.s ... 0295-94997123247-88008705118') | starting | 25.74 us | 0 ps | | 277 | INSERT INTO test_sbtest_3552.s ... 2080-59650597118-53885660147') | starting | 32.23 us | 0 ps | | 278 | INSERT INTO test_sbtest_3832.s ... 1580-27778606266-19414961452') | freeing items | 194.14 us | 51.00 us | | 279 | INSERT INTO test_sbtest_7685.s ... 0234-22016898044-97277319766') | update | 62.66 us | 40.00 us | | 280 | INSERT INTO test_sbtest_6026.s ... 2629-36599580811-97852201188') | Opening tables | 49.41 us | 37.00 us | | 281 | INSERT INTO test_sbtest_8273.s ... 
7957-39977507737-37560332932') | update | 92.56 us | 36.00 us | | 283 | INSERT INTO test_sbtest_8584.s ... 7604-24831943860-69537745471') | starting | 31.20 us | 0 ps | | 284 | INSERT INTO test_sbtest_3787.s ... 1644-40368085836-11529677841') | update | 100.41 us | 40.00 us | +---------+-------------------------------------------------------------------+----------------+-------------------+--------------+ 30 rows in set (0.10 sec)

What about 100K random tables? That should fit into the table_open_cache. At the same time, the default 16 table_open_cache_instances split the ~500K table_open_cache, so each bucket is only ~30K. To fix that, I've set table_open_cache_instances = 4 and was able to get ~50K tps on average. However, the contention inside the table_open_cache seems to stall the queries:
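For reference, the cache-related my.cnf fragment for this last experiment would look like this (a sketch using the values stated above, not a tuning recommendation):

table_open_cache           = 524288
table_definition_cache     = 524288
table_open_cache_instances = 4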

There is only a very limited number of table openings:

 

Conclusion

MySQL 8.0 general tablespaces look very promising. It is finally possible to create one million tables in MySQL without the need to create two million files. Actually, MySQL 8 can handle many tables very well, as long as table cache misses are kept to a minimum.

At the same time, the problem with "Opening tables" (the worst-case scenario test) still persists in MySQL 8.0.3-rc and limits the throughput. I expected to see that MySQL would not have to open/close the table structure file. I also hope the create table regression bug is fixed in the next MySQL 8.0 version.

I haven't tested other new features of the new data dictionary in 8.0.3-rc, for example atomic DDL (InnoDB now supports atomic DDL, which ensures that DDL operations are either committed in their entirety or rolled back in case of an unplanned server stoppage). That is the topic of the next blog post.

Recover lost database from hard drive image

Latest Forum Posts - October 1, 2017 - 5:33am
Hello there
Something like a bug in pt-online-schema-change deleted many rows after I altered the table and added a unique key to a column.
Actually, it ignored errors about duplicate keys and removed any duplicate rows, I think!
BTW, I made a disk image of the hard drive on another server and searched it for my lost data; fortunately, I could find my lost data inside the image.


Note: pt-online-schema-change always makes a new table and an old table, then copies data from one to the other, then drops and renames them.


So if I can recover the _old table and "undrop" it from the disk image, of course I can get my data back.

I found this:
https://www.percona.com/docs/wiki/in...ool_start.html


But unfortunately it has very weak documentation: I separated the pages into different subdirectories, but please look at what the docs say:
https://www.percona.com/docs/wiki/in...he_pages_apart

"Now that we have split the pages into separate subdirectories, we need to determine which index holds the data we are interested in recovering. In most cases, we want to recover from the table's PRIMARY index, which contains the full rows. There are several ways to accomplish this." Document says.


Really, thanks to the writer who mentioned there are several ways to accomplish this :| !

Can anyone tell me what one of these ways is?

I really need your help.
With regards.

Percona Server crashed after upgrade from 5.5 to 5.6

Latest Forum Posts - September 29, 2017 - 12:42pm
We upgraded one of our Percona MySQL 5.5 servers to 5.6 a week ago; then it crashed all of a sudden with the following error message:
/datadrive/apollo/_env/EC2NMDBPerconaClusterServer-swit1na.15655325.302524007.1470144507/bin/mysqld_real(my_print_stacktrace+0x35)[0x9363a5]
/datadrive/apollo/_env/EC2NMDBPerconaClusterServer-swit1na.15655325.302524007.1470144507/bin/mysqld_real(handle_fatal_signal+0x3eb)[0x6816fb]
/lib64/libpthread.so.0(+0xf7e0)[0x7fade07507e0]
/datadrive/apollo/_env/EC2NMDBPerconaClusterServer-swit1na.15655325.302524007.1470144507/bin/mysqld_real(my_hash_first+0x36)[0x921a76]
/datadrive/apollo/_env/EC2NMDBPerconaClusterServer-swit1na.15655325.302524007.1470144507/bin/mysqld_real(my_hash_search+0x11)[0x921ad1]
/datadrive/apollo/_env/EC2NMDBPerconaClusterServer-swit1na.15655325.302524007.1470144507/bin/mysqld_real(_Z14get_slave_uuidP3THDP6String+0x46)[0x8f42b6]
/datadrive/apollo/_env/EC2NMDBPerconaClusterServer-swit1na.15655325.302524007.1470144507/bin/mysqld_real(_Z16show_slave_hostsP3THD+0xa69)[0x8f6529]
/datadrive/apollo/_env/EC2NMDBPerconaClusterServer-swit1na.15655325.302524007.1470144507/bin/mysqld_real(_Z21mysql_execute_commandP3THD+0x38ee)[0x70d31e]
/datadrive/apollo/_env/EC2NMDBPerconaClusterServer-swit1na.15655325.302524007.1470144507/bin/mysqld_real(_Z11mysql_parseP3THDPcjP12Parser_state+0x609)[0x711849]
/datadrive/apollo/_env/EC2NMDBPerconaClusterServer-swit1na.15655325.302524007.1470144507/bin/mysqld_real(_Z16dispatch_command19enum_server_commandP3THDPcj+0xbc0)[0x712b30]
/datadrive/apollo/_env/EC2NMDBPerconaClusterServer-swit1na.15655325.302524007.1470144507/bin/mysqld_real(_Z10do_commandP3THD+0x117)[0x714a87]
/datadrive/apollo/_env/EC2NMDBPerconaClusterServer-swit1na.15655325.302524007.1470144507/bin/mysqld_real(_Z24do_handle_one_connectionP3THD+0x130)[0x6da810]
/datadrive/apollo/_env/EC2NMDBPerconaClusterServer-swit1na.15655325.302524007.1470144507/bin/mysqld_real(handle_one_connection+0x45)[0x6da915]
/datadrive/apollo/_env/EC2NMDBPerconaClusterServer-swit1na.15655325.302524007.1470144507/bin/mysqld_real(pfs_spawn_thread+0x126)[0xb289f6]
/lib64/libpthread.so.0(+0x7aa1)[0x7fade0748aa1]
/lib64/libc.so.6(clone+0x6d)[0x7fade049593d]


Does anyone have any idea why and how to troubleshoot?

Thanks,
Jie

This Week in Data with Colin Charles #8: Percona Live Europe 2017 Is a Wrap!

Latest MySQL Performance Blog posts - September 29, 2017 - 11:03am

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

Percona Live Europe 2017 Dublin

We've spent a lot of time in the last few months organizing Percona Live Europe Dublin. I want to thank all the speakers, sponsors and attendees for helping us to pull off yet another great event. While we'll provide some perspectives, thoughts and feedback soon, all the early mornings, jam-packed meetings and 4 am bedtimes mean I'll probably talk about this event in my next column!

In the meantime, save the date for Percona Live Santa Clara, April 23-25 2018. The call for papers will open in October 2017.

Releases

Link List

Upcoming appearances

Percona’s website keeps track of community events, so check out where to listen to a Perconian speak. My upcoming appearances are:

Feedback

I look forward to feedback/tips via e-mail at colin.charles@percona.com or on Twitter @bytebot.

Percona Live Europe Session Interview: High-Performance JSON – PostgreSQL Vs. MongoDB with Wei Shan Ang and Dominic Dwyer (GlobalSign)

Latest MySQL Performance Blog posts - September 29, 2017 - 10:41am

The Percona Live Europe 2017 conference has been going strong for two days, and I’ve been to a bunch of presentations. An excellent one was High-Performance JSON – PostgreSQL Vs. MongoDB with Wei Shan Ang and Dominic Dwyer of GlobalSign.

This talk was very engaging and well attended, and provided some enlightening stats from their experiments and tests. For GlobalSign, applications have to be both super fast and consistent, and achieving that balance requires dedicated and detailed testing and development. For example, one configuration might offer incredibly fast throughput, but if the tradeoff is dropouts from time to time – even for a second or two – it's not a solution that would meet GlobalSign's needs. Or as Wei Shan put it, using such a solution might lead to a few discussions with management!

There were lively questions from the floor that carried on outside the room well after the session. Since they presented the talk as PostgreSQL vs. MongoDB, there were advocates for both. These were handled with cool aplomb by both guys.

I caught up with them after the session:

Percona Monitoring and Management 1.3.1 Is Now Available

Latest MySQL Performance Blog posts - September 29, 2017 - 10:23am

Percona announces the release of Percona Monitoring and Management 1.3.1. This release only contains bug fixes related to usability.

For install and upgrade instructions, see Deploying Percona Monitoring and Management.

Bug fixes
  • PMM-1271: In QAN, when the user selected a database host with no queries, the query monitor could still show metrics.
  • PMM-1512: When clicking the QAN in Grafana, QAN would open the home page. Now, QAN opens and automatically selects the database host and time range active in Grafana.
  • PMM-1523: User-defined Prometheus memory settings were not honored, potentially causing performance issues in high load environments.

Other bug fixes in this release: PMM-1452, PMM-1515.

About Percona Monitoring and Management

Percona Monitoring and Management (PMM) is an open-source platform for managing and monitoring MySQL and MongoDB performance. Percona developed it in collaboration with experts in the field of managed database services, support and consulting.

PMM is a free and open-source solution that you can run in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

A live demo of PMM is available at pmmdemo.percona.com.

We’re always happy to help! Please provide your feedback and questions on the PMM forum.

If you would like to report a bug or submit a feature request, please use the PMM project in JIRA.

Unable to start mysql:queries

Latest Forum Posts - September 29, 2017 - 4:01am
Hello,

I tried many things but I can't start mysql:queries.
Linux and MySQL metrics work as expected, but I receive the error below when adding mysql:queries:

root # pmm-admin add mysql:queries
Error adding MySQL queries: problem with agent registration on QAN API: exit status 1

root # pmm-admin check-network
PMM Network Status

Server Address | xxxxxxxxxxx
Client Address | xxxxxxxxxxxxx

* System Time
NTP Server (0.pool.ntp.org) | 2017-09-29 10:54:30 +0000 UTC
PMM Server | 2017-09-29 10:54:30 +0000 GMT
PMM Client | 2017-09-29 10:54:30 +0000 UTC
PMM Server Time Drift | OK
PMM Client Time Drift | OK
PMM Client to PMM Server Time Drift | OK

* Connection: Client --> Server
-------------------- -------
SERVER SERVICE STATUS
-------------------- -------
Consul API OK
Prometheus API OK
Query Analytics API DOWN

Connection duration | 31.077004ms
Request duration | 32.748302ms
Full round trip | 63.825306ms


* Connection: Client <-- Server
-------------- ------------------------ ------------------ ------- ---------- ---------
SERVICE TYPE NAME REMOTE ENDPOINT STATUS HTTPS/TLS PASSWORD
-------------- ------------------------ ------------------ ------- ---------- ---------
linux:metrics XXXXXXXX xxxxxxxxx:42000 OK YES YES
mysql:metrics YYYYYYY yyyyyyyyyyy:42002 OK YES YES

Slave info shows GTID information, not binlog file & position

Latest Forum Posts - September 29, 2017 - 12:57am
I'm running MySQL 5.7. GTID is currently turned off but has, in the past, been in a few OFF_PERMISSIVE states; it (GTID) has never been fully enabled or ON.

I'm using innobackupex to try to rebuild a new slave, but whenever I use --slave-info I get a GTID rather than the expected binlog file and position.

SET GLOBAL gtid_purged='44c0008b-4453-11e6-9065-000f53396de0:1-8';
CHANGE MASTER TO MASTER_AUTO_POSITION=1

The following files are generated:
  • xtrabackup_binlog_info
  • xtrabackup_checkpoints
  • xtrabackup_info
  • xtrabackup_logfile
  • xtrabackup_slave_info
Master and slaves all show similar:

> show variables like "%gtid%";
+----------------------------------+-----------+
| Variable_name | Value |
+----------------------------------+-----------+
| binlog_gtid_simple_recovery | ON |
| enforce_gtid_consistency | OFF |
| gtid_executed_compression_period | 1000 |
| gtid_mode | OFF |
| gtid_next | AUTOMATIC |
| gtid_owned | |
| gtid_purged | |
| session_track_gtids | OFF |
+----------------------------------+-----------+
8 rows in set (0.00 sec)

The GTID is in gtid_executed and slave_master_info on the slaves.
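For reference, the recorded GTID data can be inspected directly on a slave before deciding how to clear it (a minimal sketch; it assumes the master/relay info repositories are stored in tables, as the post suggests):

-- GTID sets the server has recorded
SELECT * FROM mysql.gtid_executed;
SHOW GLOBAL VARIABLES LIKE 'gtid_executed';

-- Replication metadata that can also carry GTID information
SELECT * FROM mysql.slave_master_info\G
SELECT * FROM mysql.slave_relay_log_info\G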

Is there something else I need to do to purge / remove these from the tables, or somewhere else, to get this to provide the standard binlog file and position?