
How to clear data in PMM

Latest Forum Posts - May 3, 2017 - 5:58am
I have a separate AWS instance for PMM: CentOS 7, BTRFS.

[root@mysql-aws-pmm ~]# docker version
Client:
Version: 17.03.1-ce
API version: 1.27
Go version: go1.7.5
Git commit: c6d412e
Built: Mon Mar 27 17:05:44 2017
OS/Arch: linux/amd64

Server:
Version: 17.03.1-ce
API version: 1.27 (minimum version 1.12)
Go version: go1.7.5
Git commit: c6d412e
Built: Mon Mar 27 17:05:44 2017
OS/Arch: linux/amd64
Experimental: false

/var/lib/docker is a symlink to /data/docker

[root@mysql-aws-pmm ~]# ls -l /var/lib | grep docker
lrwxrwxrwx. 1 root root 12 Apr 25 10:05 docker -> /data/docker

The /data mountpoint is 50 GB:

[root@mysql-aws-pmm ~]# df -hl | grep data | grep dev
/dev/xvdb1 50G 3.4G 47G 7% /data

In the official documentation about space requirements for PMM we can see:

3.3.2 What are the minimum system requirements for PMM?
• PMM Server
Any system which can run Docker version 1.12.6 or later.
It needs roughly 1 GB of storage for each monitored database node with data retention set to one week.
Minimum memory is 2 GB for one monitored database node, but it is not linear when you increase more nodes.
For example, data from 20 nodes should be easily handled with 16 GB.

For now I have 5 MySQL instances in PMM, and yesterday all disk space was eaten by the PMM Docker container.
Because I was sure that 50 GB was enough for me, I did not monitor free space, but I do now.
I reinstalled everything from scratch yesterday; for now I have these statistics:

docker exec -it pmm-server bash

[root@69a113c27b55 opt]# date
Wed May 3 11:05:21 EEST 2017
[root@69a113c27b55 opt]# du -hs /var/lib/mysql
151M /var/lib/mysql
[root@69a113c27b55 opt]# du -hs /opt/prometheus/
1.6G /opt/prometheus/

[root@69a113c27b55 opt]# date
Wed May 3 11:46:33 EEST 2017
[root@69a113c27b55 opt]# du -hs /var/lib/mysql
156M /var/lib/mysql
[root@69a113c27b55 opt]# du -hs /opt/prometheus/
1.6G /opt/prometheus/

[root@69a113c27b55 opt]# date
Wed May 3 15:37:06 EEST 2017
[root@69a113c27b55 opt]# du -hs /var/lib/mysql
176M /var/lib/mysql
[root@69a113c27b55 opt]# du -hs /opt/prometheus/
2.0G /opt/prometheus/

We have the following situation:
DB size increased by 25 MB in 4h 30m
Prometheus size increased by 400 MB in 4h 30m

In 24 hours that is about 2.5 GB, so the 50 GB will be exhausted in about 20 days.
According to the official documentation this should not happen, because data rotates on a 7-day cycle.
But in my situation something went wrong.
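The growth estimate above can be checked with quick shell arithmetic (47 GB is the free space df reported above):

```shell
# Observed growth over 4.5 hours: 25 MB (MySQL) + 400 MB (Prometheus) = 0.425 GB
per_day=$(awk 'BEGIN { printf "%.2f", 0.425 / 4.5 * 24 }')
days_left=$(awk 'BEGIN { printf "%d", 47 / (0.425 / 4.5 * 24) }')
echo "~${per_day} GB/day, disk full in ~${days_left} days"
```

This matches the poster's back-of-the-envelope figure of roughly 2.5 GB/day and 20 days.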

I need to understand how to fix this problem.
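One likely cause is that the Prometheus retention inside the container is longer than the 7 days the sizing guideline assumes. In PMM 1.x the retention is set when the pmm-server container is created; a sketch of recreating it with a shorter retention follows. The METRICS_RETENTION environment variable is my assumption based on recent PMM 1.x documentation, and the image tag is illustrative, so verify both against your version before running this:

```shell
# Stop and remove the old server container; the pmm-data
# volume container keeps the collected data intact
docker stop pmm-server && docker rm pmm-server

# Recreate the server with metrics retention capped at 8 days (192h)
docker run -d \
  -p 80:80 \
  --volumes-from pmm-data \
  --name pmm-server \
  -e METRICS_RETENTION=192h \
  --restart always \
  percona/pmm-server:1.1.3
```

Already-written Prometheus data is only pruned as it ages out, so the disk usage will shrink gradually rather than immediately.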

Percona XtraDB latest release huge memory problems

Latest Forum Posts - May 2, 2017 - 11:42pm
Hi everybody,
we have the same memory problems, with mysql eating up all the memory and beginning to swap until the OOM killer kicks in.
We have 3 nodes running Percona XtraDB Cluster 5.7.17-29.20.3 on CentOS 7.3.1611 with 4 GB RAM each (VMs on vSphere 6). The VMs are dedicated to MySQL, with the only exceptions being the PMM agent processes and OS overhead.

Each VM has:
- 8 vCPUs, 4 GB RAM and 1.5 GB of swap;
- Transparent Huge Pages disabled;
- jemalloc instead of the glibc allocator.

Here are:
- the conf file
- the screenshot of the top process after 3 hours of complete inactivity
- the screenshot of the PMM dashboard showing MySQL internal memory overview.

We were planning to go to production, but after this test we are no longer so confident...

Can you please help?
Attachments: pmm_dashboard_pxc-set3.png, top_image_pxc-set3.png
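When tracking down where MySQL's memory actually goes, the Performance Schema memory instrumentation in 5.7 is a useful starting point. A minimal sketch, assuming the sys schema is installed (it ships with MySQL 5.7) and memory instruments are enabled, with credentials as placeholders:

```shell
# Top memory allocations currently held, as reported by Performance Schema.
# Note: instruments only count allocations made while they are enabled.
mysql -u root -p -e "
  SELECT event_name, current_alloc
  FROM sys.memory_global_by_current_bytes
  LIMIT 10;"
```

If the largest consumers are internal buffers (sort/join buffers multiplied by connection count, for example) rather than the InnoDB buffer pool, that points at per-session settings rather than global ones.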

Percona University in Europe May 9 and May 11

Latest MySQL Performance Blog posts - May 2, 2017 - 11:20am

In 2013 we started Percona University, which consists of technology discussion events held in different cities around the world. The next installments of Percona University in Europe are next week when I fly there for Percona University Berlin (May 9) and Percona University Budapest (May 11). Both events are free to attend, and you are very welcome to join us for either of them.

Below are some questions and answers about why you should attend a Percona University session:

What is Percona University? It is a half-day technical educational event, with a wider program when compared to a traditional meetup. Usually, we include about six hours of talks split with a 30-minute coffee break. We encourage people to join us at any point during these talks – we understand that not everyone can take half a day off from their work or studies.

What is on the agenda for each of the events? Full agendas and registration forms for the Berlin and Budapest events are available at the indicated links.

Does the word “University” mean that we won’t cover any in-depth topics, and these events would only interest college/university students? No, it doesn’t. We designed Percona University presentations for all kinds of “students,” including professionals with years of database industry experience. The word “University” means that this event series is about educating attendees on technical topics (it’s not a sales-oriented event, it’s about educating the community).

Does Percona University cover only Percona technology? We will definitely mention Percona technology, but we will also focus on real-world technical issues and recommend solutions that work (regardless of whether Percona developed them).

Are there other Percona University events coming up besides Berlin and Budapest? We will hold more Percona University events in different locations in the future. Our events newsletter is a good source of information about when and where they will occur. If you want to partner with Percona in organizing a Percona University event, contact our team. You can also check our list of technical webinars to get further educational insights.

These events are free and low-key! We want them to remain easy to organize in any city of the world. They aren’t meant to look like a full conference (like our Percona Live series). Percona University has a different format – it’s purposefully informal, and designed to be perfect for learning and networking. This is an in-person database community gathering, so feel free to come with interesting cases and tricky questions!

I hope to see many of you at Percona University in Europe, Berlin and Budapest editions!

Webinar Thursday 5/4/2017: Percona Software News and Roadmap Update Q2 2017

Latest MySQL Performance Blog posts - May 1, 2017 - 11:34am

Come and listen to Percona CEO Peter Zaitsev on Thursday, May 4, 2017 at 11:00 am (PST) / 2:00 pm (EST) discuss Percona’s software news and roadmap, including Percona Server for MySQL and MongoDB, Percona XtraBackup, Percona Toolkit, Percona XtraDB Cluster and Percona Monitoring and Management.

Register Now During this webinar, Peter will talk about newly released features in Percona software, show a few quick demos and share with you highlights from the Percona open source software roadmap.

Peter will also talk about new developments in Percona commercial services, and finish with a Q&A.

You can register for the webinar here.

Peter Zaitsev, CEO of Percona

Peter Zaitsev co-founded Percona and assumed the role of CEO in 2006. As one of the foremost experts on MySQL strategy and optimization, Peter leveraged both his technical vision and entrepreneurial skills to grow Percona from a two-person shop to one of the most respected open source companies in the business. With over 150 professionals in 20 plus countries, Peter’s venture now serves over 3000 customers – including the “who’s who” of internet giants, large enterprises and many exciting startups. Percona was named to the Inc. 5000 in 2013, 2014 and 2015.

Peter was an early employee at MySQL AB, eventually leading the company’s High-Performance Group. A serial entrepreneur, Peter co-founded his first startup while attending Moscow State University where he majored in Computer Science. Peter is a co-author of High-Performance MySQL: Optimization, Backups, and Replication, one of the most popular books on MySQL performance. Peter frequently speaks as an expert lecturer at MySQL and related conferences, and regularly posts on the Percona Data Performance Blog. Fortune and DZone also tapped Peter as a contributor, and his recent ebook Practical MySQL Performance Optimization Volume 1 is one of percona.com’s most popular downloads.

XtraDB Cluster High Load

Latest Forum Posts - April 30, 2017 - 3:34am
Hi,
I want to know: is it possible to handle about 3k TPS with XtraDB Cluster and MySQL 5.7?
I have 5 nodes; each node has 32 GB RAM and 8 cores (Intel(R) Xeon(R) CPU E5-2660 v4 @ 2.00GHz), with HAProxy as the load balancer.
HAProxy's balancing method is roundrobin, and maxconn is about 8096.

Unfortunately it performs poorly and can only handle about 800 TPS and fewer than 1000 connections!
I want to know if it is possible to handle more than 3k TPS and more than 2k connections with XtraDB Cluster and MySQL 5.7.

I also want to know whether it is better to have 3 nodes with 64 GB RAM and 16-core CPUs, or 5 nodes with 32 GB RAM and 8-core CPUs.

And here is my my.cnf:

[MYSQLD]
user=mysql
basedir=/usr/
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
pid-file=/var/lib/mysql/mysql.pid
port=3306
log_error=/var/log/mysql/mysqld.log
log_warnings=2
# log_output = FILE
### INNODB OPTIONS
innodb_buffer_pool_size=12055M
innodb_flush_log_at_trx_commit=1
innodb_file_per_table=1
innodb_data_file_path = ibdata1:100M:autoextend
## You may want to tune the below depending on number of cores and disk sub
innodb_read_io_threads=4
innodb_write_io_threads=4
innodb_doublewrite=1
innodb_log_file_size=1024M
innodb_log_buffer_size=96M
innodb_buffer_pool_instances=-1
innodb_log_files_in_group=2
innodb_thread_concurrency=64
innodb_io_capacity=15000
innodb_io_capacity_max=25000
# innodb_file_format = barracuda
innodb_flush_method = O_DIRECT
# innodb_locks_unsafe_for_binlog = 1
innodb_autoinc_lock_mode=2
## avoid statistics update when doing e.g show tables
innodb_stats_on_metadata=0
default_storage_engine=innodb
back_log=1500
# CHARACTER SET
# collation_server = utf8_unicode_ci
# init_connect = 'SET NAMES utf8'
# character_set_server = utf8
thread_handling = pool-of-threads
# REPLICATION SPECIFIC
server_id=1
binlog_format=ROW
# log_bin = binlog
# log_slave_updates = 1
# gtid_mode = ON
# enforce_gtid_consistency = 1
# relay_log = relay-bin
# expire_logs_days = 7

# OTHER THINGS, BUFFERS ETC
tmp_table_size = 64M
max_heap_table_size = 64M
max_allowed_packet = 512M
# sort_buffer_size = 256K
# read_buffer_size = 256K
# read_rnd_buffer_size = 512K
# myisam_sort_buffer_size = 8M
skip_name_resolve
memlock=0
sysdate_is_now=1
max_connections=5000
thread_cache_size=512k
query_cache_type = 0
query_cache_size = 0
table_open_cache=1024
lower_case_table_names=0
# 5.6 backwards compatibility (FIXME)
# explicit_defaults_for_timestamp = 1
##
## WSREP options
##

performance_schema = ON
performance-schema-max-mutex-classes = 0
performance-schema-max-mutex-instances = 0

# Full path to wsrep provider library or 'none'
wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_on=ON
wsrep_node_address=192.168.100.11

# Provider specific configuration options
wsrep_provider_options="base_port=4567; gcache.size=1024M; gmcast.segment=0"

# Logical cluster name. Should be the same for all nodes.
wsrep_cluster_name="my_wsrep_cluster"

# Group communication system handle
wsrep_cluster_address=gcomm://192.168.100.11,192.168.100.12,192.168.100.13,192.168.100.14,192.168.100.15

# Human-readable node name (non-unique). Hostname by default.
wsrep_node_name=192.168.100.11

# Address for incoming client connections. Autodetect by default.
#wsrep_node_incoming_address=

# How many threads will process writesets from other nodes
wsrep_slave_threads=4

# DBUG options for wsrep provider
#wsrep_dbug_option

# Generate fake primary keys for non-PK tables (required for multi-master
# and parallel applying operation)
wsrep_certify_nonPK=1

# Location of the directory with data files. Needed for non-mysqldump
# state snapshot transfers. Defaults to mysql_real_data_home.
#wsrep_data_home_dir=

# Maximum number of rows in write set
wsrep_max_ws_rows=131072

# Maximum size of write set
wsrep_max_ws_size=1073741824

# to enable debug level logging, set this to 1
wsrep_debug=0

# convert locking sessions into transactions
wsrep_convert_LOCK_to_trx=0

# how many times to retry deadlocked autocommits
wsrep_retry_autocommit=1

# change auto_increment_increment and auto_increment_offset automatically
wsrep_auto_increment_control=1

# replicate myisam, not supported in PXC 5.7
wsrep_replicate_myisam=0

# retry autoinc insert, which failed for duplicate key error
wsrep_drupal_282555_workaround=0

# enable "strictly synchronous" semantics for read operations
wsrep_causal_reads=0
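To separate HAProxy effects from cluster limits, it helps to baseline a single node directly with a synthetic benchmark. A sketch using sysbench 1.0's oltp_read_write workload; the host, credentials, table counts, and thread count are placeholders to adapt to your setup:

```shell
# Prepare test tables on one node (bypassing HAProxy)
sysbench oltp_read_write \
  --mysql-host=192.168.100.11 --mysql-user=sbtest --mysql-password=secret \
  --mysql-db=sbtest --tables=8 --table-size=100000 prepare

# Run a 60-second read/write benchmark at 64 threads and note the reported TPS
sysbench oltp_read_write \
  --mysql-host=192.168.100.11 --mysql-user=sbtest --mysql-password=secret \
  --mysql-db=sbtest --tables=8 --table-size=100000 \
  --threads=64 --time=60 run
```

If a single node comfortably exceeds 800 TPS in isolation, the bottleneck is more likely HAProxy settings or synchronous replication flow control than raw node capacity.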

From Percona Live 2017: Thank You, Attendees!

Latest MySQL Performance Blog posts - April 28, 2017 - 2:47pm

From everyone at Percona and Percona Live 2017, we’d like to send a big thank you to all our sponsors, exhibitors, and attendees at this year’s conference.

This year’s conference was an outstanding success! The event brought the open source database community together, with a technical emphasis on the core topics of MySQL, MariaDB, MongoDB, PostgreSQL, AWS, RocksDB, time series, monitoring and other open source database technologies.

We will be posting tutorial and session presentation slides at the Percona Live site, and all of them should be available shortly. 

Highlights This Year:

Thanks to Our Sponsors!

We would like to thank all of our valuable event sponsors, especially our diamond sponsors Continuent and VividCortex – your participation really makes the show happen.

We have developed multiple sponsorship options to allow participation at a level that best meets your partnering needs. Our goal is to create a significant opportunity for our partners to interact with Percona customers, other partners and community members. Sponsorship opportunities are available for Percona Live Europe 2017.

Download a prospectus here.

Percona Live Europe 2017: Dublin, Ireland!

This year’s Percona Live Europe will take place September 25th-27th, 2017, in Dublin, Ireland. Put it on your calendar now! Information on speakers, talks, sponsorship and registration will be available in the coming months.

We look forward to seeing you there!
