
Webinars on Wednesday November 15, 2017: Proxy Wars and Percona Software Update for Q4

Latest MySQL Performance Blog posts - November 14, 2017 - 10:00am

Do you need to get to grips with MySQL proxies? Or maybe you could do with discovering the latest developments and plans for Percona’s software?

Well, wait no more because …

on Wednesday November 15, 2017, we bring you a webinar double bill.

Join Percona’s Chief Evangelist, Colin Charles, as he presents “The Proxy Wars – MySQL Router, ProxySQL, MariaDB MaxScale” at 7:00 am PST / 10:00 am EST (UTC-8).

Reflecting on his past experience with MySQL proxies, Colin will provide a short review of three open source solutions. He’ll run through a comparison of MySQL Router, MariaDB MaxScale and ProxySQL and talk about the reasons for using the right tool for an application.

Register for Colin’s Webinar

 

Meanwhile, return a little later in the day at 10:00 am PST / 1:00 pm EST (UTC-8) to hear Percona CEO Peter Zaitsev discuss what’s new in Percona open source software. In “Percona Software News and Roadmap Update – Q4 2017”, Peter will talk about new features in Percona software, show some quick demos and share highlights from the Percona open source software roadmap. He will also talk about new developments in Percona commercial services and finish with a Q&A.

Join Peter’s Webinar

 

You are, of course, very welcome to register for either one or both webinars. Please register for your place soon!

Peter Zaitsev, Percona CEO and Co-Founder

Peter Zaitsev co-founded Percona and assumed the role of CEO in 2006. As one of the foremost experts on MySQL strategy and optimization, Peter leveraged both his technical vision and entrepreneurial skills to grow Percona from a two-person shop to one of the most respected open source companies in the business. With over 150 professionals in 30+ countries, Peter’s venture now serves over 3000 customers – including the “who’s who” of internet giants, large enterprises and many exciting startups. Percona was named to the Inc. 5000 in 2013, 2014, 2015 and 2016. Peter was an early employee at MySQL AB, eventually leading the company’s High Performance Group. A serial entrepreneur, Peter co-founded his first startup while attending Moscow State University, where he majored in Computer Science. Peter is a co-author of High Performance MySQL: Optimization, Backups, and Replication, one of the most popular books on MySQL performance. Peter frequently speaks as an expert lecturer at MySQL and related conferences, and regularly posts on the Percona Database Performance Blog. Fortune and DZone have both tapped Peter as a contributor, and his recent ebook Practical MySQL Performance Optimization is one of percona.com’s most popular downloads.

Colin Charles, Chief Evangelist

Colin Charles is the Chief Evangelist at Percona. He was previously on the founding team of MariaDB Server in 2009, had worked on MySQL since 2005, and has been a MySQL user since 2000. Before joining MySQL, he worked actively on the Fedora and OpenOffice.org projects. He’s well known within many open source communities and has spoken on the conference circuit.

 

Common MongoDB Topologies

Latest MySQL Performance Blog posts - November 14, 2017 - 2:34am

In this blog post, we’ll look at some common MongoDB topologies used in database deployments.

The question of the best architecture for MongoDB often arises in conversations between developers and architects. In this blog, we want to go over the main sharded and unsharded designs, along with their pros and cons.

We will first look at “Replica Sets.” Replica sets are the most basic form of high availability (HA) in MongoDB, and the building blocks for sharding. From there, we will cover sharding approaches and whether you need to go that route.

Replica Set

From the MongoDB manual:

A replica set in MongoDB is a group of mongod processes that maintain the same data set. Replica sets provide redundancy and high availability, and are the basis for all production deployments.

Short of sharding, this is the ideal way to run MongoDB. Things like high availability, failover and recovery become automated with no action typically needed. If you expect large growth or more than 200G of data, you should consider using this plus sharding to reduce your mean time to recovery on a restore from backup.
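
As a minimal mongo shell sketch (the hostnames here are made up), a three-member replica set is initiated once and then grown one member at a time:

rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "db1.example.com:27017" },
    { _id: 1, host: "db2.example.com:27017" },
    { _id: 2, host: "db3.example.com:27017" }
  ]
})
rs.add("db4.example.com:27017")   // add another secondary later
rs.status()                       // verify member states and replication lag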

Pros:

  • Elections happen automatically and go unnoticed by the application, provided it is set up with retries
  • Rebuilding a new node, or adding an additional read-only node, is as easy as rs.add("hostname")
  • Can skip building indexes to improve write speed
  • Can have members
    • hidden in another geographic location
    • with delayed replication
    • used as analytics nodes via tagging

Cons:

  • Depending on the size of the oplog, you can use 10-100+% more space to hold the change data needed for replication
  • You must scale up, not out, which means more expensive hardware
  • Recovery using a sharded approach is faster than having it all on a single node (thanks to parallelism)
Flat Mongos (not load balanced)

This is one of MongoDB’s most commonly suggested deployment designs. To understand why, we should talk about the driver, and the fact that it supports a comma-separated list of mongos hosts for fail-over.

You can’t distribute writes in a single replica set. Instead, they all need to go to the primary node. You can distribute reads to the secondaries using Read Preferences. The driver keeps track of what is a primary and what is a secondary and routes queries appropriately.

Conceptually, the driver should have connections bucketed by the mongos they go to. This allows the 3.0+ drivers to be semi-stateless and ask any connection to a specific mongos to perform a getMore against that mongos. In theory, this allows slightly more concurrency. Realistically, you only use one mongos, since this is only a fail-over system.
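
As a rough illustration (hostnames, credentials and database name are invented), the connection string for this topology simply lists every mongos, and the driver walks the list on failure:

mongodb://appuser:secret@mongos1.example.com:27017,mongos2.example.com:27017,mongos3.example.com:27017/appdb

Most drivers treat this as an ordered fail-over list rather than balancing across it, which is exactly why this layout is fail-over only.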

Pros:

  • Mongos is on its own gear, so it will not run the application out of memory
  • If Mongos doesn’t respond, the driver “fails-over” to the next in the list
  • Can be put closer to the database or application depending on your network and sorting needs

Cons:

  • You can’t use the mongos in the list evenly, so it is only good for fail-over (not load evenness) in most drivers. Please check your specific driver’s documentation for support, and test thoroughly.
Load Balanced (preferred if possible)

According to the Mongo docs:

You may also deploy a group of mongos instances and use a proxy/load balancer between the application and the mongos. In these deployments, you must configure the load balancer for client affinity so that every connection from a single client reaches the same mongos.

This is the model used by platforms such as ObjectRocket. In this pattern, you move the mongos nodes to their own tier but put them behind a load balancer. In this design, you can even out the use of mongos by using a least-connection system. The challenge, however, is that newer drivers have issues with getMores. By this we mean that the getMore selects a new random connection, and the load balancer can’t be sure which mongos should get it. Thus it has a one in N (number of mongos) chance of selecting the right one, or of getting a “Cursor Not Found” error.
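
To illustrate the client-affinity requirement, here is a hypothetical HAProxy snippet (addresses and ports are invented) that pins each client IP to one mongos; switching balance source to balance leastconn gives evenness, but reintroduces the getMore problem described above:

listen mongos_pool
    bind 0.0.0.0:27017
    mode tcp
    balance source                    # client affinity: same client IP -> same mongos
    server mongos1 10.0.0.11:27017 check
    server mongos2 10.0.0.12:27017 check
    server mongos3 10.0.0.13:27017 check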

Pros:

  • Ability to have an even use of mongos
  • Mongos are separated from each other and the applications to prevent memory and CPU contention
  • You can easily remove or add mongos to help scale the layer without code changes
  • High availability at every level (multiple mongos, multiple configs, ReplSet for high availability and even multiple applications for app failures)

Cons:

  • If batching is used, unless you switch to an IP-pinning algorithm (which loses evenness), you can get “Cursor Not Found” errors due to the wrong mongos receiving the getMore and bulk connector connections
App-Centric Mongos

By and large, this is one of the most typical deployment designs for MongoDB sharding. In it, we have each application host talking to a mongos on the local network interface. This ensures there is very little latency to the application from the mongos.

Additionally, this means if a mongos fails, at most its own host is affected instead of the wider range of all application hosts.

Pros:

  • Local mongos on the loopback interface mean low to no latency
  • Limited scope of outage if this mongos fails
  • Can be geographically farther from the data storage in cases where you have a DR site

Cons:

  • Mongos is a memory hog; you could steal from your application memory to support running it here
    • Made worse with large batches, many connections, and sorting
  • Mongos is single-threaded and could become a bottleneck for your application
  • It is possible for a slow network to cause bad decision making, including duplicate databases on different shards. The functional result is data being written intermittently to two locations, and a DBA must remediate that at some point (think MMM VIP ping-pong issues)
  • All sorting and limits are applied on the application host. In cases where the sort uses an index this is OK, but if the sort is not indexed, the entire result set must be held in memory by mongos, sorted, and then the limited number of results returned to the client. This is the typical cause of mongos OOM errors, due to the memory issues listed before.
Conclusion

The topologies above cover many of the deployment needs for MongoDB environments. We hope this helps; please list any questions in the comments below.

New node joins, but is unreachable

Latest Forum Posts - November 13, 2017 - 1:47pm
Hi There,

I'm using Percona with Galera version 3.21 (r8678538).

I'm creating a new Percona XtraDB Cluster, but something goes wrong when I add the second node.
I can bootstrap the first node perfectly, but when the second node is added, I can see the increase of the wsrep_cluster_size
and wsrep_cluster_conf_id on the first node, but I can't login to the second node.
Whenever I try to access, I get the message:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)

When I run "netstat -ltnp" I can't find any process listening on port 3306 (on the second node):
$ netstat -ltnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1/systemd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 877/sshd
tcp 0 0 0.0.0.0:4567 0.0.0.0:* LISTEN 1530/mysqld
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1029/master
tcp6 0 0 :::111 :::* LISTEN 1/systemd
tcp6 0 0 :::22 :::* LISTEN 877/sshd
tcp6 0 0 ::1:25 :::* LISTEN 1029/master

But I can see the mysqld process running:
$ ps aux | grep -i mysql
mysql 1530 0.2 12.8 1154696 450120 pts/0 Sl 21:34 0:00 mysqld --user=mysql --log-error=/var/lib/mysql/error.log
mysql 1544 0.0 0.0 113492 1968 pts/0 S 21:34 0:00 /bin/bash -ue /bin/wsrep_sst_xtrabackup-v2 --role joiner --address 10.0.1.5 --datadir /var/lib/mysql/data/ --defaults-file /etc/my.cnf --defaults-group-suffix --parent 1530
mysql 1971 0.0 0.3 171724 12748 pts/0 Sl 21:34 0:00 innobackupex --no-version-check --use-memory=512000 --apply-log /var/lib/mysql/data//.sst
root 2033 0.0 0.0 112664 992 pts/0 R+ 21:36 0:00 grep --color=auto -i mysql

At first, I thought that the problem could be the SST still in progress, but my databases are empty and I've waited several minutes but nothing changed.
Didn't find anything relevant on the log files, no error messages, only warnings.

Any suggestion on how to solve this, please?

Thanks!

replace command in binary log not working

Latest Forum Posts - November 13, 2017 - 10:51am
I have edited INSERT to REPLACE in the binary log to overwrite the data in a table, but it is showing a duplicate key error.

Code:
[root@etim upload_new]# mysql --defaults-extra-file=/usr/bin/.config145 < BHKR_mysql-bin.002873_13-11-17_2030_1.sql
ERROR 1062 (23000) at line 32: Duplicate entry '6354429' for key 'PRIMARY'

Binary log output as below:

SET INSERT_ID=6354429/*!*/;
#171113 19:32:33 server id 10 end_log_pos 5363984 CRC32 0x349ae0c3 Query thread_id=882 exec_time=0 error_code=0
SET TIMESTAMP=1510581753/*!*/;
replace into temp_ticket(waybill_no,etim_no,trip_no,route_no,bu s_service,ticket_code,ticket_number,boarding_stage ,boarding_stage_code,from_stage_code,alright_stage ,alright_stage_code,till_stage_code,full_ticket,ha lf_ticket,luggage_qty,pass_qty,total_amt,ticket_da te,ticket_date_actual,ticket_time, pass_id,insp_conc_code,interstate_code,full_tkt_am t,half_tkt_amt,lugg_tkt_amt,lastupdate,lastupdate_ id,status,bus_capacity,ttl_stages,seq_no,is_ex_tkt ,uid_no,group_ticket_flag,secret_code,maharashtra_ adult_amount,maharashtra_child_amount,state_1_adul t_amount,state_1_child_amount,state_2_adult_amount ,state_2_child_amount,state_3_adult_amount,state_3 _child_amount,state_4_adult_amount,state_4_child_a mount,state_5_adult_amount,state_5_child_amount,ac tual_adult_fare_from_db,actual_child_fare_from_db, actual_lugg_fare_from_db,original_maharashtra_adul t_amount,original_maharashtra_child_amount,origina l_state_1_adult_amount,original_state_1_child_amou nt,original_state_2_adult_amount,original_state_2_ child_amount,original_state_3_adult_amount,origina l_state_3_child_amount,original_state_4_adult_amou nt,original_state_4_child_amount,original_state_5_ adult_amount,original_state_5_child_amount,per_adu lt_original_fare,per_child_original_fare,per_lugg_ original_fare,reservation_charge,gprs_status,seat_ no,epurse_ticket,refund_ticket_id,version_type,div isionname,divisioncode,deponame,depocode,ASN_amoun t,per_adult_service_tax_amount,per_child_service_t ax_amount,service_charge)values('109333','BHKR5002 ','000L1385','61109','3','00','023849','001','OO', 'BHKR','009','O-','NND','02','01','00','00','0014300','12/11/17','2017-11-12','17:12:42','','00','0','0011400','0002900','0' ,'','','','44','9','','0','','0','0','11400','2900 ','0','0','0','0','0','0','0','0','0','0','5670',' 2850','270','11400','2900','0','0','0','0','0','0' ,'0','0','0','0','5700','2900','300','0','1','','0 ','0','MOFF','','','','','300','0','0','0'),('1093 33','BHKR5002','000L1385','61109','3','00','023850 ','001','OO','BHKR','004','O-','SEBLAN','01','00','00','00','0002200','12/11/17','2017-11-12','17:13:05','','00','0','0002200','0000000','0' ,'','','','44','3.5','','0','','0','0','2200','0', '0','0','0','0','0','0','0','0','0','0','2205','11 00','120','2200','0','0','0','0','0','0','0','0',' 0','0','0','2200','0','100','



====
Create Table: CREATE TABLE `temp_ticket` (
`ticket_id` int(11) NOT NULL AUTO_INCREMENT,
`waybill_no` int(100) NOT NULL,
`etim_no` varchar(100) NOT NULL,
`trip_no` varchar(10) NOT NULL,
`route_no` int(100) NOT NULL,

`service_charge` int(20) NOT NULL,
PRIMARY KEY (`ticket_id`),
KEY `waybill_no` (`waybill_no`) USING BTREE
) ENGINE=MyISAM AUTO_INCREMENT=6595631 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

Percona Live Open Source Database Conference 2018 Call for Papers Is Now Open!

Latest MySQL Performance Blog posts - November 13, 2017 - 10:09am

We are announcing the opening of the call for papers for the Percona Live Open Source Database Conference 2018 in Santa Clara, CA. It will be open from now until December 22, 2017.

Our theme is “Championing Open Source Databases,” with topics of MySQL, MongoDB and other open source databases, including PostgreSQL, time series databases and RocksDB. Session tracks include Developers, Operations and Business/Case Studies.

We’re looking forward to your submissions! We want proposals that cover the many aspects and current trends of using open source databases, including design practices, application development, performance optimization, HA and clustering, cloud, containers and new technologies, as well as new and interesting ways to monitor and manage database environments.

Describe the technical and business values of moving to or using open source databases. How did you convince your company to make the move? Was there tangible ROI? Share your case studies, best practices and technical knowledge with an engaged audience of open source peers.

Possible topics include:

  • Application development. How are you building applications using open source databases to power the data layers? What languages, frameworks and data models help you to build applications that your customers love? Are you using MySQL, MongoDB, PostgreSQL, time series or other databases?  
  • Database performance. What database issues have you encountered while meeting new application and new workload demands? How did they affect the user experience? How did you address them? Are you using WiredTiger or a new storage engine like RocksDB? Have you moved to an in-memory engine? Let us know about the solutions you have found to make sure your applications can get data to users and customers.
  • DBaaS and PaaS. Are you using a Database as a Service (DBaaS) in the public cloud, or have you rolled out your own? Are you on AWS, Google Cloud, Microsoft Azure or RackSpace/ObjectRocket? Are you using a database in a Platform as a Service (PaaS) environment? Tell us how it’s going.
  • High availability. Are your applications a crucial part of your business model? Do they need to be available at all times, no matter what? What database challenges have you come across that impacted uptime, and how did you create a high availability environment to address them?
  • Scalability. Has scaling your business affected database performance, user experience or the bottom line? How are you addressing the database environment workload as your business scales? Let us know what technologies you used to solve issues.
  • Distributed databases. Are you moving toward a distributed model? Why? What is your plan for replication and sharding?
  • Observability and monitoring. How do we design open source database deployment with observability in mind? Are you using Elasticsearch or some other analysis tool? What tools are you using to monitor data? Grafana? Prometheus? Percona Monitoring and Management? How do you visualize application performance trends for maximum impact?
  • Container solutions. Do you use Docker, Kubernetes or other containers in your database environment? What are the best practices for using open source databases with containers and orchestration? Has it worked out for you? Did you run into challenges and how did you solve them?
  • Security. What security and compliance challenges are you facing and how are you solving them?
  • Migrating to open source databases. Did you recently migrate applications from proprietary to open source databases? How did it work out? What challenges did you face, and what obstacles did you overcome? What were the rewards?
  • What the future holds. What do you see as the “next big thing”? What new and exciting features have just been released? What’s in your next release? What new technologies will affect the database landscape? AI? Machine learning? Blockchain databases? Let us know what you see coming.

The Percona Live Open Source Database Conference 2018 Call for Papers is open until December 22, 2017. We invite you to submit your speaking proposal for breakout, tutorial or lightning talk sessions. Share your open source database experiences with peers and professionals in the open source community by presenting a:

  • Breakout Session. Broadly cover a technology area using specific examples. Sessions should be either 25 minutes or 50 minutes in length (including Q&A).
  • Tutorial Session. Present a technical session that aims for a level between a training class and a conference breakout session. Encourage attendees to bring and use laptops for working on detailed and hands-on presentations. Tutorials will be three or six hours in length (including Q&A).
  • Lightning Talk. Give a five-minute presentation focusing on one key point that interests the open source community: technical, lighthearted or entertaining talks on new ideas, a successful project, a cautionary story, a quick tip or demonstration.

Speaking at Percona Live is a great way to build your personal and company brands. If selected, you will receive a complimentary full conference pass!

Submit your talks now.

Tips for Submitting to Percona Live

Include presentation details, but be concise. Clearly state:

  • Purpose of the talk (problem, solution, action format, etc.)
  • Covered technologies
  • Target audience
  • Audience takeaway

Keep proposals free of sales pitches. The Committee is looking for case studies and in-depth technical talks, not ones that sound like a commercial.

Be original! Make your presentation stand out by submitting a proposal that focuses on real-world scenarios, relevant examples, and knowledge transfer.

Submit your proposals as soon as you can – the call for papers is open until December 22, 2017.

proxysql loses server after restart of mysqld (percona xtradb cluster)

Latest Forum Posts - November 13, 2017 - 7:47am
Hello
I have a test setup: 4 VMs, a 3-node Percona XtraDB Cluster with ProxySQL running on each node, and 1 VM as a client.
RHEL 6.9
Percona-XtraDB-Cluster-57-5.7.19-29.22.3.el6.x86_64
proxysql-1.4.3-1.1.el6.x86_64
all from percona repository
ProxySQL is set up with 2 hostgroups, one for write and one for read traffic, but the following problem occurs before any client connects:

Code:
(admin@localhost) [(none)]> select * from mysql_servers order by hostgroup_id,hostname;
| hostgroup_id | hostname     | port | status       | weight     | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
| 500 | 192.168.0.51 | 3306 | ONLINE       | 1000000000 | 0 | 1000 | 0 | 0 | 0 | |
| 500 | 192.168.0.52 | 3306 | OFFLINE_SOFT | 1000000    | 0 | 1000 | 0 | 0 | 0 | |
| 500 | 192.168.0.53 | 3306 | OFFLINE_SOFT | 100        | 0 | 1000 | 0 | 0 | 0 | |
| 501 | 192.168.0.51 | 3306 | OFFLINE_SOFT | 100        | 0 | 1000 | 0 | 0 | 0 | |
| 501 | 192.168.0.52 | 3306 | ONLINE       | 1000000000 | 0 | 1000 | 0 | 0 | 0 | |
| 501 | 192.168.0.53 | 3306 | ONLINE       | 1000000000 | 0 | 1000 | 0 | 0 | 0 | |

I restart mysqld on one node (/etc/init.d/mysql restart) as we would do after a yum update run, for example.
After mysql is back up again:

Code:
(admin@localhost) [(none)]> select * from mysql_servers order by hostgroup_id,hostname;
| hostgroup_id | hostname     | port | status | weight  | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
| 500 | 192.168.0.51 | 3306 | ONLINE | 1000000 | 0 | 1000 | 0 | 0 | 0 | WRITE |
| 501 | 192.168.0.52 | 3306 | ONLINE | 1000    | 0 | 1000 | 0 | 0 | 0 | READ  |
| 501 | 192.168.0.53 | 3306 | ONLINE | 1000    | 0 | 1000 | 0 | 0 | 0 | READ  |
3 rows in set (0.00 sec)

Note that this is the output of "select * from mysql_servers", but "select * from runtime_mysql_servers" shows the same. This also happens to proxysql on the first node if I restart mysqld on the second node. It does not always get corrupt, but often enough. We have a customer using proxysql on his appserver with the same effect (1.3.9 installed there). It does not lose all servers; sometimes it looks like this:
Code:
| hostgroup_id | hostname     | port | status       | weight     | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
| 500 | 192.168.0.51 | 3306 | ONLINE       | 1000000000 | 0 | 1000 | 0 | 0 | 0 | |
| 500 | 192.168.0.53 | 3306 | OFFLINE_SOFT | 100        | 0 | 1000 | 0 | 0 | 0 | |
| 501 | 192.168.0.51 | 3306 | OFFLINE_SOFT | 100        | 0 | 1000 | 0 | 0 | 0 | |
| 501 | 192.168.0.52 | 3306 | ONLINE       | 1000       | 0 | 1000 | 0 | 0 | 0 | READ |
| 501 | 192.168.0.53 | 3306 | ONLINE       | 1000000000 | 0 | 1000 | 0 | 0 | 0 | |

(first node's proxysql, after a restart of mysqld on the second node) Until we do "load mysql servers to memory;", proxysql keeps this broken state. Depending on the state, the next shutdown of a node could lead to downtime for the application, as no failover is possible. Anything more I could provide to get this fixed?
Hubertus

Fresh Install, mysql fails to start.

Latest Forum Posts - November 10, 2017 - 9:52am
I have a fresh install of Debian stretch, with a fresh install of percona 5.7.
I try to start the mysql service:
Code:
service mysql start
Job for mysql.service failed because the control process exited with error code.
See "systemctl status mysql.service" and "journalctl -xe" for details.

systemctl status mysql.service:
Code: systemctl status mysql.service ● mysql.service - Percona Server Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled) Active: deactivating (final-sigterm) (Result: exit-code) since Thu 2017-11-09 15:24:45 MST; 18h ago Process: 23866 ExecStart=/usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid $MYSQLD_OPTS (code=exited, status=1/FAILURE) Process: 23858 ExecStartPre=/usr/bin/ps_mysqld_helper (code=exited, status=0/SUCCESS) Process: 23817 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=0/SUCCESS) Main PID: 21289 (code=exited, status=0/SUCCESS) Tasks: 2 (limit: 7372) CGroup: /system.slice/mysql.service └─23869 /usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid Nov 10 10:16:59 dbase1 systemd[1]: Starting Percona Server... Nov 10 10:17:12 dbase1 systemd[1]: mysql.service: Control process exited, code=exited status=1 journalctl -xe
Code: journalctl -xe -- Defined-By: systemd -- Support: https://www.debian.org/support -- -- Unit mysql.service has finished shutting down. Nov 10 10:17:19 dbase1 systemd[1]: Starting Percona Server... -- Subject: Unit mysql.service has begun start-up -- Defined-By: systemd -- Support: https://www.debian.org/support -- -- Unit mysql.service has begun starting up. Nov 10 10:17:33 dbase1 mysqld[23962]: Initialization of mysqld failed: 0 Nov 10 10:17:33 dbase1 systemd[1]: mysql.service: Control process exited, code=exited status=1 Nov 10 10:17:38 dbase1 systemd[1]: Failed to start Percona Server. -- Subject: Unit mysql.service has failed -- Defined-By: systemd -- Support: https://www.debian.org/support -- -- Unit mysql.service has failed. -- -- The result is failed. Nov 10 10:17:38 dbase1 systemd[1]: mysql.service: Unit entered failed state. Nov 10 10:17:38 dbase1 systemd[1]: mysql.service: Failed with result 'exit-code'. Nov 10 10:17:39 dbase1 systemd[1]: mysql.service: Service hold-off time over, scheduling restart. Nov 10 10:17:39 dbase1 systemd[1]: Stopped Percona Server. -- Subject: Unit mysql.service has finished shutting down -- Defined-By: systemd -- Support: https://www.debian.org/support -- -- Unit mysql.service has finished shutting down. Nov 10 10:17:39 dbase1 systemd[1]: Starting Percona Server... -- Subject: Unit mysql.service has begun start-up -- Defined-By: systemd -- Support: https://www.debian.org/support -- -- Unit mysql.service has begun starting up. Nov 10 10:17:52 dbase1 mysqld[24053]: Initialization of mysqld failed: 0 Nov 10 10:17:52 dbase1 systemd[1]: mysql.service: Control process exited, code=exited status=1 Nov 10 10:17:57 dbase1 systemd[1]: Failed to start Percona Server. -- Subject: Unit mysql.service has failed -- Defined-By: systemd -- Support: https://www.debian.org/support -- -- Unit mysql.service has failed. -- -- The result is failed. Nov 10 10:17:57 dbase1 systemd[1]: mysql.service: Unit entered failed state. Nov 10 10:17:57 dbase1 systemd[1]: mysql.service: Failed with result 'exit-code'. Nov 10 10:17:57 dbase1 systemd[1]: mysql.service: Service hold-off time over, scheduling restart. Nov 10 10:17:57 dbase1 systemd[1]: Stopped Percona Server. -- Subject: Unit mysql.service has finished shutting down -- Defined-By: systemd -- Support: https://www.debian.org/support -- -- Unit mysql.service has finished shutting down. Nov 10 10:17:57 dbase1 systemd[1]: Starting Percona Server... -- Subject: Unit mysql.service has begun start-up -- Defined-By: systemd -- Support: https://www.debian.org/support -- -- Unit mysql.service has begun starting up. Nov 10 10:18:11 dbase1 mysqld[24146]: Initialization of mysqld failed: 0 Nov 10 10:18:11 dbase1 systemd[1]: mysql.service: Control process exited, code=exited status=1
Weird that it just seems to say it failed, but not why.

To rule out permissions, here are the listings for the relevant directories:
Code: ls -la /var/lib/mysql total 1061580 drwxr-x--- 6 mysql mysql 4096 Nov 9 16:00 . drwxr-xr-x 26 root root 4096 Nov 8 16:44 .. -rw-r----- 1 mysql mysql 56 Nov 8 16:44 auto.cnf drwxr-xr-x 2 mysql mysql 4096 Nov 9 15:59 binlog -rw------- 1 mysql mysql 1680 Nov 8 16:44 ca-key.pem -rw-r--r-- 1 mysql mysql 1083 Nov 8 16:44 ca.pem -rw-r--r-- 1 mysql mysql 1087 Nov 8 16:44 client-cert.pem -rw------- 1 mysql mysql 1680 Nov 8 16:44 client-key.pem -rw-r----- 1 mysql mysql 293 Nov 9 15:24 ib_buffer_pool -rw-r----- 1 mysql mysql 12582912 Nov 9 15:24 ibdata1 -rw-r----- 1 mysql mysql 536870912 Nov 9 15:24 ib_logfile0 -rw-r----- 1 mysql mysql 536870912 Nov 9 13:34 ib_logfile1 drwxr-x--- 2 mysql mysql 4096 Nov 8 16:44 mysql -rw-r----- 1 mysql mysql 645404 Nov 9 15:24 mysql-error.log -rw-r----- 1 mysql mysql 650 Nov 9 13:56 mysql-slow.log drwxr-x--- 2 mysql mysql 4096 Nov 8 16:44 performance_schema -rw------- 1 mysql mysql 1676 Nov 8 16:44 private_key.pem -rw-r--r-- 1 mysql mysql 452 Nov 8 16:44 public_key.pem -rw-r--r-- 1 mysql mysql 1087 Nov 8 16:44 server-cert.pem -rw------- 1 mysql mysql 1676 Nov 8 16:44 server-key.pem drwxr-x--- 2 mysql mysql 12288 Nov 8 16:44 sys and:
Code:
ls -la /var/run/mysqld/
total 0
drwxr-xr-x  2 mysql mysql  40 Nov  9 15:24 .
drwxr-xr-x 17 root  root  520 Nov  9 13:26 ..

Everything is owned by mysql, so it should be fine.

and here is the my.cnf:
Code: cat /etc/mysql/my.cnf # Generated by Percona Configuration Wizard (http://tools.percona.com/) version REL5-20120208 # Configuration name dbase1-1 generated for sama@mt.net at 2017-11-09 00:17:23 !includedir /etc/mysql/conf.d/ !includedir /etc/mysql/percona-server.conf.d/ [mysql] # CLIENT # port = 3306 socket = /var/lib/mysql/mysql.sock [mysqld] # GENERAL # user = mysql default-storage-engine = InnoDB socket = /var/lib/mysql/mysql.sock pid-file = /var/lib/mysql/mysql.pid # MyISAM # key-buffer-size = 32M myisam-recover = FORCE,BACKUP # SAFETY # max-allowed-packet = 16M max-connect-errors = 1000000 skip-name-resolve sql-mode = STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_AUTO_VALUE_ON_ZERO,NO_ENGINE_SUBSTITUTION,NO_ZERO_DATE,NO_ZERO_IN_DATE,ONLY_FULL_GROUP_BY sysdate-is-now = 1 innodb = FORCE # DATA STORAGE # datadir = /var/lib/mysql/ # BINARY LOGGING # server-id = 1 log-bin = /var/lib/mysql/binlog/mysql-bin expire-logs-days = 14 sync-binlog = 1 # CACHES AND LIMITS # tmp-table-size = 32M max-heap-table-size = 32M query-cache-type = 0 query-cache-size = 0 max-connections = 500 thread-cache-size = 50 open-files-limit = 65535 table-definition-cache = 4096 table-open-cache = 4096 # INNODB # innodb-flush-method = O_DIRECT innodb-log-files-in-group = 2 innodb-log-file-size = 512M innodb-flush-log-at-trx-commit = 1 innodb-file-per-table = 1 innodb-buffer-pool-size = 216G # LOGGING # log-error = /var/lib/mysql/mysql-error.log log-queries-not-using-indexes = 1 slow-query-log = 1 slow-query-log-file = /var/lib/mysql/mysql-slow.log From what I gather the problem is with this line
Code:
/usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid

But I don't see any problem with it, so I am really at a loss as to how to fix this.

This Week in Data with Colin Charles 14: A Meetup in Korea and The Magic Quadrant

Latest MySQL Performance Blog posts - November 10, 2017 - 3:03am

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

We’re close to opening up the call for papers for Percona Live Santa Clara 2018 and I expect this to happen next week. We also have a committee all lined up and ready to vote on submissions.

In other news, I’ve spent some time preparing for the Korean MySQL Power Group meetup to be held in Seoul this Saturday, 11 November 2017. This is a great opportunity for us to extend our reach in Asia. This meetup gathers together top DBAs from Internet companies that use MySQL and related technologies.

Gartner has released their Magic Quadrant for Operational Database Management Systems 2017. Reprint rights have been given to several vendors, e.g. EnterpriseDB and Microsoft. I’m sure you can find other links. The Magic Quadrant features far fewer database vendors now; many have been dropped. What’s your take on it?

Releases

This was a slow release week. Check out:

Link List

Feedback

I look forward to feedback/tips via e-mail at colin.charles@percona.com or on Twitter @bytebot.

MySQL and Linux Context Switches

Latest MySQL Performance Blog posts - November 9, 2017 - 11:50am

In this blog post, I’ll look at MySQL and Linux context switches, and what a normal number per second is for a database environment.

You might have heard many times about the importance of looking at the number of context switches to indicate whether MySQL is suffering from internal contention issues. I often get the question of what a “normal” or “acceptable” number is, and at what point you should worry about the number of context switches per second.

First, let’s talk about what context switches are in Linux. This StackOverflow Thread provides a good discussion, with a lot of details, but basically it works like this:  

The process (or thread, in MySQL’s case) is running its computations. Sooner or later, it has to do some blocking operation: disk IO, network IO, block waiting on a mutex, or yield. The execution then switches to another process, and this is called a voluntary context switch. On the other hand, the process/thread may be preempted by the scheduler because it used up its allotted amount of CPU time (and other tasks now need to run), or because a higher-priority task needs to run. This is called an involuntary context switch. When all the processes in the system are added together and totaled, this is the system-wide number of context switches reported (using, for example, vmstat):

root@nuc2:~# vmstat 10
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
17  0      0 12935036 326152 2387388    0    0     0     5     0      1  9  0 91  0  0
20  0      0 12933936 326152 2387384    0    0     0     3 32228 124791 77 22  1  0  0
17  0      0 12933348 326152 2387364    0    0     0    11 33212 124575 78 22  1  0  0
16  0      0 12933380 326152 2387364    0    0     0    78 32470 126100 78 22  1  0  0
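
If you want to see the voluntary/involuntary split for mysqld itself, pidstat from the sysstat package (assuming it is installed) reports both per second:

pidstat -w -p $(pidof mysqld) 10
# cswch/s   = voluntary context switches per second
# nvcswch/s = involuntary (preemption) context switches per second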

This is a global number. In many cases, however, it is better to look at it as context switches per CPU logical core. This is because cores execute tasks independently. As such, they have mostly independent causes for context switches. If you have a large number of cores, there can be quite a difference:

The number of context switches per second on this system looks high (at more than 1,000,000). Considering it has 56 logical cores, however, it is only about 30,000 per second per logical core (which is not too bad).
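
A quick way to get that per-core figure yourself (a rough sketch; cs is the 12th column in this vmstat output format) is:

vmstat 10 2 | tail -1 | awk -v cores=$(nproc) '{printf "%.0f context switches/sec per logical core\n", $12/cores}'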

So how do we judge if the number of context switches is too high in your system? One answer is that it is too high if you’re wasting too much CPU on context switches. This brings up the question: how many context switches can the system handle if it is only doing context switches?

It is easy to find this out!  

Sysbench has a “threads” test designed specifically to measure this. For example:

sysbench --thread-locks=128 --time=7200 --threads=1024 threads run

Check the vmstat output or the Context Switches PMM graph:

We can see this system can handle up to 35 million context switches per second in total (or some 500K per logical CPU core on average).

I don’t recommend using more than 10% of CPU resources on context switching, so I would try to keep the number of context switches at no more than 50K per logical CPU core.

Now let’s think about context switches from the other side: how many context switches do we expect to have at the very minimum for given load? Even if all the stars align and your query to MySQL doesn’t need any disk IO or context switches due to waiting for mutexes, you should expect at least two context switches: one to the client thread which processes the query and one for the query response sent to the client.    

Using this logic, if we have 100,000 queries/sec we should expect 200,000 context switches at the very minimum.
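
To check your own ratio, sample both rates over the same window and divide; a sketch, assuming mysqladmin can log in with your usual credentials:

# queries executed in a 10-second window (the second, relative sample shows the delta)
mysqladmin -r -i 10 -c 2 extended-status | grep -w Questions
# context switches per second, averaged over 10 seconds
vmstat 10 2 | tail -1 | awk '{print $12}'
# (context switches/sec) / (queries/sec) = context switches per query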

In the real world, though, I would not worry about contention being a big issue if you have less than ten context switches per query.

It is worth noting that in MySQL not every contention results in a context switch. InnoDB implements its own mutexes and RW-locks, which often try to “spin” to wait for a resource to become available. This wastes CPU time directly rather than doing a context switch.
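
If you want to see how much spinning is happening instead, a quick check (a sketch; the rwlock status counters assume a reasonably recent MySQL or Percona Server 5.7) is:

-- the SEMAPHORES section reports spin waits and spin rounds
SHOW ENGINE INNODB STATUS\G
-- spin activity is also exposed as status counters
SHOW GLOBAL STATUS LIKE 'Innodb_rwlock%';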

Summary:

  • Look at the number of context switches per logical core rather than the total for easier-to-compare numbers
  • Find out how many context switches your system can handle per second, and don’t get too concerned if your context switches are no more than 10% of that number
  • Think about the number of context switches per query: the minimum possible is two, and values less than 10 make contention an unlikely issue
  • Not every MySQL contention results in a high number of context switches

Cardinality drops to 0 (zero) under write load in Version 5.7.19-17

Latest Forum Posts - November 9, 2017 - 2:31am
Hi
Over the last month we have had problems with some indexes in our PerconaDB (v. 5.7.17). From time to time, the cardinality of all indexes on 2 tables just drops.
We use ANALYZE table asins; to fix this.

Table 1:
- around 800k rows
- created with
CREATE TABLE `foooo1` (
`val1` char(2) COLLATE utf8_bin NOT NULL,
`val2` char(20) COLLATE utf8_bin NOT NULL,
`val3` char(20) COLLATE utf8_bin NOT NULL,
`val4` mediumint(8) unsigned NOT NULL,
PRIMARY KEY (`val1`,`val2`),
KEY `reverse` (`val1`,`val3`)
) ENGINE=TokuDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin ROW_FORMAT=TOKUDB_LZMA; Table 2:
- around 400k rows
- created with
CREATE TABLE `foooo2` (
`val1` char(2) COLLATE utf8_bin NOT NULL,
`val2` char(20) COLLATE utf8_bin NOT NULL,
`val3` bigint(20) NOT NULL,
`val4` bigint(20) DEFAULT NULL,
`val5` text COLLATE utf8_bin,
[..multiple text columns like this...]
PRIMARY KEY (`val1`,`val2`)
) ENGINE=TokuDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin ROW_FORMAT=TOKUDB_LZMA; Logs:
error.log and general.log/mysql.log do not contain anything suspicious at the specific drop times

This problem has been haunting us for weeks now. First we observe a heavy performance drop in the database. After checking the tables, we found the missing cardinalities. This ends up in full table scans for every query.

Attempt 1:
After reading a lot about problems like this in the current version, we decided to upgrade the server to v. 5.7.19-17.
But the problem is still there.

Attempt 2:
Disable all auto re-analyze settings to test if it is some kind of failed background analyze.
'tokudb_analyze_delete_fraction', '1.000000'
'tokudb_analyze_in_background', 'OFF'
'tokudb_analyze_mode', 'TOKUDB_ANALYZE_STANDARD'
'tokudb_analyze_throttle', '0'
'tokudb_analyze_time', '0'
But the problem is still there.

Attempt 3:
Disable writes on table 'foooo2' to check if it only happens under load.
The problem is still there on 'foooo1', but table 'foooo2' is now working perfectly fine.
So the problem only appears under write load.

If someone could provide additional information to help track that down, I would be very grateful.

MySQL instance Corruption

Latest Forum Posts - November 8, 2017 - 10:43pm
I have a 3 node cluster and one of the instances appears to be corrupted with the following during partial startup followed by instance crash:

InnoDB: is in the future! Current system log sequence number 139654003953.
InnoDB: Your database may be corrupt or you may have copied the InnoDB
InnoDB: tablespace but not the InnoDB log files

I am wondering if I am able to recover this instance with the following methodology:

1. remove instance from cluster
2. start instance with force recovery parameter
3. identify corrupted table
4. drop table
5. import table from another instance
6. start instance
7. add back into cluster

Regards,
Geofrey.

Users Credentials

Latest Forum Posts - November 8, 2017 - 2:02pm
Server A(old): MySQL 5.1, Server B(New): MySQL 5.7. OS on both: RH Linux 6.8

I have exported DBs from Server A to Server B using MySQL Workbench. I have installed Percona Toolkit on Server A.

I need to copy all the users and their passwords to Server B. I am getting errors on Server A when I run the following commands:

]$ pt-show-grants>grants.sql

-bash: grants.sql: Permission denied

$ pt-show-grants

DBI connect(';;mysql_read_default_group=client','',... ) failed: Access denied for user 'xyz'@'localhost' (using password: NO) at /usr/bin/pt-show-grants line 1367


1: Do I need to add a user ID and password to the above pt command to access Server A? How? I need the user IDs and permissions of all users on Server A to restore on Server B. I am able to log in to both servers without a problem. I am just getting errors on the pt command.

2: Do I need to install Percona Toolkit on Server B to restore?

Thanks for your help.

Can't access few tables after restoring cold backup of tokudb

Latest Forum Posts - November 8, 2017 - 11:52am
I can't access a few tables after restoring a cold backup of TokuDB to a new server. Most of the tables are accessible, but a few can't be accessed. I have checked the source database and the tables are accessible there.

MariaDB [amarchive]> select * from user_agent_credit limit 5;
ERROR 1017 (HY000): Can't find file: 'user_agent_credit' (errno: 2)

Also, I am getting the below warning in error log.
[Warning] ./amarchive/user_agent_credit.frm is inconsistent: engine typecode 42, engine name TokuDB (43)

Server version: 5.5.39-tokudb-7.5.2-MariaDB-log MariaDB Server

Can you please help me resolve the issue?

Using Prometheus to Check for auto_increment Value Exhaustion

Latest MySQL Performance Blog posts - November 8, 2017 - 11:48am

In this blog post, we’ll look at how to use Prometheus to check for auto_increment value exhaustion.

One of the proactive tasks DBAs perform is checking if a field defined as auto_increment is about to reach the maximum allowed value of its int definition. For example, if a field is defined as smallint unsigned and your next auto_increment value is 65535 (the maximum possible value for smallint), you are in trouble. You should modify the column definition to at least mediumint unsigned (which has a maximum value of 16777215) before you start to silently lose data or are unable to insert more rows.
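
For a single suspect table, you can check and fix this by hand; a hedged sketch (schema, table and column names are made up) looks like this:

-- how close is the counter to the column's ceiling?
SELECT auto_increment
  FROM information_schema.tables
 WHERE table_schema = 'mydb' AND table_name = 'mytable';

-- widen the column before it overflows (this blocks writes while it runs on large tables)
ALTER TABLE mydb.mytable
  MODIFY COLUMN id MEDIUMINT UNSIGNED NOT NULL AUTO_INCREMENT;

The rest of this post is about letting Prometheus do that check for you across every table.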

Prometheus and the mysqld exporter

When you deploy the Percona Monitoring and Management (PMM) platform, you’re not only getting a state-of-the-art visualization tool, but you also get the Prometheus time-series database with it as part of the package. Prometheus connects to exporters running on a PMM Client and aggregates metrics collected by those exporters. For this case, the exporter that we care about is for MySQL (https://github.com/percona/mysqld_exporter), and specifically the metrics gathered by the collect.auto_increment.columns collector.

Tablestats

To know if your PMM installation is collecting this data, you should check with pmm-admin. If your output comes with tablestats=OFF, you are not collecting this metric:

[root@localhost]# pmm-admin list | grep tablestats
mysql:metrics   ip-10-0-0-207   42002   YES   percona:***@unix(/var/lib/mysql/mysql.sock)   tablestats=OFF

By default, the tablestats feature is enabled (unless your MySQL server has more than 1000 tables). To enable it beyond that limit, re-add the metrics using the --disable-tablestats-limit parameter with a larger value than the number of your tables (described in https://www.percona.com/doc/percona-monitoring-and-management/pmm-admin.html):

pmm-admin add mysql:metrics --disable-tablestats-limit 10000

Prometheus query

Prometheus not only stores the current value of the auto_increment column (mysql_info_schema_auto_increment_column), but also has the maximum allowed value per int type on it (mysql_info_schema_auto_increment_column_max). We can get the percentage of values used for the column with a single query, like:

mysql_info_schema_auto_increment_column*100/mysql_info_schema_auto_increment_column_max (available thanks to Max Bubenick)

Now, where do I execute the query? There are three options: the Prometheus GUI, using the API or the MySQL Table Statistics dashboard in PMM.

GUI

Every PMM installation allows you to access Prometheus by going to the URL http://<serverIP>/prometheus. Put the query in the text field, and hit Execute. The results look like this:

Prometheus GUI

 

You can see a row for each existing table. In this case we can see that the table “sbtest20” is using 95% of the available values. This makes it a candidate for a column definition modification. The remaining tables are using around 0.0002% of the values, so there’s plenty of room to grow.

API

This is probably the best approach for checking this regularly. You can use the REST API to query metrics (as explained in https://prometheus.io/docs/querying/api/), like this:

curl -g 'http://127.0.0.1:80/prometheus/api/v1/query?query=(mysql_info_schema_auto_increment_column{schema="percona",table="sbtest20"}*100/mysql_info_schema_auto_increment_column_max{schema="percona",table="sbtest20"})'

See the new data? If you want to query the metrics for a specific table like I do, then just add the schema and table name: {schema="percona",table="sbtest20"}:

[root@localhost ~]# curl -g 'http://127.0.0.1:80/prometheus/api/v1/query?query=(mysql_info_schema_auto_increment_column{schema="percona",table="sbtest20"}*100/mysql_info_schema_auto_increment_column_max{schema="percona",table="sbtest20"})' | python -m json.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   220  100   220    0     0   1646      0 --:--:-- --:--:-- --:--:--  1666
{
    "data": {
        "result": [
            {
                "metric": {
                    "column": "id",
                    "instance": "localhost.localdomain",
                    "job": "mysql",
                    "schema": "percona",
                    "table": "sbtest20"
                },
                "value": [
                    1509395634.736,
                    "95.2755905511811"
                ]
            }
        ],
        "resultType": "vector"
    },
    "status": "success"
}

The part of the output that matters here is:

"value": [ 1509395634.736, "95.2755905511811" ]

As we saw in the GUI output, the table sbtest20 already used 95% of the available values for the column int definition. In this case, it is tinyint signed and the current maximum value is 120 (out of 127):

mysql> show create table sbtest20\G
*************************** 1. row ***************************
       Table: sbtest20
Create Table: CREATE TABLE `sbtest20` (
  `id` tinyint(10) NOT NULL AUTO_INCREMENT,
  `k` int(10) unsigned NOT NULL DEFAULT '0',
  `c` char(120) NOT NULL DEFAULT '',
  `pad` char(60) NOT NULL DEFAULT '',
  PRIMARY KEY (`id`),
  KEY `k_1` (`k`)
) ENGINE=InnoDB AUTO_INCREMENT=121 DEFAULT CHARSET=latin1
1 row in set (0.02 sec)

mysql> select max(id) from sbtest20;
+---------+
| max(id) |
+---------+
|     120 |
+---------+
1 row in set (0.09 sec)

PMM’s MySQL Table Statistics dashboard

This information is also available on PMM, just go to the dashboard called MySQL Table Statistics and look for the section “Top Tables by Auto Increment Usage”:

The above graph is from Percona’s PMM demo setup, which is available publicly to everyone. You can access that particular chart by going to https://pmmdemo.percona.com/graph/dashboard/db/mysql-table-statistics?refresh=1m&panelId=53&fullscreen&orgId=1.
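
The same expression can also drive a Prometheus alerting rule, so you are warned proactively instead of having to check a dashboard. A sketch in the Prometheus 1.x rule syntax that PMM shipped at the time (the rule name and 90% threshold are arbitrary):

ALERT AutoIncrementNearlyExhausted
  IF mysql_info_schema_auto_increment_column * 100 / mysql_info_schema_auto_increment_column_max > 90
  FOR 10m
  ANNOTATIONS {
    summary = "auto_increment column over 90% of its maximum on {{ $labels.schema }}.{{ $labels.table }}"
  }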

Conclusion

There are several ways to check for auto_increment capacity; this is just one of them. But what really matters is how PMM collects a rich set of information, and that it is available just one API request away. Note that each additional metric series collected by Prometheus adds work for the PMM Server, so watch that your server doesn’t enter what is known as Rushed Mode. This is a condition where the PMM Server is experiencing a high volume of writes and is struggling to keep up. You can identify this condition (since version 1.4) by examining the Prometheus dashboard, and in particular the graph Prometheus Problems. Check the metric series called “Has been in Rushed Mode” and see if it is greater than 0.

Enjoy PMM!

Lost data after stress test

Latest Forum Posts - November 8, 2017 - 12:26am
Hello! I have a cluster with 3 nodes on dedicated servers (Hetzner) running CentOS 6, with Percona XtraDB Cluster 5.7. When I run sysbench like

sysbench --mysql-table-engine=innodb --oltp-reconnect=on --oltp-table-size=10000 --mysql-host=127.0.0.1 --mysql-port=3306 --mysql-user=root --mysql-password=MyPass --db-driver=mysql /usr/share/sysbench/tests/include/oltp_legacy/insert.lua prepare

replication between the nodes works well, but when I change --oltp-table-size=10000 to --oltp-table-size=100000 or more, the data between the nodes is different,
like

node1:
mysql> SELECT table_schema "sbtest", sum( data_length + index_length )/1024/1024 "Data Base Size in MB" FROM information_schema.TABLES GROUP BY table_schema;
+--------------------+----------------------+
| database_name | Data Base Size in MB |
+--------------------+----------------------+
| information_schema | 0.17187500 |
| mysql | 2.42676449 |
| percona | 0.01562500 |
| performance_schema | 0.00000000 |
| sbtest | 25.06250000 |
| sys | 0.01562500 |
+--------------------+----------------------+
6 rows in set (0.03 sec)

node2:
mysql> SELECT table_schema "sbtest", sum( data_length + index_length )/1024/1024 "Data Base Size in MB" FROM information_schema.TABLES GROUP BY table_schema;
+--------------------+----------------------+
| sbtest | Data Base Size in MB |
+--------------------+----------------------+
| information_schema | 0.17187500 |
| mysql | 2.42676449 |
| percona | 0.01562500 |
| performance_schema | 0.00000000 |
| sbtest | 23.06250000 |
| sys | 0.01562500 |
+--------------------+----------------------+
6 rows in set (0.02 sec)

node3:
mysql> SELECT table_schema "sbtest", sum( data_length + index_length )/1024/1024 "Data Base Size in MB" FROM information_schema.TABLES GROUP BY table_schema;
+--------------------+----------------------+
| sbtest | Data Base Size in MB |
+--------------------+----------------------+
| information_schema | 0.17187500 |
| mysql | 2.42676449 |
| percona | 0.01562500 |
| performance_schema | 0.00000000 |
| sbtest | 24.06250000 |
| sys | 0.01562500 |
+--------------------+----------------------+
6 rows in set (0.02 sec)



my.cnf from first node is:

#
# The Percona XtraDB Cluster 5.7 configuration file.
#
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
# Please make any edits and changes to the appropriate sectional files
# included below.
#
!includedir /etc/my.cnf.d/
!includedir /etc/percona-xtradb-cluster.conf.d/

[mysqld]

datadir=/var/lib/mysql
user=mysql

# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so

# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://94.130.142.232,94.130.142.231,94.130.142.228

# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW

# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB

# This InnoDB autoincrement locking mode is a requirement for Galera
innodb_autoinc_lock_mode=2

# Node 1 address
wsrep_node_address=94.*.*.232

# SST method
wsrep_sst_method=xtrabackup-v2

# Cluster name
wsrep_cluster_name=my_centos_cluster

# Authentication for SST method
wsrep_sst_auth="sstuser:s3cret"

on the other nodes my.cnf are the same, except wsrep_node_address=

Is this normal, or does someone have a solution?

how do I run pt-stalk to loop continuously gathering report?

Latest Forum Posts - November 7, 2017 - 9:36pm
I'm currently working on load testing for a new application, and I would like to gather reports from pt-stalk for the next 24 hours. How do I configure pt-stalk to work like that?

Upcoming MongoDB Security Webinar November 8, 2017: Enhanced Security Using LDAP Authentication

Latest MySQL Performance Blog posts - November 7, 2017 - 7:58am

Join Percona’s Senior Technical Services Engineer Adamo Tonete as he presents MongoDB Security Webinars: Enhanced Security Using LDAP Authentication on Wednesday, November 8, 2017 at 11:00 am PST / 2:00 pm EST (UTC-8).

  • Experience: Intermediate
  • Tags: SysAdmin, DBAs

In this webinar, we are going to demonstrate how to configure Percona Server for MongoDB in a secure replica set using LDAP authentication. LDAP authentication allows you to use a single password server to authenticate users. LDAP authentication is only available on MongoDB Enterprise, but Percona offers this feature free of charge with Percona Server for MongoDB. Check out what other Enterprise features Percona Server for MongoDB provides for free.

Register Now

Register for the webinar.

Adamo Tonete, Senior Technical Services Engineer

Adamo joined Percona in 2015, after working as a MongoDB/MySQL Database Administrator for three years. As the main DBA of a startup, he was responsible for suggesting the best architecture and data flows for a worldwide company in a 24×7 environment. Before that, he worked as a Microsoft SQL Server DBA at a large e-commerce company. Here he mainly focused on performance tuning and automation. Adamo has almost eight years of experience working as a DBA. In the past three years he has moved to NoSQL technologies without giving up relational databases. He likes to play video games and to study everything that is related to engines. Adamo lives with his wife in São Paulo, Brazil.

Unable to show Mysql Summary at PMM Query Analytic

Latest Forum Posts - November 7, 2017 - 3:08am
PMM-Client version: 1.4.1
PMM-Server version: 1.4.1
Percona-Toolkit version: 3.0.4

The System Summary section shows correct information, but the MySQL Summary section only shows "exit status 1".

Agent log as below

[root@db8888 ~]# tail /var/log/pmm-mysql-queries-0.log

# PID: 30598

# API: 8.8.8.8/qan-api

# UUID: 77e9fb0008fd405e5050058484e73ead

2017/11/07 18:29:48.750261 main.go:163: Starting agent...

2017/11/07 18:29:48.755488 main.go:331: Agent is ready

2017/11/07 18:29:48.757690 main.go:204: API is ready

2017/11/07 18:30:23.439962 ERROR agent exit status 1

2017/11/07 18:30:23.440079 WARNING agent Cmd fail: Reply[Cmd:GetMySQLSummary Error:'exit status 1' Id:efdad214-59cd-4282-6708-901fdae0e045]

2017/11/07 18:53:59.509491 ERROR agent exit status 1

2017/11/07 18:53:59.509626 WARNING agent Cmd fail: Reply[Cmd:GetMySQLSummary Error:'exit status 1' Id:a0c97888-a983-4024-7b17-ec878ff838ed]



But directly executing pt-mysql-summary on the target machine with the user, password, host and port shows the information correctly.

I tried to trace the code. I'm not sure I'm on the correct path, but it seems it does not pass the user, password, host and port to pt-mysql-summary:
https://github.com/percona/qan-agent.../agent.go#L694

Option --tables-exclude does not behave as expected

Latest Forum Posts - November 7, 2017 - 1:43am
Recently I found that the --tables-exclude option does not behave as expected.
The backup command i used:
innobackupex --defaults-file=/usr/local/mysql-5.7.12/my.cnf --user=root --host=localhost --socket=/var/lib/mysql/mysql1.sock --password=root --no-timestamp --tables-exclude=test.t backup

There are three tables in database test, e.g. t, t2 and t3. I want to exclude table t from the backup, but I found that innobackupex excludes all three tables.
Does --tables-exclude always treat the option value as a regex, so that if a table name in database test contains the character 't', that table will be excluded?



innobackupex version:
innobackupex version 2.4.8 Linux (x86_64) (revision id: 97330f7)
MySQL version: 5.7.12

Feel free to respond, thanks!

XbStream is waiting

Latest Forum Posts - November 7, 2017 - 1:08am
Hi,

I am new to xtrabackup.
I tried to take a backup using xbstream and stored it on a disk on another machine. Later, I am trying to extract the data with the xbstream command. The command waits and does not exit.

But I see that after my last chunk is applied, and after some time, the data is extracted to the specified folder, yet the process is still waiting for something.

[root@rhel6532bit _stage]# ps aux | grep "xbstream"
root 15436 0.0 0.1 24652 11624 ? Sl 14:08 0:00 /usr/bin/xbstream -x -C /_stage/new


Can anyone tell me what the xbstream command is waiting for?