
mysql:queries API Error

Latest Forum Posts - November 24, 2017 - 3:06pm
Our PMM server is behind a proxy for security. We can connect to it from our clients and add linux:metrics and mysql:metrics. However, when we try to add mysql:queries, we get an error: "Error adding MySQL queries: timeout 10s waiting on agent to connect to API." It was working until we put it behind the proxy.

Not sure how we can debug this. Thanks!

Can't start Percona Cluster (mysql) after "self-leave"

Latest Forum Posts - November 24, 2017 - 6:21am
Hello,

No matter what I do, I can't start Percona node 2 in my 2-node cluster... (yes, I know that I should not run 2 nodes in a cluster, but that's what it is for now)
RHEL7, latest Percona 5.7.

Your time and help are much appreciated. This is a UAT setup, but if all goes well it will go into PROD...

This setup was running fine until one day it did this:

Code:
2017-11-17T11:55:13.044889Z 4 [Note] WSREP: Current view of cluster as seen by this node view (view_id(NON_PRIM,5fc8c4bc,4) memb { 8567ecca,0 } joined { } left { } partitioned { 5fc8c4bc,0 } )
2017-11-17T11:55:13.044957Z 4 [Note] WSREP: Current view of cluster as seen by this node view ((empty))
2017-11-17T11:55:13.045119Z 4 [Note] WSREP: gcomm: closed
2017-11-17T11:55:13.045186Z 0 [Note] WSREP: New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 1
2017-11-17T11:55:13.045220Z 0 [Note] WSREP: Flow-control interval: [100, 100]
2017-11-17T11:55:13.045225Z 0 [Note] WSREP: Trying to continue unpaused monitor
2017-11-17T11:55:13.045227Z 0 [Note] WSREP: Received NON-PRIMARY.
2017-11-17T11:55:13.045230Z 0 [Note] WSREP: Shifting SYNCED -> OPEN (TO: 8725)
2017-11-17T11:55:13.045250Z 0 [Note] WSREP: Received self-leave message.
2017-11-17T11:55:13.045254Z 0 [Note] WSREP: Flow-control interval: [0, 0]
2017-11-17T11:55:13.045257Z 0 [Note] WSREP: Trying to continue unpaused monitor
2017-11-17T11:55:13.045259Z 0 [Note] WSREP: Received SELF-LEAVE. Closing connection.
2017-11-17T11:55:13.045261Z 0 [Note] WSREP: Shifting OPEN -> CLOSED (TO: 8725)
2017-11-17T11:55:13.045284Z 0 [Note] WSREP: RECV thread exiting 0: Success
2017-11-17T11:55:13.045345Z 4 [Note] WSREP: recv_thread() joined.
2017-11-17T11:55:13.045353Z 4 [Note] WSREP: Closing replication queue.
2017-11-17T11:55:13.045357Z 4 [Note] WSREP: Closing slave action queue.
2017-11-17T11:55:13.045756Z 4 [Note] WSREP: /usr/sbin/mysqld: Terminated.

Nothing gets logged to /var/log/mysql.log.

Here is my error message:

Code:
-- Unit mysql.service has begun starting up.
Nov 24 13:38:27 S-02 mysqld_safe[126039]: 2017-11-24T13:38:27.619589Z mysqld_safe Logging to '/var/log/mysqld-safe.log'.
Nov 24 13:38:27 S-02 mysqld_safe[126039]: 2017-11-24T13:38:27.623103Z mysqld_safe Logging to '/var/log/mysqld-safe.log'.
Nov 24 13:38:27 S-02 mysqld_safe[126039]: 2017-11-24T13:38:27.649402Z mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
Nov 24 13:38:27 S-02 mysqld_safe[126039]: 2017-11-24T13:38:27.665308Z mysqld_safe WSREP: Running position recovery with --log_error='/var/lib/mysql/wsrep_recovery.pZnH0L' --pid-file='/var/lib/mysql/server-02-recover.pid'
Nov 24 13:38:31 S-02 mysql-systemd[126040]: /usr/bin/mysql-systemd: line 140: kill: (126039) - No such process
Nov 24 13:38:31 S-02 mysql-systemd[126040]: ERROR! mysqld_safe with PID 126039 has already exited: FAILURE
Nov 24 13:38:31 S-02 systemd[1]: mysql.service: control process exited, code=exited status=1
Nov 24 13:38:31 S-02 mysql-systemd[126701]: WARNING: mysql pid file /var/lib/mysql/mysqld.pid empty or not readable
Nov 24 13:38:31 S-02 mysql-systemd[126701]: ERROR! mysql already dead
Nov 24 13:38:31 S-02 systemd[1]: mysql.service: control process exited, code=exited status=2
Nov 24 13:38:31 S-02 mysql-systemd[126732]: WARNING: mysql pid file /var/lib/mysql/mysqld.pid empty or not readable
Nov 24 13:38:31 S-02 mysql-systemd[126732]: WARNING: mysql may be already dead
Nov 24 13:38:31 S-02 systemd[1]: Failed to start Percona XtraDB Cluster.
-- Subject: Unit mysql.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mysql.service has failed.
--
-- The result is failed.
Nov 24 13:38:31 S-02 systemd[1]: Unit mysql.service entered failed state.
Nov 24 13:38:31 S-02 systemd[1]: mysql.service failed.
Nov 24 13:38:31 S-02 polkitd[622]: Unregistered Authentication Agent for unix-process:125992:169691973 (system bus name :1.22171, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)

And this is what's in wsrep_recovery:

Code:
2017-11-24T13:38:27.847539Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2017-11-24T13:38:27.849040Z 0 [Note] /usr/sbin/mysqld (mysqld 5.7.19-17-57-log) starting as process 126662 ...
2017-11-24T13:38:27.852025Z 0 [Warning] No argument was provided to --log-bin, and --log-bin-index was not used; so replication may break when this MySQL server acts as a master and has his hostname changed!! Please use '--log-bin=S-02-bin' to avoid this problem.
2017-11-24T13:38:27.856882Z 0 [Note] InnoDB: PUNCH HOLE support available
[...CUT...]
2017-11-24T13:38:28.523347Z 0 [Note] InnoDB: Percona XtraDB (http://www.percona.com) 5.7.19-17 started; log sequence number 126001947
2017-11-24T13:38:28.523365Z 0 [Warning] InnoDB: Skipping buffer pool dump/restore during wsrep recovery.
2017-11-24T13:38:28.523658Z 0 [Note] Plugin 'FEDERATED' is disabled.
2017-11-24T13:38:28.549281Z 0 [Note] WSREP: Recovered position: 601f3fef-c30e-11e7-81ab-f27164665fa9:8724
2017-11-24T13:38:28.549313Z 0 [Note] Binlog end
2017-11-24T13:38:28.554438Z 0 [Note] Shutting down plugin 'validate_password'
2017-11-24T13:38:28.554458Z 0 [Note] Shutting down plugin 'ngram'
2017-11-24T13:38:28.554460Z 0 [Note] Shutting down plugin 'partition'
2017-11-24T13:38:28.554462Z 0 [Note] Shutting down plugin 'BLACKHOLE'
2017-11-24T13:38:28.554464Z 0 [Note] Shutting down plugin 'ARCHIVE'
2017-11-24T13:38:28.554466Z 0 [Note] Shutting down plugin 'INNODB_SYS_VIRTUAL'
2017-11-24T13:38:28.554468Z 0 [Note] Shutting down plugin 'INNODB_CHANGED_PAGES'
2017-11-24T13:38:28.554469Z 0 [Note] Shutting down plugin 'INNODB_SYS_DATAFILES'
2017-11-24T13:38:28.554471Z 0 [Note] Shutting down plugin 'INNODB_SYS_TABLESPACES'
[...CUT...]
2017-11-24T13:38:28.554520Z 0 [Note] Shutting down plugin 'XTRADB_ZIP_DICT_COLS'
2017-11-24T13:38:28.554522Z 0 [Note] Shutting down plugin 'XTRADB_ZIP_DICT'
2017-11-24T13:38:28.554524Z 0 [Note] Shutting down plugin 'XTRADB_RSEG'
2017-11-24T13:38:28.554525Z 0 [Note] Shutting down plugin 'XTRADB_INTERNAL_HASH_TABLES'
2017-11-24T13:38:28.554527Z 0 [Note] Shutting down plugin 'XTRADB_READ_VIEW'
2017-11-24T13:38:28.554528Z 0 [Note] Shutting down plugin 'InnoDB'
2017-11-24T13:38:28.554636Z 0 [Note] InnoDB: FTS optimize thread exiting.
2017-11-24T13:38:28.554738Z 0 [Note] InnoDB: Starting shutdown...
2017-11-24T13:38:29.656074Z 0 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2017-11-24T13:38:30.568578Z 0 [Note] InnoDB: Shutdown completed; log sequence number 126001966
2017-11-24T13:38:30.570358Z 0 [Note] InnoDB: Removed temporary tablespace data file: "ibtmp1"
2017-11-24T13:38:30.570371Z 0 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA'
2017-11-24T13:38:30.570420Z 0 [Note] Shutting down plugin 'MEMORY'
2017-11-24T13:38:30.570424Z 0 [Note] Shutting down plugin 'CSV'
2017-11-24T13:38:30.570428Z 0 [Note] Shutting down plugin 'MyISAM'
2017-11-24T13:38:30.570457Z 0 [Note] Shutting down plugin 'MRG_MYISAM'
2017-11-24T13:38:30.570459Z 0 [Note] Shutting down plugin 'sha256_password'
2017-11-24T13:38:30.570483Z 0 [Note] Shutting down plugin 'mysql_native_password'
2017-11-24T13:38:30.570485Z 0 [Note] Shutting down plugin 'wsrep'
2017-11-24T13:38:30.570658Z 0 [Note] Shutting down plugin 'binlog'
2017-11-24T13:38:30.570889Z 0 [Note] /usr/sbin/mysqld: Shutdown complete

Relay log written hourly

Latest Forum Posts - November 24, 2017 - 12:09am
Hello, how is the graph "Relay log written hourly" calculated in the "MySQL Replication" dashboard?

PMM 1.5

Latest Forum Posts - November 23, 2017 - 10:56pm
https://github.com/grafana/grafana/issues/9764
lvl=eror msg="Failed to send alert notification email" logger=alerting.notifier.email error="unencrypted connection"

mysqld_exporter uses too much CPU

Latest Forum Posts - November 23, 2017 - 12:02am
Host operating system: output of uname -a

Linux malldbxx 2.6.32-696.3.2.el6.x86_64 #1 SMP Tue Jun 20 01:26:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

mysqld_exporter version: output of mysqld_exporter --version

mysqld_exporter, version 1.4.1 (branch: master, revision: c5b2f15)
build user:
build date:
go version: go1.8

MySQL server version

oracle mysql 5.7.16-log

mysqld_exporter command line flags

mysql:queries xx.abcd.com - YES pmm:***@unix(/tmp/mysql.sock) query_source=slowlog, query_examples=false
linux:metrics xx.abcd.com 42000 YES -
mysql:metrics xx.abcd.com 42002 YES pmm:***@unix(/tmp/mysql.sock) processlist=OFF, tablestats=OFF, userstats=OFF
What did you do that produced an error?

There is no error; it just consumes too much CPU.
What did you expect to see?

There is no error.
What did you see instead?

There is no error.

I just want to know why this daemon uses so much CPU.

this is "TOP" results in Linux.
as you can see, mysqld_exporter is using 11.7% CPU resource.

thanks.

-- "TOP" results

Tasks: 623 total, 2 running, 621 sleeping, 0 stopped, 0 zombie
Cpu(s): 3.7%us, 0.7%sy, 0.0%ni, 95.0%id, 0.4%wa, 0.0%hi, 0.1%si, 0.0%st
Mem: 65842428k total, 62781936k used, 3060492k free, 595428k buffers
Swap: 8191996k total, 620840k used, 7571156k free, 27095728k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
20264 mysql 20 0 42.0g 31g 9232 S 283.6 50.6 284106:34 mysqld
15821 root 20 0 101m 1196 808 R 97.8 0.0 0:01.00 zip
31437 root 20 0 2765m 15m 4220 S 11.7 0.0 0:51.92 mysqld_exporter
2690 root 20 0 0 0 0 S 2.0 0.0 652:54.67 jbd2/sdb1-8
15823 root 20 0 17468 1556 832 R 2.0 0.0 0:00.01 top
1 root 20 0 21396 1292 1076 S 0.0 0.0 0:16.12 init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.21 kthreadd
3 root RT 0 0 0 0 S 0.0 0.0 64:51.99 migration/0
4 root 20 0 0 0 0 S 0.0 0.0 1:52.61 ksoftirqd/0
5 root RT 0 0 0 0 S 0.0 0.0 0:00.00 stopper/0
6 root RT 0 0 0 0 S 0.0 0.0 1:21.87 watchdog/0
7 root RT 0 0 0 0 S 0.0 0.0 75:51.53 migration/1
8 root RT 0 0 0 0 S 0.0 0.0 0:00.00 stopper/1
9 root 20 0 0 0 0 S 0.0 0.0 0:49.07 ksoftirqd/1
10 root RT 0 0 0 0 S 0.0 0.0 1:09.22 watchdog/1
11 root RT 0 0 0 0 S 0.0 0.0 99:10.97 migration/2
12 root RT 0 0 0 0 S 0.0 0.0 0:00.00 stopper/2
13 root 20 0 0 0 0 S 0.0 0.0 0:37.20 ksoftirqd/2
14 root RT 0 0 0 0 S 0.0 0.0 0:44.47 watchdog/2
15 root RT 0 0 0 0 S 0.0 0.0 77:44.68 migration/3
16 root RT 0 0 0 0 S 0.0 0.0 0:00.00 stopper/3
17 root 20 0 0 0 0 S 0.0 0.0 0:31.30 ksoftirqd/3
18 root RT 0 0 0 0 S 0.0 0.0 0:45.33 watchdog/3
19 root RT 0 0 0 0 S 0.0 0.0 46:39.83 migration/4
20 root RT 0 0 0 0 S 0.0 0.0 0:00.00 stopper/4
21 root 20 0 0 0 0 S 0.0 0.0 0:27.29 ksoftirqd/4
22 root RT 0 0 0 0 S 0.0 0.0 0:44.57 watchdog/4
23 root RT 0 0 0 0 S 0.0 0.0 40:09.09 migration/5
24 root RT 0 0 0 0 S 0.0 0.0 0:00.00 stopper/5
25 root 20 0 0 0 0 S 0.0 0.0 0:23.82 ksoftirqd/5
26 root RT 0 0 0 0 S 0.0 0.0 0:39.15 watchdog/5
27 root RT 0 0 0 0 S 0.0 0.0 64:09.10 migration/6
28 root RT 0 0 0 0 S 0.0 0.0 0:00.00 stopper/6
29 root 20 0 0 0 0 S 0.0 0.0 0:21.45 ksoftirqd/6
30 root RT 0 0 0 0 S 0.0 0.0 0:39.43 watchdog/6
31 root RT 0 0 0 0 S 0.0 0.0 62:35.76 migration/7
32 root RT 0 0 0 0 S 0.0 0.0 0:00.00 stopper/7
33 root 20 0 0 0 0 S 0.0 0.0 0:20.04 ksoftirqd/7
34 root RT 0 0 0 0 S 0.0 0.0 0:39.19 watchdog/7
35 root RT 0 0 0 0 S 0.0 0.0 135:55.34 migration/8
36 root RT 0 0 0 0 S 0.0 0.0 0:00.00 stopper/8
37 root 20 0 0 0 0 S 0.0 0.0 3:20.93 ksoftirqd/8
38 root RT 0 0 0 0 S 0.0 0.0 3:21.69 watchdog/8



this is "/var/log/pmm-mysql-metrics.log"

-- /var/log/pmm-mysql-metrics.log

time="2017-11-22T22:12:36+09:00" level=info msg="Starting mysqld_exporter (version=1.4.1, branch=master, revision=c5b2f15a2b2b46eb53192c6aded039c90f406733) " source="mysqld_exporter.go:798"
time="2017-11-22T22:12:36+09:00" level=info msg="Build context (go=go1.8, user=, date=)" source="mysqld_exporter.go:799"
time="2017-11-22T22:12:36+09:00" level=info msg="HTTPS/TLS is enabled" source="mysqld_exporter.go:843"
time="2017-11-22T22:12:36+09:00" level=info msg="Listening on 106.xxx.xx.xxx:42002" source="mysqld_exporter.go:846"
time="2017-11-23T09:42:14+09:00" level=info msg="Starting mysqld_exporter (version=1.4.1, branch=master, revision=c5b2f15a2b2b46eb53192c6aded039c90f406733) " source="mysqld_exporter.go:798"
time="2017-11-23T09:42:14+09:00" level=info msg="Build context (go=go1.8, user=, date=)" source="mysqld_exporter.go:799"
time="2017-11-23T09:42:14+09:00" level=info msg="HTTPS/TLS is enabled" source="mysqld_exporter.go:843"
time="2017-11-23T09:42:14+09:00" level=info msg="Listening on 106.xxx.xx.xxx:42002" source="mysqld_exporter.go:846"
time="2017-11-23T09:42:36+09:00" level=info msg="Starting mysqld_exporter (version=1.4.1, branch=master, revision=c5b2f15a2b2b46eb53192c6aded039c90f406733) " source="mysqld_exporter.go:798"
time="2017-11-23T09:42:36+09:00" level=info msg="Build context (go=go1.8, user=, date=)" source="mysqld_exporter.go:799"
time="2017-11-23T09:42:36+09:00" level=info msg="HTTPS/TLS is enabled" source="mysqld_exporter.go:843"
time="2017-11-23T09:42:36+09:00" level=info msg="Listening on 106.xxx.xx.xxx:42002" source="mysqld_exporter.go:846"

About a PMM security problem

Latest Forum Posts - November 22, 2017 - 8:31pm
hi,


I've installed the latest PMM version,

but there are some security issues.


The pmm DB user has too many privileges in the database (including SELECT on all data / SUPER),

and the pmm DB user's password is in pmm.yml,

so it could be dangerous.


How can I solve this security problem?

Could we encrypt the pmm.yml file?


thanks,

MongoDB 3.6 Change Streams: A Nest Temperature and Fan Control Use Case

Latest MySQL Performance Blog posts - November 22, 2017 - 6:16pm

In this post, I’ll look at what MongoDB 3.6 change streams are, in a creative way. Just in time for the holidays!

What is a change stream?

Change streams in MongoDB provide a cross-platform unified API that works with sharding. They have an option for talking to secondaries, and even allow for security controls like restrictions and action controls.

How is this important? To demonstrate, I’ll walk through an example of using a smart oven with a Nest Thermostat to keep your kitchen from becoming a sauna while you bake a cake — without the need for you to moderate the room temperature yourself.

What does a change stream look like?

db.device_states.watch( { $match: { "documentKey.device": { $in : [ "jennair_oven", "nest_kitchen_thermo"] }, operationType: "insert" } });

What can we watch?

We can use change streams to watch these actions:

  • Insert
  • Delete
  • Replace
  • Update
  • Invalidate
Why not just use the Oplog?

Any change presented in the oplog could be rolled back, as it's only single-node durable. Change streams need at least one other node to receive the change. In general, this represents a majority for a typical three-node replica set.

In addition, change streams are resumable. Having a collector job that survives an election is easy as pie, as by default it will automatically retry once. However, you can also record the last seen token to know how to resume where it left off.
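
To make that resumability concrete, here is a minimal sketch of a resumable change-stream consumer in Python with PyMongo (assuming PyMongo 3.6+ against a MongoDB 3.6 replica set; the connection string, collection name, and handler are illustrative, not from the post):

# Minimal sketch of a resumable change-stream consumer.
from pymongo import MongoClient
from pymongo.errors import PyMongoError

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")  # assumed URI
coll = client.example.device_states

resume_token = None  # in practice, load the last saved token from durable storage

def handle(change):
    # application-specific processing would go here
    print(change["operationType"], change.get("fullDocument"))

while True:
    try:
        # resume_after=None starts fresh; otherwise we pick up where we left off
        with coll.watch(resume_after=resume_token) as stream:
            for change in stream:
                resume_token = change["_id"]  # the resume token; persist it
                handle(change)
    except PyMongoError:
        # after an election or network hiccup, loop and resume from the token
        continue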

Finally, since sharding is supported via the new cluster clock (the wc field in the oplog), you can trust the order of operations you get, even across shards. This was problematic both with the old oplog format and when managing connections manually.

In short, this is the logical evolution of oplog scraping, and fulfills a long-standing request: the ability to tail the oplog via mongos rather than per replica set.

So what’s the downside?

It’s estimated that after 1000 streams you will start to see very measurable performance drops. Why there is not a global change stream option to avoid having so many cursors floating around is not clear. I think it’s something that should be looked at for future versions of this feature. Up to now, many use cases of mongo, specifically in the multi-tenant world, might have > 1000 namespaces on a system. This would make the performance drop problematic.

What’s in a change stream anyhow?

The first thing to understand is that while some drivers offer db.collection.watch(XXXX) as a function you can use, this is just an alias for an actual aggregation pipeline stage, $changeStream. This means you could mix this with much more powerful pipelines, though you should be careful. Things like projection could break the ability to resume if the token is not passed on accurately.
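
As a sketch of that mixing (same assumed PyMongo setup as above, with an illustrative filter on the device field), note that the extra $match stage is simply prepended pipeline logic; projecting away _id, the resume token, is what would break resumability:

# Sketch: supplying extra pipeline stages to watch().
pipeline = [
    {"$match": {"operationType": "insert",
                "fullDocument.device": {"$in": ["jennair_oven", "nest_kitchen_thermo"]}}},
]
with coll.watch(pipeline) as stream:
    for change in stream:
        print(change["fullDocument"])  # keep "_id" intact so the stream can resume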

So a change stream:

  1. Is a view of an oplog entry for a change. This sometimes means you know the change contents, and sometimes you don't (for example, in a delete).
  2. Is an explicit API for drivers and code, but also ensures you can get data via Mongos rather than having to connect directly to each node.
  3. Is scalable, resumable, and well ordered – even when sharded!
  4. Harnesses the power of aggregations.
  5. Provides superior ACL support via roles and privileges.

Back to a real-world example. Let's assume you have a Nest unit in your house (because none of us techies have those, right?). Let's also assume you're fancy and have the Jenn-Air oven, which can talk to the Nest. If you're familiar with the Nest, you might know that its API lets you enable the Jenn-Air fan or set its oven temperature remotely. Sure, the oven has a fan schedule to prevent it running at night, but its ability to work with other appliances is a bit more limited.

So for our example, assume you want the temperature in the kitchen to drop by 15 degrees F whenever the oven is on, and that the fan should run even if it’s outside its standard time window.

Hopefully, you can see how such an app, powered by MongoDB, could be useful. However, there are a few more assumptions, which we have already set up: a collection of "device_states" records the original state of the temperature setting in the Nest, as well as the oven's status, so that we know how to reset the Nest once cooking is done.

As we know we have the state changes for the devices coming in on a regular basis, we could simply say:

db.device_states.watch({ $match: { "documentKey.device": { $in : [ "jennair_oven", "nest_kitchen_thermo"] }, operationType: "insert" } });

This will watch for any new state documents inserted for either of these devices.

Now let’s assume anytime something comes in for the Nest, we are updating  db.nest_settings with that document. However, in this case, when the oven turns on we update a secondary document with an _id of “override” to indicate this is the last known nest_setting before the oven enabling. This means that we can revert to it later.

This would be accomplished via something like…

Change Event document

{  _id: <resume_token>, operationType: 'insert', ns: {db:'example',coll:"device_states"}, documentKey: { device:'nest_kitchen_thermo'}, fullDocument: {  _id : ObjectId(), device: 'nest_kitchen_thermo', temp: 68 } }

So you could easily run the following from your code:

db.nest_settings.update({_id:"current"},{_id:"current",data: event.fullDocument})

Now the "current" document is set to the last check-in from the Nest API.

That was simple enough, but now we can do even more cool things…

Change Event document

{  _id: <resume_token>, operationType: 'insert', ns: {db:'example',coll:"device_states"}, documentKey: { device:'jennair_oven'}, fullDocument: {  _id : ObjectId(), device: 'jennair_oven', temp: 350, power: 1, state: "warming" } }

This next segment is more pseudocode:

var event = watcherObj.next();
var device = event.documentKey.device;
var data = event.fullDocument;
if (device == "jennair_oven") {
    override_enabled = db.nest_settings.count({_id: "override"});
    if (data.power && !override_enabled) {
        var doc = db.nest_settings.findOne({_id: "current"});
        doc._id = "override";
        doc.data.temp += -15;
        db.nest_settings.insert(doc);
    }
    if (data.power) {
        override_doc = db.nest_settings.findOne({_id: "override"});
        NestObj.thermostat.updateTemp(override_doc.data.temp);
        NestObj.thermostat.enableFan(15); // Enable for 15 minutes
    } else {
        override_doc = db.nest_settings.findOne({_id: "override"});
        override_doc.data.temp += 15;
        NestObj.thermostat.updateTemp(override_doc.data.temp);
        NestObj.thermostat.enableFan(0); // Disable the fan
        db.nest_settings.remove({_id: "override"});
    }
}

This code is doing a good deal, but it’s pretty basic at the same time:

  1. If the oven is on, but there is no override document, create one from the most recent thermostat settings.
  2. Decrease the current temp setting by 15, and then insert it with the “override” _id value
  3. If the power is set to on
    (a) read in the current override document
    (b) set the thermostat to that setting
    (c) enable the fan for 15 minutes
  4. If the power is now off
    (a) read in the current override document
    (b) set the thermostat to 15 degrees higher
    (c) set the fan to disabled

Assuming you are constantly tailing the watch cursor, this means you will restore the thermostat and disable the fan as soon as the oven turns off.

Hopefully, this blog has helped explain how change streams work by using a real-world logical application to keep your kitchen from becoming a sauna while making some cake… and then eating it!

Sudoku Recursive Common Table Expression Solver

Latest MySQL Performance Blog posts - November 22, 2017 - 11:17am

In this blog post, we’ll look at a solving Sudoku using MySQL 8.0 recursive common table expression.

Vadim was recently having a little Saturday morning fun solving Sudoku using MySQL 8. The whole idea comes from SQLite, where Richard Hipp has come up with some outlandish recursive query examples for the WITH clause.

The SQLite query:

WITH RECURSIVE
  input(sud) AS (
    VALUES('53..7....6..195....98....6.8...6...34..8.3..17...2...6.6....28....419..5....8..79')
  ),
  digits(z, lp) AS (
    VALUES('1', 1)
    UNION ALL SELECT
    CAST(lp+1 AS TEXT), lp+1 FROM digits WHERE lp<9
  ),
  x(s, ind) AS (
    SELECT sud, instr(sud, '.') FROM input
    UNION ALL
    SELECT
      substr(s, 1, ind-1) || z || substr(s, ind+1),
      instr( substr(s, 1, ind-1) || z || substr(s, ind+1), '.' )
    FROM x, digits AS z
    WHERE ind>0
      AND NOT EXISTS (
            SELECT 1
              FROM digits AS lp
             WHERE z.z = substr(s, ((ind-1)/9)*9 + lp, 1)
                OR z.z = substr(s, ((ind-1)%9) + (lp-1)*9 + 1, 1)
                OR z.z = substr(s, (((ind-1)/3) % 3) * 3
                        + ((ind-1)/27) * 27 + lp
                        + ((lp-1) / 3) * 6, 1)
         )
  )
SELECT s FROM x WHERE ind=0;

Which should provide the answer: 534678912672195348198342567859761423426853791713924856961537284287419635345286179.
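
If you want to convince yourself that string really is a solved grid, here is a quick Python sanity check (a hypothetical helper, not part of the original post):

# Checks that an 81-character result string is a valid completed Sudoku grid.
def is_valid_sudoku(s):
    assert len(s) == 81 and s.isdigit()
    grid = [[int(ch) for ch in s[r * 9:(r + 1) * 9]] for r in range(9)]
    rows = grid
    cols = [[grid[r][c] for r in range(9)] for c in range(9)]
    boxes = [[grid[br + r][bc + c] for r in range(3) for c in range(3)]
             for br in (0, 3, 6) for bc in (0, 3, 6)]
    # every row, column, and 3x3 box must contain the digits 1..9 exactly once
    return all(sorted(unit) == list(range(1, 10)) for unit in rows + cols + boxes)

print(is_valid_sudoku(
    "534678912672195348198342567859761423426853791"
    "713924856961537284287419635345286179"))  # should print True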

The modified query to run on MySQL 8.0.3 release candidate and MariaDB Server 10.2.9 stable GA courtesy of Vadim:

WITH RECURSIVE
  input(sud) AS (
    SELECT '53..7....6..195....98....6.8...6...34..8.3..17...2...6.6....28....419..5....8..79'
  ),
  digits(z, lp) AS (
    SELECT '1', 1
    UNION ALL SELECT
    CAST(lp+1 AS CHAR), lp+1 FROM digits WHERE lp<9
  ),
  x(s, ind) AS (
    SELECT sud, instr(sud, '.') FROM input
    UNION ALL
    SELECT
      concat(substr(s, 1, ind-1), z, substr(s, ind+1)),
      instr( concat(substr(s, 1, ind-1), z, substr(s, ind+1)), '.' )
    FROM x, digits AS z
    WHERE ind>0
      AND NOT EXISTS (
            SELECT 1
              FROM digits AS lp
             WHERE z.z = substr(s, ((ind-1) DIV 9)*9 + lp, 1)
                OR z.z = substr(s, ((ind-1)%9) + (lp-1)*9 + 1, 1)
                OR z.z = substr(s, (((ind-1) DIV 3) % 3) * 3
                        + ((ind-1) DIV 27) * 27 + lp
                        + ((lp-1) DIV 3) * 6, 1)
         )
  )
SELECT s FROM x WHERE ind=0;

The test environment for the setup is a standard Linode 1024 instance, with one CPU core and 1GB of RAM. The base OS was Ubuntu 17.04. MySQL and MariaDB Server were installed via their respective tarballs. No configuration is done beyond a basic out-of-the-box install inside of the MySQL sandbox. This is similar for sqlite3. Remember to run “.timer on” for sqlite3.

Note that initially they were done on separate instances, but because of the variance you get in cloud instances, it was decided that it would be better to run on the same instance using the MySQL Sandbox.

  • MySQL 8 first run time: 0.16s. 5 runs: 0.16, 0.16, 0.17, 0.16, 0.16
  • MariaDB Server 10.2 first run time: 0.20s. 5 runs: 0.22, 0.22, 0.21, 0.21, 0.20
  • MariaDB Server 10.3.2 first run time: 0.206s. 5 runs: 0.237, 0.199, 0.197, 0.198, 0.192
  • SQLite3 first run time: Run Time: real 0.328 user 0.323333 sys 0.003333 / Run Time: real 0.334 user 0.333333 sys 0.000000

Trying a more complex Sudoku routine, "..41..2.3........12.....8..82.6.43.....8.9.....67.2.48..5.....64........3.7..69.." to produce the result "574198263638425791219367854821654379743819625956732148195273486462981537387546912", the results are:

  • MySQL 8 first run time: 4.87s. 5 runs: 5.43, 5.35, 5.10, 5.19, 5.05
  • MariaDB Server 10.2 first run time: 6.65s. 5 runs: 7.03, 6.57, 6.61, 6.59, 7.12
  • MariaDB Server 10.3.2 first run time: 6.121s. 5 runs: 5.701, 6.043, 6.043, 5.849, 6.199
  • SQLite3 first run time: Run Time: real 10.105 user 10.099999 sys 0.000000 / Run Time: real 11.305 user 11.293333 sys 0.000000

Conclusions from this fun little exercise? SQL, even though it's a standard, is not portable between databases. Thankfully, MySQL and MariaDB are syntax-compatible in this case! MySQL and MariaDB Server are both faster than sqlite3 when running a recursive CTE. It would seem that the MySQL 8.0.3 release candidate is faster at solving these Sudoku routines than the MariaDB Server 10.2 stable GA release. It also seems that MariaDB Server 10.3.2 alpha is marginally quicker than MariaDB Server 10.2.

Kudos to Team MariaDB for getting recursive common table expression support first in the MySQL ecosystem, and kudos to Team MySQL for making it fast!

Open table locks incrementing on one cluster node, requires mysqld cycle to correct

Latest Forum Posts - November 21, 2017 - 3:51pm
Hi there,

We've been having trouble with a 3 node galera cluster, where seemingly out of nowhere one of the nodes will decide it doesn't like a query and hang indefinitely. It will lock a table and hold that table. At this point, all tables start to lock and all queries end up creating new table locks. The process list will simply accumulate processes, existing processes will increment their time, and the open tables will just keep incrementing. Killing the process (or all processes) does not help. The only way to recover seems to be to cycle mysqld.

Here's an example of one hung query from SHOW PROCESSLIST. I'll spare the full list, unless you think it would help.

| 8 | xxxx_system_user | localhost | xxxx_system | Killed | 49634 | updating | UPDATE leases SET version=version+1,holder_id=1511216099643476245,expires=NOW() + INTERVAL 6930000 M | 0 | 0 |

Here's a related row from SHOW OPEN TABLES.

| xxxx_system | leases | 3 | 0 |

And here's a bit from the SHOW ENGINE INNODB STATUS that looks suspect:

---TRANSACTION 421965858011080, not started sleeping before entering InnoDB
mysql tables in use 1, locked 1
0 lock struct(s), heap size 1136, 0 row lock(s)
---TRANSACTION 421965858013496, not started sleeping before entering InnoDB
mysql tables in use 1, locked 1
0 lock struct(s), heap size 1136, 0 row lock(s)
---TRANSACTION 421965858009872, not started sleeping before entering InnoDB
mysql tables in use 1, locked 1
0 lock struct(s), heap size 1136, 0 row lock(s)
---TRANSACTION 421965858008664, not started sleeping before entering InnoDB
mysql tables in use 1, locked 1

We're using pxc_strict_mode=ENFORCING and we're using a read/write splitting proxy (MaxScale).

We have tried everything we know of, to both avoid the situation, as well as to recover from it without a restart.

Please let me know if any other information would be helpful in this case.

Cheers,
Clinton

Percona Toolkit 3.0.5 is Now Available

Latest MySQL Performance Blog posts - November 21, 2017 - 10:32am

Percona announces the release of Percona Toolkit 3.0.5 on November 21, 2017.

Percona Toolkit is a collection of advanced command-line tools that perform a variety of MySQL and MongoDB server and system tasks too difficult or complex for DBAs to perform manually. Percona Toolkit, like all Percona software, is free and open source.

You can download Percona Toolkit packages from the web site or install them from official repositories.

This release includes the following changes:

New Features:
  • PT-216: pt-mongodb-query-digest now supports MongoDB versions lower than 3.2; previously its output for those versions was incorrect.
  • PT-182: The pt-summary, pt-mysql-summary, and pt-mongodb-summary commands provide output in JSON format.
  • PT-152: pt-mysql-summary shows the output of the SHOW SLAVE HOSTS command.
  • PT-139: pt-table-sync supports replication channels (requires MySQL version 5.7.6 or higher)
  • PMM-1590: The MongoDB Profiler for Percona Monitoring and Management and Percona Toolkit has been improved.
Bug fixes:
  • PT-211: pt-mext would fail if the Rsa_public_key variable was empty.
  • PT-212: pt-mongodb-query-digest --version produced incorrect values.
  • PT-202: pt-online-schema-change incorrectly processed virtual columns.
  • PT-200: pt-online-schema-change reported an error when the name of an index contained UNIQUE as the prefix or suffix.
  • PT-199: pt-table-checksum did not detect differences on a system with ROW-based replication active.
  • PT-196: pt-online-schema-change --max-load paused if a status variable was passed 0 as the value.
  • PT-193: pt-table-checksum reported a misleading error if a column comment contained an apostrophe. For more information, see #1708749.
  • PT-187: In some cases, pt-table-checksum did not report that the same table contained different values on the master and slave.
  • PT-186: pt-online-schema-change --alter could fail if field names contained upper case characters. For more information, see #1705998.
  • PT-183: In some cases pt-mongodb-query-digest could not connect to a database using authentication.
  • PT-167: In some cases, pt-kill could ignore the value of the --busy-time parameter. For more information, see #1016272.
  • PT-161: When run with --skip-check-slave-lag, pt-table-checksum could fail in some cases.

Need for a two-step pt-online-schema-change

Latest Forum Posts - November 21, 2017 - 6:11am
My use case is the following: I have two Java servers. When I want to deploy a new Java application, I need to isolate one of them, deploy the new application, apply MySQL schema changes, stop the first server, and activate the second server. Under this scenario, server1 is still working against the old schema for some time.

What I would like to do is:
1. deploy new application in server2
2. create a new table
3. copy data into new table
4. keep new table in sync with old table (just old -> new, not the contrary)
5. block server1
6. recreate foreign keys pointing to new table
7. rename new table to old table
8. activate server2

It seems I need pt-online-schema-change, but in two steps. The first step comprises points 2-4, the second step points 6-7.
From 5 to 8 is the downtime period.

First question: do points 6 and 7 take a long time compared to the previous ones?
Second question: can I achieve this with pt-online-schema-change?

Adding additional QAN RDS instances to PMM Server

Latest Forum Posts - November 20, 2017 - 5:10pm
pmm-admin 1.4.1
RDS mysql 5.6
The mysql:queries I set up about 3 months ago have this:
query_source=perfschema, query_examples=true
Adding new RDS instances with the latest version of PMM
The RDS mysql 5.7 (same P_S settings)
query_examples=false
new RDS mysql 5.6 (same P_S settings)
query_examples=false

When I list these, I see that I had to add --query-source perfschema, whereas in earlier versions of PMM I did not need to add this.
Hints? I feel like it must be some P_S setting I set in the other RDS that works....

InnoDB Page Compression: the Good, the Bad and the Ugly

Latest MySQL Performance Blog posts - November 20, 2017 - 10:54am

In this blog post, we’ll look at some of the facets of InnoDB page compression.

Somebody recently asked me about the best way to handle JSON data compression in MySQL. I took a quick look at InnoDB page compression and wanted to share my findings.

There is also some great material on this topic that was prepared and presented by Yura Sorokin at Percona Live Europe 2017: https://www.percona.com/live/e17/sessions/percona-xtradb-compressed-columns-with-dictionaries-an-alternative-to-innodb-table-compression. Yura also implemented Compressed Columns in Percona Server.

First, the good part.

InnoDB page compression is actually really easy to use and provides a decent compression ratio. To use it, I just ran CREATE TABLE commententry (...) COMPRESSION="zlib"; – and that’s all. By the way, for my experiment I used the subset of Reddit comments stored in JSON (described here: Big Dataset: All Reddit Comments – Analyzing with ClickHouse).

This method got me a compressed table of 3.9GB. Compare this to 8.4GB for an uncompressed table and it’s about a 2.15x compression ratio.

Now, the bad part.

As InnoDB page compression uses “hole punching,” the standard Linux utils do not always properly support files created this way. In fact, to see the size “3.9GB” I had to use du --block-size=1 tablespace_name.ibd , as the standard ls -l tablespace_name.ibd shows the wrong size (8.4GB). There is a similar limitation on copying files. The standard way cp old_file new_file may not always work, and to be sure I had to use cp --sparse=always old_file new_file.
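
For illustration, the same apparent-vs-allocated check can be done from Python with os.stat, where st_size is the apparent size that ls -l reports and st_blocks * 512 is the allocated size that du reports on Linux (the tablespace path below is hypothetical):

# Compare apparent vs. allocated size of a hole-punched tablespace file.
import os

st = os.stat("/var/lib/mysql/test/commententry.ibd")  # hypothetical path
print(f"apparent size : {st.st_size / 2**30:.1f} GiB")           # what ls -l shows
print(f"allocated size: {st.st_blocks * 512 / 2**30:.1f} GiB")   # what du shows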

Speaking about copying, here’s the ugly part.

The actual time to copy the sparse file was really bad.

On a fairly fast device (a Samsung SM863), copying the sparse file mentioned above in its compressed size of 3.9GB took 52 minutes! That’s shocking, so let me repeat it again: 52 minutes to copy a 3.9GB file on an enterprise SATA SSD.

By comparison, copying the regular 8.4GB file takes 9 seconds! Compare 9 seconds and 52 minutes.

To be fair, the NVMe device (Intel® SSD DC D3600) handles sparse files much better. It took only 12 seconds to copy the same sparse file on this device.

Having considered all this, it is hard to recommend that you use InnoDB page compression for serious production. Well, unless you power your database servers with NVMe storage.

For JSON data, the Compressed Columns feature in Percona Server for MySQL should work quite well, using a compression dictionary to store JSON keys – give it a try!

Help! Node won't join Cluster after Restart.

Latest Forum Posts - November 20, 2017 - 4:28am
Hi Folks!

I had my new XtraDB Cluster (5.7.19, Debian 9) up and running for a while, and today I decided to restart one of the three nodes. That was apparently not a good idea, because now it's not willing to rejoin the cluster anymore. I really tried everything I could, but got the following error over and over again:

"mysql[119540]: WSREP: Failed to recover position:"

Here's my /etc/mysql/percona-xtradb-cluster.conf.d/mysqld.cnf, which works flawlessly on the other 2 nodes:

wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_cluster_name=pxc-cluster
wsrep_cluster_address=gcomm://xx.xx.107.133,xx.xx.107.134,xx.xx.107.135
wsrep_node_name=myname
wsrep_node_address=xx.xx.107.134
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=sstuser:thepassword
pxc_strict_mode=ENFORCING
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2

Also, starting the node without making any changes to mysqld.cnf and filling in the wsrep* variables manually (wsrep_cluster_address, global wsrep_provider, wsrep_node_address, wsrep_sst_auth) did not help.

My boss will kill me, please help!

With best regards,
DBA Jay

Installing MySQL Enterprise 5.7 server with custom paths

Latest Forum Posts - November 20, 2017 - 3:14am
Just wondering how to install the MySQL Enterprise 5.7 version using the RPM install method with custom paths.

1. The default installation looks like below. When I install using RPM, we don't have full control over customizing the paths. I understand it involves effort to customize them after the install.

root 3486 1 0 15:27 pts/0 00:00:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/node01.pid
mysql 3573 3486 1 15:27 pts/0 00:00:00 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=node01.err --pid-file=/var/lib/mysql/node01.pid

2. If we have to modify all the paths manually, what about the service file in /etc/init.d? I feel customizing the paths affects the usage of the mysql service file.

Question: Is there any way to install the enterprise version of MySQL using RPMs with custom paths? I think that would reduce the effort of post-install customization, at least at one level.

Regards,
Venu

Cannot install from tarball packages

Latest Forum Posts - November 18, 2017 - 10:58pm
Hello, due to firewall restrictions I cannot install using the apt repo method, so I downloaded the tar bundle, but I cannot install due to the following errors:

Code:
sudo apt install ./percona-xtradb-cluster-full-57_5.7.19-29.22-3.xenial_amd64.deb
Reading package lists... Done
Building dependency tree
Reading state information... Done
Note, selecting 'percona-xtradb-cluster-full-57' instead of './percona-xtradb-cluster-full-57_5.7.19-29.22-3.xenial_amd64.deb'
Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
 percona-xtradb-cluster-full-57 : Depends: percona-xtradb-cluster-server-5.7 (>= 5.7.19-29.22-3.xenial) but it is not installable
                                  Depends: percona-xtradb-cluster-client-5.7 (>= 5.7.19-29.22-3.xenial) but it is not installable
                                  Depends: percona-xtradb-cluster-test-5.7 (>= 5.7.19-29.22-3.xenial) but it is not installable
                                  Depends: percona-xtradb-cluster-5.7-dbg (>= 5.7.19-29.22-3.xenial) but it is not installable
                                  Depends: percona-xtradb-cluster-server-debug-5.7 but it is not installable
                                  Depends: percona-xtradb-cluster-garbd-debug-5.7 (>= 5.7.19-29.22-3.xenial) but it is not installable
E: Unable to correct problems, you have held broken packages.

Not sure how to proceed.

Thanks.

This Week in Data with Colin Charles 15: Percona Live 2018 Call for Papers and Best Practices for Observability

Latest MySQL Performance Blog posts - November 17, 2017 - 6:10am

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

So we have announced the call for presentations for Percona Live Santa Clara 2018. Please send your submissions in!

As you probably already know, we have been expanding the content to be more than just MySQL and MongoDB. It really does include more open source databases: the whole of 2016 had a “time series” theme to it, and we of course love to have more PostgreSQL content (there have been tracks dedicated to PostgreSQL for some time now). I found this one comment interesting recently, from John Arundel, “If you’re going to learn one database really well, make it Postgres.” I have been noticing newer developers jump on the PostgreSQL bandwagon. I presume much of this blog’s readership is still MySQL centric, but it will be interesting to see where this goes.

Charity Majors recently wrote Best Practices for Observability. In addition, her book alongside Laine Campbell is now available for purchase on Kindle: Database Reliability Engineering: Designing and Operating Resilient Database Systems. Highly recommended purchase. You can also get it on O’Reilly Safari (free month with those codes for Percona Live Europe Dublin attendees).

Are you using Google Cloud Spanner? It now has multi-region support, and has an updated SLA for 99.999% uptime. That’s basically no more than 5.25 minutes of downtime per year!

Releases
  • orchestrator 3.0.3 – auto-provisioning Raft nodes, native Consul support, SQLite or MySQL backed setups, web UI improvements and more. Solid release.
  • MongoDB 3.6 – you can download this soon.
  • MariaDB 10.1.29 – important changes to Mariabackup, InnoDB/XtraDB, and some security fixes
  • Apache Kylin 2.2 – OLAP for Hadoop, originally developed at eBay, has enhanced ACL support amongst other improvements.
  • Cassandra on Azure Cosmos DB
Link List

Upcoming Appearances
  • ACMUG 2017 gathering – Beijing, China, December 9-10 2017 – it was very exciting being there in 2016, and I can only imagine it’s going to be bigger and better for 2017, since it is now two days long!
Feedback

I look forward to feedback/tips via e-mail at colin.charles@percona.com or on Twitter @bytebot.

SQL_MODE=&amp;quot;NO_ENGINE_SUBSTITUTION&amp;quot; replication issue - 5.7.19 on Centos 7

Latest Forum Posts - November 17, 2017 - 2:16am
Hi,

We are in the process of attempting to upgrade our production DB from 5.6 to 5.7, and are at the stage of adding in a new slave so that it can form a 3 slave 1 master setup, which will ultimately be promoted to be the new Master once the other 2 slaves have also been upgraded.

The new slave is running on CentOS 7.4; the data volume is a 4-disk RAID10 SSD LVM.

So, Percona 5.7 installed, full backup restored, mysql upgrade run.

Slave setup and started.

Both Master (5.6) and Slave (5.7) have SQL_MODE="NO_ENGINE_SUBSTITUTION" set in /etc/my.cnf, in the correct section. Checked in both mysqld.log and global variables.

However, despite this, once the slave is started and begins to catch up, it soon runs into error 1048s:

2017-11-17T09:30:19.772355Z 13 [ERROR] Slave SQL for channel '': Error 'Column 'date' cannot be null' on query. Default database: 'my_database'. Query: 'INSERT INTO user_favourites (user_id, favourite_id, date) VALUES ('3504328', 2316904, NULL)', Error_code: 1048
2017-11-17T09:30:19.772377Z 13 [Warning] Slave: Column 'date' cannot be null Error_code: 1048

From what we have read, this is an expected feature of the new stricter sql_mode setting; however, our understanding was that setting NO_ENGINE_SUBSTITUTION was the correct temporary workaround.

As this is a new slave there is no pressing issue; however, this doesn't happen on our test setup (CentOS 6.9 master and slave, running the same version, 5.7.19) and currently prevents further progress.

Any insights would be greatly appreciated.

Thanks,

Mike

New Cluster: nodes 2 and 3 don't join but rather start their own clusters

Latest Forum Posts - November 16, 2017 - 4:07am
Followed the instructions exactly by using copy/paste, changing nothing but the IP addresses involved.

However, when I start the second node using the instructions above, it comes up not as a member of the cluster but isolated.

first host:
+--------------------------+--------------------------------------+
| Variable_name | Value |
+--------------------------+--------------------------------------+
| wsrep_cluster_conf_id | 1 |
| wsrep_cluster_size | 1 |
| wsrep_cluster_state_uuid | f28ae440-cabb-11e7-976c-17bca5fe1b6b |
| wsrep_cluster_status | Primary |
+--------------------------+--------------------------------------+

second host:
+--------------------------+--------------------------------------+
| Variable_name | Value |
+--------------------------+--------------------------------------+
| wsrep_cluster_conf_id | 1 |
| wsrep_cluster_size | 1 |
| wsrep_cluster_state_uuid | 9df5f23e-cabc-11e7-b6e2-030d02b0e16b |
| wsrep_cluster_status | Primary |
+--------------------------+--------------------------------------+

Environment: Ubuntu 16.04, fresh install with only apt-get update && apt-get upgrade -y done prior to installing the cluster. The ufw firewall is disabled.

config from host 1:
# Template my.cnf for PXC
# Edit to your requirements.
[mysqld]
server-id=1
datadir=/var/lib/mysql
socket=/var/run/mysqld/mysqld.sock
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
log-bin
log_slave_updates
expire_logs_days=7

# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0

wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_cluster_name=pxc-cluster
wsrep_cluster_address=gcomm://192.168.175.141,192.168.168.30
wsrep_node_name=pxc1
wsrep_node_address=192.168.175.141
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=sstuser:passw0rd
pxc_strict_mode=ENFORCING
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2

config from host 2:
# Template my.cnf for PXC
# Edit to your requirements.
[mysqld]
server-id=2
datadir=/var/lib/mysql
socket=/var/run/mysqld/mysqld.sock
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
log-bin
log_slave_updates
expire_logs_days=7

# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0

wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_cluster_name=pxc-cluster
wsrep_cluster_address=gcomm://192.168.175.141,192.168.168.30
wsrep_node_name=pxc2
wsrep_node_address=192.168.168.30
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=sstuser:passw0rd
pxc_strict_mode=ENFORCING
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2

The sst user was created on host 1:
+---------------+-----------+
| user | host |
+---------------+-----------+
| mysql.session | localhost |
| mysql.sys | localhost |
| root | localhost |
| sstuser | localhost |
+---------------+-----------+
+----------------------------------------------------------------------------------------+
| Grants for sstuser@localhost |
+----------------------------------------------------------------------------------------+
| GRANT RELOAD, PROCESS, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost' |
+----------------------------------------------------------------------------------------+

Logs are posted to https://pastebin.com/XLUjX7HF

This seems to be similar to https://www.percona.com/forums/quest...fter-bootstrap and https://www.percona.com/forums/quest...ster-size-of-1 -- neither of which has an answer.

Percona MySQL Server working but filling the messages log with errors

Latest Forum Posts - November 15, 2017 - 8:25am
I have Percona MySQL server 5.7 running under CentOS 7 and although mysql is running without any noticeable errors, it is filling my /var/log/messages with the following every ten seconds:

Code:
Nov 15 10:07:27 server systemd: mysqld.service holdoff time over, scheduling restart.
Nov 15 10:07:27 server systemd: Starting MySQL Percona Server...
Nov 15 10:07:27 server mysqld_safe: 171115 10:07:27 mysqld_safe Adding '/usr/lib64/libjemalloc.so.1' to LD_PRELOAD for mysqld
Nov 15 10:07:27 server mysqld_safe: 171115 10:07:27 mysqld_safe Logging to '/var/lib/mysql/server.local.err'.
Nov 15 10:07:27 server mysqld_safe: 171115 10:07:27 mysqld_safe A mysqld process already exists
Nov 15 10:07:27 server systemd: mysqld.service: main process exited, code=exited, status=1/FAILURE
Nov 15 10:07:28 server systemd: Failed to start MySQL Percona Server.
Nov 15 10:07:28 server systemd: Unit mysqld.service entered failed state.
Nov 15 10:07:28 server systemd: Triggering OnFailure= dependencies of mysqld.service.
Nov 15 10:07:28 server systemd: mysqld.service failed.
Nov 15 10:07:28 server systemd: Started Service Status Monitor.
Nov 15 10:07:28 server systemd: Starting Service Status Monitor...
Even though it states in there that it failed to start the Percona server, it appears to be working, as my website is still doing MySQL queries. I know very little about MySQL admin and was hoping a Percona MySQL guru could shed some light on what is happening. Thanks!
