Debugging MariaDB Galera Cluster SST Problems – A Tale of a Funny Experience

Recently, I had to work on an emergency for a customer who was having a problem restarting a MariaDB Galera Cluster. After a failure in the cluster, they decided to restart the cluster entirely, following the correct procedure: bootstrapping the first node and then adding the rest of the members one by one. Everything went fine until one of the nodes rejected the request to join the cluster.

Given this problem, the customer asked us to help with the problematic node because none of the tests they ran worked, and the same symptom repeated over and over: the SST started, copied a few gigabytes of data, and then apparently hung while the node remained out of the cluster.

Identifying the issue…

Once on board with the issue, I first checked that the cluster was attempting an SST. Given that the whole dataset was about 31GB, I decided to go straight to a clean solution: wipe the whole datadir and start afresh. No luck at all; the symptom was exactly the same no matter what I tried.

After reviewing the logs I noticed a few strange things. In the joiner:

In the donor (the output has been obfuscated to avoid sharing private information, and the timestamps deliberately don't match):

So the SST started correctly and then failed. I tried forcing different donors, checked firewall rules, and so on. Nothing.

Additionally, I noticed that the process was starting over and over: while monitoring, the .sst folder grew to a certain size (around 7GB) and then started again from scratch. The logs kept showing the same messages; the init script failed with an error, but the process kept running until I executed service mysql stop or sent kill -9 to all the processes. It was getting stranger by the minute.

At this point I was totally lost, scratching my head looking for solutions. Stranger still, a manual SST using netcat worked perfectly! So it was definitely a problem with the init script. The systemd journal was not providing any insight…
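For reference, a manual transfer like the one I tried can be sketched roughly as follows. This assumes Percona XtraBackup (innobackupex/xbstream) and netcat are available on both nodes; the host name, port, and paths are illustrative, not the customer's actual values:

```shell
# On the joiner: listen on a port and unpack the incoming stream into the
# (empty) datadir. Some netcat builds use "nc -l 4444" without -p.
nc -l -p 4444 | xbstream -x -C /var/lib/mysql

# On the donor: stream a backup of the running server straight to the joiner
# ("joiner_host" is a placeholder for the joiner's address).
innobackupex --stream=xbstream /tmp | nc joiner_host 4444
```

Since a transfer done this way completed without issue, the copy path itself (network, disks, backup tooling) was clearly healthy.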

And then…

MariaDB Cluster dies in the SST process after 90 seconds

Suddenly I noticed that the failure was happening roughly 90 seconds after the start. A bit of more specific googling later, I found this page:
https://mariadb.com/kb/en/library/systemd/#ssts-and-systemd which explained my problem precisely.

The MariaDB init script changed its timeout from 900 seconds to 90, while MySQL Community and Percona Server keep this value at 15 minutes. This change also seems to have caused some major issues with nodes crashing, as documented in MDEV-15607; the bug is reported as fixed, but we can still see timeout problems.

I observed that, in case of failure, systemd killed the mysqld process but did not stop the service. This resulted in an infinite SST loop that only stopped when the service was killed or stopped via a systemd command.

The fix was super easy: I just needed to create a drop-in file for systemd that sets the timeout to a more useful value, as follows:
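A drop-in along these lines does the job (the filename below is my choice, any *.conf file in the service's .d directory works, and the unit may be named mariadb or mysql depending on the distribution):

```ini
# /etc/systemd/system/mariadb.service.d/timeoutstartsec.conf
# Give SST / crash recovery up to 15 minutes before systemd gives up.
[Service]
TimeoutStartSec=900
```

After creating the file, run systemctl daemon-reload so systemd picks up the new value; systemctl show mariadb -p TimeoutStartUSec will confirm the effective timeout.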

As you may notice, I set the timeout to 15 minutes, but it could be any value; with that, the next SST would have plenty of time to finish. This change is very well documented here.

On reflection…

One could argue about this change, and I'm still having some internal discussions about it. In my opinion, a 90-second timeout is too short for a Galera cluster: it is very likely that almost any cluster will hit it during SST. Even for a regular MySQL server recovering from a crash with a high proportion of dirty pages or many operations to roll back, 90 seconds doesn't seem like a feasible window. Why the developers changed it to such a short timeout, I have no idea. Luckily, it is very easy to fix now that I know the reason.


Photo by Tim Gouw on Unsplash

Comments (13)

  • Aurélien LEQUOY Reply

900 seconds is not enough either.

One point: you should mention the exact version of MariaDB. (All versions from the last 6–10 months shouldn't have this problem.)

I was really annoyed by this lack of testing of MariaDB + Galera Cluster with SST; by now it should be done on each new release to prevent stupid things like this.

For me, I use:

TimeoutSec=infinity

or you can put 0, if I remember correctly.

    February 12, 2019 at 8:59 am
    • Francisco Bordenave Reply

Aurélien, yep, fair enough, and I agree with you regarding timeouts; infinite IMO is not good either, but maybe it's just me against infinite processes 🙂
Regarding the version, this is 10.2.8-MariaDB-10.2.8, which is actually quite old. I will have a look at what these timeouts are in current versions; thanks for pointing it out!

      February 12, 2019 at 9:11 am
  • Rich M Reply

We've been having exactly this problem for some time, and our database is somewhat bigger than that one. Thankfully, we discovered it on a test database first, so we're aware of it, but it's incredibly annoying if you forget about changing the systemd file (if you can find it/edit it – permissions).
In my case, it was going from 10.1.28 to 10.1.34 that highlighted the problem.
(also MDEV-15606)

    February 12, 2019 at 9:58 am
    • Nicholas P Reply

It’s systemd. Use systemctl edit mysql and create an override. (Then symlink the /etc/systemd/system/mysql.service.d/ file to /etc/systemd/system/mysqld.service.d/ just for compatibility.) There’s a good chance you should be, or already are, doing this for the number of processes, the number of open files, or memlock.

      February 12, 2019 at 2:01 pm
  • Art van Scheppingen Reply

We had the exact same problem somewhere in mid-2018. As they had identified the issue by then and promised to fix it in the next release, I never gave it much thought. We simply added it to our Ansible playbook and that solved it for us…

    February 12, 2019 at 11:21 am
  • jtomaszon Reply

Honestly, I won’t blame MariaDB, PXC, or anything coming from Galera; this comes from systemd-crap.
They don’t know how to start databases. This was SST, but what happens when you start pulling the buffer pool from disk and you need more than 90 seconds to start?
The startup script should be smart enough to understand that the “start” process is in progress, perhaps giving you a “long-running start process” alert with a WARN, and let it continue.
Exiting and killing the process is simply wrong.

    February 12, 2019 at 11:39 am
  • Slawomir Lisznianski Reply

OK, I can be the first to defend both systemd and the new MariaDB timeout; call me devil’s advocate. Why is recovery/joining considered part of the “startup process”? The node should be marked as started; what it does internally to become a fully joined node is not systemd’s problem, and a lot of progress can be reported there. 90 seconds is too long for a well-written server to start. The Galera init phase, as advertised to systemd, is broken, and this issue exposed it.

    February 13, 2019 at 9:18 am
    • Francisco Bordenave Reply

Slawomir and Jtomaszon, I would agree with you both, but (there is always a but) your reasoning would hold if we simply got a failure and the service stopped working, not a continuous failure that keeps restarting the service even after the command reports failed (notice that the timeout just killed the mysqld server, but systemd kept trying to start it, as mysqld_safe would). I’m OK with an error message and a failure, but not with a constant loop of SSTs until you decide to either kill or stop the service, which never started.

      February 13, 2019 at 9:25 am
  • rautamiekka Reply

Sorry, but I don’t understand this at all. How can that timeout period help at all?

    February 13, 2019 at 12:20 pm
    • Francisco Bordenave Reply

It helps by preventing systemd from killing the mysqld process when the SST takes longer than the timeout period.

      February 13, 2019 at 2:05 pm
      • rautamiekka Reply

        But the logs seem to indicate some kinda connection or file problem instead of timeouts.

        February 13, 2019 at 2:34 pm
        • Francisco Bordenave Reply

          No, in the donor you see this message:
          Jan 29 18:08:30 node3 -innobackupex-backup: innobackupex: Error writing file ‘UNOPENED’ (Errcode: 32 – Broken pipe)

Meaning it can’t write the file due to a broken-pipe error: the joiner dropped the connection because mysqld was killed, and the SST process restarted using another connection socket.

          February 13, 2019 at 2:36 pm
  • Valerii Kravchuk Reply

All you need to know about this problem, including proper solutions depending on the systemd version and workarounds, is here:

    https://jira.mariadb.org/browse/MDEV-17571

    February 17, 2019 at 2:19 pm
