Cluster hangs, too many connections. Processes in 'wsrep in pre-commit stage' state.

  • #1

    Hi,

    I am running 5.6.15-56-log Percona XtraDB Cluster (GPL), Release 25.5, Revision 759, wsrep_25.5.r4061 on Fedora 20. I have three nodes in the cluster happily doing their thing for the most part, yet when we start experiencing high traffic the cluster will start locking up.

    The connection limit will be reached quickly with most processes in the list showing 'wsrep in pre-commit stage'. The queries are all INSERT and UPDATE on the same table (which has a primary key).

    The logs don't show anything of interest other than '[Warning] Too many connections'.

    I have set the gcs.fc_limit to 1000 which has helped reduce the number of times the cluster locks, however I cannot eliminate the problem completely.

    Other threads have suggested checking:

    SHOW STATUS LIKE 'Threads%';

    SELECT substring_index(host, ':', 1) AS host_name, state, count(*) FROM information_schema.processlist GROUP BY state, host_name;

    Unfortunately I haven't been able to execute them while the problem is occurring yet.
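    As an illustration of what that GROUP BY query reports, here is a small Python sketch that aggregates processlist-style rows by host and state the same way. The sample rows are hypothetical; real data would come from information_schema.processlist on a live node.

    ```python
    from collections import Counter

    # Hypothetical (host, state) pairs, mimicking what
    # information_schema.processlist might show during a pile-up.
    processlist = [
        ("app1:53310", "wsrep in pre-commit stage"),
        ("app1:53311", "wsrep in pre-commit stage"),
        ("app2:41002", "wsrep in pre-commit stage"),
        ("app2:41003", ""),
    ]

    # SUBSTRING_INDEX(host, ':', 1) keeps only the part before the colon;
    # the GROUP BY then counts rows per (host_name, state) pair.
    counts = Counter((host.split(":")[0], state) for host, state in processlist)

    for (host_name, state), n in sorted(counts.items()):
        print(host_name, repr(state), n)
    ```

    If almost every (host, state) bucket is 'wsrep in pre-commit stage', the connections are queued behind replication rather than doing local work.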

    Before the cluster we had a Master-Slave setup in place rather than Master-Master. If this problem cannot be addressed, is there an easy way to revert to a Master-Slave setup?

    Thanks.

  • #2
    Hi sjregan, did you find any resolution to this? I am getting the same error on my production XtraDB cluster when we try to alter a table with 500K rows.
    Here are the package versions:
    ||/ Name                                Version                  Description
    +++-===================================-========================-=============================================
    un percona-server-client-5.1 <none> (no description available)
    un percona-server-client-5.5 <none> (no description available)
    un percona-server-common-5.1 <none> (no description available)
    un percona-server-common-5.5 <none> (no description available)
    un percona-server-server-5.1 <none> (no description available)
    un percona-server-server-5.5 <none> (no description available)
    ii percona-toolkit 2.2.7 Advanced MySQL and system command-line tools
    ii percona-xtrabackup 2.1.8-733-1.precise Open source backup tool for InnoDB and XtraDB
    un percona-xtradb-client-5.0 <none> (no description available)
    ii percona-xtradb-cluster-client-5.5 5.5.34-25.9-607.precise Percona Server database client binaries
    ii percona-xtradb-cluster-common-5.5 5.5.34-25.9-607.precise Percona Server database common files (e.g. /etc/mysql/my.cnf)
    un percona-xtradb-cluster-galera <none> (no description available)
    ii percona-xtradb-cluster-galera-2.x 163.precise Galera components of Percona XtraDB Cluster
    un percona-xtradb-cluster-galera-25 <none> (no description available)
    ii percona-xtradb-cluster-server-5.5 5.5.34-25.9-607.precise Percona Server database server binaries
    un percona-xtradb-server-5.0 <none> (no description available)

    and I am running on a VM with 4GB RAM.

    • #3
      Increasing gcs.fc_limit is the correct workaround, but setting it to 1000 seems too high. Its default is 16. You should also check disk I/O latency and review hardware settings which might need to be tuned for better performance.
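      To see roughly what gcs.fc_limit does, here is a small Python sketch of the flow-control thresholds as Percona has described them: a node asks the cluster to pause replication when its receive queue exceeds a limit, and lets it resume once the queue drains below gcs.fc_factor times that limit. The sqrt(cluster_size) scaling applies when gcs.fc_master_slave=NO; exact behavior may differ across Galera versions, so treat this as an approximation.

      ```python
      import math

      def fc_thresholds(fc_limit=16, fc_factor=1.0, cluster_size=3,
                        master_slave=False):
          """Approximate Galera flow-control pause/resume queue lengths.

          With gcs.fc_master_slave=NO, the base limit is scaled by
          sqrt(cluster_size); with YES it is used as-is.
          """
          limit = fc_limit if master_slave else fc_limit * math.sqrt(cluster_size)
          return limit, fc_factor * limit  # (pause above, resume below)

      pause, resume = fc_thresholds()              # defaults: fc_limit=16, 3 nodes
      big_pause, _ = fc_thresholds(fc_limit=1000)  # the original poster's setting
      ```

      With the default of 16 on a 3-node cluster, a node starts throttling writers after only ~28 queued write-sets, which is why bursts of INSERT/UPDATE traffic can stall the whole cluster so quickly.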

      • #4
        Hi jrivera, thanks for the response, but these are machines in the cloud, so I am not sure how and which hardware settings I can change. I am attaching the plots from our Nagios graphs that show CPU usage, disk I/O, and memory consumption respectively. Do you see anything standing out?

        • #5
          Well, the photo doesn't seem to upload at the right size; not sure how to send you the image. All the stats seem quite low: disk I/O averages about 150, CPU idle is quite high as well, memory used for active data is about 67%, and total used is about 80%. So I am not sure what could be contributing to this slowness in the cluster.

          • #6
            The article here http://www.percona.com/blog/2013/05/...ter-for-mysql/ mentions three parameters. Do I need to adjust all of them (gcs.fc_limit, gcs.fc_master_slave, gcs.fc_factor)? Is it safe to let the cluster lag, and will that eliminate the 'wsrep in pre-commit stage' messages?
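            For reference, all three options are passed together through wsrep_provider_options. The values below are purely illustrative (assuming a PXC 5.5 setup like the one above), not recommendations for any particular workload:

            ```ini
            # my.cnf fragment (illustrative values, not tuned recommendations)
            [mysqld]
            # Allow a longer receive queue before flow control kicks in,
            # skip the sqrt(N) scaling of the limit (master-slave mode),
            # and resume replication only once the queue is fully drained.
            wsrep_provider_options="gcs.fc_limit=500; gcs.fc_master_slave=YES; gcs.fc_factor=1.0"
            ```

            Changing the provider options requires them all on one line; the node must be restarted (or the option set dynamically where supported) for the new values to take effect.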

            • #7
              Was this problem solved? What's the current status?
              WB

              • #8
                No, it wasn't solved, but we have just upgraded the DB cluster to 5.5.41. Will test tomorrow whether that helped.

                • #9
                  Can't wait to hear back from you, cheers man!
                  WB
