  • How does one troubleshoot a server crashing?

    Hi folks,
    New to the Percona scene, and I suppose still green at Linux development (coming from Windows and firmware development). How can I help? I have a server in a three-node Percona XtraDB Cluster that crashes often (1-3 times a day) and recovers. How do I find out whether this is a bug I need to submit, whether someone has already submitted it, or whether a fix already exists? I am going to paste my log here, and I know one of you will probably point me the right way, but can you also tell me how you
    found the answer and whether I should submit it as a bug (and how)? https://bugs.launchpad.net/percona-xtradb-cluster is still a bit cryptic to me. Thanks!
    ____________________________
    2014-07-18 02:53:45 26203 [Note] WSREP: (9f3d8a11-0e46-11e4-8791-83304bc0dc9b, 'tcp://0.0.0.0:4567') reconnecting to 426b8225-0db1-11e4-9736-f28d21187f64 (tcp://10.10.10.31:4567), attempt 0
    2014-07-18 02:53:48 26203 [Note] WSREP: (9f3d8a11-0e46-11e4-8791-83304bc0dc9b, 'tcp://0.0.0.0:4567') address 'tcp://10.10.10.32:4567' pointing to uuid 9f3d8a11-0e46-11e4-8791-83304bc0dc9b is blacklisted, skipping
    2014-07-18 02:53:49 26203 [Note] WSREP: (9f3d8a11-0e46-11e4-8791-83304bc0dc9b, 'tcp://0.0.0.0:4567') reconnecting to c203f600-0db1-11e4-8989-6bba6abcb41f (tcp://10.10.10.33:4567), attempt 0
    2014-07-18 02:53:49 26203 [Note] WSREP: (9f3d8a11-0e46-11e4-8791-83304bc0dc9b, 'tcp://0.0.0.0:4567') cleaning up duplicate 0x7fa1a401e220 after established 0x7fa1a40058e0
    07:53:49 UTC - mysqld got signal 11 ;
    This could be because you hit a bug. It is also possible that this binary
    or one of the libraries it was linked against is corrupt, improperly built,
    or misconfigured. This error can also be caused by malfunctioning hardware.
    We will try our best to scrape up some info that will hopefully help
    diagnose the problem, but since we have already crashed,
    something is definitely wrong and this may fail.
    Please help us make Percona XtraDB Cluster better by reporting any
    bugs at https://bugs.launchpad.net/percona-xtradb-cluster
    key_buffer_size=25165824
    read_buffer_size=131072
    max_used_connections=12
    max_threads=202
    thread_count=10
    connection_count=10
    It is possible that mysqld could use up to
    key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 105157 K bytes of memory
    Hope that's ok; if not, decrease some variables in the equation.
    Thread pointer: 0x0
    Attempting backtrace. You can use the following information to find out
    where mysqld died. If you see no messages after this, something went
    terribly wrong...
    stack_bottom = 0 thread_stack 0x40000
    /usr/sbin/mysqld(my_print_stacktrace+0x2e)[0x8ea56e]
    /usr/sbin/mysqld(handle_fatal_signal+0x4a4)[0x68d2f4]
    /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7fa2113f8cb0]
    [0x7fa1a40001da]
    You may download the Percona XtraDB Cluster operations manual by visiting
    http://www.percona.com/software/percona-xtradb-cluster/. You may find information
    in the manual which will help you identify the cause of the crash.
    140718 02:53:50 mysqld_safe Number of processes running now: 0
    140718 02:53:50 mysqld_safe WSREP: not restarting wsrep node automatically
    140718 02:53:50 mysqld_safe mysqld from pid file /var/lib/mysql/mysqld.pid ended
    140718 02:55:17 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
    140718 02:55:17 mysqld_safe WSREP: Running position recovery with --log_error='/var/lib/mysql/wsrep_recovery.vAqfRP' --pid-file='/var/lib/mysql/NODE2-recover.pid'
    140718 02:56:33 mysqld_safe WSREP: Recovered position b33d11f5-cbed-11e3-a1d5-ea0534d9f1f5:125617
    2014-07-18 02:56:36 0 [Note] WSREP: wsrep_start_position var submitted: 'b33d11f5-cbed-11e3-a1d5-ea0534d9f1f5:125617'
    2014-07-18 02:56:36 12539 [Note] WSREP: Read nil XID from storage engines, skipping position init
    2014-07-18 02:56:36 12539 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib/libgalera_smm.so'
    2014-07-18 02:56:36 12539 [Note] WSREP: wsrep_load(): Galera 3.5(r178) by Codership Oy loaded successfully.
    2014-07-18 02:56:36 12539 [Note] WSREP: CRC-32C: using "slicing-by-8" algorithm.
    2014-07-18 02:56:36 12539 [Note] WSREP: Found saved state: b33d11f5-cbed-11e3-a1d5-ea0534d9f1f5:-1
    2014-07-18 02:56:36 12539 [Note] WSREP: Passing config to GCS: base_host = 10.10.10.32; base_port = 4567; cert.log_conflicts = no; debug = no; evs.inactive_check_period = PT10S; evs.inactive_timeout = PT1M; evs.install_timeout = PT1M; evs.join_retrans_period = PT1S; evs.keepalive_period = PT3S; evs.max_install_timeouts = 1; evs.send_window = 1024; evs.stats_report_period = PT1M; evs.suspect_timeout = PT30S; evs.user_send_window = 512; evs.view_forget_timeout = PT24H; gcache.dir = /var/lib/mysql/; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /var/lib/mysql//galera.cache; gcache.page_size = 128M; gcache.size = 1024M; gcs.fc_debug = 0; gcs.fc_factor = 1.0; gcs.fc_limit = 16; gcs.fc_master_slave = no; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = no; gmcast.segment = 2; gmcast.version = 0; pc.announce_timeout = PT3S; pc.checksum = false; pc.ignore_quorum = false; pc.ignore_sb = false; pc.npvo = false; pc.version = 0; pc.wait_pri
    2014-07-18 02:56:37 12539 [Note] WSREP: Service thread queue flushed.
    2014-07-18 02:56:37 12539 [Note] WSREP: Assign initial position for certification: 125617, protocol version: -1
    2014-07-18 02:56:37 12539 [Note] WSREP: wsrep_sst_grab()
    2014-07-18 02:56:37 12539 [Note] WSREP: Start replication
    2014-07-18 02:56:37 12539 [Note] WSREP: Setting initial position to b33d11f5-cbed-11e3-a1d5-ea0534d9f1f5:125617
    2014-07-18 02:56:37 12539 [Note] WSREP: protonet asio version 0
    2014-07-18 02:56:37 12539 [Note] WSREP: Using CRC-32C (optimized) for message checksums.
    2014-07-18 02:56:37 12539 [Note] WSREP: backend: asio
    2014-07-18 02:56:37 12539 [Note] WSREP: GMCast version 0
    2014-07-18 02:56:37 12539 [Note] WSREP: (0b521a5e-0e51-11e4-b422-ee6e10b1721f, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
    2014-07-18 02:56:37 12539 [Note] WSREP: (0b521a5e-0e51-11e4-b422-ee6e10b1721f, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
    2014-07-18 02:56:37 12539 [Note] WSREP: EVS version 0
    2014-07-18 02:56:37 12539 [Note] WSREP: PC version 0
    2014-07-18 02:56:37 12539 [Note] WSREP: gcomm: connecting to group 'my_wsrep_cluster', peer '10.10.10.31:4567,10.10.10.33:4567,10.10.10.32:4567'
    2014-07-18 02:56:37 12539 [Warning] WSREP: (0b521a5e-0e51-11e4-b422-ee6e10b1721f, 'tcp://0.0.0.0:4567') address 'tcp://10.10.10.32:4567' points to own listening address, blacklisting
    2014-07-18 02:56:38 12539 [Note] WSREP: (0b521a5e-0e51-11e4-b422-ee6e10b1721f, 'tcp://0.0.0.0:4567') address 'tcp://10.10.10.32:4567' pointing to uuid 0b521a5e-0e51-11e4-b422-ee6e10b1721f is blacklisted, skipping
    2014-07-18 02:56:38 12539 [Note] WSREP: (0b521a5e-0e51-11e4-b422-ee6e10b1721f, 'tcp://0.0.0.0:4567') address 'tcp://10.10.10.32:4567' pointing to uuid 0b521a5e-0e51-11e4-b422-ee6e10b1721f is blacklisted, skipping

  • #2
    address 'tcp://x.x.x.x:4567' points to own listening address, blacklisting ...

    All addresses seem to be recognized as blacklisted.

    • #3
      I found the code corresponding to the warning in gcomm/src/gmcast.cpp.

      The blacklisted address is registered when a connection is established (gcomm::GMCast::handle_established).
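
      For reference, here is roughly how one can locate it (a sketch; the exact message text and file layout may differ between Galera versions):

      Code:
      # search the Galera source tree for the warning text seen in the log
      grep -rn "points to own listening address" gcomm/src/
      # find the handler where the blacklist entry is added
      grep -n "handle_established" gcomm/src/gmcast.cpp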

      The UUID in the log represents the cluster ID.

      Code:
      2014-07-18 02:53:45 26203 [Note] WSREP: (9f3d8a11-0e46-11e4-8791-83304bc0dc9b, 'tcp://0.0.0.0:4567') reconnecting to 426b8225-0db1-11e4-9736-f28d21187f64 (tcp://10.10.10.31:4567), attempt 0
      2014-07-18 02:53:48 26203 [Note] WSREP: (9f3d8a11-0e46-11e4-8791-83304bc0dc9b, 'tcp://0.0.0.0:4567') address 'tcp://10.10.10.32:4567' pointing to uuid 9f3d8a11-0e46-11e4-8791-83304bc0dc9b is blacklisted, skipping
      2014-07-18 02:53:49 26203 [Note] WSREP: (9f3d8a11-0e46-11e4-8791-83304bc0dc9b, 'tcp://0.0.0.0:4567') reconnecting to c203f600-0db1-11e4-8989-6bba6abcb41f (tcp://10.10.10.33:4567), attempt 0

      UUID
      9f3d8a11-0e46-11e4-8791-83304bc0dc9b
      c203f600-0db1-11e4-8989-6bba6abcb41f

      The UUIDs differ, so the node may be recognized as another cluster and blacklisted.

      I'd like to know three things about your configuration:

      1. The command used when you start the nodes.
      2. The wsrep_cluster_address (my.cnf setting) of all nodes.
      3. wsrep_incoming_addresses (whether all possible nodes are listed), which you can get with the following command:
      mysql> show status like "wsrep%";
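
      For example, on each node (a rough sketch; assumes the mysql client can log in locally with credentials configured):

      Code:
      # startup command actually in use
      ps -eaf | grep mysqld
      # cluster address setting and incoming addresses status
      mysql -e "SHOW VARIABLES LIKE 'wsrep_cluster_address';"
      mysql -e "SHOW STATUS LIKE 'wsrep_incoming_addresses';"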

      thank you.

      • #4
        I also checked the bug list, and you might be hitting the following bug, which is not fixed yet:

        https://bugs.launchpad.net/percona-x...r/+bug/1330953

        • #5
          Thanks Taka!

          I see how you looked at the source code to find the error text about blacklisting, so that's cool. I guess I would have to know that blacklisting is bad (I will need to read up on it). I saw so many blacklisting messages, while the cluster was syncing fine, that I thought it was OK.

          Question: how does one match/search an issue to the bug list? Meaning, how did you even know to look at this particular bug on the bug list?

          Here are answers:

          1. I installed the cluster using the Severalnines script, so I am not sure how the instance gets started automatically by ClusterControl. Here is what 'ps -eaf | grep mysql' shows:
          /usr/sbin/mysqld --basedir=/usr/ --datadir=/var/lib/mysql --plugin-dir=/usr//lib/mysql/plugin --user=mysql --wsrep-provider=/usr/lib/libgalera_smm.so --log-error=/var/log/mysql.log --pid-file=/var/lib/mysql/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306 --wsrep_start_position=b33d11f5-cbed-11e3-a1d5-ea0534d9f1f5:134865

          2. Node1 (10.10.10.31): gcomm://10.10.10.33:4567,10.10.10.31:4567,10.10.10.32:4567
          Node2 (10.10.10.32): gcomm://10.10.10.33:4567,10.10.10.32:4567,10.10.10.31:4567
          Node3 (10.10.10.33): gcomm://10.10.10.33:4567,10.10.10.32:4567,10.10.10.31:4567

          This configuration was done automatically by Severalnines' ClusterControl. The three nodes are geographically dispersed (different cities), on different network segments, connected by VPN. It works great most of the time, and the nodes stay synced, except when mysqld crashes. It only seems to crash on the two slower machines (Node2 and Node3).

          3. It simply lists this on every node: ,,

          Here is the entire output for node2:
          wsrep_local_state_uuid b33d11f5-cbed-11e3-a1d5-ea0534d9f1f5
          wsrep_protocol_version 5
          wsrep_last_committed 137303
          wsrep_replicated 390
          wsrep_replicated_bytes 470247
          wsrep_repl_keys 1348
          wsrep_repl_keys_bytes 19754
          wsrep_repl_data_bytes 425533
          wsrep_repl_other_bytes 0
          wsrep_received 4842
          wsrep_received_bytes 6150087
          wsrep_local_commits 390
          wsrep_local_cert_failures 0
          wsrep_local_replays 0
          wsrep_local_send_queue 0
          wsrep_local_send_queue_avg 0.000000
          wsrep_local_recv_queue 0
          wsrep_local_recv_queue_avg 0.097687
          wsrep_local_cached_downto 134385
          wsrep_flow_control_paused_ns 0
          wsrep_flow_control_paused 0.000000
          wsrep_flow_control_sent 0
          wsrep_flow_control_recv 0
          wsrep_cert_deps_distance 57.063421
          wsrep_apply_oooe 0.002144
          wsrep_apply_oool 0.000000
          wsrep_apply_window 1.003314
          wsrep_commit_oooe 0.000000
          wsrep_commit_oool 0.000000
          wsrep_commit_window 1.001754
          wsrep_local_state 4
          wsrep_local_state_comment Synced
          wsrep_cert_index_size 19
          wsrep_causal_reads 0
          wsrep_cert_interval 0.004073
          wsrep_incoming_addresses ,,
          wsrep_cluster_conf_id 62
          wsrep_cluster_size 3
          wsrep_cluster_state_uuid b33d11f5-cbed-11e3-a1d5-ea0534d9f1f5
          wsrep_cluster_status Primary
          wsrep_connected ON
          wsrep_local_bf_aborts 0
          wsrep_local_index 1
          wsrep_provider_name Galera
          wsrep_provider_vendor Codership Oy <info@codership.com>
          wsrep_provider_version 3.5(r178)
          wsrep_ready ON


          Thanks!

          • #6
            Hi, thank you for your reply.

            >Question: how does one match/search an issue to the bug list? Meaning, how did you even know to look at this particular bug on the bug list?
            A free-word search for "blacklist", because that is the key term and this is a Galera-specific problem.
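
            For example (the exact URL format is my assumption; typing "blacklist" into the search box on the bug list page gives the same result):

            Code:
            https://bugs.launchpad.net/percona-xtradb-cluster/+bugs?field.searchtext=blacklist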

            Thank you also for the three pieces of information.

            ===
            2014-07-18 02:56:37 12539 [Warning] WSREP: (0b521a5e-0e51-11e4-b422-ee6e10b1721f, 'tcp://0.0.0.0:4567') address 'tcp://10.10.10.32:4567' points to own listening address, blacklisting
            2014-07-18 02:56:38 12539 [Note] WSREP: (0b521a5e-0e51-11e4-b422-ee6e10b1721f, 'tcp://0.0.0.0:4567') address 'tcp://10.10.10.32:4567' pointing to uuid 0b521a5e-0e51-11e4-b422-ee6e10b1721f is blacklisted, skipping
            ===

            "points to own listening address" seems wrong.

            How about removing the node's own address from wsrep_cluster_address, although the official manual doesn't say to do so?

            e.g., the 10.10.10.32 config:
            wsrep_cluster_address=gcomm://10.10.10.33:4567,10.10.10.31:4567

            http://www.percona.com/doc/percona-x...luster_address
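
            In other words, each node's my.cnf would list only the other two nodes (a sketch based on the addresses you posted, untested):

            Code:
            # Node1 (10.10.10.31)
            wsrep_cluster_address=gcomm://10.10.10.32:4567,10.10.10.33:4567
            # Node2 (10.10.10.32)
            wsrep_cluster_address=gcomm://10.10.10.31:4567,10.10.10.33:4567
            # Node3 (10.10.10.33)
            wsrep_cluster_address=gcomm://10.10.10.31:4567,10.10.10.32:4567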

            • #7
              OK, I will try that tonight and report back later. Thanks again, Taka.

              PS. How did you search the bug list to find that possible bug related to my problem? Thanks!
