

Arbitrator Garbd


  • Arbitrator Garbd

    Hello,

    I want to test an arbitrator on my PXC cluster. I put it on the HAProxy server. After installing the packages openssl098e-0.9.8e-17.el6.centos.2.i686.rpm and galera-25.3.1-1.rhel6.i386.rpm, I configured the file /etc/default/garb like this:

    # A space-separated list of node addresses (address[:port]) in the cluster
    GALERA_NODES=node1,node2,node3,node4:4567
    # Galera cluster name, should be the same as on the rest of the nodes.
    GALERA_GROUP=my_cluster
    # Log file for garbd. Optional, by default logs to syslog
    LOG_FILE="/var/log/garbd.log"

    Then I start the daemon with the following command:

    garbd -a gcomm://node4:4567 -g my_cluster -l /var/log/gardb.log -d

    This seems to work properly.


    But when I try to start garbd as a service:

    [root@mon_serveur log]# service garb start

    I get the following error:

    List of GALERA_NODES is not configured [FAILED]

    Here are the contents of the log file:


    2013-11-18 14:34:50.058 INFO: CRC-32C: using "slicing-by-8" algorithm.
    2013-11-18 14:34:50.059 INFO: Read config:
    daemon: 1
    name: garb
    address: gcomm://node4:4567
    group: my_cluster
    sst: trivial
    donor:
    options: gcs.fc_limit=9999999; gcs.fc_factor=1.0; gcs.fc_master_slave=yes
    cfg:
    log: /var/log/gardb.log

    2013-11-18 14:34:50.067 INFO: protonet asio version 0
    2013-11-18 14:34:50.068 INFO: Using CRC-32C (optimized) for message checksums.
    2013-11-18 14:34:50.068 INFO: backend: asio
    2013-11-18 14:34:50.071 INFO: GMCast version 0
    2013-11-18 14:34:50.072 INFO: (32c90a52-5056-11e3-8a06-8679905b008d, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
    2013-11-18 14:34:50.072 INFO: (32c90a52-5056-11e3-8a06-8679905b008d, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
    2013-11-18 14:34:50.074 INFO: EVS version 0
    2013-11-18 14:34:50.074 INFO: PC version 0
    2013-11-18 14:34:50.074 INFO: gcomm: connecting to group 'my_cluster', peer '@IP:4567'
    2013-11-18 14:34:50.078 WARN: (32c90a52-5056-11e3-8a06-8679905b008d, 'tcp://0.0.0.0:4567') address 'tcp://@IP:4567' points to own listening address, blacklisting
    2013-11-18 14:34:53.077 WARN: no nodes coming from prim view, prim not possible
    2013-11-18 14:34:53.077 INFO: view(view_id(NON_PRIM,32c90a52-5056-11e3-8a06-8679905b008d,1) memb {
    32c90a52-5056-11e3-8a06-8679905b008d,0
    } joined {
    } left {
    } partitioned {
    })
    2013-11-18 14:34:53.577 WARN: last inactive check more than PT1.5S ago (PT3.50404S), skipping check
    2013-11-18 14:35:23.087 INFO: view((empty))
    2013-11-18 14:35:23.088 ERROR: failed to open gcomm backend connection: 110: failed to reach primary view: 110 (Connection timed out)
    at gcomm/src/pc.cpp:connect():141
    2013-11-18 14:35:23.088 ERROR: gcs/src/gcs_core.c:gcs_core_open():196: Failed to open backend connection: -110 (Connection timed out)
    2013-11-18 14:35:23.088 ERROR: gcs/src/gcs.c:gcs_open():1291: Failed to open channel 'my_cluster' at 'gcomm://@IP:4567': -110 (Connection timed out)
    2013-11-18 14:35:23.088 FATAL: Failed to open connection to group: 110 (Connection timed out)
    at garb/garb_gcs.cpp:Gcs():35
    I also tried these commands, but the result is the same:
    garbd -a gcomm://node1,node2,node3,node4 -g my_cluster --donor node1 -l /var/log/garbd.log -d
    garbd -a gcomm://node1,node2,node3,node4 -g my_cluster --sst xtrabackup --donor node1 -l /var/log/garbd.log -d
    garbd -a gcomm://node1,node2,node3,node4 -g my_cluster --sst rsync --donor node1 -l /var/log/garbd.log -d
    garbd -a gcomm://node4 -g my_cluster --sst rsync --donor node1 -l /var/log/garbd.log -d
    garbd -a gcomm://node1,node2,node3,node4 -g my_cluster --sst rsync --donor node1 -l /var/log/garbd.log -d
    Does anyone know how to solve this problem? Thanks in advance.

  • #2
    Hello,

    Are you sure the existing nodes (node1, node2, node3) are reachable via those hostnames? Maybe try using the IP address of one of the nodes instead.
    Also, the host where you run garbd must be able to reach TCP port 4567 on all the nodes.
    This command should be enough to test it:
    Code:
    garbd -a gcomm://x.x.x.x:4567 -g my_cluster
    where x.x.x.x is the IP address of one of the working nodes.
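    Before running garbd at all, it can help to confirm TCP reachability of port 4567 from the garbd host. A minimal sketch using bash's built-in /dev/tcp redirection (`check_port` is a hypothetical helper, not part of garbd or Galera):

    ```shell
    #!/bin/bash
    # check_port HOST PORT — prints whether HOST:PORT accepts a TCP connection.
    # Uses bash's /dev/tcp pseudo-device; `timeout` avoids hanging on filtered ports.
    check_port() {
      local host=$1 port=$2
      if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
        echo "${host}:${port} reachable"
      else
        echo "${host}:${port} unreachable"
      fi
    }

    # Example: port 1 on localhost is almost certainly closed.
    check_port 127.0.0.1 1
    ```

    Run it against each cluster node on port 4567 from the machine that will host garbd; any "unreachable" result points to a firewall or DNS problem rather than a garbd configuration issue.
    
    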

    • #3
      Use spaces as the separator when defining the GALERA_NODES variable.
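      To illustrate why the comma-separated value fails: the init script word-splits GALERA_NODES on whitespace, so a comma-separated string is treated as a single token. A sketch with the space-separated form (node names taken from the original post; the loop mimics the word splitting, not the actual init script):

      ```shell
      #!/bin/bash
      # Corrected /etc/default/garb setting: space-separated address[:port] entries.
      GALERA_NODES="node1:4567 node2:4567 node3:4567 node4:4567"

      # Unquoted expansion word-splits on spaces, yielding one token per node.
      count=0
      for node in $GALERA_NODES; do
        count=$((count + 1))
        echo "node $count: $node"
      done
      echo "total: $count"
      ```

      With the comma-separated value from the question, the same loop would see a single token, which is why the init script reports that GALERA_NODES is not configured correctly.
      
      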
