Percona on EC2 across regions


    I am trying to set up the following:
    3-node cluster in the east region (these nodes work just fine).
    2-node cluster in the west region (this is where my issue is).
    1 node as an arbitrator (which I'm not entirely sure how to set up).
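    For a cross-region Galera setup like this, each node's my.cnf would typically list every member's cross-region-routable address in wsrep_cluster_address, and advertise its own routable address via wsrep_node_address. A minimal sketch, with placeholder IPs (EAST_IP*, WEST_IP* are not real addresses from this cluster):

    ```ini
    # my.cnf fragment -- placeholder addresses for illustration
    [mysqld]
    wsrep_provider=/usr/lib64/libgalera_smm.so
    wsrep_cluster_name=ves_cluster
    # every node in both regions, reachable from this host on port 4567
    wsrep_cluster_address=gcomm://EAST_IP1,EAST_IP2,EAST_IP3,WEST_IP1,WEST_IP2
    # the address this node advertises to its peers; on EC2 this must be
    # an address the other region can actually route to
    wsrep_node_address=THIS_NODE_ROUTABLE_IP
    ```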

    The 3-node cluster in the east is working, but when I try to connect the two in the west I get an error:
    140318 15:57:51 [Warning] WSREP: (964ad955-aed7-11e3-89a5-9ffeab59941d, 'tcp://') address 'tcp://' points to own listening address, blacklisting
    140318 15:57:54 [Warning] WSREP: no nodes coming from prim view, prim not possible
    140318 15:57:54 [Note] WSREP: view(view_id(NON_PRIM,964ad955-aed7-11e3-89a5-9ffeab59941d,1) memb {
    } joined {
    } left {
    } partitioned {
    140318 15:57:54 [Warning] WSREP: last inactive check more than PT1.5S ago (PT3.50367S), skipping check
    140318 15:58:24 [Note] WSREP: view((empty))
    140318 15:58:24 [ERROR] WSREP: failed to open gcomm backend connection: 110: failed to reach primary view: 110 (Connection timed out)
    at gcomm/src/pc.cpp:connect():139
    140318 15:58:24 [ERROR] WSREP: gcs/src/gcs_core.c:gcs_core_open():195: Failed to open backend connection: -110 (Connection timed out)
    140318 15:58:24 [ERROR] WSREP: gcs/src/gcs.c:gcs_open():1289: Failed to open channel 'ves_cluster' at 'gcomm://,,,10.110.49.101,': -110 (Connection timed out)
    140318 15:58:24 [ERROR] WSREP: gcs connect failed: Connection timed out
    140318 15:58:24 [ERROR] WSREP: wsrep::connect() failed: 6
    140318 15:58:24 [ERROR] Aborting

    The errors above are from the west nodes. Security groups are updated (tested using telnet), I added the west IPs to the my.cnf file (wsrep_cluster_address=gcomm://,,,10.110.49.101,), and I added the user specified in my my.cnf file with its password (create user 'blah'@'10.110.64.%' identified by 'password') plus the grant rights.

    Is there something else I'm missing?

    I need to set up an arbitrator node, but I'm not sure how this is done. Is it:

    garbd -a gcomm://MAIN_NODE_IP:4567 -g my_cluster
    or
    garbd -a gcomm://ALL_IPS_IN_CLUSTERS:4567 -g my_cluster
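
    For what it's worth, garbd accepts a comma-separated list of node addresses, so listing several nodes (rather than just one) lets the arbitrator reconnect if its first contact node is down. A sketch of the second form, with placeholder IPs (EAST_IP*, WEST_IP* are not real addresses from this cluster):

    ```shell
    # run the Galera arbitrator, pointing at several reachable cluster nodes;
    # 4567 is the default Galera replication port, and the group name must
    # match wsrep_cluster_name in my.cnf
    garbd --address "gcomm://EAST_IP1:4567,EAST_IP2:4567,WEST_IP1:4567" \
          --group ves_cluster \
          --daemon
    ```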