clean PXC instance complains about its port being unavailable

    So, a bit of explanation of the big picture: I am trying to get multiple PXC instances running on a single machine, replicating via Galera to each other. I am running Ubuntu Server 13.04 amd64 in a VirtualBox OSE virtual machine. I installed the Percona debian/ubuntu repository and installed percona-xtradb-cluster-server-5.5, percona-xtrabackup, percona-xtradb-cluster-client-5.5, percona-xtradb-cluster-common-5.5, and percona-xtradb-cluster-galera-2.x from there. mysqld --version shows "mysqld Ver 5.5.33-55 for Linux on x86_64 (Percona XtraDB Cluster (GPL), wsrep_23.7.6.r3915)".

    Once I had PXC installed, I set up a /data/mysql directory and several /etc/mysql/my.*.cnf files, one for each node I intended to create. my.galera1.cnf looks like so:
    Code:
    [mysqld]
    port = 5533
    socket=/tmp/mysql.galera1.sock
    datadir=/data/mysql/galera1
    #basedir=/usr/bin
    user=root
    log_error=error.log
    innodb_file_per_table
    
    ### galera-required settings ###
    query_cache_size=0
    binlog_format=ROW
    default_storage_engine=innodb
    innodb_autoinc_lock_mode=2
    #innodb_locks_unsafe_for_binlog=1
    
    ### wsrep basic settings ###
    wsrep_provider=/usr/lib/libgalera_smm.so
    wsrep_cluster_name=boringname
    wsrep_cluster_address=gcomm://localhost:5533,localhost:5534,localhost:5535
    wsrep_sst_method=rsync
    #wsrep_sst_auth=repluser:replpass
    wsrep_node_name=node1
    wsrep_node_address=localhost:5533
    
    [mysql]
    user=root
    I then ran mysql_install_db --defaults-file=/etc/mysql/my.galera1.cnf to initialize the data directory, followed by mysqld_safe --defaults-file=/etc/mysql/my.galera1.cnf --wsrep-new-cluster. mysqld_safe printed the following to the screen:

    130927 23:53:22 mysqld_safe Logging to '/data/mysql/galera1/error.log'.
    130927 23:53:22 mysqld_safe Starting mysqld daemon with databases from /data/mysql/galera1
    130927 23:53:22 mysqld_safe Skipping wsrep-recover for empty datadir: /data/mysql/galera1
    130927 23:53:22 mysqld_safe Assigning 00000000-0000-0000-0000-000000000000:-1 to wsrep_start_position
    130927 23:53:32 mysqld_safe mysqld from pid file /data/mysql/galera1/ubusrv1304-0.pid ended
    and the following to /data/mysql/galera1/error.log:

    130927 23:53:22 mysqld_safe Starting mysqld daemon with databases from /data/mysql/galera1
    130927 23:53:22 mysqld_safe Skipping wsrep-recover for empty datadir: /data/mysql/galera1
    130927 23:53:22 mysqld_safe Assigning 00000000-0000-0000-0000-000000000000:-1 to wsrep_start_position
    130927 23:53:22 [Note] WSREP: wsrep_start_position var submitted: '00000000-0000-0000-0000-000000000000:-1'
    130927 23:53:22 [Note] WSREP: Read nil XID from storage engines, skipping position init
    130927 23:53:22 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib/libgalera_smm.so'
    130927 23:53:22 [Note] WSREP: wsrep_load(): Galera 2.7(r157) by Codership Oy <info@codership.com> loaded succesfully.
    130927 23:53:22 [Warning] WSREP: Could not open saved state file for reading: /data/mysql/galera1//grastate.dat
    130927 23:53:22 [Note] WSREP: Found saved state: 00000000-0000-0000-0000-000000000000:-1
    130927 23:53:22 [Note] WSREP: Preallocating 134219048/134219048 bytes in '/data/mysql/galera1//galera.cache'...
    130927 23:53:22 [Note] WSREP: Passing config to GCS: base_host = localhost; base_port = 5533; cert.log_conflicts = no; gcache.dir = /data/mysql/galera1/; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /data/mysql/galera1//galera.cache; gcache.page_size = 128M; gcache.size = 128M; gcs.fc_debug = 0; gcs.fc_factor = 1; gcs.fc_limit = 16; gcs.fc_master_slave = NO; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = NO; replicator.causal_read_timeout = PT30S; replicator.commit_order = 3
    130927 23:53:22 [Note] WSREP: Assign initial position for certification: -1, protocol version: -1
    130927 23:53:22 [Note] WSREP: wsrep_sst_grab()
    130927 23:53:22 [Note] WSREP: Start replication
    130927 23:53:22 [Note] WSREP: Setting initial position to 00000000-0000-0000-0000-000000000000:-1
    130927 23:53:22 [Note] WSREP: protonet asio version 0
    130927 23:53:22 [Note] WSREP: backend: asio
    130927 23:53:22 [Note] WSREP: GMCast version 0
    130927 23:53:22 [Note] WSREP: (e6e10868-27f9-11e3-85c4-2672bd9349a7, 'tcp://0.0.0.0:5533') listening at tcp://0.0.0.0:5533
    130927 23:53:22 [Note] WSREP: (e6e10868-27f9-11e3-85c4-2672bd9349a7, 'tcp://0.0.0.0:5533') multicast: , ttl: 1
    130927 23:53:22 [Note] WSREP: EVS version 0
    130927 23:53:22 [Note] WSREP: PC version 0
    130927 23:53:22 [Note] WSREP: gcomm: connecting to group 'boringname', peer ''
    130927 23:53:22 [Note] WSREP: Node e6e10868-27f9-11e3-85c4-2672bd9349a7 state prim
    130927 23:53:22 [Note] WSREP: view(view_id(PRIM,e6e10868-27f9-11e3-85c4-2672bd9349a7,1) memb {
    e6e10868-27f9-11e3-85c4-2672bd9349a7,
    } joined {
    } left {
    } partitioned {
    })
    130927 23:53:22 [Note] WSREP: gcomm: connected
    130927 23:53:22 [Note] WSREP: Changing maximum packet size to 64500, resulting msg size: 32636
    130927 23:53:22 [Note] WSREP: Shifting CLOSED -> OPEN (TO: 0)
    130927 23:53:22 [Note] WSREP: Opened channel 'boringname'
    130927 23:53:22 [Note] WSREP: Waiting for SST to complete.
    130927 23:53:22 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 0, memb_num = 1
    130927 23:53:22 [Note] WSREP: Starting new group from scratch: e6e177a0-27f9-11e3-92e7-4b4286f53f4a
    130927 23:53:22 [Note] WSREP: STATE_EXCHANGE: sent state UUID: e6e17ffd-27f9-11e3-8c08-8b5bd34ddb16
    130927 23:53:22 [Note] WSREP: STATE EXCHANGE: sent state msg: e6e17ffd-27f9-11e3-8c08-8b5bd34ddb16
    130927 23:53:22 [Note] WSREP: STATE EXCHANGE: got state msg: e6e17ffd-27f9-11e3-8c08-8b5bd34ddb16 from 0 (node1)
    130927 23:53:22 [Note] WSREP: Quorum results:
    version = 2,
    component = PRIMARY,
    conf_id = 0,
    members = 1/1 (joined/total),
    act_id = 0,
    last_appl. = -1,
    protocols = 0/4/2 (gcs/repl/appl),
    group UUID = e6e177a0-27f9-11e3-92e7-4b4286f53f4a
    130927 23:53:22 [Note] WSREP: Flow-control interval: [16, 16]
    130927 23:53:22 [Note] WSREP: Restored state OPEN -> JOINED (0)
    130927 23:53:22 [Note] WSREP: New cluster view: global state: e6e177a0-27f9-11e3-92e7-4b4286f53f4a:0, view# 1: Primary, number of nodes: 1, my index: 0, protocol version 2
    130927 23:53:22 [Note] WSREP: SST complete, seqno: 0
    130927 23:53:22 [Note] Plugin 'FEDERATED' is disabled.
    /usr/sbin/mysqld: Table 'mysql.plugin' doesn't exist
    130927 23:53:22 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
    130927 23:53:22 InnoDB: The InnoDB memory heap is disabled
    130927 23:53:22 InnoDB: Mutexes and rw_locks use GCC atomic builtins
    130927 23:53:22 InnoDB: Compressed tables use zlib 1.2.3
    130927 23:53:22 InnoDB: Using Linux native AIO
    130927 23:53:22 [Note] WSREP: Member 0 (node1) synced with group.
    130927 23:53:22 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 0)
    130927 23:53:22 InnoDB: Initializing buffer pool, size = 128.0M
    130927 23:53:22 InnoDB: Completed initialization of buffer pool
    InnoDB: The first specified data file ./ibdata1 did not exist:
    InnoDB: a new database to be created!
    130927 23:53:22 InnoDB: Setting file ./ibdata1 size to 10 MB
    InnoDB: Database physically writes the file full: wait...
    130927 23:53:22 InnoDB: Log file ./ib_logfile0 did not exist: new to be created
    InnoDB: Setting log file ./ib_logfile0 size to 5 MB
    InnoDB: Database physically writes the file full: wait...
    130927 23:53:22 InnoDB: Log file ./ib_logfile1 did not exist: new to be created
    InnoDB: Setting log file ./ib_logfile1 size to 5 MB
    InnoDB: Database physically writes the file full: wait...
    InnoDB: Doublewrite buffer not found: creating new
    InnoDB: Doublewrite buffer created
    InnoDB: 127 rollback segment(s) active.
    InnoDB: Creating foreign key constraint system tables
    InnoDB: Foreign key constraint system tables created
    130927 23:53:22 InnoDB: Waiting for the background threads to start
    130927 23:53:23 Percona XtraDB (http://www.percona.com) 5.5.33-rel31.0 started; log sequence number 0
    130927 23:53:23 [ERROR] Can't start server: Bind on TCP/IP port. Got error: 98: Address already in use
    130927 23:53:23 [ERROR] Do you already have another mysqld server running on port: 5533 ?
    130927 23:53:23 [ERROR] Aborting

    130927 23:53:25 [Note] WSREP: Closing send monitor...
    130927 23:53:25 [Note] WSREP: Closed send monitor.
    130927 23:53:25 [Note] WSREP: gcomm: terminating thread
    130927 23:53:25 [Note] WSREP: gcomm: joining thread
    130927 23:53:25 [Note] WSREP: gcomm: closing backend
    130927 23:53:25 [Note] WSREP: view((empty))
    130927 23:53:25 [Note] WSREP: gcomm: closed
    130927 23:53:25 [Note] WSREP: Received self-leave message.
    130927 23:53:25 [Note] WSREP: Flow-control interval: [0, 0]
    130927 23:53:25 [Note] WSREP: Received SELF-LEAVE. Closing connection.
    130927 23:53:25 [Note] WSREP: Shifting SYNCED -> CLOSED (TO: 0)
    130927 23:53:25 [Note] WSREP: RECV thread exiting 0: Success
    130927 23:53:25 [Note] WSREP: recv_thread() joined.
    130927 23:53:25 [Note] WSREP: Closing replication queue.
    130927 23:53:25 [Note] WSREP: Closing slave action queue.
    130927 23:53:25 [Note] WSREP: Service disconnected.
    130927 23:53:25 [Note] WSREP: rollbacker thread exiting
    130927 23:53:26 [Note] WSREP: Some threads may fail to exit.
    130927 23:53:26 InnoDB: Starting shutdown...
    130927 23:53:27 InnoDB: Shutdown completed; log sequence number 1597945
    130927 23:53:27 [Note] /usr/sbin/mysqld: Shutdown complete

    Error in my_thread_global_end(): 1 threads didn't exit
    130927 23:53:32 mysqld_safe mysqld from pid file /data/mysql/galera1/ubusrv1304-0.pid ended
    Seeing the error about binding to port 5533, I ran `netstat -anp | grep 5533`, but got no results, which (I think) indicates that nothing is actually bound to port 5533.
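    (A quick sketch of an equivalent check, assuming ss(8) from iproute2 is installed; `netstat -anp` as used above, or `lsof -i :5533`, work similarly:)

```shell
# Check whether any process has a listening socket on the given port.
port=5533
if ss -ltn 2>/dev/null | grep -q ":${port} "; then
    echo "something is listening on port ${port}"
else
    echo "nothing is listening on port ${port}"
fi
```

    If this also shows nothing listening, the "Address already in use" error is coming from within the same mysqld startup, which is consistent with the explanation in the replies.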

    At some point previously, I somehow managed to get the server to at least stay up, and while I could connect to it from the mysql command-line client, every command I issued complained that there was no such command.

    I am thoroughly baffled at what I'm doing wrong, so any help at all would be very welcome. Thank you, everyone, in advance!
    Last edited by arunin; 09-30-2013, 03:00 PM.

  • #2
    I've never tried to set up PXC on a single host, and I don't think it is even supported. It can certainly cause a lot of problems, so I suggest using a separate VM/host for each node.
    From the logs I can see:
    [Note] WSREP: Passing config to GCS: base_host = localhost; base_port = 5533;
    while the default for Galera is 4567; it can be specified with gmcast.listen_addr.
    see here: http://www.codership.com/wiki/doku.p...era_parameters

    So the problem is that you tried to start MySQL listening on the same port as Galera.

    Then wsrep_sst_receive_address should also be specified.
    But I don't think it's worth your effort here.
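    A minimal sketch of a config that separates the three ports (the 4567/4444 values are Galera's defaults used illustratively here, not taken from this thread; each node on the box would need its own set):

```ini
# Hypothetical my.galera1.cnf fragment: MySQL, Galera group communication,
# and SST each get a distinct port (values illustrative).
[mysqld]
port = 5533                                   # MySQL client port
wsrep_provider_options = "gmcast.listen_addr=tcp://127.0.0.1:4567"
wsrep_cluster_address = gcomm://127.0.0.1:4567,127.0.0.1:4568,127.0.0.1:4569
wsrep_node_address = 127.0.0.1:4567           # Galera replication port
wsrep_sst_receive_address = 127.0.0.1:4444    # SST port
```

    Note that wsrep_cluster_address then points at the Galera ports, not the MySQL client ports as in the original config.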



    • #3
      Originally posted by przemek:
      I've never tried to set up PXC on a single host, and I don't think it is even supported. It can certainly cause a lot of problems, so I suggest using a separate VM/host for each node.
      From the logs I can see:
      [Note] WSREP: Passing config to GCS: base_host = localhost; base_port = 5533;
      while the default for Galera is 4567; it can be specified with gmcast.listen_addr.
      see here: http://www.codership.com/wiki/doku.p...era_parameters

      So the problem is that you tried to start MySQL listening on the same port as Galera.

      Then wsrep_sst_receive_address should also be specified.
      But I don't think it's worth your effort here.
      Ah, that makes sense! Thank you very much for your assistance. I had also come to the conclusion that trying to set up multiple PXC nodes on one box was not worth the effort, and decided to just spin up a few VMs with one PXC node each.



      • #4
        Actually I was wrong; it seems this is possible: http://www.percona.com/doc/percona-x...singlebox.html
        But it is still easier to set it up on 3 separate hosts.
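        For anyone who does try the single-box route, every per-node port and path must differ. A hypothetical my.galera2.cnf might override the first node's config like so (all values illustrative, not from the linked doc):

```ini
# Hypothetical second-node overrides for a single-box cluster.
[mysqld]
port = 5534                                   # distinct MySQL client port
socket = /tmp/mysql.galera2.sock              # distinct socket
datadir = /data/mysql/galera2                 # distinct data directory
wsrep_node_name = node2
wsrep_provider_options = "gmcast.listen_addr=tcp://127.0.0.1:4568"
wsrep_sst_receive_address = 127.0.0.1:4445    # distinct SST port
```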
