
max_connections and Percona XtraDB Cluster Operator


  • max_connections and Percona XtraDB Cluster Operator

    Hello,

    I've been using the guide to test the PXC Operator on a local Kubernetes installation.

    I've been resolving application compatibility issues, but have hit a barrier. My MySQL server logs are now saying the following:

    Code:
    % kubectl logs cluster1-pxc-0 | tail -n 5
    2019-11-04T17:08:16.015741Z 0 [Warning] Too many connections
    2019-11-04T17:08:18.004719Z 0 [Warning] Too many connections
    2019-11-04T17:08:18.004817Z 0 [Warning] Too many connections
    2019-11-04T17:08:18.005107Z 0 [Warning] Too many connections
    2019-11-04T17:08:18.005212Z 0 [Warning] Too many connections
    Connecting directly to the container, I can see that the max_connections setting hasn't been set, meaning each server should allow the default of 151 connections.

    Code:
    % kubectl exec -it cluster1-pxc-0 /bin/bash
    bash-4.2$ mysql -uroot -proot_password
    mysql: [Warning] Using a password on the command line interface can be insecure.
    ERROR 1040 (HY000): Too many connections
    bash-4.2$ grep -r max_connections /etc/my.cnf /etc/my.cnf.d /etc/mysql
    bash-4.2$ echo $?
    1
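
    (The effective limit can also be confirmed from SQL once a connection slot frees up - standard MySQL syntax, nothing operator-specific:

    Code:
    mysql> SHOW VARIABLES LIKE 'max_connections';
    )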
    Asking ProxySQL what it thinks about the connections:

    Code:
    mysql> select * from mysql_servers;
    +--------------+-------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
    | hostgroup_id | hostname    | port | gtid_port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
    +--------------+-------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
    | 11           | 10.244.3.27 | 3306 | 0         | ONLINE | 1000   | 0           | 1000            | 0                   | 1       | 0              |         |
    | 10           | 10.244.3.27 | 3306 | 0         | ONLINE | 1000   | 0           | 1000            | 0                   | 1       | 0              |         |
    +--------------+-------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
    2 rows in set (0.01 sec)
    (I have a single host, with allow_unsafe_configuration: true)
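
    ProxySQL's admin interface also exposes per-backend connection counters in its standard stats_mysql_connection_pool table (not operator-specific), which shows how many connections ProxySQL is actually holding open to each server:

    Code:
    mysql> SELECT hostgroup, srv_host, srv_port, status, ConnUsed, ConnFree, ConnOK, ConnERR
        -> FROM stats.stats_mysql_connection_pool;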

    Due to the connection limit being reached, the health check correctly kills my pxc container. As soon as it comes back up, though, it seems that ProxySQL tries to re-open all of its connections. This means my server is essentially dead.

    Is this max_connections=1000 above a misconfiguration by the operator, or have I missed something?

  • #2
    Hi Martin, I checked in with the team on this and they said that MySQL options like max_connections can be configured using a configuration file. There's some information on this here: https://www.percona.com/doc/kubernet...c/options.html
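
    As a sketch, in the operator's deploy/cr.yaml this goes under the pxc section's configuration field, which accepts a my.cnf fragment (the value here is just an example - tune it to your workload):

    Code:
    spec:
      pxc:
        configuration: |
          [mysqld]
          max_connections = 250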

    Let us know if that covers what you need. Thanks!



    • #3
      Hi Lorraine,

      I can indeed configure max_connections for the MySQL servers using the configuration attribute in the PXC custom resource, but as described in my first post, I can't configure ProxySQL's limit.

      From my investigation above (which may be flawed!), it looks like ProxySQL sets its own max_connections to the underlying servers at 1000. It looks like that's the case from the docs, and I can see it's the case in the MySQL admin panel:

      Code:
      mysql> show create table mysql_servers\G
      *************************** 1. row ***************************
             table: mysql_servers
      Create Table: CREATE TABLE mysql_servers (
          hostgroup_id INT CHECK (hostgroup_id>=0) NOT NULL DEFAULT 0,
          hostname VARCHAR NOT NULL,
          port INT CHECK (port >= 0 AND port <= 65535) NOT NULL DEFAULT 3306,
          gtid_port INT CHECK (gtid_port <> port AND gtid_port >= 0 AND gtid_port <= 65535) NOT NULL DEFAULT 0,
          status VARCHAR CHECK (UPPER(status) IN ('ONLINE','SHUNNED','OFFLINE_SOFT', 'OFFLINE_HARD')) NOT NULL DEFAULT 'ONLINE',
          weight INT CHECK (weight >= 0 AND weight <=10000000) NOT NULL DEFAULT 1,
          compression INT CHECK (compression IN(0,1)) NOT NULL DEFAULT 0,
          max_connections INT CHECK (max_connections >=0) NOT NULL DEFAULT 1000,
          max_replication_lag INT CHECK (max_replication_lag >= 0 AND max_replication_lag <= 126144000) NOT NULL DEFAULT 0,
          use_ssl INT CHECK (use_ssl IN(0,1)) NOT NULL DEFAULT 0,
          max_latency_ms INT UNSIGNED CHECK (max_latency_ms>=0) NOT NULL DEFAULT 0,
          comment VARCHAR NOT NULL DEFAULT '',
          PRIMARY KEY (hostgroup_id, hostname, port) )
      1 row in set (0.00 sec)
      Note the max_connections INT CHECK (max_connections >=0) NOT NULL DEFAULT 1000.

      I may not be understanding this correctly, but I think it means the ProxySQL cluster can open up to 1,000 connections to my MySQL server, which is only configured to allow 151. Wouldn't it be sensible for ProxySQL not to try to open more connections to the underlying SQL servers than they're configured to accept? Whether that means having the operator increase max_connections on the underlying MySQL servers or having it lower max_connections in ProxySQL, out of the box this seems unsafe.
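
      For what it's worth, the per-server limit can be lowered to match the backend using ProxySQL's standard admin workflow (hedged: the operator may overwrite this on its next reconciliation, so it's a diagnostic step rather than a fix):

      Code:
      mysql> UPDATE mysql_servers SET max_connections = 151;
      mysql> LOAD MYSQL SERVERS TO RUNTIME;
      mysql> SAVE MYSQL SERVERS TO DISK;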

      There's also the potential memory overhead of each connection on the MySQL server to consider, due to per-connection buffers - allowing an extra 849 connections when I know my application doesn't need them seems unwise, though perhaps it's not an issue if they're unused.

      In my testing I saw a failure scenario where, after my application opened (and closed) a bunch of connections, it eventually failed to get connections to the underlying MySQL server. My assumption is that ProxySQL doesn't recycle connections in its pool until it comes under pressure, so it keeps creating connections until it hits that 1,000 limit. Unfortunately, I tried to reproduce this today and failed.
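
      If that assumption is right, ProxySQL's mysql-free_connections_pct admin variable (the percentage of max_connections it keeps open as idle connections in the pool) may be the knob involved - I haven't verified that changing it alters this behaviour, but as a sketch:

      Code:
      mysql> SET mysql-free_connections_pct = 5;
      mysql> LOAD MYSQL VARIABLES TO RUNTIME;
      mysql> SAVE MYSQL VARIABLES TO DISK;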

      I'll update this if I can reliably reproduce the issue, but I just wanted to make sure the difference in max_connections in the SQL server and ProxySQL isn't an issue.
