Tmp tables


  • Tmp tables


    I have configured an XtraDB Cluster with three nodes. All three servers have the same MySQL configuration, 32 GB RAM, and CentOS 5.
    The server version is 5.5.29-23.7.1 (64 bit).

    During load testing we get the "Too many connections" error. I found the MySQL process list filled with statements in the "Creating tmp table" state, which drives the connection count up to the max_connections threshold of 1000.

    The "Creating tmp table" issue occurs randomly on one server; the other two work as expected.

    Please let me know whether increasing the tmp table size will fix this issue, or share any thoughts on why it happens.
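    While the problem is live, one way to measure it is to count how many processlist threads are stuck in the "Creating tmp table" state. A minimal sketch; the inline sample below stands in for real `mysql -e 'SHOW FULL PROCESSLIST'` output (the rows are made up for illustration):

```shell
# Hypothetical sample standing in for SHOW FULL PROCESSLIST output.
sample='1  app  10.0.0.5  db  Query  4  Creating tmp table  SELECT ...
2  app  10.0.0.6  db  Query  2  Creating tmp table  SELECT ...
3  app  10.0.0.7  db  Sleep  0  NULL'

# Count threads currently materializing a temporary table.
stuck=$(printf '%s\n' "$sample" | grep -c 'Creating tmp table')
echo "$stuck"   # → 2
```

    Watching this count against max_connections shows how fast the backlog builds.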

    The current my.cnf values:

    ## Temp Tables
    tmp-table-size = 256M
    max-heap-table-size = 128M

    ## Networking
    max-connections = 1000
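    One thing worth noting about the settings above: MySQL caps an in-memory temporary table at the minimum of tmp_table_size and max_heap_table_size, so the effective in-memory limit here is 128M, not 256M. A quick sketch of the arithmetic:

```shell
# Effective in-memory tmp table limit = min(tmp_table_size, max_heap_table_size).
tmp_table_size=$((256 * 1024 * 1024))
max_heap_table_size=$((128 * 1024 * 1024))
effective=$(( tmp_table_size < max_heap_table_size ? tmp_table_size : max_heap_table_size ))
echo "$((effective / 1024 / 1024))M"   # → 128M
```

    Raising tmp-table-size alone therefore has no effect unless max-heap-table-size is raised with it.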

  • #2
    Your tmp table sizes are already quite big, and with 1000 connections all in this state it is bound to cause problems even in production. What kind of queries are you running? I suggest looking at the queries first, then at your tmpdir: is it going to disk, or consuming too much RAM and swapping?
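    One way to check whether tmp tables are going to disk is to compare Created_tmp_disk_tables against Created_tmp_tables. A sketch; the sample values below are made up and stand in for real `mysql -e "SHOW GLOBAL STATUS LIKE 'Created_tmp%'"` output:

```shell
# Hypothetical sample standing in for SHOW GLOBAL STATUS output.
status='Created_tmp_disk_tables 1200
Created_tmp_files 40
Created_tmp_tables 5000'

# A high disk/total ratio means queries are spilling to tmpdir.
disk=$(printf '%s\n' "$status" | awk '/^Created_tmp_disk_tables/ {print $2}')
total=$(printf '%s\n' "$status" | awk '/^Created_tmp_tables/ {print $2}')
echo "disk tmp tables: $disk of $total"
```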
    Our documentation has a lot of answers to common questions about Percona software; have you checked there before posting the question here?

    Join us at the annual Percona Live MySQL Users Conference - http://www.percona.com/live/mysql-conference-2014/


    • #3
      Hello revin,

      Thank you for the details.

      The query uses SELECT, JOIN, GROUP BY, and ORDER BY clauses. The following details were taken from the slow log during the issue:

      # Query_time: 5.976164 Lock_time: 0.000029 Rows_sent: 1 Rows_examined: 107 Rows_affected: 0 Rows_read: 107
      # Bytes_sent: 1045 Tmp_tables: 1 Tmp_disk_tables: 1 Tmp_table_sizes: 192444
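      The Tmp_table_sizes value in that log line (192444 bytes, roughly 188 KB) is far below both configured limits, so the spill to disk is probably not size-driven; one common cause in MySQL 5.5 is a tmp table containing TEXT or BLOB columns, which always goes to disk regardless of size. A small sketch extracting that counter from the log line:

```shell
# The slow-log line quoted above, as a sample input.
line='# Bytes_sent: 1045 Tmp_tables: 1 Tmp_disk_tables: 1 Tmp_table_sizes: 192444'

# Pull out the value following the Tmp_table_sizes: field.
size=$(printf '%s\n' "$line" | awk '{for (i = 1; i <= NF; i++) if ($i == "Tmp_table_sizes:") print $(i+1)}')
echo "tmp table size: $((size / 1024)) KB"   # → tmp table size: 187 KB
```

      Checking the SELECT list for TEXT/BLOB columns would confirm or rule that out.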

      The query creates on-disk tmp tables and consumes RAM. The servers are behind a hardware load balancer and requests are assigned round-robin; I have verified that requests are evenly distributed across the cluster. I wonder why this issue occurs randomly on one server while the others have no issues at the same time. When the issue occurs on db1, it consumes less RAM than the other servers, yet it drops connections.

      [root@db1] ~ >> free -m
      total used free shared buffers cached
      Mem: 30457 19492 10965 0 1216 8467
      -/+ buffers/cache: 9808 20649
      Swap: 1913 0 1913

      [root@db2] ~ >> free -m
      total used free shared buffers cached
      Mem: 30457 29780 677 0 1408 7418
      -/+ buffers/cache: 20952 9505
      Swap: 1913 0 1913

      [root@db3] ~ >> free -m
      total used free shared buffers cached
      Mem: 30457 29895 562 0 1219 7677
      -/+ buffers/cache: 20999 9458
      Swap: 1913 0 1913
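      When reading the free -m output above, the "-/+ buffers/cache" row is the meaningful one, since page-cache memory is reclaimable on demand. A sketch parsing the db1 figures quoted above:

```shell
# The db1 rows from the free -m output above, as a sample input.
freem='Mem: 30457 19492 10965 0 1216 8467
-/+ buffers/cache: 9808 20649'

# Column 4 of the -/+ buffers/cache row is memory actually available.
real_free=$(printf '%s\n' "$freem" | awk '/buffers\/cache/ {print $4}')
echo "effectively free: ${real_free} MB"   # → effectively free: 20649 MB
```

      So db1 has roughly 20 GB effectively free while db2/db3 sit near 9.5 GB, which supports the observation that db1 drops connections without being memory-starved.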

      I have set vm.swappiness = 0 on all the servers to minimize swapping.
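      A side note on that setting: vm.swappiness = 0 discourages swapping but does not fully disable swap; the kernel can still swap under memory pressure (swapoff -a disables it outright). A configuration sketch, to be run as root:

```shell
sysctl vm.swappiness                           # show the current value
sysctl -w vm.swappiness=0                      # apply at runtime
echo 'vm.swappiness = 0' >> /etc/sysctl.conf   # persist across reboots
```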