OOM problem

  • OOM problem

    Hi, I'm having some annoying trouble with my PXC installation.
    I have a 3-node cluster and, just for debugging, only one node is running right now.
    Each node is an EC2 m1.large instance with Ubuntu 12.10 Server 64-bit and 7.5GB of memory.

    My configuration contains the following variables:

    max_connections = 500
    open_files_limit = 500000
    transaction-isolation = 'READ-COMMITTED'

    join_buffer_size = 128
    table_cache = 96
    table_open_cache = 64
    thread_cache_size = 128
    thread_stack = 256K
    tmp_table_size = 256M
    max_allowed_packet = 500M
    read_buffer_size = 128K

    innodb_flush_method = ALL_O_DIRECT
    innodb_buffer_pool_size = 4000M # recommended: ~ 60% - 70% of RAM
    innodb_log_buffer_size = 8M
    innodb_log_file_size = 50M
    innodb_support_xa = 0 # OK with Galera synchronous replication
    innodb_import_table_from_xtrabackup = 1 # allows restoring a single table
    innodb_flush_log_at_trx_commit = 0 # speeds things up, can lose 1 second worth of transactions if MySQL crashes.
    innodb_doublewrite = 1 # ensures an incremental state transfer will be tried if possible
    innodb_max_dirty_pages_pct = 50
    innodb_file_per_table

    With this configuration my server usually has about 1.9GB of free memory, but roughly once a day the oom-killer kills mysqld because it has the highest score.
    For this reason I have a cron job that adjusts the mysqld OOM score to prevent it from being killed (see the sketch below).
    Unfortunately, after 2 days the oom-killer was invoked again despite the free memory, and it killed all the other processes, causing a kernel panic.
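
    A minimal sketch of that kind of cron job (the schedule and score value here are illustrative; newer kernels expose /proc/<pid>/oom_score_adj, older ones /proc/<pid>/oom_adj):

    # illustrative /etc/cron.d entry: lower mysqld's OOM score every 5 minutes
    */5 * * * * root pid=$(pidof mysqld) && echo -1000 > /proc/$pid/oom_score_adj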

    Do you have any hints for solving this problem?
    Do you think it's a good idea to disable overcommit?
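
    For context, disabling overcommit would mean switching the kernel to strict accounting, roughly like this (a sketch; the ratio value is illustrative):

    # sketch: strict overcommit accounting instead of heuristic overcommit
    sysctl -w vm.overcommit_memory=2   # 2 = never overcommit; commits limited by swap + ratio% of RAM
    sysctl -w vm.overcommit_ratio=80   # illustrative; the kernel default is 50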

    Thanks!

  • #2
    Apparently your mysqld process uses too much memory. Note that for PXC nodes the memory requirement is bigger than for a standard MySQL+InnoDB instance, especially if you have large transactions, which can make memory usage grow a lot.
    By the way, what is the reason for that huge setting: max_allowed_packet = 500M?
    I think you should determine what is using a lot of memory in MySQL rather than playing with kernel overcommit settings (a rough way to reason about it is sketched below). I don't believe MySQL will work fine if the kernel refuses to allocate memory for it.
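
    As a starting point, a very rough worst-case estimate is the global buffer pool plus the per-connection buffers times max_connections. This is a simplification; the values below come from the my.cnf posted above, and Galera/gcache and in-memory temp tables add more on top:

    # rough sketch, not an exact formula
    buffer_pool=$((4000 * 1024 * 1024))             # innodb_buffer_pool_size = 4000M
    per_conn=$((128 * 1024 + 128 + 256 * 1024))     # read_buffer_size + join_buffer_size + thread_stack
    total=$((buffer_pool + 500 * per_conn))         # max_connections = 500
    echo "$((total / 1024 / 1024)) MB baseline"     # ~4.2GB before temp tables (up to tmp_table_size each), gcache, etc.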

    • #3
      Hmm, max_allowed_packet has that value just because some time ago we had some BLOB fields. Can this value cause excessive memory usage by PXC? With PXC, what is the right percentage of RAM for the innodb_buffer_pool_size variable? I have also noticed that when mysqldump finishes a backup, the memory used by mysqld has increased by about 1.2GB. Can you explain that?

      • #4
        In my experience so far, with a 1GB buffer pool, memory usage rises to about 1.7GB; with a 4GB buffer pool it has been around 5.7GB, and so on. Maybe this is caused by Galera caches etc. Of course this is still less than your total memory, but if you are running something else on the server, that could explain the OOM problems. How large has the mysqld process been with the 4GB buffer pool? (One way to check is sketched below.)
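
        A quick way to check the process size with standard Linux tools (RSS and VSZ are reported in KB by ps):

        # resident and virtual size of mysqld
        ps -o pid,rss,vsz,cmd -C mysqld
        # or a per-mapping breakdown with a total line at the end
        pmap -x $(pidof mysqld) | tail -n 1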
