load data local infile causes out of memory killer to kill mysqld

  • load data local infile causes out of memory killer to kill mysqld

    So I have a 3-node cluster running Percona 5.5.29 on 64-bit CentOS 6.3. Each server has 2 GB of RAM, and nothing else runs on it besides MySQL. With mysqld started, free -m shows about 1.6 GB of memory free.

    I have a CSV file of 10 million rows to load into a table with five columns, all VARCHAR, the longest being VARCHAR(255).
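
    For reference, the table and the load statement look roughly like this (table name, column names, and the shorter lengths are placeholders; only "five VARCHAR columns, max 255" matches my real schema):

        CREATE TABLE test.import_target (
            col1 VARCHAR(255),   -- the longest column
            col2 VARCHAR(100),   -- remaining lengths are examples
            col3 VARCHAR(100),
            col4 VARCHAR(50),
            col5 VARCHAR(50)
        ) ENGINE=InnoDB;

        LOAD DATA LOCAL INFILE '/data/rows.csv'
        INTO TABLE test.import_target
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        LINES TERMINATED BY '\n';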

    When attempting to load this into the database I consistently get out-of-memory errors and the OOM killer kills mysqld. I've tried adjusting wsrep_max_ws_size, innodb_buffer_pool_size, bulk_insert_buffer_size, and key_buffer_size, but increasing these values does not help.
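
    This is the sort of thing I experimented with in my.cnf (the values shown are just examples of what I tried, not settings I'd recommend):

        [mysqld]
        # buffers I tried raising and lowering while chasing the OOM kills
        innodb_buffer_pool_size = 512M
        key_buffer_size         = 32M
        bulk_insert_buffer_size = 64M
        # Galera writeset size cap (example value)
        wsrep_max_ws_size       = 1073741824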

    SHOW ENGINE INNODB STATUS reports total memory allocated as 1052 MB.

    What parameters can I tune to get this LOAD DATA INFILE to complete? 10 million rows does not seem like much, especially when it commits every 10K rows. If SHOW ENGINE INNODB STATUS reports about 1 GB allocated and I have over 1.5 GB of free memory, how is LOAD DATA INFILE consuming so much?

    I've even tried using pt-fifo-split to feed the file in smaller chunks, but the OOM killer still whacks mysqld.
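
    The chunked attempt followed the usual pt-fifo-split pattern, roughly like this (file path and table name are examples):

        # feed the 10M-row file through a fifo, 10k rows at a time
        pt-fifo-split --lines 10000 /data/rows.csv &
        sleep 1   # give pt-fifo-split a moment to create the fifo
        while [ -e /tmp/pt-fifo-split ]; do
            mysql --local-infile=1 -e "
                LOAD DATA LOCAL INFILE '/tmp/pt-fifo-split'
                INTO TABLE test.import_target
                FIELDS TERMINATED BY ','"
        done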

    Thanks.

  • #2
    So if I comment out the cluster parameters and run the same LOAD DATA INFILE, it runs fine and doesn't touch swap. Why does the cluster consume so much memory? It chews right through my swap space.
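
    To clarify, by "comment out the cluster parameters" I mean turning the node back into a standalone server; the my.cnf change was along these lines (addresses and names here are examples):

        [mysqld]
        # disabled for the standalone test
        #wsrep_provider        = /usr/lib64/galera/libgalera_smm.so
        #wsrep_cluster_address = gcomm://192.168.0.1,192.168.0.2,192.168.0.3
        #wsrep_cluster_name    = my_cluster
        #wsrep_sst_method      = xtrabackup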
