
Killed by OOM Killer when '--lock-ddl-per-table' option is specified

  • Original post

    I am facing a problem where xtrabackup, when executed with the `--lock-ddl-per-table` option, is killed by the OOM Killer. It always happens when backing up the largest table in my database.
    According to the xtrabackup STDOUT log, as soon as xtrabackup takes the MDL lock on the large table, disk I/O jumps to an abnormal level and memory usage also climbs. After that, swapping begins, and the process is finally terminated by the OOM Killer.
    The large table has more than 200 million rows and is partitioned.

    This problem does not occur when running without the `--lock-ddl-per-table` option.
    Does anyone know how to fix it?



    My command:
    xtrabackup --backup --user=${my_user} --password=${my_pass} --check-privileges --stream=xbstream --parallel=4 --compress --compress-threads=2 --slave-info --target-dir="${bk_work}" --extra-lsndir="${bk_work}" --lock-ddl-per-table

    version:
    xtrabackup version 2.4.9 based on MySQL server 5.7.13 Linux (x86_64) (revision id: a467167cdd4)

    OOM log:
    See oom.txt(attached).

  • #2
    Hi nato;

    When you run your backups with --lock-ddl-per-table, other connections are likely piling up waiting for metadata locks, which could be bogging down your server and increasing memory usage. While the backup is running, check the process list to see if this is the case. If so, then you might have to either run your backup at a less busy time, or move some other jobs to a different time (i.e. if there is a big batch job scheduled to run during your backup).
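    A quick way to check for the pile-up described above is to query the process list for sessions stuck on metadata locks. This is a minimal sketch; `mdl_wait_query` is a hypothetical helper name, and the `watch`/`mysql` usage assumes a reachable MySQL 5.7 server with credentials configured (e.g. in ~/.my.cnf):

    ```shell
    # Print a query that lists sessions waiting on table metadata locks
    # (the state string "Waiting for table metadata lock" is what MySQL 5.7
    # reports in the process list for MDL waits).
    mdl_wait_query() {
      echo "SELECT id, user, time, state, info
            FROM information_schema.processlist
            WHERE state LIKE 'Waiting for table metadata lock%';"
    }

    # Hypothetical usage while the backup runs (needs a live server):
    #   watch -n 5 "mysql -N -e \"\$(mdl_wait_query)\""
    mdl_wait_query
    ```

    If that query returns a growing list of sessions while xtrabackup holds its per-table locks, the load Scott describes is the likely culprit.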

    There is one bug report for this same situation; however, the details are sparse, and they are likely hitting the same issue (load during the backup).

    https://jira.percona.com/browse/PXB-1491
    Scott Nemes
    http://www.linkedin.com/in/scottnemes



    • #3
      Hi.

      I tried starting a backup after stopping all other user processes, jobs, and replication, but the problem is still not resolved.
      The problem reported in that ticket seems to be the same, so I will follow up in that ticket.
      Thanks.



      • #4
        I have the same issue, and it's the xtrabackup program that's consuming the memory, not mysqld (MySQL runs with no issues during the backup). In my case the issue happens on a partitioned table with ~50 million rows.
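        One way to confirm which process is actually growing before the OOM kill is to sample its resident set size while the backup runs. This is a minimal Linux-only sketch; `rss_kb` is a hypothetical helper, and the usage loop assumes `pgrep` finds a running xtrabackup process:

        ```shell
        # Report the resident set size (in kB) of a process, via ps(1).
        rss_kb() {
          ps -o rss= -p "$1" | tr -d ' '
        }

        # Hypothetical usage: log xtrabackup's RSS every 5 seconds until it exits.
        #   pid=$(pgrep -o xtrabackup)
        #   while kill -0 "$pid" 2>/dev/null; do
        #     echo "$(date +%T) rss_kb=$(rss_kb "$pid")"
        #     sleep 5
        #   done
        ```

        Comparing the same sampling for the mysqld pid makes it easy to show which of the two is consuming the memory, which is useful detail to attach to the JIRA ticket.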
