mysqldump backup times greatly increased with 5.7.20-19

  • mysqldump backup times greatly increased with 5.7.20-19

    Not sure if this is the right place to post this or not.
    After we upgraded from 5.7.19 to 5.7.20-19, MySQL backups for this table jumped from roughly 42 minutes to over 5 hours.
    The backup time for the InnoDB table doubled, but the TokuDB table took more than 5 times longer.
    I saw that a memory leak was fixed in mysqldump, but unless there were other changes there, I don't see that being the issue.

    Any ideas?

    Before the upgrade, on 5.7.19:

    # Time: 2018-01-04T12:22:36.800014Z
    # User@Host: root[root] @ localhost [] Id: 148138
    # Schema: cdr Last_errno: 0 Killed: 0
    # Query_time: 2568.367009 Lock_time: 0.000000 Rows_sent: 523136203 Rows_examined: 523136203 Rows_affected: 0
    # Bytes_sent: 188647575431
    SET timestamp=1515068556;
    SELECT /*!40001 SQL_NO_CACHE */ * FROM `cdr_main`;

    # Time: 2018-01-04T12:22:56.091752Z
    # User@Host: root[root] @ localhost [] Id: 148138
    # Schema: cdr Last_errno: 0 Killed: 0
    # Query_time: 19.015636 Lock_time: 0.000000 Rows_sent: 11859695 Rows_examined: 11859695 Rows_affected: 0
    # Bytes_sent: 239650071
    SET timestamp=1515068576;
    SELECT /*!40001 SQL_NO_CACHE */ * FROM `line_item_ratings`;

    After the upgrade to 5.7.20-19:

    # Time: 2018-01-10T20:18:38.510011Z
    # User@Host: root[root] @ localhost [] Id: 29239
    # Schema: cdr Last_errno: 1681 Killed: 0
    # Query_time: 18269.061056 Lock_time: 0.000000 Rows_sent: 525785773 Rows_examined: 525785773 Rows_affected: 0
    # Bytes_sent: 189596705933
    SET timestamp=1515615518;
    SELECT /*!40001 SQL_NO_CACHE */ * FROM `cdr_main`;


    # Time: 2018-01-10T20:19:24.966083Z
    # User@Host: root[root] @ localhost [] Id: 29239
    # Schema: cdr Last_errno: 1681 Killed: 0
    # Query_time: 44.798296 Lock_time: 0.000000 Rows_sent: 11859695 Rows_examined: 11859695 Rows_affected: 0
    # Bytes_sent: 239650071
    SET timestamp=1515615564;
    SELECT /*!40001 SQL_NO_CACHE */ * FROM `line_item_ratings`;
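
    For scale, the Query_time values above work out to roughly a 7x slowdown for the TokuDB table and about 2.4x for the InnoDB one (quick arithmetic taken straight from the slow-log numbers):

    awk 'BEGIN {
        printf "cdr_main:          %.1fx  (%.0f min -> %.1f h)\n", 18269.06/2568.37, 2568.37/60, 18269.06/3600
        printf "line_item_ratings: %.1fx  (%.1f s  -> %.1f s)\n",  44.80/19.02, 19.02, 44.80
    }'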

    SHOW TABLE STATUS:
    Name: cdr_main
    Engine: TokuDB
    Version: 10
    Row_format: tokudb_zlib
    Rows: 526232788
    Avg_row_length: 323
    Data_length: 169998403381
    Max_data_length: 9223372036854775807
    Index_length: 35156877283
    Data_free: 18446743920195218452
    Auto_increment: 829412264
    Create_time: 2016-07-11 11:16:33
    Update_time: 2018-01-10 17:08:38
    Check_time: NULL
    Collation: latin1_swedish_ci
    Checksum: NULL
    Create_options:
    Comment:

    Name: line_item_ratings
    Engine: InnoDB
    Version: 10
    Row_format: Dynamic
    Rows: 6108827
    Avg_row_length: 35
    Data_length: 214663168
    Max_data_length: 0
    Index_length: 261586944
    Data_free: 5242880
    Auto_increment: NULL
    Create_time: 2016-08-24 11:34:06
    Update_time: NULL
    Check_time: NULL
    Collation: latin1_swedish_ci
    Checksum: NULL
    Create_options:
    Comment:
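
    One thing I can try, to separate mysqldump itself from the server-side read path, is to time the same SELECTs the dump issues directly from the mysql client (rough sketch; the socket path and credentials are placeholders, and the output is simply discarded):

    # Time the raw table scans without mysqldump in the picture
    time mysql --socket=/var/lib/mysql/mysql.sock -uroot -p cdr \
        -e 'SELECT /*!40001 SQL_NO_CACHE */ * FROM cdr_main' > /dev/null
    time mysql --socket=/var/lib/mysql/mysql.sock -uroot -p cdr \
        -e 'SELECT /*!40001 SQL_NO_CACHE */ * FROM line_item_ratings' > /dev/null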

  • #2
    Maybe I should roll back?

    • #3
      https://bugs.launchpad.net/percona-server/+bug/1742751 - we are looking into testing this on our end!
      Kind Regards,
      God Bless,
      Roel

      Roel Van de Paar (UTC+10)
      https://www.linkedin.com/in/roelvandepaar/
      Technical Director - Quality Assurance, Percona

      • #4
        So far, our tests are not showing the issue.

        8 million rows across 4 tables:

        # InnoDB tables
        $ time /home/shahriyar.rzaev/XB_TEST/server_dir/PS010118-5.7.20-18-linux-x86_64-debug/bin/mysqldump -uroot --socket=/home/shahriyar.rzaev/XB_TEST/server_dir/PS010118-5.7.20-18-linux-x86_64-debug/socket.sock --port=12973 --databases bug1742751 bug1742751_2 > /home/shahriyar.rzaev/XB_TEST/dump_5_7_20_18.sql
        Warning: A partial dump from a server that has GTIDs will by default include the GTIDs of all transactions, even those that changed suppressed parts of the database. If you don't want to restore GTIDs, pass --set-gtid-purged=OFF. To make a complete dump, pass --all-databases --triggers --routines --events.

        real 1m30.048s
        user 0m34.070s
        sys 0m1.836s

        # TokuDB tables

        $ time /home/shahriyar.rzaev/XB_TEST/server_dir/PS010118-5.7.20-18-linux-x86_64-debug/bin/mysqldump -uroot --socket=/home/shahriyar.rzaev/XB_TEST/server_dir/PS010118-5.7.20-18-linux-x86_64-debug/socket.sock --port=12973 --databases bug1742751 bug1742751_2 > /home/shahriyar.rzaev/XB_TEST/dump_5_7_20_18.sql
        Warning: A partial dump from a server that has GTIDs will by default include the GTIDs of all transactions, even those that changed suppressed parts of the database. If you don't want to restore GTIDs, pass --set-gtid-purged=OFF. To make a complete dump, pass --all-databases --triggers --routines --events.

        real 0m57.983s
        user 0m33.533s
        sys 0m1.685s

        What OS do you use, and what are your my.cnf settings?

        This looks like it may be an isolated issue on your machine, possibly related to configuration or similar.
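
        For reference, output from something along these lines would cover the OS and configuration details (stock CentOS/RHEL paths assumed; adjust to your setup):

        # OS release, kernel, and the server's effective configuration
        cat /etc/os-release
        uname -r
        my_print_defaults mysqld                       # options mysqld picks up from my.cnf
        mysql -uroot -p -e 'SHOW GLOBAL VARIABLES' > global_variables.txt
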
        Kind Regards,
        God Bless,
        Roel

        Roel Van de Paar (UTC+10)
        https://www.linkedin.com/in/roelvandepaar/
        Technical Director - Quality Assurance, Percona

        • #5
          Hi Roel, thanks for looking into this. What OS/kernel is your testing done on?

          As noted in the bug report:
          It appears to be caused by kernel "kernel-3.10.0-693.11.6.el7.x86_64";
          rolling back to "kernel-3.10.0-693.11.1.el7.x86_64" seems to have resolved the problem.
          This is on CentOS Linux release 7.4.1708 (Core).
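
          In case it helps anyone else, this is roughly how to pin the older kernel on CentOS 7 (the GRUB menu entry title below is just an example; use whatever the awk line prints for the 693.11.1 kernel on your host):

          rpm -q kernel                                        # list installed kernels
          awk -F\' '/^menuentry / {print $2}' /etc/grub2.cfg   # list the GRUB menu entries
          grub2-set-default 'CentOS Linux (3.10.0-693.11.1.el7.x86_64) 7 (Core)'
          reboot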


          • #6
            It seems that you might be seeing performance issues due to the Meltdown and/or Spectre fixes: https://www.percona.com/blog/2018/01...lnerabilities/

            These 'fixes' were introduced in kernel 3.10.0-693.11.6.

            This is interesting, as I have heard reports ranging from under 1% impact up to around 20%, but your numbers suggest a much larger hit for this specific case.
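
            On the RHEL/CentOS kernels that first shipped these fixes, you can check whether the mitigations are actually active through the debugfs knobs Red Hat added (assuming debugfs is mounted at /sys/kernel/debug; 1 means enabled):

            cat /sys/kernel/debug/x86/pti_enabled    # Meltdown / page table isolation
            cat /sys/kernel/debug/x86/ibrs_enabled   # Spectre v2 (IBRS)
            cat /sys/kernel/debug/x86/ibpb_enabled   # Spectre v2 (IBPB)
            # Temporarily disabling KPTI (at your own risk) is one way to confirm the regression:
            # echo 0 > /sys/kernel/debug/x86/pti_enabled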

            • #7
              I was thinking the same thing. Any thoughts on what we should do short of never upgrading?
              I would be happy to provide any data you may need.
