  • InnoDB Lock Time

    Hi,

    I'm running a DELETE on a table with 150 million rows and the performance seems horrible. It ran for over 24 hours and the status remained at "Sending data". We're using MySQL 5.0.51a on Ubuntu, on a dual-core machine with 8 GB of RAM.

    Also using mysqlreport I get:

    [code]

    MySQL 5.0.51a-3ubuntu5. uptime 114 4:13:48 Sat Apr 3 17:47:06 2010

    __ Key _________________________________________________________________
    Buffer used 418.64M of 512.00M %Used: 81.76
    Current 93.36M %Usage: 18.24
    Write hit 55.06%
    Read hit 99.93%

    __ Questions ___________________________________________________________
    Total 2.07G 209.4/s
    DMS 1.26G 127.8/s %Total: 61.01
    Com_ 806.45M 81.7/s 39.03
    -Unknown 975.23k 0.1/s 0.05
    COM_QUIT 157.04k 0.0/s 0.01
    Slow (1) 64.17k 0.0/s 0.00 %DMS: 0.01 Log: ON
    DMS 1.26G 127.8/s 61.01
    SELECT 851.31M 86.3/s 41.20 67.54
    INSERT 275.07M 27.9/s 13.31 21.82
    DELETE 104.26M 10.6/s 5.05 8.27
    UPDATE 29.80M 3.0/s 1.44 2.36
    REPLACE 35.24k 0.0/s 0.00 0.00
    Com_ 806.45M 81.7/s 39.03
    commit 323.40M 32.8/s 15.65
    set_option 313.81M 31.8/s 15.19
    begin 164.15M 16.6/s 7.94

    __ SELECT and Sort _____________________________________________________
    Scan 34.37M 3.5/s %SELECT: 4.04
    Range 547.32k 0.1/s 0.06
    Full join 1.51k 0.0/s 0.00
    Range check 0 0/s 0.00
    Full rng join 2 0.0/s 0.00
    Sort scan 248.38k 0.0/s
    Sort range 60.26k 0.0/s
    Sort mrg pass 3.16k 0.0/s

    __ Table Locks _________________________________________________________
    Waited 734.30k 0.1/s %Total: 0.05
    Immediate 1.43G 145.0/s

    __ Tables ______________________________________________________________
    Open 64 of 64 %Cache: 100.00
    Opened 870.03k 0.1/s

    __ Connections _________________________________________________________
    Max used 101 of 100 %Max: 101.00
    Total 188.56k 0.0/s

    __ Created Temp ________________________________________________________
    Disk table 725.12k 0.1/s
    Table 3.70M 0.4/s Size: 32.0M
    File 2.57k 0.0/s

    __ Threads _____________________________________________________________
    Running 18 of 26
    Cached 3 of 8 %Hit: 89.12
    Created 20.52k 0.0/s
    Slow 0 0/s

    __ Aborted _____________________________________________________________
    Clients 63.04k 0.0/s
    Connects 190 0.0/s

    __ Bytes _______________________________________________________________
    Sent 1.81T 183.0k/s
    Received 408.04G 41.4k/s

    __ InnoDB Buffer Pool __________________________________________________
    Usage 4.88G of 4.88G %Used: 100.00
    Read hit 99.94%
    Pages
    Free 0 %Total: 0.00
    Data 316.67k 98.96 %Drty: 0.98
    Misc 3332 1.04
    Latched 276 0.09
    Reads 105.75G 10.7k/s
    From file 58.22M 5.9/s 0.06
    Ahead Rnd 1901825 0.2/s
    Ahead Sql 467742 0.0/s
    Writes 7.69G 779.7/s
    Flushes 63.25M 6.4/s
    Wait Free 0 0/s

    __ InnoDB Lock _________________________________________________________
    Waits 2211 0.0/s
    Current 0
    Time acquiring
    Total 7671513 ms
    Average 3469 ms
    Max 121554 ms

    __ InnoDB Data, Pages, Rows ____________________________________________
    Data
    Reads 73.85M 7.5/s
    Writes 21.40M 2.2/s
    fsync 5.16M 0.5/s
    Pending
    Reads 1
    Writes 0
    fsync 0

    Pages
    Created 21.32M 2.2/s
    Read 168.07M 17.0/s
    Written 63.25M 6.4/s

    Rows
    Deleted 61.03M 6.2/s
    Inserted 1.43G 145.4/s
    Read 81.79G 8.3k/s
    Updated 144.89M 14.7/s
    [/code]

    Does an average time of 3469 ms per InnoDB lock wait seem like a major problem? Any other insight would be appreciated.

    Thanks

  • #2
    It's hard to say what could be happening without seeing the query and the table structure.
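
    For example, SHOW CREATE TABLE prints the exact DDL, indexes included; your_table below is just a placeholder for the real table name:

    [code]
    show create table your_table\G
    [/code]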

    • #3
      Sorry, I should have posted that. The tables look like this:


      [code]
      CREATE TABLE `cars_articles` (
        `car_id` int(11) NOT NULL DEFAULT '0',
        `article_id` int(11) NOT NULL DEFAULT '0',
        `created_at` datetime DEFAULT NULL,
        `updated_at` datetime DEFAULT NULL,
        KEY `idx_cars_articles_car_id` (`car_id`),
        KEY `idx_cars_articles_art_id` (`article_id`),
        KEY `index_cars_articles_on_car_id_and_article_id` (`car_id`,`article_id`)
      ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

      CREATE TABLE `articles` (
        `id` int(11) NOT NULL AUTO_INCREMENT,
        `title` varchar(255) DEFAULT NULL,
        `markup` mediumtext NOT NULL,
        `version` varchar(255) NOT NULL DEFAULT 'default_version',
        `created_on` datetime NOT NULL,
        `updated_on` datetime NOT NULL,
        PRIMARY KEY (`id`),
        KEY `index_articles_on_uuid` (`uuid`),
        KEY `index_articles_on_version` (`version`)
      ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

      CREATE TABLE `cars` (
        `id` int(11) NOT NULL AUTO_INCREMENT,
        `year` int(11) NOT NULL DEFAULT '0',
        `oem_id` int(11) NOT NULL DEFAULT '0',
        `make_id` int(11) NOT NULL,
        `model` varchar(50) NOT NULL DEFAULT '',
        `manufacturer_id` int(11) DEFAULT NULL,
        `created_on` datetime NOT NULL,
        `updated_on` datetime NOT NULL,
        PRIMARY KEY (`id`),
        UNIQUE KEY `index_cars_on_oem_id_and_oem_vkey` (`oem_id`,`oem_vkey`),
        KEY `fk_cars_make_id` (`make_id`),
        KEY `index_cars_on_uuid` (`uuid`),
        KEY `index_cars_on_manu_id` (`manufacturer_id`),
        CONSTRAINT `fk_cars_make_id` FOREIGN KEY (`make_id`) REFERENCES `makes` (`id`)
      ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
      [/code]


      The delete query looks like this:

      [code]
      delete cars_articles
      from cars_articles
      join articles a on a.id = cars_articles.article_id
        and a.version in ('beta_1','legacy')
      join cars c on c.id = cars_articles.car_id
        and c.manufacturer_id is not null;
      [/code]


      There are 150 million rows in the cars_articles table, 50K in the cars table and 5 million in the articles table.

      • #4
        Aha, good thing I asked :) That's totally different from deleting from just one table. What does an EXPLAIN of an equivalent SELECT say? Please use \G so the output is vertical.
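
        For instance, something along these lines should do it (just a sketch that swaps the DELETE target for a SELECT over the same joins):

        [code]
        explain
        select cars_articles.*
        from cars_articles
        join articles a on a.id = cars_articles.article_id
          and a.version in ('beta_1','legacy')
        join cars c on c.id = cars_articles.car_id
          and c.manufacturer_id is not null\G
        [/code]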

        • #5
          Hi,

          Here it is:


          [code]
          *************************** 1. row ***************************
                     id: 1
            select_type: SIMPLE
                  table: c
                   type: range
          possible_keys: PRIMARY,index_cars_on_manufacturer_id
                    key: index_cars_on_manufacturer_id
                key_len: 5
                    ref: NULL
                   rows: 26603
                  Extra: Using where; Using index
          *************************** 2. row ***************************
                     id: 1
            select_type: SIMPLE
                  table: cars_articles
                   type: ref
          possible_keys: idx_cars_fct_articles_car_id,idx_cars_fct_articles_art_id,index_cars_articles_on_car_id_and_article_id
                    key: idx_cars_fct_articles_car_id
                key_len: 4
                    ref: cartalk.c.id
                   rows: 1063
                  Extra:
          *************************** 3. row ***************************
                     id: 1
            select_type: SIMPLE
                  table: f
                   type: eq_ref
          possible_keys: PRIMARY,index_articles_on_version
                    key: PRIMARY
                key_len: 4
                    ref: cartalk.cars_articles.article_id
                   rows: 1
                  Extra: Using where
          3 rows in set (0.00 sec)
          [/code]


          Thanks

          • #6
            It estimates that this will match 1063 * 26603 rows (roughly 28 million). That's a lot.

            InnoDB is optimized for small, short transactions that commit quickly and don't roll back. I'd do this in small chunks.
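
            MySQL won't let you use LIMIT in a multi-table DELETE, so one way to chunk it is by ranges of car_id, something like this (just a sketch; the range bounds are arbitrary, so pick ranges that delete a few thousand rows per statement and commit between chunks):

            [code]
            -- repeat with the next car_id range until the whole table has been covered
            delete cars_articles
            from cars_articles
            join articles a on a.id = cars_articles.article_id
              and a.version in ('beta_1','legacy')
            join cars c on c.id = cars_articles.car_id
              and c.manufacturer_id is not null
            where cars_articles.car_id between 1 and 1000;
            [/code]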
