LRU_list_mutex contention - Percona server 5.1.54


  • LRU_list_mutex contention - Percona server 5.1.54

    We have run into problems with MySQL Community Edition 5.1.53: our DB traffic often seems to be serialized down to single-thread concurrency. For example, 5 demanding SELECT queries are running in the processlist, but the HW is idle (1 CPU core used, loadavg 1.2, nearly no disk usage). This seems to be worse when executing more demanding SELECT queries; the OLTP workload is fine.
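
    (For reference, the behaviour can be watched with something like the following; mpstat comes from the sysstat package:)

        mysql -e "SHOW FULL PROCESSLIST"   # the demanding SELECTs sit here, typically in state 'Sending data'
        mpstat -P ALL 1                    # per-core CPU usage; only one core is busy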

    I examined SHOW INNODB STATUS and saw a lot of buffer pool mutex contention in the SEMAPHORES section (buf0buf.c).
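
    (The SEMAPHORES section can be pulled out on its own with something like this; the sed range is approximate and would also catch a LATEST DETECTED DEADLOCK section if one is present:)

        mysql -e "SHOW ENGINE INNODB STATUS\G" | sed -n '/^SEMAPHORES$/,/^TRANSACTIONS$/p'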

    I have read that XtraDB implements a split of this mutex, but after upgrading to Percona Server (5.1.54-rel12.5-log) the situation is no better and I see this:



    SEMAPHORES
    ----------
    OS WAIT ARRAY INFO: reservation count 359058, signal count 835663
    --Thread 1306868032 has waited at buf/buf0buf.c line 2106 for 0.0000 seconds the semaphore:
    Mutex at 0xdcdcc0 '&LRU_list_mutex', lock var 1
    waiters flag 1
    --Thread 1301276992 has waited at ./include/buf0flu.ic line 64 for 0.0000 seconds the semaphore:
    Mutex at 0xdcdcc0 '&LRU_list_mutex', lock var 1
    waiters flag 1
    --Thread 1248028992 has waited at srv/srv0srv.c line 3017 for 0.0000 seconds the semaphore:
    Mutex at 0x2abcd704b468 '&log_sys->mutex', lock var 1
    waiters flag 1
    Mutex spin waits 15957200, rounds 19706535, OS waits 254203
    RW-shared spins 2696416, OS waits 59792; RW-excl spins 79032, OS waits 38879
    Spin rounds per wait: 1.23 mutex, 2.96 RW-shared, 38.01 RW-excl



    iostat
    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
               4.25    0.00    0.04    0.04    0.00   95.67

    Device:  rrqm/s  wrqm/s    r/s    w/s   rkB/s    wkB/s avgrq-sz avgqu-sz  await  svctm  %util
    sda        0.00    0.00   0.00   1.00    0.00    16.00    32.00     0.00   0.00   0.00   0.00
    sdc        0.00    0.00   8.00  69.00   96.00  1136.00    32.00     0.02   0.31   0.10   0.80


    Is this something standard, or should I blame it on a HW problem? Is there any test case, such as sysbench, that could find such problems?
    The system is a Supermicro with 2x Intel X5670 (24 cores including HT), Intel SSD drives plus an HDD RAID for logs, a 70 GB buffer pool, adaptive hash index disabled, query cache disabled, and 4000 MB of InnoDB logs in total.
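
    For the record, a sysbench 0.4 read-only OLTP run at increasing thread counts is what I would use to see whether SELECT throughput scales with concurrency; the credentials and table size below are just placeholders:

        sysbench --test=oltp --mysql-user=sbtest --mysql-password=secret \
                 --mysql-db=sbtest --oltp-table-size=1000000 prepare

        # read-only run; repeat with --num-threads=1,4,8,16 and compare transactions/sec
        sysbench --test=oltp --oltp-read-only=on --num-threads=16 \
                 --max-time=60 --max-requests=0 \
                 --mysql-user=sbtest --mysql-password=secret --mysql-db=sbtest run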

    I have tried setting innodb_thread_concurrency to 12, 20, and zero, but without any effect. Setting CPU affinity to the first 12 cores makes no difference either. The OS is CentOS, kernel 2.6.18-194.32.1.el5.
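
    For completeness, the relevant part of my.cnf corresponds roughly to this (values reconstructed from the description above, not copied verbatim):

        [mysqld]
        innodb_buffer_pool_size    = 70G
        innodb_adaptive_hash_index = 0
        query_cache_size           = 0
        query_cache_type           = 0
        innodb_log_files_in_group  = 2
        innodb_log_file_size       = 2000M    # 4000 MB in total
        innodb_thread_concurrency  = 0        # also tried 12 and 20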

    Thanks for any response, Vojtech

  • #2
    SHOW ENGINE INNODB STATUS output

    • #3
      try innodb_buffer_pool_instances
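
      In MySQL 5.5 that would be something along these lines (the instance count here is just an example):

          [mysqld]
          # 5.5+ only: splits the buffer pool into N independently locked instances
          innodb_buffer_pool_size      = 70G
          innodb_buffer_pool_instances = 8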

      • #4
        Gmouse, innodb_buffer_pool_instances is only available since 5.5, am I right? I still don't feel that 5.5 is stable enough. Percona claims they solved the buffer pool mutex contention in another (better) way; I wonder whether they meant XtraDB in the 5.1 or the 5.5 release.

        • #5
          Today I converted all COMPRESSED tables back to COMPACT (see the sketch below) and all problems are GONE!
          Everything runs perfectly now, our hardware is properly used, and MySQL scales well.

          I think compression is useless until InnoDB:
          - makes it scale well
          - makes ALTER TABLE multithreaded. If you need to compress some tables, it is because they are large; if the ALTER takes 20+ hours, maintenance becomes a nightmare.
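
          The conversion for a single table looks roughly like this (the table name is just a placeholder; the statement rebuilds the whole table, which is why it takes so long on large tables):

              -- 'orders' is a placeholder table name
              ALTER TABLE orders ROW_FORMAT=COMPACT KEY_BLOCK_SIZE=0;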

          • #6
            krteq, that is a very interesting finding. I would not be surprised if we can solve this bug for you. Please contact our sales department at http://www.percona.com/contact/sales/ if this is something you are interested in having us fix.
