Performance problem on Linux


  • Performance problem on Linux

    Hi
    I have a website running PHP + MySQL (Percona-Server-server-55-5.5.15-rel21.0.158.rhel6.x86_64)
    Today I noticed a high %system load on my server, caused by mysqld. The database tables use the MyISAM engine, and the database files are located on an SSD.

    Config:

    [mysqld]
    user                      = mysql
    pid-file                  = /var/run/mysqld/mysqld.pid
    socket                    = /var/run/mysqld/mysqld.sock
    port                      = 3306
    basedir                   = /usr
    datadir                   = /var/lib/mysql
    tmpdir                    = /tmp
    language                  = /usr/share/mysql/english
    skip-external-locking
    bind-address              = 127.0.0.1
    key_buffer                = 16M
    max_allowed_packet        = 16M
    thread_stack              = 192K
    thread_cache_size         = 8
    myisam-recover            = BACKUP
    max_connections           = 100
    table_cache               = 256
    max_heap_table_size       = 16M
    thread_cache_size         = 4
    query_cache_size          = 20M
    tmp_table_size            = 64M
    key_buffer_size           = 4M
    bulk_insert_buffer_size   = 4M
    myisam_sort_buffer_size   = 4M
    myisam_max_sort_file_size = 2G
    myisam_repair_threads     = 1
    myisam_recover
    log-slow-queries          = /var/log/mysql/slow.log
    long_query_time           = 3
    server-id                 = 1
    expire_logs_days          = 10
    max_binlog_size           = 100M


    strace output:

    # strace -c -f -p 22283
    Process 22283 attached with 23 threads - interrupt to quit
    Process 23102 attached (waiting for parent)
    Process 23102 resumed (parent 22283 ready)
    Process 23103 attached (waiting for parent)
    Process 23103 resumed (parent 22283 ready)
    Process 23104 attached (waiting for parent)
    Process 23104 resumed (parent 22283 ready)
    Process 23105 attached (waiting for parent)
    Process 23105 resumed (parent 22283 ready)
    ...
    % time     seconds  usecs/call     calls    errors syscall
    ------ ----------- ----------- --------- --------- ----------------
     42.65   45.960006      127667       360           io_getevents
     29.19   31.456203        1571     20021      4525 futex
     16.69   17.983456         203     88735           read
      3.97    4.280503       30358       141           poll
      3.69    3.973396      993349         4           select
      1.50    1.614578          21     75963           pread
      1.25    1.348565          19     71479           lseek
      0.43    0.459066          25     18151       102 setsockopt
      0.27    0.286388          32      9000           write
      0.21    0.222788          26      8498           time
      0.11    0.115981       28995         4         3 restart_syscall
      0.02    0.020877          29       710           fcntl
      0.01    0.006529          58       112           lstat
      0.01    0.005446          32       170        28 access
      0.01    0.005407          38       142           getsockname
      0.00    0.004816          20       236           pwrite
      0.00    0.004772          34       142           accept
      0.00    0.004429          40       112           close
      0.00    0.004234          38       111           shutdown
      0.00    0.001858          46        40           set_robust_list
      0.00    0.000866          31        28           getcwd
      0.00    0.000838          28        30           open
      0.00    0.000754          19        40           clone
      0.00    0.000404          25        16           mprotect
      0.00    0.000000           0        17           madvise
      0.00    0.000000           0        40           getpeername
      0.00    0.000000           0         2         2 readlink
    ------ ----------- ----------- --------- --------- ----------------
    100.00  107.762160                294304      4660 total


    How can I find out why mysqld spends so much time in kernel syscalls, and where the problem is?

    Thanks.

  • #2
    Remove key_buffer from your configuration, it's an old name for key_buffer_size.
    And increase the key_buffer_size a lot, since I'm guessing that you don't have only 20MB RAM on that machine.
    The rough recommendation is to set key_buffer_size to about 25% of the available RAM on the machine. If you are running a web server on the same machine, deduct its memory usage from the available RAM first.
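    As a rough sketch of that sizing rule (the RAM and web-server figures below are made-up examples, not values from this thread):

```python
# Back-of-the-envelope key_buffer_size sizing, following the
# "~25% of available RAM" rule of thumb. All figures are hypothetical.
def suggest_key_buffer_mb(total_ram_mb, other_services_mb=0):
    """Return ~25% of the RAM left over after other services."""
    available_mb = total_ram_mb - other_services_mb
    return int(available_mb * 0.25)

# e.g. a 4 GB machine whose web server uses about 1 GB:
print(suggest_key_buffer_mb(4096, 1024))  # -> 768 (MB)
```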

    Another suggestion is to lower long_query_time (fractions of a second are allowed) so that more slow queries turn up in the slow query log. Also enable the log-queries-not-using-indexes variable so that unoptimized queries are logged regardless of execution time.
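    Put together, those logging settings would look something like this in my.cnf (the 0.5-second threshold is just an example value; the option names are the MySQL 5.5 ones used elsewhere in this thread):

```ini
[mysqld]
log-slow-queries              = /var/log/mysql/slow.log
long_query_time               = 0.5
log-queries-not-using-indexes
```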

    Quote:
    key_buffer = 16M
    ...
    key_buffer_size = 4M
    ..
    log-slow-queries=/var/log/mysql/slow.log
    long_query_time = 3



    • #3
      First of all, I want to thank you for your help!
      I've made the changes you suggested, but I can still observe a high %system (15 to 30) in the iostat output. What could it be? Which system calls cause mysqld to spend so much time in the kernel?



      • #4
        Could you output some lines of iostat -dx 5?



        • #5
          Could you also post the output from SHOW GLOBAL STATUS in an attachment to this thread?

          And:
          1. How much RAM do you have on the machine?
          2. How large is your database?



          • #6
            OK.

            # iostat -dx 5 /dev/dm-11
            Linux 2.6.32.45-grsec (2baksa.net)  10/25/2011  _x86_64_  (4 CPU)

            Device: rrqm/s wrqm/s  r/s  w/s rsec/s wsec/s avgrq-sz avgqu-sz  await  svctm  %util
            dm-11     0.00   0.00 0.05 2.81   0.96  22.47     8.19     0.19  68.10  58.26  16.66
            dm-11     0.00   0.00 0.00 1.40   0.00  11.20     8.00     0.25   0.14 176.43  24.70
            dm-11     0.00   0.00 0.00 1.60   0.00  12.80     8.00     0.72 461.62 449.12  71.86
            dm-11     0.00   0.00 0.00 1.60   0.00  12.80     8.00     0.82 590.88 515.25  82.44
            dm-11     0.00   0.00 0.00 1.80   0.00  14.40     8.00     0.57 332.00 314.22  56.56
            dm-11     0.00   0.00 0.00 1.60   0.00  12.80     8.00     0.40 282.12 251.50  40.24
            dm-11     0.00   0.00 0.00 1.20   0.00   9.60     8.00     0.62 535.83 512.67  61.52
            dm-11     0.00   0.00 0.00 1.60   0.00  12.80     8.00     0.35 216.62 216.62  34.66
            dm-11     0.00   0.00 0.00 0.80   0.00   6.40     8.00     0.14 170.00 169.75  13.58
            dm-11     0.00   0.00 0.00 1.20   0.00   9.60     8.00     0.12 100.33 100.33  12.04
            dm-11     0.00   0.00 0.00 3.20   0.00  25.60     8.00     0.15  45.56  45.56  14.58
            dm-11     0.00   0.00 0.00 1.20   0.00   9.60     8.00     0.11  90.00  89.83  10.78
            dm-11     0.00   0.00 0.00 1.40   0.00  11.20     8.00     0.23   0.00 162.57  22.76
            dm-11     0.00   0.00 0.00 1.60   0.00  12.80     8.00     0.19 260.50 117.50  18.80
            dm-11     0.00   0.00 0.00 0.00   0.00   0.00     0.00     0.00   0.00   0.00   0.00
            dm-11     0.00   0.00 0.00 0.80   0.00   6.40     8.00     0.29 358.25 358.25  28.66



            Global status attached

            P.S. The worst queries I have are ones that scan a large set of rows with a large LIMIT offset. I'm now trying to implement something like what was suggested in this question



            • #7
              You should check the slow query log and identify the queries that aren't using an index.

              This:
              Quote:
              Handler_read_rnd_next 25972508
              combined with this:
              Quote:
              Com_select 50468
              tells us that, on average, each of your SELECTs scans about 515 rows (25972508 / 50468).

              And this:
              Quote:
              Select_full_join 2
              Select_full_range_join 0
              Select_range 10174
              Select_range_check 0
              Select_scan 143
              This tells us that some of your queries would benefit from an additional index.
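              The arithmetic behind the per-SELECT scan estimate above can be checked directly (the counter values are the ones quoted from the attached status output):

```python
# Estimate of rows read via table scans per SELECT, computed from the
# two status counters quoted above.
handler_read_rnd_next = 25_972_508  # rows read in data-file (scan) order
com_select = 50_468                 # number of SELECT statements executed

avg_rows_scanned = handler_read_rnd_next / com_select
print(round(avg_rows_scanned))  # -> 515
```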

              Give us an example of the query and the output from SHOW CREATE TABLE and we can suggest an index for you.



              • #8
                Oh. Forgive me. It's not because of me. It's because of this stupid blog engine.
                Let's begin:
                The query scans about 40,000 rows to return 11:

                mysql> EXPLAIN EXTENDED
                    -> SELECT i.inumber as itemid, i.ititle as title, i.ibody as body,
                    ->        m.mname as author, m.mrealname as authorname,
                    ->        UNIX_TIMESTAMP(i.itime) as timestamp, i.itime, i.imore as more,
                    ->        m.mnumber as authorid, c.cname as category, i.icat as catid,
                    ->        i.iclosed as closed
                    -> FROM nucleus_item as i USE INDEX(`TDB-T`),
                    ->      nucleus_member as m, nucleus_category as c
                    -> WHERE i.iauthor=m.mnumber and i.icat=c.catid and i.iblog=1
                    ->   and i.itime<="2011-10-25 15:34:54" and i.idraft=0
                    -> ORDER BY i.itime DESC LIMIT 40297,11;
                +----+-------------+-------+--------+---------------+---------+---------+-----------------------+-------+----------+-------------+
                | id | select_type | table | type   | possible_keys | key     | key_len | ref                   | rows  | filtered | Extra       |
                +----+-------------+-------+--------+---------------+---------+---------+-----------------------+-------+----------+-------------+
                |  1 | SIMPLE      | i     | range  | TDB-T         | TDB-T   | 13      | NULL                  | 68785 |   100.00 | Using where |
                |  1 | SIMPLE      | c     | eq_ref | PRIMARY       | PRIMARY | 4       | nowaru_BakS.i.icat    |     1 |   100.00 |             |
                |  1 | SIMPLE      | m     | eq_ref | PRIMARY       | PRIMARY | 4       | nowaru_BakS.i.iauthor |     1 |   100.00 |             |
                +----+-------------+-------+--------+---------------+---------+---------+-----------------------+-------+----------+-------------+
                3 rows in set, 1 warning (0.00 sec)



                mysql> SHOW CREATE TABLE nucleus_item\G
                *************************** 1. row ***************************
                       Table: nucleus_item
                Create Table: CREATE TABLE `nucleus_item` (
                  `inumber` int(11) NOT NULL AUTO_INCREMENT,
                  `ititle` varchar(255) DEFAULT NULL,
                  `ibody` text NOT NULL,
                  `imore` text,
                  `iblog` int(11) NOT NULL DEFAULT '1',
                  `iauthor` int(11) NOT NULL DEFAULT '0',
                  `itime` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
                  `iclosed` tinyint(2) NOT NULL DEFAULT '0',
                  `idraft` tinyint(2) NOT NULL DEFAULT '0',
                  `ikarmapos` int(11) NOT NULL DEFAULT '0',
                  `icat` int(11) DEFAULT NULL,
                  `ikarmaneg` int(11) NOT NULL DEFAULT '0',
                  `visible` tinyint(1) NOT NULL,
                  PRIMARY KEY (`inumber`),
                  KEY `icat` (`icat`),
                  KEY `itime` (`itime`),
                  KEY `TDB-T` (`itime`,`idraft`,`iblog`),
                  KEY `new` (`itime`,`iblog`,`idraft`,`visible`)
                ) ENGINE=MyISAM AUTO_INCREMENT=72568 DEFAULT CHARSET=cp1251 PACK_KEYS=0 ROW_FORMAT=COMPACT
                1 row in set (0.00 sec)

                mysql> SHOW CREATE TABLE nucleus_member\G
                *************************** 1. row ***************************
                       Table: nucleus_member
                Create Table: CREATE TABLE `nucleus_member` (
                  `mnumber` int(11) NOT NULL AUTO_INCREMENT,
                  `mname` varchar(16) NOT NULL DEFAULT '',
                  `mrealname` varchar(60) DEFAULT NULL,
                  `mpassword` varchar(40) NOT NULL DEFAULT '',
                  `memail` varchar(60) DEFAULT NULL,
                  `murl` varchar(100) DEFAULT NULL,
                  `mnotes` varchar(100) DEFAULT NULL,
                  `madmin` tinyint(2) NOT NULL DEFAULT '0',
                  `mcanlogin` tinyint(2) NOT NULL DEFAULT '1',
                  `mcookiekey` varchar(40) DEFAULT NULL,
                  `deflang` varchar(20) NOT NULL DEFAULT '',
                  PRIMARY KEY (`mnumber`),
                  UNIQUE KEY `mname` (`mname`),
                  KEY `madmin` (`madmin`),
                  KEY `madmin_2` (`madmin`)
                ) ENGINE=MyISAM AUTO_INCREMENT=150025 DEFAULT CHARSET=cp1251 ROW_FORMAT=DYNAMIC
                1 row in set (0.00 sec)

                mysql> SHOW CREATE TABLE nucleus_category\G
                *************************** 1. row ***************************
                       Table: nucleus_category
                Create Table: CREATE TABLE `nucleus_category` (
                  `catid` int(11) NOT NULL AUTO_INCREMENT,
                  `cblog` int(11) NOT NULL DEFAULT '0',
                  `cname` varchar(40) DEFAULT NULL,
                  `cdesc` varchar(200) DEFAULT NULL,
                  PRIMARY KEY (`catid`)
                ) ENGINE=MyISAM AUTO_INCREMENT=18 DEFAULT CHARSET=cp1251
                1 row in set (0.00 sec)


                Please don't kick me for such horrible queries. I tried to improve performance some time ago and modified the engine code to use the 'TDB-T' index, which seems to work better than without it.



                • #9
                  iostat reports 0 reads in total; are you sure that is correct? The device seems pretty busy writing small amounts of data, which does not look like SSD behavior.

                  Your TDB-T index is no more usable here than the itime index. The real problem is the high offset in your LIMIT: http://www.mysqlperformanceblog.com/2008/09/24/four-ways-to-optimize-paginated-displays/



                  • #10
                    # ls -la /dev/mapper/*
                    lrwxrwxrwx. 1 root root      7 Sep 6 11:50 /dev/mapper/1tb-backup -> ../dm-2
                    lrwxrwxrwx. 1 root root      8 Sep 5 06:46 /dev/mapper/main-mysql -> ../dm-11
                    lrwxrwxrwx. 1 root root      7 Sep 5 06:46 /dev/mapper/main-swap_1 -> ../dm-9
                    lrwxrwxrwx. 1 root root      8 Sep 5 06:46 /dev/mapper/main-www -> ../dm-10
                    crw-rw----. 1 root root 10, 59 Sep 5 06:46 /dev/mapper/control
                    lrwxrwxrwx. 1 root root      7 Sep 5 06:46 /dev/mapper/VolGroup-lv_home -> ../dm-7
                    lrwxrwxrwx. 1 root root      7 Sep 5 06:46 /dev/mapper/VolGroup-lv_root -> ../dm-0
                    lrwxrwxrwx. 1 root root      7 Sep 5 06:46 /dev/mapper/VolGroup-lv_swap -> ../dm-1
                    lrwxrwxrwx. 1 root root      7 Sep 5 06:46 /dev/mapper/VolGroup-lv_tmp -> ../dm-6
                    lrwxrwxrwx. 1 root root      7 Sep 5 06:46 /dev/mapper/VolGroup-lv_var_log -> ../dm-5
                    lrwxrwxrwx. 1 root root      7 Sep 5 06:46 /dev/mapper/VolGroup-lv_var_spool_mqueue -> ../dm-4
                    lrwxrwxrwx. 1 root root      7 Sep 5 06:46 /dev/mapper/VolGroup-lv_var_www -> ../dm-3
                    lrwxrwxrwx. 1 root root      7 Sep 5 06:46 /dev/mapper/VolGroup-wuala -> ../dm-8

                    # grep mysql /proc/mounts
                    /dev/mapper/main-mysql /var/lib/mysql ext4 rw,seclabel,nosuid,nodev,noexec,noatime,barrier=1,data=ordered 0 0

                    # pvs
                      PV         VG       Fmt  Attr PSize   PFree
                      /dev/sda5  main     lvm2 a-    51.00g  28.89g
                      /dev/sdb   VolGroup lvm2 a-   931.51g 522.00g
                      /dev/sdc   1tb      lvm2 a-   931.51g 331.51g

                    # vgs
                      VG       #PV #LV #SN Attr   VSize   VFree
                      1tb        1   1   0 wz--n- 931.51g 331.51g
                      main       1   3   0 wz--n-  51.00g  28.89g
                      VolGroup   1   8   0 wz--n- 931.51g 522.00g


                    The 50GB disk is an OCZ Vertex 2 SSD.
                    lshw output:

                    ...
                    *-disk:0
                         description: ATA Disk
                         product: OCZ-VERTEX2
                         physical id: 0
                         bus info: scsi@2:0.0.0
                         logical name: /dev/sda
                         version: 1.28
                         serial: OCZ-0QCNPB6G7X0P94G0
                         size: 51GiB (55GB)
                         capabilities: partitioned partitioned:dos
                         configuration: ansiversion=5 signature=00083d92
                    ...



                    • #11
                      Are you able to modify the query itself in the blog software?

                      If you rewrite it with a derived table, as shown here, you avoid performing 40,000 joins only to throw the rows away:
                      in this version you just scan the index, retrieve the 11 rows that you want, and then join with the two other tables.
                      Note that the SELECT * is not ideal and should be replaced with only the columns this query needs.


                      EXPLAIN EXTENDED
                      SELECT i.inumber as itemid, i.ititle as title, i.ibody as body,
                             m.mname as author, m.mrealname as authorname,
                             UNIX_TIMESTAMP(i.itime) as timestamp, i.itime, i.imore as more,
                             m.mnumber as authorid, c.cname as category, i.icat as catid,
                             i.iclosed as closed
                      FROM ( SELECT * FROM nucleus_item as i2
                             WHERE i2.itime<="2011-10-25 15:34:54" AND i2.idraft=0 AND i2.iblog=1
                             ORDER BY i2.itime DESC LIMIT 40297,11 ) AS i
                      INNER JOIN nucleus_member as m ON i.iauthor=m.mnumber
                      INNER JOIN nucleus_category as c ON i.icat=c.catid
                      ORDER BY i.itime


                      Good luck!



                      • #12
                        I think you are totally right. I'll try to modify the engine code and keep you posted.



                        • #13
                          sterin71 wrote on Mon, 31 October 2011 09:23
                          Are you able to modify the query itself in the blog software?

                          If you rewrite it with a derived table, as shown here, you avoid performing 40,000 joins only to throw the rows away:
                          in this version you just scan the index, retrieve the 11 rows that you want, and then join with the two other tables.
                          Note that the SELECT * is not ideal and should be replaced with only the columns this query needs.


                          EXPLAIN EXTENDED
                          SELECT i.inumber as itemid, i.ititle as title, i.ibody as body,
                                 m.mname as author, m.mrealname as authorname,
                                 UNIX_TIMESTAMP(i.itime) as timestamp, i.itime, i.imore as more,
                                 m.mnumber as authorid, c.cname as category, i.icat as catid,
                                 i.iclosed as closed
                          FROM ( SELECT * FROM nucleus_item as i2
                                 WHERE i2.itime<="2011-10-25 15:34:54" AND i2.idraft=0 AND i2.iblog=1
                                 ORDER BY i2.itime DESC LIMIT 40297,11 ) AS i
                          INNER JOIN nucleus_member as m ON i.iauthor=m.mnumber
                          INNER JOIN nucleus_category as c ON i.icat=c.catid
                          ORDER BY i.itime


                          Good luck!
                          OK, I tried that in the mysql console client, with a slightly different but quite similar query:


                          mysql> EXPLAIN EXTENDED
                              -> SELECT i.inumber as itemid, i.ititle as title, i.ibody as body,
                              ->        m.mname as author, m.mrealname as authorname,
                              ->        UNIX_TIMESTAMP(i.itime) as timestamp, i.itime, i.imore as more,
                              ->        m.mnumber as authorid, c.cname as category, i.icat as catid,
                              ->        i.iclosed as closed
                              -> FROM (SELECT inumber, ititle, ibody, itime, imore, icat, iclosed, iauthor
                              ->       FROM nucleus_item as i2
                              ->       WHERE i2.iblog=1 and i2.itime<="2011-10-31 13:18:10" and i2.idraft=0
                              ->       ORDER BY i2.itime DESC LIMIT 35520, 1) AS i
                              -> INNER JOIN nucleus_member as m ON i.iauthor=m.mnumber
                              -> INNER JOIN nucleus_category as c ON i.icat=c.catid;
                          +----+-------------+------------+--------+-----------------+---------+---------+-------+-------+----------+-------------+
                          | id | select_type | table      | type   | possible_keys   | key     | key_len | ref   | rows  | filtered | Extra       |
                          +----+-------------+------------+--------+-----------------+---------+---------+-------+-------+----------+-------------+
                          |  1 | PRIMARY     | <derived2> | system | NULL            | NULL    | NULL    | NULL  |     1 |   100.00 |             |
                          |  1 | PRIMARY     | m          | const  | PRIMARY         | PRIMARY | 4       | const |     1 |   100.00 |             |
                          |  1 | PRIMARY     | c          | const  | PRIMARY         | PRIMARY | 4       | const |     1 |   100.00 |             |
                          |  2 | DERIVED     | i2         | range  | itime,TDB-T,new | new     | 13      | NULL  | 68958 |   100.00 | Using where |
                          +----+-------------+------------+--------+-----------------+---------+---------+-------+-------+----------+-------------+
                          4 rows in set, 1 warning (1.11 sec)


                          As you can see, it spends most of its time in the subquery, where it filters out a lot of rows with the WHERE clause. Maybe there is a way to make it use only the index. Ideas?



                          • #14
                            And what about the actual execution time compared to the original query?


                            The other option you have is to change the order of the columns in your TDB-T index to: (iblog, idraft, itime)
                            But whether it is better depends on whether filtering on the iblog and idraft columns is more selective than the range filter on itime.
                            I guessed that you have low cardinality on the iblog and idraft columns, hence I didn't suggest it.



                            • #15
                              sterin71 wrote on Mon, 31 October 2011 13:41
                              And what about the actual execution time compared to the original query?


                              The other option you have is to change the order of the columns in your TDB-T index to: (iblog, idraft, itime)
                              But whether it is better depends on whether filtering on the iblog and idraft columns is more selective than the range filter on itime.
                              I guessed that you have low cardinality on the iblog and idraft columns, hence I didn't suggest it.
                              Actual execution time is about the same as before.
                              95% of all records are in one blog and have idraft=0. There are also only about 0.1% of records with itime > NOW().
                              I'll try with the modified index.
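                              A quick back-of-the-envelope check of why a leading (iblog, idraft) index prefix may not help much, using the proportions above (row count approximated from the nucleus_item AUTO_INCREMENT value):

```python
# If ~95% of rows are in the main blog with idraft=0, an index prefix on
# (iblog, idraft) rules out only ~5% of the table before the itime range
# filter applies. The row count below is approximate.
total_rows = 72_567
matching_fraction = 0.95  # rows with iblog=1 and idraft=0

rows_still_matching = int(total_rows * matching_fraction)
print(rows_still_matching)  # close to the ~68,958 rows EXPLAIN estimated
```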

