Innodb performance on Windows 2008R2


    MySQL 4.0.12 (I know, old).
    Production server is Windows 2008 R2 on ESX, dedicated to the DB.
    Test servers (Red Hat 4 and Windows XP) perform fine.
    The production server recently started to perform extremely poorly for our single InnoDB table. All other tables are MyISAM.

    32,000 rows in the table; size on disk is 198 MB.
    select count(*) from table takes 0.07 seconds on the test machines but a full 7 seconds on the production machine.

    A particular query we are working on takes up to 30 seconds on the production machine (but occasionally, about 10% of the time, less than 1 second). On the test machines it takes less than 1 second.
    The query is simple (in pseudo code): select a couple of fields from table where date_fieldA > 'someDate' and date_fieldB < 'someDate' order by field1, field2

    Machine CPU is low and memory usage is reasonable, but disk I/O shoots up when the above query takes 30 seconds.

    I have read www.mysqlperformanceblog.com/2006/07/17/show-innodb-status-walk-through/ and, based on our status below, the SEMAPHORES and BUFFER POOL AND MEMORY sections seem to be the place to start.
    SEMAPHORES looks most suspicious, but I am not sure what to do about it. The spin waits, rounds, shared spins, and OS waits are quite a bit higher on production than on test.

    innodb_buffer_pool_size from show variables says '67108864', but show innodb status says '4096'. That doesn't make sense to me unless the variable value is way too big, but with a 198 MB InnoDB file I would think the actual used pool would be much higher.

    Any help on this is of course greatly appreciated.


    mysql> show variables like '%innodb%';
    +---------------------------------+------------------------+
    | Variable_name                   | Value                  |
    +---------------------------------+------------------------+
    | have_innodb                     | YES                    |
    | innodb_additional_mem_pool_size | 2097152                |
    | innodb_buffer_pool_size         | 67108864               |
    | innodb_data_file_path           | ibdata1:10M:autoextend |
    | innodb_data_home_dir            |                        |
    | innodb_file_io_threads          | 16                     |
    | innodb_force_recovery           | 0                      |
    | innodb_thread_concurrency       | 8                      |
    | innodb_flush_log_at_trx_commit  | 0                      |
    | innodb_fast_shutdown            | ON                     |
    | innodb_flush_method             |                        |
    | innodb_lock_wait_timeout        | 50                     |
    | innodb_log_arch_dir             | .\                     |
    | innodb_log_archive              | OFF                    |
    | innodb_log_buffer_size          | 4194304                |
    | innodb_log_file_size            | 16777216               |
    | innodb_log_files_in_group       | 2                      |
    | innodb_log_group_home_dir       | .\                     |
    | innodb_mirrored_log_groups      | 1                      |
    +---------------------------------+------------------------+
    19 rows in set (0.00 sec)



    SHOW INNODB STATUS\G gives

    =====================================
    121219 22:35:01 INNODB MONITOR OUTPUT
    =====================================
    Per second averages calculated from the last 16 seconds
    ----------
    SEMAPHORES
    ----------
    OS WAIT ARRAY INFO: reservation count 715700, signal count 688703
    Mutex spin waits 648325, rounds 12965580, OS waits 46
    RW-shared spins 1431331, OS waits 715483; RW-excl spins 189, OS waits 171
    ------------
    TRANSACTIONS
    ------------
    Trx id counter 0 5945912
    Purge done for trx's n < 0 5945747 undo n < 0 0
    Total number of lock structs in row lock hash table 0
    LIST OF TRANSACTIONS FOR EACH SESSION:
    --------
    FILE I/O
    --------
    I/O thread 0 state: wait Windows aio
    I/O thread 1 state: wait Windows aio
    I/O thread 2 state: wait Windows aio
    I/O thread 3 state: wait Windows aio
    I/O thread 4 state: wait Windows aio
    I/O thread 5 state: wait Windows aio
    I/O thread 6 state: wait Windows aio
    I/O thread 7 state: wait Windows aio
    I/O thread 8 state: wait Windows aio
    I/O thread 9 state: wait Windows aio
    I/O thread 10 state: wait Windows aio
    I/O thread 11 state: wait Windows aio
    I/O thread 12 state: wait Windows aio
    I/O thread 13 state: wait Windows aio
    I/O thread 14 state: wait Windows aio
    I/O thread 15 state: wait Windows aio
    Pending normal aio reads: 0, aio writes: 0, ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
    Pending flushes (fsync) log: 0; buffer pool: 0
    5822514 OS file reads, 21155 OS file writes, 9655 OS fsyncs
    0.00 reads/s, 0 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s
    -------------------------------------
    INSERT BUFFER AND ADAPTIVE HASH INDEX
    -------------------------------------
    Ibuf for space 0: size 1, free list len 0, seg size 2,
    10 inserts, 10 merged recs, 10 merges
    Hash table size 276707, used cells 15077, node heap has 16 buffer(s)
    0.00 hash searches/s, 0.00 non-hash searches/s
    ---
    LOG
    ---
    Log sequence number 0 615680910
    Log flushed up to   0 615680910
    Last checkpoint at  0 615680910
    0 pending log writes, 0 pending chkp writes
    4285 log i/o's done, 0.00 log i/o's/second
    ----------------------
    BUFFER POOL AND MEMORY
    ----------------------
    Total memory allocated 81266588; in additional pool allocated 358016
    Buffer pool size   4096
    Free buffers       0
    Database pages     4080
    Modified db pages  0
    Pending reads 0
    Pending writes: LRU 0, flush list 0, single page 0
    Pages read 5822404, created 134, written 14967
    0.00 reads/s, 0.00 creates/s, 0.00 writes/s
    No buffer pool activity since the last printout
    --------------
    ROW OPERATIONS
    --------------
    0 queries inside InnoDB, 0 queries in queue
    Main thread id 1708, state: waiting for server activity
    Number of rows inserted 450, updated 8806, deleted 55, read 64347704
    0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
    ----------------------------
    END OF INNODB MONITOR OUTPUT
    ============================

  • #2
    Hi,


    Quote:
    32,000 rows in the table; size on disk is 198 MB.
    select count(*) from table takes 0.07 seconds on the test machines but a full 7 seconds on the production machine.


    Is that table InnoDB on both servers? Do they have the same number of rows? Take into account that a SELECT COUNT(*) is much slower on InnoDB than on MyISAM: InnoDB does not keep exact table statistics, so it has to read and count every row in order to give you an accurate row count.

    Another question: are the test servers also on an ESX server?


    Quote:
    select a couple of fields from table where date_fieldA > 'someDate' and date_fieldB < 'someDate' order by field1, field2


    Can you provide us with the EXPLAIN output of that query? Comparing the EXPLAIN output you get on the test and production servers could help; maybe MySQL is choosing different execution plans.


    Quote:
    innodb_buffer_pool_size from show variables says '67108864', but show innodb status says '4096'. That doesn't make sense to me unless the variable value is way too big, but with a 198 MB InnoDB file I would think the actual used pool would be much higher.


    4096 is in units of InnoDB pages. One page is 16 KB, so 4096 * 16 KB = 65536 KB = 64 MB.

    So it seems the value is correct.
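    To make the unit conversion concrete, here is a quick sketch of the arithmetic (assuming the default 16 KB InnoDB page size):

```python
# Convert the "Buffer pool size" reported by SHOW INNODB STATUS (in pages)
# into bytes, assuming the default 16 KB InnoDB page size.
PAGE_SIZE = 16 * 1024  # bytes per InnoDB page

def pool_bytes(pages: int) -> int:
    """Buffer pool size in bytes for a page count from SHOW INNODB STATUS."""
    return pages * PAGE_SIZE

print(pool_bytes(4096))  # 67108864, i.e. the 64 MB innodb_buffer_pool_size above
```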

    Also, as you pointed out, the MySQL version is old. Version 5.5 has a lot of InnoDB and Windows improvements:

    http://blogs.innodb.com/wp/2010/09/mysql-5-5-innodb-performance-improvements-on-windows/

    I would also recommend that you double-check the ESX storage performance, because that is usually the problem in these shared/virtualized environments.

    Regards.



    • #3
      Thank you miguelangelnieto. I really appreciate your well-written answer.

      The bottom line was not enough innodb_buffer_pool_size. The test database was just under 64 MB. As soon as production grew above 64 MB, performance fell apart. Bumping it to 128M fixed the issue.
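      For anyone hitting the same wall, the change itself is one line in the server option file (a sketch; on Windows this is typically my.ini in the MySQL install or Windows directory, and the server must be restarted for it to take effect):

```ini
[mysqld]
# Bump the InnoDB buffer pool from the 64 MB (67108864) that caused the
# slowdown above; 128M fixed it in this case.
innodb_buffer_pool_size=128M
```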

      Below are some answers to your specific questions and some additional comments.

      You asked...
      Quote:
      Is that table InnoDB on both servers? Do they have the same number of rows? Take into account that a SELECT COUNT(*) is much slower on InnoDB than on MyISAM: InnoDB does not keep exact table statistics, so it has to read and count every row in order to give you an accurate row count.
      Yes, they are both InnoDB, but no, they do not have the same number of records. Although I would have answered that question differently a week ago: between the time we took the DB snapshot for test and the time the problem appeared, the DB grew beyond the 64M threshold.

      Quote:
      Another question, are tests servers also on a ESX server?
      No. Linux and Windows XP.

      Quote:
      Can you provide us with the EXPLAIN output of that query? Comparing the EXPLAIN output you get on the test and production servers could help; maybe MySQL is choosing different execution plans.
      EXPLAIN is the same on all servers, other than a "small" difference in the number of rows, which caused us to be over the 64M as explained above.

      mysql> explain SELECT zone, employeeno, DAYOFWEEK(startdatetime) as day,
          (unix_timestamp(enddatetime) - unix_timestamp(startdatetime)) / 3600 as duration
          FROM Task
          WHERE TO_DAYS(startdatetime) >= TO_DAYS('2012-12-24')
          AND TO_DAYS(enddatetime) <= TO_DAYS('2012-12-30')
          ORDER BY day, zone;
      +-------+------+---------------+------+---------+------+-------+-----------------------------+
      | table | type | possible_keys | key  | key_len | ref  | rows  | Extra                       |
      +-------+------+---------------+------+---------+------+-------+-----------------------------+
      | Task  | ALL  | NULL          | NULL | NULL    | NULL | 28467 | Using where; Using filesort |
      +-------+------+---------------+------+---------+------+-------+-----------------------------+
      1 row in set (0.00 sec)

      We could probably add indexed integer columns for year and day-of-year for both the start and end datetimes and use those in the WHERE clause.
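      As an aside (my own sketch, not something established in the thread): part of the reason EXPLAIN shows a full scan is that wrapping the columns in TO_DAYS() stops the optimizer from using an index on startdatetime, while comparing the raw datetime column against constant bounds keeps the predicate index-friendly. The effect can be demonstrated with SQLite standing in for MySQL (the table and index names here are made up for illustration):

```python
import sqlite3

# Sketch: a function applied to an indexed column defeats the index,
# while a plain range predicate on the raw column can use it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Task (zone TEXT, employeeno INT, "
             "startdatetime TEXT, enddatetime TEXT)")
conn.execute("CREATE INDEX idx_start ON Task (startdatetime)")

def plan(sql):
    # Concatenate the 'detail' column of the EXPLAIN QUERY PLAN rows.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Function on the column: the optimizer cannot use idx_start -> full scan.
scan = plan("SELECT zone FROM Task WHERE date(startdatetime) >= '2012-12-24'")
# Raw column against a constant bound: range search on the index.
seek = plan("SELECT zone FROM Task WHERE startdatetime >= '2012-12-24'")

print(scan)  # a SCAN of the whole table
print(seek)  # a SEARCH using idx_start
```

      The same idea applies to the MySQL query above: rewriting the WHERE clause to compare startdatetime and enddatetime directly against constant datetime bounds would let an index on those columns be considered.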

      Quote:
      I said...
      innodb_buffer_pool_size from show variables says '67108864', but show innodb status says '4096'. That doesn't make sense to me unless the variable value is way too big, but with a 198 MB InnoDB file I would think the actual used pool would be much higher.

      You said...
      4096 is in units of InnoDB pages. One page is 16 KB, so 4096 * 16 KB = 65536 KB = 64 MB.

      So it seems the value is correct.
      This was the issue here: I saw 4096 for the buffer pool size and didn't realize that was in pages. Where would one find such info? Where is that documented? Although, since Free buffers was zero, that should have been a clue.

      Quote:
      I would also recommend that you double-check the ESX storage performance, because that is usually the problem in these shared/virtualized environments.
      And yes, the performance of the ESX storage is probably not acceptable. Even if we do have to go to disk for the data, 6 or 7 seconds to get 230 records out of 28,000 is not good given the simplicity of this query.
