  • MySQL server optimization for HUGE table queries

    Hi guys,

    I hope you can help me find a solution for my startup project. It has grown very large right from the start ...

    I have a huge table with ~200M rows.
    Table structure:


    CREATE TABLE `table` (
        `id` int(11) NOT NULL auto_increment,
        `id1` int(11) default NULL,
        `id2` int(11) default NULL,
        `num1` smallint(5) NOT NULL,
        `num2` decimal(11,2) default NULL,
        `num3` decimal(7,2) default NULL,
        PRIMARY KEY (`id`),
        UNIQUE KEY `id1_2_id2` (`id1`,`id2`),
        KEY `id1` (`id1`),
        KEY `id2` (`id2`)
    ) ENGINE=MyISAM;


    The whole table is ~4G; the indexes are ~10G.

    Even on quite simple queries (I have more complicated ones with a GROUP BY and one join to a 20M-row table) server performance is very low. For example, the query:


    select count(tp.id), sum(tp.num1), sum(tp.num2) from `table` tp where id1 = 187085;


    which returns 20K rows (yes, I need all of them) and runs for about 22 secs. As you can see, that's absolutely unacceptable.

    Do you think it makes sense to tune my dedicated server, or do I need to redesign my database structure?

    My dedicated box:

    Win Server 2003
    Core2Duo 2.13Ghz
    1G of RAM

    Do you need me to attach SHOW GLOBAL STATUS here?

    Thank you in advance.

  • #2
    This query is the most expensive:

    select count(tp.id), sum(tp.num1), sum(tp.num2),
           date_format(from_unixtime(t.date), "%Y%m%d") as d
    from `table` tp
    inner join table2 t on t.id = tp.id2
    where id1 = 7366
    group by d;


    table2 contains about 20M rows (everything indexed and also quite huge)
    As a result this query returns ~400 rows and runs for about 160 secs :(

    Do you have any ideas how to optimize it?



    • #3
      I'd upgrade your RAM to 16GB+. You might get okay performance out of 8 GB if you gave almost all of it to innodb_buffer_pool. Run a 64 bit version of your OS for efficiency, too.

      It's especially important for performance that all your indexes reside in RAM. With 1GB, even if you have most of that assigned to the innodb_buffer_pool, there's no chance it'll be able to hold all the indexes.

      Your queries look fine. It's disk read bandwidth that's slowing you down, because you have nowhere near enough memory to effectively cache your indexes.
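
      If you want to double-check, run EXPLAIN on the slow query (names as in your CREATE TABLE):

      ```sql
      -- Shows which index the optimizer picks and how many rows it expects to read
      EXPLAIN
      select count(tp.id), sum(tp.num1), sum(tp.num2)
      from `table` tp
      where id1 = 187085;
      ```

      If the key column shows id1 (or id1_2_id2) and rows is around 20K, the index is being used, and the time is going into pulling 20K scattered rows off disk.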



      • #4
        MarkRose wrote on Fri, 13 March 2009 23:09


        Hi Mark,

        For the second query the total index size is about 12G, so you're saying that all these indexes should fit in memory (or at least ~8G of it), and that will solve my problem?

        Is this the only way?

        Also, do I really need to increase innodb_buffer_pool even though I use MyISAM tables?

        I heard that some people cannot use an innodb_buffer_pool greater than 1.5GB and have to recompile mysql under Win2K themselves - is that true?

        Maybe you could suggest other mysql server settings to tune on my dedicated box - a server upgrade is quite expensive for my startup right now :(

        Mark - thank you very much for your prompt response :)



        • #5
          Sorry, I wrongly assumed you were using InnoDB.

          Even if you're using MyISAM, not having your indexes kept in RAM will result in severe performance problems.

          RAM is cheap. If the queries going slow are costing you money, the expense is totally justified. Look for a board that supports 8 chips, and get 2 GB chips (current sweet spot).

          I don't know much about running MySQL on Windows (I avoid touching Windows in general). I do know that Win2k has a per process RAM limit of 2 GB. I believe there's a setting in the boot.ini file to increase that to 3GB. 64 bit Windows doesn't have that limitation.
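
          For reference, the switch goes on the OS line in boot.ini, something like this (the exact ARC path varies by machine, and the application has to be built large-address-aware to benefit):

          ```ini
          [operating systems]
          multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /3GB
          ```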

          The only other thing I can suggest would be reducing the size of your indexes. Maybe get rid of id1_2_id2 if you don't need it. And if you can reduce the length of your int columns, that may make your indexes smaller (you may have to re-create the indexes afterwards, too).
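
          For example (assuming your id1 and id2 values fit in 3 bytes - check your maximums first):

          ```sql
          -- id1 is already the leftmost column of the unique key id1_2_id2,
          -- so the separate KEY `id1` is redundant
          ALTER TABLE `table` DROP INDEX `id1`;

          -- MEDIUMINT UNSIGNED (max 16,777,215) is 3 bytes instead of 4
          ALTER TABLE `table`
            MODIFY `id1` MEDIUMINT UNSIGNED default NULL,
            MODIFY `id2` MEDIUMINT UNSIGNED default NULL;
          ```

          On MyISAM these ALTERs rebuild the table and its indexes, so they'll take a while on 200M rows.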



          • #6
            Thank you Mark!!!

            Any more ideas would be really appreciated.



            • #7
              Well if you have a low amount of writes, you could move to a RAID setup to increase your reads (RAID 5 is ideal for low writes). You might divide your time by the number of drives in the array. If you want sub-second response times though, you will need more RAM.



              • #8
                Yes, more RAM looks like the better option ... but it requires a bit more $$$ - as I said, I'm renting the dedicated server.

                I've checked my server status variables, and they look like this:

                'Key_blocks_unused', '345288'
                'Key_blocks_used', '60206'
                'Key_read_requests', '526863'
                'Key_reads', '60514'

                Do you know what the ideal values here should be?
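
                In case it matters, I pulled those with:

                ```sql
                -- key cache counters and current cache size
                SHOW GLOBAL STATUS LIKE 'Key_read%';
                SHOW VARIABLES LIKE 'key_buffer_size';
                ```

                If I calculate the miss rate as Key_reads / Key_read_requests, I get 60514 / 526863 ≈ 0.11 - about one disk read per nine index read requests, which I guess is too high.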



                • #9
                  Dzmitry wrote on Fri, 13 March 2009 16:12



                  How long has your server been running before you got those values?



                  • #10
                    MarkRose wrote on Sat, 14 March 2009 07:03

                    How long has your server been running before you got those values?

                    A few hours - but not in production or under any load - I ran 10-20 queries just to test response time and check the new settings after a restart.



                    • #11
                      For those interested in the final solution ...


                      1. I created a composite field - date - and populated it with date_format(from_unixtime(t.date), "%Y%m%d") from the 2nd query

                      2. I created a covering index on id1, num1, num2, date - so I no longer need the inner join from the 2nd query, or id2

                      So it now looks like:

                      select count(tp.id1), sum(num1), sum(num2), tp.date
                      from tp
                      where tp.id2 = (select id from t2 where `varchar` = 'anyvarchar')
                      group by tp.date;


                      My index on this table is 12G now, but the results are impressive.
                      For ~180M rows, the query above runs in 0.1-0.12 secs for an id1 with 30K rows, instead of 60-120 secs before.
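
                      In DDL terms the changes were roughly this (the column type and index name here are illustrative):

                      ```sql
                      -- 1. denormalized date column, filled once via the old join
                      ALTER TABLE `table` ADD COLUMN `date` int default NULL;
                      UPDATE `table` tp
                        INNER JOIN table2 t ON t.id = tp.id2
                        SET tp.date = date_format(from_unixtime(t.date), "%Y%m%d");

                      -- 2. covering index so the aggregation is answered from the index alone
                      ALTER TABLE `table` ADD INDEX `id1_cover` (`id1`, `num1`, `num2`, `date`);
                      ```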

                      I'm happy with it.
