Standard MySQL faster than Percona and Maria on my rig

  • #1

    One thing I have noticed with the MySQL alternatives is that they are claimed to be faster than MySQL for large-database / high-volume usage.

    That's great and certainly useful for a lot of sites. On my site, however, I am actually seeing worse performance with Percona and MariaDB.

    The site has low traffic and includes a forum, but its main use is displaying stats from the database. For one particular report, the time taken from generating it to it completing in the browser window is as follows:

    MySQL 5.8 sec
    Percona 6.2 sec
    MariaDB 6.5 sec

    All three are using the same my.cnf file, so I wonder if I need to do any further tweaking to see better performance from Percona or MariaDB?
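
    As a quick sanity check (a minimal sketch; run it on all three servers and compare the output), this confirms each build reports a comparable version and actually loaded the shared my.cnf:

    select version();  -- confirm comparable major/minor versions
    show global variables like 'max_heap_table_size';
    show global variables like 'tmp_table_size';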

  • #2
    Here's some info on the stats table:

    It consists of 200,000 rows and 220 fields, around 300 MB in total.

    This table is read-only and, for optimal performance, is stored in a MEMORY table:

    drop table if exists MEM_aw_races;
    create table MEM_aw_races engine=memory select * from aw_races;
    drop table aw_races;
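
    One caveat here: a ~300 MB MEMORY table only fits if max_heap_table_size allows it (the default is just 16M), so the cap has to be raised before the copy. A minimal sketch, with 512M as an assumed value:

    -- 512M is an assumption; size it to the real table plus headroom
    set global max_heap_table_size = 512 * 1024 * 1024;
    set session max_heap_table_size = 512 * 1024 * 1024;  -- current connection too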


    PHP is used to create a user form for selecting a report and for processing the data.

    As the report contains dozens and dozens of sub reports, I select data from the MEM_aw_races table and store it in a smaller temporary MEMORY table:

    $temp_table = 'temp_' . substr(rand().rand().rand(), 0, 6);
    mysql_query("create temporary table $temp_table ENGINE = MEMORY
        select /* around 60 columns here */ from MEM_aw_races where /* user's query here */");

    // Sub reports here
    function sub_reports($group, $heading) {
        global $temp_table;  // name of the temporary MEMORY table created above

        $result = @mysql_query("select $group, sum(if(position=1,1,0)), count(*),
            sum(if(position=1,sp,-1)), sum(1/(sp+1)), sum(placed)
            from $temp_table group by $group");
        while ($row = @mysql_fetch_row($result)) {
            // display table data
        }
    }

    // once all sub reports have run:
    mysql_query("drop table $temp_table");

    Is this method not suited to Percona or MariaDB?

    • #3
      Just ran a few more tests.

      At the console:

      mysql> create temporary table $temp_table ENGINE = MEMORY select /* around 60 columns here */ from MEM_aw_races where /* user's query here */;

      MySQL 0.68 sec
      Percona 0.71 sec
      MariaDB 0.72 sec
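
      For a finer-grained comparison than wall-clock time, the built-in profiler breaks a statement down by stage; a minimal sketch, run on each server in turn:

      set profiling = 1;
      -- run the create temporary table statement under test, then:
      show profiles;               -- per-statement elapsed time
      show profile for query 1;    -- stage-by-stage breakdown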

      • #4
        1. Percona has no optimizations for the MEMORY engine.
        2. Percona gives you better monitoring statistics and more configuration parameters, and is better under high concurrency.
        3. Have you used the same MySQL version number (major/minor) in all three cases?
        4. Try running your queries directly against aw_races, and make sure that the /* from MEM_aw_races where */ part can be satisfied by an index (see the sketch below).
        5. If you try 4. and aw_races is not InnoDB, convert it and optimize your my.cnf for InnoDB ( http://tools.percona.com/ ), or your forum will be unavailable while you run these stats queries.
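
        A sketch for points 4 and 5; race_date is a hypothetical column standing in for whatever the user's query actually filters on:

        -- 4. index the filter column, then confirm the index is used
        create index idx_race_date on aw_races (race_date);
        explain select * from aw_races where race_date = '2013-01-01';

        -- 5. convert to InnoDB so long stats scans don't take
        --    MyISAM table-level locks and block everything else
        alter table aw_races engine=InnoDB;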

        • #5
          4. Surely, as this table is read-only, running it from memory would be faster than from disk? OK, there's caching, but reading it straight from memory should still be quicker?

          5. I will try that. This is a test rig, so there is no live data on it.

          • #6
            4. You may still read from disk when copying the data into MEM_aw_races, so if the data is not cached, both of our approaches have to read from disk. With your approach, all of the data sits in memory three times: the cached pages from reading aw_races, the MEMORY table, and the temporary table created for the stats query. So, if you are short on memory, your approach will definitely be the one that uses up the last of it.
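
            Two of those three copies can be sized directly (a sketch; the cached pages don't show up here, and temporary tables are not listed):

            select table_name, engine,
                   round(data_length / 1024 / 1024) as size_mb
            from information_schema.tables
            where table_name in ('aw_races', 'MEM_aw_races');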

            • #7
              I don't think there will be an issue with memory on this rig; there is plenty to spare.

              The insert into the MEM table is only done once each day. The temporary MEM tables can be anything from a few KB to 100 MB; once they have been processed they are dropped immediately and the memory is returned. The only issue with that last part is that the opened-tables figure keeps on growing, which then throws out the table cache stat, e.g.:

              Table cache hit rate: 0% (297 open / 195K opened)
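
              Since each one-off temporary table counts towards Opened_tables, that ratio will look bad by construction. A minimal sketch for checking the raw counters, with 2048 as an assumed value if you do want a bigger cache anyway:

              show global status like 'Opened_tables';
              show global variables like 'table_open_cache';
              set global table_open_cache = 2048;  -- assumption: size to taste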
