Any way to speed up upgrade from 5.0 to 5.1 Percona?


  • Any way to speed up upgrade from 5.0 to 5.1 Percona?

    I am upgrading some extremely large MySQL DBs from 5.0 to Percona 5.1. These contain hundreds of databases, hundreds of GB of data, and many tens of thousands of tables.

    The upgrade went fine: the new binaries are in place and everything is working, including replication with the still-5.0 master.

    However, during the mysql_upgrade script (or if I run mysqlcheck -A --check-upgrade directly) I get the "needs repair" message for perhaps 10% of my tables, as described in Peter's post here:

    http://www.mysqlperformanceblog.com/2010/05/14/mysql_upgrade-and-innodb-tables/

    Following his instructions to run "alter table X engine=InnoDB" does fix these up, but the problem is that I have 10k tables needing this done and the process is extremely slow.

    I've enabled expand_fast_index_creation and moved the tmpdir directory to an array of SSDs (following advice from Alexey at http://www.mysqlperformanceblog.com/2011/11/06/improved-innodb-fast-index-creation/). That combination did make things faster, but nowhere near fast enough: at the current pace it will take weeks to complete. I am running 15 alters at a time by piping the list of tables that need this through xargs, roughly as sketched below. I have binary logs disabled, and the MySQL data directory itself is also on SSDs.
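
    For concreteness, the parallel part looks something like this (the file name is illustrative, and I'm assuming expand_fast_index_creation is exposed as a dynamic variable in your Percona Server build):

    # enable Percona's fast index rebuild path for new sessions
    mysql -e "SET GLOBAL expand_fast_index_creation=ON"

    # tables_to_fix.txt holds one db.table name per line; run 15 ALTERs at once
    xargs -P 15 -I{} mysql -e "ALTER TABLE {} ENGINE=InnoDB" < tables_to_fix.txt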

    Is there anything else I can do to make this faster? Is this process even necessary, and what are the consequences of skipping it?

    The upgraded DB appears to work perfectly, including the tables that mysqlcheck says need this slow process performed, so I don't understand why it's necessary.

    Any help is greatly appreciated. I am doing this migration on an extremely hefty box, so assume I have all options available in terms of RAM/CPU/disk.

  • #2
    As Peter responded in a comment on the same thread you're referring to:
    Peter
    I trust this depends on the exact nature of the changes. Most typically the changes come from sort order changes in newer versions, which would mean indexes contain data in the wrong order; this can cause glitches down the road, for example a different result from a query depending on whether an index was used. In most cases the behavior changes only in edge cases, so you may see no difference at all. In practice there are many people running MySQL 5.1 as a drop-in replacement for MySQL 5.0 without even bothering to run mysql_upgrade.
    So the answer is something like "it will probably work, but down the line it might return wrong data", which doesn't feel very reassuring from a user's point of view.
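
    To make that risk concrete, one rough way to spot a stale index ordering is to compare the same query with and without the index (table, column, and index names here are hypothetical):

    -- if these two return different rows, the index no longer matches
    -- the server's current sort order
    SELECT name FROM mydb.users FORCE INDEX (idx_name) WHERE name > 'M' ORDER BY name;
    SELECT name FROM mydb.users IGNORE INDEX (idx_name) WHERE name > 'M' ORDER BY name;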

    So I would recommend continuing with the repairs even though it's painfully slow.
    An alternative, if you can live with the risk, is to start running production on the server and fix the tables in the background, but I'm guessing that is roughly what you are doing at the moment?


    • #3
      Thanks, I had missed that comment on Peter's post. I'm letting the alters run to completion. The expand_fast_index_creation and faster tmpdir location have helped enough. I was also basing my timings on some especially large databases that were front-loaded, so now that those are complete it's moving faster. I'm looking at perhaps 2-3 days to complete the process, which is much more reasonable for a MySQL instance this large.

      For anyone else needing to do this with a large DB containing a lot of tables, this is the approach I ended up with.

      - turn on expand_fast_index_creation, and put tmpdir on the fastest disks available
      - write the list of tables that need altering to a file by grepping the output of
      mysqlcheck -A --check-upgrade (a rough sketch of this step follows below this list)

      - write a little process script, something like this (this assumes all your tables are InnoDB; you don't want to accidentally convert tables you wish to keep as MyISAM for some reason):

      #!/bin/bash
      set -e
      # rebuild the given table, then record it so an interrupted run can resume
      mysql -e "ALTER TABLE $1 ENGINE=InnoDB"
      echo "$1" >> ./finished_tables

      - then pipe your list of tables that need altering through
      xargs -n 1 -P CORE_COUNT ./process.sh
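
      The list-building step might look roughly like this; the exact warning text varies between versions, so check what your mysqlcheck actually prints before trusting the patterns:

      # one full mysqlcheck pass takes 30+ minutes here, so save the output once
      mysqlcheck -A --check-upgrade > check_results.txt
      # problem tables print their name on one line and the error detail on
      # the next, so keep the line before each match and drop the rest
      grep -B 1 'upgrade required' check_results.txt | grep -v -e '^error' -e '^--' > tables_to_fix.txt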


      This way, if the script is interrupted or you need to stop it occasionally, you can remove the tables listed in "finished_tables" from your main list and start it back up later. You could also just run mysqlcheck again, but in my case that takes 30+ minutes, so it was a lot faster to maintain my own list.
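
      For the restart case, something like this works (file names as above; comm needs sorted input, and the shell must be bash for the process substitution):

      # drop already-finished tables from the list, then resume
      comm -23 <(sort tables_to_fix.txt) <(sort finished_tables) > remaining_tables.txt
      xargs -n 1 -P CORE_COUNT ./process.sh < remaining_tables.txt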


      • #4
        A final update to this to help anybody else with the same problem. It turns out that if I just drop and re-add the secondary indexes on these tables, that also clears the mysqlcheck warning. My understanding is that the mismatch is due to collation changes in the indexes: we have user-editable fields containing all sorts of wacky characters in our secondary indexes, and those are what need to be rebuilt.

        In our case the primary keys are all on synthetic fields (auto-increments or plain ints), so those don't seem to have any issues.

        For my tables, dropping and re-adding the secondary indexes is ~2.5 times faster than doing a full alter table. Between fast index creation, SSDs, and this change, I can fix ~40,000 tables in less than a day now.
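
        As an illustration (table, column, and index names are hypothetical, and see the EDIT below before relying on this):

        -- drop the stale secondary index, then re-add it so the rebuild
        -- goes through the fast index creation path
        ALTER TABLE mydb.orders DROP INDEX idx_customer_name;
        ALTER TABLE mydb.orders ADD INDEX idx_customer_name (customer_name);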

        EDIT: On further testing I discovered this last bit may be false. Adding ANY arbitrary key to the table also cleared the warning, even though those were fast index adds that took a fraction of the time a full alter did. So I'm not trusting the result and am back to doing full table alters to be on the safe side.
