Upgrading to MySQL 8: Embrace the Challenge

Nobody likes change, especially when that change may be challenging.  When faced with a technical challenge, I try to remember this line from Theodore Roosevelt: “Nothing in the world is worth having or worth doing unless it means effort, pain, difficulty.”  While that is a bit of an exaggeration, the main idea still applies here: we shouldn’t shy away from an upgrade path just because it may be difficult.

MySQL 8.0 is maturing and stabilizing.  There are new features (too many to list here) and performance improvements.  More and more organizations are upgrading to MySQL 8 and running it in production, which accelerates that stabilization.  While 5.7 still has significant runway and is definitely stable (EOL is slated for October 2023), organizations that haven’t already made the jump need to start preparing for it.

What Changed?

So how is a major upgrade to 8.0 different from years past?  Honestly, it isn’t that different.  The same general process applies:

  1. Upgrade in a lower environment
  2. Test, test, and then test some more
  3. Upgrade a replica and start sending read traffic
  4. Promote a replica to primary
  5. Be prepared for a rollback as needed
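The steps above can be sketched as a dry-run script.  This is only an illustration: it echoes the commands it would run rather than executing them, and the host names, package commands, and service names are placeholder assumptions, not from any particular environment:

```shell
#!/bin/sh
# Dry-run sketch of the rolling upgrade flow above -- nothing here
# touches a live server; every command is only echoed.
set -eu

plan_upgrade() {
  host="$1"
  echo "# ${host}: stop 5.7, swap binaries, start 8.0"
  echo "ssh ${host} systemctl stop mysqld"
  echo "ssh ${host} yum install -y mysql-community-server  # with the 8.0 repo enabled"
  echo "ssh ${host} systemctl start mysqld  # 8.0.16+ upgrades system tables automatically on startup"
}

# Work from the bottom of the replication chain up: replicas first,
# primary last, keeping at least one 5.7 replica aside for rollback.
for host in replica-02.example.com replica-01.example.com; do
  plan_upgrade "$host"
done
echo "# promote an upgraded replica, then upgrade the old primary"
```

Starting at the bottom of the chain means read traffic can be shifted to upgraded replicas gradually, and the untouched 5.7 replica remains the escape hatch until the cutover is confirmed.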

The final step is the biggest change, especially once you have finished upgrading to MySQL 8.  Historically, minor version upgrades were fairly trivial: a simple instance stop, binary swap, and instance start were enough to revert to a previous version.

In 8.0, this process is no longer supported (as noted in the official documentation): 

“Downgrade from MySQL 8.0 to MySQL 5.7, or from a MySQL 8.0 release to a previous MySQL 8.0 release, is not supported.”

That is a definite change in the release paradigm, and it has caused real issues across minor releases.  One good example of how this can impact a live system was captured in the blog post MySQL 8 Minor Version Upgrades Are ONE-WAY Only from early 2020.
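With in-place downgrade off the table, the practical rollback paths are failing back to a replica that was never upgraded, or restoring a logical dump onto a fresh 5.7 instance (8.0 data files cannot be opened by a 5.7 server).  A dry-run sketch of the logical-dump path, with file names as placeholder assumptions:

```shell
#!/bin/sh
# Dry-run rollback sketch: echoes the commands rather than running them.
# The logical dump must be taken *before* the upgrade -- it is the
# safety net once 8.0 has rewritten the data dictionary.
set -eu

rollback_plan() {
  echo "# before upgrading -- take a logical backup on the 5.7 primary:"
  echo "mysqldump --all-databases --triggers --routines --events --single-transaction > pre_upgrade_backup.sql"
  echo "# to roll back -- restore the dump onto a clean 5.7 instance:"
  echo "mysql < pre_upgrade_backup.sql"
  echo "# then reconcile or accept the loss of writes made while on 8.0"
}

rollback_plan
```

The last comment is the painful part: any writes accepted while running 8.0 have to be replayed or written off, which is exactly why the preparation phase deserves the extra attention discussed below.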

How to Approach Upgrading to MySQL 8

With this new paradigm, pushing through with the upgrade may seem scary.  In some ways, though, it can be a positive change.  As mentioned above, proper preparation and testing should make up the majority of the process, so the actual upgrade/cutover should essentially be a non-event.  There is nothing a DBA loves more than finalizing an upgrade with nobody noticing (aside from any potential improvements).

Unfortunately, in practice, proper testing and preparation are often an afterthought.  With how easy upgrades (and particularly rollbacks) have been, it was simpler to just “give it a shot and roll back if needed”.  Now that downgrades are no longer trivial, this should be viewed as a golden opportunity to strengthen the preparation phase of an upgrade.

Some extra focus should be given to:

  • Reviewing the release notes in detail for any potential changes (new features are also sometimes enabled by default in 8.0)
  • Testing the system with REAL application traffic (benchmarks are nice, but mean nothing if they are generic)
  • Solidifying the backup and restore process (this is already perfect, right?)
  • Reviewing automation (this means no more automated “silent” upgrades)
  • The actual upgrade process (starting at the bottom of the replication chain and maintaining replicas for rollbacks if needed)
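Several of these checks can be scripted into a pre-flight step.  The sketch below only prints the commands it would run; the connection string, backup path, and config-management locations are assumptions for illustration (MySQL Shell’s built-in upgrade checker, however, is a real tool worth running first):

```shell
#!/bin/sh
# Dry-run preparation sketch: prints the pre-flight checks rather than
# executing them, so it is safe to run anywhere.
set -eu

preflight() {
  target="$1"
  # MySQL Shell's upgrade checker flags removed features, reserved-word
  # collisions, obsolete character sets, etc., before anything changes.
  echo "mysqlsh -- util check-for-server-upgrade ${target} --output-format=JSON"
  # Verify the backup actually restores -- not just that it was taken.
  echo "xtrabackup --prepare --target-dir=/backups/latest"
  # Hunt down unattended upgrade jobs in config management and disable them.
  echo "grep -r 'mysql' /etc/ansible/ /etc/puppet/"
}

preflight root@replica-01:3306
```

Running the checker against every production schema (not just a sample) in a lower environment catches most of the “new defaults” surprises before they matter.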

With so much emphasis on the preparation, we should hopefully start to see the actual upgrade become less impactful.  It should also instill more confidence across the organization.

What About “At Scale”?

Having worked with a wide range of clients as a TAM, I’ve seen environments that range from a single primary/replica pair to thousands of servers.  I will freely admit that completing a major version upgrade across 10,000 servers is non-trivial.  Nothing is more painful than needing to roll back 5,000 servers when something crops up halfway through the upgrade process.  While no amount of testing can completely eliminate this possibility, we can strive to minimize the risk.

At this scale, testing actual traffic patterns becomes far more important.  In complex environments and workloads, the likelihood of hitting an edge case definitely increases, and identifying those edge cases in lower environments is essential for a smooth production rollout.  Similarly, make sure processes and playbooks exist for rolling back in the event an issue does appear.

Finally, deploy upgrades in phases.  Assuming you have monitoring such as Percona Monitoring and Management in place, A/B testing and comparison can be invaluable.  Seeing version X and Y on the same dashboard while serving the same traffic allows a proper comparison; comparing X in staging to Y in production is useful, but can sometimes be misleading.
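Phased deployment can be as simple as splitting the fleet into fixed-size waves and pausing between them to compare dashboards.  A minimal sketch (host names and wave size are illustrative):

```shell
#!/bin/sh
# Split a host list into upgrade waves, so each wave can run side by
# side with the not-yet-upgraded hosts on the same dashboards before
# the next wave starts.
set -eu

WAVE_SIZE=2

plan_waves() {
  wave=1
  count=0
  for h in "$@"; do
    if [ "$count" -eq 0 ]; then
      echo "wave ${wave}: upgrade, then compare metrics before continuing"
    fi
    echo "  upgrade ${h}"
    count=$((count + 1))
    if [ "$count" -eq "$WAVE_SIZE" ]; then
      count=0
      wave=$((wave + 1))
    fi
  done
}

plan_waves db-01 db-02 db-03 db-04 db-05
```

The pause between waves is the whole point: it is the window in which the A/B comparison described above either builds confidence to continue or triggers the rollback playbook while only a fraction of the fleet is affected.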

Conclusion

Overall, upgrading to MySQL 8 isn’t that different from previous versions.  Extra care needs to be taken during the preparation phase, but that should be viewed as a positive overall.  We definitely shouldn’t shy away from the change, but rather embrace it as it needs to happen eventually.  The worst thing that can happen is to continue to kick the can down the road and then be pressed for time as 5.7 EOL approaches.

To solidify the preparation phase and testing, what tools do you think are missing?  What would make it easier to accurately replay traffic against test instances?  While there are tools available, is there anything that would help ensure these best practices are followed?

If your organization needs help in preparing for or completing an upgrade, our Professional Services team can be a great asset.  Likewise, our Support engineers can help your team as you hit edge cases in testing.  

And, finally, the most important strategy when it comes to upgrades: read-only Friday should be at the top of your playbook!

Comments (2)

  • john

    Your post is very informative and useful. We do have a few remaining primary/replica MySQL instances, but most of our environment has now been converted to use PXC clusters. So how exactly can we upgrade a PXC node to 8.x while the remainder of the cluster is on 5.x? It seems impossible.

    So at least for the initial jump to 8.x, it would appear we are faced with creating three new 5T nodes, making that a replica, and then cutting over to the new cluster. What a pain.

    At least once we make the initial jump to 8.x, we should be able to take out one 5T node, upgrade it to 8.x.y, and see if it breaks anything before we upgrade the entire cluster. But of course there is always the chance we are wrong. What then?

    April 2, 2021 at 9:22 am
    • Mike Benshoof

      Thanks for your feedback John! While upgrading the cluster from 5.7 to 8.0 in place is technically supported, it comes with some caveats and definitely IS NOT recommended. If the cluster is in read-only or all writes go to a single node, you could go through a rolling upgrade of each node until there is only one remaining 5.7 node. At this point, you would need to drop that node out of the cluster to ensure no writes are sent from 8.0 to 5.7.

      The recommended approach would be to build a single-node cluster, keep it in sync via standard asynchronous replication, build out the new cluster from there, and then cut over.

      Likewise, 8.x minor upgrades follow the same paradigm, and it is strongly recommended to use the same new-cluster/replication approach.

      April 2, 2021 at 3:00 pm
