As many of you have seen already, MySQL 8.0.23 is available for download (release notes).
Today our dear LeFred thanked the numerous contributors to bug fixes. On that note, let me mention two of our own people involved in bug fixing, Venkatesh Prasad Venugopal and Kamil Hołubicki. Great work, guys!
On my side, I reviewed the release notes yesterday and want to highlight some points that caught my attention. I will follow the order presented in the release notes rather than ordering by what is most relevant to me.
- We have the shift in syntax from CHANGE MASTER TO to CHANGE REPLICATION SOURCE TO. There are many scripts out there that will need to be modified just because of this small change. It seems like a small thing, but it will certainly have some negative side effects.
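To illustrate the rename, here is a minimal before/after sketch (host, port, and user values are illustrative):

```sql
-- Old, now-deprecated syntax:
CHANGE MASTER TO
  MASTER_HOST = '10.0.0.1',
  MASTER_PORT = 3306,
  MASTER_USER = 'repl';

-- Equivalent new syntax in 8.0.23:
CHANGE REPLICATION SOURCE TO
  SOURCE_HOST = '10.0.0.1',
  SOURCE_PORT = 3306,
  SOURCE_USER = 'repl';
```

The old form still works for now, but any automation that greps for or generates CHANGE MASTER TO will eventually need updating.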
- Another one is the work done on hash joins. The Oracle implementation so far was not as efficient as expected, and I think it will be worth some testing/benchmarking to see if the new way is really better. I have pinned this task and will let you know as soon as I have some results. In the meantime, you can refresh your memory by reviewing the FOSDEM 2020 presentation.
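If you want to check whether the optimizer picks a hash join for one of your own queries, EXPLAIN FORMAT=TREE shows it directly (table and column names here are illustrative):

```sql
EXPLAIN FORMAT=TREE
SELECT *
FROM orders o
JOIN customers c ON o.customer_id = c.id;
-- Look for a node such as "Inner hash join" in the plan tree output.
```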
- One thing I would really like to see someone test is the “Dropping a tablespace with a significant number of pages referenced from the adaptive hash index.” This has been a problem reported by our customers in several situations. Anyone in the community willing to accept the challenge?
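For anyone taking up the challenge, a minimal sketch of the scenario to reproduce might look like this (all names are illustrative; the point is to populate the adaptive hash index heavily before the drop):

```sql
SET GLOBAL innodb_adaptive_hash_index = ON;

CREATE TABLESPACE big_ts ADD DATAFILE 'big_ts.ibd';

CREATE TABLE t (
  id INT PRIMARY KEY,
  payload VARCHAR(200)
) TABLESPACE big_ts;

-- Load a large number of rows, then run many point-select
-- queries on t so the AHI fills with references to its pages.
-- Finally, time the drop on 8.0.22 vs 8.0.23:
DROP TABLE t;
DROP TABLESPACE big_ts;
```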
- Asynchronous connection failover is now Group Replication cluster-aware. This was the next step I was hoping to see, and one I was pushing for in my old article. I will test it and see if it helps for real. In any case, I suggest you go read/test it yourself. This can be a significant help for architecture resiliency when designing DR or any geo-distributed solution.
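As I understand the release notes, the cluster-aware behavior is configured through the new asynchronous_connection_failover_add_managed() UDF; a hedged sketch follows (check the manual for the exact signature — channel name, group UUID, host, and weights below are all illustrative placeholders):

```sql
SELECT asynchronous_connection_failover_add_managed(
  'async_channel',                          -- replication channel name
  'GroupReplication',                       -- managed group type
  'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee',   -- group_replication_group_name
  '10.0.0.1',                               -- one seed member of the group
  3306,
  '',                                       -- network namespace
  80,                                       -- primary weight
  60);                                      -- secondary weight
```

The idea is that the replica then tracks group membership itself, instead of requiring you to register every source by hand.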
- Red flag about “Replication channels can now be set to assign a GTID to replicated transactions that do not already have one… … This feature enables replication from a source that does not use GTID-based replication, to a replica that does.” Not sure if this rings the same bell in your mind that it did in mine!
But if it works for real (you know me, I need to test things to be convinced), I see several possible scenarios, from major-to-major version migrations, to moving to MariaDB or to Percona Server for MySQL. Meaning that once the big initial effort is done, keeping the destination up to date is less difficult. And more! In my mind, it is worth exploring.
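Per the release notes, this is driven by a new option on CHANGE REPLICATION SOURCE TO; a hedged sketch, assuming the ASSIGN_GTIDS_TO_ANONYMOUS_TRANSACTIONS option described there (host value is illustrative):

```sql
CHANGE REPLICATION SOURCE TO
  SOURCE_HOST = 'legacy-source.example.com',
  SOURCE_PORT = 3306,
  -- LOCAL stamps incoming anonymous transactions with the replica's
  -- own server UUID; a specific UUID can be given instead, or OFF.
  ASSIGN_GTIDS_TO_ANONYMOUS_TRANSACTIONS = LOCAL;
```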
- Also important in this area: “three-actor deadlocks with the commit order locking, which could not be resolved by the replication applier, and caused replication to hang indefinitely”. New code in the replication applier allows it to automatically retry the locked transactions and, if that fails, eventually stop replication in a controlled way instead of hanging there.
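If I read it correctly, the retry behavior ties into the existing applier retry limit, which you can inspect like this (in 8.0.23 the variable still carries its old name; it was renamed with a replica_ prefix in later versions):

```sql
SHOW VARIABLES LIKE 'slave_transaction_retries';
```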
There are additional things, and I would like to test them all one by one, but my available time is limited, so I invite you to choose a topic and investigate/understand/share.
Great MySQL to all!