Thanks to all who attended my webinar last week on Writing Application Code for MySQL High Availability. In this post I'll address the extra questions I didn't have time to answer during the stream.
What do you think about using Galera Cluster but writing to a single Node with LVS ?
Whatever HA strategy you like that can present a layer 3 or layer 4 endpoint to your application tier is fine. A lot of people using PXC use it in a single-writer (master/slave) kind of way.
Is there any way we can determine slave lag and then decide whether to use the master or a slave? E.g., instead of using a query to find if data is available on the slave … use if lag_time < xyz?
One of my main points was that this is usually more expensive to implement inside your application code than it is worth, particularly if such a check is done synchronously with user requests. I think it's (typically) something better left to your load balancer or whatever HA abstraction sits between your apps and database slaves. For example, the health check could monitor replication lag and "fail" out laggy slaves, but then you have to consider what happens if all slaves show lag simultaneously.
As far as the second part of your question, how would you measure lag time? Even if you want to trust Seconds_Behind_Master, it's still a database query (of some kind) to determine its value.
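To make the trade-off concrete, here is a minimal sketch of the selection logic such a health check might apply once it has lag measurements in hand. The function name, the lag map, and the "fall back to everything if all slaves are laggy" policy are my own illustrative assumptions, not a recommendation of any specific product's behavior:

```go
package main

import "fmt"

// usableSlaves returns the slaves whose replication lag is under maxLag
// seconds. If every slave is laggy, it falls back to returning all of
// them so reads degrade rather than fail outright. (This fallback policy
// is one illustrative choice; you might prefer to route to the master.)
func usableSlaves(lagBySlave map[string]int, maxLag int) []string {
	ok := []string{}
	for host, lag := range lagBySlave {
		if lag < maxLag {
			ok = append(ok, host)
		}
	}
	if len(ok) == 0 { // all slaves laggy: degrade gracefully
		for host := range lagBySlave {
			ok = append(ok, host)
		}
	}
	return ok
}

func main() {
	// Hypothetical lag readings, e.g. gathered from Seconds_Behind_Master.
	lags := map[string]int{"db1": 2, "db2": 120}
	fmt.Println(usableSlaves(lags, 10))
}
```

The point stands either way: populating that lag map still costs a query per slave, which is why I'd keep this logic in the load balancer's periodic health check rather than on the request path.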
I specifically talked about database interaction, and I doubt that's a good idea directly from your clients' web browsers. However, assuming we're talking about client-side JS accessing your web service, all the same principles apply in that case.
I guess with Go you have a lot of options, like putting a thread in wait mode or spawning another thread. But with other languages like Java, the readability of the code is not that great.
I don’t want to start a language holy war, but I’d agree some languages make error handling easier than others.
For example, I prefer Go's model of passing errors back from functions as a distinct return value over throwing exceptions. Further, Go will refuse to compile if you assign the error to a variable and never use it; to discard it you must explicitly assign it to the blank identifier. Either way, I'm encouraged to handle errors, and I like that in a language.
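A tiny sketch of that model, with an invented lookup function standing in for a real database call:

```go
package main

import (
	"errors"
	"fmt"
)

// findUser is a hypothetical stand-in for a database lookup; the name,
// the hard-coded id, and the error text are invented for illustration.
func findUser(id int) (string, error) {
	if id == 42 {
		return "alice", nil
	}
	return "", errors.New("user not found")
}

func main() {
	// If we declared err here and never checked it, the compiler would
	// reject the program; to ignore it we'd have to write _ explicitly.
	name, err := findUser(42)
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("found:", name)
}
```

The error is just a value, so the caller decides at each call site whether to handle it, wrap it, or pass it up, which keeps the failure path visible in the code.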
Also, how I handled errors in Go isn't necessarily the best way; it's simply one way. I'm sure people who do more Go work than I do have better patterns.
However, I do feel there are appropriate ways to do everything I described in almost any language, one way or another. I'd expect the style for this to be dictated by software architects on larger projects, and consistency in error handling to be enforced in a mature development org.