Replication Server Serving SELECT Queries

  • Replication Server Serving SELECT Queries

    I have a master database server with one slave replicating on the other side of town. I have a remote office (different state) that has only T1 connectivity to the master DB. When there is a lot going on, they complain of slowness, and it is pretty bad. So we are thinking about putting another slave server on their local network and having it serve the SELECTs, while data modifications will still have to go to the master.

    Anyway, does anyone have any experience doing something like this? Or is there a better way to get data to the remote location for this purpose (a cluster, or something)? And if data is changed on the master but has not yet propagated to the slave, will the user see old data?

    Any information would be very helpful. I've read all the manuals on replication, but I was hoping for a real world point of view.

    -David

  • #2
    Using v5.0.42. We have two masters in a replication ring, separated by 180 miles and connected by a VPN tunnel. Master-1 (M1) is a slave to Master-2 (M2), and M2 is a slave to M1. The lag is usually sub-second. M1 averages 350 qps and M2 averages 240 qps, with each growing all the time. Clients attached to M2 use M2 exclusively. Clients on M1 are directed to M2 for queries that generate reports and other non-time-critical work, but otherwise do everything else on M1. All MySQL statements flow through the replication ring, providing us an extra layer of recovery in addition to our backups and allowing us to sleep, knowing that we have (almost) hot-swappability for our clients.

    This system has been running for a year now. We've never experienced a network failure (though that wouldn't be catastrophic), but replication has broken several times because of application problems at either end.
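
    For reference, a two-master ring like the one described above boils down to a few my.cnf settings plus a CHANGE MASTER TO on each side. This is only an illustrative sketch; the server IDs, log names, and host names are hypothetical, not taken from the poster's setup:

    ```ini
    # --- M1's my.cnf (illustrative) ---
    [mysqld]
    server-id = 1
    log-bin   = mysql-bin
    # log-slave-updates is only needed for rings of three or more servers;
    # a two-node ring works without it because each server skips events
    # carrying its own server-id.

    # --- M2's my.cnf (illustrative) ---
    [mysqld]
    server-id = 2
    log-bin   = mysql-bin
    ```

    Each server is then pointed at the other with `CHANGE MASTER TO MASTER_HOST=..., MASTER_USER=..., MASTER_LOG_FILE=..., MASTER_LOG_POS=...` followed by `START SLAVE`. The distinct `server-id` values are what keep a statement from looping around the ring forever.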



    • #3
      Thanks for the info, and if you don't mind I have another question along the same line. How do you handle INSERT, UPDATE, and DELETE? Do they all have to go to M1? If someone issues an UPDATE on M2, does it get overwritten?

      What kind of link does your VPN run over?

      Thanks,
      David



      • #4
        We allow INSERT/UPDATE/DELETE at both masters. If you set up the system variables auto_increment_increment and auto_increment_offset, many auto-increment concerns go away. This will not solve the race condition of creating UNIQUE (non-auto-increment) keys simultaneously on both masters, nor will it help where key-sequence tables are used (we had to code around a table that maintained some of our auto-increment values). By segregating users to one master per session we avoid the duplicate-UNIQUE-key problem.
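
        Concretely, the two variables interleave the auto-increment sequences so the masters can never hand out the same value. A minimal sketch (the SET GLOBAL form shown here does not survive a restart, so in practice you'd put the same settings in my.cnf):

        ```sql
        -- On M1: generate 1, 3, 5, ...
        SET GLOBAL auto_increment_increment = 2;  -- step by 2
        SET GLOBAL auto_increment_offset    = 1;  -- start at 1

        -- On M2: generate 2, 4, 6, ...
        SET GLOBAL auto_increment_increment = 2;
        SET GLOBAL auto_increment_offset    = 2;
        ```

        With increment=2 and offsets 1 and 2, every AUTO_INCREMENT column on M1 yields odd IDs and every one on M2 yields even IDs, so replicated inserts never collide on the primary key.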

        The VPN is implemented with SonicWall appliances inside a router at each end, so M1 and M2 appear with private, non-routable IP addresses. Hope this helps.



        • #5
          OK, I read through those system variables, and that makes sense. So essentially all the IDs for rows inserted on M1 will be odd numbers and those from M2 will be even numbers (depending, obviously, on your increment values).

          So one more question. Say the link goes down for some time, maybe half a day. User1 makes an update on M1. Then user2, who uses M2, selects the range of data user1 changed earlier. User2 is going to see "old" data? Then when the link comes back and M1/M2 sync, the data will be "good"? Right?

          Thanks for all the help.

          David

          PS. What is the speed of your VPN (T1, DSL, etc.)?



          • #6
            I've done a very similar thing without the VPN appliance. (I don't understand what value a VPN gives to people who know how to use ssh.) I use ssh to create a tunnel from point A to point B. Here is how I keep myself protected:
            1. Use ssh to create a tunnel.
            2. Use an alternate port for ssh.
            3. Use an ssh key for authentication, and do not permit ssh to use passwords.
            4. Configure the ssh tunnel to not provide a shell.
            5. Filter incoming traffic, responding only to each other's IPs (achieving "true stealth"). Use knockd just for good measure.
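
            The steps above might look roughly like this. Every host name, port, key path, and peer address below is hypothetical; this is a sketch of the approach, not the poster's actual configuration:

            ```shell
            # 1-3: forward local port 3307 to the remote MySQL (3306) over ssh,
            # using an alternate sshd port and a dedicated key
            ssh -i ~/.ssh/repl_tunnel_key -N -p 2222 \
                -L 3307:127.0.0.1:3306 tunnel@db2.example.com

            # 2-4: on the remote side, sshd_config disables passwords,
            # and the authorized_keys entry pins the key to forwarding only:
            #   Port 2222
            #   PasswordAuthentication no
            # authorized_keys:
            #   command="/bin/false",no-pty,permitopen="127.0.0.1:3306" ssh-rsa AAAA...

            # 5: accept ssh only from the peer's address, drop everyone else
            iptables -A INPUT -p tcp --dport 2222 -s 203.0.113.10 -j ACCEPT
            iptables -A INPUT -p tcp --dport 2222 -j DROP
            ```

            The slave then replicates through the tunnel by pointing CHANGE MASTER TO at 127.0.0.1:3307 instead of the remote host directly.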

            I don't see a need for a commercial VPN. What do you gain?



            • #7
              Fortunately for us, our system is designed around a load balancer that forces certain clients to always use a specific master, so breaks in replication are not catastrophic. Keep in mind that replication is single-threaded and can easily be blocked by local threads. If a master is busy enough, we sometimes see replication delays of up to six hours or more while the SQL thread waits its turn to access tables. So yes, slave data is always older than the master's, by anywhere from milliseconds to hours.
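
              For anyone wanting to watch that single SQL thread fall behind, the standard place to look (in MySQL 5.0 as discussed above) is SHOW SLAVE STATUS; the field names below are the real ones, but which values you see obviously depends on your setup:

              ```sql
              SHOW SLAVE STATUS\G
              -- Fields relevant to the delay described above:
              --   Slave_SQL_Running      -- the single thread applying replicated statements
              --   Slave_IO_Running       -- the thread pulling events from the master
              --   Seconds_Behind_Master  -- the current lag; NULL when replication is broken
              ```

              A value of 0 does not guarantee freshness if the IO thread itself is disconnected, which is why both Running fields are worth checking alongside the lag number.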

              Speed of the connection is 100Mbps burstable, which is significantly more than we need for replication, but is available for growth in replication and other traffic, e.g. NFS.

              When we first started with this architecture we were very concerned about table corruption (we're using MyISAM exclusively), so every table was created with live checksumming (CHECKSUM=1). The checksums of each table are compared across masters every 10 minutes, allowing us to fix problems as they occur and keep our data synchronized.
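
              The CHECKSUM=1 approach mentioned above looks roughly like this; the table itself is made up for illustration. With live checksumming enabled, MyISAM maintains the checksum as rows change, so the QUICK form is a cheap read rather than a full table scan:

              ```sql
              -- Hypothetical table created with a live checksum
              CREATE TABLE orders (
                  id  INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
                  qty INT NOT NULL
              ) ENGINE=MyISAM CHECKSUM=1;

              -- Run on each master every 10 minutes and compare the Checksum
              -- columns; a mismatch means the masters have diverged
              CHECKSUM TABLE orders QUICK;
              ```

              Without CHECKSUM=1 the QUICK form returns NULL, and the non-QUICK form has to read the whole table, which is why enabling it at CREATE time matters for frequent comparisons.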
