SELECTs to slave, INSERTs to master by application?

  • SELECTs to slave, INSERTs to master by application?

    Hi,

    I have a question and would be very thankful for any helpful comments whatsoever.

    I am running a PHP web application on a two webserver cluster. I also have two master-slave-replicated MySQL 5x databases.

    What I want to do now is to send read-only queries to the slave and all other queries to the master, or, better yet, to send write queries to the master only and split the read-only queries 70% to the slave and 30% to the master (as I expect around 90% of queries to be read-only after all).

    I already tried the not-yet-documented "mysqli_slave_query" and "mysqli_master_query" functions, but they did not work. I also tried "mysqli_enable_rpl_parse", which at least did not produce any error messages, but all queries were still sent to the master.

    The last thing I tried was opening two database connections and doing the parsing myself, which left me with only half the performance. Not good.

    I also googled my way through thousands of sites but could not find any helpful advice; maybe I will find some here?

    Thanks a lot!
    Stefan.
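
    The routing I want can be sketched like this; Python is used just for brevity (my application is PHP with two mysqli connections), and the server names and the write-verb list below are illustrative:

```python
import random

# Stand-ins for the real master and slave connections.
MASTER = "master"
SLAVE = "slave"

# Statements that must always go to the master (illustrative list).
WRITE_VERBS = ("insert", "update", "delete", "replace", "create", "alter", "drop")

def route(query, read_slave_weight=0.7, rng=random.random):
    """Pick a server: writes always go to the master, reads go to the
    slave with probability read_slave_weight (the 70/30 split above)."""
    verb = query.lstrip().split(None, 1)[0].lower()
    if verb in WRITE_VERBS:
        return MASTER
    return SLAVE if rng() < read_slave_weight else MASTER
```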

  • #2
    These functions never really worked, so do not use them. Just do things manually: connect to the slave and use it for the queries it can serve.



    • #3
      Stefan,

      I have a similar setup, and we choose which server to use at the application level. We also use hostnames to connect; that way, if the slave were to go down, it would be easy to redirect traffic towards the master.
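
      Roughly, our selection logic works like this (a sketch; the hostnames and the reachability check are illustrative stand-ins for a real connect attempt):

```python
# Prefer the slave for reads, fall back to the master, so a dead slave
# just means reads silently move to the master.
READ_HOSTS = ("db-slave.example.com", "db-master.example.com")

def pick_read_host(reachable, hosts=READ_HOSTS):
    """Return the first host that answers; raise if none do."""
    for host in hosts:
        if reachable(host):
            return host
    raise RuntimeError("no database host reachable")
```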



      • #4
        Hi,

        I am new to this replication thing. I want a simple setup where the "master" is the DB where I do inserts, updates, etc., and the "slave" focuses on selects. I want to keep them at almost the same level of update (i.e., within microseconds), so that when one updates a record, it is immediately available through the slave for selection too. But when I am doing updates on the master, and the master is therefore blocked (due to the grand MyISAM table-locking idiocy), I want the slave to be available throughout this update time.

        Is this what replication is for? A Google search for things like "read-only mysql replication" comes up with NOTHING, not a single guide on the net that tells me how to do this. I'd like a simple step-by-step guide. I have only one dedicated server (although I have 5 IPs), so I don't have a physical "cluster" or anything; I just want two separate processes running. Would appreciate any pointers!

        Thanks
        .ep
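
        From what I have pieced together so far, the setup would look roughly like this, though I am not sure it is right; the user, password, and log coordinates below are just placeholders:

```ini
# master instance my.cnf
[mysqld]
server-id = 1
log-bin   = mysql-bin

# slave instance my.cnf (a second instance on the same machine
# needs its own port, socket, and datadir)
[mysqld]
server-id = 2
```

```sql
-- on the master: create a replication account (placeholder credentials)
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'127.0.0.1' IDENTIFIED BY 'secret';

-- on the slave: point it at the master and start replicating
CHANGE MASTER TO
  MASTER_HOST='127.0.0.1',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;
```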



        • #5
          erick_p wrote on Thu, 07 June 2007 11:42

          I want to keep them at almost the same level of update (i.e., within microseconds), so that when one updates a record, it is immediately available through the slave for selection too.


          Replication is normally carried out as soon as possible, which means as soon as the query has been performed on the master and logged. That is more in the millisecond range than the microsecond range.

          erick_p wrote on Thu, 07 June 2007 11:42


          But when I am doing updates on the master, and the master is therefore blocked (due to the grand MyISAM table-locking idiocy), I want the slave to be available throughout this update time.


          You can't really do that, because the exact same changes will be performed on the slave as well, and the slave will then be locked for reading while those updates are performed.

          erick_p wrote on Thu, 07 June 2007 11:42


          and I have only one dedicated server, although I have 5 IPs, so I don't have a physical "cluster" or anything, I just want two separate processes running.

          You don't want to do this, because you lose more than you gain.
          When you have two processes running, they can't share the same cache memory etc., and you also consume extra CPU, since the two processes need replication to stay in sync.

          If you have run into the performance problem of SELECTs that take a long time combined with a lot of INSERT/UPDATE/DELETEs, you should switch to the InnoDB table type instead.
          It uses row-level locking, so you avoid this problem.
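
          The switch itself is a single statement per table (the table name here is just an example):

```sql
ALTER TABLE my_table ENGINE=InnoDB;
```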



          • #6
            Thank you. I have tried InnoDB, but it is horribly slow. I was quite sold on the idea of row-level locking and tried all sorts of memory settings and so forth, but InnoDB just does not cut it in terms of raw SELECT speed, which is critical for my application. That is why I wanted my master to be InnoDB and my slave (for SELECTs) to be MyISAM, but on the same machine. Thanks for your informative post; I suppose I shouldn't do what I was thinking.



            • #7
              I am a bit curious what you tested if you felt that InnoDB was slow.
              There are quite a few things you can change to speed it up.

              The most important startup variables:
              Quote:

              innodb_buffer_pool_size = 80% * [AVAILABLE RAM]


              Should be set to 80% of available RAM.
              This is the internal cache for InnoDB; when InnoDB has to read/write a lot of things to disk all the time, it can't optimize these operations and can be horribly slow.


              InnoDB is by default set up for integrity, because not that many people need raw speed. But if you are willing to sacrifice some integrity for speed, you can continue with these:

              Quote:


              innodb_flush_log_at_trx_commit = 0


              This setting controls the writing of the log files that are used to restore the DB to the same point in case of a crash.
              It defaults to 1, which means the log is flushed out to disk after _each_ transaction; if you have slow disks, this can really make your application very slow.
              So if you need more speed and can live with losing about 1 second of transactions in case of a crash, you should set this to 0.

              Those two are the most important when it comes to speed for InnoDB.

              There is one more that can also be useful as a last step, and that is turning off innodb_doublewrite, which disables writing to the doublewrite buffer. But generally the first two are enough.
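
              Put together in my.cnf, the speed-oriented settings above would look roughly like this (the sizes are examples, and the doublewrite line is the last-resort one):

```ini
[mysqld]
innodb_buffer_pool_size        = 2G
innodb_flush_log_at_trx_commit = 0   # may lose ~1s of transactions on crash
skip-innodb_doublewrite              # last resort: disables the doublewrite buffer
```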



              • #8
                Thanks for these tips. I am on a MySQL server on a dedicated host which has 2 dual-core CPUs and 4GB of memory. When I try the following settings in my my.cnf, it does not even start!


                #skip-innodb
                innodb_buffer_pool_size=3G
                innodb_flush_log_at_trx_commit=0


                If I remove the comment from skip-innodb, then it starts OK.



                • #9
                  Start by testing:
                  innodb_buffer_pool_size=2G

                  The famous 2GB limit might come into play here.
