MySQL Data on GC-Ramdisk


  • MySQL Data on GC-Ramdisk


    I am waiting to receive my GC-Ramdisk from Gigabyte with 4 GB of memory on it, and I plan to put my data (about 2.2 GB) on it.

    My server is a P4 3.2 GHz with 4 GB of RAM and one 73 GB 10,000 RPM SCSI hard disk. At the moment my load average is around 6 under extreme conditions.

    Do you think I will gain in speed?



  • #2
    Hello, Philippe

    Unless your queries are doing lots of full table scans, I don't think you will see more than a 10% performance increase. However, please do share the experience: it would be really interesting to see whether the load drops with a DB of that size on a ramdisk.

    // Aurimas


    • #3
      Curious how MySQL does in this case:

      SELECT COUNT(*) AS iTotal FROM messages WHERE memberID = 1 AND `read` = 'N'

      (I know it's better to create a new table that stores these values as columns, so we could grab them with simple SELECT queries, and we will change this query. I'm just asking for general knowledge.)

      So let's say we have a couple of million posts in the table and a couple of thousand members. memberID is indexed (not together with `read`, and I guess it's not a good idea to do that because…


      • #4
        …all your data, and you have no backups or anything?

        We have no backups. All we have is the query log only.

        Are you speaking about the query log or the binary log? If you have the binary log from the server's start, it is possible to recover the data by using mysqlbinlog. General query log or binary log? And does it run from the very beginning?

        All we have is the general query log, and it runs from the beginning until the end. It appears to have all the information related to all kinds of queries. We need tips or suggestions for recreating our database from this general query log. Is any parser available?

        You should be able to replay that query log. It will not be perfect, though, as NOW() for example will be screwed.
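        Since the question above asks whether any parser is available: none ships with MySQL, so here is a minimal, hypothetical sketch in C of the extraction step such a replay tool would need. It assumes the classic general-log line layout (date, time, thread id, command word, argument) and deliberately ignores multi-line statements, quoting, and nondeterministic functions such as NOW():

        ```c
        #include <stdio.h>
        #include <string.h>

        /* Pull the SQL text out of general-query-log lines whose command
         * word is "Query".  Connect/Quit/Init DB events are skipped.
         * Sketch only: one statement per line is assumed, and a literal
         * "Query" appearing inside SQL text would confuse it. */
        static void extract_queries(const char *log, FILE *out) {
            char line[1024];
            const char *p = log;
            while (*p) {
                size_t n = strcspn(p, "\n");
                if (n >= sizeof(line))
                    n = sizeof(line) - 1;
                memcpy(line, p, n);
                line[n] = '\0';
                p += n;
                if (*p == '\n')
                    p++;

                char *q = strstr(line, "Query");
                if (q && (q[5] == ' ' || q[5] == '\t')) {
                    q += 5;
                    while (*q == ' ' || *q == '\t')
                        q++;
                    if (*q)
                        fprintf(out, "%s;\n", q);
                }
            }
        }

        int main(void) {
            const char *sample =
                "050803 10:24:01      1 Connect   root@localhost on testdb\n"
                "050803 10:24:02      1 Query     CREATE TABLE t (id INT)\n"
                "050803 10:24:03      1 Query     INSERT INTO t VALUES (1)\n"
                "050803 10:24:04      1 Quit\n";
            extract_queries(sample, stdout);
            /* prints:
               CREATE TABLE t (id INT);
               INSERT INTO t VALUES (1); */
            return 0;
        }
        ```

        Piping the output back into the mysql client would then replay the statements; anything time- or random-dependent will still differ, as noted above.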

        If there is any consultant who is proficient and has this experience, please get in touch. I would ask my associates in Boston to contact them directly.

        David, see my signature. We should be able to help you. Drop us a note at


        • #5
          …able to manage a server crash (or anything else) and to redirect write queries to the second master server.

          NOTE: this is a master-master circular replication architecture.

          Hi All,

          I read in the MySQL 5 study guide that if you connect to MySQL with the C library, the connection will be a lot faster than connecting with the ODBC driver. How do I connect with the C library? If this is not possible, what other way of connecting can I use that does not go through ODBC?

          Thank you,

          Andrew

          This applies mainly to development. If you're developing the application, use the normal interface to MySQL, not ODBC. If it is a third-party application and it supports both ODBC and the native MySQL driver, use the latter.


          • #6
            Thanks for your reply, Peter.

            I know this is more a development problem than a DBA function, and I hope you can help me with it. My developer keeps telling me that the C library connection is written specifically for .NET connections and doesn't work well with C++ connections (our app is written in C++). All the manuals


            • #7
              and documentation say that the connection works very well with C++. Is this true? Where can I get documentation about this that will help a developer more than a DBA?


              • #8
                Thanks again for your help.

                Andrew

                For .NET you should use the native MySQL Connector/NET, not a wrapper via ODBC.

                Our application is written in C++ and we are looking for a way to connect


                • #9
                  to MySQL that works better/faster than ODBC. ODBC gives us a lot of problems when we run big reports. What can we use?

                  In this case you can simply use libmysql. Your developer is wrong: Connector/NET is a completely different thing.
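                  Since the thread keeps coming back to how a C++ application talks to libmysql, here is a minimal, hypothetical sketch of connecting through the C API. The host, user, password, and database names below are placeholders, and error handling is the bare minimum:

                  ```c
                  #include <stdio.h>
                  #include <mysql.h>   /* from the libmysqlclient development package */

                  int main(void) {
                      MYSQL *conn = mysql_init(NULL);
                      if (conn == NULL) {
                          fprintf(stderr, "mysql_init failed\n");
                          return 1;
                      }

                      /* host, user, password and database are placeholders */
                      if (mysql_real_connect(conn, "localhost", "appuser", "secret",
                                             "appdb", 0, NULL, 0) == NULL) {
                          fprintf(stderr, "connect error: %s\n", mysql_error(conn));
                          mysql_close(conn);
                          return 1;
                      }

                      if (mysql_query(conn, "SELECT VERSION()") == 0) {
                          MYSQL_RES *res = mysql_store_result(conn);
                          if (res != NULL) {
                              MYSQL_ROW row = mysql_fetch_row(res);
                              if (row != NULL)
                                  printf("server version: %s\n", row[0]);
                              mysql_free_result(res);
                          }
                      }

                      mysql_close(conn);
                      return 0;
                  }
                  ```

                  Build with something like `gcc app.c $(mysql_config --cflags --libs)`. The same header and calls compile as C++, which is why a C++ application can use libmysql directly. (This needs a live MySQL server and the client library installed, so treat it as a sketch, not a drop-in.)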


                  • #10
                    Regarding the part where you filter on 'N':

                    This is the part that will degrade your performance. If the only accepted values for this field are 'N' and 'Y' and there is less than 30% variety (for instance, 25% N and 75% Y), then an index on it will be of little use.

                    In that case, having the entire database on the RAM drive will increase performance.

                    If the balance between N and Y is closer to a 50:50 distribution, the index will improve performance.

                    If almost everything is Y with very few N's, it may be best to hash the table, or to maintain a parallel "table of N's" that can be joined to the original when needed.

                    Long-term solutions involve changes to the data structures. In that case, disk overhead is reduced and the RAM drive wouldn't be necessary.