Multiple tables design
  • Multiple tables design

    Hello everyone,

    I am running into performance problems with my database during peak hours, at about 140 qps on average (max: 1,000+ qps).

    It is a dynamic service using a single InnoDB table with many concurrent reads and writes, about 3 million rows and growing.

    It stores the "posts" of roughly 90,000 accounts. My question: would it be a good idea to split the posts into separate tables, e.g. one table per account instead of one giant table?

    Initially the load was CPU-bound rather than I/O-bound, but now I am no longer sure which it is.

    I only wish to know whether it would be wise to split the tables at this point, as I still have other optimizations in progress at the moment. Many thanks.

    - Tan.

  • #2
    The real question is whether your load is I/O-bound (or expected to become so in the near future). If it is, then splitting the data into N tables, turning on the file-per-table feature, and putting the files on different physical drives might help.
    But if you can avoid relying on disk I/O, that is better. Try to redesign your database structure and queries so that most of the actively used data fits in memory and the rest is accessed only occasionally.
    Of course that is not always possible, but since you mentioned "posts" I assume this has something to do with online communication (message boards, blogs, album comments, support tickets, etc.), and old posts usually do not need to be accessed often.
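
    A minimal sketch of the two suggestions above, assuming MySQL 5.x-era InnoDB (the table and column names here are hypothetical, not the poster's actual schema): enable file-per-table so each table gets its own tablespace file that can be placed independently, and periodically migrate cold posts into an archive table so the hot working set stays small enough to fit in memory.

    ```sql
    -- In my.cnf, under [mysqld]: store each new InnoDB table in its
    -- own .ibd file (only affects tables created after it is set):
    --   innodb_file_per_table = 1

    -- Hypothetical archive table with the same structure as the hot one.
    CREATE TABLE posts_archive LIKE posts;

    -- Periodically move posts older than, say, 90 days out of the hot table.
    INSERT INTO posts_archive
    SELECT * FROM posts
    WHERE created_at < NOW() - INTERVAL 90 DAY;

    DELETE FROM posts
    WHERE created_at < NOW() - INTERVAL 90 DAY;
    ```

    Running the migration in a scheduled off-peak job keeps the hot table's pages resident in the buffer pool, which is the "fit in memory" goal described above.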


    • #3
      Thanks Alexey. It was never really an I/O issue; some rarely used queries on non-indexed columns were dragging the server down and slowing most of the simple read queries.

      In the end I altered my application and removed those small infrequent queries. I also changed my primary key to take advantage of the clustered index, and made sure the read queries are covered entirely by indexes.

      Now it performs very well even under many queries. I plan to support five times more traffic with this server, so I really need to keep it performing at its peak with as little load as possible.

      You are correct about what it is: a messaging system with an almost live-chat-like feature, so there are plenty of concurrent reads and writes. If I'm not mistaken, any INSERT/UPDATE on a table also clears all cached queries for that table?

      My concern was the table design: whether a multiple-table design would be better in the long run, since each account's posts would be isolated, just like mailboxes. But I have no experience running 1xx,xxx tables and worry that it might cause more problems.

      I guess it's not easy to find an answer. I may already have it, which is to stick with what I have now. I would still like to hear some opinions, if there are any. Any advice is much appreciated.
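
      The clustered-index change described above can be sketched like this (the schema and names are hypothetical, not the poster's actual DDL). In InnoDB the primary key is the clustered index, so leading it with the account ID stores each account's posts physically together, and a secondary index containing every column a query reads lets that query be answered from the index alone:

      ```sql
      -- Hypothetical messages table: the PK clusters rows by account,
      -- so "all posts for one account" becomes one contiguous range scan.
      CREATE TABLE posts (
          account_id INT UNSIGNED NOT NULL,
          post_id    BIGINT UNSIGNED NOT NULL,
          created_at DATETIME NOT NULL,
          body       TEXT NOT NULL,
          PRIMARY KEY (account_id, post_id),
          -- Covering index: the listing query below touches only these
          -- columns, so InnoDB never has to read the full row.
          KEY idx_account_time (account_id, created_at, post_id)
      ) ENGINE=InnoDB;

      -- Served entirely from idx_account_time;
      -- EXPLAIN shows "Using index" when an index covers the query.
      SELECT post_id, created_at
      FROM posts
      WHERE account_id = 42
      ORDER BY created_at DESC
      LIMIT 20;
      ```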


      • #4
        snecx wrote on Mon, 19 February 2007 19:00

        If I'm not mistaken, any INSERT/UPDATE operation to the table will also clear all the cached queries for that table?

        That's correct. Splitting off isolated chunks of data may help raise the cache hit ratio.

        Quote:

        I guess it's not easy to find an answer. I might even have the answer already, which is to stick with what I have now.


        If it works well, then it's not worth denormalizing.
        MySQL can easily handle tens of thousands of queries per second if you do everything right.
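
        The invalidation behavior confirmed above can be observed directly on MySQL versions that still ship the query cache (it was removed in 8.0); the table name here is hypothetical:

        ```sql
        -- Any write to a table discards every cached result that uses it.
        SHOW STATUS LIKE 'Qcache_hits';                    -- baseline hit counter
        SELECT COUNT(*) FROM posts WHERE account_id = 42;  -- cached on first run
        SELECT COUNT(*) FROM posts WHERE account_id = 42;  -- served from the cache

        INSERT INTO posts (account_id, post_id, body) VALUES (42, 1, 'hi');

        -- The next identical SELECT misses the cache and re-executes;
        -- compare Qcache_hits / Qcache_inserts before and after the write.
        SHOW STATUS LIKE 'Qcache_inserts';
        ```

        This is why splitting rarely-written data away from hot, frequently-written tables can raise the cache hit ratio: writes to one table no longer invalidate cached results for the others.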
