Is PXC suitable for large databases?

  • Is PXC suitable for large databases?

    Hello all,

    We have set up PXC and have been running it in production for a while.
    From the PXC FAQ: http://www.percona.com/doc/percona-x...uster/faq.html
    if a node crashes, it needs to resync all of its data during recovery:
    Q: What if one of the nodes crashes and InnoDB recovery rolls back some transactions? A: When the node crashes, after the restart it will copy the whole dataset from another node (if there were changes to data since the crash). For large databases, it would take a lot of time and network bandwidth to recover.

    In this case (for large databases), is PXC suitable for production use, or should we shard this kind of database into smaller ones?

    Thanks a lot
    --
    stephon

  • #2
    The recovery you will observe is a donation of data from a working node. This happens in one of two ways: from cache (IST) or via backup and restore (SST). Both are automated. My understanding is that if a node recovers from a crash and the joiner's (the recovering node's) diff with the cluster fits within gcache.size, it will be in sync again without intervention, in an amount of time that depends on various factors. Large data plus higher throughput will require tuning gcache.size so that the chance of falling back to SST is minimized.
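
    For reference, gcache.size is set through the Galera provider options in my.cnf. A minimal sketch, assuming a hypothetical 2G cache; size it to cover the writes the cluster accumulates during the longest node outage you expect:

        [mysqld]
        # gcache.size bounds how far back a donor can serve IST; a joiner whose
        # missing transactions still fit in the donor's gcache avoids a full SST.
        # 2G is an illustrative value, not a recommendation.
        wsrep_provider_options="gcache.size=2G"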

    As you're running this in production, my advice is to simulate a failure using something close to your real workload and see how the cluster reacts. It's rarely desirable to be debugging the unknown in the middle of the night. A quick check to run after the restarted node rejoins is sketched below.
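
    A minimal sketch of that check, assuming the standard Galera status variables (the node's error log will also show whether it rejoined via IST or fell back to SST):

        -- Run on the restarted node once mysqld is back up.
        SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';  -- 'Synced' once caught up
        -- Compare the last committed sequence number with a healthy node:
        SHOW GLOBAL STATUS LIKE 'wsrep_last_committed';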
    Last edited by eroomydna; 09-10-2013, 06:27 PM.



    • #3
      Hello eroomydna,
      Yes, we used gcache.size to resolve this issue, and it works fine.
      Thanks for your reply.
      --
      stephon
