Percona XtraDB Cluster Operator and pod volume resizing

  • Percona XtraDB Cluster Operator and pod volume resizing

    Currently in the PXC operator I can specify a volume specification like this:

    Code:
    volumeSpec:
      persistentVolumeClaim:
        storageClassName: local-path
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 6G
    If I modify the "6G" above to, say, "10G" and reapply the configuration, the PVC is not updated, so my storage provider never learns that it needs to resize my disk (assuming I'm using one of the storage providers, like awsElasticBlockStore, that supports volume expansion).
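
    For illustration, here is roughly what the underlying PVC looks like after the reapply (the name below is just my guess at the usual datadir-<cluster>-pxc-N pattern; the point is that spec.resources.requests.storage keeps the old value):

    Code:
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: datadir-cluster1-pxc-0   # example name only
    spec:
      storageClassName: local-path
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 6G                # still the original request after reapplying the CR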

    I thought that perhaps, by scaling the cluster up, the new pods would get new PVCs using the updated volumeSpec, but they do not. This might otherwise have been a valid way of increasing the disk size of nodes in a cluster, except that when scaling down again the finalizer deletes my newer pods, the ones with the larger volumes, first.

    The same applies to my ProxySQL instances, and possibly to the backup volumes as well.

    I can see some documentation for using the operator on OpenShift that requires manually deleting and recreating the cluster to resize a volume. This seems like a very blunt instrument. If the operator listened to my volume spec and could update the PVCs, would it not be possible to have a zero-downtime resize, much like a simple disk resize in a VM? The storage provider may suspend I/O during the resize, but we can ignore that since it's not PXC's problem / under PXC's control.
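
    To be clear about what I'm hoping for: Kubernetes can already expand a bound PVC in place when the StorageClass permits it, so conceptually the operator (or I, manually) would only need to bump the request on the PVC itself, something like this (the PVC name is an example, and this assumes the StorageClass has allowVolumeExpansion: true; I haven't verified how the operator reacts to a manually edited PVC):

    Code:
    # kubectl edit pvc datadir-cluster1-pxc-0   (example name)
    spec:
      resources:
        requests:
          storage: 10G   # bumped from 6G; the provisioner then grows the underlying disk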

    EDIT: It would also be convenient to document which settings a user can change after the fact (like pod size) and which they can't (like, apparently, volumeSpec).

    Thanks,
    Martin
    Last edited by martinfoot; 11-06-2019, 10:43 AM.

  • #2
    Addendum: this assumes we've enabled dynamic provisioning as described here: https://kubernetes.io/docs/concepts/...lumes/#dynamic
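
    For example, a StorageClass along these lines for EBS (the name is just an example; allowVolumeExpansion is the part that matters for resizing):

    Code:
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: gp2-expandable          # example name
    provisioner: kubernetes.io/aws-ebs
    parameters:
      type: gp2
    allowVolumeExpansion: true      # required before a PVC can be expanded in place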
