Hi,
The issue: "I can't restore a backup from an FSB S3 bucket (fr-par-1) to another region (for example, fr-par-2)."
Let me explain more.
We have a cluster with two node groups to handle a legacy application with one pod / one PVC in RWO mode.
To control pod placement we added a toleration like this, so legacy applications without the key "multiaz" can't start in the other zone, "fr-par-2":
```yaml
tolerations:
- key: "multiaz"
  operator: "Equal"
  value: "true"
  effect: "NoSchedule"
```
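For context, a toleration like this only has an effect if the target nodes carry the matching taint; on the fr-par-2 node group that would look something like the following (a sketch, assuming the taint mirrors the toleration above):

```yaml
# Taint expected on the fr-par-2 node group so that only pods
# carrying the multiaz=true toleration can schedule there.
taints:
- key: "multiaz"
  value: "true"
  effect: "NoSchedule"
```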
I can't find any way to set this toleration when Velero starts the restore process.
So the pod in the restore-wait phase can't start without this toleration defined.
I have already added the corresponding taint to the node group, by the way.
I have also added a resource modifier (JSON patch) that applies correctly:
```yaml
version: v1
resourceModifierRules:
- conditions:
    groupResource: statefulsets.apps
  patches:
  - operation: add
    path: "/metadata/labels/changeZone"
    value: "fr-par-2"
  - operation: add
    path: "/spec/template/spec/nodeSelector"
    value: '{ "topology.kubernetes.io/zone": "fr-par-2" }'
  - operation: add
    path: "/spec/template/spec/tolerations"
    value: '[{"key": "multiaz", "operator": "Equal", "value": "true", "effect": "NoSchedule"}]'
- conditions:
    groupResource: persistentvolumeclaims
  patches:
  - operation: replace
    path: "/spec/storageClassName"
    value: "scw-bssd-retain-fr-par-2"
```
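To make the intended effect of those rules concrete, here is a minimal sketch (plain Python, no external libraries) of what the patches do to a StatefulSet pod template; the StatefulSet dict is a hypothetical example, not taken from the actual cluster:

```python
# Hypothetical StatefulSet as a plain dict, before restore.
statefulset = {
    "metadata": {"labels": {"app": "legacy"}},
    "spec": {
        "template": {
            "spec": {
                "containers": [{"name": "app", "image": "legacy:1.0"}],
            }
        }
    },
}

# Apply the "add" operations from the resource modifier rules:
# a changeZone label, a zone nodeSelector, and the multiaz toleration.
statefulset["metadata"]["labels"]["changeZone"] = "fr-par-2"
pod_spec = statefulset["spec"]["template"]["spec"]
pod_spec["nodeSelector"] = {"topology.kubernetes.io/zone": "fr-par-2"}
pod_spec["tolerations"] = [
    {"key": "multiaz", "operator": "Equal", "value": "true", "effect": "NoSchedule"}
]

print(pod_spec["tolerations"][0]["key"])  # → multiaz
```

Note that these rules target `statefulsets.apps`, so only the StatefulSet template is patched; the Pod objects themselves are a separate group resource.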
The PVCs are set up correctly in the new zone "fr-par-2", but the restored pods stay in Pending state because they don't have the toleration key "multiaz:true", and if I kill a pod at this restore stage, the new pod starts correctly in fr-par-2 but with an empty volume (which is expected).
Is it possible to add this ability (tolerations) somewhere in the Velero chart, or in the ConfigMap logic?
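For what it's worth, the Velero Helm chart does expose a `tolerations` value, but as far as I can tell (an assumption worth verifying) it only applies to Velero's own server and node-agent pods, not to the workloads being restored:

```yaml
# Hypothetical values.yaml excerpt for the velero chart.
# These tolerations affect the Velero deployment itself,
# not the pods Velero restores.
tolerations:
- key: "multiaz"
  operator: "Equal"
  value: "true"
  effect: "NoSchedule"
```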
Best regards !