[enterprise-4.9] Issue in file registry/configuring_registry_storage/configuring-registry-storage-osp.adoc #43053
Comments
@mandre @pierreprinetti Would you be able to confirm this? I believe the reference URL is https://docs.openshift.com/container-platform/4.9/registry/configuring_registry_storage/configuring-registry-storage-osp.html.
Yes @maxwelldb, you're correct: since the new backend is RWO, we need to ensure the image registry runs a single replica and that the rollout strategy is Recreate. In environments where Object Storage (Swift) is not available, the cluster-image-registry-operator sets this up automatically for you; however, this would be an issue if someone follows the above procedure to migrate from a Swift backend (RWX) to Cinder (RWO). We should add a note about the limitation when using RWO storage and change the example.
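For reference, a minimal sketch of what an adjusted example might look like when the registry is backed by an RWO Cinder volume. The field names follow the imageregistry.operator.openshift.io Config CR; leaving the claim blank and the comments about sizing are illustrative assumptions, not values taken from the docs:

```yaml
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  managementState: Managed
  # RWO block storage (such as Cinder) can only be mounted on one node,
  # so run a single replica and replace the pod in place on updates.
  replicas: 1
  rolloutStrategy: Recreate
  storage:
    pvc:
      # Leaving the claim blank lets the operator create the default
      # image-registry-storage PVC; an existing claim name could also be set.
      claim: ""
```

The Recreate strategy matters because the default Rolling strategy starts a new registry pod before stopping the old one, which cannot succeed when the volume can only be attached to a single node.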
@mandre Thanks!
The issue with the OSP documentation being incomplete still potentially exists in 4.13: https://docs.openshift.com/container-platform/4.13/registry/configuring_registry_storage/configuring-registry-storage-osp.html. ReadWriteOnce means the storage can only be mounted on a single node, but the node-spreading priority will try to place the registry pods on different nodes. The corruption issue appears to be limited to specific NFS implementations using the Filesystem type; if it does not affect Cinder or CephFS, then ReadWriteMany with the Filesystem type is an alternative to constraining the registry scale when multi-node access is needed. The bare-metal documentation gets this right for RWO storage, and block storage there must also be RWO because it requires the special handling described at https://docs.openshift.com/container-platform/4.13/registry/configuring_registry_storage/configuring-registry-storage-baremetal.html#installation-registry-storage-block-recreate-rollout-bare-metal_configuring-registry-storage-baremetal
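As a rough sketch of the RWX alternative mentioned above, the registry could instead be pointed at a Filesystem-type ReadWriteMany claim (for example on CephFS). The storage class name and size below are assumptions that would vary per cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: image-registry-storage
  namespace: openshift-image-registry
spec:
  accessModes:
    - ReadWriteMany            # allows registry replicas on different nodes
  resources:
    requests:
      storage: 100Gi           # illustrative size only
  storageClassName: ocs-storagecluster-cephfs  # assumed CephFS class name; adjust per cluster
```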
Which section(s) is the issue in?
What needs fixing?
The example is for RWO storage but does not note that rolloutStrategy must be set to Recreate and the replica count must be set to 1.
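If the page is updated, the fix could follow the pattern the bare-metal page already uses, along these lines (assuming the operator's Config CR is named cluster, as in the linked docs):

```console
$ oc patch configs.imageregistry.operator.openshift.io/cluster \
    --type merge \
    -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}'
```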