[KEP-1979] COSI: mirror the Retain policy of PVCs/PVs for BucketClaims/Buckets #4204

Closed · 4 tasks
BlaineEXE opened this issue Sep 11, 2023 · 7 comments
Labels: lifecycle/rotten, sig/storage

Comments

@BlaineEXE

Enhancement Description

  • One-line enhancement description (can be used as a release note): mirror the Retain policy of PVCs/PVs for BucketClaims/Buckets
  • Kubernetes Enhancement Proposal: https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1979-object-storage-support
  • Discussion Link: This is the discussion
  • Primary contact (assignee): @BlaineEXE
  • Responsible SIGs: sig-storage
  • Enhancement target (which target equals to which milestone):
    • Alpha release target (x.y):
    • Beta release target (x.y):
    • Stable release target (x.y):
  • Alpha
    • KEP (k/enhancements) update PR(s):
    • Code (k/k) update PR(s):
    • Docs (k/website) update PR(s):

Please keep this description up to date. This will help the Enhancement Team to track the evolution of the enhancement efficiently.


Discussion prompt

The first v1alpha1 spec defines Bucket deletion policy behavior as follows:

Buckets can be created with one of two deletion policies:

Retain
Delete

When the deletion policy is Retain, then the underlying bucket is not cleaned up when the Bucket object is deleted. When the deletion policy is Delete, then the underlying bucket is cleaned up when the Bucket object is deleted.
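For concreteness, here is a minimal sketch of where that policy surfaces in the v1alpha1 API as I read it; the resource shape and field names below are illustrative and should be double-checked against the COSI types:

```yaml
# Sketch only: names and driver are hypothetical; verify field names against
# the objectstorage.k8s.io/v1alpha1 API before relying on this.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketClass
metadata:
  name: example-bucketclass               # hypothetical name
driverName: example.objectstorage.k8s.io  # hypothetical COSI driver
# Current v1alpha1 semantics as described above:
#   Delete - the backend bucket is cleaned up when the Bucket object is deleted
#   Retain - the backend bucket is left in place, but the Bucket object itself
#            is still removed (the behavior questioned in this issue)
deletionPolicy: Retain
```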

A Rook user pointed out today that this behavior (which is mirrored by lib-bucket-provisioner) does not follow the same pattern as Kubernetes PVCs/PVs. When a Kubernetes PVC is deleted and its PV's reclaim policy is "Retain", the PV remains after the deletion. An admin is then responsible for cleaning it up. Ref: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#retain
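For comparison, the PV side of that flow is driven by the reclaim policy on the PersistentVolume; a minimal, hypothetical example:

```yaml
# Hypothetical PV; the name, driver, and volumeHandle are made up for illustration.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  # With Retain, deleting the bound PVC leaves this PV in the cluster in the
  # "Released" phase; an admin must then delete it or manually make it reusable.
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: example.csi.k8s.io   # hypothetical CSI driver
    volumeHandle: vol-0123abcd   # hypothetical backend volume ID
```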

The CSI spec, being independent of Kubernetes itself, does not seem to have the concept of retention built-in, unless I am looking in the wrong places. Ref: https://github.com/container-storage-interface/spec/blob/master/spec.md#deletevolume

This behavior defined by COSI is not wrong, but I believe it is worth revisiting and discussing in more detail. COSI's intent is largely to mirror usage patterns that are familiar to users/consumers (i.e., PVC/PV) while also mirroring patterns familiar to developers/implementers (i.e., CSI); the COSI design mirrors both the PVC/PV and CSI interfaces.

Should COSI mirror the PVC/PV user-facing usage here, or should it continue with the current behavior?

Arguments for mirroring PVC/PV behavior:

  • COSI is mirroring the deletion policy concept from PVC/PV; it follows logically to mirror the behavior and make use of user/admin familiarity
  • Keeping the Bucket in place provides a visible-to-Kubernetes measure of storage that is consumed on the backend but unused by a frontend
  • Reclaiming a backend bucket's data will be easier for users/admins to orchestrate if they don't have to regenerate a Bucket object for the backend bucket
  • Other arguments?

Arguments for keeping the current behavior:

  • No etcd storage is "wasted" on Bucket objects left behind after deletion when the deletion policy is "Retain"
  • Keeping the Bucket in place may allow users to unintentionally re-use the bucket
  • Other arguments?

Middle-ground options also exist. It could be useful to specify different types of retention policies for COSI. For example, "RetainData" could represent the current behavior, while "Retain" (or "RetainResources") could represent the PVC/PV-mirroring behavior.
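Purely as an illustration of that middle ground (none of these values exist in any spec today), the split might look something like this on a BucketClass:

```yaml
# Hypothetical values, for discussion only.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketClass
metadata:
  name: example-bucketclass
driverName: example.objectstorage.k8s.io
# Delete     - delete the Bucket object and clean up the backend bucket
# RetainData - current "Retain": delete the Bucket object but keep the backend data
# Retain     - PVC/PV-mirroring: keep both the Bucket object and the backend data
deletionPolicy: RetainData
```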

@wlan0 @xing-yang

@k8s-ci-robot added the needs-sig label Sep 11, 2023
@BlaineEXE
Author

/sig storage

@k8s-ci-robot added the sig/storage label and removed the needs-sig label Sep 11, 2023
@haslersn

"Keeping the Bucket in place may allow users to unintentionally re-use the bucket"

With PVs this is solved as follows, AFAIK: they can only be reclaimed after an admin removes the claimRef from the PV. There doesn't seem to be any fine-grained access control, though; i.e., once an admin has removed the claimRef from the PV, it can be claimed by any PVC in any namespace (with the correct volumeName set).
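To illustrate that flow with a hypothetical example: once an admin has cleared spec.claimRef on the Released PV, a PVC in any namespace can bind to it by name:

```yaml
# Hypothetical claim; assumes an admin already removed spec.claimRef from
# the Released PV named "example-pv".
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: reclaimed-claim
  namespace: some-other-namespace   # nothing restricts which namespace may claim it
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  volumeName: example-pv            # bind directly to the pre-existing PV by name
  storageClassName: ""              # disable dynamic provisioning for this claim
```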

@BlaineEXE
Author

It seems to me that the improvement to the spec is to the workflow available to the admin persona. While the improvement is minor, I think it is still impactful: it spares the admin from having to manually create a Bucket in order to re-claim a bucket/blob.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Jan 28, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Feb 27, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot closed this as not planned (Won't fix, can't repro, duplicate, stale) Mar 28, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to the /close not-planned command in the triage bot's comment above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
