
Allow for multiple implementations to reconcile ServiceBindings in the same cluster #114

Closed
scothis opened this issue Sep 28, 2020 · 4 comments

Comments

@scothis (Contributor) commented Sep 28, 2020

Currently we assume there will only ever be a single reconciler for all ServiceBinding resources within a cluster. While there can only be a single definition of a CRD (with webhook conversion between different versions of that resource), different reconcilers can take responsibility for reconciling specific resources within the cluster. As it stands, we force a user to choose a single implementation for the entire cluster.

Pluggable resources in Kubernetes use a class to distinguish between different providers, for example IngressClass and StorageClass. We could employ the same pattern for ServiceBinding.
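
A minimal sketch of what that could look like with controller-runtime, assuming a hypothetical servicebinding.io/class annotation and class name (the spec defines no such field today); each installed implementation would only reconcile the ServiceBindings that name its class, much like Ingress controllers honor IngressClass:

```go
// Hypothetical sketch: filtering ServiceBindings by a class-style discriminator,
// analogous to how Ingress controllers select Ingresses via IngressClass.
// The "servicebinding.io/class" annotation and the class value below are
// illustrative assumptions, not part of the spec.
package sketch

import (
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/predicate"
)

// className is the class this particular reconciler claims; other
// implementations installed in the same cluster would claim different values.
const className = "example.com/service-binding-reconciler"

// classPredicate admits only the ServiceBindings addressed to this
// implementation, so several reconcilers can coexist in one cluster
// without fighting over the same resources.
func classPredicate() predicate.Predicate {
	return predicate.NewPredicateFuncs(func(obj client.Object) bool {
		return obj.GetAnnotations()["servicebinding.io/class"] == className
	})
}
```

A controller built this way would register the predicate via builder.WithPredicates(classPredicate()) when it starts watching ServiceBindings, leaving resources that name another class untouched.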

@navidsh added the RC3 label Sep 29, 2020
@navidsh (Contributor) commented Sep 29, 2020

Discussed during interlock.

@baijum (Contributor) commented Jun 15, 2021

I think the *Class pattern for selecting an implementation is becoming common in Kubernetes SIG-sponsored APIs.
Here is another example:
https://gateway-api.sigs.k8s.io/references/spec/#networking.x-k8s.io/v1alpha1.GatewayClass

(Side note: this API also uses the x-k8s.io API group.)

The above API has multiple implementations, including projects that are part of the CNCF:
https://gateway-api.sigs.k8s.io/references/implementations/#contour

@scothis (Contributor, Author) commented Jun 15, 2021

As we move towards a common community implementation of the spec, I think there is less need to support multiple installs concurrently; there should only be a single implementation installed into a given cluster. The *Class approach works for networking and storage because they are inherent extension points for Kubernetes. For us, the meaningful extension point isn't the spec implementation; it is the Provisioned Service duck type and, to a lesser extent, the Application Projection Mapping.
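
For context, the Provisioned Service duck type only requires that a resource expose the name of its binding Secret at .status.binding.name; a rough sketch of testing for that shape with the unstructured API (nothing here is specific to any one implementation):

```go
// Sketch of the Provisioned Service duck type: any resource qualifies as long
// as it exposes the name of its binding Secret at .status.binding.name.
package sketch

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// provisionedSecretName pulls the Secret name out of an arbitrary resource
// that claims to satisfy the Provisioned Service contract. It returns false
// when the field is absent, i.e. the resource does not (yet) quack like a
// Provisioned Service.
func provisionedSecretName(obj *unstructured.Unstructured) (string, bool) {
	name, found, err := unstructured.NestedString(obj.Object, "status", "binding", "name")
	if err != nil || !found || name == "" {
		return "", false
	}
	return name, true
}
```

Because any resource that passes this check can be bound against, regardless of which controller ships it, the duck type rather than the ServiceBinding reconciler is where providers plug in.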

Keeping this open for now to see what others think, but I suggest we close the issue.

@sbose78 (Contributor) commented Jun 15, 2021

Agreed, Scott.

@nebhale closed this as completed Sep 2, 2021