Fix: Missing Kind in preview mode (yaml/v2) #2904
Conversation
Does the PR have any schema changes? Looking good! No breaking changes found.
Codecov Report
Attention: Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## master #2904 +/- ##
==========================================
+ Coverage 27.55% 27.63% +0.08%
==========================================
Files 53 54 +1
Lines 7818 7862 +44
==========================================
+ Hits 2154 2173 +19
- Misses 5485 5504 +19
- Partials 179 185 +6

☔ View full report in Codecov by Sentry.
Mostly just some small questions and clarifications.
Higher level -- I definitely agree this makes for a better user experience, and how you've implemented things matches my understanding of the provider lifecycle (a single provider per operation etc.). I can't think of a better way to accomplish the desired UX given the current engine behavior.
For completeness, the strongest argument I can think of against this is that it's an experience the user will only see once when a CRD is first installed. Does one preview over the lifetime of a stack warrant complexity that bumps into things like the engine/provider contract?
I don't have enough context to feel strongly one way or the other. I do prefer the UX, but my default position is usually to err on the side of simpler-is-better when brushing up against system boundaries like this. If folks from the Platform side are on board with this, then it seems like a no-brainer.
provider/pkg/clients/clients.go
Outdated
GenericClient:         nil,
DiscoveryClientCached: nil,
RESTMapper:            nil,
CRDCache:              &CRDCache{},
I would make CRDCache private if there's only meant to be one instance accessed via this client set.
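A standalone sketch of that suggestion, keeping the cache behind an unexported field with a read accessor; the stubbed types and accessor name are illustrative, not the PR's actual code:

```go
package clients

// CRDCache and the other client fields are stubbed; this only illustrates
// hiding the cache behind an unexported field.
type CRDCache struct{}

type DynamicClientSet struct {
	// ... GenericClient, DiscoveryClientCached, RESTMapper ...
	crdCache *CRDCache // unexported: exactly one instance per client set
}

// CRDCache returns the client set's single cache instance.
func (c *DynamicClientSet) CRDCache() *CRDCache {
	return c.crdCache
}
```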
@@ -257,7 +257,7 @@ func (c *chart) template(clientSet *clients.DynamicClientSet) (string, error) {
 	installAction.APIVersions = c.opts.APIVersions
 }

-if clientSet != nil {
+if clientSet != nil && clientSet.DiscoveryClientCached != nil {
Is the tl;dr behind clientSet and DiscoveryClientCached that I would use DiscoveryClientCached if I can tolerate stale data, and clientSet directly if I want to hit the live API?
No, the clientSet is a set of clients including a discovery client (which happens to have a built-in cache), a resource client, and a REST Mapper for translating Kinds to API Resources. AFAIK the provider doesn't cache any k8s objects, but it does cache the discovered type information.
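To make that concrete, here is a rough sketch (not the provider's actual code) of how those pieces typically cooperate to turn a Kind into a resource client; the function name is invented for illustration:

```go
package clients

import (
	"k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// resourceClientFor resolves a Kind to a dynamic resource client. The REST
// mapper, backed by cached discovery data, translates the Kind into an API
// resource (a GroupVersionResource); only type information is cached here,
// never Kubernetes objects.
func resourceClientFor(mapper meta.RESTMapper, client dynamic.Interface,
	gvk schema.GroupVersionKind, namespace string) (dynamic.ResourceInterface, error) {
	mapping, err := mapper.RESTMapping(gvk.GroupKind(), gvk.Version)
	if err != nil {
		return nil, err
	}
	if mapping.Scope.Name() == meta.RESTScopeNameNamespace {
		return client.Resource(mapping.Resource).Namespace(namespace), nil
	}
	return client.Resource(mapping.Resource), nil
}
```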
I had a discussion with @Frassle where he expressed support but advised me to submit some tests into pu/pu to harden the contract.
There are a couple of arguments against this (or adding state anywhere in providers for that matter).
But I suspect providers can do much smarter previews if they are stateful, and that's probably worth the downsides of the above.
Just want to mention that the
LGTM
Proposed changes

The yaml/v2 package eagerly resolves kinds to apply a default namespace. This PR addresses a problem with the use case where a kind is installed by one ConfigGroup and then used by another. In preview mode, the kind cannot be found by the latter group because the expected side effect (i.e. installation of a CRD) doesn't occur.

The solution is to maintain a cache of new CRDs in the provider. When the provider executes Check on a CustomResourceDefinition resource, it adds the CRD to the cache. Later, when a component resource must resolve a given kind, it checks the cache. In this way, the cache compensates for the lack of side effects (i.e. API server calls) that would normally allow the kind to be resolved. Overall ordering is established using DependsOn.

It is reasonable to populate the cache during Check because a) Check produces the "planned" state that is needed later without actuating it, and b) Update is called conditionally and thus isn't a reliable alternative.
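As a sketch of the mechanism described above (the real cache's shape and method names may differ), the cache only needs two operations: record a planned CRD at Check time, and look up a kind later when resolving it:

```go
package clients

import (
	"sync"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// CRDCache remembers CRDs seen during Check so that kinds can be resolved
// before the CRD actually exists on the cluster (e.g. during preview).
type CRDCache struct {
	mu   sync.RWMutex
	crds map[schema.GroupKind]*apiextensionsv1.CustomResourceDefinition
}

// AddCRD records the planned CRD when the provider Checks a
// CustomResourceDefinition resource.
func (c *CRDCache) AddCRD(crd *apiextensionsv1.CustomResourceDefinition) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.crds == nil {
		c.crds = map[schema.GroupKind]*apiextensionsv1.CustomResourceDefinition{}
	}
	gk := schema.GroupKind{Group: crd.Spec.Group, Kind: crd.Spec.Names.Kind}
	c.crds[gk] = crd
}

// LookupKind is consulted when a component resource needs to resolve a kind
// that discovery cannot find, compensating for the missing side effect.
func (c *CRDCache) LookupKind(gk schema.GroupKind) (*apiextensionsv1.CustomResourceDefinition, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	crd, ok := c.crds[gk]
	return crd, ok
}
```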
Implementation Details

The provider has three overlapping modes of operation - preview mode, clusterUnreachable, and yamlRenderMode - that affect the initialization and use of clientSet. Previously, when clusterUnreachable was true, the clientSet was left nil. Now we construct the clientSet itself but leave its various fields nil. This change makes it possible to use the CRD cache in all modes, as was necessary to support yamlRenderMode.
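A minimal sketch of this initialization change, assuming the field names shown in the diff earlier in the thread and the CRDCache sketch above; the constructor name and the reachable-cluster path are simplified for illustration:

```go
package clients

import (
	"k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/dynamic"
)

type DynamicClientSet struct {
	GenericClient         dynamic.Interface
	DiscoveryClientCached discovery.CachedDiscoveryInterface
	RESTMapper            meta.RESTMapper
	CRDCache              *CRDCache
}

// makeClientSet always returns a client set, so the CRD cache is available in
// every mode; the live-cluster clients stay nil when the cluster is
// unreachable (previously the whole client set was nil in that case).
func makeClientSet(clusterUnreachable bool) *DynamicClientSet {
	cs := &DynamicClientSet{CRDCache: &CRDCache{}}
	if clusterUnreachable {
		return cs
	}
	// ... initialize GenericClient, DiscoveryClientCached, and RESTMapper
	// from the kubeconfig in the reachable case ...
	return cs
}
```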
Testing

A new integration test is added, called yamlv2, that exercises this scenario.

Manual testing of "cluster unreachable mode" was done by misconfiguring my local environment (to have an invalid kube context). Tested in combination with "yaml rendering mode" and found that the latter works as expected even when the cluster is unreachable.
Related issues (optional)