✨ Populate/update cache on ClusterCatalog reconcile #1284
base: main
Conversation
✅ Deploy Preview for olmv1 ready!
Codecov Report
All modified and coverable lines are covered by tests ✅

@@ Coverage Diff @@
##             main    #1284      +/-   ##
==========================================
+ Coverage   74.67%   74.96%   +0.28%
==========================================
  Files          42       42
  Lines        2515     2516       +1
==========================================
+ Hits         1878     1886       +8
+ Misses        451      446       -5
+ Partials     186      184       -2
_, err = r.Cache.FetchCatalogContents(ctx, existingCatalog)
return ctrl.Result{}, err
One benefit of the previous approach is that catalog pull errors show up in ClusterExtension status.
If we move catalog cache population to a separate controller, do users lose all visibility into why the catalog caches aren't populated?
I wonder if we should store these errors somewhere that is available to the ClusterExtension reconciler, so that we can still propagate them into ClusterExtension status?
I'm considering keeping on-demand cache population. If the cache fetch fails here, we will keep re-queueing until the cache gets populated. If it doesn't, and the failure is permanent, cache population will fail during ClusterExtension reconciliation itself and will be propagated into ClusterExtension status, as it currently works.

It should work, but I don't like it very much because it is not straightforward to understand.

> I wonder if we should store these errors somewhere that is available to the ClusterExtension reconciler, so that we can still propagate them into ClusterExtension status?

Any ideas? We can probably store them somewhere in filesystemCache (e.g. in cacheDataByCatalogName)?
The high-level idea would be for the interface that ClusterExtension uses to remain the same (e.g. it can still return an error).

Yeah, cacheDataByCatalogName seems like the obvious place. We could have cacheData include an Err field?
@joelanford could you please take a look at #1318?

I refactored the client and cache: the client is now responsible for HTTP requests and the cache is responsible for caching only (previously the cache was also making requests). I also added error propagation.

The idea is that here, instead of r.Cache.FetchCatalogContents, we will be calling the client's PopulateCache.

Once it is merged I'll be able to rebase this PR on top of the changes.
force-pushed from 28b3312 to 72e1586
force-pushed from 72e1586 to 4b76985
force-pushed from 71c2382 to 80763b1
Currently we fetch catalog data and populate the cache on demand during ClusterExtension reconciliation. This works, but the first reconciliation after ClusterCatalog creation or update is slow due to the need to fetch data. With this change we proactively populate the cache on ClusterCatalog creation and check whether the cache needs to be updated on ClusterCatalog update.

Signed-off-by: Mikalai Radchuk <[email protected]>
force-pushed from 80763b1 to 482531d
Description

Currently we fetch catalog data and populate the cache on demand during ClusterExtension reconciliation. This works, but the first reconciliation after ClusterCatalog creation or update is slow due to the need to fetch data.

With this change we proactively populate the cache on ClusterCatalog creation and check whether the cache needs to be updated on ClusterCatalog update.

Reviewer Checklist
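The "check if cache needs to be updated" step on ClusterCatalog update could be sketched as a comparison of resolved image references: repopulate only when the catalog's resolved ref differs from what the cache was built from. The field values below are illustrative assumptions, not the actual ClusterCatalog API.

```go
package main

import "fmt"

// needsRepopulate reports whether the cache must be refreshed for a catalog.
// cachedRef is the resolved image reference the cache was populated from
// ("" if never populated); resolvedRef is the catalog's current resolved ref.
func needsRepopulate(cachedRef, resolvedRef string) bool {
	return cachedRef == "" || cachedRef != resolvedRef
}

func main() {
	// Same digest: cache is current, skip the expensive fetch.
	fmt.Println(needsRepopulate("quay.io/op/catalog@sha256:aaa", "quay.io/op/catalog@sha256:aaa")) // false
	// Digest changed: catalog content moved, repopulate.
	fmt.Println(needsRepopulate("quay.io/op/catalog@sha256:aaa", "quay.io/op/catalog@sha256:bbb")) // true
	// Never populated: always fetch.
	fmt.Println(needsRepopulate("", "quay.io/op/catalog@sha256:bbb")) // true
}
```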