docs(handbook): Introduce maintenance description #2944
base: main
Conversation
* the scope of the maintenance
* the date and time of the maintenance
* expected duration of the maintenance
* self-care actions which customers can/should take before the maintenance date
How can customers prevent this? The way our infrastructure is set up influences all of this, and we never get this kind of notice from Heroku et al. Why do we get it from FlowFuse?
It's fundamental to how Kubernetes works.
You can only have a version mismatch of at most two minor versions (e.g. 1.24 to 1.26) between the core Kubernetes API server and the kubelet managing the node the pods are running on.
So to upgrade the Kubernetes version you upgrade the core API first, then you need to create new nodes, and then migrate the running pods to those new nodes. The migration requires stopping the instance on the old node and starting it on the new one.
In theory we could start the new instance before stopping the old one, but that does end up with two instances running for a short while, which requires planning in the flow design (around caching or state management).
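For illustration only, a minimal sketch (assuming the official `kubernetes` Python client and a kubeconfig that can reach the cluster; the skew limit of 2 is taken from the point above) of how the kubelet/API-server skew could be checked:

```python
# Minimal sketch: report the minor-version skew between the API server and
# each node's kubelet. Assumes the official `kubernetes` Python client and
# a kubeconfig that can reach the cluster; the allowed skew of 2 is taken
# from the comment above.
from kubernetes import client, config

ALLOWED_SKEW = 2  # kubelet may lag the API server by at most this many minor versions


def minor(version: str) -> int:
    # e.g. "v1.29.4-eks-..." -> 29
    return int(version.lstrip("v").split(".")[1])


def main() -> None:
    config.load_kube_config()
    api_minor = minor(client.VersionApi().get_code().git_version)

    for node in client.CoreV1Api().list_node().items:
        kubelet_minor = minor(node.status.node_info.kubelet_version)
        skew = api_minor - kubelet_minor
        status = "OK" if 0 <= skew <= ALLOWED_SKEW else "OUT OF SKEW"
        print(f"{node.metadata.name}: kubelet 1.{kubelet_minor}, API server 1.{api_minor} ({status})")


if __name__ == "__main__":
    main()
```

Running it against the cluster lists each node and whether its kubelet is still within the allowed skew of the API server.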
@hardillb My point is that other compute providers just restart, even the big cloud providers. Somehow we don't; instead we regularly send emails to customers asking them to restart instances, and we don't even tell them which ones.
* underlying cluster hosts
* Kubernetes versions
These seem highly predictable; when do we schedule them?
That is true. Right now we are trying to move off the extended-support version. The whole AWS EKS release calendar is available here. Once we catch up with the versions, we can plan migrations in advance.
If I read that correctly, one restart every year would make this hassle go away?
That is most likely true once we reach version 1.31. The upcoming maintenance will bump the version to 1.30 - we will have to run another maintenance before the 25th of July 2025 if we want to stay within standard support (and not pay extra for the extended one).
@ppawlowski In that case I would split this document into "general maintenance" and "Kubernetes Node Upgrades". The latter is predictable, and I suspect we want to do it twice a year?
Also, I suspect all new hosts will use 1.32 like a week after the release on EKS? And we effectively start draining as soon as these nodes go live?
I have to disagree - maintenance work of any kind is, by definition, something that should be planned ahead. This is in contrast to an incident, which should be handled differently. To be fair, I do not see any benefit in splitting the document. Moreover, I tried to structure it in a way that we can easily extend with other maintenance tasks.
> Also, I suspect all new hosts will use 1.32 like a week after the release on EKS? And we effectively start draining as soon as these nodes go live?
No, that is not possible. Workers cannot run a Kubernetes version newer than the one currently running on the control plane. We have to upgrade the control plane first before we can create node groups on the same Kubernetes version.
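As a rough sketch of that ordering constraint (assuming `boto3` with credentials for the target account; the cluster name below is a placeholder), one could confirm that no node group reports a version newer than the control plane before creating new node groups:

```python
# Minimal sketch: verify that no EKS node group runs a Kubernetes version
# newer than the control plane - the constraint that forces the
# "control plane first, then node groups" upgrade order.
# Assumes boto3 with credentials for the target account; the cluster
# name below is a placeholder.
import boto3

CLUSTER = "flowfuse-cloud"  # hypothetical cluster name

eks = boto3.client("eks")
control_plane = eks.describe_cluster(name=CLUSTER)["cluster"]["version"]  # e.g. "1.30"


def as_tuple(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))


for ng in eks.list_nodegroups(clusterName=CLUSTER)["nodegroups"]:
    ng_version = eks.describe_nodegroup(clusterName=CLUSTER, nodegroupName=ng)["nodegroup"]["version"]
    if as_tuple(ng_version) > as_tuple(control_plane):
        print(f"{ng}: {ng_version} is newer than the control plane ({control_plane}) - not allowed")
    else:
        print(f"{ng}: {ng_version} <= control plane {control_plane} - OK")
```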
Co-authored-by: Nick O'Leary <[email protected]>
Description
This pull request introduces the Infrastructure Maintenance page to the handbook. The goal of this page is to provide high-level documentation of the steps required to perform successful maintenance of various components of the FlowFuse Cloud infrastructure.
Related Issue(s)
https://github.com/FlowFuse/CloudProject/issues/624
Checklist