
docs(handbook): Introduce maintenance description #2944

Open

wants to merge 3 commits into base: main

Conversation

ppawlowski (Contributor)

Description

This pull request introduces an Infrastructure Maintenance page to the handbook. The goal of this page is to provide high-level documentation on the steps required to perform successful maintenance of the various components of the FlowFuse Cloud infrastructure.

Related Issue(s)

https://github.com/FlowFuse/CloudProject/issues/624

Checklist

  • I have read the contribution guidelines
  • I have considered the performance impact of these changes
  • Suitable unit/system level tests have been added and they pass
  • Documentation has been updated

@ppawlowski ppawlowski requested a review from knolleary as a code owner January 24, 2025 12:03
@ppawlowski ppawlowski requested a review from hardillb January 24, 2025 12:04
@gstout52 (Contributor)

@ZJvandeWeg

* the scope of the maintenance
* the date and time of the maintenance
* expected duration of the maintenance
* self-service actions that customers can or should take before the maintenance date
Member

How can they prevent this? The way our infra is set up influences all this, and we never get this from Heroku et al. Why do we get this from FlowFuse?

Contributor

It's fundamental to how Kubernetes works.

You can only have a version skew of at most two minor versions (e.g. a 1.24 kubelet against a 1.26 API server) between the core Kubernetes API server and the kubelet managing the node the pods are running on.

So to upgrade the Kubernetes version you upgrade the core API first, then you need to create new nodes, and then migrate the running pods to those new nodes. The migration requires stopping the instance on the old node and starting it on the new one.

In theory we could start the new instance before stopping the old one, but that does end up with two instances running for a short while, which requires planning in the flow design (around caching or state management).
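The skew rule described above can be sketched as a quick compatibility check. This is a minimal illustration, not code from any Kubernetes library; the function names and the two-version window parameter are assumptions for the sketch.

```python
# Sketch of the Kubernetes version-skew rule discussed above: the kubelet
# on a node may lag the API server by a bounded number of minor versions
# (two in the example given, e.g. a 1.24 kubelet with a 1.26 API server),
# but must never be newer than the API server.

def minor(version: str) -> int:
    """Extract the minor component from a 'major.minor' version string."""
    return int(version.split(".")[1])

def kubelet_is_compatible(api_server: str, kubelet: str, max_skew: int = 2) -> bool:
    """True if the kubelet is no newer than, and within max_skew minor
    versions of, the API server."""
    lag = minor(api_server) - minor(kubelet)
    return 0 <= lag <= max_skew

print(kubelet_is_compatible("1.26", "1.24"))  # True: within the two-version window
print(kubelet_is_compatible("1.26", "1.23"))  # False: lags by three
print(kubelet_is_compatible("1.24", "1.26"))  # False: kubelet ahead of API server
```

This is why the upgrade is staged: bumping the API server first keeps every existing node inside the window, and the new nodes are then created at the matching version.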

Member

@hardillb My point is that other compute providers just restart, even cloud providers. Somehow we don't; instead we regularly send emails to customers asking them to restart instances, and we don't even tell them which ones.

Comment on lines +10 to +11
* underlying cluster hosts
* Kubernetes versions
Member

These seem highly predictable, when do we schedule these?

Contributor (Author)

That is true. Right now we are trying to get off the extended-support version. The whole AWS EKS release calendar is available here. Once we catch up with versions, we can plan migrations in advance.

Member

If I read that correctly, that means that one restart every year would make this hassle go away?

Contributor (Author)

That is most likely true once we reach version 1.31. The upcoming maintenance will bump the version to 1.30; we will have to run another maintenance before 25 July 2025 if we want to stay within standard support (and not pay extra for extended support).
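The support-window arithmetic behind this can be sketched as follows. The 14-month figure reflects the EKS standard-support policy as I understand it, and the release date used is a hypothetical example; the AWS EKS release calendar is authoritative for actual dates.

```python
# Back-of-envelope sketch of the EKS support-window reasoning above.
# Assumption: each EKS minor version gets roughly 14 months of standard
# support from its release date, after which extended support is billed
# at a premium. Dates below are illustrative, not from the calendar.

from datetime import date, timedelta

STANDARD_SUPPORT = timedelta(days=14 * 30)  # ~14 months, approximated as 420 days

def standard_support_ends(release: date) -> date:
    """Approximate end of standard support for a version released on `release`."""
    return release + STANDARD_SUPPORT

# Hypothetical version released in late May 2024: standard support would
# run out in mid-July 2025, so the cluster must be upgraded before then
# to avoid extended-support pricing.
print(standard_support_ends(date(2024, 5, 23)))  # → 2025-07-17
```

Under this approximation, staying on the newest available version and upgrading roughly once a year keeps the cluster permanently inside the standard-support window.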

Member

@ppawlowski I would in that case split this document into "general maintenance" and "Kubernetes node upgrades". The latter is predictable, and I suspect we want to do this twice a year?

Also, I suspect all new hosts will use 1.32 like a week after the release on EKS? And we effectively start draining as soon as these nodes go live?

Contributor (Author)

I have to disagree. Maintenance work of any kind is, by definition, something that should be planned ahead; this is in contrast to an incident, which should be handled differently. To be fair, I do not see any benefit in splitting the document. Moreover, I tried to structure it in a way that we can easily extend with other maintenance tasks.

Also, I suspect all new hosts will use 1.32 like a week after the release on EKS? And we effectively start draining as soon as these nodes go live?

No, that is not possible. Workers cannot run a Kubernetes version newer than the one currently running on the control plane. We have to upgrade the control plane first, before we can create node groups on the exact same Kubernetes version.
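The ordering constraint above can be sketched as a simple precondition check. A minimal illustration; the function name is an assumption, not an EKS API.

```python
# Sketch of the upgrade-ordering rule discussed above: a node group may
# never run a Kubernetes version newer than the control plane, which is
# why the control plane must always be upgraded first.

def can_create_node_group(control_plane: str, node_group: str) -> bool:
    """True if the requested node-group version does not exceed the
    control-plane version (compared as (major, minor) tuples)."""
    cp = tuple(int(part) for part in control_plane.split("."))
    ng = tuple(int(part) for part in node_group.split("."))
    return ng <= cp

print(can_create_node_group("1.31", "1.32"))  # False: upgrade the control plane first
print(can_create_node_group("1.32", "1.32"))  # True: versions match
print(can_create_node_group("1.32", "1.30"))  # True: node group may lag behind
```

So new 1.32 nodes cannot appear "a week after the release on EKS" unless the control plane has already been bumped to 1.32; draining can only start after that.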

@ppawlowski ppawlowski requested a review from knolleary February 6, 2025 21:32