
Consider alternatives to Helm #48

Open
craigwalton-dsit opened this issue Jan 7, 2025 · 0 comments
Comments

@craigwalton-dsit (Collaborator) commented Jan 7, 2025

What do we use Helm for?

  • Allowing a user to specify their sandbox requirements via a Helm values.yaml file, which is by design very similar to a Docker compose.yaml file.
  • Generating K8s manifests from the values.yaml file
  • Allowing users to customise how configuration files are turned into manifests (via custom Helm charts)
  • Deploying the resources defined in the manifests
  • Waiting until the resources are deemed ready (using helm install's --wait flag)
  • Deleting the resources defined in the manifests
  • Listing the deployed sandboxes (with helm list; see the sketch after this list)
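
For concreteness, the above currently boils down to a handful of helm CLI invocations driven from Python subprocesses. A minimal sketch (release and chart names are hypothetical placeholders):

```python
import json
import subprocess

RELEASE = "my-sandbox"  # hypothetical release name
CHART = "./agent-env"   # hypothetical chart path


def helm(*args: str) -> str:
    """Run a helm CLI subprocess and return its stdout."""
    result = subprocess.run(
        ["helm", *args], capture_output=True, text=True, check=True
    )
    return result.stdout


# Deploy the resources and block until they are ready (--wait).
helm("install", RELEASE, CHART, "--values", "values.yaml", "--wait")

# List the deployed sandboxes.
releases = json.loads(helm("list", "-o", "json"))

# Delete the resources defined in the manifests.
helm("uninstall", RELEASE)
```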

In future, we may make use of its packaging functionality, e.g. publishing various Helm charts.

I personally find it convenient that we can view and delete Helm releases in K9s. This is often preferable to viewing all Pods, because there can be many Pods per release. It is handy being able to view the values.yaml file from which the release was derived in K9s to make sense of what the release represents (e.g. task name).

What are its limitations?

  • From the k8s_sandbox package, we interact with Helm by spawning subprocesses that invoke the helm CLI binary. We need to tell users to ensure they have it installed, which doesn't seem very slick or polished.
  • It may not be very scalable, because each invocation of helm seems to result in whatever credential command is configured in ~/.kube/config (e.g. aws eks get-token) being executed. In typical evals we only have ~8 concurrent calls to Helm though, so this isn't too concerning.
  • Validation of values.yaml is quite primitive and limited to the values.schema.json file (see the sketch after this list).
  • If users want to customise a small part of the generated manifests (say they want to add a specific label, or change a Cilium network policy), we have to add support in the Helm chart, or they have to write a whole chart themselves.
  • The values.yaml files are static, so for complex or repetitive infra (say a common list of allowDomains, or the same N services that are always deployed) they become verbose and require copy/pasting or code generation.
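
On the validation point: Helm's values.schema.json support amounts to plain JSON Schema validation of the merged values. A rough Python equivalent of what helm lint/install does with the schema, to illustrate how limited it is (file names assumed):

```python
import json

import jsonschema  # pip install jsonschema
import yaml        # pip install pyyaml

with open("values.yaml") as f:
    values = yaml.safe_load(f)
with open("values.schema.json") as f:
    schema = json.load(f)

# Structural checks only: types, required keys, enums. There is no way to
# express cross-field rules here, e.g. that a name referenced in one
# section is actually defined in another.
jsonschema.validate(instance=values, schema=schema)
```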

Can some of these limitations be worked around whilst still using Helm?

  • We could try the pyhelm library
  • We could investigate using the HELM_KUBETOKEN env var to reduce calls to aws eks get-token (see the sketch after this list)
  • We could add values.yaml inheritance support for repetitive values.yaml parts, e.g. allowDomains. Helm already supports this via layered values files (later -f files override earlier ones).
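
A sketch of the HELM_KUBETOKEN idea: fetch the EKS token once and hand it to every helm subprocess, rather than letting each invocation shell out to aws eks get-token. The cluster name is hypothetical, and whether this fully bypasses the exec credential plugin in ~/.kube/config is exactly what we'd need to verify (EKS tokens also expire after roughly 15 minutes, so the cache would need refreshing):

```python
import json
import os
import subprocess

# Fetch a token once (cluster name is a hypothetical placeholder).
token_json = subprocess.run(
    ["aws", "eks", "get-token", "--cluster-name", "my-cluster"],
    capture_output=True, text=True, check=True,
).stdout
token = json.loads(token_json)["status"]["token"]

# Helm reads HELM_KUBETOKEN as a bearer token for the Kubernetes API.
env = {**os.environ, "HELM_KUBETOKEN": token}
subprocess.run(["helm", "list", "-o", "json"], env=env, check=True)
```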

What alternative tools could we consider?

Kustomize - can create manifests with overlays (allowing easy customisation)

Compared to Helm, this won't support:

  • waiting until resources (e.g. Pods) are "ready" (see the sketch after this list)
  • Python integration (it may not have any)
  • tracking which "releases" are installed (we might have to do something like Helm does and track state in a Secret)
  • letting users bring their own "chart" equivalent (it's unclear whether this is possible)
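
On the waiting gap: kubectl can poll readiness itself, so a Kustomize-based flow could approximate helm install --wait. A sketch (overlay path and label selector are hypothetical; we'd have to stamp our manifests with a common label, and waiting on Pods is only an approximation of Helm's readiness semantics, which also cover Deployments, Services, etc.):

```python
import subprocess

# Build and apply the manifests from a kustomize overlay (path hypothetical).
subprocess.run(["kubectl", "apply", "-k", "overlays/my-sandbox"], check=True)

# Approximate helm install --wait by polling Pod readiness.
subprocess.run(
    [
        "kubectl", "wait", "--for=condition=Ready", "pod",
        "--selector=app.kubernetes.io/instance=my-sandbox",  # hypothetical label
        "--timeout=300s",
    ],
    check=True,
)
```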

Configuration languages like CUE, Jsonnet, Pkl - could help us with validation, customisation and managing repetitive infrastructure definitions.

Compared to Helm, these tools won't have:

  • waiting until resources (e.g. Pods) are "ready"
  • tracking which "releases" are installed (see the sketch after this list)
  • K9s listing of which releases are installed
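
On release tracking (this applies to the Kustomize option too): rather than relying on Helm's per-release Secrets, we could stamp every generated resource with a common label and list/delete "releases" ourselves. A sketch with a hypothetical label key:

```python
import json
import subprocess

LABEL = "k8s-sandbox.example/release"  # hypothetical label key

# List the distinct "releases" deployed, helm list style, by querying
# Pods for our label instead of Helm's release Secrets.
out = subprocess.run(
    ["kubectl", "get", "pods", "-l", LABEL, "-o", "json"],
    capture_output=True, text=True, check=True,
).stdout
pods = json.loads(out)["items"]
print(sorted({p["metadata"]["labels"][LABEL] for p in pods}))

# Deleting a "release" becomes a label-selector delete. Note "all" omits
# some kinds (e.g. NetworkPolicies), which would need deleting explicitly.
subprocess.run(["kubectl", "delete", "all", "-l", f"{LABEL}=my-sandbox"], check=True)
```

This wouldn't give us K9s's release view for free, though the same label could presumably be used as a filter within K9s.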