What do we use Helm for?
Allowing a user to specify their sandbox requirements via a Helm values.yaml file, which is by design very similar to a Docker compose.yaml file.
Generating K8s manifests from the values.yaml file
Allowing users to customise how configuration files are turned into manifests (via custom Helm charts)
Deploying the resources defined in the manifests
Waiting until the resources are deemed ready (using helm install's --wait flag)
Deleting the resources defined in the manifests
Listing the deployed sandboxes (with helm list; this lifecycle is sketched below)
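For concreteness, this install/list/delete lifecycle corresponds roughly to the following helm invocations (the release name, chart path, and timeout are illustrative; they are not necessarily what the package actually uses):

```sh
# Deploy: render manifests from values.yaml and create the resources, blocking
# until they are deemed ready (or the timeout is hit).
helm install my-sandbox ./sandbox-chart --values values.yaml --wait --timeout 10m

# List the deployed sandboxes (one release per sandbox environment).
helm list

# Delete the resources defined in the manifests.
helm uninstall my-sandbox
```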
In future, we may make use of its packaging functionality e.g. publishing various Helm charts.
I personally find it convenient that we can view and delete Helm releases in K9s. This is often preferable to viewing all Pods, because there can be many Pods per release. It is also handy to be able to view, in K9s, the values.yaml from which a release was derived, to make sense of what the release represents (e.g. the task name).
What are its limitations?
From the k8s_sandbox package, we interact with Helm via subprocesses to the helm CLI binary. We need to tell users to ensure they have it installed, which doesn't seem very slick or polished.
It may not be that scalable, because each invocation of helm seems to result in whatever credential command is configured in ~/.kube/config (e.g. aws eks get-token) being called. In typical evals we only have ~8 concurrent calls to Helm though, so this isn't too concerning.
Validation of values.yaml is quite primitive and limited to the values.schema.json file.
If users want to customise a small part of the manifests generated (let's say they want to add a specific label, or they want to change a Cilium network policy), we have to add support in the Helm chart, or they have to write a whole chart themselves.
The values.yaml files are static, so for complex or repetitive infra (say, a common list of allowDomains, or the same N services that are always deployed) they're verbose and require copy-pasting or code generation (illustrated in the sketch below).
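As a hypothetical illustration of that last limitation (the field names are illustrative, not necessarily the chart's exact schema), every task's static values.yaml ends up repeating the same blocks:

```sh
# Each task repeats the same service definitions and allowDomains list verbatim.
cat <<'EOF' > task-1-values.yaml
services:
  default:
    image: python:3.12
allowDomains:
  - pypi.org
  - files.pythonhosted.org
EOF
cp task-1-values.yaml task-2-values.yaml   # ...and so on, copy/pasted per task
```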
Can some of these limitations be worked around whilst still using Helm?
HELM_KUBETOKEN env var to reduce calls to aws eks get-token
values.yaml inheritance support for repetitive values.yaml parts, e.g. allowDomains. This is supported by Helm (sketched below).
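A rough sketch of what these two workarounds could look like (the cluster name, chart path, file names, and the jq extraction are assumptions for illustration):

```sh
# 1. Supply a bearer token up front so that each helm invocation can reuse it
#    rather than shelling out to the credential command in ~/.kube/config.
#    (Tokens expire, so this would need refreshing periodically.)
export HELM_KUBETOKEN="$(aws eks get-token --cluster-name my-cluster | jq -r '.status.token')"

# 2. Layer values files: later -f files override earlier ones, so shared,
#    repetitive parts (e.g. a common allowDomains list) can live in a base
#    file that per-task files effectively inherit from.
helm install my-sandbox ./sandbox-chart -f base-values.yaml -f task-values.yaml --wait
```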
What alternative tools could we consider?
Kustomize - can create manifests with overlays (allowing easy customisation)
Compared to Helm, this won't support:
Configuration languages like CUE, JSONnet, Pkl - could help us with validation, customisation and managing repetitive infrastructure definitions.
Compared to Helm, these tools won't have: