Customise default volumes for runner pod. #1382

Open · zonorti opened this issue Jul 12, 2024 · 2 comments

@zonorti (Contributor) commented Jul 12, 2024

Currently /tmp and /home/runner are created as emptyDir volumes for the runner pod.
With provider-heavy configurations this produces significant IO on the system disk of the Kubernetes node.
I am looking for a way to customise those default volumes so they can be backed by memory, but other options could be useful as well.
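
For reference, what I mean by "backed by memory" is a memory-backed emptyDir; on a plain pod spec it looks roughly like this (only a sketch of the Kubernetes side, not a proposed API for this controller; names and sizes are illustrative):

    volumes:
      - name: tmp
        emptyDir:
          medium: Memory      # tmpfs instead of the node's system disk
          sizeLimit: 1Gi      # optional cap; usage counts against the pod's memory
    containers:
      - name: runner          # illustrative container name
        volumeMounts:
          - name: tmp
            mountPath: /tmp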

I am considering creating a PR, but first I would like some feedback. It could be a boolean flag in the spec (use a memory-backed emptyDir), or it could allow redefining the default volumes, i.e. if /tmp or /home/runner are passed in RunnerPodTemplate, don't add them by default.

WDYT?

@dgem (Contributor) commented Sep 20, 2024

As a random person, I'd probably lean towards passing existing mounts into the pod spec via a Helm chart config (if that's viable). That way, as long as it's a mounted filesystem it should work, which feels better than a boolean flag that only offers in-memory or as-is. Basically, avoid the feature flag and do it in Kubernetes config.
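
For example, something along these lines in the chart values might be enough (purely illustrative; the actual key names depend on how the chart exposes the runner pod template):

    # hypothetical values.yaml snippet - key names are made up for illustration
    runnerPodTemplate:
      volumes:
        - name: tmp
          emptyDir:
            medium: Memory    # tmpfs instead of node disk
      volumeMounts:
        - name: tmp
          mountPath: /tmp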

If it's of interest and there is a significant performance issue you can show (I have no idea, but it sounds feasible), then I don't mind looking a bit deeper, though it's been a while and I'm not familiar with the current code. Anyhow, it seems like you already know the code and have ideas on how to do it.

I guess, if you've got a problem you can fix and it would be beneficial to the project, go for it, and thank you!

Hope that helps.

@zonorti (Contributor, Author) commented Sep 23, 2024

@dgem to add more context here: we are running on GKE and we can have 20 runners at the same time.
GKE creates nodes with a 100 GB system disk, and since IOPS and bandwidth are allocated based on the disk size, we don't get much of either.
So those runners hit the limits really fast and also affect other services residing on the same node.

My temporary solution was to add volumes that override the defaults:

    volumeMounts:
      - mountPath: /tmp
        name: temp
      - mountPath: /home/runner
        name: home
      - mountPath: /tmp/<namespace>-<CR name>
        name: memory-volume

So it works for me, but also feels dirty.
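
For completeness, the volume definitions behind those mounts look roughly like this (only a sketch; the size is illustrative, and whether temp and home also move to memory is a separate choice):

    volumes:
      - name: temp
        emptyDir: {}            # backs the /tmp mount
      - name: home
        emptyDir: {}            # backs the /home/runner mount
      - name: memory-volume
        emptyDir:
          medium: Memory        # tmpfs for the IO-heavy path
          sizeLimit: 2Gi        # illustrative cap; counts against pod memory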
