aarch64 multi-arch builds fail due to no disk space left on builder #1554
The volumes get cleaned up by https://github.com/coreos/fedora-coreos-pipeline/blob/ddadc038aa99692b346b422c21ede0436cd55de3/multi-arch-builders/builder-common.bu#L81, which runs daily. But I think what can happen is that if too many jobs fail too quickly, we blow through the 200G limit before we even make it to the next prune.
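For reference, builder-common.bu is a Butane config, so a daily prune there amounts to a systemd service/timer pair. The exact unit in that file may differ; a minimal sketch of such a daily prune in Butane (unit names and schedule here are illustrative, not the actual contents of that file) could look like:

```yaml
variant: fcos
version: 1.4.0
systemd:
  units:
    # Illustrative oneshot service that removes unused podman volumes.
    - name: podman-volume-prune.service
      contents: |
        [Unit]
        Description=Prune unused podman volumes

        [Service]
        Type=oneshot
        ExecStart=/usr/bin/podman volume prune -f
    # Timer that triggers the prune once a day.
    - name: podman-volume-prune.timer
      enabled: true
      contents: |
        [Unit]
        Description=Daily podman volume prune

        [Timer]
        OnCalendar=daily
        Persistent=true

        [Install]
        WantedBy=timers.target
```

With a pair like this, the prune fires once a day regardless of how many jobs have failed in between, which is exactly why a burst of failures can exhaust the disk before the next run.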
The Aarch64 builder consistently complains about a lack of space, particularly around 10am UTC / 12pm BST (London). This additional prune job aims to mitigate the space issues. See: openshift/os#1554
I issued the above since the very same thing hit us again earlier today.
The Aarch64 builder consistently complains about a lack of space. After a brief discussion we decided to increase its size. See: openshift/os#1554 Ref: coreos#1031 (comment)
I redeployed the builder last week with a larger disk size, so we should be good now.
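For context, a redeploy like this on AWS comes down to requesting a larger root volume at launch. A hedged sketch with the AWS CLI follows; the AMI ID, key name, instance type, and device name are placeholders, and the pipeline's actual provisioning tooling likely differs:

```sh
# Launch an aarch64 (Graviton) builder with a 600G root volume
# instead of the previous 200G. All identifiers below are placeholders.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type m6g.4xlarge \
  --key-name builder-key \
  --block-device-mappings \
    '[{"DeviceName":"/dev/xvda","Ebs":{"VolumeSize":600,"VolumeType":"gp3"}}]'
```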
We've been hitting storage issues on the aarch64 multi-arch builder lately, and they're causing our builds to fail with errors along the lines of "no space left on device".
I was able to log into the aarch64 builder today as the builder user and found `/sysroot` at 100% usage. I freed up some space by running `podman volume prune` after noticing that most of the storage space was being used by those volumes. Hopefully this will be mitigated once we redeploy the multi-arch builders on AWS and increase the size of the disk from 200GB to at least 600GB. While not necessary for redeploying the builder, landing coreos/fedora-coreos-pipeline#986 would make it much easier. However, it might be worth exploring whether we can reduce or prevent the number of dangling volumes on the builders.
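As a hedged sketch of that diagnosis and cleanup (the path is from the comment above; the space-usage inspection step is illustrative):

```sh
# Check how full the root filesystem is on the builder.
df -h /sysroot

# Show detailed podman storage usage, including per-volume sizes.
podman system df -v

# Remove all unused (dangling) volumes; -f skips the confirmation prompt.
podman volume prune -f
```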