intermittent failure in ext.rpm-ostree.destructive.container-image #4567
I don't think we're hitting this in coreos/rpm-ostree#4567, but it'd be useful to have a trace message just in case.
I'm hoping this will help us debug coreos/rpm-ostree#4567:

```
[2023-08-30T15:00:16.554Z] Aug 30 15:00:15 qemu0 kola-runext-container-image[1957]: error: Importing: Parsing layer blob sha256:00623c39da63781bdd3fb00fedb36f8b9ec95e42cdb4d389f692457f24c67144: Failed to invoke skopeo proxy method FinishPipe: remote error: write |1: broken pipe
```

I haven't been able to reproduce it outside of CI yet, but we had a prior ugly hack for this in ostreedev@a27dac8. As the comments there say, the goal is to hold the input stream open as long as feasibly possible.
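As an illustration of that idea (a hedged sketch, not the actual a27dac8 patch; `PipeGuard` is a made-up helper): drain the read half of the pipe to EOF before dropping it, so the writing side never observes a premature close.

```rust
// A sketch of the "hold the stream open" idea, not the actual
// ostreedev@a27dac8 patch. `PipeGuard` is a hypothetical helper: it drains
// the read half of the pipe to EOF before dropping it, so the writing side
// (the skopeo proxy) never sees its pipe closed while it still has bytes
// or a final flush in flight.
use std::io::Read;

struct PipeGuard<R: Read> {
    reader: Option<R>,
}

impl<R: Read> PipeGuard<R> {
    fn new(reader: R) -> Self {
        Self { reader: Some(reader) }
    }

    /// Consume and discard any remaining bytes, then drop the reader.
    /// Closing only after EOF means the writer's final writes succeed.
    fn finish(mut self) -> std::io::Result<()> {
        if let Some(mut reader) = self.reader.take() {
            let mut sink = [0u8; 8192];
            while reader.read(&mut sink)? > 0 {}
        }
        Ok(())
    }
}
```

The tradeoff is that the consumer keeps reading even after it has everything it needs, which may be part of why the original hack was considered ugly.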
Discoveries so far:
More generally, it's definitely a race condition; I can sometimes reproduce this. Also of note: kola defaults to a uniprocessor VM, which I think makes the race more likely to surface. I'm quite certain it has something to do with the scheduling of us closing the pipe versus calling FinishPipe; see the sketch below.
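Here is a minimal, self-contained sketch of the shape of that race (an assumed illustration, not the actual rpm-ostree or ostree-rs-ext code): a writer thread is still streaming bytes when the reader drops its end, and the writer's remaining writes fail with broken pipe.

```rust
// A self-contained reproduction of the race shape (Unix-only; an assumed
// illustration, not the actual test). A writer thread streams bytes, the
// reader drops its end early, and the writer's remaining writes fail with
// ErrorKind::BrokenPipe, the same failure mode as the CI log above.
use std::io::{Read, Write};
use std::os::unix::net::UnixStream;
use std::thread;

fn main() -> std::io::Result<()> {
    let (mut tx, mut rx) = UnixStream::pair()?;

    let writer = thread::spawn(move || -> std::io::Result<()> {
        // Simulates skopeo streaming a layer blob.
        for _ in 0..1024 {
            tx.write_all(&[0u8; 4096])?;
        }
        Ok(())
    });

    // The reader consumes only part of the stream, then closes its end,
    // analogous to dropping the pipe before FinishPipe has completed.
    let mut buf = [0u8; 4096];
    rx.read_exact(&mut buf)?;
    drop(rx);

    // Here the failure is deterministic because the writer always outlives
    // the close; in the real test it is a narrow scheduling window, which a
    // uniprocessor VM makes easier to hit.
    match writer.join().unwrap() {
        Ok(()) => println!("writer finished before the close"),
        Err(e) => println!("writer failed: {e} ({:?})", e.kind()),
    }
    Ok(())
}
```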
Moving this to ostreedev/ostree-rs-ext#657
This one is a bit concerning because I think it's been happening more frequently recently. Also, I think we may be running into something related to https://github.com/ostreedev/ostree-rs-ext/blob/bd77743c21280b0089c7146668e4c72f4d588143/lib/src/container/unencapsulate.rs#L143, which is masking the real error.
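To illustrate the masking concern (a hypothetical sketch; `run_proxy` and `run_import` are stand-ins, not the ostree-rs-ext API): when both the proxy side and the import side fail, surfacing only whichever error arrives first can report the downstream broken-pipe symptom instead of the root cause.

```rust
// A hypothetical sketch of the masking problem; run_proxy and run_import
// are stand-ins, not the ostree-rs-ext API. When both sides of the
// pipeline fail, reporting only the first error can surface the
// broken-pipe symptom while hiding the proxy's root cause.
use std::io;

fn run_proxy() -> io::Result<()> {
    // Stand-in for the skopeo proxy task, failing with the real cause.
    Err(io::Error::other("proxy: stream closed before FinishPipe completed"))
}

fn run_import() -> io::Result<()> {
    // Stand-in for the import task, failing with the downstream symptom.
    Err(io::Error::new(io::ErrorKind::BrokenPipe, "write |1: broken pipe"))
}

fn main() {
    // If we returned only the first error we saw, BrokenPipe could shadow
    // the proxy's diagnosis. Keep both and prefer the proxy's.
    match (run_proxy(), run_import()) {
        (Err(proxy_err), Err(import_err)) => {
            eprintln!("import error (likely a symptom): {import_err}");
            eprintln!("proxy error (likely the cause): {proxy_err}");
        }
        (Err(e), Ok(())) | (Ok(()), Err(e)) => eprintln!("error: {e}"),
        (Ok(()), Ok(())) => println!("ok"),
    }
}
```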