Don't pull compose images #88
base: main
Conversation
That pull takes 30-40s per job, which is quite insane when these images rarely if ever change. Of course, if they significantly change it would cause problems, but then we should probably find another way to deal with this, like running a specific tag that is committed in this repo and that we manually update once in a while.
Ref: #73 (comment)
cc @zzak
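For illustration only, a minimal sketch (not the repo's actual compose file; service names and tags are made up) of what pinning backing services to a manually-bumped tag could look like:

```yaml
# Sketch of a compose file with explicitly pinned backing-service tags.
# Service names and versions are hypothetical examples.
services:
  postgres:
    # Pinned tag committed in the repo and bumped manually, so the image
    # reference only changes when we deliberately update it.
    image: postgres:16.3
  redis:
    image: redis:7.2
```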
Doesn't seem to work that well, as now compose is pulling as part of […]. The doc suggests there is an option to disable that:
I suspect we need […]
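Presumably the option in question is compose's per-service `pull_policy`; a rough sketch, assuming `missing` is the value we'd want (pull only when the image isn't already in the local cache):

```yaml
# Sketch: tell compose not to pull images that are already present locally.
# The service name and tag are hypothetical.
services:
  postgres:
    image: postgres:16.3
    pull_policy: missing   # only pull when the image is absent from the local cache
```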
Arf, https://docs.docker.com/compose/compose-file/05-services/#pull_policy suggests it's the default behavior. It's really weird; I'd expect all nodes to already have these images in cache and not need to pull.
Hum, wait:
So each build has a different image for backing services? That seems wrong.
Yes, that's the image created by the […]. Jobs will not-infrequently be the first thing running on an instance, meaning they do need to pull everything, but the rest of the time they'll be building atop an already-cached set of layers and just pulling the final ref-specific layer(s).
Right, I figured that out a bit later, but it confused me because I wasn't expecting compose to deal with that (but again, I'm super unfamiliar with compose). It seems like my PR works as intended, just maybe not quite as effectively as I hoped? What do you think?
IIRC the explicit pull just separates the pull of the "build image" from the dependencies (postgres or whatever) into separate groups in the log output, so they get their time accounted separately. I don't think it adds time, so I think it's just a small cost (extra bit of config) for an even smaller benefit (extra log line) -- I'm not attached to it.
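For context, a sketch of what that explicit plugin-level pull looks like in a pipeline step, as I understand the docker-compose-buildkite-plugin's `pull` option (the plugin version, label, and service names here are made up):

```yaml
# .buildkite/pipeline.yml sketch; plugin version and service names are hypothetical.
steps:
  - label: "tests"
    plugins:
      - docker-compose#v4.16.0:
          run: app
          # Pre-pull the backing services explicitly so their pull time is
          # reported as its own group in the job log.
          pull:
            - postgres
            - redis
```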
It depends. When you […]. But agreed that in most cases it shouldn't take that long, unless the image changed. I guess I overlooked that this pull included not only the "services" image but also the main "ruby" image, which changes way more frequently. To really optimize this, we'd need to make these images change less often (e.g. use a digest of the […]). It's quite a bit more involved.
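To make the digest idea above a bit more concrete, a sketch under the assumption that the tag is derived from a hash of the relevant inputs; `LOCKFILE_DIGEST`, the registry, and the image name are all invented for illustration:

```yaml
# Sketch: key the image tag on a content digest instead of on every build,
# so the reference (and therefore the pull) only changes when the inputs do.
services:
  app:
    # LOCKFILE_DIGEST is a hypothetical variable, e.g. a checksum of the
    # lockfile computed earlier in the pipeline.
    image: registry.example.com/rails-ci:${LOCKFILE_DIGEST}
```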
We do already use a lock-only layer. If we stopped the image there, we'd have to clone the repo on the workers, and last time I tried that seemed to come out slower (given we're semi-often building on a clean machine).
Yeah, you're right that it depends a lot on many factors such as this one. Maybe it was just my builds at some point, but they all seemed to pull for 30-40s; I'm pretty sure we could clone faster than that. But I was more in search of a quick win, rather than willing to rearchitect the whole thing.
Yeah, the docker pull definitely takes a lot longer than I'd expect... but IIRC so does the git clone 😕 (I do, as is my wont, have vague long-term plans to improve things here.)
FYI: buildkite-plugins/docker-compose-buildkite-plugin#419