bootc integration tracker #22785
Keep in mind I have not looked at bootc at all. (I know what it is broadly speaking, but not what features it actually needs.) Also, has this been discussed with other podman maintainers and am I just out of the loop, or is this your starting point?
I do not understand this at all, what is
I am not sure what the end solution should look like, but I really dislike the idea of a special bootc command to handle opinionated communication between both tools. I much prefer public interfaces that can be used by anyone, without any bootc-specific things in podman. Of course I am open to adding new interfaces that bootc needs, but they should be restricted to things that cannot be done on the bootc side or are too complex to do there. If we add code in podman that depends on bootc while bootc depends on podman, we end up with a circular dependency, which is a real headache to work with.
I don't think this is a great reason to move code into podman. Sure, short term it might help, but long term people need to get used to Rust anyway.
This is the starting point 😄 I try to default to public async discussion.
I guess I'm seeing a few levels; we could treat pinned images as if they have a running container (even if technically they don't). Maybe that'd be a good way to do it; create "dummy/virtual" stopped containers for pinned images, that the admin could explicitly
Yes, fair. For the image pinning I think we can define a standard interface, right? This also relates to openshift/enhancements#1481, which IMO should have been in podman aka containers/* from the start.
To elaborate on this (I realize the context wasn't obvious in my initial description; there are a lot more details in containers/bootc#559), in a nutshell:
The key advantage here is that we avoid a potentially long latency after a reboot into the new OS when we download new containers. This problem exists today in the real world in OpenShift: etcd is a static pod defined in
I tend to agree with @Luap99 on not doing specific bootc integrations. Feature requests, sure, but I do not want to evolve a specific API for bootc. It sounds like what's being asked for here is some kind of image pinning - a way to ensure an image is not pruned (and presumably not removed without …). It would be on bootc to unpin the images for the old root when it gets removed, though. Probably not a problem, I'd imagine?
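Purely to illustrate the kind of generic pinning interface being discussed, here is a hedged sketch of what it could look like from the CLI. Neither subcommand exists in podman today, and the image name is a placeholder:

```shell
# Hypothetical: protect an image from prune/rmi while a (bootc) root still references it
podman image pin quay.io/example/os-payload:v1

# Hypothetical: bootc would drop the pin once the old root is removed
podman image unpin quay.io/example/os-payload:v1
```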
@cgwalters Thanks for the details. I think there are several issues with such an approach. First, podman has no idea about the quadlet files; quadlet parses them and creates systemd units with podman commands in them, which is a very clear separation of concerns. As such I don't want podman to ever parse the files directly. However, I could imagine adding a special quadlet subcommand/flag that outputs all the known images, which could then be passed to podman pull. If we want to get fancy, we could even make podman pull read such a list from stdin so one could easily pipe the output from quadlet into it (sketched below). The main advantage is that quadlet could be executed against a mount of the new rootfs while the podman pull command runs in the currently active rootfs. The podman logic is quite fragile, and executing it from different roots (like you do right now) will likely cause a lot of unwanted side effects, e.g. from your PR
As such I strongly recommend that you never execute podman from two different roots; you would need to make sure all tmp paths are available as well, not just /var, if you try this. And if the new root changes containers.conf or other config, things can end up in a very bad state. Overall, something that is not supportable in any way IMO.

Second, the "pull all quadlet images and never prune them" part is also complicated, because technically you can have these files but not have them enabled. And given these are fully fledged systemd units, you can have them hook into other units, etc., so knowing whether one is started at boot or not is not easy at all, and I doubt we would like to pre-pull images that are never used. Likewise, we shouldn't prevent images from being pruned just because there is a quadlet file referring to them; if the service was started once but then is never used again for a long time, the image would just eat up space for nothing.
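A minimal sketch of the pipe-based interface suggested above. The `--list-images`/`--rootfs` quadlet flags and a stdin mode for `podman pull` are hypothetical, shown only to illustrate the proposed separation of concerns (the quadlet binary path is the typical installed location):

```shell
# Hypothetical flag: have quadlet print every image referenced by the unit files
# it finds under the staged root (the mount path is a placeholder)
/usr/libexec/podman/quadlet --list-images --rootfs /run/bootc/staged-root \
  | podman pull --stdin   # hypothetical: read one image reference per line
```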
I understand that there is an internal code separation, and that somewhat leaks out to users in how it basically boils down to a bunch of
Improved image pinning is clearly something that can be made generically applicable. However, it's just one of several things, which is why I started out with a more generic issue; we can also just try to tackle them separately. bootc is inherently doing some container things, but for the booted host - we share code today indirectly via skopeo, but that's pretty ad hoc.
The bootc code is always executing podman from the running system, just looking for the unit files in the new root.
Yes, this is one reason I made the binding opt-in. However, in practice I think this would be enough of a corner case that people may also be fine with a global opt-in, with the ability to do conditional opt-outs.
Again I think we'd require opt-in, so that's not a concern.
Ok, this is good; not sure why you would get read-only /etc problems then? Either /etc is always read-only, so any podman command fails, or it is read-write, and then it should just work? (Note you can change the default network config dir to another location in containers.conf of course, but I guess this specific issue is not really relevant for this context here.)
Yeah, any opt-in works for me. I find your idea to pre-pull the images for the new root to minimize downtime very good, as this matches what we do with podman auto-update: pull the new image, then restart the service.
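For context, a rough sketch of that existing auto-update flow. The label and commands below are real podman features, but the image name is a placeholder, and auto-update only acts on containers running under systemd units:

```shell
# The systemd/quadlet-managed container opts in via the autoupdate label,
# e.g. (placeholder image):
#   podman run -d --label io.containers.autoupdate=registry quay.io/example/app:latest

# Preview which systemd-managed containers would be updated
podman auto-update --dry-run

# Pull newer images and restart the corresponding systemd units
podman auto-update
```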
Because bootc-image-builder (a separate project) is basically running the container images in a custom way (not via podman, but via bubblewrap), and it seems to default to a read-only rootfs as opposed to a writable overlayfs. That said, there's some code somewhere in podman or maybe netavark (didn't dig yet) that basically unconditionally does a mkdir (of the network config dir).
Sure, it is fixable, but there is a ton of initialization work done for every podman command, so this isn't a bug IMO. Every podman command creates a runtime internally that also has to initialize the network code. The fact that we pre-create these directories is mostly a historical thing, as we used to have the lockfile stored there. It is definitely possible to move the mkdir calls later in the stack, only when they are needed, at the cost of doing the mkdir every time we create a network. Not bad, but a trade-off overall, and if we start writing in another place we now have to remember to do the mkdir there as well. So yes, we can fix this specific case for the network code, but there is a lot more initialization code that runs for every podman pull command no matter whether it is needed or not.
I have trouble following the conversation, but I assume the following is the problem we want to solve(?):
Assuming that is the problem we want to solve, then I would recommend considering podman auto-update.
podman auto-update does not work for this use case, I think: the described scenario assumes quadlet files shipped as part of the OS image, i.e. in /usr/share/containers/systemd/, that reference new images with different tags. auto-update only works if the exact same image reference is updated on the registry side.
That is a good point, thanks!
cc https://github.com/containers/bootc/pull/215/files#r1631710498 in case people here have thoughts on that
containers/bootc#724 (comment) is the kind of big-picture thing I was thinking about that comes to the fore as we¹ start to intersect the projects more. ¹ Well, so far mostly I, but hopefully that changes...
Feature request description
👋 The https://github.com/containers/bootc project is intending to tie closely to podman. We actually have containers/bootc#215, which we queued up to make this an explicitly hard dependency.
I was working on containers/bootc#559 and it'd probably be better to have most of this logic live in podman. For that feature it'd be something like

`podman pull --all-images-from-root <dirfd>`

where `<dirfd>` is an `openat2()` fd for the mounted new root.

Actually when we do this we probably need to design something more general where images can be "pinned", in this case by... hmm, well, we could just teach podman to ask bootc what active roots there are? So it'd be denied by default to `podman rmi` an image referenced in a root. Though there are some fun details here around how possibly-floating tags behave.

Anyways, implementing stuff like this is going to start to involve cross-calling between podman and bootc in general. Here's my initial thought: we add something like `podman bootc-internals`, which is intended to be a hidden implementation detail of the backend of some things that are logically bootc features to start.

(One side challenge I'm trying to handle here is that while so far bootc is all Rust, I think as folks here know, there are a lot more people who can write Go, for better or worse, and having "land the code in podman" as a generic escape hatch might help... that has its own tradeoffs of course.)
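A rough sketch of how the proposed flag might be driven from the bootc side, purely to illustrate the shape of the interface. The flag does not exist today, the mount path is a placeholder, and the actual proposal passes an `openat2()` dirfd rather than a path:

```shell
# Hypothetical: the staged (not yet booted) deployment's root is already mounted here
STAGED_ROOT=/run/bootc/staged-root

# Hypothetical flag: walk the staged root, find referenced images, and pull them
# into the *currently running* system's storage before rebooting
podman pull --all-images-from-root "$STAGED_ROOT"
```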
Suggest potential solution
A new `podman bootc-backend` API entrypoint.

Have you considered any alternatives?
We could of course try to make things pluggable on either end...in theory of course, some of what I'm talking about here could apply to other image-based update systems. And in theory, bootc could learn to speak to docker too...
But of course, the tradeoff there is a lot more complexity.