Introduce bootc-owned container store, use for bound images #724
Conversation
I've got this far:
So...I need to dig into additionalimagestores here apparently.
SELinux would require these directories to be labeled correctly.
Does it work with […]? Update: Ah, I see that's what the PR already does ✅
@cgwalters and I had a quick meeting yesterday where we said that […]
(force-pushed from 9809224 to bbb3fc1)
It's actually using skopeo right now (there's some unused podman code) because podman doesn't support copying between two […]. Anyways, I figured it out...it was a bad interaction between […]. Anyways, with this latest […] things work for me!
So basically when we do this we'll need to tell people using bound images to specify […]
I would add it. That avoids duplicating the images in case non-bound users refer to it.
I would suggest we do it by default. I think […]
(force-pushed from ea78d29 to d2eddef)
OK, this one is getting farther. Still TODO: […]
(force-pushed from 2fe0c9f to 75f6523, then to a1247fa, then to 8865a02)
OK, I think this is good to review/merge. I've fixed the TODOs above. Though there's clearly a lot more follow-up...for one thing I need to update the filesystem internals docs to talk about this, as it's suddenly a big new thing vs base ostree. Bigger picture though, this should all be very very low risk if one is not enabling any bound container images, which are also still classified as experimental. But, we want people to try this out, so getting it shipped in at least our continuous builds would help.
Closes: containers#721

- Initialize a containers-storage: instance at install time (that defaults to empty)
- Open it at the same time we open the ostree repo/sysroot
- Change bound images to use this

We are *NOT* yet changing the base bootc image pull to use this. That's an obvious next step (xref containers#215) but will come later.

Signed-off-by: Colin Walters <[email protected]>
(force-pushed from 8865a02 to 0527ca9)
We are getting close to 9.5; should we cut another release for this?
lgtm
In the `.container` definition, you should use:

```
GlobalArgs=--storage-opt=additionalimagestore=/usr/lib/bootc/storage
```
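For context, here's a minimal sketch of what a full `.container` quadlet unit using that option might look like; the unit name, image, and everything besides the `GlobalArgs=` line are illustrative assumptions, not part of this PR.

```
# /etc/containers/systemd/my-bound-app.container (illustrative sketch)
[Unit]
Description=Service backed by a logically bound image

[Container]
# Hypothetical image name
Image=quay.io/example/my-bound-app:v1
# Let this podman invocation see the bootc-owned store as an additional (read-only) image store
GlobalArgs=--storage-opt=additionalimagestore=/usr/lib/bootc/storage

[Install]
WantedBy=multi-user.target
```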
Couldn't we globally configure the additional image store via containers-storage.conf? That would considerably improve the UX.
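For reference, the global variant being suggested would presumably look something like this sketch of `/etc/containers/storage.conf`; the driver and everything other than the store path are illustrative assumptions.

```
# /etc/containers/storage.conf (sketch)
[storage]
driver = "overlay"

[storage.options]
# Make the bootc-owned store visible to every podman invocation
additionalimagestores = [
  "/usr/lib/bootc/storage",
]
```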
Yes, I think it's a possibility. That said, changing the storage configuration without having containers/storage#1885 is...a bit ugly.
There are a few other things that have me slightly wary:
- All of this code is basically a no-op unless bound images are in use, but if we added an additional store it would mean every `podman` invocation would see it...it'd be empty, but it would be a global behavioral change
- There's been some lingering concerns I've seen in the past around what happens if...
Yeah ok, I think this is a near showstopper for globally (i.e. in `storage.conf`) setting additionalimagestores (hereafter: AIS) as currently defined today.
For the current bootc logically bound images (hereafter: LBI) design, we only opt in individual podman invocations to use the store, and every image there is readonly. It cannot change or break any non-LB images.
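As a rough sketch of that per-invocation opt-in (along the lines of the `GlobalArgs=` suggestion above; the image name is made up, and this is not necessarily the exact command bootc runs):

```
# Opt a single podman invocation into the bootc-owned store
podman --storage-opt=additionalimagestore=/usr/lib/bootc/storage \
    run --rm quay.io/example/my-bound-app:v1
```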
But when globally configuring an AIS, there are two actors:
- bootc
- podman

These operate independently of each other, but every podman invocation will see the bootc AIS.
Take the situation where there are two different "FROM <somebase>" images: one is an LBI and another is free-floating (could be managed by kube/ansible/whatever). If the bootc AIS happens to have the layer(s) for <somebase>, the `podman pull <derived2>` for the non-LBI (floating) image will reuse the common base. Which...makes sense!

Except today the bootc bound images logic will prune images that are no longer referenced.

And yes, I just tested this, and things explode (as you'd expect) in the scenario where a lower read-only layer disappears for an image/container that lives in `/var/lib/containers`.

(Note we don't have the inverse problem, because when bootc operates on its storage it uses `--root` and should not be looking at or touching `/var/lib/containers`.)
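For concreteness, here is a rough sketch of the failure sequence described above, assuming a globally configured AIS and made-up image names:

```
# derived2 shares <somebase> with an LBI; with a global AIS, podman reuses
# the base layers already present in /usr/lib/bootc/storage instead of pulling them
podman pull quay.io/example/derived2:latest

# Later, bootc garbage-collects the LBI that is no longer referenced by any
# deployment, removing those shared lower layers from its store.

# The floating image in /var/lib/containers now references read-only layers
# that no longer exist, so using it fails:
podman run --rm quay.io/example/derived2:latest
```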
I guess of course, bootc's GC logic could learn to take into account the `/var/lib/containers` store state...but we're starting to "cross the worlds" with stuff like that.
Now, one might be thinking...isn't the real fix for this to have one storage by default, i.e. on bootc systems podman and bootc default to having the same storage for images (underneath `/sysroot`, though containers would probably still make sense under `/var`)?

And yes, maybe, but the thing that terrified me about going down that path is that typing "podman ..." has the potential to break the operating system. There's lots of docs talking about `podman system reset` and the like.

Could we make it reliable? Yes, clearly. But it'd require, I think, a lot of careful thought, code auditing, cooperation, and integration.
It would be nice if podman had a way to recover from it, i.e. pull down the missing image, but I am not sure the missing image/layer would still exist, or that there would be a way to discover where it exists.
We had agreed to create an image pinning mechanism for you that was going to avoid the reset problem, no? I thought that was the way we were going?
I am not aware of any commitment from anyone to work on image pinning on the podman side.
That said again, I should be very clear that my thoughts on this design changed as we got into the implementation. My original thought was basically "bootc is just running podman pull, how hard can it be Michael? 1 week?"
But yeah...having bootc touch `/var` (`/var/lib/containers`) just blows up our story around state, messes with transactional updates, partitioning, etc. So that led me rapidly down this path of having a bootc-owned storage.
Also don't we want the ability to roll back versions of images?
But bottom line, we have the same issue if anyone sets up additional stores: if an image is used from an additional store and gets removed, then the container will no longer work. In RHIVOS we thought about this and came to the conclusion that we would know which images were being used, so we would know which images to remove. Also, containers are ephemeral; they did not survive reboot.
> Also don't we want the ability to roll back versions of images?
bootc retains all LBIs that are referenced from any deployment, i.e. when you upgrade but then roll back, you are guaranteed that the LBIs that are referenced via the `.image`/`.container` files are still there.

(However, there are currently ill-defined semantics around how LBIs work with floating tags, but basically "don't do that".)
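As a hedged illustration, a bound-image reference might be a quadlet `.image` unit along these lines (the name and digest are placeholders; pinning by digest sidesteps the floating-tag ambiguity mentioned above):

```
# Illustrative .image quadlet, e.g. referenced via /usr/lib/bootc/bound-images.d
[Image]
Image=quay.io/example/my-lbi@sha256:<digest>
```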
Fallout in #747
The previous change in containers#724 broke two important scenarios:

- Installing with Anaconda
- Upgrades from previous states

Closes: containers#747

Signed-off-by: Colin Walters <[email protected]>