Initial implementation to create a single OCI image #4
Conversation
/cc @cgwalters in case you'd like to take a look also, thanks
cmd/create.go
Outdated
err = runCMD(ostreeEncapsulateBackupCMD)
// Get the current ostree deployment name booted and save it
bootedOSName := fmt.Sprintf(
	`nsenter --target 1 --cgroup --mount --ipc --pid -- rpm-ostree status -v --json | jq -r '.deployments[] | select(.booted == true) | .osname' > /var/tmp/booted.osname`)
BTW there is https://github.com/coreos/rpmostree-client-go/
Also unrelated nit but predictable temporary filenames are a CWE: https://cwe.mitre.org/data/definitions/377.html
Using the client-go bindings would gather the info over a pipe, avoiding a temporary file entirely. But short of that, I'd say instead to create the temporary file on the Go side via https://pkg.go.dev/os#CreateTemp
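A minimal sketch of what the os.CreateTemp route could look like, assuming the same nsenter'd rpm-ostree status call as in the diff above; the function name and file pattern are illustrative, and the jq/osname extraction could then happen on the Go side instead of inside a shell pipeline:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// bootedStatusToTempFile lets the OS pick an unpredictable temporary file
// name via os.CreateTemp instead of hard-coding /var/tmp/booted.osname,
// and streams the command output into it without a shell redirect.
func bootedStatusToTempFile() (string, error) {
	f, err := os.CreateTemp("", "rpm-ostree-status-*.json")
	if err != nil {
		return "", err
	}
	defer f.Close()

	cmd := exec.Command("nsenter", "--target", "1", "--cgroup", "--mount",
		"--ipc", "--pid", "--", "rpm-ostree", "status", "-v", "--json")
	cmd.Stdout = f // stdout goes straight into the temp file
	if err := cmd.Run(); err != nil {
		return "", err
	}
	return f.Name(), nil
}

func main() {
	path, err := bootedStatusToTempFile()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("status written to", path)
}
```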
@cgwalters thanks for the comment.
I did a quick test with the proposed client-go binding and I'm seeing the following issue -> <nil> exec: "rpm-ostree": executable file not found in $PATH
I'm not sure if I'm missing something here, but rpmostree-client-go (in its current implementation) doesn't seem to be able to run commands directly in the host namespace, so I'm not sure it can be used for the use case this tool is targeting.
Yeah, I think we could add support to that project for doing the nsenter, basically so it can run from a container.
Today though, the way the MCO works (using this library) is that it chroot()s to the host, but that has its own major tradeoffs.
@cgwalters in essence, I borrowed code from rpm-ostree-client and lifted some parts with the runInHostNamespaces function for this tool.
As suggested, I've refactored all rpm-ostree commands in e19c750 to use the new rpm-ostree-client.go and to pipe the gathered info, avoiding temporary files.
If the work done in 519e5d3 is interesting for the rpm-ostree-client go binding, I'd be happy to contribute to that project by adding support for running from a container with nsenter.
cmd/create.go
Outdated
err = runCMD(ostreeParentChecksumCMD)
// Execute 'copy' command and backup mco-currentconfig
backupOriginCMD := fmt.Sprintf(
	`nsenter --target 1 --cgroup --mount --ipc --pid -- cp /ostree/deploy/%s/deploy/%s.origin %s`, bootedOSName_, bootedDeployment, originFileName)
Also tangentially related, generating shell script like this is https://cwe.mitre.org/data/definitions/78.html in the general case.
(I've been using https://docs.rs/xshell/latest/xshell/ in some of my Rust code, it's great because Rust macros allow the ergonomics of "mini-languages" like this that correctly generate shell script while quoting input arguments)
But in Go there's little else to do besides passing a []string to https://pkg.go.dev/os/exec#Command
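For example, the copy in the diff above could be expressed with each value as its own argv element, so nothing is ever parsed by a shell (a sketch only; the parameter names mirror the variables in the diff and the values in main are illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
)

// backupOrigin performs the same copy as backupOriginCMD above, but passes
// every value as a separate argument to exec.Command instead of formatting
// a shell command line.
func backupOrigin(bootedOSName, bootedDeployment, originFileName string) error {
	src := fmt.Sprintf("/ostree/deploy/%s/deploy/%s.origin", bootedOSName, bootedDeployment)
	return exec.Command("nsenter",
		"--target", "1", "--cgroup", "--mount", "--ipc", "--pid", "--",
		"cp", src, originFileName,
	).Run()
}

func main() {
	// Illustrative values only.
	err := backupOrigin("rhcos", "example-deployment-id", "/var/tmp/backup/mco-currentconfig.origin")
	if err != nil {
		fmt.Println("backup failed:", err)
	}
}
```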
On another topic, btw, it'd also probably make sense to factor out the nsenter invocation into a helper function like runInHostNamespace(args ...string).
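Something along these lines, for instance (a sketch only; the nsenter flags are copied from the existing commands, and returning the raw output keeps temporary files out of the picture):

```go
package main

import (
	"fmt"
	"os/exec"
)

// runInHostNamespace prefixes the nsenter incantation once and forwards the
// remaining arguments unchanged, returning the command's raw output so
// callers can parse it in-process.
func runInHostNamespace(args ...string) ([]byte, error) {
	nsenterArgs := append(
		[]string{"--target", "1", "--cgroup", "--mount", "--ipc", "--pid", "--"},
		args...)
	out, err := exec.Command("nsenter", nsenterArgs...).Output()
	if err != nil {
		return nil, fmt.Errorf("nsenter %v: %w", args, err)
	}
	return out, nil
}

func main() {
	out, err := runInHostNamespace("rpm-ostree", "status", "-v", "--json")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("got %d bytes of status output\n", len(out))
}
```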
const containerFileContent = `
FROM scratch
COPY . /
COPY --from=ostreerepo . /ostree/repo
Ah I see. Hmm...so one definite downside of this is that it will defeat the content-addressed layers in our bootable base image, effectively reducing everything to one large layer. Not fatal...but not ideal either.
So today what would probably work is to do this:
FROM <rhel coreos>
COPY etc.tar.gz /tmp/etc.tar.gz
COPY var.tar.gz /tmp/var.tar.gz
(Or make it not a tarball and just use /tmp/etc.)
Then, on the node, this image could be used with:
rpm-ostree rebase $oneimage
The rebase command should warn-and-drop all the content in /tmp right now.
Then, we'd need to pull it again via podman today (yes, this is unfortunate, but we're not yet at unified storage) and extract the files we need from /tmp/etc and /tmp/var.
And then the last step would be one more rpm-ostree rebase $realbase to drop the extra layers and go back to the real base image.
Now...a while ago I did coreos/rpm-ostree#3961, which solves a very similar problem and runs on every boot from an old (pre-4.12) bootimage where rpm-ostree is too old to natively understand containers. There we actually do the pull from podman and do what you're aiming to do here with ostree pull-local from the privileged container.
The unfortunate part is that this flow doesn't yet set things up with a proper container origin, so we end up taking another reboot...I think we could probably fix that. But it's just...so messy and hard to have code that runs both inside and outside of a container image right now.
Bringing @donpenney and @javipolo into this thread as well, in case they want to comment on this proposed flow.
BTW did you guys see my thoughts over at containers/bootc#22 ?
IMO basically what we should do with most of the current Ignition stuff is move it into just a derived image. I see "attached configmaps" replacing the bits that need to be dynamic or even per machine.
However, you are effectively making the case here that we may want some sort of first-class support at the OS level for something a bit like a "release image": essentially a "rollup" of references to one or more container images and, potentially, configmaps.
Not needed anymore since we don't need to identify a parent commit in the single OCI image approach.
I have refactored this function to also return the raw output as a []byte in order to integrate rpm-ostree-client.go
if _, err := os.Stat(backupDir + "/rpm-ostree.json"); os.IsNotExist(err) {
	// Execute 'rpm-ostree status' command and backup its output
	rpmOStreeCMD := fmt.Sprintf(`nsenter --target 1 --cgroup --mount --ipc --pid -- rpm-ostree status -v --json > %s/rpm-ostree.json`, backupDir)
So far we still need the rpm-ostree.json file for the restore process.
I'm currently testing the whole "one image" process to see how it works; I'll keep you posted :)
Once we validate it, we should probably try the approach Colin suggested and see how it goes :)
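For reference, a minimal sketch of how the rpm-ostree.json backup could be kept while dropping the shell redirect, capturing the output and writing it on the Go side (function name and permissions are illustrative):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// backupRPMOstreeStatus captures `rpm-ostree status -v --json` from the host
// namespace and writes it to <backupDir>/rpm-ostree.json without a shell
// redirect.
func backupRPMOstreeStatus(backupDir string) error {
	out, err := exec.Command("nsenter",
		"--target", "1", "--cgroup", "--mount", "--ipc", "--pid", "--",
		"rpm-ostree", "status", "-v", "--json").Output()
	if err != nil {
		return fmt.Errorf("rpm-ostree status: %w", err)
	}
	return os.WriteFile(filepath.Join(backupDir, "rpm-ostree.json"), out, 0o644)
}

func main() {
	if err := backupRPMOstreeStatus("/var/tmp/backup"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```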
Many thanks for the feedback throughout the PR, really appreciate it. For the moment, I'll close this PR in favor of #7. As commented there, if the ostree/bootc stack evolves, we'll be happy to refactor this workflow to drive it in a more native way.
This is an initial implementation to just create a single OCI image for the IBU workflow.