tracker for generic tmt+bootc issues #3000

Open
cgwalters opened this issue Jun 9, 2024 · 7 comments

@cgwalters
Contributor

cgwalters commented Jun 9, 2024

I'm trying to look at tmt+bootc in: containers/bootc#543

There's a lot going on in that PR, but basically this is a generic issue about improving the tmt integration story with https://docs.fedoraproject.org/en-US/bootc/

First, what I find sorely missing from the tmt docs is recommendations for what seems to me like a key aspect: injecting code built from the current git repository into the test environment. I understand there's no single approach to this...some people will want to build rpms, others won't (and actually in the case of bootc, in many cases we really want to start from a container image which may become a disk image).

But, let's use a common forge-integrated flow like GitHub Actions or GitLab CI as a reference point. Take the common case of:

  • check out the git repo
  • build the code
  • run the tests using that code

As far as I can tell, all the tmt docs I can find just assume you have somehow already done these steps; I was looking for more real-world examples that cover some of them.
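To make that concrete, here is a purely hypothetical GitHub Actions sketch of those three steps; the job name, image tag, and plan layout are illustrative only, and it assumes podman and pip are available on the runner:

# Hypothetical sketch, not an existing workflow; all names below are illustrative.
on: pull_request
jobs:
  integration:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4                    # check out the git repo
      - name: Build the code as a container image
        run: podman build -t localhost/image-under-test .
      - name: Install tmt
        run: pip install tmt
      - name: Run the tests using that code
        run: tmt run --all provision --how container --image localhost/image-under-test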


Other notes:

I got bit by:

    provision:
      how: virtual
      image: /home/walters/src/github/containers/bootc/target/testbootc-cloud.qcow2

which apparently caches a copy of that qcow2 in /var/tmp/tmt/testcloud/images/ but doesn't invalidate that cache when the image changes.
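One obvious workaround (sketch only, assuming the cache location above is stable) is to drop the cached copy before re-running:

# Sketch of a workaround: wipe testcloud's cached image so the freshly
# rebuilt qcow2 actually gets used; the path is the one observed above.
rm -rf /var/tmp/tmt/testcloud/images/*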


Note that in the bootc world, a neat thing we emphasize is a whole flow of building a container image, and then optionally creating a disk image from it.

It might be nice to add tmt provisioning support that directly handled a case of going container -> disk, like:

provision:
  how: virtual
  bootc-container: localhost/image-under-test

And this would use e.g. https://github.com/osbuild/bootc-image-builder underneath (or we could streamline the bootc install to-existing-root flow here); both are valid.

In a scenario like this, tmt could even create its own derived container image to inject infrastructure code, instead of live-mutating the system via cloud-init and shell.
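For reference, the container -> disk step such a provisioner would wrap looks roughly like the following bootc-image-builder invocation (a sketch from memory; the exact flags, --local handling, and volume mounts vary between versions):

# Rough sketch of what the provisioner might run under the hood.
sudo podman run --rm -it --privileged \
    --security-opt label=type:unconfined_t \
    -v ./output:/output \
    -v /var/lib/containers/storage:/var/lib/containers/storage \
    quay.io/centos-bootc/bootc-image-builder:latest \
    --type qcow2 --local \
    localhost/image-under-test
# the resulting disk typically lands in ./output/qcow2/disk.qcow2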


Different topic: I wrote the coreos-assembler kola external tests, which operate on a fundamentally different philosophy for the starting point: the test code is injected into the target as a systemd unit and runs autonomously on its own, and the harness just monitors it. One special thing we added there is support for rebooting the system, which reuses the APIs from the Debian autopkgtest framework. Is there such a thing in tmt? I found stuff like this but I don't know if/how it's mapped to test code.

@lukaszachy
Collaborator

As far as current tmt goes, it is up to a 3rd party to create the "testable bits" and make them available for tmt to install. In the case of Packit: COPR builds the rpms, Testing Farm installs them on the "guest" (or makes the copr repo available), and only after that does tmt come in to prepare the guest and execute the tests.

So in the GitHub/GitLab pipeline the "build" part will be on the pipeline (somehow, tmt doesn't care) and only the "run the tests" part will be for tmt.

If "testcloud" gets the ability to provision 'bootc' container (as you mentioned, instead of qcow2 it will get the container image + options to force pulling new image), would it be OK if the responsibility on building the image will remain on other party (pipeline, etc...)

I kind of like the idea of building the artefacts as well, but it would take some time to discuss the details (e.g. cross-arch? mock for rpms? how to make it all reasonably safe for the host?)

@thrix
Collaborator

thrix commented Jun 10, 2024

Anyway, building and testing the build artifacts is kind of possible, but you are mainly on your own right now:

  1. you can have a test that builds the artifacts and shares them with the tests using the TMT_PLAN_DATA directory
  2. other tests can pick up ^ and do what is needed with the artifacts
  3. we have test infrastructure providers that provide /dev/kvm, so testing the VM images directly should be possible

I thought this is actually something bootc already uses btw
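A minimal sketch of (1) and (2), assuming two tests in the same plan (the build script and file names are just placeholders):

# test 1: build the artifact and publish it into the shared plan data directory
./build.sh && cp target/disk.qcow2 "$TMT_PLAN_DATA/"
# test 2, running later in the same plan: pick the artifact up from there
qemu-img info "$TMT_PLAN_DATA/disk.qcow2"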

@thrix
Collaborator

thrix commented Jun 10, 2024

@cgwalters reboot handling is described here:

https://tmt.readthedocs.io/en/stable/stories/features.html#reboot-during-test

It is similar to how the test harness works for RHEL - i.e. a reboot causes the test calling the reboot to be restarted, with an environment variable available to help decide whether the reboot happened, so you can handle the after-reboot case.
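For reference, the documented pattern looks roughly like this in a test script (TMT_REBOOT_COUNT and tmt-reboot come from that page; the bootc checks are just illustrative):

# First pass: TMT_REBOOT_COUNT is 0 (or unset), so do the setup and ask the
# harness to reboot the guest; the test is then restarted from the top.
if [ "${TMT_REBOOT_COUNT:-0}" -eq 0 ]; then
    bootc status        # pre-reboot check (illustrative)
    tmt-reboot
fi
# Second pass: TMT_REBOOT_COUNT is 1, verify the system came back as expected.
bootc status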

@cgwalters
Contributor Author

you can have a test that builds the artifacts and shares them with the tests using TMT_PLAN_DATA directory

I found references to this but I didn't quite understand how to use it.

What I'm wondering right now is if there's any support for variables in tmt; basically I am trying to create a flow that:

  • Builds a container image
  • Builds a disk image from that container
  • Passes it to the tmt tests like:
# This tmt 
provision:
  how: virtual
  # This one will be generated
  image: $image
summary: Basic smoke test
execute:
    how: tmt
    script: bootc status

But I am not seeing any variable/templating support for stuff like this?
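The closest thing I can see, sketched under the assumption that an outer script drives tmt (the plan name and image path are illustrative), is overriding the provision image from the command line:

# Sketch: not templating, but an outer script can inject whatever it just built.
tmt run -a plan --name /smoke provision -h virtual -i "$PWD/output/qcow2/disk.qcow2"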

@thrix
Collaborator

thrix commented Jun 10, 2024

@cgwalters in the example I was trying to show, you would not use tmt to provision the VM, but rather call qemu-kvm directly from the tests and then drive the tests yourself. I.e. use another framework to provision the built artifact and run the tests against it.

We currently do not support the flow you are looking for with tmt. Each plan execution is a separate execution; they do not know about each other. Basically, you cannot create an artifact in the first plan and then easily use it in the provisioning method of another plan.

Workarounds for doing a similar flow solely with tmt would be:

1. two plans

First one builds the container image and the VM image.
Second provisions the VM image and runs a test.

tmt run -dddvvv --force -a -i $PWD/plan1 provision -h local execute -h tmt -s 'touch $TMT_PLAN_DATA/image.qcow2'
tmt run -dddvvv --force -a -i $PWD/plan2 provision -h virtual -i $PWD/plan1/default/plan/data/image.qcow2 execute -h tmt -s 'bootc status'

This will need a different approach for automation, but RHIVOS does something similar for their testing.

2. tmt executing tmt

$ cat build.fmf
provision:
  how: local
prepare:
  - name: install tmt
    how: install
    package:
      - tmt+provision-virtual
execute:
  script:
    - touch $TMT_PLAN_DATA/image.qcow2
    - tmt run -a plan --name /test provision -h virtual -i $TMT_PLAN_DATA/image.qcow2

$ cat test.fmf
provision:
  how: virtual
execute:
  script: bootc status

$ tmt run -dddvvv plan --name /build

This will need some workarounds to make the results nicely available, but maybe it should not be such a pain ...

I don't know of anybody who is doing this.

Maybe others have some other ideas.

@cgwalters
Contributor Author

but rather call qemu-kvm directly from the tests

I was trying to avoid creating a new testing framework. I know how to do that, but I think this is something we really want to streamline so that it can be easily done from many different components and repositories.

  1. two plans
    First one builds the container image and the VM image.
    Second provisions the VM image and runs a test.

The first part is what I just did directly; I'm not sure I see significant value in wrapping that part in tmt. But that's a minor point.

Hmmm...what if we made a new flow like:

provision:
  how: virtual-bootc
  bootc-container-image: quay.io/example/someimage:latest
  method: image-builder

There's a complication here in how bootc-container-image is found (on Linux, should it be in e.g. podman-machine or not). But it'd basically be a new provisioner variant (subclass?) of the virtual one that did similar things to https://github.com/containers/podman-bootc/ in terms of synthesizing a disk image behind the scenes automatically.

Now, that said, something I'd still like to support here is being able to conveniently override that image input:

tmt run -a plan --name /test provision -h virtual -i $TMT_PLAN_DATA/image.qcow2

Feels a bit like a workaround? But, eh, we can roll with that for now.

@cgwalters
Contributor Author

cgwalters commented Jun 18, 2024

OK yeah, this is really totally unrelated, but am I right that the tmt "prepare" phase does a full rsync of the source tree to the remote VM? This is...pretty expensive in my case of a Rust project: there's 27G of cache data underneath target/ for me at the moment, plus that's where we actually put the qcow2 we use to boot.

It'd be nice if tmt defaulted to using e.g. git ls-files to determine what to copy to start; in every Rust project the default is to have target/ in .gitignore. (I also have it in .dockerignore).
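For comparison, restricting the sync to tracked files is basically a one-liner with rsync (sketch only; the destination host and path are made-up placeholders):

# Copy only git-tracked files; target/ is skipped automatically because it is
# in .gitignore.
git ls-files -z | rsync -a --from0 --files-from=- . root@test-vm:/var/tmp/tmt/run/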
