
fmf: Run cockpit-podman tests on current Fedora #19155

Merged
1 commit merged into cockpit-project:main on Aug 1, 2023

Conversation

martinpitt
Member

@martinpitt martinpitt commented Aug 1, 2023

This ensures that changes to the Python bridge don't break podman.


This introduces reverse dependency testing at the upstream level. Details and experiments are in #19117, in particular with a commit that deliberately breaks the bridge: the revdeps run installs cockpit-ws from the PR's packit COPR and cockpit-podman from the main-builds COPR, and the podman tests fail as expected on the "KABOOM".

This particular test could be optimized to only run on changes to src/ (bridge or ws), as these are the only things that actually affect cockpit-podman. There is a mechanism to do that with packit, which I tested in martinpitt#16. But it doesn't work with the default GITHUB_TOKEN, and thus would require faffing around with more secrets, so I shelved that for now. While cockpit → cockpit-podman testing is only marginally useful, I really want this to be a model PR for triggering podman → cockpit-podman, or selinux-policy → cockpit. For these, we don't need conditional runs.

Once this lands, I'll do a blog post about all of this (see current draft), and propose it to podman, selinux, and udisks. The latter could even replace our "daily" scenario on fedora-testing, or at least improve it ("shift left") -- stop regressions before they land upstream, instead of after the fact.

@martinpitt martinpitt added the no-test label (for doc/workflow changes, or experiments which don't need a full CI run) on Aug 1, 2023
@martinpitt
Member Author

All the previous testing-farm: tests are now called *-self. There is one extra status now, "testing-farm:fedora-38-x86_64:revdeps". See the TF report, this runs podman. Search the test log for rpm -q to be convinced:

```
+ rpm -q runc crun podman criu kernel-core selinux-policy cockpit-podman cockpit-bridge
package runc is not installed
[...]
cockpit-podman-73-1.20230731153619030294.main.4.g82329ce.fc38.noarch
cockpit-bridge-297-1.20230801050912127961.pr19155.10.g8a6c4757c.fc38.x86_64
```

@jelly
Member

jelly commented Aug 1, 2023

I have a few questions unrelated to landing:

  • How would we run multiple "reverse dependencies"? For example, if we want to add machines, would that be one job or multiple revdeps entries?
  • Reverse dependency testing for Cockpit would require us to publish a main COPR like podman does as well, right?
  • It would also be good to have libvirt in the conversation; that's usually what breaks as well, but on machines.
  • If I were a udisks maintainer, I'm not sure I'd want to wait on the full Cockpit test suite, so I suppose we should wait until we can reverse-test against just the storage scenario (yay, scenarios!). But that seems doable already: https://github.com/cockpit-project/cockpit/pull/19155/files#diff-69643c3578bd77a21da075f62f5fb3c544b03b08d6f9fd98a4c262031e96be8cR21 correct?
  • Thinking more about this, it would be nice to mention in the blog post how other projects could reverse-deps test against us. But I guess that's already on your mind 😃

@martinpitt
Member Author

martinpitt commented Aug 1, 2023

  • How would we run multiple "reverse dependencies"? For example, if we want to add machines, would that be one job or multiple revdeps entries?

The cleanest way would be to just add another plans/cockpit-machines.fmf with the same revdeps context.
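A minimal sketch of what such a plan could look like, assuming it mirrors the shape of the cockpit-podman plan in this PR (the file name, URL, and keys here are illustrative, following tmt's plan format, not the actual file):

```yaml
# plans/cockpit-machines.fmf -- hypothetical sketch, not the actual file
summary: Run cockpit-machines tests against the cockpit packages from this build

# disabled by default; only enabled when packit sets the revdeps context
enabled: false
adjust:
  enabled: true
  when: revdeps == yes

# fetch the reverse dependency's own test plans
discover:
  how: fmf
  url: https://github.com/cockpit-project/cockpit-machines

execute:
  how: tmt
```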

  • Reverse dependency testing for Cockpit would require us to publish a main COPR like podman does as well, right?

Correct. That will be my next step.
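For reference, a hedged sketch of what publishing such a COPR with packit could look like (the COPR owner and project names below are placeholders, not cockpit's actual configuration; the keys follow packit's copr_build job documentation):

```yaml
# packit.yaml fragment -- sketch only; owner/project are placeholders
jobs:
  - job: copr_build
    trigger: commit        # build on every push to main
    branch: main
    owner: "@cockpit"      # hypothetical COPR group
    project: main-builds   # what revdeps runs would install from
    targets:
      - fedora-all
```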

  • It would also be good to have libvirt in the conversation; that's usually what breaks as well, but on machines.

Good idea in principle, but AFAIK they still use a "send patches to ML" based workflow, so I'm not sure how that would work?

  • If I were a udisks maintainer, I'm not sure I'd want to wait on the full Cockpit test suite, so I suppose we should wait until we can reverse-test against just the storage scenario [...]

Yes, we can choose to only run the /storage scenario. They are all reasonably fast these days. That "willing to wait" is of course a conversation to have with every dependency that we want to hook into, but their own CI will also take at least some time.

  • Thinking more about this, it would be nice to mention in the blog post how other projects could reverse-deps test against us. But I guess that's already on your mind 😃

Already on it, see link in the description 😁

@jelly
Member

jelly commented Aug 1, 2023

* How would we run multiple "reverse dependencies"? For example, if we want to add `machines`, would that be one job or multiple `revdeps` entries?

The cleanest way would be to just add another plans/cockpit-machines.fmf with the same revdeps context.

Does that run in parallel then? That would be my only concern.

* Reverse dependency testing for Cockpit would require us to publish a `main` COPR like podman does as well, right?

Correct. That will be my next step.

* It would also be good to have libvirt in the conversation; that's usually what breaks as well, but on `machines`.

Good idea in principle, but AFAIK they still use a "send patches to ML" based workflow, so I'm not sure how that would work?

They are on Gitlab now! The tricky part is of course that libvirt has many split repositories and we have had issues with libvirt-dbus and maybe other subprojects (as a whole this might be a big challenge to get working)

* If I were a udisks maintainer, I'm not sure I'd want to wait on the full Cockpit test suite, so I suppose we should wait until we can reverse-test against just the `storage` scenario (yay, scenarios!). But that seems doable already: https://github.com/cockpit-project/cockpit/pull/19155/files#diff-69643c3578bd77a21da075f62f5fb3c544b03b08d6f9fd98a4c262031e96be8cR21 correct?

Yes, we can choose to only run the /storage scenario. They are all reasonably fast these days. That "willing to wait" is of course a conversation to have with every dependency that we want to hook into, but their own CI will also take at least some time.

Indeed, I think storage takes ~10 minutes. But we have to figure out whether all our critical storage tests can run on packit, and I'm afraid they might not:

```
test/browser/run-test.sh:           TestStorage
test/browser/run-test.sh:    EXCLUDES="$EXCLUDES TestStorageISCSI.testISCSI"
test/browser/run-test.sh:              TestStorageFormat.testAtBoot
test/browser/run-test.sh:              TestStorageFormat.testFormatCancel
test/browser/run-test.sh:              TestStorageFormat.testFormatTooSmall
test/browser/run-test.sh:              TestStorageFormat.testFormatTypes
test/browser/run-test.sh:              TestStorageMounting.testAtBoot
test/browser/run-test.sh:              TestStorageMounting.testBadOption
test/browser/run-test.sh:              TestStorageMounting.testFirstMount
test/browser/run-test.sh:              TestStorageMounting.testMounting
test/browser/run-test.sh:              TestStorageMounting.testMountingHelp
test/browser/run-test.sh:              TestStorageMounting.testNeverAuto
test/browser/run-test.sh:              TestStoragePackagesNFS.testNfsMissingPackages
test/browser/run-test.sh:              TestStoragePartitions.testSizeSlider
test/browser/run-test.sh:              TestStorageIgnored.testIgnored
test/browser/run-test.sh:              TestStorageUnused.testUnused
test/browser/run-test.sh:            TestStorageNfs.testNfsBusy
test/browser/run-test.sh:            TestStorageNfs.testNfsClient
test/browser/run-test.sh:            TestStorageNfs.testNfsMountWithoutDiscovery
```

FYI: they seem to want to add packit: storaged-project/udisks#1080

* Thinking more about this, it would be nice to mention in the blog post how other projects could reverse-deps test against us. But I guess that's already on your mind 😃

Already on it, see link in the description 😁

👍

@martinpitt
Member Author

Does that run in parallel then? That would be my only concern.

Yes, each test plan runs in its own VM, in parallel. That said, TF only gives you up to 5 VMs in parallel.

Good idea in principle, but AFAIK they still use a "send patches to ML" based workflow, so I'm not sure how that would work?

They are on Gitlab now! The tricky part is of course that libvirt has many split repositories and we have had issues with libvirt-dbus and maybe other subprojects (as a whole this might be a big challenge to get working)

Ah, nice! They don't use packit at all yet, but packit does support GitLab. So that'd be the first step: move their spec files etc. upstream and start building RPMs and running tests in GitLab MRs.

Indeed, I think storage takes ~ 10 minutes. But, we have to figure out if all our critical storage tests can run on packit and I'm afraid they might not:

Yes, of course only the nondestructive tests run. I recently converted some more in PR #19102, and we have coverage at least for plain disks/partitions and LVM and such. But no stratis and VDO indeed. That may be some motivation to enable these on tmt 😁

@jelly
Member

jelly commented Aug 1, 2023

What if podman makes a breaking change (valid or not) and we want to change our tests to get this green again? How would we test against a newer podman to validate that our changed tests work?

Example: they rename `Containers` to `containers` in their JSON API.

@martinpitt
Member Author

martinpitt commented Aug 1, 2023

@jelly Well, one of the points of reverse dependency testing is to stop them from breaking the API 😁 But if it happens, it could be done like this:

  • They have a PR which breaks the API and the revdeps test
  • We create a cockpit-podman PR to adjust, and run it against their PR manually to validate our fix. We have to do that in a backwards-compatible way anyway, so our tests should be green against both current distros and their PR.
  • Once we land ours, they re-run their failed revdeps test in their PR, which should then go green.
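As an illustration of the backwards-compatible adjustment in the second step, a sketch (the function name and reply shape here are hypothetical, not actual cockpit-podman code):

```javascript
// Hypothetical sketch: accept both the legacy and the renamed JSON key,
// so the tests stay green against current distros and against their PR.
function getContainers(reply) {
    // prefer the new lower-case key, fall back to the legacy one
    return reply.containers ?? reply.Containers ?? [];
}

console.log(getContainers({ Containers: ["c1"] }));  // legacy API shape
console.log(getContainers({ containers: ["c1"] }));  // renamed API shape
```

Once the renamed key is everywhere, the fallback can be dropped again.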

Member

@jelly jelly left a comment


🥳

@martinpitt martinpitt merged commit 8bb6ddf into cockpit-project:main Aug 1, 2023
31 checks passed
@martinpitt martinpitt deleted the revdeps-podman branch August 1, 2023 09:16
@thrix

thrix commented Aug 3, 2023

Yes, each test plan runs in its own VM, in parallel. That said, TF only gives you up to 5 VMs in parallel.

@martinpitt just a correction: it is 12 now in public:

https://gitlab.com/testing-farm/infrastructure/-/blob/testing-farm/ranch/public/citool-config/config/test-schedule-runner?ref_type=heads#L3
