[RFC 0119] Formalize testing for nixpkgs packages #119

Merged · 14 commits · Jun 30, 2022
105 changes: 105 additions & 0 deletions rfcs/0119-testing-conventions.md
---
feature: Defined conventions around testing of official Nixpkgs packages.
start-date: 2021-12-29
author: Jonathan Ringer
co-authors:
shepherd-team: @Mic92 @Artturin @kevincox
shepherd-leader: @Mic92
related-issues:
- [RFC 0088 - Nixpkgs Breaking Change Policy](https://github.com/NixOS/rfcs/pull/88)
---

# Summary
[summary]: #summary

When updating or modifying packages, several conventions for testing regressions
have been adopted. However, these practices are not standard, and it is generally not well
defined how each testing method should be implemented. It would be beneficial to have
an unambiguous way to say that a given package, and all of its downstream dependencies, have
had as many automated tests run as possible. This gives a high degree of confidence that
a given change is unlikely to manifest regressions once introduced on a release
channel.

Another goal of this RFC is to give various review tools
(e.g. ofborg, hydra, nixpkgs-review) a standard way to determine whether a
package has additional tests which can help verify its correctness.

# Motivation
[motivation]: #motivation

Breakages are a constant pain point for nixpkgs. It is a very poor user experience to
have a configuration break because one or more packages fail to build. Often when
these breakages occur, it is because the change had a large impact on the entirety
of nixpkgs, and unless there is a dedicated hydra jobset for the pull request, it is
infeasible to expect pull request authors to verify every package affected
by a change they are proposing. However, it is feasible to specify packages that
are very likely to be affected by changes in another package, and to use this information
to help prevent regressions from appearing in release channels.

# Detailed design
[design]: #detailed-design
Member

Maybe expand on what are the intended changes to the status quo?

passthru.tests is a name documented inside the manual, however nixosTests are recommended to be also put there.

(also, if sorting by resource consumption, maybe this split is not needed?)

Are we encouraged to compromise on something in the name of more test coverage?

Contributor Author

Well, I wasn't sure what the status quo should be. My current thoughts are, "here is some additional metadata you can add to ensure that people know how your package may break. Or add your package to the tests of other packages to ensure it's not broken."


Member (@Artturin, Apr 4, 2022)

Recently people have started adding sensitive downstream dependencies to passthru.tests.
Example: https://github.com/NixOS/nixpkgs/pull/167092/files

Cc @risicle

I propose adding a new section called testReverseDeps or so

Contributor

Think this technique was @jonringer's idea in the first place 😉

The main thing I'm slightly uncomfortable with is adding the reverse dependencies as arguments - I imagine it causing weirdness for people who do a lot of overriding. But I can't think of a sensible way it could be avoided.

Contributor Author

Ideally the benefit is for nixpkgs testing. For the overriding case, an override should not cause a downstream dependency to influence the build, as passthru gets pruned before instantiation.

I agree it's odd that a package is aware of its downstream dependencies, but I'm not sure of another way of having a package be aware of them.

Member

I imagine it causing weirdness for people who do a lot of overriding.

A potential mitigation is to source these from a pkgs argument, so they're all bundled up. Not sure if there's a problem in the first place though.

Contributor Author

Usage of pkgs is frowned upon in nixpkgs from a hygiene perspective; I would rather not involve it.

Member

nixpkgs-review builds all reverse dependencies. Could we just use this mechanism in a testers.testBuildReverseDeps? But there can be thousands 😬

cc @Mic92

Standardize `passthru.tests.<name>` as a mechanism of
Member

The <name> syntax suggests that the namespace is flat, but it isn't. Both ofborg and nix-build respect the recurseForDerivations attribute (aka recurseIntoAttrs), which is valuable behavior:

  • for authors, adding an attribute is easier than merging attribute sets
  • for "testers", the computation of the set of attribute names is lazier, improving evaluation performance when they only run a specific test or set of tests

passthru is an implementation detail of mkDerivation that must not be part of this specification. You could mention it once as a note to guide package authors, and you can use it in examples with mkDerivation, but it must not be part of the package specification. (See also NixOS/nix#6507 for an attempt to define "package".)

Suggested change
Standardize `passthru.tests.<name>` as a mechanism of
Standardize `<pkg>.tests` as a mechanism of

Contributor Author

Won't $tests be populated in each build environment if this is true? The inclusion of passthru was because those get pruned, and it is the current convention. (Not to say that current conventions aren't flawed and shouldn't be changed.)

more expensive but automatic testing for nixpkgs, and encourage the usage of
`checkPhase` or `installCheckPhase` when packaging within nixpkgs.

Criteria for `passthru.tests.<name>` (a sketch follows this list):
- Running tests which include downstream dependencies.
  - This avoids cyclic dependency issues for test suites.
- Running lengthy or more resource-expensive tests.
  - There should be a priority on making package builds as short as possible.
Member

I hear you! 😄

Contributor Author

I think I originally wrote this about sage and sageWithTests, which took even longer :)

Member

(Looks sadly at NixOS/nix#3600 being unmerged.) Although, for a given situation with a package, speeding up separate tests on their own would also not hurt.

  - This reduces the amount of compute required for everyone reviewing, building, or iterating on packages.
- Referencing downstream dependencies which are most likely to experience regressions.
  - Most applicable to [RFC 0088 - Nixpkgs Breaking Change Policy](https://github.com/NixOS/rfcs/pull/88),
    as this will help define what breakages a pull request author should take ownership of.
Member

Is this the only place corresponding to «tests have to pass»?

(BTW what about a test removal procedure? Explicit maintainer approval?)

Contributor Author

In the future work section, there is a mention of adding it to the PR template, and it's already part of ofborg.

Member

I think all of this is vague enough that it remains unclear whether the RFC establishes a norm that tests should be run completely and have to pass (hmmm, what about platform discrepancies…).

Contributor Author

This is more meant to set the expectation. There's already somewhat of a convention, and I would like for that to be more explicit and expected.

nixpkgs-review goes a long way toward finding regressions, but the problem is totality. For some changes, there may only be a few packages which may realistically be affected by the change, so I don't want to build 500+ packages to review the build implications of a change; I just want the relevant package and the few direct downstream dependencies. Even better if there's a test case for the package itself.

Member (@7c6f434c, May 2, 2022)

So basically you do not want to implement something with the full extent of the point from the meeting notes?

Tests of the updated package should become mandatory for acceptance of a PR

(Here my question can be treated as procedural: for the text «updated according to the meeting notes» I am fine both with the text that clearly claims what the point in the meeting notes claims, or with an explicit comment in the discussion that the meeting note point as written is too far-reaching, maybe because it is too brief of a summary of an agreed position; I just want clarity what interpretation of RFC gets covered by the eventual decision)

Contributor Author

So basically you do not want to implement something with the full extent of the point from the meeting notes?

ofborg already runs passthru.tests if the commit message is formatted correctly. I think I'm misunderstanding what you're trying to say. My opinion is that the existing process already enforces the opt-in testing behavior. The main issue right now is that the usage of passthru.tests is the exception, not the norm.

BTW what about a test removal procedure?

This is a non-goal for the RFC. Personally, I believe this should be left up to the maintainer(s) to decide. The additional tests should provide value to the review process; if they don't, then they should probably be removed, but this can be decided by the maintainers.

Explicit maintainer approval?

Also a non-goal. This is just concerned with the ability to test a PR, which may aid in the decision-making process to merge a PR.

Member

Well, do we ask that tests are run or that tests pass fully at least on one platform (maybe not the first try etc.…)?

Contributor Author (@jonringer, May 2, 2022)

Well, do we ask that tests are run or that tests pass fully at least on one platform (maybe not the first try etc.…)?

I think the tests should be run; and if any fail, it should be an indicator that the package is unhealthy in the context of the PR.

Member

I agree with the position, and I think it could be made less vague in the RFC. Maybe put the statement «tests should be run, and a failing test is an indicator that the package is unhealthy in the context of the PR» as a top-level statement in the detailed design?

- Running integration tests (e.g. nixosTests)
- Tests which have heavy resource usage or platform requirements should add the appropriate systemFeature
  - E.g. `nixos-test` `kvm` `big-parallel`
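
As a rough sketch of how these criteria could translate into a package expression (hypothetical package, test names, and commands; `nixosTests`, `python3Packages`, and `runCommand` are assumed to come from the package's function arguments, and none of the helpers shown here are mandated by this RFC):

```nix
# Hypothetical fragment of the expression for a "myserver" package,
# using mkDerivation's finalAttrs pattern so the tests can refer to
# the package itself.
stdenv.mkDerivation (finalAttrs: {
  pname = "myserver";
  version = "1.2.3";
  # ... sources and build phases omitted ...

  passthru.tests = {
    # Reuse an existing NixOS integration test.
    nixos = nixosTests.myserver;

    # A downstream dependency that is likely to regress when this package changes.
    inherit (python3Packages) myserver-client;

    # A lengthy test suite split out of the main build, with its resource
    # needs declared so CI can schedule it appropriately.
    full-testsuite = runCommand "myserver-full-testsuite"
      {
        nativeBuildInputs = [ finalAttrs.finalPackage ];
        requiredSystemFeatures = [ "big-parallel" ];
      } ''
        myserver --run-self-tests   # hypothetical command
        touch $out
      '';
  };
})
```

Since `passthru` attributes surface directly on the resulting package, a reviewer could then build a single entry with something like `nix-build -A myserver.tests.nixos`, assuming `myserver` is a top-level attribute.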

Usage for mkDerivation's `checkPhase` (a sketch follows this list):
- Quick "cheap" tests, which run unit tests and maybe some additional scenarios.
Member

Not all test suites are quick or cheap, but running them should be a priority over quickness. If we can make running them in a separate derivation easy, that's worth considering, but it seems that the human overhead would not be worth it in the general case.
A lot could factor into this, so I think we should make this less prescriptive.

Suggested change
- Quick "cheap" tests, which run unit tests and maybe some additional scenarios.
- Preferably quick "cheap" tests, which run unit tests and maybe some additional scenarios.

- Since this contributes to the "build time" of a package, there should be some
emphasis on ensuring this phase isn't bloated.
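
A minimal sketch of this convention (a hypothetical package; the test command is illustrative only):

```nix
{ lib, stdenv, fetchFromGitHub }:

stdenv.mkDerivation rec {
  pname = "mytool";
  version = "1.0";

  src = fetchFromGitHub {
    owner = "example";
    repo = "mytool";
    rev = "v${version}";
    sha256 = lib.fakeSha256; # placeholder
  };

  # Quick, cheap unit tests that run as part of the build itself.
  doCheck = true;
  checkPhase = ''
    runHook preCheck
    make test # hypothetical upstream test target
    runHook postCheck
  '';
}
```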

Usage for mkDerivation's `installCheckPhase` (a sketch follows this list):
- A quick trivial example (e.g. `<command> --help`) to demonstrate that one (or more)
of the programs were linked correctly.
Member

There is also testVersion introduced in NixOS/nixpkgs#121896.
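
For reference, its usage is roughly the following (the helper currently lives under `testers` in nixpkgs and the exact attribute path has moved over time; `mytool` is a hypothetical package):

```nix
passthru.tests.version = testers.testVersion { package = mytool; };
```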

- Assert behavior post installation (e.g. Python's native extensions only get installed
Contributor

The "only" in this sentence is confusing me.

Contributor Author

Native extensions are only present once installed. However, most test suites will consume the code in the build directory, so tests will fail because the compiled extensions will not be present there.

I'm not sure how to word this better.

Member

Maybe start with the phrase about the build directory?

and are not present in a build directory)
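
Continuing the hypothetical `mytool` example from the `checkPhase` sketch above, a fragment like the following covers the first point (the exact command is illustrative only):

```nix
  # Cheap post-installation smoke test: verify that the installed binary
  # is linked correctly and can at least start.
  doInstallCheck = true;
  installCheckPhase = ''
    runHook preInstallCheck
    $out/bin/mytool --help > /dev/null
    runHook postInstallCheck
  '';
```

For the Python case, `buildPythonPackage`'s `pythonImportsCheck` plays a similar role by importing the installed module (including any compiled extensions) after installation.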

Member

Some programs are hard to test automatically, so how about creating a new meta attribute like testingInstructions?

Contributor (@kevincox, Apr 16, 2022)

I strongly oppose this idea.

IIUC the goal of this RFC is to make it easier for changes to upstream packages to be tested. The end goal is that we have automatic tooling that can test packages, notify maintainers of breakages, and eventually mark packages as broken if the maintainers are unable to fix them in time. Adding required manual testing puts unacceptable levels of burden on core package maintainers (whose packages are depended on by hundreds or thousands of other packages).

I think a testingInstructions attribute may be an interesting and useful idea, but I think it would serve a different purpose than the formalized testing specified by this RFC. If you want to create a different RFC for an informational attribute, I would support it.

TL;DR I don't want to require people to manually test loads of packages; if you want your package not to break due to changes in dependencies, you need automated tests.

Member

I guess testingInstructions would fit here if it came with «replacing this with a test that looks robust will be accepted even if the test implementation is ugly», but I do not believe such a commitment would be accepted into the attribute semantics…

Member

Perhaps we could record merge conditions instead?
Some packages don't need manual testing, so all that's needed for a merge is a review of the changelog, but this information has not been recorded yet.

Member

I would like to have a CONTRIBUTING.md in a package's directory with documentation for package maintainers and contributors, like how to test, how to update...
In addition to that, a README.md with documentation for users.

# Drawbacks
[drawbacks]: #drawbacks

None? This is opt-in behavior for package maintainers.

# Alternatives
[alternatives]: #alternatives
Contributor

One alternative is that we consider all dependent packages as tests. We can have dependent packages that are just tests, for example a testFoobar package to test foobar.

Then a PR author would be responsible for ensuring that all dependents build (aka pass) or are marked broken.

The obvious issue here is that, for packages with lots of dependents, it becomes infeasible for the average author to run a tool that builds everything and marks failures as broken. I think it is worth mentioning this alternative because this RFC demonstrates a clean way to define an appropriate sample size. Then it is expected that nixpkgs-provided build resources can be used for the full build + mark as broken.

Contributor Author

Yes, and this is the compromise with passthru.tests listing downstream dependencies. The idea is to list packages which are "likely to break with breaking changes".

For example, some packages may make use of many of systemd's features; however, other packages only really use libudev, which is much more stable. We could probably forgo the libudev-only packages and just list the packages which use systemd's more advanced features.
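
A rough sketch of what that selection could look like (all names are invented; this is not the real systemd expression, and the downstream package is taken as a function argument only so it can be referenced from passthru):

```nix
# Hypothetical "systemd-like" package; build details omitted.
{ stdenv, heavyFeatureConsumer }:

stdenv.mkDerivation {
  pname = "somesystemd";
  version = "250";
  # ...

  passthru.tests = {
    # A downstream package that exercises the advanced features and is
    # therefore likely to regress when this package changes.
    inherit heavyFeatureConsumer;
    # Packages that only use the stable libudev interface are deliberately
    # left out, since including them would add little signal.
  };
}
```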


Continue to use current ad-hoc conventions.

# Unresolved questions
[unresolved]: #unresolved-questions

How far should testing go?
- What constitutes "enough testing" for a package before a change is merged?

Hydra: How would this look for hydra adoption and hydraChecks?

# Future work
[future]: #future-work

One problem with onboarding more tests onto the current nixpkgs CI and processes is the increased
need for compute, storage, and RAM resources. Therefore, future work should
take into account how much testing is feasible for a given change.

Onboarding of CI tools to support testing paradigms:
- nixpkgs-review
- Run `passthru.tests` on affected packages
- Allow for filtering based upon requiredSystemFeatures
- ofborg
- Testing of `<package>.passthru.tests` is already done.
Member

The technical goal of passthru is to set attributes on the package that are not in the derivation, a solution to the problem of `(pkg // { x = y; }).overrideAttrs f` losing `x`.
Since #92, the notion of a package needs to be more decoupled from a derivation. In other words, mkDerivation won't be in charge of the package (attrset) anymore. In this new world, it does not make sense to expose all mkDerivation arguments as attributes on the package, so the need for "passthru" becomes conceptually redundant.
As such, we should not call the tests attribute on a package passthru.tests.
Before #92, conflating the two was convenient, but afterwards it will be breaking and confusing instead.
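
For illustration, the difference can be demonstrated with a small self-contained expression (a throwaway `demo` derivation that is only evaluated, never built):

```nix
with import <nixpkgs> { };

let
  pkg = stdenv.mkDerivation { name = "demo"; };

  # Attaching an extra attribute with // ...
  annotated = pkg // { x = "y"; };

  # ... versus carrying it via passthru:
  viaPassthru = stdenv.mkDerivation { name = "demo"; passthru.x = "y"; };
in
{
  afterOverride = (annotated.overrideAttrs (old: { })).x or "lost"; # => "lost"
  passthruAfterOverride = (viaPassthru.overrideAttrs (old: { })).x or "lost"; # => "y"
}
```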

Contributor Author

I'm not super familiar with 92, but it looks like it's the future.


Nixpkgs:
- Add existing nixosTests to related packages
- Update testing clause on PR template
- Update contributing documentation