
Feature request: support locally-dependent but still static testset names #27

Open
marius311 opened this issue Jul 30, 2021 · 3 comments · Fixed by #28
Labels
enhancement New feature or request

Comments

@marius311

After listening to the excellent JuliaCon talk on ReTest, I took it for a spin with my package. Mostly everything works, which is great, but the one spot where it'd take some rewriting of my tests is that I have patterns like:

@testset "$N" for N=1:3
    @testset "$M" for M=1:N
        @test true
    end
end

which gets analyzed as

1| 1
2|   "$(M)" (repeated)
1| 2
2|   "$(M)" (repeated)
1| 3
2|   "$(M)" (repeated)

because of the use of the "local" N in the inner testset, as discussed in the caveats in the docs.

However, even though it is technically local, it doesn't seem impossible to statically analyze the entire testset tree in cases like this, where the dependence is only on other "for" variables. If it wasn't too hard, it would be a pretty nice feature to have.
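One possible workaround, sketched below, is to iterate over explicit pairs so that the testset name depends only on that testset-for's own loop variables; this flattens the tree, so it's only a partial substitute:

using ReTest

# Sketch of a workaround (not something ReTest prescribes): precompute the
# (N, M) pairs so the name interpolates only this loop's own variables,
# which static analysis can resolve. The nested structure is lost, though.
@testset "$N/$M" for (N, M) in [(n, m) for n in 1:3 for m in 1:n]
    @test true
end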

@rfourquet
Collaborator

rfourquet commented Jul 30, 2021

Thanks for your kind words and interest!
Yeah, improving on this is definitely planned. First, a note on how retest works:

  1. Testsets are statically analyzed to select matching testsets; in cases like the example above, where retest can't determine whether a testset should be run (ambiguity), it gets selected.
  2. Selected testsets are "run", whether in dry mode or not. During this phase, the "final" testsets, i.e. those which don't have nested testsets, are filtered again (matched against patterns) to resolve ambiguities.
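To illustrate phase 1, here is a rough sketch of the selection rule in plain Julia (the names are illustrative, not ReTest's internals; real matching is also hierarchical, which this sketch omits):

# Rough sketch of phase-1 selection (illustrative, not ReTest's internals):
# a testset is kept when its statically-known subject matches the pattern,
# or when its subject cannot be resolved statically (ambiguity).
function selected(subject::Union{String,Nothing}, pattern::Regex)
    subject === nothing && return true    # ambiguous: keep for phase 2
    return occursin(pattern, subject)     # static: decide exactly
end

selected(nothing, r"^/1/4")   # true  -- an interpolated "$M" is ambiguous
selected("/1/1", r"^/1/4")    # false -- statically known and non-matching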

So in your example above, let's say the passed pattern is "1/4"; ideally, no testset should be run. But phase 1 selects all of them. In phase 2 in non-dry mode, only the toplevel testset is run, not its children, because they are final and they don't match. So the recommendation in this case is to put slow-running testsets only in final position. In dry mode, you get the result as you showed.
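For concreteness, the invocations being discussed would look something like this (MyPackage is a placeholder module):

using ReTest

# Hypothetical invocations for the scenario above:
retest(MyPackage, "1/4", dry=true)   # dry mode: lists the selected testsets
retest(MyPackage, "1/4")             # non-dry: final non-matching testsets
                                     # are filtered out at run time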

The easiest fix here would be a partial solution: as testset-fors are unfolded in dry mode, it would be pretty easy to track loop variables to allow filtering of final testsets, as in non-dry mode. The output of dry mode would then be something like (with e.g. verbose=2)

1| 1
1| 2
1| 3

Or with pattern "1/1", you would get

1| 1
2|   1
1| 2
1| 3

So this would better match what would happen in non-dry mode.
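In other words, something along these lines (an illustrative sketch with made-up names, not the actual implementation):

# Sketch of the "easy fix" (made-up names, not ReTest code): once dry mode
# unfolds a testset-for, each loop variable has a concrete value, so a
# nested description like "$M" can be interpolated and the resulting
# subject matched against the pattern, as in non-dry mode.
function dry_unfold(pattern::Regex)
    for N in 1:3                   # outer testset-for: iterator is static
        parent = "/$N"
        println("1| ", N)          # parent subject, as in the output above
        for M in 1:N               # N is concrete here, so 1:N is evaluable
            child = "$parent/$M"   # "$M" now resolves to a real subject
            occursin(pattern, child) && println("2|   ", M)
        end
    end
end

dry_unfold(r"^/1/4")   # prints only the three parent lines
dry_unfold(r"^/1/1")   # additionally prints "2|   1" under "1| 1"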

It seems pretty doable to also improve phase 1 on this front, but this might be quite a bit more involved, so I have been postponing working on it.

Would the "easy fix" already be helpful in your case (nested testsets are filtered out even in dry mode), or do you need the more involved fix (such that even the toplevel testset is not run at all with pattern "1/4")?

@rfourquet rfourquet added the enhancement New feature or request label Jul 30, 2021
@marius311
Author

Ah, I hadn't realized that when you actually run the tests, only the correct ones are run, even if the dry run doesn't statically resolve everything. I guess it is a bit confusing that the dry run and the actual run may not "match". In any case, I think the "easy" fix would be fine for me, if I understood it correctly. I'm not worried about extra stuff running in dry mode, as I already have the actual tests mainly in the innermost testsets anyway.

rfourquet added a commit that referenced this issue Aug 1, 2021
Partially fixes #27.

The full fix might be to make `resolve!` also keep track of these
loop variables; but this would involve storing lists of collected
iterators for each nested testset: once we evaluate an iterator
for filtering in `resolve!`, we have to store it in case
it has some random behavior (e.g. `rand(1:9, i)` where `i` is
a loop variable from a parent testset), in which case re-evaluating
it at execution time would lead to inconsistency.
The testset tree would also have to be traversed depth-first by also
unfolding testset-fors. This could be a nice improvement if performance
doesn't suffer (instead of storing parent subject strings and looping
through them, we would store iterators and recurse into them).
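The stochastic-iterator point above is easy to demonstrate (illustration only, not ReTest code):

# A stochastic iterator yields different values on each evaluation, so
# filtering and execution would disagree unless the collected values are
# stored after the first evaluation.
make_iter(i) = rand(1:9, i)     # an iterator depending on a loop variable i

first_eval  = make_iter(3)      # values seen at filtering time
second_eval = make_iter(3)      # values seen if re-evaluated at run time
first_eval == second_eval       # almost certainly false

stored = make_iter(3)           # evaluate once and keep the result
stored == stored                # true: both phases then agree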
rfourquet added a commit that referenced this issue Aug 6, 2021
@rfourquet
Collaborator

Reopening, as it could be interesting to try out the less easy fix. Also, the confusion here suggests that more documentation on how filtering works is warranted.

@rfourquet rfourquet reopened this Aug 6, 2021