Feature request: support locally-dependent but still static testset names #27
Thanks for your kind words and interest!
So in your example above, let's say a pattern is passed. The easiest fix here would be a partial solution: as dry-mode testset-fors are unfolded, it would be pretty easy to track loop variables so that the final testsets can be filtered, as in non-dry mode. The output of dry mode would then be something like:

```
1| 1
1| 2
1| 3
```

Or, with another pattern:

```
1| 1
2| 1
1| 2
1| 3
```

So this would better match what happens in non-dry mode. It seems pretty doable to also improve phase 1 on this front, but this might be quite a bit more involved, so I have been postponing working on it. Would the "easy fix" already be helpful in your case (nested testsets are filtered out even in dry mode), or do you need the more involved fix (such that even the toplevel testset is not run at all)?
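The "easy fix" described above, tracking loop variables while unfolding testset-fors in dry mode so that nested, variable-dependent names can be resolved and filtered, can be sketched in Python. This is a language-agnostic illustration, not ReTest's internals; the tree layout and all names (`unfold`, `var`, `iter`, etc.) are hypothetical:

```python
import re

def unfold(testsets, bindings=None, prefix=""):
    """Recursively expand testset-for loops, carrying loop variables down."""
    results = []
    for ts in testsets:
        # the iterator may depend on loop variables bound by parent testsets
        for value in ts["iter"](bindings or {}):
            env = dict(bindings or {}, **{ts["var"]: value})
            subject = prefix + "/" + ts["name"].format(**env)
            results.append(subject)
            results.extend(unfold(ts.get("children", []), env, subject))
    return results

# hypothetical nested testsets: the inner iterator depends on the outer variable i
tree = [{
    "var": "i", "name": "outer {i}", "iter": lambda env: range(1, 3),
    "children": [{
        "var": "j", "name": "inner {j}",
        "iter": lambda env: range(1, env["i"] + 1),  # locally dependent
        "children": [],
    }],
}]

subjects = unfold(tree)
matches = [s for s in subjects if re.search(r"inner 1", s)]
print(matches)  # → ['/outer 1/inner 1', '/outer 2/inner 1']
```

Because the loop bindings are carried down during unfolding, even names that depend on a parent loop variable become concrete strings that a pattern can filter, just as in non-dry mode.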
Ah, I hadn't realized that when you actually run the tests, only the correct ones are run, even if the dry run doesn't statically resolve everything. I guess it is a bit confusing that the dry run and the actual run may not "match". In any case, I think the "easy" fix would be fine for me, if I understood it correctly. I'm not worried about extra stuff running in dry mode, as the actual tests are mostly in the innermost testsets anyway.
Partially fixes #27. The full fix might be to make `resolve!` also keep track of these loop variables; but this would involve storing lists of collected iterators for each nested testset: once we evaluate an iterator for filtering in `resolve!`, we have to store it in case it has some random behavior (e.g. `rand(1:9, i)` where `i` is a loop variable from a parent testset), in which case re-evaluating it at execution time would lead to inconsistency. The testset tree would also have to be traversed depth-first by also unfolding testset-fors. This could be a nice improvement if performance doesn't suffer (instead of storing parent subject strings and looping through them, we would store iterators and recurse into them).
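The consistency concern above, that a stochastic iterator evaluated for filtering in `resolve!` must not be re-evaluated at execution time, can be illustrated with a minimal Python sketch. The class and method names here are hypothetical, not ReTest's code:

```python
import random

class TestsetFor:
    """Sketch of a testset-for whose iterator is evaluated once and cached."""

    def __init__(self, make_iter):
        self.make_iter = make_iter  # closure producing the iterator
        self.cached = None          # values captured during resolution

    def resolve(self):
        # evaluate the iterator once, for filtering, and remember the result
        self.cached = list(self.make_iter())
        return self.cached

    def execute(self):
        # reuse the cached values; re-calling make_iter() here could yield
        # different values for a stochastic iterator (e.g. rand(1:9, i))
        return self.cached if self.cached is not None else list(self.make_iter())

ts = TestsetFor(lambda: random.sample(range(1, 10), 3))
assert ts.resolve() == ts.execute()  # consistent: execution sees what was filtered
```

Without the cache, the set of testsets actually executed could differ from the set that passed the filter, which is exactly the inconsistency described above.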
Reopening, as it could be interesting to try out the less easy fix. Also, the confusion here suggests that more documentation is warranted on how filtering works.
After listening to the excellent JuliaCon talk on ReTest, I took it for a spin with my package. Mostly everything works, which is great, but the one spot that would take some rewriting of my tests is that I have patterns like:

which gets analyzed as

because of the use of the "local" `N` in the inner testset, as discussed in the caveats in the docs. However, even though `N` is technically local, it doesn't seem impossible to statically analyze the entire testset tree in cases like this, where the dependence is only on other `for` variables. If it isn't too hard, it would be a pretty nice feature to have.
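The situation described above, where the inner testset's names depend on a "local" value that is itself derived only from an outer `for` variable, can be sketched in Python (a hypothetical illustration of the structure, not actual test code):

```python
# The inner loop bound N is "local", but it is a pure function of the
# outer loop variable i, so the full name tree is still statically
# enumerable by unfolding both loops.
names = []
for i in range(1, 4):          # outer testset-for variable
    N = i                      # local value derived from i
    for j in range(1, N + 1):  # inner names depend on N, hence on i
        names.append(f"outer {i}/inner {j}")
print(len(names))  # → 6
```

Since nothing here depends on runtime state other than the loop variables themselves, a tool could in principle enumerate all six names without executing the tests, which is the feature being requested.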