Add tests #71
Conversation
Codecov Report
@@            Coverage Diff             @@
##             main      #71      +/-   ##
===========================================
+ Coverage   62.50%   98.82%   +36.32%
===========================================
  Files           8        8
  Lines         424      424
===========================================
+ Hits          265      419      +154
+ Misses        159        5      -154

See 5 files with indirect coverage changes.
@pratikunterwegs @joshwlambert @Bisaloo any idea how to fix this failing test?
Perhaps a stupid question, but have you changed the behaviour of the function in a way that would legitimately change the snapshot? In which case accepting the new snapshot would be the way out, I'd guess.
@pratikunterwegs I haven't. Normally, that's the output you'd see in RStudio when you first run the test, and it'd go away after you've accepted it. However, I don't know how to achieve that in the CI. I'm tempted to delete these tests at the cost of some coverage.
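A minimal sketch of the usual snapshot-accept workflow (assuming testthat ≥ 3.0; the test file name is a placeholder, and the updated _snaps/ files need to be committed so CI compares against them rather than regenerating):

# Run the affected tests locally, review the new output, then accept it
testthat::test_file("tests/testthat/test-helpers.R")  # placeholder file name
testthat::snapshot_accept()                            # rewrites tests/testthat/_snaps/
# Commit the updated _snaps/ files; CI then compares against the committed snapshots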
@jamesmbaazam can you reproduce this issue locally? I've run the tests and calculated code coverage on this branch locally and everything works. Have you tried re-running the failed workflow to check if it was a one-time issue? |
@joshwlambert It passes locally for me too but fails on the CI. I get the same error when I re-run the failed jobs.
@jamesmbaazam from what I understand, the snapshot fails because its components are being wrapped in {covr}'s tracing code when coverage is calculated.

This issue on using {covr} with {drake} appears to suggest that there is not much to be done about this when there is a conflict with static code analysis tools; you would probably be better off removing this snapshot test.

One alternative which I haven't tried, and which just might work, is converting the function factory output to a string internally and returning a string instead of a closure. However, I think this would also be caught by {covr} and would probably result in the same issue.

Another issue which you might find insightful regarding code coverage for function factories: r-lib/covr#363.

An alternative would be to create snapshot tests for the outputs of the generated functions, rather than the function body itself, and/or to test for statistical correctness.
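A minimal sketch of that last alternative (the sampler and its parameters are placeholders, not the package's actual API): test the behaviour of the generated function rather than snapshotting its body.

test_that("generated offspring sampler is statistically sensible", {
  # stand-in for whatever closure the function factory would return
  offspring_fun <- function(n) stats::rpois(n, lambda = 2)
  set.seed(42)
  samples <- offspring_fun(1e4)
  expect_length(samples, 1e4)
  expect_true(all(samples >= 0))
  # statistical correctness: sample mean close to the distribution mean
  expect_equal(mean(samples), 2, tolerance = 0.1)
})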
@pratikunterwegs Thanks for looking into this; really detailed explanation. I thought about it over the weekend and decided to remove the snapshot test for now, as it doesn't make much sense. There's another test to ensure that the right distribution is passed on.
For an immediate fix of the issue at hand, there is a small change you can add.

More generally, and slightly beyond the scope of this specific PR, I don't see much benefit for this function in its current state. As far as I can tell, it is used just once. I see two potential solutions.
Thanks @Bisaloo. I will implement the first option. Creating an issue to fix this. |
@pratikunterwegs Would you like to review this PR? I am hoping to merge it in by the close of Wednesday. |
Sure, will have feedback by tomorrow afternoon. |
I would suggest reorganising this file so that the mapping between the <epichains> and aggregated data created at the top, and the actual tests, is clearer for maintainers who are new to the package. You could create each object just before you test it, for instance.
I see that the class of the objects is tested in tests-simulate.R, whereas this file tests methods; a small comment saying so would be good here for future maintainers. Alternatively, you could combine the two files.
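For instance, something along these lines (object, helper, and argument names are placeholders, not the package's actual API):

test_that("aggregate() by generation returns the expected class", {
  # create the object right where it is used, so set-up and expectations sit together
  epichains_tree <- make_example_tree()  # hypothetical helper returning an <epichains> object
  aggreg_by_gen <- aggregate(epichains_tree, by = "generation")
  expect_s3_class(aggreg_by_gen, "epichains_aggregate_df")
})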
You could create each object just before you test it, for instance.
Thanks, I'll make the change. This was how I organised it originally, but I noticed I was doing the same thing many times, so I decided to move it to the top. It does make sense, though, to create each object within the context in which it's tested, for ease of reading; it seems to be a trade-off between readability and efficiency. I've made the suggested changes here: fdf45f8 and 395641d.
Alternatively, you could combine the two files.
I think I'll keep them separate to preserve the script-to-test mapping.
tests/testthat/test-epichains.R (outdated)
test_that("head and tail methods work", {
  expect_snapshot(head(epichains_tree))
  expect_snapshot(head(epichains_tree2))
  expect_snapshot(tail(epichains_tree))
  expect_snapshot(tail(epichains_tree2))
})
I would suggest adding a check for the return type, which is currently a data.frame. It might be worth adding this to the method documentation as well, as users might be expecting an <epichains> object.
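Something along these lines, reusing the objects from the block above (a sketch of the suggestion, not a prescribed implementation):

# assert the return type of head()/tail() on an <epichains> object
expect_s3_class(head(epichains_tree), "data.frame")
expect_s3_class(tail(epichains_tree), "data.frame")
# the result is a plain data.frame, not an <epichains> object
expect_false(inherits(head(epichains_tree), "epichains"))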
I've added the tests here: 01c5ef9. I'll create an issue for the documentation point.
tests/testthat/test-epichains.R (outdated)
expect_s3_class(
  aggreg_by_gen,
  "epichains_aggregate_df"
)
expect_s3_class(
  aggreg_by_time,
  "epichains_aggregate_df"
)
expect_s3_class(
  aggreg_by_both,
  "epichains_aggregate_df"
)
Here it might also be worth checking that the data aggregated by "both" inherits from a list, whereas the other two inherit from data.frame. I think there should be a wider rethinking of classes in {epichains} as well, to avoid this ambiguous inheritance structure.
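A sketch of those extra assertions, using the objects from the block above:

# "both" aggregation is expected to be list-backed; the other two data.frame-backed
expect_s3_class(aggreg_by_gen, "data.frame")
expect_s3_class(aggreg_by_time, "data.frame")
expect_true(inherits(aggreg_by_both, "list"))
expect_false(inherits(aggreg_by_both, "data.frame"))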
Thanks for the suggestion to rethink the class structures. I'll raise an issue for further discussion. For now, I've added the suggested test here: fdf45f8.
tests/testthat/test-helpers.R (outdated)
I'm reminded by the recent {cfr} review that test descriptions that include function names can be brittle to function name changes - might be good to change that.
I agree that they are brittle, but I wonder if they are really that hard to maintain in a small code base like this. Even for "larger" code bases like dplyr's, tests like this one for across() use function names.
Generic descriptions in long test files are hard to debug in my opinion.
That was pretty much my logic in {cfr} too - but you suggested making them more descriptive. I'm happy for the function names to stay in.
Yeah, the suggestion was to be more specific with the testing contexts. Maybe there was a lapse in communication.
Thanks @jamesmbaazam - looks alright to me overall. I haven't really looked into {epichains} before. From what I can see, I think the tests cover the package functionality, and there seem to be tests for correctness so hopefully the chain simulation functions work as expected.
This PR is mostly adding tests, but one issue that I noticed is that <epichain> and <epichains_aggregated_df> objects can inherit from different base classes (data.frame and vector, and data.frame and list, respectively). I'm not sure whether it's a good idea to get into conditional inheritance in this way. This would probably make it difficult for future developers to easily account for what sort of object a function will return. If the data in each case (e.g. epichain_tree vs epichain_summary) is sufficiently different, perhaps it would be better to have separate classes. If you would find an overarching signature more convenient for some methods, the existing classes could be defined as abstract super-classes with sub-classes instead. Happy to discuss this further.
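To illustrate the super-class idea (class names and constructors below are illustrative only, not the package's current design):

# Hypothetical constructors: shared methods dispatch on "epichains",
# type-specific behaviour dispatches on the sub-classes.
new_epichains_tree <- function(x) {
  stopifnot(is.data.frame(x))  # tree data is data.frame-backed
  structure(x, class = c("epichains_tree", "epichains", class(x)))
}
new_epichains_summary <- function(x) {
  stopifnot(is.vector(x))      # summary data is vector-backed
  structure(x, class = c("epichains_summary", "epichains", class(x)))
}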
A minor point is that, looking at the functions related to checking the offspring distribution function, you would be restricting users to passing functions that are available in {stats} and can thus be found by exists(). Are there likely to be cases where users would want to specify a function from another package and pass it explicitly namespaced as "pkg::function"? If so, do you intend to support that, and would the check_*() functions need to account for it?
I've logged this here #79.
This issue of function look-up has been discussed extensively in #25 and #33 (comment).
Thanks for the changes @jamesmbaazam - looks alright to me.
Thanks for your thorough review, as always.
This PR adds tests for all the functions in the package to close #45.