Generate coverage reports for our test suite and integrate it into our CI #245

bockthom opened this issue Nov 8, 2023 · 2 comments

@bockthom (Collaborator) commented Nov 8, 2023

We already thought about this some years ago, but we never integrated or deployed it:

It would be very helpful to generate a coverage report when running our test suite, to find out which parts of coronet need more tests, which branches are not covered yet, etc.

To make sure we do not forget about this, I am opening this issue.

In general, there are two tasks:

  • search for different coverage tools in R that work together with the testthat and patrick packages that we use for testing, and identify the advantages and drawbacks of the different coverage tools
  • find out whether and how we can integrate this step into our current CI pipeline (maybe codecov could be used together with GitHub Actions, but this also depends on the first step, i.e., which coverage tools are useful in our setting at all)
@maxloeffler (Contributor) commented
I have investigated the R covr package, which is the R package underlying the codecov integration. In the following, I document all my findings.

Integration

First, it is not straightforward to integrate covr into coronet. Contrary to my first impression, covr::package_coverage is not applicable for us by default, as it assumes a project structure that we do not comply with, as well as at least one configuration file (the standard R package DESCRIPTION file) that does not seem to be trivially constructed and kept up-to-date in our structure either. However, we can use covr::file_coverage to achieve a similar result; here is an example integration:

## define paths
code.dir = c(".")
test.dir = c("./tests")

## define filter
test.regex = ""

## generate coverage report
test.files = unlist(sapply(test.dir, list.files, pattern = "\\.R$", full.names = TRUE))
test.files = test.files[grepl(test.regex, test.files)]
code.files = unlist(sapply(code.dir, list.files, pattern = "\\.R$", full.names = TRUE))
code.files = code.files[!code.files %in% c("./tests.R", "./showcase.R", "./install.R", "./util-init.R")]

report = covr::file_coverage(source_files = code.files, test_files = test.files)

In this integration, all files ending in .R in the main folder of the repository are instrumented for coverage measurement, excluding tests.R, showcase.R, install.R, and util-init.R. Further, all files in ./tests/ ending in .R comprise the set of executed unit tests. In the end, the report object contains the coverage information. For your interest, here is an excerpt of the coverage report generated by covr:

Browse[1]> report
Coverage: 79.17%
./util-networks-metrics.R: 0.00%
./util-plot-evaluation.R: 0.00%
./util-plot.R: 0.00%
./util-tensor.R: 0.00%
./util-bulk.R: 22.58%
./util-core-peripheral.R: 38.04%
./util-conf.R: 73.99%
./util-data.R: 85.45%
./util-misc.R: 86.13%
./util-data-misc.R: 87.50%
./util-networks.R: 92.83%
./util-read.R: 92.83%
./util-networks-misc.R: 94.12%
./util-networks-covariates.R: 94.91%
./util-split.R: 97.69%
./util-motifs.R: 98.82%
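
As a side note, covr also ships helpers to drill down into such a report; here is a small sketch (covr::report additionally needs covr's optional dependencies, such as DT and htmltools, to be installed):

## list all code lines that are never executed by the tests
uncovered = covr::zero_coverage(report)

## render a browsable HTML report locally
covr::report(report, file = "coverage.html")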

Forwarding this report to their online service does not seem to work right out of the box. I have not investigated the causes yet (it might be related to us using covr::file_coverage instead of covr::package_coverage); however, here are the errors I get:

Browse[1]> cov = covr::codecov(coverage = report)
Request failed [400]. Retrying in 1 seconds...
Request failed [400]. Retrying in 1 seconds...

Browse[1]> cov
{html_document}
<html>
[1] <body><p>Could not determine repo and owner</p></body>

Caveats

Unfortunately, this approach comes with some caveats, which I discuss in the following:

  • Generating code coverage requires re-executing all tests. This does not seem to be an issue per se: we could simply skip the separate test run before generating coverage information. Coverage generation stops when it encounters a failing test, and it also displays logs and accumulated warnings as expected.
  • Executing the tests through covr::file_coverage prints artifacts like Test passed 🥳 or Test passed 🌈 whenever a test passes. The function does not seem to accept any argument that would allow us to disable this output.
  • Executing the tests through this approach changes the ProjectConf's datapath attribute, which is explicitly tested in test-data.R: the new approach prepends an explicit ./tests/ to the rest of the path. Therefore, the tests would need to be updated. I investigated the tests further and found that this path is already set explicitly when debugging a file individually. Hence, I think updating these paths is not a deal breaker:

## use only when debugging this file independently
if (!dir.exists(CF.DATA)) CF.DATA = file.path(".", "tests", "codeface-data")

I have tried to circumvent this by adding a new file coverage.R, which would include all additions discussed above, and then setting test.files = "tests.R"; however, the coverage report then yields a score of 0% for all source files, as well as in total, despite executing all tests. Therefore, I am not aware of a better fix. Also, I believe that the discussed tests have to be updated whether we use covr in this way or not, since they test for a hardcoded path and do not account for the fact that the path might be different when a file is debugged individually; a sketch of a more robust check follows below.
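
For illustration, here is a hedged sketch of such a path-robust check (CF.DATA as set up in test-data.R; proj.conf$get.value("datapath") is a hypothetical accessor for this sketch, not necessarily coronet's actual API):

## compare normalized paths against the conditionally adjusted CF.DATA
## instead of a second hardcoded relative string
expected = normalizePath(CF.DATA, mustWork = FALSE)
actual = normalizePath(proj.conf$get.value("datapath"), mustWork = FALSE)
testthat::expect_equal(actual, expected)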

Summary

I do believe that coverage reports would greatly benefit coronet and that covr is a suitable tool for achieving this goal. However, there are some minor hurdles, and we should discuss how to overcome them.

@maxloeffler (Contributor) commented

Here is a small follow-up:

Other coverage tools

I tried to find other suitable coverage tools to compare with covr and found rcov and testCoverage. I exclude rcov right away, as it is a tiny project that has been at a standstill for almost 10 years and requires heavy source-code instrumentation. testCoverage is also rather small, but it was at least maintained more consistently than rcov until the owner publicly archived it 5 years ago. I still wanted to have a look at testCoverage.

Integrating it into our tests.R is as straightforward as with covr:

library("testCoverage")

## define paths
code.dir = c(".")
test.dir = c("./tests")

## define filter
test.regex = ""

## generate coverage report
test.files = unlist(sapply(test.dir, list.files, pattern = "\\.R$", full.names = TRUE))
test.files = test.files[grepl(test.regex, test.files)]
code.files = unlist(sapply(code.dir, list.files, pattern = "\\.R$", full.names = TRUE))
code.files = code.files[!code.files %in% c("./tests.R", "./showcase.R", "./install.R", "./util-init.R")]

report = testCoverage::reportCoverage(sourcefiles = code.files, executionfiles = test.files)

From a first look, it seems to instrument coronet correctly and also executes the tests correctly. It generates a nice, interactive HTML report locally. Unfortunately, either testCoverage or R is currently misbehaving; here is an excerpt from the error log:

...
1 : reading test-core-peripheral.R ...1 test-core-peripheral.R failed with error
<simpleError in context("Tests for the file 'util-core-peripheral.R'"): konnte Funktion "context" nicht finden>
2 : reading test-data-cut.R ...2 test-data-cut.R failed with error
<simpleError in context("Cutting functionality on ProjectData side."): konnte Funktion "context" nicht finden>
3 : reading test-data.R ...3 test-data.R failed with error
<simpleError in context("Tests for ProjectData functionalities."): konnte Funktion "context" nicht finden>
4 : reading test-misc.R ...4 test-misc.R failed with error
...

These errors cause the coverage to be close to 0%; however, I was earlier able to obtain 25% with fewer, but still a significant amount of, these weird errors 🤷‍♂️. The could-not-find-function errors suggest that testthat is not attached in the environment in which testCoverage sources the test files.
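
If that is indeed the cause, a minimal mitigation sketch would be to attach testthat up front (assuming testCoverage evaluates the test files with access to the global search path):

## attach testthat before testCoverage sources the test files, so that
## context() and friends can be resolved
library("testthat")

report = testCoverage::reportCoverage(sourcefiles = code.files, executionfiles = test.files)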

covr-only test execution

In our last meeting, we discussed whether it is feasible to run our tests through covr only, in comparison to first running covr and then running our tests again (for obvious performance reasons). The most striking disadvantage of covr-only execution is that it does not give us a summary like "8 out of 11 tests passed"; it just prints the output and debug logs. I believe that this alone is enough to reconsider the "double execution". Further, covr does not seem to continue execution when a test fails; instead, it halts directly. I believe this is unwanted.
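
For reference, here is a sketch of the "double execution" variant, reusing the code.files and test.files vectors from the earlier snippet:

## 1) run the tests via testthat to get a proper pass/fail summary
results = testthat::test_dir("./tests")

## 2) afterwards, re-execute the tests under covr to collect coverage
report = covr::file_coverage(source_files = code.files, test_files = test.files)
print(report)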

Generating nice coverage reports through "codecov.io"

Previously, I did not use codecov correctly, i.e., the errors I received were self-inflicted. Now I have signed up, obtained a correct API token for my local coronet repository, and tried the upload again, but it still did not work for whatever reason (same error). We might be able to sort this out in the future, depending on how important the level of detail of the reports is to you.
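
For the record, two hedged sketches of what I would try next (covr reading CODECOV_TOKEN from the environment is documented; covr::to_cobertura being available depends on the installed covr version):

## variant 1: provide the token via the documented environment variable and
## let covr upload the report directly
Sys.setenv(CODECOV_TOKEN = "<token>") ## placeholder, never commit real tokens
cov = covr::codecov(coverage = report)

## variant 2: export a Cobertura XML file and hand it to the standalone
## codecov uploader, e.g., in a CI step
covr::to_cobertura(report, filename = "cobertura.xml")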
