-
From what I have seen, functions are typically tested outside the pipeline on scaled-down inputs (smaller data, fewer iterations of the algorithm, and so on) so that unit tests run quickly enough to be feasible. And yes, having a separate package for commonly shared components is a sensible place for those tests.
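For example, a minimal sketch of that scaled-down approach (`fit_model()` and its `iterations` argument are hypothetical stand-ins for a real pipeline function defined in `R/functions.R`):

```r
# A minimal sketch, assuming fit_model() is a hypothetical pipeline
# function defined in R/functions.R that returns an lm fit.
library(testthat)
source("R/functions.R")

test_that("fit_model() works on a scaled-down input", {
  small_data <- data.frame(x = 1:10, y = rnorm(10))
  fit <- fit_model(small_data, iterations = 5)  # tiny input keeps the test fast
  expect_s3_class(fit, "lm")
})
```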
-
I've been pondering the same question recently. I find that the following works: wrap your tests in a function,

```r
run_tests <- function() {
  testthat::test_that("description", {
    # your expectations
  })
}
```

and call it from the pipeline:

```r
tar_target(tests, run_tests())
```

This way, the function throws an error as soon as one of the tests fails and interrupts the execution of the pipeline. Another, more informal way could be to render an RMarkdown document as part of the pipeline and inspect the output of your functions there, either by just running them or by including the tests. In another thread, @wlandau recommended introducing new methods (with unit tests) in a separate package in addition to the project that contains your pipeline. I am not sure what to make of that idea, though, because at least in my case new methods will likely be quite specific to the project at hand, and changes made to the package won't show up in the version control set up in the main project...
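For reference, a minimal sketch of how those pieces might fit together in `_targets.R` (`analysis()` is a hypothetical placeholder for a real target):

```r
# _targets.R -- a minimal sketch; run_tests() and a hypothetical
# analysis() are assumed to be defined in R/functions.R.
library(targets)
source("R/functions.R")

list(
  tar_target(tests, run_tests()),  # errors on test failure, halting the pipeline
  tar_target(result, analysis())
)
```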
-
I am curious about a couple of things.

1. Is it recommended or commonly done to write unit tests for functions in a `targets` workflow? I have a lot of functions to generate targets, but they aren't tested. Maybe that is just overkill for the vast majority of projects built with `targets`? I suppose if you have a pipeline (or components) that is so commonly used or shared, then you could just package it up and add tests.
2. If you want to write tests, can you use `testthat`? I imagine that you can create a `tests/testthat` directory in your project, similar to what you would do for a package (see the sketch below). There is the `tar_test()` function. Is that function intended only for target factory packages like `stantargets`?
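As a minimal sketch of the standalone approach, assuming test files live under `tests/testthat/` at the project root (no package structure required):

```r
# Run every test file in tests/testthat/ ad hoc, outside the pipeline.
testthat::test_dir("tests/testthat")
```

This call could also be wrapped in a function and invoked from a target, as in the reply above.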