Add helper functions to access result vectors from test results #11

Open
gorcha opened this issue Apr 16, 2020 · 7 comments
Labels
Effort: ⭐⭐⭐ Challenging · Importance: ❗ Desirable · Task: Enhancement (New feature or request)

Comments

gorcha (Collaborator) commented Apr 16, 2020

No description provided.

gorcha added the Task: Enhancement (New feature or request) label on Apr 16, 2020
kinto-b (Contributor) commented Jan 22, 2021

Something like this?

filter_failed <- function(expectation, dat) {
  # Capture the expectation result rather than letting the failure signal
  res <- testthat::capture_expectation(expectation)
  # res$custom$result holds the per-row pass/fail vector, so keep the failures
  dat[!res$custom$result, ]
}

filter_failed(expect_values(cyl, 4:6, data = mtcars), mtcars)

#>                     mpg cyl  disp  hp drat    wt  qsec vs am gear carb
#> Hornet Sportabout   18.7   8 360.0 175 3.15 3.440 17.02  0  0    3    2
#> Duster 360          14.3   8 360.0 245 3.21 3.570 15.84  0  0    3    4
#> Merc 450SE          16.4   8 275.8 180 3.07 4.070 17.40  0  0    3    3
#> Merc 450SL          17.3   8 275.8 180 3.07 3.730 17.60  0  0    3    3
#> Merc 450SLC         15.2   8 275.8 180 3.07 3.780 18.00  0  0    3    3
#> Cadillac Fleetwood  10.4   8 472.0 205 2.93 5.250 17.98  0  0    3    4
#> Lincoln Continental 10.4   8 460.0 215 3.00 5.424 17.82  0  0    3    4
#> Chrysler Imperial   14.7   8 440.0 230 3.23 5.345 17.42  0  0    3    4
#> Dodge Challenger    15.5   8 318.0 150 2.76 3.520 16.87  0  0    3    2
#> AMC Javelin         15.2   8 304.0 150 3.15 3.435 17.30  0  0    3    2
#> Camaro Z28          13.3   8 350.0 245 3.73 3.840 15.41  0  0    3    4
#> Pontiac Firebird    19.2   8 400.0 175 3.08 3.845 17.05  0  0    3    2
#> Ford Pantera L      15.8   8 351.0 264 4.22 3.170 14.50  0  1    5    4
#> Maserati Bora       15.0   8 301.0 335 3.54 3.570 14.60  0  1    5    8

We could also store a reference to the dataframe in the expectation result so we wouldn't have to write mtcars twice.

We would probably have to introduce expectation classes to separate record-level expectations (like expect_values) from dataset-level expectations (like expect_unique), and then throw an informative error if filter_failed() were used on a dataset-level expectation.
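Roughly, that dispatch could look something like the below (purely a sketch; the testdat_record_expectation class is made up here and would need to be attached wherever the expectation result is built):

filter_failed <- function(expectation, dat) {
  res <- testthat::capture_expectation(expectation)
  # Hypothetical class marking record-level expectations; dataset-level
  # expectations wouldn't carry it, so we can fail early with a clear message.
  if (!inherits(res, "testdat_record_expectation")) {
    stop("filter_failed() only supports record-level expectations ",
         "(e.g. expect_values()); dataset-level expectations like ",
         "expect_unique() have no per-row result to filter on.")
  }
  dat[!res$custom$result, , drop = FALSE]
}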

kinto-b added the Effort: ⭐⭐⭐ Challenging and Importance: ❗ Desirable labels on Feb 8, 2021
kinto-b (Contributor) commented Apr 6, 2021

@g-hyo I think this was initially your idea. Is this roughly what you had in mind?

g-hyo commented Apr 6, 2021

I'm not sure if this was my idea, but I'm happy to take the credit :p

This functionality definitely seems like it would be extremely useful for investigating failed tests, though.

kinto-b (Contributor) commented Apr 6, 2021

Oh, I thought it was you who suggested it ages back when we were first doing the timeseries everything stuff.

Thinking again about this, the approach above is probably sound. While it's slightly unaesthetic/unwieldy to have to specify the data twice, that's probably forgivable given that the user will often be setting the test data anyway, e.g.:

filter_failed <- function(expectation, data = get_testdata()) {
  # Capture the expectation result rather than letting the failure signal
  res <- testthat::capture_expectation(expectation)
  # Keep just the rows whose per-row result was FALSE
  data[!res$custom$result, ]
}

set_testdata(mtcars)
filter_failed(expect_values(cyl, 4:6))

#>                     mpg cyl  disp  hp drat    wt  qsec vs am gear carb
#> Hornet Sportabout   18.7   8 360.0 175 3.15 3.440 17.02  0  0    3    2
#> Duster 360          14.3   8 360.0 245 3.21 3.570 15.84  0  0    3    4
#> Merc 450SE          16.4   8 275.8 180 3.07 4.070 17.40  0  0    3    3
#> Merc 450SL          17.3   8 275.8 180 3.07 3.730 17.60  0  0    3    3
#> Merc 450SLC         15.2   8 275.8 180 3.07 3.780 18.00  0  0    3    3
#> Cadillac Fleetwood  10.4   8 472.0 205 2.93 5.250 17.98  0  0    3    4
#> Lincoln Continental 10.4   8 460.0 215 3.00 5.424 17.82  0  0    3    4
#> Chrysler Imperial   14.7   8 440.0 230 3.23 5.345 17.42  0  0    3    4
#> Dodge Challenger    15.5   8 318.0 150 2.76 3.520 16.87  0  0    3    2
#> AMC Javelin         15.2   8 304.0 150 3.15 3.435 17.30  0  0    3    2
#> Camaro Z28          13.3   8 350.0 245 3.73 3.840 15.41  0  0    3    4
#> Pontiac Firebird    19.2   8 400.0 175 3.08 3.845 17.05  0  0    3    2
#> Ford Pantera L      15.8   8 351.0 264 4.22 3.170 14.50  0  1    5    4
#> Maserati Bora       15.0   8 301.0 335 3.54 3.570 14.60  0  1    5    8

However, it's irritating to have to implement separate classes of expectations just to work around the fact that a few expect_() functions don't test the data row by row.
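One lighter-weight option (just a thought, names illustrative): skip the extra classes and check at runtime whether the captured result actually carries a per-row logical vector:

filter_failed <- function(expectation, data = get_testdata()) {
  res <- testthat::capture_expectation(expectation)
  result <- res$custom$result
  # A record-level expectation should yield one TRUE/FALSE per row;
  # anything else is treated as dataset-level and rejected up front.
  if (!is.logical(result) || length(result) != nrow(data)) {
    stop("filter_failed() needs a record-level expectation ",
         "with a per-row result (e.g. expect_values()).")
  }
  data[!result, , drop = FALSE]
}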

kinto-b (Contributor) commented Jan 7, 2022

@gorcha I might go ahead and implement this, unless you can think of a better way to achieve it?

gorcha (Collaborator, Author) commented Jan 8, 2022

Hold off for the moment. I've been thinking about the best way to do this in the context of the "test explorer" kind of thing we were briefly chatting about, so this might move in a different direction.

frycast commented Mar 20, 2024

Just checking if this is still on the cards?

This is the version I'm using now:

filter_failed <- function(expectation, dat) {
  # Capture the failure condition (if any) instead of letting it propagate
  res <- testthat::capture_error(expectation)
  if (!is.null(res)) {
    # res$custom$result is the per-row pass/fail vector attached to the result
    failed_rows <- dat[!res$custom$result, ]
    # Show the offending rows, then re-throw the original failure
    print(failed_rows)
    stop(res)
  }
  invisible(NULL)
}
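Called like the earlier examples in the thread (illustrative only), it prints the rows that failed and then re-raises the original failure:

filter_failed(expect_values(cyl, 4:6, data = mtcars), mtcars)
#> prints the failing rows from mtcars, then stops with the captured error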
