diff --git a/.Rbuildignore b/.Rbuildignore index 55871de..5fffd81 100644 --- a/.Rbuildignore +++ b/.Rbuildignore @@ -11,3 +11,11 @@ ^revdep$ ^codecov\.yml$ ^\.github$ +^vignettes$ +^man/_cache$ +^README_cache$ +^README\.html$ +^README\.Rmd$ +^README[.]html$ +^rhub2\.Rproj$ +^\.Rproj\.user$ diff --git a/.github/CODE_OF_CONDUCT.md b/.github/CODE_OF_CONDUCT.md new file mode 100644 index 0000000..3ac34c8 --- /dev/null +++ b/.github/CODE_OF_CONDUCT.md @@ -0,0 +1,126 @@ +# Contributor Covenant Code of Conduct + +## Our Pledge + +We as members, contributors, and leaders pledge to make participation in our +community a harassment-free experience for everyone, regardless of age, body +size, visible or invisible disability, ethnicity, sex characteristics, gender +identity and expression, level of experience, education, socio-economic status, +nationality, personal appearance, race, caste, color, religion, or sexual +identity and orientation. + +We pledge to act and interact in ways that contribute to an open, welcoming, +diverse, inclusive, and healthy community. 
+ +## Our Standards + +Examples of behavior that contributes to a positive environment for our +community include: + +* Demonstrating empathy and kindness toward other people +* Being respectful of differing opinions, viewpoints, and experiences +* Giving and gracefully accepting constructive feedback +* Accepting responsibility and apologizing to those affected by our mistakes, + and learning from the experience +* Focusing on what is best not just for us as individuals, but for the overall + community + +Examples of unacceptable behavior include: + +* The use of sexualized language or imagery, and sexual attention or advances of + any kind +* Trolling, insulting or derogatory comments, and personal or political attacks +* Public or private harassment +* Publishing others' private information, such as a physical or email address, + without their explicit permission +* Other conduct which could reasonably be considered inappropriate in a + professional setting + +## Enforcement Responsibilities + +Community leaders are responsible for clarifying and enforcing our standards of +acceptable behavior and will take appropriate and fair corrective action in +response to any behavior that they deem inappropriate, threatening, offensive, +or harmful. + +Community leaders have the right and responsibility to remove, edit, or reject +comments, commits, code, wiki edits, issues, and other contributions that are +not aligned to this Code of Conduct, and will communicate reasons for moderation +decisions when appropriate. + +## Scope + +This Code of Conduct applies within all community spaces, and also applies when +an individual is officially representing the community in public spaces. +Examples of representing our community include using an official e-mail address, +posting via an official social media account, or acting as an appointed +representative at an online or offline event. 
+ +## Enforcement + +Instances of abusive, harassing, or otherwise unacceptable behavior may be +reported to the community leaders responsible for enforcement at codeofconduct@posit.co. +All complaints will be reviewed and investigated promptly and fairly. + +All community leaders are obligated to respect the privacy and security of the +reporter of any incident. + +## Enforcement Guidelines + +Community leaders will follow these Community Impact Guidelines in determining +the consequences for any action they deem in violation of this Code of Conduct: + +### 1. Correction + +**Community Impact**: Use of inappropriate language or other behavior deemed +unprofessional or unwelcome in the community. + +**Consequence**: A private, written warning from community leaders, providing +clarity around the nature of the violation and an explanation of why the +behavior was inappropriate. A public apology may be requested. + +### 2. Warning + +**Community Impact**: A violation through a single incident or series of +actions. + +**Consequence**: A warning with consequences for continued behavior. No +interaction with the people involved, including unsolicited interaction with +those enforcing the Code of Conduct, for a specified period of time. This +includes avoiding interactions in community spaces as well as external channels +like social media. Violating these terms may lead to a temporary or permanent +ban. + +### 3. Temporary Ban + +**Community Impact**: A serious violation of community standards, including +sustained inappropriate behavior. + +**Consequence**: A temporary ban from any sort of interaction or public +communication with the community for a specified period of time. No public or +private interaction with the people involved, including unsolicited interaction +with those enforcing the Code of Conduct, is allowed during this period. +Violating these terms may lead to a permanent ban. + +### 4. 
Permanent Ban + +**Community Impact**: Demonstrating a pattern of violation of community +standards, including sustained inappropriate behavior, harassment of an +individual, or aggression toward or disparagement of classes of individuals. + +**Consequence**: A permanent ban from any sort of public interaction within the +community. + +## Attribution + +This Code of Conduct is adapted from the [Contributor Covenant][homepage], +version 2.1, available at +<https://www.contributor-covenant.org/version/2/1/code_of_conduct.html>. + +Community Impact Guidelines were inspired by +[Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/inclusion). + +For answers to common questions about this code of conduct, see the FAQ at +<https://www.contributor-covenant.org/faq>. Translations are available at +<https://www.contributor-covenant.org/translations>. + +[homepage]: https://www.contributor-covenant.org diff --git a/.github/workflows/R-CMD-check.yaml b/.github/workflows/R-CMD-check.yaml index 091d3e1..b0aabba 100644 --- a/.github/workflows/R-CMD-check.yaml +++ b/.github/workflows/R-CMD-check.yaml @@ -22,25 +22,24 @@ jobs: fail-fast: false matrix: config: - - {os: macOS-latest, r: 'release'} + - {os: macos-latest, r: 'release'} - {os: windows-latest, r: 'release'} - # Use 3.6 to trigger usage of RTools35 - - {os: windows-latest, r: '3.6'} + # use 4.1 to check with rtools40's older compiler + - {os: windows-latest, r: '4.1'} - {os: ubuntu-latest, r: 'devel', http-user-agent: 'release'} - {os: ubuntu-latest, r: 'release'} - {os: ubuntu-latest, r: 'oldrel-1'} - {os: ubuntu-latest, r: 'oldrel-2'} - {os: ubuntu-latest, r: 'oldrel-3'} - - {os: ubuntu-latest, r: 'oldrel-4'} env: GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }} R_KEEP_PKG_SOURCE: yes steps: - - uses: actions/checkout@v2 + - uses: actions/checkout@v4 - uses: r-lib/actions/setup-pandoc@v2 @@ -58,3 +57,4 @@ jobs: - uses: r-lib/actions/check-r-package@v2 with: upload-snapshots: true + build_args: 'c("--no-manual","--compact-vignettes=gs+qpdf")' diff --git a/.github/workflows/pkgdown.yaml b/.github/workflows/pkgdown.yaml index 0b26021..f1a0231 100644 ---
a/.github/workflows/pkgdown.yaml +++ b/.github/workflows/pkgdown.yaml @@ -19,8 +19,11 @@ jobs: group: pkgdown-${{ github.event_name != 'pull_request' || github.run_id }} env: GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }} + R_CLI_NUM_COLORS: 256 + permissions: + contents: write steps: - - uses: actions/checkout@v2 + - uses: actions/checkout@v4 - uses: r-lib/actions/setup-pandoc@v2 @@ -34,12 +37,16 @@ jobs: needs: website - name: Build site - run: pkgdown::build_site_github_pages(new_process = FALSE, install = FALSE) + run: | + rmarkdown::render("README.Rmd") + pkgdown::build_site_github_pages(new_process = FALSE, install = FALSE) shell: Rscript {0} + env: + IN_PKGDOWN: true - name: Deploy to GitHub pages 🚀 if: github.event_name != 'pull_request' - uses: JamesIves/github-pages-deploy-action@4.1.4 + uses: JamesIves/github-pages-deploy-action@v4.5.0 with: clean: false branch: gh-pages diff --git a/.github/workflows/pr-commands.yaml b/.github/workflows/pr-commands.yaml index 97271eb..eea58c5 100644 --- a/.github/workflows/pr-commands.yaml +++ b/.github/workflows/pr-commands.yaml @@ -14,7 +14,7 @@ jobs: env: GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }} steps: - - uses: actions/checkout@v2 + - uses: actions/checkout@v4 - uses: r-lib/actions/pr-fetch@v2 with: @@ -51,7 +51,7 @@ jobs: env: GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }} steps: - - uses: actions/checkout@v2 + - uses: actions/checkout@v4 - uses: r-lib/actions/pr-fetch@v2 with: diff --git a/.github/workflows/test-coverage.yaml b/.github/workflows/test-coverage.yaml index 4b65418..21b8a93 100644 --- a/.github/workflows/test-coverage.yaml +++ b/.github/workflows/test-coverage.yaml @@ -15,7 +15,7 @@ jobs: GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }} steps: - - uses: actions/checkout@v2 + - uses: actions/checkout@v4 - uses: r-lib/actions/setup-r@v2 with: @@ -27,5 +27,24 @@ jobs: needs: coverage - name: Test coverage - run: covr::codecov(quiet = FALSE) + run: | + covr::codecov( + quiet = FALSE, + clean = FALSE, + install_path = 
file.path(normalizePath(Sys.getenv("RUNNER_TEMP"), winslash = "/"), "package") + ) shell: Rscript {0} + + - name: Show testthat output + if: always() + run: | + ## -------------------------------------------------------------------- + find '${{ runner.temp }}/package' -name 'testthat.Rout*' -exec cat '{}' \; || true + shell: bash + + - name: Upload test results + if: failure() + uses: actions/upload-artifact@v4 + with: + name: coverage-test-failures + path: ${{ runner.temp }}/package diff --git a/.gitignore b/.gitignore index 54834f8..a0af6b7 100644 --- a/.gitignore +++ b/.gitignore @@ -5,3 +5,6 @@ inst/doc .RData rhub.Rproj /revdep +/man/_cache +/README_cache +/README.html diff --git a/DESCRIPTION b/DESCRIPTION index c9a2131..633fb98 100644 --- a/DESCRIPTION +++ b/DESCRIPTION @@ -1,46 +1,45 @@ Package: rhub -Title: Connect to 'R-hub' -Version: 1.1.2.9000 +Title: Tools for R package developers +Version: 1.9.9.9000 Authors@R: c( person("Gábor", "Csárdi",, "csardi.gabor@gmail.com", role = c("aut", "cre")), - person("Maëlle", "Salmon", role = "aut", + person("Maëlle", "Salmon", role = "aut", email = "maelle.salmon@yahoo.se", comment = c(ORCID = "0000-0002-2815-0399")), person("R Consortium", role = c("fnd"))) -Description: Run 'R CMD check' on any of the 'R-hub' () - architectures, from the command line. The current architectures include - 'Windows', 'macOS', 'Solaris' and various 'Linux' distributions. +Description: R-hub v2 uses GitHub Actions to run `R CMD check` and + similar package checks. The rhub package helps you set up + R-hub v2 for your R package, and start running checks. 
License: MIT + file LICENSE URL: https://github.com/r-hub/rhub, https://r-hub.github.io/rhub/ BugReports: https://github.com/r-hub/rhub/issues -RoxygenNote: 7.2.1.9000 +RoxygenNote: 7.3.1.9000 Roxygen: list(markdown = TRUE) +Depends: + R (>= 4.0) Imports: - assertthat, callr, - cli (>= 1.1.0), - crayon, + cli, + curl, desc, - digest, - httr, + gert, + glue, + gitcreds, jsonlite, - parsedate, - pillar, - prettyunits, + pkgbuild, processx, - R6, rappdirs, - rcmdcheck (>= 1.2.1), rematch, - tibble, + R6, + rprojroot, utils, - uuid, - whoami, - withr -Suggests: - covr, - testthat, + whoami +Suggests: + asciicast, + debugme, knitr, - rmarkdown + pillar, + rmarkdown, + testthat (>= 3.0.0), + webfakes Encoding: UTF-8 -VignetteBuilder: knitr, rmarkdown diff --git a/LICENSE b/LICENSE index 0b95b9e..9d833b7 100644 --- a/LICENSE +++ b/LICENSE @@ -1,2 +1,2 @@ -YEAR: 2019 -COPYRIGHT HOLDER: R Consortium +YEAR: 2019-2024 +COPYRIGHT HOLDER: R Consortium, Posit PBC diff --git a/NAMESPACE b/NAMESPACE index ad60cb2..1e1c0c0 100644 --- a/NAMESPACE +++ b/NAMESPACE @@ -1,22 +1,11 @@ # Generated by roxygen2: do not edit by hand -S3method("[",rhub_column_group_id) -S3method("[",rhub_column_id) -S3method("[",rhub_column_result) -S3method("[",rhub_column_status) -S3method(pillar_shaft,difftime) -S3method(pillar_shaft,rhub_column_group_id) -S3method(pillar_shaft,rhub_column_id) -S3method(pillar_shaft,rhub_column_result) -S3method(pillar_shaft,rhub_column_status) -S3method(print,rhub_docker_images) -S3method(print,rhub_handle) -S3method(print,rhub_local_check) +S3method("[",rhub_platforms) +S3method(format,rhub_platforms) +S3method(format,rhub_platforms_summary) S3method(print,rhub_platforms) -S3method(type_sum,rhub_column_group_id) -S3method(type_sum,rhub_column_id) -S3method(type_sum,rhub_column_result) -S3method(type_sum,rhub_column_status) +S3method(print,rhub_platforms_summary) +S3method(summary,rhub_platforms) export(check) export(check_for_cran) export(check_on_centos) @@ -41,56 
+30,26 @@ export(list_validated_emails) export(local_check_linux) export(local_check_linux_images) export(platforms) +export(rc_list_local_tokens) +export(rc_list_repos) +export(rc_new_token) +export(rc_submit) +export(rhub_check) +export(rhub_doctor) +export(rhub_platforms) +export(rhub_setup) export(validate_email) importFrom(R6,R6Class) -importFrom(assertthat,"on_failure<-") -importFrom(assertthat,assert_that) -importFrom(callr,rcmd_safe) -importFrom(cli,col_blue) -importFrom(cli,col_green) -importFrom(cli,col_red) -importFrom(cli,make_ansi_style) -importFrom(cli,style_bold) -importFrom(cli,style_inverse) importFrom(cli,symbol) -importFrom(crayon,blue) -importFrom(crayon,cyan) -importFrom(crayon,green) -importFrom(crayon,make_style) -importFrom(crayon,red) -importFrom(crayon,underline) -importFrom(crayon,yellow) -importFrom(desc,desc_get) -importFrom(desc,desc_get_maintainer) -importFrom(digest,digest) -importFrom(httr,DELETE) -importFrom(httr,GET) -importFrom(httr,POST) -importFrom(httr,add_headers) -importFrom(httr,content) -importFrom(httr,headers) -importFrom(httr,status_code) -importFrom(jsonlite,base64_enc) -importFrom(jsonlite,fromJSON) importFrom(jsonlite,toJSON) -importFrom(jsonlite,unbox) -importFrom(parsedate,parse_iso_8601) -importFrom(pillar,new_pillar_shaft_simple) -importFrom(pillar,pillar_shaft) -importFrom(pillar,type_sum) -importFrom(prettyunits,pretty_dt) -importFrom(prettyunits,pretty_ms) -importFrom(processx,run) importFrom(rappdirs,user_data_dir) -importFrom(rcmdcheck,rcmdcheck) importFrom(rematch,re_match) -importFrom(utils,browseURL) +importFrom(utils,getSrcDirectory) +importFrom(utils,getSrcFilename) +importFrom(utils,getSrcLocation) importFrom(utils,head) importFrom(utils,menu) +importFrom(utils,modifyList) importFrom(utils,read.csv) -importFrom(utils,tail) -importFrom(utils,untar) importFrom(utils,write.table) -importFrom(uuid,UUIDgenerate) importFrom(whoami,email_address) -importFrom(withr,with_dir) diff --git a/NEWS.md b/NEWS.md 
index 442c5f1..3340172 100644 --- a/NEWS.md +++ b/NEWS.md @@ -1,6 +1,15 @@ # rhub development version +## R-hub v2 + +This is a completely new system, see `?rhubv2` manual page or +the 'Getting started with R-hub v2' article at +https://r-hub.github.io/rhub/dev to start. + +Previous functions are now deprecated and defunct. They will be removed +in the next version of the package. + # rhub 1.1.2 * Replace `platform` parameter with `platforms` in `check()` (#497). @@ -17,17 +26,17 @@ ## Enhancements -* `cran_summary()` now messages that we recommend to fix all NOTEs, WARNINGs +* `cran_summary()` now messages that we recommend to fix all NOTEs, WARNINGs and ERRORs before a CRAN submission when the check results aren't 0 NOTE, 0 WARNING, 0 ERROR. - -* `cran_summary()` now outputs informative messages when any of the builds + +* `cran_summary()` now outputs informative messages when any of the builds of the group hasn't completed (yet, or at all). ## Bug fixes -* `cran_summary()` now works for packages whose R CMD Check result include - no NOTE/WARNING/ERROR, and gives an informative error message when not all +* `cran_summary()` now works for packages whose R CMD Check result include + no NOTE/WARNING/ERROR, and gives an informative error message when not all builds are completed yet. * `cran_summary()` now prints lines to screen without unwanted indentation. @@ -45,7 +54,7 @@ * New `get_check()` function that works with check ids, or a check group id. -* `list_package_checks()` and `list_my_checks()` now output a `tibble`, that +* `list_package_checks()` and `list_my_checks()` now output a `tibble`, that is nicely formatted when printed to the screen. 
* The output of `get_check()`, `check()`, `check_on_`, `check_for_cran()`, diff --git a/R/a-rstudio-detect.R b/R/a-rstudio-detect.R new file mode 100644 index 0000000..6a2800d --- /dev/null +++ b/R/a-rstudio-detect.R @@ -0,0 +1,177 @@ + +rstudio <- local({ + + standalone_env <- environment() + parent.env(standalone_env) <- baseenv() + + # -- Collect data ------------------------------------------------------ + + data <- NULL + + get_data <- function() { + envs <- c( + "R_BROWSER", + "R_PDFVIEWER", + "RSTUDIO", + "RSTUDIO_TERM", + "RSTUDIO_CONSOLE_COLOR", + "ASCIICAST") + + d <- list( + pid = Sys.getpid(), + envs = Sys.getenv(envs), + api = tryCatch( + asNamespace("rstudioapi")$isAvailable(), + error = function(err) FALSE + ), + tty = isatty(stdin()), + gui = .Platform$GUI, + args = commandArgs(), + search = search() + ) + d$ver <- if (d$api) asNamespace("rstudioapi")$getVersion() + d$desktop <- if (d$api) asNamespace("rstudioapi")$versionInfo()$mode + + d + } + + # -- Auto-detect environment ------------------------------------------- + + is_rstudio <- function() { + Sys.getenv("RSTUDIO") == "1" + } + + detect <- function(clear_cache = FALSE) { + # Cached? + if (clear_cache) data <<- list() + if (!is.null(data)) return(get_caps(data)) + + # Otherwise get data + new <- get_data() + + # Cache unless told otherwise + cache <- TRUE + + new$type <- if (new$envs[["RSTUDIO"]] != "1") { + # 1. Not RStudio at all + "not_rstudio" + + } else if (new$gui == "RStudio" && new$api) { + # 2. RStudio console, properly initialized + "rstudio_console" + + } else if (new$gui == "RStudio" && ! new$api) { + # 3. RStudio console, initializing + cache <- FALSE + "rstudio_console_starting" + + } else if (new$tty && new$envs[["ASCIICAST"]] != "true") { + # 4. R in the RStudio terminal + # This could also be a subprocess of the console or build pane + # with a pseudo-terminal. There isn't really a way to rule that + # out, without inspecting some process data with ps::ps_*().
+ # At least we rule out asciicast + "rstudio_terminal" + + } else if (! new$tty && + new$envs[["RSTUDIO_TERM"]] == "" && + new$envs[["R_BROWSER"]] == "false" && + new$envs[["R_PDFVIEWER"]] == "false" && + is_build_pane_command(new$args)) { + # 5. R in the RStudio build pane + # https://github.com/rstudio/rstudio/blob/master/src/cpp/session/ + # modules/build/SessionBuild.cpp#L231-L240 + "rstudio_build_pane" + + } else { + # Otherwise it is a subprocess of the console, terminal or + # build pane, and it is hard to say which, so we do not try. + "rstudio_subprocess" + } + + if (cache) data <<- new + + get_caps(new) + } + + is_build_pane_command <- function(args) { + cmd <- gsub("[\"']", "", args[[length(args)]]) + rcmd <- sub("[(].*$", "", cmd) + rcmd %in% c("devtools::build", "devtools::test", "devtools::check") + } + + # -- Capabilities ------------------------------------------------------ + + caps <- list() + + caps$not_rstudio <- function(data) { + list( + type = "not_rstudio", + dynamic_tty = FALSE, + ansi_tty = FALSE, + ansi_color = FALSE, + num_colors = 1L + ) + } + + caps$rstudio_console <- function(data) { + list( + type = "rstudio_console", + dynamic_tty = TRUE, + ansi_tty = FALSE, + ansi_color = data$envs[["RSTUDIO_CONSOLE_COLOR"]] != "", + num_colors = as.integer(data$envs[["RSTUDIO_CONSOLE_COLOR"]]) + ) + } + + caps$rstudio_console_starting <- function(data) { + res <- caps$rstudio_console(data) + res$type <- "rstudio_console_starting" + res + } + + caps$rstudio_terminal <- function(data) { + list( + type = "rstudio_terminal", + dynamic_tty = TRUE, + ansi_tty = TRUE, + ansi_color = data$envs[["RSTUDIO_CONSOLE_COLOR"]] != "", + num_colors = as.integer(data$envs[["RSTUDIO_CONSOLE_COLOR"]]) + ) + } + + caps$rstudio_build_pane <- function(data) { + list( + type = "rstudio_build_pane", + dynamic_tty = TRUE, + ansi_tty = FALSE, + ansi_color = data$envs[["RSTUDIO_CONSOLE_COLOR"]] != "", + num_colors = as.integer(data$envs[["RSTUDIO_CONSOLE_COLOR"]]) + ) + } + 
+ caps$rstudio_subprocess <- function(data) { + list( + type = "rstudio_subprocess", + dynamic_tty = FALSE, + ansi_tty = FALSE, + ansi_color = FALSE, + num_colors = 1L + ) + } + + get_caps <- function(data, type = data$type) { + ret <- caps[[type]](data) + ret$version <- data$ver + ret + } + + structure( + list( + .internal = standalone_env, + is_rstudio = is_rstudio, + detect = detect + ), + class = c("standalone_rstudio_detect", "standalone") + ) +}) diff --git a/R/aa-assertthat.R b/R/aa-assertthat.R new file mode 100644 index 0000000..2a7679f --- /dev/null +++ b/R/aa-assertthat.R @@ -0,0 +1,91 @@ + +assert_that <- function(..., env = parent.frame(), msg = NULL) { + res <- see_if(..., env = env, msg = msg) + if (res) return(TRUE) + + throw(new_assert_error(attr(res, "msg"))) +} + +new_assert_error <- function (message, call = NULL) { + cond <- new_error(message, call. = call) + class(cond) <- c("assert_error", class(cond)) + cond +} + +see_if <- function(..., env = parent.frame(), msg = NULL) { + asserts <- eval(substitute(alist(...))) + + for (assertion in asserts) { + res <- tryCatch({ + eval(assertion, env) + }, error = function(e) { + structure(FALSE, msg = e$message) + }) + check_result(res) + + # Failed, so figure out message to produce + if (!res) { + if (is.null(msg)) + msg <- get_message(res, assertion, env) + return(structure(FALSE, msg = msg)) + } + } + + res +} + +check_result <- function(x) { + if (!is.logical(x)) + throw(new_assert_error("assert_that: assertion must return a logical value")) + if (any(is.na(x))) + throw(new_assert_error("assert_that: missing values present in assertion")) + if (length(x) != 1) { + throw(new_assert_error("assert_that: length of assertion is not 1")) + } + + TRUE +} + +get_message <- function(res, call, env = parent.frame()) { + stopifnot(is.call(call), length(call) >= 1) + + if (has_attr(res, "msg")) { + return(attr(res, "msg")) + } + + f <- eval(call[[1]], env) + if (!is.primitive(f)) call <- match.call(f, call) + 
fname <- deparse(call[[1]]) + + fail <- on_failure(f) %||% base_fs[[fname]] %||% fail_default + fail(call, env) +} + +# The default failure message works in the same way as stopifnot, so you can +# continue to use any function that returns a logical value: you just won't +# get a friendly error message. +# The code below says you get the first 60 characters plus a ... +fail_default <- function(call, env) { + call_string <- deparse(call, width.cutoff = 60L) + if (length(call_string) > 1L) { + call_string <- paste0(call_string[1L], "...") + } + + paste0(call_string, " is not TRUE") +} + +on_failure <- function(x) attr(x, "fail") + +"on_failure<-" <- function(x, value) { + stopifnot(is.function(x), identical(names(formals(value)), c("call", "env"))) + attr(x, "fail") <- value + x +} + +has_attr <- function(x, which) !is.null(attr(x, which, exact = TRUE)) +on_failure(has_attr) <- function(call, env) { + paste0(deparse(call$x), " does not have attribute ", eval(call$which, env)) +} +"%has_attr%" <- has_attr + +base_fs <- new.env(parent = emptyenv()) diff --git a/R/aaa-async.R b/R/aaa-async.R new file mode 100644 index 0000000..29cd6bc --- /dev/null +++ b/R/aaa-async.R @@ -0,0 +1,4800 @@ + +#' Create an async function +#' +#' Create an async function, that returns a deferred value, from a +#' regular function. If `fun` is already an async function, then it does +#' nothing, just returns it. +#' +#' The result function will have the same arguments, with the same default +#' values, and the same environment as the original input function. +#' +#' @param fun Original function. +#' @return Async version of the original function. 
+#' +#' @noRd +#' @examples +#' f <- function(x) 42 +#' af <- async(f) +#' is_async(f) +#' is_async(af) +#' f() +#' synchronise(dx <- af()) +#' dx + +async <- function(fun) { + fun <- as.function(fun) + if (is_async(fun)) return(fun) + + async_fun <- fun + body(async_fun) <- bquote({ + mget(ls(environment(), all.names = TRUE), environment()) + fun2 <- function() { + evalq( + .(body(fun)), + envir = parent.env(environment()) + ) + } + + deferred$new( + type = "async", + action = function(resolve) resolve(fun2()) + ) + }) + + # This is needed, otherwise async_fun might not find 'deferred' + async_env <- new.env(parent = environment(async_fun)) + async_env$deferred <- deferred + environment(async_fun) <- async_env + + mark_as_async(async_fun) +} + +mark_as_async <- function(fun) { + attr(body(fun), "async")$async <- TRUE + + ## These are not valid any more, anyway + attr(fun, "srcref") <- NULL + attr(body(fun), "srcref") <- NULL + + fun +} + +#' Checks if a function is async +#' +#' If `fun` is not a function, an error is thrown. +#' +#' Currently, it checks for the `async` attribute, which is set by +#' [async()]. +#' +#' @param fun Function. +#' @return Logical scalar, whether `fun` is async. 
+#' +#' @noRd +#' @examples +#' f <- function(x) 42 +#' af <- async(f) +#' is_async(f) +#' is_async(af) +#' f() +#' synchronise(dx <- af()) +#' dx + +is_async <- function(fun) { + assert_that(is.function(fun)) + is.list(a <- attr(body(fun), "async")) && identical(a$async, TRUE) +} + +is_string <- function(x) { + is.character(x) && length(x) == 1 && !is.na(x) +} + +on_failure(is_string) <- function(call, env) { + paste0(deparse(call$x), " is not a string (length 1 character)") +} + +is_flag <- function(x) { + is.logical(x) && length(x) == 1 && !is.na(x) +} + +on_failure(is_flag) <- function(call, env) { + paste0(deparse(call$x), " is not a flag (length 1 logical)") +} + +is_action_function <- function(x) { + is.function(x) && length(formals(x)) %in% 1:2 +} + +on_failure(is_action_function) <- function(call, env) { + paste0(deparse(call$x), " is not a function with two arguments") +} + +is_time_interval <- function(x) { + inherits(x, "difftime") || + (is.numeric(x) && length(x) == 1 && !is.na(x) && x >= 0) +} + +on_failure(is_time_interval) <- function(call, env) { + paste0(deparse(call$x), " is not a valid time interval") +} + +is_count <- function(x) { + is.numeric(x) && length(x) == 1 && !is.na(x) && as.integer(x) == x +} + +on_failure(is_count) <- function(call, env) { + paste0(deparse(call$x), " is not a count (non-negative integer)") +} + +is_flag <- function(x) { + is.logical(x) && length(x) == 1 && !is.na(x) +} + +on_failure(is_flag) <- function(call, env) { + paste0(deparse(call$x), " must be a flag (length 1 logical)") +} + +#' Retry an asynchronous function with exponential backoff +#' +#' Keeps trying until the function's deferred value resolves without +#' error, or `times` tries have been performed, or `time_limit` seconds +#' have passed since the start of the first try. +#' +#' Note that all unnamed arguments are passed to `task`. +#' +#' @param task An asynchronous function. +#' @param ... Arguments to pass to `task`. 
+#' @param .args More arguments to pass to `task`. +#' @param times Maximum number of tries. +#' @param time_limit Maximum number of seconds to try. +#' @param custom_backoff If not `NULL` then a callback function to +#' calculate waiting time, after the `i`th try. `i` is passed as an +#' argument. If `NULL`, then the default is used, which is a uniform +#' random number of seconds between 1 and 2^i. +#' @param on_progress Callback function for a progress bar. Retries are +#' announced here, if not `NULL`. `on_progress` is called with two +#' arguments. The first is a named list with entries: +#' * `event`: string that is either `"retry"` or `"givenup"`, +#' * `tries`: number of tries so far, +#' * `spent`: number of seconds spent trying so far, +#' * `error`: the error object for the last failure, +#' * `retry_in`: number of seconds before the next try. +#' The second argument is `progress_data`. +#' @param progress_data `async_backoff()` will pass this object to +#' `on_progress` as the second argument. +#' @return Deferred value for the operation with retries.
+#' +#' @family async control flow +#' @noRd +#' @examples +#' \donttest{ +#' afun <- function() { +#' wait_100_ms <- function(i) 0.1 +#' async_backoff( +#' function() if (runif(1) < 0.8) stop("nope") else "yes!", +#' times = 5, +#' custom_backoff = wait_100_ms +#' ) +#' } +#' +#' # There is a slight chance that it fails +#' tryCatch(synchronise(afun()), error = function(e) e) +#' } + +async_backoff <- function(task, ..., .args = list(), times = Inf, + time_limit = Inf, custom_backoff = NULL, + on_progress = NULL, progress_data = NULL) { + + task <- async(task) + args <- c(list(...), .args) + times <- times + time_limit <- time_limit + custom_backoff <- custom_backoff %||% default_backoff + on_progress <- on_progress + progress_data <- progress_data + + did <- 0L + started <- NULL + limit <- NULL + + self <- deferred$new( + type = "backoff", call = sys.call(), + action = function(resolve) { + started <<- Sys.time() + limit <<- started + time_limit + do.call(task, args)$then(self) + }, + parent_reject = function(value, resolve) { + did <<- did + 1L + now <- Sys.time() + if (did < times && now < limit) { + wait <- custom_backoff(did) + if (!is.null(on_progress)) { + on_progress(list( + event = "retry", + tries = did, + spent = now - started, + error = value, + retry_in = wait + ), progress_data) + } + delay(wait)$ + then(function() do.call(task, args))$ + then(self) + } else { + if (!is.null(on_progress)) { + on_progress(list( + event = "givenup", + tries = did, + spent = now - started, + error = value, + retry_in = NA_real_ + ), progress_data) + } + stop(value) + } + } + ) +} + +async_backoff <- mark_as_async(async_backoff) + +default_backoff <- function(i) { + as.integer(stats::runif(1, min = 1, max = 2^i) * 1000) / 1000 +} + +#' Asynchronous function call, in a worker pool +#' +#' The function will be called on another process, very much like +#' [callr::r()]. +#' +#' @param func Function to call. See also the notes at [callr::r()]. 
+#' @param args Arguments to pass to the function. They will be copied +#' to the worker process. +#' @return Deferred object. +#' +#' @noRd + +call_function <- function(func, args = list()) { + func; args + + id <- NULL + + deferred$new( + type = "pool-task", call = sys.call(), + action = function(resolve) { + resolve + reject <- environment(resolve)$private$reject + id <<- get_default_event_loop()$add_pool_task( + function(err, res) if (is.null(err)) resolve(res) else reject(err), + list(func = func, args = args)) + }, + on_cancel = function(reason) { + if (!is.null(id)) { + get_default_event_loop()$cancel(id) + } + } + ) +} + +call_function <- mark_as_async(call_function) + +#' Make a minimal deferred that resolves to the specified value +#' +#' This is sometimes useful to start a deferred chain. +#' +#' Note that the evaluation of `value` is forced when the deferred value +#' is created. +#' +#' @param value The value to resolve to. +#' @return A deferred value. +#' +#' @noRd +#' @examples +#' afun <- async(function() { +#' async_constant(1/100)$ +#' then(function(x) delay(x))$ +#' then(function(x) print(x)) +#' }) +#' synchronise(afun()) + +async_constant <- function(value = NULL) { + force(value) + deferred$new( + type = "constant", call = sys.call(), + function(resolve) resolve(value)) +} + +async_constant <- mark_as_async(async_constant) + +async_env <- new.env(parent = emptyenv()) +async_env$loops <- list() + +get_default_event_loop <- function() { + num_loops <- length(async_env$loops) + if (num_loops == 0) { + err <- make_error( + "You can only call async functions from an async context", + class = "async_synchronization_barrier_error" + ) + stop(err) + } + + async_env$loops[[num_loops]] +} + +push_event_loop <- function() { + num_loops <- length(async_env$loops) + if (num_loops > 0) async_env$loops[[num_loops]]$suspend() + new_el <- event_loop$new() + async_env$loops <- c(async_env$loops, list(new_el)) + new_el +} + +pop_event_loop <- function() { + 
num_loops <- length(async_env$loops)
+ async_env$loops[[num_loops]] <- NULL
+ if (num_loops > 1) async_env$loops[[num_loops - 1]]$wakeup()
+}
+
+#' Async debugging utilities
+#'
+#' Helper functions for the non-trivial task of debugging async code.
+#'
+#' Async debugging can be turned on by setting the `async_debug` global
+#' option to `TRUE`:
+#' ```
+#' options(async_debug = TRUE)
+#' ```
+#' Setting this value to `FALSE` will turn off debugging.
+#'
+#' If debugging is on, a [synchronise()] call will stop at the beginning
+#' of the event loop. No deferred actions or other callbacks have run at
+#' this point. [synchronise()] stops by calling [base::browser()]. All the
+#' usual [browser()] commands (see its manual) can be used here, plus some
+#' extra commands to help async debugging. The extra commands:
+#'
+#' `async_debug_shortcuts()` adds handy shortcuts to most of the helper
+#' functions. E.g. `async_next()` can be invoked as `.an` (without the
+#' parens). You only need to run it once per R session. Note that it adds
+#' the shortcuts to the global environment.
+#'
+#' `async_debug_remove_shortcuts()` removes the shortcuts from the global
+#' environment.
+#'
+#' `.an` (or `async_next()`) runs the next iteration of the event loop.
+#' Note that it does not return until _something_ happens in the event loop:
+#' an action or a parent callback is executed, or HTTP or other I/O is
+#' performed. Also note that a single iteration of the event loop typically
+#' runs multiple action, parent or other callbacks. Once the iteration is
+#' done, control is returned to the browser.
+#'
+#' `.as` (or `async_step()`) is similar to `.an`, but it also starts the
+#' debugging of the action or parent callbacks. I.e. another [browser()] is
+#' called at the beginning of _all_ callbacks in the next iteration of the
+#' event loop.
+#'
+#' `.asb` (or `async_step_back()`) stops the debugging of the callbacks.
+#' It does not actually execute anything from the event loop, so to go
+#' back to the main async browser, you also need to execute `c` (continue).
+#'
+#' `.al` (or `async_list()`) lists all deferred values in the current async
+#' phase. (Only the ones that already exist; some may be created in the
+#' future.) It returns a data frame with columns:
+#'
+#' * `id`: The integer id of the deferred value.
+#' * `parents`: Integer vector, the parents of the deferred value.
+#' * `label`: A character label that is used by `async_tree()` to nicely
+#' format information about a deferred value.
+#' * `call`: The call (language object) that created the deferred value.
+#' * `children`: The list of children, an integer vector. A deferred value
+#' can only have one child, unless it is shared.
+#' * `type`: The type of the deferred value. This is an arbitrary label,
+#' specified when the deferred value was created.
+#' * `running`: Whether the deferred value is already running.
+#' * `state`: The state of the deferred value, `"pending"`, `"fulfilled"` or
+#' `"rejected"`. This is typically pending, since resolved deferred
+#' values are removed from the async DAG (in the next event loop
+#' iteration).
+#' * `cancelled`: Whether the deferred value was cancelled.
+#' * `shared`: Whether the deferred value is shared.
+#' * `filename`: The file name for the source code that created the
+#' deferred value. Only present if this code was parsed with source
+#' references enabled.
+#' * `position`: The start file position, in line:column format, as a
+#' string. Only present if this code was parsed with source references
+#' enabled.
+#'
+#' `.at` (or `async_tree()`) prints the DAG of the deferred values.
+#'
+#' `async_debug()` can be used to debug the action and/or parent callbacks
+#' of the specified deferred value.
+#'
+#' `async_wait_for()` runs the event loop until the specified deferred
+#' value is resolved (i.e. fulfilled or rejected).
+#'
+#' `.aw` (or `async_where()`) prints a call stack and marks the frame that
+#' corresponds to an action or parent callback.
+#'
+#' @param el Event loop, defaults to the current event loop.
+#' @param def Deferred value that is used at the root of the DAG. Defaults
+#' to the deferred value corresponding to the result of the async phase.
+#' @param id Integer scalar, the id of the deferred to debug or to wait for.
+#' @param action Whether to debug the action callback.
+#' @param parent Whether to debug the parent callbacks.
+#' @param calls The calls to print, result of `sys.calls()`. Defaults to
+#' the current call stack.
+#' @param parents The parent frames in the call stack, result of
+#' `sys.parents()`. Defaults to the current parents.
+#' @param frm The async frame to mark. Defaults to the most recent async
+#' frame in the stack.
+#'
+#' @name async_debug
+#' @noRd
+NULL
+
+#' @noRd
+#' @aliases .an
+#' @rdname async_debug
+
+async_next <- function(el = NULL) {
+ el <- el %||% find_sync_frame()$new_el
+ if (is.null(el)) stop("No async context")
+ ## TODO: some visual indication that something has happened?
+ if (! el$run("once")) message("[ASYNC] async phase complete")
+}
+
+# nocov start
+
+#' @noRd
+#' @aliases .as
+#' @rdname async_debug
+
+async_step <- function() {
+ el <- find_sync_frame()$new_el
+ if (is.null(el)) stop("No async context")
+ ## TODO: some visual indication that something has happened?
+ old <- options(async_debug_steps = TRUE)
+ on.exit(options(old))
+ if (! 
el$run("once")) { + message("[ASYNC] async phase complete") + } +} + +#' @noRd +#' @aliases .asb +#' @rdname async_debug + +async_step_back <- function() { + options(async_debug_steps = FALSE) + message("[ASYNC] step back, you still need to 'c'ontinue") +} + +# nocov end + +#' @noRd +#' @aliases .al +#' @rdname async_debug + +async_list <- function(def = NULL) { + def <- def %||% find_sync_frame()$res + if (is.null(def)) stop("No async context") + info <- list() + find_parents <- function(def) { + info <<- c(info, list(get_private(def)$get_info())) + prn <- get_private(def)$parents + lapply(prn, find_parents) + } + find_parents(def) + + do.call(rbind, info) +} + +#' @noRd +#' @aliases .at +#' @rdname async_debug + +async_tree <- function(def = NULL) { + def <- def %||% find_sync_frame()$res + data <- async_list(def) + root <- as.character(get_private(def)$id) + cli::tree(data, root = root) +} + +#' @noRd +#' @rdname async_debug + +async_debug <- function(id, action = TRUE, parent = TRUE) { + def <- find_deferred(id) + if (is.null(def)) stop("Cannot find deferred `", id, "`") + prv <- get_private(def) + + if (prv$state != "pending") { + message("[ASYNC] ", id, " already resolved") + return(invisible()) + } + + what <- character() + if (action) { + if (prv$running) { + message("[ASYNC] ", id, " action already running") + } else if (is.null(prv$action)) { + message("[ASYNC] ", id, " has no action") + } else { + ## TODO: make a copy? Or should the deferred make a copy? + debug1(prv$action) + what <- "action" + } + } + + if (parent) { + ## TODO: make copies? 
+ debug_all(prv$parent_resolve) + debug_all(prv$parent_reject) + what <- c(what, "parent callbacks") + } + + if (length(what) == 1) { + message("[ASYNC] ", id, " debugging ", what) + } + if (length(what) == 2) { + message("[ASYNC] ", id, " debugging ", what[1], " and ", what[2]) + } + + invisible(def) +} + +#' @noRd +#' @rdname async_debug + +async_wait_for <- function(id) { + el <- find_sync_frame()$new_el + if (is.null(el)) stop("No async context") + def <- find_deferred(id) + if (is.null(def)) stop("Cannot find deferred `", id, "`") + priv <- get_private(def) + while (priv$state == "pending") el$run("once") + message("[ASYNC] ", id, " resolved") +} + +#' @noRd +#' @aliases .aw +#' @rdname async_debug + +async_where <- function(calls = sys.calls(), parents = sys.parents(), + frm = get_async_frames()) { + afrm <- viapply(frm, "[[", "frame") + num <- seq_along(calls) + + src <- lapply(calls, get_source_position) + + res <- data.frame( + stringsAsFactors = FALSE, + call = I(calls), + parent = parents, + filename = vcapply(src, "[[", "filename"), + position = vcapply(src, "[[", "position"), + async = num %in% afrm + ) + + res$def_id <- NA_integer_ + res$def_id[afrm] <- viapply(frm, function(x) x$deferred) + res$def_cb_type <- NA_character_ + res$def_cb_type[afrm] <- vcapply(frm, function(x) x$type) + res$def_call <- I(list(NULL)) + res$def_call[afrm] <- lapply(frm, "[[", "call") + + def_src <- lapply(res$def_call[afrm], get_source_position) + res$def_filename <- NA_character_ + res$def_filename[afrm] <- vcapply(def_src, "[[", "filename") + res$def_position <- NA_character_ + res$def_position[afrm] <- vcapply(def_src, "[[", "position") + + class(res) <- c("async_where", class(res)) + res +} + +# nocov start + +#' @noRd + +print.async_where <- function(x, ...) { + cat(format(x, ...)) + invisible(x) +} + +# nocov end + +#' @noRd + +format.async_where <- function(x, ...) 
{ + paste0(paste( + formatC(seq_len(nrow(x)), width = 3), + vcapply(x$call, expr_name), + paste0(" ", x$filename, ":", x$position), + ifelse (! x$async, "", + paste0("\n ", x$def_id, " ", x$def_cb_type, " ", + x$def_call, " ", x$def_filename, ":", x$def_position)), + collapse = "\n" + ), "\n") +} + +get_async_frames <- function() { + drop_nulls(lapply(seq_along(sys.frames()), function(i) { + if (! is.null(data <- sys.frame(i)$`__async_data__`)) { + list(frame = i + data$skip %||% 1L, deferred = data[[1]], type = data[[2]], + call = get_private(data[[3]])$mycall) + } + })) +} + +find_sync_frame <- function() { + for (i in seq_along(sys.frames())) { + cand <- sys.frame(-i) + if (isTRUE(cand$`__async_synchronise_frame__`)) return(cand) + } +} + +find_async_data_frame <- function() { + frames <- sys.frames() + for (i in seq_along(frames)) { + cand <- sys.frame(-i) + if (!is.null(data <- cand$`__async_data__`)) { + return(list(frame = length(frames) - i + 1L, data = data)) + } + } +} + +find_deferred <- function(id, def = NULL) { + def <- def %||% find_sync_frame()$res + if (is.null(def)) stop("No async context") + search_parents <- function(def) { + if (get_private(def)$id == id) return(def) + prn <- get_private(def)$parents + for (p in lapply(prn, search_parents)) { + if (!is.null(p)) return(p) + } + } + search_parents(def) +} + +# nocov start + +debug1 <- function(fun) { + debugonce(fun) +} + +#' @noRd +#' @rdname async_debug + +async_debug_shortcuts <- function() { + as <- function(name, fun) { + makeActiveBinding(name, fun, .GlobalEnv) + } + as(".an", async_next) + as(".as", async_step) + as(".asb", async_step_back) + as(".al", async_list) + as(".at", async_tree) + as(".aw", async_where) +} + +#' @noRd +#' @rdname async_debug + +async_debug_remove_shortcuts <- function() { + tryCatch( + rm(list = c(".an", ".as", ".asb", ".al", ".at", ".aw"), + envir = .GlobalEnv), + error = function(x) x) +} + +# nocov end + +debug_all <- function(fun) { + debug(fun) +} + +#' 
Deferred value
+#'
+#' @section Usage:
+#' ```
+#' dx <- deferred$new(action = NULL, on_progress = NULL, on_cancel = NULL,
+#' parents = NULL, parent_resolve = NULL, parent_reject = NULL,
+#' type = NULL)
+#' dx$then(on_fulfilled)
+#' dx$catch(...)
+#' dx$finally(on_finally)
+#' dx$cancel(reason = "Cancelled")
+#' dx$share()
+#' ```
+#'
+#' @section Arguments:
+#' * `action`: Function to call when the deferred value starts running.
+#' It needs to have at least one argument, `resolve`; a second `progress`
+#' argument is optional. See details below.
+#' * `on_progress`: A function to call to report progress. See details
+#' below.
+#' * `on_cancel`: A function to call when the deferred is cancelled. See
+#' details below.
+#' * `parents`: A list of deferred values that will be the parents of the
+#' deferred value being created. If some of them are already owned,
+#' an error is thrown.
+#' * `parent_resolve`: A function to call when a parent is resolved.
+#' See details below.
+#' * `parent_reject`: A function to call when a parent throws an error.
+#' See details below.
+#' * `type`: A label that can be used to indicate the type of the deferred
+#' value to create. This might be useful for debugging, but otherwise
+#' it is not used.
+#' * `on_fulfilled`: Function to call when the parent deferred is resolved.
+#' Essentially this is the `parent_resolve` function of the `then()`
+#' deferred.
+#' * `...`: Error handlers, as in `tryCatch()`, see details below.
+#' * `on_finally`: Function to call after the deferred value is resolved
+#' or after it has thrown an error. It will be called without arguments.
+#' * `reason`: Error message or error object that will be used to cancel the
+#' deferred.
+#'
+#' @section Deferred values:
+#'
+#' Asynchronous computation is represented by deferred values.
+#' A deferred value is an [R6](https://github.com/wch/R6) object.
+#'
+#' ```
+#' deferred$new(action = NULL, on_progress = NULL, on_cancel = NULL,
+#' parents = NULL, parent_resolve = NULL, parent_reject = NULL,
+#' type = NULL)
+#' ```
+#'
+#' Creates a new deferred value. `action` is a function that is called
+#' once the deferred value is _started_ (i.e. _not_ when `dx` is created).
+#' It must have one or two arguments: `resolve`, or `resolve` and `progress`.
+#' It should call `resolve` when it is done, with the final value of the
+#' deferred as the argument. (See examples below.) If it has two arguments,
+#' then the second one is a callback function for creating progress bars.
+#' The deferred value may report its progress through this function.
+#' See details in the _Progress bars_ section below.
+#'
+#' `action` is called when the evaluation of the deferred value is started.
+#' Only deferred values that are needed to calculate the value of the
+#' async phase are evaluated. (See also _Lazy Evaluation_ below.)
+#'
+#' Note that `action` is optional; for some deferred values, no action is
+#' taken when they are started. (These typically depend on their parent
+#' nodes.)
+#'
+#' `on_cancel` is a function that is called without arguments when a
+#' deferred value is cancelled. This includes explicit cancellation by
+#' calling its `$cancel()` method, or auto-cancellation (see below).
+#'
+#' `parents` is a list of deferred values that need to be computed before
+#' the current deferred value. When a parent deferred is resolved, the
+#' `parent_resolve` function is called. When a parent deferred throws an
+#' error, the `parent_reject` function is called.
+#'
+#' `parent_resolve` is a function with (up to) two arguments:
+#' `value` and `resolve`. It will be called with the value of the
+#' parent and the `resolve` callback of the deferred.
+#' `parent_resolve` can resolve the deferred by calling the supplied `resolve`
+#' callback, or it can keep waiting on other parents and/or external
+#' computation. It may throw an error to fail the deferred.
+#'
+#' `parent_resolve` allows some shorthands as well:
+#' * `NULL`: the deferred is resolved with the value of the parent.
+#' * A function with no arguments: this function is called, and the deferred
+#' resolves to its return value.
+#' * A function with one argument: this function is called with the value
+#' of the parent as the argument, and the deferred is resolved to its
+#' return value.
+#' * A function with arguments `value` and `resolve`. This function is
+#' called with the value of the parent, and the resolve callback of the
+#' deferred.
+#'
+#' `parent_reject` is a function with (up to) two arguments:
+#' `value`, `resolve`. It will be called with the error object
+#' thrown by the parent.
+#'
+#' `parent_reject` can resolve the deferred by calling the supplied
+#' `resolve` callback, or it can keep waiting on other parents and/or
+#' external computation. It may throw an error to fail the deferred. It may
+#' also re-throw the error received from the parent, if it does not wish
+#' to handle it.
+#'
+#' `parent_reject` accepts some shorthands as well:
+#' * `NULL`: the deferred throws the same error as the parent.
+#' * A function with no arguments: this function is called, and the deferred
+#' resolves to its return value.
+#' * A function with one argument: this function is called with the error
+#' from the parent as the argument, and the deferred is resolved to its
+#' return value.
+#' * A function with arguments `value` and `resolve`. This function is
+#' called with the error from the parent, and the resolve callback of the
+#' deferred.
+#' * A list of named error handlers, corresponding to the error handlers
+#' of `$catch()` (and `tryCatch()`). If these error handlers handle the
+#' parent's error, the deferred is resolved with the result of the
+#' handlers. Otherwise the deferred will be failed with the parent's
+#' error. The error handlers may also throw a new error.
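+#'
+#' As a sketch of how the shorthands might be used when constructing a
+#' deferred value by hand, inside an async phase (the `delay()` parent
+#' here is just an example):
+#'
+#' ```
+#' # parent_resolve as a one-argument function: the new deferred
+#' # resolves to the transformed value of its parent
+#' deferred$new(
+#'   parents = list(delay(0.1)),
+#'   parent_resolve = function(value) paste("parent value:", value)
+#' )
+#' ```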
+#'
+#' @section Error handling:
+#'
+#' The action function of the deferred, and also the `parent_resolve` and
+#' `parent_reject` handlers may throw errors if the deferred cannot be
+#' computed. Errors can be handled with the `$catch()` member function:
+#'
+#' ```
+#' dx$catch(...)
+#' ```
+#'
+#' It takes the same named error handler arguments as `tryCatch()`.
+#'
+#' Technically, `$catch()` creates a new deferred value, and this new
+#' deferred value is resolved to the result of the error handlers. If the
+#' handlers do not handle the error, then the new deferred will fail
+#' with the same error.
+#'
+#' The `$finally()` method can be used to create finalizer code that
+#' runs when a deferred is resolved or when it fails. It can be used to
+#' close database connections or other resources:
+#'
+#' ```
+#' dx$finally(on_finally)
+#' ```
+#'
+#' Technically, `$finally()` creates a new deferred, which will resolve
+#' or fail the same way as the original one, but before doing that it will
+#' call the `on_finally` function with no arguments.
+#'
+#' @section Builtin async functions:
+#'
+#' The async package comes with some basic async functions:
+#' * [delay()] sets a timer and then resolves to `TRUE`.
+#' * [async_constant()] resolves successfully to its argument.
+#' * [http_get()] and [http_head()] make HTTP GET and HEAD requests.
+#'
+#' @section Combining async values:
+#'
+#' Async computation (just like ordinary sync computation) usually
+#' consists of several steps that need to be performed in the specified
+#' order. The `$then()` method specifies that a step of computation needs
+#' to be performed after the deferred value is known:
+#'
+#' ```
+#' dx$then(on_fulfilled)
+#' ```
+#'
+#' `on_fulfilled` is a function with zero or one formal arguments.
+#' It will be called once the result of the deferred is known, with its
+#' result. (The result is omitted if the callback has no arguments.)
+#'
+#' `$then()` creates another deferred value, that will resolve to the
+#' result of the `on_fulfilled` callback. Should this callback return
+#' a deferred value, then the `$then()` deferred value will be a child
+#' of this newly created deferred, and will only resolve after that.
+#'
+#' See also [when_all()], [when_some()] and [when_any()], which can combine
+#' multiple deferred values into one.
+#'
+#' You cannot call `$then()` (or [when_any()], [when_all()], etc.) on the
+#' same deferred value multiple times, unless it is a shared deferred
+#' value. See _Ownership_ below.
+#'
+#' The [async_reflect()], [async_retry()], [async_sequence()],
+#' [async_timeout()], [async_until()] and [async_whilst()] functions are
+#' helpers for more complex async control flow.
+#'
+#' @section Ownership:
+#'
+#' async follows a strong ownership model. Each deferred value must be
+#' owned by exactly one other deferred value (unless they are shared, see
+#' below).
+#'
+#' After a `dx2 <- dx$then()` call, the `dx` deferred is _owned_ by the
+#' newly created deferred value. (The same applies to [when_any()], etc.)
+#' This means that it is not possible to call `$then()` on the same
+#' deferred value multiple times. The deferred value that is synchronized
+#' by calling [synchronise()] on it, is owned by [synchronise()], see
+#' _Synchronization_ below.
+#'
+#' The deferred values of an async phase form a directed graph, which we
+#' call the async DAG (directed acyclic graph). Usually (when no deferred
+#' is shared, see below), this DAG is a rooted tree; the root of the tree
+#' is the synchronised deferred, the final result of the async phase.
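+#'
+#' For example, this sketch would fail without the `$share()` call,
+#' because `dx` would get two owners:
+#'
+#' ```
+#' afun <- function() {
+#'   dx <- delay(0.1)$share()
+#'   when_all(
+#'     dx$then(function(x) "first"),
+#'     dx$then(function(x) "second")
+#'   )
+#' }
+#' synchronise(afun())
+#' ```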
+#'
+#' @section Shared Deferred Values:
+#'
+#' In the rare cases when the strong ownership model is too restrictive,
+#' a deferred value can be marked as _shared_:
+#'
+#' ```
+#' dx$share()
+#' ```
+#'
+#' This has the following implications:
+#' * A shared deferred value can have multiple children (owners) in the
+#' async DAG.
+#' * A shared deferred value is started after its first child is started.
+#' * A shared deferred value is not auto-cancelled when all of its children
+#' are finished. (Because it might have more children in the future.)
+#' * A shared deferred value is still auto-cancelled at the end of the
+#' event loop.
+#'
+#' Use shared deferred values sparingly, only when they are really needed,
+#' as they forbid auto-cancellation, so deferred values will hold on to
+#' resources longer, until the async phase is finished.
+#'
+#' @section Synchronization:
+#'
+#' async allows embedding asynchronous computation in synchronous code.
+#' The execution of such a program has a sync phase and async phases. When the
+#' program starts, it is in the sync phase. In the sync phase you cannot
+#' create deferred values. (But you can still define (async) functions that
+#' will create deferred values when called.)
+#'
+#' To enter into an async phase, call [synchronise()] on an expression that
+#' evaluates to a deferred value. The async phase will last until this
+#' deferred value is computed or an error is thrown (and the error reaches
+#' [synchronise()]).
+#'
+#' [synchronise()] creates an event loop, which manages the computation of
+#' the deferred values in this particular async phase.
+#'
+#' Async phases can be embedded into each other. I.e. a program may call
+#' [synchronise()] while in the async phase. The outer async phase's event
+#' loop then stops until the inner async phase terminates. Deferred values
+#' cannot be passed through a `synchronise()` barrier to another (sync or
+#' async) phase. Should this happen, an error is reported on the first
+#' operation on the leaked deferred value.
+#'
+#' In a typical application, a function is implemented asynchronously, and
+#' then used synchronously by the interactive user, or another piece of
+#' synchronous code, via [synchronise()] calls. The following example makes
+#' three HTTP requests in parallel:
+#'
+#' ```
+#' http_status3 <- function() {
+#' http_status <- function(url) {
+#' http_get(url)$then(function(response) response$status_code)
+#' }
+#' r1 <- http_status("https://eu.httpbin.org/status/403")
+#' r2 <- http_status("https://eu.httpbin.org/status/404")
+#' r3 <- http_status("https://eu.httpbin.org/status/200")
+#' when_all(r1, r2, r3)
+#' }
+#' synchronise(http_status3())
+#' ```
+#'
+#' This async function can also be used asynchronously, as a parent of
+#' another deferred value, in an async phase.
+#'
+#' @section Lazy evaluation:
+#'
+#' async does not evaluate deferred values that are not part of the async
+#' DAG of the async phase. These are clearly not needed to compute the
+#' result of the async phase, so it would be a waste of resources working on
+#' them. (It is also unclear how their errors should be handled.)
+#'
+#' In the following example, `d1` and `d2` are created, but they are not
+#' part of the async DAG, so they are never evaluated.
+#'
+#' ```
+#' do <- function() {
+#' d1 <- delay(1/100)$then(function() print("d1"))
+#' d2 <- d1$then(function() print("d2"))
+#' d3 <- delay(1/100)$then(function() print("d3"))
+#' d4 <- d3$then(function() print("d4"))
+#' d4
+#' }
+#' invisible(synchronise(do()))
+#' ```
+#'
+#' @section Cancellation:
+#'
+#' The computation of a deferred can be cancelled when it is not needed
+#' any more:
+#'
+#' ```
+#' dx$cancel(reason = "Cancelled")
+#' ```
+#'
+#' This will _fail_ the children of the deferred, unless they have been
+#' completed already. It will also auto-cancel the parent DAG of the
+#' deferred, unless they are shared deferreds, see the next section.
+#'
+#' @section Auto-cancellation:
+#'
+#' In an async phase, it might happen that parts of the async DAG are not
+#' needed for the final result any more. E.g. if a parent of a `when_all()`
+#' node throws an error, then the other parents don't have to be computed.
+#' In this case the event loop of the phase automatically cancels these
+#' deferred values. Similarly, if a single parent of a [when_any()] node is
+#' resolved, the other parents can be cancelled.
+#'
+#' In general, if a node of the async DAG is resolved, the whole directed
+#' DAG rooted at that node can be cancelled (except for nodes that were
+#' already resolved and nodes that have already failed).
+#'
+#' Auto-cancellation is very convenient, as you can be sure that resources
+#' are freed as soon as they are not needed. Some practical examples:
+#'
+#' * Making HTTP requests to many mirror web sites, to check their response
+#' time. As soon as the first reply is in, the rest of the HTTP requests
+#' are cancelled.
+#' * In multi-process computation, as soon as one process fails, the rest are
+#' automatically cancelled. (Unless the failure is handled, of course.)
+#'
+#' async also has another type of cancellation, when [synchronise()] is
+#' interrupted externally, either by the user or some system error. In this
+#' case all processes and resources that were created in the event loop
+#' are cancelled and freed.
+#'
+#' Shared deferred values (see `$share()`) are not auto-cancelled when their
+#' children are resolved or errored, but they are always cancelled at the
+#' end of the async phase.
+#'
+#' @section Progress bars:
+#'
+#' A deferred value may report on its progress, if its action has a progress
+#' callback. The progress callback is called with a list that describes
+#' an event. We suggest that it always has an `event` entry, which is a
+#' simple string. The rest of the list entries can be defined as needed,
+#' but typically there will be a counter counting ticks, or a ratio
+#' describing what part of the computation is already done. See [http_get()]
+#' for an async function that reports progress.
+#'
+#' @section Collections helper functions:
+#'
+#' async provides some utilities that make it easier to deal with
+#' collections of deferred values:
+#'
+#' The current iterators:
+#' * [async_map()] applies an async function to all elements of a vector or
+#' list (collection).
+#' * [async_detect()] finds an element of a collection that passes an async
+#' truth test.
+#' * [async_every()] checks if every element of a collection satisfies an
+#' async predicate. [async_some()] checks if any element does that.
+#' * [async_filter()] keeps elements that pass an async truth test.
+#'
+#' @section Control flow helper functions:
+#'
+#' Control flow with deferred values can be challenging. Some helpers:
+#' * [async_reflect()] creates an async function that always succeeds.
+#' This is useful if you want to apply it to a collection, and don't
+#' want to stop at the first error.
+#' * [async_retry()] tries an async function a number of times.
+#' [async_retryable()] turns a regular function into a retryable one.
+#' * [async_sequence()] chains two async functions. Calling their sequence
+#' is equivalent to calling `$then()` on them, but [async_sequence()] is
+#' easier to use programmatically.
+#' * [async_until()] and [async_whilst()] let you call an async function
+#' repeatedly, until or while a (synchronous) condition holds.
+#' * [async_timeout()] runs an async function with a timeout.
+#'
+#' @section Examples:
+#' Please see the README and the vignettes for examples.
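+#'
+#' A small illustration of the control flow helpers (a sketch; the URL is
+#' a placeholder):
+#'
+#' ```
+#' afun <- function() {
+#'   # retry a flaky request a few times, then fall back to NA
+#'   async_retry(
+#'     function() http_get("https://example.com")$
+#'       then(function(resp) resp$status_code),
+#'     times = 3
+#'   )$catch(error = function(err) NA_integer_)
+#' }
+#' synchronise(afun())
+#' ```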
+#' @name deferred +#' @noRd +NULL + +#' @importFrom R6 R6Class +#' @noRd + +deferred <- R6Class( + "deferred", + public = list( + initialize = function(action = NULL, on_progress = NULL, on_cancel = NULL, + parents = NULL, parent_resolve = NULL, + parent_reject = NULL, type = NULL, + call = sys.call(-1), event_emitter = NULL) + async_def_init(self, private, action, on_progress, on_cancel, + parents, parent_resolve, parent_reject, type, call, + event_emitter), + then = function(on_fulfilled) + def_then(self, private, on_fulfilled), + catch = function(...) + def_catch(self, private, ...), + finally = function(on_finally) + def_finally(self, private, on_finally), + cancel = function(reason = "Cancelled") + def_cancel(self, private, reason), + share = function() { private$shared <<- TRUE; invisible(self) }, + + event_emitter = NULL + ), + + private = list( + action = NULL, + running = FALSE, + id = NULL, + type = NULL, + state = c("pending", "fulfilled", "rejected")[1], + event_loop = NULL, + value = NULL, + children = list(), + progress_callback = NULL, + cancel_callback = NULL, + cancelled = FALSE, + dead_end = FALSE, + parents = NULL, + parent_resolve = NULL, + parent_reject = NULL, + shared = FALSE, + mycall = NULL, + + run_action = function() + def__run_action(self, private), + + null = function() + def__null(self, private), + + resolve = function(value) + def__resolve(self, private, value), + reject = function(reason) + def__reject(self, private, reason), + progress = function(data) + def__progress(self, private, data), + + make_error_object = function(err) + def__make_error_object(self, private, err), + + maybe_cancel_parents = function(reason) + def__maybe_cancel_parents(self, private, reason), + add_as_parent = function(child) + def__add_as_parent(self, private, child), + update_parent = function(old, new) + def__update_parent(self, private, old, new), + + get_info = function() + def__get_info(self, private) + ) +) + +async_def_init <- function(self, private, 
action, on_progress, + on_cancel, parents, parent_resolve, + parent_reject, type, call, event_emitter) { + + private$type <- type + private$id <- get_id() + private$event_loop <- get_default_event_loop() + private$parents <- parents + private$action <- action + private$mycall <- call + self$event_emitter <- event_emitter + + "!DEBUG NEW `private$id` (`type`)" + + assert_that(is.null(on_progress) || is.function(on_progress)) + private$progress_callback <- on_progress + assert_that(is.null(on_cancel) || is.function(on_cancel)) + private$cancel_callback <- on_cancel + + ## Handle the parents + + private$parent_resolve <- def__make_parent_resolve(parent_resolve) + private$parent_reject <- def__make_parent_reject(parent_reject) + + for (prt in parents) { + prt_pvt <- get_private(prt) + prt_pvt$add_as_parent(self) + } + + invisible(self) +} + +def__run_action <- function(self, private) { + if (private$running) return() + action <- private$action + private$running <- TRUE + private$action <- NULL + "!DEBUG ACTION `private$type` `private$id`" + + if (!is.null(action)) { + if (!is.function(action)) { + action <- as.function(action) + formals(action) <- alist(resolve = NULL, progress = NULL) + } + assert_that(is_action_function(action)) + + action_args <- names(formals(action)) + args <- list(private$resolve) + if (!is.na(pr_arg <- match("progress", action_args))) { + args$progress <- private$progress + } + + private$event_loop$add_next_tick( + function() { + if (isTRUE(getOption("async_debug_steps", FALSE))) debug1(action) + `__async_data__` <- list(private$id, "action", self, skip = 2L) + do.call(action, args) }, + function(err, res) if (!is.null(err)) private$reject(err)) + } + + ## If some parents are done, we want them to notify us. 
+ ## We also start the ones that are not running yet + for (prt in private$parents) { + prt_priv <- get_private(prt) + if (prt_priv$state != "pending") { + def__call_then( + if (prt_priv$state == "fulfilled") "parent_resolve" else "parent_reject", + self, prt_priv$value) + } + prt_priv$run_action() + } +} + +def_then <- function(self, private, on_fulfilled = NULL, + on_rejected = NULL) { + force(self) + force(private) + + if (! identical(private$event_loop, get_default_event_loop())) { + err <- make_error( + "Cannot create deferred chain across synchronization barrier", + class = "async_synchronization_barrier_error") + stop(err) + } + + if (!is_deferred(on_fulfilled)) { + parent_resolve <- def__make_parent_resolve(on_fulfilled) + parent_reject <- def__make_parent_reject(on_rejected) + + deferred$new(parents = list(self), + type = paste0("then-", private$id), + parent_resolve = parent_resolve, + parent_reject = parent_reject, + call = sys.call(-1)) + + } else { + private$add_as_parent(on_fulfilled) + child_private <- get_private(on_fulfilled) + child_private$parents <- c(child_private$parents, self) + self + } +} + +def_catch <- function(self, private, ...) 
{ + def_then(self, private, on_rejected = list(...)) +} + +def_finally <- function(self, private, on_finally) { + force(on_finally) + def_then( + self, + private, + on_fulfilled = function(value) { + on_finally() + value + }, + on_rejected = function(reason) { + on_finally() + stop(reason) + } + ) +} + +def_cancel <- function(self, private, reason) { + if (private$state != "pending") return() + cancel_cond <- structure( + list(message = reason %||% "Deferred computation cancelled", call = NULL), + class = c("async_cancelled", "error", "condition") + ) + private$reject(cancel_cond) + invisible(self) +} + +def__null <- function(self, private) { + self$.__enclos_env__$private$dead_end <- TRUE + invisible(self) +} + +def__resolve <- function(self, private, value) { + if (private$cancelled) return() + if (private$state != "pending") return() + + if (is_deferred(value)) { + private$parent_resolve <- def__make_parent_resolve(NULL) + private$parent_reject <- def__make_parent_reject(NULL) + + # we need this in case self was shared and had multiple children + val_pvt <- get_private(value) + val_pvt$id <- private$id + val_pvt$shared <- private$shared + val_pvt$dead_end <- private$dead_end # This should not happen, though + + for (child in private$children) { + ch_pvt <- get_private(child) + ch_pvt$update_parent(self, value) + } + + val_pvt$run_action() + + } else { + if (!private$dead_end && !length(private$children) && + !private$shared) { + ## This cannot happen currently + "!DEBUG ??? DEAD END `private$id`" # nocov + warning("Computation going nowhere...") # nocov + } + + "!DEBUG +++ RESOLVE `private$id`" + private$state <- "fulfilled" + private$value <- value + for (child in private$children) { + def__call_then("parent_resolve", child, value) + } + private$maybe_cancel_parents(private$value) + private$parents <- NULL + } +} + +#' Create an error object for a rejected deferred computation +#' +#' * Make sure that the error is an error object. 
+#' * Make sure that the error has the correct classes. +#' +#' @param self self +#' @param private private self +#' @return error object +#' +#' @noRd +#' @keywords internal + +def__make_error_object <- function(self, private, err) { + class(err) <- unique(c("async_rejected", class(err))) + err +} + +def__make_parent_resolve <- function(fun) { + if (is.null(fun)) { + function(value, resolve) resolve(value) + } else if (!is.function(fun)) { + fun <- as.function(fun) + function(value, resolve) resolve(fun(value)) + } else if (num_args(fun) == 0) { + function(value, resolve) resolve(fun()) + } else if (num_args(fun) == 1) { + function(value, resolve) resolve(fun(value)) + } else if (identical(names(formals(fun)), + c("value", "resolve"))) { + fun + } else { + stop("Invalid parent_resolve callback") + } +} + +def__make_parent_reject <- function(fun) { + if (is.null(fun)) { + function(value, resolve) stop(value) + } else if (is.list(fun)) { + def__make_parent_reject_catch(fun) + } else if (!is.function(fun)) { + fun <- as.function(fun) + function(value, resolve) resolve(fun(value)) + } else if (num_args(fun) == 0) { + function(value, resolve) resolve(fun()) + } else if (num_args(fun) == 1) { + function(value, resolve) resolve(fun(value)) + } else if (identical(names(formals(fun)), + c("value", "resolve"))) { + fun + } else { + stop("Invalid parent_reject callback") + } +} + +def__make_parent_reject_catch <- function(handlers) { + handlers <- lapply(handlers, as.function) + function(value, resolve) { + ok <- FALSE + ret <- tryCatch({ + quo <- as.call(c(list(quote(tryCatch), quote(stop(value))), handlers)) + ret <- eval(quo) + ok <- TRUE + ret + }, error = function(x) x) + + if (ok) resolve(ret) else stop(ret) + } +} + +def__reject <- function(self, private, reason) { + if (private$cancelled) return() + if (private$state != "pending") return() + + ## 'reason' cannot be a deferred here + + "!DEBUG !!! 
REJECT `private$id`" + private$state <- "rejected" + private$value <- private$make_error_object(reason) + if (inherits(private$value, "async_cancelled")) { + private$cancelled <- TRUE + } + if (!is.null(private$cancel_callback)) { + private$cancel_callback(conditionMessage(private$value)) + } + for (child in private$children) { + def__call_then("parent_reject", child, private$value) + } + private$maybe_cancel_parents(private$value) + private$parents <- NULL +} + +def__maybe_cancel_parents <- function(self, private, reason) { + for (parent in private$parents) { + if (is.null(parent)) next + + parent_priv <- get_private(parent) + if (parent_priv$state != "pending") next + if (parent_priv$shared) next + parent$cancel(reason) + } +} + +def__call_then <- function(which, x, value) { + force(value); + private <- get_private(x) + if (!private$running) return() + if (private$state != "pending") return() + + cb <- private[[which]] + private$event_loop$add_next_tick( + function() { + if (isTRUE(getOption("async_debug_steps", FALSE))) { + debug1(private[[which]]) # nocov + } + `__async_data__` <- list(private$id, "parent", x) + private[[which]](value, private$resolve) + }, + function(err, res) if (!is.null(err)) private$reject(err)) +} + +def__add_as_parent <- function(self, private, child) { + "!DEBUG EDGE [`private$id` -> `get_private(child)$id`]" + + if (! 
identical(private$event_loop, get_private(child)$event_loop)) { + err <- make_error( + "Cannot create deferred chain across synchronization barrier", + class = "async_synchronization_barrier_error") + stop(err) + } + if (length(private$children) && !private$shared) { + stop("Deferred value is already owned") + } + + private$children <- c(private$children, child) + + if (get_private(child)$running) private$run_action() + if (private$state == "pending") { + ## Nothing to do + + } else if (private$state == "fulfilled") { + def__call_then("parent_resolve", child, private$value) + + } else { + def__call_then("parent_reject", child, private$value) + } +} + +def__update_parent <- function(self, private, old, new) { + for (i in seq_along(private$parents)) { + if (identical(private$parents[[i]], old)) { + private$parents[[i]] <- new + } + } + + new_pvt <- get_private(new) + new_pvt$add_as_parent(self) +} + +def__progress <- function(self, private, data) { + if (private$state != "pending") return() + if (is.null(private$progress_callback)) return() + private$progress_callback(data) +} + +def__get_info <- function(self, private) { + res <- data.frame( + stringsAsFactors = FALSE, + id = private$id, + parents = I(list(viapply(private$parents, function(x) get_private(x)$id))), + label = as.character(private$id), + call = I(list(private$mycall)), + children = I(list(viapply(private$children, function(x) get_private(x)$id))), + type = private$type %||% "unknown", + running = private$running, + state = private$state, + cancelled = private$cancelled, + shared = private$shared + ) + src <- get_source_position(private$mycall) + res$filename <- src$filename + res$position <- src$position + res$label <- paste0( + res$id, " ", + if (private$state == "fulfilled") paste0(cli::symbol$tick, " "), + if (private$state == "rejected") paste0(cli::symbol$cross, " "), + deparse(private$mycall)[1], " @ ", + res$filename, ":", res$position) + + res +} + +#' Is object a deferred value? 
+#' +#' @param x object +#' @return Whether it is a deferred value. +#' +#' @noRd +#' @examples +#' is_deferred(1:10) +#' afun <- function() { +#' print(is_deferred(dx <- delay(1/100))) +#' dx +#' } +#' synchronise(afun()) + +is_deferred <- function(x) { + inherits(x, "deferred") +} + +#' Delay async computation for the specified time +#' +#' Since R is single-threaded, the deferred value might be resolved (much) +#' later than the specified time period. +#' +#' @param delay Time interval in seconds, the amount of time to delay +#' the execution. It can be a fraction of a second. +#' @return A deferred object. +#' +#' @noRd +#' @examples +#' \donttest{ +#' ## Two HEAD requests with 1/2 sec delay between them +#' resp <- list() +#' afun <- async(function() { +#' http_head("https://eu.httpbin.org?q=2")$ +#' then(function(value) resp[[1]] <<- value$status_code)$ +#' then(function(...) delay(1/2))$ +#' then(function(...) http_head("https://eu.httpbin.org?q=2"))$ +#' then(function(value) resp[[2]] <<- value$status_code) +#' }) +#' synchronise(afun()) +#' resp +#' } + +delay <- function(delay) { + force(delay) + id <- NULL + deferred$new( + type = "delay", call = sys.call(), + action = function(resolve) { + assert_that(is_time_interval(delay)) + force(resolve) + id <<- get_default_event_loop()$add_delayed( + delay, + function() TRUE, + function(err, res) resolve(TRUE) + ) + }, + on_cancel = function(reason) { + if (!is.null(id)) get_default_event_loop()$cancel(id) + } + ) +} + +delay <- mark_as_async(delay) + +#' Find the value of a match, asynchronously +#' +#' All predicates run in parallel, and the returned match +#' is not guaranteed to be the first one. +#' +#' @param .x A list or atomic vector. +#' @param .p An asynchronous predicate function. +#' @param ... Additional arguments to the predicate function. +#' @param .limit Number of elements to process simultaneously. +#' If it is 1, then the predicate is applied sequentially. 
+#' @return A deferred value for the result. +#' +#' @family async iterators +#' @noRd +#' @examples +#' \donttest{ +#' synchronise(async_detect( +#' c("https://eu.httpbin.org/status/404", "https://eu.httpbin.org", +#' "https://eu.httpbin.org/status/403"), +#' async_sequence(http_head, function(x) x$status_code == 200) +#' )) +#' } + +async_detect <- function(.x, .p, ..., .limit = Inf) { + if (.limit < length(.x)) { + async_detect_limit(.x, .p, ..., .limit = .limit) + } else { + async_detect_nolimit(.x, .p, ...) + } +} + +async_detect <- mark_as_async(async_detect) + +async_detect_nolimit <- function(.x, .p, ...) { + defs <- lapply(.x, async(.p), ...) + nx <- length(defs) + done <- FALSE + + self <- deferred$new( + type = "async_detect", call = sys.call(), + action = function(resolve) { + lapply(seq_along(defs), function(idx) { + defs[[idx]]$then(function(val) if (isTRUE(val)) idx)$then(self) + }) + if (nx == 0) resolve(NULL) + }, + parent_resolve = function(value, resolve) { + if (!done && !is.null(value)) { + done <<- TRUE + resolve(.x[[value]]) + } else if (!done) { + nx <<- nx - 1L + if (nx == 0) resolve(NULL) + } + } + ) +} + +async_detect_limit <- function(.x, .p, ..., .limit = .limit) { + len <- length(.x) + nx <- len + .p <- async(.p) + args <- list(...) + + done <- FALSE + nextone <- .limit + 1L + firsts <- lapply(.x[seq_len(.limit)], .p, ...) + + self <- deferred$new( + type = "async_detect (limit)", call = sys.call(), + action = function(resolve) { + lapply(seq_along(firsts), function(idx) { + firsts[[idx]]$then(function(val) if (isTRUE(val)) idx)$then(self) + }) + if (nx == 0) resolve(NULL) + }, + parent_resolve = function(value, resolve) { + if (!done && !is.null(value)) { + done <<- TRUE + resolve(.x[[value]]) + } else if (!done) { + nx <<- nx - 1L + if (nx == 0) { + resolve(NULL) + } else if (nextone <= len) { + idx <- nextone + dx <- .p(.x[[nextone]], ...) 
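+        ## Chaining the new deferred into `self` below routes its result
+        ## (the index if the predicate is TRUE, NULL otherwise) back into
+        ## this same parent_resolve callback, just like the first `.limit`
+        ## deferreds started in `action`.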
+ dx$then(function(val) if (isTRUE(val)) idx)$then(self) + nextone <<- nextone + 1L + } + } + } + ) + + self +} + +#' @importFrom R6 R6Class + +event_loop <- R6Class( + "event_loop", + public = list( + initialize = function() + el_init(self, private), + + add_http = function(handle, callback, file = NULL, progress = NULL, + data = NULL) + el_add_http(self, private, handle, callback, file, progress, data), + http_setopt = function(total_con = NULL, host_con = NULL, multiplex = NULL) + el_http_setopt(self, private, total_con, host_con, multiplex), + + add_process = function(conns, callback, data) + el_add_process(self, private, conns, callback, data), + add_r_process = function(conns, callback, data) + el_add_r_process(self, private, conns, callback, data), + add_pool_task = function(callback, data) + el_add_pool_task(self, private, callback, data), + add_delayed = function(delay, func, callback, rep = FALSE) + el_add_delayed(self, private, delay, func, callback, rep), + add_next_tick = function(func, callback, data = NULL) + el_add_next_tick(self, private, func, callback, data), + + cancel = function(id) + el_cancel(self, private, id), + cancel_all = function() + el_cancel_all(self, private), + + run = function(mode = c("default", "nowait", "once")) + el_run(self, private, mode = match.arg(mode)), + + suspend = function() + el_suspend(self, private), + wakeup = function() + el_wakeup(self, private) + ), + + private = list( + create_task = function(callback, ..., id = NULL, type = "foobar") + el__create_task(self, private, callback, ..., id = id, type = type), + ensure_pool = function() + el__ensure_pool(self, private), + get_poll_timeout = function() + el__get_poll_timeout(self, private), + run_pending = function() + el__run_pending(self, private), + run_timers = function() + el__run_timers(self, private), + is_alive = function() + el__is_alive(self, private), + update_time = function() + el__update_time(self, private), + io_poll = function(timeout) + 
el__io_poll(self, private, timeout), + update_curl_data = function() + el__update_curl_data(self, private), + + id = NULL, + time = Sys.time(), + stop_flag = FALSE, + tasks = list(), + timers = Sys.time()[numeric()], + pool = NULL, + curl_fdset = NULL, # return value of multi_fdset() + curl_poll = TRUE, # should we poll for curl sockets? + curl_timer = NULL, # call multi_run() before this + next_ticks = character(), + worker_pool = NULL, + http_opts = NULL + ) +) + +el_init <- function(self, private) { + private$id <- new_event_loop_id() + invisible(self) +} + +el_add_http <- function(self, private, handle, callback, file, progress, + data) { + self; private; handle; callback; progress; outfile <- file; data + + id <- private$create_task(callback, list(handle = handle, data = data), + type = "http") + private$ensure_pool() + if (!is.null(outfile)) cat("", file = outfile) + + content <- NULL + + curl::multi_add( + handle = handle, + pool = private$pool, + done = function(response) { + task <- private$tasks[[id]] + task$data$data$event_emitter$emit("end") + private$tasks[[id]] <- NULL + response$content <- do.call(c, as.list(content)) + response$file <- outfile + task$callback(NULL, response) + }, + data = function(bytes, ...) { + task <- private$tasks[[id]] + task$data$data$event_emitter$emit("data", bytes) + if (!is.null(outfile)) { + ## R runs out of connections very quickly, especially because they + ## are not removed until a gc(). However, calling gc() is + ## expensive, so we only do it if we have to. This is a temporary + ## solution until we can use our own connections, that are not + ## so limited in their numbers. 
+ con <- tryCatch( + file(outfile, open = "ab"), + error = function(e) { gc(); file(outfile, open = "ab") } # nocov + ) + writeBin(bytes, con) + close(con) + } else { + content <<- c(content, list(bytes)) + } + }, + fail = function(error) { + task <- private$tasks[[id]] + private$tasks[[id]] <- NULL + error <- make_error(message = error) + class(error) <- unique(c("async_rejected", "async_http_error", + class(error))) + task$callback(error, NULL) + } + ) + id +} + +el_add_process <- function(self, private, conns, callback, data) { + self; private; conns; callback; data + data$conns <- conns + private$create_task(callback, data, type = "process") +} + +el_add_r_process <- function(self, private, conns, callback, data) { + self; private; conns; callback; data + data$conns <- conns + private$create_task(callback, data, type = "r-process") +} + +el_add_pool_task <- function(self, private, callback, data) { + self; private; callback; data + id <- private$create_task(callback, data, type = "pool-task") + if (is.null(async_env$worker_pool)) { + async_env$worker_pool <- worker_pool$new() + } + async_env$worker_pool$add_task(data$func, data$args, id, private$id) + id +} + +el_add_delayed <- function(self, private, delay, func, callback, rep) { + force(self); force(private); force(delay); force(func); force(callback) + force(rep) + id <- private$create_task( + callback, + data = list(delay = delay, func = func, rep = rep), + type = "delayed" + ) + # This has to be real time, because our event loop time might + # be very much in the past when this is called. 
+ private$timers[id] <- Sys.time() + as.difftime(delay, units = "secs") + id +} + +el_add_next_tick <- function(self, private, func, callback, data) { + force(self) ; force(private) ; force(callback); force(data) + data$func <- func + id <- private$create_task(callback, data = data, type = "nexttick") + private$next_ticks <- c(private$next_ticks, id) +} + +el_cancel <- function(self, private, id) { + private$next_ticks <- setdiff(private$next_ticks, id) + private$timers <- private$timers[setdiff(names(private$timers), id)] + if (id %in% names(private$tasks) && private$tasks[[id]]$type == "http") { + curl::multi_cancel(private$tasks[[id]]$data$handle) + } else if (id %in% names(private$tasks) && + private$tasks[[id]]$type %in% c("process", "r-process")) { + private$tasks[[id]]$data$process$kill() + } else if (id %in% names(private$tasks) && + private$tasks[[id]]$type == "pool-task") { + async_env$worker_pool$cancel_task(id) + } + private$tasks[[id]] <- NULL + invisible(self) +} + +el_cancel_all <- function(self, private) { + http <- curl::multi_list(pool = private$pool) + lapply(http, curl::multi_cancel) + private$next_ticks <- character() + private$timers <- Sys.time()[numeric()] + + ## Need to cancel pool tasks, these are interrupts for the workers + types <- vcapply(private$tasks, "[[", "type") + ids <- vcapply(private$tasks, "[[", "id") + for (id in ids[types == "pool-task"]) { + self$cancel(id) + } + + private$tasks <- list() + invisible(self) +} + +el_run <- function(self, private, mode) { + + ## This is closely modeled after the libuv event loop, on purpose, + ## because some time we might switch to that. + + alive <- private$is_alive() + if (! 
alive) private$update_time() + + while (alive && !private$stop_flag) { + private$update_time() + private$update_curl_data() + private$run_timers() + ran_pending <- private$run_pending() + ## private$run_idle() + ## private$run_prepare() + + timeout <- 0 + if ((mode == "once" && !ran_pending) || mode == "default") { + timeout <- private$get_poll_timeout() + } + + private$io_poll(timeout) + ## private$run_check() + ## private$run_closing_handles() + + if (mode == "once") { + ## If io_poll returned without doing anything, that means that + ## we have some timers that are due, so run those. + ## At this point we have surely made progress + private$update_time() + private$run_timers() + } + + alive <- private$is_alive() + if (mode == "once" || mode == "nowait") break + } + + private$stop_flag <- FALSE + + alive +} + +el_suspend <- function(self, private) { + ## TODO +} + +el_wakeup <- function(self, private) { + ## TODO +} + +el__run_pending <- function(self, private) { + next_ticks <- private$next_ticks + private$next_ticks <- character() + for (id in next_ticks) { + task <- private$tasks[[id]] + private$tasks[[id]] <- NULL + call_with_callback(task$data$func, task$callback, + info = task$data$error_info) + } + + ## Check for workers from the pool finished before, while another + ## event loop was active + finished_pool <- FALSE + pool <- async_env$worker_pool + if (!is.null(pool)) { + done_pool <- pool$list_tasks(event_loop = private$id, status = "done") + finished_pool <- nrow(done_pool) > 0 + for (tid in done_pool$id) { + task <- private$tasks[[tid]] + private$tasks[[tid]] <- NULL + res <- pool$get_result(tid) + err <- res$error + res <- res[c("result", "stdout", "stderr")] + task$callback(err, res) + } + } + + length(next_ticks) > 0 || finished_pool +} + +el__io_poll <- function(self, private, timeout) { + + types <- vcapply(private$tasks, "[[", "type") + + ## The things we need to poll, and their types + ## We put the result here as well + pollables <- data.frame( 
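+  ## Illustrative shape of this table once the sections below fill it in
+  ## (values are made up): one row per pollable, e.g.
+  ##   id       pollable           type       ready
+  ##   "curl"   <curl fd set>      "curl"     "silent"
+  ##   <uuid>   <process conns>    "process"  "silent"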
+ stringsAsFactors = FALSE, + id = character(), + pollable = I(list()), + type = character(), + ready = character() + ) + + ## HTTP. + if (private$curl_poll) { + curl_pollables <- data.frame( + stringsAsFactors = FALSE, + id = "curl", + pollable = I(list(processx::curl_fds(private$curl_fdset))), + type = "curl", + ready = "silent") + pollables <- rbind(pollables, curl_pollables) + } + + ## Processes + proc <- types %in% c("process", "r-process") + if (sum(proc)) { + conns <- unlist(lapply( + private$tasks[proc], function(t) t$data$conns), + recursive = FALSE) + proc_pollables <- data.frame( + stringsAsFactors = FALSE, + id = names(private$tasks)[proc], + pollable = I(conns), + type = types[proc], + ready = rep("silent", sum(proc))) + pollables <- rbind(pollables, proc_pollables) + } + + ## Pool + px_pool <- if (!is.null(async_env$worker_pool)) { + async_env$worker_pool$get_poll_connections() + } + if (length(px_pool)) { + pool_pollables <- data.frame( + stringsAsFactors = FALSE, + id = names(px_pool), + pollable = I(px_pool), + type = rep("pool", length(px_pool)), + ready = rep("silent", length(px_pool))) + pollables <- rbind(pollables, pool_pollables) + } + + if (!is.null(private$curl_timer) && private$curl_timer <= private$time) { + curl::multi_run(timeout = 0L, poll = TRUE, pool = private$pool) + private$curl_timer <- NULL + } + + if (nrow(pollables)) { + + ## OK, ready to poll + pollables$ready <- unlist(processx::poll(pollables$pollable, timeout)) + + ## Any HTTP? 
+ if (private$curl_poll && + pollables$ready[match("curl", pollables$type)] == "event") { + curl::multi_run(timeout = 0L, poll = TRUE, pool = private$pool) + } + + ## Any processes + proc_ready <- pollables$type %in% c("process", "r-process") & + pollables$ready == "ready" + for (id in pollables$id[proc_ready]) { + p <- private$tasks[[id]] + private$tasks[[id]] <- NULL + ## TODO: this should be async + p$data$process$wait(1000) + p$data$process$kill() + res <- list( + status = p$data$process$get_exit_status(), + stdout = read_all(p$data$stdout, p$data$encoding), + stderr = read_all(p$data$stderr, p$data$encoding), + timeout = FALSE + ) + + error <- FALSE + if (p$type == "r-process") { + res$result <- tryCatch({ + p$data$process$get_result() + }, error = function(e) { error <<- TRUE; e }) + } + + unlink(c(p$data$stdout, p$data$stderr)) + + if (p$data$error_on_status && (error || res$status != 0)) { + err <- make_error("process exited with non-zero status") + err$data <- res + res <- NULL + } else { + err <- NULL + } + p$callback(err, res) + } + + ## Worker pool + pool_ready <- pollables$type == "pool" & pollables$ready == "ready" + if (sum(pool_ready)) { + pool <- async_env$worker_pool + done <- pool$notify_event(as.integer(pollables$id[pool_ready]), + event_loop = private$id) + mine <- intersect(done, names(private$tasks)) + for (tid in mine) { + task <- private$tasks[[tid]] + private$tasks[[tid]] <- NULL + res <- pool$get_result(tid) + err <- res$error + res <- res[c("result", "stdout", "stderr")] + task$callback(err, res) + } + } + + } else if (length(private$timers) || !is.null(private$curl_timer)) { + Sys.sleep(timeout / 1000) + } +} + +el__create_task <- function(self, private, callback, data, ..., id, type) { + id <- id %||% get_uuid() + private$tasks[[id]] <- list( + type = type, + id = id, + callback = callback, + data = data, + error = NULL, + result = NULL + ) + id +} + +el__ensure_pool <- function(self, private) { + getopt <- function(nm) { + anm <- 
paste0("async_http_", nm) + if (!is.null(v <- getOption(anm))) return(v) + if (!is.na(v <- Sys.getenv(toupper(anm), NA_character_))) return(v) + NULL + } + if (is.null(private$pool)) { + private$http_opts <- list( + total_con = getopt("total_con") %||% 100, + host_con = getopt("host_con") %||% 6, + multiplex = getopt("multiplex") %||% TRUE + ) + private$pool <- curl::new_pool( + total_con = private$http_opts$total_con, + host_con = private$http_opts$host_con, + multiplex = private$http_opts$multiplex + ) + } +} + +el_http_setopt <- function(self, private, total_con, host_con, multiplex) { + private$ensure_pool() + if (!is.null(total_con)) private$http_opts$total_con <- total_con + if (!is.null(host_con)) private$http_opts$host_con <- host_con + if (!is.null(multiplex)) private$http_opts$multiplex <- multiplex + curl::multi_set( + pool = private$pool, + total_con = private$http_opts$total_con, + host_con = private$http_opts$host_con, + multiplex = private$http_opts$multiplex + ) +} + +el__get_poll_timeout <- function(self, private) { + t <- if (length(private$next_ticks)) { + ## TODO: can this happen at all? Probably not, but it does not hurt... 
+ 0 # nocov + } else { + max(0, min(Inf, private$timers - private$time)) + } + + if (!is.null(private$curl_timer)) { + t <- min(t, private$curl_timer - private$time) + } + + t <- max(t, 0) + + if (is.finite(t)) as.integer(t * 1000) else -1L +} + +el__run_timers <- function(self, private) { + + expired <- names(private$timers)[private$timers <= private$time] + expired <- expired[order(private$timers[expired])] + for (id in expired) { + task <- private$tasks[[id]] + if (private$tasks[[id]]$data$rep) { + ## If it is repeated, then re-init + private$timers[id] <- + private$time + as.difftime(task$data$delay, units = "secs") + } else { + ## Otherwise remove + private$tasks[[id]] <- NULL + private$timers <- private$timers[setdiff(names(private$timers), id)] + } + call_with_callback(task$data$func, task$callback) + } +} + +el__is_alive <- function(self, private) { + length(private$tasks) > 0 || + length(private$timers) > 0 || + length(private$next_ticks) > 0 +} + +el__update_time <- function(self, private) { + private$time <- Sys.time() +} + +el__update_curl_data <- function(self, private) { + private$curl_fdset <- curl::multi_fdset(private$pool) + num_fds <- length(unique(unlist(private$curl_fdset[1:3]))) + private$curl_poll <- num_fds > 0 + private$curl_timer <- if ((t <- private$curl_fdset$timeout) != -1) { + private$time + as.difftime(t / 1000.0, units = "secs") + } +} + +#' Generic Event Emitter +#' +#' This is a generic class that can be used to create event emitters. +#' It is mostly modelled after the 'node.js' `EventEmitter` class +#' +#' @section Usage: +#' ``` +#' ee <- event_emitter$new(async = TRUE) +#' ee$listen_on(event, callback) +#' ee$listen_off(event, callback) +#' ee$listen_once(event, callback) +#' ee$emit(event, ...) +#' ee$get_event_names() +#' ee$get_listener_count(event) +#' ee$remove_all_listeners(event) +#' ``` +#' +#' @section Arguments: +#' * `async`: Whether to call listeners asynchronously, i.e. in the next +#' tick of the event loop. 
+#' * `event`: String, name of the event. +#' * `callback`: Function, listener to call when the event is emitted. +#' Its arguments must match the arguments passed to the `$emit()` +#' method. It is possible to add the same callback function multiple +#' times as a listener. It will be called as many times +#' as it was added. +#' * `...`: Arguments to pass to the listeners. They can be named or +#' unnamed. +#' +#' @section Details: +#' +#' `ee$listen_on()` adds `callback` as a new listener for `event`. It is +#' always added to the end of the listener list. Listeners will be called in +#' the order they were added. It returns a reference to the `event_emitter` +#' object, so calls can be chained. +#' +#' `ee$listen_off()` removes the first instance of `callback` from the +#' listener list of `event`. It uses [base::identical()] to find the +#' listener to remove. If `callback` is not among the listeners, nothing +#' happens. Note that if you call this method from an event handler, that +#' does not affect the already emitted events. It returns a reference to +#' the `event_emitter` object, so calls can be chained. +#' +#' `ee$listen_once()` is similar to `ee$listen_on()`, but the callback will +#' only be called for a single event, and then it will be removed. +#' (Technically, the listener is removed before the callback is called.) +#' It returns a reference to the `event_emitter` object, so calls can be +#' chained. +#' +#' `ee$emit()` emits an event. All listeners in its listener list will be +#' called, in the order they were added. The arguments are passed to the +#' listeners, so they have to be compatible with them. +#' +#' `ee$get_event_names()` returns the names of the active events, +#' in a character vector. An event is active if it has at least one +#' listener. +#' +#' `ee$get_listener_count()` returns the number of listeners for an event. +#' +#' `ee$remove_all_listeners()` removes all listeners for an event. 
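+#'
+#' A minimal usage sketch (illustrative only; the class is internal, and
+#' with `async = FALSE` the listeners run synchronously, right from
+#' `$emit()`):
+#' ```r
+#' ee <- event_emitter$new(async = FALSE)
+#' ee$listen_on("greet", function(name) message("hello ", name))
+#' ee$listen_once("greet", function(name) message("once: ", name))
+#' ee$emit("greet", "world")    # runs both listeners
+#' ee$emit("greet", "again")    # runs only the first, permanent listener
+#' ```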
+#' +#' @section Error handling: +#' Errors are handled by special `error` events. If a listener errors, +#' and the event emitter has an active `error` event (i.e. some listeners +#' exist for `error`), then _all_ `error` listeners are called, in the order +#' they were specified. They receive the originally thrown error object as +#' the single argument. The error object has an `event` entry, which contains +#' the event name the failed listener was called on. +#' +#' If the event emitter does not have any listeners for the `error` event, +#' then it throws an error. This error propagates to the next +#' synchronization barrier, i.e. the last `synchronise()` or +#' `run_event_loop()` call, which fails. +#' +#' If an error happens within an `error` listener, then the same thing +#' happens: the last `synchronise()` or `run_event_loop()` call fails. You +#' can wrap the body of the error listeners in a `tryCatch()` call, +#' if you want to avoid this. +#' +#' @noRd +#' @importFrom R6 R6Class + +event_emitter <- R6Class( + "event_emitter", + public = list( + initialize = function(async = TRUE) + ee_init(self, private, async), + + listen_on = function(event, callback) + ee_listen_on(self, private, event, callback), + + listen_off = function(event, callback) + ee_listen_off(self, private, event, callback), + + listen_once = function(event, callback) + ee_listen_once(self, private, event, callback), + + emit = function(event, ...) 
+ ee_emit(self, private, event, ...), + + get_event_names = function() + ee_get_event_names(self, private), + + get_listener_count = function(event) + ee_get_listener_count(self, private, event), + + remove_all_listeners = function(event) + ee_remove_all_listeners(self, private, event) + ), + + private = list( + lsts = NULL, + async = NULL, + + cleanup_events = function() + ee__cleanup_events(self, private), + error_callback = function(err, res) + ee__error_callback(self, private, err, res) + ) +) + +ee_init <- function(self, private, async) { + assert_that(is_flag(async)) + private$lsts <- structure(list(), names = character()) + private$async <- async + invisible(self) +} + +ee_listen_on <- function(self, private, event, callback) { + assert_that(is_string(event), is.function(callback)) + private$lsts[[event]] <- + c(private$lsts[[event]], list(list(cb = callback, once = FALSE))) + invisible(self) +} + +ee_listen_off <- function(self, private, event, callback) { + assert_that(is_string(event), is.function(callback)) + for (idx in seq_along(private$lsts[[event]])) { + if (identical(private$lsts[[event]][[idx]]$cb, callback)) { + private$lsts[[event]] <- private$lsts[[event]][-idx] + break + } + } + invisible(self) +} + +ee_listen_once <- function(self, private, event, callback) { + assert_that(is_string(event), is.function(callback)) + private$lsts[[event]] <- + c(private$lsts[[event]], list(list(cb = callback, once = TRUE))) + invisible(self) +} + +ee_emit <- function(self, private, event, ...) { + assert_that(is_string(event)) + list(...) 
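+  ## list(...) above forces the listener arguments right away: with
+  ## async = TRUE the listeners only run on the next tick of the event
+  ## loop, and a lazily evaluated argument could change (or error) by then.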
+ tocall <- private$lsts[[event]] + once <- vlapply(tocall, "[[", "once") + if (any(once)) private$lsts[[event]] <- tocall[!once] + + ## a for loop is not good here, because it does not create + ## a closure for lst + lapply(tocall, function(lst) { + lst + if (private$async) { + get_default_event_loop()$add_next_tick( + function() lst$cb(...), + private$error_callback, + data = list(error_info = list(event = event))) + + } else { + call_with_callback( + function() lst$cb(...), + private$error_callback, + info = list(event = event)) + } + }) + + invisible(self) +} + +ee_get_event_names <- function(self, private) { + private$cleanup_events() + names(private$lsts) +} + +ee_get_listener_count <- function(self, private, event) { + assert_that(is_string(event)) + length(private$lsts[[event]]) +} + +ee_remove_all_listeners <- function(self, private, event) { + assert_that(is_string(event)) + private$lsts[[event]] <- NULL + invisible(self) +} + +ee__cleanup_events <- function(self, private) { + len <- viapply(private$lsts, length) + private$lsts <- private$lsts[len > 0] +} + +ee__error_callback <- function(self, private, err, res) { + if (is.null(err)) return() + tocall <- private$lsts[["error"]] + once <- vlapply(tocall, "[[", "once") + if (any(once)) private$lsts[["error"]] <- tocall[!once] + + if (length(tocall)) { + for (lst in tocall) lst$cb(err) + } else { + stop(err) + } +} + +#' Do every or some elements of a list satisfy an asynchronous predicate? +#' +#' @param .x A list or atomic vector. +#' @param .p An asynchronous predicate function. +#' @param ... Additional arguments to the predicate function. +#' @return A deferred value for the result. +#' +#' @family async iterators +#' @noRd +#' @examples +#' # Check if all numbers are odd +#' # Note the use of force() here. Otherwise x will be evaluated later, +#' # and by then its value might change. 
+#' is_odd <- async(function(x) {
+#'   force(x)
+#'   delay(1/1000)$then(function() as.logical(x %% 2))
+#' })
+#' synchronise(async_every(c(1,3,5,7,10,11), is_odd))
+#' synchronise(async_every(c(1,3,5,7,11), is_odd))
+
+async_every <- function(.x, .p, ...) {
+  defs <- lapply(.x, async(.p), ...)
+  nx <- length(defs)
+  done <- FALSE
+
+  deferred$new(
+    type = "async_every", call = sys.call(),
+    parents = defs,
+    action = function(resolve) if (nx == 0) resolve(TRUE),
+    parent_resolve = function(value, resolve) {
+      if (!done && !isTRUE(value)) {
+        done <<- TRUE
+        resolve(FALSE)
+      } else if (!done) {
+        nx <<- nx - 1L
+        if (nx == 0) resolve(TRUE)
+      }
+    }
+  )
+}
+
+async_every <- mark_as_async(async_every)
+
+#' Keep or drop elements using an asynchronous predicate function
+#'
+#' `async_filter` keeps the elements for which `.p` is true (truth is
+#' tested via `isTRUE()`). `async_reject` is the opposite: it drops them.
+#'
+#' @param .x A list or atomic vector.
+#' @param .p An asynchronous predicate function.
+#' @param ... Additional arguments to the predicate function.
+#' @return A deferred value for the result.
+#'
+#' @family async iterators
+#' @noRd
+#' @examples
+#' \donttest{
+#' ## Filter out non-working URLs
+#' afun <- async(function(urls) {
+#'   test_url <- async_sequence(
+#'     http_head, function(x) identical(x$status_code, 200L))
+#'   async_filter(urls, test_url)
+#' })
+#' urls <- c("https://eu.httpbin.org/get",
+#'           "https://eu.httpbin.org/status/404")
+#' synchronise(afun(urls))
+#' }
+
+async_filter <- function(.x, .p, ...) {
+  when_all(.list = lapply(.x, async(.p), ...))$
+    then(function(res) .x[vlapply(res, isTRUE)])
+}
+
+async_filter <- mark_as_async(async_filter)
+
+#' @rdname async_filter
+#' @noRd
+
+async_reject <- function(.x, .p, ...) {
+  when_all(.list = lapply(.x, async(.p), ...))$
+    then(function(res) .x[!
vlapply(res, isTRUE)]) +} + +async_reject <- mark_as_async(async_reject) +#' HTTP event emitter for server-sent events +#' +#' Server-sent events are a technique to stream events from a web server +#' to a client, through an open HTTP connection. +#' +#' This class implements an event emitter on an async HTTP query created +#' with [http_get()] and friends, that fires an `"event"` event when the +#' server sends an event. An `"end"` event is emitted when the server +#' closes the connection. +#' +#' An event is a named character vector, the names are the keys of the +#' events. +#' +#' Example using our built-in toy web app: +#' ```r +#' http <- webfakes::new_app_process(async:::sseapp()) +#' stream_events <- function() { +#' query <- http_get(http$url("/sse")) +#' sse <- sse_events$new(query) +#' sse$ +#' listen_on("event", function(event) { +#' writeLines("Got an event:") +#' print(event) +#' })$ +#' listen_on("end", function() { +#' writeLines("Done.") +#' }) +#' query +#' } +#' +#' response <- synchronise(stream_events()) +#' ``` +#' +#' +#' @noRd + +sse_events <- R6Class( + "sse_events", + inherit = event_emitter, + public = list( + initialize = function(http_handle) { + super$initialize(async = FALSE) + http_handle$event_emitter$listen_on("data", function(bytes) { + private$data <- c(private$data, bytes) + private$emit_events() + }) + http_handle$event_emitter$listen_on("end", function() { + self$emit("end") + }) + } + ), + + private = list( + data = NULL, + sep = as.raw(c(0xaL, 0xaL)), + emit_events = function() { + evs <- chunk_sse_events(private$data, private$sep) + private$data <- evs$rest + for (ev in evs$events) { + self$emit("event", ev) + } + } + ) +) + +chunk_sse_events <- function(data, sep = NULL) { + # skip leading \n + no <- 0L + while (no <= length(data) && data[no + 1] == 0x0a) { + no <- no + 1L + } + if (no > 0) { + data <- data[(no + 1L):length(data)] + } + sep <- sep %||% as.raw(c(0xaL, 0xaL)) + mtch <- grepRaw(sep, data, fixed = TRUE, all = 
TRUE) + # shortcut for no events + if (length(mtch) == 0) { + return(list(events = list(), rest = data)) + } + + events <- vector("list", length(mtch)) + for (p in seq_along(mtch)) { + from <- if (p == 1) 1L else mtch[p - 1] + 2L + to <- mtch[p] - 1L + events[[p]] <- parse_sse_event(data[from:to]) + } + events <- drop_nulls(events) + + restfrom <- mtch[length(mtch)] + 2L + rest <- if (restfrom <= length(data)) { + data[restfrom:length(data)] + } else { + raw() + } + list(events = events, rest = rest) +} + +parse_sse_event <- function(data) { + txt <- rawToChar(data) + Encoding(txt) <- "UTF-8" + lines <- strsplit(txt, "\n", fixed = TRUE)[[1]] + lines <- lines[lines != ""] + if (length(lines) == 0) { + return(NULL) + } + keys <- sub(":.*$", "", lines) + vals <- sub("^[^:]*:[ ]*", "", lines) + structure(vals, names = keys) +} + +drop_nulls <- function(x) { + x[!vapply(x, is.null, logical(1))] +} + +sseapp <- function() { + app <- webfakes::new_app() + app$get("/sse", function(req, res) { + `%||%` <- function(l, r) if (is.null(l)) r else l + if (is.null(res$locals$sse)) { + duration <- as.double(req$query$duration %||% 2) + delay <- as.double(req$query$delay %||% 0) + numevents <- as.integer(req$query$numevents %||% 5) + pause <- max(duration / numevents, 0.01) + res$locals$sse <- list( + sent = 0, + numevents = numevents, + pause = pause + ) + + res$ + set_header("cache-control", "no-cache")$ + set_header("content-type", "text/event-stream")$ + set_header("access-control-allow-origin", "*")$ + set_header("connection", "keep-alive")$ + set_status(200) + + if (delay > 0) { + return(res$delay(delay)) + } + } + + msg <- paste0( + "event: ", res$locals$sse$sent + 1L, "\n", + "message: live long and prosper\n\n" + ) + res$locals$sse$sent <- res$locals$sse$sent + 1L + res$write(msg) + + if (res$locals$sse$sent == res$locals$sse$numevents) { + res$send("") + } else { + res$delay(res$locals$sse$pause) + } + }) +} + +#' Asynchronous HTTP GET request +#' +#' Start an HTTP GET 
request in the background, and report its completion +#' via a deferred. +#' +#' @section HTTP event emitters: +#' An async HTTP deferred object is also an event emitter, see +#' [event_emitter]. Use `$event_emitter` to access the event emitter API, +#' and call `$event_emitter$listen_on()` etc. to listen on HTTP events, +#' etc. +#' +#' * `"data"` is emitted when we receive data from the server, the data is +#' passed on to the listeners as a raw vector. Note that zero-length +#' raw vectors might also happen. +#' * `"end"` is emitted at the end of the HTTP data stream, without +#' additional arguments (Also on error.) +#' +#' Here is an example, that uses the web server from the webfakes +#' package: +#' ```r +#' http <- webfakes::new_app_process(webfakes::httpbin_app()) +#' stream_http <- function() { +#' query <- http_get(http$url("/drip?duration=3&numbytes=10")) +#' query$event_emitter$ +#' listen_on("data", function(bytes) { +#' writeLines(paste("Got", length(bytes), "byte(s):")) +#' print(bytes) +#' })$ +#' listen_on("end", function() { +#' writeLines("Done.") +#' }) +#' query +#' } +#' +#' response <- synchronise(stream_http()) +#' ``` +#' +#' @param url URL to connect to. +#' @param headers HTTP headers to send. +#' @param file If not `NULL`, it must be a string, specifying a file. +#' The body of the response is written to this file. +#' @param options Options to set on the handle. Passed to +#' [curl::handle_setopt()]. +#' @param on_progress Progress handler function. It is only used if the +#' response body is written to a file. See details below. +#' @return Deferred object. +#' +#' @section Progress bars: +#' +#' `http_get` can report on the progress of the download, via the +#' `on_progress` argument. This is called with a list, with entries: +#' * `url`: the specified url to download +#' * `handle`: the curl handle of the request. 
This can be queried using
+#' [curl::handle_data()] to get the response status_code, the final
+#' URL (after redirections), timings, etc.
+#' * `file`: the `file` argument.
+#' * `total`: total bytes of the response. If this is unknown, it is set
+#' to zero.
+#' * `current`: already received bytes of the response.
+#'
+#' @family asynchronous HTTP calls
+#' @noRd
+#' @examples
+#' \donttest{
+#' afun <- async(function() {
+#'   http_get("https://eu.httpbin.org/status/200")$
+#'     then(function(x) x$status_code)
+#' })
+#' synchronise(afun())
+#' }
+
+http_get <- function(url, headers = character(), file = NULL,
+                     options = list(), on_progress = NULL) {
+
+  url; headers; file; options; on_progress
+  options <- get_default_curl_options(options)
+
+  make_deferred_http(
+    function() {
+      assert_that(is_string(url))
+      handle <- curl::new_handle(url = url)
+      curl::handle_setheaders(handle, .list = headers)
+
+      if (!is.null(on_progress)) {
+        options$noprogress <- FALSE
+        fun <- options$progressfunction <- function(down, up) {
+          on_progress(list(
+            url = url,
+            handle = handle,
+            file = file,
+            total = down[[1]],
+            current = down[[2]]
+          ))
+          TRUE
+        }
+        ## This is a workaround for curl not PROTECT-ing the progress
+        ## callback function
+        reg.finalizer(handle, function(...) fun, onexit = TRUE)
+      }
+
+      curl::handle_setopt(handle, .list = options)
+      list(handle = handle, options = options)
+    },
+    file
+  )
+}
+
+http_get <- mark_as_async(http_get)
+
+#' Asynchronous HTTP HEAD request
+#'
+#' An async HTTP deferred object is also an event emitter, see
+#' [http_get()] for details, and also [event_emitter].
+#'
+#' @inheritParams http_get
+#' @return Deferred object.
+#'
+#' @family asynchronous HTTP calls
+#' @noRd
+#' @examples
+#' \donttest{
+#' afun <- async(function() {
+#'   dx <- http_head("https://eu.httpbin.org/status/200")$
+#'     then(function(x) x$status_code)
+#' })
+#' synchronise(afun())
+#'
+#' # Check a list of URLs in parallel
+#' afun <- function(urls) {
+#'   when_all(.list = lapply(urls, http_head))$
+#'     then(function(x) lapply(x, "[[", "status_code"))
+#' }
+#' urls <- c("https://google.com", "https://eu.httpbin.org")
+#' synchronise(afun(urls))
+#' }
+
+http_head <- function(url, headers = character(), file = NULL,
+                      options = list(), on_progress = NULL) {
+
+  url; headers; file; options; on_progress
+  options <- get_default_curl_options(options)
+
+  make_deferred_http(
+    function() {
+      assert_that(is_string(url))
+      handle <- curl::new_handle(url = url)
+      curl::handle_setheaders(handle, .list = headers)
+      curl::handle_setopt(handle, customrequest = "HEAD", nobody = TRUE,
+                          .list = options)
+      list(handle = handle, options = options)
+    },
+    file
+  )
+}
+
+http_head <- mark_as_async(http_head)
+
+#' Asynchronous HTTP POST request
+#'
+#' Start an HTTP POST request in the background, and report its completion
+#' via a deferred value.
+#'
+#' An async HTTP deferred object is also an event emitter, see
+#' [http_get()] for details, and also [event_emitter].
+#'
+#' @inheritParams http_get
+#' @param data Data to send. Either a raw vector, or a character string
+#' that will be converted to raw with [base::charToRaw]. At most one of
+#' `data`, `data_file` and `data_form` can be non `NULL`.
+#' @param data_file Data file to send. At most one of `data`, `data_file`
+#' and `data_form` can be non `NULL`.
+#' @param data_form Form data to send. A named list, where each element
+#' is created with either [curl::form_data()] or [curl::form_file()].
+#' At most one of `data`, `data_file` and `data_form` can be non `NULL`.
+#' @param on_progress Progress handler function.
It is only used if the +#' response body is written to a file. See details at [http_get()]. +#' +#' @noRd +#' @examples +#' json <- jsonlite::toJSON(list(baz = 100, foo = "bar")) +#' +#' do <- function() { +#' headers <- c("content-type" = "application/json") +#' http_post("https://eu.httpbin.org/post", data = json, headers = headers)$ +#' then(http_stop_for_status)$ +#' then(function(x) { +#' jsonlite::fromJSON(rawToChar(x$content))$json +#' }) +#' } +#' +#' synchronise(do()) + +http_post <- function(url, data = NULL, data_file = NULL, + data_form = NULL, headers = character(), file = NULL, + options = list(), on_progress = NULL) { + + url; data; data_file; data_form; headers; file; options; on_progress + if ((!is.null(data) + !is.null(data_file) + !is.null(data_form)) > 1) { + stop( + "At most one of `data`, `data_file` and `data_form` ", + "can be non `NULL`." + ) + } + if (!is.null(data_file)) { + data <- readBin(data_file, "raw", file.size(data_file)) + } + if (!is.null(data) && !is.raw(data)) data <- charToRaw(data) + options <- get_default_curl_options(options) + + make_deferred_http( + function() { + assert_that(is_string(url)) + handle <- curl::new_handle(url = url) + curl::handle_setheaders(handle, .list = headers) + curl::handle_setopt(handle, customrequest = "POST", + postfieldsize = length(data), postfields = data, + .list = options) + if (!is.null(data_form)) { + curl::handle_setform(handle, .list = data_form) + } + list(handle = handle, options = options) + }, + file + ) +} + +http_post <- mark_as_async(http_post) + +http_delete <- function(url, headers = character(), file = NULL, + options = list()) { + url; headers; options; + + make_deferred_http( + function() { + assert_that(is_string(url)) + handle <- curl::new_handle(url = url) + curl::handle_setheaders(handle, .list = headers) + curl::handle_setopt(handle, customrequest = "DELETE", .list = options) + list(handle = handle, options = options) + }, + file + ) +} + +http_delete <- 
mark_as_async(http_delete) + +#' @importFrom utils modifyList + +get_default_curl_options <- function(options) { + getopt <- function(nm) { + if (!is.null(v <- options[[nm]])) return(v) + anm <- paste0("async_http_", nm) + if (!is.null(v <- getOption(anm))) return(v) + if (!is.na(v <- Sys.getenv(toupper(anm), NA_character_))) return (v) + } + modifyList( + options, + drop_nulls(list( + timeout = as.integer(getopt("timeout") %||% 0), + connecttimeout = as.integer(getopt("connecttimeout") %||% 300), + low_speed_time = as.integer(getopt("low_speed_time") %||% 0), + low_speed_limit = as.integer(getopt("low_speed_limit") %||% 0), + cainfo = getopt("cainfo") + )) + ) +} + +http_events <- R6Class( + "http_events", + inherit = event_emitter, + public = list( + listen_on = function(event, callback) { + private$check(event) + super$listen_on(event, callback) + }, + listen_off = function(event, callback) { + private$check(event) + super$listen_off(event, callback) + } + ), + private = list( + check = function(event) { + stopifnot(event %in% c("data", "end")) + } + ) +) + +make_deferred_http <- function(cb, file) { + cb; file + id <- NULL + ee <- http_events$new() + deferred$new( + type = "http", call = sys.call(), + action = function(resolve, progress) { + resolve; progress + ## This is a temporary hack until we have proper pollables + ## Then the deferred will have a "work" callback, which will + ## be able to throw. + reject <- environment(resolve)$private$reject + ho <- cb() + id <<- get_default_event_loop()$add_http( + ho$handle, + function(err, res) if (is.null(err)) resolve(res) else reject(err), + progress, + file, + data = c(ho$options, list(event_emitter = ee)) + ) + }, + on_cancel = function(reason) { + if (!is.null(id)) get_default_event_loop()$cancel(id) + }, + event_emitter = ee + ) +} + +#' Throw R errors for HTTP errors +#' +#' Status codes below 400 are considered successful, others will trigger +#' errors. 
Note that this is different from the `httr` package, which
+#' treats the 3xx status codes as errors as well.
+#'
+#' @param resp HTTP response from [http_get()], [http_head()], etc.
+#' @return The HTTP response invisibly, if it is considered successful.
+#' Otherwise an error is thrown.
+#'
+#' @noRd
+#' @examples
+#' \donttest{
+#' afun <- async(function() {
+#'   http_get("https://eu.httpbin.org/status/404")$
+#'     then(http_stop_for_status)
+#' })
+#'
+#' tryCatch(synchronise(afun()), error = function(e) e)
+#' }
+
+http_stop_for_status <- function(resp) {
+  if (!is.integer(resp$status_code)) stop("Not an HTTP response")
+  if (resp$status_code < 400) return(invisible(resp))
+  stop(http_error(resp))
+}
+
+http_error <- function(resp, call = sys.call(-1)) {
+  status <- resp$status_code
+  reason <- http_status(status)$reason
+  message <- sprintf("%s (HTTP %d).", reason, status)
+  status_type <- (status %/% 100) * 100
+  if (length(resp[["content"]]) == 0 && !is.null(resp$file) &&
+      file.exists(resp$file)) {
+    tryCatch({
+      n <- file.info(resp$file, extra_cols = FALSE)$size
+      resp$content <- readBin(resp$file, what = raw(), n = n)
+    }, error = identity)
+  }
+  http_class <- paste0("async_http_", unique(c(status, status_type, "error")))
+  structure(
+    list(message = message, call = call, response = resp),
+    class = c(http_class, "error", "condition")
+  )
+}
+
+http_status <- function(status) {
+  status_desc <- http_statuses[as.character(status)]
+  if (is.na(status_desc)) {
+    stop("Unknown http status code: ", status, call.
= FALSE) + } + + status_types <- c("Information", "Success", "Redirection", "Client error", + "Server error") + status_type <- status_types[[status %/% 100]] + + # create the final information message + message <- paste(status_type, ": (", status, ") ", status_desc, sep = "") + + list( + category = status_type, + reason = status_desc, + message = message + ) +} + +http_statuses <- c( + "100" = "Continue", + "101" = "Switching Protocols", + "102" = "Processing (WebDAV; RFC 2518)", + "200" = "OK", + "201" = "Created", + "202" = "Accepted", + "203" = "Non-Authoritative Information", + "204" = "No Content", + "205" = "Reset Content", + "206" = "Partial Content", + "207" = "Multi-Status (WebDAV; RFC 4918)", + "208" = "Already Reported (WebDAV; RFC 5842)", + "226" = "IM Used (RFC 3229)", + "300" = "Multiple Choices", + "301" = "Moved Permanently", + "302" = "Found", + "303" = "See Other", + "304" = "Not Modified", + "305" = "Use Proxy", + "306" = "Switch Proxy", + "307" = "Temporary Redirect", + "308" = "Permanent Redirect (experimental Internet-Draft)", + "400" = "Bad Request", + "401" = "Unauthorized", + "402" = "Payment Required", + "403" = "Forbidden", + "404" = "Not Found", + "405" = "Method Not Allowed", + "406" = "Not Acceptable", + "407" = "Proxy Authentication Required", + "408" = "Request Timeout", + "409" = "Conflict", + "410" = "Gone", + "411" = "Length Required", + "412" = "Precondition Failed", + "413" = "Request Entity Too Large", + "414" = "Request-URI Too Long", + "415" = "Unsupported Media Type", + "416" = "Requested Range Not Satisfiable", + "417" = "Expectation Failed", + "418" = "I'm a teapot (RFC 2324)", + "420" = "Enhance Your Calm (Twitter)", + "422" = "Unprocessable Entity (WebDAV; RFC 4918)", + "423" = "Locked (WebDAV; RFC 4918)", + "424" = "Failed Dependency (WebDAV; RFC 4918)", + "424" = "Method Failure (WebDAV)", + "425" = "Unordered Collection (Internet draft)", + "426" = "Upgrade Required (RFC 2817)", + "428" = "Precondition Required (RFC 
6585)",
+  "429" = "Too Many Requests (RFC 6585)",
+  "431" = "Request Header Fields Too Large (RFC 6585)",
+  "444" = "No Response (Nginx)",
+  "449" = "Retry With (Microsoft)",
+  "450" = "Blocked by Windows Parental Controls (Microsoft)",
+  "451" = "Unavailable For Legal Reasons (Internet draft)",
+  "499" = "Client Closed Request (Nginx)",
+  "500" = "Internal Server Error",
+  "501" = "Not Implemented",
+  "502" = "Bad Gateway",
+  "503" = "Service Unavailable",
+  "504" = "Gateway Timeout",
+  "505" = "HTTP Version Not Supported",
+  "506" = "Variant Also Negotiates (RFC 2295)",
+  "507" = "Insufficient Storage (WebDAV; RFC 4918)",
+  "508" = "Loop Detected (WebDAV; RFC 5842)",
+  "509" = "Bandwidth Limit Exceeded (Apache bw/limited extension)",
+  "510" = "Not Extended (RFC 2774)",
+  "511" = "Network Authentication Required (RFC 6585)",
+  "598" = "Network read timeout error (Unknown)",
+  "599" = "Network connect timeout error (Unknown)"
+)
+
+#' Set curl HTTP options in an event loop
+#'
+#' The event loop must already be running. In other words, you can only
+#' call this function from async functions.
+#'
+#' The default values are set when the first deferred HTTP operation of the
+#' event loop is created, and they are taken from the `async_http_total_con`,
+#' `async_http_host_con` and `async_http_multiplex` options.
+#'
+#' @param total_con,host_con,multiplex They are passed to
+#' [curl::multi_set()]. If an argument is `NULL` (the default) then it is
+#' ignored.
+#' @noRd
+#' @family asynchronous HTTP calls
+
+http_setopt <- function(total_con = NULL, host_con = NULL, multiplex = NULL) {
+  get_default_event_loop()$http_setopt(total_con, host_con, multiplex)
+  invisible()
+}
+
+#' Apply an asynchronous function to each element of a vector
+#'
+#' @param .x A list or atomic vector.
+#' @param .f Asynchronous function to apply.
+#' @param ... Additional arguments to `.f`.
+#' @param .args More additional arguments to `.f`.
+#' @param .limit Number of elements to process simultaneously.
+#' @return Deferred value that is resolved after all deferred values
+#' from the application of `.f` are resolved.
+#'
+#' @family async iterators
+#' @noRd
+#' @examples
+#' synchronise(async_map(
+#'   seq(10, 100, by = 10) / 100,
+#'   function(wait) delay(wait)$then(function() "OK")
+#' ))

+async_map <- function(.x, .f, ..., .args = list(), .limit = Inf) {
+  if (.limit < length(.x)) {
+    async_map_limit(.x, .f, ..., .args = .args, .limit = .limit)
+  } else {
+    defs <- do.call(lapply, c(list(.x, async(.f), ...), .args))
+    when_all(.list = defs)
+  }
+}
+
+async_map <- mark_as_async(async_map)
+
+async_map_limit <- function(.x, .f, ..., .args = list(), .limit = Inf) {
+  len <- length(.x)
+  nx <- len
+  .f <- async(.f)
+  args <- c(list(...), .args)
+
+  nextone <- .limit + 1L
+  firsts <- lapply_args(.x[seq_len(.limit)], .f, .args = args)
+
+  result <- structure(
+    vector(mode = "list", length = len),
+    names = names(.x)
+  )
+
+  self <- deferred$new(
+    type = "async_map (limit)", call = sys.call(),
+    action = function(resolve) {
+      self; nx; firsts
+      lapply(seq_along(firsts), function(idx) {
+        firsts[[idx]]$then(function(val) list(idx, val))$then(self)
+      })
+      if (nx == 0) resolve(result)
+    },
+    parent_resolve = function(value, resolve) {
+      self; nx; nextone; result; .f
+      nx <<- nx - 1L
+      result[ value[[1]] ] <<- value[2]
+      if (nx == 0) {
+        resolve(result)
+      } else if (nextone <= len) {
+        idx <- nextone
+        dx <- do.call(".f", c(list(.x[[nextone]]), args))
+        dx$then(function(val) list(idx, val))$then(self)
+        nextone <<- nextone + 1L
+      }
+    }
+  )
+
+  self
+}
+
+## nocov start
+
+.onLoad <- function(libname, pkgname) {
+  if (Sys.getenv("DEBUGME") != "" &&
+      requireNamespace("debugme", quietly = TRUE)) {
+    debugme::debugme()
+  }
+}
+
+## nocov end
+
+#' Asynchronous external process execution
+#'
+#' Start an external process in the background, and report its completion
+#' via a deferred.
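+#'
+#' A sketch of running two commands concurrently (illustrative only; it
+#' assumes an `echo` binary on the path):
+#' ```r
+#' afun <- function() {
+#'   when_all(
+#'     run_process("echo", "first"),
+#'     run_process("echo", "second")
+#'   )$then(function(res) lapply(res, "[[", "stdout"))
+#' }
+#' synchronise(afun())
+#' ```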
+#'
+#' @inheritParams processx::run
+#' @param error_on_status Whether to reject the deferred value if the
+#' program exits with a non-zero status.
+#' @return Deferred object.
+#'
+#' @family asynchronous external processes
+#' @noRd
+#' @examples
+#' \dontrun{
+#' afun <- function() {
+#'   run_process("ls", "-l")$
+#'     then(function(x) strsplit(x$stdout, "\r?\n")[[1]])
+#' }
+#' synchronise(afun())
+#' }
+
+run_process <- function(command = NULL, args = character(),
+                        error_on_status = TRUE, wd = NULL, env = NULL,
+                        windows_verbatim_args = FALSE, windows_hide_window = FALSE,
+                        encoding = "", ...) {
+
+  command; args; error_on_status; wd; env; windows_verbatim_args;
+  windows_hide_window; encoding; list(...)
+
+  id <- NULL
+
+  deferred$new(
+    type = "process", call = sys.call(),
+    action = function(resolve) {
+      resolve
+      reject <- environment(resolve)$private$reject
+      stdout <- tempfile()
+      stderr <- tempfile()
+      px <- processx::process$new(command, args = args,
+        stdout = stdout, stderr = stderr, poll_connection = TRUE,
+        env = env, cleanup = TRUE, cleanup_tree = TRUE, wd = wd,
+        encoding = encoding, ...)
+      pipe <- px$get_poll_connection()
+      id <<- get_default_event_loop()$add_process(
+        list(pipe),
+        function(err, res) if (is.null(err)) resolve(res) else reject(err),
+        list(process = px, stdout = stdout, stderr = stderr,
+          error_on_status = error_on_status, encoding = encoding))
+    },
+    on_cancel = function(reason) {
+      if (!is.null(id)) get_default_event_loop()$cancel(id)
+    }
+  )
+}
+
+run_process <- mark_as_async(run_process)
+
+#' Asynchronous call to an R function, in a background R process
+#'
+#' Start a background R process and evaluate a function call in it.
+#' It uses [callr::r_process] internally.
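+#'
+#' Arguments can be passed to the function via `args`; a minimal sketch
+#' (illustrative only):
+#' ```r
+#' afun <- function() {
+#'   run_r_process(function(x, y) x + y, args = list(40, 2))
+#' }
+#' synchronise(afun())
+#' ```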
+#'
+#' @inheritParams callr::r_bg
+#' @noRd
+#'
+#' @examples
+#' \dontrun{
+#' afun <- function() {
+#'   run_r_process(function() Sys.getpid())
+#' }
+#' synchronise(afun())
+#' }
+
+run_r_process <- function(func, args = list(), libpath = .libPaths(),
+  repos = c(getOption("repos"), c(CRAN = "https://cloud.r-project.org")),
+  cmdargs = c("--no-site-file", "--slave", "--no-save", "--no-restore"),
+  system_profile = FALSE, user_profile = FALSE, env = callr::rcmd_safe_env()) {
+
+  func; args; libpath; repos; cmdargs; system_profile; user_profile; env
+
+  id <- NULL
+
+  deferred$new(
+    type = "r-process", call = sys.calls(),
+    action = function(resolve) {
+      resolve
+      reject <- environment(resolve)$private$reject
+      stdout <- tempfile()
+      stderr <- tempfile()
+      opts <- callr::r_process_options(
+        func = func, args = args, libpath = libpath, repos = repos,
+        cmdargs = cmdargs, system_profile = system_profile,
+        user_profile = user_profile, env = env, stdout = stdout,
+        stderr = stderr, extra = list(cleanup_tree = TRUE))
+
+      rx <- callr::r_process$new(opts)
+      pipe <- rx$get_poll_connection()
+      id <<- get_default_event_loop()$add_r_process(
+        list(pipe),
+        function(err, res) if (is.null(err)) resolve(res) else reject(err),
+        list(process = rx, stdout = stdout, stderr = stderr,
+          error_on_status = TRUE, encoding = ""))
+    },
+    on_cancel = function(reason) {
+      if (!is.null(id)) get_default_event_loop()$cancel(id)
+    }
+  )
+}
+
+run_r_process <- mark_as_async(run_r_process)
+
+#' A deferred value that resolves when the specified number of deferred
+#' values resolve, or is rejected when one of them is rejected
+#'
+#' These functions are similar to [when_some()] and [when_any()], but they
+#' do not ignore errors. If a deferred is rejected, then `async_race_some()` and
+#' `async_race()` are rejected as well.
+#'
+#' `async_race()` is a special case of `count = 1`: it resolves or is rejected
+#' as soon as one deferred resolves or is rejected.
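+#'
+#' A sketch with two timers (illustrative only, using `delay()` from this
+#' package): the result is the value of whichever deferred settles first.
+#' ```r
+#' afun <- function() {
+#'   async_race(
+#'     delay(1/10)$then(function() "fast"),
+#'     delay(5)$then(function() "slow")
+#'   )
+#' }
+#' synchronise(afun())
+#' ```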
+#' +#' async has auto-cancellation, so if the required number of deferred values +#' are resolved, or any deferred value is rejected, the rest are cancelled. +#' +#' @param count Number of deferred values that need to resolve. +#' @param ... Deferred values. +#' @param .list More deferred values. +#' @return A deferred value, that is conditioned on all deferred values +#' in `...` and `.list`. +#' +#' @noRd + +async_race_some <- function(count, ..., .list = list()) { + when_some_internal(count, ..., .list = .list, .race = TRUE) +} + +async_race_some <- mark_as_async(async_race_some) + +#' @noRd +#' @rdname async_race_some + +async_race <- function(..., .list = list()) { + when_some_internal(1L, ..., .list = .list, .race = TRUE)$ + then(function(x) x[[1]]) +} + +async_race <- mark_as_async(async_race) + +#' Make an asynchronous function that always succeeds +#' +#' This is sometimes useful, if the function is applied to entries in +#' a vector or list. +#' +#' @param task Function to transform. +#' @return Async function returning a deferred value that is never +#' rejected. Instead its value is a list with entries `error` and +#' `result`. If the original deferred was resolved, then `error` is +#' `NULL`. If the original deferred was rejected, then `result` is +#' `NULL`. +#' +#' @family async control flow +#' @noRd +#' @examples +#' badfun <- async(function() stop("oh no!")) +#' safefun <- async_reflect(badfun) +#' synchronise(when_all(safefun(), "good")) + +async_reflect <- function(task) { + task <- async(task) + function(...) { + task(...)$ + then(function(value) list(error = NULL, result = value))$ + catch(error = function(reason) list(error = reason, result = NULL)) + } +} + +async_reflect <- mark_as_async(async_reflect) + +#' Replicate an async function a number of times +#' +#' Similar to [base::replicate()], with some differences: +#' * it takes an async function, instead of an expression, and +#' * it always returns a list. 
+#'
+#' @param n Number of replications.
+#' @param task Async function to call.
+#' @param ... Additional arguments to `task`.
+#' @param .limit Number of concurrent async processes to create.
+#' @return Resolves to a list of the results of the `n` `task` calls.
+#'
+#' @noRd
+#' @examples
+#' \donttest{
+#' ## perform an HTTP request three times, and list the response times
+#' do <- function() {
+#'   async_replicate(3,
+#'     function() http_get("https://eu.httpbin.org")$then(function(x) x$times))
+#' }
+#' synchronise(do())
+#' }
+
+async_replicate <- function(n, task, ..., .limit = Inf) {
+  assert_that(
+    is_count(n),
+    .limit == Inf || is_count(.limit), .limit >= 1L)
+
+  force(list(...))
+  task <- async(task)
+
+  if (n == 0) {
+    async_constant(list())
+  } else if (n <= .limit) {
+    async_replicate_nolimit(n, task, ...)
+  } else {
+    async_replicate_limit(n, task, ..., .limit = .limit)
+  }
+}
+
+async_replicate_nolimit <- function(n, task, ...) {
+  defs <- lapply(seq_len(n), function(i) task(...))
+  when_all(.list = defs)
+}
+
+async_replicate_limit <- function(n, task, ..., .limit) {
+  n; .limit
+
+  defs <- nextone <- result <- NULL
+
+  self <- deferred$new(
+    type = "async_replicate", call = sys.call(),
+    action = function(resolve) {
+      defs <<- lapply(seq_len(n), function(i) task(...))
+      result <<- vector(n, mode = "list")
+      lapply(seq_len(.limit), function(idx) {
+        defs[[idx]]$then(function(val) list(idx, val))$then(self)
+      })
+      nextone <<- .limit + 1L
+    },
+    parent_resolve = function(value, resolve) {
+      result[ value[[1]] ] <<- value[2]
+      if (nextone > n) {
+        resolve(result)
+      } else {
+        idx <- nextone
+        defs[[nextone]]$then(function(val) list(idx, val))$then(self)
+        nextone <<- nextone + 1L
+      }
+    }
+  )
+
+  self
+}
+
+#' Retry an asynchronous function a number of times
+#'
+#' Keeps trying until the function's deferred value resolves without
+#' error, or `times` tries have been performed.
+#'
+#' @param task An asynchronous function.
+#' @param times Number of tries.
+#' @param ... Arguments to pass to `task`.
+#' @return Deferred value for the operation with retries.
+#'
+#' @family async control flow
+#' @noRd
+#' @examples
+#' \donttest{
+#' ## Try a download at most 5 times
+#' afun <- async(function() {
+#'   async_retry(
+#'     function() http_get("https://eu.httpbin.org"),
+#'     times = 5
+#'   )$then(function(x) x$status_code)
+#' })
+#'
+#' synchronise(afun())
+#' }
+
+async_retry <- function(task, times, ...) {
+  task <- async(task)
+  times <- times
+  force(list(...))
+
+  self <- deferred$new(
+    type = "retry", call = sys.call(),
+    parents = list(task(...)),
+    parent_reject = function(value, resolve) {
+      times <<- times - 1L
+      if (times > 0) {
+        task(...)$then(self)
+      } else {
+        stop(value)
+      }
+    }
+  )
+}
+
+async_retry <- mark_as_async(async_retry)
+
+#' Make an asynchronous function retryable
+#'
+#' @param task An asynchronous function.
+#' @param times Number of tries.
+#' @return Asynchronous retryable function.
+#'
+#' @family async control flow
+#' @noRd
+#' @examples
+#' \donttest{
+#' ## Create a downloader that retries five times
+#' http_get_5 <- async_retryable(http_get, times = 5)
+#' ret <- synchronise(
+#'   http_get_5("https://eu.httpbin.org/get?q=1")$
+#'     then(function(x) rawToChar(x$content))
+#' )
+#' cat(ret)
+#' }
+
+async_retryable <- function(task, times) {
+  task <- async(task)
+  force(times)
+  function(...) {
+    async_retry(task, times, ...)
+  }
+}
+
+#' Compose asynchronous functions
+#'
+#' This is equivalent to using the `$then()` method of a deferred, but
+#' it is easier to use programmatically.
+#'
+#' @param ... Asynchronous functions to compose.
+#' @param .list More asynchronous functions to compose.
+#' @return Asynchronous function, the composition of all input functions.
+#' They are performed left to right, the ones in `.list` are the last
+#' ones.
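+#'
+#' For two functions, the composition behaves roughly like this sketch:
+#' ```r
+#' # async_sequence(f, g) is approximately
+#' function(...) async(f)(...)$then(g)
+#' ```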
+#' +#' @family async control flow +#' @noRd +#' @examples +#' \donttest{ +#' check_url <- async_sequence( +#' http_head, function(x) identical(x$status_code, 200L)) +#' synchronise(check_url("https://eu.httpbin.org/status/404")) +#' synchronise(check_url("https://eu.httpbin.org/status/200")) +#' } + +async_sequence <- function(..., .list = NULL) { + funcs <- c(list(...), .list) + if (length(funcs) == 0) stop("Function list empty in `async_sequence`") + + function(...) { + dx <- async(funcs[[1]])(...) + for (i in seq_along(funcs)[-1]) dx <- dx$then(funcs[[i]]) + dx + } +} + +async_sequence <- mark_as_async(async_sequence) + +#' @noRd +#' @rdname async_every + +async_some <- function(.x, .p, ...) { + defs <- lapply(.x, async(.p), ...) + nx <- length(defs) + done <- FALSE + + deferred$new( + type = "async_some", call = sys.call(), + parents = defs, + action = function(resolve) if (nx == 0) resolve(FALSE), + parent_resolve = function(value, resolve) { + if (!done && isTRUE(value)) { + done <<- TRUE + resolve(TRUE) + } else if (!done) { + nx <<- nx - 1L + if (nx == 0) resolve(FALSE) + } + } + ) +} + +async_some <- mark_as_async(async_some) + +#' Synchronously wrap asynchronous code +#' +#' Evaluate an expression in an async phase. It creates an event loop, +#' then evaluates the supplied expression. If its result is a deferred +#' value, it keeps running the event loop, until the deferred value is +#' resolved, and returns its resolved value. +#' +#' If an error is not handled in the async phase, `synchronise()` will +#' re-throw that error. +#' +#' `synchronise()` cancels all async processes on interrupt or external +#' error. +#' +#' @param expr Async function call expression. If it does not evaluate +#' to a deferred value, then it is just returned. +#' +#' @noRd +#' @examples +#' \donttest{ +#' http_status <- function(url, ...) 
{ +#' http_get(url, ...)$ +#' then(function(x) x$status_code) +#' } +#' +#' synchronise(http_status("https://eu.httpbin.org/status/418")) +#' } + +synchronise <- function(expr) { + new_el <- push_event_loop() + on.exit({ new_el$cancel_all(); pop_event_loop() }, add = TRUE) + + ## Mark this frame as a synchronization point, for debugging + `__async_synchronise_frame__` <- TRUE + + ## This is to allow `expr` to contain `async_list()` etc + ## calls that look for the top promise. Without this there + ## is no top promise. This is a temporary top promise that + ## is never started. + res <- async_constant(NULL) + + res <- expr + + if (!is_deferred(res)) return(res) + + ## We need an extra final promise that cannot be replaced, + ## so priv stays the same. + res <- res$then(function(x) x) + + priv <- get_private(res) + if (! identical(priv$event_loop, new_el)) { + err <- make_error( + "Cannot create deferred chain across synchronization barrier", + class = "async_synchronization_barrier_error") + stop(err) + } + + priv$null() + priv$run_action() + + if (isTRUE(getOption("async_debug"))) start_browser() + while (priv$state == "pending") new_el$run("once") + + if (priv$state == "fulfilled") priv$value else stop(priv$value) +} + +start_browser <- function() { + async_debug_shortcuts() + on.exit(async_debug_remove_shortcuts(), add = TRUE) + cat("This is a standard `browser()` call, but you can also use the\n") + cat("following extra commands:\n") + cat("- .an / async_next(): next event loop iteration.\n") + cat("- .as / async_step(): next event loop, debug next action or parent callback.\n") + cat("- .asb / async_step_back(): stop debugging of callbacks.\n") + cat("- .al / async_list(): deferred values in the current async phase.\n") + cat("- .at / async_tree(): DAG of the deferred values.\n") + cat("- .aw / async_where(): print call stack, mark async callback.\n") + cat("- async_wait_for(): run until deferred is resolved.\n") + cat("- async_debug(): debug action and/or 
parent callbacks of deferred.\n") + cat("\n") + browser(skipCalls = 1) +} + +#' Run event loop to completion +#' +#' Creates a new event loop, evaluates `expr` in it, and then runs the +#' event loop to completion. It stops when the event loop does not have +#' any tasks. +#' +#' The expression typically creates event loop tasks. It should not create +#' deferred values, though, because those will never be evaluated. +#' +#' Unhandled errors propagate to the `run_event_loop()` call, which fails. +#' +#' In case of an (unhandled) error, all event loop tasks will be cancelled. +#' +#' @param expr Expression to run after creating a new event loop. +#' @return `NULL`, always. If the event loop is to return some value, +#' you can use lexical scoping, see the example below. +#' +#' @noRd +#' @examples +#' counter <- 0L +#' do <- function() { +#' callback <- function() { +#' counter <<- counter + 1L +#' if (runif(1) < 1/10) t$cancel() +#' } +#' t <- async_timer$new(1/1000, callback) +#' } +#' run_event_loop(do()) +#' counter + +run_event_loop <- function(expr) { + new_el <- push_event_loop() + on.exit({ new_el$cancel_all(); pop_event_loop() }, add = TRUE) + + ## Mark this frame as a synchronization point, for debugging + `__async_synchronise_frame__` <- TRUE + + expr + new_el$run() + + invisible() +} + +distill_error <- function(err) { + if (is.null(err$aframe)) return(err) + err$aframe <- list( + frame = err$aframe$frame, + deferred = err$aframe$data[[1]], + type = err$aframe$data[[2]], + call = get_private(err$aframe$data[[3]])$mycall + ) + err +} + +# nocov start +#' @noRd + +print.async_rejected <- function(x, ...) { + cat(format(x, ...)) + invisible(x) +} + +# nocov end + +#' @noRd + +format.async_rejected <- function(x, ...) { + x <- distill_error(x) + src <- get_source_position(x$aframe$call) + paste0( + "" + ) +} + +#' @noRd + +summary.async_rejected <- function(object, ...) { + x <- distill_error(object) + fmt_out <- format(object, ...) 
+ stack <- async_where(calls = x$calls, parents = x$parents, + frm = list(x$aframe)) + stack_out <- format(stack) + structure( + paste0(fmt_out, "\n\n", stack_out), + class = "async_rejected_summary") +} + +# nocov start + +#' @noRd + +print.async_rejected_summary <- function(x, ...) { + cat(x) + invisible(x) +} + +# nocov end + +#' Asynchronous function call with a timeout +#' +#' If the deferred value is not resolved before the timeout expires, +#' `async_timeout()` throws an `async_timeout` error. +#' +#' @param task Asynchronous function. +#' @param timeout Timeout as a `difftime` object, or number of seconds. +#' @param ... Additional arguments to `task`. +#' @return A deferred value. An `async_timeout` error is thrown if it is +#' not resolved within the specified timeout. +#' +#' @family async utilities +#' @noRd +#' @examples +#' ## You can catch the error, asynchronously +#' synchronise( +#' async_timeout(function() delay(1/10)$then(function() "OK"), 1/1000)$ +#' catch(async_timeout = function(e) "Timed out", +#' error = function(e) "Other error") +#' ) +#' +#' ## Or synchronously +#' tryCatch( +#' synchronise( +#' async_timeout(function() delay(1/10)$then(function() "OK"), 1/1000) +#' ), +#' async_timeout = function(e) "Timed out. :(", +#' error = function(e) paste("Other error:", e$message) +#' ) + +async_timeout <- function(task, timeout, ...) { + task <- async(task) + force(timeout) + list(...) 
+  done <- FALSE
+
+  self <- deferred$new(
+    type = "timeout", call = sys.call(),
+    action = function(resolve) {
+      task(...)$then(function(x) list("ok", x))$then(self)
+      delay(timeout)$then(function() list("timeout"))$then(self)
+    },
+    parent_resolve = function(value, resolve) {
+      if (!done) {
+        done <<- TRUE
+        if (value[[1]] == "ok") {
+          resolve(value[[2]])
+        } else {
+          cnd <- structure(
+            list(message = "Async operation timed out"),
+            class = c("async_timeout", "error", "condition")
+          )
+          stop(cnd)
+        }
+      }
+    }
+  )
+}
+
+async_timeout <- mark_as_async(async_timeout)
+
+#' Repeated timer
+#'
+#' The supplied callback function will be called by the event loop
+#' every `delay` seconds.
+#'
+#' @section Usage:
+#' ```
+#' t <- async_timer$new(delay, callback)
+#' t$cancel()
+#' ```
+#'
+#' @section Arguments:
+#' * `delay`: Time interval in seconds, the amount of time to delay
+#' the execution. It can be a fraction of a second.
+#' * `callback`: Callback function without arguments. It will be called
+#' from the event loop every `delay` seconds.
+#'
+#' @section Details:
+#'
+#' An `async_timer` is an `[event_emitter]` object with a `timeout` event.
+#' It is possible to add multiple listeners to this event, once the timer
+#' is created. Note, however, that removing all listeners does not cancel
+#' the timer, `timeout` events will still be emitted as usual.
+#' For proper cancellation you'll need to call the `cancel()` method.
+#'
+#' It is only possible to create `async_timer` objects in an asynchronous
+#' context, i.e. within a `synchronise()` or `run_event_loop()` call.
+#' A `synchronise()` call finishes as soon as its returned deferred value
+#' is resolved (or rejected), even if some timers are still active. The
+#' active timers will be automatically cancelled in this case.
+#'
+#' @section Errors:
+#' Errors are handled the same way as for generic event emitters. I.e.
to
+#' catch errors thrown in the `callback` function, you need to add a
+#' listener to the `error` event, see the example below.
+#'
+#' @section Congestion:
+#' `async_timer` is _not_ a real-time timer. In particular, if `callback`
+#' does not return in time, before the next timer event, then all future
+#' timer events will be delayed. Even if `callback` returns promptly, the
+#' event loop might be busy with other events, and then the next timer
+#' event is not emitted in time. In general there is no guarantee about
+#' the timing of the timer events.
+#'
+#' @importFrom R6 R6Class
+#' @noRd
+#' @examples
+#' ## Call 10 times a second, cancel with 1/10 probability
+#' counter <- 0L
+#' do <- function() {
+#'   cb <- function() {
+#'     cat("called\n")
+#'     counter <<- counter + 1L
+#'     if (runif(1) < 0.1) t$cancel()
+#'   }
+#'   t <- async_timer$new(1/10, cb)
+#' }
+#'
+#' run_event_loop(do())
+#' counter
+#'
+#' ## Error handling
+#' counter <- 0L
+#' do <- function() {
+#'   cb <- function() {
+#'     cat("called\n")
+#'     counter <<- counter + 1L
+#'     if (counter == 2L) stop("foobar")
+#'     if (counter == 3L) t$cancel()
+#'   }
+#'   t <- async_timer$new(1/10, cb)
+#'   handler <- function(err) {
+#'     cat("Got error:", sQuote(conditionMessage(err)), ", handled\n")
+#'   }
+#'   t$listen_on("error", handler)
+#' }
+#'
+#' run_event_loop(do())
+#' counter
+#'
+#' ## Error handling at the synchronization point
+#' counter <- 0L
+#' do <- function() {
+#'   cb <- function() {
+#'     cat("called\n")
+#'     counter <<- counter + 1L
+#'     if (counter == 2L) stop("foobar")
+#'     if (counter == 3L) t$cancel()
+#'   }
+#'   t <- async_timer$new(1/10, cb)
+#' }
+#'
+#' tryCatch(run_event_loop(do()), error = function(x) x)
+#' counter
+
+async_timer <- R6Class(
+  "async_timer",
+  inherit = event_emitter,
+  public = list(
+    initialize = function(delay, callback)
+      async_timer_init(self, private, super, delay, callback),
+    cancel = function()
+      async_timer_cancel(self, private)
+  ),
+
+  private = list(
+    id = NULL
+
)
+)
+
+async_timer_init <- function(self, private, super, delay, callback) {
+  assert_that(
+    is_time_interval(delay),
+    is.function(callback) && length(formals(callback)) == 0)
+
+  ## event emitter
+  super$initialize()
+
+  private$id <- get_default_event_loop()$add_delayed(
+    delay,
+    function() self$emit("timeout"),
+    function(err, res) {
+      if (!is.null(err)) self$emit("error", err)   # nocov
+    },
+    rep = TRUE)
+
+  self$listen_on("timeout", callback)
+
+  invisible(self)
+}
+
+async_timer_cancel <- function(self, private) {
+  self; private
+  self$remove_all_listeners("timeout")
+  get_default_event_loop()$cancel(private$id)
+  invisible(self)
+}
+
+#' It runs each task in series, but stops as soon as one of the tasks is
+#' successful. If one of the tasks succeeds, the callback will be
+#' passed the result of the successful task. If all tasks fail, the
+#' callback will be passed the error and result (if any) of the final
+#' attempt.
+#' @param ... Deferred values to run in series.
+#' @param .list More deferred values to run, `.list` is easier to use
+#' programmatically.
+#' @return Resolves to the result of the first successful deferred.
+#' Otherwise throws an error. The error objects of all failed deferreds
+#' will be in the `errors` member of the error object.
+#'
+#' @family async control flow
+#' @noRd
+#' @examples
+#' do <- function() {
+#'   async_try_each(
+#'     async(function() stop("doh"))(),
+#'     async(function() "cool")(),
+#'     async(function() stop("doh2"))(),
+#'     async(function() "cool2")()
+#'   )
+#' }
+#' synchronise(do())
+
+async_try_each <- function(..., .list = list()) {
+  defs <- c(list(...), .list)
+  wh <- nx <- NULL
+  errors <- list()
+
+  self <- deferred$new(
+    type = "async_try_each", call = sys.call(),
+    action = function(resolve) {
+      nx <<- length(defs)
+      if (nx == 0) resolve(NULL)
+      wh <<- 1L
+      defs[[wh]]$then(self)
+    },
+    parent_resolve = function(value, resolve) {
+      resolve(value)
+    },
+    parent_reject = function(value, resolve) {
+      errors <<- c(errors, list(value))
+      if (wh == nx) {
+        err <- structure(
+          list(errors = errors, message = "async_try_each failed"),
+          class = c("async_rejected", "error", "condition"))
+        stop(err)
+      } else {
+        wh <<- wh + 1
+        defs[[wh]]$then(self)
+      }
+    }
+  )
+
+  self
+}
+
+async_try_each <- mark_as_async(async_try_each)
+
+#' Repeatedly call task until its test function returns `TRUE`
+#'
+#' @param test Synchronous test function.
+#' @param task Asynchronous function to call repeatedly.
+#' @param ... Arguments to pass to `task`.
+#' @return Deferred value, that is resolved when the iteration is done.
+#'
+#' @family async control flow
+#' @noRd
+#' @examples
+#' ## Keep calling until it "returns" a number less than 0.1
+#' calls <- 0
+#' number <- Inf
+#' synchronise(async_until(
+#'   function() number < 0.1,
+#'   function() {
+#'     calls <<- calls + 1
+#'     number <<- runif(1)
+#'   }
+#' ))
+#' calls
+
+async_until <- function(test, task, ...)
{
+  force(test)
+  task <- async(task)
+
+  self <- deferred$new(
+    type = "async_until", call = sys.call(),
+    parents = list(task(...)),
+    parent_resolve = function(value, resolve) {
+      if (test()) {
+        resolve(value)
+      } else {
+        task(...)$then(self)
+      }
+    }
+  )
+
+  self
+}
+
+async_until <- mark_as_async(async_until)
+
+`%||%` <- function(l, r) if (is.null(l)) r else l
+
+vlapply <- function(X, FUN, ..., FUN.VALUE = logical(1)) {
+  vapply(X, FUN, FUN.VALUE = FUN.VALUE, ...)
+}
+
+viapply <- function(X, FUN, ..., FUN.VALUE = integer(1)) {
+  vapply(X, FUN, FUN.VALUE = FUN.VALUE, ...)
+}
+
+vcapply <- function(X, FUN, ..., FUN.VALUE = character(1)) {
+  vapply(X, FUN, FUN.VALUE = FUN.VALUE, ...)
+}
+
+make_error <- function(message, class = "simpleError", call = NULL) {
+  class <- c(class, "error", "condition")
+  structure(
+    list(message = as.character(message), call = call),
+    class = class
+  )
+}
+
+num_args <- function(fun) {
+  length(formals(fun))
+}
+
+get_private <- function(x) {
+  x$.__enclos_env__$private
+}
+
+#' Call `func` and then call `callback` with the result
+#'
+#' `callback` will be called with two arguments, the first one will be the
+#' error object if `func()` threw an error, or `NULL` otherwise. The second
+#' argument is `NULL` on error, and the result of `func()` otherwise.
+#'
+#' @param func Function to call.
+#' @param callback Callback to call with the result of `func()`,
+#' or the error thrown.
+#' @param info Extra info to add to the error object. Must be a named list.
+#' +#' @noRd +#' @keywords internal + +call_with_callback <- function(func, callback, info = NULL) { + recerror <- NULL + result <- NULL + tryCatch( + withCallingHandlers( + result <- func(), + error = function(e) { + recerror <<- e + recerror$aframe <<- recerror$aframe %||% find_async_data_frame() + recerror$calls <<- recerror$calls %||% sys.calls() + if (is.null(recerror[["call"]])) recerror[["call"]] <<- sys.call() + recerror$parents <<- recerror$parents %||% sys.parents() + recerror[names(info)] <<- info + handler <- getOption("async.error") + if (is.function(handler)) handler() + } + ), + error = identity + ) + callback(recerror, result) +} + +get_id <- local({ + id <- 0L + function() { + id <<- id + 1L + id + } +}) + +new_event_loop_id <- local({ + id <- 0L + function() { + id <<- id + 1L + id + } +}) + +lapply_args <- function(X, FUN, ..., .args = list()) { + do.call("lapply", c(list(X = X, FUN = FUN), list(...), .args)) +} + +drop_nulls <- function(x) { + x[!vlapply(x, is.null)] +} + +#' @importFrom utils getSrcDirectory getSrcFilename getSrcLocation + +get_source_position <- function(call) { + list( + filename = file.path( + c(getSrcDirectory(call), "?")[1], + c(getSrcFilename(call), "?")[1]), + position = paste0( + getSrcLocation(call, "line", TRUE) %||% "?", ":", + getSrcLocation(call, "column", TRUE) %||% "?") + ) +} + +file_size <- function(...) 
{
+  file.info(..., extra_cols = FALSE)$size
+}
+
+read_all <- function(filename, encoding) {
+  if (is.null(filename)) return(NULL)
+  r <- readBin(filename, what = raw(0), n = file_size(filename))
+  s <- rawToChar(r)
+  Encoding(s) <- encoding
+  s
+}
+
+crash <- function () {
+  get("attach")(structure(list(), class = "UserDefinedDatabase"))
+}
+
+str_trim <- function(x) {
+  sub("\\s+$", "", sub("^\\s+", "", x))
+}
+
+expr_name <- function(expr) {
+  if (is.null(expr)) {
+    return("NULL")
+  }
+
+  if (is.symbol(expr)) {
+    return(as.character(expr))
+  }
+
+  if (is.call(expr)) {
+    cl <- as.list(expr)[[1]]
+    if (is.symbol(cl)) {
+      return(as.character(cl))
+    } else {
+      return(paste0(format(cl), collapse = ""))
+    }
+  }
+
+  if (is.atomic(expr) && length(expr) == 1) {
+    return(as.character(expr))
+  }
+
+  gsub("\n.*$", "...", as.character(expr))
+}
+
+get_uuid <- function() {
+  async_env$pid <- async_env$pid %||% Sys.getpid()
+  async_env$counter <- async_env$counter %||% 0
+  async_env$counter <- async_env$counter + 1L
+  paste0(async_env$pid, "-", async_env$counter)
+}
+
+#' Deferred value for a set of deferred values
+#'
+#' Create a deferred value that is resolved when all listed deferred values
+#' are resolved. Note that an error in any of the input deferred values
+#' triggers an error in `when_all` as well.
+#'
+#' async has auto-cancellation, so if one deferred value errors, the rest
+#' of them will be automatically cancelled.
+#'
+#' @param ... Deferred values.
+#' @param .list More deferred values.
+#' @return A deferred value, that is conditioned on all deferred values
+#' in `...` and `.list`.
+#'
+#' @seealso [when_any()], [when_some()]
+#' @noRd
+#' @examples
+#' \donttest{
+#' ## Check that the contents of two URLs are the same
+#' afun <- async(function() {
+#'   u1 <- http_get("https://eu.httpbin.org")
+#'   u2 <- http_get("https://eu.httpbin.org/get")
+#'   when_all(u1, u2)$
+#'     then(function(x) identical(x[[1]]$content, x[[2]]$content))
+#' })
+#' synchronise(afun())
+#' }
+
+when_all <- function(..., .list = list()) {
+
+  defs <- c(list(...), .list)
+  nx <- 0L
+
+  self <- deferred$new(
+    type = "when_all",
+    call = sys.call(),
+    action = function(resolve) {
+      self; nx; defs
+      lapply(seq_along(defs), function(idx) {
+        idx
+        if (is_deferred(defs[[idx]])) {
+          nx <<- nx + 1L
+          defs[[idx]]$then(function(val) list(idx, val))$then(self)
+        }
+      })
+      if (nx == 0) resolve(defs)
+    },
+    parent_resolve = function(value, resolve) {
+      defs[ value[[1]] ] <<- value[2]
+      nx <<- nx - 1L
+      if (nx == 0L) resolve(defs)
+    }
+  )
+}
+
+when_all <- mark_as_async(when_all)
+
+#' Resolve a deferred as soon as some deferred from a list resolve
+#'
+#' `when_some` creates a deferred value that is resolved as soon as the
+#' specified number of deferred values resolve.
+#'
+#' `when_any` is a special case, for a single deferred value.
+#'
+#' If the specified number of deferred values cannot be resolved, then
+#' `when_any` throws an error.
+#'
+#' async has auto-cancellation, so if the required number of deferred values
+#' are resolved, or too many of them throw errors, the rest of them are
+#' cancelled.
+#'
+#' If `when_any` throws an error, then all the underlying error objects
+#' are returned in the `errors` member of the error object thrown by
+#' `when_any`.
+#'
+#' @param count Number of deferred values that need to resolve.
+#' @param ... Deferred values.
+#' @param .list More deferred values.
+#' @return A deferred value, that is conditioned on all deferred values
+#' in `...` and `.list`.
+#' +#' @seealso [when_all()] +#' @noRd +#' @examples +#' \donttest{ +#' ## Use the URL that returns first +#' afun <- function() { +#' u1 <- http_get("https://eu.httpbin.org") +#' u2 <- http_get("https://eu.httpbin.org/get") +#' when_any(u1, u2)$then(function(x) x$url) +#' } +#' synchronise(afun()) +#' } + +when_some <- function(count, ..., .list = list()) { + when_some_internal(count, ..., .list = .list, .race = FALSE) +} + +when_some <- mark_as_async(when_some) + +when_some_internal <- function(count, ..., .list, .race) { + force(count) + force(.race) + defs <- c(list(...), .list) + num_defs <- length(defs) + num_failed <- 0L + ifdef <- vlapply(defs, is_deferred) + resolved <- defs[!ifdef] + errors <- list() + + cancel_all <- function() lapply(defs[ifdef], function(x) x$cancel()) + + deferred$new( + type = "when_some", call = sys.call(), + parents = defs[ifdef], + action = function(resolve) { + if (num_defs < count) { + stop("Cannot resolve enough deferred values") + } else if (length(resolved) >= count) { + resolve(resolved[seq_len(count)]) + } + }, + parent_resolve = function(value, resolve) { + resolved <<- c(resolved, list(value)) + if (length(resolved) == count) { + resolve(resolved) + } + }, + parent_reject = function(value, resolve) { + if (.race) { + stop(value) + } + num_failed <<- num_failed + 1L + errors <<- c(errors, list(value)) + if (num_failed + count == num_defs + 1L) { + err <- structure( + list(errors = errors, message = "when_some / when_any failed"), + class = c("async_rejected", "error", "condition")) + stop(err) + } + } + ) +} + +#' @noRd +#' @rdname when_some + +when_any <- function(..., .list = list()) { + when_some(1, ..., .list = .list)$then(function(x) x[[1]]) +} + +when_any <- mark_as_async(when_any) + +#' Repeatedly call task, while test returns true +#' +#' @param test Synchronous test function. +#' @param task Asynchronous function to call repeatedly. +#' @param ... Arguments to pass to `task`. 
+#' @return Deferred value, that is resolved when the iteration is done. +#' +#' @family async control flow +#' @noRd +#' @examples +#' ## Keep calling while result is bigger than 0.1 +#' calls <- 0 +#' number <- Inf +#' synchronise(async_whilst( +#' function() number >= 0.1, +#' function() { +#' calls <<- calls + 1 +#' number <<- runif(1) +#' } +#' )) +#' calls + +async_whilst <- function(test, task, ...) { + force(test) + task <- async(task) + + self <- deferred$new( + type = "async_whilst", call = sys.call(), + action = function(resolve) { + if (!test()) { + resolve(NULL) + } else { + task(...)$then(self) + } + }, + parent_resolve = function(value, resolve) { + if (!test()) { + resolve(value) + } else { + task(...)$then(self) + } + } + ) + + self +} + +async_whilst <- mark_as_async(async_whilst) + +#' Worker pool +#' +#' The worker pool functions are independent of the event loop, to allow +#' independent testing. +#' +#' @family worker pool functions +#' @name worker_pool +#' @noRd +#' @keywords internal +#' @importFrom R6 R6Class +NULL + +worker_pool <- R6Class( + public = list( + initialize = function() + wp_init(self, private), + add_task = function(func, args, id, event_loop) + wp_add_task(self, private, func, args, id, event_loop), + get_fds = function() + wp_get_fds(self, private), + get_pids = function() + wp_get_pids(self, private), + get_poll_connections = function() + wp_get_poll_connections(self, private), + notify_event = function(pids, event_loop) + wp_notify_event(self, private, pids, event_loop), + start_workers = function() + wp_start_workers(self, private), + kill_workers = function() + wp_kill_workers(self, private), + cancel_task = function(id) + wp_cancel_task(self, private, id), + cancel_all_tasks = function() + wp_cancel_all_tasks(self, private), + get_result = function(id) + wp_get_result(self, private, id), + list_workers = function() + wp_list_workers(self, private), + list_tasks = function(event_loop = NULL, status = NULL) + 
wp_list_tasks(self, private, event_loop, status), + finalize = function() self$kill_workers() + ), + + private = list( + workers = list(), + tasks = list(), + + try_start = function() + wp__try_start(self, private), + interrupt_worker = function(pid) + wp__interrupt_worker(self, private, pid) + ) +) + +wp_init <- function(self, private) { + self$start_workers() + invisible(self) +} + +wp_start_workers <- function(self, private) { + num <- worker_pool_size() + + ## See if we need to start more + if (NROW(private$workers) >= num) return(invisible()) + + ## Yeah, start some more + to_start <- num - NROW(private$workers) + sess <- lapply(1:to_start, function(x) callr::r_session$new(wait = FALSE)) + fd <- viapply(sess, function(x) processx::conn_get_fileno(x$get_poll_connection())) + new_workers <- data.frame( + stringsAsFactors = FALSE, + session = I(sess), + task = NA_character_, + pid = viapply(sess, function(x) x$get_pid()), + fd = fd, + event_loop = NA_integer_ + ) + + private$workers <- rbind(private$workers, new_workers) + invisible() +} + +wp_add_task <- function(self, private, func, args, id, event_loop) { + private$tasks <- rbind( + private$tasks, + data.frame( + stringsAsFactors = FALSE, + event_loop = event_loop, id = id, func = I(list(func)), + args = I(list(args)), status = "waiting", result = I(list(NULL))) + ) + + private$try_start() + invisible() +} + +## We only need to poll the sessions that actually do something... 
+ +wp_get_fds <- function(self, private) { + sts <- vcapply(private$workers$session, function(x) x$get_state()) + private$workers$fd[sts %in% c("starting", "busy")] +} + +wp_get_pids <- function(self, private) { + sts <- vcapply(private$workers$session, function(x) x$get_state()) + private$workers$pid[sts %in% c("starting", "busy")] +} + +wp_get_poll_connections <- function(self, private) { + sts <- vcapply(private$workers$session, function(x) x$get_state()) + busy <- sts %in% c("starting", "busy") + structure( + lapply(private$workers$session[busy], + function(x) x$get_poll_connection()), + names = private$workers$pid[busy]) +} + +wp_notify_event <- function(self, private, pids, event_loop) { + done <- NULL + dead <- integer() + which <- match(pids, private$workers$pid) + for (w in which) { + msg <- private$workers$session[[w]]$read() + if (is.null(msg)) next + if (msg$code == 200 || (msg$code >= 500 && msg$code < 600)) { + if (msg$code >= 500 && msg$code < 600) dead <- c(dead, w) + wt <- match(private$workers$task[[w]], private$tasks$id) + if (is.na(wt)) stop("Internal error, no such task") + private$tasks$result[[wt]] <- msg + private$tasks$status[[wt]] <- "done" + private$workers$task[[w]] <- NA_character_ + done <- c(done, private$tasks$id[[wt]]) + } + } + if (length(dead)) { + private$workers <- private$workers[-dead,] + self$start_workers() + } + + private$try_start() + + done +} + +worker_pool_size <- function() { + getOption("async.worker_pool_size") %||% + as.integer(Sys.getenv("ASYNC_WORKER_POOL_SIZE", 4)) +} + +wp_kill_workers <- function(self, private) { + lapply(private$workers$session, function(x) x$kill()) + private$workers <- NULL + invisible() +} + +wp_cancel_task <- function(self, private, id) { + wt <- match(id, private$tasks$id) + if (is.na(wt)) stop("Unknown task") + + if (private$tasks$status[[wt]] == "running") { + wk <- match(id, private$workers$task) + if (!is.na(wk)) private$interrupt_worker(private$workers$pid[wk]) + } + private$tasks <- 
private$tasks[-wt, ] + invisible() +} + +wp_cancel_all_tasks <- function(self, private) { + stop("`cancel_all_tasks` method is not implemented yet") +} + +wp_get_result <- function(self, private, id) { + wt <- match(id, private$tasks$id) + if (is.na(wt)) stop("Unknown task") + + if (private$tasks$status[[wt]] != "done") stop("Task not done yet") + result <- private$tasks$result[[wt]] + private$tasks <- private$tasks[-wt, ] + result +} + +wp_list_workers <- function(self, private) { + private$workers[, setdiff(colnames(private$workers), "session")] +} + +wp_list_tasks <- function(self, private, event_loop, status) { + dont_show <- c("func", "args", "result") + ret <- private$tasks + if (!is.null(event_loop)) ret <- ret[ret$event_loop %in% event_loop, ] + if (!is.null(status)) ret <- ret[ret$status %in% status, ] + ret[, setdiff(colnames(private$tasks), dont_show)] +} + +## Internals ------------------------------------------------------------- + +#' @importFrom utils head + +wp__try_start <- function(self, private) { + sts <- vcapply(private$workers$session, function(x) x$get_state()) + if (all(sts != "idle")) return() + can_work <- sts == "idle" + + can_run <- private$tasks$status == "waiting" + num_start <- min(sum(can_work), sum(can_run)) + will_run <- head(which(can_run), num_start) + will_work <- head(which(can_work), num_start) + + for (i in seq_along(will_run)) { + wt <- will_run[[i]] + ww <- will_work[[i]] + func <- private$tasks$func[[wt]] + args <- private$tasks$args[[wt]] + private$workers$session[[ww]]$call(func, args) + private$tasks$status[[wt]] <- "running" + private$workers$task[[ww]] <- private$tasks$id[[wt]] + } + + invisible() +} + +#' Interrupt a worker process +#' +#' We need to make sure that the worker is in a usable state after this. +#' +#' For speed, we try to interrupt with a SIGINT first, and if that does +#' not work, then we kill the worker and start a new one. 
+#'
+#' When we interrupt with a SIGINT a number of things can happen:
+#' 1. We successfully interrupted a computation, then
+#' we'll just poll_io(), and read() and we'll get back an
+#' interrupt error.
+#' 2. The computation has finished, so we did not interrupt it.
+#' In this case the background R process will apply the interrupt
+#' to the next computation (at least on Unix) so the bg process
+#' needs to run a quick harmless call to absorb the interrupt.
+#' We can use `Sys.sleep()` for this, and `write_input()` directly
+#' for speed and simplicity.
+#' 3. The process has crashed already, in this case `interrupt()` will
+#' return `FALSE`. `poll_io()` will return with "ready" immediately,
+#' `read()` will return with an error, and `write_input()` throws
+#' an error.
+#' 4. We could not interrupt the process, because it was in a
+#' non-interruptible state. In this case we kill it, and start a
+#' new process. `poll_io()` will not return with "ready" in this case.
+#'
+#' @param self self
+#' @param private private self
+#' @param pid pid of process
+#' @noRd
+
+wp__interrupt_worker <- function(self, private, pid) {
+  ww <- match(pid, private$workers$pid)
+  if (is.na(ww)) stop("Unknown task in interrupt_worker() method")
+
+  kill <- FALSE
+  sess <- private$workers$session[[ww]]
+  int <- sess$interrupt()
+  pr <- sess$poll_io(100)["process"]
+
+  if (pr == "ready") {
+    msg <- sess$read()
+    if (! inherits(msg, "interrupt")) {
+      tryCatch({
+        sess$write_input("base::Sys.sleep(0)\n")
+        sess$read_output()
+        sess$read_error()
+      }, error = function(e) kill <<- TRUE)
+    }
+    private$workers$task[[ww]] <- NA_character_
+  } else {
+    kill <- TRUE
+  }
+
+  if (kill) {
+    sess$close()
+    private$workers <- private$workers[-ww, ]
+    ## Make sure that we have enough workers running
+    self$start_workers()
+  }
+
+  invisible()
+}
+
+#' External process via a process generator
+#'
+#' Wrap any [processx::process] object into a deferred value.
The +#' process is created by a generator function. +#' +#' @param process_generator Function that returns a [processx::process] +#' object. See details below about the current requirements for the +#' returned process. +#' @param error_on_status Whether to fail if the process terminates +#' with a non-zero exit status. +#' @param ... Extra arguments, passed to `process_generator`. +#' @return Deferred object. +#' +#' Current requirements for `process_generator`: +#' * It must take a `...` argument, and pass it to +#' `processx::process$new()`. +#' * It must use the `poll_connection = TRUE` argument. +#' These requirements might be relaxed in the future. +#' +#' If you want to obtain the standard output and/or error of the +#' process, then `process_generator` must redirect them to files. +#' If you want to discard them, `process_generator` can set them to +#' `NULL`. +#' +#' `process_generator` should not use pipes (`"|"`) for the standard +#' output or error, because the process will stop running if the +#' pipe buffer gets full. We currently never read out the pipe buffer. +#' +#' @noRd +#' @examples +#' \dontrun{ +#' lsgen <- function(dir = ".", ...) { +#' processx::process$new( +#' "ls", +#' dir, +#' poll_connection = TRUE, +#' stdout = tempfile(), +#' stderr = tempfile(), +#' ... +#' ) +#' } +#' afun <- function() { +#' external_process(lsgen) +#' } +#' synchronise(afun()) +#' } + +external_process <- function(process_generator, error_on_status = TRUE, + ...) { + + process_generator; error_on_status; args <- list(...) 
+ args$encoding <- args$encoding %||% "" + args$cleanup_tree <- args$cleanup_tree %||% TRUE + + id <- NULL + + deferred$new( + type = "external_process", call = sys.call(), + action = function(resolve) { + resolve + reject <- environment(resolve)$private$reject + px <- do.call(process_generator, args) + stdout <- px$get_output_file() + stderr <- px$get_error_file() + pipe <- px$get_poll_connection() + id <<- get_default_event_loop()$add_process( + list(pipe), + function(err, res) if (is.null(err)) resolve(res) else reject(err), + list(process = px, stdout = stdout, stderr = stderr, + error_on_status = error_on_status, encoding = args$encoding) + ) + }, + on_cancel = function(reason) { + if (!is.null(id)) get_default_event_loop()$cancel(id) + } + ) +} diff --git a/R/api.R b/R/api.R index 67d9d11..0cb825d 100644 --- a/R/api.R +++ b/R/api.R @@ -1,104 +1,104 @@ - baseurl <- function() { - paste0(Sys.getenv("RHUB_SERVER", "https://builder.r-hub.io"), "/api") + paste0(Sys.getenv("RHUB_SERVER", "https://builder2.rhub.io"), "/api/-") } -rhub_server <- baseurl - -endpoints <- list( - c("GET PLATFORMS", "GET", "/platform/list", FALSE), - c("VALIDATE EMAIL", "POST", "/check/validate_email", FALSE), - c("SUBMIT PACKAGE", "POST", "/check/submit", FALSE), - c("GET STATUS", "POST", "/status", FALSE), - c("GET GROUP STATUS", "GET", "/status/group/:id", FALSE), - c("LIST BUILDS EMAIL", "GET", "/list/:email", TRUE), - c("LIST BUILDS PACKAGE", "GET", "/list/:email/:package", TRUE), - c("LIVE LOG", "GET", "/livelog/text/:id", FALSE) -) - default_headers <- c( - "Accept" = "application/json", - "Content-Type" = "application/json", - "User-Agent" = "R-hub client" + "accept" = "application/json", + "content-type" = "application/json", + "user-agent" = "R-hub client" ) -#' @importFrom httr GET POST DELETE add_headers #' @importFrom jsonlite toJSON -query <- function(endpoint, params = list(), data = NULL, - query = list(), headers = character(), as = NULL) { - - ep <- 
get_endpoint(endpoint, params) - headers <- update( - update(default_headers, ep$headers), - as.character(headers)) - - url <- paste0(baseurl(), ep$path) - - json <- if (!is.null(data)) toJSON(data) - - response <- if (ep$method == "GET") { - GET(url, add_headers(.headers = headers), query = query) +query <- function(endpoint, method = "GET", headers = character(), + data = NULL, data_form = NULL, sse = FALSE) { - } else if (ep$method == "POST") { - POST(url, add_headers(.headers = headers), body = json, query = query) - - } else if (ep$method == "DELETE") { - DELETE(url, add_headers(.headers = headers), query = query) + url <- paste0(baseurl(), endpoint) + headers <- update(default_headers, headers) + response <- if (sse) { + query_sse(method, url, headers, data, data_form) } else { - stop("Unexpected HTTP verb, internal rhub error") + query_plain(method, url, headers, data, data_form) } - report_error(response) + if (response$status_code >= 400) { + cnd <- http_error(response) + tryCatch({ + bdy <- jsonlite::fromJSON( + rawToChar(response$content), + simplifyVector = FALSE + ) + }, error = function(err) { stop(cnd) }) + if ("message" %in% names(bdy)) { + throw(new_error(bdy[["message"]]), parent = cnd) + } else { + stop(cnd) + } + } - parse_response(response, as = as) + response } -get_endpoint <- function(endpoint, params) { - - idx <- match(endpoint, vapply(endpoints, "[[", "", 1)) - if (is.na(idx)) stop("Unknown API endpoint: ", sQuote(endpoint)) - - method <- endpoints[[idx]][2] - path <- endpoints[[idx]][3] - - colons <- re_match_all(path, ":[a-zA-Z0-9_]+")$.match[[1]] +query_sse <- function(method, url, headers, data, data_form) { + synchronise( + query_sse_async(method, url, headers, data, data_form) + ) +} - for (col in colons) { - col1 <- substring(col, 2) - value <- params[[col1]] %||% stop("Unknown API parameter: ", col) - path <- gsub(col, value, path, fixed = TRUE) +query_sse_async <- function(method, url, headers, data, data_form) { + if (method == 
"GET") { + q <- http_get(url, headers = headers) + } else if (method == "POST") { + q <- http_post( + url, + headers = headers, + data = data, + data_form = data_form + ) + } else { + stop("Unexpected HTTP verb, internal rhub error") } - headers <- if (endpoints[[idx]][[4]]) { - if (is.null(params$token)) { - stop("Cannot find token, email address is not validated?") + msgs <- list() + handle_sse <- function(evt) { + msgs <<- c(msgs, list(evt)) + if (evt[["event"]] == "progress") { + msg <- jsonlite::fromJSON(evt[["data"]]) + cli::cli_alert(msg, .envir = emptyenv()) + } else if (evt[["event"]] == "result") { + cli::cli_alert_success("Done.") + } else if (evt[["event"]] == "error") { + msg <- jsonlite::fromJSON(evt[["data"]]) + cli::cli_alert_danger(msg, .envir = emptyenv()) + stop("Aborting") } - c("Authorization" = paste("token", params$token)) } - list(method = method, path = path, headers = headers) -} - -#' @importFrom httr headers content -#' @importFrom jsonlite fromJSON - -parse_response <- function(response, as = NULL) { + evs <- sse_events$new(q) + evs$listen_on("event", handle_sse) - content_type <- headers(response)$`content-type` + q$then(function(response) { + response$sse <- msgs + response + }) +} - if (is.null(content_type) || length(content_type) == 0) { - content(response, as = "text") +query_plain <- function(method, url, headers, data, data_form) { + response <- if (method == "GET") { + synchronise(http_get(url, headers = headers)) - } else if (grepl("^application/json", content_type, ignore.case = TRUE)) { - if (is.null(as)) { - fromJSON(content(response, as = "text"), simplifyVector = FALSE) - } else { - content(response, as = as) - } + } else if (method == "POST") { + synchronise(http_post( + url, + headers = headers, + data = data, + data_form = data_form + )) } else { - content(response, as = "text") + stop("Unexpected HTTP verb, internal rhub error") } -} + + response +} \ No newline at end of file diff --git a/R/assertions.R 
b/R/assertions.R index e2563c9..f4bbb57 100644 --- a/R/assertions.R +++ b/R/assertions.R @@ -1,110 +1,68 @@ -#' @importFrom assertthat assert_that on_failure<- - -is_pkg_dir <- function(path) { - file.exists(path) && - file.info(path)$isdir && - file.exists(file.path(path, "DESCRIPTION")) -} - -is_pkg_tarball <- function(path) { - file.exists(path) && - grepl("\\.tar\\.gz", path) -} - -is_pkg_dir_or_tarball <- function(path) { - is_pkg_tarball(path) || is_pkg_dir(path) -} - -on_failure(is_pkg_dir_or_tarball) <- function(call, env) { - paste0( - deparse(call$path), - " is not an R package directory or source R package" - ) +is_character <- function(x) { + if (!is.character(x)) { + structure( + FALSE, + msg = "{.arg {(.arg)}} must be a character vector without {.code NA}, + but it is {.type {x}}", + env = environment() + ) + } else if (anyNA(x)) { + structure( + FALSE, + msg = "{.arg {(.arg)}} must be a character vector without {.code NA}, + but it has {sum(is.na(x))} {.code NA} value{?s}.", + env = environment() + ) + } else { + TRUE + } } is_string <- function(x) { - !is.null(x) && - is.character(x) && - length(x) == 1 && - !is.na(x) -} - -on_failure(is_string) <- function(call, env) { - paste0(deparse(call$x), " is not a string") -} - -is_string_or_null <- function(x) { - is_string(x) || is.null(x) -} - -on_failure(is_string_or_null) <- function(call, env) { - paste0(deparse(call$x), " is not a string and not NULL") -} - -is_email <- function(x) { - assert_that(is_string(x)) - grepl(".@.", x) -} - -on_failure(is_email) <- function(call, env) { - paste0(deparse(call$x), " is not an email address") -} - -is_flag <- function(x) { - !is.null(x) && - is.logical(x) && - length(x) == 1 && - !is.na(x) -} - -on_failure(is_flag) <- function(call, env) { - paste0(deparse(call$x), " is not a flag (length one logical)") -} - -is_named <- function(x) { - length(names(x)) == length(x) && - all(names(x) != "") -} - -on_failure(is_named) <- function(call, env) { - 
paste0(deparse(call$x), " does not have names") -} - -is_token <- function(x) { - assert_that(is_string(x)) - grepl("[a-zA-Z0-9]{6}", x, perl = TRUE) -} - -on_failure(is_token) <- function(call, env) { - paste0(deparse(call$x), " does not look like an R-hub token") -} - -is_check_ids <- function(x) { - is.character(x) && length(x) >= 1 && all(x != "") -} - -on_failure(is_check_ids) <- function(call, env) { - paste0(deparse(call$x), " is not a vector of check ids") -} - -is_count <- function(x) { - is.numeric(x) && length(x) == 1 && as.integer(x) == x -} - -on_failure(is_count) <- function(call, env) { - paste0(deparse(call$x), " is not a count (length 1 integer)") -} - -as_timeout <- function(x) { - if (inherits(x, "difftime")) return(x) - as.difftime(as.double(x), units = "secs") -} - -is_timeout <- function(x) { - inherits(x, "difftime") && length(x) == 1 && !is.na(x) + if (is.character(x) && length(x) == 1 && !is.na(x)) return(TRUE) + if (is.character(x) && length(x) == 1 && is.na(x)) { + structure( + FALSE, + msg = "{.arg {(.arg)}} must not be {.code NA}.", + env = environment() + ) + } else { + structure( + FALSE, + msg = "{.arg {(.arg)}} must be a string (character scalar), + but it is {.type {x}}.", + env = environment() + ) + } +} + +is_optional_string <- function(x) { + if (is.null(x) || is_string(x)) return(TRUE) + structure( + FALSE, + msg = "{.arg {(.arg)}} must be a path (character scalar), + but it is {.type {x}}.", + env = environment() + ) } -on_failure(is_timeout) <- function(call, env) { - paste0(deparse(call$x), " must be a timeout, a 'difftime' constant") +is_optional_gh_url <- function(x) { + if (is.null(x)) return(TRUE) + + if (!is_string(x)) { + structure( + FALSE, + msg = "{.arg gh_url} must be a character string. + You supplied {.type {x}}." + ) + } else if (!grepl("^https?://", x)) { + structure( + FALSE, + msg = "{.arg gh_url} must be an HTTP or HTTPS URL. + You supplied: {.val {x}}." 
+    )
+  } else {
+    TRUE
+  }
+}
diff --git a/R/assertthat.R b/R/assertthat.R
new file mode 100644
index 0000000..c24b704
--- /dev/null
+++ b/R/assertthat.R
@@ -0,0 +1,183 @@
+
+assert_that <- function(..., env = parent.frame(), msg = NULL) {
+  asserts <- eval(substitute(alist(...)))
+
+  for (assertion in asserts) {
+    res <- tryCatch({
+      eval(assertion, env)
+    }, assertError = function(e) {
+      structure(FALSE, msg = e$message)
+    })
+    check_result(res)
+    if (res) next
+
+    if (is.null(msg)) {
+      msg <- get_message(res, assertion, env)
+      evalenv <- attr(res, "env") %||% env
+    } else {
+      evalenv <- env
+    }
+    throw(assert_error(
+      assertion,
+      res,
+      msg,
+      call. = sys.call(-1),
+      .envir = evalenv
+    ), frame = env)
+  }
+
+  invisible(TRUE)
+}
+
+assert_error <- function(assertion, result, msg, .data = NULL, .class = NULL,
+                         .envir = parent.frame(), call. = TRUE) {
+
+  myenv <- new.env(parent = .envir)
+  myenv$.arg <- if (length(assertion) >= 2) deparse(assertion[[2]])
+  myenv$.arg2 <- if (length(assertion) >= 3) deparse(assertion[[3]])
+  .hide_from_trace <- TRUE
+  cnd <- new_error(
+    call. = call.,
+    cli::format_error(
+      .envir = myenv,
+      msg
+    )
+  )
+
+  if (length(.data)) cnd[names(.data)] <- .data
+  class(cnd) <- unique(c(.class, "assertError", class(cnd)))
+
+  cnd
+}
+
+check_result <- function(x) {
+  if (!is.logical(x)) {
+    throw(pkg_error(
+      "{.fun assert_that}: assertion must return a logical value.",
+      "i" = "it returned {.type {x}} instead."
+    ))
+  }
+
+  if (length(x) != 1) {
+    throw(pkg_error(
+      "{.fun assert_that}: assertion must return a scalar.",
+      "i" = "it returned a vector of length {length(x)}."
+    ))
+  }
+
+  if (any(is.na(x))) {
+    throw(pkg_error(
+      "{.fun assert_that}: assertion must not return {.code NA}."
+    ))
+  }
+
+  TRUE
+}
+
+get_message <- function(res, call, env = parent.frame()) {
+  if (has_attr(res, "msg")) {
+    return(attr(res, "msg"))
+  }
+
+  f <- eval(call[[1]], env)
+  if (is.call(call) && !is.primitive(f)) call <- match.call(f, call)
+  fname <- deparse(call[[1]])
+
+  base_fs[[fname]] %||% fail_default(call, env)
+}
+
+# The default failure message works the same way as stopifnot(), so you can
+# still use any function that returns a logical value; you just won't get a
+# friendly error message. Long calls are truncated to their first line
+# (about 60 characters) followed by "...".
+fail_default <- function(call, env) {
+  call_string <- deparse(call, width.cutoff = 60L)
+  if (length(call_string) > 1L) {
+    call_string <- paste0(call_string[1L], "...")
+  }
+
+  paste0(call_string, " is not true")
+}
+
+has_attr <- function(x, which) {
+  if (!is.null(attr(x, which, exact = TRUE))) return(TRUE)
+  structure(
+    FALSE,
+    msg = "{.arg {(.arg)}} must have attribute {.code {which}}.",
+    env = environment()
+  )
+}
+"%has_attr%" <- has_attr
+
+base_fs <- new.env(parent = emptyenv())
+
+# nocov start
+
+logical_is_not <- function(failed) {
+  paste0("{.arg {(.arg)}} must ", failed, " {.arg {(.arg2)}}.")
+}
+
+base_fs$"==" <- logical_is_not("equal")
+base_fs$"<" <- logical_is_not("be less than")
+base_fs$">" <- logical_is_not("be greater than")
+base_fs$">=" <- logical_is_not("be greater than or equal to")
+base_fs$"<=" <- logical_is_not("be less than or equal to")
+base_fs$"!=" <- logical_is_not("not be equal to")
+
+is_not <- function(thing) {
+  paste0("{.arg {(.arg)}} must be ", thing, ".")
+}
+
+# nocov end
+
+# Vectors
+base_fs$is.atomic <- is_not("an atomic vector")
+base_fs$is.character <- is_not("a character vector")
+base_fs$is.complex <- is_not("a complex vector")
+base_fs$is.double <- is_not("a numeric vector")
+base_fs$is.integer <- is_not("an integer vector")
+base_fs$is.numeric <- is_not("a numeric or integer vector")
+base_fs$is.raw <- is_not("a raw vector")
+base_fs$is.vector <- is_not("an atomic vector without attributes") + +# Factors +base_fs$is.factor <- is_not("a factor") +base_fs$is.ordered <- is_not("an ordered factor") + +# More complicated data structures +base_fs$is.array <- is_not("an array") +base_fs$is.data.frame <- is_not("a data frame") +base_fs$is.list <- is_not("a list") +base_fs$is.matrix <- is_not("a matrix") +base_fs$is.null <- is_not("{.code NULL}") + +# Functions and environments +base_fs$is.environment <- is_not("an environment") +base_fs$is.function <- is_not("a function") +base_fs$is.primitive <- is_not("a primitive function") + +# Computing on the language +base_fs$is.call <- is_not("a quoted call") +base_fs$is.expression <- is_not("an expression object") +base_fs$is.name <- is_not("a name") +base_fs$is.pairlist <- is_not("a pairlist") +base_fs$is.recursive <- is_not("a recursive object") +base_fs$is.symbol <- is_not("a name") + +# Catch all +base_fs$"&&" <- + "{.arg {(.arg)}} and {.arg {(.arg2)}} must both be true." + +base_fs$"||" <- + "One of {.arg {(.arg)}} and {.arg {(.arg2)}} must be true." + +base_fs$any <- + "At least one of {.arg {(.arg)}} must be true." + +base_fs$all <- + "All of {.arg {(.arg)}} must be true." + +base_fs$file.exists <- + "Path {.arg {(.arg)}} must exist." + +base_fs$identical <- + "{.arg {(.arg)}} must be identical to {.arg {(.arg2)}}." diff --git a/R/build.R b/R/build.R deleted file mode 100644 index 18a7739..0000000 --- a/R/build.R +++ /dev/null @@ -1,26 +0,0 @@ - -#' @importFrom withr with_dir -#' @importFrom callr rcmd_safe - -build_package <- function(path, tmpdir) { - - path <- normalizePath(path) - - dir.create(tmpdir) - file.copy(path, tmpdir, recursive = TRUE) - - ## If not a tar.gz, build it. Otherwise just leave it as it is. 
- if (file.info(path)$isdir) { - build_status <- with_dir( - tmpdir, - rcmd_safe("build", basename(path)) - ) - unlink(file.path(tmpdir, basename(path)), recursive = TRUE) - report_system_error("Build failed", build_status) - } - - file.path( - tmpdir, - list.files(tmpdir, pattern = "\\.tar\\.gz$") - ) -} diff --git a/R/check-class.R b/R/check-class.R deleted file mode 100644 index f755bd4..0000000 --- a/R/check-class.R +++ /dev/null @@ -1,438 +0,0 @@ - -#' @title R-hub check ids -#' @section R-hub ids: -#' -#' Every R-hub check has a unique id, that is constructed from the -#' name of the source package archive, and a random string. For example: -#' ```r -#' devtools_2.0.0.tar.gz-fe53bbba85de4a579f6dc3b852bf76a3 -#' ``` -#' -#' @section R-hub group ids: -#' -#' For every check submission, R-hub also creates a unique check group id. -#' One check group may contain multiple checks. E.g. [check_for_cran()] -#' typically creates three or four check groups. Group ids look the same -#' as individual check ids. -#' -#' @section Abbreviating ids: -#' -#' The rhub package keeps a list of all the checks that it has seen in the -#' current session, and these checks can be also referenced by any unique -#' prefix of the random string part of the id, e.g. in the [get_check()] -#' function. E.g. if rhub already know the devtools check above, then -#' ```r -#' get_check("fe53bbb") -#' ``` -#' works. -#' -#' This is only recommended in interactive mode, and we suggest that you -#' always use the full ids when using rhub programmatically. -#' -#' @name rhub-ids -NULL - -#' An `rhub_check` object holds status and results of rhub checks -#' -#' @section Usage: -#' ``` -#' ch <- rhub_check$new(ids = NULL, status = NULL, group = NULL) -#' ch$get_ids() -#' ch$update() -#' ch$print(...) -#' ch$browse(which = NULL) -#' ch$urls(which = NULL) -#' ch$livelog(which = 1) -#' ch$cran_summary() -#' ``` -#' -#' @section Arguments: -#' * `ch` An rhub check object. 
It can be created using [`check()`], -#' and other check functions including [`check_for_cran`]. -#' See also [last_check()]. -#' * `ids` Character vector of check ids. -#' * `status` Check status for `ids` or `group`. -#' * `group` Check group id, string scalar. Either `group` or `ids` must -#' be non-`NULL`. -#' * `...` Extra arguments are currently ignored. -#' * `which` Which check to show, if the object contains multiple -#' checks. For `browse` the default is all checks. For `livelog` the -#' default is the first check. A check can be selected via its number -#' or id. -#' -#' @section Details: -#' -#' An `rhub_check` object can be created by [check()], [list_my_checks()], -#' or [list_package_checks()]. [last_check()] returns the last check(s) -#' submitted from the current R session. Do not confuse `rhub_check`/`rhub_check_for_cran` -#' (classes) with [check()] or [check_for_cran()] (functions). -#' -#' `ch$get_ids()` returns the check ids. These can be used to query if a -#' check has finished. -#' -#' `ch$update()` updates the status of the check. Printing the check -#' status to the screen does not perform an update, unless the status of -#' the check(s) is unknown. -#' -#' `ch$print()` prints the status of the check(s) to the screen. -#' -#' `ch$cran_summary()` prints text to be copy-pasted in cran-comments.md, -#' it is especially useful on the output of [`check_for_cran()`]. -#' -#' `ch$browse()` opens a tab or window in the default web browser, that points -#' to the detailed logs of the check(s). -#' -#' `ch$urls()` return a [`tibble::tibble`] with URL to the html log, text log and artifacts -#' of the check(s). -#' -#' For both `ch$browse()` and `ch$urls()`, note that the logs and artifacts -#' are not kept forever, they are accessible for a few days after submission. -#' -#' `ch$livelog()` shows the live log of the check. The live log can be -#' interrupted using the usual interruption keyboard shortcut, usually -#' `CTRL+c` or `ESC`. 
-#' -#' @name rhub_check -NULL - -#' @importFrom R6 R6Class - -rhub_check <- R6Class( - "rhub_check", - - public = list( - - initialize = function(ids = NULL, status = NULL, group = NULL) - check_init(self, private, ids, status, group), - - get_ids = function() private$ids_, - - update = function() - check_update(self, private), - - print = function(...) - check_print(self, private, ...), - - web = function(which = NULL) - check_web(self, private, which), - - browse = function(which = NULL) - self$web(which), - - urls = function(which = NULL) - check_urls(self, private, which), - - livelog = function(which = 1) - check_livelog(self, private, which), - - cran_summary = function() - check_cran_summary(self, private) - ), - - private = list( - ids_ = NULL, # character vector of ids - group_ = NULL, # group id - status_ = NULL, # list of status objects, as in DB - status_updated_ = NULL # last time status was updated - ) -) - -check_init <- function(self, private, ids, status, group) { - assert_that( - is_check_ids(ids) || is.null(ids), - (is_check_ids(group) && length(group) == 1) || is.null(group), - !is.null(ids) || !is.null(group)) - - private$ids_ <- ids - private$group_ <- group - private$status_ <- status - status_updated_ <- Sys.time() - invisible(self) -} - -check_update <- function(self, private) { - ## If it is a group, we need to get the ids first. This also updates - ## the status of the individual checks - if (!is.null(private$group_) && is.null(private$ids_)) { - grp <- query("GET GROUP STATUS", list(id = private$group_)) - private$ids_ <- map_chr(grp, "[[", "id") - private$status_ <- grp - private$status_updated_ <- Sys.time() - for (i in seq_along(grp)) cache_put(grp[[i]]$id, grp[[i]]) - return(invisible(self)) - } - - ## Check which ones need update. We need to update if we don't know - ## anything about the id, or if it has not finished yet. 
- cached <- lapply(private$ids_, cache_get) - need_upd <- map_lgl(cached, function(x) { - is.null(x) || x$status %in% c("created", "in-progress") - }) - - if (any(need_upd)) { - ## Update - upd <- query("GET STATUS", data = list(id = private$ids_[need_upd])) - cached[need_upd] <- upd - - ## Update the cache - for (i in seq_along(upd)) cache_put(private$ids_[need_upd][i], upd[[i]]) - } - - ## Update the object, we always do this, in case the object is outdated, - ## but the cache is not - private$status_ <- cached - private$status_updated_ <- Sys.time() - - invisible(self) -} - -#' @importFrom utils browseURL - -check_web <- function(self, private, which) { - - ids <- select_ids(which = which, self = self, - private = private) - - urls <- paste0(sub("/api$", "/status/", baseurl()), ids) - - lapply(urls, browseURL) - invisible(self) -} - -check_urls <- function(self, private, which) { - - ids <- select_ids(which = which, self = self, - private = private) - - tibble::tibble(html = paste0(sub("/api$", "/status/", baseurl()), ids), - text = paste0(sub("/api$", "/status/original/", baseurl()), - ids), - artifacts = paste0("https://artifacts.r-hub.io/", ids), - stringsAsFactors = FALSE) -} - -select_ids <- function(which, self, private){ - ids <- if (is.null(which)) { - private$ids_ - } else if (is.numeric(which)) { - private$ids_[which] - } else if (is.character(which)) { - intersect(private$ids_, which) - } else { - stop("Unknown check selected", - call. = FALSE) - } - - return(ids) -} - -check_cran_summary <- function(self, private) { - - self$update() - - x <- private$status_ - - statuses <- map_chr(x, "[[", "status") - - if (any(statuses %in% c("in-progress", "created"))) { - stop(paste("At least one of the builds has not finished yet.", - "Please wait before calling `cran_summary()` again."), - call. 
= FALSE) - } - - if (any(statuses %in% problem_statuses())) { - platforms <- lapply(x, "[[", "platform") - platform_names <- map_chr(platforms, "[[", "name") - stop(paste("Build failures on platforms:", - toString(platform_names[statuses %in% problem_statuses()]), - ". \n", - "Read the log(s) to fix and if needed ask for help via ", - "https://docs.r-hub.io/#pkg-dev-help"), - call. = FALSE) - } - - result <- do.call("rbind", - lapply(x, rectangle_status)) - - systems <- paste0("- R-hub ", - vapply(x, function(xx) xx$platform$name, ""), - " (", - vapply(x, function(xx) xx$platform$rversion, ""), - ")") - lines <- paste0(systems, "\n") - - - result <- result[!is.na(result$type),] - - if (nrow(result) > 0){ - message("For a CRAN submission we recommend that you fix all NOTEs, WARNINGs and ERRORs.") - unique_results <- unique(result[, c("type", "hash")]) - - makeshift <- structure( - list( - package = x$package, - version = toString(vapply(x, function(xx) xx$platform$name, "")), - rversion = toString(systems), - output = list(), - platform = toString(systems), - notes = unlist(lapply(unique(unique_results$hash[unique_results$type == "NOTE"]), - combine_message, result = result)), - warnings = unlist(lapply(unique(unique_results$hash[unique_results$type == "WARNING"]), - combine_message, result = result)), - errors = unlist(lapply(unique(unique_results$hash[unique_results$type == "ERROR"]), - combine_message, result = result)) - ), - class = "rcmdcheck" - ) - - } else { - makeshift <- structure( - list( - package = x$package, - version = toString(vapply(x, function(xx) xx$platform$name, "")), - rversion = toString(systems), - output = list(), - platform = toString(systems), - notes = NULL, - warnings = NULL, - errors = NULL - ), - class = "rcmdcheck" - ) - } - - cat("## Test environments\n") - cat(lines, sep = "") - cat("\n") - cat("## R CMD check results\n") - print(makeshift, header = FALSE) - - invisible(self) -} - - -get_status_part <- function(part, x){ - output <- 
unlist(x[part]) - if(is.null(output)){ - return("") - }else{ - output - } -} - -rectangle_status <- function(x){ - - df <- rbind(data.frame(type = "ERROR", - message = get_status_part("errors", x$result), - stringsAsFactors = FALSE), - data.frame(type = "WARNING", - message = get_status_part("warnings", x$result), - stringsAsFactors = FALSE), - data.frame(type = "NOTE", - message = get_status_part("notes", x$result), - stringsAsFactors = FALSE)) - - df <- df[df$message != "",] - - if(nrow(df) == 0){ - df <- data.frame(package = x$package, - type = NA, - message = NA, - hash = NA) - } else{ - df$hash <- hash_check(df$message) - } - - df$package <- x$package - df$version <- x$version - df$submitted <- x$submitted - df$platform <- paste0(x$platform$name, " (", x$platform$rversion, - ")") - - return(df) -} - -combine_message <- function(hash, result){ - paste0("On ", toString(result$platform[result$hash == hash]), "\n", - result$message[result$hash == hash][1]) -} - -# from rcmdcheck https://github.com/r-lib/rcmdcheck/blob/968fd9ba76ee9b7bf65d192568555ab57160165e/R/parse.R#L110 -#' @importFrom digest digest - -hash_check <- function(check) { - cleancheck <- gsub("[^a-zA-Z0-9]", "", first_line(check)) - vapply(cleancheck, digest::digest, "") -} - -# from rcmdcheck https://github.com/r-lib/rcmdcheck/blob/afadc6c53310cad2b64e0a58e399efd1ae18d7dd/R/utils.R#L91 -first_line <- function(x) { - l <- strsplit(x, "\n", fixed = TRUE) - vapply(l, "[[", "", 1) -} - -#' Retrieve the result of R-hub checks -#' -#' @param ids One of the following: -#' - A single R-hub check id. -#' - A character vector of check ids. -#' - An R-hub check group id. -#' All ids can be abbreviated, see [R-hub ids][rhub-ids]. -#' @return An [rhub_check] object. 
-#' -#' @section Examples: -#' ``` -#' chk <- get_check("915ee61") -#' chk -#' chk$update() -#' chk$browse() -#' chk$cran_summary() -#' chk$urls() -#' ``` -#' -#' @export -#' @seealso [list_my_checks()] and [list_package_checks()] to list -#' R-hub checks. - -get_check <- function(ids) { - assert_that(is_check_ids(ids)) - - short <- grep(re_id, ids, invert = TRUE, value = TRUE) - if (length(short) && - length(package_data$ids) == 0 && - length(package_data$groups) == 0) { - stop( - "Short check id '", short[1], "' ", - if (length(short) > 1) paste0("(and ", length(short)-1, " more) "), - "can only be used for cached ids, and no ids are cached yet.\n", - " Try calling `list_my_checks()` or `list_package_checks()` first." - ) - } - - sle <- cache_get_ids(ids) - grp <- cache_get_group_ids(ids) - - err <- NULL - - ## If we are not sure that it is a group id, then query single ids - res <- if (length(ids) > 1 || is.na(grp)) { - ids2 <- ifelse(is.na(sle), ids, sle) - tryCatch( - rhub_check$new(ids2)$update(), - error = function(e) { err <<- e; NULL } - ) - } - - if (!is.null(res)) return(res) - - ## If there is a chance that it is a group, then we try that as well - if (length(ids) == 1 && is.na(sle)) { - ids3 <- if (is.na(grp)) ids else grp - res <- rhub_check$new(group = ids3)$update() - res - } else { - stop(err) - } -} - -re_id <- "-[0-9a-f]{32}$" diff --git a/R/check-cran.R b/R/check-cran.R deleted file mode 100644 index ae6f3f4..0000000 --- a/R/check-cran.R +++ /dev/null @@ -1,56 +0,0 @@ - -#' Check an R-package on R-hub, for a CRAN submission -#' -#' This function calls [check()] with arguments and platforms, that -#' are suggested for a CRAN submission. -#' -#' In particular, if `platforms` is `NULL` (the default), then -#' * It checks the package on Windows, and Linux. -#' * It checks the package on R-release and R-devel. -#' * It uses the `--as-cran` argument to `R CMD check`. -#' * It requires all dependencies, including suggested ones. 
-#' -#' @details This function is wrapped by `devtools::check_rhub()` which you -#' might find useful if you load `devtools` via your .Rprofile (see `usethis::use_devtools()`). -#' -#' @param check_args Arguments for `R CMD check`. By default `--as-cran` -#' is used. -#' @param env_vars Character vector of environment variables to set on the builder. -#' By default `_R_CHECK_FORCE_SUGGESTS_="true"` is set, to require all packages used. -#' `_R_CHECK_CRAN_INCOMING_USE_ASPELL_="true"` is also set, to use the -#' spell checker. -#' @param ... Additional arguments are passed to [check()]. -#' @inheritParams check -#' @return An [rhub_check] object. -#' -#' @export -#' @examples -#' \dontrun{ -#' ch <- check_for_cran("package", show_status = FALSE) -#' ch$update() -#' ch$livelog(3) -#' } - -check_for_cran <- function( - path = ".", email = NULL, check_args = "--as-cran", - env_vars = c("_R_CHECK_FORCE_SUGGESTS_" = "true", - "_R_CHECK_CRAN_INCOMING_USE_ASPELL_" = "true"), platforms = NULL, - ...) { - - path <- normalizePath(path) - assert_that(is_pkg_dir_or_tarball(path)) - - platforms <- platforms %||% default_cran_check_platforms(path) - - check(path = path, platforms = platforms, email = email, - check_args = check_args, env_vars = env_vars, ...) -} - -default_cran_check_platforms <- function(path) { - c( - "windows-x86_64-devel", - "ubuntu-gcc-release", - "fedora-clang-devel", - if (needs_compilation(path)) "linux-x86_64-rocker-gcc-san" - ) -} diff --git a/R/check-shortcuts.R b/R/check-shortcuts.R deleted file mode 100644 index 3f142e1..0000000 --- a/R/check-shortcuts.R +++ /dev/null @@ -1,139 +0,0 @@ - -## Various OSes -------------------------------------------------------- - -#' Check an R package on an R-hub platform -#' -#' These functions provide a quick easy to use interface to check a -#' package on a platform with some particular aspect. Which platform -#' they use might change over time. -#' -#' @param ... Additional arguments are passed to [check()]. 
-#' @return An [rhub_check] object. -#' @inheritParams check -#' -#' @export -#' @rdname check_shortcuts - -check_on_linux <- function(path = ".", ...) { - check(path = path, platforms = check_shortcut_platforms$linux, ...) -} - -#' @export -#' @rdname check_shortcuts - -check_on_windows <- function(path = ".", ...) { - check(path = path, platforms = check_shortcut_platforms$windows, ...) -} - -#' @export -#' @rdname check_shortcuts - -check_on_macos <- function(path = ".", ...) { - check(path = path, platforms = check_shortcut_platforms$macos, ...) -} - -## Various Linux OSes -------------------------------------------------- - -#' @export -#' @rdname check_shortcuts - -check_on_debian <- function(path = ".", ...) { - check(path = path, platforms = check_shortcut_platforms$debian, ...) -} - -#' @export -#' @rdname check_shortcuts - -check_on_ubuntu <- function(path = ".", ...) { - check(path = path, platforms = check_shortcut_platforms$ubuntu, ...) -} - -#' @export -#' @rdname check_shortcuts - -check_on_fedora <- function(path = ".", ...) { - check(path = path, platforms = check_shortcut_platforms$fedora, ...) -} - -#' @export -#' @rdname check_shortcuts - -check_on_solaris <- function(path = ".", check_args = - "'--no-manual --no-build-vignettes'", ...) { - check(path = path, platforms = check_shortcut_platforms$solaris, - check_args = check_args, ...) -} - -#' @export -#' @rdname check_shortcuts - -check_on_centos <- function(path = ".", ...) { - check(path = path, platforms = check_shortcut_platforms$centos, ...) -} - -## R versions -------------------------------------------------------- - -#' @export -#' @rdname check_shortcuts - -check_with_roldrel <- function(path = ".", ...) { - check(path = path, platforms = check_shortcut_platforms$roldrel, ...) -} - -#' @export -#' @rdname check_shortcuts - -check_with_rrelease <- function(path = ".", ...) { - check(path = path, platforms = check_shortcut_platforms$rrelease, ...) 
-} - -#' @export -#' @rdname check_shortcuts - -check_with_rpatched <- function(path = ".", ...) { - check(path = path, platforms = check_shortcut_platforms$rpatched, ...) -} - -#' @export -#' @rdname check_shortcuts - -check_with_rdevel <- function(path = ".", ...) { - check(path = path, platforms = check_shortcut_platforms$rdevel, ...) -} - -## Extra checks -------------------------------------------------------- - -#' @export -#' @rdname check_shortcuts - -check_with_valgrind <- function(path = ".", ...) { - check(path = path, platforms = check_shortcut_platforms$valgrind, - valgrind = TRUE, ...) -} - -#' @export -#' @rdname check_shortcuts - -check_with_sanitizers <- function(path = ".", ...) { - check(path = path, platforms = check_shortcut_platforms$sanitizers, ...) -} - -## ------------------------------------------------------------------- - - -check_shortcut_platforms <- list( - "linux" = "debian-gcc-release", - "windows" = "windows-x86_64-release", - "macos" = "macos-highsierra-release", - "valgrind" = "debian-gcc-release", - "sanitizers" = "linux-x86_64-rocker-gcc-san", - "roldrel" = "windows-x86_64-oldrel", - "rrelease" = "debian-gcc-release", - "rpatched" = "debian-gcc-patched", - "rdevel" = "debian-gcc-devel", - "debian" = "debian-gcc-release", - "ubuntu" = "ubuntu-gcc-release", - "fedora" = "fedora-gcc-devel", - "centos" = "linux-x86_64-centos6-epel", - "solaris" = "solaris-x86-patched" -) diff --git a/R/check.R b/R/check.R index 42fda1d..6b0ed0f 100644 --- a/R/check.R +++ b/R/check.R @@ -1,114 +1,74 @@ -#' Check an R package on R-hub +#' Check a package on R-hub #' -#' @param path Path to a directory containing an R package, or path to -#' source R package tarball built with `R CMD build` or -#' `devtools::build()`. -#' @param platforms A character vector of one or more platforms to build/check -#' the package on. See [platforms()] for the available platforms. If this is -#' \code{NULL}, and the R session is interactive, then a menu is shown. 
If it -#' is \code{NULL}, and the session is not interactive, then the default R-hub -#' platforms are used. A vector of platforms which saves time by building one -#' R package tarball that is used for all the platforms specified. -#' @param email Email address to send notification to about the check. -#' It must be a validated email address, see [validate_email()]. If -#' `NULL`, then the email address of the maintainer is used, as defined -#' in the `DESCRIPTION` file of the package. -#' @param valgrind Whether to run the check in valgrind. Only supported on -#' Linux currently, and ignored on other platforms. -#' @param check_args Extra arguments for the `R CMD check` command. -#' @param env_vars Environment variables to set on the builder machine -#' before the check. A named character vector. -#' @param show_status Whether to show the status of the build and check -#' (live log) as it is happening. -#' @return An [rhub_check] object. +#' @param gh_url GitHub URL of a package to check, or `NULL` to check +#' the package in the current directory. +#' @param platforms Platforms to use, a character vector. Use `NULL` to +#' select from a list in interactive sessions. See [rhub_platforms()]. +#' @param r_versions Which R version(s) to use for the platforms that +#' support multiple R versions. This argument is not implemented yet. +#' @param branch Branch to use to run R-hub. Defaults to the current +#' branch if `gh_url` is `NULL`. Otherwise defaults to `"main"`. Note that +#' this branch also needs to include the `rhub.yaml` workflow file.
+#' @return TODO #' #' @export -#' @examples -#' \dontrun{ -#' check(".") -#' check("mypackage_1.0.0.tar.gz", platforms = "fedora-clang-devel") -#' } -check <- function(path = ".", platforms = NULL, - email = NULL, valgrind = FALSE, check_args = character(), - env_vars = character(), show_status = interactive()) { - - ## Check that it is a package - path <- normalizePath(path) - assert_that(is_pkg_dir_or_tarball(path)) - assert_that(is_flag(valgrind)) - assert_that(is_named(env_vars)) - assert_that(is.character(env_vars)) - - ## Make sure that maintainer email was validated - if (is.null(email)) email <- get_maintainer_email(path) - if (is.na(email)) stop("Cannot get email address from package") - assert_validated_email_for_check(email) +rhub_check <- function(gh_url = NULL, platforms = NULL, r_versions = NULL, + branch = NULL) { + assert_that( + is_optional_gh_url(gh_url), + is.null(platforms) || is_character(platforms), + is_optional_string(branch) + ) - platforms <- match_platform(platforms) + git_root <- if (is.null(gh_url)) setup_find_git_root() + pat_url <- gh_url %||% "https://github.com" + pat <- doctor_find_pat(pat_url) + gh_url <- gh_url %||% doctor_find_gh_url(repo = git_root) - ## Build the tar.gz, if needed - if (file.info(path)$isdir) { - header_line("Building package") - pkg_targz <- build_package(path, tmpdir <- tempfile()) - } else { - pkg_targz <- path + if (is.null(branch)) { + if (!is.null(git_root)) { + branch <- gert::git_branch(repo = git_root) + } else { + branch <- "main" + } } - ## Add valgrind to check_args - check_args <- c( - check_args, - if (valgrind) "--use-valgrind" - ) + platforms <- platforms %||% select_platforms() - ## Submit to R-hub - response <- submit_package( - email, - pkg_targz, - platforms = platforms, - check_args = check_args, - env_vars = env_vars + url <- parse_gh_url(gh_url) + ep <- glue::glue("/repos/{url$user}/{url$repo}/actions/workflows/rhub.yaml/dispatches") + config <- list(platforms = platforms) + name <- paste(platforms,
collapse = ", ") + id <- random_id() + data <- list( + ref = branch, + inputs = list( + config = jsonlite::toJSON(config, auto_unbox = TRUE), + name = name, + id = id + ) ) + jsondata <- jsonlite::toJSON(data, auto_unbox = TRUE) - ids <- vapply(response, "[[", "", "id") - chk <- rhub_check$new(ids = ids) + resp <- gh_rest_post(url$api, ep, token = pat, data = jsondata) - package_data$last_handle <- chk - lapply(ids, cache_put, status = NULL) - - ## Show the live status, if requested - if (show_status) chk$livelog() + if (resp$status_code != 204) { + throw(pkg_error( + ":( Failed to start check: {resp$content$message}.", + i = "If you think this is a bug in the {.pkg rhub} package, please + open an issue at {.url https://github.com/r-hub/rhub/issues}." + )) + } - invisible(chk) -} + aurl <- paste0("https://", url$host, "/", url$user, "/", url$repo, "/actions") + cli::cli_text() + cli::cli_bullets(c( + "v" = "Check started: {name} ({id}).", + " " = "See {.url {aurl}} for live output!" + )) -assert_validated_email_for_check <- function(email) { - assert_that(is_email(email)) - code <- email_get_token(email) - if (is.null(code)) { - if (is_interactive()) { - cat("\n") - message(paste(collapse = "\n", strwrap(indent = 2, exdent = 2, paste0( - sQuote(crayon::green(email)), " is not validated, or does not match ", - "the package maintainer's email. To validate it now, please enter ", - "the email address below. Note that R-hub will send a token to ", - "this address. If the address does not belong to you, quit now by ", - "pressing ", crayon::yellow("ENTER"), ". You can also specify a ", - "different email by suppling email=." - )))) - cat("\n") - email2 <- readline(" Email address: ") - cat("\n") - if (email2 == "") { - stop("Aborting.", call. = FALSE) - } else if (email != email2) { - stop("Emails don't match, aborting", call.
= FALSE) - } - validate_email(email) - } else { - stop(sQuote(email), " is not validated") - } - } + invisible(NULL) } diff --git a/R/cli.R b/R/cli.R new file mode 100644 index 0000000..ea10084 --- /dev/null +++ b/R/cli.R @@ -0,0 +1,11 @@ + +cli_status <- function(msg, ..., .auto_close = FALSE) { + msg + cli::cli_status( + msg = "{.alert {msg}}", + msg_done = "{.alert-success {msg}}", + msg_failed = "{.alert-danger {msg}}", + .auto_close = .auto_close, + ... + ) +} diff --git a/R/column-dt.R b/R/column-dt.R deleted file mode 100644 index e439221..0000000 --- a/R/column-dt.R +++ /dev/null @@ -1,13 +0,0 @@ - -column_dt <- function(x) { - as.difftime(x / 1000, units = "secs") -} - -#' @importFrom pillar pillar_shaft new_pillar_shaft_simple -#' @importFrom prettyunits pretty_dt -#' @export - -pillar_shaft.difftime <- function(x, ...) { - cx <- my_pretty_dt(x) - new_pillar_shaft_simple(cx, ...) -} diff --git a/R/column-group-id.R b/R/column-group-id.R deleted file mode 100644 index b26492b..0000000 --- a/R/column-group-id.R +++ /dev/null @@ -1,25 +0,0 @@ - -column_group_id <- function(x) { - structure(x, class = unique(c("rhub_column_group_id", class(x)))) -} - -#' @export - -`[.rhub_column_group_id` <- function(x, i) { - column_group_id(NextMethod("[")) -} - -#' @importFrom pillar pillar_shaft new_pillar_shaft_simple -#' @export - -pillar_shaft.rhub_column_group_id <- function(x, ...) { - cx <- shorten_rhub_id(x) - new_pillar_shaft_simple(cx, ...) 
-} - -#' @importFrom pillar type_sum -#' @export - -type_sum.rhub_column_group_id <- function(x) { - "rhub::group_id" -} diff --git a/R/column-id.R b/R/column-id.R deleted file mode 100644 index 49d3d8c..0000000 --- a/R/column-id.R +++ /dev/null @@ -1,25 +0,0 @@ - -column_id <- function(x) { - structure(x, class = unique(c("rhub_column_id", class(x)))) -} - -#' @export - -`[.rhub_column_id` <- function(x, i) { - column_id(NextMethod("[")) -} - -#' @importFrom pillar pillar_shaft new_pillar_shaft_simple -#' @export - -pillar_shaft.rhub_column_id <- function(x, ...) { - cx <- shorten_rhub_id(x) - new_pillar_shaft_simple(cx, ...) -} - -#' @importFrom pillar type_sum -#' @export - -type_sum.rhub_column_id <- function(x) { - "rhub::id" -} diff --git a/R/column-result.R b/R/column-result.R deleted file mode 100644 index b4ab819..0000000 --- a/R/column-result.R +++ /dev/null @@ -1,39 +0,0 @@ -column_result <- function(x) { - structure(x, class = unique(c("rhub_column_result", class(x)))) -} - -#' @export - -`[.rhub_column_result` <- function(x, i) { - column_result(NextMethod("[")) -} - -#' @importFrom pillar pillar_shaft new_pillar_shaft_simple -#' @export - -pillar_shaft.rhub_column_result <- function(x, ...) { - cx <- lapply(x, color_column_result) - new_pillar_shaft_simple(cx, ...) 
-} - -color_column_result <- function(x) { - if (is.null(x)) return("in-progress") - E <- if (n <- length(x$errors)) status_style_error(strrep("E", n)) - W <- if (n <- length(x$warnings)) status_style_error(strrep("W", n)) - N <- if (n <- length(x$notes)) status_style_note(strrep("N", n)) - - switch( - x$status, - "parseerror" = status_style_error("parseerror"), - "preperror" = status_style_error("preperror"), - "aborted" = status_style_aborted("aborted"), - "ok" = status_style_ok("ok"), - paste0(E, W, N)) -} - -#' @importFrom pillar type_sum -#' @export - -type_sum.rhub_column_result <- function(x) { - "rhub::result" -} diff --git a/R/column-status.R b/R/column-status.R deleted file mode 100644 index 486fc38..0000000 --- a/R/column-status.R +++ /dev/null @@ -1,49 +0,0 @@ - -column_status <- function(x) { - structure(x, class = unique(c("rhub_column_status", class(x)))) -} - -#' @export - -`[.rhub_column_status` <- function(x, i) { - column_status(NextMethod("[")) -} - -#' @importFrom pillar pillar_shaft new_pillar_shaft_simple -#' @export - -pillar_shaft.rhub_column_status <- function(x, ...) { - ## status can be - ## - created - ## - in-progress - ## - parseerror (the R-hub output parser failed) - ## - preperror (build failed before R CMD check has started - ## - aborted (build was aborted) - ## - error - ## - warning - ## - note - ## - ok - - hst <- c( - "created" = status_style_created("created"), - "in-progress" = status_style_in_progress("in-progress"), - "parseerror" = status_style_error("parseerror"), - "preperror" = status_style_error("preperror"), - "aborted" = status_style_aborted("aborted"), - "error" = status_style_error("error"), - "warning" = status_style_error("warning"), - "note" = status_style_note("note"), - "ok" = status_style_ok("ok")) - - cx <- hst[x] - cx[is.na(cx)] <- x[is.na(cx)] - - new_pillar_shaft_simple(cx, ...) 
-} - -#' @importFrom pillar type_sum -#' @export - -type_sum.rhub_column_status <- function(x) { - "rhub::status" -} diff --git a/R/compat-vctrs.R b/R/compat-vctrs.R new file mode 100644 index 0000000..3f07fb4 --- /dev/null +++ b/R/compat-vctrs.R @@ -0,0 +1,640 @@ + +# nocov start + +compat_vctrs <- local({ + +# Modified from https://github.com/r-lib/rlang/blob/master/R/compat-vctrs.R + +# Construction ------------------------------------------------------------ + +# Constructs data frames inheriting from `"tbl"`. This allows the +# pillar package to take over printing as soon as it is loaded. +# The data frame otherwise behaves like a base data frame. +data_frame <- function(...) { + new_data_frame(df_list(...), .class = "tbl") +} + +new_data_frame <- function(.x = list(), + ..., + .size = NULL, + .class = NULL) { + n_cols <- length(.x) + if (n_cols != 0 && is.null(names(.x))) { + stop("Columns must be named.", call. = FALSE) + } + + if (is.null(.size)) { + if (n_cols == 0) { + .size <- 0 + } else { + .size <- vec_size(.x[[1]]) + } + } + + structure( + .x, + class = c(.class, "data.frame"), + row.names = .set_row_names(.size), + ... + ) +} + +df_list <- function(..., .size = NULL) { + vec_recycle_common(list(...), size = .size) +} + + +# Binding ----------------------------------------------------------------- + +vec_rbind <- function(...) { + xs <- vec_cast_common(list(...)) + do.call(base::rbind, xs) +} + +vec_cbind <- function(...) { + xs <- list(...) 
+ + ptype <- vec_ptype_common(lapply(xs, `[`, 0)) + class <- setdiff(class(ptype), "data.frame") + + xs <- vec_recycle_common(xs) + out <- do.call(base::cbind, xs) + new_data_frame(out, .class = class) +} + + +# Slicing ----------------------------------------------------------------- + +vec_size <- function(x) { + if (is.data.frame(x)) { + nrow(x) + } else { + length(x) + } +} + +vec_rep <- function(x, times) { + i <- rep.int(seq_len(vec_size(x)), times) + vec_slice(x, i) +} + +vec_recycle_common <- function(xs, size = NULL) { + sizes <- vapply(xs, vec_size, integer(1)) + + n <- unique(sizes) + + if (length(n) == 1 && is.null(size)) { + return(xs) + } + n <- setdiff(n, 1L) + + ns <- length(n) + + if (ns == 0) { + if (is.null(size)) { + return(xs) + } + } else if (ns == 1) { + if (is.null(size)) { + size <- n + } else if (n != size) { + stop("Inputs can't be recycled to `size`.", call. = FALSE) + } + } else { + stop("Inputs can't be recycled to a common size.", call. = FALSE) + } + + to_recycle <- sizes == 1L + xs[to_recycle] <- lapply(xs[to_recycle], vec_rep, size) + + xs +} + +vec_slice <- function(x, i) { + if (is.logical(i)) { + i <- which(i) + } + stopifnot(is.numeric(i) || is.character(i)) + + if (is.null(x)) { + return(NULL) + } + + if (is.data.frame(x)) { + # We need to be a bit careful to be generic. First empty all + # columns and expand the df to final size.
+ out <- x[i, 0, drop = FALSE] + + # Then fill in with sliced columns + out[seq_along(x)] <- lapply(x, vec_slice, i) + + # Reset automatic row names to work around `[` weirdness + if (is.numeric(attr(x, "row.names"))) { + row_names <- .set_row_names(nrow(out)) + } else { + row_names <- attr(out, "row.names") + } + attr(out, "row.names") <- row_names + + return(out) + } + + d <- vec_dims(x) + if (d == 1) { + if (is.object(x)) { + out <- x[i] + } else { + out <- x[i, drop = FALSE] + } + } else if (d == 2) { + out <- x[i, , drop = FALSE] + } else { + j <- rep(list(quote(expr = )), d - 1) + out <- eval(as.call(list(quote(`[`), quote(x), quote(i), j, drop = FALSE))) + } + + out +} +vec_dims <- function(x) { + d <- dim(x) + if (is.null(d)) { + 1L + } else { + length(d) + } +} + +vec_as_location <- function(i, n, names = NULL) { + out <- seq_len(n) + names(out) <- names + + # Special-case recycling to size 0 + if (is_logical(i, n = 1) && !length(out)) { + return(out) + } + + unname(out[i]) +} + +vec_init <- function(x, n = 1L) { + vec_slice(x, rep_len(NA_integer_, n)) +} + +vec_assign <- function(x, i, value) { + if (is.null(x)) { + return(NULL) + } + + if (is.logical(i)) { + i <- which(i) + } + stopifnot( + is.numeric(i) || is.character(i) + ) + + value <- vec_recycle(value, vec_size(i)) + value <- vec_cast(value, to = x) + + d <- vec_dims(x) + + if (d == 1) { + x[i] <- value + } else if (d == 2) { + x[i, ] <- value + } else { + stop("Can't slice-assign arrays.", call. = FALSE) + } + + x +} + +vec_recycle <- function(x, size) { + if (is.null(x) || is.null(size)) { + return(NULL) + } + + n_x <- vec_size(x) + + if (n_x == size) { + x + } else if (size == 0L) { + vec_slice(x, 0L) + } else if (n_x == 1L) { + vec_slice(x, rep(1L, size)) + } else { + stop("Incompatible lengths: ", n_x, ", ", size, call.
= FALSE) + } +} + + +# Coercion ---------------------------------------------------------------- + +vec_cast_common <- function(xs, to = NULL) { + ptype <- vec_ptype_common(xs, ptype = to) + lapply(xs, vec_cast, to = ptype) +} + +vec_cast <- function(x, to) { + if (is.null(x)) { + return(NULL) + } + if (is.null(to)) { + return(x) + } + + if (vec_is_unspecified(x)) { + return(vec_init(to, vec_size(x))) + } + + stop_incompatible_cast <- function(x, to) { + stop( + sprintf("Can't convert <%s> to <%s>.", + .rlang_vctrs_typeof(x), + .rlang_vctrs_typeof(to) + ), + call. = FALSE + ) + } + + lgl_cast <- function(x, to) { + lgl_cast_from_num <- function(x) { + if (any(!x %in% c(0L, 1L))) { + stop_incompatible_cast(x, to) + } + as.logical(x) + } + + switch( + .rlang_vctrs_typeof(x), + logical = x, + integer = , + double = lgl_cast_from_num(x), + stop_incompatible_cast(x, to) + ) + } + + int_cast <- function(x, to) { + int_cast_from_dbl <- function(x) { + out <- suppressWarnings(as.integer(x)) + if (any((out != x) | xor(is.na(x), is.na(out)))) { + stop_incompatible_cast(x, to) + } else { + out + } + } + + switch( + .rlang_vctrs_typeof(x), + logical = as.integer(x), + integer = x, + double = int_cast_from_dbl(x), + stop_incompatible_cast(x, to) + ) + } + + dbl_cast <- function(x, to) { + switch( + .rlang_vctrs_typeof(x), + logical = , + integer = as.double(x), + double = x, + stop_incompatible_cast(x, to) + ) + } + + chr_cast <- function(x, to) { + switch( + .rlang_vctrs_typeof(x), + character = x, + stop_incompatible_cast(x, to) + ) + } + + list_cast <- function(x, to) { + switch( + .rlang_vctrs_typeof(x), + list = x, + stop_incompatible_cast(x, to) + ) + } + + df_cast <- function(x, to) { + # Check for extra columns + if (length(setdiff(names(x), names(to))) > 0 ) { + stop("Can't convert data frame because of missing columns.", call. 
= FALSE) + } + + # Avoid expensive [.data.frame method + out <- as.list(x) + + # Coerce common columns + common <- intersect(names(x), names(to)) + out[common] <- Map(vec_cast, out[common], to[common]) + + # Add new columns + from_type <- setdiff(names(to), names(x)) + out[from_type] <- lapply(to[from_type], vec_init, n = vec_size(x)) + + # Ensure columns are ordered according to `to` + out <- out[names(to)] + + new_data_frame(out) + } + + rlib_df_cast <- function(x, to) { + new_data_frame(df_cast(x, to), .class = "tbl") + } + tib_cast <- function(x, to) { + new_data_frame(df_cast(x, to), .class = c("tbl_df", "tbl")) + } + + switch( + .rlang_vctrs_typeof(to), + logical = lgl_cast(x, to), + integer = int_cast(x, to), + double = dbl_cast(x, to), + character = chr_cast(x, to), + list = list_cast(x, to), + + base_data_frame = df_cast(x, to), + rlib_data_frame = rlib_df_cast(x, to), + tibble = tib_cast(x, to), + + stop_incompatible_cast(x, to) + ) +} + +vec_ptype_common <- function(xs, ptype = NULL) { + if (!is.null(ptype)) { + return(vec_ptype(ptype)) + } + + xs <- Filter(function(x) !is.null(x), xs) + + if (length(xs) == 0) { + return(NULL) + } + + if (length(xs) == 1) { + out <- vec_ptype(xs[[1]]) + } else { + xs <- lapply(xs, vec_ptype) + out <- Reduce(vec_ptype2, xs) + } + + vec_ptype_finalise(out) +} + +vec_ptype_finalise <- function(x) { + if (is.data.frame(x)) { + x[] <- lapply(x, vec_ptype_finalise) + return(x) + } + + if (inherits(x, "rlang_unspecified")) { + logical() + } else { + x + } +} + +vec_ptype <- function(x) { + if (vec_is_unspecified(x)) { + return(.rlang_vctrs_unspecified()) + } + + if (is.data.frame(x)) { + out <- new_data_frame(lapply(x, vec_ptype)) + + attrib <- attributes(x) + attrib$row.names <- attr(out, "row.names") + attributes(out) <- attrib + + return(out) + } + + vec_slice(x, 0) +} + +vec_ptype2 <- function(x, y) { + stop_incompatible_type <- function(x, y) { + stop( + sprintf("Can't combine types <%s> and <%s>.", + .rlang_vctrs_typeof(x),
+ .rlang_vctrs_typeof(y)), + call. = FALSE + ) + } + + x_type <- .rlang_vctrs_typeof(x) + y_type <- .rlang_vctrs_typeof(y) + + if (x_type == "unspecified" && y_type == "unspecified") { + return(.rlang_vctrs_unspecified()) + } + if (x_type == "unspecified") { + return(y) + } + if (y_type == "unspecified") { + return(x) + } + + df_ptype2 <- function(x, y) { + set_partition <- function(x, y) { + list( + both = intersect(x, y), + only_x = setdiff(x, y), + only_y = setdiff(y, x) + ) + } + + # Avoid expensive [.data.frame + x <- as.list(vec_slice(x, 0)) + y <- as.list(vec_slice(y, 0)) + + # Find column types + names <- set_partition(names(x), names(y)) + if (length(names$both) > 0) { + common_types <- Map(vec_ptype2, x[names$both], y[names$both]) + } else { + common_types <- list() + } + only_x_types <- x[names$only_x] + only_y_types <- y[names$only_y] + + # Combine and construct + out <- c(common_types, only_x_types, only_y_types) + out <- out[c(names(x), names$only_y)] + new_data_frame(out) + } + + rlib_df_ptype2 <- function(x, y) { + new_data_frame(df_ptype2(x, y), .class = "tbl") + } + tib_ptype2 <- function(x, y) { + new_data_frame(df_ptype2(x, y), .class = c("tbl_df", "tbl")) + } + + ptype <- switch( + x_type, + + logical = switch( + y_type, + logical = x, + integer = y, + double = y, + stop_incompatible_type(x, y) + ), + + integer = switch( + .rlang_vctrs_typeof(y), + logical = x, + integer = x, + double = y, + stop_incompatible_type(x, y) + ), + + double = switch( + .rlang_vctrs_typeof(y), + logical = x, + integer = x, + double = x, + stop_incompatible_type(x, y) + ), + + character = switch( + .rlang_vctrs_typeof(y), + character = x, + stop_incompatible_type(x, y) + ), + + list = switch( + .rlang_vctrs_typeof(y), + list = x, + stop_incompatible_type(x, y) + ), + + base_data_frame = switch( + .rlang_vctrs_typeof(y), + base_data_frame = , + s3_data_frame = df_ptype2(x, y), + rlib_data_frame = rlib_df_ptype2(x, y), + tibble = tib_ptype2(x, y), + 
stop_incompatible_type(x, y) + ), + + rlib_data_frame = switch( + .rlang_vctrs_typeof(y), + base_data_frame = , + rlib_data_frame = , + s3_data_frame = rlib_df_ptype2(x, y), + tibble = tib_ptype2(x, y), + stop_incompatible_type(x, y) + ), + + tibble = switch( + .rlang_vctrs_typeof(y), + base_data_frame = , + rlib_data_frame = , + tibble = , + s3_data_frame = tib_ptype2(x, y), + stop_incompatible_type(x, y) + ), + + stop_incompatible_type(x, y) + ) + + vec_slice(ptype, 0) +} + +.rlang_vctrs_typeof <- function(x) { + if (is.object(x)) { + class <- class(x) + + if (identical(class, "rlang_unspecified")) { + return("unspecified") + } + if (identical(class, "data.frame")) { + return("base_data_frame") + } + if (identical(class, c("tbl", "data.frame"))) { + return("rlib_data_frame") + } + if (identical(class, c("tbl_df", "tbl", "data.frame"))) { + return("tibble") + } + if (inherits(x, "data.frame")) { + return("s3_data_frame") + } + + class <- paste0(class, collapse = "/") + stop(sprintf("Unimplemented class <%s>.", class), call. = FALSE) + } + + type <- typeof(x) + switch( + type, + NULL = return("null"), + logical = if (vec_is_unspecified(x)) { + return("unspecified") + } else { + return(type) + }, + integer = , + double = , + character = , + raw = , + list = return(type) + ) + + stop(sprintf("Unimplemented type <%s>.", type), call. = FALSE) +} + +vec_is_unspecified <- function(x) { + !is.object(x) && + typeof(x) == "logical" && + length(x) && + all(vapply(x, identical, logical(1), NA)) +} + +.rlang_vctrs_unspecified <- function(x = NULL) { + structure( + rep(NA, length(x)), + class = "rlang_unspecified" + ) +} + +.rlang_vctrs_s3_method <- function(generic, class, env = parent.frame()) { + fn <- get(generic, envir = env) + + ns <- asNamespace(topenv(fn)) + tbl <- ns$.__S3MethodsTable__. 
+ + for (c in class) { + name <- paste0(generic, ".", c) + if (exists(name, envir = tbl, inherits = FALSE)) { + return(get(name, envir = tbl)) + } + if (exists(name, envir = globalenv(), inherits = FALSE)) { + return(get(name, envir = globalenv())) + } + } + + NULL +} + +environment() + +}) + +data_frame <- compat_vctrs$data_frame + +as_data_frame <- function(x) { + if (is.matrix(x)) { + x <- as.data.frame(x, stringsAsFactors = FALSE) + } else { + x <- compat_vctrs$vec_recycle_common(x) + } + compat_vctrs$new_data_frame(x, .class = "tbl") +} + +# nocov end diff --git a/R/doctor.R b/R/doctor.R new file mode 100644 index 0000000..f8c36fc --- /dev/null +++ b/R/doctor.R @@ -0,0 +1,276 @@ + +#' Check if the current or the specified package is ready to use with R-hub +#' +#' Errors if the package or repository is not set up correctly, and +#' advises on possible solutions. +#' +#' @param gh_url Use `NULL` for the package in the current working +#' directory. Alternatively, use the URL of a GitHub repository that +#' contains an R package that was set up to use with R-hub. +#' +#' @export + +rhub_doctor <- function(gh_url = NULL) { + assert_that( + is_optional_gh_url(gh_url) + ) + + rpkg_root <- if (is.null(gh_url)) setup_find_r_package() + git_root <- if (is.null(gh_url)) setup_find_git_root() + if (is.null(gh_url)) check_rpkg_root(rpkg_root, git_root) + + pat_url <- gh_url %||% "https://github.com" + pat <- doctor_find_pat(pat_url) + gh_url <- gh_url %||% doctor_find_gh_url(repo = git_root) + + # ----------------------------------------------------------------------- + # Do these up front, concurrently + # We need the following pieces: + # 1 check if we are indeed talking to GitHub + # 2 check that the token is valid, and we have access to the repo + # 3 check that the token has the right scopes + # 4 check that the workflow file exists on the default branch + # 5 check that the workflow exists (e.g. 
not a fork with disabled actions) + # 6 check that the workflow is enabled + # 7 check that the workflow file is the latest version + # + # Unfortunately we cannot do all this with a single graphql query, because + # (AFAICT) you cannot currently query the workflows of a repository with + # GraphQL. + # + # So we'll have + # - a graphql query for (1), (2), (3), (4), (7) + # - a REST query for (5) and (6) + + resp <- synchronise(when_all( + gql = doctor_async_gql(gh_url, token = pat), + wfl = doctor_async_rest(gh_url, token = pat) + )) + + doctor_check_github(gh_url, resp$gql) + doctor_check_pat_scopes(resp$gql) + doctor_check_workflow(gh_url, resp$gql, resp$wfl) + + cli::cli_alert( + "WOOT! You are ready to run {.run rhub::rhub_check()} on this package.", + wrap = TRUE + ) + + invisible(NULL) +} + +# TODO: multiple remotes, what if it is not origin? +# TODO: what if there is a remote, but it does not have a URL? + +doctor_find_gh_url <- function(repo) { + remote <- gert::git_info(repo)$remote + if (is.na(remote)) { + throw(pkg_error( + call. = FALSE, + "Cannot determine GitHub URL from git remote in repository at + {.file {repo}}. Is your repository on GitHub?", + i = "If this repository is on GitHub, call + {.code git remote add origin <url>} to add GitHub as a + remote.", + i = "Alternatively, specify the GitHub URL of the repository in + the {.arg gh_url} argument.", + i = "If it is not on GitHub, then you'll need to put it there. + Create a new repository at {.url https://github.com/new}." + )) + } + gert::git_remote_info(repo = repo)$url +} + +doctor_find_pat <- function(pat_url) { + pid <- cli_status("Do you have a GitHub personal access token (PAT)?") + # TODO: get GH URL from git remote, if any + tryCatch( + pat <- gitcreds::gitcreds_get(url = pat_url)$password, + gitcreds_nogit_error = function(e) { + cli::cli_status_clear(pid, result = "failed") + env <- gitcreds::gitcreds_cache_envvar(pat_url) + throw(pkg_error( + call.
= FALSE, + "Could not find a GitHub personal access token (PAT) for {.url {pat_url}}.", + i = "I also could not find a working git installation. If you + don't want to install git, but you have a PAT, you can set the + {.env {env}} environment variable to the PAT.", + i = "You can read more about PATs at + {.url https://usethis.r-lib.org/articles/git-credentials.html}." + )) + }, + gitcreds_no_credentials = function(e) { + cli::cli_status_clear(pid, result = "failed") + env <- gitcreds::gitcreds_cache_envvar(pat_url) + throw(pkg_error( + call. = FALSE, + "Could not find a GitHub personal access token (PAT) for {.url {pat_url}}.", + i = "If you have a GitHub PAT, you can use {.run gitcreds::gitcreds_set()} + to add it to the git credential store, so R-hub can use it.", + i = "If you don't have a PAT, you can create one by running + {.run usethis::create_github_token()}.", + i = "You can read more about PATs at + {.url https://usethis.r-lib.org/articles/git-credentials.html}." + )) + }, + error = function(e) { + cli::cli_status_clear(pid, result = "failed") + throw(e) + } + ) + cli::cli_status_clear(pid, result = "clear") + cli::cli_alert_success("Found GitHub PAT.") + + pat +} + +doctor_check_github <- function(gh_url, resp) { + pid <- cli_status(cli::format_inline("Is the package on GitHub at {.url {gh_url}}?")) + if (!"x-ratelimit-limit" %in% names(resp$headers)) { + cli::cli_status_clear(pid, result = "failed") + throw(pkg_error( + call. = FALSE, + "Remote repository at {.url {gh_url}} does not seem like a GitHub + repository.", + i = "R-hub only supports packages in GitHub repositories + currently.", + i = "If you think that this is a bug in the {.pkg rhub} package, + please let us know!"
+ )) + } + cli::cli_status_clear(pid, result = "clear") + cli::cli_alert_success( + "Found repository on GitHub at {.url {gh_url}}.", + wrap = TRUE + ) +} + +# we can assume a GH response at this point + +doctor_check_pat_scopes <- function(resp) { + pid <- cli_status("Does your GitHub PAT have the right scopes?") + scopes <- trimws(strsplit( + resp[["headers"]][["x-oauth-scopes"]] %||% "NOPE", + ",", + fixed = TRUE + )[[1]]) + + if (identical(scopes, "NOPE")) { + cli::cli_status_clear(pid, result = "failed") + throw(pkg_error( + call. = FALSE, + "Could not use the PAT to authenticate to GitHub.", + i = "Make sure that the URL and your PAT are correct." + )) + } + + if (!"repo" %in% scopes) { + cli::cli_status_clear(pid, result = "failed") + throw(pkg_error( + call. = FALSE, + "Your PAT does not have a {.code repo} scope.", + i = "Without a {.code repo} scope, R-hub cannot start jobs on GitHub.", + i = "Change the scopes of the PAT on the GitHub web page, or create + a new PAT." + )) + } + cli::cli_status_clear(pid, result = "clear") + cli::cli_alert_success("GitHub PAT has the right scopes.") +} + +doctor_check_workflow <- function(gh_url, gql, rest) { + pid <- cli_status( + "Does the default branch of your git repo have the R-hub workflow file?" + ) + + if (is.null(gql$workflow)) { + cli::cli_status_clear(pid, result = "failed") + throw(pkg_error( + call. = FALSE, + "Could not find R-hub's workflow file in the repository at + {.url {gh_url}}.", + i = "The workflow file must be at {.path .github/workflows/rhub.yaml}.", + i = "If you have added and committed the workflow file, you need to + push the commit to GitHub with {.code git push}.", + i = if (isTRUE(gql$is_fork)) + "This repository is a fork. Make sure you enabled GitHub Actions + on it, in the {.emph Actions} tab of the repository web page." + )) + } + + if (rest$workflow$state != "active") { + cli::cli_status_clear(pid, result = "failed") + throw(pkg_error( + call.
= FALSE, + "The workflow is disabled.", + i = "You need to enable it: click on the {.code ...} button at the + top right corner of the workflow's web page." + )) + } + + cli::cli_status_clear(pid, result = "clear") + cli::cli_alert_success( + "Found R-hub workflow in default branch, and it is active." + ) +} + +# We need the following pieces: +# - check if we are indeed talking to GitHub +# - check that the token is valid, and we have access to the repo +# - check that the token has the right scopes +# - check that the workflow file exists on the default branch +# - check that the workflow file is the latest version + +doctor_async_gql <- function(gh_url, token) { + url <- parse_gh_url(gh_url) + query <- glue::glue("{ + repository(owner: \"<url$user>\", name: \"<url$repo>\") { + workflow_file: object(expression: \"HEAD:.github/workflows/rhub.yaml\") { + ... on Blob { + isBinary + text + } + } + sha: object(expression: \"HEAD\") { + oid + } + branch: defaultBranchRef { + name + } + isFork + } + }", .open = "<", .close = ">") + async_gh_gql_get(url$graphql, query, token)$ + then(function(resp) { + data <- resp$content$data + list( + status_code = resp$status_code, + headers = resp$headers, + is_repo = !is.null(data$repository), + workflow_binary = data$repository$workflow_file$isBinary, + workflow = data$repository$workflow_file$text, + sha = data$repository$sha$oid, + branch = data$repository$branch$name, + is_fork = data$repository$isFork, + errors = resp$content$errors + ) + }) +} + +# Goal is to + # - check if the workflow exists (e.g.
not a fork with disabled actions) +# - check that the workflow is enabled + +doctor_async_rest <- function(gh_url, token) { + url <- parse_gh_url(gh_url) + ep <- glue::glue("/repos/{url$user}/{url$repo}/actions/workflows/rhub.yaml") + async_gh_rest_get(url$api, ep, token)$ + then(function(resp) { + list( + status_code = resp$status_code, + headers = resp$headers, + workflow = resp$content, + errors = resp$content$errors + ) + }) +} diff --git a/R/email.R b/R/email.R deleted file mode 100644 index e33db4c..0000000 --- a/R/email.R +++ /dev/null @@ -1,236 +0,0 @@ - -#' Validate an email address on R-hub -#' -#' To build and check R packages on R-hub, you need to validate your -#' email address. This is because R-hub sends out emails about check -#' results. -#' -#' The `rhub` package stores validated email addresses in a user -#' configuration file, at a platform-dependent location. -#' On your current platform the file is at -#' \Sexpr[stage=render]{rhub:::email_file()}. -#' -#' To validate a new email address, call this function from an interactive -#' R session, without any arguments. -#' -#' To add an email address that was validated before (probably on another -#' machine), to the configuration file, call this function with the `email` -#' and `token` arguments. -#' -#' @param email The email address to validate. -#' @param token Token obtained from `rhub`, to validate the email address.
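The removed `email.R` persisted validated email–token pairs in a header-less, two-column CSV, overwriting the token when an address was re-validated. A minimal, self-contained sketch of that storage pattern (the file path and the `add_token` helper name are illustrative, not rhub's real API):

```r
# Sketch of the header-less two-column CSV token store used by the
# (now removed) email.R. The path is a temp file for illustration.
store <- tempfile(fileext = ".csv")

add_token <- function(file, email, token) {
  tokens <- if (file.exists(file)) {
    read.csv(file, header = FALSE, stringsAsFactors = FALSE)
  } else {
    data.frame(V1 = character(), V2 = character(), stringsAsFactors = FALSE)
  }
  if (email %in% tokens[, 1]) {
    tokens[match(email, tokens[, 1]), 2] <- token  # re-validation overwrites
  } else {
    tokens <- rbind(tokens, c(email, token))
  }
  write.table(tokens, file, sep = ",", col.names = FALSE, row.names = FALSE)
}

add_token(store, "me@example.com", "token1")
add_token(store, "me@example.com", "token2")  # same address: overwrite
tokens <- read.csv(store, header = FALSE, stringsAsFactors = FALSE)
# tokens has a single row, with token "token2"
```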
-#' -#' @family email validation -#' @export -#' @importFrom jsonlite unbox - -validate_email <- function(email = NULL, token = NULL) { - - if (is.null(email) || is.null(token)) { - if (!is_interactive()) { - stop("No email or no token and not in interactive mode") - } - return(validate_email_interactive(email, token)) - } - - assert_that(is_email(email)) - assert_that(is_token(token)) - - email_add_token(email, token) - message("Token added for ", sQuote(email)) - cat("\n") - token_file_msg() - cat("\n") - invisible() -} - -#' @importFrom cli symbol -#' @importFrom utils menu -#' @importFrom whoami email_address - -get_email_to_validate <- function(path) { - - ## Find out email first. List currently validated addresses, - ## Offer address by whoami::email_address(), and also the - ## maintainer address, if any. - - valid <- list_validated_emails2(msg_if_empty = FALSE) - guess <- email_address() - maint <- tryCatch(get_maintainer_email(path), error = function(e) NULL) - - choices <- rbind( - if (nrow(valid)) cbind(valid = TRUE, valid), - if (!is.null(guess) && ! guess %in% valid$email) { - data_frame(valid = FALSE, email = guess, token = NA) - }, - if (!is.null(maint) && ! maint %in% valid$email && maint != guess) { - data_frame(valid = FALSE, email = maint, token = NA) - }, - data_frame(valid = NA, email = "New email address", token = NA) - ) - - ## Only show the menu if there is more than one thing there - if (nrow(choices) != 1) { - choices_str <- paste( - sep = " ", - ifelse( - choices$valid & !is.na(choices$valid), - crayon::green(symbol$tick), - " " - ), - choices$email - ) - - cat("\n") - title <- crayon::yellow(paste0( - symbol$line, symbol$line, - " Choose email address to (re)validate (or 0 to exit)" - )) - ch <- menu(choices_str, title = title) - - if (ch == 0) stop("Cancelled email validation", call. 
= FALSE) - - } else { - ch <- 1 - } - - ## Get another address if that is selected - if (is.na(choices$valid[ch])) { - cat("\n") - email <- readline("Email address: ") - } else { - email <- choices$email[ch] - } -} - -validate_email_interactive <- function(email, token, path = ".") { - - if (is.null(email)) email <- get_email_to_validate(path) - assert_that(is_email(email)) - - ## Token next. For this we need to make an API query. - if (is.null(token)) { - query("VALIDATE EMAIL", data = list(email = unbox(email))) - message(crayon::yellow( - "Please check your emails for the R-hub access token." - )) - token <- readline("Token: ") - } - assert_that(is_token(token)) - - ## We got everything now - validate_email(email, token) -} - -#' List validated email addresses -#' -#' @description List email addresses validated on R-hub on the current machine. -#' -#' @return A `data.frame` with two columns: `email` and `token`. -#' If in interactive mode, and there are no validated email addresses, -#' then a message is printed and the data frame is returned invisibly. -#' -#' @family email validation -#' @export - -list_validated_emails <- function() { - list_validated_emails2() -} - -list_validated_emails2 <- function(msg_if_empty = TRUE) { - file <- email_file() - res <- if (file.exists(file)) { - if (is_interactive()) { - token_file_msg() - } - - structure( - read.csv(file, stringsAsFactors = FALSE, header = FALSE), - names = c("email", "token") - ) - } else { - data.frame( - email = character(), - token = character(), - stringsAsFactors = FALSE - ) - } - if (is_interactive() && nrow(res) == 0) { - if (msg_if_empty) message("No validated emails found.") - invisible(res) - } else { - res - } -} - -#' @importFrom rappdirs user_data_dir - -email_file <- function() { - rhub_data_dir <- user_data_dir("rhub", "rhub") - file.path(rhub_data_dir, "validated_emails.csv") -} - -#' @importFrom utils read.csv - -email_get_token <- function(email) { - file <- email_file() - if (! 
file.exists(file)) return(NULL) - - tokens <- read.csv(file, stringsAsFactors = FALSE, header = FALSE) - if (! email %in% tokens[,1]) return(NULL) - - tokens[match(email, tokens[,1]), 2] -} - -## If it exists already, then overwrites - -#' @importFrom utils read.csv write.table - -email_add_token <- function(email, token) { - - assert_that(is_email(email)) - assert_that(is_token(token)) - - file <- email_file() - - if (!file.exists(file)) { - parent <- dirname(file) - if (!file.exists(parent)) dir.create(parent, recursive = TRUE) - tokens <- data.frame( - V1 = character(), - V2 = character(), - stringsAsFactors = FALSE - ) - - } else { - tokens <- read.csv(file, stringsAsFactors = FALSE, header = FALSE) - } - - if (! email %in% tokens[,1]) { - tokens <- rbind(tokens, c(email, token)) - - } else{ - tokens[match(email, tokens[,1]), 2] <- token - } - - write.table( - tokens, - file = file, - sep = ",", - col.names = FALSE, - row.names = FALSE - ) - - invisible() -} - -token_file_msg <- function() { - message( - crayon::green( - paste0( - "For info the token(s) and email(s) are stored at ", - email_file() - ) - ) - ) -} diff --git a/R/env.R b/R/env.R deleted file mode 100644 index 3da0689..0000000 --- a/R/env.R +++ /dev/null @@ -1,51 +0,0 @@ - -package_data <- new.env(parent = emptyenv()) -package_data$status <- new.env(parent = emptyenv()) -package_data$ids <- character() -package_data$groups <- character() - -## Since the status can be NULL, meaning unknown, we put all cache elements -## in a list of length 1. - -cache_get <- function(id) { - e <- package_data$status - if (!is.null(x <- e[[id]][[1]])) return(x) - nms <- ls(e) - sts <- grep(paste0("-", id, "[0-9a-f]*$"), nms) - if (length(sts) == 0) return(NULL) - if (length(sts) == 1) return(e[[ nms[sts] ]][[1]]) - stop("Multiple checks match, please use a more specific id", call. 
= FALSE) -} - -cache_put <- function(id, status) { - cache_put_ids(id) - cache_put_group_ids(status$group) - package_data$status[[id]] <- list(status) - invisible() -} - -cache_put_ids <- function(id) { - id <- unique(setdiff(id, package_data$ids)) - if (length(id)) package_data$ids <- c(id, package_data$ids) -} - -cache_put_group_ids <- function(id) { - id <- unique(setdiff(id, package_data$groups)) - if (length(id)) package_data$groups <- c(id, package_data$groups) -} - -cache_get_ids <- function(ids) { - w <- match_partial(ids, package_data$ids) - package_data$ids[w] -} - -cache_get_group_ids <- function(ids) { - w <- match_partial(ids, package_data$groups) - package_data$groups[w] -} - -match_partial <- function(x, table) { - hash <- sub("^.*-", "", table) - m <- match(x, table) - ifelse(is.na(m), pmatch(x, hash), m) -} diff --git a/R/error.R b/R/error.R deleted file mode 100644 index 095cb0c..0000000 --- a/R/error.R +++ /dev/null @@ -1,50 +0,0 @@ - -#' @importFrom crayon yellow red underline - -report_system_error <- function(msg, status) { - - if (status$status == 0) return() - - if (status$stderr == "") { - stop( - msg, ", unknown error, standard output:\n", - yellow(status$stdout), - call. = FALSE - ) - - } else { - stop( - underline(yellow(paste0("\n", msg, ", standard output:\n\n"))), - yellow(status$stdout), "\n", - underline(red("Standard error:\n\n")), red(status$stderr), - call. 
= FALSE - ) - } -} - -#' @importFrom httr status_code - -report_error <- function(response) { - if (status_code(response) < 300) { - invisible(response) - } else { - call <- sys.call(-1) - stop(create_condition(response, "error", call = call)) - } -} - -#' @importFrom httr content - -create_condition <- function(response, - class = c("error", "warning", "message"), - call) { - - class <- match.arg(class) - - message <- content(response)$message %||% "rhub error" - - structure( - list(message = message, call = call), - class = c("rhub_error", class, "condition") - ) -} diff --git a/R/errors.R b/R/errors.R new file mode 100644 index 0000000..3f10eda --- /dev/null +++ b/R/errors.R @@ -0,0 +1,1198 @@ + +# nocov start + +# # Standalone file for better error handling ---------------------------- +# +# If you can allow package dependencies, then you are probably better off +# using rlang's functions for errors. +# +# The canonical location of this file is in the processx package: +# https://github.com/r-lib/processx/blob/main/R/errors.R +# +# ## Dependencies +# - rstudio-detect.R for better printing in RStudio +# +# ## Features +# +# - Throw conditions and errors with the same API. +# - Automatically captures the right calls and adds them to the conditions. +# - Sets `.Last.error`, so you can easily inspect the errors, even if they +# were not caught. +# - It only sets `.Last.error` for the errors that are not caught. +# - Hierarchical errors, to allow higher level error messages that are +# more meaningful for the users, while also keeping the lower level +# details in the error object. (So in `.Last.error` as well.) +# - `.Last.error` always includes a stack trace. (The stack trace is +# common for the whole error hierarchy.) The trace is accessible within +# the error, e.g. `.Last.error$trace`. The trace of the last error is +# also at `.Last.error.trace`. +# - Can merge errors and traces across multiple processes.
+# - Pretty-print errors and traces, if the cli package is loaded. +# - Automatically hides uninformative parts of the stack trace when +# printing. +# +# ## API +# +# ``` +# new_cond(..., call. = TRUE, srcref = NULL, domain = NA) +# new_error(..., call. = TRUE, srcref = NULL, domain = NA) +# throw(cond, parent = NULL, frame = environment()) +# throw_error(cond, parent = NULL, frame = environment()) +# chain_error(expr, err, call = sys.call(-1)) +# chain_call(.NAME, ...) +# chain_clean_call(.NAME, ...) +# onload_hook() +# add_trace_back(cond, frame = NULL) +# format$advice(x) +# format$call(call) +# format$class(x) +# format$error(x, trace = FALSE, class = FALSE, advice = !trace, ...) +# format$error_heading(x, prefix = NULL) +# format$header_line(x, prefix = NULL) +# format$srcref(call, srcref = NULL) +# format$trace(x, ...) +# ``` +# +# ## Roadmap: +# - better printing of anonymous function in the trace +# +# ## NEWS: +# +# ### 1.0.0 -- 2019-06-18 +# +# * First release. +# +# ### 1.0.1 -- 2019-06-20 +# +# * Add `rlib_error_always_trace` option to always add a trace +# +# ### 1.0.2 -- 2019-06-27 +# +# * Internal change: change topenv of the functions to baseenv() +# +# ### 1.1.0 -- 2019-10-26 +# +# * Register print methods via onload_hook() function, call from .onLoad() +# * Print the error manually, and the trace in non-interactive sessions +# +# ### 1.1.1 -- 2019-11-10 +# +# * Only use `trace` in parent errors if they are `rlib_error`s. +# Because e.g. `rlang_error`s also have a trace, with a slightly +# different format. +# +# ### 1.2.0 -- 2019-11-13 +# +# * Fix the trace if a non-thrown error is re-thrown. +# * Provide print_this() and print_parents() to make it easier to define +# custom print methods. +# * Fix annotating our throw() methods with the incorrect `base::`. 
+# +# ### 1.2.1 -- 2020-01-30 +# +# * Update wording of error printout to be less intimidating, avoid jargon +# * Use default printing in interactive mode, so RStudio can detect the +# error and highlight it. +# * Add the rethrow_call_with_cleanup function, to work with embedded +# cleancall. +# +# ### 1.2.2 -- 2020-11-19 +# +# * Add the `call` argument to `catch_rethrow()` and `rethrow()`, to be +# able to omit calls. +# +# ### 1.2.3 -- 2021-03-06 +# +# * Use cli instead of crayon +# +# ### 1.2.4 -- 2021-04-01 +# +# * Allow omitting the call with call. = FALSE in `new_cond()`, etc. +# +# ### 1.3.0 -- 2021-04-19 +# +# * Avoid embedding calls in trace with embed = FALSE. +# +# ### 2.0.0 -- 2021-04-19 +# +# * Versioned classes and print methods +# +# ### 2.0.1 -- 2021-06-29 +# +# * Do not convert error messages to native encoding before printing, +# to be able to print UTF-8 error messages on Windows. +# +# ### 2.0.2 -- 2021-09-07 +# +# * Do not translate error messages, as this converts them to the native +# encoding. We keep messages in UTF-8 now. +# +# ### 3.0.0 -- 2022-04-19 +# +# * Major rewrite, use rlang compatible error objects. New API. +# +# ### 3.0.1 -- 2022-06-17 +# +# * Remove the `rlang_error` and `rlang_trace` classes, because our new +# deparsed `call` column in the trace is not compatible with rlang. +# +# ### 3.0.2 -- 2022-08-01 +# +# * Use a `procsrcref` column for processed source references. +# Otherwise testthat (and probably other rlang based packages) will +# pick up the `srcref` column, and they expect an `srcref` object there. +# +# ### 3.1.0 -- 2022-10-04 +# +# * Add ANSI hyperlinks to stack traces, if we have a recent enough +# cli package that supports this. +# +# ### 3.1.1 -- 2022-11-17 +# +# * Use `[[` instead of `$` to fix some partial matches. +# * Use fully qualified `base::stop()` to enable overriding `stop()` +# in a package. (Makes sense if compat files use `stop()`.) +# * The `is_interactive()` function is now exported.
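The core mechanism this standalone file implements is "signal first, then stop": `throw()` calls `signalCondition()` so that exiting handlers can catch the error, and only falls through to `stop()` (recording `.Last.error` and a trace along the way) when nothing caught it. A minimal base-R sketch of just that idea — `throw_lite` is illustrative, not part of this file:

```r
# Signal-then-stop: an exiting handler (tryCatch) catches the condition
# at signalCondition(), so stop() only runs for uncaught errors.
throw_lite <- function(msg) {
  cond <- structure(
    class = c("rlib_error", "error", "condition"),
    list(message = msg, call = sys.call(-1))
  )
  signalCondition(cond)  # exiting handlers transfer control here ...
  stop(cond)             # ... so this is reached only when uncaught
}

caught <- tryCatch(throw_lite("boom"), error = function(e) conditionMessage(e))
# caught is "boom"; stop() was never reached
```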
+# +# ### 3.1.2 -- 2022-11-18 +# +# * The `parent` condition can now be an interrupt. + +err <- local({ + + # -- dependencies ----------------------------------------------------- + rstudio_detect <- rstudio$detect + + # -- condition constructors ------------------------------------------- + + #' Create a new condition + #' + #' @noRd + #' @param ... Parts of the error message; they will be converted to + #' character and then concatenated, like in [stop()]. + #' @param call. A call object to include in the condition, or `TRUE` + #' or `NULL`, meaning that [throw()] should add a call object + #' automatically. If `FALSE`, then no call is added. + #' @param srcref Alternative source reference object to use instead of + #' the one of `call.`. + #' @param domain Translation domain, see [stop()]. We set this to + #' `NA` by default, which means that no translation occurs. This + #' has the benefit that the error message is not re-encoded into + #' the native locale. + #' @return Condition object. Currently a list, but you should not rely + #' on that. + + new_cond <- function(..., call. = TRUE, srcref = NULL, domain = NA) { + message <- .makeMessage(..., domain = domain) + structure( + list(message = message, call = call., srcref = srcref), + class = c("condition")) + } + + #' Create a new error condition + #' + #' It also adds the `rlib_error` class. + #' + #' @noRd + #' @param ... Passed to [new_cond()]. + #' @param call. Passed to [new_cond()]. + #' @param srcref Passed to [new_cond()]. + #' @param domain Passed to [new_cond()]. + #' @return Error condition object with classes `rlib_error`, `error` + #' and `condition`. + + new_error <- function(..., call. = TRUE, srcref = NULL, domain = NA) { + cond <- new_cond(..., call.
= call., domain = domain, srcref = srcref) + class(cond) <- c("rlib_error_3_0", "rlib_error", "error", "condition") + cond + } + + # -- throwing conditions ---------------------------------------------- + + #' Throw a condition + #' + #' If the condition is an error, it will also call [stop()], after + #' signalling the condition first. This means that if the condition is + #' caught by an exiting handler, then [stop()] is not called. + #' + #' @noRd + #' @param cond Condition object to throw. If it is an error condition, + #' then it calls [stop()]. + #' @param parent Parent condition. + #' @param frame The throwing context. Can be used to hide frames from + #' the backtrace. + + throw <- throw_error <- function(cond, parent = NULL, frame = environment()) { + if (!inherits(cond, "condition")) { + cond <- new_error(cond) + } + if (!is.null(parent) && !inherits(parent, "condition")) { + throw(new_error("Parent condition must be a condition object")) + } + + if (isTRUE(cond[["call"]])) { + cond[["call"]] <- sys.call(-1) %||% sys.call() + } else if (identical(cond[["call"]], FALSE)) { + cond[["call"]] <- NULL + } + + cond <- process_call(cond) + + if (!is.null(parent)) { + cond$parent <- process_call(parent) + } + + # We can set an option to always add the trace to the thrown + # conditions. This is useful, for example, in contexts that always catch + # errors, e.g. in testthat tests or knitr. This option is usually not + # set, and we signal the condition here. + always_trace <- isTRUE(getOption("rlib_error_always_trace")) + .hide_from_trace <- 1L + # .error_frame <- cond + if (!always_trace) signalCondition(cond) + + if (is.null(cond$`_pid`)) cond$`_pid` <- Sys.getpid() + if (is.null(cond$`_timestamp`)) cond$`_timestamp` <- Sys.time() + + # If we get here that means that the condition was not caught by + # an exiting handler. That means that we need to create a trace.
+ # If there is a hand-constructed trace already in the error object, + # then we'll just leave it there. + if (is.null(cond$trace)) cond <- add_trace_back(cond, frame = frame) + + # Set up environment to store .Last.error, it will be just before + # baseenv(), so it is almost as if it was in baseenv() itself, like + # .Last.value. We save the print methods here as well, and then they + # will be found automatically. + if (! "org:r-lib" %in% search()) { + do.call("attach", list(new.env(), pos = length(search()), + name = "org:r-lib")) + } + env <- as.environment("org:r-lib") + env$.Last.error <- cond + env$.Last.error.trace <- cond$trace + + # If we always wanted a trace, then we signal the condition here + if (always_trace) signalCondition(cond) + + # If this is not an error, then we'll just return here. This allows + # throwing interrupt conditions for example, with the same UI. + if (! inherits(cond, "error")) return(invisible()) + .hide_from_trace <- NULL + + # Top-level handler, this is intended for testing only for now, + # and its design might change. + if (!is.null(th <- getOption("rlib_error_handler")) && + is.function(th)) { + return(th(cond)) + } + + # In non-interactive mode, we print the error + the traceback + # manually, to make sure that it won't be truncated by R's error + # message length limit. + out <- format( + cond, + trace = !is_interactive(), + class = FALSE, + full = !is_interactive() + ) + writeLines(out, con = default_output()) + + # Dropping the classes and adding "duplicate_condition" is a workaround + # for the case when we have non-exiting handlers on throw()-n + # conditions. These would get the condition twice, because stop() + # will also signal it. If we drop the classes, then only handlers + # on "condition" objects (i.e. all conditions) get duplicate signals. + # This is probably quite rare, but for this rare case they can also + # recognize the duplicates from the "duplicate_condition" extra class. 
+ class(cond) <- c("duplicate_condition", "condition") + + # Turn off the regular error printing to avoid printing + # the error twice. + opts <- options(show.error.messages = FALSE) + on.exit(options(opts), add = TRUE) + + base::stop(cond) + } + + # -- rethrow with parent ----------------------------------------------- + + #' Re-throw an error with a better error message + #' + #' Evaluate `expr` and if it errors, then throw a new error `err`, + #' with the original error set as its parent. + #' + #' @noRd + #' @param expr Expression to evaluate. + #' @param err Error object or message to use for the child error. + #' @param call Call to use in the re-thrown error. See [throw()]. + + chain_error <- function(expr, err, call = sys.call(-1), srcref = NULL) { + .hide_from_trace <- 1 + force(call) + srcref <- srcref %||% utils::getSrcref(sys.call()) + withCallingHandlers({ + expr + }, error = function(e) { + .hide_from_trace <- 0:1 + e$srcref <- srcref + e$procsrcref <- NULL + if (!inherits(err, "condition")) { + err <- new_error(err, call. = call) + } + throw_error(err, parent = e) + }) + } + + # -- rethrowing conditions from C code --------------------------------- + + #' Version of .Call that throw()s errors + #' + #' It re-throws error from compiled code. If the error had class + #' `simpleError`, like all errors, thrown via `error()` in C do, it also + #' adds the `c_error` class. + #' + #' @noRd + #' @param .NAME Compiled function to call, see [.Call()]. + #' @param ... Function arguments, see [.Call()]. + #' @return Result of the call. + + chain_call <- function(.NAME, ...) { + .hide_from_trace <- 1:3 # withCallingHandlers + do.call + .handleSimpleError (?) 
+ call <- sys.call() + call1 <- sys.call(-1) + srcref <- utils::getSrcref(call) + withCallingHandlers( + do.call(".Call", list(.NAME, ...)), + error = function(e) { + .hide_from_trace <- 0:1 + e$srcref <- srcref + e$procsrcref <- NULL + e[["call"]] <- call + name <- native_name(.NAME) + err <- new_error("Native call to `", name, "` failed", call. = call1) + cerror <- if (inherits(e, "simpleError")) "c_error" + class(err) <- c(cerror, "rlib_error_3_0", "rlib_error", "error", "condition") + throw_error(err, parent = e) + } + ) + } + + package_env <- topenv() + + #' Version of chain_call that supports cleancall + #' + #' This function is the same as [chain_call()], except that it + #' uses cleancall's [.Call()] wrapper, to enable resource cleanup. + #' See https://github.com/r-lib/cleancall#readme for more about + #' resource cleanup. + #' + #' @noRd + #' @param .NAME Compiled function to call, see [.Call()]. + #' @param ... Function arguments, see [.Call()]. + #' @return Result of the call. + + chain_clean_call <- function(.NAME, ...) { + .hide_from_trace <- 1:3 + call <- sys.call() + call1 <- sys.call(-1) + srcref <- utils::getSrcref(call) + withCallingHandlers( + package_env$call_with_cleanup(.NAME, ...), + error = function(e) { + .hide_from_trace <- 0:1 + e$srcref <- srcref + e$procsrcref <- NULL + e[["call"]] <- call + name <- native_name(.NAME) + err <- new_error("Native call to `", name, "` failed", call. = call1) + cerror <- if (inherits(e, "simpleError")) "c_error" + class(err) <- c(cerror, "rlib_error_3_0", "rlib_error", "error", "condition") + throw_error(err, parent = e) + } + ) + } + + # -- create traceback ------------------------------------------------- + + #' Create a traceback + #' + #' [throw()] calls this function automatically if an error is not caught, + #' so there is currently not much use to call it directly. + #' + #' @param cond Condition to add the trace to + #' @param frame Use this context to hide some frames from the traceback.
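The chaining helpers (`chain_error()`, `chain_call()`, `chain_clean_call()`) all follow the same pattern: run an expression, and if it fails, raise a friendlier high-level error that keeps the original condition attached as `parent`. A hedged base-R sketch of that pattern — `chain_error_lite` is an illustrative name, and the real helpers also manage calls, srcrefs, and trace hiding:

```r
# Wrap a low-level failure in a higher-level error, keeping the original
# condition as `parent` so it stays available for inspection.
chain_error_lite <- function(expr, msg) {
  withCallingHandlers(
    expr,
    error = function(e) {
      err <- simpleError(msg)
      err$parent <- e  # the low-level error rides along
      stop(err)
    }
  )
}

res <- tryCatch(
  chain_error_lite(stop("connection reset"), "Could not talk to GitHub"),
  error = function(e) paste(conditionMessage(e), "<-", conditionMessage(e$parent))
)
# res is "Could not talk to GitHub <- connection reset"
```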
+ #' + #' @return A condition object, with the trace added. + + add_trace_back <- function(cond, frame = NULL) { + + idx <- seq_len(sys.parent(1L)) + frames <- sys.frames()[idx] + + # TODO: remove embedded objects from calls + calls <- as.list(sys.calls()[idx]) + parents <- sys.parents()[idx] + namespaces <- unlist(lapply( + seq_along(frames), + function(i) { + if (is_operator(calls[[i]])) { + "o" + } else { + env_label(topenvx(environment(sys.function(i)))) + } + } + )) + pids <- rep(cond$`_pid` %||% Sys.getpid(), length(calls)) + + mch <- match(format(frame), sapply(frames, format)) + if (is.na(mch)) { + visibles <- TRUE + } else { + visibles <- c(rep(TRUE, mch), rep(FALSE, length(frames) - mch)) + } + + scopes <- vapply(idx, FUN.VALUE = character(1), function(i) { + tryCatch( + get_call_scope(calls[[i]], namespaces[[i]]), + error = function(e) "" + ) + }) + + namespaces <- ifelse(scopes %in% c("::", ":::"), namespaces, NA_character_) + funs <- ifelse( + is.na(namespaces), + ifelse(scopes != "", paste0(scopes, " "), ""), + paste0(namespaces, scopes) + ) + funs <- paste0( + funs, + vapply(calls, function(x) format_name(x[[1]])[1], character(1)) + ) + visibles <- visibles & mark_invisible_frames(funs, frames) + + pcs <- lapply(calls, function(c) process_call(list(call = c))) + calls <- lapply(pcs, "[[", "call") + srcrefs <- I(lapply(pcs, "[[", "srcref")) + procsrcrefs <- I(lapply(pcs, "[[", "procsrcref")) + + cond$trace <- new_trace( + calls, + parents, + visibles = visibles, + namespaces = namespaces, + scopes = scopes, + srcrefs = srcrefs, + procsrcrefs = procsrcrefs, + pids + ) + + cond + } + + is_operator <- function(cl) { + is.call(cl) && length(cl) >= 1 && is.symbol(cl[[1]]) && + grepl("^[^.a-zA-Z]", as.character(cl[[1]])) + } + + mark_invisible_frames <- function(funs, frames) { + visibles <- rep(TRUE, length(frames)) + hide <- lapply(frames, "[[", ".hide_from_trace") + w_hide <- unlist(mapply(seq_along(hide), hide, FUN = function(i, w) { + i + w + }, 
SIMPLIFY = FALSE)) + w_hide <- w_hide[w_hide <= length(frames)] + visibles[w_hide] <- FALSE + + hide_from <- which(funs %in% names(invisible_frames)) + for (start in hide_from) { + hide_this <- invisible_frames[[ funs[start] ]] + for (i in seq_along(hide_this)) { + if (start + i > length(funs)) break + if (funs[start + i] != hide_this[i]) break + visibles[start + i] <- FALSE + } + } + + visibles + } + + invisible_frames <- list( + "base::source" = c("base::withVisible", "base::eval", "base::eval"), + "base::stop" = "base::.handleSimpleError", + "cli::cli_abort" = c( + "rlang::abort", + "rlang:::signal_abort", + "base::signalCondition"), + "rlang::abort" = c("rlang:::signal_abort", "base::signalCondition") + ) + + call_name <- function(x) { + if (is.call(x)) { + if (is.symbol(x[[1]])) { + as.character(x[[1]]) + } else if (x[[1]][[1]] == quote(`::`) || x[[1]][[1]] == quote(`:::`)) { + as.character(x[[1]][[2]]) + } else { + NULL + } + } else { + NULL + } + } + + get_call_scope <- function(call, ns) { + if (is.na(ns)) return("global") + if (!is.call(call)) return("") + if (is.call(call[[1]]) && + (call[[1]][[1]] == quote(`::`) || call[[1]][[1]] == quote(`:::`))) return("") + if (ns == "base") return("::") + if (! ns %in% loadedNamespaces()) return("") + name <- call_name(call) + nsenv <- asNamespace(ns)$.__NAMESPACE__. 
+ if (is.null(nsenv)) return("::") + if (is.null(nsenv$exports)) return(":::") + if (exists(name, envir = nsenv$exports, inherits = FALSE)) { + "::" + } else if (exists(name, envir = asNamespace(ns), inherits = FALSE)) { + ":::" + } else { + "local" + } + } + + topenvx <- function(x) { + topenv(x, matchThisEnv = err_env) + } + + new_trace <- function (calls, parents, visibles, namespaces, scopes, srcrefs, procsrcrefs, pids) { + trace <- data.frame( + stringsAsFactors = FALSE, + parent = parents, + visible = visibles, + namespace = namespaces, + scope = scopes, + srcref = srcrefs, + procsrcref = procsrcrefs, + pid = pids + ) + trace[["call"]] <- calls + + class(trace) <- c("rlib_trace_3_0", "rlib_trace", "tbl", "data.frame") + trace + } + + env_label <- function(env) { + nm <- env_name(env) + if (nzchar(nm)) { + nm + } else { + env_address(env) + } + } + + env_address <- function(env) { + class(env) <- "environment" + sub("^.*(0x[0-9a-f]+)>$", "\\1", format(env), perl = TRUE) + } + + env_name <- function(env) { + if (identical(env, err_env)) { + return(env_name(package_env)) + } + if (identical(env, globalenv())) { + return(NA_character_) + } + if (identical(env, baseenv())) { + return("base") + } + if (identical(env, emptyenv())) { + return("empty") + } + nm <- environmentName(env) + if (isNamespace(env)) { + return(nm) + } + nm + } + + # -- S3 methods ------------------------------------------------------- + + format_error <- function(x, trace = FALSE, class = FALSE, + advice = !trace, full = trace, header = TRUE, + ...) { + if (has_cli()) { + format_error_cli(x, trace, class, advice, full, header, ...) + } else { + format_error_plain(x, trace, class, advice, full, header, ...) + } + } + + print_error <- function(x, trace = TRUE, class = TRUE, + advice = !trace, ...) { + writeLines(format_error(x, trace, class, advice, ...)) + } + + format_trace <- function(x, ...) { + if (has_cli()) { + format_trace_cli(x, ...) + } else { + format_trace_plain(x, ...) 
+ } + } + + print_trace <- function(x, ...) { + writeLines(format_trace(x, ...)) + } + + cnd_message <- function(cond) { + paste(cnd_message_(cond, full = FALSE), collapse = "\n") + } + + cnd_message_ <- function(cond, full = FALSE) { + if (has_cli()) { + cnd_message_cli(cond, full) + } else { + cnd_message_plain(cond, full) + } + } + + # -- format API ------------------------------------------------------- + + format_advice <- function(x) { + if (has_cli()) { + format_advice_cli(x) + } else { + format_advice_plain(x) + } + } + + format_call <- function(call) { + if (has_cli()) { + format_call_cli(call) + } else { + format_call_plain(call) + } + } + + format_class <- function(x) { + if (has_cli()) { + format_class_cli(x) + } else { + format_class_plain(x) + } + } + + format_error_heading <- function(x, prefix = NULL) { + if (has_cli()) { + format_error_heading_cli(x, prefix) + } else { + format_error_heading_plain(x, prefix) + } + } + + format_header_line <- function(x, prefix = NULL) { + if (has_cli()) { + format_header_line_cli(x, prefix) + } else { + format_header_line_plain(x, prefix) + } + } + + format_srcref <- function(call, srcref = NULL) { + if (has_cli()) { + format_srcref_cli(call, srcref) + } else { + format_srcref_plain(call, srcref) + } + } + + # -- condition message with cli --------------------------------------- + + cnd_message_robust <- function(cond) { + class(cond) <- setdiff(class(cond), "rlib_error_3_0") + conditionMessage(cond) %||% + (if (inherits(cond, "interrupt")) "interrupt") %||% + "" + } + + cnd_message_cli <- function(cond, full = FALSE) { + exp <- paste0(cli::col_yellow("!"), " ") + add_exp <- is.null(names(cond$message)) + msg <- cnd_message_robust(cond) + + c( + paste0(if (add_exp) exp, msg), + if (inherits(cond$parent, "condition")) { + msg <- if (full && inherits(cond$parent, "rlib_error_3_0")) { + format(cond$parent, + trace = FALSE, + full = TRUE, + class = FALSE, + header = FALSE, + advice = FALSE + ) + } else if 
(inherits(cond$parent, "interrupt")) { + "interrupt" + } else { + conditionMessage(cond$parent) + } + add_exp <- substr(cli::ansi_strip(msg[1]), 1, 1) != "!" + if (add_exp) msg[1] <- paste0(exp, msg[1]) + c(format_header_line_cli(cond$parent, prefix = "Caused by error"), + msg + ) + } + ) + } + + # -- condition message w/o cli ---------------------------------------- + + cnd_message_plain <- function(cond, full = FALSE) { + exp <- "! " + add_exp <- is.null(names(cond$message)) + c( + paste0(if (add_exp) exp, cnd_message_robust(cond)), + if (inherits(cond$parent, "condition")) { + msg <- if (full && inherits(cond$parent, "rlib_error_3_0")) { + format(cond$parent, + trace = FALSE, + full = TRUE, + class = FALSE, + header = FALSE, + advice = FALSE + ) + } else if (inherits(cond$parent, "interrupt")) { + "interrupt" + } else { + conditionMessage(cond$parent) + } + add_exp <- substr(msg[1], 1, 1) != "!" + if (add_exp) { + msg[1] <- paste0(exp, msg[1]) + } + c(format_header_line_plain(cond$parent, prefix = "Caused by error"), + msg + ) + } + ) + } + + # -- printing error with cli ------------------------------------------ + + # Error parts: + # - "Error:" or "Error in " prefix, the latter if the error has a call + # - the call, possibly syntax highlightedm possibly trimmed (?) + # - source ref, with link to the file, potentially in a new line in cli + # - error message, just `conditionMessage()` + # - advice about .Last.error and/or .Last.error.trace + + format_error_cli <- function(x, trace = TRUE, class = TRUE, + advice = !trace, full = trace, + header = TRUE, ...) 
{ + p_class <- if (class) format_class_cli(x) + p_header <- if (header) format_header_line_cli(x) + p_msg <- cnd_message_cli(x, full) + p_advice <- if (advice) format_advice_cli(x) else NULL + p_trace <- if (trace && !is.null(x$trace)) { + c("---", "Backtrace:", format_trace_cli(x$trace)) + } + + c(p_class, + p_header, + p_msg, + p_advice, + p_trace) + } + + format_header_line_cli <- function(x, prefix = NULL) { + p_error <- format_error_heading_cli(x, prefix) + p_call <- format_call_cli(x[["call"]]) + p_srcref <- format_srcref_cli(conditionCall(x), x$procsrcref %||% x$srcref) + paste0(p_error, p_call, p_srcref, if (!is.null(conditionCall(x))) ":") + } + + format_class_cli <- function(x) { + cls <- unique(setdiff(class(x), "condition")) + cls # silence codetools + cli::format_inline("{.cls {cls}}") + } + + format_error_heading_cli <- function(x, prefix = NULL) { + str_error <- if (is.null(prefix)) { + cli::style_bold(cli::col_yellow("Error")) + } else { + cli::style_bold(paste0(prefix)) + } + if (is.null(conditionCall(x))) { + paste0(str_error, ": ") + } else { + paste0(str_error, " in ") + } + } + + format_call_cli <- function(call) { + if (is.null(call)) { + NULL + } else { + cl <- trimws(format(call)) + if (length(cl) > 1) cl <- paste0(cl[1], " ", cli::symbol$ellipsis) + cli::format_inline("{.code {cl}}") + } + } + + format_srcref_cli <- function(call, srcref = NULL) { + ref <- get_srcref(call, srcref) + if (is.null(ref)) return("") + + link <- if (ref$file != "") { + if (Sys.getenv("R_CLI_HYPERLINK_STYLE") == "iterm") { + cli::style_hyperlink( + cli::format_inline("{basename(ref$file)}:{ref$line}:{ref$col}"), + paste0("file://", ref$file, "#", ref$line, ":", ref$col) + ) + } else { + cli::style_hyperlink( + cli::format_inline("{basename(ref$file)}:{ref$line}:{ref$col}"), + paste0("file://", ref$file), + params = c(line = ref$line, col = ref$col) + ) + } + } else { + paste0("line ", ref$line) + } + + cli::col_silver(paste0(" at ", link)) + } + + str_advice <- 
"Type .Last.error to see more details."
+
+format_advice_cli <- function(x) {
+  cli::col_silver(str_advice)
+}
+
+format_trace_cli <- function(x, ...) {
+  x$num <- seq_len(nrow(x))
+
+  scope <- ifelse(
+    is.na(x$namespace),
+    ifelse(x$scope != "", paste0(x$scope, " "), ""),
+    paste0(x$namespace, x$scope)
+  )
+
+  visible <- if ("visible" %in% names(x)) {
+    x$visible
+  } else {
+    rep(TRUE, nrow(x))
+  }
+
+  srcref <- if ("srcref" %in% names(x) || "procsrcref" %in% names(x)) {
+    vapply(
+      seq_len(nrow(x)),
+      function(i) format_srcref_cli(x[["call"]][[i]], x$procsrcref[[i]] %||% x$srcref[[i]]),
+      character(1)
+    )
+  } else {
+    unname(vapply(x[["call"]], format_srcref_cli, character(1)))
+  }
+
+  lines <- paste0(
+    cli::col_silver(format(x$num), ". "),
+    ifelse(visible, "", "| "),
+    scope,
+    vapply(seq_along(x$call), function(i) {
+      format_trace_call_cli(x$call[[i]], x$namespace[[i]])
+    }, character(1)),
+    srcref
+  )
+
+  lines[!visible] <- cli::col_silver(cli::ansi_strip(
+    lines[!visible],
+    link = FALSE
+  ))
+
+  lines
+}
+
+format_trace_call_cli <- function(call, ns = "") {
+  envir <- tryCatch(asNamespace(ns), error = function(e) .GlobalEnv)
+  cl <- trimws(format(call))
+  if (length(cl) > 1) { cl <- paste0(cl[1], " ", cli::symbol$ellipsis) }
+  # Older cli does not have 'envir'.
+  if ("envir" %in% names(formals(cli::code_highlight))) {
+    fmc <- cli::code_highlight(cl, envir = envir)[1]
+  } else {
+    fmc <- cli::code_highlight(cl)[1]
+  }
+  cli::ansi_strtrim(fmc, cli::console_width() - 5)
+}
+
+# ----------------------------------------------------------------------
+
+format_error_plain <- function(x, trace = TRUE, class = TRUE,
+                               advice = !trace, full = trace, header = TRUE,
+                               ...) 
{
+  p_class <- if (class) format_class_plain(x)
+  p_header <- if (header) format_header_line_plain(x)
+  p_msg <- cnd_message_plain(x, full)
+  p_advice <- if (advice) format_advice_plain(x) else NULL
+  p_trace <- if (trace && !is.null(x$trace)) {
+    c("---", "Backtrace:", format_trace_plain(x$trace))
+  }
+
+  c(p_class,
+    p_header,
+    p_msg,
+    p_advice,
+    p_trace)
+}
+
+format_trace_plain <- function(x, ...) {
+  x$num <- seq_len(nrow(x))
+
+  scope <- ifelse(
+    is.na(x$namespace),
+    ifelse(x$scope != "", paste0(x$scope, " "), ""),
+    paste0(x$namespace, x$scope)
+  )
+
+  visible <- if ("visible" %in% names(x)) {
+    x$visible
+  } else {
+    rep(TRUE, nrow(x))
+  }
+
+  srcref <- if ("srcref" %in% names(x) || "procsrcref" %in% names(x)) {
+    vapply(
+      seq_len(nrow(x)),
+      function(i) format_srcref_plain(x[["call"]][[i]], x$procsrcref[[i]] %||% x$srcref[[i]]),
+      character(1)
+    )
+  } else {
+    unname(vapply(x[["call"]], format_srcref_plain, character(1)))
+  }
+
+  lines <- paste0(
+    paste0(format(x$num), ". "),
+    ifelse(visible, "", "| "),
+    scope,
+    vapply(x[["call"]], format_trace_call_plain, character(1)),
+    srcref
+  )
+
+  lines
+}
+
+format_advice_plain <- function(x, ...) 
{
+  str_advice
+}
+
+format_header_line_plain <- function(x, prefix = NULL) {
+  p_error <- format_error_heading_plain(x, prefix)
+  p_call <- format_call_plain(x[["call"]])
+  p_srcref <- format_srcref_plain(conditionCall(x), x$procsrcref %||% x$srcref)
+  paste0(p_error, p_call, p_srcref, if (!is.null(conditionCall(x))) ":")
+}
+
+format_error_heading_plain <- function(x, prefix = NULL) {
+  str_error <- if (is.null(prefix)) "Error" else prefix
+  if (is.null(conditionCall(x))) {
+    paste0(str_error, ": ")
+  } else {
+    paste0(str_error, " in ")
+  }
+}
+
+format_class_plain <- function(x) {
+  cls <- unique(setdiff(class(x), "condition"))
+  paste0("<", paste(cls, collapse = "/"), ">")
+}
+
+format_call_plain <- function(call) {
+  if (is.null(call)) {
+    NULL
+  } else {
+    cl <- trimws(format(call))
+    if (length(cl) > 1) cl <- paste0(cl[1], " ...")
+    paste0("`", cl, "`")
+  }
+}
+
+format_srcref_plain <- function(call, srcref = NULL) {
+  ref <- get_srcref(call, srcref)
+  if (is.null(ref)) return("")
+
+  link <- if (ref$file != "") {
+    paste0(basename(ref$file), ":", ref$line, ":", ref$col)
+  } else {
+    paste0("line ", ref$line)
+  }
+
+  paste0(" at ", link)
+}
+
+format_trace_call_plain <- function(call) {
+  # Truncate multi-line calls before taking the first line.
+  fmc <- trimws(format(call))
+  if (length(fmc) > 1) { fmc <- paste0(fmc[1], " ...") }
+  strtrim(fmc, getOption("width") - 5)
+}
+
+# -- utilities ---------------------------------------------------------
+
+cli_version <- function() {
+  # this loads cli!
+ package_version(asNamespace("cli")[[".__NAMESPACE__."]]$spec[["version"]]) + } + + has_cli <- function() { + "cli" %in% loadedNamespaces() && cli_version() >= "3.3.0" + } + + `%||%` <- function(l, r) if (is.null(l)) r else l + + bytes <- function(x) { + nchar(x, type = "bytes") + } + + process_call <- function(cond) { + cond[c("call", "srcref", "procsrcref")] <- list( + call = if (is.null(cond[["call"]])) { + NULL + } else if (is.character(cond[["call"]])) { + cond[["call"]] + } else { + deparse(cond[["call"]], nlines = 2) + }, + srcref = NULL, + procsrcref = get_srcref(cond[["call"]], cond$procsrcref %||% cond$srcref) + ) + cond + } + + get_srcref <- function(call, srcref = NULL) { + ref <- srcref %||% utils::getSrcref(call) + if (is.null(ref)) return(NULL) + if (inherits(ref, "processed_srcref")) return(ref) + file <- utils::getSrcFilename(ref, full.names = TRUE)[1] + if (is.na(file)) file <- "" + line <- utils::getSrcLocation(ref) %||% "" + col <- utils::getSrcLocation(ref, which = "column") %||% "" + structure( + list(file = file, line = line, col = col), + class = "processed_srcref" + ) + } + + is_interactive <- function() { + opt <- getOption("rlib_interactive") + if (isTRUE(opt)) { + TRUE + } else if (identical(opt, FALSE)) { + FALSE + } else if (tolower(getOption("knitr.in.progress", "false")) == "true") { + FALSE + } else if (tolower(getOption("rstudio.notebook.executing", "false")) == "true") { + FALSE + } else if (identical(Sys.getenv("TESTTHAT"), "true")) { + FALSE + } else { + interactive() + } + } + + no_sink <- function() { + sink.number() == 0 && sink.number("message") == 2 + } + + rstudio_stdout <- function() { + rstudio <- rstudio_detect() + rstudio$type %in% c( + "rstudio_console", + "rstudio_console_starting", + "rstudio_build_pane", + "rstudio_job", + "rstudio_render_pane" + ) + } + + default_output <- function() { + if ((is_interactive() || rstudio_stdout()) && no_sink()) { + stdout() + } else { + stderr() + } + } + + onload_hook <- 
function() { + reg_env <- Sys.getenv("R_LIB_ERROR_REGISTER_PRINT_METHODS", "TRUE") + if (tolower(reg_env) != "false") { + registerS3method("format", "rlib_error_3_0", format_error, baseenv()) + registerS3method("format", "rlib_trace_3_0", format_trace, baseenv()) + registerS3method("print", "rlib_error_3_0", print_error, baseenv()) + registerS3method("print", "rlib_trace_3_0", print_trace, baseenv()) + registerS3method("conditionMessage", "rlib_error_3_0", cnd_message, baseenv()) + } + } + + native_name <- function(x) { + if (inherits(x, "NativeSymbolInfo")) { + x$name + } else { + format(x) + } + } + + # There is no format() for 'name' in R 3.6.x and before + format_name <- function(x) { + if (is.name(x)) { + as.character(x) + } else { + format(x) + } + } + + # -- public API -------------------------------------------------------- + + err_env <- environment() + parent.env(err_env) <- baseenv() + + structure( + list( + .internal = err_env, + new_cond = new_cond, + new_error = new_error, + throw = throw, + throw_error = throw_error, + chain_error = chain_error, + chain_call = chain_call, + chain_clean_call = chain_clean_call, + add_trace_back = add_trace_back, + process_call = process_call, + onload_hook = onload_hook, + is_interactive = is_interactive, + format = list( + advice = format_advice, + call = format_call, + class = format_class, + error = format_error, + error_heading = format_error_heading, + header_line = format_header_line, + srcref = format_srcref, + trace = format_trace + ) + ), + class = c("standalone_errors", "standalone")) +}) + +# These are optional, and feel free to remove them if you prefer to +# call them through the `err` object. 
+ +new_cond <- err$new_cond +new_error <- err$new_error +throw <- err$throw +throw_error <- err$throw_error +chain_error <- err$chain_error +chain_call <- err$chain_call +chain_clean_call <- err$chain_clean_call + +# nocov end diff --git a/R/gh.R b/R/gh.R new file mode 100644 index 0000000..b9877cb --- /dev/null +++ b/R/gh.R @@ -0,0 +1,75 @@ + +parse_gh_url <- function(url) { + pcs <- parse_url(url) + host <- pcs$host + if (pcs$host == "github.com") { + api <- paste0(pcs$protocol, "://api.github.com") + graphql <- paste0(pcs$protocol, "://api.github.com/graphql") + } else { + api <- paste0(pcs$protocol, "://", pcs$host, "/api/v3") + graphql <- paste0(pcs$protocol, "://", pcs$host, "/api/graphql") + } + cmps <- strsplit(pcs$path, "/", fixed = TRUE)[[1]] + if (cmps[1] == "") cmps <- cmps[-1] + if (length(cmps) < 2) cmps <- c(cmps, "", "")[1:2] + cmps[2] <- sub("[.]git$", "", cmps[2]) + list( + host = host, + api = api, + graphql = graphql, + user = cmps[1], + repo = cmps[2], + slug = paste0(cmps[1], "/", cmps[2]) + ) +} + +gh_headers <- function(token) { + c( + Accept = "application/vnd.github+json", + Authorization = paste0("Bearer ", token) + ) +} + +gh_query_process_response <- function(resp) { + if (grepl("^application/json\\b", resp$type)) { + resp$content <- jsonlite::fromJSON( + rawToChar(resp$content), + simplifyVector = FALSE + ) + } + resp$headers <- curl::parse_headers_list(resp$headers) + resp +} + +gh_rest_get <- function(host, endpoint, token) { + synchronise(async_gh_rest_get(host, endpoint, token = token)) +} + +async_gh_rest_get <- function(host, endpoint, token) { + url <- paste0(host, endpoint) + headers <- gh_headers(token) + http_get(url, headers = headers)$ + then(gh_query_process_response) +} + +gh_rest_post <- function(host, endpoint, token, data) { + synchronise(async_gh_rest_post(host, endpoint, token, data)) +} + +async_gh_rest_post <- function(host, endpoint, token, data) { + url <- paste0(host, endpoint) + headers <- gh_headers(token) + 
http_post(url, data = data, headers = headers)$
+    then(gh_query_process_response)
+}
+
+gh_gql_get <- function(host, query, token) {
+  synchronise(async_gh_gql_get(host, query, token))
+}
+
+async_gh_gql_get <- function(host, query, token) {
+  headers <- gh_headers(token)
+  data <- jsonlite::toJSON(list(query = query), auto_unbox = TRUE)
+  http_post(host, headers = headers, data = data)$
+    then(gh_query_process_response)
+}
diff --git a/R/handle.R b/R/handle.R
deleted file mode 100644
index 2bc968d..0000000
--- a/R/handle.R
+++ /dev/null
@@ -1,26 +0,0 @@
-
-handle_id <- function(x) {
-  if (is.character(x)) {
-    unname(sub("^.*/([^/]+)$", "\\1", x, perl = TRUE))
-  } else if (inherits(x, "rhub_handle")) {
-    unname(vapply(x, "[[", "", "id"))
-  } else {
-    stop("Invalid R-hub check id")
-  }
-}
-
-#' @export
-
-print.rhub_handle <- function(x, ...) {
-  id <- handle_id(x)
-  if (length(id) == 1) {
-    cat("R-hub check: ", id, "\n", sep = "")
-
-  } else {
-    cat(
-      "R-hub checks:\n",
-      paste(" ", id, collapse = "\n")
-    )
-  }
-  invisible(x)
-}
diff --git a/R/http-cache.R b/R/http-cache.R
new file mode 100644
index 0000000..b1f917d
--- /dev/null
+++ b/R/http-cache.R
@@ -0,0 +1,18 @@
+
+the_cache <- new.env(parent = emptyenv())
+
+async_cached_http_get <- function(url, headers = character(),
+                                  options = list()) {
+  hash <- cli::hash_md5(paste0("http-get-", url))
+  if (hash %in% names(the_cache)) {
+    async_constant(the_cache[[hash]])
+  } else {
+    http_get(url, headers = headers, options = options)$
+      then(http_stop_for_status)$
+      then(function(response) {
+        json <- rawToChar(response$content)
+        the_cache[[hash]] <- json
+        json
+      })
+  }
+}
diff --git a/R/last.R b/R/last.R
deleted file mode 100644
index 62af220..0000000
--- a/R/last.R
+++ /dev/null
@@ -1,19 +0,0 @@
-
-#' The last rhub check of this R session
-#'
-#' `rhub` caches the id(s) of the last submission. This can be retrieved
-#' with `last_check`.
-#'
-#' @return An rhub_check object.
-#' -#' @export -#' @examples -#' \dontrun{ -#' check("packagedir") -#' last_check() -#' last_check()$livelog() -#' } - -last_check <- function() { - package_data$last_handle -} diff --git a/R/list.R b/R/list.R deleted file mode 100644 index 3a28dfd..0000000 --- a/R/list.R +++ /dev/null @@ -1,139 +0,0 @@ - -#' List all checks for an email address -#' -#' @param email Email address. By default it is guessed with -#' [whoami::email_address()]. The address must be validated, see -#' [validate_email()]. -#' @param package `NULL`, or a character scalar. Can be used to restrict -#' the search for a single package. -#' @param howmany How many check groups (checks submitted simultaneously) -#' to show. The current API limit is 20. -#' @return A [tibble::tibble] with columns: -#' * package Name of the package. -#' * version Package version. -#' * result: More detailed result of the check. Can be `NULL` for errors. -#' This is a list column with members: `status`, `errors`, `warnings`, -#' `notes`. -#' * group: R-hub check group id. -#' * id: `R-hub check id. -#' * platform_name: Name of the check platform. -#' * build_time: Build time, a [difftime] object. -#' * submitted: Time of submission. -#' * started: Time of the check start. -#' * platform: Detailed platform data, a list column. -#' * builder: Name of the builder machine. -#' * status Status of the check. Possible values: -#' - `created`: check job was created, but not running yet. -#' - `in-progress`: check job is running. -#' - `parseerror`: internal R-hub error parsing the check results. -#' - `preperror`: check error, before the package check has started. -#' - `aborted`: aborted by admin or user. -#' - `error`: failed check. (Possibly warnings and notes as well.) -#' - `warning`: `R CMD check` reported warnings. (Possibly notes as well.) -#' - `note`: `R CMD check` reported notes. -#' - `ok`: successful check. -#' * email: Email address of maintainer / submitter. 
-#' -#' @export -#' @seealso list_package_checks -#' @examples -#' \dontrun{ -#' ch <- list_my_checks() -#' ch -#' ch$details() -#' } - -list_my_checks <- function(email = email_address(), package = NULL, - howmany = 20) { - - assert_that(is_email(email)) - assert_that(is_string_or_null(package)) - assert_that(is_count(howmany)) - - response <- if (is.null(package)) { - query( - "LIST BUILDS EMAIL", - params = list(email = email, token = email_get_token(email))) - } else { - query( - "LIST BUILDS PACKAGE", - params = list(email = email, package = package, - token = email_get_token(email))) - } - - if (length(response) > howmany) response <- response[seq_len(howmany)] - - make_check_list(response) -} - - -#' List checks of a package -#' -#' @param package Directory of an R package, or a package tarball. -#' @param email Email address that was used for the check(s). -#' If `NULL`, then the maintainer address is used. -#' @param howmany How many checks to show. The current maximum of the API -#' is 20. 
-#' @inherit list_my_checks return -#' -#' @export -#' @importFrom desc desc_get -#' @examples -#' \dontrun{ -#' ch <- list_package_checks() -#' ch -#' ch$details(1) -#' } - -list_package_checks <- function(package = ".", email = NULL, howmany = 20) { - - assert_that(is_pkg_dir_or_tarball(package)) - if (is.null(email)) email <- get_maintainer_email(package) - assert_that(is_email(email)) - assert_that(is_count(howmany)) - - package <- unname(desc_get("Package", file = package)) - - response <- query( - "LIST BUILDS PACKAGE", - params = list(email = email, package = package, - token = email_get_token(email)) - ) - - if (length(response) > howmany) response <- response[seq_len(howmany)] - - make_check_list(response) -} - -make_check_list <- function(response) { - data <- unlist(response, recursive = FALSE) - - df <- tibble::tibble( - package = map_chr(data, "[[", "package"), - version = map_chr(data, "[[", "version"), - result = column_result(map(data, function(x) x$result)), - group = column_group_id(map_chr(data, "[[", "group")), - id = column_id(map_chr(data, "[[", "id")), - platform_name = map_chr(data, function(x) x$platform$name), - build_time = column_dt(map_int(data, function(x) { - suppressWarnings(as.integer(x$build_time %||% NA_integer_)) - })), - submitted = column_time(map_chr(data, "[[", "submitted")), - started = column_time(map_chr(data, function(x) x$started %||% NA_character_)), - platform = map(data, "[[", "platform"), - builder = map_chr(data, function(x) x$builder_machine %||% NA_character_), - status = column_status(map_chr(data, "[[", "status")), - email = map_chr(data, "[[", "email") - ) - - cache_put_ids(df$id) - cache_put_group_ids(df$group) - - df -} - -column_time <- function(x) { - res <- rep(as.POSIXct(NA_character_), length(x)) - res[! 
is.na(x)] <- parse_iso_8601(x[!is.na(x)]) - res -} diff --git a/R/livelog.R b/R/livelog.R deleted file mode 100644 index a4e3cb9..0000000 --- a/R/livelog.R +++ /dev/null @@ -1,129 +0,0 @@ - -check_livelog <- function(self, private, which) { - assert_that(is_count(which) || is_string(which)) - if (is_count(which) && (which < 1 || which > length(private$ids_))) { - stop("Unknown check selected") - } - if (is.character(which) && ! which %in% private$ids_) { - stop("Unknow check selected") - } - - make_streamer(private$ids_[[which]], make_status_parser) - self$update() - invisible(self) -} - -make_streamer <- function(id, parser_factory) { - - if (length(id) > 1) { - warning("Only first submission is streamed") - id <- id[1] - } - - start <- 0 - parser <- parser_factory() - - spinner <- c("-", "/", "|", "\\") - spin <- function() { - cat("\r", spinner[1], sep = "") - spinner <<- c(spinner[-1], spinner[1]) - } - - errors <- 100 - - repeat { - response <- tryCatch( - query( - "LIVE LOG", - params = list(id = id), - query = list(start = start) - ), - error = function(e) { - if (errors > 0) { - errors <- errors - 1 - list(text = list(), more = TRUE, size = start) - } else { - stop("Internal R-hub error") - list(text = list(), more = FALSE) - } - } - ) - - for (i in response$text) parser(i) - if (!response$more) break; - start <- response$size - for (i in 1:5) { Sys.sleep(0.1); spin() } - } - - cat("\r \n") - - if (grepl( - "^(Finished: ABORTED|Finished: ERROR)$", - response$text[[length(response$text)]] - )) { - cat(response$text[[length(response$text)]], "\n", sep = "") - } -} - -#' @importFrom rcmdcheck rcmdcheck - -make_status_parser <- function() { - - first <- TRUE - checking <- FALSE - - ## This is to make sure that `rhub` works with older and newer - ## rcmdcheck versions as well. Older versions expect a call for each - ## line. Newer versions just take a block of output. 
- formatter <- try( - ("rcmdcheck" %:::% "check_callback")(top_line = FALSE), - silent = TRUE - ) - if (inherits(formatter, "try-error")) { - cb <- ("rcmdcheck" %:::% "block_callback")(top_line = FALSE) - formatter <- function(x) cb(paste0(x, "\n")) - } - - function(x) { - - ## Make sure we are at the beginning of the line - cat("\r") - - if (first) { - header_line("Build started") - first <<- FALSE - } - - ## Get rid of potential \r characters - x <- gsub("[\r]+", "", x) - - ## Checking (already, and still) - - if (checking) { - if (grepl("^Status: ", x)) { - checking <<- FALSE - return(formatter(x)) - } else { - return(formatter(x)) - } - } - - ## Not checking (yet, or any more) - - if (grepl("^>>>>>=====+ Running R CMD check", x)) { - checking <<- TRUE - x <- sub("^>>>>>=+ ", "", x) - header_line(x) - - } else if (grepl("^>>>>>=====", x)) { - x <- sub("^>>>>>=+ ", "", x) - header_line(x) - - } else if (grepl("^\\+R-HUB-R-HUB-R-HUB", x)) { - x <- sub("^\\+R-HUB-R-HUB-R-HUB", "", x) - - } else { - ## print nothing - } - } -} diff --git a/R/local.R b/R/local.R deleted file mode 100644 index f7faeb3..0000000 --- a/R/local.R +++ /dev/null @@ -1,159 +0,0 @@ - -#' Run a package check locally, in a Docker container -#' -#' @description Run a package check locally, in a Docker container. UNTESTED -#' ON WINDOWS, bug reports welcome. :-) -#' -#' @param quiet Whether to print the check output -#' @param image Docker image to use. If `NULL`, a default image is selected. -#' @param valgrind Whether to run the check with Valgrind. -#' @param timeout Timeout for a check, a `difftime` object or a scalar -#' that will be interpreted as seconds. -#' @param artifacts Where to copy the build artifacts after the build. -#' @inheritParams check -#' @return An `rcmdcheck::rcmdcheck` object, with extra fields: -#' * `all_output`: all output from the check, both standard output and -#' error. -#' * `container_name`: name of the Docker container that performed the -#' build. 
It is a random name. -#' * `artifacts`: directory of build artifacts. -#' -#' @export -#' @importFrom withr with_dir -#' @importFrom processx run -#' @importFrom utils tail -#' @importFrom uuid UUIDgenerate -#' -#' @details You'll need to have bash and Docker installed. - -local_check_linux <- function(path = ".", quiet = FALSE, image = NULL, - valgrind = FALSE, check_args = character(), - env_vars = character(), timeout = Inf, artifacts = tempfile()) { - - ## Check that it is a package - path <- normalizePath(path) - assert_that(is_pkg_dir_or_tarball(path)) - assert_that(is_flag(quiet)) - assert_that(is.null(image) || is.character(image)) - assert_that(is_flag(valgrind)) - assert_that(is_named(env_vars)) - assert_that(is.character(env_vars)) - assert_that(is_timeout(timeout <- as_timeout(timeout))) - assert_that(is.character(artifacts)) - - if ((bash <- Sys.which("bash")) == "" || Sys.which("docker") == "") { - stop("You need bash and Docker to run local Linux checks") - } - - ## Build the tar.gz, if needed - if (file.info(path)$isdir) { - header_line("Building package") - pkg_targz <- build_package(path, tmpdir <- tempfile()) - } else { - pkg_targz <- path - } - - ## Add valgrind to check_args - check_args <- c( - check_args, - if (valgrind) "--use-valgrind" - ) - - dir.create(artifacts, showWarnings = FALSE, recursive = TRUE) - artifacts <- normalizePath(artifacts) - - container_name <- UUIDgenerate() - if (!quiet) { - cat(sep = "", "\nContainer name: ", container_name, "-2", "\n") - cat("It will _not_ be removed after the check.\n\n") - } - - ## Arguments - env_str <- paste(paste0(names(env_vars), "=", env_vars), collapse = "\n") - args <- c( - "-k", - if (!is.null(image)) c("-i", image), - if (length(check_args)) c("-c", paste(check_args, collapse = " ")), - if (length(env_vars)) c("-e", env_str), - c("-a", artifacts), - c("-d", container_name), - pkg_targz) - - output <- character() - callback <- function(x, proc) output <<- c(output, x) - - ## Run it - wd <- 
system.file(package = .packageName, "bin") - result <- with_dir( - wd, - run(bash, c(file.path(wd, "rhub-linux.sh"), args), echo = TRUE, - stdout_line_callback = callback, stderr_line_callback = callback, - timeout = timeout, spinner = FALSE) - ) - - ## TODO: better error object - if (result$timeout) stop("Check timed out") - - if (!quiet) cat("Artifacts in", artifacts, "\n") - if (!quiet) cat(sep = "", "Container name: ", container_name, "-2", "\n\n") - - ## Try to parse as R CMD check result - check_start <- grep("^>>>>>=====+ Running R CMD check", output)[1] - if (is.na(check_start)) stop("Failed before check started") - check_output <- tail(output, -check_start) - check_result <- tryCatch( - rcmdcheck::parse_check(text = check_output), - error = function(e) NULL) - - result <- list( - check_result = check_result, - output = output, - image = image, - artifacts = artifacts, - container_name = paste0(container_name, "-2")) - class(result) <- "rhub_local_check" - result -} - -#' @importFrom utils head -#' @export - -print.rhub_local_check <- function(x, ...) { - cat0("\n") - if (!is.null(x$image)) cat0(symbol$bullet, " image: ", x$image, "\n") - if (!is.null(x$output)) { - cat0(symbol$bullet, " output:\n") - cat(paste0(" ", c(head(x$output, 5), "...")), sep = "\n") - } - cat0(symbol$bullet, " container_name: ", x$container_name, "\n") - if (!is.null(x$artifacts)) { - cat0(symbol$bullet, " artifacts: \n ", x$artifacts, "\n") - } - if (!is.null(x$check_result)) { - cat0(symbol$bullet, " check_result:\n") - print(x$check_result) - } -} - -#' List R-hub Docker images -#' -#' The images are pretty-printed in a short format. Use -#' `as.data.frame()` to get all available platform metadata. -#' -#' @export - -local_check_linux_images <- function() { - plat <- platforms() - plat <- plat[!is.na(plat$`docker-image`), ] - class(plat) <- c("rhub_docker_images", class(plat)) - plat -} - -#' @export - -print.rhub_docker_images <- function(x, ...) 
{ - res <- paste(cyan(paste0("rhub/", x$`docker-image`)), - green(x$description), sep = ":\n ") - cat(res, sep = "\n") - invisible(x) -} diff --git a/R/platform.R b/R/platform.R deleted file mode 100644 index 9ddfb12..0000000 --- a/R/platform.R +++ /dev/null @@ -1,66 +0,0 @@ - -#' List all R-hub platforms -#' -#' The platforms are pretty-printed in a short format. Use -#' `as.data.frame(platforms())` to get all available platform metadata. -#' -#' @export -#' @importFrom jsonlite fromJSON -#' @importFrom crayon green cyan -#' @examples -#' \dontrun{ -#' platforms() -#' as.data.frame(platforms()) -#' } - -platforms <- function() { - json <- query("GET PLATFORMS", as = "text") - pls <- fromJSON(json, simplifyDataFrame = TRUE) - pls <- pls[order(pls$name), , drop = FALSE] - class(pls) <- c("rhub_platforms", class(pls)) - pls -} - -#' @export - -print.rhub_platforms <- function(x, ...) { - res <- paste(cyan(x$name), green(x$description), sep = ":\n ") - cat(res, sep = "\n") - invisible(x) -} - -match_platform <- function(platform) { - all_platforms <- platforms() - if (is.null(platform)) { - if (is_interactive()) { - select_platform_interactively(all_platforms) - } else { - all_platforms$name[1] - } - - } else { - if (! all(platform %in% all_platforms$name)) { - stop("Unknown R-hub platform, see rhub::platforms() for a list") - } - platform - } -} - -select_platform_interactively <- function(platforms) { - - choices <- paste0( - platforms$description, - crayon::green(" (", platforms$name, ")", sep = "") - ) - - cat("\n") - title <- crayon::yellow(paste0( - symbol$line, symbol$line, - " Choose build platform" - )) - ch <- menu(choices, title = title) - cat("\n") - if (ch == 0) stop("R-hub check aborted", call. 
= FALSE) - - platforms$name[ch] -} diff --git a/R/platforms.R b/R/platforms.R new file mode 100644 index 0000000..b62c324 --- /dev/null +++ b/R/platforms.R @@ -0,0 +1,259 @@ + +get_platforms <- function() { + url_platforms <- "https://raw.githubusercontent.com/r-hub/actions/v1/setup/platforms.json" + url_containers <- "https://r-hub.github.io/containers/manifest.json" + ret <- synchronise(when_all( + async_cached_http_get(url_platforms), + async_cached_http_get(url_containers) + )) + ret +} + +#' List R-hub platforms +#' +#' @return Data frame with columns: +#' * `name`: platform name. Use this in the `platforms` argument of +#' [rhub_check()]. +#' * `aliases`: alternative platform names. They can also be used in the +#' `platforms` argument of [rhub_check()]. +#' * `type`: `"os"` or `"container"`. +#' * `os_type`: Linux, macOS or Windows currently. +#' * `container`: URL of the container image for container platforms. +#' * `github_os`: name of the OS on GitHub Actions for non-container +#' platforms. +#' * `r_version`: R version string. If `"*"` then any supported R version +#' can be selected for this platform. +#' * `os_name`: name of the operating system, including Linux distribution +#' name and version for container actions. +#' +#' @export + +rhub_platforms <- function() { + ret <- get_platforms() + platforms <- jsonlite::fromJSON(ret[[1]]) + containers <- jsonlite::fromJSON(ret[[2]], simplifyVector = FALSE)$containers + + res <- data_frame( + name = platforms[["name"]], + aliases = lapply(zip(platforms[["cran-names"]], platforms[["aliases"]]), unique), + type = platforms[["type"]], + os_type = platforms[["os-type"]], + container = platforms[["container"]], + github_os = platforms[["os"]], + r_version = platforms[["r-version"]], + os_name = NA_character_ + ) + + wcnt <- res$type == "container" + cnt_tags <- vcapply(containers, "[[", "tag") + res$r_version[wcnt] <- vcapply(res$container[wcnt], function(x) { + if (! 
x %in% cnt_tags) return(NA_character_)
+    sess <- containers[[match(x, cnt_tags)]]$builds[[1]]$`sessionInfo()`
+    strsplit(sess, "\n", fixed = TRUE)[[1]][1]
+  })
+
+  res$os_name[wcnt] <- vcapply(res$container[wcnt], function(x) {
+    if (! x %in% cnt_tags) return(NA_character_)
+    osr <- containers[[match(x, cnt_tags)]]$builds[[1]]$`/etc/os-release`
+    osr <- strsplit(osr, "\n", fixed = TRUE)[[1]]
+    pn <- grep("^PRETTY_NAME", osr, value = TRUE)[1]
+    pn <- sub("^PRETTY_NAME=", "", pn)
+    pn <- unquote(pn)
+    pn
+  })
+
+  res <- res[order(res$type == "container", res$name), ]
+
+  res <- add_class(res, "rhub_platforms")
+  res
+}
+
+#' @export
+
+format.rhub_platforms <- function(x, ...) {
+  ret <- character()
+  wvms <- which(x$type == "os")
+  wcts <- which(x$type == "container")
+  counter <- 1L
+  grey <- cli::make_ansi_style("gray70", grey = TRUE)
+  if (length(wvms)) {
+    vm <- if (has_emoji()) "\U1F5A5 " else "[VM] "
+    ret <- c(ret, cli::rule("Virtual machines"))
+    for (p in wvms) {
+      ret <- c(
+        ret,
+        paste0(
+          format(counter, width = 2), " ", vm, " ",
+          cli::style_bold(cli::col_blue(x$name[p]))
+        ),
+        if (x$r_version[p] == "*") {
+          grey(paste0("    All R versions on GitHub Actions ", x$github_os[p]))
+        } else {
+          x$r_version[p]
+        }
+      )
+      counter <- counter + 1L
+    }
+  }
+  if (length(wcts)) {
+    if (length(ret)) ret <- c(ret, "")
+    ret <- c(ret, cli::rule("Containers"))
+    for (p in wcts) {
+      ct <- if (has_emoji()) "\U1F40B" else "[CT] "
+      rv <- x$r_version[p]
+      os <- x$os_name[p]
+      al <- sort(unique(x$aliases[[p]]))
+      al <- if (length(al)) {
+        grey(paste0(" [", paste(al, collapse = ", "), "]"))
+      } else {
+        ""
+      }
+      ret <- c(
+        ret,
+        paste0(
+          format(counter, width = 2), " ", ct, " ",
+          cli::style_bold(cli::col_blue(x$name[p])),
+          al
+        ),
+        grey(paste0(
+          "    ",
+          if (!is.na(rv)) rv,
+          if (!is.na(rv) && !is.na(os)) " on ",
+          if (!is.na(os)) os
+        )),
+        cli::style_italic(grey(paste0("    ", x$container[p])))
+      )
+      counter <- counter + 1L
+    }
+  }
+
+  ret
+}
+
+#' @export
+
+print.rhub_platforms <- function(x, ...) { + writeLines(cli::ansi_strtrim(format(x, ...))) +} + +#' @export + +`[.rhub_platforms` <- function(x, i, j, drop = FALSE) { + class(x) <- setdiff(class(x), "rhub_platforms") + NextMethod("[") +} + +#' @export + +summary.rhub_platforms <- function(object, ...) { + class(object) <- c("rhub_platforms_summary", class(object)) + object +} + +#' @export + +format.rhub_platforms_summary <- function(x, ...) { + num <- format(seq_len(nrow(x))) + icon <- if (!has_emoji()) { + ifelse(x$type == "os", "[VM]", "[CT]") + } else { + ifelse(x$type == "os", "\U1F5A5", "\U1F40B") + } + name <- cli::style_bold(cli::col_blue(x$name)) + rv <- abbrev_version(x$r_version) + os <- ifelse( + is.na(x$os_name), + paste0(x$github_os, " on GitHub"), + x$os_name + ) + + lines <- paste( + ansi_align_width(num), + ansi_align_width(icon), + ansi_align_width(name), + ansi_align_width(rv), + ansi_align_width(os) + ) + + trimws(lines, which = "right") +} + +#' @export + +print.rhub_platforms_summary <- function(x, ...) { + writeLines(cli::ansi_strtrim(format(x, ...))) +} + +abbrev_version <- function(x) { + sel <- grepl("^R Under development", x) + x[sel] <- sub("R Under development [(]unstable[)]", "R-devel", x[sel]) + + sel <- grepl("R version [0-9.]+ Patched", x) + x[sel] <- sub("R version ([0-9.]+) Patched", "R-\\1 (patched)", x[sel]) + + sel <- grepl("R version [0-9.]+", x) + x[sel] <- sub("R version ([0-9.]+)", "R-\\1", x[sel]) + + x[x == "*"] <- "R-* (any version)" + + x +} + +select_platforms <- function(platforms = NULL) { + tryCatch( + plat <- rhub_platforms(), + error = function(e) { + throw(parent = e, pkg_error( + "Failed to download the list of R-hub platforms.", + i = "Make sure that you are online and GitHub is also online." 
+ )) + } + ) + + if (is.null(platforms)) { + if (!is_interactive()) { + throw(pkg_error( + "{.arg platforms} argument is missing for {.fun rhub_check}.", + i = "You need to specify {.arg platforms} in non-interactive + sessions." + )) + } + cli::cli_text() + cli::cli_text( + "Available platforms + (see {.code rhub::rhub_platforms()} for details):" + ) + cli::cli_text() + cli::cli_verbatim(paste( + cli::ansi_strtrim(format(summary(plat))), + collapse = "\n" + )) + pnums <- trimws(readline( + prompt = "\nSelection (comma separated numbers, 0 to cancel): " + )) + if (pnums == "" || pnums == "0") { + throw(pkg_error("R-hub check cancelled")) + } + pnums <- unique(trimws(strsplit(pnums, ",", fixed = TRUE)[[1]])) + pnums2 <- suppressWarnings(as.integer(pnums)) + bad <- is.na(pnums2) | pnums2 < 1 | pnums2 > nrow(plat) + if (any(bad)) { + throw(pkg_error( + "Invalid platform number{?s}: {.val {pnums[bad]}}." + )) + } + platforms <- plat$name[pnums2] + + } else { + platforms <- unique(platforms) + bad <- !platforms %in% c(unlist(plat$name), unlist(plat$aliases)) + if (any(bad)) { + throw(pkg_error( + "Unknown platform{?s}: {.val {platforms[bad]}}.", + i = "See {.run rhub::rhub_platforms()} for the list of platforms." + )) + } + } + + platforms +} diff --git a/R/print-status.R b/R/print-status.R deleted file mode 100644 index ff2a523..0000000 --- a/R/print-status.R +++ /dev/null @@ -1,76 +0,0 @@ - -check_print <- function(self, private) { - self$update() - for (x in private$status_) check_print2(x) - invisible(self) -} - -#' @importFrom parsedate parse_iso_8601 -#' @importFrom prettyunits pretty_ms -#' @importFrom crayon make_style red yellow - -check_print2 <- function(x) { - - title_line(paste0(x$package, " ", x$version, ": ", toupper(x$status))) - - greyish <- make_style("darkgrey") - - submitted_time <- as.numeric(Sys.time() - parse_iso_8601(x$submitted), units = "secs") - submitted <- if (submitted_time > 0) { - paste(pretty_ms(submitted_time * 1000), "ago") - } else { - "just now" 
- } - - build_time <- if (!is.null(x$build_time) && x$build_time != 0) { - paste0(greyish(" Build time: "), pretty_ms(x$build_time), "\n") - } - - cat( - sep = "", - greyish(" Build ID: "), x$id, "\n", - greyish(" Platform: "), x$platform$description, "\n", - greyish(" Submitted: "), submitted, "\n", - build_time %||% "", - "\n" - ) - - ## If not done, then this is all we do - if (is.null(build_time)) return(invisible(x)) - - ## R CMD check error - if (tolower(x$status) != "preperror" && tolower(x$status) != "aborted") { - makeshift <- structure( - list( - package = x$package, - version = x$version, - rversion = x$platform$rversion, - output = list(stdout = x$check_output, stderr = "", status = 0), - platform = x$platform$name, - notes = x$result$notes, - warnings = x$result$warnings, - errors = x$result$errors - ), - class = "rcmdcheck" - ) - print(makeshift, header = FALSE) - - ## Or we never got to R CMD check - } else { - clog <- gsub("+R-HUB-R-HUB-R-HUB", "", fixed = TRUE, x$preperror_log) - clog <- gsub("\necho >>>>>=========[^\n]*\n", "\n", clog) - clog <- gsub( - "\n>>>>>=======* (.+)\n", - yellow(sep = "", "\n\n", symbol$line, " \\1\n\n"), - clog, - perl = TRUE - ) - - cat(red(paste0(symbol$pointer, " Build failed during preparation or aborted\n"))) - cat(greyish("\n[...]\n")) - cat(greyish(clog)) - cat("\n") - } - - invisible(x) -} diff --git a/R/print.R b/R/print.R deleted file mode 100644 index 4e31719..0000000 --- a/R/print.R +++ /dev/null @@ -1,56 +0,0 @@ - -#' @importFrom crayon make_style -#' @importFrom cli symbol - -header_line <- function(x) { - - greyish <- make_style("darkgrey") - - cat( - paste0("\r", greyish(symbol$line), " "), - greyish(x), - "\n", - sep = "" - ) -} - -#' @importFrom crayon yellow -#' @importFrom cli symbol - -title_line <- function(x) { - - cat( - sep ="", - "\n", - yellow(paste0(symbol$line, symbol$line, " ", x)), - "\n\n" - ) -} - -#' @importFrom cli make_ansi_style style_bold style_inverse -#' col_red col_blue col_green 
- -status_style_created <- function(x) { - x -} - -status_style_in_progress <- function(x) { - x -} - -status_style_error <- function(x) { - style_inverse(style_bold(col_red(x))) -} - -status_style_aborted <- function(x) { - style_bold(col_blue(x)) -} - -status_style_note <- function(x) { - orange <- make_ansi_style("orange") - style_bold(orange(x)) -} - -status_style_ok <- function(x) { - style_inverse(style_bold(col_green(x))) -} diff --git a/R/rc.R b/R/rc.R new file mode 100644 index 0000000..a4d6ec8 --- /dev/null +++ b/R/rc.R @@ -0,0 +1,390 @@ +# ========================================================================= +# API +# ========================================================================= + +#' Request a new token for submissions to the R Consortium runners +#' +#' To build and check R packages on the RC runners of R-hub, you'll need +#' to verify your email address. R-hub will send a token to your email +#' address, and this token will be stored on your computer. +#' +#' You need to store a token on every computer you want to submit +#' jobs from, either using the same token from the email you got, or +#' you can request additional tokens for the new machines. Your old token +#' will stay valid as well. +#' +#' If you already have a token from a previous version of R-hub, you can +#' reuse that and you don't need to do anything. +#' +#' Run +#' ``` +#' rhub:::email_file() +#' ``` +#' to see the file rhub uses to store your tokens. +#' +#' @param email Email address to verify. We try to detect this, but +#' if the detection fails, you can specify it explicitly. +#' If this argument is missing (or `NULL`), then you can specify it +#' interactively. +#' @param token Token to add. If you already received a token in an email +#' from R-hub, you can specify that here.
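+#'
+#' @examples
+#' \dontrun{
+#' # Hedged sketch; "maintainer@example.com" is a placeholder address.
+#' # Request a new token; R-hub emails it, and you paste it at the prompt:
+#' rc_new_token(email = "maintainer@example.com")
+#' # Or store a token you already received in an earlier email:
+#' rc_new_token(email = "maintainer@example.com", token = "<token from email>")
+#' }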
+#' +#' @export +#' @family RC runners API + +rc_new_token <- function(email = NULL, token = NULL) { + if (is.null(email) || is.null(token)) { + if (!is_interactive()) { + throw(pkg_error("No email or no token and not in interactive mode")) + } + return(rc_new_token_interactive(email, token)) + } + + email_add_token(email, token) + cli::cli_alert_success("Added token for {.val {email}}.", wrap = TRUE) + cli::cli_alert_info("R-hub tokens are stored at {.path {email_file()}}.") + invisible() +} + +# ------------------------------------------------------------------------- +#' Show your tokens for the R Consortium runners +#' +#' Lists all tokens stored on the local machine. +#' +#' @return Data frame with string columns `email` and `token`. +#' @export +#' @family RC runners API + +rc_list_local_tokens <- function() { + list_validated_emails2(message = FALSE, msg_if_empty = FALSE) +} + +# ------------------------------------------------------------------------- +#' List your repositories created by the R Consortium runners +#' +#' Lists repositories created by [rc_submit()] submissions. +#' +#' @param email Email address. We try to detect this, but +#' if the detection fails, you can specify it explicitly. +#' +#' @return Data frame with columns: +#' +#' * `repo_name`: Name of the repository. +#' * `repo_url`: URL of the repository. +#' * `builds_url`: URL to the builds of the repository. +#' +#' Additional columns and customized printing will probably be added +#' later to the result. +#' +#' @export +#' @family RC runners API + +rc_list_repos <- function(email = NULL) { + email <- email %||% guess_email(message = TRUE) + resp <- query("/repos", headers = get_auth_header(email)) + jsonlite::fromJSON(rawToChar(resp$content)) +} + +# ------------------------------------------------------------------------- +#' Submit a package to the R Consortium runners +#' +#' @param path Path to package file or package directory. +#' @param platforms Platforms to check.
See [rhub_platforms()] for a +#' current list. If not specified, then you can select the platforms +#' interactively. Must be specified in non-interactive sessions. +#' @param email Email address. You must have a token on the local machine +#' that corresponds to the email address, see [rc_new_token()]. +#' If not specified (or `NULL`) then the email address of the package +#' maintainer is used. +#' @param confirmation You must set this to `TRUE` to submit a package +#' from a non-interactive session. +#' @return A list with data about the submission, invisibly. +#' Currently it has: +#' +#' * `result`: Should be the string `"OK"`. +#' * `repo_url`: URL to the repository. +#' * `actions_url`: URL to the builds inside the repository. +#' * `id`: Build id. This is a string with a randomly generated id. +#' * `name`: Build name, this is a string, the concatenation of the +#' build platforms. +#' +#' More fields might be added later. +#' +#' @export +#' @family RC runners API +#' @seealso [rhub_platforms()] for a list of supported platforms. + +rc_submit <- function(path = ".", platforms = NULL, email = NULL, + confirmation = NULL) { + + if (!isTRUE(confirmation) && !is_interactive()) { + throw(pkg_error( + "You need to set {.arg confirmation} to {.val TRUE} + to submit packages to R-hub from non-interactive R sessions." + )) + } + + pkg_name <- desc::desc_get("Package", file = path)[[1]] + if (is.na(pkg_name)) { + throw(pkg_error( + "Could not query R package name at {.path {path}}.", + i = paste( + "Make sure that {.arg path} is an R package or a directory", + "containing an R package." 
+ ) + )) + } + + email <- email %||% get_maintainer_email(path = path) + + platforms <- select_platforms(platforms) + + if (is_dir(path)) { + path <- pkgbuild::build(path = path) + } + + id <- random_id() + ep <- paste0("/job/", pkg_name) + form <- list( + config = curl::form_data(paste(platforms, collapse = ",")), + id = curl::form_data(id), + package = curl::form_file(path) + ) + + if (!isTRUE(confirmation)) { + cat(cli::col_cyan(cli::rule("Confirmation"))) + cli::cli_bullets(c( + "!" = "Your package will be publicly readable at + {.url https://github.com/r-hub2}.", + ">" = "You will need a GitHub account to view the build logs.", + ">" = "Only continue if you are fine with this.", + ">" = "See the {.fn rhub_setup} function for an alternative way + of using R-hub." + )) + ans <- trimws(readline( + prompt = "\nPlease type 'yes' to continue: " + )) + cli::cli_text() + if (ans != 'yes' && ans != "'yes'") { + throw(pkg_error("Aborted R-hub submission.")) + } + } + + resp <- query( + method = "POST", + ep, + sse = TRUE, + data_form = form, + headers = c( + get_auth_header(email), + "content-type" = "multipart/form-data", + "accept" = "text/event-stream", + "cache-control" = "no-cache", + "connection" = "keep-alive" + ) + ) + + resevt <- Filter(function(x) x[["event"]] == "result", resp$sse) + if (length(resevt) == 0) { + stop("Invalid response from R-hub server, please report this.") + } + + retval <- jsonlite::fromJSON( + resevt[[1]][["data"]], + simplifyVector = FALSE + ) + invisible(retval) +} + +# ========================================================================= +# Internals +# ========================================================================= + +guess_email <- function(path = ".", message = TRUE) { + valid <- list_validated_emails2(message = FALSE) + maint <- tryCatch(get_maintainer_email(path), error = function(e) NULL) + if (!is.null(maint)) { + if (message) { + cli::cli_alert_info( + wrap = TRUE, + "Using maintainer email address {.val {maint}}." 
+ ) + } + return(maint) + } + + guess <- email_address() + if (message) { + cli::cli_alert_info( + wrap = TRUE, + "Using email address {.val {guess}}." + ) + } + guess +} + +get_auth_header <- function(email) { + valid <- list_validated_emails2(message = FALSE) + if (! email %in% valid$email) { + throw(pkg_error( + "Can't find token for email address {.val {email}}.", + i = "Call {.code rhub::rc_new_token()} to get a token." + )) + } + token <- valid$token[match(email, valid$email)] + c("Authorization" = paste("Bearer", token)) +} + +#' @importFrom cli symbol +#' @importFrom utils menu +#' @importFrom whoami email_address + +get_email_to_validate <- function(path) { + + ## Find out email first. List currently validated addresses, + ## Offer address by whoami::email_address(), and also the + ## maintainer address, if any. + + valid <- list_validated_emails2(msg_if_empty = FALSE) + guess <- email_address() + maint <- tryCatch(get_maintainer_email(path), error = function(e) NULL) + + choices <- rbind( + if (nrow(valid)) cbind(valid = TRUE, valid), + if (!is.null(guess) && ! guess %in% valid$email) {
maint %in% valid$email && maint != guess) { + data_frame(valid = FALSE, email = maint, token = NA) + }, + data_frame(valid = NA, email = "New email address", token = NA) + ) + + ## Only show the menu if there is more than one thing there + if (nrow(choices) != 1) { + choices_str <- paste( + sep = " ", + ifelse( + choices$valid & !is.na(choices$valid), + cli::col_green(cli::symbol$tick), + " " + ), + choices$email + ) + + cat("\n") + title <- cli::col_yellow(paste0( + cli::symbol$line, cli::symbol$line, + " Choose email address to request token for (or 0 to exit)" + )) + ch <- menu(choices_str, title = title) + + if (ch == 0) throw(pkg_error("Cancelled requesting new token")) + + } else { + ch <- 1 + } + + ## Get another address if that is selected + if (is.na(choices$valid[ch])) { + cat("\n") + email <- readline("Email address: ") + } else { + email <- choices$email[ch] + } +} + +list_validated_emails2 <- function(message = is_interactive(), + msg_if_empty = TRUE) { + file <- email_file() + res <- if (file.exists(file)) { + if (message) { + cli::cli_alert( + "R-hub tokens are stored at {.path {email_file()}}." + ) + } + + structure( + read.csv(file, stringsAsFactors = FALSE, header = FALSE), + names = c("email", "token") + ) + } else { + data.frame( + email = character(), + token = character(), + stringsAsFactors = FALSE + ) + } + if (is_interactive() && nrow(res) == 0) { + if (msg_if_empty) { + cli::cli_alert_info("No R-hub tokens found.") + } + invisible(res) + } else { + res + } +} + +#' @importFrom rappdirs user_data_dir + +email_file <- function() { + rhub_data_dir <- user_data_dir("rhub", "rhub") + file.path(rhub_data_dir, "validated_emails.csv") +} + +rc_new_token_interactive <- function(email, token, path = ".") { + + if (is.null(email)) email <- get_email_to_validate(path) + + ## Token next. For this we need to make an API query. 
+ if (is.null(token)) { + query( + method = "POST", + "/user/validate", + headers = c("content-type" = "application/x-www-form-urlencoded"), + data = jsonlite::toJSON(list(email = jsonlite::unbox(email))) + ) + cli::cli_alert_info( + "Please check your emails for the R-hub access token." + ) + token <- readline("Token: ") + } + + ## We got everything now + rc_new_token(email, token) +} + +#' @importFrom utils read.csv write.table + +email_add_token <- function(email, token) { + file <- email_file() + + if (!file.exists(file)) { + parent <- dirname(file) + if (!file.exists(parent)) dir.create(parent, recursive = TRUE) + tokens <- data.frame( + V1 = character(), + V2 = character(), + stringsAsFactors = FALSE + ) + + } else { + tokens <- read.csv(file, stringsAsFactors = FALSE, header = FALSE) + } + + if (! email %in% tokens[,1]) { + tokens <- rbind(tokens, c(email, token)) + + } else{ + tokens[match(email, tokens[,1]), 2] <- token + } + + write.table( + tokens, + file = file, + sep = ",", + col.names = FALSE, + row.names = FALSE + ) + + invisible() +} diff --git a/R/rematch.R b/R/rematch.R new file mode 100644 index 0000000..6af12d9 --- /dev/null +++ b/R/rematch.R @@ -0,0 +1,36 @@ +re_match <- function(text, pattern, perl = TRUE, ...) { + + stopifnot(is.character(pattern), length(pattern) == 1, !is.na(pattern)) + text <- as.character(text) + + match <- regexpr(pattern, text, perl = perl, ...) 
+ + start <- as.vector(match) + length <- attr(match, "match.length") + end <- start + length - 1L + + matchstr <- substring(text, start, end) + matchstr[ start == -1 ] <- NA_character_ + + res <- data.frame( + stringsAsFactors = FALSE, + .text = text, + .match = matchstr + ) + + if (!is.null(attr(match, "capture.start"))) { + + gstart <- attr(match, "capture.start") + glength <- attr(match, "capture.length") + gend <- gstart + glength - 1L + + groupstr <- substring(text, gstart, gend) + groupstr[ gstart == -1 ] <- NA_character_ + dim(groupstr) <- dim(gstart) + + res <- cbind(groupstr, res, stringsAsFactors = FALSE) + } + + names(res) <- c(attr(match, "capture.names"), ".text", ".match") + res +} diff --git a/R/rematch_all.R b/R/rematch_all.R deleted file mode 100644 index 3711881..0000000 --- a/R/rematch_all.R +++ /dev/null @@ -1,78 +0,0 @@ - -re_match_all <- function(text, pattern, ...) { - - text <- as.character(text) - stopifnot(is.character(pattern), length(pattern) == 1, !is.na(pattern)) - - ## Need to handle this case separately, as gregexpr effectively - ## does not work for this. - if (length(text) == 0) return(empty_result(text, pattern, ...)) - - match <- gregexpr(pattern, text, perl = TRUE, ...) - - num_groups <- length(attr(match[[1]], "capture.names")) - - ## Non-matching strings have a rather strange special form, - ## so we just treat them differently - non <- vapply(match, function(m) m[1] == -1, TRUE) - yes <- !non - res <- replicate(length(text), list(), simplify = FALSE) - if (any(non)) { - res[non] <- list(replicate(num_groups + 1, character(), simplify = FALSE)) - } - if (any(yes)) { - res[yes] <- mapply(match1, text[yes], match[yes], SIMPLIFY = FALSE) - } - - ## Need to assemble the final data frame "manually". - ## There is apparently no function for this. rbind() is almost - ## good, but simplifies to a matrix if the dimensions allow it.... 
- res <- lapply(seq_along(res[[1]]), function(i) { - lapply(res, "[[", i) - }) - - structure( - res, - names = c(attr(match[[1]], "capture.names"), ".match"), - row.names = seq_along(text), - class = c("data.frame") - ) -} - -match1 <- function(text1, match1) { - - matchstr <- substring( - text1, - match1, - match1 + attr(match1, "match.length") - 1L - ) - - ## substring fails if the index is length zero, - ## need to handle special case - if (is.null(attr(match1, "capture.start"))) { - list(.match = matchstr) - - } else { - gstart <- attr(match1, "capture.start") - glength <- attr(match1, "capture.length") - gend <- gstart + glength - 1L - - groupstr <- substring(text1, gstart, gend) - dim(groupstr) <- dim(gstart) - - c(lapply(seq_len(ncol(groupstr)), function(i) groupstr[, i]), - list(.match = matchstr) - ) - } -} - -empty_result <- function(text, pattern, ...) { - match <- regexpr(pattern, text, perl = TRUE, ...) - num_groups <- length(attr(match, "capture.names")) - structure( - replicate(num_groups + 1, list(), simplify = FALSE), - names = c(attr(match, "capture.names"), ".match"), - row.names = integer(0), - class = "data.frame" - ) -} diff --git a/R/rhub-package.R b/R/rhub-package.R deleted file mode 100644 index b30bbda..0000000 --- a/R/rhub-package.R +++ /dev/null @@ -1,8 +0,0 @@ -#' @keywords internal -"_PACKAGE" - -# The following block is used by usethis to automatically manage -# roxygen namespace tags. Modify with care! -## usethis namespace: start -## usethis namespace: end -NULL diff --git a/R/rhubv1.R b/R/rhubv1.R new file mode 100644 index 0000000..df35d63 --- /dev/null +++ b/R/rhubv1.R @@ -0,0 +1,206 @@ +deprecated <- function() { + message( + "This function is deprecated and defunct since rhub v2.\n", + "Please see `?rhubv2` on transitioning to the new rhub functions." + ) +} + +#' This function is deprecated and defunct. Please see [rhubv2]. +#' +#' @param ... Deprecated. +#' @export + +get_check <- function(...) 
{ + deprecated() +} + +#' This function is deprecated and defunct. Please see [rhubv2]. +#' +#' @param ... Deprecated. +#' @export + +check_for_cran <- function(...) { + deprecated() +} + +#' This function is deprecated and defunct. Please see [rhubv2]. +#' +#' @param ... Deprecated. +#' @export +#' @rdname check_shortcuts + +check_on_linux <- function(...) { + deprecated() +} + +#' @export +#' @rdname check_shortcuts + +check_on_windows <- function(...) { + deprecated() +} + +#' @export +#' @rdname check_shortcuts + +check_on_macos <- function(...) { + deprecated() +} + +#' @export +#' @rdname check_shortcuts + +check_on_debian <- function(...) { + deprecated() +} + +#' @export +#' @rdname check_shortcuts + +check_on_ubuntu <- function(...) { + deprecated() +} + +#' @export +#' @rdname check_shortcuts + +check_on_fedora <- function(...) { + deprecated() +} + +#' @export +#' @rdname check_shortcuts + +check_on_solaris <- function(...) { + deprecated() +} + +#' @export +#' @rdname check_shortcuts + +check_on_centos <- function(...) { + deprecated() +} + +#' @export +#' @rdname check_shortcuts + +check_with_roldrel <- function(...) { + deprecated() +} + +#' @export +#' @rdname check_shortcuts + +check_with_rrelease <- function(...) { + deprecated() +} + +#' @export +#' @rdname check_shortcuts + +check_with_rpatched <- function(...) { + deprecated() +} + +#' @export +#' @rdname check_shortcuts + +check_with_rdevel <- function(...) { + deprecated() +} + +#' @export +#' @rdname check_shortcuts + +check_with_valgrind <- function(...) { + deprecated() +} + +#' @export +#' @rdname check_shortcuts + +check_with_sanitizers <- function(...) { + deprecated() +} + +#' This function is deprecated and defunct. Please see [rhubv2]. +#' +#' @param ... Deprecated. +#' @export + +check <- function(...) { + deprecated() +} + +#' This function is deprecated and defunct. Please see [rhubv2]. +#' +#' @param ... Deprecated. +#' @export + +validate_email <- function(...) 
{ + deprecated() +} + +#' This function is deprecated and defunct. Please see [rhubv2]. +#' +#' @param ... Deprecated. +#' @export + +list_validated_emails <- function(...) { + deprecated() +} + +#' This function is deprecated and defunct. Please see [rhubv2]. +#' +#' @param ... Deprecated. +#' @export + +last_check <- function(...) { + deprecated() +} + +#' This function is deprecated and defunct. Please see [rhubv2]. +#' +#' @param ... Deprecated. +#' @export + +list_my_checks <- function(...) { + deprecated() +} + +#' This function is deprecated and defunct. Please see [rhubv2]. +#' +#' @param ... Deprecated. +#' @export + +list_package_checks <- function(...) { + deprecated() +} + +#' This function is deprecated and defunct. Please see [rhubv2]. +#' +#' @param ... Deprecated. +#' @export + +local_check_linux <- function(...) { + deprecated() +} + +#' This function is deprecated and defunct. Please see [rhubv2]. +#' +#' @param ... Deprecated. +#' @export + +local_check_linux_images <- function(...) { + deprecated() +} + +#' This function is deprecated and defunct. Please see [rhubv2]. +#' +#' @param ... Deprecated. +#' @export + +platforms <- function(...) { + deprecated() +} \ No newline at end of file diff --git a/R/rhubv2.R b/R/rhubv2.R new file mode 100644 index 0000000..a335238 --- /dev/null +++ b/R/rhubv2.R @@ -0,0 +1,25 @@ +#' @title The rhub package +#' +#' @description Tools for R package developers +#' +#' @details +#' ```{r man-readme, child = "README.Rmd"} +#' ``` +#' +#' @keywords internal +#' @name rhub-package +#' @rdname rhub-package +#' @aliases rhub +NULL + + +#' @title R-hub v2 +#' @description Start here to learn about R-hub v2, especially if you +#' used the previous version of R-hub before. 
+#' +#' @details +#' ```{r include = FALSE, child = "vignettes/rhubv2.Rmd"} +#' ``` +#' @name rhubv2 +#' @rdname rhubv2 +NULL \ No newline at end of file diff --git a/R/setup.R b/R/setup.R new file mode 100644 index 0000000..9861567 --- /dev/null +++ b/R/setup.R @@ -0,0 +1,146 @@ +check_rpkg_root <- function(rpkg_root, git_root) { + if (rpkg_root != git_root) { + throw(pkg_error( + "R-hub currently requires that your R package is at the root of the + git repository.", + i = "Your R package is at {.path {rpkg_root}}.", + i = "Your git repository root is at {.path {git_root}}." + )) + } +} + +#' Set up the current R package for use with R-hub +#' +#' It adds or updates the R-hub workflow file in the current package, +#' and advises on next steps. +#' +#' @param overwrite if `TRUE`, [rhub_setup()] will overwrite an already +#' existing workflow file. +#' @return Nothing. +#' +#' @export + +rhub_setup <- function(overwrite = FALSE) { + cli::cli_bullets("Setting up R-hub v2.") + rpkg_root <- setup_find_r_package() + git_root <- setup_find_git_root() + check_rpkg_root(rpkg_root, git_root) + + url <- "https://raw.githubusercontent.com/r-hub/actions/v1/workflows/rhub.yaml" + resp <- synchronise(http_get(url)) + if (resp$status_code != 200) { + throw(pkg_error( + "Failed to download R-hub workflow file from GitHub.", + i = "URL: {.url {url}}.", + i = "HTTP status: {resp$status_code}.", + i = "Make sure that you are online and GitHub is up." 
+ )) + } + wf <- resp$content + wfc <- rawToChar(wf) + Encoding(wfc) <- "UTF-8" + + updated <- FALSE + wf_file <- file.path(git_root, ".github", "workflows", "rhub.yaml") + if (file.exists(wf_file)) { + wf_current <- read_file(wf_file) + if (wfc != wf_current) { + if (overwrite) { + dir.create(dirname(wf_file), showWarnings = FALSE, recursive = TRUE) + writeBin(wf, wf_file) + updated <- TRUE + cli::cli_bullets(c( + i = "Updated existing workflow file at {.file {wf_file}}, + as requested" + )) + } else { + throw(pkg_error( + "Workflow file already exists at {.file {wf_file}}.", + i = "Use {.code overwrite = TRUE} for overwriting it." + )) + } + } else { + cli::cli_bullets(c( + v = "Workflow file {.file {wf_file}} already exists and it is current." + )) + } + } else { + dir.create(dirname(wf_file), showWarnings = FALSE, recursive = TRUE) + writeBin(wf, wf_file) + updated <- TRUE + cli::cli_bullets(c( + v = "Created workflow file {.file {wf_file}}." + )) + } + + cli::cli_text() + cli::cli_bullets(c( + "Notes:", + "*" = "The workflow file must be added to the {.emph default} branch + of the GitHub repository.", + "*" = "GitHub actions must be enabled for the repository. They are + disabled for forked repositories by default." + )) + cli::cli_text() + cli::cli_bullets(c( + "Next steps:", + "*" = "Add the workflow file to git using {.code git add }.", + "*" = if (updated) "Commit it to git using {.code git commit}.", + "*" = if (!updated) "Commit it to git using {.code git commit} (if not committed already).", + "*" = if (updated) "Push the commit to GitHub using {.code git push}.", + "*" = if (!updated) "Push the commit to GitHub using {.code git push} (if not pushed already).", + "*" = "Call {.run rhub::rhub_doctor()} to check that you have set up + R-hub correctly.", + "*" = "Call {.run rhub::rhub_check()} to check your package." 
+ )) + + invisible(NULL) +} + +setup_find_r_package <- function() { + pid <- cli_status("Is the current directory part of an R package?") + tryCatch( + rpkg_root <- rprojroot::find_root(rprojroot::is_r_package), + error = function(e) { + cli::cli_status_clear(pid, result = "failed") + throw(pkg_error( + call. = FALSE, + "The current directory is not part of an R package.", + i = "You can create an R package in the current directory if you run + {.run usethis::create_package('.')}.", + i = "Alternatively, if you want to use R-hub for a package that is + already on GitHub, supply the {.arg gh_url} argument to + {.fun rhub_setup}." + )) + } + ) + cli::cli_status_clear(pid, result = "clear") + cli::cli_alert_success("Found R package at {.file {rpkg_root}}.") + + rpkg_root +} + +setup_find_git_root <- function() { + pid <- cli_status( + "Is the current directory part of a git repository?" + ) + tryCatch( + git_root <- rprojroot::find_root(rprojroot::is_git_root), + error = function(e) { + cli::cli_status_clear(pid, result = "failed") + throw(pkg_error( + call. = FALSE, + "The current R package is not in a git repository.", + i = "You can create a git repository for the current package or + project if you run {.run usethis::use_git()}.", + i = "Alternatively, if you want to use R-hub for a package that is + already on GitHub, supply the {.arg gh_url} argument to + {.fun rhub_setup}." 
+ )) + } + ) + cli::cli_status_clear(pid, result = "clear") + cli::cli_alert_success("Found git repository at {.file {git_root}}.") + + git_root +} diff --git a/R/submit.R b/R/submit.R deleted file mode 100644 index 737d1af..0000000 --- a/R/submit.R +++ /dev/null @@ -1,46 +0,0 @@ - -#' @importFrom rematch re_match -#' @importFrom jsonlite base64_enc -#' @importFrom crayon blue - -submit_package <- function(email, pkg_targz, platforms, check_args, - env_vars) { - - assert_that(is_email(email)) - assert_that( - is.character(platforms), - length(platforms) >= 1 - ) - - m <- re_match( - pattern = "^(?.+)_(?.+)\\.tar\\.gz", - basename(pkg_targz) - ) - - header_line("Uploading package") - buf <- readBin(pkg_targz, raw(), file.info(pkg_targz)$size) - response <- query( - "SUBMIT PACKAGE", - data = list( - email = unbox(email), - token = unbox(email_get_token(email)), - package = unbox(unname(m[, "package"])), - version = unbox(unname(m[, "version"])), - platform = platforms, - env = as.list(env_vars), - check_args = unbox(paste(check_args, collapse = " ")), - file = unbox(base64_enc(buf)) - ) - ) - - header_line(paste0( - "Preparing build, see status at\n", - blue(paste( - " ", - vapply(response, "[[", "", "status-url"), - collapse = "\n" - )) - )) - - response -} diff --git a/R/utils.R b/R/utils.R index b83cf89..c79af4f 100644 --- a/R/utils.R +++ b/R/utils.R @@ -1,117 +1,175 @@ -update <- function(original, new) { - - if (length(new)) { - if (length(original)) assert_that(is_named(original)) - assert_that(is_named(new)) - original[names(new)] <- new - } - - original -} - -#' @importFrom rematch re_match - -parse_email <- function(x) { - unname( - re_match(pattern = "<(?[^>]+)>", x)[, "email"] +pkg_error <- function(..., .data = NULL, .class = NULL, .envir = parent.frame(), + call. = TRUE) { + .hide_from_trace <- TRUE + cnd <- new_error( + call. = call., + cli::format_error( + .envir = .envir, + c( + ... 
+ ) + ) + ) -} -`%||%` <- function(l, r) if (is.null(l)) r else l + if (length(.data)) cnd[names(.data)] <- .data + if (length(.class)) class(cnd) <- c(.class, class(cnd)) -#' @importFrom desc desc_get_maintainer -#' @importFrom utils untar + cnd +} -get_maintainer_email <- function(path) { - path <- normalizePath(path) - if (is_dir(path)) { - if (!file.exists(file.path(path, "DESCRIPTION"))) { - stop("No 'DESCRIPTION' file found") - } - parse_email(desc_get_maintainer(path)) +stop <- function(..., call. = TRUE, domain = NA) { + .hide_from_trace <- TRUE + args <- list(...) + if (length(args) == 1L && inherits(args[[1L]], "condition")) { + throw( + add_class(args[[1]], c("rlib_error_3_0", "rlib_error"), "end"), + frame = parent.frame() + ) } else { - dir.create(tmp <- tempfile()) - files <- untar(path, list = TRUE, tar = "internal") - desc <- grep("^[^/]+/DESCRIPTION$", files, value = TRUE) - if (length(desc) < 1) stop("No 'DESCRIPTION' file in package") - untar(path, desc, exdir = tmp, tar = "internal") - parse_email(desc_get_maintainer(file.path(tmp, desc))) + throw(new_error(..., call. = call., domain = domain)) } } -needs_compilation <- function(path) { - path <- normalizePath(path) - if (is_dir(path)) { - file.exists(file.path(path, "src")) - } else { - dir.create(tmp <- tempfile()) - files <- untar(path, list = TRUE, tar = "internal") - any(grepl("^[^/]+/src/?$", files)) - } +stopifnot <- function(...) { + assert_that(..., env = parent.frame()) } -`%:::%` <- function(p, f) { - get(f, envir = asNamespace(p)) +add_class <- function(obj, classes, where = c("start", "end")) { + where <- match.arg(where) + nc <- c( + if (where == "start") classes, + class(obj), + if (where == "end") classes + ) + class(obj) <- unique(nc) + obj } -is_interactive <- function() { - interactive() +zip <- function(x, y) { + mapply(FUN = c, x, y, SIMPLIFY = FALSE) } -is_dir <- function(x) { - file.info(x)$isdir +first_char <- function(x) { + substr(x, 1, 1) } -data_frame <- function(...)
{ - data.frame(stringsAsFactors = FALSE, ...) +last_char <- function(x) { + substr(x, nchar(x), nchar(x)) } -drop_nulls <- function(x) { - x [ ! vapply(x, is.null, TRUE) ] +unquote <- function(x) { + ifelse( + first_char(x) == last_char(x) & first_char(x) %in% c("'", '"'), + substr(x, 2L, nchar(x) - 1L), + x + ) } -get_group <- function(l){ - if (! "group" %in% names(l)){ - "" - } else { - l[["group"]] +has_emoji <- function() { + if (!cli::is_utf8_output()) return(FALSE) + if (isTRUE(opt <- getOption("pkg.emoji"))) return(TRUE) + if (identical(opt, FALSE)) return(FALSE) + if (Sys.info()[["sysname"]] != "Darwin") return(FALSE) + TRUE +} + +parse_url <- function(url) { + re_url <- paste0( + "^(?<protocol>[a-zA-Z0-9]+)://", + "(?:(?<username>[^@/:]+)(?::(?<password>[^@/]+))?@)?", + "(?<host>[^/]+)", + "(?<path>.*)$" # don't worry about query params here... + ) + + mch <- re_match(url, re_url) + + if (is.na(mch[[1]])) { + ssh_re_url <- "^git@(?<host>[^:]+):(?<path>.*)[.]git$" + mch <- re_match(url, ssh_re_url) + + if (is.na(mch[[1]])) { + cli::cli_abort("Invalid URL: {.url {url}}") + } + + # Used for accessing the server's API + mch$protocol <- "https" } + + mch[c("protocol", "host", "path")] } -cat0 <- function(..., sep = "") { - cat(..., sep = sep) +read_file <- function(path) { + bin <- readBin(path, "raw", file.size(path)) + chr <- rawToChar(bin) + Encoding(chr) <- "UTF-8" + chr } -map <- function(.x, .f, ...) { - lapply(.x, .f, ...) +ansi_align_width <- function(text) { + if (length(text) == 0) return(text) + width <- max(cli::ansi_nchar(text, type = "width")) + cli::ansi_align(text, width = width) } -map_lgl <- function(.x, .f, ...) { - vapply(.x, .f, logical(1), ...) +random_id <- function() { + r <- paste0(sample(c(letters, LETTERS, 0:9), 20, replace = TRUE), collapse = "") + gsub(" ", "-", cli::hash_animal(r, n_adj = 1)$hash) } -map_chr <- function(.x, .f, ...) { - vapply(.x, .f, character(1), ...) +readline <- function(prompt) { + base::readline(prompt) } -map_int <- function(.x, .f, ...)
{ - vapply(.x, .f, integer(1), ...) +is_interactive <- function() { + opt <- getOption("rlib_interactive") + if (isTRUE(opt)) { + TRUE + } else if (identical(opt, FALSE)) { + FALSE + } else if (tolower(getOption("knitr.in.progress", "false")) == "true") { + FALSE + } else if (tolower(getOption("rstudio.notebook.executing", "false")) == "true") { + FALSE + } else if (identical(Sys.getenv("TESTTHAT"), "true")) { + FALSE + } else { + interactive() + } } -shorten_rhub_id <- function(x) { - sx <- strsplit(x, "-", fixed = TRUE) - substr(map_chr(sx, tail, 1), 1, 7) +update <- function (original, new) { + if (length(new)) { + original[names(new)] <- new + } + original } -## This is a workaround to handle NAs +get_maintainer_email <- function(path = ".") { + path <- normalizePath(path) + if (is_dir(path)) { + if (!file.exists(file.path(path, "DESCRIPTION"))) { + stop("No 'DESCRIPTION' file found") + } + parse_email(desc::desc_get_maintainer(path)) + } else { + dir.create(tmp <- tempfile()) + files <- utils::untar(path, list = TRUE, tar = "internal") + desc <- grep("^[^/]+/DESCRIPTION$", files, value = TRUE) + if (length(desc) < 1) stop("No 'DESCRIPTION' file in package") + utils::untar(path, desc, exdir = tmp, tar = "internal") + parse_email(desc::desc_get_maintainer(file.path(tmp, desc))) + } +} -my_pretty_dt <- function(x, compact = TRUE) { - res <- rep("?", length(x)) - res[!is.na(x)] <- pretty_dt(x[!is.na(x)], compact = compact) - res +is_dir <- function(x) { + file.info(x)$isdir } -problem_statuses <- function(){ - c("parseerror", "preperror", "aborted") +#' @importFrom rematch re_match + +parse_email <- function(x) { + unname( + re_match(pattern = "<(?<email>[^>]+)>", x)[, "email"] + ) } diff --git a/README.Rmd b/README.Rmd new file mode 100644 index 0000000..9b42dbe --- /dev/null +++ b/README.Rmd @@ -0,0 +1,161 @@ +--- +output: + github_document: + toc: true + toc_depth: 3 + includes: + before_body: inst/header.md +always_allow_html: yes +editor_options: + markdown: + wrap:
sentence +--- + +```{r, setup, include = FALSE} +knitr::opts_chunk$set( + comment = "#>", + fig.path = "man/figures", + fig.width = 10, + asciicast_theme = if (Sys.getenv("IN_PKGDOWN") == "true") "pkgdown" else "readme" +) +asciicast::init_knitr_engine( + echo = TRUE, + echo_input = FALSE, + startup = quote({ + library(cli) + options(cli.num_colors = cli::truecolor) + }) +) +``` + +```{asciicast asciicast-setup, include = FALSE, results = "hide"} +pkgload::load_all() +# emoji output is slightly incorrect currently, maybe a font issue +options(pkg.emoji = FALSE) +# we do this to have a package to use in the examples +setwd("/tmp") +if (!file.exists("cli")) system("git clone --depth 1 https://github.com/r-lib/cli") +setwd("cli") +unlink(".github/workflows/rhub.yaml") +``` + +## Installation + +Install rhub from CRAN: + +```{r, asciicast-install, eval = FALSE, cache = FALSE} +pak::pkg_install("rhub") +``` + +## Usage + +### Requirements + +- A Github account. +- Your R package must be in a GitHub repository. +- You need a GitHub [Personal Access Token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token). + You can use the [gitcreds package](https://gitcreds.r-lib.org/) to add + the token to the git credential store. + +See the [R Consortium runners](#the-r-consortium-runners) section for +using rhub if your package is not on GitHub. + +### Private repositories + +rhub uses GitHub Actions, which is free for public repositories. +For private repositories you also get some minutes for free, depending on +the GitHub subscription you have. See +[About billing for GitHub Actions](https://docs.github.com/en/billing/managing-billing-for-github-actions/about-billing-for-github-actions) for details. + +### Setup + +1. Switch to the directory of your package, and call `rhub::rhub_setup()` to + add the R-hub workflow file to your package. +```{asciicast rhub-setup} +rhub::rhub_setup() +``` + +2. 
Run `git commit` and `git push` to push the workflow file to GitHub. + +3. Run `rhub::rhub_doctor()` to check if everything is set up correctly: +```{asciicast rhub-doctor} +rhub::rhub_doctor() +``` + +### Run checks + +Use `rhub::rhub_platforms()` to get a list of supported platforms and checks: +```{asciicast rhub-platforms} +rhub::rhub_platforms() +``` + +```{asciicast include = FALSE} +testthat::local_mocked_bindings( + gh_rest_post = function(...) list(status_code = 204L), + readline = function(prompt) { + cat(prompt) + Sys.sleep(1) + cat("1, 5\n") + "1, 5" + } +) +``` + +Run `rhub::rhub_check()` to start R-hub v2 checks on GitHub Actions: +```{asciicast rhub-check} +rhub::rhub_check() +``` + +## The R Consortium runners + +If you don't want to put your package on GitHub, you can still use the +rhub package to run package checks on any supported platform using a +shared pool of runners in the https://github.com/r-hub2 GitHub +organization. + +The process is similar to the first version of R-hub: + +* Set your working directory to the R package you want to check. +* Obtain a token from R-hub, to verify your email address: + ``` + rc_new_token() + ``` + (You do not need to do this, if you already submitted packages to a + previous version of R-hub from the same machine, using the same email + address. Call `rc_list_local_tokens()` to check if you already have + tokens.) +* Submit a build with + ``` + rc_submit() + ``` +* Select the platforms you want to use, and follow the instructions and + the link provided to see your check results. + +### Limitations of the R Consortium runners + +* Your package will be public for the world, and will be stored in the + https://github.com/r-hub2 organization. Your check output and results + will be public for anyone with a GitHub account. If you want to keep + your package private, you can put it in a private GitHub repository, + and use the `rhub_setup()` and `rhub_check()` functions instead. 
+* The R Consortium runners are shared among all users, so you might need + to wait for your builds to start. +* You have to wait at least five minutes between submissions with + `rc_submit()`. +* Currently you need to create a GitHub account to see the check logs of + your package. You don't need a GitHub account to submit the checks. + +To avoid these limitations (except for the need for a GitHub account), +put your package in a GitHub repository, and use the `rhub_setup()` and +`rhub_check()` functions instead of `rc_submit()` and the R Consortium +runners. + +## Code of Conduct + +Please note that the rhub package is released with a +[Contributor Code of Conduct](https://r-hub.github.io/rhub/dev/CODE_OF_CONDUCT.html). +By contributing to this project, you agree to abide by its terms. + +## License + +MIT © R Consortium diff --git a/README.md b/README.md index 6067955..35b0bd1 100644 --- a/README.md +++ b/README.md @@ -1,47 +1,296 @@ -# rhub -> Connect to R-hub, from R + + + +# rhub + +> R-hub v2 -[![](https://www.r-pkg.org/badges/version/rhub)](https://www.r-pkg.org/pkg/rhub) -[![CRAN RStudio mirror downloads](https://cranlogs.r-pkg.org/badges/rhub)](https://www.r-pkg.org/pkg/rhub) -[![Project Status: Active – The project has reached a stable, usable state and is being actively developed.](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active) -[![Gitter chat](https://badges.gitter.im/gitterHQ/gitter.png)](https://gitter.im/r-hub/community) -[![Codecov test coverage](https://codecov.io/gh/r-hub/rhub/branch/master/graph/badge.svg)](https://app.codecov.io/gh/r-hub/rhub?branch=master) +[![Lifecycle: experimental](https://img.shields.io/badge/lifecycle-experimental-orange.svg)](https://lifecycle.r-lib.org/articles/stages.html#experimental) [![R-CMD-check](https://github.com/r-hub/rhub/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/r-hub/rhub/actions/workflows/R-CMD-check.yaml) 
+[![](https://www.r-pkg.org/badges/version/rhub)](https://www.r-pkg.org/pkg/rhub) +[![Codecov test coverage](https://codecov.io/gh/r-hub/rhub/branch/main/graph/badge.svg)](https://app.codecov.io/gh/r-hub/rhub?branch=main) -## Introduction +R-hub v2 uses GitHub Actions to run `R CMD check` and similar package checks. +The rhub package helps you set up R-hub v2 for your R package, and start +running checks. -The [R-hub builder](https://builder.r-hub.io/) is a multi-platform build and -check service for R packages. The `rhub` packages use the R-hub API to connect to -the R-hub builder and start package checks on various architectures: -**Run `R CMD check` on any of the R-hub builder architectures, from R**. +--- -The `rhub` package also supports accessing **statuses of previous checks**, and -**local use of the R-hub Linux platforms via Docker**. +- [Installation](#installation) +- [Usage](#usage) + - [Requirements](#requirements) + - [Private repositories](#private-repositories) + - [Setup](#setup) + - [Run checks](#run-checks) +- [The R Consortium runners](#the-r-consortium-runners) + - [Limitations of the R Consortium + runners](#limitations-of-the-r-consortium-runners) +- [Code of Conduct](#code-of-conduct) +- [License](#license) ## Installation -Install the package from CRAN: +Install rhub from CRAN: -```r -install.packages("rhub") +``` r +pak::pkg_install("rhub") ``` -Or get the development version from GitHub: +## Usage + +### Requirements + +- A GitHub account. +- Your R package must be in a GitHub repository. +- You need a GitHub [Personal Access + Token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token). + You can use the [gitcreds package](https://gitcreds.r-lib.org/) to add + the token to the git credential store. + +See the [R Consortium runners](#the-r-consortium-runners) section for +using rhub if your package is not on GitHub. 
+ +### Private repositories + +rhub uses GitHub Actions, which is free for public repositories. For +private repositories you also get some minutes for free, depending on +the GitHub subscription you have. See [About billing for GitHub +Actions](https://docs.github.com/en/billing/managing-billing-for-github-actions/about-billing-for-github-actions) +for details. + +### Setup + +1. Switch to the directory of your package, and call + `rhub::rhub_setup()` to add the R-hub workflow file to your package. -```r -# install.packages("remotes") -remotes::install_github("r-hub/rhub") +``` r +rhub::rhub_setup() ``` -## Usage +
+ +
+## Setting up R-hub v2.                                                            
+##  Found R package at /private/tmp/cli.                                          
+##  Found git repository at /private/tmp/cli.                                     
+##  Created workflow file /private/tmp/cli/.github/workflows/rhub.yaml.           
+##                                                                                 
+## Notes:                                                                          
+##  The workflow file must be added to the default branch of the GitHub           
+##   repository.                                                                   
+##  GitHub actions must be enabled for the repository. They are disabled for      
+##   forked repositories by default.                                               
+##                                                                                 
+## Next steps:                                                                     
+##  Add the workflow file to git using `git add <filename>`.                      
+##  Commit it to git using `git commit`.                                          
+##  Push the commit to GitHub using `git push`.                                   
+##  Call `rhub2::rhub_doctor()` to check that you have set up R-hub correctly.    
+##  Call `rhub2::rhub_check()` to check your package.                             
+
+ +
+ +2. Run `git commit` and `git push` to push the workflow file to GitHub. + +3. Run `rhub::rhub_doctor()` to check if everything is set up + correctly: + +``` r +rhub::rhub_doctor() +``` + +
+ +
+##  Found R package at /private/tmp/cli.                                          
+##  Found git repository at /private/tmp/cli.                                     
+##  Found GitHub PAT.                                                             
+##  Found repository on GitHub at <https://github.com/r-lib/cli>.                 
+##  GitHub PAT has the right scopes.                                              
+##  Found R-hub workflow in default branch, and it is active.                     
+## → WOOT! You are ready to run `rhub2::rhub_check()` on this package.             
+
+ +
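The setup steps above can also be driven from a single R session. A minimal sketch, assuming it is run from the package root with a GitHub PAT already in the git credential store; the `system()` calls that shell out to git are illustrative glue, not part of the rhub API:

``` r
# Run from the package root: rhub_setup() writes
# .github/workflows/rhub.yaml, then git publishes it.
rhub::rhub_setup()
system("git add .github/workflows/rhub.yaml")
system('git commit -m "Add R-hub v2 workflow"')
system("git push")

# Verify the PAT, the repository, and the workflow in one go.
rhub::rhub_doctor()
```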
+ +### Run checks + +Use `rhub::rhub_platforms()` to get a list of supported platforms and +checks: + +``` r +rhub::rhub_platforms() +``` + +
+ +
+## ── Virtual machines ─────────────────────────────────────────────────────────── 
+##  1 [VM]  linux                                                                  
+##    All R versions on GitHub Actions ubuntu-latest                               
+##  2 [VM]  macos                                                                  
+##    All R versions on GitHub Actions macos-latest                                
+##  3 [VM]  macos-arm64                                                            
+##    All R versions on GitHub Actions macos-14                                    
+##  4 [VM]  windows                                                                
+##    All R versions on GitHub Actions windows-latest                              
+##                                                                                 
+## ── Containers ───────────────────────────────────────────────────────────────── 
+##  5 [CT]  atlas  [ATLAS]                                                         
+##    R Under development (unstable) (2024-03-13 r86113) on Fedora Linux 38 (Conta…
+##    ghcr.io/r-hub/containers/atlas:latest                                        
+##  6 [CT]  clang-asan  [asan, clang-ASAN, clang-UBSAN, ubsan]                     
+##    R Under development (unstable) (2024-03-12 r86109) on Ubuntu 22.04.4 LTS     
+##    ghcr.io/r-hub/containers/clang-asan:latest                                   
+##  7 [CT]  clang16  [clang16]                                                     
+##    R Under development (unstable) (2024-03-12 r86109) on Ubuntu 22.04.4 LTS     
+##    ghcr.io/r-hub/containers/clang16:latest                                      
+##  8 [CT]  clang17  [clang17]                                                     
+##    R Under development (unstable) (2024-03-11 r86098) on Ubuntu 22.04.4 LTS     
+##    ghcr.io/r-hub/containers/clang17:latest                                      
+##  9 [CT]  clang18  [clang18]                                                     
+##    R Under development (unstable) (2024-03-12 r86109) on Ubuntu 22.04.4 LTS     
+##    ghcr.io/r-hub/containers/clang18:latest                                      
+## 10 [CT]  donttest  [donttest]                                                   
+##    R Under development (unstable) (2024-03-12 r86109) on Ubuntu 22.04.4 LTS     
+##    ghcr.io/r-hub/containers/donttest:latest                                     
+## 11 [CT]  gcc13  [gcc13]                                                         
+##    R Under development (unstable) (2024-03-13 r86113) on Fedora Linux 38 (Conta…
+##    ghcr.io/r-hub/containers/gcc13:latest                                        
+## 12 [CT]  intel  [Intel]                                                         
+##    R Under development (unstable) (2024-03-13 r86113) on Fedora Linux 38 (Conta…
+##    ghcr.io/r-hub/containers/intel:latest                                        
+## 13 [CT]  mkl  [MKL]                                                             
+##    R Under development (unstable) (2024-03-13 r86113) on Fedora Linux 38 (Conta…
+##    ghcr.io/r-hub/containers/mkl:latest                                          
+## 14 [CT]  nold  [noLD]                                                           
+##    R Under development (unstable) (2024-03-13 r86113) on Ubuntu 22.04.4 LTS     
+##    ghcr.io/r-hub/containers/nold:latest                                         
+## 15 [CT]  nosuggests  [noSuggests]                                               
+##    R Under development (unstable) (2024-03-13 r86113) on Fedora Linux 38 (Conta…
+##    ghcr.io/r-hub/containers/nosuggests:latest                                   
+## 16 [CT]  ubuntu-clang  [r-devel-linux-x86_64-debian-clang]                      
+##    R Under development (unstable) (2024-03-13 r86113) on Ubuntu 22.04.4 LTS     
+##    ghcr.io/r-hub/containers/ubuntu-clang:latest                                 
+## 17 [CT]  ubuntu-gcc12  [r-devel-linux-x86_64-debian-gcc]                        
+##    R Under development (unstable) (2024-03-13 r86113) on Ubuntu 22.04.4 LTS     
+##    ghcr.io/r-hub/containers/ubuntu-gcc12:latest                                 
+## 18 [CT]  ubuntu-next  [r-next, r-patched, r-patched-linux-x86_64]               
+##    R version 4.3.3 Patched (2024-02-29 r86113) on Ubuntu 22.04.4 LTS            
+##    ghcr.io/r-hub/containers/ubuntu-next:latest                                  
+## 19 [CT]  ubuntu-release  [r-release, r-release-linux-x86_64, ubuntu]            
+##    R version 4.3.3 (2024-02-29) on Ubuntu 22.04.4 LTS                           
+##    ghcr.io/r-hub/containers/ubuntu-release:latest                               
+## 20 [CT]  valgrind  [valgrind]                                                   
+##    R Under development (unstable) (2024-03-13 r86113) on Fedora Linux 38 (Conta…
+##    ghcr.io/r-hub/containers/valgrind:latest                                     
+
+ +
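Because `rhub_platforms()` returns a data frame, the list above can also be filtered programmatically instead of read by eye. A sketch, assuming the returned data frame has `name` and `type` columns (inspect `names(rhub::rhub_platforms())` in your session to confirm):

``` r
plat <- rhub::rhub_platforms()
# Keep only the container-based checks, then list their names.
plat$name[plat$type == "container"]
```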
+ +Run `rhub::rhub_check()` to start R-hub v2 checks on GitHub Actions: + +``` r +rhub::rhub_check() +``` + +
+ +
+##  Found git repository at /private/tmp/cli.                                     
+##  Found GitHub PAT.                                                             
+##                                                                                 
+## Available platforms (see `rhub2::rhub_platforms()` for details):                
+##                                                                                 
+##  1 [VM] linux          R-* (any version)                     ubuntu-latest on G…
+##  2 [VM] macos          R-* (any version)                     macos-latest on Gi…
+##  3 [VM] macos-arm64    R-* (any version)                     macos-14 on GitHub 
+##  4 [VM] windows        R-* (any version)                     windows-latest on …
+##  5 [CT] atlas          R-devel (2024-03-13 r86113)           Fedora Linux 38 (C…
+##  6 [CT] clang-asan     R-devel (2024-03-12 r86109)           Ubuntu 22.04.4 LTS 
+##  7 [CT] clang16        R-devel (2024-03-12 r86109)           Ubuntu 22.04.4 LTS 
+##  8 [CT] clang17        R-devel (2024-03-11 r86098)           Ubuntu 22.04.4 LTS 
+##  9 [CT] clang18        R-devel (2024-03-12 r86109)           Ubuntu 22.04.4 LTS 
+## 10 [CT] donttest       R-devel (2024-03-12 r86109)           Ubuntu 22.04.4 LTS 
+## 11 [CT] gcc13          R-devel (2024-03-13 r86113)           Fedora Linux 38 (C…
+## 12 [CT] intel          R-devel (2024-03-13 r86113)           Fedora Linux 38 (C…
+## 13 [CT] mkl            R-devel (2024-03-13 r86113)           Fedora Linux 38 (C…
+## 14 [CT] nold           R-devel (2024-03-13 r86113)           Ubuntu 22.04.4 LTS 
+## 15 [CT] nosuggests     R-devel (2024-03-13 r86113)           Fedora Linux 38 (C…
+## 16 [CT] ubuntu-clang   R-devel (2024-03-13 r86113)           Ubuntu 22.04.4 LTS 
+## 17 [CT] ubuntu-gcc12   R-devel (2024-03-13 r86113)           Ubuntu 22.04.4 LTS 
+## 18 [CT] ubuntu-next    R-4.3.3 (patched) (2024-02-29 r86113) Ubuntu 22.04.4 LTS 
+## 19 [CT] ubuntu-release R-4.3.3 (2024-02-29)                  Ubuntu 22.04.4 LTS 
+## 20 [CT] valgrind       R-devel (2024-03-13 r86113)           Fedora Linux 38 (C…
+##                                                                                 
+## Selection (comma separated numbers, 0 to cancel): 1, 5                          
+##                                                                                 
+##  Check started: linux, atlas (daft-acornwoodpecker).                           
+##   See <https://github.com/r-lib/cli/actions> for live output!                   
+
+ +
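The interactive menu can be skipped in scripts by passing platform names directly. A minimal sketch, assuming `rhub_check()`'s `platforms` argument accepts the names shown by `rhub_platforms()`; this mirrors choosing 1 and 5 in the menu above:

``` r
# Start the linux and atlas checks without the interactive prompt.
rhub::rhub_check(platforms = c("linux", "atlas"))
```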
+ +## The R Consortium runners + +If you don’t want to put your package on GitHub, you can still use the +rhub package to run package checks on any supported platform using a +shared pool of runners in the +<https://github.com/r-hub2> GitHub +organization. + +The process is similar to the first version of R-hub: + +- Set your working directory to the R package you want to check. + +- Obtain a token from R-hub, to verify your email address: + + rc_new_token() + + (You do not need to do this, if you already submitted packages to a + previous version of R-hub from the same machine, using the same email + address. Call `rc_list_local_tokens()` to check if you already have + tokens.) + +- Submit a build with + + rc_submit() + +- Select the platforms you want to use, and follow the instructions and + the link provided to see your check results. + +### Limitations of the R Consortium runners + +- Your package will be public for the world, and will be stored in the + <https://github.com/r-hub2> organization. Your check output and + results will be public for anyone with a GitHub account. If you want + to keep your package private, you can put it in a private GitHub + repository, and use the `rhub_setup()` and `rhub_check()` functions + instead. +- The R Consortium runners are shared among all users, so you might need + to wait for your builds to start. +- You have to wait at least five minutes between submissions with + `rc_submit()`. +- Currently you need to create a GitHub account to see the check logs of + your package. You don’t need a GitHub account to submit the checks. + +To avoid these limitations (except for the need for a GitHub account), +put your package in a GitHub repository, and use the `rhub_setup()` and +`rhub_check()` functions instead of `rc_submit()` and the R Consortium +runners. -Refer to the [`pkgdown` website](https://r-hub.github.io/rhub/), in particular -the ["Get started" vignette](https://r-hub.github.io/rhub/articles/rhub.html). 
+## Code of Conduct -![recording of a check on a screen](https://r-hub.github.io/rhub/articles/figures/check-output.gif) +Please note that the rhub package is released with a [Contributor Code +of Conduct](https://r-hub.github.io/rhub/dev/CODE_OF_CONDUCT.html). By +contributing to this project, you agree to abide by its terms. ## License diff --git a/_pkgdown.yml b/_pkgdown.yml index 546d7a4..14499b8 100644 --- a/_pkgdown.yml +++ b/_pkgdown.yml @@ -7,41 +7,51 @@ authors: toc: depth: 3 - + +development: + mode: auto + reference: -- title: General check API +- title: R-hub v2 + desc: | + Start with the 'Get started with R-hub v2' article if you are + new to R-hub. contents: - - validate_email - - list_validated_emails - - check - rhub_check - - rhub-ids - - get_check - - platforms -- title: Check shortcuts + - rhub_doctor + - rhub_platforms + - rhub_setup +- title: The R Consortium runners + contents: + - rc_list_local_tokens + - rc_list_repos + - rc_new_token + - rc_submit +- title: internal contents: + - rhubv2 + - check - check_for_cran - - check_on_centos - - check_on_debian - - check_on_fedora - check_on_linux + - check_on_windows - check_on_macos + - check_on_debian - check_on_ubuntu - - check_on_windows + - check_on_fedora - check_on_solaris - - check_with_rdevel + - check_on_centos - check_with_roldrel - - check_with_rpatched - check_with_rrelease - - check_with_sanitizers + - check_with_rpatched + - check_with_rdevel - check_with_valgrind -- title: Check management - contents: + - check_with_sanitizers + - get_check - last_check - list_my_checks - list_package_checks -- title: Local Docker checks - contents: + - list_validated_emails - local_check_linux - local_check_linux_images - + - platforms + - validate_email diff --git a/inst/bin/rhub-linux-docker.sh b/inst/bin/rhub-linux-docker.sh deleted file mode 100755 index eb9c714..0000000 --- a/inst/bin/rhub-linux-docker.sh +++ /dev/null @@ -1,126 +0,0 @@ -#! 
/bin/bash - - -main() { - - set -eu - - # The default might not be the home directory, but / - cd ~ - - # We create this up front so it always exists - mkdir -p /tmp/artifacts - local oldpackage=${package} - - config_all "$package" - - build_package "$package" - package="$REPLY" - cp "$package" /tmp/artifacts/ - echo "$package" >/tmp/artifacts/packagename - - install_remotes - - install_deps "$package" - cp *.tar.gz /tmp/artifacts/ - - run_check "$package" || true - cp -r *.Rcheck /tmp/artifacts/ 2>/dev/null >/dev/null || true -} - -config_all() { - # Configure R, local package library, and also CRAN and BioConductor - # The container might define $RBINARY - declare -r package="$1" - export RBINARY="${RBINARY:-R}" - export PATH="$(ls /opt/R-* -d)/bin:$PATH" - export R_LIBS=~/R - mkdir -p ~/R - echo "options(repos = c(CRAN = \"$RHUB_CRAN_MIRROR\"))" >> ~/.Rprofile - "$RBINARY" -q -e "install.packages('BiocManager')" - echo "try(options(repos = BiocManager::repositories()))" >> ~/.Rprofile - echo "unloadNamespace('BiocManager')" >> ~/.Rprofile - cp "/tmp/${package}" . -} - -get_desc() { - declare -r package="$1" - local desc=$(tar tzf "$package" | grep "^[^/]*/DESCRIPTION$") - local dir=$(mktemp -d) - tar xzf "$package" -C "$dir" "$desc" - REPLY="${dir}/${desc}" -} - -build_package() { - declare -r package="$1" - get_desc "$package" - local desc="$REPLY" - if grep -q "^Packaged:" "$desc"; then - local name=$(grep "^Package:" "$desc" | sed 's/^Package:[ ]*//' | tr -d ' \r') - local vers=$(grep "^Version:" "$desc" | sed 's/^Version:[ ]*//' | tr -d ' \r') - REPLY="${name}_${vers}.tar.gz" - if [[ "$package" != "$REPLY" ]]; then - mv "$package" "$REPLY" - fi - return - fi - - echo - echo ">>>>>==================== Running R CMD build" - mkdir -p build - ( - cd build - tar xzf ../"${package}" - local pkgname=$(ls | head -1 | sed 's/\///') - "$RBINARY" CMD build "${pkgname}" - cd .. - ) - REPLY=$(basename $(ls build/*.tar.gz | head -1)) - cp "build/${REPLY}" . 
-} - -install_remotes() { - echo - echo ">>>>>==================== Installing remotes package" - # Download the single file install script from r-lib/remotes - # We cannot do this from R, because some R versions do not support - # HTTPS. Then we install a proper 'remotes' package with it. - curl -O https://raw.githubusercontent.com/r-lib/remotes/r-hub/install-github.R - xvfb-run -a "$RBINARY" -q -e \ - "source(\"install-github.R\")\$value(\"r-lib/remotes@r-hub\")" -} - -install_deps() { - echo - echo ">>>>>==================== Installing package dependencies" - declare -r package="$1" - - # Install the package, so its dependencies will be installed - # This is a temporary solution, until remotes::install_deps works on a - # package bundle - local cmd="remotes::install_local(\"$package\", dependencies = TRUE, INSTALL_opts = \"--build\")" - xvfb-run -a "$RBINARY" -q -e "$cmd" - - ## If installation fails, then we do not run the check at all - local pkgname=$(echo $package | sed 's/_.*$//') - if ! $RBINARY -q -e "library($pkgname)"; then - >&2 echo "Failed to isntall package, cannot check it :(" - exit 1 - fi -} - -run_check() { - echo - echo ">>>>>==================== Running R CMD check" - declare -r package="$1" - - RHUB_CHECK_COMMAND="${RHUB_CHECK_COMMAND:-$RBINARY CMD check $checkArgs}" - - echo About to run xvfb-run $RHUB_CHECK_COMMAND "$package" - xvfb-run -a $RHUB_CHECK_COMMAND "$package" - - echo - echo "<<<<<==================== Running R CMD check done" -} - -[[ "$0" == "$BASH_SOURCE" ]] && main "$@" diff --git a/inst/bin/rhub-linux.sh b/inst/bin/rhub-linux.sh deleted file mode 100755 index 8c52fda..0000000 --- a/inst/bin/rhub-linux.sh +++ /dev/null @@ -1,299 +0,0 @@ -#! /bin/bash - -main() { - set -eu - - # Global for the cleanup. 
We make this random, to make sure that - # parallel build of the same package file, or parallel CI jobs - # do not interfere - CONTAINER=$(make_uuid | tr -d -- '-') - CLEANUPIMAGE= - CLEANUPFILES=('dummy') - CLEANUPKEEP=false - trap cleanup 0 - - echo "R-hub Linux builder script v0.10.0 (c) R Consortium, 2018-2019" - echo - - # Parse arguments - local image=rhub/debian-gcc-devel - local envvars="" - local checkargs="" - local artifacts="." - while getopts ":hi:e:c:a:kd:" opt; do - case $opt in - i) image="$OPTARG" ;; - e) envvars="$OPTARG" ;; - c) checkargs="$OPTARG" ;; - a) artifacts="$OPTARG" ;; - k) CLEANUPKEEP=true ;; - d) CONTAINER="$OPTARG" ;; - h) help; exit 1 ;; - \?) >&2 echo "Invalid option: -$OPTARG"; usage; exit 2 ;; - :) >&2 echo "Option -$OPTARG requires an argument. :("; - usage; exit 2 ;; - esac - done - - shift $((OPTIND-1)) - if (( $# != 1 )); then - >&2 echo "No package :( Specify one package fileor URL to build." - help - exit 3; - fi - declare package="$1" - - echo Package: "$package" - echo Docker image: "$image" - echo Env vars: "$envvars" - echo R CMD check arguments: "$checkargs" - - check_requirements || exit $? - - download_package "$package" || exit $? 
- package="$REPLY" - - detect_platform "$image" - local platform="$REPLY" - echo "Sysreqs platform: $platform" - export RHUB_PLATFORM="$platform" - - # Install sysreqs and create a image from it - install_sysreqs "$package" "$image" "$CONTAINER" "$platform" - image="$REPLY" - - create_env_file "$package" "$envvars" "$checkargs" - local envfile=$REPLY - - run_check "$package" "$image" "$CONTAINER" "$envfile" - - get_artifacts $CONTAINER $artifacts - - # Cleanup is automatic -} - -usage() { - >&2 echo - >&2 echo "Usage: $0 [-h] [-k] [-i image] [-e env] [-c checkargs] [-a path] package-file" - >&2 echo - >&2 echo "Options:" - >&2 echo " -k Keep build container" - >&2 echo " -i image Docker image to use [default: rhub/debian-gcc-devel]" - >&2 echo " -e env Environment variables to set, VAR=VALUE, newline separated" - >&2 echo " -c checkargs Arguments for 'R CMD check'" - >&2 echo " -a path Where to store check artifacts [default:.]" - >&2 echo " -h Print help message" -} - -help() { - usage - >&2 echo - >&2 echo "Run 'R CMD check' on an R package, within a Docker container." - >&2 echo "'package-file' should be a local R source package, or an URL to one." - >&2 echo "Calls 'R CMD build' automatically, if needed." -} - -make_bad_uuid() { - local N B C='89ab' - for (( N=0; N < 16; ++N )) - do - B=$(( $RANDOM%256 )) - case $N in - 6) - printf '4%x' $(( B%16 )) - ;; - 8) - printf '%c%x' ${C:$RANDOM%${#C}:1} $(( B%16 )) - ;; - 3 | 5 | 7 | 9) - printf '%02x-' $B - ;; - *) - printf '%02x' $B - ;; - esac - done - echo -} - -make_uuid() { - cat /proc/sys/kernel/random/uuid 2>/dev/null || - uuidgen 2>/dev/null || make_bad_uuid -} - -check_requirements() { - # Check for Docker, R and the packages we need - if ! docker --version >/dev/null 2>/dev/null; then - >&2 echo "Cannot find Docker :(" - >&2 echo "Make sure that Docker installed and it is in the PATH." - >&2 echo "You can install Docker from https://www.docker.com/" - return 1 - fi - if ! 
R --slave -e 'x <- 1' >/dev/null 2>/dev/null; then - >&2 echo "Cannot find R :(" - >&2 echo "Make sure that R is installed and in the PATH." - >&2 echo "You can install R from https://cran.r-project.org/" - return 1 - fi - if ! R --slave -e 'library(sysreqs)' >/dev/null 2>/dev/null; then - >&2 echo "Cannot load the sysreqs package :( Install it with" - >&2 echo "R -q -e \"source('https://install-github.me/r-hub/sysreqs')\"" - return 2 - fi -} - -download_package() { - declare -r package="$1" - REPLY=$(mktemp).tar.gz - CLEANUPFILES+=("$REPLY") - if [[ "$package" =~ ^https?:// ]]; then - echo - echo ">>>>>==================== Downloading package file" - if ! wget -O "$REPLY" "$package"; then - >&2 echo "Cannot download package file :(" - return 3 - fi - else - cp "$package" "$REPLY" - fi -} - -detect_platform() { - declare -r image="$1" - REPLY=$(docker run --user docker \ - --rm ${image} \ - sh -c 'echo $RHUB_PLATFORM') -} - -get_desc() { - declare -r package="$1" - local desc=$(tar tzf "$package" | grep "^[^/]*/DESCRIPTION$") - local dir=$(mktemp -d) - CLEANUPFILES+=("$dir") - tar xzf "$package" -C "$dir" "$desc" - REPLY="${dir}/${desc}" -} - -install_sysreqs() { - declare -r package="$1" image="$2" container="$3" platform="$4" - # Default is doing nothing - REPLY="$image" - - # If there is no RHUB_PLATFORM, skip sysreqs - if [[ -z "$platform" ]]; then - echo "Unknown platform, skipping installation of system requirements" - return - fi - - # If there is nothing to install we just use the stock image - get_desc "$package" - local desc=$REPLY - local cmd="library(sysreqs); cat(sysreq_commands(\"$desc\"))" - local sysreqs=$(Rscript -e "$cmd") - - # Install them, if there is anything to install - if [[ -z "${sysreqs}" ]]; then - echo "No system requirements" - return - fi - - echo - echo ">>>>>==================== Installing system requirements" - local sysreqsfile=$(mktemp) - CLEANUPFILES+=("$sysreqsfile") - echo "${sysreqs}" > "$sysreqsfile" - docker create --user root 
--name "${container}-1" \ - "$image" bash /root/sysreqs.sh - docker cp "$sysreqsfile" "${container}-1:/root/sysreqs.sh" - docker start -i -a "${container}-1" - REPLY=$(docker commit "${container}-1") - CLEANUPIMAGE="$image" -} - -create_env_file() { - declare -r package="$1" envvars="$2" checkargs="$3" - local envfile=$(mktemp) - CLEANUPFILES+=("$envfile") - - # These can be overridden by the user-supplied env vars - echo R_REMOTES_STANDALONE=true >> "$envfile" - echo R_REMOTES_NO_ERRORS_FROM_WARNINGS=true >> "$envfile" - echo TZ=Europe/London >> "$envfile" - local mirror="${RHUB_CRAN_MIRROR:-https://cloud.r-project.org}" - echo RHUB_CRAN_MIRROR="$mirror" >> "$envfile" - echo _R_CHECK_FORCE_SUGGESTS_=${_R_CHECK_FORCE_SUGGESTS_:-false} \ - >> "$envfile" - - # User-supplied env vars - echo "$envvars" >> "$envfile" - - # These cannot be overridden - echo checkArgs="${checkargs}" >> "$envfile" # note the uppercase! - local basepackage=$(basename "$package") - echo package="$basepackage" >> "$envfile" - REPLY="$envfile" -} - -run_check() { - echo - echo ">>>>>==================== Starting Docker container" - declare package="$1" image="$2" container="$3" envfile="$4" - local basepackage=$(basename "$package") - - docker create -i --user docker --env-file "$envfile" \ - --name "${container}-2" "$image" /bin/bash -l /tmp/build.sh \ - "/tmp/$basepackage" - docker cp rhub-linux-docker.sh "${container}-2:/tmp/build.sh" - docker cp "$package" "${container}-2:/tmp/$basepackage" - docker start -i -a "${container}-2" -} - -get_artifacts() { - declare container="$1" artifacts="$2" - local tmp=$(mktemp -d ./tmp.XXXXXXXXX) - CLEANUPFILES+=("$tmp") - - if ! 
docker cp "${container}-2:/tmp/artifacts" "$tmp"; then - >&2 echo "No artifacts were saved :(" - return - fi - - local packagename=$(cat "${tmp}/artifacts/packagename") - local output=${JOB_BASE_NAME:-${artifacts}/${packagename}-${container}} - if [[ -e "$output" ]]; then - local i=1 - while [[ -e "${output}-${i}" ]]; do i=$((i+1)); done - output="${output}-${i}" - fi - - mv "${tmp}/artifacts" "${output}" - echo Saved artifacts in "${output}" - - RHUB_ARTIFACTS=${RHUB_ARTIFACTS:-none} - if [[ "x$RHUB_ARTIFACTS" = "xlocal" ]]; then - cp -r "$output" /artifacts/ - fi -} - -cleanup() { - # Containers - docker rm -f -v "${CONTAINER}-1" >/dev/null 2>/dev/null || true - - if [[ "$CLEANUPKEEP" != "true" ]]; then - docker rm -f -v "${CONTAINER}-2" >/dev/null 2>/dev/null || true - else - docker kill "${CONTAINER}-2" >/dev/null 2>/dev/null || true - if ! docker inspect "${CONTAINER}-2" >/dev/null 2>/dev/null; then - echo Build container is "${CONTAINER}-2" - fi - fi - - # Image - docker rmi -f "$CLEANUPIMAGE" >/dev/null 2>/dev/null || true - - # Temp files - for i in ${CLEANUPFILES[@]}; do - rm -rf "$i" 2>/dev/null || true - done -} - -[[ "$0" == "$BASH_SOURCE" ]] && main "$@" diff --git a/inst/header.md b/inst/header.md new file mode 100644 index 0000000..158aec3 --- /dev/null +++ b/inst/header.md @@ -0,0 +1,19 @@ + + + +# rhub + +> R-hub v2 + + +[![Lifecycle: experimental](https://img.shields.io/badge/lifecycle-experimental-orange.svg)](https://lifecycle.r-lib.org/articles/stages.html#experimental) +[![R-CMD-check](https://github.com/r-hub/rhub/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/r-hub/rhub/actions/workflows/R-CMD-check.yaml) +[![](https://www.r-pkg.org/badges/version/rhub)](https://www.r-pkg.org/pkg/rhub) +[![Codecov test coverage](https://codecov.io/gh/r-hub/rhub/branch/main/graph/badge.svg)](https://app.codecov.io/gh/r-hub/rhub?branch=main) + + +R-hub v2 uses GitHub Actions to run `R CMD check` and similar package checks. 
+The rhub package helps you set up R-hub v2 for your R package, and start +running checks. + +--- diff --git a/man/check.Rd b/man/check.Rd index cbf4d0b..55397e0 100644 --- a/man/check.Rd +++ b/man/check.Rd @@ -1,56 +1,14 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/check.R +% Please edit documentation in R/rhubv1.R \name{check} \alias{check} -\title{Check an R package on R-hub} +\title{This function is deprecated and defunct. Please see \link{rhubv2}.} \usage{ -check( - path = ".", - platforms = NULL, - email = NULL, - valgrind = FALSE, - check_args = character(), - env_vars = character(), - show_status = interactive() -) +check(...) } \arguments{ -\item{path}{Path to a directory containing an R package, or path to -source R package tarball built with \verb{R CMD build} or -\code{devtools::build()}.} - -\item{platforms}{A character vector of one or more platforms to build/check -the package on. See \code{\link[=platforms]{platforms()}} for the available platforms. If this is -\code{NULL}, and the R session is interactive, then a menu is shown. If it -is \code{NULL}, and the session is not interactive, then the default R-hub -platforms are used. A vector of platforms which saves time by building one -R package tarball that is used for all the platforms specified.} - -\item{email}{Email address to send notification to about the check. -It must be a validated email address, see \code{\link[=validate_email]{validate_email()}}. If -\code{NULL}, then the email address of the maintainer is used, as defined -in the \code{DESCRIPTION} file of the package.} - -\item{valgrind}{Whether to run the check in valgrind. Only supported on -Linux currently, and ignored on other platforms.} - -\item{check_args}{Extra arguments for the \verb{R CMD check} command.} - -\item{env_vars}{Environment variables to set on the builder machine -before the check. 
A named character vector.} - -\item{show_status}{Whether to show the status of the build and check -(live log) as it is happening.} -} -\value{ -An \link{rhub_check} object. +\item{...}{Deprecated.} } \description{ -Check an R package on R-hub -} -\examples{ -\dontrun{ -check(".") -check("mypackage_1.0.0.tar.gz", platforms = "fedora-clang-devel") -} +This function is deprecated and defunct. Please see \link{rhubv2}. } diff --git a/man/check_for_cran.Rd b/man/check_for_cran.Rd index 5b2ea74..554ecda 100644 --- a/man/check_for_cran.Rd +++ b/man/check_for_cran.Rd @@ -1,69 +1,14 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/check-cran.R +% Please edit documentation in R/rhubv1.R \name{check_for_cran} \alias{check_for_cran} -\title{Check an R package on R-hub, for a CRAN submission} +\title{This function is deprecated and defunct. Please see \link{rhubv2}.} \usage{ -check_for_cran( - path = ".", - email = NULL, - check_args = "--as-cran", - env_vars = c(`_R_CHECK_FORCE_SUGGESTS_` = "true", `_R_CHECK_CRAN_INCOMING_USE_ASPELL_` - = "true"), - platforms = NULL, - ... -) +check_for_cran(...) } \arguments{ -\item{path}{Path to a directory containing an R package, or path to -source R package tarball built with \verb{R CMD build} or -\code{devtools::build()}.} - -\item{email}{Email address to send notification to about the check. -It must be a validated email address, see \code{\link[=validate_email]{validate_email()}}. If -\code{NULL}, then the email address of the maintainer is used, as defined -in the \code{DESCRIPTION} file of the package.} - -\item{check_args}{Arguments for \verb{R CMD check}. By default \code{--as-cran} -is used.} - -\item{env_vars}{Character vector of environment variables to set on the builder. -By default \verb{_R_CHECK_FORCE_SUGGESTS_="true"} is set, to require all packages used. 
-\verb{_R_CHECK_CRAN_INCOMING_USE_ASPELL_="true"} is also set, to use the -spell checker.} - -\item{platforms}{A character vector of one or more platforms to build/check -the package on. See \code{\link[=platforms]{platforms()}} for the available platforms. If this is -\code{NULL}, and the R session is interactive, then a menu is shown. If it -is \code{NULL}, and the session is not interactive, then the default R-hub -platforms are used. A vector of platforms which saves time by building one -R package tarball that is used for all the platforms specified.} - -\item{...}{Additional arguments are passed to \code{\link[=check]{check()}}.} -} -\value{ -An \link{rhub_check} object. +\item{...}{Deprecated.} } \description{ -This function calls \code{\link[=check]{check()}} with arguments and platforms, that -are suggested for a CRAN submission. -} -\details{ -In particular, if \code{platforms} is \code{NULL} (the default), then -\itemize{ -\item It checks the package on Windows, and Linux. -\item It checks the package on R-release and R-devel. -\item It uses the \code{--as-cran} argument to \verb{R CMD check}. -\item It requires all dependencies, including suggested ones. -} - -This function is wrapped by \code{devtools::check_rhub()} which you -might find useful if you load \code{devtools} via your .Rprofile (see \code{usethis::use_devtools()}). -} -\examples{ -\dontrun{ -ch <- check_for_cran("package", show_status = FALSE) -ch$update() -ch$livelog(3) -} +This function is deprecated and defunct. Please see \link{rhubv2}. 
} diff --git a/man/check_shortcuts.Rd b/man/check_shortcuts.Rd index dca41b0..1bf601d 100644 --- a/man/check_shortcuts.Rd +++ b/man/check_shortcuts.Rd @@ -1,5 +1,5 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/check-shortcuts.R +% Please edit documentation in R/rhubv1.R \name{check_on_linux} \alias{check_on_linux} \alias{check_on_windows} @@ -15,54 +15,39 @@ \alias{check_with_rdevel} \alias{check_with_valgrind} \alias{check_with_sanitizers} -\title{Check an R package on an R-hub platform} +\title{This function is deprecated and defunct. Please see \link{rhubv2}.} \usage{ -check_on_linux(path = ".", ...) +check_on_linux(...) -check_on_windows(path = ".", ...) +check_on_windows(...) -check_on_macos(path = ".", ...) +check_on_macos(...) -check_on_debian(path = ".", ...) +check_on_debian(...) -check_on_ubuntu(path = ".", ...) +check_on_ubuntu(...) -check_on_fedora(path = ".", ...) +check_on_fedora(...) -check_on_solaris( - path = ".", - check_args = "'--no-manual --no-build-vignettes'", - ... -) +check_on_solaris(...) -check_on_centos(path = ".", ...) +check_on_centos(...) -check_with_roldrel(path = ".", ...) +check_with_roldrel(...) -check_with_rrelease(path = ".", ...) +check_with_rrelease(...) -check_with_rpatched(path = ".", ...) +check_with_rpatched(...) -check_with_rdevel(path = ".", ...) +check_with_rdevel(...) -check_with_valgrind(path = ".", ...) +check_with_valgrind(...) -check_with_sanitizers(path = ".", ...) +check_with_sanitizers(...) } \arguments{ -\item{path}{Path to a directory containing an R package, or path to -source R package tarball built with \verb{R CMD build} or -\code{devtools::build()}.} - -\item{...}{Additional arguments are passed to \code{\link[=check]{check()}}.} - -\item{check_args}{Extra arguments for the \verb{R CMD check} command.} -} -\value{ -An \link{rhub_check} object. 
+\item{...}{Deprecated.} } \description{ -These functions provide a quick easy to use interface to check a -package on a platform with some particular aspect. Which platform -they use might change over time. +This function is deprecated and defunct. Please see \link{rhubv2}. } diff --git a/man/get_check.Rd b/man/get_check.Rd index 2490d37..a968a3a 100644 --- a/man/get_check.Rd +++ b/man/get_check.Rd @@ -1,39 +1,14 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/check-class.R +% Please edit documentation in R/rhubv1.R \name{get_check} \alias{get_check} -\title{Retrieve the result of R-hub checks} +\title{This function is deprecated and defunct. Please see \link{rhubv2}.} \usage{ -get_check(ids) +get_check(...) } \arguments{ -\item{ids}{One of the following: -\itemize{ -\item A single R-hub check id. -\item A character vector of check ids. -\item An R-hub check group id. -All ids can be abbreviated, see \link[=rhub-ids]{R-hub ids}. -}} -} -\value{ -An \link{rhub_check} object. +\item{...}{Deprecated.} } \description{ -Retrieve the result of R-hub checks -} -\section{Examples}{ - - -\if{html}{\out{
}}\preformatted{chk <- get_check("915ee61") -chk -chk$update() -chk$browse() -chk$cran_summary() -chk$urls() -}\if{html}{\out{
}} -} - -\seealso{ -\code{\link[=list_my_checks]{list_my_checks()}} and \code{\link[=list_package_checks]{list_package_checks()}} to list -R-hub checks. +This function is deprecated and defunct. Please see \link{rhubv2}. } diff --git a/man/last_check.Rd b/man/last_check.Rd index 30de104..6ceb048 100644 --- a/man/last_check.Rd +++ b/man/last_check.Rd @@ -1,22 +1,14 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/last.R +% Please edit documentation in R/rhubv1.R \name{last_check} \alias{last_check} -\title{The last rhub check of this R session} +\title{This function is deprecated and defunct. Please see \link{rhubv2}.} \usage{ -last_check() +last_check(...) } -\value{ -An rhub_check object. +\arguments{ +\item{...}{Deprecated.} } \description{ -\code{rhub} caches the id(s) of the last submission. This can be retrieved -with \code{last_check}. -} -\examples{ -\dontrun{ -check("packagedir") -last_check() -last_check()$livelog() -} +This function is deprecated and defunct. Please see \link{rhubv2}. } diff --git a/man/list_my_checks.Rd b/man/list_my_checks.Rd index 6022e51..ffc48e6 100644 --- a/man/list_my_checks.Rd +++ b/man/list_my_checks.Rd @@ -1,63 +1,14 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/list.R +% Please edit documentation in R/rhubv1.R \name{list_my_checks} \alias{list_my_checks} -\title{List all checks for an email address} +\title{This function is deprecated and defunct. Please see \link{rhubv2}.} \usage{ -list_my_checks(email = email_address(), package = NULL, howmany = 20) +list_my_checks(...) } \arguments{ -\item{email}{Email address. By default it is guessed with -\code{\link[whoami:email_address]{whoami::email_address()}}. The address must be validated, see -\code{\link[=validate_email]{validate_email()}}.} - -\item{package}{\code{NULL}, or a character scalar. 
Can be used to restrict -the search for a single package.} - -\item{howmany}{How many check groups (checks submitted simultaneously) -to show. The current API limit is 20.} -} -\value{ -A \link[tibble:tibble]{tibble::tibble} with columns: -\itemize{ -\item package Name of the package. -\item version Package version. -\item result: More detailed result of the check. Can be \code{NULL} for errors. -This is a list column with members: \code{status}, \code{errors}, \code{warnings}, -\code{notes}. -\item group: R-hub check group id. -\item id: `R-hub check id. -\item platform_name: Name of the check platform. -\item build_time: Build time, a \link{difftime} object. -\item submitted: Time of submission. -\item started: Time of the check start. -\item platform: Detailed platform data, a list column. -\item builder: Name of the builder machine. -\item status Status of the check. Possible values: -\itemize{ -\item \code{created}: check job was created, but not running yet. -\item \verb{in-progress}: check job is running. -\item \code{parseerror}: internal R-hub error parsing the check results. -\item \code{preperror}: check error, before the package check has started. -\item \code{aborted}: aborted by admin or user. -\item \code{error}: failed check. (Possibly warnings and notes as well.) -\item \code{warning}: \verb{R CMD check} reported warnings. (Possibly notes as well.) -\item \code{note}: \verb{R CMD check} reported notes. -\item \code{ok}: successful check. -} -\item email: Email address of maintainer / submitter. -} +\item{...}{Deprecated.} } \description{ -List all checks for an email address -} -\examples{ -\dontrun{ -ch <- list_my_checks() -ch -ch$details() -} -} -\seealso{ -list_package_checks +This function is deprecated and defunct. Please see \link{rhubv2}. 
} diff --git a/man/list_package_checks.Rd b/man/list_package_checks.Rd index 16e10ad..eb767d8 100644 --- a/man/list_package_checks.Rd +++ b/man/list_package_checks.Rd @@ -1,58 +1,14 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/list.R +% Please edit documentation in R/rhubv1.R \name{list_package_checks} \alias{list_package_checks} -\title{List checks of a package} +\title{This function is deprecated and defunct. Please see \link{rhubv2}.} \usage{ -list_package_checks(package = ".", email = NULL, howmany = 20) +list_package_checks(...) } \arguments{ -\item{package}{Directory of an R package, or a package tarball.} - -\item{email}{Email address that was used for the check(s). -If \code{NULL}, then the maintainer address is used.} - -\item{howmany}{How many checks to show. The current maximum of the API -is 20.} -} -\value{ -A \link[tibble:tibble]{tibble::tibble} with columns: -\itemize{ -\item package Name of the package. -\item version Package version. -\item result: More detailed result of the check. Can be \code{NULL} for errors. -This is a list column with members: \code{status}, \code{errors}, \code{warnings}, -\code{notes}. -\item group: R-hub check group id. -\item id: `R-hub check id. -\item platform_name: Name of the check platform. -\item build_time: Build time, a \link{difftime} object. -\item submitted: Time of submission. -\item started: Time of the check start. -\item platform: Detailed platform data, a list column. -\item builder: Name of the builder machine. -\item status Status of the check. Possible values: -\itemize{ -\item \code{created}: check job was created, but not running yet. -\item \verb{in-progress}: check job is running. -\item \code{parseerror}: internal R-hub error parsing the check results. -\item \code{preperror}: check error, before the package check has started. -\item \code{aborted}: aborted by admin or user. -\item \code{error}: failed check. (Possibly warnings and notes as well.) 
-\item \code{warning}: \verb{R CMD check} reported warnings. (Possibly notes as well.) -\item \code{note}: \verb{R CMD check} reported notes. -\item \code{ok}: successful check. -} -\item email: Email address of maintainer / submitter. -} +\item{...}{Deprecated.} } \description{ -List checks of a package -} -\examples{ -\dontrun{ -ch <- list_package_checks() -ch -ch$details(1) -} +This function is deprecated and defunct. Please see \link{rhubv2}. } diff --git a/man/list_validated_emails.Rd b/man/list_validated_emails.Rd index 53d4659..a8f526f 100644 --- a/man/list_validated_emails.Rd +++ b/man/list_validated_emails.Rd @@ -1,21 +1,14 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/email.R +% Please edit documentation in R/rhubv1.R \name{list_validated_emails} \alias{list_validated_emails} -\title{List validated email addresses} +\title{This function is deprecated and defunct. Please see \link{rhubv2}.} \usage{ -list_validated_emails() +list_validated_emails(...) } -\value{ -A \code{data.frame} with two columns: \code{email} and \code{token}. -If in interactive mode, and there are no validated email addresses, -then a message is printed and the data frame is returned invisibly. +\arguments{ +\item{...}{Deprecated.} } \description{ -List email addresses validated on R-hub on the current machine. +This function is deprecated and defunct. Please see \link{rhubv2}. } -\seealso{ -Other email validation: -\code{\link{validate_email}()} -} -\concept{email validation} diff --git a/man/local_check_linux.Rd b/man/local_check_linux.Rd index 770b393..e862d4c 100644 --- a/man/local_check_linux.Rd +++ b/man/local_check_linux.Rd @@ -1,55 +1,14 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/local.R +% Please edit documentation in R/rhubv1.R \name{local_check_linux} \alias{local_check_linux} -\title{Run a package check locally, in a Docker container} +\title{This function is deprecated and defunct. 
Please see \link{rhubv2}.} \usage{ -local_check_linux( - path = ".", - quiet = FALSE, - image = NULL, - valgrind = FALSE, - check_args = character(), - env_vars = character(), - timeout = Inf, - artifacts = tempfile() -) +local_check_linux(...) } \arguments{ -\item{path}{Path to a directory containing an R package, or path to -source R package tarball built with \verb{R CMD build} or -\code{devtools::build()}.} - -\item{quiet}{Whether to print the check output} - -\item{image}{Docker image to use. If \code{NULL}, a default image is selected.} - -\item{valgrind}{Whether to run the check with Valgrind.} - -\item{check_args}{Extra arguments for the \verb{R CMD check} command.} - -\item{env_vars}{Environment variables to set on the builder machine -before the check. A named character vector.} - -\item{timeout}{Timeout for a check, a \code{difftime} object or a scalar -that will be interpreted as seconds.} - -\item{artifacts}{Where to copy the build artifacts after the build.} -} -\value{ -An \code{rcmdcheck::rcmdcheck} object, with extra fields: -\itemize{ -\item \code{all_output}: all output from the check, both standard output and -error. -\item \code{container_name}: name of the Docker container that performed the -build. It is a random name. -\item \code{artifacts}: directory of build artifacts. -} +\item{...}{Deprecated.} } \description{ -Run a package check locally, in a Docker container. UNTESTED -ON WINDOWS, bug reports welcome. :-) -} -\details{ -You'll need to have bash and Docker installed. +This function is deprecated and defunct. Please see \link{rhubv2}. 
} diff --git a/man/local_check_linux_images.Rd b/man/local_check_linux_images.Rd index faf28f4..9f45019 100644 --- a/man/local_check_linux_images.Rd +++ b/man/local_check_linux_images.Rd @@ -1,12 +1,14 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/local.R +% Please edit documentation in R/rhubv1.R \name{local_check_linux_images} \alias{local_check_linux_images} -\title{List R-hub Docker images} +\title{This function is deprecated and defunct. Please see \link{rhubv2}.} \usage{ -local_check_linux_images() +local_check_linux_images(...) +} +\arguments{ +\item{...}{Deprecated.} } \description{ -The images are pretty-printed in a short format. Use -\code{as.data.frame()} to get all available platform metadata. +This function is deprecated and defunct. Please see \link{rhubv2}. } diff --git a/man/platforms.Rd b/man/platforms.Rd index 71a1fe0..953b85f 100644 --- a/man/platforms.Rd +++ b/man/platforms.Rd @@ -1,18 +1,14 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/platform.R +% Please edit documentation in R/rhubv1.R \name{platforms} \alias{platforms} -\title{List all R-hub platforms} +\title{This function is deprecated and defunct. Please see \link{rhubv2}.} \usage{ -platforms() +platforms(...) } -\description{ -The platforms are pretty-printed in a short format. Use -\code{as.data.frame(platforms())} to get all available platform metadata. -} -\examples{ -\dontrun{ -platforms() -as.data.frame(platforms()) +\arguments{ +\item{...}{Deprecated.} } +\description{ +This function is deprecated and defunct. Please see \link{rhubv2}. 
} diff --git a/man/rc_list_local_tokens.Rd b/man/rc_list_local_tokens.Rd new file mode 100644 index 0000000..e1f0e45 --- /dev/null +++ b/man/rc_list_local_tokens.Rd @@ -0,0 +1,21 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/rc.R +\name{rc_list_local_tokens} +\alias{rc_list_local_tokens} +\title{Show your tokens for the R Consortium runners} +\usage{ +rc_list_local_tokens() +} +\value{ +Data frame with string columns \code{email} and \code{token}. +} +\description{ +Lists all tokens stored on the local machine. +} +\seealso{ +Other RC runners API: +\code{\link{rc_list_repos}()}, +\code{\link{rc_new_token}()}, +\code{\link{rc_submit}()} +} +\concept{RC runners API} diff --git a/man/rc_list_repos.Rd b/man/rc_list_repos.Rd new file mode 100644 index 0000000..fc325d2 --- /dev/null +++ b/man/rc_list_repos.Rd @@ -0,0 +1,33 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/rc.R +\name{rc_list_repos} +\alias{rc_list_repos} +\title{List your repositories created by the R Consortium runners} +\usage{ +rc_list_repos(email = NULL) +} +\arguments{ +\item{email}{Email address. We try to detect this, but +if the detection fails, you can specify it explicitly.} +} +\value{ +Data frame with columns: +\itemize{ +\item \code{repo_name}: Name of the repository. +\item \code{repo_url}: URL of the repository. +\item \code{builds_url}: URL to the builds of the repository. +} + +Additional columns and customized printing will probably be added +to the result later. +} +\description{ +Lists repositories created by \code{\link[=rc_submit]{rc_submit()}} submissions. 
+} +\seealso{ +Other RC runners API: +\code{\link{rc_list_local_tokens}()}, +\code{\link{rc_new_token}()}, +\code{\link{rc_submit}()} +} +\concept{RC runners API} diff --git a/man/rc_new_token.Rd b/man/rc_new_token.Rd new file mode 100644 index 0000000..67bdc7c --- /dev/null +++ b/man/rc_new_token.Rd @@ -0,0 +1,45 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/rc.R +\name{rc_new_token} +\alias{rc_new_token} +\title{Request a new token for submissions to the R Consortium runners} +\usage{ +rc_new_token(email = NULL, token = NULL) +} +\arguments{ +\item{email}{Email address to verify. We try to detect this, but +if the detection fails, you can specify it explicitly. +If this argument is missing (or \code{NULL}), then you can specify it +interactively.} + +\item{token}{Token to add. If you already received a token in an email +from R-hub, you can specify that here.} +} +\description{ +To build and check R packages on the RC runners of R-hub, you'll need +to verify your email address. R-hub will send a token to your email +address, and this token will be stored on your computer. +} +\details{ +You need to store a token on every computer you want to submit +jobs from, either by reusing the token from the email you got, or +by requesting additional tokens for the new machines. Your old token +will stay valid as well. + +If you already have a token from a previous version of R-hub, you can +reuse that and you don't need to do anything. + +Run + +\if{html}{\out{
}}\preformatted{rhub:::email_file() +}\if{html}{\out{
}} + +to see the file rhub uses to store your tokens. +} +\seealso{ +Other RC runners API: +\code{\link{rc_list_local_tokens}()}, +\code{\link{rc_list_repos}()}, +\code{\link{rc_submit}()} +} +\concept{RC runners API} diff --git a/man/rc_submit.Rd b/man/rc_submit.Rd new file mode 100644 index 0000000..be04300 --- /dev/null +++ b/man/rc_submit.Rd @@ -0,0 +1,49 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/rc.R +\name{rc_submit} +\alias{rc_submit} +\title{Submit a package to the R Consortium runners} +\usage{ +rc_submit(path = ".", platforms = NULL, email = NULL, confirmation = NULL) +} +\arguments{ +\item{path}{Path to package file or package directory.} + +\item{platforms}{Platforms to check. See \code{\link[=rhub_platforms]{rhub_platforms()}} for a +current list. If not specified, then you can select the platforms +interactively. Must be specified in non-interactive sessions.} + +\item{email}{Email address. You must have a token on the local machine +that corresponds to the email address, see \code{\link[=rc_new_token]{rc_new_token()}}. +If not specified (or \code{NULL}), then the email address of the package +maintainer is used.} + +\item{confirmation}{You must set this to \code{TRUE} to submit a package +from a non-interactive session.} +} +\value{ +A list with data about the submission, invisibly. +Currently it has: +\itemize{ +\item \code{result}: Should be the string \code{"OK"}. +\item \code{repo_url}: URL to the repository. +\item \code{actions_url}: URL to the builds inside the repository. +\item \code{id}: Build id. This is a string with a randomly generated id. +\item \code{name}: Build name, this is a string, the concatenation of the +build platforms. +} + +More fields might be added later. +} +\description{ +Submit a package to the R Consortium runners +} +\seealso{ +\code{\link[=rhub_platforms]{rhub_platforms()}} for a list of supported platforms. 
+ +Other RC runners API: +\code{\link{rc_list_local_tokens}()}, +\code{\link{rc_list_repos}()}, +\code{\link{rc_new_token}()} +} +\concept{RC runners API} diff --git a/man/rhub-ids.Rd b/man/rhub-ids.Rd deleted file mode 100644 index d7f3af2..0000000 --- a/man/rhub-ids.Rd +++ /dev/null @@ -1,44 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/check-class.R -\name{rhub-ids} -\alias{rhub-ids} -\title{R-hub check ids} -\description{ -R-hub check ids -} -\section{R-hub ids}{ - - -Every R-hub check has a unique id, that is constructed from the -name of the source package archive, and a random string. For example: - -\if{html}{\out{
}}\preformatted{devtools_2.0.0.tar.gz-fe53bbba85de4a579f6dc3b852bf76a3 -}\if{html}{\out{
}} -} - -\section{R-hub group ids}{ - - -For every check submission, R-hub also creates a unique check group id. -One check group may contain multiple checks. E.g. \code{\link[=check_for_cran]{check_for_cran()}} -typically creates three or four check groups. Group ids look the same -as individual check ids. -} - -\section{Abbreviating ids}{ - - -The rhub package keeps a list of all the checks that it has seen in the -current session, and these checks can be also referenced by any unique -prefix of the random string part of the id, e.g. in the \code{\link[=get_check]{get_check()}} -function. E.g. if rhub already know the devtools check above, then - -\if{html}{\out{
}}\preformatted{get_check("fe53bbb") -}\if{html}{\out{
}} - -works. - -This is only recommended in interactive mode, and we suggest that you -always use the full ids when using rhub programmatically. -} - diff --git a/man/rhub-package.Rd b/man/rhub-package.Rd index c3208ca..b32bb9e 100644 --- a/man/rhub-package.Rd +++ b/man/rhub-package.Rd @@ -1,36 +1,263 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/rhub-package.R -\docType{package} +% Please edit documentation in R/rhubv2.R \name{rhub-package} -\alias{rhub} \alias{rhub-package} -\title{rhub: Connect to 'R-hub'} +\alias{rhub} +\title{The rhub package} \description{ -\if{html}{\figure{logo.png}{options: style='float: right' alt='logo' width='120'}} +Tools for R package developers +} +\details{ +\subsection{Installation}{ -Run 'R CMD check' on any of the 'R-hub' (\url{https://builder.r-hub.io/}) architectures, from the command line. The current architectures include 'Windows', 'macOS', 'Solaris' and various 'Linux' distributions. +Install rhub from CRAN: + +\if{html}{\out{
}}\preformatted{pak::pkg_install("rhub") +}\if{html}{\out{
}} } -\seealso{ -Useful links: + +\subsection{Usage}{ +\subsection{Requirements}{ \itemize{ - \item \url{https://github.com/r-hub/rhub} - \item \url{https://r-hub.github.io/rhub/} - \item Report bugs at \url{https://github.com/r-hub/rhub/issues} +\item A Github account. +\item Your R package must be in a GitHub repository. +\item You need a GitHub \href{https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token}{Personal Access Token}. +You can use the \href{https://gitcreds.r-lib.org/}{gitcreds package} to add +the token to the git credential store. } +See the \href{#the-r-consortium-runners}{R Consortium runners} section for +using rhub if your package is not on GitHub. } -\author{ -\strong{Maintainer}: Gábor Csárdi \email{csardi.gabor@gmail.com} -Authors: -\itemize{ - \item Maëlle Salmon \email{maelle.salmon@yahoo.se} (\href{https://orcid.org/0000-0002-2815-0399}{ORCID}) +\subsection{Private repositories}{ + +rhub uses GitHub Actions, which is free for public repositories. +For private repositories you also get some minutes for free, depending on +the GitHub subscription you have. See +\href{https://docs.github.com/en/billing/managing-billing-for-github-actions/about-billing-for-github-actions}{About billing for GitHub Actions} for details. +} + +\subsection{Setup}{ +\enumerate{ +\item Switch to the directory of your package, and call \code{rhub::rhub_setup()} to +add the R-hub workflow file to your package. } -Other contributors: +\if{html}{\out{
}}\preformatted{rhub::rhub_setup() +}\if{html}{\out{
}}\if{html}{\out{ +
+#> Setting up R-hub v2.                                                            
+#>  Found R package at /private/tmp/cli.                                          
+#>  Found git repository at /private/tmp/cli.                                     
+#>  Created workflow file /private/tmp/cli/.github/workflows/rhub.yaml.           
+#>                                                                                 
+#> Notes:                                                                          
+#>  The workflow file must be added to the default branch of the GitHub           
+#>   repository.                                                                   
+#>  GitHub actions must be enabled for the repository. They are disabled for      
+#>   forked repositories by default.                                               
+#>                                                                                 
+#> Next steps:                                                                     
+#>  Add the workflow file to git using `git add <filename>`.                      
+#>  Commit it to git using `git commit`.                                          
+#>  Push the commit to GitHub using `git push`.                                   
+#>  Call `rhub::rhub_doctor()` to check that you have set up R-hub correctly.     
+#>  Call `rhub::rhub_check()` to check your package.                              
+
+}} + +\enumerate{ +\item Run \verb{git commit} and \verb{git push} to push the workflow file to GitHub. +\item Run \code{rhub::rhub_doctor()} to check if everything is set up correctly: +} + +\if{html}{\out{
}}\preformatted{rhub::rhub_doctor() +}\if{html}{\out{
}}\if{html}{\out{ +
+#>  Found R package at /private/tmp/cli.                                          
+#>  Found git repository at /private/tmp/cli.                                     
+#>  Found GitHub PAT.                                                             
+#>  Found repository on GitHub at <https://github.com/r-lib/cli>.                 
+#>  GitHub PAT has the right scopes.                                              
+#>  Found R-hub workflow in default branch, and it is active.                     
+#> → WOOT! You are ready to run `rhub::rhub_check()` on this package.              
+
+}} + +} + +\subsection{Run checks}{ + +Use \code{rhub::rhub_platforms()} to get a list of supported platforms and checks: + +\if{html}{\out{
}}\preformatted{rhub::rhub_platforms() +}\if{html}{\out{
}}\if{html}{\out{ +
+#> ── Virtual machines ─────────────────────────────────────────────────────────── 
+#>  1 [VM]  linux                                                                  
+#>    All R versions on GitHub Actions ubuntu-latest                               
+#>  2 [VM]  macos                                                                  
+#>    All R versions on GitHub Actions macos-latest                                
+#>  3 [VM]  macos-arm64                                                            
+#>    All R versions on GitHub Actions macos-14                                    
+#>  4 [VM]  windows                                                                
+#>    All R versions on GitHub Actions windows-latest                              
+#>                                                                                 
+#> ── Containers ───────────────────────────────────────────────────────────────── 
+#>  5 [CT]  atlas  [ATLAS]                                                         
+#>    R Under development (unstable) (2024-03-19 r86153) on Fedora Linux 38 (Conta…
+#>    ghcr.io/r-hub/containers/atlas:latest                                        
+#>  6 [CT]  clang-asan  [asan, clang-ASAN, clang-UBSAN, ubsan]                     
+#>    R Under development (unstable) (2024-03-19 r86153) on Ubuntu 22.04.4 LTS     
+#>    ghcr.io/r-hub/containers/clang-asan:latest                                   
+#>  7 [CT]  clang16  [clang16]                                                     
+#>    R Under development (unstable) (2024-03-18 r86148) on Ubuntu 22.04.4 LTS     
+#>    ghcr.io/r-hub/containers/clang16:latest                                      
+#>  8 [CT]  clang17  [clang17]                                                     
+#>    R Under development (unstable) (2024-03-18 r86148) on Ubuntu 22.04.4 LTS     
+#>    ghcr.io/r-hub/containers/clang17:latest                                      
+#>  9 [CT]  clang18  [clang18]                                                     
+#>    R Under development (unstable) (2024-03-18 r86148) on Ubuntu 22.04.4 LTS     
+#>    ghcr.io/r-hub/containers/clang18:latest                                      
+#> 10 [CT]  donttest  [donttest]                                                   
+#>    R Under development (unstable) (2024-03-18 r86148) on Ubuntu 22.04.4 LTS     
+#>    ghcr.io/r-hub/containers/donttest:latest                                     
+#> 11 [CT]  gcc13  [gcc13]                                                         
+#>    R Under development (unstable) (2024-03-19 r86153) on Fedora Linux 38 (Conta…
+#>    ghcr.io/r-hub/containers/gcc13:latest                                        
+#> 12 [CT]  intel  [Intel]                                                         
+#>    R Under development (unstable) (2024-03-19 r86153) on Fedora Linux 38 (Conta…
+#>    ghcr.io/r-hub/containers/intel:latest                                        
+#> 13 [CT]  mkl  [MKL]                                                             
+#>    R Under development (unstable) (2024-03-19 r86153) on Fedora Linux 38 (Conta…
+#>    ghcr.io/r-hub/containers/mkl:latest                                          
+#> 14 [CT]  nold  [noLD]                                                           
+#>    R Under development (unstable) (2024-03-19 r86153) on Ubuntu 22.04.4 LTS     
+#>    ghcr.io/r-hub/containers/nold:latest                                         
+#> 15 [CT]  nosuggests  [noSuggests]                                               
+#>    R Under development (unstable) (2024-03-19 r86153) on Fedora Linux 38 (Conta…
+#>    ghcr.io/r-hub/containers/nosuggests:latest                                   
+#> 16 [CT]  ubuntu-clang  [r-devel-linux-x86_64-debian-clang]                      
+#>    R Under development (unstable) (2024-03-19 r86153) on Ubuntu 22.04.4 LTS     
+#>    ghcr.io/r-hub/containers/ubuntu-clang:latest                                 
+#> 17 [CT]  ubuntu-gcc12  [r-devel-linux-x86_64-debian-gcc]                        
+#>    R Under development (unstable) (2024-03-19 r86153) on Ubuntu 22.04.4 LTS     
+#>    ghcr.io/r-hub/containers/ubuntu-gcc12:latest                                 
+#> 18 [CT]  ubuntu-next  [r-next, r-patched, r-patched-linux-x86_64]               
+#>    R version 4.3.3 Patched (2024-02-29 r86153) on Ubuntu 22.04.4 LTS            
+#>    ghcr.io/r-hub/containers/ubuntu-next:latest                                  
+#> 19 [CT]  ubuntu-release  [r-release, r-release-linux-x86_64, ubuntu]            
+#>    R version 4.3.3 (2024-02-29) on Ubuntu 22.04.4 LTS                           
+#>    ghcr.io/r-hub/containers/ubuntu-release:latest                               
+#> 20 [CT]  valgrind  [valgrind]                                                   
+#>    R Under development (unstable) (2024-03-19 r86153) on Fedora Linux 38 (Conta…
+#>    ghcr.io/r-hub/containers/valgrind:latest                                     
+
+}} + + +Run \code{rhub::rhub_check()} to start R-hub v2 checks on GitHub Actions: + +\if{html}{\out{
}}\preformatted{rhub::rhub_check() +}\if{html}{\out{
}}\if{html}{\out{ +
+#>  Found git repository at /private/tmp/cli.                                     
+#>  Found GitHub PAT.                                                             
+#>                                                                                 
+#> Available platforms (see `rhub::rhub_platforms()` for details):                 
+#>                                                                                 
+#>  1 [VM] linux          R-* (any version)                     ubuntu-latest on G…
+#>  2 [VM] macos          R-* (any version)                     macos-latest on Gi…
+#>  3 [VM] macos-arm64    R-* (any version)                     macos-14 on GitHub 
+#>  4 [VM] windows        R-* (any version)                     windows-latest on …
+#>  5 [CT] atlas          R-devel (2024-03-19 r86153)           Fedora Linux 38 (C…
+#>  6 [CT] clang-asan     R-devel (2024-03-19 r86153)           Ubuntu 22.04.4 LTS 
+#>  7 [CT] clang16        R-devel (2024-03-18 r86148)           Ubuntu 22.04.4 LTS 
+#>  8 [CT] clang17        R-devel (2024-03-18 r86148)           Ubuntu 22.04.4 LTS 
+#>  9 [CT] clang18        R-devel (2024-03-18 r86148)           Ubuntu 22.04.4 LTS 
+#> 10 [CT] donttest       R-devel (2024-03-18 r86148)           Ubuntu 22.04.4 LTS 
+#> 11 [CT] gcc13          R-devel (2024-03-19 r86153)           Fedora Linux 38 (C…
+#> 12 [CT] intel          R-devel (2024-03-19 r86153)           Fedora Linux 38 (C…
+#> 13 [CT] mkl            R-devel (2024-03-19 r86153)           Fedora Linux 38 (C…
+#> 14 [CT] nold           R-devel (2024-03-19 r86153)           Ubuntu 22.04.4 LTS 
+#> 15 [CT] nosuggests     R-devel (2024-03-19 r86153)           Fedora Linux 38 (C…
+#> 16 [CT] ubuntu-clang   R-devel (2024-03-19 r86153)           Ubuntu 22.04.4 LTS 
+#> 17 [CT] ubuntu-gcc12   R-devel (2024-03-19 r86153)           Ubuntu 22.04.4 LTS 
+#> 18 [CT] ubuntu-next    R-4.3.3 (patched) (2024-02-29 r86153) Ubuntu 22.04.4 LTS 
+#> 19 [CT] ubuntu-release R-4.3.3 (2024-02-29)                  Ubuntu 22.04.4 LTS 
+#> 20 [CT] valgrind       R-devel (2024-03-19 r86153)           Fedora Linux 38 (C…
+#>                                                                                 
+#> Selection (comma separated numbers, 0 to cancel): 1, 5                          
+#>                                                                                 
+#>  Check started: linux, atlas (apricot-flycatcher).                             
+#>   See <https://github.com/r-lib/cli/actions> for live output!                   
+
+}} + +} + +} + +\subsection{The R Consortium runners}{ + +If you don't want to put your package on GitHub, you can still use the +rhub package to run package checks on any supported platform using a +shared pool of runners in the https://github.com/r-hub2 GitHub +organization. + +The process is similar to the first version of R-hub: +\itemize{ +\item Set your working directory to the R package you want to check. +\item Obtain a token from R-hub, to verify your email address: + +\if{html}{\out{
}}\preformatted{rc_new_token() +}\if{html}{\out{
}} + +(You do not need to do this, if you already submitted packages to a +previous version of R-hub from the same machine, using the same email +address. Call \code{rc_list_local_tokens()} to check if you already have +tokens.) +\item Submit a build with + +\if{html}{\out{
}}\preformatted{rc_submit() +}\if{html}{\out{
}} +\item Select the platforms you want to use, and follow the instructions and +the link provided to see your check results. +} +\subsection{Limitations of the R Consortium runners}{ \itemize{ - \item R Consortium [funder] +\item Your package will be public to the world, and will be stored in the +https://github.com/r-hub2 organization. Your check output and results +will be visible to anyone with a GitHub account. If you want to keep +your package private, you can put it in a private GitHub repository, +and use the \code{rhub_setup()} and \code{rhub_check()} functions instead. +\item The R Consortium runners are shared among all users, so you might need +to wait for your builds to start. +\item You have to wait at least five minutes between submissions with +\code{rc_submit()}. +\item Currently you need to create a GitHub account to see the check logs of +your package. You don't need a GitHub account to submit the checks. +} + +To avoid these limitations (except for the need for a GitHub account), +put your package in a GitHub repository, and use the \code{rhub_setup()} and +\code{rhub_check()} functions instead of \code{rc_submit()} and the R Consortium +runners. +} + } +\subsection{Code of Conduct}{ + +Please note that the rhub package is released with a +\href{https://r-hub.github.io/rhub/dev/CODE_OF_CONDUCT.html}{Contributor Code of Conduct}. +By contributing to this project, you agree to abide by its terms. 
+} + +\subsection{License}{ + +MIT © R Consortium +} } \keyword{internal} diff --git a/man/rhub_check.Rd b/man/rhub_check.Rd index 52a07c9..a8b90d2 100644 --- a/man/rhub_check.Rd +++ b/man/rhub_check.Rd @@ -1,74 +1,28 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/check-class.R +% Please edit documentation in R/check.R \name{rhub_check} \alias{rhub_check} -\title{An \code{rhub_check} object holds status and results of rhub checks} -\description{ -An \code{rhub_check} object holds status and results of rhub checks +\title{Check a package on R-hub} +\usage{ +rhub_check(gh_url = NULL, platforms = NULL, r_versions = NULL, branch = NULL) } -\section{Usage}{ - +\arguments{ +\item{gh_url}{GitHub URL of a package to check, or \code{NULL} to check +the package in the current directory.} -\if{html}{\out{
}}\preformatted{ch <- rhub_check$new(ids = NULL, status = NULL, group = NULL) -ch$get_ids() -ch$update() -ch$print(...) -ch$browse(which = NULL) -ch$urls(which = NULL) -ch$livelog(which = 1) -ch$cran_summary() -}\if{html}{\out{
}} -} +\item{platforms}{Platforms to use, a character vector. Use \code{NULL} to +select from a list in interactive sessions. See \code{\link[=rhub_platforms]{rhub_platforms()}}.} -\section{Arguments}{ +\item{r_versions}{Which R version(s) to use for the platforms that +support multiple R versions. This argument is not implemented yet.} -\itemize{ -\item \code{ch} An rhub check object. It can be created using \code{\link[=check]{check()}}, -and other check functions including \code{\link{check_for_cran}}. -See also \code{\link[=last_check]{last_check()}}. -\item \code{ids} Character vector of check ids. -\item \code{status} Check status for \code{ids} or \code{group}. -\item \code{group} Check group id, string scalar. Either \code{group} or \code{ids} must -be non-\code{NULL}. -\item \code{...} Extra arguments are currently ignored. -\item \code{which} Which check to show, if the object contains multiple -checks. For \code{browse} the default is all checks. For \code{livelog} the -default is the first check. A check can be selected via its number -or id. +\item{branch}{Branch to use to run R-hub. Defaults to the current +branch if \code{gh_url} is \code{NULL}. Otherwise defaults to \code{"main"}. Note that +this branch also needs to include the \code{rhub.yaml} workflow file.} } +\value{ +TODO } - -\section{Details}{ - - -An \code{rhub_check} object can be created by \code{\link[=check]{check()}}, \code{\link[=list_my_checks]{list_my_checks()}}, -or \code{\link[=list_package_checks]{list_package_checks()}}. \code{\link[=last_check]{last_check()}} returns the last check(s) -submitted from the current R session. Do not confuse \code{rhub_check}/\code{rhub_check_for_cran} -(classes) with \code{\link[=check]{check()}} or \code{\link[=check_for_cran]{check_for_cran()}} (functions). - -\code{ch$get_ids()} returns the check ids. These can be used to query if a -check has finished. - -\code{ch$update()} updates the status of the check. 
Printing the check -status to the screen does not perform an update, unless the status of -the check(s) is unknown. - -\code{ch$print()} prints the status of the check(s) to the screen. - -\code{ch$cran_summary()} prints text to be copy-pasted in cran-comments.md, -it is especially useful on the output of \code{\link[=check_for_cran]{check_for_cran()}}. - -\code{ch$browse()} opens a tab or window in the default web browser, that points -to the detailed logs of the check(s). - -\code{ch$urls()} return a \code{\link[tibble:tibble]{tibble::tibble}} with URL to the html log, text log and artifacts -of the check(s). - -For both \code{ch$browse()} and \code{ch$urls()}, note that the logs and artifacts -are not kept forever, they are accessible for a few days after submission. - -\code{ch$livelog()} shows the live log of the check. The live log can be -interrupted using the usual interruption keyboard shortcut, usually -\code{CTRL+c} or \code{ESC}. +\description{ +Check a package on R-hub } - diff --git a/man/rhub_doctor.Rd b/man/rhub_doctor.Rd new file mode 100644 index 0000000..a3a1535 --- /dev/null +++ b/man/rhub_doctor.Rd @@ -0,0 +1,17 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/doctor.R +\name{rhub_doctor} +\alias{rhub_doctor} +\title{Check if the current or the specified package is ready to use with R-hub} +\usage{ +rhub_doctor(gh_url = NULL) +} +\arguments{ +\item{gh_url}{Use \code{NULL} for the package in the current working +directory. Alternatively, use the URL of a GitHub repository that +contains an R package that was set up to use with R-hub.} +} +\description{ +Errors if the package or repository is not set up correctly, and +advises on possible solutions. 
+} diff --git a/man/rhub_platforms.Rd b/man/rhub_platforms.Rd new file mode 100644 index 0000000..efaf412 --- /dev/null +++ b/man/rhub_platforms.Rd @@ -0,0 +1,29 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/platforms.R +\name{rhub_platforms} +\alias{rhub_platforms} +\title{List R-hub platforms} +\usage{ +rhub_platforms() +} +\value{ +Data frame with columns: +\itemize{ +\item \code{name}: platform name. Use this in the \code{platforms} argument of +\code{\link[=rhub_check]{rhub_check()}}. +\item \code{aliases}: alternative platform names. They can also be used in the +\code{platforms} argument of +\code{\link[=rhub_check]{rhub_check()}}. +\item \code{type}: \code{"os"} or \code{"container"}. +\item \code{os_type}: Linux, macOS or Windows currently. +\item \code{container}: URL of the container image for container platforms. +\item \code{github_os}: name of the OS on GitHub Actions for non-container +platforms. +\item \code{r_version}: R version string. If \code{"*"} then any supported R version +can be selected for this platform. +\item \code{os_name}: name of the operating system, including Linux distribution +name and version for container platforms. +} +} +\description{ +List R-hub platforms +} diff --git a/man/rhub_setup.Rd b/man/rhub_setup.Rd new file mode 100644 index 0000000..d673a4c --- /dev/null +++ b/man/rhub_setup.Rd @@ -0,0 +1,19 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/setup.R +\name{rhub_setup} +\alias{rhub_setup} +\title{Set up the current R package for use with R-hub} +\usage{ +rhub_setup(overwrite = FALSE) +} +\arguments{ +\item{overwrite}{If \code{TRUE}, \code{\link[=rhub_setup]{rhub_setup()}} will overwrite an already +existing workflow file.} +} +\value{ +Nothing. +} +\description{ +It adds the R-hub workflow file to the current package, or updates it, +and advises on next steps. 
+} diff --git a/man/rhubv2.Rd b/man/rhubv2.Rd new file mode 100644 index 0000000..0afdd55 --- /dev/null +++ b/man/rhubv2.Rd @@ -0,0 +1,131 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/rhubv2.R +\name{rhubv2} +\alias{rhubv2} +\title{R-hub v2} +\description{ +Start here to learn about R-hub v2, especially if you +used the previous version of R-hub before. +} +\section{R-hub v2}{ +\subsection{Introduction}{ + +R-hub v2, i.e. version 2 or later of the rhub package, is a completely +new check system. In this article we highlight the differences between +the old and the new system. + +There are two ways to use R-hub v2. The recommended way is to store your +R package in a GitHub repository and use the \verb{rhub_*()} functions to +start checks on GitHub Actions, using your own GitHub account. + +Alternatively, if you don't want to store your R package on GitHub, you +can use the \verb{rc_*()} functions to run checks in a shared GitHub +organization at https://github.com/r-hub2, using the R Consortium runners. +See more about the R Consortium runners below. +} + +\subsection{Transitioning from R-hub v1}{ +\subsection{Requirements for using R-hub v2}{ +\itemize{ +\item First, you need a GitHub account. +\item Second, you need to have your R package in a GitHub repository. +In your local git clone make sure that the \code{origin} git remote is set +to the GitHub repository. +\item Third, you need a GitHub \href{https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token}{Personal Access Token} +and you need to store it in the git credential store on your machine. +You can use \code{gitcreds::gitcreds_set()} to add the token to the git +credential store. +} + +Call \code{rhub_setup()} from the local git clone to set up R-hub v2 for your +package. This adds a GitHub Actions workflow to your local repository. 
+Push this change to GitHub, into your default git branch, and then you +are ready to start checks with \code{rhub_check()}. +} + +\subsection{Differences from R-hub v1}{ +\itemize{ +\item The check picks up the package from GitHub, so it does not use +changes in your local git clone. You need to push the changes to +GitHub first. You can use a non-default branch, with the \code{branch} +argument of \code{rhub_check()}. +\item You won't get an email about the check results, but you'll receive +regular GitHub notifications about check failures, unless you opt out. +GitHub can also turn these into emails if you like. +\item There is no live output from the check at the R console. See the +'Actions' tab of your repository on GitHub for a live check log. +\item Many more specialized platforms are available. +\item Most platforms use binary packages, so checks, and in particular +installing dependencies, are much faster. +} +} + +\subsection{Private repositories}{ + +GitHub Actions is free for public repositories. +For private repositories you also get some minutes for free, depending on +the GitHub subscription you have. See +\href{https://docs.github.com/en/billing/managing-billing-for-github-actions/about-billing-for-github-actions}{About billing for GitHub Actions} +for details. +} + +\subsection{Branches}{ + +You can run checks on any branch that you push to GitHub, but the R-hub +workflow file (\code{.github/workflows/rhub.yaml} within +your repo) must be present in \strong{both} the default branch (usually \code{main}) +and the branch you want to run the check on. +} + +} + +\subsection{The R Consortium runners}{ + +If you don't want to put your package on GitHub, you can still use the +rhub package to run package checks on any supported platform using a +shared pool of runners in the https://github.com/r-hub2 GitHub +organization. 
+ +The process is similar to the first version of R-hub: +\itemize{ +\item Set your working directory to the R package you want to check. +\item Obtain a token from R-hub, to verify your email address: + +\if{html}{\out{
}}\preformatted{rc_new_token() +}\if{html}{\out{
}} + +(You do not need to do this, if you already submitted packages to a +previous version of R-hub from the same machine, using the same email +address. Call \code{rc_list_local_tokens()} to check if you already have +tokens.) +\item Submit a build with + +\if{html}{\out{
}}\preformatted{rc_submit() +}\if{html}{\out{
}} +\item Select the platforms you want to use, and follow the instructions and +the link provided to see your check results. +} +\subsection{Limitations of the R Consortium runners}{ +\itemize{ +\item Your package will be public to the world, and will be stored in the +https://github.com/r-hub2 organization. Your check output and results +will be visible to anyone with a GitHub account. If you want to keep +your package private, you can put it in a private GitHub repository, +and use the \code{rhub_setup()} and \code{rhub_check()} functions instead. +\item The R Consortium runners are shared among all users, so you might need +to wait for your builds to start. +\item You have to wait at least five minutes between submissions with +\code{rc_submit()}. +\item Currently you need to create a GitHub account to see the check logs of +your package. You don't need a GitHub account to submit the checks. +} + +To avoid these limitations (except for the need for a GitHub account), +put your package in a GitHub repository, and use the \code{rhub_setup()} and +\code{rhub_check()} functions instead of \code{rc_submit()} and the R Consortium +runners. 
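The R Consortium runner flow described above can be sketched end to end. This is an illustrative sketch only, using the functions named in this section; the calls need network access and an email verification token, so they are interactive and are not run here:

```r
# Sketch of the R Consortium runner flow (interactive; not run here).
rhub::rc_new_token()          # once per machine/email: verify your address
rhub::rc_list_local_tokens()  # inspect tokens already stored on this machine
rhub::rc_submit()             # build and upload the package in the working
                              # directory, then select platforms when prompted
```

After `rc_submit()` returns, follow the link it prints to see the check results in the https://github.com/r-hub2 organization.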
+} + +} +} + diff --git a/man/roxygen/meta.R b/man/roxygen/meta.R new file mode 100644 index 0000000..7aadefa --- /dev/null +++ b/man/roxygen/meta.R @@ -0,0 +1,24 @@ +if (exists(".knitr_asciicast_process", envir = .GlobalEnv)) { + rm(list = ".knitr_asciicast_process", envir = .GlobalEnv) +} + +asciicast::init_knitr_engine( + echo = TRUE, + echo_input = FALSE, + timeout = as.integer(Sys.getenv("ASCIICAST_TIMEOUT", 10)), + startup = quote(options(cli.num_colors = 256)) +) + +knitr::opts_chunk$set( + asciicast_knitr_output = "html", + asciicast_include_style = FALSE, + cache = TRUE, + cache.path = file.path(getwd(), "man/_cache/"), + fig.path = file.path(getwd(), "man/figures"), + error = TRUE +) + +list( + markdown = TRUE, + restrict_image_formats = TRUE +) diff --git a/man/validate_email.Rd b/man/validate_email.Rd index 747b6e9..1273269 100644 --- a/man/validate_email.Rd +++ b/man/validate_email.Rd @@ -1,36 +1,14 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/email.R +% Please edit documentation in R/rhubv1.R \name{validate_email} \alias{validate_email} -\title{Validate an email address on R-hub} +\title{This function is deprecated and defunct. Please see \link{rhubv2}.} \usage{ -validate_email(email = NULL, token = NULL) +validate_email(...) } \arguments{ -\item{email}{The email address to validate.} - -\item{token}{Token obtained from \code{rhub}, to validate the email address.} +\item{...}{Deprecated.} } \description{ -To build and check R packages on R-hub, you need to validate your -email address. This is because R-hub sends out emails about check -results. +This function is deprecated and defunct. Please see \link{rhubv2}. } -\details{ -The \code{rhub} package stores validated email addresses in a user -configuration file, at a platform-dependent location. -On your current platform the file is at -\Sexpr[stage=render]{rhub:::email_file()}. 
- -To validate a new email address, call this function from an interactive -R session, without any arguments. - -To add an email address that was validated before (probably on another -machine), to the configuration file, call this function with the \code{email} -and \code{token} arguments. -} -\seealso{ -Other email validation: -\code{\link{list_validated_emails}()} -} -\concept{email validation} diff --git a/tests/testthat/helpers.R b/tests/testthat/helpers.R deleted file mode 100644 index 9cf8cf2..0000000 --- a/tests/testthat/helpers.R +++ /dev/null @@ -1,24 +0,0 @@ - -create_minimal_package <- function(dir = tempfile()) { - if (!file.exists(dir)) dir.create(dir) - - ## /R This is not strictly necessary, actually - dir.create(file.path(dir, "R")) - cat("f <- function() { }\n", file = file.path(dir, "R", "package.R")) - - ## NAMESPACE - cat("", file = file.path(dir, "NAMESPACE")) - - ## DESCRIPTION - desc::description$new("!new")$ - set(Package = basename(dir))$ - set(Title = "Title Case")$ - set(Maintainer = "first second ")$ - set(Description = "Minimal package for testing. Multiple.")$ - set(License = "GPL-2")$ - del("URL")$ - del("BugReports")$ - write(file.path(dir, "DESCRIPTION")) - - dir -} diff --git a/tests/testthat/test-api.R b/tests/testthat/test-api.R deleted file mode 100644 index 5ad307f..0000000 --- a/tests/testthat/test-api.R +++ /dev/null @@ -1,60 +0,0 @@ - -context("api") - -test_that("query", { - res <- NULL - with_mock( - `httr::GET` = function(...) res <<- list(...), - `httr::status_code` = function(...) { 200 }, - `httr::headers` = function(...) { }, - `httr::content` = function(...) { }, - query("GET PLATFORMS") - ) - expect_equal(length(res), 3) - expect_true(is_string(res[[1]])) - expect_equal(class(res[[2]]), "request") - - called <- FALSE - with_mock( - `rhub:::get_endpoint` = function(endpoint, params) - list(method = endpoint, path = "p"), - `httr::POST` = function(...) called <<- "POST", - `httr::DELETE` = function(...) 
called <<- "DELETE", - `rhub:::report_error` = function(...) { }, - `rhub:::parse_response` = function(...) { }, - query("POST"), - expect_identical(called, "POST"), - query("DELETE"), - expect_identical(called, "DELETE"), - expect_error(query("FOOBAR"), "Unexpected HTTP verb") - ) -}) - -test_that("parse_response", { - with_mock( - `httr::headers` = function(...) - list("content-type" = "application/json; charset: utf8"), - `httr::content` = function(...) - '{ "foo": "bar", "bar": [1,2,3] }', - expect_equal( - parse_response(NULL, as = NULL), - list(foo = "bar", bar = list(1,2,3)) - ), - expect_equal( - parse_response(NULL, as = "text"), - '{ "foo": "bar", "bar": [1,2,3] }' - ) - ) - - with_mock( - `httr::headers` = function(...) list(), - `httr::content` = function(...) "foobar", - expect_equal(parse_response(NULL), "foobar") - ) - - with_mock( - `httr::headers` = function(...) list("content-type" = "text/plain"), - `httr::content` = function(...) "foobar", - expect_equal(parse_response(NULL), "foobar") - ) -}) diff --git a/tests/testthat/test-assertions.R b/tests/testthat/test-assertions.R deleted file mode 100644 index c6233ae..0000000 --- a/tests/testthat/test-assertions.R +++ /dev/null @@ -1,90 +0,0 @@ - -context("assertions") - -test_that("is_pkg_dir", { - tmppkg <- tempfile() - dir.create(tmppkg) - - expect_false(is_pkg_dir(tmppkg)) - expect_false(is_pkg_dir_or_tarball(tmppkg)) - - cat("dummy\n", file = file.path(tmppkg, "DESCRIPTION")) - expect_true(is_pkg_dir(tmppkg)) - expect_true(is_pkg_dir_or_tarball(tmppkg)) -}) - -test_that("is_pkg_tarball", { - tmp <- tempfile(fileext="dummy") - cat("dummy\n", file = tmp) - tmppkg <- tempfile(fileext = ".tar.gz") - expect_false(is_pkg_tarball(tmp)) - expect_false(is_pkg_dir_or_tarball(tmppkg)) - - cat("dummy\n", file = tmppkg) - expect_true(is_pkg_tarball(tmppkg)) - expect_true(is_pkg_dir_or_tarball(tmppkg)) -}) - -test_that("is_string", { - pos <- list( - "foo", - "" - ) - for (p in pos) expect_true(is_string(p)) - 
- neg <- list( - character(), - 1:10, - c("foo", "bar"), - NULL - ) - for (n in neg) expect_false(is_string(n)) -}) - -test_that("assert_validated_email_for_check", { - - with_mock( - `rhub::email_get_token` = function(x) "your-token", - expect_silent(assert_validated_email_for_check("foobar@domain")) - ) - - with_mock( - `rhub::is_interactive` = function(x) FALSE, - expect_error(assert_validated_email_for_check(basename(tempfile()))) - ) -}) - -test_that("is_flag", { - pos <- list( - TRUE, - FALSE - ) - for (p in pos) expect_true(is_flag(p)) - - neg <- list( - logical(), - c(TRUE, TRUE), - "TRUE", - NULL - ) - for (n in neg) expect_false(is_flag(n)) -}) - -test_that("is_named", { - - pos <- list( - c(a = 1, b = 2, c = 3), - structure(double(), names = character()), - list(foo = 1), - structure(list(), names = character()) - ) - for (p in pos) expect_true(is_named(p)) - - neg <- list( - 1:5, - c(a = 1, 2, c = 3), - list(1, 2, 3), - list(a = 1, 2, c = 3) - ) - for (n in neg) expect_false(is_named(n)) -}) diff --git a/tests/testthat/test-build.R b/tests/testthat/test-build.R deleted file mode 100644 index 0d639d5..0000000 --- a/tests/testthat/test-build.R +++ /dev/null @@ -1,19 +0,0 @@ - -context("build") - -test_that("build_package", { - ## directory is packed up properly - create_minimal_package(pkg <- tempfile()) - path <- build_package(pkg, tempfile()) - expect_true(is_pkg_tarball(path)) - - ## tarball is not touched - path2 <- build_package(path, tempfile()) - expect_true(is_pkg_tarball(path2)) - expect_equal(file.info(path)$size, file.info(path2)$size) - - ## Error is reported if R CMD build fails - create_minimal_package(pkg <- tempfile()) - file.remove(file.path(pkg, "DESCRIPTION")) - expect_error(build_package(pkg, tempfile())) -}) diff --git a/tests/testthat/test-check.R b/tests/testthat/test-check.R deleted file mode 100644 index 0c4b07b..0000000 --- a/tests/testthat/test-check.R +++ /dev/null @@ -1,67 +0,0 @@ - -context("check") - -test_that("check", { - 
pkg <- create_minimal_package() - - ## From tarball - pkg_targz <- build_package(pkg, tempfile()) - sub <- NULL - ch <- with_mock( - `rhub::assert_validated_email_for_check` = function(...) TRUE, - `rhub::submit_package` = function(...) { - sub <<- list(...) - list(list(id = "foobar")) - }, - `rhub:::match_platform` = function(x) x, - check(pkg_targz, email = "e", platforms = "p", show_status = FALSE) - ) - - expect_equal(sub[[1]], "e") - expect_equal(sub[[3]], "p") - expect_equal(sub[[4]], character()) -}) - -test_that("check shortcuts", { - with_mock( - `rhub::check` = function(path = ".", platforms, ...) platforms, - expect_equal(check_on_linux(), check_shortcut_platforms$linux), - expect_equal(check_on_windows(), check_shortcut_platforms$windows), - - expect_equal(check_on_debian(), check_shortcut_platforms$debian), - expect_equal(check_on_ubuntu(), check_shortcut_platforms$ubuntu), - expect_equal(check_on_fedora(), check_shortcut_platforms$fedora), - - expect_equal(check_with_roldrel(), check_shortcut_platforms$roldrel), - expect_equal(check_with_rrelease(), check_shortcut_platforms$rrelease), - expect_equal(check_with_rpatched(), check_shortcut_platforms$rpatched), - expect_equal(check_with_rdevel(), check_shortcut_platforms$rdevel), - - expect_equal(check_with_valgrind(), check_shortcut_platforms$valgrind), - expect_equal( - check_with_sanitizers(), - check_shortcut_platforms$sanitizers - ) - ) -}) - -test_that("get_check", { - package_data$ids <- character() - package_data$groups <- character() - expect_error( - get_check("foo"), - "Short check id 'foo' can only be used for cached ids", - fixed = TRUE - ) - expect_error( - get_check(c("foo", "bar")), - "Short check id 'foo' (and 1 more) can only be used for cached ids", - fixed = TRUE - ) - real <- "rversions_2.1.1.9000.tar.gz-73d9f48a0ede4deeac27fb9910be2a02" - expect_error( - get_check(c("foo", "bar", real)), - "Short check id 'foo' (and 1 more) can only be used for cached ids", - fixed = TRUE - ) -}) 
diff --git a/tests/testthat/test-email.R b/tests/testthat/test-email.R deleted file mode 100644 index c0341ea..0000000 --- a/tests/testthat/test-email.R +++ /dev/null @@ -1,43 +0,0 @@ - -context("email") - -test_that("validate_email", { - - with_mock( - `rhub::is_interactive` = function() FALSE, - expect_error(validate_email("foo")) - ) -}) - -test_that("email_file", { - ## Cannot easily test the value - expect_silent(email_file()) -}) - -test_that("email_get_token, email_add_token", { - tmp <- tempfile() - with_mock( - `rhub::email_file` = function() tmp, - { - expect_null(email_get_token("bugs.bunny@acme.com")) - email_add_token("bugs.bunny@acme.com", "tokenxxx") - expect_equal(email_get_token("bugs.bunny@acme.com"), "tokenxxx") - expect_null(email_get_token("duffy.duck@acme.com")) - - email_add_token("bugs.bunny@acme.com", "token2") - expect_equal(email_get_token("bugs.bunny@acme.com"), "token2") - expect_null(email_get_token("duffy.duck@acme.com")) - } - ) - -}) - -test_that("validate_email assertions", { - - ## not an email address - expect_error(validate_email("foo", "bar")) - - ## not a valid token - expect_error(validate_email("foo@dom", "")) - expect_error(validate_email("foo@dom", "bar")) -}) diff --git a/tests/testthat/test-error.R b/tests/testthat/test-error.R deleted file mode 100644 index 2e78e6c..0000000 --- a/tests/testthat/test-error.R +++ /dev/null @@ -1,41 +0,0 @@ - -context("error") - -test_that("report_system_error", { - expect_silent(report_system_error(status = list(status = 0))) - - expect_error( - report_system_error( - "this is unacceptable", - list(status = 1, stderr = "", stdout = "out") - ), - "this is unacceptable" - ) - - expect_error( - report_system_error( - "this is unacceptable", - list(status = 1, stderr = "puff", stdout = "out") - ), - "this is unacceptable.*puff" - ) -}) - -test_that("report_error", { - with_mock( - `httr::status_code` = function(response) 200, - expect_silent(report_error("dummy")) - ) - with_mock( - 
`httr::status_code` = function(response) 404, - expect_error(report_error("dummy")) - ) -}) - -test_that("create_condition", { - with_mock( - `httr::content` = function(...) list(message = "not at all"), - cond <- create_condition("not good", call = sys.call()) - ) - expect_true("rhub_error" %in% class(cond)) -}) diff --git a/tests/testthat/test-platforms.R b/tests/testthat/test-platforms.R deleted file mode 100644 index 4a8f69a..0000000 --- a/tests/testthat/test-platforms.R +++ /dev/null @@ -1,171 +0,0 @@ - -context("platforms") - -json <- '[ - { - "name": "debian-gcc-devel", - "description": "Debian Linux, R-devel, GCC", - "cran-name": "r-devel-linux-x86_64-debian-gcc", - "rversion": "r-devel", - "os-type": "Linux", - "cpu-type": "x86_64", - "os-info": "Debian GNU/Linux testing", - "compilers": "GCC 5.4.0 (Debian 5.4.0-4)", - "docker-image": "debian-gcc-devel", - "sysreqs-platform": "linux-x86_64-debian-gcc" - }, - - { - "name": "debian-gcc-release", - "description": "Debian Linux, R-release, GCC", - "cran-name": "r-release-linux-x86_64", - "rversion": "r-release", - "os-type": "Linux", - "cpu-type": "x86_64", - "os-info": "Debian GNU/Linux testing", - "compilers": "GCC 5.3.1 (Debian 5.3.1-14)", - "docker-image": "debian-gcc-release", - "sysreqs-platform": "linux-x86_64-debian-gcc" - }, - - { - "name": "debian-gcc-patched", - "description": "Debian Linux, R-patched, GCC", - "cran-name": "r-patched-linux-x86_64", - "rversion": "r-patched", - "os-type": "Linux", - "cpu-type": "x86_64", - "os-info": "Debian GNU/Linux testing", - "compilers": "GCC 5.4.0 (Debian 5.4.0-4)", - "docker-image": "debian-gcc-patched", - "sysreqs-platform": "linux-x86_64-debian-gcc" - }, - - { - "name": "fedora-gcc-devel", - "description": "Fedora Linux, R-devel, GCC", - "cran-name": "r-devel-linux-x86_64-fedora-gcc", - "rversion": "r-devel", - "os-type": "Linux", - "cpu-type": "x86_64", - "os-info": "Fedora 24", - "compilers": "GCC 6.1.1", - "docker-image": "fedora-gcc-devel", - 
"sysreqs-platform": "linux-x86_64-fedora-gcc" - }, - - { - "name": "fedora-clang-devel", - "description": "Fedora Linux, R-devel, clang, gfortran", - "cran-name": "r-devel-linux-x86_64-fedora-clang", - "rversion": "r-devel", - "os-type": "Linux", - "cpu-type": "x86_64", - "os-info": "Fedora 24", - "compilers": "clang version 3.8.0; GNU Fortran 6.1.1", - "docker-image": "fedora-clang-devel", - "sysreqs-platform": "linux-x86_64-fedora-clang" - }, - - { - "name": "ubuntu-gcc-devel", - "description": "Ubuntu Linux 16.04 LTS, R-devel, GCC", - "cran-name": null, - "rversion": "r-devel", - "os-type": "Linux", - "cpu-type": "x86_64", - "os-info": "Ubuntu 16.04 LTS", - "compilers": "GCC 5.3.1", - "docker-image": "ubuntu-gcc-devel", - "sysreqs-platform": "linux-x86_64-ubuntu-gcc" - }, - - { - "name": "ubuntu-gcc-release", - "description": "Ubuntu Linux 16.04 LTS, R-release, GCC", - "cran-name": null, - "rversion": "r-release", - "os-type": "Linux", - "cpu-type": "x86_64", - "os-info": "Ubuntu 16.04 LTS", - "compilers": "GCC 5.3.1", - "docker-image": "ubuntu-gcc-release", - "sysreqs-platform": "linux-x86_64-ubuntu-gcc" - }, - - { - "name": "linux-x86_64-centos6-epel", - "description": "CentOS 6, stock R from EPEL", - "cran-name": null, - "rversion": "r-release", - "os-type": "Linux", - "cpu-type": "x86_64", - "os-info": "CentOS 6", - "compilers": "GCC 4.4.x", - "docker-image": "centos6-epel", - "sysreqs-platform": "linux-x86_64-centos6-epel" - }, - - { - "name": "linux-x86_64-centos6-epel-rdt", - "description": "CentOS 6 with Redhat Developer Toolset, R from EPEL", - "cran-name": null, - "rversion": "r-release", - "os-type": "Linux", - "cpu-type": "x86_64", - "os-info": "CentOS 6", - "compilers": "GCC 5.2.1", - "docker-image": "centos6-epel-rdt", - "sysreqs-platform": "linux-x86_64-centos6-epel" - }, - - { - "name": "linux-x86_64-rocker-gcc-san", - "description": "Debian Linux, R-devel, GCC ASAN/UBSAN", - "cran-name": null, - "rversion": "r-devel", - "os-type": "Linux", - 
"cpu-type": "x86_64", - "os-info": "Debian GNU/Linux testing", - "compilers": "GCC 5.4.0 (Debian 5.4.0-4)", - "docker-image": "rocker-gcc-san", - "sysreqs-platform": "linux-x86_64-debian-gcc" - }, - - { - "name": "windows-x86_64-oldrel", - "description": "Windows Server 2008 R2 SP1, R-oldrel, 64 bit", - "cran-name": null, - "rversion": "r-oldrel", - "os-type": "Windows", - "cpu-type": "x86_64", - "os-info": "Windows Server 2008 R2 SP1", - "compilers": "GCC 4.6.3, Rtools 3.3", - "docker-image": null, - "sysreqs-platform": "windows-2008" - } -]' - -test_that("platforms", { - - with_mock( - `rhub::query` = function(...) json, - { - expect_silent(p <- platforms()) - expect_true("rhub_platforms" %in% class(p)) - expect_true("data.frame" %in% class(p)) - expect_equal(nrow(p), 11) - } - ) -}) - -test_that("print.rhub_platforms", { - - with_mock( - `rhub::query` = function(...) json, - expect_output( - print(platforms()), - "debian-gcc-patched:.*Debian Linux, R-patched, GCC" - ) - ) -}) diff --git a/tests/testthat/test-print.R b/tests/testthat/test-print.R deleted file mode 100644 index 02aa70e..0000000 --- a/tests/testthat/test-print.R +++ /dev/null @@ -1,9 +0,0 @@ - -context("print") - -test_that("header_line", { - expect_output( - header_line("title"), - "title" - ) -}) diff --git a/tests/testthat/test-submit.R b/tests/testthat/test-submit.R deleted file mode 100644 index 408005a..0000000 --- a/tests/testthat/test-submit.R +++ /dev/null @@ -1,38 +0,0 @@ - -context("submit") - -test_that("submit_package", { - pkg <- create_minimal_package() - pkg_targz <- build_package(pkg, tempfile()) - - args <- NULL - sp <- with_mock( - `rhub::header_line` = function(...) { }, - `rhub::query` = function(...) args <<- list(...), - `rhub::email_get_token` = function(...) 
"token", - submit_package("e@d", pkg_targz, "platform", c("arg1", "arg2"), - c("env" = "var")) - ) - - expect_identical(args[[1]], "SUBMIT PACKAGE") - - expect_identical( - names(args[[2]]), - c("email", "token", "package", "version", "platform", "env", - "check_args", "file") - ) - - expect_identical(args[[2]]$email, jsonlite::unbox("e@d")) - expect_identical(args[[2]]$token, jsonlite::unbox("token")) - expect_identical(args[[2]]$package, jsonlite::unbox(basename(pkg))) - expect_identical(args[[2]]$version, jsonlite::unbox("1.0.0")) - expect_identical(args[[2]]$platform, "platform") - expect_identical(args[[2]]$check_args, jsonlite::unbox("arg1 arg2")) - - ## Must be a base64 string - expect_match( - gsub("\\s+", "", args[[2]]$file), - "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$", - perl = TRUE - ) -}) diff --git a/tests/testthat/test-utils.R b/tests/testthat/test-utils.R deleted file mode 100644 index 982c264..0000000 --- a/tests/testthat/test-utils.R +++ /dev/null @@ -1,66 +0,0 @@ - -context("utils") - -test_that("update", { - expect_equal( - update(list(a = 10, b = 20), list(a = 100, c = 5)), - list(a = 100, b = 20, c = 5) - ) - - expect_equal( - update(list(), list(a = 1, b = 2)), - list(a = 1, b = 2) - ) - - expect_equal( - update(list(a = 1, b = 2), list()), - list(a = 1, b = 2) - ) -}) - -test_that("parse_email", { - expect_equal( - parse_email("first second "), - "mail@foo.com" - ) - - expect_equal( - parse_email("the is no email here"), - NA_character_ - ) - - expect_equal( - parse_email(""), - "just-email@foo.com" - ) -}) - -test_that("get_maintainer_email", { - pkg <- create_minimal_package() - targz <- build_package(pkg, tempfile()) - - expect_equal(get_maintainer_email(pkg), "first.second@foo.bar") - expect_equal(get_maintainer_email(targz), "first.second@foo.bar") - - file.remove(file.path(pkg, "DESCRIPTION")) - tar(targz <- tempfile(fileext = ".tar.gz"), pkg, tar = "internal") - expect_error( - get_maintainer_email(targz), - 
"No 'DESCRIPTION' file in package" - ) -}) - -test_that("%||%", { - expect_identical( NULL %||% NULL, NULL) - expect_identical( NULL %||% "OK", "OK") - expect_identical( "OK" %||% NULL, "OK") - expect_silent( "OK" %||% print("foobar")) -}) - -test_that("is_interactive", { - expect_identical(is_interactive(), interactive()) -}) - -test_that("%:::%", { - expect_equal(parse_email, "rhub" %:::% "parse_email") -}) diff --git a/tests/testthat/test.R b/tests/testthat/test.R new file mode 100644 index 0000000..64b4fee --- /dev/null +++ b/tests/testthat/test.R @@ -0,0 +1,3 @@ +test_that("Placeholder", { + expect_true(TRUE) +}) diff --git a/vignettes/.gitignore b/vignettes/.gitignore deleted file mode 100644 index 097b241..0000000 --- a/vignettes/.gitignore +++ /dev/null @@ -1,2 +0,0 @@ -*.html -*.R diff --git a/vignettes/figures/check-output.gif b/vignettes/figures/check-output.gif deleted file mode 100644 index a0f3588..0000000 Binary files a/vignettes/figures/check-output.gif and /dev/null differ diff --git a/vignettes/figures/email-validation.png b/vignettes/figures/email-validation.png deleted file mode 100644 index 1ecb245..0000000 Binary files a/vignettes/figures/email-validation.png and /dev/null differ diff --git a/vignettes/local-debugging.Rmd b/vignettes/local-debugging.Rmd deleted file mode 100644 index 154910d..0000000 --- a/vignettes/local-debugging.Rmd +++ /dev/null @@ -1,140 +0,0 @@ ---- -title: "Local Linux checks with Docker" -output: rmarkdown::html_vignette -vignette: > - %\VignetteIndexEntry{Local Linux checks with Docker} - %\VignetteEngine{knitr::rmarkdown} - %\VignetteEncoding{UTF-8} ---- - -```{r, include = FALSE} -knitr::opts_chunk$set( - collapse = TRUE, - comment = "#>" -) -``` - -```{r setup} -library(rhub) -``` - -## Introduction - -Scenario: there's a bug in the check results of your package on a CRAN -Linux platform, or you saw such a bug even before CRAN submission, by -building your package on a R-hub Linux platform. 
How can you reproduce and
-fix the bug? Submitting to the R-hub platform
-([or the R-hub platform that's closest to the CRAN platform](https://docs.r-hub.io/#rhub-cran-platforms))
-after each tweak of your code would have a slow turnaround, so it is not
-optimal for debugging. R-hub's Linux Docker images are available for you to
-use, so you can run the R-hub Linux builders locally.
-
-**Warning: at the moment, the functions are not tested on Windows! Bug
-reports are welcome :-)**
-
-## Install and get to know Docker
-
-To be able to use this feature, you will need to install Docker. Please
-refer to the [Docker docs](https://docs.docker.com/install/). On Windows,
-installation might be trickier; check that your machine
-[meets the system requirements](https://docs.docker.com/docker-for-windows/install/#what-to-know-before-you-install). On
-Linux, make sure to
-[run the post-installation steps](https://docs.docker.com/install/linux/linux-postinstall/)
-to make the `docker` command available to your user without the `sudo`
-prefix.
-
-If you are new to Docker, you don't need to learn anything for the basic
-use shown in the next two sections: you won't have to leave R. Nonetheless,
-if you're curious, this tutorial
-[features a nice introduction](https://jsta.github.io/r-docker-tutorial/01-what-and-why.html).
-Also see [this blog post](https://colinfay.me/docker-r-reproducibility/)
-and the list of resources it shows at the end.
-
-## List R-hub Linux images
-
-Each of the R-hub Linux platforms is associated with a Docker image, whose
-Dockerfile is stored in the
-[r-hub/rhub-linux-builders repository](https://github.com/r-hub/rhub-linux-builders#rhub-linux-builders),
-and which is built and available on Docker Hub.
Note: if you're used to
-using Docker images outside of R, you might prefer to refer directly to the
-information in the
-[R-hub Linux Docker images GitHub repository](https://github.com/r-hub/rhub-linux-builders#rhub-linux-builders)
-(including links to the built images on Docker Hub). The advantage of using
-the rhub package instead of Docker directly is that the package will
-install the system requirements properly.
-
-To list the available images from R, you can use the
-`local_check_linux_images()` function, which returns a `data.frame` and has
-a pretty default print method.
-
-```{r list}
-imgs <- local_check_linux_images()
-imgs
-knitr::kable(imgs, row.names = FALSE)
-```
-
-Of particular interest are
-
-* the `cranname` column, if you're trying to find an equivalent to a CRAN
-  platform;
-
-* the `name` column, which is the ID you should use to select that
-  platform.
-
-In theory, you could also use images that are not listed above,
-e.g. your own Docker images.
-
-## Run local checks
-
-Below we'll start a check of a package on the "rhub/debian-gcc-release"
-image (Debian Linux, R-release, GCC). The first time you use an image on
-your machine, it'll be downloaded from Docker Hub, which might take a
-while. The image won't be deleted after use, so the next time will be
-faster, until you clean up your machine's Docker images, which you should
-do once in a while (note that R-hub images are regularly updated).
- -```r -pkg_path <- "/home/maelle/Documents/R-hub/test-packages/note" -local_check_linux(pkg_path, image = "rhub/debian-gcc-release") -``` - -You can either just run the check as shown above, which will print a log to -the screen, including `R CMD check` results in the end, or assign it to an -object: - -```r -pkg_path <- "/home/maelle/Documents/R-hub/test-packages/note" -chk <- local_check_linux(pkg_path, image = "rhub/debian-gcc-release") -``` - -The object returned is of the class `rcmdcheck::rcmdcheck` which is an S3 -object with fields `errors`, `warnings` and `notes` (character vectors), -that you could operate on if you wish. - -The `local_check_linux()` function creates a container (instance of the -image) that won't be deleted after use so you might want to clean up once -in a while. - -## Do more with R-hub Linux images - -If running checks in images iteratively isn't enough for your debugging, -you might want to run the container created by `local_check_linux()`. Take -note of the container name and run (in a shell, not in R) - -``` -docker container start 7181196d-bc3c-4fc8-a0e8-dc511150335d-2 -docker exec -it 7181196d-bc3c-4fc8-a0e8-dc511150335d-2 bash -``` - -where `7181196d-bc3c-4fc8-a0e8-dc511150335d-2` is the container name, this -is printed out by `local_check_linux()`. After running these commands, you -will get a shell within the Docker container, where you can run R. Note -that on some containers R is installed in `/opt/`. - -For more information, you may want to look at the shell script that `rhub` -uses to set up the container for running the check. To find it, run the -code below. 
- -```{r} -system.file(package = "rhub", "bin", "rhub-linux-docker.sh") -``` diff --git a/vignettes/rhub.Rmd b/vignettes/rhub.Rmd deleted file mode 100644 index 1612937..0000000 --- a/vignettes/rhub.Rmd +++ /dev/null @@ -1,362 +0,0 @@ ---- -title: "Get started with `rhub`" -output: rmarkdown::html_vignette -vignette: > - %\VignetteIndexEntry{get-started} - %\VignetteEngine{knitr::rmarkdown} - %\VignetteEncoding{UTF-8} ---- - -```{r, include = FALSE} -knitr::opts_chunk$set( - collapse = TRUE, - comment = "#>" -) -``` - -In this article you'll learn how to use `rhub` to connect to the R-hub -builder API to start new checks and get the results and artifacts of recent -checks. - -```{r} -library(rhub) -``` - - -## Validate your email address - -To build and check packages, first you need to validate your email address with -`validate_email()`. - -![](figures/email-validation.png) - -The package tries to detect your email address using [`whoami`](https://github.com/r-lib/whoami#whoami) -(note that `whoami` does so using your global git config) -and the maintainer email listed in DESCRIPTION; and if it fails to do this -correctly, you'll need to specify it. This means that if running `validate_email()` -gives an error, you should either run `validate_email("youremail@example.com")` ( -quickest fix), or edit your global git config (less quick, but useful for -package development in general; to set it up smoothly -[refer to this `usethis` helper](https://usethis.r-lib.org/articles/articles/usethis-setup.html#configure-user-name-and-user-emails)). - -`rhub` stores the token permanently on the machine, so you do not need -to validate your email again. You can also copy your token to other -machines: run `list_validated_emails()` to get the token, and use the -`validate_email()` function on the new machine, using both the `email` and - `token` arguments. - -Currently you cannot use the token obtained via this function in the Web -UI of the R-hub builder. 
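
The token-copying workflow described above can be sketched as follows. This is a minimal sketch: the email address and token shown are placeholders, not real credentials.

```r
library(rhub)

# On a machine where the address is already validated,
# look up the stored token:
list_validated_emails()

# On the new machine, reuse that token instead of validating again:
validate_email(email = "bugs.bunny@acme.com", token = "tokenxxx")
```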
-
-## Run a package check
-
-`check()` runs `R CMD check` on the package in the specified directory,
-or on a specified source R package tarball created by `R CMD build` or
-`devtools::build()`. It defaults to the working directory.
-
-```r
-check()
-```
-
-If the `platform` argument is `NULL` and the R session is interactive, a
-menu is shown. If it is `NULL` and the session is not interactive, the
-default R-hub platform `platforms()$name[1]`, i.e. `r platforms()$name[1]`,
-is used.
-
-In interactive R sessions, the output of the check is printed to the screen
-unless you set the `show_status` argument to `FALSE`. Therefore, by
-default, your R session is busy showing the log until the check is
-finished, so set `show_status` to `FALSE` if you want to submit a check and
-then go on with your work in the same session.
-
-In all cases, you will receive a notification email with the results after
-the check.
-
-![recording of a check on a screen](figures/check-output.gif)
-
-You can either just run the function, or assign its output to an object
-that you can use to print results to the screen, to browse the web page of
-the check, and to retrieve URLs to the web page of the check and to its
-artifacts, which are kept for a few days.
-
-```r
-mycheck <- check()
-mycheck$browse()
-mycheck$print()
-mycheck$livelog()
-mycheck$urls()
-```
-
-To retrieve such objects from previous checks, see [the corresponding section](#browse-previous-checks).
-
-## Select a building and checking architecture
-
-You can run checks on any platform. Use the platform ids (e.g.
-`"debian-gcc-devel"` or `c("debian-gcc-devel", "debian-gcc-patched")`) to
-select between platforms. You should not run checks on all platforms at
-once. For example, when preparing a CRAN submission, use the shortcut
-function `check_for_cran()`, which will submit your package to a few
-recommended platforms. The following subsections give more detail on how to
-select platforms.
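
As a minimal sketch of selecting platforms by id (the `platform` argument name follows the text above; the platform ids are taken from `platforms()$name`, and checking the working directory is just an example):

```r
library(rhub)

# Check the package in the current directory on two specific platforms;
# show_status = FALSE returns immediately instead of streaming the log.
chk <- check(
  path = ".",
  platform = c("debian-gcc-devel", "debian-gcc-patched"),
  show_status = FALSE
)

# Later, retrieve links to the check web pages and artifacts:
chk$urls()
```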
-
-### Listing R-hub platforms
-
-If you are looking for a platform with particular characteristics, in
-particular to reproduce a result from CRAN's own platforms, have a look at
-the R-hub platform characteristics.
-
-```{r platforms-info}
-knitr::kable(platforms(), row.names = FALSE)
-```
-
-### Shortcuts for quick checks
-
-These are quick shortcuts that select the right platform(s):
-
-* `check_on_linux()` and `check_on_windows()` select the operating system.
-* `check_on_debian()`, `check_on_ubuntu()`, `check_on_fedora()` and
-  `check_on_centos()` select an appropriate Linux platform.
-* `check_on_solaris()` also selects an operating system, Solaris, and by
-  default checks neither vignettes nor manual
-  (`--no-manual --no-build-vignettes`).
-* `check_with_roldrel()`, `check_with_rrelease()`, `check_with_rpatched()`
-  and `check_with_rdevel()` select an R version.
-* `check_for_cran()` runs checks on the platforms that are closest to the
-  platforms used by CRAN on submission:
-    * Fedora Linux, R-devel, clang, gfortran,
-    * Ubuntu Linux 16.04 LTS, R-release, GCC,
-    * Windows Server 2008 R2 SP1, R-devel, 32/64 bit,
-    * and, if your package needs compilation, Debian Linux, R-devel,
-      GCC ASAN/UBSAN.
-  You can run `rhub:::default_cran_check_platforms()` to find out which
-  platforms will be selected for your package.
-* `check_with_valgrind()` runs the build and check on Linux, in `valgrind`,
-  to find memory leaks and pointer errors.
-* `check_with_sanitizers()` runs all package tests, examples and
-  vignettes with Address Sanitizer and Undefined Behavior Sanitizer, see
-  below.
-
-### Sanitizers for compiled code
-
-R-hub provides a special Docker image to run Address Sanitizer (ASAN) and
-Undefined Behavior Sanitizer (UBSAN). This is based on the `r-devel-san`
-image of the [Rocker project](https://github.com/rocker-org/r-devel-san).
-
-This image does not run `R CMD check` at all, but it runs
-
-1. package tests,
-2. all manual examples, and
-3.
all vignette code
-
-with ASAN and UBSAN enabled. Use it via `check_with_sanitizers()`.
-
-## Browse previous checks
-
-Once a check is finished (or has failed), you will get a notification
-email, but you can also get information from R. E.g. you could submit a
-check for a package located at ``, turn off your computer, and retrieve the
-results on the following days via
-
-```r
-previous_checks <- rhub::list_package_checks(,
-  email = "maelle.salmon@yahoo.se",
-  howmany = 4)
-previous_checks
-```
-
-```r
-# A tibble: 10 x 13
-  package version result group id platform_name build_time