diff --git a/.DS_Store b/.DS_Store index 05e5514..55a2ed7 100644 Binary files a/.DS_Store and b/.DS_Store differ diff --git a/.Rbuildignore b/.Rbuildignore index 4047181..51bb38c 100644 --- a/.Rbuildignore +++ b/.Rbuildignore @@ -7,6 +7,7 @@ ^codecov\.yml$ ^_config\.yml$ ^docs$ +^examples$ ^_pkgdown\.yml$ ^README\.Rmd$ ^README-.*\.png$ @@ -19,3 +20,9 @@ ^\.RData$ ^\.Rhistory$ ^\.circleci$ +^revdep$ +^Makefile$ +^\.covrignore$ +^LICENSE\.md$ +^usethis\.R$ + diff --git a/.covrignore b/.covrignore new file mode 100644 index 0000000..a27b3bf --- /dev/null +++ b/.covrignore @@ -0,0 +1,2 @@ +R/deprec-*.R +R/compat-*.R diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md new file mode 100644 index 0000000..020d91f --- /dev/null +++ b/.github/CONTRIBUTING.md @@ -0,0 +1,48 @@ +# Contributing to concurve + +This outlines how to propose a change to concurve. For more detailed +info about contributing to this, and other tidyverse packages, please see the +[**development contributing guide**](https://rstd.io/tidy-contrib). + +### Fixing typos + +Small typos or grammatical errors in documentation may be edited directly using +the GitHub web interface, so long as the changes are made in the _source_ file. + +* YES: you edit a roxygen comment in a `.R` file below `R/`. +* NO: you edit an `.Rd` file below `man/`. + +### Prerequisites + +Before you make a substantial pull request, you should always file an issue and +make sure someone from the team agrees that it’s a problem. If you’ve found a +bug, create an associated issue and illustrate the bug with a minimal +[reprex](https://www.tidyverse.org/help/#reprex). + +### Pull request process + +* We recommend that you create a Git branch for each pull request (PR). +* Look at the Travis and AppVeyor build status before and after making changes. +The `README` should contain badges for any continuous integration services used +by the package. +* New code should follow the tidyverse [style guide](https://style.tidyverse.org). 
+You can use the [styler](https://CRAN.R-project.org/package=styler) package to +apply these styles, but please don't restyle code that has nothing to do with +your PR. +* We use [roxygen2](https://cran.r-project.org/package=roxygen2), with +[Markdown syntax](https://cran.r-project.org/web/packages/roxygen2/vignettes/markdown.html), +for documentation. +* We use [testthat](https://cran.r-project.org/package=testthat). Contributions +with test cases included are easier to accept. +* For user-facing changes, add a bullet to the top of `NEWS.md` below the +current development version header describing the changes made followed by your +GitHub username, and links to relevant issue(s)/PR(s). + +### Code of Conduct + +Please note that the concurve project is released with a +[Contributor Code of Conduct](CODE_OF_CONDUCT.md). By contributing to this +project you agree to abide by its terms. + +### See tidyverse [development contributing guide](https://rstd.io/tidy-contrib) +for further details. diff --git a/.github/ISSUE_TEMPLATE.md b/.github/ISSUE_TEMPLATE.md new file mode 100644 index 0000000..84f5c43 --- /dev/null +++ b/.github/ISSUE_TEMPLATE.md @@ -0,0 +1,11 @@ +Please briefly describe your problem and what output you expect. If you have a question, please don't use this form. Instead, ask on or . + +Please include a minimal reproducible example (AKA a reprex). If you've never heard of a [reprex](https://reprex.tidyverse.org/) before, start by reading . + +--- + +Brief description of the problem + +```r +# insert reprex here +``` diff --git a/.github/SUPPORT.md b/.github/SUPPORT.md new file mode 100644 index 0000000..1c02b30 --- /dev/null +++ b/.github/SUPPORT.md @@ -0,0 +1,35 @@ +# Getting help with concurve + +Thanks for using concurve. Before filing an issue, there are a few places +to explore and pieces to put together to make the process as smooth as possible. 
+ +Start by making a minimal **repr**oducible **ex**ample using the +[reprex](https://reprex.tidyverse.org/) package. If you haven't heard of or used +reprex before, you're in for a treat! Seriously, reprex will make all of your +R-question-asking endeavors easier (which is a pretty insane ROI for the five to +ten minutes it'll take you to learn what it's all about). For additional reprex +pointers, check out the [Get help!](https://www.tidyverse.org/help/) section of +the tidyverse site. + +Armed with your reprex, the next step is to figure out [where to ask](https://www.tidyverse.org/help/#where-to-ask). + + * If it's a question: start with [community.rstudio.com](https://community.rstudio.com/), + and/or StackOverflow. There are more people there to answer questions. + * If it's a bug: you're in the right place, file an issue. + * If you're not sure: let the community help you figure it out! If your + problem _is_ a bug or a feature request, you can easily return here and + report it. + +Before opening a new issue, be sure to [search issues and pull requests](https://github.com/tidyverse/concurve/issues) to make sure the +bug hasn't been reported and/or already fixed in the development version. By +default, the search will be pre-populated with `is:issue is:open`. You can +[edit the qualifiers](https://help.github.com/articles/searching-issues-and-pull-requests/) +(e.g. `is:pr`, `is:closed`) as needed. For example, you'd simply +remove `is:open` to search _all_ issues in the repo, open or closed. + + +If you _are_ in the right place, and need to file an issue, please review the +["File issues"](https://www.tidyverse.org/contribute/#issues) paragraph from +the tidyverse contributing guidelines. + +Thanks for your help! 
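A minimal self-contained example of the kind reprex expects can be just a few lines of base R (an illustrative sketch; the simulated `GroupA`/`GroupB` data and the `lm()` model are hypothetical stand-ins for whatever triggers your question):

```r
# A self-contained snippet others can paste and run: simulate the data,
# fit the model, and show the output that illustrates the question.
set.seed(123)
GroupA <- rnorm(50)
GroupB <- rnorm(50)
fit <- lm(GroupA ~ GroupB)
confint(fit, level = 0.95)["GroupB", ]
```

Wrapping a snippet like this in `reprex::reprex({ ... })` renders it together with its output, ready to paste into an issue.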
diff --git a/.gitignore b/.gitignore index 5b6a065..c833a2c 100644 --- a/.gitignore +++ b/.gitignore @@ -2,3 +2,4 @@ .Rhistory .RData .Ruserdata +inst/doc diff --git a/.travis.yml b/.travis.yml index 3a41e43..5ac18d0 100644 --- a/.travis.yml +++ b/.travis.yml @@ -1,12 +1,15 @@ # R for travis: see documentation at https://docs.travis-ci.com/user/languages/r language: R -sudo: false cache: packages -r_packages: - - covr - -after_success: - - Rscript -e 'library(covr); codecov()' - - Rscript -e 'covr::coveralls()' +matrix: + include: + - r: devel + - r: release + after_success: + - Rscript -e 'covr::codecov()' + - r: oldrel + - r: 3.4 + - r: 3.3 + - r: 3.2 diff --git a/DESCRIPTION b/DESCRIPTION index 5f92571..72a87fa 100644 --- a/DESCRIPTION +++ b/DESCRIPTION @@ -1,35 +1,77 @@ -Package: concurve Type: Package -Date: 2019-09-18 -Title: Computes and Plots Consonance (Confidence) Intervals, P-Values, and S-Values to Form Consonance and Surprisal Functions -Version: 2.1.0 -Authors@R: c( - person("Zad R.", "Chow", , "zad@lesslikely.com", role = c("aut", "cre"), - comment = c(ORCID = "0000-0003-1545-8199") - ), - person("Andrew D.", "Vigotsky", role = "aut", - comment = c(ORCID = "0000-0003-3166-0688") - ) - ) +Package: concurve +Title: Computes and Plots Compatibility (Confidence) Intervals, + P-Values, S-Values, & Likelihood Intervals to Form Consonance, + Surprisal, & Likelihood Functions +Version: 2.3.0 +Date: 2019-12-04 +Authors@R: + c(person(given = "Zad R.", + family = "Chow", + role = c("aut", "cre"), + email = "zad@lesslikely.com", + comment = c(ORCID = "0000-0003-1545-8199")), + person(given = "Andrew D.", + family = "Vigotsky", + role = "aut", + comment = c(ORCID = "0000-0003-3166-0688"))) Maintainer: Zad R. Chow -Description: Allows one to compute consonance (confidence) intervals for various statistical tests along with their corresponding P-values and S-values. 
The intervals can be plotted to create consonance and surprisal functions allowing one to see what effect sizes are compatible with the test model at various consonance levels rather than being limited to one interval estimate such as 95%. These methods are discussed by Poole C. (1987) , Schweder T, Hjort NL. (2002) , Singh K, Xie M, Strawderman WE. (2007) , Rothman KJ, Greenland S, Lash TL. (2008, ISBN:9781451190052), Amrhein V, Trafimow D, Greenland S. (2019) , Greenland S. (2019) , Chow ZR, Greenland S. (2019) , and Greenland S, Chow ZR. (2019) . -Imports: parallel, - ggplot2, - metafor, - dplyr, - tibble, - survival, - survminer, - scales +Description: Allows one to compute consonance (confidence) + intervals for various statistical tests along with their corresponding + P-values, S-values, and likelihoods. The intervals can be plotted to + create consonance, surprisal, and likelihood functions allowing one to + see what effect sizes are compatible with the test model at various + consonance levels rather than being limited to one interval estimate + such as 95%. These methods are discussed by Poole C. (1987) + , Schweder T, Hjort NL. (2002) + , Singh K, Xie M, Strawderman WE. (2007) + , Rothman KJ, Greenland S, Lash TL. (2008, + ISBN:9781451190052), Amrhein V, Trafimow D, Greenland S. (2019) + , Greenland S. (2019) + , Chow ZR, Greenland S. (2019) + , and Greenland S, Chow ZR. (2019) + . 
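The computation the Description outlines can be sketched in a few lines of base R (an illustrative sketch, not the package's implementation; `t.test()` stands in for the many model types the package supports):

```r
# Sketch: build a consonance (confidence) function by computing the
# confidence interval at many levels, then deriving P- and S-values.
set.seed(1)
GroupA <- rnorm(50)
GroupB <- rnorm(50)
steps <- 100
lvls <- (1:(steps - 1)) / steps
ints <- t(sapply(lvls, function(l) t.test(GroupA, GroupB, conf.level = l)$conf.int))
cons <- data.frame(
  lower.limit  = ints[, 1],
  upper.limit  = ints[, 2],
  intrvl.level = lvls
)
cons$pvalue <- 1 - cons$intrvl.level  # p-value at each interval's limits
cons$svalue <- -log2(cons$pvalue)     # surprisal, in bits
head(cons)
```

Plotting `pvalue` (or `svalue`) against the interval limits traces out the consonance (or surprisal) curve instead of reporting a single 95% interval.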
+License: GPL-3 | file LICENSE +URL: https://data.lesslikely.com/concurve/, + https://github.com/zadchow/concurve, https://lesslikely.com/ +BugReports: https://github.com/zadchow/concurve/issues +Imports: + bcaboot, + boot, + compiler, + dplyr, + flextable, + ggplot2, + knitr, + metafor, + officer, + parallel, + pbmcapply, + ProfileLikelihood, + rlang (>= 0.1.2), + scales, + survival, + survminer, + tibble, + tidyr, + MASS, + methods Suggests: + covr, + roxygen2, + spelling, testthat, - knitr, - covr -License: GPL-3 | file LICENSE -URL: https://data.lesslikely.com/concurve/, https://github.com/Zadchow/concurve, https://lesslikely.com/ -BugReports: https://github.com/Zadchow/concurve/issues -VignetteBuilder: knitr + rmarkdown, + Lock5Data +VignetteBuilder: + knitr +ByteCompile: true Encoding: UTF-8 +Language: en-US LazyData: true -RoxygenNote: 6.1.1 -X-schema.org-keywords: confidence, compatibility, consonance, surprisal, interval, function, curve +Roxygen: list(markdown = TRUE) +RoxygenNote: 7.0.2 +X-schema.org-keywords: confidence, compatibility, consonance, + surprisal, interval, function, curve +Depends: + R (>= 3.2) diff --git a/Makefile b/Makefile new file mode 100644 index 0000000..d22ee17 --- /dev/null +++ b/Makefile @@ -0,0 +1,26 @@ +# h/t to @jimhester and @yihui for this parse block: +# https://github.com/yihui/knitr/blob/dc5ead7bcfc0ebd2789fe99c527c7d91afb3de4a/Makefile#L1-L4 +# Note the portability change as suggested in the manual: +# https://cran.r-project.org/doc/manuals/r-release/R-exts.html#Writing-portable-packages +PKGNAME = `sed -n "s/Package: *\([^ ]*\)/\1/p" DESCRIPTION` +PKGVERS = `sed -n "s/Version: *\([^ ]*\)/\1/p" DESCRIPTION` + + +all: check + +build: + R CMD build . 
+
+check: build
+	R CMD check --no-manual $(PKGNAME)_$(PKGVERS).tar.gz
+
+install_deps:
+	Rscript \
+	-e 'if (!requireNamespace("remotes")) install.packages("remotes")' \
+	-e 'remotes::install_deps(dependencies = TRUE)'
+
+install: install_deps build
+	R CMD INSTALL $(PKGNAME)_$(PKGVERS).tar.gz
+
+clean:
+	@rm -rf $(PKGNAME)_$(PKGVERS).tar.gz $(PKGNAME).Rcheck
diff --git a/NAMESPACE b/NAMESPACE
index f46fb24..be80c66 100644
--- a/NAMESPACE
+++ b/NAMESPACE
@@ -1,7 +1,15 @@
 exportPattern("^[[:alpha:]]+")
 import(parallel)
+import(pbmcapply)
+import(bcaboot)
+import(boot)
 import(ggplot2)
+import(ProfileLikelihood)
 import(dplyr)
+import(tidyr)
+import(flextable)
+import(officer)
+import(knitr)
 import(tibble)
 import(survival)
 import(survminer)
@@ -9,6 +17,8 @@
 import(metafor)
 import(scales)
 importFrom("stats", "coef", "confint", "confint.default", "confint.lm", "lm",
   "quantile", "t.test", "cor.test", "qnorm")
+importFrom("methods", "is")
+importFrom("stats", "approxfun", "integrate")
 importFrom("graphics", "axis", "par", "polygon", "text")
 importFrom("stats", "logLik", "model.frame", "model.matrix", "model.response", "na.fail")
diff --git a/NEWS.md b/NEWS.md
index 7edfd63..7e75748 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -1,3 +1,20 @@
+# concurve 2.3.0
+
+## Major changes
+* `ggconcurve()` is now `ggcurve()`.
+* `ggcurve()` plots confidence (consonance) distributions, densities, likelihood, and deviance functions.
+* `plot_curve()` is now deprecated. Please use `ggcurve()` instead.
+* `curve_compare()` compares two functions and calculates the area between the curves.
+* `plot_compare()` allows two separate functions to be plotted and compared simultaneously.
+* `curve_table()` produces publication-ready tables of relevant statistics.
+* `curve_boot()` uses bootstrapping to approximate the consonance functions via the [`boot`](https://cran.r-project.org/package=boot) and [`bcaboot`](https://cran.r-project.org/package=bcaboot) packages.
+* `curve_lik()` produces likelihood functions by transforming the objects from the [`ProfileLikelihood`](https://cran.r-project.org/package=ProfileLikelihood) package.
+
+## Minor changes
+
+* All functions now provide progress on how long it will take to complete the task.
+* Interval widths are now provided as measures of precision.
+
 # concurve 2.1.0
 
 ## Major changes
diff --git a/R/.DS_Store b/R/.DS_Store
index 3f04d6a..0ad4d39 100644
Binary files a/R/.DS_Store and b/R/.DS_Store differ
diff --git a/R/concurve-package.R b/R/concurve-package.R
new file mode 100644
index 0000000..b30bbda
--- /dev/null
+++ b/R/concurve-package.R
@@ -0,0 +1,8 @@
+#' @keywords internal
+"_PACKAGE"
+
+# The following block is used by usethis to automatically manage
+# roxygen namespace tags. Modify with care!
+## usethis namespace: start
+## usethis namespace: end
+NULL
diff --git a/R/curve_boot.R b/R/curve_boot.R
new file mode 100644
index 0000000..afdbe97
--- /dev/null
+++ b/R/curve_boot.R
@@ -0,0 +1,175 @@
+#' Generate Consonance Functions via Bootstrapping
+#'
+#' Use the BCa bootstrap method and the t bootstrap method from the bcaboot and boot packages
+#' to generate consonance distributions.
+#'
+#' @param data Dataset that is being used to create a consonance function.
+#' @param func Custom function that is used to create parameters of interest that
+#' will be bootstrapped.
+#' @param method The bootstrap method that will be used to generate the functions.
+#' Methods include "bca", which is the default, and "t".
+#' @param replicates Indicates how many bootstrap replicates are to be performed.
+#' The default is currently 2000, but more may be desirable, especially to make
+#' the functions smoother.
+#' @param steps Indicates how many consonance intervals are to be calculated at
+#' various levels. For example, setting this to 100 will produce 100 consonance
+#' intervals from 0 to 100. Setting this to 10000 will produce more consonance
+#' levels.
By default, it is set to 1000. Increasing the number substantially +#' is not recommended as it will take longer to produce all the intervals and +#' store them into a dataframe. +#' @param table Indicates whether or not a table output with some relevant +#' statistics should be generated. The default is TRUE and generates a table +#' which is included in the list object. +#' +#' @return A list with the dataframe of values in the first list and the table +#' in the second if table = TRUE. +#' +#' @examples +#' +#' \donttest{ +#' data(diabetes, package = "bcaboot") +#' Xy <- cbind(diabetes$x, diabetes$y) +#' rfun <- function(Xy) { +#' y <- Xy[, 11] +#' X <- Xy[, 1:10] +#' return(summary(lm(y ~ X))$adj.r.squared) +#' } +#' +#' x <- curve_boot(data = Xy, func = rfun, method = "bca", replicates = 200, steps = 1000) +#' +#' ggcurve(data = x[[1]]) +#' ggcurve(data = x[[3]]) +#' +#' plot_compare(x[[1]], x[[3]]) +#' } +#' +curve_boot <- function(data = data, func = func, method = "bca", replicates = 2000, steps = 1000, table = TRUE) { + + + # BCA Bootstrap Method --------------------------------------------------- + + if (method == "bca") { + intrvls <- 0.5 / steps + alpha <- seq(0.00, 0.50, intrvls) + result <- bcajack(x = data, B = replicates, func = func, alpha = alpha, verbose = TRUE) + + + z <- result[["lims"]] + z <- as.data.frame(z) + z <- as_tibble(rownames_to_column(z)) + colnames(z)[1] <- "alphaperc" + z$alphaperc <- as.numeric(z$alphaperc) + 1:length(alpha) + + # Data Frame with BCA Intervals ------------------------------ + + bca <- pbmclapply(1:length(alpha), FUN = function(i) c(nth(z$bca, i), nth(z$bca, -i)), mc.cores = getOption("mc.cores", 1L)) + bcaintervals <- data.frame(do.call(rbind, bca)) + intrvl.limit <- c("lower.limit", "upper.limit") + colnames(bcaintervals) <- intrvl.limit + news <- pbmclapply(1:length(alpha), FUN = function(i) nth(z$bca, -i) - nth(z$bca, i), mc.cores = getOption("mc.cores", 1L)) + width <- data.frame(do.call(rbind, news)) + 
colnames(width) <- "intrvl.width" + bews <- pbmclapply(1:length(alpha), FUN = function(i) nth(z$alphaperc, -i) - nth(z$alphaperc, i), mc.cores = getOption("mc.cores", 1L)) + levels <- data.frame(do.call(rbind, bews)) + colnames(levels) <- "intrvl.level" + + df_bca <- data.frame(bcaintervals$lower.limit, bcaintervals$upper.limit, levels$intrvl.level, width$intrvl.width) + df_names <- c("lower.limit", "upper.limit", "intrvl.level", "intrvl.width") + colnames(df_bca) <- df_names + df_bca$pvalue <- 1 - df_bca$intrvl.level + df_bca$svalue <- -log2(df_bca$pvalue) + df_bca$cdf <- (abs(df_bca$intrvl.level / 2)) + 0.5 + df_bca <- head(df_bca, -1) + df_bca <- df_bca[-1, ] + class(df_bca) <- c("data.frame", "concurve") + + # Data Frame with Standard Intervals ------------------------------ + + std <- pbmclapply(1:length(alpha), FUN = function(i) c(nth(z$std, i), nth(z$std, -i)), mc.cores = getOption("mc.cores", 1L)) + stdintervals <- data.frame(do.call(rbind, std)) + intrvl.limit <- c("lower.limit", "upper.limit") + colnames(stdintervals) <- intrvl.limit + news <- pbmclapply(1:length(alpha), FUN = function(i) nth(z$std, -i) - nth(z$std, i), mc.cores = getOption("mc.cores", 1L)) + width <- data.frame(do.call(rbind, news)) + colnames(width) <- "intrvl.width" + bews <- pbmclapply(1:length(alpha), FUN = function(i) nth(z$alphaperc, -i) - nth(z$alphaperc, i), mc.cores = getOption("mc.cores", 1L)) + levels <- data.frame(do.call(rbind, bews)) + colnames(levels) <- "intrvl.level" + + df_std <- data.frame(stdintervals$lower.limit, stdintervals$upper.limit, levels$intrvl.level, width$intrvl.width) + df_names <- c("lower.limit", "upper.limit", "intrvl.level", "intrvl.width") + colnames(df_std) <- df_names + df_std$pvalue <- 1 - df_std$intrvl.level + df_std$svalue <- -log2(df_std$pvalue) + df_std$cdf <- (abs(df_std$intrvl.level / 2)) + 0.5 + df_std <- head(df_std, -1) + df_std <- df_std[-1, ] + class(df_std) <- c("data.frame", "concurve") + + + # Combine Data Frames 
----------------------------------------------------- + + if (table == TRUE) { + levels <- c(0.25, 0.50, 0.75, 0.80, 0.85, 0.90, 0.95, 0.975, 0.99) + (bca_subintervals <- (curve_table(df_bca, levels, type = "c", format = "data.frame"))) + class(bca_subintervals) <- c("data.frame", "concurve") + (std_subintervals <- (curve_table(df_std, levels, type = "c", format = "data.frame"))) + class(std_subintervals) <- c("data.frame", "concurve") + dataframes <- list(df_std, std_subintervals, df_bca, bca_subintervals) + names(dataframes) <- c("Standard Intervals", "Standard Table", "BCA Intervals", "BCA Table") + class(dataframes) <- "concurve" + return(dataframes) + } else if (table == FALSE) { + dataframes <- list(df_std, df_bca) + names(dataframes) <- c("Standard", "BCA") + class(dataframes) <- "concurve" + return(dataframes) + } + + + # Boot Percentile Method For Density -------------------------------------- + } else if (method == "t") { + t.boot <- boot(data = data, statistic = func, R = replicates, parallel = "multicore", ncpus = getOption("mc.cores", 1L)) + + intrvls <- 1:steps / steps + + t <- pbmclapply(intrvls, + FUN = function(i) boot.ci(t.boot, conf = i, type = "perc")$perc[4:5], + mc.cores = getOption("mc.cores", 1L) + ) + + df <- data.frame(do.call(rbind, t)) + intrvl.limit <- c("lower.limit", "upper.limit") + colnames(df) <- intrvl.limit + df$intrvl.width <- (abs((df$upper.limit) - (df$lower.limit))) + df$intrvl.level <- intrvls + df$cdf <- (abs(df$intrvl.level / 2)) + 0.5 + df$pvalue <- 1 - intrvls + df$svalue <- -log2(df$pvalue) + df <- head(df, -1) + class(df) <- c("data.frame", "concurve") + + + densdf <- data.frame(c(df$lower.limit, df$upper.limit)) + colnames(densdf) <- "x" + densdf <- head(densdf, -1) + class(densdf) <- c("data.frame", "concurve") + + + if (table == TRUE) { + levels <- c(0.25, 0.50, 0.75, 0.80, 0.85, 0.90, 0.95, 0.975, 0.99) + (df_subintervals <- (curve_table(df, levels, type = "c", format = "data.frame"))) + class(df_subintervals) <- 
c("data.frame", "concurve")
+      dataframes <- list(df, densdf, df_subintervals)
+      names(dataframes) <- c("Intervals Dataframe", "Intervals Density", "Intervals Table")
+      class(dataframes) <- "concurve"
+      return(dataframes)
+    } else if (table == FALSE) {
+      return(list(df, densdf))
+    }
+  }
+}
+
+# RMD Check -----------------------------------------------------
+utils::globalVariables(c("df", "lower.limit", "upper.limit", "intrvl.width", "intrvl.level", "cdf", "pvalue", "svalue"))
diff --git a/R/curve_compare.R b/R/curve_compare.R
new file mode 100644
index 0000000..f6c703c
--- /dev/null
+++ b/R/curve_compare.R
@@ -0,0 +1,176 @@
+#' Compares two functions and produces an AUC score to show the amount of consonance.
+#'
+#' Compares two p-value/s-value or likelihood functions and computes the area
+#' under the curve (AUC) that they share.
+#'
+#' @param data1 The first dataframe produced by one of the interval functions
+#' in which the intervals are stored.
+#' @param data2 The second dataframe produced by one of the interval functions in
+#' which the intervals are stored.
+#' @param type Choose whether to plot a "consonance" function, a "surprisal" function or
+#' "likelihood". The default option is set to "c". The type must be set in quotes,
+#' for example curve_compare(type = "s") or curve_compare(type = "c"). Other options
+#' include "pd" for the consonance distribution function, "cd" for the consonance
+#' density function, "l1" for relative likelihood, "l2" for log-likelihood, "l3"
+#' for likelihood, and "d" for the deviance function.
+#' @param plot By default this is set to TRUE and will use the plot_compare() function
+#' to plot the two functions.
+#' @param ... Can be used to pass further arguments to plot_compare().
+#' +#' @examples +#' \donttest{ +#' library(concurve) +#' GroupA <- rnorm(50) +#' GroupB <- rnorm(50) +#' RandomData <- data.frame(GroupA, GroupB) +#' intervalsdf <- curve_mean(GroupA, GroupB, data = RandomData) +#' GroupA2 <- rnorm(50) +#' GroupB2 <- rnorm(50) +#' RandomData2 <- data.frame(GroupA2, GroupB2) +#' model <- lm(GroupA2 ~ GroupB2, data = RandomData2) +#' randomframe <- curve_gen(model, "GroupB2") +#' (curve_compare(intervalsdf[[1]], randomframe[[1]])) +#' (curve_compare(intervalsdf[[1]], randomframe[[1]], type = "s")) +#' } +#' +curve_compare <- function(data1, data2, type = "c", plot = TRUE, ...) { + + # Consonance Function ----------------------------------------------------- + + if (type == "c") { + if (is(data1, "concurve") != TRUE) { + stop("Error: 'data1' must be a data frame from 'concurve'.") + } + if (ncol(data1) != 7) { + stop("Error: 'x' must be a data frame from 'concurve'.") + } + if (is(data2, "concurve") != TRUE) { + stop("Error: 'data2' must be a data frame from 'concurve'.") + } + if (ncol(data2) != 7) { + stop("Error: 'x' must be a data frame from 'concurve'.") + } + if (plot == TRUE) { + plot_comparison <- (plot_compare(data1, data2, type = "c", ...)) + } else if (plot == FALSE) { + + } + + class(data1) <- "data.frame" + df1 <- pivot_longer(data1, lower.limit:upper.limit, names_to = "limit.bound", values_to = "Limit") + class(data2) <- "data.frame" + df2 <- pivot_longer(data2, lower.limit:upper.limit, names_to = "limit.bound", values_to = "Limit") + + df1 <- data.frame( + "x" = df1$Limit, + "y" = df1$pvalue + ) + df2 <- data.frame( + "x" = df2$Limit, + "y" = df2$pvalue + ) + + if (max(df1$x) < min(df2$x) || min(df1$x) > max(df2$x)) { + print("Out of Range, AUC = 0") + } else { + f0 <- approxfun(df1$x, df1$y, ties = "mean") + f1 <- approxfun(df2$x, df2$y, ties = "mean") + f <- Vectorize(function(x) { + min(f0(x), f1(x)) + }) + domain <- c( + max(min(df1$x), min(df2$x)), + min(max(df1$x), max(df2$x)) + ) + AUC_1 <- integrate(f0, 
min(df1$x), max(df1$x))$value + AUC_2 <- integrate(f1, min(df2$x), max(df2$x))$value + AUC_shared <- integrate(f, domain[1], domain[2])$value + + AUC_overlap <- (AUC_shared / (AUC_1 + AUC_2 - AUC_shared)) + AUC_ratio <- (AUC_shared / (AUC_1 + AUC_2 - 2 * AUC_shared)) + + AUC_results <- data.frame(AUC_1, AUC_2, AUC_shared, AUC_overlap, AUC_ratio) + class(AUC_results) <- c("data.frame", "concurve") + AUC_results <- round(AUC_results, digits = 3) + names <- c("AUC 1", "AUC 2", "Shared AUC", "AUC Overlap (%)", "Overlap:Non-Overlap AUC Ratio") + colnames(AUC_results) <- names + AUC_results <- knitr::kable( + AUC_results, + booktabs = TRUE + ) + print("AUC = Area Under the Curve") + return(list(AUC_results, plot_comparison)) + } + + + # Surprisal Function ------------------------------------------------------ + } else if (type == "s") { + if (is(data1, "concurve") != TRUE) { + stop("Error: 'data1' must be a data frame from 'concurve'.") + } + if (ncol(data1) != 7) { + stop("Error: 'x' must be a data frame from 'concurve'.") + } + if (is(data2, "concurve") != TRUE) { + stop("Error: 'data2' must be a data frame from 'concurve'.") + } + if (ncol(data2) != 7) { + stop("Error: 'x' must be a data frame from 'concurve'.") + } + + if (plot == TRUE) { + plot_comparison <- (plot_compare(data1, data2, type = "s", ...)) + } else if (plot == FALSE) { + + } + + class(data1) <- "data.frame" + df1 <- pivot_longer(data1, lower.limit:upper.limit, names_to = "limit.bound", values_to = "Limit") + class(data2) <- "data.frame" + df2 <- pivot_longer(data2, lower.limit:upper.limit, names_to = "limit.bound", values_to = "Limit") + + + df1 <- data.frame( + "x" = df1$Limit, + "y" = max(df1$svalue) - df1$svalue + ) + df2 <- data.frame( + "x" = df2$Limit, + "y" = max(df2$svalue) - df2$svalue + ) + + if (max(df1$x) < min(df2$x) || min(df1$x) > max(df2$x)) { + print("Out of Range, AUC = 0") + } else { + f0 <- approxfun(df1$x, df1$y, ties = "mean") + f1 <- approxfun(df2$x, df2$y, ties = "mean") + f <- 
Vectorize(function(x) {
+        min(f0(x), f1(x))
+      })
+      domain <- c(
+        max(min(df1$x), min(df2$x)),
+        min(max(df1$x), max(df2$x))
+      )
+      AUC_1 <- integrate(f0, min(df1$x), max(df1$x))$value
+      AUC_2 <- integrate(f1, min(df2$x), max(df2$x))$value
+      AUC_shared <- integrate(f, domain[1], domain[2])$value
+
+      AUC_overlap <- (AUC_shared / (AUC_1 + AUC_2 - AUC_shared))
+      AUC_ratio <- (AUC_shared / (AUC_1 + AUC_2 - 2 * AUC_shared))
+
+      AUC_results <- data.frame(AUC_1, AUC_2, AUC_shared, AUC_overlap, AUC_ratio)
+      class(AUC_results) <- c("data.frame", "concurve")
+      AUC_results <- round(AUC_results, digits = 3)
+      names <- c("AUC 1", "AUC 2", "Shared AUC", "AUC Overlap (%)", "Overlap:Non-Overlap AUC Ratio")
+      colnames(AUC_results) <- names
+      AUC_results <- knitr::kable(
+        AUC_results,
+        booktabs = TRUE
+      )
+      print("AUC = Area Under the Curve")
+      return(list(AUC_results, plot_comparison))
+    }
+  }
+}
+
+# RMD Check
+utils::globalVariables(c("df", "lower.limit", "upper.limit", "intrvl.width", "intrvl.level", "cdf", "pvalue", "svalue"))
diff --git a/R/curve_corr.R b/R/curve_corr.R
index 0646a84..f17a8b5 100644
--- a/R/curve_corr.R
+++ b/R/curve_corr.R
@@ -1,4 +1,37 @@
-curve_corr <- function(x, y, alternative, method, steps = 10000) {
+#' Computes Consonance Intervals for Correlations
+#'
+#' Computes consonance intervals to produce P- and S-value functions for
+#' correlational analyses using the cor.test function in base R and places the
+#' interval limits for each interval level into a data frame along with the
+#' corresponding p-values and s-values.
+#'
+#' @param x A vector that contains the data for one of the variables that will
+#' be analyzed for correlational analysis.
+#' @param y A vector that contains the data for one of the variables that will
+#' be analyzed for correlational analysis.
+#' @param alternative Indicates the alternative hypothesis and must be one of "two.sided",
+#' "greater" or "less". You can specify just the initial letter.
"greater" corresponds to
+#' positive association, "less" to negative association.
+#' @param method A character string indicating which correlation coefficient is
+#' to be used for the test. One of "pearson", "kendall", or "spearman";
+#' can be abbreviated.
+#' @param steps Indicates how many consonance intervals are to be calculated at
+#' various levels. For example, setting this to 100 will produce 100 consonance
+#' intervals from 0 to 100. Setting this to 10000 will produce more consonance
+#' levels. By default, it is set to 10000. Increasing the number substantially
+#' is not recommended as it will take longer to produce all the intervals and
+#' store them into a dataframe.
+#' @param table Indicates whether or not a table output with some relevant
+#' statistics should be generated. The default is TRUE and generates a table
+#' which is included in the list object.
+#'
+#' @examples
+#'
+#' GroupA <- rnorm(50)
+#' GroupB <- rnorm(50)
+#' joe <- curve_corr(x = GroupA, y = GroupB, alternative = "two.sided", method = "pearson")
+#' tibble::tibble(joe[[1]])
+curve_corr <- function(x, y, alternative, method, steps = 10000, table = TRUE) {
   if (is.numeric(x) != TRUE) {
     stop("Error: 'x' must be a numeric vector")
   }
@@ -9,20 +42,42 @@
     stop("Error: 'steps' must be a numeric vector")
   }
+
   intrvls <- (0:steps) / steps
-  results <- mclapply(intrvls, FUN = function(i) cor.test(x, y,
+  results <- pbmclapply(intrvls, FUN = function(i) {
+    cor.test(x, y,
       alternative = alternative, method = method,
       exact = NULL, conf.level = i, continuity = FALSE
-    )$conf.int[])
+    )$conf.int[]
+  }, mc.cores = getOption("mc.cores", 1L))
   df <- data.frame(do.call(rbind, results))
   intrvl.limit <- c("lower.limit", "upper.limit")
   colnames(df) <- intrvl.limit
+  df$intrvl.width <- (abs((df$upper.limit) - (df$lower.limit)))
   df$intrvl.level <- intrvls
+  df$cdf <- (abs(df$intrvl.level / 2)) + 0.5
   df$pvalue <- 1 - intrvls
   df$svalue <-
-log2(df$pvalue) df <- head(df, -1) - return(df) + class(df) <- c("data.frame", "concurve") + densdf <- data.frame(c(df$lower.limit, df$upper.limit)) + colnames(densdf) <- "x" + densdf <- head(densdf, -1) + class(densdf) <- c("data.frame", "concurve") + + + if (table == TRUE) { + levels <- c(0.25, 0.50, 0.75, 0.80, 0.85, 0.90, 0.95, 0.975, 0.99) + (df_subintervals <- (curve_table(df, levels, type = "c", format = "data.frame"))) + class(df_subintervals) <- c("data.frame", "concurve") + dataframes <- list(df, densdf, df_subintervals) + names(dataframes) <- c("Intervals Dataframe", "Intervals Density", "Intervals Table") + class(dataframes) <- "concurve" + return(dataframes) + } else if (table == FALSE) { + return(list(df, densdf)) + } } # RMD Check -utils::globalVariables(c("df", "lower.limit", "upper.limit", "intrvl.level", "pvalue", "svalue")) +utils::globalVariables(c("df", "lower.limit", "upper.limit", "intrvl.width", "intrvl.level", "cdf", "pvalue", "svalue")) diff --git a/R/curve_gen.R b/R/curve_gen.R index 037b432..59a14aa 100644 --- a/R/curve_gen.R +++ b/R/curve_gen.R @@ -1,44 +1,91 @@ -# General Consonance Functions Using Profile Likelihood, Wald, or the bootstrap method for linear models. - -curve_gen <- function(model, var, method = "default", replicates = 1000, steps = 10000) { - if (is.list(model) != TRUE) { - stop("Error: 'model' must be an object with a statistical model") - } +#' General Consonance Functions Using Profile Likelihood, Wald, +#' or the bootstrap method for linear models. +#' +#' Computes thousands of consonance (confidence) intervals for +#' the chosen parameter in the selected model +#' (ANOVA, ANCOVA, regression, logistic regression) and places +#' the interval limits for each interval level into a data frame along +#' with the corresponding p-values and s-values. +#' +#' @param model The statistical model of interest +#' (ANOVA, regression, logistic regression) is to be indicated here. 
+#' @param var The variable of interest from the model (coefficients, intercept) +#' for which the intervals are to be produced. +#' @param method Chooses the method to be used to calculate the +#' consonance intervals. There are currently two methods: +#' "wald" and "glm". The "wald" method, which is the default, computes the +#' intervals from the estimate and its calculated standard error and is +#' typically what most people are familiar with; it can be used for models +#' created by the 'lm' function. The "glm" method computes profile-likelihood +#' intervals via 'confint' and allows this function to be used for specific +#' scenarios like logistic regression and models fit with the 'glm' function. +#' @param steps Indicates how many consonance intervals are to be calculated at +#' various levels. For example, setting this to 100 will produce 100 consonance +#' intervals from 0 to 100. Setting this to 10000 will produce more consonance +#' levels. By default, it is set to 1000. Increasing the number substantially +#' is not recommended as it will take longer to produce all the intervals and +#' store them into a dataframe. +#' @param table Indicates whether or not a table output with some relevant +#' statistics should be generated. The default is TRUE and generates a table +#' which is included in the list object.
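For context on the `"wald"` branch documented above: for a single coefficient, `confint.default()` reduces to estimate ± z·SE at each level. A short Python sketch of that arithmetic (`wald_interval` is a hypothetical helper, not package code):

```python
from statistics import NormalDist

def wald_interval(estimate, se, level):
    """Symmetric Wald interval at one confidence level,
    mirroring what confint.default() returns for a coefficient."""
    z = NormalDist().inv_cdf(1 - (1 - level) / 2)  # two-sided critical value
    return estimate - z * se, estimate + z * se

lo, hi = wald_interval(1.2, 0.3, 0.95)
# (0.612, 1.788), since z ≈ 1.96
```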
+#' +#' @examples +#' +#' \donttest{ +#' # Simulate random data +#' GroupA <- rnorm(50) +#' GroupB <- rnorm(50) +#' RandomData <- data.frame(GroupA, GroupB) +#' rob <- lm(GroupA ~ GroupB, data = RandomData) +#' bob <- curve_gen(rob, "GroupB") +#' tibble::tibble(bob[[1]]) +#' } +#' +curve_gen <- function(model, var, method = "wald", steps = 1000, table = TRUE) { if (is.character(method) != TRUE) { stop("Error: 'method' must be a character vector") } - if (is.numeric(replicates) != TRUE) { - stop("Error: 'replicates' must be a numeric vector") - } if (is.numeric(steps) != TRUE) { stop("Error: 'steps' must be a numeric vector") } - intrvls <- (0:steps) / steps - if (method == "default") { - results <- mclapply(intrvls, FUN = function(i) confint(object = model, level = i)[var, ]) - } else if (method == "Wald") { - results <- mclapply(intrvls, FUN = function(i) confint.default(object = model, level = i)[var, ]) - } else if (method == "lm") { - results <- mclapply(intrvls, FUN = function(i) confint.lm(object = model, level = i)[var, ]) - } else if (method == "boot") { - effect <- coef(model)[[var]] - boot_dist <- replicate(replicates, - expr = coef(lm(model$call$formula, - data = model$model[sample(nrow(model$model), replace = T), ] - ))[[var]] - ) - effect - results <- mclapply(intrvls, FUN = function(i) effect - quantile(boot_dist, probs = (1 + c(i, -i)) / 2)) + + intrvls <- (1:(steps - 1)) / steps + + if (method == "wald") { + results <- pbmclapply(intrvls, FUN = function(i) confint.default(object = model, level = i)[var, ], mc.cores = getOption("mc.cores", 1L)) + } else if (method == "glm") { + results <- pbmclapply(intrvls, FUN = function(i) confint(object = model, level = i, trace = FALSE)[var, ], mc.cores = getOption("mc.cores", 1L)) } df <- data.frame(do.call(rbind, results)) intrvl.limit <- c("lower.limit", "upper.limit") colnames(df) <- intrvl.limit + df$intrvl.width <- (abs((df$upper.limit) - (df$lower.limit))) df$intrvl.level <- intrvls + df$cdf <- 
(abs(df$intrvl.level / 2)) + 0.5 df$pvalue <- 1 - intrvls df$svalue <- -log2(df$pvalue) df <- head(df, -1) - return(df) + class(df) <- c("data.frame", "concurve") + densdf <- data.frame(c(df$lower.limit, df$upper.limit)) + colnames(densdf) <- "x" + densdf <- head(densdf, -1) + class(densdf) <- c("data.frame", "concurve") + + + if (table == TRUE) { + levels <- c(0.25, 0.50, 0.75, 0.80, 0.85, 0.90, 0.95, 0.975, 0.99) + (df_subintervals <- (curve_table(df, levels, type = "c", format = "data.frame"))) + class(df_subintervals) <- c("data.frame", "concurve") + dataframes <- list(df, densdf, df_subintervals) + names(dataframes) <- c("Intervals Dataframe", "Intervals Density", "Intervals Table") + class(dataframes) <- "concurve" + return(dataframes) + } else if (table == FALSE) { + return(list(df, densdf)) + } } # RMD Check -utils::globalVariables(c("df", "lower.limit", "upper.limit", "intrvl.level", "pvalue", "svalue")) +utils::globalVariables(c("df", "lower.limit", "upper.limit", "intrvl.width", "intrvl.level", "cdf", "pvalue", "svalue")) diff --git a/R/curve_lik.R b/R/curve_lik.R new file mode 100644 index 0000000..49a3f80 --- /dev/null +++ b/R/curve_lik.R @@ -0,0 +1,41 @@ +#' Compute the Profile Likelihood Functions +#' +#' @param likobject An object from the ProfileLikelihood package +#' @param data The dataframe that was used to create the likelihood +#' object in the ProfileLikelihood package. +#' @param table Indicates whether or not a table output with some relevant +#' statistics should be generated. The default is TRUE and generates a table +#' which is included in the list object. 
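The quantities `curve_lik()` extracts above are straightforward to reproduce from any vector of profile likelihoods. A Python sketch under the same definitions (here the input is assumed unnormalized, whereas `likobject[[3]]` from ProfileLikelihood arrives already normalized):

```python
import math

def likelihood_columns(likelihoods):
    """Normalize profile likelihoods and derive the log-support and
    the negated log-support used as the deviance-style statistic."""
    lmax = max(likelihoods)
    support = [lk / lmax for lk in likelihoods]  # relative likelihood
    loglik = [math.log(s) for s in support]      # log of normalized likelihood
    deviance = [-ll for ll in loglik]            # deviancestat in curve_lik()
    return support, loglik, deviance

support, loglik, deviance = likelihood_columns([0.2, 0.8, 0.4])
# support == [0.25, 1.0, 0.5]; the maximum-likelihood point has deviance 0
```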
+#' +#' @examples +#' +#' library(ProfileLikelihood) +#' data(dataglm) +#' xx <- profilelike.glm(y ~ x1 + x2, dataglm, profile.theta = "group", binomial("logit")) +#' lik <- curve_lik(xx, dataglm) +#' tibble::tibble(lik[[1]]) +curve_lik <- function(likobject, data, table = TRUE) { + values <- likobject[[1]] # theta values + likelihood <- likobject[[2]] # profile likelihoods + support <- likobject[[3]] # normalized profile likelihoods + loglikelihood <- log(support) # log of normalized profile likelihoods + deviancestat <- -(loglikelihood) # deviance statistic + + likfunction <- data.frame(values, likelihood, loglikelihood, support, deviancestat) + class(likfunction) <- c("data.frame", "concurve") + + + if (table == TRUE) { + levels <- c(0.03, 0.05, 0.12, 0.14) + (df_subintervals <- (curve_table(likfunction, levels, type = "l", format = "data.frame"))) + class(df_subintervals) <- c("data.frame", "concurve") + dataframes <- list(likfunction, df_subintervals) + names(dataframes) <- c("Intervals Dataframe", "Intervals Table") + class(dataframes) <- "concurve" + return(dataframes) + } else if (table == FALSE) { + return(list(likfunction)) + } +} + +utils::globalVariables(c("likfunction", "values", "likelihood", "loglikelihood", "support", "deviancestat")) diff --git a/R/curve_mean.R b/R/curve_mean.R index 2012aec..123d2bf 100644 --- a/R/curve_mean.R +++ b/R/curve_mean.R @@ -1,6 +1,41 @@ -# Mean Interval Consonance Function - -curve_mean <- function(x, y, data, paired = F, method = "default", replicates = 1000, steps = 10000) { +#' Mean Interval Consonance Function +#' +#' Computes thousands of consonance (confidence) intervals for the chosen +#' parameter in a statistical test that compares means and places the interval +#' limits for each interval level into a data frame along with the corresponding +#' p-values and s-values. +#' +#' @param x Variable that contains the data for the first group being compared. 
+#' @param y Variable that contains the data for the second group being compared. +#' @param data Data frame from which the variables are being extracted. +#' @param paired Indicates whether the statistical test is a paired difference test. +#' By default, it is set to FALSE, which means the function will run an unpaired +#' statistical test comparing two independent groups. Setting it to TRUE will +#' change the test to a paired difference test. +#' @param method By default this is turned off (set to "default"), but +#' allows for bootstrapping if "boot" is inserted into the function call. +#' @param replicates Indicates how many bootstrap replicates are to be performed. +#' The default is currently 1000, but more may be desirable, especially to make +#' the functions smoother. +#' @param steps Indicates how many consonance intervals are to be calculated at +#' various levels. For example, setting this to 100 will produce 100 consonance +#' intervals from 0 to 100. Setting this to 10000 will produce more consonance +#' levels. By default, it is set to 10000. Increasing the number substantially +#' is not recommended as it will take longer to produce all the intervals and +#' store them into a dataframe. +#' @param table Indicates whether or not a table output with some relevant +#' statistics should be generated. The default is TRUE and generates a table +#' which is included in the list object.
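The `"boot"` branch described above uses the basic (reflected) bootstrap: at level i it returns `diff - quantile(boot_dist, (1 + c(i, -i)) / 2)`. A self-contained Python sketch of that reflection (illustrative names; `basic_boot_interval` is a hypothetical helper, not package code):

```python
import random
import statistics

def basic_boot_interval(x, y, level, replicates=1000, seed=1):
    """Basic bootstrap interval for a difference in means, mirroring
    diff - quantile(boot_dist, (1 + c(i, -i)) / 2) in curve_mean()."""
    rng = random.Random(seed)
    diff = statistics.mean(x) - statistics.mean(y)
    boot = sorted(
        statistics.mean(rng.choices(x, k=len(x)))
        - statistics.mean(rng.choices(y, k=len(y)))
        - diff                      # center the replicate differences
        for _ in range(replicates)
    )
    def q(p):                       # simple empirical quantile
        return boot[min(int(p * len(boot)), len(boot) - 1)]
    # reflect the centered replicate quantiles around the observed difference
    return diff - q((1 + level) / 2), diff - q((1 - level) / 2)
```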
+#' +#' @examples +#' +#' # Simulate random data +#' GroupA <- runif(100, min = 0, max = 100) +#' GroupB <- runif(100, min = 0, max = 100) +#' RandomData <- data.frame(GroupA, GroupB) +#' bob <- curve_mean(GroupA, GroupB, RandomData) +#' tibble::tibble(bob[[1]]) +curve_mean <- function(x, y, data, paired = F, method = "default", replicates = 1000, steps = 10000, table = TRUE) { if (is.numeric(x) != TRUE) { stop("Error: 'x' must be a numeric vector") } @@ -16,9 +51,10 @@ curve_mean <- function(x, y, data, paired = F, method = "default", replicates = if (is.numeric(steps) != TRUE) { stop("Error: 'steps' must be a numeric vector") } + intrvls <- (0:steps) / steps if (method == "default") { - results <- mclapply(intrvls, FUN = function(i) t.test(x, y, data = data, paired = paired, conf.level = i)$conf.int[]) + results <- pbmclapply(intrvls, FUN = function(i) t.test(x, y, data = data, paired = paired, conf.level = i)$conf.int[], mc.cores = getOption("mc.cores", 1L)) } else if (method == "boot") { diff <- mean(x) - mean(y) if (paired) { @@ -32,17 +68,37 @@ curve_mean <- function(x, y, data, paired = F, method = "default", replicates = mean(sample(y, length(y), replace = T)) ) - diff } - results <- mclapply(intrvls, FUN = function(i) diff - quantile(boot_dist, probs = (1 + c(i, -i)) / 2)) + results <- pbmclapply(intrvls, FUN = function(i) diff - quantile(boot_dist, probs = (1 + c(i, -i)) / 2), mc.cores = getOption("mc.cores", 1L)) } df <- data.frame(do.call(rbind, results)) intrvl.limit <- c("lower.limit", "upper.limit") colnames(df) <- intrvl.limit + df$intrvl.width <- (abs((df$upper.limit) - (df$lower.limit))) df$intrvl.level <- intrvls + df$cdf <- (abs(df$intrvl.level / 2)) + 0.5 df$pvalue <- 1 - intrvls df$svalue <- -log2(df$pvalue) df <- head(df, -1) - return(df) + class(df) <- c("data.frame", "concurve") + densdf <- data.frame(c(df$lower.limit, df$upper.limit)) + colnames(densdf) <- "x" + densdf <- head(densdf, -1) + class(densdf) <- c("data.frame", "concurve") + + 
+ if (table == TRUE) { + levels <- c(0.25, 0.50, 0.75, 0.80, 0.85, 0.90, 0.95, 0.975, 0.99) + (df_subintervals <- (curve_table(df, levels, type = "c", format = "data.frame"))) + class(df_subintervals) <- c("data.frame", "concurve") + dataframes <- list(df, densdf, df_subintervals) + names(dataframes) <- c("Intervals Dataframe", "Intervals Density", "Intervals Table") + class(dataframes) <- "concurve" + return(dataframes) + } else if (table == FALSE) { + return(list(df, densdf)) + } } + # RMD Check -utils::globalVariables(c("df", "lower.limit", "upper.limit", "intrvl.level", "pvalue", "svalue")) +utils::globalVariables(c("df", "lower.limit", "upper.limit", "intrvl.width", "intrvl.level", "cdf", "pvalue", "svalue")) diff --git a/R/curve_meta.R b/R/curve_meta.R index 9fe6a2f..f560d50 100644 --- a/R/curve_meta.R +++ b/R/curve_meta.R @@ -1,6 +1,84 @@ -# Meta-analytic Consonance Function - -curve_meta <- function(x, measure = "default", steps = 10000) { +#' Meta-analytic Consonance Function +#' +#' Computes thousands of consonance (confidence) intervals for the chosen +#' parameter in the meta-analysis done by the metafor package and places the +#' interval limits for each interval level into a data frame along with the +#' corresponding p-values and s-values. +#' +#' @param x Object where the meta-analysis parameters are stored, typically a +#' list produced by 'metafor'. +#' @param measure Indicates whether the object has a log transformation or is normal/default. +#' The default setting is "default". If the measure is set to "ratio", it will take +#' logarithmically transformed values and convert them back to normal values in the dataframe. +#' This is typically a setting used for binary outcomes such as risk ratios, +#' hazard ratios, and odds ratios. +#' @param steps Indicates how many consonance intervals are to be calculated at +#' various levels. For example, setting this to 100 will produce 100 consonance +#' intervals from 0 to 100.
Setting this to 10000 will produce more consonance +#' levels. By default, it is set to 10000. Increasing the number substantially +#' is not recommended as it will take longer to produce all the intervals and +#' store them into a dataframe. +#' @param table Indicates whether or not a table output with some relevant +#' statistics should be generated. The default is TRUE and generates a table +#' which is included in the list object. +#' +#' @examples +#' +#' # Simulate random data for two groups in two studies +#' GroupAData <- runif(20, min = 0, max = 100) +#' GroupAMean <- round(mean(GroupAData), digits = 2) +#' GroupASD <- round(sd(GroupAData), digits = 2) +#' +#' GroupBData <- runif(20, min = 0, max = 100) +#' GroupBMean <- round(mean(GroupBData), digits = 2) +#' GroupBSD <- round(sd(GroupBData), digits = 2) +#' +#' GroupCData <- runif(20, min = 0, max = 100) +#' GroupCMean <- round(mean(GroupCData), digits = 2) +#' GroupCSD <- round(sd(GroupCData), digits = 2) +#' +#' GroupDData <- runif(20, min = 0, max = 100) +#' GroupDMean <- round(mean(GroupDData), digits = 2) +#' GroupDSD <- round(sd(GroupDData), digits = 2) +#' +#' # Combine the data +#' +#' StudyName <- c("Study1", "Study2") +#' MeanTreatment <- c(GroupAMean, GroupCMean) +#' MeanControl <- c(GroupBMean, GroupDMean) +#' SDTreatment <- c(GroupASD, GroupCSD) +#' SDControl <- c(GroupBSD, GroupDSD) +#' NTreatment <- c(20, 20) +#' NControl <- c(20, 20) +#' +#' metadf <- data.frame( +#' StudyName, MeanTreatment, MeanControl, +#' SDTreatment, SDControl, NTreatment, NControl +#' ) +#' +#' # Use metafor to calculate the standardized mean difference +#' +#' library(metafor) +#' +#' dat <- escalc( +#' measure = "SMD", m1i = MeanTreatment, sd1i = SDTreatment, +#' n1i = NTreatment, m2i = MeanControl, sd2i = SDControl, +#' n2i = NControl, data = metadf +#' ) +#' +#' # Pool the data using a particular method.
Here "FE" is the fixed-effects model +#' +#' res <- rma(yi, vi, +#' data = dat, slab = paste(StudyName, sep = ", "), +#' method = "FE", digits = 2 +#' ) +#' +#' # Calculate the intervals using the metainterval function +#' +#' metaf <- curve_meta(res) +#' +#' tibble::tibble(metaf[[1]]) +curve_meta <- function(x, measure = "default", steps = 10000, table = TRUE) { if (is.list(x) != TRUE) { stop("Error: 'x' must be a list from 'metafor'") } @@ -10,8 +88,9 @@ curve_meta <- function(x, measure = "default", steps = 10000) { if (is.numeric(steps) != TRUE) { stop("Error: 'steps' must be a numeric vector") } + intrvls <- (0:steps) / steps - results <- mclapply(intrvls, FUN = function(i) confint.default(object = x, fixed = TRUE, random = FALSE, level = i)[]) + results <- pbmclapply(intrvls, FUN = function(i) confint.default(object = x, fixed = TRUE, random = FALSE, level = i)[], mc.cores = getOption("mc.cores", 1L)) df <- data.frame(do.call(rbind, results)) intrvl.limit <- c("lower.limit", "upper.limit") colnames(df) <- intrvl.limit @@ -25,9 +104,27 @@ curve_meta <- function(x, measure = "default", steps = 10000) { df$lower.limit <- exp(df$lower.limit) df$upper.limit <- exp(df$upper.limit) } + df$intrvl.width <- (abs((df$upper.limit) - (df$lower.limit))) + df$cdf <- (abs(df$intrvl.level / 2)) + 0.5 df <- head(df, -1) - return(df) + class(df) <- c("data.frame", "concurve") + densdf <- data.frame(c(df$lower.limit, df$upper.limit)) + colnames(densdf) <- "x" + densdf <- head(densdf, -1) + class(densdf) <- c("data.frame", "concurve") + + if (table == TRUE) { + levels <- c(0.25, 0.50, 0.75, 0.80, 0.85, 0.90, 0.95, 0.975, 0.99) + (df_subintervals <- (curve_table(df, levels, type = "c", format = "data.frame"))) + class(df_subintervals) <- c("data.frame", "concurve") + dataframes <- list(df, densdf, df_subintervals) + names(dataframes) <- c("Intervals Dataframe", "Intervals Density", "Intervals Table") + class(dataframes) <- "concurve" + return(dataframes) + } else if (table == 
FALSE) { + return(list(df, densdf)) + } } # RMD Check -utils::globalVariables(c("df", "lower.limit", "upper.limit", "intrvl.level", "pvalue", "svalue")) +utils::globalVariables(c("df", "lower.limit", "upper.limit", "intrvl.width", "intrvl.level", "cdf", "pvalue", "svalue")) diff --git a/R/curve_rev.R b/R/curve_rev.R index d6fc3dd..7ca4bd4 100644 --- a/R/curve_rev.R +++ b/R/curve_rev.R @@ -1,49 +1,164 @@ -# Reverse Engineer Consonance Functions Using the Point Estimate and Confidence Limits +#' Reverse Engineer Consonance / Likelihood Functions Using the Point +#' Estimate and Confidence Limits +#' +#' Using the confidence limits and point estimates from a dataset, one can use +#' these estimates to compute thousands of consonance intervals and graph the +#' intervals to form a consonance and surprisal function. +#' +#' @param point The point estimate from an analysis. Ex: 1.20 +#' @param LL The lower confidence limit from an analysis. Ex: 1.0 +#' @param UL The upper confidence limit from an analysis. Ex: 1.4 +#' @param type Indicates whether the produced result should be a consonance +#' function or a likelihood function. The default is "c" for consonance, and +#' a likelihood function can be selected with "l". +#' @param measure The type of data being used. If the data involve mean differences, +#' then the "default" option should be used, which is also the default setting. +#' If the data are ratios, then the "ratio" option should be used. +#' @param steps Indicates how many consonance intervals are to be calculated at +#' various levels. For example, setting this to 100 will produce 100 consonance +#' intervals from 0 to 100. Setting this to 10000 will produce more consonance +#' levels. By default, it is set to 10000. Increasing the number substantially +#' is not recommended as it will take longer to produce all the intervals and +#' store them into a dataframe. +#' @param table Indicates whether or not a table output with some relevant +#' statistics should be generated.
The default is TRUE and generates a table +#' which is included in the list object. +#' +#' @examples +#' +#' # From a real published study. The point estimate of the result was a hazard ratio of 1.61, +#' # and the lower bound of the interval is 0.997, while the upper bound is 2.59. +#' # +#' df <- curve_rev(point = 1.61, LL = 0.997, UL = 2.59, measure = "ratio") +#' +#' tibble::tibble(df[[1]]) +curve_rev <- function(point, LL, UL, type = "c", measure = "default", steps = 10000, table = TRUE) { -curve_rev <- function(point, LL, UL, measure = "default", steps = 10000) { - if (is.numeric(point) != TRUE) { - stop("Error: 'x' must be a numeric vector") - } - if (is.numeric(LL) != TRUE) { - stop("Error: 'y' must be a numeric vector") - } - if (is.numeric(UL) != TRUE) { - stop("Error: 'y' must be a numeric vector") - } - if (is.character(measure) != TRUE) { - stop("Error: 'measure' must be a string such as 'default' or 'ratio'") - } - intrvls <- (1:steps) / steps - z <- qnorm(1 - intrvls / 2) + # Produce Consonance / Surprisal Functions -------------------------------- + + if (type == "c") { + if (is.numeric(point) != TRUE) { + stop("Error: 'point' must be a numeric vector") + } + if (is.numeric(LL) != TRUE) { + stop("Error: 'LL' must be a numeric vector") + } + if (is.numeric(UL) != TRUE) { + stop("Error: 'UL' must be a numeric vector") + } + if (is.character(measure) != TRUE) { + stop("Error: 'measure' must be a string such as 'default' or 'ratio'") + } + + intrvls <- (1:steps) / steps + z <- qnorm(1 - intrvls / 2) + + if (measure == "default") { + se <- (UL - LL) / 3.92 + LL <- pbmclapply(z, FUN = function(i) point + (i * se), mc.cores = getOption("mc.cores", 1L)) + UL <- pbmclapply(z, FUN = function(i) point - (i * se), mc.cores = getOption("mc.cores", 1L)) + df <- data.frame(do.call(rbind, UL), do.call(rbind, LL)) + intrvl.limit <- c("lower.limit", "upper.limit") + colnames(df) <- intrvl.limit + } + + else if (measure == "ratio") { + se <- log(UL / LL) / 3.92 + logpoint
<- log(point) + logLL <- pbmclapply(z, FUN = function(i) logpoint + (i * se), mc.cores = getOption("mc.cores", 1L)) + logUL <- pbmclapply(z, FUN = function(i) logpoint - (i * se), mc.cores = getOption("mc.cores", 1L)) + df <- data.frame(do.call(rbind, logUL), do.call(rbind, logLL)) + intrvl.limit <- c("lower.limit", "upper.limit") + colnames(df) <- intrvl.limit + df$lower.limit <- exp(df$lower.limit) + df$upper.limit <- exp(df$upper.limit) + } + df$intrvl.width <- (abs((df$upper.limit) - (df$lower.limit))) + df$intrvl.level <- 1 - intrvls + df$cdf <- (abs(df$intrvl.level / 2)) + 0.5 + df$pvalue <- 1 - (1 - intrvls) + df$svalue <- -log2(df$pvalue) + df <- head(df, -1) + class(df) <- c("data.frame", "concurve") + densdf <- data.frame(c(df$lower.limit, df$upper.limit)) + colnames(densdf) <- "x" + densdf <- head(densdf, -1) + class(densdf) <- c("data.frame", "concurve") + + if (table == TRUE) { + levels <- c(0.25, 0.50, 0.75, 0.80, 0.85, 0.90, 0.95, 0.975, 0.99) + (df_subintervals <- (curve_table(df, levels, type = "c", format = "data.frame"))) + class(df_subintervals) <- c("data.frame", "concurve") + dataframes <- list(df, densdf, df_subintervals) + names(dataframes) <- c("Intervals Dataframe", "Intervals Density", "Intervals Table") + class(dataframes) <- "concurve" + return(dataframes) + } else if (table == FALSE) { + return(list(df, densdf)) + } + + + # Produce Likelihood / Deviance Functions --------------------------------- + } else if (type == "l") { + intrvls <- (1:steps) / steps + z <- qnorm(1 - intrvls / 2) + + if (measure == "default") { + se <- (UL - LL) / 3.92 + LL <- pbmclapply(z, FUN = function(i) point + (i * se), mc.cores = getOption("mc.cores", 1L)) + UL <- pbmclapply(z, FUN = function(i) point - (i * se), mc.cores = getOption("mc.cores", 1L)) + df <- data.frame(do.call(rbind, UL), do.call(rbind, LL)) + intrvl.limit <- c("lower.limit", "upper.limit") + colnames(df) <- intrvl.limit + } + + else if (measure == "ratio") { + se <- log(UL / LL) / 3.92 +
logpoint <- log(point) + logLL <- pbmclapply(z, FUN = function(i) logpoint + (i * se), mc.cores = getOption("mc.cores", 1L)) + logUL <- pbmclapply(z, FUN = function(i) logpoint - (i * se), mc.cores = getOption("mc.cores", 1L)) + df <- data.frame(do.call(rbind, logUL), do.call(rbind, logLL)) + intrvl.limit <- c("lower.limit", "upper.limit") + colnames(df) <- intrvl.limit + df$lower.limit <- exp(df$lower.limit) + df$upper.limit <- exp(df$upper.limit) + } + + df$intrvl.level <- 1 - intrvls + df$pvalue <- 1 - (1 - intrvls) + df$svalue <- -log2(df$pvalue) + df <- head(df, -1) - if (measure == "default") { - se <- (UL / LL) / 3.92 - LL <- mclapply(z, FUN = function(i) point + (i * se)) - UL <- mclapply(z, FUN = function(i) point - (i * se)) - df <- data.frame(do.call(rbind, UL), do.call(rbind, LL)) - intrvl.limit <- c("lower.limit", "upper.limit") - colnames(df) <- intrvl.limit - } - else if (measure == "ratio") { se <- log(UL / LL) / 3.92 - logpoint <- log(point) - logLL <- mclapply(z, FUN = function(i) logpoint + (i * se)) - logUL <- mclapply(z, FUN = function(i) logpoint - (i * se)) - df <- data.frame(do.call(rbind, logUL), do.call(rbind, logLL)) - intrvl.limit <- c("lower.limit", "upper.limit") - colnames(df) <- intrvl.limit - df$lower.limit <- exp(df$lower.limit) - df$upper.limit <- exp(df$upper.limit) - } + values <- seq(from = df[1, 1], to = df[1, 2], by = 0.01) + zscore <- sapply( + values, + function(j) (log(j / point) / se) + ) + + support <- exp((-zscore^2) / 2) + deviancestat <- (zscore^2) + likelihood <- support * (log(point)) + loglikelihood <- log(likelihood) + likfunction <- data.frame(values, likelihood, loglikelihood, support, deviancestat) - df$intrvl.level <- 1 - intrvls - df$pvalue <- 1 - (1 - intrvls) - df$svalue <- -log2(df$pvalue) - df <- head(df, -1) - return(df) + + if (table == TRUE) { + levels <- c(0.03, 0.05, 0.12, 0.14) + (df_subintervals <- (curve_table(likfunction, levels, type = "l", format = "data.frame"))) + class(df_subintervals) <- 
c("data.frame", "concurve") + dataframes <- list(likfunction, df_subintervals) + names(dataframes) <- c("Intervals Dataframe", "Intervals Table") + class(dataframes) <- "concurve" + return(dataframes) + } else if (table == FALSE) { + return(list(likfunction)) + } + } } # RMD Check -utils::globalVariables(c("df", "lower.limit", "upper.limit", "intrvl.level", "pvalue", "svalue")) +utils::globalVariables(c("df", "lower.limit", "upper.limit", "intrvl.width", "intrvl.level", "cdf", "pvalue", "svalue")) +utils::globalVariables(c("likfunction", "values", "likelihood", "loglikelihood", "support", "deviancestat")) diff --git a/R/curve_surv.R b/R/curve_surv.R index c23af91..26c2ed8 100644 --- a/R/curve_surv.R +++ b/R/curve_surv.R @@ -1,6 +1,26 @@ -# Survival Data Consonance Function +#' Produce Consonance Intervals for Survival Data +#' +#' Computes thousands of consonance (confidence) intervals for the chosen +#' parameter in the Cox model computed by the 'survival' package and places +#' the interval limits for each interval level into a data frame along +#' with the corresponding p-value and s-value. +#' +#' @param data Object where the Cox model is stored, typically a list produced by the +#' 'survival' package. +#' @param x Predictor of interest within the survival model for which the +#' consonance intervals should be computed. +#' @param steps Indicates how many consonance intervals are to be calculated at +#' various levels. For example, setting this to 100 will produce 100 consonance +#' intervals from 0 to 100. Setting this to 10000 will produce more consonance +#' levels. By default, it is set to 10000. Increasing the number substantially +#' is not recommended as it will take longer to produce all the intervals and +#' store them into a dataframe. +#' @param table Indicates whether or not a table output with some relevant +#' statistics should be generated. The default is TRUE and generates a table +#' which is included in the list object.
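To summarize the reverse-engineering logic in `curve_rev()` above: a reported 95% interval spans roughly 2 × 1.96 ≈ 3.92 standard errors (on the log scale for ratio measures), which pins down the SE and hence the interval at any other level. A Python sketch using the published example values from the `curve_rev()` docs (`rev_ratio_interval` is a hypothetical helper, not package code):

```python
import math
from statistics import NormalDist

def rev_ratio_interval(point, ll, ul, level):
    """Recover the log-scale SE from a reported 95% ratio interval,
    then rebuild the interval at another consonance level."""
    se = math.log(ul / ll) / 3.92   # reported limits span ~3.92 SEs
    z = NormalDist().inv_cdf(1 - (1 - level) / 2)
    logpoint = math.log(point)
    return math.exp(logpoint - z * se), math.exp(logpoint + z * se)

lo, hi = rev_ratio_interval(1.61, 0.997, 2.59, 0.95)
# ≈ (0.999, 2.595): recovers the published limits up to rounding
```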
+#' -curve_surv <- function(data, x, steps = 10000) { +curve_surv <- function(data, x, steps = 10000, table = TRUE) { if (is.list(data) != TRUE) { stop("Error: 'data' must be an object with a Cox Proportional Hazards model") } @@ -9,17 +29,35 @@ curve_surv <- function(data, x, steps = 10000) { } intrvls <- (1:steps) / steps - results <- mclapply(intrvls, FUN = function(i) summary(data, conf.int = i)$conf.int[x, ]) + results <- pbmclapply(intrvls, FUN = function(i) summary(data, conf.int = i)$conf.int[x, ], mc.cores = getOption("mc.cores", 1L)) df <- data.frame(do.call(rbind, results))[, 3:4] intrvl.limit <- c("lower.limit", "upper.limit") colnames(df) <- intrvl.limit + df$intrvl.width <- (abs((df$upper.limit) - (df$lower.limit))) df$intrvl.level <- intrvls + df$cdf <- (abs(df$intrvl.level / 2)) + 0.5 df$pvalue <- 1 - intrvls df$svalue <- -log2(df$pvalue) df <- head(df, -1) - return(df) + class(df) <- c("data.frame", "concurve") + densdf <- data.frame(c(df$lower.limit, df$upper.limit)) + colnames(densdf) <- "x" + densdf <- head(densdf, -1) + class(densdf) <- c("data.frame", "concurve") + + if (table == TRUE) { + levels <- c(0.25, 0.50, 0.75, 0.80, 0.85, 0.90, 0.95, 0.975, 0.99) + (df_subintervals <- (curve_table(df, levels, type = "c", format = "data.frame"))) + class(df_subintervals) <- c("data.frame", "concurve") + dataframes <- list(df, densdf, df_subintervals) + names(dataframes) <- c("Intervals Dataframe", "Intervals Density", "Intervals Table") + class(dataframes) <- "concurve" + return(dataframes) + } else if (table == FALSE) { + return(list(df, densdf)) + } } # RMD Check -utils::globalVariables(c("df", "lower.limit", "upper.limit", "intrvl.level", "pvalue", "svalue")) +utils::globalVariables(c("df", "lower.limit", "upper.limit", "intrvl.width", "intrvl.level", "cdf", "pvalue", "svalue")) diff --git a/R/curve_table.R b/R/curve_table.R new file mode 100644 index 0000000..a3284dc --- /dev/null +++ b/R/curve_table.R @@ -0,0 +1,82 @@ +#' Produce Tables For 
concurve Functions +#' +#' Produces publication-ready tables with relevant statistics of interest +#' for functions produced from the concurve package. +#' +#' @param data Dataframe from a concurve function for which a table should be produced. +#' @param levels Levels of the consonance intervals or likelihood intervals that should be +#' included in the table. If not supplied, a default set of levels is used. +#' @param type Indicates whether the table is for a consonance function or likelihood function. +#' The default is set to "c" for consonance and can be switched to "l" for likelihood. +#' @param format The format of the tables. The options include "data.frame" which is the +#' default, "tibble", "docx" (which creates a table for a Word document), "pptx" (which +#' creates a table for PowerPoint), "latex" (which creates a table for a TeX document), and +#' "image", which produces an image of the table. +#' +#' @examples +#' +#' library(concurve) +#' +#' GroupA <- rnorm(500) +#' GroupB <- rnorm(500) +#' +#' RandomData <- data.frame(GroupA, GroupB) +#' +#' intervalsdf <- curve_mean(GroupA, GroupB, data = RandomData, method = "default") +#' +#' (z <- curve_table(intervalsdf[[1]], format = "data.frame")) +#' (z <- curve_table(intervalsdf[[1]], format = "tibble")) +#' (z <- curve_table(intervalsdf[[1]], format = "latex")) +curve_table <- function(data, levels, type = "c", format = "data.frame") { + if (type == "c") { + if (missing(levels)) levels <- c(0.25, 0.50, 0.75, 0.80, 0.85, 0.90, 0.95, 0.975, 0.99) + + subdf <- pbmclapply(levels, FUN = function(i) (subset(data, intrvl.level == i)), mc.cores = getOption("mc.cores", 1L)) + subdf <- data.frame(do.call(rbind, subdf)) + class(subdf) <- c("data.frame", "concurve") + subdf$intrvl.level <- (subdf$intrvl.level * 100) + subcolnames <- c("Lower Limit", "Upper Limit", "Interval Width", "Interval Level (%)", "CDF", "P-value", "S-value (bits)") + colnames(subdf) <- subcolnames + subdf <- round(subdf, digits = 3) + } else if (type == "l") { + if (missing(levels)) levels <- c(0.03, 0.05, 0.12, 0.14) + + subdf <-
pbmclapply(levels, FUN = function(i) (subset(data, round(support, 2) == i)), mc.cores = getOption("mc.cores", 1L)) + subdf <- data.frame(do.call(rbind, subdf)) + class(subdf) <- c("data.frame", "concurve") + subcolnames <- c("Theta", "Likelihood", "Log Likelihood", "Relative Likelihood", "Deviance Statistic") + colnames(subdf) <- subcolnames + subdf <- round(subdf, digits = 3) + } + + if (format == "data.frame") { + return(subdf) + } else if (format == "tibble") { + subdf <- tibble::tibble(subdf) + return(subdf) + } else if (format == "docx") { + subdf <- flextable(subdf) + subdf <- autofit(subdf) + subdf + return(print(subdf, preview = "docx")) + } else if (format == "pptx") { + subdf <- flextable(subdf) + subdf <- autofit(subdf) + subdf + return(print(subdf, preview = "pptx")) + } else if (format == "latex") { + subdf <- knitr::kable( + subdf, + booktabs = TRUE, + label = "A table of some interval estimates at various levels and corresponding statistics." + ) + return(subdf) + } else if (format == "image") { + subdf <- flextable(subdf) + subdf <- autofit(subdf) + subdf + return(subdf) + } +} + +utils::globalVariables(c("subdf", "Lower Limit", "Upper Limit", "Interval Width", "Interval Level", "CDF", "P-value", "S-value")) diff --git a/R/defunct.R b/R/defunct.R index c7079c0..f7e85c6 100644 --- a/R/defunct.R +++ b/R/defunct.R @@ -1,8 +1,15 @@ -defunct <- function(msg = "This function is deprecated") function(...) return(stop(msg)) +defunct <- function(msg = "This function is deprecated") { + function(...) { + return(stop(msg)) + } +} # Graphical functions -plotpint <- defunct("plotpint() is now deprecated. Please use ggconcurve() or plot_concurve() instead.") -plotsint <- defunct("plotsint() is now deprecated. Please use ggconcurve() or plot_concurve() instead.") +plotpint <- defunct("plotpint() is now deprecated. Please use ggcurve() instead.") +plotsint <- defunct("plotsint() is now deprecated. 
Please use ggcurve() instead.") +ggconcurve <- defunct("ggconcurve() is now deprecated. Please use ggcurve() instead.") +plot_concurve <- defunct("plot_concurve() is now deprecated. Please use ggcurve() instead.") + # Computational functions meanintervals <- defunct("meanintervals() is now deprecated. Please use curve_mean() instead.") @@ -10,5 +17,4 @@ metaintervals <- defunct("metaintervals() is now deprecated. Please use curve_me genintervals <- defunct("genintervals() is now deprecated. Please use curve_gen() instead.") corrintervals <- defunct("corrintervals() is now deprecated. Please use curve_corr() instead.") survintervals <- defunct("survintervals() is now deprecated. Please use curve_surv() instead.") -rev_eng <- defunct("rev_eng() is now deprecated. Please use curve_rev instead.") - +rev_eng <- defunct("rev_eng() is now deprecated. Please use curve_rev() instead.") diff --git a/R/ggconcurve.R b/R/ggconcurve.R deleted file mode 100644 index 60c6c24..0000000 --- a/R/ggconcurve.R +++ /dev/null @@ -1,164 +0,0 @@ -ggconcurve <- function(type = "consonance", data, measure = "default", nullvalue = "absent", position = "pyramid", - title = "Consonance Function", - subtitle = "The function contains consonance intervals at every level.", - xaxis = "Range of Values", - yaxis = "P-value", - color = "#555555", - fill = "#239a98") { - if (type == "consonance") { - if (is.data.frame(data) != TRUE) { - stop("Error: 'x' must be a data frame from 'concurve'.") - } - if (ncol(data) != 5) { - stop("Error: 'x' must be a data frame from 'concurve'.") - } - if (is.character(measure) != TRUE) { - stop("Error: 'measure' must be a string such as 'default' or 'ratio'.") - } - if (is.character(nullvalue) != TRUE) { - stop("Error: 'nullvalue' must be a string such as 'absent' or 'present'.") - } - if (is.character(position) != TRUE) { - stop("Error: 'position' must be a string such as 'pyramid' or 'inverted'.") - } - if (is.character(title) != TRUE) { - stop("Error: 'title' must be 
a string.") - } - if (is.character(subtitle) != TRUE) { - stop("Error: 'subtitle' must be a string.") - } - if (is.character(xaxis) != TRUE) { - stop("Error: 'xaxis' must be a string.") - } - if (is.character(yaxis) != TRUE) { - stop("Error: 'yaxis' must be a string.") - } - if (is.character(fill) != TRUE) { - stop("Error: 'fill' must be a string for the color.") - } - ggplot(data = data) + - geom_point(aes(x = lower.limit, y = pvalue), - color = color, fill = fill, alpha = 0.5, shape = 20, size = 0.1 - ) + - geom_point(aes(x = upper.limit, y = pvalue), - color = color, fill = fill, alpha = 0.5, shape = 20, size = 0.1 - ) + - geom_ribbon(aes(x = lower.limit, ymin = min(pvalue), ymax = pvalue), - fill = fill, alpha = 0.30 - ) + - geom_ribbon(aes(x = upper.limit, ymin = min(pvalue), ymax = pvalue), - fill = fill, alpha = 0.30 - ) + - labs( - title = title, - subtitle = subtitle, - x = xaxis, - y = yaxis - ) + - theme_light() + - theme( - axis.title.x = element_text(size = 12), - axis.title.y = element_text(size = 12) - ) + { - if (measure == "default") scale_x_continuous(breaks = scales::pretty_breaks(n = 10)) - } + { - if (measure == "ratio") scale_x_log10(breaks = scales::pretty_breaks(n = 10)) - } + { - if (position == "inverted") { - scale_y_reverse( - breaks = seq(0, 1, .05), - sec.axis = sec_axis(~ (1 - .) * 100, name = "Levels for CI (%)", breaks = seq(0, 100, 5)) - ) - } - } + { - if (position == "pyramid") { - scale_y_continuous( - breaks = seq(0, 1, .05), - sec.axis = sec_axis(~ (1 - .) 
* 100, name = "Levels for CI (%)", breaks = seq(0, 100, 5)) - ) - } - } + - theme(text = element_text(size = 11)) + - theme( - plot.title = element_text(size = 12), - plot.subtitle = element_text(size = 11) - ) + - if (nullvalue == "present") { - if (measure == "default") { - annotate("segment", - x = 0, xend = 0, y = 0, yend = 1, - color = "#990000", alpha = 0.3, size = .6 - ) - } else if (measure == "ratio") { - annotate("segment", - x = 1, xend = 1, y = 0, yend = 1, - color = "#990000", alpha = 0.3, size = .6 - ) - } - } - else if (nullvalue == "absent") { - } - } else if (type == "surprisal") { - if (is.data.frame(data) != TRUE) { - stop("Error: 'data' must be a data frame from 'concurve'") - } - if (ncol(data) != 5) { - stop("Error: 'data' must be a data frame from 'concurve'") - } - if (is.character(measure) != TRUE) { - stop("Error: 'measure' must be a string such as 'default' or 'ratio'.") - } - if (is.character(title) != TRUE) { - stop("Error: 'title' must be a string.") - } - if (is.character(subtitle) != TRUE) { - stop("Error: 'subtitle' must be a string.") - } - if (is.character(xaxis) != TRUE) { - stop("Error: 'xaxis' must be a string.") - } - if (is.character(yaxis) != TRUE) { - stop("Error: 'yaxis' must be a string.") - } - if (is.character(fill) != TRUE) { - stop("Error: 'fill' must be a string for the color.") - } - ggplot(data = data) + - geom_point(aes(x = lower.limit, y = svalue), - color = color, fill = fill, alpha = 0.5, shape = 20, size = 0.1 - ) + - geom_point(aes(x = upper.limit, y = svalue), - color = color, fill = fill, alpha = 0.5, shape = 20, size = 0.1 - ) + - geom_ribbon(aes(x = lower.limit, ymin = max(svalue), ymax = svalue), - fill = fill, alpha = 0.30 - ) + - geom_ribbon(aes(x = upper.limit, ymin = max(svalue), ymax = svalue), - fill = fill, alpha = 0.30 - ) + - labs( - title = title, - subtitle = subtitle, - x = xaxis, - y = "S-value (bits of information)" - ) + - theme_light() + - theme( - axis.title.x = element_text(size = 12), 
- axis.title.y = element_text(size = 12) - ) + { - if (measure == "default") scale_x_continuous(breaks = scales::pretty_breaks(n = 10)) - } + { - if (measure == "ratio") scale_x_log10(breaks = scales::pretty_breaks(n = 10)) - } + - scale_y_continuous(breaks = seq(0, 14, 0.5), expand = c(0, 0)) + - theme(text = element_text(size = 11)) + - theme( - plot.title = element_text(size = 12), - plot.subtitle = element_text(size = 11) - ) - } -} - -# RMD Check -utils::globalVariables(c("df", "lower.limit", "upper.limit", "intrvl.level", "pvalue", "svalue")) diff --git a/R/ggcurve.R b/R/ggcurve.R new file mode 100644 index 0000000..e5b500b --- /dev/null +++ b/R/ggcurve.R @@ -0,0 +1,625 @@ +#' Plots the P-Value (Consonance), S-value (Surprisal), +#' and Likelihood Function via ggplot2 +#' +#' Takes the dataframe produced by the interval functions and +#' plots the p-values/s-values, consonance (confidence) levels, and +#' the interval estimates to produce a p-value/s-value function +#' using ggplot2 graphics. +#' +#' @param data The dataframe produced by one of the interval functions +#' in which the intervals are stored. +#' @param type Choose whether to plot a "consonance" function, a +#' "surprisal" function or "likelihood". The default option is set to "c". +#' The type must be set in quotes, for example ggcurve(type = "s") or +#' ggcurve(type = "c"). Other options include "cdf" for the consonance +#' distribution function, and "cd" for the consonance density function, +#' "l1" for relative likelihood, "l2" for log-likelihood, "l3" for likelihood, +#' and "d" for the deviance function. +#' @param measure Indicates whether the object has a log transformation +#' or is normal/default. The default setting is "default". If the measure +#' is set to "ratio", it will take logarithmically transformed values and +#' convert them back to normal values in the dataframe. 
This is typically a +#' setting used for binary outcomes and their measures such as risk ratios, +#' hazard ratios, and odds ratios. +#' @param levels Indicates which interval levels should be plotted on the function. +#' By default it is set to 0.95 to plot the 95% interval on the consonance function, +#' but more levels can be plotted by using the c() function, for example, +#' levels = c(0.50, 0.75, 0.95). +#' @param nullvalue Indicates whether the null value for the measure +#' should be plotted. By default, it is set to FALSE, meaning it will not be +#' plotted as a vertical line. Changing this to TRUE will plot a vertical +#' line at 0 when the measure is set to "default" and a vertical line at +#' 1 when the measure is set to "ratio". For example, +#' ggcurve(type = "c", data = df, measure = "ratio", nullvalue = TRUE). +#' This feature is not yet available for surprisal functions. +#' @param position Determines the orientation of the P-value (consonance) function. +#' By default, it is set to "pyramid", meaning the p-value function will +#' stand right side up, like a pyramid. However, it can also be inverted +#' via the option "inverted". This will also change the sequence of the +#' y-axes to match the orientation. This can be set as such, +#' ggcurve(type = "c", data = df, position = "inverted"). +#' @param title A custom title for the graph. By default, it is +#' set to "Consonance Function". In order to set a title, it must +#' be in quotes. For example, ggcurve(type = "c", +#' data = x, title = "Custom Title"). +#' @param subtitle A custom subtitle for the graph. By default, it is set +#' to "The function contains consonance/confidence intervals at every level +#' and the P-values." In order to set a subtitle, it must be in quotes. +#' For example, ggcurve(type = "c", data = x, subtitle = "Custom Subtitle"). +#' @param xaxis A custom x-axis title for the graph. By default, +#' it is set to "Range of Values". 
+#' In order to set an x-axis title, it must be in quotes. For example, +#' ggcurve(type = "c", data = x, xaxis = "Hazard Ratio"). +#' @param yaxis A custom y-axis title for the graph. By default, +#' it is set to "P-value". +#' In order to set a y-axis title, it must be in quotes. For example, +#' ggcurve(type = "c", data = x, yaxis = "Confidence Level"). +#' @param color Item that allows the user to choose the color of the points +#' and the ribbons in the graph. By default, it is set to color = "#000000". +#' The inputs must be in quotes. +#' For example, ggcurve(type = "c", data = x, color = "#333333"). +#' @param fill Item that allows the user to choose the color of the ribbons in the graph. +#' By default, it is set to fill = "#239a98". The inputs must be in quotes. For example, +#' ggcurve(type = "c", data = x, fill = "#333333"). +#' +#' @return Plot with intervals at every consonance level graphed with their corresponding +#' p-values and compatibility levels. +#' +#' @examples +#' +#' # Simulate random data +#' +#' library(concurve) +#' +#' GroupA <- rnorm(500) +#' GroupB <- rnorm(500) +#' +#' RandomData <- data.frame(GroupA, GroupB) +#' +#' intervalsdf <- curve_mean(GroupA, GroupB, data = RandomData, method = "default") +#' (function1 <- ggcurve(type = "c", intervalsdf[[1]])) +ggcurve <- function(data, type = "c", measure = "default", levels = 0.95, nullvalue = FALSE, position = "pyramid", + title = "Interval Function", + subtitle = "The function displays intervals at every level.", + xaxis = expression(Theta ~ "Range of Values"), + yaxis = "P-value", + color = "#000000", + fill = "#239a98") { + + + # Consonance Curve ----------------------------------------------------- + + if (type == "c") { + if (is(data, "concurve") != TRUE) { + stop("Error: 'data' must be a data frame from 'concurve'.") + } + if (ncol(data) != 7) { + stop("Error: 'data' must be a data frame from 'concurve'.") + } + if (is.character(measure) != TRUE) { + stop("Error: 'measure' 
must be a string such as 'default' or 'ratio'.") + } + if (is.logical(nullvalue) != TRUE) { + stop("Error: 'nullvalue' must be a logical statement such as 'TRUE' or 'FALSE'.") + } + if (is.character(position) != TRUE) { + stop("Error: 'position' must be a string such as 'pyramid' or 'inverted'.") + } + if (is.character(title) != TRUE) { + stop("Error: 'title' must be a string.") + } + if (is.character(subtitle) != TRUE) { + stop("Error: 'subtitle' must be a string.") + } + + if (is.character(yaxis) != TRUE) { + stop("Error: 'yaxis' must be a string.") + } + if (is.character(fill) != TRUE) { + stop("Error: 'fill' must be a string for the color.") + } + + + # Plotting Intervals ------------------------------------------------------ + + interval <- pbmclapply(levels, FUN = function(i) (c(i, subset(data, intrvl.level == i)[, 1], subset(data, intrvl.level == i)[, 2])), mc.cores = getOption("mc.cores", 1L)) + interval <- data.frame(do.call(rbind, interval)) + interval <- pivot_longer(interval, X2:X3, names_to = "levels", values_to = "limits") + interval <- interval[, -2] + colum_names <- c("levels", "limits") + colnames(interval) <- colum_names + + + ggplot(data = data) + + geom_line(aes(x = lower.limit, y = pvalue), + color = color + ) + + geom_line(aes(x = upper.limit, y = pvalue), + color = color + ) + + geom_point(data = interval, mapping = aes(x = limits, y = 1 - levels), size = .95) + + geom_line(data = interval, mapping = aes(x = limits, y = 1 - levels, group = levels), size = .40) + + geom_ribbon(aes(x = lower.limit, ymin = min(pvalue), ymax = pvalue), + fill = fill, alpha = 0.20 + ) + + geom_ribbon(aes(x = upper.limit, ymin = min(pvalue), ymax = pvalue), + fill = fill, alpha = 0.20 + ) + + labs( + title = "Consonance Function", + subtitle = subtitle, + x = xaxis, + y = yaxis + ) + + theme_bw() + + theme( + plot.title = element_text(size = 12), + plot.subtitle = element_text(size = 11), + axis.title.x = element_text(size = 12), + axis.title.y = element_text(size 
= 12), + text = element_text(size = 11) + ) + + { + if (measure == "default") scale_x_continuous(breaks = scales::pretty_breaks(n = 5)) + } + + { + if (measure == "ratio") scale_x_log10(breaks = scales::pretty_breaks(n = 5)) + } + + { + if (position == "inverted") { + scale_y_reverse( + expand = expand_scale(mult = c(0.01, 0.025)), + breaks = seq(0, 1, 0.10), + sec.axis = sec_axis(~ (1 - .) * 100, name = "Levels for CI (%)", breaks = seq(0, 100, 10)) + ) + } + } + + { + if (position == "pyramid") { + scale_y_continuous( + expand = expand_scale(mult = c(0.01, 0.025)), + breaks = seq(0, 1, 0.10), + sec.axis = sec_axis(~ (1 - .) * 100, name = "Levels for CI (%)", breaks = seq(0, 100, 10)) + ) + } + } + + if (nullvalue == TRUE) { + if (measure == "default") { + annotate("segment", + x = 0, xend = 0, y = 0, yend = 1, + color = "#990000", alpha = 0.3, size = .75, linetype = 1 + ) + } else if (measure == "ratio") { + annotate("segment", + x = 1, xend = 1, y = 0, yend = 1, + color = "#990000", alpha = 0.3, size = .75, linetype = 1 + ) + } + } + else if (nullvalue == FALSE) { + } + + # Surprisal Curve ------------------------------------------------------ + } else if (type == "s") { + if (is(data, "concurve") != TRUE) { + stop("Error: 'data' must be a data frame from 'concurve'.") + } + if (ncol(data) != 7) { + stop("Error: 'data' must be a data frame from 'concurve'.") + } + if (is.character(measure) != TRUE) { + stop("Error: 'measure' must be a string such as 'default' or 'ratio'.") + } + if (is.character(title) != TRUE) { + stop("Error: 'title' must be a string.") + } + if (is.character(subtitle) != TRUE) { + stop("Error: 'subtitle' must be a string.") + } + if (is.character(yaxis) != TRUE) { + stop("Error: 'yaxis' must be a string.") + } + if (is.character(fill) != TRUE) { + stop("Error: 'fill' must be a string for the color.") + } + + + # Plotting Intervals ------------------------------------------------------ + + interval <- pbmclapply(levels, FUN = function(i) (c(i, 
subset(data, intrvl.level == i)[, 1], subset(data, intrvl.level == i)[, 2])), mc.cores = getOption("mc.cores", 1L)) + interval <- data.frame(do.call(rbind, interval)) + interval <- gather(interval, key = "levels", value = "limits", X2:X3) + interval <- interval[, -2] + colum_names <- c("levels", "limits") + colnames(interval) <- colum_names + + ggplot(data = data) + + geom_line(aes(x = lower.limit, y = svalue), + color = color + ) + + geom_line(aes(x = upper.limit, y = svalue), + color = color + ) + + geom_point(data = interval, mapping = aes(x = limits, y = (-log2(1 - levels))), size = .95) + + geom_line(data = interval, mapping = aes(x = limits, y = (-log2(1 - levels)), group = levels), size = .40) + + geom_ribbon(aes(x = lower.limit, ymin = max(svalue), ymax = svalue), + fill = fill, alpha = 0.30 + ) + + geom_ribbon(aes(x = upper.limit, ymin = max(svalue), ymax = svalue), + fill = fill, alpha = 0.30 + ) + + labs( + title = "Surprisal Function", + subtitle = subtitle, + x = xaxis, + y = "S-value \n(Bits of Information)" + ) + + theme_bw() + + theme( + plot.title = element_text(size = 12), + plot.subtitle = element_text(size = 11), + axis.title.x = element_text(size = 12), + axis.title.y = element_text(size = 12), + text = element_text(size = 11) + ) + + { + if (measure == "default") scale_x_continuous(breaks = scales::pretty_breaks(n = 5)) + } + + { + if (measure == "ratio") scale_x_log10(breaks = scales::pretty_breaks(n = 5)) + } + + scale_y_continuous(breaks = seq(0, 14, 1), expand = c(0.0075, 0.0075)) + + + + # Consonance Distribution ----------------------------------------------------- + } else if (type == "cdf") { + if (is(data, "concurve") != TRUE) { + stop("Error: 'data' must be a data frame from 'concurve'.") + } + if (ncol(data) != 1) { + stop("Error: 'data' must be a data frame from 'concurve'.") + } + if (is.character(measure) != TRUE) { + stop("Error: 'measure' must be a string such as 'default' or 'ratio'.") + } + if (is.logical(nullvalue) != TRUE) 
{ + stop("Error: 'nullvalue' must be a logical statement such as 'TRUE' or 'FALSE'.") + } + if (is.character(position) != TRUE) { + stop("Error: 'position' must be a string such as 'pyramid' or 'inverted'.") + } + if (is.character(title) != TRUE) { + stop("Error: 'title' must be a string.") + } + if (is.character(subtitle) != TRUE) { + stop("Error: 'subtitle' must be a string.") + } + if (is.character(yaxis) != TRUE) { + stop("Error: 'yaxis' must be a string.") + } + if (is.character(fill) != TRUE) { + stop("Error: 'fill' must be a string for the color.") + } + + ggplot(data = data, mapping = aes(x = x)) + + stat_ecdf(geom = "point", color = fill, size = 0.70, shape = 8, alpha = 0.3) + + geom_hline(yintercept = 0.50) + + labs( + title = "Consonance Distribution", + subtitle = subtitle, + x = xaxis, + y = "Cumulative Confidence" + ) + + theme_bw() + + theme( + plot.title = element_text(size = 14), + plot.subtitle = element_text(size = 12), + plot.caption = element_text(size = 8), + axis.title.x = element_text(size = 13), + axis.title.y = element_text(size = 13), + text = element_text(size = 15) + ) + + { + if (measure == "ratio") scale_x_log10(breaks = scales::pretty_breaks(n = 10)) + } + + scale_y_continuous(expand = expand_scale(mult = c(0.01, 0.05)), breaks = scales::pretty_breaks(n = 10)) + + if (nullvalue == TRUE) { + if (measure == "default") { + annotate("segment", + x = 0, xend = 0, y = 0, yend = 1, + color = "#990000", alpha = 0.3, size = .75, linetype = 1 + ) + } else if (measure == "ratio") { + annotate("segment", + x = 1, xend = 1, y = 0, yend = 1, + color = "#990000", alpha = 0.3, size = .75, linetype = 1 + ) + } + } + + # Consonance Density --------------------------------------------- + } else if (type == "cd") { + if (ncol(data) != 1) { + stop("Error: 'data' must be a data frame from the curve_boot function in 'concurve'.") + } + if (is.character(title) != TRUE) { + stop("Error: 'title' must be a string.") + } + if (is.character(subtitle) != TRUE) { 
+ stop("Error: 'subtitle' must be a string.") + } + if (is.character(yaxis) != TRUE) { + stop("Error: 'yaxis' must be a string.") + } + if (is.character(fill) != TRUE) { + stop("Error: 'fill' must be a string for the color.") + } + + ggplot(data = data, mapping = aes(x = x)) + + geom_density(fill = fill, alpha = 0.3) + + labs( + title = "Consonance Density", + subtitle = subtitle, + x = xaxis, + y = "Density" + ) + + theme_bw() + + theme( + plot.title = element_text(size = 14), + plot.subtitle = element_text(size = 12), + plot.caption = element_text(size = 8), + axis.title.x = element_text(size = 13), + axis.title.y = element_text(size = 13), + text = element_text(size = 15) + ) + + { + if (measure == "ratio") scale_x_log10(breaks = scales::pretty_breaks(n = 10)) + } + + scale_y_continuous(expand = expand_scale(mult = c(0.01, 0.05)), breaks = scales::pretty_breaks(n = 10)) + + if (nullvalue == TRUE) { + if (measure == "default") { + annotate("segment", + x = 0, xend = 0, y = 0, yend = 1, + color = "#990000", alpha = 0.3, size = .75, linetype = 1 + ) + } else if (measure == "ratio") { + annotate("segment", + x = 1, xend = 1, y = 0, yend = 1, + color = "#990000", alpha = 0.3, size = .75, linetype = 1 + ) + } + } + + # Relative Likelihood Function ----------------------------------------------------- + } else if (type == "l1") { + if (ncol(data) != 5) { + stop("Error: 'data' must be a data frame from 'concurve'.") + } + if (is.character(measure) != TRUE) { + stop("Error: 'measure' must be a string such as 'default' or 'ratio'.") + } + if (is.logical(nullvalue) != TRUE) { + stop("Error: 'nullvalue' must be a logical statement such as 'TRUE' or 'FALSE'.") + } + if (is.character(title) != TRUE) { + stop("Error: 'title' must be a string.") + } + if (is.character(subtitle) != TRUE) { + stop("Error: 'subtitle' must be a string.") + } + if (is.character(yaxis) != TRUE) { + stop("Error: 'yaxis' must be a string.") + } + if (is.character(fill) != TRUE) { + stop("Error: 'fill' 
must be a string for the color.") + } + + ggplot(data = data, mapping = aes(x = values, y = support)) + + geom_line() + + geom_ribbon(aes(x = values, ymin = min(support), ymax = support), fill = "#239a98", alpha = 0.30) + + labs( + title = "Relative Likelihood Function", + subtitle = subtitle, + x = xaxis, + y = "Relative Likelihood" + ) + + theme_bw() + + theme( + plot.title = element_text(size = 14), + plot.subtitle = element_text(size = 12), + plot.caption = element_text(size = 8), + axis.title.x = element_text(size = 13), + axis.title.y = element_text(size = 13), + text = element_text(size = 15) + ) + + { + if (measure == "ratio") scale_x_log10(breaks = scales::pretty_breaks(n = 10)) + } + + scale_y_continuous(expand = expand_scale(mult = c(0.01, 0.05)), breaks = scales::pretty_breaks(n = 10)) + + if (nullvalue == TRUE) { + if (measure == "default") { + annotate("segment", + x = 0, xend = 0, y = 0, yend = 1, + color = "#990000", alpha = 0.3, size = .75, linetype = 1 + ) + } else if (measure == "ratio") { + annotate("segment", + x = 1, xend = 1, y = 0, yend = 1, + color = "#990000", alpha = 0.3, size = .75, linetype = 1 + ) + } + } + + # Log Likelihood Function ----------------------------------------------------- + } else if (type == "l2") { + if (ncol(data) != 5) { + stop("Error: 'data' must be a data frame from 'concurve'.") + } + if (is.character(measure) != TRUE) { + stop("Error: 'measure' must be a string such as 'default' or 'ratio'.") + } + if (is.logical(nullvalue) != TRUE) { + stop("Error: 'nullvalue' must be a logical statement such as 'TRUE' or 'FALSE'.") + } + if (is.character(title) != TRUE) { + stop("Error: 'title' must be a string.") + } + if (is.character(subtitle) != TRUE) { + stop("Error: 'subtitle' must be a string.") + } + if (is.character(yaxis) != TRUE) { + stop("Error: 'yaxis' must be a string.") + } + if (is.character(fill) != TRUE) { + stop("Error: 'fill' must be a string for the color.") + } + + ggplot(data = data, mapping = aes(x = 
values, y = loglikelihood)) + + geom_line() + + geom_ribbon(aes(x = values, ymin = min(loglikelihood), ymax = loglikelihood), fill = "#239a98", alpha = 0.30) + + labs( + title = "Log Likelihood Function", + subtitle = subtitle, + x = xaxis, + y = "Log Likelihood" + ) + + theme_bw() + + theme( + plot.title = element_text(size = 14), + plot.subtitle = element_text(size = 12), + plot.caption = element_text(size = 8), + axis.title.x = element_text(size = 13), + axis.title.y = element_text(size = 13), + text = element_text(size = 15) + ) + + { + if (measure == "ratio") scale_x_log10(breaks = scales::pretty_breaks(n = 10)) + } + + scale_y_continuous(expand = expand_scale(mult = c(0.01, 0.05)), breaks = scales::pretty_breaks(n = 10)) + + if (nullvalue == TRUE) { + if (measure == "default") { + annotate("segment", + x = 0, xend = 0, y = 0, yend = 1, + color = "#990000", alpha = 0.3, size = .75, linetype = 1 + ) + } else if (measure == "ratio") { + annotate("segment", + x = 1, xend = 1, y = 0, yend = 1, + color = "#990000", alpha = 0.3, size = .75, linetype = 1 + ) + } + } + + # Likelihood Function ----------------------------------------------------- + } else if (type == "l3") { + if (ncol(data) != 5) { + stop("Error: 'data' must be a data frame from 'concurve'.") + } + if (is.character(measure) != TRUE) { + stop("Error: 'measure' must be a string such as 'default' or 'ratio'.") + } + if (is.logical(nullvalue) != TRUE) { + stop("Error: 'nullvalue' must be a logical statement such as 'TRUE' or 'FALSE'.") + } + if (is.character(title) != TRUE) { + stop("Error: 'title' must be a string.") + } + if (is.character(subtitle) != TRUE) { + stop("Error: 'subtitle' must be a string.") + } + if (is.character(yaxis) != TRUE) { + stop("Error: 'yaxis' must be a string.") + } + if (is.character(fill) != TRUE) { + stop("Error: 'fill' must be a string for the color.") + } + + ggplot(data = data, mapping = aes(x = values, y = likelihood)) + + geom_line() + + geom_ribbon(aes(x = values, ymin 
= min(likelihood), ymax = likelihood), fill = "#239a98", alpha = 0.30) + + labs( + title = "Likelihood Function", + subtitle = subtitle, + x = xaxis, + y = "Likelihood" + ) + + theme_bw() + + theme( + plot.title = element_text(size = 14), + plot.subtitle = element_text(size = 12), + plot.caption = element_text(size = 8), + axis.title.x = element_text(size = 13), + axis.title.y = element_text(size = 13), + text = element_text(size = 15) + ) + + { + if (measure == "ratio") scale_x_log10(breaks = scales::pretty_breaks(n = 10)) + } + + scale_y_continuous(expand = expand_scale(mult = c(0.01, 0.05)), breaks = scales::pretty_breaks(n = 10)) + + if (nullvalue == TRUE) { + if (measure == "default") { + annotate("segment", + x = 0, xend = 0, y = 0, yend = 1, + color = "#990000", alpha = 0.3, size = .75, linetype = 1 + ) + } else if (measure == "ratio") { + annotate("segment", + x = 1, xend = 1, y = 0, yend = 1, + color = "#990000", alpha = 0.3, size = .75, linetype = 1 + ) + } + } + + # Deviance Function ----------------------------------------------------- + } else if (type == "d") { + if (ncol(data) != 5) { + stop("Error: 'data' must be a data frame from 'concurve'.") + } + if (is.character(measure) != TRUE) { + stop("Error: 'measure' must be a string such as 'default' or 'ratio'.") + } + if (is.logical(nullvalue) != TRUE) { + stop("Error: 'nullvalue' must be a logical statement such as 'TRUE' or 'FALSE'.") + } + if (is.character(title) != TRUE) { + stop("Error: 'title' must be a string.") + } + if (is.character(subtitle) != TRUE) { + stop("Error: 'subtitle' must be a string.") + } + if (is.character(yaxis) != TRUE) { + stop("Error: 'yaxis' must be a string.") + } + if (is.character(fill) != TRUE) { + stop("Error: 'fill' must be a string for the color.") + } + + ggplot(data = data, mapping = aes(x = values, y = deviancestat)) + + geom_line() + + geom_ribbon(aes(x = values, ymin = deviancestat, ymax = max(deviancestat)), fill = "#239a98", alpha = 0.30) + + labs( + title = 
"Deviance Function", + subtitle = subtitle, + x = xaxis, + y = "Deviance Statistic" + ) + + theme_bw() + + theme( + plot.title = element_text(size = 14), + plot.subtitle = element_text(size = 12), + plot.caption = element_text(size = 8), + axis.title.x = element_text(size = 13), + axis.title.y = element_text(size = 13), + text = element_text(size = 15) + ) + + { + if (measure == "ratio") scale_x_log10(breaks = scales::pretty_breaks(n = 10)) + } + + scale_y_continuous(breaks = scales::pretty_breaks(n = 10), expand = c(0.0075, 0.0075)) + } +} + +# RMD Check +utils::globalVariables(c("df", "lower.limit", "upper.limit", "intrvl.width", "intrvl.level", "cdf", "pvalue", "svalue")) +utils::globalVariables(c("X2", "X3", "limits", "x")) diff --git a/R/plot_compare.R b/R/plot_compare.R new file mode 100644 index 0000000..a11bb09 --- /dev/null +++ b/R/plot_compare.R @@ -0,0 +1,678 @@ +#' Compares the P-Value (Consonance), S-value (Surprisal), and Likelihood Function via ggplot2 +#' +#' Compares the p-value/s-value, and likelihood functions using ggplot2 graphics. +#' +#' @param data1 The first dataframe produced by one of the interval functions in which the +#' intervals are stored. +#' @param data2 The second dataframe produced by one of the interval functions in which the +#' intervals are stored. +#' @param type Choose whether to plot a "consonance" function, a +#' "surprisal" function or "likelihood". The default option is set to "c". +#' The type must be set in quotes, for example plot_compare(type = "s") or +#' plot_compare(type = "c"). Other options include "pd" for the consonance +#' distribution function, and "cd" for the consonance density function, +#' "l1" for relative likelihood, "l2" for log-likelihood, "l3" for likelihood +#' and "d" for deviance function. +#' @param measure Indicates whether the object has a log transformation +#' or is normal/default. The default setting is "default". 
If the measure +#' is set to "ratio", it will take logarithmically transformed values and +#' convert them back to normal values in the dataframe. This is typically a +#' setting used for binary outcomes and their measures such as risk ratios, +#' hazard ratios, and odds ratios. +#' @param nullvalue Indicates whether the null value for the measure +#' should be plotted. By default, it is set to FALSE, meaning it will not be +#' plotted as a vertical line. Changing this to TRUE will plot a vertical +#' line at 0 when the measure is set to "default" and a vertical line at +#' 1 when the measure is set to "ratio". For example, +#' plot_compare(type = "c", data1 = df1, data2 = df2, measure = "ratio", nullvalue = TRUE). +#' This feature is not yet available for surprisal functions. +#' @param position Determines the orientation of the P-value (consonance) function. +#' By default, it is set to "pyramid", meaning the p-value function will +#' stand right side up, like a pyramid. However, it can also be inverted +#' via the option "inverted". This will also change the sequence of the +#' y-axes to match the orientation. This can be set as such, +#' plot_compare(type = "c", data1 = df1, data2 = df2, position = "inverted"). +#' @param title A custom title for the graph. By default, it is +#' set to "Consonance Function". In order to set a title, it must +#' be in quotes. For example, plot_compare(type = "c", +#' data1 = x1, data2 = x2, title = "Custom Title"). +#' @param subtitle A custom subtitle for the graph. By default, it is set +#' to "The function contains consonance/confidence intervals at every level +#' and the P-values." In order to set a subtitle, it must be in quotes. +#' For example, plot_compare(type = "c", data1 = x1, data2 = x2, subtitle = "Custom Subtitle"). +#' @param xaxis A custom x-axis title for the graph. By default, +#' it is set to "Range of Values". +#' In order to set an x-axis title, it must be in quotes. For example, +#' plot_compare(type = "c", data1 = x1, data2 = x2, xaxis = "Hazard Ratio"). 
+#' @param yaxis A custom y-axis title for the graph. By default, +#' it is set to "P-value". +#' In order to set a y-axis title, it must be in quotes. For example, +#' plot_compare(type = "c", data1 = x1, data2 = x2, yaxis = "Confidence Level"). +#' @param color Item that allows the user to choose the color of the points +#' and the ribbons in the graph. By default, it is set to color = "#000000". +#' The inputs must be in quotes. +#' For example, plot_compare(type = "c", data1 = x1, data2 = x2, color = "#333333"). +#' @param fill1 Item that allows the user to choose the color of the ribbons in the graph +#' for data1. By default, it is set to fill1 = "#239a98". The inputs must be in quotes. +#' For example, plot_compare(type = "c", data1 = x1, data2 = x2, fill1 = "#333333"). +#' @param fill2 Item that allows the user to choose the color of the ribbons in the graph +#' for data2. By default, it is set to fill2 = "#d46c5b". The inputs must be in quotes. +#' For example, plot_compare(type = "c", data1 = x1, data2 = x2, fill2 = "#333333"). +#' @return A plot that compares two functions. 
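+#' @details A brief, purely illustrative sketch (df1 and df2 are placeholder
+#' names for dataframes returned by any concurve interval function, such as
+#' curve_mean() or curve_gen()): two consonance functions can be compared and
+#' the studies distinguished by ribbon color via the new fill1/fill2 arguments,
+#' plot_compare(data1 = df1, data2 = df2, type = "c",
+#'   fill1 = "#239a98", fill2 = "#d46c5b").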
+#' @examples +#' \donttest{ +#' library(concurve) +#' +#' GroupA <- rnorm(50) +#' GroupB <- rnorm(50) +#' RandomData <- data.frame(GroupA, GroupB) +#' intervalsdf <- curve_mean(GroupA, GroupB, data = RandomData) +#' GroupA2 <- rnorm(50) +#' GroupB2 <- rnorm(50) +#' RandomData2 <- data.frame(GroupA2, GroupB2) +#' model <- lm(GroupA2 ~ GroupB2, data = RandomData2) +#' +#' randomframe <- curve_gen(model, "GroupB2") +#' +#' (plot_compare(intervalsdf[[1]], randomframe[[1]], type = "s")) +#' } +#' +plot_compare <- function(data1, data2, type = "c", measure = "default", nullvalue = FALSE, position = "pyramid", + title = "Interval Functions", + subtitle = "The function displays intervals at every level.", + xaxis = expression(Theta ~ "Range of Values"), + yaxis = "P-value", + color = "#000000", + fill1 = "#239a98", + fill2 = "#d46c5b") { + cols <- c(fill1, fill2) + + # Consonance Curve ----------------------------------------------------- + + if (type == "c") { + if (is(data1, "concurve") != TRUE) { + stop("Error: 'data1' must be a data frame from 'concurve'.") + } + if (ncol(data1) != 7) { + stop("Error: 'data1' must be a data frame from 'concurve'.") + } + if (is(data2, "concurve") != TRUE) { + stop("Error: 'data2' must be a data frame from 'concurve'.") + } + if (ncol(data2) != 7) { + stop("Error: 'data2' must be a data frame from 'concurve'.") + } + if (is.character(measure) != TRUE) { + stop("Error: 'measure' must be a string such as 'default' or 'ratio'.") + } + if (is.logical(nullvalue) != TRUE) { + stop("Error: 'nullvalue' must be a logical statement such as 'TRUE' or 'FALSE'.") + } + if (is.character(position) != TRUE) { + stop("Error: 'position' must be a string such as 'pyramid' or 'inverted'.") + } + if (is.character(title) != TRUE) { + stop("Error: 'title' must be a string.") + } + if (is.character(subtitle) != TRUE) { + stop("Error: 'subtitle' must be a string.") + } + if (is.character(yaxis) != TRUE) { + stop("Error: 'yaxis' must be a string.") + } + if 
(is.character(fill1) != TRUE) { + stop("Error: 'fill1' must be a string for the color.") + } + if (is.character(fill2) != TRUE) { + stop("Error: 'fill2' must be a string for the color.") + } + ggplot(data = data1) + + geom_line(aes(x = lower.limit, y = pvalue), + color = color + ) + + geom_line(aes(x = upper.limit, y = pvalue), + color = color + ) + + geom_ribbon(aes(x = lower.limit, ymin = min(pvalue), ymax = pvalue, fill = fill1), + alpha = 0.30 + ) + + geom_ribbon(aes(x = upper.limit, ymin = min(pvalue), ymax = pvalue, fill = fill1), + alpha = 0.30 + ) + + geom_line( + data = data2, aes(x = lower.limit, y = pvalue), + color = color + ) + + geom_line( + data = data2, aes(x = upper.limit, y = pvalue), + color = color + ) + + geom_ribbon( + data = data2, aes(x = lower.limit, ymin = min(pvalue), ymax = pvalue, fill = fill2), + alpha = 0.30 + ) + + geom_ribbon( + data = data2, aes(x = upper.limit, ymin = min(pvalue), ymax = pvalue, fill = fill2), + alpha = 0.30 + ) + + theme_bw() + + labs( + title = title, + subtitle = subtitle, + x = xaxis, + y = yaxis + ) + + theme( + plot.title = element_text(size = 12), + plot.subtitle = element_text(size = 11), + axis.title.x = element_text(size = 12), + axis.title.y = element_text(size = 12), + text = element_text(size = 11), + legend.background = element_blank(), + legend.position = c(.998, .95), + legend.justification = c("right", "top"), + legend.key = element_rect(linetype = 1), + legend.key.size = unit(0.495, "cm") + ) + + scale_fill_manual( + aesthetics = "fill", + values = cols, + labels = c("Study 1", "Study 2") + ) + + guides(fill = guide_legend( + title = "Identity", + title.theme = element_text( + size = 8 + ), + label.theme = element_text( + size = 8 + ), + label.hjust = 4.5 + )) + + { + if (measure == "default") scale_x_continuous(breaks = scales::pretty_breaks(n = 5)) + } + + { + if (measure == "ratio") scale_x_log10(breaks = scales::pretty_breaks(n = 5)) + } + + { + if (position == "inverted") { + 
scale_y_reverse( + expand = expand_scale(mult = c(0.01, 0.025)), + breaks = seq(0, 1, 0.10), + sec.axis = sec_axis(~ (1 - .) * 100, name = "Levels for CI (%)", breaks = seq(0, 100, 10)) + ) + } + } + + { + if (position == "pyramid") { + scale_y_continuous( + expand = expand_scale(mult = c(0.01, 0.025)), + breaks = seq(0, 1, 0.10), + sec.axis = sec_axis(~ (1 - .) * 100, name = "Levels for CI (%)", breaks = seq(0, 100, 10)) + ) + } + } + + if (nullvalue == TRUE) { + if (measure == "default") { + annotate("segment", + x = 0, xend = 0, y = 0, yend = 1, + color = "#990000", alpha = 0.3, size = .75, linetype = 1 + ) + } else if (measure == "ratio") { + annotate("segment", + x = 1, xend = 1, y = 0, yend = 1, + color = "#990000", alpha = 0.3, size = .75, linetype = 1 + ) + } + } + else if (nullvalue == FALSE) { + } + + + # Surprisal Curve ------------------------------------------------------ + } else if (type == "s") { + if (is(data1, "concurve") != TRUE) { + stop("Error: 'data1' must be a data frame from 'concurve'.") + } + if (ncol(data1) != 7) { + stop("Error: 'data1' must be a data frame from 'concurve'.") + } + if (is(data2, "concurve") != TRUE) { + stop("Error: 'data2' must be a data frame from 'concurve'.") + } + if (ncol(data2) != 7) { + stop("Error: 'data2' must be a data frame from 'concurve'.") + } + if (is.character(measure) != TRUE) { + stop("Error: 'measure' must be a string such as 'default' or 'ratio'.") + } + if (is.character(title) != TRUE) { + stop("Error: 'title' must be a string.") + } + if (is.character(subtitle) != TRUE) { + stop("Error: 'subtitle' must be a string.") + } + if (is.character(yaxis) != TRUE) { + stop("Error: 'yaxis' must be a string.") + } + if (is.character(fill1) != TRUE) { + stop("Error: 'fill1' must be a string for the color.") + } + if (is.character(fill2) != TRUE) { + stop("Error: 'fill2' must be a string for the color.") + } + ggplot(data = data1) + + geom_line(aes(x = lower.limit, y = svalue), + color = color + ) + + geom_line(aes(x = 
upper.limit, y = svalue), + color = color + ) + + geom_ribbon(aes(x = lower.limit, ymin = max(svalue), ymax = svalue, fill = fill1), + alpha = 0.30 + ) + + geom_ribbon(aes(x = upper.limit, ymin = max(svalue), ymax = svalue, fill = fill1), + alpha = 0.30 + ) + + geom_line( + data = data2, aes(x = lower.limit, y = svalue), + color = color + ) + + geom_line( + data = data2, aes(x = upper.limit, y = svalue), + color = color + ) + + geom_ribbon( + data = data2, aes(x = lower.limit, ymin = max(svalue), ymax = svalue, fill = fill2), + alpha = 0.30 + ) + + geom_ribbon( + data = data2, aes(x = upper.limit, ymin = max(svalue), ymax = svalue, fill = fill2), + alpha = 0.30 + ) + + labs( + title = title, + subtitle = subtitle, + x = xaxis, + y = "S-value \n(Bits of Information)" + ) + + theme_bw() + + theme( + plot.title = element_text(size = 12), + plot.subtitle = element_text(size = 11), + axis.title.x = element_text(size = 12), + axis.title.y = element_text(size = 12), + text = element_text(size = 11), + legend.background = element_blank(), + legend.position = c(.998, .25), + legend.justification = c("right", "top"), + legend.key = element_rect(linetype = 1), + legend.key.size = unit(0.495, "cm") + ) + + scale_fill_manual( + aesthetics = "fill", + values = cols, + labels = c("Study 1", "Study 2") + ) + + guides(fill = guide_legend( + title = "Identity", + title.theme = element_text( + size = 8 + ), + label.theme = element_text( + size = 8 + ), + label.hjust = 4.5 + )) + + { + if (measure == "default") scale_x_continuous(breaks = scales::pretty_breaks(n = 5)) + } + + { + if (measure == "ratio") scale_x_log10(breaks = scales::pretty_breaks(n = 5)) + } + + scale_y_continuous(breaks = seq(0, 14, 1.0), expand = c(0.0075, 0.0075)) + + + # Relative Likelihood Function ----------------------------------------------------- + } else if (type == "l1") { + if (ncol(data1) != 5) { + stop("Error: 'data1' must be a data frame from 'concurve'.") + } + if (ncol(data2) != 5) { + 
stop("Error: 'data2' must be a data frame from 'concurve'.") + } + if (is.character(measure) != TRUE) { + stop("Error: 'measure' must be a string such as 'default' or 'ratio'.") + } + if (is.logical(nullvalue) != TRUE) { + stop("Error: 'nullvalue' must be a logical statement such as 'TRUE' or 'FALSE'.") + } + if (is.character(title) != TRUE) { + stop("Error: 'title' must be a string.") + } + if (is.character(subtitle) != TRUE) { + stop("Error: 'subtitle' must be a string.") + } + if (is.character(yaxis) != TRUE) { + stop("Error: 'yaxis' must be a string.") + } + if (is.character(fill1) != TRUE) { + stop("Error: 'fill1' must be a string for the color.") + } + if (is.character(fill2) != TRUE) { + stop("Error: 'fill2' must be a string for the color.") + } + + ggplot(data = data1, mapping = aes(x = values, y = support)) + + geom_line() + + geom_ribbon(aes(x = values, ymin = min(support), ymax = support, fill = fill1), alpha = 0.30) + + geom_line(data = data2) + + geom_ribbon(data = data2, aes(x = values, ymin = min(support), ymax = support, fill = fill2), alpha = 0.30) + + labs( + title = "Relative Likelihood Functions", + subtitle = subtitle, + x = xaxis, + y = "Relative Likelihood" + ) + + theme_bw() + + theme( + plot.title = element_text(size = 12), + plot.subtitle = element_text(size = 11), + axis.title.x = element_text(size = 12), + axis.title.y = element_text(size = 12), + text = element_text(size = 11), + legend.background = element_blank(), + legend.position = c(.998, .95), + legend.justification = c("right", "top"), + legend.key = element_rect(linetype = 1), + legend.key.size = unit(0.495, "cm") + ) + + scale_fill_manual( + aesthetics = "fill", + values = cols, + labels = c("Study 1", "Study 2") + ) + + guides(fill = guide_legend( + title = "Identity", + title.theme = element_text( + size = 8 + ), + label.theme = element_text( + size = 8 + ), + label.hjust = 4.5 + )) + + { + if (measure == "ratio") scale_x_log10(breaks = scales::pretty_breaks(n = 10)) + } + + 
scale_y_continuous(expand = expand_scale(mult = c(0.01, 0.05)), breaks = scales::pretty_breaks(n = 10)) + + if (nullvalue == TRUE) { + if (measure == "default") { + annotate("segment", + x = 0, xend = 0, y = 0, yend = 1, + color = "#990000", alpha = 0.3, size = .75, linetype = 1 + ) + } else if (measure == "ratio") { + annotate("segment", + x = 1, xend = 1, y = 0, yend = 1, + color = "#990000", alpha = 0.3, size = .75, linetype = 1 + ) + } + } + + + # Log-Likelihood Function ----------------------------------------------------- + } else if (type == "l2") { + if (ncol(data1) != 5) { + stop("Error: 'data1' must be a data frame from 'concurve'.") + } + if (ncol(data2) != 5) { + stop("Error: 'data2' must be a data frame from 'concurve'.") + } + if (is.character(measure) != TRUE) { + stop("Error: 'measure' must be a string such as 'default' or 'ratio'.") + } + if (is.logical(nullvalue) != TRUE) { + stop("Error: 'nullvalue' must be a logical statement such as 'TRUE' or 'FALSE'.") + } + if (is.character(title) != TRUE) { + stop("Error: 'title' must be a string.") + } + if (is.character(subtitle) != TRUE) { + stop("Error: 'subtitle' must be a string.") + } + if (is.character(yaxis) != TRUE) { + stop("Error: 'yaxis' must be a string.") + } + if (is.character(fill1) != TRUE) { + stop("Error: 'fill1' must be a string for the color.") + } + if (is.character(fill2) != TRUE) { + stop("Error: 'fill2' must be a string for the color.") + } + + ggplot(data = data1, mapping = aes(x = values, y = loglikelihood)) + + geom_line() + + geom_ribbon(aes(x = values, ymin = min(loglikelihood), ymax = loglikelihood, fill = fill1), alpha = 0.30) + + geom_line(data = data2) + + geom_ribbon(data = data2, aes(x = values, ymin = min(loglikelihood), ymax = loglikelihood, fill = fill2), alpha = 0.30) + + labs( + title = "Log Likelihood Function", + subtitle = subtitle, + x = xaxis, + y = "Log Likelihood" + ) + + theme_bw() + + theme( + plot.title = element_text(size = 12), + plot.subtitle = 
element_text(size = 11), + axis.title.x = element_text(size = 12), + axis.title.y = element_text(size = 12), + text = element_text(size = 11), + legend.background = element_blank(), + legend.position = c(.998, .95), + legend.justification = c("right", "top"), + legend.key = element_rect(linetype = 1), + legend.key.size = unit(0.495, "cm") + ) + + scale_fill_manual( + aesthetics = "fill", + values = cols, + labels = c("Study 1", "Study 2") + ) + + guides(fill = guide_legend( + title = "Identity", + title.theme = element_text( + size = 8 + ), + label.theme = element_text( + size = 8 + ), + label.hjust = 4.5 + )) + + { + if (measure == "ratio") scale_x_log10(breaks = scales::pretty_breaks(n = 10)) + } + + scale_y_continuous(expand = expand_scale(mult = c(0.01, 0.05)), breaks = scales::pretty_breaks(n = 10)) + + if (nullvalue == TRUE) { + if (measure == "default") { + annotate("segment", + x = 0, xend = 0, y = 0, yend = 1, + color = "#990000", alpha = 0.3, size = .75, linetype = 1 + ) + } else if (measure == "ratio") { + annotate("segment", + x = 1, xend = 1, y = 0, yend = 1, + color = "#990000", alpha = 0.3, size = .75, linetype = 1 + ) + } + } + + + # Likelihood Function ----------------------------------------------------- + } else if (type == "l3") { + if (ncol(data1) != 5) { + stop("Error: 'data1' must be a data frame from 'concurve'.") + } + if (ncol(data2) != 5) { + stop("Error: 'data2' must be a data frame from 'concurve'.") + } + if (is.character(measure) != TRUE) { + stop("Error: 'measure' must be a string such as 'default' or 'ratio'.") + } + if (is.logical(nullvalue) != TRUE) { + stop("Error: 'nullvalue' must be a logical statement such as 'TRUE' or 'FALSE'.") + } + if (is.character(title) != TRUE) { + stop("Error: 'title' must be a string.") + } + if (is.character(subtitle) != TRUE) { + stop("Error: 'subtitle' must be a string.") + } + if (is.character(yaxis) != TRUE) { + stop("Error: 'yaxis' must be a string.") + } + if (is.character(fill1) != TRUE) { + 
stop("Error: 'fill1' must be a string for the color.") + } + if (is.character(fill2) != TRUE) { + stop("Error: 'fill2' must be a string for the color.") + } + + ggplot(data = data1, mapping = aes(x = values, y = likelihood)) + + geom_line() + + geom_ribbon(aes(x = values, ymin = min(likelihood), ymax = likelihood, fill = fill1), alpha = 0.30) + + geom_line(data = data2) + + geom_ribbon(data = data2, aes(x = values, ymin = min(likelihood), ymax = likelihood, fill = fill2), alpha = 0.30) + + labs( + title = "Likelihood Function", + subtitle = subtitle, + x = xaxis, + y = "Likelihood" + ) + + theme_bw() + + theme( + plot.title = element_text(size = 12), + plot.subtitle = element_text(size = 11), + axis.title.x = element_text(size = 12), + axis.title.y = element_text(size = 12), + text = element_text(size = 11), + legend.background = element_blank(), + legend.position = c(.998, .95), + legend.justification = c("right", "top"), + legend.key = element_rect(linetype = 1), + legend.key.size = unit(0.495, "cm") + ) + + scale_fill_manual( + aesthetics = "fill", + values = cols, + labels = c("Study 1", "Study 2") + ) + + guides(fill = guide_legend( + title = "Identity", + title.theme = element_text( + size = 8 + ), + label.theme = element_text( + size = 8 + ), + label.hjust = 4.5 + )) + + { + if (measure == "ratio") scale_x_log10(breaks = scales::pretty_breaks(n = 10)) + } + + scale_y_continuous(expand = expand_scale(mult = c(0.01, 0.05)), breaks = scales::pretty_breaks(n = 10)) + + if (nullvalue == TRUE) { + if (measure == "default") { + annotate("segment", + x = 0, xend = 0, y = 0, yend = 1, + color = "#990000", alpha = 0.3, size = .75, linetype = 1 + ) + } else if (measure == "ratio") { + annotate("segment", + x = 1, xend = 1, y = 0, yend = 1, + color = "#990000", alpha = 0.3, size = .75, linetype = 1 + ) + } + } + + # Deviance Function ----------------------------------------------------- + } else if (type == "d") { + if (ncol(data1) != 5) { + stop("Error: 'data1' must be 
a data frame from 'concurve'.") + } + if (ncol(data2) != 5) { + stop("Error: 'data2' must be a data frame from 'concurve'.") + } + if (is.character(measure) != TRUE) { + stop("Error: 'measure' must be a string such as 'default' or 'ratio'.") + } + if (is.logical(nullvalue) != TRUE) { + stop("Error: 'nullvalue' must be a logical statement such as 'TRUE' or 'FALSE'.") + } + if (is.character(title) != TRUE) { + stop("Error: 'title' must be a string.") + } + if (is.character(subtitle) != TRUE) { + stop("Error: 'subtitle' must be a string.") + } + if (is.character(yaxis) != TRUE) { + stop("Error: 'yaxis' must be a string.") + } + if (is.character(fill1) != TRUE) { + stop("Error: 'fill1' must be a string for the color.") + } + if (is.character(fill2) != TRUE) { + stop("Error: 'fill2' must be a string for the color.") + } + + ggplot(data = data1, mapping = aes(x = values, y = deviancestat)) + + geom_line() + + geom_ribbon(aes(x = values, ymin = deviancestat, ymax = max(deviancestat), fill = fill1), alpha = 0.30) + + geom_line(data = data2) + + geom_ribbon(data = data2, aes(x = values, ymin = deviancestat, ymax = max(deviancestat), fill = fill2), alpha = 0.30) + + labs( + title = "Deviance Functions", + subtitle = subtitle, + x = xaxis, + y = "Deviance Statistic \n2ln(MLR)" + ) + + theme_bw() + + theme( + plot.title = element_text(size = 12), + plot.subtitle = element_text(size = 11), + axis.title.x = element_text(size = 12), + axis.title.y = element_text(size = 12), + text = element_text(size = 11), + legend.background = element_blank(), + legend.position = c(.998, .35), + legend.justification = c("right", "top"), + legend.key = element_rect(linetype = 1), + legend.key.size = unit(0.495, "cm") + ) + + scale_fill_manual( + aesthetics = "fill", + values = cols, + labels = c("Study 1", "Study 2") + ) + + guides(fill = guide_legend( + title = "Identity", + title.theme = element_text( + size = 8 + ), + label.theme = element_text( + size = 8 + ), + label.hjust = 4.5 + )) + + { 
+ if (measure == "ratio") scale_x_log10(breaks = scales::pretty_breaks(n = 10)) + } + + scale_y_continuous(breaks = scales::pretty_breaks(n = 10), expand = c(0.0075, 0.0075)) + } +} + + +# RMD Check +utils::globalVariables(c("df", "lower.limit", "upper.limit", "intrvl.width", "intrvl.level", "cdf", "pvalue", "svalue")) diff --git a/R/utils-tidy-eval.R b/R/utils-tidy-eval.R new file mode 100644 index 0000000..af7c474 --- /dev/null +++ b/R/utils-tidy-eval.R @@ -0,0 +1,47 @@ +#' Tidy eval helpers +#' +#' @description +#' +#' * \code{\link[rlang:quotation]{sym}()} creates a symbol from a string and +#' \code{\link[rlang:quotation]{syms}()} creates a list of symbols from a +#' character vector. +#' +#' * \code{\link[rlang:quotation]{enquo}()} and +#' \code{\link[rlang:quotation]{enquos}()} delay the execution of one or +#' several function arguments. \code{enquo()} returns a single quoted +#' expression, which is like a blueprint for the delayed computation. +#' \code{enquos()} returns a list of such quoted expressions. +#' +#' * \code{\link[rlang:quotation]{expr}()} quotes a new expression _locally_. It +#' is mostly useful to build new expressions around arguments +#' captured with [enquo()] or [enquos()]: +#' \code{expr(mean(!!enquo(arg), na.rm = TRUE))}. +#' +#' * \code{\link[rlang]{as_name}()} transforms a quoted variable name +#' into a string. Supplying something else than a quoted variable +#' name is an error. +#' +#' That's unlike \code{\link[rlang]{as_label}()} which also returns +#' a single string but supports any kind of R object as input, +#' including quoted function calls and vectors. Its purpose is to +#' summarise that object into a single label. That label is often +#' suitable as a default name. +#' +#' If you don't know what a quoted expression contains (for instance +#' expressions captured with \code{enquo()} could be a variable +#' name, a call to a function, or an unquoted constant), then use +#' \code{as_label()}. 
If you know you have quoted a simple variable +#' name, or would like to enforce this, use \code{as_name()}. +#' +#' To learn more about tidy eval and how to use these tools, visit +#' \url{https://tidyeval.tidyverse.org} and the +#' \href{https://adv-r.hadley.nz/metaprogramming.html}{Metaprogramming +#' section} of \href{https://adv-r.hadley.nz}{Advanced R}. +#' +#' @md +#' @name tidyeval +#' @keywords internal +#' @importFrom rlang expr enquo enquos sym syms .data := as_name as_label +#' @aliases expr enquo enquos sym syms .data := as_name as_label +#' @export expr enquo enquos sym syms .data := as_name as_label +NULL diff --git a/README.Rmd b/README.Rmd index c286d42..a795669 100644 --- a/README.Rmd +++ b/README.Rmd @@ -1,61 +1,69 @@ --- -title: concurve output: github_document --- -concurve | Graph Interval Functions -================ +
+ concurve | Graph Interval Functions +
-[![CRAN\_Status\_Badge](http://www.r-pkg.org/badges/version/concurve)](https://cran.r-project.org/package=concurve) -[![Build Status](https://travis-ci.org/Zadchow/concurve.svg?branch=master)](https://travis-ci.org/Zadchow/concurve) -[![Build status](https://ci.appveyor.com/api/projects/status/v8sp9x96dap2om9s?svg=true)](https://ci.appveyor.com/project/Zadchow/concurve) -[![DOI](https://zenodo.org/badge/165464881.svg)](https://zenodo.org/badge/latestdoi/165464881) +* * * + + +[![CRAN status](https://www.r-pkg.org/badges/version/concurve)](https://CRAN.R-project.org/package=concurve) +[![Travis build status](https://travis-ci.org/Zadchow/concurve.svg?branch=master)](https://travis-ci.org/Zadchow/concurve) +[![Lifecycle: maturing](https://img.shields.io/badge/lifecycle-maturing-blue.svg)](https://www.tidyverse.org/lifecycle/#maturing) [![](https://cranlogs.r-pkg.org/badges/grand-total/concurve)](https://cran.r-project.org/package=concurve) [![Rdoc](http://www.rdocumentation.org/badges/version/concurve)](http://www.rdocumentation.org/packages/concurve) +[![License: GPL v3](https://img.shields.io/badge/License-GPL%20v3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0) + -> In addition to the overt statistical position, the p-value function also provides easily and accurately many of the familiar types of summary information: a **median estimate** of the parameter; a **one-sided test statistic** for a scalar parameter value at any chosen level; the related **power function**; a **lower confidence bound** at any level; an **upper confidence bound** at any level; and **confidence intervals** with chosen upper and lower confidence limits. The p value reports all the **common inference material**, but with **high accuracy, basic uniqueness, and wide generality**. -> -> From a scientific perspective, the likelihood function and p-value function provide the basis for scientific judgments by an investigator, and by other investigators who might have interest. 
**It thus replaces a blunt yes or no decision by an opportunity for appropriate informed judgment.**” - [D. A. S. Fraser, 2019](https://doi.org/10.1080/00031305.2018.1556735) +### [Compare](https://data.lesslikely.com/concurve/reference/curve_compare.html) Functions From Different Datasets/Studies -# Examples + + + + - - +* * * -* * * +### [Export Tables](https://data.lesslikely.com/concurve/reference/curve_table.html) Easily For Word, PowerPoint, & TeX documents 
+ -# Installation + +
-## For R: +* * * -### Install the Package From CRAN +### Install the Package From [CRAN](https://cran.r-project.org/package=concurve) Below To Follow The [Examples](https://data.lesslikely.com/concurve/articles/examples.html). +(`Serious Recommendation`) -``` r +``` install.packages("concurve") ``` - -### Install the Developer Version - -```r -library(devtools) -install_github("zadchow/concurve") -``` - -### Check out the [Examples](https://data.lesslikely.com/concurve/articles/examples.html). - * * * -## For Stata: - -### Check out the [Article on Using Stata](https://data.lesslikely.com/concurve/articles/stata.html) for concurve. +## Check out the [Article on Using Stata](https://data.lesslikely.com/concurve/articles/stata.html) for concurve. * * * -# Dependencies +## Dependencies * ggplot2 * metafor * parallel +* MASS +* boot +* bcaboot +* compiler +* ProfileLikelihood +* pbmcapply +* rlang +* magrittr * dplyr +* tidyr +* knitr +* flextable +* officer * tibble * survival * survminer @@ -63,38 +71,41 @@ install_github("zadchow/concurve") * * * -> "*Statistical software enables and promotes cargo-cult statistics. -> Marketing and adoption of statistical software are driven by ease of -> use and the range of statistical routines the software implements. -> Offering complex and “modern” methods provides a competitive -> advantage. And some disciplines have in effect standardised on -> particular statistical software, often proprietary software*. +# Purpose + +> In particular, the usual 95% default forces the user’s focus onto parameter values that yield p > 0.05, without regard to the trivial difference between (say) p = 0.06 and p = 0.04 (a difference not even worth a coin toss). To address this problem, we first note that a 95% interval estimate is only one of a number of arbitrary dichotomization of possibilities of parameter values (into either inside or outside of an interval). 
A more accurate picture of uncertainty is then obtained by examining intervals using other percentiles, e.g., proportionally-spaced compatibility levels such as p = 0.25, 0.05, 0.01, which correspond to 75%, 95%, 99% CIs and equally-spaced S-values of s = 2, 4.32, 6.64 bits. When a detailed picture is desired, a table or graph of all the P-values and S-values across a broad range of parameter values seems the clearest way to see how compatibility +varies smoothly across the values. +> +> Graphs of P-values or their equivalent have been promoted for decades [34, 56–59], yet their adoption has been slight. Nonetheless, P-value and S-value graphing software is now available freely through several statistical packages [60–62]. A graph of the P-values p against possible parameter values allows one to see at a glance which parameter values are most compatible with the data under the background assumptions. This graph is known as the P-value function, or compatibility, consonance, or confidence curve [34, 57, 58, 63, 64]. Transforming the corresponding P-values in the graph to S-values produces an S-value (surprisal) function. > -> *Statistical software does not help you know what to compute, nor how -> to interpret the result. It does not offer to explain the assumptions -> behind methods, nor does it flag delicate or dubious assumptions. It -> does not warn you about multiplicity or p-hacking. It does not check -> whether you picked the hypothesis or analysis after looking at the -> data, nor track the number of analyses you tried before arriving at -> the one you sought to publish – another form of multiplicity. The more -> “powerful” and “user-friendly” the software is, the more it invites -> cargo-cult statistics*." 
- Stark & Saltelli, 2018 +> Following the common (and important) warning that P-values are not hypothesis probabilities, we caution that the P-value graph is not a probability distribution: It shows compatibility of parameter values with the data, rather than plausibility or probability of those values given the data. This is not a subtle difference: compatibility is a much weaker condition than plausibility. Consider for example that complete fabrication of the data is always an explanation compatible with the data (and indeed has happened in some influential medical studies [65]), but in studies with many participants and authors involved in all aspects of data collection it becomes so implausible or improbable as to not even merit mention. We emphasize then that all the P-value ever addresses in a direct logical sense is compatibility; for hypothesis probabilities one must turn to Bayesian methods [25]. - [Chow & Greenland, 2019](https://arxiv.org/abs/1909.08579); [Greenland & Chow, 2019](https://arxiv.org/abs/1909.08583) + +* * * + +> In addition to the overt statistical position, the p-value function also provides easily and accurately many of the familiar types of summary information: a **median estimate** of the parameter; a **one-sided test statistic** for a scalar parameter value at any chosen level; the related **power function**; a **lower confidence bound** at any level; an **upper confidence bound** at any level; and **confidence intervals** with chosen upper and lower confidence limits. The p value reports all the **common inference material**, but with **high accuracy, basic uniqueness, and wide generality**. +> +> From a scientific perspective, the **likelihood function** and **p-value function** provide the basis for scientific judgments by an investigator, and by other investigators who might have interest. 
**It thus replaces a blunt yes or no decision by an opportunity for appropriate informed judgment.**” - [Fraser, 2019](https://doi.org/10.1080/00031305.2018.1556735) + +* * * + +> *Statistical software does not help you know what to compute, nor how to interpret the result. It does not offer to explain the assumptions behind methods, nor does it flag delicate or dubious assumptions. It does not warn you about multiplicity or p-hacking. It does not check whether you picked the hypothesis or analysis after looking at the data, nor track the number of analyses you tried before arriving at the one you sought to publish – another form of multiplicity. The more “powerful” and “user-friendly” the software is, the more it invites cargo-cult statistics*." - Stark & Saltelli, 2018 # References -1. Stark PB, Saltelli A. Cargo-cult statistics and scientific crisis. - _Significance._ 2018;15(4):40-43. -2. Poole C. Beyond the confidence interval. - _Am J Public Health._ 1987;77(2):195-199. -3. Sullivan KM, Foster DA. Use of the confidence interval function. - _Epidemiology._ 1990;1(1):39-42. -4. Rothman KJ, Greenland S, Lash TL. Modern epidemiology. 2012. -5. Singh K, Xie M, Strawderman WE. Confidence distribution (CD) -- distribution estimator of a parameter. - _arXiv [mathST]_. 2007. -6. Schweder T, Hjort NL. Confidence and Likelihood*. - _Scand J Stat._ 2002;29(2):309-332. -7. Amrhein V, Trafimow D, Greenland S. Inferential Statistics as Descriptive Statistics: There is No Replication Crisis if We Don't Expect Replication. _Am Stat_. 2019 -8. Greenland S. Valid P-values Behave Exactly As They Should. Some misleading criticisms of P-values and their resolution with S-values. _Am Stat_. 2019;18(136). -9. Fraser DAS. The p-value Function and Statistical Inference. _Am Stat_. 2019 -10. Chow ZR, Greenland S. Semantic and Cognitive Tools to Aid Statistical Inference: Replace Confidence and Significance by Compatibility and Surprise. _arXiv:1909.08579 [stat.ME]_. 2019 -11. 
Greenland S, Chow ZR. To Aid Statistical Inference, Emphasize Unconditional Descriptions of Statistics. _arXiv:1909.08583 [stat.ME]_. 2019 +1. Chow ZR, Greenland S. Semantic and Cognitive Tools to Aid Statistical Inference: Replace Confidence and Significance by Compatibility and Surprise. [_arXiv:1909.08579 [stat.ME]_.](https://arxiv.org/abs/1909.08579) 2019 +2. Greenland S, Chow ZR. To Aid Statistical Inference, Emphasize Unconditional Descriptions of Statistics. [_arXiv:1909.08583 [stat.ME]_.](https://arxiv.org/abs/1909.08583) 2019 +3. Poole C. Beyond the confidence interval. _Am J Public Health._ 1987;77(2):195-199. +4. Sullivan KM, Foster DA. Use of the confidence interval function._Epidemiology._ 1990;1(1):39-42. +5. Rothman KJ, Greenland S, Lash TL. Modern epidemiology. 2012. +6. Singh K, Xie M, Strawderman WE. Confidence distribution (CD) -- distribution estimator of a parameter. _arXiv [mathST]_. 2007. +7. Schweder T, Hjort NL. Confidence and Likelihood*. _Scand J Stat._ 2002;29(2):309-332. +8. Amrhein V, Trafimow D, Greenland S. Inferential Statistics as Descriptive Statistics: There is No Replication Crisis if We Don't Expect Replication. _Am Stat_. 2019 +9. Greenland S. Valid P-values Behave Exactly As They Should. Some misleading criticisms of P-values and their resolution with S-values. _Am Stat_. 2019;18(136). +10. Fraser DAS. The p-value Function and Statistical Inference. _Am Stat_. 2019 +11. Stark PB, Saltelli A. Cargo-cult statistics and scientific crisis. _Significance._ 2018;15(4):40-43. + +# Session info + +```{r session_info, include=TRUE, echo=FALSE} +sessionInfo() +``` diff --git a/README.md b/README.md index 22d3241..ab1fd66 100644 --- a/README.md +++ b/README.md @@ -1,74 +1,79 @@ -concurve -================ -# concurve | Graph Interval Functions +
-[![CRAN\_Status\_Badge](http://www.r-pkg.org/badges/version/concurve)](https://cran.r-project.org/package=concurve) -[![Build -Status](https://travis-ci.org/Zadchow/concurve.svg?branch=master)](https://travis-ci.org/Zadchow/concurve) -[![Build -status](https://ci.appveyor.com/api/projects/status/v8sp9x96dap2om9s?svg=true)](https://ci.appveyor.com/project/Zadchow/concurve) -[![DOI](https://zenodo.org/badge/165464881.svg)](https://zenodo.org/badge/latestdoi/165464881) + concurve | Graph Interval Functions + + +
+ +----- + + + +[![CRAN +status](https://www.r-pkg.org/badges/version/concurve)](https://CRAN.R-project.org/package=concurve) +[![Travis build +status](https://travis-ci.org/Zadchow/concurve.svg?branch=master)](https://travis-ci.org/Zadchow/concurve) +[![Lifecycle: +maturing](https://img.shields.io/badge/lifecycle-maturing-blue.svg)](https://www.tidyverse.org/lifecycle/#maturing) [![](https://cranlogs.r-pkg.org/badges/grand-total/concurve)](https://cran.r-project.org/package=concurve) [![Rdoc](http://www.rdocumentation.org/badges/version/concurve)](http://www.rdocumentation.org/packages/concurve) +[![License: GPL +v3](https://img.shields.io/badge/License-GPL%20v3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0) + -> In addition to the overt statistical position, the p-value function -> also provides easily and accurately many of the familiar types of -> summary information: a **median estimate** of the parameter; a -> **one-sided test statistic** for a scalar parameter value at any -> chosen level; the related **power function**; a **lower confidence -> bound** at any level; an **upper confidence bound** at any level; and -> **confidence intervals** with chosen upper and lower confidence -> limits. The p value reports all the **common inference material**, but -> with **high accuracy, basic uniqueness, and wide generality**. -> -> From a scientific perspective, the likelihood function and p-value -> function provide the basis for scientific judgments by an -> investigator, and by other investigators who might have interest. **It -> thus replaces a blunt yes or no decision by an opportunity for -> appropriate informed judgment.**” - [D. A. S. 
-> Fraser, 2019](https://doi.org/10.1080/00031305.2018.1556735) - -# Examples +### [Compare](https://data.lesslikely.com/concurve/reference/curve_compare.html) Functions From Different Datasets/Studies - - + + + + ----- -# Installation +### [Export Tables](https://data.lesslikely.com/concurve/reference/curve_table.html) Easily For Word, PowerPoint, & TeX documents -## For R: +
-### Install the Package From CRAN + -``` r -install.packages("concurve") -``` + -### Install the Developer Version +
-``` r -library(devtools) -install_github("zadchow/concurve") -``` +----- -### Check out the [Examples](https://data.lesslikely.com/concurve/articles/examples.html). +### Install the Package From [CRAN](https://cran.r-project.org/package=concurve) Below To Follow The [Examples](https://data.lesslikely.com/concurve/articles/examples.html). ------ +(`Serious +Recommendation`) -## For Stata: + install.packages("concurve") -### Check out the [Article on Using Stata](https://data.lesslikely.com/concurve/articles/stata.html) for concurve. +----- + +## Check out the [Article on Using Stata](https://data.lesslikely.com/concurve/articles/stata.html) for concurve. ----- -# Dependencies +## Dependencies - ggplot2 - metafor - parallel + - MASS + - boot + - bcaboot + - compiler + - ProfileLikelihood + - pbmcapply + - rlang + - magrittr - dplyr + - tidyr + - knitr + - flextable + - officer - tibble - survival - survminer @@ -76,13 +81,71 @@ install_github("zadchow/concurve") ----- -> "*Statistical software enables and promotes cargo-cult statistics. -> Marketing and adoption of statistical software are driven by ease of -> use and the range of statistical routines the software implements. -> Offering complex and “modern” methods provides a competitive -> advantage. And some disciplines have in effect standardised on -> particular statistical software, often proprietary software*. +# Purpose + +> In particular, the usual 95% default forces the user’s focus onto +> parameter values that yield p \> 0.05, without regard to the trivial +> difference between (say) p = 0.06 and p = 0.04 (a difference not even +> worth a coin toss). To address this problem, we first note that a 95% +> interval estimate is only one of a number of arbitrary dichotomization +> of possibilities of parameter values (into either inside or outside of +> an interval). 
A more accurate picture of uncertainty is then obtained +> by examining intervals using other percentiles, e.g., +> proportionally-spaced compatibility levels such as p = 0.25, 0.05, 0.01, +> which correspond to 75%, 95%, 99% CIs and equally-spaced S-values of s +> = 2, 4.32, 6.64 bits. When a detailed picture is desired, a table or +> graph of all the P-values and S-values across a broad range of +> parameter values seems the clearest way to see how compatibility +> varies smoothly across the values. +> +> Graphs of P-values or their equivalent have been promoted for decades +> \[34, 56–59\], yet their adoption has been slight. Nonetheless, +> P-value and S-value graphing software is now available freely through +> several statistical packages \[60–62\]. A graph of the P-values p +> against possible parameter values allows one to see at a glance which +> parameter values are most compatible with the data under the +> background assumptions. This graph is known as the P-value function, +> or compatibility, consonance, or confidence curve \[34, 57, 58, 63, +> 64\]. Transforming the corresponding P-values in the graph to S-values +> produces an S-value (surprisal) function. +> +> Following the common (and important) warning that P-values are not +> hypothesis probabilities, we caution that the P-value graph is not a +> probability distribution: It shows compatibility of parameter values +> with the data, rather than plausibility or probability of those values +> given the data. This is not a subtle difference: compatibility is a +> much weaker condition than plausibility. Consider for example that +> complete fabrication of the data is always an explanation compatible +> with the data (and indeed has happened in some influential medical +> studies \[65\]), but in studies with many participants and authors +> involved in all aspects of data collection it becomes so implausible +> or improbable as to not even merit mention. 
We emphasize then that all +> the P-value ever addresses in a direct logical sense is compatibility; +> for hypothesis probabilities one must turn to Bayesian methods \[25\]. +> - [Chow & Greenland, 2019](https://arxiv.org/abs/1909.08579); +> [Greenland & Chow, 2019](https://arxiv.org/abs/1909.08583) + +----- + +> In addition to the overt statistical position, the p-value function +> also provides easily and accurately many of the familiar types of +> summary information: a **median estimate** of the parameter; a +> **one-sided test statistic** for a scalar parameter value at any +> chosen level; the related **power function**; a **lower confidence +> bound** at any level; an **upper confidence bound** at any level; and +> **confidence intervals** with chosen upper and lower confidence +> limits. The p value reports all the **common inference material**, but +> with **high accuracy, basic uniqueness, and wide generality**. > +> From a scientific perspective, the **likelihood function** and +> **p-value function** provide the basis for scientific judgments by an +> investigator, and by other investigators who might have interest. **It +> thus replaces a blunt yes or no decision by an opportunity for +> appropriate informed judgment.**” - +> [Fraser, 2019](https://doi.org/10.1080/00031305.2018.1556735) + +----- + > *Statistical software does not help you know what to compute, nor how > to interpret the result. It does not offer to explain the assumptions > behind methods, nor does it flag delicate or dubious assumptions. It @@ -95,28 +158,49 @@ install_github("zadchow/concurve") # References -1. Stark PB, Saltelli A. Cargo-cult statistics and scientific crisis. - *Significance.* 2018;15(4):40-43. -2. Poole C. Beyond the confidence interval. *Am J Public Health.* +1. Chow ZR, Greenland S. Semantic and Cognitive Tools to Aid + Statistical Inference: Replace Confidence and Significance by + Compatibility and Surprise. 
[*arXiv:1909.08579 + \[stat.ME\]*.](https://arxiv.org/abs/1909.08579) 2019 +2. Greenland S, Chow ZR. To Aid Statistical Inference, Emphasize + Unconditional Descriptions of Statistics. [*arXiv:1909.08583 + \[stat.ME\]*.](https://arxiv.org/abs/1909.08583) 2019 +3. Poole C. Beyond the confidence interval. *Am J Public Health.* 1987;77(2):195-199. -3. Sullivan KM, Foster DA. Use of the confidence interval function. - *Epidemiology.* 1990;1(1):39-42. -4. Rothman KJ, Greenland S, Lash TL. Modern epidemiology. 2012. -5. Singh K, Xie M, Strawderman WE. Confidence distribution (CD) – +4. Sullivan KM, Foster DA. Use of the confidence interval + function. *Epidemiology.* 1990;1(1):39-42. +5. Rothman KJ, Greenland S, Lash TL. Modern epidemiology. 2012. +6. Singh K, Xie M, Strawderman WE. Confidence distribution (CD) – distribution estimator of a parameter. *arXiv \[mathST\]*. 2007. -6. Schweder T, Hjort NL. Confidence and Likelihood\*. *Scand J Stat.* +7. Schweder T, Hjort NL. Confidence and Likelihood\*. *Scand J Stat.* 2002;29(2):309-332. -7. Amrhein V, Trafimow D, Greenland S. Inferential Statistics as +8. Amrhein V, Trafimow D, Greenland S. Inferential Statistics as Descriptive Statistics: There is No Replication Crisis if We Don’t Expect Replication. *Am Stat*. 2019 -8. Greenland S. Valid P-values Behave Exactly As They Should. Some +9. Greenland S. Valid P-values Behave Exactly As They Should. Some misleading criticisms of P-values and their resolution with S-values. *Am Stat*. 2019;18(136). -9. Fraser DAS. The p-value Function and Statistical Inference. *Am +10. Fraser DAS. The p-value Function and Statistical Inference. *Am Stat*. 2019 -10. Chow ZR, Greenland S. Semantic and Cognitive Tools to Aid - Statistical Inference: Replace Confidence and Significance by - Compatibility and Surprise. *arXiv:1909.08579 \[stat.ME\]*. 2019 -11. Greenland S, Chow ZR. To Aid Statistical Inference, Emphasize - Unconditional Descriptions of Statistics. 
*arXiv:1909.08583 - \[stat.ME\]*. 2019 +11. Stark PB, Saltelli A. Cargo-cult statistics and scientific crisis. + *Significance.* 2018;15(4):40-43. + +# Session info + + ## R version 3.6.1 (2019-07-05) + ## Platform: x86_64-apple-darwin15.6.0 (64-bit) + ## Running under: macOS Catalina 10.15.1 + ## + ## Matrix products: default + ## BLAS: /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRblas.0.dylib + ## LAPACK: /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRlapack.dylib + ## + ## locale: + ## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8 + ## + ## attached base packages: + ## [1] stats graphics grDevices utils methods base + ## + ## loaded via a namespace (and not attached): + ## [1] compiler_3.6.1 magrittr_1.5 tools_3.6.1 htmltools_0.4.0 yaml_2.2.0 Rcpp_1.0.3 stringi_1.4.3 + ## [8] rmarkdown_1.18 knitr_1.26 stringr_1.4.0 xfun_0.11 digest_0.6.23 rlang_0.4.2 evaluate_0.14 diff --git a/_pkgdown.yml b/_pkgdown.yml index 5438c4f..961a678 100644 --- a/_pkgdown.yml +++ b/_pkgdown.yml @@ -1,5 +1,7 @@ url: https://data.lesslikely.com/concurve/ +title: concurve + authors: Zad R. Chow: href: https://lesslikely.com/ @@ -7,6 +9,9 @@ authors: href: https://www.researchgate.net/profile/Andrew_Vigotsky template: + path: inst/templates + assets: inst/assets + default_assets: false params: bootswatch: paper docsearch: @@ -14,6 +19,7 @@ template: index_name: lesslikely-concurve navbar: + type: default structure: left: - home @@ -42,4 +48,4 @@ navbar: href: news/index.html github: icon: fa-github fa-lg - href: https://github.com/Zadchow/concurve + href: https://github.com/zadchow/concurve diff --git a/codemeta.json b/codemeta.json index 3842882..b17e955 100644 --- a/codemeta.json +++ b/codemeta.json @@ -5,18 +5,18 @@ ], "@type": "SoftwareSourceCode", "identifier": "concurve", - "description": "Allows one to compute consonance (confidence) intervals for various statistical tests along with their corresponding P-values and S-values. 
The intervals can be plotted to create consonance and surprisal functions allowing one to see what effect sizes are compatible with the test model at various consonance levels rather than being limited to one interval estimate such as 95%. These methods are discussed by Poole C. (1987) , Schweder T, Hjort NL. (2002) , Singh K, Xie M, Strawderman WE. (2007) , Rothman KJ, Greenland S, Lash TL. (2008, ISBN:9781451190052), Amrhein V, Trafimow D, Greenland S. (2019) , Greenland S. (2019) , Chow ZR, Greenland S. (2019) , and Greenland S, Chow ZR. (2019) .", - "name": "concurve: Computes and Plots Consonance (Confidence) Intervals, P-Values, and S-Values to Form Consonance and Surprisal Functions", - "date": "2019-05-10", - "codeRepository": "https://github.com/Zadchow/concurve", + "description": "Allows one to compute consonance (confidence)\n intervals for various statistical tests along with their corresponding\n P-values, S-values, and likelihoods. The intervals can be plotted to\n create consonance, surprisal, and likelihood functions allowing one to\n see what effect sizes are compatible with the test model at various\n consonance levels rather than being limited to one interval estimate\n such as 95%. These methods are discussed by Poole C. (1987)\n , Schweder T, Hjort NL. (2002)\n , Singh K, Xie M, Strawderman WE. (2007)\n , Rothman KJ, Greenland S, Lash TL. (2008,\n ISBN:9781451190052), Amrhein V, Trafimow D, Greenland S. (2019)\n , Greenland S. (2019)\n , Chow ZR, Greenland S. (2019)\n , and Greenland S, Chow ZR. 
(2019)\n .", + "name": "concurve: Computes and Plots Compatibility (Confidence) Intervals,\n P-Values, S-Values, & Likelihood Intervals to Form Consonance,\n Surprisal, & Likelihood Functions", + "date": "2019-11-24", + "codeRepository": "https://github.com/zadchow/concurve", "relatedLink": [ "https://data.lesslikely.com/concurve/", "https://lesslikely.com/", "https://CRAN.R-project.org/package=concurve" ], - "issueTracker": "https://github.com/Zadchow/concurve/issues", + "issueTracker": "https://github.com/zadchow/concurve/issues", "license": "https://spdx.org/licenses/GPL-3.0", - "version": "2.1.0", + "version": "2.3.0", "programmingLanguage": { "@type": "ComputerLanguage", "name": "R", @@ -55,6 +55,42 @@ } ], "softwareSuggestions": [ + { + "@type": "SoftwareApplication", + "identifier": "covr", + "name": "covr", + "provider": { + "@id": "https://cran.r-project.org", + "@type": "Organization", + "name": "Comprehensive R Archive Network (CRAN)", + "url": "https://cran.r-project.org" + }, + "sameAs": "https://CRAN.R-project.org/package=covr" + }, + { + "@type": "SoftwareApplication", + "identifier": "roxygen2", + "name": "roxygen2", + "provider": { + "@id": "https://cran.r-project.org", + "@type": "Organization", + "name": "Comprehensive R Archive Network (CRAN)", + "url": "https://cran.r-project.org" + }, + "sameAs": "https://CRAN.R-project.org/package=roxygen2" + }, + { + "@type": "SoftwareApplication", + "identifier": "spelling", + "name": "spelling", + "provider": { + "@id": "https://cran.r-project.org", + "@type": "Organization", + "name": "Comprehensive R Archive Network (CRAN)", + "url": "https://cran.r-project.org" + }, + "sameAs": "https://CRAN.R-project.org/package=spelling" + }, { "@type": "SoftwareApplication", "identifier": "testthat", @@ -69,34 +105,82 @@ }, { "@type": "SoftwareApplication", - "identifier": "knitr", - "name": "knitr", + "identifier": "rmarkdown", + "name": "rmarkdown", "provider": { "@id": "https://cran.r-project.org", "@type": 
"Organization", "name": "Comprehensive R Archive Network (CRAN)", "url": "https://cran.r-project.org" }, - "sameAs": "https://CRAN.R-project.org/package=knitr" + "sameAs": "https://CRAN.R-project.org/package=rmarkdown" }, { "@type": "SoftwareApplication", - "identifier": "covr", - "name": "covr", + "identifier": "Lock5Data", + "name": "Lock5Data", "provider": { "@id": "https://cran.r-project.org", "@type": "Organization", "name": "Comprehensive R Archive Network (CRAN)", "url": "https://cran.r-project.org" }, - "sameAs": "https://CRAN.R-project.org/package=covr" + "sameAs": "https://CRAN.R-project.org/package=Lock5Data" } ], "softwareRequirements": [ { "@type": "SoftwareApplication", - "identifier": "parallel", - "name": "parallel" + "identifier": "bcaboot", + "name": "bcaboot", + "provider": { + "@id": "https://cran.r-project.org", + "@type": "Organization", + "name": "Comprehensive R Archive Network (CRAN)", + "url": "https://cran.r-project.org" + }, + "sameAs": "https://CRAN.R-project.org/package=bcaboot" + }, + { + "@type": "SoftwareApplication", + "identifier": "boot", + "name": "boot", + "provider": { + "@id": "https://cran.r-project.org", + "@type": "Organization", + "name": "Comprehensive R Archive Network (CRAN)", + "url": "https://cran.r-project.org" + }, + "sameAs": "https://CRAN.R-project.org/package=boot" + }, + { + "@type": "SoftwareApplication", + "identifier": "compiler", + "name": "compiler" + }, + { + "@type": "SoftwareApplication", + "identifier": "dplyr", + "name": "dplyr", + "provider": { + "@id": "https://cran.r-project.org", + "@type": "Organization", + "name": "Comprehensive R Archive Network (CRAN)", + "url": "https://cran.r-project.org" + }, + "sameAs": "https://CRAN.R-project.org/package=dplyr" + }, + { + "@type": "SoftwareApplication", + "identifier": "flextable", + "name": "flextable", + "provider": { + "@id": "https://cran.r-project.org", + "@type": "Organization", + "name": "Comprehensive R Archive Network (CRAN)", + "url": 
"https://cran.r-project.org" + }, + "sameAs": "https://CRAN.R-project.org/package=flextable" }, { "@type": "SoftwareApplication", @@ -110,6 +194,18 @@ }, "sameAs": "https://CRAN.R-project.org/package=ggplot2" }, + { + "@type": "SoftwareApplication", + "identifier": "knitr", + "name": "knitr", + "provider": { + "@id": "https://cran.r-project.org", + "@type": "Organization", + "name": "Comprehensive R Archive Network (CRAN)", + "url": "https://cran.r-project.org" + }, + "sameAs": "https://CRAN.R-project.org/package=knitr" + }, { "@type": "SoftwareApplication", "identifier": "metafor", @@ -124,27 +220,69 @@ }, { "@type": "SoftwareApplication", - "identifier": "dplyr", - "name": "dplyr", + "identifier": "officer", + "name": "officer", "provider": { "@id": "https://cran.r-project.org", "@type": "Organization", "name": "Comprehensive R Archive Network (CRAN)", "url": "https://cran.r-project.org" }, - "sameAs": "https://CRAN.R-project.org/package=dplyr" + "sameAs": "https://CRAN.R-project.org/package=officer" }, { "@type": "SoftwareApplication", - "identifier": "tibble", - "name": "tibble", + "identifier": "parallel", + "name": "parallel" + }, + { + "@type": "SoftwareApplication", + "identifier": "pbmcapply", + "name": "pbmcapply", "provider": { "@id": "https://cran.r-project.org", "@type": "Organization", "name": "Comprehensive R Archive Network (CRAN)", "url": "https://cran.r-project.org" }, - "sameAs": "https://CRAN.R-project.org/package=tibble" + "sameAs": "https://CRAN.R-project.org/package=pbmcapply" + }, + { + "@type": "SoftwareApplication", + "identifier": "ProfileLikelihood", + "name": "ProfileLikelihood", + "provider": { + "@id": "https://cran.r-project.org", + "@type": "Organization", + "name": "Comprehensive R Archive Network (CRAN)", + "url": "https://cran.r-project.org" + }, + "sameAs": "https://CRAN.R-project.org/package=ProfileLikelihood" + }, + { + "@type": "SoftwareApplication", + "identifier": "rlang", + "name": "rlang", + "version": ">= 0.1.2", + 
"provider": { + "@id": "https://cran.r-project.org", + "@type": "Organization", + "name": "Comprehensive R Archive Network (CRAN)", + "url": "https://cran.r-project.org" + }, + "sameAs": "https://CRAN.R-project.org/package=rlang" + }, + { + "@type": "SoftwareApplication", + "identifier": "scales", + "name": "scales", + "provider": { + "@id": "https://cran.r-project.org", + "@type": "Organization", + "name": "Comprehensive R Archive Network (CRAN)", + "url": "https://cran.r-project.org" + }, + "sameAs": "https://CRAN.R-project.org/package=scales" }, { "@type": "SoftwareApplication", @@ -172,19 +310,54 @@ }, { "@type": "SoftwareApplication", - "identifier": "scales", - "name": "scales", + "identifier": "tibble", + "name": "tibble", "provider": { "@id": "https://cran.r-project.org", "@type": "Organization", "name": "Comprehensive R Archive Network (CRAN)", "url": "https://cran.r-project.org" }, - "sameAs": "https://CRAN.R-project.org/package=scales" + "sameAs": "https://CRAN.R-project.org/package=tibble" + }, + { + "@type": "SoftwareApplication", + "identifier": "tidyr", + "name": "tidyr", + "provider": { + "@id": "https://cran.r-project.org", + "@type": "Organization", + "name": "Comprehensive R Archive Network (CRAN)", + "url": "https://cran.r-project.org" + }, + "sameAs": "https://CRAN.R-project.org/package=tidyr" + }, + { + "@type": "SoftwareApplication", + "identifier": "MASS", + "name": "MASS", + "provider": { + "@id": "https://cran.r-project.org", + "@type": "Organization", + "name": "Comprehensive R Archive Network (CRAN)", + "url": "https://cran.r-project.org" + }, + "sameAs": "https://CRAN.R-project.org/package=MASS" + }, + { + "@type": "SoftwareApplication", + "identifier": "methods", + "name": "methods" + }, + { + "@type": "SoftwareApplication", + "identifier": "R", + "name": "R", + "version": ">= 3.2" } ], "keywords": ["confidence", "compatibility", "consonance", "surprisal", "interval", "function", "curve"], - "fileSize": "3833.993KB", + "fileSize": 
"2225.399KB", "contIntegration": [ "https://travis-ci.org/Zadchow/concurve", "https://ci.appveyor.com/project/Zadchow/concurve", @@ -198,6 +371,7 @@ "citation": [ { "@type": "SoftwareSourceCode", + "datePublished": "2019", "author": [ { "@type": "Person", @@ -210,11 +384,9 @@ "familyName": "Vigotsky" } ], - "name": "{concurve}: Computes and Plots Consonance (Confidence) Intervals, P-Values, and S-Values to Form Consonance and Surprisal Functions", - "identifier": "10.5281/zenodo.1308151", - "url": "https://CRAN.R-project.org/package=concurve", - "@id": "https://doi.org/10.5281/zenodo.1308151", - "sameAs": "https://doi.org/10.5281/zenodo.1308151" + "name": "{concurve}: Computes and Plots Compatibility (Confidence) Intervals, P-Values, S-Values, & Likelihood Intervals to Form Consonance, Surprisal, & Likelihood Functions", + "url": "https://CRAN.R-project.org/package=concurve" } - ] + ], + "developmentStatus": "https://www.tidyverse.org/lifecycle/#maturing" } diff --git a/concurve.Rproj b/concurve.Rproj index f0d6187..5e0a049 100644 --- a/concurve.Rproj +++ b/concurve.Rproj @@ -1,15 +1,15 @@ Version: 1.0 -RestoreWorkspace: Default -SaveWorkspace: Default -AlwaysSaveHistory: Default +RestoreWorkspace: No +SaveWorkspace: No +AlwaysSaveHistory: No EnableCodeIndexing: Yes UseSpacesForTab: Yes NumSpacesForTab: 2 Encoding: UTF-8 -RnwWeave: Sweave +RnwWeave: knitr LaTeX: pdfLaTeX AutoAppendNewline: Yes @@ -19,3 +19,5 @@ BuildType: Package PackageUseDevtools: Yes PackageInstallArgs: --no-multiarch --with-keep.source PackageRoxygenize: rd,collate,namespace,vignette + +QuitChildProcessesOnExit: Yes diff --git a/cran-comments.md b/cran-comments.md index e69de29..9a5cfc3 100644 --- a/cran-comments.md +++ b/cran-comments.md @@ -0,0 +1,10 @@ +## Test environments +* local OS X install, R 3.6.1 +* ubuntu 14.04 (on travis-ci), R 3.6.1 +* win-builder (devel and release) + +## R CMD check results + +0 errors | 0 warnings | 1 note + +* This is a new release. 
diff --git a/docs/404.html b/docs/404.html index 375b745..54de992 100644 --- a/docs/404.html +++ b/docs/404.html @@ -82,9 +82,9 @@ - + concurve - 2.1.0 + 2.3.0 @@ -120,7 +120,7 @@ +
+

Community

+ +

Citation

    @@ -232,12 +288,12 @@

    Developers

    Dev status

      -
    • CRAN_Status_Badge
    • -
    • Build Status
    • -
    • Build status
    • -
    • DOI
    • +
    • CRAN status
    • +
    • Travis build status
    • +
    • Lifecycle: maturing
    • Rdoc
    • +
    • License: GPL v3
diff --git a/docs/link.svg b/docs/link.svg deleted file mode 100644 index 88ad827..0000000 --- a/docs/link.svg +++ /dev/null @@ -1,12 +0,0 @@ - - - - - - diff --git a/docs/news/index.html b/docs/news/index.html index 8e666d7..563b32c 100644 --- a/docs/news/index.html +++ b/docs/news/index.html @@ -82,9 +82,9 @@ - + concurve - 2.1.0 + 2.3.0 @@ -120,7 +120,7 @@
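The README passages above turn on the P-value (consonance) function and its S-value transform, s = −log2(p). The sketch below is a minimal base-R illustration of that computation for a single mean under an assumed normal model; it is an illustration of the underlying math, not concurve's own implementation (the package's functions, e.g. `curve_mean()` and `ggcurve()`, construct these curves properly).

``` r
# Hedged sketch: consonance (p-value) function and its S-value
# (surprisal) transform for a single mean, assuming a normal model.
# Illustration only; not the concurve package's implementation.

pvalue_function <- function(estimate, se, thetas) {
  # Two-sided p-value testing each hypothesized parameter value theta:
  # how compatible is theta with the observed estimate?
  z <- (estimate - thetas) / se
  2 * pnorm(-abs(z))
}

svalue <- function(p) -log2(p)  # surprisal, in bits

thetas <- seq(-2, 4, by = 0.01)  # grid of hypothesized parameter values
p <- pvalue_function(estimate = 1, se = 0.5, thetas = thetas)
s <- svalue(p)

# The p-value function peaks at 1 at the point estimate, where the
# S-value bottoms out at 0 bits:
# plot(thetas, p, type = "l")  # consonance (p-value) curve
# plot(thetas, s, type = "l")  # surprisal (S-value) curve
```

Consistent with the levels quoted from Chow & Greenland above, p = 0.05 maps to s = −log2(0.05) ≈ 4.32 bits.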