diff --git a/base/db/man/convert_input.Rd b/base/db/man/convert_input.Rd index 466e46b73a3..9ab1161b640 100644 --- a/base/db/man/convert_input.Rd +++ b/base/db/man/convert_input.Rd @@ -97,14 +97,14 @@ along to fcn. The command to execute fcn is built as a string. \section{Database files}{ There are two kinds of database records (in different tables) that represent a given data file in the file system. An input file -contains information about the contents of the data file. A dbfile contains machine spacific information for a given input file, -such as the file path. Because duplicates of data files for a given input can be on multiple different machines, there can be more +contains information about the contents of the data file. A dbfile contains machine-specific information for a given input file, +such as the file path. Because duplicates of data files for a given input can be on multiple different machines, there can be more than one dbfile for a given input file. } \section{Time-span appending}{ -By default, convert_input tries to optimize the download of most data products by only downloading the years of data not present on +By default, convert_input tries to optimize the download of most data products by only downloading the years of data not present on the current machine. (For example, if files for 2004-2008 for a given data product exist on this machine and the user requests 2006-2010, the function will only download data for 2009 and 2010). In year-long data files, each year exists as a separate file. The database input file contains records of the bounds of the range stored by those years. The data optimization can be turned off diff --git a/base/db/man/db_merge_into.Rd b/base/db/man/db_merge_into.Rd index b4004667d8f..9ab9264c7db 100644 --- a/base/db/man/db_merge_into.Rd +++ b/base/db/man/db_merge_into.Rd @@ -20,7 +20,7 @@ db_merge_into(values, table, con, by = NULL, drop = FALSE, ...) 
\item{...}{ Arguments passed on to \code{\link[=insert_table]{insert_table}} \describe{ - \item{\code{coerce_col_class}}{logical, whether or not to coerce local data columns + \item{\code{coerce_col_class}}{logical, whether or not to coerce local data columns to SQL classes. Default = `TRUE.`} }} } @@ -32,7 +32,7 @@ Merge local data frame into SQL table } \examples{ irisdb <- DBI::dbConnect(RSQLite::SQLite(), ":memory:") -dplyr::copy_to(irisdb, iris[1:10,], name = "iris", overwrite = TRUE) -db_merge_into(iris[1:12,], "iris", irisdb) +dplyr::copy_to(irisdb, iris[1:10, ], name = "iris", overwrite = TRUE) +db_merge_into(iris[1:12, ], "iris", irisdb) dplyr::tbl(irisdb, "iris") \%>\% dplyr::count() } diff --git a/base/db/man/get_postgres_envvars.Rd b/base/db/man/get_postgres_envvars.Rd index c802444d6b4..17c6478a4ec 100644 --- a/base/db/man/get_postgres_envvars.Rd +++ b/base/db/man/get_postgres_envvars.Rd @@ -30,11 +30,11 @@ The list of environment variables we check is taken from the per-session behavior (e.g. PGTZ, PGSYSCONFDIR). 
} \examples{ - host <- Sys.getenv("PGHOST") # to restore environment after demo +host <- Sys.getenv("PGHOST") # to restore environment after demo - Sys.unsetenv("PGHOST") - get_postgres_envvars()$host # NULL - get_postgres_envvars(host = "default", port = 5432)$host # "default" +Sys.unsetenv("PGHOST") +get_postgres_envvars()$host # NULL +get_postgres_envvars(host = "default", port = 5432)$host # "default" # defaults are ignored for a variable that exists Sys.setenv(PGHOST = "localhost") get_postgres_envvars()$host # "localhost" diff --git a/base/db/man/insert.format.vars.Rd b/base/db/man/insert.format.vars.Rd index 6d425109c8a..d0f6c7a9af0 100644 --- a/base/db/man/insert.format.vars.Rd +++ b/base/db/man/insert.format.vars.Rd @@ -56,7 +56,8 @@ formats_variables_tibble <- tibble::tibble( name = c("NPP", NA, "YEAR"), unit = c("g C m-2 yr-1", NA, NA), storage_type = c(NA, NA, "\%Y"), - column_number = c(2, NA, 4)) + column_number = c(2, NA, 4) +) insert.format.vars( con = con, @@ -65,7 +66,8 @@ insert.format.vars( notes = "NPP from Harvard Forest.", header = FALSE, skip = 0, - formats_variables = formats_variables_tibble) + formats_variables = formats_variables_tibble +) } } \author{ diff --git a/base/db/man/insert_table.Rd b/base/db/man/insert_table.Rd index f0b15fc0cfb..81952c890f6 100644 --- a/base/db/man/insert_table.Rd +++ b/base/db/man/insert_table.Rd @@ -13,7 +13,7 @@ insert_table(values, table, con, coerce_col_class = TRUE, drop = TRUE) \item{con}{Database connection object} -\item{coerce_col_class}{logical, whether or not to coerce local data columns +\item{coerce_col_class}{logical, whether or not to coerce local data columns to SQL classes. Default = `TRUE.`} \item{drop}{logical. If `TRUE` (default), drop columns not found in SQL table.} @@ -22,14 +22,14 @@ to SQL classes. Default = `TRUE.`} data frame with query results } \description{ -First, subset to matching columns. 
Then, make sure the local and SQL column -classes match, coercing local to SQL as necessary (or throwing an error). -Then, build an SQL string for the insert statement. Finally, insert into the +First, subset to matching columns. Then, make sure the local and SQL column +classes match, coercing local to SQL as necessary (or throwing an error). +Then, build an SQL string for the insert statement. Finally, insert into the database. } \examples{ irisdb <- DBI::dbConnect(RSQLite::SQLite(), ":memory:") -dplyr::copy_to(irisdb, iris[1,], name = "iris", overwrite = TRUE) -insert_table(iris[-1,], "iris", irisdb) +dplyr::copy_to(irisdb, iris[1, ], name = "iris", overwrite = TRUE) +insert_table(iris[-1, ], "iris", irisdb) dplyr::tbl(irisdb, "iris") } diff --git a/base/db/man/match_dbcols.Rd b/base/db/man/match_dbcols.Rd index 9c42badadc0..1ce154af69e 100644 --- a/base/db/man/match_dbcols.Rd +++ b/base/db/man/match_dbcols.Rd @@ -13,7 +13,7 @@ match_dbcols(values, table, con, coerce_col_class = TRUE, drop = TRUE) \item{con}{Database connection object} -\item{coerce_col_class}{logical, whether or not to coerce local data columns +\item{coerce_col_class}{logical, whether or not to coerce local data columns to SQL classes. Default = `TRUE.`} \item{drop}{logical. If `TRUE` (default), drop columns not found in SQL table.} diff --git a/base/db/man/query_priors.Rd b/base/db/man/query_priors.Rd index 10677150f7d..2a94de36902 100644 --- a/base/db/man/query_priors.Rd +++ b/base/db/man/query_priors.Rd @@ -42,39 +42,47 @@ Query priors using prepared statements } \examples{ \dontrun{ - con <- db.open(...) +con <- db.open(...) 
- # No trait provided, so return all available traits - pdat <- query_priors( - c("temperate.Early_Hardwood", "temperate.North_Mid_Hardwood", - "temperate.Late_Hardwood"), - con = con - ) +# No trait provided, so return all available traits +pdat <- query_priors( + c( + "temperate.Early_Hardwood", "temperate.North_Mid_Hardwood", + "temperate.Late_Hardwood" + ), + con = con +) - # Traits provided, so restrict to only those traits. Note that - # because `expand = TRUE`, this will search for these traits for - # every PFT. - pdat2 <- query_priors( - c("Optics.Temperate_Early_Hardwood", - "Optics.Temperate_Mid_Hardwood", - "Optics.Temperate_Late_Hardwood"), - c("leaf_reflect_vis", "leaf_reflect_nir"), - con = con - ) +# Traits provided, so restrict to only those traits. Note that +# because `expand = TRUE`, this will search for these traits for +# every PFT. +pdat2 <- query_priors( + c( + "Optics.Temperate_Early_Hardwood", + "Optics.Temperate_Mid_Hardwood", + "Optics.Temperate_Late_Hardwood" + ), + c("leaf_reflect_vis", "leaf_reflect_nir"), + con = con +) - # With `expand = FALSE`, search the first trait for the first PFT, - # the second trait for the second PFT, etc. Note that this means - # PFT and trait input vectors must be the same length. - pdat2 <- query_priors( - c("Optics.Temperate_Early_Hardwood", - "Optics.Temperate_Early_Hardwood", - "Optics.Temperate_Mid_Hardwood", - "Optics.Temperate_Late_Hardwood"), - c("leaf_reflect_vis", - "leaf_reflect_nir", - "leaf_reflect_vis", - "leaf_reflect_nir"), - con = con - ) +# With `expand = FALSE`, search the first trait for the first PFT, +# the second trait for the second PFT, etc. Note that this means +# PFT and trait input vectors must be the same length. 
+pdat2 <- query_priors( + c( + "Optics.Temperate_Early_Hardwood", + "Optics.Temperate_Early_Hardwood", + "Optics.Temperate_Mid_Hardwood", + "Optics.Temperate_Late_Hardwood" + ), + c( + "leaf_reflect_vis", + "leaf_reflect_nir", + "leaf_reflect_vis", + "leaf_reflect_nir" + ), + con = con +) } } diff --git a/base/db/man/symmetric_setdiff.Rd b/base/db/man/symmetric_setdiff.Rd index 572a7f20522..aa83f1164c7 100644 --- a/base/db/man/symmetric_setdiff.Rd +++ b/base/db/man/symmetric_setdiff.Rd @@ -34,11 +34,15 @@ isn't numeric to character, to facilitate comparison.} Symmetric set difference of two data frames } \examples{ -xdf <- data.frame(a = c("a", "b", "c"), - b = c(1, 2, 3), - stringsAsFactors = FALSE) -ydf <- data.frame(a = c("a", "b", "d"), - b = c(1, 2.5, 3), - stringsAsFactors = FALSE) +xdf <- data.frame( + a = c("a", "b", "c"), + b = c(1, 2, 3), + stringsAsFactors = FALSE +) +ydf <- data.frame( + a = c("a", "b", "d"), + b = c(1, 2.5, 3), + stringsAsFactors = FALSE +) symmetric_setdiff(xdf, ydf) } diff --git a/base/db/man/try2sqlite.Rd b/base/db/man/try2sqlite.Rd index e4b6407712b..083ccd0dc31 100644 --- a/base/db/man/try2sqlite.Rd +++ b/base/db/man/try2sqlite.Rd @@ -13,8 +13,8 @@ Multiple files are combined with `data.table::rbindlist`.} \item{sqlite_file}{Target SQLite database file name, as character.} } \description{ -The TRY file is huge and unnecessarily long, which makes it difficult to -work with. The resulting SQLite database is much smaller on disk, and can be +The TRY file is huge and unnecessarily long, which makes it difficult to +work with. The resulting SQLite database is much smaller on disk, and can be read much faster thanks to lazy evaluation. 
} \details{ diff --git a/base/logger/man/severeifnot.Rd b/base/logger/man/severeifnot.Rd index 0bc51df1826..86394f04f98 100644 --- a/base/logger/man/severeifnot.Rd +++ b/base/logger/man/severeifnot.Rd @@ -43,8 +43,10 @@ infoifnot("Something is not a list.", is.list(a), is.list(b)) warnifnot("I would prefer it if you used lists.", is.list(a), is.list(b)) errorifnot("You should definitely use lists.", is.list(a), is.list(b)) try({ - severeifnot("I cannot deal with the fact that something is not a list.", + severeifnot( + "I cannot deal with the fact that something is not a list.", is.list(a), - is.list(b)) + is.list(b) + ) }) } diff --git a/base/remote/man/qsub_parallel.Rd b/base/remote/man/qsub_parallel.Rd index 274104b8139..06e2409ccf9 100644 --- a/base/remote/man/qsub_parallel.Rd +++ b/base/remote/man/qsub_parallel.Rd @@ -28,7 +28,7 @@ qsub_parallel } \examples{ \dontrun{ - qsub_parallel(settings) +qsub_parallel(settings) } } \author{ diff --git a/base/remote/man/remote.copy.from.Rd b/base/remote/man/remote.copy.from.Rd index 794aaa06c21..d751108d742 100644 --- a/base/remote/man/remote.copy.from.Rd +++ b/base/remote/man/remote.copy.from.Rd @@ -38,8 +38,8 @@ is a folder it will copy the file into that folder. } \examples{ \dontrun{ - host <- list(name='geo.bu.edu', user='kooper', tunnel='/tmp/geo.tunnel') - remote.copy.from(host, '/tmp/kooper', '/tmp/geo.tmp', delete=TRUE) +host <- list(name = "geo.bu.edu", user = "kooper", tunnel = "/tmp/geo.tunnel") +remote.copy.from(host, "/tmp/kooper", "/tmp/geo.tmp", delete = TRUE) } } \author{ diff --git a/base/remote/man/remote.copy.to.Rd b/base/remote/man/remote.copy.to.Rd index 1130c3b3501..6ac147e23f5 100644 --- a/base/remote/man/remote.copy.to.Rd +++ b/base/remote/man/remote.copy.to.Rd @@ -28,8 +28,8 @@ is a folder it will copy the file into that folder. 
} \examples{ \dontrun{ - host <- list(name='geo.bu.edu', user='kooper', tunnel='/tmp/geo.tunnel') - remote.copy.to(host, '/tmp/kooper', '/tmp/kooper', delete=TRUE) +host <- list(name = "geo.bu.edu", user = "kooper", tunnel = "/tmp/geo.tunnel") +remote.copy.to(host, "/tmp/kooper", "/tmp/kooper", delete = TRUE) } } \author{ diff --git a/base/remote/man/remote.execute.R.Rd b/base/remote/man/remote.execute.R.Rd index 5c47303f527..35f080305ee 100644 --- a/base/remote/man/remote.execute.R.Rd +++ b/base/remote/man/remote.execute.R.Rd @@ -39,7 +39,7 @@ machine it will execute the command locally without ssh. } \examples{ \dontrun{ - remote.execute.R('list.files()', host='localhost', verbose=FALSE) +remote.execute.R("list.files()", host = "localhost", verbose = FALSE) } } \author{ diff --git a/base/remote/man/remote.execute.cmd.Rd b/base/remote/man/remote.execute.cmd.Rd index d9a51e2c863..8fbf7b5e719 100644 --- a/base/remote/man/remote.execute.cmd.Rd +++ b/base/remote/man/remote.execute.cmd.Rd @@ -28,8 +28,8 @@ machine it will execute the command locally without ssh. } \examples{ \dontrun{ - host <- list(name='geo.bu.edu', user='kooper', tunnel='/tmp/geo.tunnel') - print(remote.execute.cmd(host, 'ls', c('-l', '/'), stderr=TRUE)) +host <- list(name = "geo.bu.edu", user = "kooper", tunnel = "/tmp/geo.tunnel") +print(remote.execute.cmd(host, "ls", c("-l", "/"), stderr = TRUE)) } } \author{ diff --git a/base/settings/man/clean.settings.Rd b/base/settings/man/clean.settings.Rd index 74d1a2a150e..b38b5a1e4f2 100644 --- a/base/settings/man/clean.settings.Rd +++ b/base/settings/man/clean.settings.Rd @@ -24,7 +24,7 @@ set the outdir to be 'pecan' for the next run. 
} \examples{ \dontrun{ -clean.settings('output/PEcAn_1/pecan.xml', 'pecan.xml') +clean.settings("output/PEcAn_1/pecan.xml", "pecan.xml") } } \author{ diff --git a/base/settings/man/get_args.Rd b/base/settings/man/get_args.Rd index 9dd874cbe6e..b3be24eb0c8 100644 --- a/base/settings/man/get_args.Rd +++ b/base/settings/man/get_args.Rd @@ -14,5 +14,7 @@ Used in web/workflow.R to parse command line arguments. See also https://github.com/PecanProject/pecan/pull/2626. } \examples{ -\dontrun{./web/workflow.R -h} +\dontrun{ +./web/workflow.R -h +} } diff --git a/base/settings/man/papply.Rd b/base/settings/man/papply.Rd index ac8cccccab7..ee779826d75 100644 --- a/base/settings/man/papply.Rd +++ b/base/settings/man/papply.Rd @@ -50,18 +50,18 @@ result in an error. } } \examples{ -f = function(settings, ...) { +f <- function(settings, ...) { # Here's how I envisioned a typical use case within a standard PEcAn function - if(is.MultiSettings(settings)) { + if (is.MultiSettings(settings)) { return(papply(settings, f, ...)) } - + # Don't worry about the below, it's just some guts to make the function do something we can see l <- list(...) - for(i in seq_along(l)) { + for (i in seq_along(l)) { ind <- length(settings) + 1 settings[[ind]] <- l[[i]] - if(!is.null(names(l))) { + if (!is.null(names(l))) { names(settings)[ind] <- names(l)[i] } } @@ -69,14 +69,13 @@ f = function(settings, ...) 
{ } # Example -settings1 <- Settings(list(a="aa", b=1:3, c="NA")) -settings2 <- Settings(list(a="A", b=4:5, c=paste)) +settings1 <- Settings(list(a = "aa", b = 1:3, c = "NA")) +settings2 <- Settings(list(a = "A", b = 4:5, c = paste)) multiSettings <- MultiSettings(settings1, settings2) # The function should add element $d = D to either a Settings, or each entry in a MultiSettings -f(settings1, d="D") -print(f(multiSettings, d="D"), TRUE) - +f(settings1, d = "D") +print(f(multiSettings, d = "D"), TRUE) } \author{ Ryan Kelly diff --git a/base/settings/man/site.pft.linkage.Rd b/base/settings/man/site.pft.linkage.Rd index 56bae28cf0b..d7bb7d4042d 100644 --- a/base/settings/man/site.pft.linkage.Rd +++ b/base/settings/man/site.pft.linkage.Rd @@ -27,17 +27,17 @@ resulting multiple rows for a site. } \examples{ \dontrun{ -#setting up the Look up tables -site.pft.links <-tribble( - ~site, ~pft, - "1000025731", "temperate.broadleaf.deciduous1", - "1000025731", "temperate.needleleaf.evergreen", - "1000000048", "temperate.broadleaf.deciduous2", - "772", "temperate.broadleaf.deciduous3", - "763", "temperate.broadleaf.deciduous4" +# setting up the lookup tables +site.pft.links <- tribble( + ~site, ~pft, + "1000025731", "temperate.broadleaf.deciduous1", + "1000025731", "temperate.needleleaf.evergreen", + "1000000048", "temperate.broadleaf.deciduous2", + "772", "temperate.broadleaf.deciduous3", + "763", "temperate.broadleaf.deciduous4" ) # sending a multi-setting xml file to the function -site.pft.linkage(settings,site.pft.links) +site.pft.linkage(settings, site.pft.links) } } diff --git a/base/utils/man/datetime2doy.Rd b/base/utils/man/datetime2doy.Rd index afb22dc51f3..99fb6cf25cb 100644 --- a/base/utils/man/datetime2doy.Rd +++ b/base/utils/man/datetime2doy.Rd @@ -29,7 +29,7 @@ Julian Day do not support non-integer days. 
\examples{ datetime2doy("2010-01-01") # 1 datetime2doy("2010-01-01 12:00:00") # 1.5 -cf2doy(0, "days since 2007-01-01") +cf2doy(0, "days since 2007-01-01") cf2doy(5, "days since 2010-01-01") # 6 cf2doy(5, "days since 2010-01-01") # 6 } diff --git a/base/utils/man/days_in_year.Rd b/base/utils/man/days_in_year.Rd index 9ddcaae27f4..9e7a31d23e7 100644 --- a/base/utils/man/days_in_year.Rd +++ b/base/utils/man/days_in_year.Rd @@ -18,9 +18,9 @@ integer vector, all either 365 or 366 Calculate number of days in a year based on whether it is a leap year or not. } \examples{ -days_in_year(2010) # Not a leap year -- returns 365 -days_in_year(2012) # Leap year -- returns 366 -days_in_year(2000:2008) # Function is vectorized over years +days_in_year(2010) # Not a leap year -- returns 365 +days_in_year(2012) # Leap year -- returns 366 +days_in_year(2000:2008) # Function is vectorized over years } \author{ Alexey Shiklomanov diff --git a/base/utils/man/distn.stats.Rd b/base/utils/man/distn.stats.Rd index ac0a64079fa..e681100cbde 100644 --- a/base/utils/man/distn.stats.Rd +++ b/base/utils/man/distn.stats.Rd @@ -21,7 +21,7 @@ Implementation of standard equations used to calculate mean and sd for a variety named distributions different } \examples{ -distn.stats('norm', 0, 1) +distn.stats("norm", 0, 1) } \author{ David LeBauer diff --git a/base/utils/man/download.url.Rd b/base/utils/man/download.url.Rd index 9187d9f72c5..50f5948ebe1 100644 --- a/base/utils/man/download.url.Rd +++ b/base/utils/man/download.url.Rd @@ -29,6 +29,6 @@ it will return the name of the file } \examples{ \dontrun{ -download.url('http://localhost/', index.html) +download.url("http://localhost/", index.html) } } diff --git a/base/utils/man/download_file.Rd b/base/utils/man/download_file.Rd index 97c660c8a81..d735e0737c6 100644 --- a/base/utils/man/download_file.Rd +++ b/base/utils/man/download_file.Rd @@ -20,11 +20,13 @@ home directory } \examples{ \dontrun{ 
-download_file("http://lib.stat.cmu.edu/datasets/csb/ch11b.txt","~/test.download.txt") +download_file("http://lib.stat.cmu.edu/datasets/csb/ch11b.txt", "~/test.download.txt") -download_file(" +download_file( + " ftp://ftp.cdc.noaa.gov/Datasets/NARR/monolevel/pres.sfc.2000.nc", - "~/pres.sfc.2000.nc") + "~/pres.sfc.2000.nc" +) } } diff --git a/base/utils/man/full.path.Rd b/base/utils/man/full.path.Rd index 5fe7d1bf162..2750997c402 100644 --- a/base/utils/man/full.path.Rd +++ b/base/utils/man/full.path.Rd @@ -18,7 +18,7 @@ will normalize the path and prepend it with the current working folder if needed to get an absolute path name. } \examples{ -full.path('pecan') +full.path("pecan") } \author{ Rob Kooper diff --git a/base/utils/man/get.parameter.stat.Rd b/base/utils/man/get.parameter.stat.Rd index 9a6f87d1386..10f5071104a 100644 --- a/base/utils/man/get.parameter.stat.Rd +++ b/base/utils/man/get.parameter.stat.Rd @@ -18,7 +18,9 @@ table with parameter statistics Gets statistics for LaTeX - formatted table } \examples{ -\dontrun{get.parameter.stat(mcmc.summaries[[1]], 'beta.o')} +\dontrun{ +get.parameter.stat(mcmc.summaries[[1]], "beta.o") +} } \author{ David LeBauer diff --git a/base/utils/man/get.run.id.Rd b/base/utils/man/get.run.id.Rd index 675d762416a..a892b9cdb42 100644 --- a/base/utils/man/get.run.id.Rd +++ b/base/utils/man/get.run.id.Rd @@ -25,8 +25,8 @@ id representing a model run Provides a consistent method of naming runs; for use in model input files and indices } \examples{ -get.run.id('ENS', left.pad.zeros(1, 5)) -get.run.id('SA', round(qnorm(-3),3), trait = 'Vcmax') +get.run.id("ENS", left.pad.zeros(1, 5)) +get.run.id("SA", round(qnorm(-3), 3), trait = "Vcmax") } \author{ Carl Davidson, David LeBauer diff --git a/base/utils/man/load.modelpkg.Rd b/base/utils/man/load.modelpkg.Rd index 06d4aa6cbf7..1fd4e073496 100644 --- a/base/utils/man/load.modelpkg.Rd +++ b/base/utils/man/load.modelpkg.Rd @@ -16,7 +16,9 @@ FALSE if function returns error; else TRUE 
Load model package } \examples{ -\dontrun{require.modelpkg(BioCro)} +\dontrun{ +require.modelpkg(BioCro) +} } \author{ David LeBauer diff --git a/base/utils/man/need_packages.Rd b/base/utils/man/need_packages.Rd index 6ed4ff1341e..0e7fd2d5b77 100644 --- a/base/utils/man/need_packages.Rd +++ b/base/utils/man/need_packages.Rd @@ -21,7 +21,7 @@ error if not. \examples{ # Only need ::: because package isn't exported. # Inside a package, just call `need_packages` -PEcAn.utils:::need_packages("stats", "methods") # Always works +PEcAn.utils:::need_packages("stats", "methods") # Always works try(PEcAn.utils:::need_packages("notapackage")) } \author{ diff --git a/base/utils/man/r2bugs.distributions.Rd b/base/utils/man/r2bugs.distributions.Rd index f900f747faa..3fa3e007c5b 100644 --- a/base/utils/man/r2bugs.distributions.Rd +++ b/base/utils/man/r2bugs.distributions.Rd @@ -18,9 +18,11 @@ priors dataframe using JAGS default parameterizations R and BUGS have different parameterizations for some distributions. This function transforms the distributions from R defaults to BUGS defaults. BUGS is an implementation of the BUGS language, and these transformations are expected to work for BUGS. 
} \examples{ -priors <- data.frame(distn = c('weibull', 'lnorm', 'norm', 'gamma'), - parama = c(1, 1, 1, 1), - paramb = c(2, 2, 2, 2)) +priors <- data.frame( + distn = c("weibull", "lnorm", "norm", "gamma"), + parama = c(1, 1, 1, 1), + paramb = c(2, 2, 2, 2) +) r2bugs.distributions(priors) } \author{ diff --git a/base/utils/man/retry.func.Rd b/base/utils/man/retry.func.Rd index 7a3ff9216ef..9a8b648e13d 100644 --- a/base/utils/man/retry.func.Rd +++ b/base/utils/man/retry.func.Rd @@ -30,13 +30,16 @@ Retry function X times before stopping in error } \examples{ \dontrun{ - file_url <- paste0("https://thredds.daac.ornl.gov/", - "thredds/dodsC/ornldaac/1220", - "/mstmip_driver_global_hd_climate_lwdown_1999_v1.nc4") +file_url <- paste0( + "https://thredds.daac.ornl.gov/", + "thredds/dodsC/ornldaac/1220", + "/mstmip_driver_global_hd_climate_lwdown_1999_v1.nc4" +) dap <- retry.func( ncdf4::nc_open(file_url), - maxErrors=10, - sleep=2) + maxErrors = 10, + sleep = 2 +) } } diff --git a/base/utils/man/robustly.Rd b/base/utils/man/robustly.Rd index 43b2e07be25..0b102d84e72 100644 --- a/base/utils/man/robustly.Rd +++ b/base/utils/man/robustly.Rd @@ -25,11 +25,11 @@ Adverb to try calling a function \code{n} times before giving up rlog <- robustly(log, timeout = 0.3) try(rlog("fail")) \dontrun{ - nc_openr <- robustly(ncdf4::nc_open, n = 10, timeout = 0.5) - nc <- nc_openr(url) - # ...or just call the function directly - nc <- robustly(ncdf4::nc_open, n = 20)(url) - # Useful in `purrr` maps - many_vars <- purrr::map(varnames, robustly(ncdf4::ncvar_get), nc = nc) +nc_openr <- robustly(ncdf4::nc_open, n = 10, timeout = 0.5) +nc <- nc_openr(url) +# ...or just call the function directly +nc <- robustly(ncdf4::nc_open, n = 20)(url) +# Useful in `purrr` maps +many_vars <- purrr::map(varnames, robustly(ncdf4::ncvar_get), nc = nc) } } diff --git a/base/utils/man/seconds_in_year.Rd b/base/utils/man/seconds_in_year.Rd index 720134b31bd..bbac6e10bc4 100644 --- 
a/base/utils/man/seconds_in_year.Rd +++ b/base/utils/man/seconds_in_year.Rd @@ -17,9 +17,9 @@ seconds_in_year(year, leap_year = TRUE, ...) Number of seconds in a given year } \examples{ -seconds_in_year(2000) # Leap year -- 366 x 24 x 60 x 60 = 31622400 -seconds_in_year(2001) # Regular year -- 365 x 24 x 60 x 60 = 31536000 -seconds_in_year(2000:2005) # Vectorized over year +seconds_in_year(2000) # Leap year -- 366 x 24 x 60 x 60 = 31622400 +seconds_in_year(2001) # Regular year -- 365 x 24 x 60 x 60 = 31536000 +seconds_in_year(2000:2005) # Vectorized over year } \author{ Alexey Shiklomanov diff --git a/base/utils/man/sendmail.Rd b/base/utils/man/sendmail.Rd index a6c453f129a..3209a06c802 100644 --- a/base/utils/man/sendmail.Rd +++ b/base/utils/man/sendmail.Rd @@ -23,7 +23,7 @@ Sends email. This assumes the program sendmail is installed. } \examples{ \dontrun{ -sendmail('bob@example.com', 'joe@example.com', 'Hi', 'This is R.') +sendmail("bob@example.com", "joe@example.com", "Hi", "This is R.") } } \author{ diff --git a/base/utils/man/timezone_hour.Rd b/base/utils/man/timezone_hour.Rd index f63c6485c7b..7653939712b 100644 --- a/base/utils/man/timezone_hour.Rd +++ b/base/utils/man/timezone_hour.Rd @@ -17,7 +17,7 @@ Returns the number of hours offset to UTC for a timezone. 
} \examples{ \dontrun{ -timezone_hour('America/New_York') +timezone_hour("America/New_York") } } \author{ diff --git a/base/utils/man/trait.lookup.Rd b/base/utils/man/trait.lookup.Rd index 5d676fc6b59..686eabada28 100644 --- a/base/utils/man/trait.lookup.Rd +++ b/base/utils/man/trait.lookup.Rd @@ -20,10 +20,10 @@ Dictionary of terms used to identify traits in ed, filenames, and figures \examples{ # convert parameter name to a string appropriate for end-use plotting \dontrun{ -trait.lookup('growth_resp_factor') -trait.lookup('growth_resp_factor')$figid +trait.lookup("growth_resp_factor") +trait.lookup("growth_resp_factor")$figid # get a list of all traits and units in dictionary -trait.lookup()[,c('figid', 'units')] +trait.lookup()[, c("figid", "units")] } } diff --git a/base/utils/man/transformstats.Rd b/base/utils/man/transformstats.Rd index de87f63654e..7312009213b 100644 --- a/base/utils/man/transformstats.Rd +++ b/base/utils/man/transformstats.Rd @@ -21,10 +21,12 @@ LeBauer 2020 Transforming ANOVA and Regression statistics for Meta-analysis. Authorea. 
DOI: https://doi.org/10.22541/au.158359749.96662550 } \examples{ -statdf <- data.frame(Y=rep(1,5), - stat=rep(1,5), - n=rep(4,5), - statname=c('SD', 'MSE', 'LSD', 'HSD', 'MSD')) +statdf <- data.frame( + Y = rep(1, 5), + stat = rep(1, 5), + n = rep(4, 5), + statname = c("SD", "MSE", "LSD", "HSD", "MSD") +) transformstats(statdf) } \author{ diff --git a/base/utils/man/tryl.Rd b/base/utils/man/tryl.Rd index 9d011f2e15a..41d2816ec3d 100644 --- a/base/utils/man/tryl.Rd +++ b/base/utils/man/tryl.Rd @@ -16,9 +16,9 @@ FALSE if function returns error; else TRUE adaptation of try that returns a logical value (FALSE if error) } \examples{ -tryl(1+1) +tryl(1 + 1) # TRUE -tryl(sum('a')) +tryl(sum("a")) # FALSE } \author{ diff --git a/base/utils/man/unit_is_parseable.Rd b/base/utils/man/unit_is_parseable.Rd index ae1490e526a..ff89978959f 100644 --- a/base/utils/man/unit_is_parseable.Rd +++ b/base/utils/man/unit_is_parseable.Rd @@ -16,8 +16,8 @@ TRUE if the units is parseable, FALSE otherwise. Function will replace the now-unmaintained \code{udunits2::ud.is.parseable} } \examples{ - unit_is_parseable("g/sec^2") - unit_is_parseable("kiglometters") +unit_is_parseable("g/sec^2") +unit_is_parseable("kiglometters") } \author{ diff --git a/base/visualization/man/vwReg.Rd b/base/visualization/man/vwReg.Rd index 0f2301818e4..846195efae6 100644 --- a/base/visualization/man/vwReg.Rd +++ b/base/visualization/man/vwReg.Rd @@ -88,7 +88,7 @@ Details: \url{http://www.nicebread.de/visually-weighted-regression-in-r-a-la-sol \examples{ # build a demo data set set.seed(1) -x <- rnorm(200, 0.8, 1.2) +x <- rnorm(200, 0.8, 1.2) e <- rnorm(200, 0, 3)*(abs(x)^1.5 + .5) + rnorm(200, 0, 4) e2 <- rnorm(200, 0, 5)*(abs(x)^1.5 + .8) + rnorm(200, 0, 5) y <- 8*x - x^3 + e diff --git a/base/workflow/man/start_model_runs.Rd b/base/workflow/man/start_model_runs.Rd index a1c171073bb..f6e34703150 100644 --- a/base/workflow/man/start_model_runs.Rd +++ b/base/workflow/man/start_model_runs.Rd @@ -27,7 +27,7 @@ Start 
selected ecosystem model runs within PEcAn workflow }} \examples{ \dontrun{ - start_model_runs(settings) +start_model_runs(settings) } } \author{ diff --git a/models/ed/man/read_restart.ED2.Rd b/models/ed/man/read_restart.ED2.Rd index eff0cdcebd8..95d6c728202 100644 --- a/models/ed/man/read_restart.ED2.Rd +++ b/models/ed/man/read_restart.ED2.Rd @@ -24,11 +24,11 @@ State data assimilation read-restart for ED2 } \examples{ \dontrun{ - outdir <- "~/sda-hackathon/outputs" - runid <- "99000000020" - settings_file <- "outputs/pecan.CONFIGS.xml" - settings <- PEcAn.settings::read.settings(settings_file) - forecast <- read_restart.ED2(...) +outdir <- "~/sda-hackathon/outputs" +runid <- "99000000020" +settings_file <- "outputs/pecan.CONFIGS.xml" +settings <- PEcAn.settings::read.settings(settings_file) +forecast <- read_restart.ED2(...) } } diff --git a/models/sipnet/man/mergeNC.Rd b/models/sipnet/man/mergeNC.Rd index 011a8c8e46f..c3bf6846cce 100644 --- a/models/sipnet/man/mergeNC.Rd +++ b/models/sipnet/man/mergeNC.Rd @@ -22,10 +22,11 @@ Merge multiple NetCDF files into one } \examples{ \dontrun{ -files <- list.files(paste0(system.file(package="processNC"), "/extdata"), - pattern="tas.*\\\\.nc", full.names=TRUE) -temp <- tempfile(fileext=".nc") -mergeNC(files=files, outfile=temp) -terra::rast(temp) +files <- list.files(paste0(system.file(package = "processNC"), "/extdata"), + pattern = "tas.*\\\\.nc", full.names = TRUE +) +temp <- tempfile(fileext = ".nc") +mergeNC(files = files, outfile = temp) +terra::rast(temp) } } diff --git a/modules/allometry/man/AllomAve.Rd b/modules/allometry/man/AllomAve.Rd index 7d9fc911bab..2796f407344 100644 --- a/modules/allometry/man/AllomAve.Rd +++ b/modules/allometry/man/AllomAve.Rd @@ -50,10 +50,11 @@ nested list of parameter summary statistics \description{ Allometry wrapper function that handles loading and subsetting the data, fitting the Bayesian models, and generating diagnostic figures. 
Set up to loop over - multiple PFTs and components. + multiple PFTs and components. Writes raw MCMC and PDF of diagnostics to file and returns table of summary stats. - -There are two usages of this function. +} +\details{ +There are two usages of this function. When running 'online' (connected to the PEcAn database), pass the database connection, con, and the pfts subsection of the PEcAn settings. When running 'stand alone' pass the pft list mapping species to species codes @@ -61,13 +62,13 @@ When running 'stand alone' pass the pft list mapping species to species codes } \examples{ -if(FALSE){ - pfts = list(FAGR = data.frame(spcd=531,acronym='FAGR')) - allom.stats = AllomAve(pfts,ngibbs=500) +if (FALSE) { + pfts <- list(FAGR = data.frame(spcd = 531, acronym = "FAGR")) + allom.stats <- AllomAve(pfts, ngibbs = 500) ## example of a PFT with multiple species (late hardwood) ## note that if you're just using Jenkins the acronym column is optional - pfts = list(LH = data.frame(spcd = c(531,318),acronym=c('FAGR','ACSA3'))) + pfts <- list(LH = data.frame(spcd = c(531, 318), acronym = c("FAGR", "ACSA3"))) } } diff --git a/modules/allometry/man/allom.predict.Rd b/modules/allometry/man/allom.predict.Rd index 031510f2504..d998f65ad58 100644 --- a/modules/allometry/man/allom.predict.Rd +++ b/modules/allometry/man/allom.predict.Rd @@ -47,12 +47,10 @@ Function for making tree-level Monte Carlo predictions from allometric equations estimated from the PEcAn allometry module } \examples{ - \dontrun{ - object = '~/Dropbox//HF C Synthesis/Allometry Papers & Analysis/' - dbh = seq(10,50,by=5) - mass = allom.predict(object,dbh,n=100) - +object <- "~/Dropbox//HF C Synthesis/Allometry Papers & Analysis/" +dbh <- seq(10, 50, by = 5) +mass <- allom.predict(object, dbh, n = 100) } } diff --git a/modules/allometry/man/load.allom.Rd b/modules/allometry/man/load.allom.Rd index 4b9f0415485..42ce9053e86 100644 --- a/modules/allometry/man/load.allom.Rd +++ b/modules/allometry/man/load.allom.Rd 
@@ -20,11 +20,9 @@ mcmc outputs in a list by PFT then component loads allom files } \examples{ - \dontrun{ - object = '~/Dropbox//HF C Synthesis/Allometry Papers & Analysis/' - allom.mcmc = load.allom(object) - +object <- "~/Dropbox//HF C Synthesis/Allometry Papers & Analysis/" +allom.mcmc <- load.allom(object) } } diff --git a/modules/allometry/man/read.allom.data.Rd b/modules/allometry/man/read.allom.data.Rd index cffd3a490ff..a609b569267 100644 --- a/modules/allometry/man/read.allom.data.Rd +++ b/modules/allometry/man/read.allom.data.Rd @@ -29,6 +29,6 @@ read.allom.data(pft.data, component, field, parm, nsim = 10000) Extracts PFT- and component-specific data and allometeric equations from the specified files. } \details{ -This code also estimates the standard error from R-squared, +This code also estimates the standard error from R-squared, which is required to simulate pseudodata from the allometric eqns. } diff --git a/modules/assim.batch/man/autoburnin.Rd b/modules/assim.batch/man/autoburnin.Rd index 8d4eae8114f..eb7908d03cb 100644 --- a/modules/assim.batch/man/autoburnin.Rd +++ b/modules/assim.batch/man/autoburnin.Rd @@ -19,10 +19,10 @@ and \code{gelman.diag}.} Automatically calculate and apply burnin value } \examples{ - z1 <- coda::mcmc(c(rnorm(2500, 5), rnorm(2500, 0))) - z2 <- coda::mcmc(c(rnorm(2500, -5), rnorm(2500, 0))) - z <- coda::mcmc.list(z1, z2) - z_burned <- autoburnin(z) +z1 <- coda::mcmc(c(rnorm(2500, 5), rnorm(2500, 0))) +z2 <- coda::mcmc(c(rnorm(2500, -5), rnorm(2500, 0))) +z <- coda::mcmc.list(z1, z2) +z_burned <- autoburnin(z) } \author{ Michael Dietze, Alexey Shiklomanov diff --git a/modules/assim.batch/man/getBurnin.Rd b/modules/assim.batch/man/getBurnin.Rd index 6d18ec20294..48595b8bffa 100644 --- a/modules/assim.batch/man/getBurnin.Rd +++ b/modules/assim.batch/man/getBurnin.Rd @@ -35,10 +35,10 @@ Automatically detect burnin based on one of several methods. 
See "gelman_diag_mw" and "gelman_diag_gelmanPlot" } \examples{ - z1 <- coda::mcmc(c(rnorm(2500, 5), rnorm(2500, 0))) - z2 <- coda::mcmc(c(rnorm(2500, -5), rnorm(2500, 0))) - z <- coda::mcmc.list(z1, z2) - burnin <- getBurnin(z, threshold = 1.05) +z1 <- coda::mcmc(c(rnorm(2500, 5), rnorm(2500, 0))) +z2 <- coda::mcmc(c(rnorm(2500, -5), rnorm(2500, 0))) +z <- coda::mcmc.list(z1, z2) +burnin <- getBurnin(z, threshold = 1.05) } \author{ Alexey Shiklomanov, Michael Dietze diff --git a/modules/assim.batch/man/pda.generate.externals.Rd b/modules/assim.batch/man/pda.generate.externals.Rd index fc5b3ce409c..2f298a9a8ba 100644 --- a/modules/assim.batch/man/pda.generate.externals.Rd +++ b/modules/assim.batch/man/pda.generate.externals.Rd @@ -103,9 +103,9 @@ You can use this function just to generate either one of the external.* PDA obje } \examples{ \dontrun{ -pda.externals <- pda.generate.externals(external.data = TRUE, obs = obs, +pda.externals <- pda.generate.externals(external.data = TRUE, obs = obs, varn = "NEE", varid = 297, n_eff = 106.9386, -external.formats = TRUE, model_data_diag = TRUE, +external.formats = TRUE, model_data_diag = TRUE, model.out = "/tmp/out/outdir", start_date = "2017-01-01", end_date = "2018-12-31") } diff --git a/modules/assim.sequential/man/Analysis.sda.Rd b/modules/assim.sequential/man/Analysis.sda.Rd index b2d02230274..3c57a27879b 100644 --- a/modules/assim.sequential/man/Analysis.sda.Rd +++ b/modules/assim.sequential/man/Analysis.sda.Rd @@ -33,7 +33,7 @@ Analysis.sda( Returns whatever the FUN is returning. In case of EnKF and GEF, this function returns a list with estimated mean and cov matrix of forecast state variables as well as mean and cov estimated as a result of assimilation/analysis . } \description{ -This functions uses the FUN to perform the analysis. EnKF function is developed inside the PEcAnAssimSequential package which can be sent to this function to perform the Ensemble Kalman Filter. 
+This function uses the FUN to perform the analysis. The EnKF function is developed inside the PEcAnAssimSequential package and can be sent to this function to perform the Ensemble Kalman Filter. The other option is GEF function inside the same package allowing to perform Generalized Ensemble kalman Filter. If you're using an arbitrary function you can use the ... to send any other variables to your desired analysis function. diff --git a/modules/assim.sequential/man/Create_Site_PFT_CSV.Rd b/modules/assim.sequential/man/Create_Site_PFT_CSV.Rd index 732dcf63a84..13831fee448 100644 --- a/modules/assim.sequential/man/Create_Site_PFT_CSV.Rd +++ b/modules/assim.sequential/man/Create_Site_PFT_CSV.Rd @@ -23,17 +23,20 @@ Title Identify pft for each site of a multi-site settings using NLCD and Eco-region } \examples{ \dontrun{ - NLCD <- file.path( - "/fs", "data1", "pecan.data", "input", - "nlcd_2001_landcover_2011_edition_2014_10_10", - "nlcd_2001_landcover_2011_edition_2014_10_10.img") - Ecoregion <- file.path( - "/projectnb", "dietzelab", "dongchen", - "All_NEON_SDA", "NEON42", "eco-region", "us_eco_l3_state_boundaries.shp") - settings <- PEcAn.settings::read.settings( - "/projectnb/dietzelab/dongchen/All_NEON_SDA/NEON42/pecan.xml") - con <- PEcAn.DB::db.open(settings$database$bety) - site_pft_info <- Create_Site_PFT_CSV(settings, Ecoregion, NLCD, con) +NLCD <- file.path( + "/fs", "data1", "pecan.data", "input", + "nlcd_2001_landcover_2011_edition_2014_10_10", + "nlcd_2001_landcover_2011_edition_2014_10_10.img" +) +Ecoregion <- file.path( + "/projectnb", "dietzelab", "dongchen", + "All_NEON_SDA", "NEON42", "eco-region", "us_eco_l3_state_boundaries.shp" +) +settings <- PEcAn.settings::read.settings( + "/projectnb/dietzelab/dongchen/All_NEON_SDA/NEON42/pecan.xml" +) +con <- PEcAn.DB::db.open(settings$database$bety) +site_pft_info <- Create_Site_PFT_CSV(settings, Ecoregion, NLCD, con) } } diff --git a/modules/assim.sequential/man/construct_nimble_H.Rd
b/modules/assim.sequential/man/construct_nimble_H.Rd index f7705188a1b..61d64b1f808 100644 --- a/modules/assim.sequential/man/construct_nimble_H.Rd +++ b/modules/assim.sequential/man/construct_nimble_H.Rd @@ -18,7 +18,7 @@ construct_nimble_H(site.ids, var.names, obs.t, pft.path = NULL, by = "single") \item{by}{criteria, it supports by variable, site, pft, all, and single Q.} } \value{ -Returns one vector containing index for which Q to be estimated for which variable, +Returns one vector containing the index of which Q is to be estimated for which variable, and the other vector gives which state variable has which observation (= element.W.Data). } \description{ diff --git a/modules/assim.sequential/man/sda.enkf.multisite.Rd b/modules/assim.sequential/man/sda.enkf.multisite.Rd index 81b79f1c1a1..af4098f49e6 100644 --- a/modules/assim.sequential/man/sda.enkf.multisite.Rd +++ b/modules/assim.sequential/man/sda.enkf.multisite.Rd @@ -33,9 +33,9 @@ sda.enkf.multisite( \item{ensemble.samples}{Pass ensemble.samples from outside to avoid GitHub check issues.} -\item{control}{List of flags controlling the behavior of the SDA. -`trace` for reporting back the SDA outcomes; -`TimeseriesPlot` for post analysis examination; +\item{control}{List of flags controlling the behavior of the SDA.
+`trace` for reporting back the SDA outcomes; +`TimeseriesPlot` for post analysis examination; `debug` decide if we want to pause the code and examining the variables inside the function; `pause` decide if we want to pause the SDA workflow at current time point t; `Profiling` decide if we want to export the temporal SDA outputs in CSV file; diff --git a/modules/benchmark/man/align_by_first_observation.Rd b/modules/benchmark/man/align_by_first_observation.Rd index 87b6ee65b6b..2eb8de32425 100644 --- a/modules/benchmark/man/align_by_first_observation.Rd +++ b/modules/benchmark/man/align_by_first_observation.Rd @@ -11,7 +11,7 @@ align_by_first_observation(observation_one, observation_two, custom_table) \item{observation_two}{another vector of plant functional types, or species. Provides the order.} -\item{custom_table}{a table that either maps two pft's to one another or maps custom species codes to bety id codes. +\item{custom_table}{a table that either maps two pft's to one another or maps custom species codes to bety id codes. 
In the second case, must be passable to match_species_id.} } \value{ @@ -22,18 +22,19 @@ align_first_observation } \examples{ -observation_one<-c("AMCA3","AMCA3","AMCA3","AMCA3") -observation_two<-c("a", "b", "a", "a") +observation_one <- c("AMCA3", "AMCA3", "AMCA3", "AMCA3") +observation_two <- c("a", "b", "a", "a") -table<-list() -table$plant_functional_type_one<- c("AMCA3","AMCA3","ARHY", "ARHY") -table$plant_functional_type_two<- c('a','a','b', 'b') # PFT groupings -table<-as.data.frame(table) +table <- list() +table$plant_functional_type_one <- c("AMCA3", "AMCA3", "ARHY", "ARHY") +table$plant_functional_type_two <- c("a", "a", "b", "b") # PFT groupings +table <- as.data.frame(table) aligned <- align_by_first_observation( observation_one = observation_one, observation_two = observation_two, - custom_table = table) + custom_table = table +) # aligned should be a vector '[1] "AMCA3" "ARHY" "AMCA3" "AMCA3"' } diff --git a/modules/benchmark/man/align_data_to_data_pft.Rd b/modules/benchmark/man/align_data_to_data_pft.Rd index 99d50c029bc..e2ccb470d40 100644 --- a/modules/benchmark/man/align_data_to_data_pft.Rd +++ b/modules/benchmark/man/align_data_to_data_pft.Rd @@ -46,34 +46,35 @@ align_data_to_data_pft } \details{ Aligns vectors of Plant Fucntional Typed and species. -Can align: +Can align: - two vectors of plant functional types (pft's) if a custom map is provided - a list of species (usda, fia, or latin_name format) to a plant functional type - a list of species in a custom format, with a table mapping it to bety_species_id's - Will return a list of what was originally provided, bety_species_codes if possible, + Will return a list of what was originally provided, bety_species_codes if possible, and an aligned output. Because some alignement is order-sensitive, alignment based on observation_one and observation_two are both provided. 
} \examples{ \dontrun{ -observation_one<-c("AMCA3","AMCA3","AMCA3","AMCA3") -observation_two<-c("a", "b", "a", "a") +observation_one <- c("AMCA3", "AMCA3", "AMCA3", "AMCA3") +observation_two <- c("a", "b", "a", "a") -table<-list() -table$plant_functional_type_one<- c("AMCA3","AMCA3","ARHY", "ARHY") -table$plant_functional_type_two<- c('a','a','b', 'b') # PFT groupings -table<-as.data.frame(table) +table <- list() +table$plant_functional_type_one <- c("AMCA3", "AMCA3", "ARHY", "ARHY") +table$plant_functional_type_two <- c("a", "a", "b", "b") # PFT groupings +table <- as.data.frame(table) -format_one<-"species_USDA_symbol" -format_two<-"plant_functional_type" +format_one <- "species_USDA_symbol" +format_two <- "plant_functional_type" aligned <- align_data_to_data_pft( - con = con, - observation_one = observation_one, observation_two = observation_two, - format_one = format_one, format_two = format_two, - custom_table = table) + con = con, + observation_one = observation_one, observation_two = observation_two, + format_one = format_one, format_two = format_two, + custom_table = table +) } } \author{ diff --git a/modules/benchmark/man/align_pft.Rd b/modules/benchmark/man/align_pft.Rd index 71a24566e2f..aa5c1458edf 100644 --- a/modules/benchmark/man/align_pft.Rd +++ b/modules/benchmark/man/align_pft.Rd @@ -23,14 +23,14 @@ align_pft( \item{observation_two}{anouther vector of plant fucntional types, or species} -\item{custom_table}{a table that either maps two pft's to one anouther or maps custom species codes to bety id codes. +\item{custom_table}{a table that either maps two pft's to one another or maps custom species codes to bety id codes.
In the second case, must be passable to match_species_id.} \item{format_one}{The output of query.format.vars() of observation one of the form output$vars$bety_names} \item{format_two}{The output of query.format.vars() of observation two of the form output$vars$bety_names} -\item{subset_is_ok}{When aligning two species lists, this allows for alignement when species lists aren't identical. +\item{subset_is_ok}{When aligning two species lists, this allows for alignment when species lists aren't identical. set to FALSE by default.} \item{comparison_type}{one of "data_to_model", "data_to_data", or "model_to_model"} @@ -50,14 +50,14 @@ set to FALSE by default.} Align vectors of Plant Functional Type and species. } \details{ -Can align: +Can align: - two vectors of plant fucntional types (pft's) if a custom map is provided - a list of species (usda, fia, or latin_name format) to a plant fucntional type - a list of species in a custom format, with a table mapping it to bety_species_id's - Will return a list of what was originally provided, bety_speceis_codes if possible, + Will return a list of what was originally provided, bety_species_codes if possible, and an aligned output. Becuase some alignement is order-sensitive, alignment based on observation_one - and observation_two are both provided. + and observation_two are both provided.
\code{comparison_type} can be one of the following: \describe{ @@ -71,20 +71,22 @@ Can align: #------------ A species to PFT alignment ----------- -observation_one<-c("AMCA3","AMCA3","AMCA3","AMCA3") -observation_two<-c("a", "b", "a", "a") # +observation_one <- c("AMCA3", "AMCA3", "AMCA3", "AMCA3") +observation_two <- c("a", "b", "a", "a") # -format_one<-"species_USDA_symbol" -format_two<-"plant_funtional_type" +format_one <- "species_USDA_symbol" +format_two <- "plant_functional_type" -table<-list() -table$plant_functional_type_one<- c("AMCA3","AMCA3","ARHY", "ARHY") -table$plant_functional_type_two<- c('a','a','b', 'b') # PFT groupings -table<-as.data.frame(table) +table <- list() +table$plant_functional_type_one <- c("AMCA3", "AMCA3", "ARHY", "ARHY") +table$plant_functional_type_two <- c("a", "a", "b", "b") # PFT groupings +table <- as.data.frame(table) -aligned<-align_pft(con = con, observation_one = observation_one, observation_two = observation_two, -format_one = format_one, format_two = format_two, custom_table = table) +aligned <- align_pft( + con = con, observation_one = observation_one, observation_two = observation_two, + format_one = format_one, format_two = format_two, custom_table = table +) } } diff --git a/modules/benchmark/man/check_if_species_list.Rd b/modules/benchmark/man/check_if_species_list.Rd index 60357127e67..c10dab46a53 100644 --- a/modules/benchmark/man/check_if_species_list.Rd +++ b/modules/benchmark/man/check_if_species_list.Rd @@ -9,7 +9,7 @@ check_if_species_list(vars, custom_table = NULL) \arguments{ \item{vars}{format} -\item{custom_table}{a table that either maps two pft's to one anouther or maps custom species codes to bety id codes. +\item{custom_table}{a table that either maps two pft's to one another or maps custom species codes to bety id codes.
In the second case, must be passable to match_species_id.} } \value{ diff --git a/modules/data.atmosphere/man/align.met.Rd b/modules/data.atmosphere/man/align.met.Rd index 7620095fccc..618e6087ef5 100644 --- a/modules/data.atmosphere/man/align.met.Rd +++ b/modules/data.atmosphere/man/align.met.Rd @@ -21,25 +21,25 @@ align.met( \item{source.path}{- data to be bias-corrected aligned with training data (from align.met)} -\item{yrs.train}{- (optional) specify a specific years to be loaded for the training data; -prevents needing to load the entire dataset. If NULL, all available years +\item{yrs.train}{- (optional) specify specific years to be loaded for the training data; +prevents needing to load the entire dataset. If NULL, all available years will be loaded. If not null, should be a vector of numbers (so you can skip problematic years)} \item{yrs.source}{- (optional) specify a specific years to be loaded for the source data; -prevents needing to load the entire dataset. If NULL, all available years +prevents needing to load the entire dataset. If NULL, all available years will be loaded. If not null, should be a vector of numbers (so you can skip problematic years)} \item{n.ens}{- number of ensemble members to generate and save} -\item{pair.mems}{- logical stating whether ensemble members should be paired in +\item{pair.mems}{- logical stating whether ensemble members should be paired in the case where ensembles are being read in in both the training and source data} -\item{mems.train}{- (optional) string of ensemble identifiers that ensure the training data is read +\item{mems.train}{- (optional) string of ensemble identifiers that ensure the training data is read in a specific order to ensure consistent time series & proper error propagation. -If null, members of the training data ensemble will be randomly selected and -ordered. Specifying the ensemble members IDs (e.g.
CCSM_001, CCSM_002) will +If null, members of the training data ensemble will be randomly selected and +ordered. Specifying the ensemble member IDs (e.g. CCSM_001, CCSM_002) will ensure ensemble members are properly identified and combined.} \item{seed}{- specify seed so that random draws can be reproduced} @@ -50,28 +50,28 @@ ensure ensemble members are properly identified and combined.} 2-layered list (stored in memory) containing the training and source data that are now matched in temporal resolution have the specified number of ensemble members - dat.train (training dataset) and dat.source (source data to be downscaled or bias-corrected) - are both lists that contain separate data frames for time indices and all available met + are both lists that contain separate data frames for time indices and all available met variables with ensemble members in columns } \description{ -This script aligns meteorology datasets in at temporal resolution for debiasing & - temporal downscaling. - Note: The output here is stored in memory! - Note: can probably at borrow from or adapt align_data.R in Benchmarking module, but +This script aligns meteorology datasets in temporal resolution for debiasing & + temporal downscaling. + Note: The output here is stored in memory! + Note: can probably borrow from or adapt align_data.R in Benchmarking module, but it's too much of a black box at the moment. } \details{ Align meteorology datasets for debiasing -1. Assumes that both the training and source data are in *at least* daily resolution - and each dataset is in a consistent temporal resolution being read from a single file - (CF/Pecan format). For example, CMIP5 historical/p1000 runs where radiation drivers +1. Assumes that both the training and source data are in *at least* daily resolution + and each dataset is in a consistent temporal resolution being read from a single file + (CF/Pecan format).
For example, CMIP5 historical/p1000 runs where radiation drivers are in monthly resolution and temperature is in daily will need to be reconciled using one of the "met2CF" or "download" or "extract" functions - 2. Default file structure: Ensembles members for a given site or set of simes are housed - in a common folder with the site ID. Right now everything is based off of Christy's + 2. Default file structure: Ensemble members for a given site or set of sites are housed + in a common folder with the site ID. Right now everything is based off of Christy's PalEON ensemble ID scheme where the site ID is a character string (e.g. HARVARD) followed - the SOURCE data family (i.e. GCM) as a string and then the ensemble member ID as a number + by the SOURCE data family (i.e. GCM) as a string and then the ensemble member ID as a number (e.g. 001). For example, the file path for a single daily ensemble member for PalEON is: "~/Desktop/Research/met_ensembles/data/met_ensembles/HARVARD/day/ensembles/bcc-csm1-1_004" with each year in a separate netcdf file inside of it. "bcc-csm1-1_004" is an example of diff --git a/modules/data.atmosphere/man/cos_solar_zenith_angle.Rd b/modules/data.atmosphere/man/cos_solar_zenith_angle.Rd index 2a5565c42bf..6a446470e84 100644 --- a/modules/data.atmosphere/man/cos_solar_zenith_angle.Rd +++ b/modules/data.atmosphere/man/cos_solar_zenith_angle.Rd @@ -21,7 +21,7 @@ cos_solar_zenith_angle(doy, lat, lon, dt, hr) Numeric value representing the cosine of the solar zenith angle. } \description{ -Calculates the cosine of the solar zenith angle based on the given parameters. +Calculates the cosine of the solar zenith angle based on the given parameters. This angle is crucial in determining the amount of solar radiation reaching a point on Earth.
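Reviewer note on the `cos_solar_zenith_angle` hunk above: the docs describe the quantity but not the relation behind it. For orientation only, the textbook solar-geometry form can be sketched in R; `cos_zenith_sketch` and its declination approximation are hypothetical and are not the package's actual implementation (which also takes `lon` and `dt`):

```r
# Hypothetical sketch, NOT PEcAn's implementation:
# cos(zenith) = sin(lat) * sin(decl) + cos(lat) * cos(decl) * cos(hour_angle)
cos_zenith_sketch <- function(doy, lat, hr) {
  rad <- pi / 180
  decl <- -23.45 * rad * cos(2 * pi * (doy + 10) / 365) # approximate solar declination
  hour_angle <- (hr - 12) * 15 * rad # the sun moves 15 degrees per hour
  sin(lat * rad) * sin(decl) + cos(lat * rad) * cos(decl) * cos(hour_angle)
}
cos_zenith_sketch(doy = 172, lat = 45, hr = 12) # largest near solar noon in midsummer
```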
} \details{ diff --git a/modules/data.atmosphere/man/download.Ameriflux.Rd b/modules/data.atmosphere/man/download.Ameriflux.Rd index 15091897fc2..d47fca5f3d0 100644 --- a/modules/data.atmosphere/man/download.Ameriflux.Rd +++ b/modules/data.atmosphere/man/download.Ameriflux.Rd @@ -15,7 +15,7 @@ download.Ameriflux( ) } \arguments{ -\item{sitename}{the FLUXNET ID of the site to be downloaded, used as file name prefix. +\item{sitename}{the FLUXNET ID of the site to be downloaded, used as file name prefix. The 'SITE_ID' field in \href{http://ameriflux.lbl.gov/sites/site-list-and-pages/}{list of Ameriflux sites}} \item{outfolder}{location on disk where outputs will be stored} diff --git a/modules/data.atmosphere/man/download.Fluxnet2015.Rd b/modules/data.atmosphere/man/download.Fluxnet2015.Rd index 72c7f309ee4..b8b51959d0b 100644 --- a/modules/data.atmosphere/man/download.Fluxnet2015.Rd +++ b/modules/data.atmosphere/man/download.Fluxnet2015.Rd @@ -16,7 +16,7 @@ download.Fluxnet2015( ) } \arguments{ -\item{sitename}{the FLUXNET ID of the site to be downloaded, used as file name prefix. +\item{sitename}{the FLUXNET ID of the site to be downloaded, used as file name prefix. 
The 'SITE_ID' field in \href{https://fluxnet.org/sites/site-list-and-pages/}{list of Ameriflux sites}} \item{outfolder}{location on disk where outputs will be stored} diff --git a/modules/data.atmosphere/man/download.Geostreams.Rd b/modules/data.atmosphere/man/download.Geostreams.Rd index 26c839e2528..f98498ff2ce 100644 --- a/modules/data.atmosphere/man/download.Geostreams.Rd +++ b/modules/data.atmosphere/man/download.Geostreams.Rd @@ -57,10 +57,12 @@ If using `~/.pecan.clowder.xml`, it must be a valid PEcAn-formatted XML settings } \examples{ \dontrun{ - download.Geostreams(outfolder = "~/output/dbfiles/Clowder_EF", - sitename = "UIUC Energy Farm - CEN", - start_date = "2016-01-01", end_date="2016-12-31", - key="verysecret") +download.Geostreams( + outfolder = "~/output/dbfiles/Clowder_EF", + sitename = "UIUC Energy Farm - CEN", + start_date = "2016-01-01", end_date = "2016-12-31", + key = "verysecret" +) } } \author{ diff --git a/modules/data.atmosphere/man/download.ICOS.Rd b/modules/data.atmosphere/man/download.ICOS.Rd index 09ea66b81a4..df98f2386c5 100644 --- a/modules/data.atmosphere/man/download.ICOS.Rd +++ b/modules/data.atmosphere/man/download.ICOS.Rd @@ -33,13 +33,13 @@ download.ICOS( information about the output file } \description{ -Currently available products: +Currently available products: Drought-2018 ecosystem eddy covariance flux product https://www.icos-cp.eu/data-products/YVR0-4898 ICOS Final Fully Quality Controlled Observational Data (Level 2) https://www.icos-cp.eu/data-products/ecosystem-release } \examples{ \dontrun{ -download.ICOS("FI-Sii", "/home/carya/pecan", "2016-01-01", "2018-01-01", product="Drought2018") +download.ICOS("FI-Sii", "/home/carya/pecan", "2016-01-01", "2018-01-01", product = "Drought2018") } } \author{ diff --git a/modules/data.atmosphere/man/download.NARR_site.Rd b/modules/data.atmosphere/man/download.NARR_site.Rd index 0cbdf407772..20deedc01f5 100644 --- a/modules/data.atmosphere/man/download.NARR_site.Rd +++ 
b/modules/data.atmosphere/man/download.NARR_site.Rd @@ -47,12 +47,10 @@ Requires the `progress` package to be installed.} Download NARR time series for a single site } \examples{ - \dontrun{ download.NARR_site(tempdir(), "2001-01-01", "2001-01-12", 43.372, -89.907) } - } \author{ Alexey Shiklomanov diff --git a/modules/data.atmosphere/man/download.NEONmet.Rd b/modules/data.atmosphere/man/download.NEONmet.Rd index c8ffa061ae6..7abc1bfd46a 100644 --- a/modules/data.atmosphere/man/download.NEONmet.Rd +++ b/modules/data.atmosphere/man/download.NEONmet.Rd @@ -15,7 +15,7 @@ download.NEONmet( ) } \arguments{ -\item{sitename}{the NEON ID of the site to be downloaded, used as file name prefix. +\item{sitename}{the NEON ID of the site to be downloaded, used as file name prefix. The 4-letter SITE code in \href{https://www.neonscience.org/science-design/field-sites/list}{list of NEON sites}} \item{outfolder}{location on disk where outputs will be stored} diff --git a/modules/data.atmosphere/man/download.NOAA_GEFS.Rd b/modules/data.atmosphere/man/download.NOAA_GEFS.Rd index 05aa332be43..aa01cffd137 100644 --- a/modules/data.atmosphere/man/download.NOAA_GEFS.Rd +++ b/modules/data.atmosphere/man/download.NOAA_GEFS.Rd @@ -50,14 +50,14 @@ Download NOAA GEFS Weather Data } \section{Information on Units}{ -Information on NOAA weather units can be found below. Note that the temperature is measured in degrees C, +Information on NOAA weather units can be found below. Note that the temperature is measured in degrees C, but is converted at the station and downloaded in Kelvin. } \section{NOAA_GEFS General Information}{ -This function downloads NOAA GEFS weather data. GEFS is an ensemble of 21 different weather forecast models. -A 16 day forecast is avaliable every 6 hours. Each forecast includes information on a total of 8 variables. +This function downloads NOAA GEFS weather data. GEFS is an ensemble of 21 different weather forecast models. +A 16-day forecast is available every 6 hours.
Each forecast includes information on a total of 8 variables. These are transformed from the NOAA standard to the internal PEcAn standard. } @@ -79,9 +79,9 @@ June 6th, 2018 at 6:00 a.m. to June 24th, 2018 at 6:00 a.m. \examples{ \dontrun{ - download.NOAA_GEFS(outfolder="~/Working/results", - lat.in= 45.805925, - lon.in = -90.07961, + download.NOAA_GEFS(outfolder = "~/Working/results", + lat.in = 45.805925, + lon.in = -90.07961, site_id = 676) } diff --git a/modules/data.atmosphere/man/extract.local.CMIP5.Rd b/modules/data.atmosphere/man/extract.local.CMIP5.Rd index 14eb0142c7e..1cf25d0495a 100644 --- a/modules/data.atmosphere/man/extract.local.CMIP5.Rd +++ b/modules/data.atmosphere/man/extract.local.CMIP5.Rd @@ -42,7 +42,7 @@ extract.local.CMIP5( \item{ensemble_member}{which CMIP5 experiment ensemble member} \item{date.origin}{(optional) specify the date of origin for timestamps in the files being read. -If NULL defaults to 1850 for historical simulations (except MPI-ESM-P) and +If NULL defaults to 1850 for historical simulations (except MPI-ESM-P) and 850 for p1000 simulations (plus MPI-ESM-P historical). Format: YYYY-MM-DD} \item{adjust.pr}{- adjustment factor fore precipitation when the extracted values seem off} @@ -57,7 +57,7 @@ If NULL defaults to 1850 for historical simulations (except MPI-ESM-P) and This function extracts CMIP5 data from grids that have been downloaded and stored locally. Files are saved as a netCDF file in CF conventions at *DAILY* resolution. Note: At this point in time, variables that are only available at a native monthly resolution will be repeated to - give a pseudo-daily record (and can get dealt with in the downscaling workflow). These files + give a pseudo-daily record (and can get dealt with in the downscaling workflow). These files are ready to be used in the general PEcAn workflow or fed into the downscaling workflow.
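Reviewer note: the `extract.local.CMIP5` description above says natively monthly variables are repeated to give a pseudo-daily record. A minimal sketch of that repetition in R (illustrative only; the variable names are invented, and the real code must also handle leap years and model calendars):

```r
# Repeat one year of monthly values so every day in a month shares that month's value
days_in_month <- c(31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31)
monthly_vals <- rnorm(12) # stand-in for a monthly-resolution driver
pseudo_daily <- rep(monthly_vals, times = days_in_month)
length(pseudo_daily) # 365
```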
} \author{ diff --git a/modules/data.atmosphere/man/extract.local.NLDAS.Rd b/modules/data.atmosphere/man/extract.local.NLDAS.Rd index b9a9f2e88b8..f1aa1c2ced1 100644 --- a/modules/data.atmosphere/man/extract.local.NLDAS.Rd +++ b/modules/data.atmosphere/man/extract.local.NLDAS.Rd @@ -39,9 +39,9 @@ to control printing of debug info} } \description{ This function extracts NLDAS data from grids that have been downloaded and stored locally. - Once upon a time, you could query these files directly from the internet, but now they're - behind a tricky authentication wall. Files are saved as a netCDF file in CF conventions. - These files are ready to be used in the general PEcAn workflow or fed into the downscaling + Once upon a time, you could query these files directly from the internet, but now they're + behind a tricky authentication wall. Files are saved as a netCDF file in CF conventions. + These files are ready to be used in the general PEcAn workflow or fed into the downscaling workflow. 
} \author{ diff --git a/modules/data.atmosphere/man/extract.nc.ERA5.Rd b/modules/data.atmosphere/man/extract.nc.ERA5.Rd index d377f131bd4..9b5d55217f3 100644 --- a/modules/data.atmosphere/man/extract.nc.ERA5.Rd +++ b/modules/data.atmosphere/man/extract.nc.ERA5.Rd @@ -58,9 +58,9 @@ For the list of variables check out the documentation at \url{ } \examples{ \dontrun{ -point.data <- ERA5_extract(sslat=40, slon=-120, years=c(1990:1995), vars=NULL) - - purrr::map(~xts::apply.daily(.x, mean)) +point.data <- ERA5_extract(sslat = 40, slon = -120, years = c(1990:1995), vars = NULL) +# point.data \%>\% +purrr::map(~ xts::apply.daily(.x, mean)) } } diff --git a/modules/data.atmosphere/man/gen.subdaily.models.Rd b/modules/data.atmosphere/man/gen.subdaily.models.Rd index 3c77bc657c9..13b4e59739a 100644 --- a/modules/data.atmosphere/man/gen.subdaily.models.Rd +++ b/modules/data.atmosphere/man/gen.subdaily.models.Rd @@ -26,7 +26,7 @@ gen.subdaily.models( \item{path.train}{- path to CF/PEcAn style training data where each year is in a separate file.} -\item{yrs.train}{- which years of the training data should be used for to generate the model for +\item{yrs.train}{- which years of the training data should be used for to generate the model for the subdaily cycle. If NULL, will default to all years} \item{direction.filter}{- Whether the model will be filtered backward or forward in time. options = c("backward", "forward") @@ -36,7 +36,7 @@ the subdaily cycle. If NULL, will default to all years} \item{n.beta}{- number of betas to save from linear regression model} -\item{day.window}{- integer specifying number of days around the day being modeled you want to use data from for that +\item{day.window}{- integer specifying number of days around the day being modeled you want to use data from for that specific hours coefficients. 
Must be integer because we want statistics from the same time of day for each day surrounding the model day} diff --git a/modules/data.atmosphere/man/get.rh.Rd b/modules/data.atmosphere/man/get.rh.Rd index 34056a27891..1b575315550 100644 --- a/modules/data.atmosphere/man/get.rh.Rd +++ b/modules/data.atmosphere/man/get.rh.Rd @@ -23,7 +23,7 @@ Relative Humidity and the Dewpoint Temperature in Moist Air A Simple Conversion and Applications. BAMS https://doi.org/10.1175/BAMS-86-2-225 R = 461.5 K-1 kg-1 gas constant H2O -L enthalpy of vaporization +L enthalpy of vaporization linear dependence on T (p 226, following eq 9) } \author{ diff --git a/modules/data.atmosphere/man/get_NARR_thredds.Rd b/modules/data.atmosphere/man/get_NARR_thredds.Rd index 58896d47f1a..5d9e9dd691e 100644 --- a/modules/data.atmosphere/man/get_NARR_thredds.Rd +++ b/modules/data.atmosphere/man/get_NARR_thredds.Rd @@ -42,7 +42,6 @@ Requires the `progress` package to be installed.} Retrieve NARR data using thredds } \examples{ - \dontrun{ dat <- get_NARR_thredds("2008-01-01", "2008-01-15", 43.3724, -89.9071) } diff --git a/modules/data.atmosphere/man/merge_met_variable.Rd b/modules/data.atmosphere/man/merge_met_variable.Rd index 66821bf6bb7..3271885774e 100644 --- a/modules/data.atmosphere/man/merge_met_variable.Rd +++ b/modules/data.atmosphere/man/merge_met_variable.Rd @@ -35,23 +35,23 @@ print debugging information as they run?} Currently nothing. TODO: Return a data frame summarizing the merged files. } \description{ -Currently modifies the files IN PLACE rather than creating a new copy of the files an a new DB record. -Currently unit and name checking only implemented for CO2. +Currently modifies the files IN PLACE rather than creating a new copy of the files and a new DB record. +Currently unit and name checking is only implemented for CO2. Currently does not yet support merge data that has lat/lon New variable only has time dimension and thus MIGHT break downstream code....
} \examples{ \dontrun{ -in.path <- "~/paleon/PalEONregional_CF_site_1-24047/" -in.prefix <- "" -outfolder <- "~/paleon/metTest/" +in.path <- "~/paleon/PalEONregional_CF_site_1-24047/" +in.prefix <- "" +outfolder <- "~/paleon/metTest/" merge.file <- "~/paleon/paleon_monthly_co2.nc" start_date <- "0850-01-01" -end_date <- "2010-12-31" -overwrite <- FALSE -verbose <- TRUE +end_date <- "2010-12-31" +overwrite <- FALSE +verbose <- TRUE -merge_met_variable(in.path,in.prefix,start_date,end_date,merge.file,overwrite,verbose) -PEcAn.DALEC::met2model.DALEC(in.path,in.prefix,outfolder,start_date,end_date) +merge_met_variable(in.path, in.prefix, start_date, end_date, merge.file, overwrite, verbose) +PEcAn.DALEC::met2model.DALEC(in.path, in.prefix, outfolder, start_date, end_date) } } diff --git a/modules/data.atmosphere/man/met.process.Rd b/modules/data.atmosphere/man/met.process.Rd index fce6b829b41..429e33d2c69 100644 --- a/modules/data.atmosphere/man/met.process.Rd +++ b/modules/data.atmosphere/man/met.process.Rd @@ -39,7 +39,7 @@ met.process( \item{overwrite}{Whether to force met.process to proceed. - `overwrite` may be a list with individual components corresponding to + `overwrite` may be a list with individual components corresponding to `download`, `met2cf`, `standardize`, and `met2model`. If it is instead a simple boolean, the default behavior for `overwrite=FALSE` is to overwrite nothing, as you might expect. 
Note however that the default behavior for `overwrite=TRUE` is to overwrite everything diff --git a/modules/data.atmosphere/man/met2CF.AmerifluxLBL.Rd b/modules/data.atmosphere/man/met2CF.AmerifluxLBL.Rd index 9ea8eeeca6b..94f723cbba1 100644 --- a/modules/data.atmosphere/man/met2CF.AmerifluxLBL.Rd +++ b/modules/data.atmosphere/man/met2CF.AmerifluxLBL.Rd @@ -43,7 +43,7 @@ format is output from db/R/query.format.vars, and should have: format$na.strings = list of missing values to convert to NA, such as -9999 format$skip = lines to skip excluding header format$vars$column_number = Column number in CSV file (optional, will use header name first) -Columns with NA for bety variable name are dropped. +Columns with NA for bety variable name are dropped. Units for datetime field are the lubridate function that will be used to parse the date (e.g. \code{ymd_hms} or \code{mdy_hm}).} \item{overwrite}{should existing files be overwritten} diff --git a/modules/data.atmosphere/man/met2CF.Geostreams.Rd b/modules/data.atmosphere/man/met2CF.Geostreams.Rd index c5a4f3f4496..9cb0bf9024a 100644 --- a/modules/data.atmosphere/man/met2CF.Geostreams.Rd +++ b/modules/data.atmosphere/man/met2CF.Geostreams.Rd @@ -26,7 +26,7 @@ met2CF.Geostreams( \item{overwrite}{logical: Regenerate existing files of the same name?} -\item{verbose}{logical, passed on to \code{\link[ncdf4]{nc_create}} +\item{verbose}{logical, passed on to \code{\link[ncdf4]{nc_create}} to control how chatty it should be during netCDF creation} \item{...}{other arguments, currently ignored} diff --git a/modules/data.atmosphere/man/met2CF.csv.Rd b/modules/data.atmosphere/man/met2CF.csv.Rd index 13add76a040..3ef8d204a49 100644 --- a/modules/data.atmosphere/man/met2CF.csv.Rd +++ b/modules/data.atmosphere/man/met2CF.csv.Rd @@ -78,23 +78,27 @@ Units for datetime field are the lubridate function that will be used to \examples{ \dontrun{ con <- PEcAn.DB::db.open( - list(user='bety', password='bety', host='localhost', - dbname='bety', 
driver='PostgreSQL',write=TRUE)) -start_date <- lubridate::ymd_hm('200401010000') -end_date <- lubridate::ymd_hm('200412312330') -file<-PEcAn.data.atmosphere::download.Fluxnet2015('US-WCr','~/',start_date,end_date) -in.path <- '~/' + list( + user = "bety", password = "bety", host = "localhost", + dbname = "bety", driver = "PostgreSQL", write = TRUE + ) +) +start_date <- lubridate::ymd_hm("200401010000") +end_date <- lubridate::ymd_hm("200412312330") +file <- PEcAn.data.atmosphere::download.Fluxnet2015("US-WCr", "~/", start_date, end_date) +in.path <- "~/" in.prefix <- file$dbfile.name -outfolder <- '~/' +outfolder <- "~/" format.id <- 5000000001 -format <- PEcAn.DB::query.format.vars(format.id=format.id,bety = bety) +format <- PEcAn.DB::query.format.vars(format.id = format.id, bety = bety) format$lon <- -92.0 format$lat <- 45.0 format$time_zone <- "America/Chicago" results <- PEcAn.data.atmosphere::met2CF.csv( in.path, in.prefix, outfolder, start_date, end_date, format, - overwrite=TRUE) + overwrite = TRUE +) } } diff --git a/modules/data.atmosphere/man/met_temporal_downscale.Gaussian_ensemble.Rd b/modules/data.atmosphere/man/met_temporal_downscale.Gaussian_ensemble.Rd index 253cc6dc550..0d25e36e903 100644 --- a/modules/data.atmosphere/man/met_temporal_downscale.Gaussian_ensemble.Rd +++ b/modules/data.atmosphere/man/met_temporal_downscale.Gaussian_ensemble.Rd @@ -28,7 +28,7 @@ met_temporal_downscale.Gaussian_ensemble( \item{input_met}{- the source dataset that will temporally downscaled by the train_met dataset} -\item{train_met}{- the observed dataset that will be used to train the modeled dataset in NC format. i.e. Flux Tower dataset +\item{train_met}{- the observed dataset that will be used to train the modeled dataset in NC format. i.e. 
Flux Tower dataset (see download.Fluxnet2015 or download.Ameriflux)} \item{overwrite}{logical: replace output file if it already exists?} diff --git a/modules/data.atmosphere/man/nc.merge.Rd b/modules/data.atmosphere/man/nc.merge.Rd index dbed3d19330..4f928804a70 100644 --- a/modules/data.atmosphere/man/nc.merge.Rd +++ b/modules/data.atmosphere/man/nc.merge.Rd @@ -39,7 +39,7 @@ functions print debugging information as they run?} \description{ This is the 1st function for the tdm (Temporally Downscale Meteorology) workflow. The nc2dat.train function parses multiple netCDF files into one central training data file called 'dat.train_file'. This netCDF - file will be used to generate the subdaily models in the next step of the workflow, generate.subdaily.models(). + file will be used to generate the subdaily models in the next step of the workflow, generate.subdaily.models(). It is also called in tdm_predict_subdaily_met which is the final step of the tdm workflow. } \details{ diff --git a/modules/data.atmosphere/man/noaa_stage2.Rd b/modules/data.atmosphere/man/noaa_stage2.Rd index 917ebccb6be..1efc2e823b7 100644 --- a/modules/data.atmosphere/man/noaa_stage2.Rd +++ b/modules/data.atmosphere/man/noaa_stage2.Rd @@ -13,7 +13,7 @@ noaa_stage2( ) } \arguments{ -\item{cycle}{Hour at which forecast was made, as character string +\item{cycle}{Hour at which forecast was made, as character string (`"00"`, `"06"`, `"12"` or `"18"`). Only `"00"` (default) has 30 days horizon.} \item{version}{GEFS forecast version. 
Prior versions correspond to forecasts diff --git a/modules/data.atmosphere/man/predict_subdaily_met.Rd b/modules/data.atmosphere/man/predict_subdaily_met.Rd index 18453131757..007ef20587b 100644 --- a/modules/data.atmosphere/man/predict_subdaily_met.Rd +++ b/modules/data.atmosphere/man/predict_subdaily_met.Rd @@ -27,8 +27,8 @@ predict_subdaily_met( \arguments{ \item{outfolder}{- directory where output file will be stored} -\item{in.path}{- base path to dataset you wish to temporally downscale; Note: in order for parallelization -to work, the in.prefix will need to be appended as the final level of the file structure. +\item{in.path}{- base path to dataset you wish to temporally downscale; Note: in order for parallelization +to work, the in.prefix will need to be appended as the final level of the file structure. For example, if prefix is GFDL.CM3.rcp45.r1i1p1, there should be a directory with that title in in.path.} \item{in.prefix}{- prefix of model dataset, i.e. if file is GFDL.CM3.rcp45.r1i1p1.2006 the prefix is 'GFDL.CM3.rcp45.r1i1p1'} @@ -42,7 +42,7 @@ For example, if prefix is GFDL.CM3.rcp45.r1i1p1, there should be a directory wit \item{yrs.predict}{- years for which you want to generate met. 
if NULL, all years in in.path will be done} -\item{ens.labs}{- vector containing the labels (suffixes) for each ensemble member; this allows you to add to your +\item{ens.labs}{- vector containing the labels (suffixes) for each ensemble member; this allows you to add to your ensemble rather than overwriting with a default naming scheme} \item{resids}{- logical stating whether to pass on residual data or not} diff --git a/modules/data.atmosphere/man/spin.met.Rd b/modules/data.atmosphere/man/spin.met.Rd index 4e2afa9a7e5..318ac5ed5e3 100644 --- a/modules/data.atmosphere/man/spin.met.Rd +++ b/modules/data.atmosphere/man/spin.met.Rd @@ -42,8 +42,8 @@ updated start date Spin-up meteorology } \details{ -spin.met works by creating symbolic links to the sampled met file, -rather than copying the whole file. Be aware that the internal dates in +spin.met works by creating symbolic links to the sampled met file, +rather than copying the whole file. Be aware that the internal dates in those files are not modified. 
Right now this is designed to be called within met2model.[MODEL] before the met is processed (it's designed to work with annual CF files not model-specific files) for example with models that process met @@ -51,18 +51,19 @@ into one large file } \examples{ start_date <- "0850-01-01 00:00:00" -end_date <- "2010-12-31 23:59:59" -nyear <- 10 -nsample <- 50 -resample <- TRUE +end_date <- "2010-12-31 23:59:59" +nyear <- 10 +nsample <- 50 +resample <- TRUE \dontrun{ -if(!is.null(spin)){ - ## if spinning up, extend processed met by resampling or cycling met - start_date <- PEcAn.data.atmosphere::spin.met( - in.path, in.prefix, - start_date, end_date, - nyear, nsample, resample) +if (!is.null(spin)) { + ## if spinning up, extend processed met by resampling or cycling met + start_date <- PEcAn.data.atmosphere::spin.met( + in.path, in.prefix, + start_date, end_date, + nyear, nsample, resample + ) } } } diff --git a/modules/data.atmosphere/man/split_wind.Rd b/modules/data.atmosphere/man/split_wind.Rd index 02747a03110..13ddb679d4b 100644 --- a/modules/data.atmosphere/man/split_wind.Rd +++ b/modules/data.atmosphere/man/split_wind.Rd @@ -35,13 +35,13 @@ Currently modifies the files IN PLACE rather than creating a new copy of the fil } \examples{ \dontrun{ -in.path <- "~/paleon/PalEONregional_CF_site_1-24047/" -in.prefix <- "" -outfolder <- "~/paleon/metTest/" +in.path <- "~/paleon/PalEONregional_CF_site_1-24047/" +in.prefix <- "" +outfolder <- "~/paleon/metTest/" start_date <- "0850-01-01" -end_date <- "2010-12-31" -overwrite <- FALSE -verbose <- TRUE +end_date <- "2010-12-31" +overwrite <- FALSE +verbose <- TRUE split_wind(in.path, in.prefix, start_date, end_date, merge.file, overwrite, verbose) } diff --git a/modules/data.atmosphere/man/temporal.downscale.functions.Rd b/modules/data.atmosphere/man/temporal.downscale.functions.Rd index 654fc66d6d4..31b27a6578e 100644 --- a/modules/data.atmosphere/man/temporal.downscale.functions.Rd +++ 
b/modules/data.atmosphere/man/temporal.downscale.functions.Rd @@ -42,12 +42,12 @@ still being worked on, set to FALSE} } \description{ This function contains the functions that do the heavy lifting in gen.subdaily.models() - and predict.subdaily.workflow(). Individual variable functions actually generate the models - and betas from the dat.train_file and save them in the output file. save.model() and - save.betas() are helper functions that save the linear regression model output to a - specific location. In the future, we should only save the data that we actually use from the + and predict.subdaily.workflow(). Individual variable functions actually generate the models + and betas from the dat.train_file and save them in the output file. save.model() and + save.betas() are helper functions that save the linear regression model output to a + specific location. In the future, we should only save the data that we actually use from the linear regression model because this is a large file. predict.met() is called from - predict.subdaily.workflow() and references the linear regression model output to + predict.subdaily.workflow() and references the linear regression model output to predict the ensemble data. 
} \details{ diff --git a/modules/data.land/man/BADM.Rd b/modules/data.land/man/BADM.Rd index 9e67507130e..9be324b847e 100644 --- a/modules/data.land/man/BADM.Rd +++ b/modules/data.land/man/BADM.Rd @@ -15,8 +15,8 @@ A data frame with 12,300 rows and 13 columns: \item{VARIABLE_GROUP}{category, eg abovground biomass or soil chemistry} \item{VARIABLE, DATAVALUE}{key and value for each measured variable} \item{NA_L1CODE, NA_L1NAME, NA_L2CODE, NA_L2NAME}{ - numeric IDs and names for the Level 1 and level 2 ecoregions where - this site is located} + numeric IDs and names for the Level 1 and level 2 ecoregions where + this site is located} } } \source{ diff --git a/modules/data.land/man/EPA_ecoregion_finder.Rd b/modules/data.land/man/EPA_ecoregion_finder.Rd index 1b995fd8b1b..6bf22e7ef7b 100644 --- a/modules/data.land/man/EPA_ecoregion_finder.Rd +++ b/modules/data.land/man/EPA_ecoregion_finder.Rd @@ -17,6 +17,6 @@ EPA_ecoregion_finder(Lat, Lon, folder.path = NULL) a dataframe with codes corresponding to level1 and level2 codes as two columns } \description{ -This function is designed to find the level1 and level2 code ecoregions for a given lat and long. +This function is designed to find the level1 and level2 code ecoregions for a given lat and long. You can learn more about ecoregions here: \url{https://www.epa.gov/eco-research/ecoregions}. } diff --git a/modules/data.land/man/Read.IC.info.BADM.Rd b/modules/data.land/man/Read.IC.info.BADM.Rd index d81d2631c11..d1926367252 100644 --- a/modules/data.land/man/Read.IC.info.BADM.Rd +++ b/modules/data.land/man/Read.IC.info.BADM.Rd @@ -17,12 +17,12 @@ a dataframe with 7 columns of Site, Variable, Date, Organ, AGB, soil_organic_car } \description{ This function returns a dataframe of plant biomass, root and soil carbon for a set of lat and long coordinates. -This function first finds the level1 and level2 ecoregions for the given coordinates, and then tries to filter BADM database for those eco-regions. 
+This function first finds the level1 and level2 ecoregions for the given coordinates, and then tries to filter the BADM database for those eco-regions. If no data found in the BADM database for the given lat/longs eco-regions, then all the data in the database will be used to return the initial condition. All the variables are also converted to kg/m^2. } \examples{ \dontrun{ - badm_test <- Read.IC.info.BADM(45.805925,-90.07961) +badm_test <- Read.IC.info.BADM(45.805925, -90.07961) } } diff --git a/modules/data.land/man/Read_Tucson.Rd b/modules/data.land/man/Read_Tucson.Rd index d008e9bc65c..278c9f2345e 100644 --- a/modules/data.land/man/Read_Tucson.Rd +++ b/modules/data.land/man/Read_Tucson.Rd @@ -11,7 +11,7 @@ Read_Tucson(folder) Will read all files at this path matching "TXT", "rwl", or "rw"} } \description{ -wrapper around read.tucson that loads a whole directory of tree ring files -and calls a 'clean' function that removes redundant records +wrapper around read.tucson that loads a whole directory of tree ring files +and calls a 'clean' function that removes redundant records (WinDendro can sometimes create duplicate records when editing) } diff --git a/modules/data.land/man/dataone_download.Rd b/modules/data.land/man/dataone_download.Rd index ec9ce4d716b..09e4453565a 100644 --- a/modules/data.land/man/dataone_download.Rd +++ b/modules/data.land/man/dataone_download.Rd @@ -28,7 +28,7 @@ Adapts the dataone::getDataPackage workflow to allow users to download data from } \examples{ \dontrun{ -dataone_download(id = "doi:10.6073/pasta/63ad7159306bc031520f09b2faefcf87", +dataone_download(id = "doi:10.6073/pasta/63ad7159306bc031520f09b2faefcf87", filepath = "/fs/data1/pecan.data/dbfiles") } } diff --git a/modules/data.land/man/download_NEON_soilmoist.Rd b/modules/data.land/man/download_NEON_soilmoist.Rd index 2ce60df1737..fbdae810e33 100644 --- a/modules/data.land/man/download_NEON_soilmoist.Rd +++ b/modules/data.land/man/download_NEON_soilmoist.Rd @@ -26,8 +26,8 @@ Both
variables will be saved in outdir automatically (chr)} \item{enddate}{start date as YYYY-mm. If left empty, all data available will be downloaded (chr)} \item{outdir}{out directory to store the following data: -.rds list files of SWC and SIC data for each site and sensor position, -sensor positions .csv for each site, +.rds list files of SWC and SIC data for each site and sensor position, +sensor positions .csv for each site, variable description .csv file, readme .csv file} } diff --git a/modules/data.land/man/extract_NEON_veg.Rd b/modules/data.land/man/extract_NEON_veg.Rd index f3c033e5f46..043e93bfc7c 100644 --- a/modules/data.land/man/extract_NEON_veg.Rd +++ b/modules/data.land/man/extract_NEON_veg.Rd @@ -36,8 +36,8 @@ veg_info object to be passed to extract_veg within ic_process extract_NEON_veg } \examples{ -start_date = as.Date("2020-01-01") -end_date = as.Date("2021-09-01") +start_date <- as.Date("2020-01-01") +end_date <- as.Date("2021-09-01") } \author{ Alexis Helgeson and Michael Dietze diff --git a/modules/data.land/man/extract_soil_gssurgo.Rd b/modules/data.land/man/extract_soil_gssurgo.Rd index d8231132824..9b258cae7d5 100644 --- a/modules/data.land/man/extract_soil_gssurgo.Rd +++ b/modules/data.land/man/extract_soil_gssurgo.Rd @@ -34,10 +34,10 @@ Extract soil data from gssurgo } \examples{ \dontrun{ - outdir <- "~/paleon/envTest" - lat <- 40 - lon <- -80 - PEcAn.data.land::extract_soil_gssurgo(outdir, lat, lon) +outdir <- "~/paleon/envTest" +lat <- 40 +lon <- -80 +PEcAn.data.land::extract_soil_gssurgo(outdir, lat, lon) } } \author{ diff --git a/modules/data.land/man/extract_soil_nc.Rd b/modules/data.land/man/extract_soil_nc.Rd index 1df60d75bcf..f551b4c8245 100644 --- a/modules/data.land/man/extract_soil_nc.Rd +++ b/modules/data.land/man/extract_soil_nc.Rd @@ -24,9 +24,9 @@ Extract soil data from the gridpoint closest to a location \examples{ \dontrun{ in.file <- "~/paleon/env_paleon/soil/paleon_soil.nc" -outdir <- "~/paleon/envTest" -lat <- 40 -lon 
<- -80 -PEcAn.data.land::extract_soil_nc(in.file,outdir,lat,lon) +outdir <- "~/paleon/envTest" +lat <- 40 +lon <- -80 +PEcAn.data.land::extract_soil_nc(in.file, outdir, lat, lon) } } diff --git a/modules/data.land/man/gSSURGO.Query.Rd b/modules/data.land/man/gSSURGO.Query.Rd index 27a7a4d2cb2..e3a6d1eee72 100644 --- a/modules/data.land/man/gSSURGO.Query.Rd +++ b/modules/data.land/man/gSSURGO.Query.Rd @@ -24,18 +24,20 @@ This function queries the gSSURGO database for a series of map unit keys Full documention of available tables and their relationships can be found here \url{www.sdmdataaccess.nrcs.usda.gov/QueryHelp.aspx} There have been occasions where NRCS made some minor changes to the structure of the API which this code is where those changes need to be implemneted here. -Fields need to be defined with their associate tables. For example, sandtotal is a field in chorizon table which needs to be defined as chorizon.sandotal_(r/l/h), where +Fields need to be defined with their associated tables. For example, sandtotal is a field in the chorizon table which needs to be defined as chorizon.sandtotal_(r/l/h), where r stands for the representative value, l stands for low and h stands for high. At the moment fields from mapunit, component, muaggatt, and chorizon tables can be extracted.
} \examples{ \dontrun{ - PEcAn.data.land::gSSURGO.Query( - mukeys = 2747727, - fields = c( - "chorizon.cec7_r", "chorizon.sandtotal_r", - "chorizon.silttotal_r","chorizon.claytotal_r", - "chorizon.om_r","chorizon.hzdept_r","chorizon.frag3to10_r", - "chorizon.dbovendry_r","chorizon.ph1to1h2o_r", - "chorizon.cokey","chorizon.chkey")) +PEcAn.data.land::gSSURGO.Query( + mukeys = 2747727, + fields = c( + "chorizon.cec7_r", "chorizon.sandtotal_r", + "chorizon.silttotal_r", "chorizon.claytotal_r", + "chorizon.om_r", "chorizon.hzdept_r", "chorizon.frag3to10_r", + "chorizon.dbovendry_r", "chorizon.ph1to1h2o_r", + "chorizon.cokey", "chorizon.chkey" + ) +) } } diff --git a/modules/data.land/man/match_species_id.Rd b/modules/data.land/man/match_species_id.Rd index daa9b19d977..51da9f6ccec 100644 --- a/modules/data.land/man/match_species_id.Rd +++ b/modules/data.land/man/match_species_id.Rd @@ -48,16 +48,18 @@ Parses species codes in input data and matches them with the BETY species ID. \dontrun{ con <- PEcAn.DB::db.open(list( driver = "Postgres", - dbname = 'bety', - user = 'bety', - password = 'bety', - host = 'localhost') + dbname = "bety", + user = "bety", + password = "bety", + host = "localhost" +)) +input_codes <- c("ACRU", "PIMA", "TSCA") +format_name <- "usda" +match_species_id( + input_codes = input_codes, + format_name = format_name, + bety = con ) -input_codes <- c('ACRU', 'PIMA', 'TSCA') -format_name <- 'usda' -match_species_id(input_codes = input_codes, - format_name = format_name, - bety = con) } } diff --git a/modules/data.land/man/sclass.Rd b/modules/data.land/man/sclass.Rd index 476c75e967e..def56ce8472 100644 --- a/modules/data.land/man/sclass.Rd +++ b/modules/data.land/man/sclass.Rd @@ -19,5 +19,5 @@ vector of integers identifying textural class of each input layer. 
This function determines the soil class number based on the fraction of sand, clay, and silt } \examples{ -sclass(0.3,0.3) +sclass(0.3, 0.3) } diff --git a/modules/data.land/man/shp2kml.Rd b/modules/data.land/man/shp2kml.Rd index afae5b612e8..748a64d69db 100644 --- a/modules/data.land/man/shp2kml.Rd +++ b/modules/data.land/man/shp2kml.Rd @@ -22,7 +22,7 @@ shp2kml( \item{kmz}{TRUE/FALSE. Option to write out file as a compressed kml. Requires zip utility} -\item{proj4}{OPTIONAL. Define output proj4 projection string. If set, input vector will be +\item{proj4}{OPTIONAL. Define output proj4 projection string. If set, input vector will be reprojected to desired projection. Not yet implemented.} \item{color}{OPTIONAL. Fill color for output kml/kmz file} diff --git a/modules/data.land/man/soil2netcdf.Rd b/modules/data.land/man/soil2netcdf.Rd index a21840e8fb5..39235b114e0 100644 --- a/modules/data.land/man/soil2netcdf.Rd +++ b/modules/data.land/man/soil2netcdf.Rd @@ -32,9 +32,11 @@ pain for storing strings. 
Conversion back can be done by and then soil.name[soil_n] } \examples{ -\dontrun{ soil.data <- list(fraction_of_sand_in_soil = c - (0.3,0.4,0.5), fraction_of_clay_in_soil = c(0.3,0.3,0.3), soil_depth = c - (0.2,0.5,1.0)) - -soil2netcdf(soil.data,"soil.nc") } +\dontrun{ +soil.data <- list(fraction_of_sand_in_soil = c +(0.3, 0.4, 0.5), fraction_of_clay_in_soil = c(0.3, 0.3, 0.3), soil_depth = c +(0.2, 0.5, 1.0)) + +soil2netcdf(soil.data, "soil.nc") +} } diff --git a/modules/data.land/man/soil_class.Rd b/modules/data.land/man/soil_class.Rd index 7dd2ff3eaf0..47822102c2a 100644 --- a/modules/data.land/man/soil_class.Rd +++ b/modules/data.land/man/soil_class.Rd @@ -9,31 +9,31 @@ A list with 26 entries: \describe{ \item{air.cond, h2o.cond, sand.cond, silt.cond, clay.cond}{ - thermal conductivity, W m^-1 K^-1} + thermal conductivity, W m^-1 K^-1} \item{air.hcap, sand.hcap, silt.hcap, clay.hcap}{heat capacity, - J m^-3 K^-1} + J m^-3 K^-1} \item{kair, ksand, ksilt, kclay}{relative conductivity factor} \item{fieldcp.K}{hydraulic conductance at field capacity, mm day^-1} \item{grav}{gravity acceleration, m s^-2} \item{soil.key}{Abbreviations for each of 18 soil texture classes, e.g. "SiL", "LSa"} \item{soil.name}{Names for 18 soil texture classes, e.g. 
"Sand", - "Silty clay"} + "Silty clay"} \item{soilcp.MPa}{soil water potential when air-dry, MPa} \item{soilld.MPa}{soil water potential at critical water content, MPa} \item{soilwp.MPa}{soil water potential at wilting point, MPa} \item{stext.lines}{list of 18 lists, each giving minimum and maximum - sand/silt/clay contents for a soil texture class} + sand/silt/clay contents for a soil texture class} \item{stext.polygon}{list of 18 lists, each giving corner points in the - soil texture triangle for a soil texture class} + soil texture triangle for a soil texture class} \item{texture}{data frame with 13 rows and 21 columns, giving default - parameter values for 13 named soil textures} + parameter values for 13 named soil textures} \item{theta.crit}{critical water content (fractional soil moisture at - which plants start dropping leaves), m^3 m^-3} + which plants start dropping leaves), m^3 m^-3} \item{xclay.def}{default volume fraction of sand in each of 18 soil - texture classes} + texture classes} \item{xsand.def}{default volume fraction of clay in each of 18 soil - texture classes} + texture classes} } } \source{ diff --git a/modules/data.land/man/soil_params.Rd b/modules/data.land/man/soil_params.Rd index 4fa1ae61e10..ac0db4a13e1 100644 --- a/modules/data.land/man/soil_params.Rd +++ b/modules/data.land/man/soil_params.Rd @@ -42,5 +42,5 @@ Estimate soil parameters from texture class or sand/silt/clay \examples{ sand <- c(0.3, 0.4, 0.5) clay <- c(0.3, 0.3, 0.3) -soil_params(sand=sand,clay=clay) +soil_params(sand = sand, clay = clay) } diff --git a/modules/data.land/man/soilgrids_soilC_extract.Rd b/modules/data.land/man/soilgrids_soilC_extract.Rd index 175dbc71ee5..506642c477a 100644 --- a/modules/data.land/man/soilgrids_soilC_extract.Rd +++ b/modules/data.land/man/soilgrids_soilC_extract.Rd @@ -7,25 +7,25 @@ soilgrids_soilC_extract(site_info, outdir = NULL, verbose = TRUE) } \arguments{ -\item{site_info}{A dataframe of site info containing the BETYdb site ID, 
-site name, latitude, and longitude, e.g. +\item{site_info}{A dataframe of site info containing the BETYdb site ID, +site name, latitude, and longitude, e.g. (site_id, site_name, lat, lon)} -\item{outdir}{Optional. Provide the results as a CSV file +\item{outdir}{Optional. Provide the results as a CSV file (soilgrids_soilC_data.csv)} \item{verbose}{Provide progress feedback to the terminal? TRUE/FALSE} } \value{ -a dataframe containing the total soil carbon values -and the corresponding standard deviation values (uncertainties) for each location +a dataframe containing the total soil carbon values +and the corresponding standard deviation values (uncertainties) for each location Output column names are c("Site_ID","Site_Name","Latitude","Longitude", "Total_soilC","Std_soilC") } \description{ soilgrids_soilC_extract function -A function to extract total soil organic carbon for a single or group of -lat/long locationsbased on user-defined site location from SoilGrids250m +A function to extract total soil organic carbon for a single or group of +lat/long locations based on user-defined site location from SoilGrids250m version 2.0 : https://soilgrids.org } \examples{ @@ -41,7 +41,7 @@ db_password <- 'bety' bety <- list(user='bety', password='bety', host=host_db, dbname='betydb', driver=RPostgres::Postgres(),write=FALSE) -con <- DBI::dbConnect(drv=bety$driver, dbname=bety$dbname, host=bety$host, +con <- DBI::dbConnect(drv=bety$driver, dbname=bety$dbname, host=bety$host, password=bety$password, user=bety$user) suppressWarnings(site_qry <- glue::glue_sql("SELECT *, ST_X(ST_CENTROID(geometry)) AS lon, @@ -55,7 +55,7 @@ DBI::dbDisconnect(con) site_info <- qry_results.2 verbose <- TRUE -system.time(result_soc <- PEcAn.data.land::soilgrids_soilC_extract(site_info=site_info, +system.time(result_soc <- PEcAn.data.land::soilgrids_soilC_extract(site_info=site_info, verbose=verbose)) result_soc diff --git a/modules/data.remote/man/GEDI_AGB_prep.Rd
b/modules/data.remote/man/GEDI_AGB_prep.Rd index db472005724..4d5fa1a4d79 100644 --- a/modules/data.remote/man/GEDI_AGB_prep.Rd +++ b/modules/data.remote/man/GEDI_AGB_prep.Rd @@ -44,16 +44,16 @@ During the first use, users will be ask to enter their Earth Explore \examples{ \dontrun{ settings <- PEcAn.settings::read.settings("pecan.xml") -site_info <- settings \%>\% - purrr::map(~.x[['run']] ) \%>\% - purrr::map('site')\%>\% - purrr::map(function(site.list){ - #conversion from string to number +site_info <- settings \%>\% + purrr::map(~ .x[["run"]]) \%>\% + purrr::map("site") \%>\% + purrr::map(function(site.list) { + # conversion from string to number site.list$lat <- as.numeric(site.list$lat) site.list$lon <- as.numeric(site.list$lon) - list(site_id=site.list$id, lat=site.list$lat, lon=site.list$lon, site_name=site.list$name) - })\%>\% - dplyr::bind_rows() \%>\% + list(site_id = site.list$id, lat = site.list$lat, lon = site.list$lon, site_name = site.list$name) + }) \%>\% + dplyr::bind_rows() \%>\% as.list() time_points <- seq(start.date, end.date, by = time.step) buffer <- 0.01 diff --git a/modules/data.remote/man/MODIS_LC_prep.Rd b/modules/data.remote/man/MODIS_LC_prep.Rd index 7229ce6d83b..66748cc3099 100644 --- a/modules/data.remote/man/MODIS_LC_prep.Rd +++ b/modules/data.remote/man/MODIS_LC_prep.Rd @@ -27,7 +27,7 @@ A data frame containing MODIS land cover types for each site and each time step. Prepare MODIS land cover data for the SDA workflow. } \details{ -This function enables the feature of grabbing pre-extracted MODIS LC CSV files such that any site that +This function enables grabbing pre-extracted MODIS LC CSV files so that any site that has records will be skipped (See Line 33). In more detail, we will be loading the previous `LC.csv` file, which contains previous extracted land cover records and trying to match that with current requests (location, time).
Any requests that fail the match will be regarded as new extractions and combine with the previous `LC.csv` file. diff --git a/modules/data.remote/man/NASA_CMR_finder.Rd b/modules/data.remote/man/NASA_CMR_finder.Rd index 4359fb952b3..bf9c97a8f8e 100644 --- a/modules/data.remote/man/NASA_CMR_finder.Rd +++ b/modules/data.remote/man/NASA_CMR_finder.Rd @@ -7,12 +7,12 @@ NASA_CMR_finder(doi) } \arguments{ -\item{doi}{Character: data DOI on the NASA DAAC server, it can be obtained -directly from the NASA ORNL DAAC data portal (e.g., GEDI L4A through +\item{doi}{Character: data DOI on the NASA DAAC server; it can be obtained +directly from the NASA ORNL DAAC data portal (e.g., GEDI L4A through https://daac.ornl.gov/cgi-bin/dsviewer.pl?ds_id=2056).} } \value{ -A list with each containing corresponding provider and concept ids +A list in which each element contains the corresponding provider and concept ids given the data doi. } \description{ diff --git a/modules/data.remote/man/NASA_DAAC_URL.Rd b/modules/data.remote/man/NASA_DAAC_URL.Rd index 7ecb41b3b43..fbb209bd75c 100644 --- a/modules/data.remote/man/NASA_DAAC_URL.Rd +++ b/modules/data.remote/man/NASA_DAAC_URL.Rd @@ -15,7 +15,7 @@ NASA_DAAC_URL( ) } \arguments{ -\item{base_url}{Character: base URL for the CMR search. +\item{base_url}{Character: base URL for the CMR search. default is "https://cmr.earthdata.nasa.gov/search/granules.json?pretty=true".} \item{provider}{Character: ID of data provider from NASA DAAC.
See `NASA_CMR_finder` for more details.} @@ -42,10 +42,12 @@ provider <- "ORNL_CLOUD" concept_id <- "C2770099044-ORNL_CLOUD" bbox <- "-121,33,-117,35" daterange <- c("2022-02-23", "2022-05-30") -URL <- NASA_DAAC_URL(provider = provider, -concept_id = concept_id, -bbox = bbox, -daterange = daterange) +URL <- NASA_DAAC_URL( + provider = provider, + concept_id = concept_id, + bbox = bbox, + daterange = daterange +) } } \author{ diff --git a/modules/data.remote/man/NASA_DAAC_download.Rd b/modules/data.remote/man/NASA_DAAC_download.Rd index b41cfc920c9..85c2b64368c 100644 --- a/modules/data.remote/man/NASA_DAAC_download.Rd +++ b/modules/data.remote/man/NASA_DAAC_download.Rd @@ -38,8 +38,8 @@ NASA_DAAC_download( \item{outdir}{Character: path of the directory in which to save the downloaded files. Default is the current work directory(getwd()).} -\item{doi}{Character: data DOI on the NASA DAAC server, it can be obtained -directly from the NASA ORNL DAAC data portal (e.g., GEDI L4A through +\item{doi}{Character: data DOI on the NASA DAAC server, it can be obtained +directly from the NASA ORNL DAAC data portal (e.g., GEDI L4A through https://daac.ornl.gov/cgi-bin/dsviewer.pl?ds_id=2056).} \item{netrc_file}{Character: path to the credential file, default is NULL.} @@ -62,14 +62,16 @@ from <- "2022-02-23" to <- "2022-05-30" doi <- "10.3334/ORNLDAAC/2183" outdir <- "/projectnb/dietzelab/dongchen/SHIFT/test_download" -metadata <- NASA_DAAC_download(ul_lat = ul_lat, - ul_lon = ul_lon, - lr_lat = lr_lat, - lr_lon = lr_lon, - from = from, - to = to, - doi = doi, - just_path = T) +metadata <- NASA_DAAC_download( + ul_lat = ul_lat, + ul_lon = ul_lon, + lr_lat = lr_lat, + lr_lon = lr_lon, + from = from, + to = to, + doi = doi, + just_path = T +) } } \author{ diff --git a/modules/data.remote/man/call_MODIS.Rd b/modules/data.remote/man/call_MODIS.Rd index 52fc0422110..f0bd79dd7d6 100644 --- a/modules/data.remote/man/call_MODIS.Rd +++ b/modules/data.remote/man/call_MODIS.Rd @@ -25,30 
+25,30 @@ call_MODIS( \item{band}{string value for which measurement to extract} -\item{site_info}{Bety list of site info for parsing MODIS data: list(site_id, site_name, lat, +\item{site_info}{Bety list of site info for parsing MODIS data: list(site_id, site_name, lat, lon, time_zone)} \item{product_dates}{a character vector of the start and end date of the data in YYYYJJJ} \item{outdir}{where the output file will be stored. Default is NULL and in this case only values are returned. When path is provided values are returned and written to disk.} -\item{run_parallel}{optional method to download data paralleize. Only works if more than 1 +\item{run_parallel}{optional method to download data in parallel. Only works if more than 1 site is needed and there are >1 CPUs available.} -\item{ncores}{number of cpus to use if run_parallel is set to TRUE. If you do not know the +\item{ncores}{number of cpus to use if run_parallel is set to TRUE. If you do not know the number of CPU's available, enter NULL.} -\item{package_method}{string value to inform function of which package method to use to download +\item{package_method}{string value to inform function of which package method to use to download modis data. Either "MODISTools" or "reticulate" (optional)} -\item{QC_filter}{Converts QC values of band and keeps only data values that are excellent or good -(as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this +\item{QC_filter}{Converts QC values of band and keeps only data values that are excellent or good +(as described by MODIS documentation), and removes all bad values. qc_band must be supplied for this parameter to work. Default is False. Only MODISTools option.} -\item{progress}{TRUE reports the download progress bar of the dataset, FALSE omits the download +\item{progress}{TRUE reports the download progress bar of the dataset, FALSE omits the download progress bar. Default is TRUE. Only MODISTools option.
-Requires Python3 for reticulate method option. There are a number of required python libraries. +Requires Python3 for reticulate method option. There are a number of required python libraries. sudo -H pip install numpy suds netCDF4 json depends on the MODISTools package version 1.1.0} } diff --git a/modules/data.remote/man/download.LandTrendr.AGB.Rd b/modules/data.remote/man/download.LandTrendr.AGB.Rd index a1021f109b4..605a267145c 100644 --- a/modules/data.remote/man/download.LandTrendr.AGB.Rd +++ b/modules/data.remote/man/download.LandTrendr.AGB.Rd @@ -22,7 +22,7 @@ download.LandTrendr.AGB( \item{product_dates}{What data product dates to download} -\item{product_version}{Optional. LandTrend AGB is provided with two versions, +\item{product_version}{Optional. LandTrendr AGB is provided with two versions, v0 and v1 (latest version)} \item{con}{Optional database connection. If specified then the code will check to see} @@ -46,12 +46,12 @@ product_dates <- c(1990, 1991, 1995) # using discontinous, or specific years product_dates2 <- seq(1992, 1995, 1) # using a date sequence for selection of years product_version = "v1" -results <- PEcAn.data.remote::download.LandTrendr.AGB(outdir=outdir, - product_dates = product_dates, +results <- PEcAn.data.remote::download.LandTrendr.AGB(outdir = outdir, + product_dates = product_dates, product_version = product_version) -results <- PEcAn.data.remote::download.LandTrendr.AGB(outdir=outdir, - product_dates = product_dates2, +results <- PEcAn.data.remote::download.LandTrendr.AGB(outdir = outdir, + product_dates = product_dates2, product_version = product_version) } diff --git a/modules/data.remote/man/download.thredds.AGB.Rd b/modules/data.remote/man/download.thredds.AGB.Rd index 79efcce9998..dc1dd1c3afb 100644 --- a/modules/data.remote/man/download.thredds.AGB.Rd +++ b/modules/data.remote/man/download.thredds.AGB.Rd @@ -29,8 +29,8 @@ download.thredds.AGB \examples{ \dontrun{ outdir <- "~/scratch/abg_data/" -results <-
PEcAn.data.remote::download.thredds.AGB(outdir=outdir, - site_ids = c(676, 678, 679, 755, 767, 1000000030, 1000000145, 1000025731), +results <- PEcAn.data.remote::download.thredds.AGB(outdir = outdir, + site_ids = c(676, 678, 679, 755, 767, 1000000030, 1000000145, 1000025731), run_parallel = TRUE, ncores = 8) } } diff --git a/modules/data.remote/man/extract.LandTrendr.AGB.Rd b/modules/data.remote/man/extract.LandTrendr.AGB.Rd index b0b77de4de7..631729b6e8e 100644 --- a/modules/data.remote/man/extract.LandTrendr.AGB.Rd +++ b/modules/data.remote/man/extract.LandTrendr.AGB.Rd @@ -29,13 +29,13 @@ extract.LandTrendr.AGB( \item{product_dates}{Process and extract data only from selected years. Default behavior (product_dates = NULL) is to extract data from all availible years in BETYdb or data_dir} -\item{output_file}{Path to save LandTrendr_AGB_output.RData file containing the +\item{output_file}{Path to save LandTrendr_AGB_output.RData file containing the output extraction list (see return)} \item{...}{Additional arguments, currently ignored} } \value{ -list of two containing the median AGB values per pixel and the corresponding +list of two elements containing the median AGB values per pixel and the corresponding standard deviation values (uncertainties) } \description{ @@ -52,16 +52,16 @@ con <- PEcAn.DB::db.open( dbname='bety', driver='PostgreSQL',write=TRUE)) site_ID <- c(2000000023,1000025731,676,1000005149) # BETYdb site IDs -suppressWarnings(site_qry <- glue::glue_sql("SELECT *, ST_X(ST_CENTROID(geometry)) AS lon, -ST_Y(ST_CENTROID(geometry)) AS lat FROM sites WHERE id IN ({ids*})", +suppressWarnings(site_qry <- glue::glue_sql("SELECT *, ST_X(ST_CENTROID(geometry)) AS lon, +ST_Y(ST_CENTROID(geometry)) AS lat FROM sites WHERE id IN ({ids*})", ids = site_ID, .con = con)) suppressWarnings(qry_results <- DBI::dbSendQuery(con,site_qry)) suppressWarnings(qry_results <- DBI::dbFetch(qry_results)) -site_info <- list(site_id=qry_results$id, site_name=qry_results$sitename,
lat=qry_results$lat, +site_info <- list(site_id=qry_results$id, site_name=qry_results$sitename, lat=qry_results$lat, lon=qry_results$lon, time_zone=qry_results$time_zone) data_dir <- "~/scratch/agb_data/" -results <- extract.LandTrendr.AGB(site_info, "median", buffer = NULL, fun = "mean", +results <- extract.LandTrendr.AGB(site_info, "median", buffer = NULL, fun = "mean", data_dir, product_dates, output_file) } diff --git a/modules/data.remote/man/extract_phenology_MODIS.Rd b/modules/data.remote/man/extract_phenology_MODIS.Rd index 0a134d32069..d725ee14237 100644 --- a/modules/data.remote/man/extract_phenology_MODIS.Rd +++ b/modules/data.remote/man/extract_phenology_MODIS.Rd @@ -14,7 +14,7 @@ extract_phenology_MODIS( ) } \arguments{ -\item{site_info}{A dataframe of site info containing the BETYdb site ID, +\item{site_info}{A dataframe of site info containing the BETYdb site ID, site name, latitude, and longitude, e.g.} \item{start_date}{Start date to download data} @@ -23,14 +23,14 @@ site name, latitude, and longitude, e.g.} \item{outdir}{Path to store the outputs} -\item{run_parallel}{optional method to download data parallely. Only works if more than 1 +\item{run_parallel}{optional method to download data in parallel. Only works if more than 1 site is needed and there are >1 CPUs available.} -\item{ncores}{number of cpus to use if run_parallel is set to TRUE. If you do not know the +\item{ncores}{number of cpus to use if run_parallel is set to TRUE. If you do not know the number of CPU's available, enter NULL.} } \value{ -the path for output file +the path to the output file The output file will be saved as a CSV file to the outdir.
Output column names are "year", "site_id", "lat", "lon", "leafonday","leafoffday","leafon_qa","leafoff_qa" } diff --git a/modules/data.remote/man/l4_download.Rd b/modules/data.remote/man/l4_download.Rd index d4f7cf1541b..3dd1b5f08bc 100644 --- a/modules/data.remote/man/l4_download.Rd +++ b/modules/data.remote/man/l4_download.Rd @@ -68,8 +68,8 @@ During the first use, users will be ask to enter their Earth Explore } \examples{ \dontrun{ -#retrive Italy bound -bound <- sf::st_as_sf(raster::getData('GADM', country='ITA', level=1)) +# retrieve Italy boundary +bound <- sf::st_as_sf(raster::getData("GADM", country = "ITA", level = 1)) ex <- raster::extent(bound) ul_lat <- ex[4] lr_lat <- ex[3] @@ -77,27 +77,30 @@ ul_lon <- ex[2] lr_lon <- ex[1] from <- "2020-07-01" to <- "2020-07-02" -#get just files path available for the searched parameters -l4_download(ul_lat=ul_lat, - lr_lat=lr_lat, - ul_lon=ul_lon, - lr_lon=lr_lon, - from=from, - to=to, - just_path=T +# get just the file paths available for the searched parameters +l4_download( + ul_lat = ul_lat, + lr_lat = lr_lat, + ul_lon = ul_lon, + lr_lon = lr_lon, + from = from, + to = to, + just_path = TRUE ) -#download the first 4 files +# download the first 4 files -l4_download(ul_lat=ul_lat, - lr_lat=lr_lat, - ul_lon=ul_lon, - lr_lon=lr_lon, - from=from, - to=to, - just_path=F, - outdir = tempdir(), - subset=1:4) +l4_download( + ul_lat = ul_lat, + lr_lat = lr_lat, + ul_lon = ul_lon, + lr_lon = lr_lon, + from = from, + to = to, + just_path = FALSE, + outdir = tempdir(), + subset = 1:4 +) } } \author{ diff --git a/modules/meta.analysis/man/pecan.ma.Rd b/modules/meta.analysis/man/pecan.ma.Rd index 3029bd7d4e5..589dc7d70fb 100644 --- a/modules/meta.analysis/man/pecan.ma.Rd +++ b/modules/meta.analysis/man/pecan.ma.Rd @@ -69,12 +69,12 @@ function to modify the \code{ma.model.template.bug} generic model.
values = list(pft))[[1]] traits <- c("SLA", "Vcmax") trait_string <- paste(shQuote(traits), collapse = ",") - + # Load traits and priors from BETY species <- PEcAn.DB::query.pft_species(pft, con = con) trait.data <- PEcAn.DB::query.traits(species[["id"]], c("SLA", "Vcmax"), con = con) prior.distns <- PEcAn.DB::query.priors(pft_id, trait_string, con = con) - + # Pre-process data jagged.data <- lapply(trait.data, PEcAn.MA::jagify) taupriors <- list(tauA = 0.01, diff --git a/modules/priors/man/get.quantiles.from.density.Rd b/modules/priors/man/get.quantiles.from.density.Rd index fd7bea29be3..9f4cdc520e2 100644 --- a/modules/priors/man/get.quantiles.from.density.Rd +++ b/modules/priors/man/get.quantiles.from.density.Rd @@ -15,7 +15,7 @@ get.quantiles.from.density(density.df, quantiles = c(0.025, 0.5, 0.975)) Finds quantiles on a density data frame } \examples{ -prior.df <- create.density.df(distribution = list('norm',0,1)) +prior.df <- create.density.df(distribution = list("norm", 0, 1)) get.quantiles.from.density(prior.df) samp.df <- create.density.df(samps = rnorm(100)) get.quantiles.from.density(samp.df) diff --git a/modules/priors/man/get.sample.Rd b/modules/priors/man/get.sample.Rd index c3dc313f4ef..bf6145b352d 100644 --- a/modules/priors/man/get.sample.Rd +++ b/modules/priors/man/get.sample.Rd @@ -28,11 +28,11 @@ or list and it can return either a random sample of length n OR a sample from a \dontrun{ # return 1st through 99th quantile of standard normal distribution: PEcAn.priors::get.sample( - prior = data.frame(distn = 'norm', parama = 0, paramb = 1), + prior = data.frame(distn = "norm", parama = 0, paramb = 1), p = 1:99/100) # return 100 random samples from standard normal distribution: PEcAn.priors::get.sample( - prior = data.frame(distn = 'norm', parama = 0, paramb = 1), + prior = data.frame(distn = "norm", parama = 0, paramb = 1), n = 100) } } diff --git a/modules/priors/man/prior.fn.Rd b/modules/priors/man/prior.fn.Rd index ef74d3362f0..3ebb2c90939
100644 --- a/modules/priors/man/prior.fn.Rd +++ b/modules/priors/man/prior.fn.Rd @@ -26,8 +26,8 @@ parms Prior fitting function for optimization } \details{ -This function is used within `DEoptim` to parameterize a distribution to the -central tendency and confidence interval of a parameter. +This function is used within `DEoptim` to parameterize a distribution to the +central tendency and confidence interval of a parameter. This function is not very robust; currently it needs to be tweaked when distributions require starting values (e.g. beta, f) } diff --git a/modules/uncertainty/man/flux.uncertainty.Rd b/modules/uncertainty/man/flux.uncertainty.Rd index 77606cf14d1..6cf6461c7b6 100644 --- a/modules/uncertainty/man/flux.uncertainty.Rd +++ b/modules/uncertainty/man/flux.uncertainty.Rd @@ -19,7 +19,7 @@ flux.uncertainty( \item{QC}{= quality control flag time series (0 = best)} -\item{flags}{= additional flags on flux filtering of PAIRS (length = 1/2 that of the +\item{flags}{= additional flags on flux filtering of PAIRS (length = 1/2 that of the time series, TRUE = use).} \item{bin.num}{= number of bins (default = 10)} diff --git a/modules/uncertainty/man/get.ensemble.samples.Rd b/modules/uncertainty/man/get.ensemble.samples.Rd index ac981dcb82e..c71b4787bda 100644 --- a/modules/uncertainty/man/get.ensemble.samples.Rd +++ b/modules/uncertainty/man/get.ensemble.samples.Rd @@ -33,7 +33,7 @@ matrix of (quasi-)random samples from trait distributions Get parameter values used in ensemble } \details{ -Returns a matrix of randomly or quasi-randomly sampled trait values +Returns a matrix of randomly or quasi-randomly sampled trait values to be assigned to traits over several model runs. 
given the number of model runs and a list of sample distributions for traits The model run is indexed first by model run, then by trait diff --git a/modules/uncertainty/man/input.ens.gen.Rd b/modules/uncertainty/man/input.ens.gen.Rd index 13ecf373805..57e61fc3ce5 100644 --- a/modules/uncertainty/man/input.ens.gen.Rd +++ b/modules/uncertainty/man/input.ens.gen.Rd @@ -26,6 +26,8 @@ If for example met was a parent and it's sampling method resulted in choosing th parent_ids to this function. } \examples{ -\dontrun{input.ens.gen(settings,"met","sampling")} +\dontrun{ +input.ens.gen(settings, "met", "sampling") +} } diff --git a/modules/uncertainty/man/read.ensemble.output.Rd b/modules/uncertainty/man/read.ensemble.output.Rd index 2b6ad72cb69..8cd188470a9 100644 --- a/modules/uncertainty/man/read.ensemble.output.Rd +++ b/modules/uncertainty/man/read.ensemble.output.Rd @@ -37,7 +37,7 @@ a list of ensemble model output Reads output from model ensemble } \details{ -Reads output for an ensemble of length specified by \code{ensemble.size} and bounded by \code{start.year} +Reads output for an ensemble of length specified by \code{ensemble.size} and bounded by \code{start.year} and \code{end.year} } \author{ diff --git a/modules/uncertainty/man/write.ensemble.configs.Rd b/modules/uncertainty/man/write.ensemble.configs.Rd index 6a58d47ae4c..6b42afacd41 100644 --- a/modules/uncertainty/man/write.ensemble.configs.Rd +++ b/modules/uncertainty/man/write.ensemble.configs.Rd @@ -37,7 +37,7 @@ list, containing $runs = data frame of runids, $ensemble.id = the ensemble ID fo } \description{ Writes config files for use in meta-analysis and returns a list of run ids. -Given a pft.xml object, a list of lists as supplied by get.sa.samples, +Given a pft.xml object, a list of lists as supplied by get.sa.samples, a name to distinguish the output files, and the directory to place the files. } \details{