Commit

Add files via upload
lygitdata authored Feb 15, 2024
1 parent fcfcebf commit 6f8b43a
Showing 70 changed files with 5,253 additions and 18 deletions.
16 changes: 8 additions & 8 deletions R/src/DESCRIPTION
@@ -1,7 +1,7 @@
Package: GenAI
Type: Package
Title: Generative Artificial Intelligence
Version: 0.1.15
Version: 0.2.0
Authors@R: c(
person(
given = "Li",
@@ -12,15 +12,15 @@ Authors@R: c(
)
)
Maintainer: Li Yuan <[email protected]>
Description: Utilizing 'Generative Artificial Intelligence' models like 'GPT-4' and 'Gemini Pro' as coding and writing assistants for 'R' users. Through these models, 'GenAI' offers a variety of functions, encompassing text generation, code optimization, natural language processing, chat, and image interpretation. The goal is to aid 'R' users in streamlining laborious coding and language processing tasks.
Description: Utilizing Generative Artificial Intelligence models like 'GPT-4' and 'Gemini Pro' as coding and writing assistants for 'R' users. Through these models, 'GenAI' offers a variety of functions, encompassing text generation, code optimization, natural language processing, chat, and image interpretation. The goal is to aid 'R' users in streamlining laborious coding and language processing tasks.
License: CC BY 4.0
URL: https://genai.gd.edu.kg/
BugReports: https://github.com/GitData-GA/GenAI/issues
Encoding: UTF-8
LazyData: true
RoxygenNote: 7.3.0
Imports:
base64enc,
httr,
jsonlite,
tools
Depends: magrittr
Imports: base64enc, httr, jsonlite, tools, R6, listenv, magick,
ggplotify
NeedsCompilation: no
Packaged: 2024-02-15 03:28:07 UTC; lp130
Author: Li Yuan [aut, cre] (<https://orcid.org/0009-0008-1075-9922>)
25 changes: 15 additions & 10 deletions R/src/NAMESPACE
@@ -2,21 +2,26 @@

export(available.models)
export(chat)
export(chat.convert)
export(chat.edit)
export(chat.recent)
export(chat.save)
export(chat.setup)
export(connect.genai)
export(chat.history.convert)
export(chat.history.export)
export(chat.history.import)
export(chat.history.print)
export(chat.history.reset)
export(chat.history.save)
export(genai.google)
export(genai.moonshot)
export(genai.openai)
export(img)
export(txt)
export(txt.explain.code)
export(txt.fix.grammar)
export(txt.image)
export(txt.optimize.code)
importFrom(base64enc,base64encode)
import(R6)
importFrom(ggplotify,as.ggplot)
importFrom(httr,GET)
importFrom(httr,POST)
importFrom(httr,add_headers)
importFrom(httr,content)
importFrom(jsonlite,toJSON)
importFrom(tools,file_ext)
importFrom(listenv,listenv)
importFrom(magick,image_read)
importFrom(magrittr,"%>%")
33 changes: 33 additions & 0 deletions R/src/R/available.models.R
@@ -0,0 +1,33 @@
#' Get Supported Generative AI Models
#'
#' This function sends a request to the GenAI database API to retrieve information
#' about available generative AI models.
#'
#' @return If successful, the function returns a list containing generative AI
#' service providers and their corresponding models. If the function encounters an error,
#' it will halt execution and provide an error message.
#'
#' @details
#' The function utilizes the GenAI database API to fetch the latest information about
#' available Generative AI models. The retrieved data includes details about different models
#' offered by various service providers.
#'
#' @seealso
#' \href{https://genai.gd.edu.kg/r/documentation/}{GenAI - R Package "GenAI" Documentation}
#'
#' \href{https://colab.research.google.com/github/GitData-GA/GenAI/blob/gh-pages/r/example/available_models.ipynb}{Live Demo in Colab}
#'
#' @examples
#' \dontrun{
#' # Assuming there is a GenAI object named 'genai.model' supporting this
#' # function, please refer to the "Live Demo in Colab" above for real
#' # examples. The following examples are just some basic guidelines.
#'
#' all.models = available.models() %>% print()
#' }
#'
#' @export
available.models = function() {
json.data = jsonlite::fromJSON("https://genai.gd.edu.kg/model.json")
  return(json.data)
}
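A minimal usage sketch for this new export. The provider names mentioned in the comments are assumptions about the structure of `model.json`, not guaranteed by the package:

```r
# Load the package and fetch the list of supported providers and models.
# available.models() parses https://genai.gd.edu.kg/model.json into a
# nested list via jsonlite::fromJSON().
library(GenAI)

all.models = available.models()

# Top-level names are expected to be service providers (hypothetical
# examples: "google", "moonshot", "openai"); inspect to confirm.
names(all.models)

# Examine the full structure of the retrieved list.
str(all.models)
```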
211 changes: 211 additions & 0 deletions R/src/R/chat.R
@@ -0,0 +1,211 @@
#' Chat Generation with Text as the Input
#'
#' This function establishes a connection to a generative AI model through a generative AI object.
#' It generates a chat response based on the provided prompt and stores it in the chat history along
#' with the generative AI object.
#'
#' @param genai.object A generative AI object containing necessary and correct information.
#' @param prompt A character string representing the query for chat generation.
#' @param verbose Optional. Default to \code{FALSE}. A boolean value determining whether or not to print
#' out the details of the chat request.
#' @param config Optional. Default to \code{list()}. A list of configuration parameters for chat generation.
#'
#' @return If successful, the most recent chat response will be returned. If the API response indicates
#' an error, the function halts execution and provides an error message.
#'
#' @details Providing accurate and valid information for each argument is crucial for successful chat
#' generation by the generative AI model. If any parameter is incorrect, the function responds with an
#' error message based on the API feedback. To view all supported generative AI models, use the
#' function \code{\link{available.models}}.
#'
#' In addition, this function modifies the chat history along with the generative AI object directly,
#' meaning the chat history is mutable. You can print out the chat history using the
#' function \code{\link{chat.history.print}} or simply use \code{verbose = TRUE} in this function. If you
#' want to edit a message, use the function \code{\link{chat.edit}}. To reset the chat history along with
#' the generative AI object, use the function \code{\link{chat.history.reset}}.
#'
#' For \strong{Google Generative AI} models, available configurations are as follows. For more detail,
#' please refer
#' to \code{https://ai.google.dev/api/rest/v1/HarmCategory},
#' \code{https://ai.google.dev/api/rest/v1/SafetySetting}, and
#' \code{https://ai.google.dev/api/rest/v1/GenerationConfig}.
#'
#' \itemize{
#' \item \code{harm.category.dangerous.content}
#'
#' Optional. An integer, from 1 to 5 inclusive, representing the threshold for dangerous content,
#' with a higher value representing a lower probability of being blocked.
#'
#' \item \code{harm.category.harassment}
#'
#' Optional. An integer, from 1 to 5 inclusive, representing the threshold for harassment content,
#' with a higher value representing a lower probability of being blocked.
#'
#' \item \code{harm.category.hate.speech}
#'
#' Optional. An integer, from 1 to 5 inclusive, representing the threshold for hate speech
#' content, with a higher value representing a lower probability of being blocked.
#'
#' \item \code{harm.category.sexually.explicit}
#'
#' Optional. An integer, from 1 to 5 inclusive, representing the threshold for sexually explicit
#' content, with a higher value representing a lower probability of being blocked.
#'
#' \item \code{stop.sequences}
#'
#' Optional. A list of character sequences (up to 5) that will stop output generation. If specified,
#' the API will stop at the first appearance of a stop sequence. The stop sequence will not be
#' included as part of the response.
#'
#' \item \code{max.output.tokens}
#'
#' Optional. An integer, value varies by model, representing the maximum number of tokens to include
#' in a candidate.
#'
#' \item \code{temperature}
#'
#' Optional. A number, from 0.0 to 1.0 inclusive, controlling the randomness of the output.
#'
#' \item \code{top.p}
#'
#' Optional. A number, value varies by model, representing the maximum cumulative probability of tokens
#' to consider when sampling.
#'
#' \item \code{top.k}
#'
#' Optional. A number, value varies by model, representing the maximum number of tokens to consider when sampling.
#' }
#'
#' For \strong{Moonshot AI} models, available configurations are as follows. For more detail, please refer to
#' \code{https://platform.moonshot.cn/api.html#chat-completion}.
#'
#' \itemize{
#' \item \code{max.tokens}
#'
#' Optional. An integer. The maximum number of tokens that will be generated when the chat completes.
#' If generation is cut off by this token limit, the finish reason will be "length"; otherwise it
#' will be "stop".
#'
#' \item \code{temperature}
#'
#' Optional. A number. What sampling temperature to use, between 0 and 1. Higher values (e.g. 0.7) will
#' make the output more random, while lower values (e.g. 0.2) will make it more focused and deterministic.
#'
#' \item \code{top.p}
#'
#' Optional. A number. An alternative to sampling with temperature (nucleus sampling).
#' }
#'
#' For \strong{OpenAI} models, available configurations are as follows. For more detail, please refer to
#' \code{https://platform.openai.com/docs/api-reference/chat/create}.
#'
#' \itemize{
#' \item \code{frequency.penalty}
#'
#' Optional. A number from -2.0 to 2.0 inclusive. Positive values penalize new tokens based on their
#' existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
#'
#' \item \code{logit.bias}
#'
#' Optional. A map. Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object
#' that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to
#' 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact
#' effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection;
#' values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
#'
#' \item \code{logprobs}
#'
#' Optional. A boolean value. Whether to return log probabilities of the output tokens or not. If true, returns the log
#' probabilities of each output token returned in the content of the message.
#'
#' \item \code{top.logprobs}
#'
#' Optional. An integer between 0 and 5 specifying the number of most likely tokens to return at each token
#' position, each with an associated log probability. \code{logprobs} must be set to \code{TRUE} if this
#' parameter is used.
#'
#' \item \code{max.tokens}
#'
#' Optional. An integer. The maximum number of tokens that can be generated in the chat completion. The total length of
#' input tokens and generated tokens is limited by the model's context length.
#'
#' \item \code{presence.penalty}
#'
#' Optional. A number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear
#' in the text so far, increasing the model's likelihood to talk about new topics.
#'
#' \item \code{response.format}
#'
#' Optional. An object specifying the format that the model must output. Compatible with GPT-4 Turbo and
#' all GPT-3.5 Turbo models newer than \code{gpt-3.5-turbo-1106}.
#'
#' \item \code{seed}
#'
#' Optional. An integer. If specified, our system will make a best effort to sample deterministically, such that repeated
#' requests with the same seed and parameters should return the same result.
#'
#' \item \code{stop}
#'
#' Optional. A character string or a list containing up to 4 sequences where the API will stop generating further tokens.
#'
#' \item \code{temperature}
#'
#' Optional. A number. What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output
#' more random, while lower values like 0.2 will make it more focused and deterministic.
#'
#' \item \code{top.p}
#'
#' Optional. A number. An alternative to sampling with temperature, called nucleus sampling, where the model considers
#' the results of the tokens with \code{top.p} probability mass. So 0.1 means only the tokens comprising the top
#' 10\% probability mass are considered.
#'
#' \item \code{tools}
#'
#' Optional. A list of tools the model may call. Currently, only functions are supported as a tool. Use this
#' to provide a list of functions the model may generate JSON inputs for.
#'
#' \item \code{tool.choice}
#'
#' Optional. A character string or object. Controls which (if any) function is called by the model. \code{none} means
#' the model will not call a function and instead generates a message. \code{auto} means the model can pick
#' between generating a message or calling a function.
#'
#' \item \code{user}
#'
#' Optional. A character string. A unique identifier representing your end-user, which can help OpenAI to monitor
#' and detect abuse.
#' }
#'
#' @seealso
#' \href{https://genai.gd.edu.kg/r/documentation/}{GenAI - R Package "GenAI" Documentation}
#'
#' \href{https://colab.research.google.com/github/GitData-GA/GenAI/blob/gh-pages/r/example/chat.ipynb}{Live Demo in Colab}
#'
#' @examples
#' \dontrun{
#' # Assuming there is a GenAI object named 'genai.model' supporting this
#' # function, please refer to the "Live Demo in Colab" above for real
#' # examples. The following examples are just some basic guidelines.
#'
#' # Method 1 (recommended): use the pipe operator "%>%"
#' genai.model %>%
#' chat(prompt = "Write a story about Mars in 50 words.") %>%
#' cat()
#'
#' # Method 2: use the reference operator "$"
#' cat(genai.model$chat(prompt = "Write a story about Jupiter in 50 words."))
#'
#' # Method 3: use the function chat() directly
#' cat(chat(genai.object = genai.model,
#' prompt = "Summarize the chat."))
#' }
#'
#' @export
chat = function(genai.object,
prompt,
verbose = FALSE,
config = list()) {
genai.object$chat(prompt,
verbose,
config)
}
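To tie the configuration documentation above to the new chat() export, here is a hypothetical end-to-end sketch. The argument names passed to genai.google() (api, model, version, proxy) are illustrative assumptions and should be checked against the package documentation:

```r
# Sketch: create a Google generative AI object, then send a chat request
# with a custom configuration. Replace "your-api-key" with a real key.
# The genai.google() arguments shown here are illustrative assumptions.
library(GenAI)

google.model = genai.google(api = "your-api-key",
                            model = "gemini-pro",
                            version = "v1",
                            proxy = FALSE)

# Configuration names mirror the Google section of the chat() docs:
# safety thresholds are integers from 1 to 5; temperature is in [0, 1].
config = list(
  harm.category.dangerous.content = 3,
  harm.category.harassment = 3,
  temperature = 0.7,
  max.output.tokens = 256
)

# Recommended pipe style; verbose = TRUE also prints the request details.
google.model %>%
  chat(prompt = "Write a story about Mars in 50 words.",
       verbose = TRUE,
       config = config) %>%
  cat()
```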