diff --git a/0.7/objects.inv b/0.7/objects.inv
index e651aef..6b0e614 100644
Binary files a/0.7/objects.inv and b/0.7/objects.inv differ
diff --git a/0.7/reference/gerd/frontends/generate/index.html b/0.7/reference/gerd/frontends/generate/index.html
index d89a661..ff5d30c 100644
--- a/0.7/reference/gerd/frontends/generate/index.html
+++ b/0.7/reference/gerd/frontends/generate/index.html
@@ -893,6 +893,30 @@
+upload_lora: Upload a LoRA archive.
+Source code in gerd/frontends/generate.py (lines 57-60)
def __init__(
self,
config: LoraTrainingConfig,
callback_cls: Optional[List[Type[transformers.TrainerCallback]]] = None,
@@ -3142,10 +3143,11 @@
ddp_find_unused_parameters=None,
use_ipex=config.flags.use_ipex,
save_steps=config.save_steps,
- # any of these two will set `torch_compile=True`
- # torch_compile_backend="inductor",
- # torch_compile_mode="reduce-overhead",
- )
+ use_cpu=config.flags.use_cpu,
+ # any of these two will set `torch_compile=True`
+ # torch_compile_backend="inductor",
+ # torch_compile_mode="reduce-overhead",
+ )
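The hunk above forwards a new `use_cpu` flag from the training config into the trainer's training arguments. As a hedged sketch of that flag-forwarding pattern, where the dataclasses below are simplified, hypothetical stand-ins for `LoraTrainingConfig` and for the keyword arguments passed to `transformers.TrainingArguments` (not the real classes):

```python
from dataclasses import dataclass, field

# Simplified, hypothetical stand-ins for gerd's LoraTrainingConfig; the keys
# mirror the keyword arguments visible in the hunk above.
@dataclass
class Flags:
    use_ipex: bool = False
    use_cpu: bool = False

@dataclass
class LoraTrainingConfig:
    save_steps: int = 500
    flags: Flags = field(default_factory=Flags)

def build_training_kwargs(config: LoraTrainingConfig) -> dict:
    """Collect the keyword arguments the trainer forwards from its config."""
    return {
        "ddp_find_unused_parameters": None,
        "use_ipex": config.flags.use_ipex,
        "save_steps": config.save_steps,
        "use_cpu": config.flags.use_cpu,
        # either of these two would set torch_compile=True:
        # "torch_compile_backend": "inductor",
        # "torch_compile_mode": "reduce-overhead",
    }

kwargs = build_training_kwargs(LoraTrainingConfig(save_steps=100, flags=Flags(use_cpu=True)))
print(kwargs["use_cpu"], kwargs["save_steps"])  # True 100
```

Keeping the mapping in one place means a new flag such as `use_cpu` only has to be threaded through the config object, not through every call site.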
Source code in gerd/training/trainer.py
GERD is developed as an experimental library to investigate how large language models (LLMs) can be used to generate and analyze (sets of) documents. This project was initially forked from Llama-2-Open-Source-LLM-CPU-Inference by Kenneth Leung. "},{"location":"#quickstart","title":"Quickstart","text":"If you just want to try it out, you can clone the project and install dependencies with Source: examples/hello.py If you want to try this out in your browser, head over to binder 👉 . Note that running LLMs on the CPU (and especially on limited virtual machines like binder) takes some time. "},{"location":"#question-and-answer-example","title":"Question and Answer Example","text":"Follow the quickstart but execute Click the 'Click to Upload' button and search for a GRASCCO document named Prompt chaining is a prompt engineering approach to increase the 'reflection' of a large language model onto its given answer. Check Source: examples/chaining.py Config: config/gen_chaining.yml As you can see, the answer does not make much sense with the default model, which is rather small. Give it a try with meta-llama/Llama-3.2-3B. To use this model, you need to log in with the huggingface cli and accept the Meta Community License Agreement. "},{"location":"#full-documentation","title":"Full Documentation","text":"More detailed documentation can be found here 👉 . "},{"location":"#used-tools","title":"Used Tools","text":"
GERD is primarily a tool for prototyping workflows for working with Large Language Models. It is meant to act as 'glue' between different tools and services and should ease access to these tools. In general, there should only be two components involved in a GERD workflow: a configuration and a service. The configuration can be assembled from different sources and should be able to be used in different services. The foundation of such a configuration is a YAML file. GERD provides a set of those which can be found in the And can be used with a "},{"location":"develop/","title":"Development Guide","text":""},{"location":"develop/#basics","title":"Basics","text":"To get started on development, you need to install uv. You can use Next, install the package and all dependencies with After that, it should be possible to run scripts without further issues: To add a new runtime dependency, just run To add a new development dependency, run "},{"location":"develop/#pre-commit-hooks-recommended","title":"Pre-commit hooks (recommended)","text":"Pre-commit hooks are used to check linting and run tests before committing changes to prevent faulty commits. Thus, it is recommended to use these hooks! Hooks should not include long-running actions (such as tests) since committing should be fast. To install pre-commit hooks, execute this once: "},{"location":"develop/#further-tools","title":"Further tools","text":""},{"location":"develop/#poe-task-runner","title":"Poe Task Runner","text":"Task runner configurations are stored in the "},{"location":"develop/#pytest","title":"PyTest","text":"Test cases are run via pytest. Tests can be found in the More extensive testing can be triggered with "},{"location":"develop/#ruff","title":"Ruff","text":"Ruff is used for linting and code formatting. Ruff follows There is a VSCode extension that handles formatting and linting. "},{"location":"develop/#mypy","title":"MyPy","text":"MyPy does static type checking. It will not be run automatically. 
To run MyPy manually, use uv with the folder to be checked: "},{"location":"develop/#implemented-guis","title":"Implemented GUIs","text":""},{"location":"develop/#run-frontend","title":"Run Frontend","text":"Either run Generate Frontend: or QA Frontend: or the GERD Router: "},{"location":"develop/#cicd-and-distribution","title":"CI/CD and Distribution","text":""},{"location":"develop/#github-actions","title":"GitHub Actions","text":"GitHub Actions can be found under .github/workflows. There is currently one main CI workflow called In its current configuration, it will only be executed when a PR for This project uses GitHub issue templates. Currently, there are three templates available. "},{"location":"develop/#bug-report","title":"Bug Report","text":" "},{"location":"develop/#feature-request","title":"Feature Request","text":" "},{"location":"develop/#use-case","title":"Use Case","text":" "},{"location":"reference/gerd/","title":"gerd","text":""},{"location":"reference/gerd/#gerd","title":"gerd","text":"Generating and evaluating relevant documentation (GERD). This package provides the GERD system for working with large language models (LLMs). This includes means to generate texts using different backends and frontends. The system is designed to be flexible and extensible to support different use cases. It can also be used for Retrieval Augmented Generation (RAG) tasks or as a chatbot. Modules: Name Descriptionbackends This module contains backend implementations that manage services. config Configuration for the application. features Special features to extend the functionality of GERD services. frontends A collection of several gradio frontends. gen Services and utilities for text generation with LLMs. loader Module for loading language models. models Pydantic model definitions and data classes that are shared across modules. qa Services and utilities for retrieval augmented generation (RAG). rag Retrieval-Augmented Generation (RAG) backend. 
training Collections of training routines for GERD. transport Module to define the transport protocol. "},{"location":"reference/gerd/backends/","title":"gerd.backends","text":""},{"location":"reference/gerd/backends/#gerd.backends","title":"gerd.backends","text":"This module contains backend implementations that manage services. These backends can be used by frontends such as gradio. Furthermore, the backend module contains service implementations for loading LLMs or vector stores for Retrieval Augmented Generation. Modules: Name Descriptionbridge The Bridge connects backend and frontend services directly for local use. rest_client REST client for the GERD server. rest_server REST server as a GERD backend. Attributes: Name Type DescriptionTRANSPORTER Transport The default transporter that connects the backend services to the frontend. "},{"location":"reference/gerd/backends/#gerd.backends.TRANSPORTER","title":"TRANSPORTERmodule-attribute ","text":" The default transporter that connects the backend services to the frontend. "},{"location":"reference/gerd/backends/bridge/","title":"gerd.backends.bridge","text":""},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge","title":"gerd.backends.bridge","text":"The Bridge connects backend and frontend services directly for local use. Classes: Name DescriptionBridge Direct connection between backend services and frontend. "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge","title":"Bridge","text":" Bases: Direct connection between backend services and frontend. Frontends that make use of the The services associated with the bridge are initialized lazily. Methods: Name Descriptionadd_file Add a file to the vector store. analyze_mult_prompts_query Queries the vector store with a set of predefined queries. analyze_query Queries the vector store with a predefined query. db_embedding Converts a question to an embedding. db_query Queries the vector store with a question. 
generate Generates text with the generation service. get_gen_prompt Gets the prompt configuration for the generation service. get_qa_prompt Gets the prompt configuration for a mode of the QA service. qa_query Query the QA service with a question. remove_file Remove a file from the vector store. set_gen_prompt Sets the prompt configuration for the generation service. set_qa_prompt Sets the prompt configuration for the QA service. Attributes: Name Type Descriptiongen GenerationService Get the generation service instance. qa QAService Get the QA service instance. It will be created if it does not exist. Source code ingerd/backends/bridge.py "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.gen","title":"gen property ","text":" Get the generation service instance. It will be created if it does not exist. "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.qa","title":"qaproperty ","text":" Get the QA service instance. It will be created if it does not exist. "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.add_file","title":"add_file","text":" Add a file to the vector store. The returned answer has a status code of 200 if the file was added successfully. Parameters: file: The file to add to the vector store. Returns: Type DescriptionQAAnswer The answer from the QA service Source code ingerd/backends/bridge.py "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.analyze_mult_prompts_query","title":"analyze_mult_prompts_query","text":" Queries the vector store with a set of predefined queries. In contrast to Returns: Type DescriptionQAAnalyzeAnswer The answer from the QA service. Source code ingerd/backends/bridge.py "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.analyze_query","title":"analyze_query","text":" Queries the vector store with a predefined query. The query should return vital information gathered from letters of discharge. 
Returns: Type DescriptionQAAnalyzeAnswer The answer from the QA service. Source code ingerd/backends/bridge.py "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.db_embedding","title":"db_embedding","text":" Converts a question to an embedding. The embedding is defined by the vector store. Parameters: Name Type Description DefaultQAQuestion The question to convert to an embedding. requiredReturns: Type DescriptionList[float] The embedding of the question Source code ingerd/backends/bridge.py "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.db_embedding(question)","title":"question ","text":""},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.db_query","title":"db_query","text":" Queries the vector store with a question. Parameters: Name Type Description DefaultQAQuestion The question to query the vector store with. requiredReturns: Type DescriptionList[DocumentSource] A list of document sources Source code ingerd/backends/bridge.py "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.db_query(question)","title":"question ","text":""},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.generate","title":"generate","text":" Generates text with the generation service. Parameters: Name Type Description DefaultDict[str, str] The parameters to generate text with requiredReturns: Type DescriptionGenResponse The generation result Source code ingerd/backends/bridge.py "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.generate(parameters)","title":"parameters ","text":""},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.get_gen_prompt","title":"get_gen_prompt","text":" Gets the prompt configuration for the generation service. 
Returns: Type DescriptionPromptConfig The current prompt configuration Source code ingerd/backends/bridge.py "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.get_qa_prompt","title":"get_qa_prompt","text":" Gets the prompt configuration for a mode of the QA service. Parameters: Name Type Description DefaultQAModesEnum The mode to get the prompt configuration for requiredReturns: Type DescriptionPromptConfig The prompt configuration for the QA service Source code ingerd/backends/bridge.py "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.get_qa_prompt(qa_mode)","title":"qa_mode ","text":""},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.qa_query","title":"qa_query","text":" Query the QA service with a question. Parameters: Name Type Description DefaultQAQuestion The question to query the QA service with. requiredReturns: Type DescriptionQAAnswer The answer from the QA service. Source code ingerd/backends/bridge.py "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.qa_query(query)","title":"query ","text":""},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.remove_file","title":"remove_file","text":" Remove a file from the vector store. The returned answer has a status code of 200 if the file was removed successfully. Parameters: file_name: The name of the file to remove from the vector store. Returns: Type DescriptionQAAnswer The answer from the QA service Source code ingerd/backends/bridge.py "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.set_gen_prompt","title":"set_gen_prompt","text":" Sets the prompt configuration for the generation service. The prompt configuration that is returned should in most cases be the same as the one that was set. 
Parameters: config: The prompt configuration to set Returns: Type DescriptionPromptConfig The prompt configuration that was set Source code ingerd/backends/bridge.py "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.set_qa_prompt","title":"set_qa_prompt","text":" Sets the prompt configuration for the QA service. Since the QA service uses multiple prompt configurations, the mode should be specified. For more details, see the documentation of Parameters: Name Type Description DefaultPromptConfig The prompt configuration to set requiredQAModesEnum The mode to set the prompt configuration for requiredReturns: Type DescriptionQAAnswer The answer from the QA service Source code ingerd/backends/bridge.py "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.set_qa_prompt(config)","title":"config ","text":""},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.set_qa_prompt(qa_mode)","title":"qa_mode ","text":""},{"location":"reference/gerd/backends/rest_client/","title":"gerd.backends.rest_client","text":""},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client","title":"gerd.backends.rest_client","text":"REST client for the GERD server. Classes: Name DescriptionRestClient REST client for the GERD server. "},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient","title":"RestClient","text":" Bases: REST client for the GERD server. The client initializes the server URL. It is retrieved from the global CONFIG. Other (timeout) settings are also set here but not configurable as of now. Methods: Name Descriptionadd_file Add a file to the vector store. analyze_mult_prompts_query Queries the vector store with a set of predefined queries. analyze_query Queries the vector store with a predefined query. db_embedding Converts a question to an embedding. db_query Queries the vector store with a question. generate Generates text with the generation service. 
get_gen_prompt Gets the prompt configuration for the generation service. get_qa_prompt Gets the prompt configuration for a mode of the QA service. qa_query Query the QA service with a question. remove_file Remove a file from the vector store. set_gen_prompt Sets the prompt configuration for the generation service. set_qa_prompt Sets the prompt configuration for the QA service. Source code ingerd/backends/rest_client.py "},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.add_file","title":"add_file","text":" Add a file to the vector store. The returned answer has a status code of 200 if the file was added successfully. Parameters: file: The file to add to the vector store. Returns: Type DescriptionQAAnswer The answer from the QA service Source code ingerd/backends/rest_client.py "},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.analyze_mult_prompts_query","title":"analyze_mult_prompts_query","text":" Queries the vector store with a set of predefined queries. In contrast to Returns: Type DescriptionQAAnalyzeAnswer The answer from the QA service. Source code ingerd/backends/rest_client.py "},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.analyze_query","title":"analyze_query","text":" Queries the vector store with a predefined query. The query should return vital information gathered from letters of discharge. Returns: Type DescriptionQAAnalyzeAnswer The answer from the QA service. Source code ingerd/backends/rest_client.py "},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.db_embedding","title":"db_embedding","text":" Converts a question to an embedding. The embedding is defined by the vector store. Parameters: Name Type Description DefaultQAQuestion The question to convert to an embedding. 
requiredReturns: Type DescriptionList[float] The embedding of the question Source code ingerd/backends/rest_client.py "},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.db_embedding(question)","title":"question ","text":""},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.db_query","title":"db_query","text":" Queries the vector store with a question. Parameters: Name Type Description DefaultQAQuestion The question to query the vector store with. requiredReturns: Type DescriptionList[DocumentSource] A list of document sources Source code ingerd/backends/rest_client.py "},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.db_query(question)","title":"question ","text":""},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.generate","title":"generate","text":" Generates text with the generation service. Parameters: Name Type Description DefaultDict[str, str] The parameters to generate text with requiredReturns: Type DescriptionGenResponse The generation result Source code ingerd/backends/rest_client.py "},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.generate(parameters)","title":"parameters ","text":""},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.get_gen_prompt","title":"get_gen_prompt","text":" Gets the prompt configuration for the generation service. Returns: Type DescriptionPromptConfig The current prompt configuration Source code ingerd/backends/rest_client.py "},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.get_qa_prompt","title":"get_qa_prompt","text":" Gets the prompt configuration for a mode of the QA service. 
Parameters: Name Type Description DefaultQAModesEnum The mode to get the prompt configuration for requiredReturns: Type DescriptionPromptConfig The prompt configuration for the QA service Source code ingerd/backends/rest_client.py "},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.get_qa_prompt(qa_mode)","title":"qa_mode ","text":""},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.qa_query","title":"qa_query","text":" Query the QA service with a question. Parameters: Name Type Description DefaultQAQuestion The question to query the QA service with. requiredReturns: Type DescriptionQAAnswer The answer from the QA service. Source code ingerd/backends/rest_client.py "},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.qa_query(query)","title":"query ","text":""},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.remove_file","title":"remove_file","text":" Remove a file from the vector store. The returned answer has a status code of 200 if the file was removed successfully. Parameters: file_name: The name of the file to remove from the vector store. Returns: Type DescriptionQAAnswer The answer from the QA service Source code ingerd/backends/rest_client.py "},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.set_gen_prompt","title":"set_gen_prompt","text":" Sets the prompt configuration for the generation service. The prompt configuration that is returned should in most cases be the same as the one that was set. Parameters: config: The prompt configuration to set Returns: Type DescriptionPromptConfig The prompt configuration that was set Source code ingerd/backends/rest_client.py "},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.set_qa_prompt","title":"set_qa_prompt","text":" Sets the prompt configuration for the QA service. 
Since the QA service uses multiple prompt configurations, the mode should be specified. For more details, see the documentation of Parameters: Name Type Description DefaultPromptConfig The prompt configuration to set requiredQAModesEnum The mode to set the prompt configuration for requiredReturns: Type DescriptionQAAnswer The answer from the QA service Source code ingerd/backends/rest_client.py "},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.set_qa_prompt(config)","title":"config ","text":""},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.set_qa_prompt(qa_mode)","title":"qa_mode ","text":""},{"location":"reference/gerd/backends/rest_server/","title":"gerd.backends.rest_server","text":""},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server","title":"gerd.backends.rest_server","text":"REST server as a GERD backend. Classes: Name DescriptionRestServer REST server as a GERD backend. "},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer","title":"RestServer","text":" Bases: REST server as a GERD backend. The REST server initializes a private bridge and an API router. The API router is used to define the endpoints for the REST server. Methods: Name Descriptionadd_file Add a file to the vector store. analyze_mult_prompts_query Queries the vector store with a set of predefined queries. analyze_query Queries the vector store with a predefined query. db_embedding Converts a question to an embedding. db_query Queries the vector store with a question. generate Generates text with the generation service. get_gen_prompt Gets the prompt configuration for the generation service. get_qa_prompt Gets the prompt configuration for a mode of the QA service. get_qa_prompt_rest Get the QA prompt configuration. qa_query Query the QA service with a question. remove_file Remove a file from the vector store. 
set_gen_prompt Sets the prompt configuration for the generation service. set_qa_prompt Sets the prompt configuration for the QA service. set_qa_prompt_rest Set the QA prompt configuration. Source code ingerd/backends/rest_server.py "},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.add_file","title":"add_file","text":" Add a file to the vector store. The returned answer has a status code of 200 if the file was added successfully. Parameters: file: The file to add to the vector store. Returns: Type DescriptionQAAnswer The answer from the QA service Source code ingerd/backends/rest_server.py "},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.analyze_mult_prompts_query","title":"analyze_mult_prompts_query","text":" Queries the vector store with a set of predefined queries. In contrast to Returns: Type DescriptionQAAnalyzeAnswer The answer from the QA service. Source code ingerd/backends/rest_server.py "},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.analyze_query","title":"analyze_query","text":" Queries the vector store with a predefined query. The query should return vital information gathered from letters of discharge. Returns: Type DescriptionQAAnalyzeAnswer The answer from the QA service. Source code ingerd/backends/rest_server.py "},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.db_embedding","title":"db_embedding","text":" Converts a question to an embedding. The embedding is defined by the vector store. Parameters: Name Type Description DefaultQAQuestion The question to convert to an embedding. 
requiredReturns: Type DescriptionList[float] The embedding of the question Source code ingerd/backends/rest_server.py "},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.db_embedding(question)","title":"question ","text":""},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.db_query","title":"db_query","text":" Queries the vector store with a question. Parameters: Name Type Description DefaultQAQuestion The question to query the vector store with. requiredReturns: Type DescriptionList[DocumentSource] A list of document sources Source code ingerd/backends/rest_server.py "},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.db_query(question)","title":"question ","text":""},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.generate","title":"generate","text":" Generates text with the generation service. Parameters: Name Type Description DefaultDict[str, str] The parameters to generate text with requiredReturns: Type DescriptionGenResponse The generation result Source code ingerd/backends/rest_server.py "},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.generate(parameters)","title":"parameters ","text":""},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.get_gen_prompt","title":"get_gen_prompt","text":" Gets the prompt configuration for the generation service. Returns: Type DescriptionPromptConfig The current prompt configuration Source code ingerd/backends/rest_server.py "},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.get_qa_prompt","title":"get_qa_prompt","text":" Gets the prompt configuration for a mode of the QA service. 
Parameters: Name Type Description DefaultQAModesEnum The mode to get the prompt configuration for requiredReturns: Type DescriptionPromptConfig The prompt configuration for the QA service Source code ingerd/backends/rest_server.py "},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.get_qa_prompt(qa_mode)","title":"qa_mode ","text":""},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.get_qa_prompt_rest","title":"get_qa_prompt_rest","text":" Get the QA prompt configuration. The call is forwarded to the bridge. Parameters: qa_mode: The QA mode Returns: The QA prompt configuration Source code ingerd/backends/rest_server.py "},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.qa_query","title":"qa_query","text":" Query the QA service with a question. Parameters: Name Type Description DefaultQAQuestion The question to query the QA service with. requiredReturns: Type DescriptionQAAnswer The answer from the QA service. Source code ingerd/backends/rest_server.py "},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.qa_query(query)","title":"query ","text":""},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.remove_file","title":"remove_file","text":" Remove a file from the vector store. The returned answer has a status code of 200 if the file was removed successfully. Parameters: file_name: The name of the file to remove from the vector store. Returns: Type DescriptionQAAnswer The answer from the QA service Source code ingerd/backends/rest_server.py "},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.set_gen_prompt","title":"set_gen_prompt","text":" Sets the prompt configuration for the generation service. The prompt configuration that is returned should in most cases be the same as the one that was set. 
Parameters: config: The prompt configuration to set Returns: Type DescriptionPromptConfig The prompt configuration that was set Source code ingerd/backends/rest_server.py "},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.set_qa_prompt","title":"set_qa_prompt","text":" Sets the prompt configuration for the QA service. Since the QA service uses multiple prompt configurations, the mode should be specified. For more details, see the documentation of Parameters: Name Type Description DefaultPromptConfig The prompt configuration to set requiredQAModesEnum The mode to set the prompt configuration for requiredReturns: Type DescriptionQAAnswer The answer from the QA service Source code ingerd/backends/rest_server.py "},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.set_qa_prompt(config)","title":"config ","text":""},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.set_qa_prompt(qa_mode)","title":"qa_mode ","text":""},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.set_qa_prompt_rest","title":"set_qa_prompt_rest","text":" Set the QA prompt configuration. The call is forwarded to the bridge. Parameters: config: The QA prompt configuration Returns: The QA prompt configuration; Should be the same as the input in most cases Source code ingerd/backends/rest_server.py "},{"location":"reference/gerd/config/","title":"gerd.config","text":""},{"location":"reference/gerd/config/#gerd.config","title":"gerd.config","text":"Configuration for the application. Classes: Name DescriptionEnvVariables Environment variables. Settings Settings for the application. YamlConfig YAML configuration source. Functions: Name Descriptionload_gen_config Load the LLM model configuration. load_qa_config Load the LLM model configuration. Attributes: Name Type DescriptionCONFIG The global configuration object. 
"},{"location":"reference/gerd/config/#gerd.config.CONFIG","title":"CONFIGmodule-attribute ","text":" The global configuration object. "},{"location":"reference/gerd/config/#gerd.config.EnvVariables","title":"EnvVariables","text":" Bases: Environment variables. "},{"location":"reference/gerd/config/#gerd.config.Settings","title":"Settings","text":" Bases: Settings for the application. Methods: Name Descriptionsettings_customise_sources Customize the settings sources used by pydantic-settings. "},{"location":"reference/gerd/config/#gerd.config.Settings.settings_customise_sources","title":"settings_customise_sourcesclassmethod ","text":" Customize the settings sources used by pydantic-settings. The order of the sources is important. The first source has the highest priority. Parameters: Name Type Description DefaultThe class of the settings. requiredPydanticBaseSettingsSource The settings from the initialization. requiredPydanticBaseSettingsSource The settings from the environment. requiredPydanticBaseSettingsSource The settings from the dotenv file. requiredPydanticBaseSettingsSource The settings from the secret file. requiredReturns: Type DescriptionTuple[PydanticBaseSettingsSource, ...] The customized settings sources. 
Source code ingerd/config.py "},{"location":"reference/gerd/config/#gerd.config.Settings.settings_customise_sources(cls)","title":"cls ","text":""},{"location":"reference/gerd/config/#gerd.config.Settings.settings_customise_sources(init_settings)","title":"init_settings ","text":""},{"location":"reference/gerd/config/#gerd.config.Settings.settings_customise_sources(env_settings)","title":"env_settings ","text":""},{"location":"reference/gerd/config/#gerd.config.Settings.settings_customise_sources(dotenv_settings)","title":"dotenv_settings ","text":""},{"location":"reference/gerd/config/#gerd.config.Settings.settings_customise_sources(file_secret_settings)","title":"file_secret_settings ","text":""},{"location":"reference/gerd/config/#gerd.config.YamlConfig","title":"YamlConfig","text":" Bases: YAML configuration source. Methods: Name Descriptionget_field_value Overrides a method from Overrides a method from Fails if it should ever be called. Parameters: field: The field to get the value for. field_name: The name of the field. Raises: Type DescriptionNotImplementedError Always. Source code ingerd/config.py "},{"location":"reference/gerd/config/#gerd.config.load_gen_config","title":"load_gen_config","text":" Load the LLM model configuration. Parameters: Name Type Description Defaultstr The name of the configuration. 'gen_default' Returns: Type DescriptionGenerationConfig The model configuration. Source code ingerd/config.py "},{"location":"reference/gerd/config/#gerd.config.load_gen_config(config)","title":"config ","text":""},{"location":"reference/gerd/config/#gerd.config.load_qa_config","title":"load_qa_config","text":" Load the LLM model configuration. Parameters: Name Type Description Defaultstr The name of the configuration. 'qa_default' Returns: Type DescriptionQAConfig The model configuration. 
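The source-priority rule described above (the first settings source has the highest priority) can be illustrated with a small stand-in. `resolve_settings` below is a hypothetical helper, not part of gerd or pydantic-settings; it only mimics the merge order.

```python
def resolve_settings(*sources: dict) -> dict:
    """Merge settings dicts where earlier sources take priority.

    Toy stand-in for the pydantic-settings source order: the first
    source has the highest priority; later sources only fill gaps.
    """
    merged: dict = {}
    for source in sources:
        for key, value in source.items():
            # setdefault keeps the earlier (higher-priority) value.
            merged.setdefault(key, value)
    return merged


config = resolve_settings(
    {"model": "from-init"},             # init settings (highest priority)
    {"model": "from-env", "port": 8},   # environment variables
    {"port": 9, "debug": True},         # dotenv file
)
```

With this order, `model` comes from the init settings and `port` from the environment, while `debug` falls through to the dotenv source.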
Source code ingerd/config.py "},{"location":"reference/gerd/config/#gerd.config.load_qa_config(config)","title":"config ","text":""},{"location":"reference/gerd/features/","title":"gerd.features","text":""},{"location":"reference/gerd/features/#gerd.features","title":"gerd.features","text":"Special features to extend the functionality of GERD services. Modules: Name Descriptionprompt_chaining The prompt chaining extension. "},{"location":"reference/gerd/features/prompt_chaining/","title":"gerd.features.prompt_chaining","text":""},{"location":"reference/gerd/features/prompt_chaining/#gerd.features.prompt_chaining","title":"gerd.features.prompt_chaining","text":"The prompt chaining extension. Prompt chaining is a method to improve the factual accuracy of the model's output. To do this, the model generates a series of prompts and uses the output of each prompt as the input for the next prompt. This allows the model to reflect on its own output and generate a more coherent response. Classes: Name DescriptionPromptChaining The prompt chaining extension. PromptChainingConfig Configuration for prompt chaining. "},{"location":"reference/gerd/features/prompt_chaining/#gerd.features.prompt_chaining.PromptChaining","title":"PromptChaining","text":" The prompt chaining extension. The service is initialized with a chaining configuration and an LLM. Parameters: Name Type Description DefaultPromptChainingConfig The configuration for the prompt chaining requiredLLM The language model to use for the generation requiredPromptConfig The prompt that is used to wrap the questions requiredMethods: Name Descriptiongenerate Generate text based on the prompt configuration and use chaining. 
Source code ingerd/features/prompt_chaining.py "},{"location":"reference/gerd/features/prompt_chaining/#gerd.features.prompt_chaining.PromptChaining(config)","title":"config ","text":""},{"location":"reference/gerd/features/prompt_chaining/#gerd.features.prompt_chaining.PromptChaining(llm)","title":"llm ","text":""},{"location":"reference/gerd/features/prompt_chaining/#gerd.features.prompt_chaining.PromptChaining(prompt)","title":"prompt ","text":""},{"location":"reference/gerd/features/prompt_chaining/#gerd.features.prompt_chaining.PromptChaining.generate","title":"generate","text":" Generate text based on the prompt configuration and use chaining. Parameters: Name Type Description Defaultdict[str, str] The parameters to format the prompt with requiredReturns: Type Descriptionstr The result of the last prompt that was chained Source code ingerd/features/prompt_chaining.py "},{"location":"reference/gerd/features/prompt_chaining/#gerd.features.prompt_chaining.PromptChaining.generate(parameters)","title":"parameters ","text":""},{"location":"reference/gerd/features/prompt_chaining/#gerd.features.prompt_chaining.PromptChainingConfig","title":"PromptChainingConfig","text":" Bases: Configuration for prompt chaining. Note that prompts should contain placeholders for the responses to be inserted. The initial question can be used with Attributes: Name Type Descriptionprompts list[PromptConfig] The list of prompts to chain. "},{"location":"reference/gerd/features/prompt_chaining/#gerd.features.prompt_chaining.PromptChainingConfig.prompts","title":"promptsinstance-attribute ","text":" The list of prompts to chain. "},{"location":"reference/gerd/frontends/","title":"gerd.frontends","text":""},{"location":"reference/gerd/frontends/#gerd.frontends","title":"gerd.frontends","text":"A collection of several gradio frontends. A variety of frontends to interact with GERD services and backends. 
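The chaining loop described above can be sketched in a few lines. `chain_prompts` and the toy `llm` callable are illustrative stand-ins, not the actual `gerd.features.prompt_chaining` API; the real service works with `PromptConfig` objects and an `LLM` backend.

```python
def chain_prompts(prompts: list[str], llm, question: str) -> str:
    """Feed each prompt the previous response; return the last result."""
    response = question
    for template in prompts:
        # Each prompt wraps the previous output via a {response} placeholder.
        response = llm(template.format(response=response))
    return response


# A toy "LLM" that just upper-cases its prompt, to show the data flow.
result = chain_prompts(
    ["Answer: {response}", "Reflect on: {response}"],
    llm=lambda prompt: prompt.upper(),
    question="why is the sky blue?",
)
```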
Modules: Name Descriptiongen_frontend A gradio frontend to interact with the generation service. generate A simple gradio frontend to interact with the GERD chat and generate service. instruct A gradio frontend to interact with the GERD instruct service. qa_frontend A gradio frontend to query the QA service and upload files to the vectorstore. router A gradio frontend to start and stop the GERD services. training A gradio frontend to train LoRAs with. "},{"location":"reference/gerd/frontends/gen_frontend/","title":"gerd.frontends.gen_frontend","text":""},{"location":"reference/gerd/frontends/gen_frontend/#gerd.frontends.gen_frontend","title":"gerd.frontends.gen_frontend","text":"A gradio frontend to interact with the generation service. This frontend is tailored to the letter of discharge generation task. For a more general frontend see Functions: Name Descriptioncompare_paragraphs Compare paragraphs of two documents and return the modified parts. generate Generate a letter of discharge based on the provided fields. insert_paragraphs Insert modified paragraphs into the source document. response_parser Parse the response from the generation service. "},{"location":"reference/gerd/frontends/gen_frontend/#gerd.frontends.gen_frontend.compare_paragraphs","title":"compare_paragraphs","text":" Compare paragraphs of two documents and return the modified parts. 
Parameters: Name Type Description Defaultstr The source document requiredstr The modified document requiredReturns: Type DescriptionDict[str, str] The modified parts of the document Source code ingerd/frontends/gen_frontend.py "},{"location":"reference/gerd/frontends/gen_frontend/#gerd.frontends.gen_frontend.compare_paragraphs(src_doc)","title":"src_doc ","text":""},{"location":"reference/gerd/frontends/gen_frontend/#gerd.frontends.gen_frontend.compare_paragraphs(mod_doc)","title":"mod_doc ","text":""},{"location":"reference/gerd/frontends/gen_frontend/#gerd.frontends.gen_frontend.generate","title":"generate","text":" Generate a letter of discharge based on the provided fields. Parameters: Name Type Description Defaultstr The fields to generate the letter of discharge from. () Returns: Type Descriptionstr The generated letter of discharge, a text area to display it, str and a button state to continue the generation Source code ingerd/frontends/gen_frontend.py "},{"location":"reference/gerd/frontends/gen_frontend/#gerd.frontends.gen_frontend.generate(*fields)","title":"*fields ","text":""},{"location":"reference/gerd/frontends/gen_frontend/#gerd.frontends.gen_frontend.insert_paragraphs","title":"insert_paragraphs","text":" Insert modified paragraphs into the source document. Parameters: Name Type Description Defaultstr The source document requiredDict[str, str] The modified paragraphs requiredReturns: Type Descriptionstr The updated document Source code ingerd/frontends/gen_frontend.py "},{"location":"reference/gerd/frontends/gen_frontend/#gerd.frontends.gen_frontend.insert_paragraphs(src_doc)","title":"src_doc ","text":""},{"location":"reference/gerd/frontends/gen_frontend/#gerd.frontends.gen_frontend.insert_paragraphs(new_para)","title":"new_para ","text":""},{"location":"reference/gerd/frontends/gen_frontend/#gerd.frontends.gen_frontend.response_parser","title":"response_parser","text":" Parse the response from the generation service. 
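A minimal sketch of the paragraph comparison: paragraphs are split on blank lines and keyed by position here, which is an assumption; the real implementation in `gerd.frontends.gen_frontend` may split and key them differently.

```python
def compare_paragraphs(src_doc: str, mod_doc: str) -> dict[str, str]:
    """Return the paragraphs of mod_doc that differ from src_doc.

    Illustrative only: paragraphs are keyed by position, and extra
    trailing paragraphs in either document are ignored by zip().
    """
    src = src_doc.split("\n\n")
    mod = mod_doc.split("\n\n")
    return {
        f"paragraph_{i}": new
        for i, (old, new) in enumerate(zip(src, mod))
        if old != new
    }


changed = compare_paragraphs("Intro.\n\nOld body.", "Intro.\n\nNew body.")
```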
Parameters: Name Type Description Defaultstr The response from the generation service requiredReturns: Type DescriptionDict[str, str] The parsed response Source code ingerd/frontends/gen_frontend.py "},{"location":"reference/gerd/frontends/gen_frontend/#gerd.frontends.gen_frontend.response_parser(response)","title":"response ","text":""},{"location":"reference/gerd/frontends/generate/","title":"gerd.frontends.generate","text":""},{"location":"reference/gerd/frontends/generate/#gerd.frontends.generate","title":"gerd.frontends.generate","text":"A simple gradio frontend to interact with the GERD chat and generate service. Classes: Name DescriptionGlobal Singleton to store the service. Functions: Name Descriptiongenerate Generate text from the model. load_model Load a global large language model. Attributes: Name Type DescriptionKIOSK_MODE Whether the frontend is running in kiosk mode. "},{"location":"reference/gerd/frontends/generate/#gerd.frontends.generate.KIOSK_MODE","title":"KIOSK_MODEmodule-attribute ","text":" Whether the frontend is running in kiosk mode. Kiosk mode reduces the number of options to a minimum and automatically loads the model. "},{"location":"reference/gerd/frontends/generate/#gerd.frontends.generate.Global","title":"Global","text":"Singleton to store the service. "},{"location":"reference/gerd/frontends/generate/#gerd.frontends.generate.generate","title":"generate","text":" Generate text from the model. 
Parameters: Name Type Description Defaultstr The text to generate from requiredfloat The temperature for the generation requiredfloat The top p value for the generation requiredint The maximum number of tokens to generate requiredReturns: Type Descriptionstr The generated text Source code ingerd/frontends/generate.py "},{"location":"reference/gerd/frontends/generate/#gerd.frontends.generate.generate(textbox)","title":"textbox ","text":""},{"location":"reference/gerd/frontends/generate/#gerd.frontends.generate.generate(temp)","title":"temp ","text":""},{"location":"reference/gerd/frontends/generate/#gerd.frontends.generate.generate(top_p)","title":"top_p ","text":""},{"location":"reference/gerd/frontends/generate/#gerd.frontends.generate.generate(max_tokens)","title":"max_tokens ","text":""},{"location":"reference/gerd/frontends/generate/#gerd.frontends.generate.load_model","title":"load_model","text":" Load a global large language model. Parameters: Name Type Description Defaultstr The name of the model requiredstr Whether to use an extra LoRA requiredReturns: Type Descriptiondict[str, Any] The updated interactive state, returns interactive=True when the model is loaded Source code ingerd/frontends/generate.py "},{"location":"reference/gerd/frontends/generate/#gerd.frontends.generate.load_model(model_name)","title":"model_name ","text":""},{"location":"reference/gerd/frontends/generate/#gerd.frontends.generate.load_model(origin)","title":"origin ","text":""},{"location":"reference/gerd/frontends/instruct/","title":"gerd.frontends.instruct","text":""},{"location":"reference/gerd/frontends/instruct/#gerd.frontends.instruct","title":"gerd.frontends.instruct","text":"A gradio frontend to interact with the GERD instruct service. Classes: Name DescriptionGlobal Singleton to store the service. Functions: Name Descriptiongenerate Generate text from the model. load_model Load a global large language model. 
Attributes: Name Type DescriptionKIOSK_MODE Whether the frontend is running in kiosk mode. "},{"location":"reference/gerd/frontends/instruct/#gerd.frontends.instruct.KIOSK_MODE","title":"KIOSK_MODEmodule-attribute ","text":" Whether the frontend is running in kiosk mode. Kiosk mode reduces the number of options to a minimum and automatically loads the model. "},{"location":"reference/gerd/frontends/instruct/#gerd.frontends.instruct.Global","title":"Global","text":"Singleton to store the service. "},{"location":"reference/gerd/frontends/instruct/#gerd.frontends.instruct.generate","title":"generate","text":" Generate text from the model. Parameters: Name Type Description Defaultfloat The temperature for the generation requiredfloat The top-p value for the generation requiredint The maximum number of tokens to generate requiredstr The system text to set up the context requiredstr The user input () Source code in gerd/frontends/instruct.py "},{"location":"reference/gerd/frontends/instruct/#gerd.frontends.instruct.generate(temperature)","title":"temperature ","text":""},{"location":"reference/gerd/frontends/instruct/#gerd.frontends.instruct.generate(top_p)","title":"top_p ","text":""},{"location":"reference/gerd/frontends/instruct/#gerd.frontends.instruct.generate(max_tokens)","title":"max_tokens ","text":""},{"location":"reference/gerd/frontends/instruct/#gerd.frontends.instruct.generate(system_text)","title":"system_text ","text":""},{"location":"reference/gerd/frontends/instruct/#gerd.frontends.instruct.generate(args)","title":"args ","text":""},{"location":"reference/gerd/frontends/instruct/#gerd.frontends.instruct.load_model","title":"load_model","text":" Load a global large language model. 
Parameters: Name Type Description Defaultstr The name of the model requiredstr Whether to use an extra LoRA 'None' Source code in gerd/frontends/instruct.py "},{"location":"reference/gerd/frontends/instruct/#gerd.frontends.instruct.load_model(model_name)","title":"model_name ","text":""},{"location":"reference/gerd/frontends/instruct/#gerd.frontends.instruct.load_model(origin)","title":"origin ","text":""},{"location":"reference/gerd/frontends/qa_frontend/","title":"gerd.frontends.qa_frontend","text":""},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend","title":"gerd.frontends.qa_frontend","text":"A gradio frontend to query the QA service and upload files to the vectorstore. Functions: Name Descriptionfiles_changed Check if the file upload element has changed. get_qa_mode Get QAMode from string. handle_developer_mode_checkbox_change Enable/disable developer mode. handle_type_radio_selection_change Enable/disable GUI elements depending on which mode is selected. query Starts the selected QA Mode. set_prompt Updates the prompt of the selected QA Mode. "},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.files_changed","title":"files_changed","text":" Check if the file upload element has changed. If so, upload the new files to the vectorstore and delete the ones that have been removed. Parameters: Name Type Description DefaultOptional[list[str]] The file paths to upload required Source code ingerd/frontends/qa_frontend.py "},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.files_changed(file_paths)","title":"file_paths ","text":""},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.get_qa_mode","title":"get_qa_mode","text":" Get QAMode from string. 
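The bookkeeping that files_changed needs (upload added paths, delete removed ones) amounts to a set difference. `diff_files` is a hypothetical helper for illustration, not the actual function in `gerd.frontends.qa_frontend`.

```python
def diff_files(previous, current):
    """Split two file-path lists into (to_upload, to_delete)."""
    prev, curr = set(previous or []), set(current or [])
    # New paths go to the vectorstore; vanished paths are removed from it.
    return sorted(curr - prev), sorted(prev - curr)


to_upload, to_delete = diff_files(["a.txt", "b.txt"], ["b.txt", "c.txt"])
```

Treating `None` as an empty list mirrors the `Optional[list[str]]` parameter type above.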
Parameters: Name Type Description Defaultstr The search type requiredReturns: Type DescriptionQAModesEnum The QAMode Source code ingerd/frontends/qa_frontend.py "},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.get_qa_mode(search_type)","title":"search_type ","text":""},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.handle_developer_mode_checkbox_change","title":"handle_developer_mode_checkbox_change","text":" Enable/disable developer mode. Enables or disables the developer mode and the corresponding GUI elements. Parameters: check: The current state of the developer mode checkbox Returns: The list of GUI element property changes to update Source code ingerd/frontends/qa_frontend.py "},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.handle_type_radio_selection_change","title":"handle_type_radio_selection_change","text":" Enable/disable GUI elements depending on which mode is selected. The order of the updated elements must be considered
Parameters: Name Type Description Defaultstr The current search type requiredReturns: Type DescriptionList[Any] The list of GUI element property changes to update Source code ingerd/frontends/qa_frontend.py "},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.handle_type_radio_selection_change(search_type)","title":"search_type ","text":""},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.query","title":"query","text":" Starts the selected QA Mode. Parameters: Name Type Description Defaultstr The question to ask requiredstr The search type requiredint The number of sources requiredstr The search strategy requiredReturns: Type Descriptionstr The response from the QA service Source code ingerd/frontends/qa_frontend.py "},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.query(question)","title":"question ","text":""},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.query(search_type)","title":"search_type ","text":""},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.query(k_source)","title":"k_source ","text":""},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.query(search_strategy)","title":"search_strategy ","text":""},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.set_prompt","title":"set_prompt","text":" Updates the prompt of the selected QA Mode. 
Parameters: Name Type Description Defaultstr The new prompt requiredstr The search type requiredOptional[Progress] The progress bar to update None Source code in gerd/frontends/qa_frontend.py "},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.set_prompt(prompt)","title":"prompt ","text":""},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.set_prompt(search_type)","title":"search_type ","text":""},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.set_prompt(progress)","title":"progress ","text":""},{"location":"reference/gerd/frontends/router/","title":"gerd.frontends.router","text":""},{"location":"reference/gerd/frontends/router/#gerd.frontends.router","title":"gerd.frontends.router","text":"A gradio frontend to start and stop the GERD services. Since most hosts that use a frontend will not have enough memory to run multiple services at the same time, this router is used to start and stop the services as needed. Classes: Name DescriptionAppController The controller for the app. AppState The state of the service. Functions: Name Descriptioncheck_state Checks the app state and waits for the service to start. Attributes: Name Type DescriptionGRADIO_ROUTER_PORT The port the router is running on. GRADIO_SERVER_PORT The port the gradio server is running on. "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.GRADIO_ROUTER_PORT","title":"GRADIO_ROUTER_PORTmodule-attribute ","text":" The port the router is running on. "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.GRADIO_SERVER_PORT","title":"GRADIO_SERVER_PORTmodule-attribute ","text":" The port the gradio server is running on. "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppController","title":"AppController","text":" The controller for the app. The controller is initialized in the stopped state. Methods: Name Descriptioncheck_port Check if the service port is open. 
instance Get the instance of the controller. start Start the service with the given frontend. start_gen Start the generation service. start_instruct Start the instruct service. start_qa Start the QA service. start_simple Start the simple generation service. start_training Start the training service. stop Stop the service when it is running. Source code ingerd/frontends/router.py "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppController.check_port","title":"check_port staticmethod ","text":" Check if the service port is open. Parameters: Name Type Description Defaultint The port to check requiredReturns: Type Descriptionbool True if the port is open, False otherwise. Source code ingerd/frontends/router.py "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppController.check_port(port)","title":"port ","text":""},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppController.instance","title":"instance classmethod ","text":" Get the instance of the controller. Source code ingerd/frontends/router.py "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppController.start","title":"start","text":" Start the service with the given frontend. Parameters: Name Type Description Defaultstr The frontend service name to start. required Source code ingerd/frontends/router.py "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppController.start(frontend)","title":"frontend ","text":""},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppController.start_gen","title":"start_gen","text":" Start the generation service. Returns: Type Descriptionstr The name of the current app state. Source code ingerd/frontends/router.py "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppController.start_instruct","title":"start_instruct","text":" Start the instruct service. 
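Since check_port only reports whether the service port accepts connections, it can plausibly be implemented with a TCP connect attempt. The sketch below works under that assumption and is not the actual `gerd.frontends.router` code; host and timeout defaults are illustrative.

```python
import socket


def check_port(port: int, host: str = "127.0.0.1", timeout: float = 0.5) -> bool:
    """Return True if something accepts TCP connections on the port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        # connect_ex returns an error code (0 on success) instead of raising.
        return sock.connect_ex((host, port)) == 0
```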
Returns: Type Descriptionstr The name of the current app state Source code ingerd/frontends/router.py "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppController.start_qa","title":"start_qa","text":" Start the QA service. Returns: Type Descriptionstr The name of the current app state Source code ingerd/frontends/router.py "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppController.start_simple","title":"start_simple","text":" Start the simple generation service. Returns: Type Descriptionstr The name of the current app state Source code ingerd/frontends/router.py "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppController.start_training","title":"start_training","text":" Start the training service. Returns: Type Descriptionstr The name of the current app state Source code ingerd/frontends/router.py "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppController.stop","title":"stop","text":" Stop the service when it is running. Returns: Type Descriptionstr The name of the current app state. Source code ingerd/frontends/router.py "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppState","title":"AppState","text":" Bases: The state of the service. Attributes: Name Type DescriptionGENERATE_STARTED The generation service is started. GENERATE_STARTING The generation service is starting. INSTRUCT_STARTED The instruct service is started. INSTRUCT_STARTING The instruct service is starting. QA_STARTED The QA service is started. QA_STARTING The QA service is starting. SIMPLE_STARTED The simple generation service is started. SIMPLE_STARTING The simple generation service is starting. STOPPED All services are stopped. TRAINING_STARTED The training service is started. TRAINING_STARTING The training service is starting. 
"},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppState.GENERATE_STARTED","title":"GENERATE_STARTEDclass-attribute instance-attribute ","text":" The generation service is started. "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppState.GENERATE_STARTING","title":"GENERATE_STARTINGclass-attribute instance-attribute ","text":" The generation service is starting. "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppState.INSTRUCT_STARTED","title":"INSTRUCT_STARTEDclass-attribute instance-attribute ","text":" The instruct service is started. "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppState.INSTRUCT_STARTING","title":"INSTRUCT_STARTINGclass-attribute instance-attribute ","text":" The instruct service is starting. "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppState.QA_STARTED","title":"QA_STARTEDclass-attribute instance-attribute ","text":" The QA service is started. "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppState.QA_STARTING","title":"QA_STARTINGclass-attribute instance-attribute ","text":" The QA service is starting. "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppState.SIMPLE_STARTED","title":"SIMPLE_STARTEDclass-attribute instance-attribute ","text":" The simple generation service is started. "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppState.SIMPLE_STARTING","title":"SIMPLE_STARTINGclass-attribute instance-attribute ","text":" The simple generation service is starting. "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppState.STOPPED","title":"STOPPEDclass-attribute instance-attribute ","text":" All services are stopped. "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppState.TRAINING_STARTED","title":"TRAINING_STARTEDclass-attribute instance-attribute ","text":" The training service is started. 
"},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppState.TRAINING_STARTING","title":"TRAINING_STARTINGclass-attribute instance-attribute ","text":" The training service is starting. "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.check_state","title":"check_state","text":" Checks the app state and waits for the service to start. Returns: Type Descriptionstr The name of the current app state. Source code ingerd/frontends/router.py "},{"location":"reference/gerd/frontends/training/","title":"gerd.frontends.training","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training","title":"gerd.frontends.training","text":"A gradio frontend to train LoRAs with. Classes: Name DescriptionGlobal A singleton class handle to store the current trainer instance. Functions: Name Descriptioncheck_trainer Check if the trainer is (still) running. get_file_list Get a list of files matching the glob pattern. get_loras Get a list of available LoRAs. start_training Start the training process. validate_files Validate the uploaded files. "},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.Global","title":"Global","text":"A singleton class handle to store the current trainer instance. "},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.check_trainer","title":"check_trainer","text":" Check if the trainer is (still) running. When the trainer is running, a progress bar is shown. The method returns a gradio property update of 'visible' which can be used to activate and deactivate elements based on the current training status. Returns: Type Descriptiondict[str, Any] A dictionary with the status of gradio 'visible' property Source code ingerd/frontends/training.py "},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.get_file_list","title":"get_file_list","text":" Get a list of files matching the glob pattern. 
Parameters: Name Type Description Defaultstr The glob pattern to search for files requiredReturns: Type Descriptionstr A string with the list of files Source code ingerd/frontends/training.py "},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.get_file_list(glob_pattern)","title":"glob_pattern ","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.get_loras","title":"get_loras","text":" Get a list of available LoRAs. LORAs are loaded from the path defined in the default LoraTrainingConfig. Returns: Type Descriptiondict[str, Path] A dictionary with the LoRA names as keys and the paths as values Source code ingerd/frontends/training.py "},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.start_training","title":"start_training","text":" Start the training process. While training, the method will update the progress bar. Parameters: Name Type Description Defaultlist[str] | None The list of files to train on requiredstr The name of the model to train requiredstr The name of the LoRA to train requiredstr The training mode requiredstr The source of the data requiredstr The glob pattern to search for files requiredbool Whether to override existing models requiredlist[str] The modules to train requiredlist[str] The flags to set requiredint The number of epochs to train requiredint The batch size requiredint The micro batch size requiredint The cutoff length requiredint The overlap length requiredReturns: Type Descriptionstr A string with the status of the training Source code ingerd/frontends/training.py "},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.start_training(files)","title":"files ","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.start_training(model_name)","title":"model_name ","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.start_training(lora_name)","title":"lora_name 
","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.start_training(mode)","title":"mode ","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.start_training(data_source)","title":"data_source ","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.start_training(input_glob)","title":"input_glob ","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.start_training(override)","title":"override ","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.start_training(modules)","title":"modules ","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.start_training(flags)","title":"flags ","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.start_training(epochs)","title":"epochs ","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.start_training(batch_size)","title":"batch_size ","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.start_training(micro_batch_size)","title":"micro_batch_size ","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.start_training(cutoff_len)","title":"cutoff_len ","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.start_training(overlap_len)","title":"overlap_len ","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.validate_files","title":"validate_files","text":" Validate the uploaded files. Whether the property 'interactive' is True depends on whether any files were valid. 
Parameters: file_paths: The list of file paths mode: The training mode Returns: Type Descriptiontuple[list[str], dict[str, bool]] A tuple with the validated file paths and gradio property 'interactive' Source code ingerd/frontends/training.py "},{"location":"reference/gerd/gen/","title":"gerd.gen","text":""},{"location":"reference/gerd/gen/#gerd.gen","title":"gerd.gen","text":"Services and utilities for text generation with LLMs. Modules: Name Descriptionchat_service Implementation of the ChatService class. generation_service Implements the Generation class. "},{"location":"reference/gerd/gen/chat_service/","title":"gerd.gen.chat_service","text":""},{"location":"reference/gerd/gen/chat_service/#gerd.gen.chat_service","title":"gerd.gen.chat_service","text":"Implementation of the ChatService class. This features the currently favoured approach of instruction-based work with large language models. Thus, models fine-tuned for chat or instructions work best with this service. The service can also be used for plain text generation, as long as the model features a chat template. In this case, this service should be preferred over the GenerationService since it is easier to set up a prompt according to the model's requirements. Classes: Name DescriptionChatService Service to generate text based on a chat history. "},{"location":"reference/gerd/gen/chat_service/#gerd.gen.chat_service.ChatService","title":"ChatService","text":" Service to generate text based on a chat history. The service is initialized with a config and parameters. The parameters are used to initialize the message history. However, future resets will not consider them. The used LLM is loaded according to the model configuration right on initialization. Methods: Name Descriptionadd_message Add a message to the chat history. generate Generate a response based on the chat history. get_prompt_config Get the prompt configuration. reset Reset the chat history. 
set_prompt_config Set the prompt configuration. submit_user_message Submit a message with the user role and generate a response. Source code ingerd/gen/chat_service.py "},{"location":"reference/gerd/gen/chat_service/#gerd.gen.chat_service.ChatService.add_message","title":"add_message","text":" Add a message to the chat history. Source code ingerd/gen/chat_service.py "},{"location":"reference/gerd/gen/chat_service/#gerd.gen.chat_service.ChatService.generate","title":"generate","text":" Generate a response based on the chat history. This method can be used as a replacement for GenerationService.generate in cases where the used model provides a chat template. When this is the case, using this method is more reliable as it requires less manual configuration to set up the prompt according to the model's requirements. Parameters: Name Type Description DefaultDict[str, str] The parameters to format the prompt with requiredReturns: Type DescriptionGenResponse The generation result Source code ingerd/gen/chat_service.py "},{"location":"reference/gerd/gen/chat_service/#gerd.gen.chat_service.ChatService.generate(parameters)","title":"parameters ","text":""},{"location":"reference/gerd/gen/chat_service/#gerd.gen.chat_service.ChatService.get_prompt_config","title":"get_prompt_config","text":" Get the prompt configuration. Source code ingerd/gen/chat_service.py "},{"location":"reference/gerd/gen/chat_service/#gerd.gen.chat_service.ChatService.reset","title":"reset","text":" Reset the chat history. Source code ingerd/gen/chat_service.py "},{"location":"reference/gerd/gen/chat_service/#gerd.gen.chat_service.ChatService.set_prompt_config","title":"set_prompt_config","text":" Set the prompt configuration. Source code ingerd/gen/chat_service.py "},{"location":"reference/gerd/gen/chat_service/#gerd.gen.chat_service.ChatService.submit_user_message","title":"submit_user_message","text":" Submit a message with the user role and generate a response. 
The service's prompt configuration is used to format the prompt unless a different prompt configuration is provided. Parameters: parameters: The parameters to format the prompt with prompt_config: The optional prompt configuration to be used Returns: Type DescriptionGenResponse The generation result Source code ingerd/gen/chat_service.py "},{"location":"reference/gerd/gen/generation_service/","title":"gerd.gen.generation_service","text":""},{"location":"reference/gerd/gen/generation_service/#gerd.gen.generation_service","title":"gerd.gen.generation_service","text":"Implements the GenerationService class. The generation service is meant to generate text based on a prompt and/or the continuation of a provided text. Classes: Name DescriptionGenerationService Service to generate text based on a prompt. "},{"location":"reference/gerd/gen/generation_service/#gerd.gen.generation_service.GenerationService","title":"GenerationService","text":" Service to generate text based on a prompt. Initialize the generation service and load the model. Parameters: Name Type Description DefaultGenerationConfig The configuration for the generation service requiredMethods: Name Descriptiongenerate Generate text based on the prompt configuration. get_prompt_config Get the prompt configuration. set_prompt_config Sets the prompt configuration. Source code ingerd/gen/generation_service.py "},{"location":"reference/gerd/gen/generation_service/#gerd.gen.generation_service.GenerationService(config)","title":"config ","text":""},{"location":"reference/gerd/gen/generation_service/#gerd.gen.generation_service.GenerationService.generate","title":"generate","text":" Generate text based on the prompt configuration. The actual prompt is provided by the prompt configuration. The list of parameters is used to format the prompt and replace the placeholders. The list can be empty if the prompt does not contain any placeholders. 
Parameters: Name Type Description DefaultDict[str, str] The parameters to format the prompt with requiredbool Whether to add the prompt to the response False Returns: Type DescriptionGenResponse The generation result Source code ingerd/gen/generation_service.py "},{"location":"reference/gerd/gen/generation_service/#gerd.gen.generation_service.GenerationService.generate(parameters)","title":"parameters ","text":""},{"location":"reference/gerd/gen/generation_service/#gerd.gen.generation_service.GenerationService.generate(add_prompt)","title":"add_prompt ","text":""},{"location":"reference/gerd/gen/generation_service/#gerd.gen.generation_service.GenerationService.get_prompt_config","title":"get_prompt_config","text":" Get the prompt configuration. Returns: Type DescriptionPromptConfig The prompt configuration Source code ingerd/gen/generation_service.py "},{"location":"reference/gerd/gen/generation_service/#gerd.gen.generation_service.GenerationService.set_prompt_config","title":"set_prompt_config","text":" Sets the prompt configuration. Parameters: Name Type Description DefaultPromptConfig The prompt configuration requiredReturns: The prompt configuration; should be the same as the input in most cases Source code ingerd/gen/generation_service.py "},{"location":"reference/gerd/gen/generation_service/#gerd.gen.generation_service.GenerationService.set_prompt_config(config)","title":"config ","text":""},{"location":"reference/gerd/loader/","title":"gerd.loader","text":""},{"location":"reference/gerd/loader/#gerd.loader","title":"gerd.loader","text":"Module for loading language models. Depending on the configuration, different language models are loaded and different libraries are used. The main goal is to provide a unified interface to the different models and libraries. Classes: Name DescriptionLLM The abstract base class for large language models. LlamaCppLLM A language model using the Llama.cpp library. MockLLM A mock language model for testing purposes. 
RemoteLLM A language model using a remote endpoint. TransformerLLM A language model using the transformers library. Functions: Name Descriptionload_model_from_config Loads a language model based on the configuration. "},{"location":"reference/gerd/loader/#gerd.loader.LLM","title":"LLM","text":" The abstract base class for large language models. Should be implemented by all language model backends. A language model is initialized with a configuration. Parameters: Name Type Description DefaultModelConfig The configuration for the language model requiredMethods: Name Descriptioncreate_chat_completion Create a chat completion based on a list of messages. generate Generate text based on a prompt. Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.LLM(config)","title":"config ","text":""},{"location":"reference/gerd/loader/#gerd.loader.LLM.create_chat_completion","title":"create_chat_completion abstractmethod ","text":" Create a chat completion based on a list of messages. Parameters: Name Type Description Defaultlist[ChatMessage] The list of messages in the chat history requiredReturns: Type Descriptiontuple[ChatRole, str] The role of the generated message and the content Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.LLM.create_chat_completion(messages)","title":"messages ","text":""},{"location":"reference/gerd/loader/#gerd.loader.LLM.generate","title":"generate abstractmethod ","text":" Generate text based on a prompt. Parameters: Name Type Description Defaultstr The prompt to generate text from requiredReturns: Type Descriptionstr The generated text Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.LLM.generate(prompt)","title":"prompt ","text":""},{"location":"reference/gerd/loader/#gerd.loader.LlamaCppLLM","title":"LlamaCppLLM","text":" Bases: A language model using the Llama.cpp library. A language model is initialized with a configuration. 
Parameters: Name Type Description DefaultModelConfig The configuration for the language model requiredMethods: Name Descriptioncreate_chat_completion Create a chat completion based on a list of messages. generate Generate text based on a prompt. Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.LlamaCppLLM(config)","title":"config ","text":""},{"location":"reference/gerd/loader/#gerd.loader.LlamaCppLLM.create_chat_completion","title":"create_chat_completion","text":" Create a chat completion based on a list of messages. Parameters: Name Type Description Defaultlist[ChatMessage] The list of messages in the chat history requiredReturns: Type Descriptiontuple[ChatRole, str] The role of the generated message and the content Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.LlamaCppLLM.create_chat_completion(messages)","title":"messages ","text":""},{"location":"reference/gerd/loader/#gerd.loader.LlamaCppLLM.generate","title":"generate","text":" Generate text based on a prompt. Parameters: Name Type Description Defaultstr The prompt to generate text from requiredReturns: Type Descriptionstr The generated text Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.LlamaCppLLM.generate(prompt)","title":"prompt ","text":""},{"location":"reference/gerd/loader/#gerd.loader.MockLLM","title":"MockLLM","text":" Bases: A mock language model for testing purposes. A language model is initialized with a configuration. Parameters: Name Type Description DefaultModelConfig The configuration for the language model requiredMethods: Name Descriptioncreate_chat_completion Create a chat completion based on a list of messages. generate Generate text based on a prompt. 
Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.MockLLM(config)","title":"config ","text":""},{"location":"reference/gerd/loader/#gerd.loader.MockLLM.create_chat_completion","title":"create_chat_completion","text":" Create a chat completion based on a list of messages. Parameters: Name Type Description Defaultlist[ChatMessage] The list of messages in the chat history requiredReturns: Type Descriptiontuple[ChatRole, str] The role of the generated message and the content Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.MockLLM.create_chat_completion(messages)","title":"messages ","text":""},{"location":"reference/gerd/loader/#gerd.loader.MockLLM.generate","title":"generate","text":" Generate text based on a prompt. Parameters: Name Type Description Defaultstr The prompt to generate text from requiredReturns: Type Descriptionstr The generated text Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.MockLLM.generate(prompt)","title":"prompt ","text":""},{"location":"reference/gerd/loader/#gerd.loader.RemoteLLM","title":"RemoteLLM","text":" Bases: A language model using a remote endpoint. The endpoint can be any service that is compatible with the llama.cpp and OpenAI APIs. For further information, please refer to the llama.cpp server API. A language model is initialized with a configuration. Parameters: Name Type Description DefaultModelConfig The configuration for the language model requiredMethods: Name Descriptioncreate_chat_completion Create a chat completion based on a list of messages. generate Generate text based on a prompt. Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.RemoteLLM(config)","title":"config ","text":""},{"location":"reference/gerd/loader/#gerd.loader.RemoteLLM.create_chat_completion","title":"create_chat_completion","text":" Create a chat completion based on a list of messages. 
Parameters: Name Type Description Defaultlist[ChatMessage] The list of messages in the chat history requiredReturns: Type Descriptiontuple[ChatRole, str] The role of the generated message and the content Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.RemoteLLM.create_chat_completion(messages)","title":"messages ","text":""},{"location":"reference/gerd/loader/#gerd.loader.RemoteLLM.generate","title":"generate","text":" Generate text based on a prompt. Parameters: Name Type Description Defaultstr The prompt to generate text from requiredReturns: Type Descriptionstr The generated text Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.RemoteLLM.generate(prompt)","title":"prompt ","text":""},{"location":"reference/gerd/loader/#gerd.loader.TransformerLLM","title":"TransformerLLM","text":" Bases: A language model using the transformers library. A language model is initialized with a configuration. Parameters: Name Type Description DefaultModelConfig The configuration for the language model requiredMethods: Name Descriptioncreate_chat_completion Create a chat completion based on a list of messages. generate Generate text based on a prompt. Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.TransformerLLM(config)","title":"config ","text":""},{"location":"reference/gerd/loader/#gerd.loader.TransformerLLM.create_chat_completion","title":"create_chat_completion","text":" Create a chat completion based on a list of messages. 
Parameters: Name Type Description Defaultlist[ChatMessage] The list of messages in the chat history requiredReturns: Type Descriptiontuple[ChatRole, str] The role of the generated message and the content Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.TransformerLLM.create_chat_completion(messages)","title":"messages ","text":""},{"location":"reference/gerd/loader/#gerd.loader.TransformerLLM.generate","title":"generate","text":" Generate text based on a prompt. Parameters: Name Type Description Defaultstr The prompt to generate text from requiredReturns: Type Descriptionstr The generated text Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.TransformerLLM.generate(prompt)","title":"prompt ","text":""},{"location":"reference/gerd/loader/#gerd.loader.load_model_from_config","title":"load_model_from_config","text":" Loads a language model based on the configuration. Which language model is loaded depends on the configuration. For instance, if an endpoint is provided, a remote language model is loaded. If a file is provided, Llama.cpp is used. Otherwise, transformers is used. Parameters: Name Type Description DefaultModelConfig The configuration for the language model requiredReturns: Type DescriptionLLM The loaded language model Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.load_model_from_config(config)","title":"config ","text":""},{"location":"reference/gerd/models/","title":"gerd.models","text":""},{"location":"reference/gerd/models/#gerd.models","title":"gerd.models","text":"Pydantic model definitions and data classes that are shared across modules. Modules: Name Descriptiongen Models for the generation and chat service. label Data definitions for Label Studio tasks. logging Logging configuration and utilities. model Model configuration for supported model classes. qa Data definitions for QA model configuration. server Server configuration model for REST backends. 
"},{"location":"reference/gerd/models/gen/","title":"gerd.models.gen","text":""},{"location":"reference/gerd/models/gen/#gerd.models.gen","title":"gerd.models.gen","text":"Models for the generation and chat service. Classes: Name DescriptionGenerationConfig Configuration for the generation services. GenerationFeaturesConfig Configuration for the generation-specific features. "},{"location":"reference/gerd/models/gen/#gerd.models.gen.GenerationConfig","title":"GenerationConfig","text":" Bases: Configuration for the generation services. A configuration can be used for the GenerationService or the ChatService. Both support generating text based on a prompt. Methods: Name Descriptionsettings_customise_sources Customize the settings sources used by pydantic-settings. Attributes: Name Type Descriptionfeatures GenerationFeaturesConfig The extra features to be used for the generation service. model ModelConfig The model to be used for the generation service. "},{"location":"reference/gerd/models/gen/#gerd.models.gen.GenerationConfig.features","title":"featuresclass-attribute instance-attribute ","text":" The extra features to be used for the generation service. "},{"location":"reference/gerd/models/gen/#gerd.models.gen.GenerationConfig.model","title":"modelclass-attribute instance-attribute ","text":" The model to be used for the generation service. "},{"location":"reference/gerd/models/gen/#gerd.models.gen.GenerationConfig.settings_customise_sources","title":"settings_customise_sourcesclassmethod ","text":" Customize the settings sources used by pydantic-settings. The order of the sources is important. The first source has the highest priority. Parameters: Name Type Description DefaultThe class of the settings. requiredPydanticBaseSettingsSource The settings from the initialization. requiredPydanticBaseSettingsSource The settings from the environment. requiredPydanticBaseSettingsSource The settings from the dotenv file. 
requiredPydanticBaseSettingsSource The settings from the secret file. requiredReturns: Type DescriptionTuple[PydanticBaseSettingsSource, ...] The customized settings sources. Source code ingerd/models/gen.py "},{"location":"reference/gerd/models/gen/#gerd.models.gen.GenerationConfig.settings_customise_sources(cls)","title":"cls ","text":""},{"location":"reference/gerd/models/gen/#gerd.models.gen.GenerationConfig.settings_customise_sources(init_settings)","title":"init_settings ","text":""},{"location":"reference/gerd/models/gen/#gerd.models.gen.GenerationConfig.settings_customise_sources(env_settings)","title":"env_settings ","text":""},{"location":"reference/gerd/models/gen/#gerd.models.gen.GenerationConfig.settings_customise_sources(dotenv_settings)","title":"dotenv_settings ","text":""},{"location":"reference/gerd/models/gen/#gerd.models.gen.GenerationConfig.settings_customise_sources(file_secret_settings)","title":"file_secret_settings ","text":""},{"location":"reference/gerd/models/gen/#gerd.models.gen.GenerationFeaturesConfig","title":"GenerationFeaturesConfig","text":" Bases: Configuration for the generation-specific features. Attributes: Name Type Descriptionprompt_chaining PromptChainingConfig | None Configuration for prompt chaining. "},{"location":"reference/gerd/models/gen/#gerd.models.gen.GenerationFeaturesConfig.prompt_chaining","title":"prompt_chainingclass-attribute instance-attribute ","text":" Configuration for prompt chaining. "},{"location":"reference/gerd/models/label/","title":"gerd.models.label","text":""},{"location":"reference/gerd/models/label/#gerd.models.label","title":"gerd.models.label","text":"Data definitions for Label Studio tasks. The defined models and enums are used to parse and work with Label Studio data exported as JSON. Classes: Name DescriptionLabelStudioAnnotation Annotation of a Label Studio task. LabelStudioAnnotationResult Result of a Label Studio annotation. LabelStudioAnnotationValue Value of a Label Studio annotation. 
LabelStudioLabel Labels for the GRASCCO Label Studio annotations. LabelStudioTask Task of a Label Studio project. Functions: Name Descriptionload_label_studio_tasks Load Label Studio tasks from a JSON file. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation","title":"LabelStudioAnnotation","text":" Bases: Annotation of a Label Studio task. A collection of annotations is associated with a task. Attributes: Name Type Descriptioncompleted_by int The user ID of the user who completed the annotation. created_at str The creation date of the annotation. draft_created_at Optional[str] The creation date of the draft. ground_truth bool Whether the annotation is ground truth. id int The ID of the annotation. import_id Optional[str] The import ID of the annotation. last_action Optional[str] The last action of the annotation. last_created_by Optional[int] The user ID of the user who last created the annotation. lead_time float The lead time of the annotation. parent_annotation Optional[str] The parent annotation. parent_prediction Optional[str] The parent prediction. prediction Dict[str, str] The prediction of the annotation. project int The project ID of the annotation. result List[LabelStudioAnnotationResult] The results of the annotation. result_count int The number of results. task int The task ID of the annotation. unique_id str The unique ID of the annotation. updated_at str The update date of the annotation. updated_by int The user ID of the user who updated the annotation. was_cancelled bool Whether the annotation was cancelled. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.completed_by","title":"completed_byinstance-attribute ","text":" The user ID of the user who completed the annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.created_at","title":"created_atinstance-attribute ","text":" The creation date of the annotation. 
"},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.draft_created_at","title":"draft_created_atinstance-attribute ","text":" The creation date of the draft. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.ground_truth","title":"ground_truthinstance-attribute ","text":" Whether the annotation is ground truth. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.id","title":"idinstance-attribute ","text":" The ID of the annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.import_id","title":"import_idinstance-attribute ","text":" The import ID of the annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.last_action","title":"last_actioninstance-attribute ","text":" The last action of the annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.last_created_by","title":"last_created_byinstance-attribute ","text":" The user ID of the user who last created the annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.lead_time","title":"lead_timeinstance-attribute ","text":" The lead time of the annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.parent_annotation","title":"parent_annotationinstance-attribute ","text":" The parent annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.parent_prediction","title":"parent_predictioninstance-attribute ","text":" The parent prediction. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.prediction","title":"predictioninstance-attribute ","text":" The prediction of the annotation. 
"},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.project","title":"projectinstance-attribute ","text":" The project ID of the annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.result","title":"resultinstance-attribute ","text":" The results of the annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.result_count","title":"result_countinstance-attribute ","text":" The number of results. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.task","title":"taskinstance-attribute ","text":" The task ID of the annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.unique_id","title":"unique_idinstance-attribute ","text":" The unique ID of the annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.updated_at","title":"updated_atinstance-attribute ","text":" The update date of the annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.updated_by","title":"updated_byinstance-attribute ","text":" The user ID of the user who updated the annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.was_cancelled","title":"was_cancelledinstance-attribute ","text":" Whether the annotation was cancelled. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotationResult","title":"LabelStudioAnnotationResult","text":" Bases: Result of a Label Studio annotation. Attributes: Name Type Descriptionfrom_name str The name of the source. id str The ID of the result. origin str The origin of the result. to_name str The name of the target. type str The type of the result. value LabelStudioAnnotationValue The value of the result. 
"},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotationResult.from_name","title":"from_nameinstance-attribute ","text":" The name of the source. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotationResult.id","title":"idinstance-attribute ","text":" The ID of the result. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotationResult.origin","title":"origininstance-attribute ","text":" The origin of the result. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotationResult.to_name","title":"to_nameinstance-attribute ","text":" The name of the target. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotationResult.type","title":"typeinstance-attribute ","text":" The type of the result. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotationResult.value","title":"valueinstance-attribute ","text":" The value of the result. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotationValue","title":"LabelStudioAnnotationValue","text":" Bases: Value of a Label Studio annotation. Attributes: Name Type Descriptionend int The end of the annotation. labels List[LabelStudioLabel] The labels of the annotation. start int The start of the annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotationValue.end","title":"endinstance-attribute ","text":" The end of the annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotationValue.labels","title":"labelsinstance-attribute ","text":" The labels of the annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotationValue.start","title":"startinstance-attribute ","text":" The start of the annotation. 
"},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioLabel","title":"LabelStudioLabel","text":" Bases: Labels for the GRASCCO Label Studio annotations. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask","title":"LabelStudioTask","text":" Bases: Task of a Label Studio project. A task is a single unit of work that can be annotated by a user. Tasks can be used to train an auto labeler or to evaluate the performance of a model. Attributes: Name Type Descriptionannotations List[LabelStudioAnnotation] The annotations of the task. cancelled_annotations int The number of cancelled annotations. comment_authors List[str] The authors of the comments. comment_count int The number of comments. created_at str The creation date of the task. data Optional[Dict[str, str]] The data of the task. drafts List[str] The drafts of the task. file_name str Extracts the original file name from the file upload. file_upload str The file upload of the task. id int The ID of the task. inner_id int The inner ID of the task. last_comment_updated_at Optional[str] The update date of the last comment. meta Optional[Dict[str, str]] The meta data of the task. predictions List[str] The predictions of the task. project int The project ID of the task. total_annotations int The total number of annotations. total_predictions int The total number of predictions. unresolved_comment_count int The number of unresolved comments. updated_at str The update date of the task. updated_by int The user ID of the user who updated the task. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.annotations","title":"annotationsinstance-attribute ","text":" The annotations of the task. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.cancelled_annotations","title":"cancelled_annotationsinstance-attribute ","text":" The number of cancelled annotations. 
"},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.comment_authors","title":"comment_authorsinstance-attribute ","text":" The authors of the comments. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.comment_count","title":"comment_countinstance-attribute ","text":" The number of comments. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.created_at","title":"created_atinstance-attribute ","text":" The creation date of the task. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.data","title":"datainstance-attribute ","text":" The data of the task. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.drafts","title":"draftsinstance-attribute ","text":" The drafts of the task. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.file_name","title":"file_nameproperty ","text":" Extracts the original file name from the file upload. File uploads are stored as instance-attribute ","text":" The file upload of the task. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.id","title":"idinstance-attribute ","text":" The ID of the task. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.inner_id","title":"inner_idinstance-attribute ","text":" The inner ID of the task. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.last_comment_updated_at","title":"last_comment_updated_atinstance-attribute ","text":" The update date of the last comment. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.meta","title":"metainstance-attribute ","text":" The meta data of the task. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.predictions","title":"predictionsinstance-attribute ","text":" The predictions of the task. 
"},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.project","title":"projectinstance-attribute ","text":" The project ID of the task. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.total_annotations","title":"total_annotationsinstance-attribute ","text":" The total number of annotations. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.total_predictions","title":"total_predictionsinstance-attribute ","text":" The total number of predictions. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.unresolved_comment_count","title":"unresolved_comment_countinstance-attribute ","text":" The number of unresolved comments. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.updated_at","title":"updated_atinstance-attribute ","text":" The update date of the task. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.updated_by","title":"updated_byinstance-attribute ","text":" The user ID of the user who updated the task. "},{"location":"reference/gerd/models/label/#gerd.models.label.load_label_studio_tasks","title":"load_label_studio_tasks","text":" Load Label Studio tasks from a JSON file. Parameters: Name Type Description Defaultstr The path to the JSON file. requiredReturns: Type DescriptionList[LabelStudioTask] The loaded Label Studio tasks Source code ingerd/models/label.py "},{"location":"reference/gerd/models/label/#gerd.models.label.load_label_studio_tasks(file_path)","title":"file_path ","text":""},{"location":"reference/gerd/models/logging/","title":"gerd.models.logging","text":""},{"location":"reference/gerd/models/logging/#gerd.models.logging","title":"gerd.models.logging","text":"Logging configuration and utilities. Classes: Name DescriptionLogLevel Wrapper for string-based log levels. LoggingConfig Configuration for logging. 
"},{"location":"reference/gerd/models/logging/#gerd.models.logging.LogLevel","title":"LogLevel","text":" Bases: Wrapper for string-based log levels. Translates log levels to integers for Python's logging framework. Methods: Name Descriptionas_int Convert the log level to an integer. "},{"location":"reference/gerd/models/logging/#gerd.models.logging.LogLevel.as_int","title":"as_int","text":" Convert the log level to an integer. Source code ingerd/models/logging.py "},{"location":"reference/gerd/models/logging/#gerd.models.logging.LoggingConfig","title":"LoggingConfig","text":" Bases: Configuration for logging. Attributes: Name Type Descriptionlevel LogLevel The log level. "},{"location":"reference/gerd/models/logging/#gerd.models.logging.LoggingConfig.level","title":"levelinstance-attribute ","text":" The log level. "},{"location":"reference/gerd/models/model/","title":"gerd.models.model","text":""},{"location":"reference/gerd/models/model/#gerd.models.model","title":"gerd.models.model","text":"Model configuration for supported model classes. Classes: Name DescriptionChatMessage Data structure for chat messages. ModelConfig Configuration for large language models. ModelEndpoint Configuration for model endpoints where models are hosted remotely. PromptConfig Configuration for prompts. Attributes: Name Type DescriptionChatRole Currently supported chat roles. EndpointType Endpoint for remote llm services. "},{"location":"reference/gerd/models/model/#gerd.models.model.ChatRole","title":"ChatRolemodule-attribute ","text":" Currently supported chat roles. "},{"location":"reference/gerd/models/model/#gerd.models.model.EndpointType","title":"EndpointTypemodule-attribute ","text":" Endpoint for remote llm services. "},{"location":"reference/gerd/models/model/#gerd.models.model.ChatMessage","title":"ChatMessage","text":" Bases: Data structure for chat messages. Attributes: Name Type Descriptioncontent str The content of the chat message. 
role ChatRole The role or source of the chat message. "},{"location":"reference/gerd/models/model/#gerd.models.model.ChatMessage.content","title":"contentinstance-attribute ","text":" The content of the chat message. "},{"location":"reference/gerd/models/model/#gerd.models.model.ChatMessage.role","title":"roleinstance-attribute ","text":" The role or source of the chat message. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig","title":"ModelConfig","text":" Bases: Configuration for large language models. Most LLM libraries and/or services share common parameters for configuration. Explaining each parameter is out of scope for this documentation. The most essential parameters are explained for instance here. Default values have been chosen according to the ctransformers library. Attributes: Name Type Descriptionbatch_size int The batch size for the generation. context_length int The context length for the model. Currently only LLaMA, MPT and Falcon models support this parameter. endpoint Optional[ModelEndpoint] The endpoint of the model when hosted remotely. extra_kwargs Optional[dict[str, Any]] Additional keyword arguments for the model library. file Optional[str] The path to the model file. For local models only. gpu_layers int The number of layers to run on the GPU. last_n_tokens int The number of tokens to consider for the repetition penalty. loras set[Path] The list of additional LoRA files to load. max_new_tokens int The maximum number of new tokens to generate. name str The name of the model. Can be a path to a local model or a Hugging Face handle. prompt_config PromptConfig The prompt configuration. prompt_setup List[Tuple[Literal['system', 'user', 'assistant'], PromptConfig]] A list of predefined prompts for the model. repetition_penalty float The repetition penalty. seed int The seed for the random number generator. stop Optional[List[str]] The stop tokens for the generation. stream bool Whether to stream the output. 
temperature float The temperature for the sampling. threads Optional[int] The number of threads to use for the generation. top_k int The number of tokens to consider for the top-k sampling. top_p float The cumulative probability for the top-p sampling. torch_dtype Optional[str] The torch data type for the model. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.batch_size","title":"batch_sizeclass-attribute instance-attribute ","text":" The batch size for the generation. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.context_length","title":"context_lengthclass-attribute instance-attribute ","text":" The context length for the model. Currently only LLaMA, MPT and Falcon models support this parameter. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.endpoint","title":"endpointclass-attribute instance-attribute ","text":" The endpoint of the model when hosted remotely. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.extra_kwargs","title":"extra_kwargsclass-attribute instance-attribute ","text":" Additional keyword arguments for the model library. The accepted keys and values depend on the model library used. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.file","title":"fileclass-attribute instance-attribute ","text":" The path to the model file. For local models only. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.gpu_layers","title":"gpu_layersclass-attribute instance-attribute ","text":" The number of layers to run on the GPU. The actual number is only used by llama.cpp. The other model libraries will determine whether to run on the GPU just by checking whether this value is larger than 0. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.last_n_tokens","title":"last_n_tokensclass-attribute instance-attribute ","text":" The number of tokens to consider for the repetition penalty. 
"},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.loras","title":"lorasclass-attribute instance-attribute ","text":" The list of additional LoRA files to load. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.max_new_tokens","title":"max_new_tokensclass-attribute instance-attribute ","text":" The maximum number of new tokens to generate. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.name","title":"nameclass-attribute instance-attribute ","text":" The name of the model. Can be a path to a local model or a Hugging Face handle. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.prompt_config","title":"prompt_configclass-attribute instance-attribute ","text":" The prompt configuration. This is used to process the input passed to the services. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.prompt_setup","title":"prompt_setupclass-attribute instance-attribute ","text":" A list of predefined prompts for the model. When a model context is initialized or reset, this will be used to set up the context. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.repetition_penalty","title":"repetition_penaltyclass-attribute instance-attribute ","text":" The repetition penalty. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.seed","title":"seedclass-attribute instance-attribute ","text":" The seed for the random number generator. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.stop","title":"stopclass-attribute instance-attribute ","text":" The stop tokens for the generation. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.stream","title":"streamclass-attribute instance-attribute ","text":" Whether to stream the output. 
"},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.temperature","title":"temperatureclass-attribute instance-attribute ","text":" The temperature for the sampling. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.threads","title":"threadsclass-attribute instance-attribute ","text":" The number of threads to use for the generation. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.top_k","title":"top_kclass-attribute instance-attribute ","text":" The number of tokens to consider for the top-k sampling. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.top_p","title":"top_pclass-attribute instance-attribute ","text":" The cumulative probability for the top-p sampling. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.torch_dtype","title":"torch_dtypeclass-attribute instance-attribute ","text":" The torch data type for the model. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelEndpoint","title":"ModelEndpoint","text":" Bases: Configuration for model endpoints where models are hosted remotely. "},{"location":"reference/gerd/models/model/#gerd.models.model.PromptConfig","title":"PromptConfig","text":" Bases: Configuration for prompts. Methods: Name Descriptionformat Format the prompt with the given parameters. model_post_init Post-initialization hook for pydantic. Attributes: Name Type Descriptionis_template bool Whether the config uses jinja2 templates. parameters list[str] Retrieves and returns the parameters of the prompt. path Optional[str] The path to an external prompt file. template Optional[Template] Optional template of the prompt. This should follow the Jinja2 syntax. text str The text of the prompt. Can contain placeholders. "},{"location":"reference/gerd/models/model/#gerd.models.model.PromptConfig.is_template","title":"is_templateclass-attribute instance-attribute ","text":" Whether the config uses jinja2 templates. 
"},{"location":"reference/gerd/models/model/#gerd.models.model.PromptConfig.parameters","title":"parametersproperty ","text":" Retrieves and returns the parameters of the prompt. This happens on-the-fly and is not stored in the model. Returns: Type Descriptionlist[str] The parameters of the prompt. "},{"location":"reference/gerd/models/model/#gerd.models.model.PromptConfig.path","title":"pathclass-attribute instance-attribute ","text":" The path to an external prompt file. This will overload the values of text and/or template. "},{"location":"reference/gerd/models/model/#gerd.models.model.PromptConfig.template","title":"templateclass-attribute instance-attribute ","text":" Optional template of the prompt. This should follow the Jinja2 syntax. "},{"location":"reference/gerd/models/model/#gerd.models.model.PromptConfig.text","title":"textclass-attribute instance-attribute ","text":" The text of the prompt. Can contain placeholders. "},{"location":"reference/gerd/models/model/#gerd.models.model.PromptConfig.format","title":"format","text":" Format the prompt with the given parameters. Parameters: Name Type Description DefaultMapping[str, str | list[ChatMessage]] | None The parameters to format the prompt with. None Returns: Type Descriptionstr The formatted prompt Source code ingerd/models/model.py "},{"location":"reference/gerd/models/model/#gerd.models.model.PromptConfig.format(parameters)","title":"parameters ","text":""},{"location":"reference/gerd/models/model/#gerd.models.model.PromptConfig.model_post_init","title":"model_post_init","text":" Post-initialization hook for pydantic. When path is set, the text or template is read from the file and the template is created. A path ending with '.jinja2' will be treated as a template. If no path is set, the text parameter is used to initialize the template if is_template is set to True. 
Parameters: __context: The context of the model (not used) Source code ingerd/models/model.py "},{"location":"reference/gerd/models/qa/","title":"gerd.models.qa","text":""},{"location":"reference/gerd/models/qa/#gerd.models.qa","title":"gerd.models.qa","text":"Data definitions for QA model configuration. Classes: Name DescriptionAnalyzeConfig The configuration for the analyze service. EmbeddingConfig Embedding specific model configuration. QAConfig Configuration for the QA services. QAFeaturesConfig Configuration for the QA-specific features. "},{"location":"reference/gerd/models/qa/#gerd.models.qa.AnalyzeConfig","title":"AnalyzeConfig","text":" Bases: The configuration for the analyze service. Attributes: Name Type Descriptionmodel ModelConfig The model to be used for the analyze service. "},{"location":"reference/gerd/models/qa/#gerd.models.qa.AnalyzeConfig.model","title":"modelinstance-attribute ","text":" The model to be used for the analyze service. "},{"location":"reference/gerd/models/qa/#gerd.models.qa.EmbeddingConfig","title":"EmbeddingConfig","text":" Bases: Embedding specific model configuration. Attributes: Name Type Descriptionchunk_overlap int The overlap between chunks. chunk_size int The size of the chunks stored in the database. db_path Optional[str] The path to the database file. model ModelConfig The model used for the embedding. "},{"location":"reference/gerd/models/qa/#gerd.models.qa.EmbeddingConfig.chunk_overlap","title":"chunk_overlapinstance-attribute ","text":" The overlap between chunks. "},{"location":"reference/gerd/models/qa/#gerd.models.qa.EmbeddingConfig.chunk_size","title":"chunk_sizeinstance-attribute ","text":" The size of the chunks stored in the database. "},{"location":"reference/gerd/models/qa/#gerd.models.qa.EmbeddingConfig.db_path","title":"db_pathclass-attribute instance-attribute ","text":" The path to the database file. 
"},{"location":"reference/gerd/models/qa/#gerd.models.qa.EmbeddingConfig.model","title":"modelinstance-attribute ","text":" The model used for the embedding. This model should be rather small and fast to compute. Furthermore, not every model is suited for this task. "},{"location":"reference/gerd/models/qa/#gerd.models.qa.QAConfig","title":"QAConfig","text":" Bases: Configuration for the QA services. This model can be used to retrieve parameters from a variety of sources. The main sources are YAML files (loaded as Methods: Name Descriptionsettings_customise_sources Customize the settings sources used by pydantic-settings. Attributes: Name Type Descriptiondevice str The device to run the model on. embedding EmbeddingConfig The configuration for the embedding service. features QAFeaturesConfig The configuration for the QA-specific features. model ModelConfig The model to be used for the QA service. "},{"location":"reference/gerd/models/qa/#gerd.models.qa.QAConfig.device","title":"deviceclass-attribute instance-attribute ","text":" The device to run the model on. "},{"location":"reference/gerd/models/qa/#gerd.models.qa.QAConfig.embedding","title":"embeddinginstance-attribute ","text":" The configuration for the embedding service. "},{"location":"reference/gerd/models/qa/#gerd.models.qa.QAConfig.features","title":"featuresinstance-attribute ","text":" The configuration for the QA-specific features. "},{"location":"reference/gerd/models/qa/#gerd.models.qa.QAConfig.model","title":"modelinstance-attribute ","text":" The model to be used for the QA service. "},{"location":"reference/gerd/models/qa/#gerd.models.qa.QAConfig.settings_customise_sources","title":"settings_customise_sourcesclassmethod ","text":" Customize the settings sources used by pydantic-settings. The order of the sources is important. The first source has the highest priority. Parameters: Name Type Description DefaultThe class of the settings. 
requiredPydanticBaseSettingsSource The settings from the initialization. requiredPydanticBaseSettingsSource The settings from the environment. requiredPydanticBaseSettingsSource The settings from the dotenv file. requiredPydanticBaseSettingsSource The settings from the secret file. requiredReturns: Type DescriptionTuple[PydanticBaseSettingsSource, ...] The customized settings sources. Source code ingerd/models/qa.py "},{"location":"reference/gerd/models/qa/#gerd.models.qa.QAConfig.settings_customise_sources(cls)","title":"cls ","text":""},{"location":"reference/gerd/models/qa/#gerd.models.qa.QAConfig.settings_customise_sources(init_settings)","title":"init_settings ","text":""},{"location":"reference/gerd/models/qa/#gerd.models.qa.QAConfig.settings_customise_sources(env_settings)","title":"env_settings ","text":""},{"location":"reference/gerd/models/qa/#gerd.models.qa.QAConfig.settings_customise_sources(dotenv_settings)","title":"dotenv_settings ","text":""},{"location":"reference/gerd/models/qa/#gerd.models.qa.QAConfig.settings_customise_sources(file_secret_settings)","title":"file_secret_settings ","text":""},{"location":"reference/gerd/models/qa/#gerd.models.qa.QAFeaturesConfig","title":"QAFeaturesConfig","text":" Bases: Configuration for the QA-specific features. Attributes: Name Type Descriptionanalyze AnalyzeConfig Configuration to extract letter of discharge information from the text. analyze_mult_prompts AnalyzeConfig Configuration to extract predefined infos with multiple prompts from the text. return_source bool Whether to return the source in the response. "},{"location":"reference/gerd/models/qa/#gerd.models.qa.QAFeaturesConfig.analyze","title":"analyzeinstance-attribute ","text":" Configuration to extract letter of discharge information from the text. 
"},{"location":"reference/gerd/models/qa/#gerd.models.qa.QAFeaturesConfig.analyze_mult_prompts","title":"analyze_mult_promptsinstance-attribute ","text":" Configuration to extract predefined information with multiple prompts from the text. "},{"location":"reference/gerd/models/qa/#gerd.models.qa.QAFeaturesConfig.return_source","title":"return_sourceinstance-attribute ","text":" Whether to return the source in the response. "},{"location":"reference/gerd/models/server/","title":"gerd.models.server","text":""},{"location":"reference/gerd/models/server/#gerd.models.server","title":"gerd.models.server","text":"Server configuration model for REST backends. Classes: Name DescriptionServerConfig Server configuration model for REST backends. "},{"location":"reference/gerd/models/server/#gerd.models.server.ServerConfig","title":"ServerConfig","text":" Bases: Server configuration model for REST backends. Attributes: Name Type Descriptionapi_prefix str The prefix of the API. host str The host of the server. port int The port of the server. "},{"location":"reference/gerd/models/server/#gerd.models.server.ServerConfig.api_prefix","title":"api_prefixinstance-attribute ","text":" The prefix of the API. "},{"location":"reference/gerd/models/server/#gerd.models.server.ServerConfig.host","title":"hostinstance-attribute ","text":" The host of the server. "},{"location":"reference/gerd/models/server/#gerd.models.server.ServerConfig.port","title":"portinstance-attribute ","text":" The port of the server. "},{"location":"reference/gerd/qa/","title":"gerd.qa","text":""},{"location":"reference/gerd/qa/#gerd.qa","title":"gerd.qa","text":"Services and utilities for retrieval augmented generation (RAG). Modules: Name Descriptionqa_service Implements the QAService class. "},{"location":"reference/gerd/qa/qa_service/","title":"gerd.qa.qa_service","text":""},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service","title":"gerd.qa.qa_service","text":"Implements the QAService class. 
The question and answer service is used to query a language model with questions related to a specific context. The context is usually a set of documents that are loaded into a vector store. Classes: Name DescriptionQAService The question and answer service class. "},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService","title":"QAService","text":" The question and answer service class. The service is initialized with a configuration. Depending on the configuration, the service will create a new in-memory vector store or load an existing one from a file. Parameters: Name Type Description DefaultQAConfig The configuration for the QA service requiredMethods: Name Descriptionadd_file Add a document to the vectorstore. analyze_mult_prompts_query Reads a set of data from a document. analyze_query Read a set of data from a set of documents. db_embedding Converts a question to an embedding. db_query Queries the vector store with a question. get_prompt_config Returns the prompt config for the given mode. query Pass a question to the language model. remove_file Removes a document from the vectorstore. set_prompt_config Sets the prompt config for the given mode. Source code ingerd/qa/qa_service.py "},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService(config)","title":"config ","text":""},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.add_file","title":"add_file","text":" Add a document to the vectorstore. Parameters: Name Type Description DefaultQAFileUpload The file to add to the vectorstore requiredReturns: Type DescriptionQAAnswer an answer object with status 200 if successful Source code ingerd/qa/qa_service.py "},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.add_file(file)","title":"file ","text":""},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.analyze_mult_prompts_query","title":"analyze_mult_prompts_query","text":" Reads a set of data from a document. 
Loads the data via multiple prompts by asking for each data field separately. Data - patient_name - patient_date_of_birth - attending_doctors - recording_date - release_date Returns: Type DescriptionQAAnalyzeAnswer The answer from the language model Source code ingerd/qa/qa_service.py "},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.analyze_query","title":"analyze_query","text":" Read a set of data from a set of documents. Loads the data via a single prompt. Data - patient_name - patient_date_of_birth - attending_doctors - recording_date - release_date Returns: Type DescriptionQAAnalyzeAnswer The answer from the language model Source code ingerd/qa/qa_service.py "},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.db_embedding","title":"db_embedding","text":" Converts a question to an embedding. The embedding to be used is defined by the vector store or more specifically by the configured parameters passed to initialize the vector store. Parameters: Name Type Description DefaultQAQuestion The question to convert to an embedding. requiredReturns: Type DescriptionList[float] The embedding of the question Source code ingerd/qa/qa_service.py "},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.db_embedding(question)","title":"question ","text":""},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.db_query","title":"db_query","text":" Queries the vector store with a question. The number of sources that are returned is defined by the max_sources parameter of the service's configuration. Parameters: Name Type Description DefaultQAQuestion The question to query the vector store with. 
requiredReturns: Type DescriptionList[DocumentSource] A list of document sources Source code ingerd/qa/qa_service.py "},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.db_query(question)","title":"question ","text":""},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.get_prompt_config","title":"get_prompt_config","text":" Returns the prompt config for the given mode. Parameters: Name Type Description DefaultQAModesEnum The mode to get the prompt config for requiredReturns: Type DescriptionPromptConfig The prompt config for the given mode Source code ingerd/qa/qa_service.py "},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.get_prompt_config(qa_mode)","title":"qa_mode ","text":""},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.query","title":"query","text":" Pass a question to the language model. The language model will generate an answer based on the question and the context derived from the vector store. Parameters: Name Type Description DefaultQAQuestion The question to be answered requiredReturns: Type DescriptionQAAnswer The answer from the language model Source code ingerd/qa/qa_service.py "},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.query(question)","title":"question ","text":""},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.remove_file","title":"remove_file","text":" Removes a document from the vectorstore. Parameters: Name Type Description Defaultstr The name of the file to remove requiredReturns: Type DescriptionQAAnswer an answer object with status 200 if successful Source code ingerd/qa/qa_service.py "},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.remove_file(file_name)","title":"file_name ","text":""},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.set_prompt_config","title":"set_prompt_config","text":" Sets the prompt config for the given mode. 
Parameters: Name Type Description DefaultPromptConfig The prompt config to set requiredQAModesEnum The mode to set the prompt config for requiredReturns: Type DescriptionQAAnswer an answer object with status 200 if successful Source code ingerd/qa/qa_service.py "},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.set_prompt_config(config)","title":"config ","text":""},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.set_prompt_config(qa_mode)","title":"qa_mode ","text":""},{"location":"reference/gerd/rag/","title":"gerd.rag","text":""},{"location":"reference/gerd/rag/#gerd.rag","title":"gerd.rag","text":"Retrieval-Augmented Generation (RAG) backend. This module provides the RAG backend for the GERD system which is currently based on FAISS. Classes: Name DescriptionRag The RAG backend for GERD. Functions: Name Descriptioncreate_faiss Create a new FAISS store from a list of documents. load_faiss Load a FAISS store from a disk path. "},{"location":"reference/gerd/rag/#gerd.rag.Rag","title":"Rag","text":" The RAG backend for GERD. The RAG backend will check for a context parameter in the prompt. If the context parameter is not included, a warning will be logged. Without the context parameter, no context will be added to the query. Parameters: Name Type Description DefaultLLM The LLM model to use requiredModelConfig The model configuration requiredPromptConfig The prompt configuration requiredFAISS The FAISS store to use requiredbool Whether to return the source documents requiredMethods: Name Descriptionquery Query the RAG backend with a question. 
Source code ingerd/rag.py "},{"location":"reference/gerd/rag/#gerd.rag.Rag(model)","title":"model ","text":""},{"location":"reference/gerd/rag/#gerd.rag.Rag(model_config)","title":"model_config ","text":""},{"location":"reference/gerd/rag/#gerd.rag.Rag(prompt)","title":"prompt ","text":""},{"location":"reference/gerd/rag/#gerd.rag.Rag(store)","title":"store ","text":""},{"location":"reference/gerd/rag/#gerd.rag.Rag(return_source)","title":"return_source ","text":""},{"location":"reference/gerd/rag/#gerd.rag.Rag.query","title":"query","text":" Query the RAG backend with a question. Parameters: Name Type Description DefaultQAQuestion The question to ask requiredReturns: Type DescriptionQAAnswer The answer to the question including the sources Source code ingerd/rag.py "},{"location":"reference/gerd/rag/#gerd.rag.Rag.query(question)","title":"question ","text":""},{"location":"reference/gerd/rag/#gerd.rag.create_faiss","title":"create_faiss","text":" Create a new FAISS store from a list of documents. Parameters: Name Type Description Defaultlist[Document] The list of documents to index requiredstr The name of the Hugging Face model to use for the embeddings requiredstr The device to use for the model requiredReturns: Type DescriptionFAISS The newly created FAISS store Source code ingerd/rag.py "},{"location":"reference/gerd/rag/#gerd.rag.create_faiss(documents)","title":"documents ","text":""},{"location":"reference/gerd/rag/#gerd.rag.create_faiss(model_name)","title":"model_name ","text":""},{"location":"reference/gerd/rag/#gerd.rag.create_faiss(device)","title":"device ","text":""},{"location":"reference/gerd/rag/#gerd.rag.load_faiss","title":"load_faiss","text":" Load a FAISS store from a disk path. 
Parameters: Name Type Description DefaultPath The disk path to load the store from requiredstr The name of the Hugging Face model to use for the embeddings requiredstr The device to use for the model requiredReturns: Type DescriptionFAISS The loaded FAISS store Source code ingerd/rag.py "},{"location":"reference/gerd/rag/#gerd.rag.load_faiss(dp_path)","title":"dp_path ","text":""},{"location":"reference/gerd/rag/#gerd.rag.load_faiss(model_name)","title":"model_name ","text":""},{"location":"reference/gerd/rag/#gerd.rag.load_faiss(device)","title":"device ","text":""},{"location":"reference/gerd/training/","title":"gerd.training","text":""},{"location":"reference/gerd/training/#gerd.training","title":"gerd.training","text":"Collections of training routines for GERD. Modules: Name Descriptiondata Data utilities for training and data processing. instruct Training module for instruction text sets. lora Configuration dataclasses for training LoRA models. trainer Training module for LoRA models. unstructured Training of LoRA models on unstructured text data. "},{"location":"reference/gerd/training/data/","title":"gerd.training.data","text":""},{"location":"reference/gerd/training/data/#gerd.training.data","title":"gerd.training.data","text":"Data utilities for training and data processing. Functions: Name Descriptiondespacyfy Removes spacy-specific tokens from a text. encode Encodes a text using a tokenizer. split_chunks Splits a list of encoded tokens into chunks of a given size. tokenize Converts a prompt into a tokenized input for a model. "},{"location":"reference/gerd/training/data/#gerd.training.data.despacyfy","title":"despacyfy","text":" Removes spacy-specific tokens from a text. For instance, -RRB- is replaced with ')', -LRB- with '(' and -UNK- with '*'. Parameters: Name Type Description Defaultstr The text to despacyfy. 
requiredReturns: Type Descriptionstr The despacyfied text Source code ingerd/training/data.py "},{"location":"reference/gerd/training/data/#gerd.training.data.despacyfy(text)","title":"text ","text":""},{"location":"reference/gerd/training/data/#gerd.training.data.encode","title":"encode","text":" Encodes a text using a tokenizer. Parameters: Name Type Description Defaultstr The text to encode requiredbool Whether to add the beginning of sentence token requiredPreTrainedTokenizer The tokenizer to use requiredint The maximum length of the encoded text requiredReturns: Type DescriptionList[int] The text encoded as a list of tokenizer tokens Source code ingerd/training/data.py "},{"location":"reference/gerd/training/data/#gerd.training.data.encode(text)","title":"text ","text":""},{"location":"reference/gerd/training/data/#gerd.training.data.encode(add_bos_token)","title":"add_bos_token ","text":""},{"location":"reference/gerd/training/data/#gerd.training.data.encode(tokenizer)","title":"tokenizer ","text":""},{"location":"reference/gerd/training/data/#gerd.training.data.encode(cutoff_len)","title":"cutoff_len ","text":""},{"location":"reference/gerd/training/data/#gerd.training.data.split_chunks","title":"split_chunks","text":" Splits a list of encoded tokens into chunks of a given size. Parameters: Name Type Description DefaultList[int] The list of encoded tokens. requiredint The size of the chunks. requiredint The step size for the chunks. 
requiredReturns: Type DescriptionNone A generator that yields the chunks Source code ingerd/training/data.py "},{"location":"reference/gerd/training/data/#gerd.training.data.split_chunks(arr)","title":"arr ","text":""},{"location":"reference/gerd/training/data/#gerd.training.data.split_chunks(size)","title":"size ","text":""},{"location":"reference/gerd/training/data/#gerd.training.data.split_chunks(step)","title":"step ","text":""},{"location":"reference/gerd/training/data/#gerd.training.data.tokenize","title":"tokenize","text":" Converts a prompt into a tokenized input for a model. The method returns the tokenized input as a dictionary with the keys \"input_ids\", \"labels\" and \"attention_mask\" where the input_ids are the tokenized input, the labels assign the same label ('1') to each token and the attention_mask masks out the padding tokens. Parameters: prompt: The prompt to tokenize tokenizer: The tokenizer to use cutoff_len: The maximum length of the encoded text append_eos_token: Whether to append an end of sentence token Returns: Type DescriptionDict[str, Tensor | list[int]] The tokenized input as a dictionary Source code ingerd/training/data.py "},{"location":"reference/gerd/training/instruct/","title":"gerd.training.instruct","text":""},{"location":"reference/gerd/training/instruct/#gerd.training.instruct","title":"gerd.training.instruct","text":"Training module for instruction text sets. In contrast to the Classes: Name DescriptionInstructTrainingData Dataclass to hold training data for instruction text sets. InstructTrainingSample Dataclass to hold a training sample for instruction text sets. Functions: Name Descriptiontrain_lora Train a LoRA model on instruction text sets. "},{"location":"reference/gerd/training/instruct/#gerd.training.instruct.InstructTrainingData","title":"InstructTrainingData","text":" Bases: Dataclass to hold training data for instruction text sets. A training data object consists of a list of training samples. 
Attributes: Name Type Descriptionsamples list[InstructTrainingSample] The list of training samples. "},{"location":"reference/gerd/training/instruct/#gerd.training.instruct.InstructTrainingData.samples","title":"samplesclass-attribute instance-attribute ","text":" The list of training samples. "},{"location":"reference/gerd/training/instruct/#gerd.training.instruct.InstructTrainingSample","title":"InstructTrainingSample","text":" Bases: Dataclass to hold a training sample for instruction text sets. A training sample consists of a list of chat messages. Attributes: Name Type Descriptionmessages list[ChatMessage] The list of chat messages. "},{"location":"reference/gerd/training/instruct/#gerd.training.instruct.InstructTrainingSample.messages","title":"messagesinstance-attribute ","text":" The list of chat messages. "},{"location":"reference/gerd/training/instruct/#gerd.training.instruct.train_lora","title":"train_lora","text":" Train a LoRA model on instruction text sets. Parameters: Name Type Description Defaultstr | LoraTrainingConfig The configuration name or the configuration itself requiredInstructTrainingData | None The training data to train on, if None, the input_glob from the config is used None Returns: Type DescriptionTrainer The trainer instance that is used for training Source code ingerd/training/instruct.py "},{"location":"reference/gerd/training/instruct/#gerd.training.instruct.train_lora(config)","title":"config ","text":""},{"location":"reference/gerd/training/instruct/#gerd.training.instruct.train_lora(data)","title":"data ","text":""},{"location":"reference/gerd/training/lora/","title":"gerd.training.lora","text":""},{"location":"reference/gerd/training/lora/#gerd.training.lora","title":"gerd.training.lora","text":"Configuration dataclasses for training LoRA models. Classes: Name DescriptionLLMModelProto Protocol for the LoRA model. LoraModules Configuration for the modules to be trained in LoRA models. 
LoraTrainingConfig Configuration for training LoRA models. TrainingFlags Training flags for LoRA models. Functions: Name Descriptionload_training_config Load the LLM model configuration. "},{"location":"reference/gerd/training/lora/#gerd.training.lora.LLMModelProto","title":"LLMModelProto","text":" Bases: Protocol for the LoRA model. A model needs to implement the named_modules method for it to be used in LoRA training. Methods: Name Descriptionnamed_modules Get the named modules of the model. "},{"location":"reference/gerd/training/lora/#gerd.training.lora.LLMModelProto.named_modules","title":"named_modules","text":" Get the named modules of the model. Returns: Type Descriptionlist[tuple[str, Module]] The named modules. Source code ingerd/training/lora.py "},{"location":"reference/gerd/training/lora/#gerd.training.lora.LoraModules","title":"LoraModules","text":" Bases: Configuration for the modules to be trained in LoRA models. Methods: Name Descriptiontarget_modules Get the target modules for the given model. "},{"location":"reference/gerd/training/lora/#gerd.training.lora.LoraModules.target_modules","title":"target_modules","text":" Get the target modules for the given model. Parameters: Name Type Description DefaultLLMModelProto The model to be trained. requiredReturns: Type DescriptionList[str] The list of target modules Source code ingerd/training/lora.py "},{"location":"reference/gerd/training/lora/#gerd.training.lora.LoraModules.target_modules(model)","title":"model ","text":""},{"location":"reference/gerd/training/lora/#gerd.training.lora.LoraTrainingConfig","title":"LoraTrainingConfig","text":" Bases: Configuration for training LoRA models. Methods: Name Descriptionmodel_post_init Post-initialization hook for the model. reset_tokenizer Resets the tokenizer. settings_customise_sources Customize the settings sources used by pydantic-settings. Attributes: Name Type Descriptiontokenizer PreTrainedTokenizer Get the tokenizer for the model. 
"},{"location":"reference/gerd/training/lora/#gerd.training.lora.LoraTrainingConfig.tokenizer","title":"tokenizerproperty ","text":" Get the tokenizer for the model. "},{"location":"reference/gerd/training/lora/#gerd.training.lora.LoraTrainingConfig.model_post_init","title":"model_post_init","text":" Post-initialization hook for the model. This method currently checks whether cutoff is larger than overlap. Source code ingerd/training/lora.py "},{"location":"reference/gerd/training/lora/#gerd.training.lora.LoraTrainingConfig.reset_tokenizer","title":"reset_tokenizer","text":" Resets the tokenizer. When a tokenizer has been used it needs to be reset before changig parameters to avoid issues with parallelism. Source code ingerd/training/lora.py "},{"location":"reference/gerd/training/lora/#gerd.training.lora.LoraTrainingConfig.settings_customise_sources","title":"settings_customise_sources classmethod ","text":" Customize the settings sources used by pydantic-settings. The order of the sources is important. The first source has the highest priority. Parameters: Name Type Description DefaultThe class of the settings. requiredPydanticBaseSettingsSource The settings from the initialization. requiredPydanticBaseSettingsSource The settings from the environment. requiredPydanticBaseSettingsSource The settings from the dotenv file. requiredPydanticBaseSettingsSource The settings from the secret file. requiredReturns: Type Descriptiontuple[PydanticBaseSettingsSource, ...] The customized settings sources. 
Source code ingerd/training/lora.py "},{"location":"reference/gerd/training/lora/#gerd.training.lora.LoraTrainingConfig.settings_customise_sources(cls)","title":"cls ","text":""},{"location":"reference/gerd/training/lora/#gerd.training.lora.LoraTrainingConfig.settings_customise_sources(init_settings)","title":"init_settings ","text":""},{"location":"reference/gerd/training/lora/#gerd.training.lora.LoraTrainingConfig.settings_customise_sources(env_settings)","title":"env_settings ","text":""},{"location":"reference/gerd/training/lora/#gerd.training.lora.LoraTrainingConfig.settings_customise_sources(dotenv_settings)","title":"dotenv_settings ","text":""},{"location":"reference/gerd/training/lora/#gerd.training.lora.LoraTrainingConfig.settings_customise_sources(file_secret_settings)","title":"file_secret_settings ","text":""},{"location":"reference/gerd/training/lora/#gerd.training.lora.TrainingFlags","title":"TrainingFlags","text":" Bases: Training flags for LoRA models. "},{"location":"reference/gerd/training/lora/#gerd.training.lora.load_training_config","title":"load_training_config","text":" Load the LLM model configuration. Parameters: Name Type Description Defaultstr The name of the configuration. requiredReturns: Type DescriptionLoraTrainingConfig The model configuration. Source code ingerd/training/lora.py "},{"location":"reference/gerd/training/lora/#gerd.training.lora.load_training_config(config)","title":"config ","text":""},{"location":"reference/gerd/training/trainer/","title":"gerd.training.trainer","text":""},{"location":"reference/gerd/training/trainer/#gerd.training.trainer","title":"gerd.training.trainer","text":"Training module for LoRA models. Can be used to train LoRA models on structured or unstructured data. Classes: Name DescriptionCallbacks Custom callbacks for the LoRA training. Tracked Dataclass to track the training progress. Trainer The LoRA trainer class. 
"},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks","title":"Callbacks","text":" Bases: Custom callbacks for the LoRA training. Initialize the callbacks based on tracking data config. Parameters: Name Type Description DefaultTracked The tracking data requiredMethods: Name Descriptionon_log Callback to log the training progress. on_save Saves the training log when the model is saved. on_step_begin Update the training progress. on_substep_end Update the training progress and check for interruption. Source code ingerd/training/trainer.py "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks(tracked)","title":"tracked ","text":""},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks.on_log","title":"on_log","text":" Callback to log the training progress. Parameters: Name Type Description DefaultTrainingArguments The training arguments (not used) requiredTrainerState The trainer state (not used) requiredTrainerControl The trainer control requiredDict The training logs required Source code ingerd/training/trainer.py "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks.on_log(_args)","title":"_args ","text":""},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks.on_log(_state)","title":"_state ","text":""},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks.on_log(control)","title":"control ","text":""},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks.on_log(logs)","title":"logs ","text":""},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks.on_save","title":"on_save","text":" Saves the training log when the model is saved. Source code ingerd/training/trainer.py "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks.on_step_begin","title":"on_step_begin","text":" Update the training progress. 
This callback updates the current training steps and checks if the training was interrupted. Parameters: Name Type Description DefaultTrainingArguments The training arguments (not used) requiredTrainerState The trainer state requiredTrainerControl The trainer control required Source code ingerd/training/trainer.py "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks.on_step_begin(_args)","title":"_args ","text":""},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks.on_step_begin(state)","title":"state ","text":""},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks.on_step_begin(control)","title":"control ","text":""},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks.on_substep_end","title":"on_substep_end","text":" Update the training progress and check for interruption. Parameters: Name Type Description DefaultTrainingArguments The training arguments (not used) requiredTrainerState The trainer state (not used) requiredTrainerControl The trainer control required Source code ingerd/training/trainer.py "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks.on_substep_end(_args)","title":"_args ","text":""},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks.on_substep_end(_state)","title":"_state ","text":""},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks.on_substep_end(control)","title":"control ","text":""},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Tracked","title":"Tracked dataclass ","text":" Dataclass to track the training progress. Attributes: Name Type Descriptionconfig LoraTrainingConfig The training configuration. current_steps int The current training steps. did_save bool Whether the model was saved. interrupted bool Whether the training was interrupted. lora_model PeftModel The training model. 
max_steps int The maximum number of training steps. train_log Dict The training log. "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Tracked.config","title":"configinstance-attribute ","text":" The training configuration. "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Tracked.current_steps","title":"current_stepsclass-attribute instance-attribute ","text":" The current training steps. "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Tracked.did_save","title":"did_saveclass-attribute instance-attribute ","text":" Whether the model was saved. "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Tracked.interrupted","title":"interruptedclass-attribute instance-attribute ","text":" Whether the training was interrupted. "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Tracked.lora_model","title":"lora_modelinstance-attribute ","text":" The training model. "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Tracked.max_steps","title":"max_stepsclass-attribute instance-attribute ","text":" The maximum number of training steps. "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Tracked.train_log","title":"train_logclass-attribute instance-attribute ","text":" The training log. "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Trainer","title":"Trainer","text":" The LoRA trainer class. This class is used to train LoRA models on structured or unstructured data. Since the training process is asynchronous, the trainer can be used to track or interrupt the training process. The LoRA trainer requires a configuration and an optional list of callbacks. If no callbacks are provided, the default Callbacks class Methods: Name Descriptioninterrupt Interrupt the training process. save Save the model and log files to the path set in the trainer configuration. 
setup_training Set up the training process and initialize the transformer trainer. train Start the training process. Source code ingerd/training/trainer.py "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Trainer.interrupt","title":"interrupt","text":" Interrupt the training process. Source code ingerd/training/trainer.py "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Trainer.save","title":"save","text":" Save the model and log files to the path set in the trainer configuration. When the gerd/training/trainer.py "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Trainer.setup_training","title":"setup_training","text":" Set up the training process and initialize the transformer trainer. Parameters: Name Type Description DefaultDataset The training data requiredDict The training template requiredbool Whether to use torch compile False Source code in gerd/training/trainer.py "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Trainer.setup_training(train_data)","title":"train_data ","text":""},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Trainer.setup_training(train_template)","title":"train_template ","text":""},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Trainer.setup_training(torch_compile)","title":"torch_compile ","text":""},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Trainer.train","title":"train","text":" Start the training process. Returns: Type DescriptionThread The training thread Source code ingerd/training/trainer.py "},{"location":"reference/gerd/training/unstructured/","title":"gerd.training.unstructured","text":""},{"location":"reference/gerd/training/unstructured/#gerd.training.unstructured","title":"gerd.training.unstructured","text":"Training of LoRA models on unstructured text data. This module provides functions to train LoRA models to 'imitate' the style of a given text corpus. 
Functions: Name Descriptiontrain_lora Train a LoRA model on unstructured text data. "},{"location":"reference/gerd/training/unstructured/#gerd.training.unstructured.train_lora","title":"train_lora","text":" Train a LoRA model on unstructured text data. Parameters: Name Type Description Defaultstr | LoraTrainingConfig The configuration name or the configuration itself requiredlist[str] | None The list of texts to train on, if None, the input_glob from the config is used None Returns: Type DescriptionTrainer The trainer instance that is used for training Source code ingerd/training/unstructured.py "},{"location":"reference/gerd/training/unstructured/#gerd.training.unstructured.train_lora(config)","title":"config ","text":""},{"location":"reference/gerd/training/unstructured/#gerd.training.unstructured.train_lora(texts)","title":"texts ","text":""},{"location":"reference/gerd/transport/","title":"gerd.transport","text":""},{"location":"reference/gerd/transport/#gerd.transport","title":"gerd.transport","text":"Module to define the transport protocol. The transport protocol is used to connect the backend and frontend services. Implementations of the transport protocol can be found in the Classes: Name DescriptionDocumentSource Dataclass to hold a document source. FileTypes Enum to hold all supported file types. GenResponse Dataclass to hold a response from the generation service. QAAnalyzeAnswer Dataclass to hold an answer from the predefined queries to the QA service. QAAnswer Dataclass to hold an answer from the QA service. QAFileUpload Dataclass to hold a file upload. QAModesEnum Enum to hold all supported QA modes. QAPromptConfig Prompt configuration for the QA service. QAQuestion Dataclass to hold a question for the QA service. Transport Transport protocol to connect backend and frontend services. "},{"location":"reference/gerd/transport/#gerd.transport.DocumentSource","title":"DocumentSource","text":" Bases: Dataclass to hold a document source. 
Attributes: Name Type Descriptioncontent str The content of the document. name str The name of the document. page int The page of the document. query str The query that was used to find the document. "},{"location":"reference/gerd/transport/#gerd.transport.DocumentSource.content","title":"contentinstance-attribute ","text":" The content of the document. "},{"location":"reference/gerd/transport/#gerd.transport.DocumentSource.name","title":"nameinstance-attribute ","text":" The name of the document. "},{"location":"reference/gerd/transport/#gerd.transport.DocumentSource.page","title":"pageinstance-attribute ","text":" The page of the document. "},{"location":"reference/gerd/transport/#gerd.transport.DocumentSource.query","title":"queryinstance-attribute ","text":" The query that was used to find the document. "},{"location":"reference/gerd/transport/#gerd.transport.FileTypes","title":"FileTypes","text":" Bases: Enum to hold all supported file types. Attributes: Name Type DescriptionPDF PDF file type. TEXT Text file type. "},{"location":"reference/gerd/transport/#gerd.transport.FileTypes.PDF","title":"PDFclass-attribute instance-attribute ","text":" PDF file type. "},{"location":"reference/gerd/transport/#gerd.transport.FileTypes.TEXT","title":"TEXTclass-attribute instance-attribute ","text":" Text file type. "},{"location":"reference/gerd/transport/#gerd.transport.GenResponse","title":"GenResponse","text":" Bases: Dataclass to hold a response from the generation service. Attributes: Name Type Descriptionerror_msg str The error message if the status code is not 200. prompt str | None The custom prompt that was used to generate the text. status int The status code of the response. text str The generated text if the status code is 200. "},{"location":"reference/gerd/transport/#gerd.transport.GenResponse.error_msg","title":"error_msgclass-attribute instance-attribute ","text":" The error message if the status code is not 200. 
"},{"location":"reference/gerd/transport/#gerd.transport.GenResponse.prompt","title":"promptclass-attribute instance-attribute ","text":" The custom prompt that was used to generate the text. "},{"location":"reference/gerd/transport/#gerd.transport.GenResponse.status","title":"statusclass-attribute instance-attribute ","text":" The status code of the response. "},{"location":"reference/gerd/transport/#gerd.transport.GenResponse.text","title":"textclass-attribute instance-attribute ","text":" The generated text if the status code is 200. "},{"location":"reference/gerd/transport/#gerd.transport.QAAnalyzeAnswer","title":"QAAnalyzeAnswer","text":" Bases: Dataclass to hold an answer from the predefined queries to the QA service. Attributes: Name Type Descriptionerror_msg str The error message of the answer if the status code is not 200. status int The status code of the answer. "},{"location":"reference/gerd/transport/#gerd.transport.QAAnalyzeAnswer.error_msg","title":"error_msgclass-attribute instance-attribute ","text":" The error message of the answer if the status code is not 200. "},{"location":"reference/gerd/transport/#gerd.transport.QAAnalyzeAnswer.status","title":"statusclass-attribute instance-attribute ","text":" The status code of the answer. "},{"location":"reference/gerd/transport/#gerd.transport.QAAnswer","title":"QAAnswer","text":" Bases: Dataclass to hold an answer from the QA service. Attributes: Name Type Descriptionerror_msg str The error message of the answer if the status code is not 200. response str The response of the answer. sources List[DocumentSource] The sources of the answer. status int The status code of the answer. "},{"location":"reference/gerd/transport/#gerd.transport.QAAnswer.error_msg","title":"error_msgclass-attribute instance-attribute ","text":" The error message of the answer if the status code is not 200. 
"},{"location":"reference/gerd/transport/#gerd.transport.QAAnswer.response","title":"responseclass-attribute instance-attribute ","text":" The response of the answer. "},{"location":"reference/gerd/transport/#gerd.transport.QAAnswer.sources","title":"sourcesclass-attribute instance-attribute ","text":" The sources of the answer. "},{"location":"reference/gerd/transport/#gerd.transport.QAAnswer.status","title":"statusclass-attribute instance-attribute ","text":" The status code of the answer. "},{"location":"reference/gerd/transport/#gerd.transport.QAFileUpload","title":"QAFileUpload","text":" Bases: Dataclass to hold a file upload. Attributes: Name Type Descriptiondata bytes The file data. name str The name of the file. "},{"location":"reference/gerd/transport/#gerd.transport.QAFileUpload.data","title":"datainstance-attribute ","text":" The file data. "},{"location":"reference/gerd/transport/#gerd.transport.QAFileUpload.name","title":"nameinstance-attribute ","text":" The name of the file. "},{"location":"reference/gerd/transport/#gerd.transport.QAModesEnum","title":"QAModesEnum","text":" Bases: Enum to hold all supported QA modes. Attributes: Name Type DescriptionANALYZE Analyze mode. ANALYZE_MULT_PROMPTS Analyze multiple prompts mode. NONE No mode. SEARCH Search mode. "},{"location":"reference/gerd/transport/#gerd.transport.QAModesEnum.ANALYZE","title":"ANALYZEclass-attribute instance-attribute ","text":" Analyze mode. "},{"location":"reference/gerd/transport/#gerd.transport.QAModesEnum.ANALYZE_MULT_PROMPTS","title":"ANALYZE_MULT_PROMPTSclass-attribute instance-attribute ","text":" Analyze multiple prompts mode. "},{"location":"reference/gerd/transport/#gerd.transport.QAModesEnum.NONE","title":"NONEclass-attribute instance-attribute ","text":" No mode. "},{"location":"reference/gerd/transport/#gerd.transport.QAModesEnum.SEARCH","title":"SEARCHclass-attribute instance-attribute ","text":" Search mode. 
"},{"location":"reference/gerd/transport/#gerd.transport.QAPromptConfig","title":"QAPromptConfig","text":" Bases: Prompt configuration for the QA service. Attributes: Name Type Descriptionconfig PromptConfig The prompt configuration. mode QAModesEnum The mode to set the prompt configuration for. "},{"location":"reference/gerd/transport/#gerd.transport.QAPromptConfig.config","title":"configinstance-attribute ","text":" The prompt configuration. "},{"location":"reference/gerd/transport/#gerd.transport.QAPromptConfig.mode","title":"modeinstance-attribute ","text":" The mode to set the prompt configuration for. "},{"location":"reference/gerd/transport/#gerd.transport.QAQuestion","title":"QAQuestion","text":" Bases: Dataclass to hold a question for the QA service. Attributes: Name Type Descriptionmax_sources int The maximum number of sources to return. question str The question to ask the QA service. search_strategy str The search strategy to use. "},{"location":"reference/gerd/transport/#gerd.transport.QAQuestion.max_sources","title":"max_sourcesclass-attribute instance-attribute ","text":" The maximum number of sources to return. "},{"location":"reference/gerd/transport/#gerd.transport.QAQuestion.question","title":"questioninstance-attribute ","text":" The question to ask the QA service. "},{"location":"reference/gerd/transport/#gerd.transport.QAQuestion.search_strategy","title":"search_strategyclass-attribute instance-attribute ","text":" The search strategy to use. "},{"location":"reference/gerd/transport/#gerd.transport.Transport","title":"Transport","text":" Bases: Transport protocol to connect backend and frontend services. Transport should be implemented by a class that provides the necessary methods to interact with the backend. Methods: Name Descriptionadd_file Add a file to the vector store. analyze_mult_prompts_query Queries the vector store with a set of predefined queries. analyze_query Queries the vector store with a predefined query. 
db_embedding Converts a question to an embedding. db_query Queries the vector store with a question. generate Generates text with the generation service. get_gen_prompt Gets the prompt configuration for the generation service. get_qa_prompt Gets the prompt configuration for a mode of the QA service. qa_query Query the QA service with a question. remove_file Remove a file from the vector store. set_gen_prompt Sets the prompt configuration for the generation service. set_qa_prompt Sets the prompt configuration for the QA service. "},{"location":"reference/gerd/transport/#gerd.transport.Transport.add_file","title":"add_file","text":" Add a file to the vector store. The returned answer has a status code of 200 if the file was added successfully. Parameters: file: The file to add to the vector store. Returns: Type DescriptionQAAnswer The answer from the QA service Source code ingerd/transport.py "},{"location":"reference/gerd/transport/#gerd.transport.Transport.analyze_mult_prompts_query","title":"analyze_mult_prompts_query","text":" Queries the vector store with a set of predefined queries. In contrast to Returns: Type DescriptionQAAnalyzeAnswer The answer from the QA service. Source code ingerd/transport.py "},{"location":"reference/gerd/transport/#gerd.transport.Transport.analyze_query","title":"analyze_query","text":" Queries the vector store with a predefined query. The query should return vital information gathered from letters of discharge. Returns: Type DescriptionQAAnalyzeAnswer The answer from the QA service. Source code ingerd/transport.py "},{"location":"reference/gerd/transport/#gerd.transport.Transport.db_embedding","title":"db_embedding","text":" Converts a question to an embedding. The embedding is defined by the vector store. Parameters: Name Type Description DefaultQAQuestion The question to convert to an embedding. 
requiredReturns: Type DescriptionList[float] The embedding of the question Source code ingerd/transport.py "},{"location":"reference/gerd/transport/#gerd.transport.Transport.db_embedding(question)","title":"question ","text":""},{"location":"reference/gerd/transport/#gerd.transport.Transport.db_query","title":"db_query","text":" Queries the vector store with a question. Parameters: Name Type Description DefaultQAQuestion The question to query the vector store with. requiredReturns: Type DescriptionList[DocumentSource] A list of document sources Source code ingerd/transport.py "},{"location":"reference/gerd/transport/#gerd.transport.Transport.db_query(question)","title":"question ","text":""},{"location":"reference/gerd/transport/#gerd.transport.Transport.generate","title":"generate","text":" Generates text with the generation service. Parameters: Name Type Description DefaultDict[str, str] The parameters to generate text with requiredReturns: Type DescriptionGenResponse The generation result Source code ingerd/transport.py "},{"location":"reference/gerd/transport/#gerd.transport.Transport.generate(parameters)","title":"parameters ","text":""},{"location":"reference/gerd/transport/#gerd.transport.Transport.get_gen_prompt","title":"get_gen_prompt","text":" Gets the prompt configuration for the generation service. Returns: Type DescriptionPromptConfig The current prompt configuration Source code ingerd/transport.py "},{"location":"reference/gerd/transport/#gerd.transport.Transport.get_qa_prompt","title":"get_qa_prompt","text":" Gets the prompt configuration for a mode of the QA service. 
Parameters: Name Type Description DefaultQAModesEnum The mode to get the prompt configuration for requiredReturns: Type DescriptionPromptConfig The prompt configuration for the QA service Source code ingerd/transport.py "},{"location":"reference/gerd/transport/#gerd.transport.Transport.get_qa_prompt(qa_mode)","title":"qa_mode ","text":""},{"location":"reference/gerd/transport/#gerd.transport.Transport.qa_query","title":"qa_query","text":" Query the QA service with a question. Parameters: Name Type Description DefaultQAQuestion The question to query the QA service with. requiredReturns: Type DescriptionQAAnswer The answer from the QA service. Source code ingerd/transport.py "},{"location":"reference/gerd/transport/#gerd.transport.Transport.qa_query(query)","title":"query ","text":""},{"location":"reference/gerd/transport/#gerd.transport.Transport.remove_file","title":"remove_file","text":" Remove a file from the vector store. The returned answer has a status code of 200 if the file was removed successfully. Parameters: file_name: The name of the file to remove from the vector store. Returns: Type DescriptionQAAnswer The answer from the QA service Source code ingerd/transport.py "},{"location":"reference/gerd/transport/#gerd.transport.Transport.set_gen_prompt","title":"set_gen_prompt","text":" Sets the prompt configuration for the generation service. The prompt configuration that is returned should in most cases be the same as the one that was set. Parameters: config: The prompt configuration to set Returns: Type DescriptionPromptConfig The prompt configuration that was set Source code ingerd/transport.py "},{"location":"reference/gerd/transport/#gerd.transport.Transport.set_qa_prompt","title":"set_qa_prompt","text":" Sets the prompt configuration for the QA service. Since the QA service uses multiple prompt configurations, the mode should be specified. 
For more details, see the documentation of Parameters: Name Type Description DefaultPromptConfig The prompt configuration to set requiredQAModesEnum The mode to set the prompt configuration for requiredReturns: Type DescriptionQAAnswer The answer from the QA service Source code ingerd/transport.py "},{"location":"reference/gerd/transport/#gerd.transport.Transport.set_qa_prompt(config)","title":"config ","text":""},{"location":"reference/gerd/transport/#gerd.transport.Transport.set_qa_prompt(qa_mode)","title":"qa_mode ","text":""}]}
\ No newline at end of file
+{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Generating and evaluating relevant documentation","text":"GERD is developed as an experimental library to investigate how large language models (LLMs) can be used to generate and analyze (sets of) documents. This project was initially forked from Llama-2-Open-Source-LLM-CPU-Inference by Kenneth Leung. "},{"location":"#quickstart","title":"Quickstart","text":"If you just want to it try out, you can clone the project and install dependencies with Source: examples/hello.py If you want to try this out in your browser, head over to binder \ud83d\udc49 . Note that running LLMs on the CPU (and especially on limited virtual machines like binder) takes some time. "},{"location":"#question-and-answer-example","title":"Question and Answer Example","text":"Follow quickstart but execute Click the 'Click to Upload' button and search for a GRASCCO document named Prompt chaining is a prompt engineering approach to increase the 'reflection' of a large language model onto its given answer. Check Source: examples/chaining.py Config: config/gen_chaining.yml As you see, the answer does not make much sense with the default model which is rather small. Give it a try with meta-llama/Llama-3.2-3B. To use this model, you need to login with the huggingface cli and accept the Meta Community License Agreement. "},{"location":"#full-documentation","title":"Full Documentation","text":"A more detailled documentation can be found here \ud83d\udc49 . "},{"location":"#used-tools","title":"Used Tools","text":"
GERD is primarily a tool for prototyping workflows for working with Large Language Models. It is meant to act as 'glue' between different tools and services and should ease access to these tools. In general, there should only be two components involved in a GERD workflow: a configuration and a service. The configuration can be assembled from different sources and should be able to be used in different services. The foundation of such a configuration is a YAML file. GERD provides a set of those which can be found in the And can be used with a "},{"location":"develop/","title":"Development Guide","text":""},{"location":"develop/#basics","title":"Basics","text":"To get started on development, you need to install uv. You can use Next, install the package and all dependencies with After that, it should be possible to run scripts without further issues: To add a new runtime dependency, just run To add a new development dependency, run "},{"location":"develop/#pre-commit-hooks-recommended","title":"Pre-commit hooks (recommended)","text":"Pre-commit hooks are used to check linting and run tests before committing changes to prevent faulty commits. Thus, it is recommended to use these hooks! Hooks should not include long-running actions (such as tests) since committing should be fast. To install pre-commit hooks, execute this once: "},{"location":"develop/#further-tools","title":"Further tools","text":""},{"location":"develop/#poe-task-runner","title":"Poe Task Runner","text":"Task runner configurations are stored in the "},{"location":"develop/#pytest","title":"PyTest","text":"Test cases are run via pytest. Tests can be found in the More extensive testing can be triggered with "},{"location":"develop/#ruff","title":"Ruff","text":"Ruff is used for linting and code formatting. Ruff follows There is a VSCode extension that handles formatting and linting. "},{"location":"develop/#mypy","title":"MyPy","text":"MyPy does static type checking. It will not be run automatically. 
To run MyPy manually, use uv with the folder to be checked: "},{"location":"develop/#implemented-guis","title":"Implemented GUIs","text":""},{"location":"develop/#run-frontend","title":"Run Frontend","text":"Either run Generate Frontend: or QA Frontend: or the GERD Router: "},{"location":"develop/#cicd-and-distribution","title":"CI/CD and Distribution","text":""},{"location":"develop/#github-actions","title":"GitHub Actions","text":"GitHub Actions can be found under .github/workflows. There is currently one main CI workflow called In its current config, it will only be executed when a PR for This project uses GitHub issue templates. Currently, there are three templates available. "},{"location":"develop/#bug-report","title":"Bug Report","text":" "},{"location":"develop/#feature-request","title":"Feature Request","text":" "},{"location":"develop/#use-case","title":"Use Case","text":" "},{"location":"reference/gerd/","title":"gerd","text":""},{"location":"reference/gerd/#gerd","title":"gerd","text":"Generating and evaluating relevant documentation (GERD). This package provides the GERD system for working with large language models (LLMs). This includes means to generate texts using different backends and frontends. The system is designed to be flexible and extensible to support different use cases. It can also be used for Retrieval Augmented Generation (RAG) tasks or as a chatbot. Modules: Name Descriptionbackends This module contains backend implementations that manage services. config Configuration for the application. features Special features to extend the functionality of GERD services. frontends A collection of several gradio frontends. gen Services and utilities for text generation with LLMs. loader Module for loading language models. models Pydantic model definitions and data classes that are shared across modules. qa Services and utilities for retrieval augmented generation (RAG). rag Retrieval-Augmented Generation (RAG) backend. 
training Collections of training routines for GERD. transport Module to define the transport protocol. "},{"location":"reference/gerd/backends/","title":"gerd.backends","text":""},{"location":"reference/gerd/backends/#gerd.backends","title":"gerd.backends","text":"This module contains backend implementations that manage services. These backends can be used by frontends such as gradio. Furthermore, the backend module contains service implementations for loading LLMs or vector stores for Retrieval Augmented Generation. Modules: Name Descriptionbridge The Bridge connects backend and frontend services directly for local use. rest_client REST client for the GERD server. rest_server REST server as a GERD backend. Attributes: Name Type DescriptionTRANSPORTER Transport The default transporter that connects the backend services to the frontend. "},{"location":"reference/gerd/backends/#gerd.backends.TRANSPORTER","title":"TRANSPORTERmodule-attribute ","text":" The default transporter that connects the backend services to the frontend. "},{"location":"reference/gerd/backends/bridge/","title":"gerd.backends.bridge","text":""},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge","title":"gerd.backends.bridge","text":"The Bridge connects backend and frontend services directly for local use. Classes: Name DescriptionBridge Direct connection between backend services and frontend. "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge","title":"Bridge","text":" Bases: Direct connection between backend services and frontend. Frontends that make use of the The services associated with the bridge are initialized lazily. Methods: Name Descriptionadd_file Add a file to the vector store. analyze_mult_prompts_query Queries the vector store with a set of predefined queries. analyze_query Queries the vector store with a predefined query. db_embedding Converts a question to an embedding. db_query Queries the vector store with a question. 
generate Generates text with the generation service. get_gen_prompt Gets the prompt configuration for the generation service. get_qa_prompt Gets the prompt configuration for a mode of the QA service. qa_query Query the QA service with a question. remove_file Remove a file from the vector store. set_gen_prompt Sets the prompt configuration for the generation service. set_qa_prompt Sets the prompt configuration for the QA service. Attributes: Name Type Descriptiongen GenerationService Get the generation service instance. qa QAService Get the QA service instance. It will be created if it does not exist. Source code ingerd/backends/bridge.py "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.gen","title":"gen property ","text":" Get the generation service instance. It will be created if it does not exist. "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.qa","title":"qaproperty ","text":" Get the QA service instance. It will be created if it does not exist. "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.add_file","title":"add_file","text":" Add a file to the vector store. The returned answer has a status code of 200 if the file was added successfully. Parameters: file: The file to add to the vector store. Returns: Type DescriptionQAAnswer The answer from the QA service Source code ingerd/backends/bridge.py "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.analyze_mult_prompts_query","title":"analyze_mult_prompts_query","text":" Queries the vector store with a set of predefined queries. In contrast to Returns: Type DescriptionQAAnalyzeAnswer The answer from the QA service. Source code ingerd/backends/bridge.py "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.analyze_query","title":"analyze_query","text":" Queries the vector store with a predefined query. The query should return vital information gathered from letters of discharge. 
Returns: Type DescriptionQAAnalyzeAnswer The answer from the QA service. Source code ingerd/backends/bridge.py "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.db_embedding","title":"db_embedding","text":" Converts a question to an embedding. The embedding is defined by the vector store. Parameters: Name Type Description DefaultQAQuestion The question to convert to an embedding. requiredReturns: Type DescriptionList[float] The embedding of the question Source code ingerd/backends/bridge.py "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.db_embedding(question)","title":"question ","text":""},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.db_query","title":"db_query","text":" Queries the vector store with a question. Parameters: Name Type Description DefaultQAQuestion The question to query the vector store with. requiredReturns: Type DescriptionList[DocumentSource] A list of document sources Source code ingerd/backends/bridge.py "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.db_query(question)","title":"question ","text":""},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.generate","title":"generate","text":" Generates text with the generation service. Parameters: Name Type Description DefaultDict[str, str] The parameters to generate text with requiredReturns: Type DescriptionGenResponse The generation result Source code ingerd/backends/bridge.py "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.generate(parameters)","title":"parameters ","text":""},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.get_gen_prompt","title":"get_gen_prompt","text":" Gets the prompt configuration for the generation service. 
Returns: Type DescriptionPromptConfig The current prompt configuration Source code ingerd/backends/bridge.py "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.get_qa_prompt","title":"get_qa_prompt","text":" Gets the prompt configuration for a mode of the QA service. Parameters: Name Type Description DefaultQAModesEnum The mode to get the prompt configuration for requiredReturns: Type DescriptionPromptConfig The prompt configuration for the QA service Source code ingerd/backends/bridge.py "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.get_qa_prompt(qa_mode)","title":"qa_mode ","text":""},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.qa_query","title":"qa_query","text":" Query the QA service with a question. Parameters: Name Type Description DefaultQAQuestion The question to query the QA service with. requiredReturns: Type DescriptionQAAnswer The answer from the QA service. Source code ingerd/backends/bridge.py "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.qa_query(query)","title":"query ","text":""},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.remove_file","title":"remove_file","text":" Remove a file from the vector store. The returned answer has a status code of 200 if the file was removed successfully. Parameters: file_name: The name of the file to remove from the vector store. Returns: Type DescriptionQAAnswer The answer from the QA service Source code ingerd/backends/bridge.py "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.set_gen_prompt","title":"set_gen_prompt","text":" Sets the prompt configuration for the generation service. The prompt configuration that is returned should in most cases be the same as the one that was set. 
Parameters: config: The prompt configuration to set Returns: Type DescriptionPromptConfig The prompt configuration that was set Source code ingerd/backends/bridge.py "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.set_qa_prompt","title":"set_qa_prompt","text":" Sets the prompt configuration for the QA service. Since the QA service uses multiple prompt configurations, the mode should be specified. For more details, see the documentation of Parameters: Name Type Description DefaultPromptConfig The prompt configuration to set requiredQAModesEnum The mode to set the prompt configuration for requiredReturns: Type DescriptionQAAnswer The answer from the QA service Source code ingerd/backends/bridge.py "},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.set_qa_prompt(config)","title":"config ","text":""},{"location":"reference/gerd/backends/bridge/#gerd.backends.bridge.Bridge.set_qa_prompt(qa_mode)","title":"qa_mode ","text":""},{"location":"reference/gerd/backends/rest_client/","title":"gerd.backends.rest_client","text":""},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client","title":"gerd.backends.rest_client","text":"REST client for the GERD server. Classes: Name DescriptionRestClient REST client for the GERD server. "},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient","title":"RestClient","text":" Bases: REST client for the GERD server. The client initializes the server URL. It is retrieved from the global CONFIG. Other (timeout) settings are also set here but are not configurable as of now. Methods: Name Descriptionadd_file Add a file to the vector store. analyze_mult_prompts_query Queries the vector store with a set of predefined queries. analyze_query Queries the vector store with a predefined query. db_embedding Converts a question to an embedding. db_query Queries the vector store with a question. generate Generates text with the generation service. 
get_gen_prompt Gets the prompt configuration for the generation service. get_qa_prompt Gets the prompt configuration for a mode of the QA service. qa_query Query the QA service with a question. remove_file Remove a file from the vector store. set_gen_prompt Sets the prompt configuration for the generation service. set_qa_prompt Sets the prompt configuration for the QA service. Source code ingerd/backends/rest_client.py "},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.add_file","title":"add_file","text":" Add a file to the vector store. The returned answer has a status code of 200 if the file was added successfully. Parameters: file: The file to add to the vector store. Returns: Type DescriptionQAAnswer The answer from the QA service Source code ingerd/backends/rest_client.py "},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.analyze_mult_prompts_query","title":"analyze_mult_prompts_query","text":" Queries the vector store with a set of predefined queries. In contrast to Returns: Type DescriptionQAAnalyzeAnswer The answer from the QA service. Source code ingerd/backends/rest_client.py "},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.analyze_query","title":"analyze_query","text":" Queries the vector store with a predefined query. The query should return vital information gathered from letters of discharge. Returns: Type DescriptionQAAnalyzeAnswer The answer from the QA service. Source code ingerd/backends/rest_client.py "},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.db_embedding","title":"db_embedding","text":" Converts a question to an embedding. The embedding is defined by the vector store. Parameters: Name Type Description DefaultQAQuestion The question to convert to an embedding. 
requiredReturns: Type DescriptionList[float] The embedding of the question Source code ingerd/backends/rest_client.py "},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.db_embedding(question)","title":"question ","text":""},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.db_query","title":"db_query","text":" Queries the vector store with a question. Parameters: Name Type Description DefaultQAQuestion The question to query the vector store with. requiredReturns: Type DescriptionList[DocumentSource] A list of document sources Source code ingerd/backends/rest_client.py "},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.db_query(question)","title":"question ","text":""},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.generate","title":"generate","text":" Generates text with the generation service. Parameters: Name Type Description DefaultDict[str, str] The parameters to generate text with requiredReturns: Type DescriptionGenResponse The generation result Source code ingerd/backends/rest_client.py "},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.generate(parameters)","title":"parameters ","text":""},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.get_gen_prompt","title":"get_gen_prompt","text":" Gets the prompt configuration for the generation service. Returns: Type DescriptionPromptConfig The current prompt configuration Source code ingerd/backends/rest_client.py "},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.get_qa_prompt","title":"get_qa_prompt","text":" Gets the prompt configuration for a mode of the QA service. 
Parameters: Name Type Description DefaultQAModesEnum The mode to get the prompt configuration for requiredReturns: Type DescriptionPromptConfig The prompt configuration for the QA service Source code ingerd/backends/rest_client.py "},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.get_qa_prompt(qa_mode)","title":"qa_mode ","text":""},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.qa_query","title":"qa_query","text":" Query the QA service with a question. Parameters: Name Type Description DefaultQAQuestion The question to query the QA service with. requiredReturns: Type DescriptionQAAnswer The answer from the QA service. Source code ingerd/backends/rest_client.py "},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.qa_query(query)","title":"query ","text":""},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.remove_file","title":"remove_file","text":" Remove a file from the vector store. The returned answer has a status code of 200 if the file was removed successfully. Parameters: file_name: The name of the file to remove from the vector store. Returns: Type DescriptionQAAnswer The answer from the QA service Source code ingerd/backends/rest_client.py "},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.set_gen_prompt","title":"set_gen_prompt","text":" Sets the prompt configuration for the generation service. The prompt configuration that is returned should in most cases be the same as the one that was set. Parameters: config: The prompt configuration to set Returns: Type DescriptionPromptConfig The prompt configuration that was set Source code ingerd/backends/rest_client.py "},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.set_qa_prompt","title":"set_qa_prompt","text":" Sets the prompt configuration for the QA service. 
Since the QA service uses multiple prompt configurations, the mode should be specified. For more details, see the documentation of Parameters: Name Type Description DefaultPromptConfig The prompt configuration to set requiredQAModesEnum The mode to set the prompt configuration for requiredReturns: Type DescriptionQAAnswer The answer from the QA service Source code ingerd/backends/rest_client.py "},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.set_qa_prompt(config)","title":"config ","text":""},{"location":"reference/gerd/backends/rest_client/#gerd.backends.rest_client.RestClient.set_qa_prompt(qa_mode)","title":"qa_mode ","text":""},{"location":"reference/gerd/backends/rest_server/","title":"gerd.backends.rest_server","text":""},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server","title":"gerd.backends.rest_server","text":"REST server as a GERD backend. Classes: Name DescriptionRestServer REST server as a GERD backend. "},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer","title":"RestServer","text":" Bases: REST server as a GERD backend. The REST server initializes a private bridge and an API router. The API router is used to define the endpoints for the REST server. Methods: Name Descriptionadd_file Add a file to the vector store. analyze_mult_prompts_query Queries the vector store with a set of predefined queries. analyze_query Queries the vector store with a predefined query. db_embedding Converts a question to an embedding. db_query Queries the vector store with a question. generate Generates text with the generation service. get_gen_prompt Gets the prompt configuration for the generation service. get_qa_prompt Gets the prompt configuration for a mode of the QA service. get_qa_prompt_rest Get the QA prompt configuration. qa_query Query the QA service with a question. remove_file Remove a file from the vector store. 
set_gen_prompt Sets the prompt configuration for the generation service. set_qa_prompt Sets the prompt configuration for the QA service. set_qa_prompt_rest Set the QA prompt configuration. Source code ingerd/backends/rest_server.py "},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.add_file","title":"add_file","text":" Add a file to the vector store. The returned answer has a status code of 200 if the file was added successfully. Parameters: file: The file to add to the vector store. Returns: Type DescriptionQAAnswer The answer from the QA service Source code ingerd/backends/rest_server.py "},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.analyze_mult_prompts_query","title":"analyze_mult_prompts_query","text":" Queries the vector store with a set of predefined queries. In contrast to Returns: Type DescriptionQAAnalyzeAnswer The answer from the QA service. Source code ingerd/backends/rest_server.py "},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.analyze_query","title":"analyze_query","text":" Queries the vector store with a predefined query. The query should return vital information gathered from letters of discharge. Returns: Type DescriptionQAAnalyzeAnswer The answer from the QA service. Source code ingerd/backends/rest_server.py "},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.db_embedding","title":"db_embedding","text":" Converts a question to an embedding. The embedding is defined by the vector store. Parameters: Name Type Description DefaultQAQuestion The question to convert to an embedding. 
requiredReturns: Type DescriptionList[float] The embedding of the question Source code ingerd/backends/rest_server.py "},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.db_embedding(question)","title":"question ","text":""},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.db_query","title":"db_query","text":" Queries the vector store with a question. Parameters: Name Type Description DefaultQAQuestion The question to query the vector store with. requiredReturns: Type DescriptionList[DocumentSource] A list of document sources Source code ingerd/backends/rest_server.py "},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.db_query(question)","title":"question ","text":""},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.generate","title":"generate","text":" Generates text with the generation service. Parameters: Name Type Description DefaultDict[str, str] The parameters to generate text with requiredReturns: Type DescriptionGenResponse The generation result Source code ingerd/backends/rest_server.py "},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.generate(parameters)","title":"parameters ","text":""},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.get_gen_prompt","title":"get_gen_prompt","text":" Gets the prompt configuration for the generation service. Returns: Type DescriptionPromptConfig The current prompt configuration Source code ingerd/backends/rest_server.py "},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.get_qa_prompt","title":"get_qa_prompt","text":" Gets the prompt configuration for a mode of the QA service. 
Parameters: Name Type Description DefaultQAModesEnum The mode to get the prompt configuration for requiredReturns: Type DescriptionPromptConfig The prompt configuration for the QA service Source code ingerd/backends/rest_server.py "},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.get_qa_prompt(qa_mode)","title":"qa_mode ","text":""},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.get_qa_prompt_rest","title":"get_qa_prompt_rest","text":" Get the QA prompt configuration. The call is forwarded to the bridge. Parameters: qa_mode: The QA mode Returns: The QA prompt configuration Source code ingerd/backends/rest_server.py "},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.qa_query","title":"qa_query","text":" Query the QA service with a question. Parameters: Name Type Description DefaultQAQuestion The question to query the QA service with. requiredReturns: Type DescriptionQAAnswer The answer from the QA service. Source code ingerd/backends/rest_server.py "},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.qa_query(query)","title":"query ","text":""},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.remove_file","title":"remove_file","text":" Remove a file from the vector store. The returned answer has a status code of 200 if the file was removed successfully. Parameters: file_name: The name of the file to remove from the vector store. Returns: Type DescriptionQAAnswer The answer from the QA service Source code ingerd/backends/rest_server.py "},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.set_gen_prompt","title":"set_gen_prompt","text":" Sets the prompt configuration for the generation service. The prompt configuration that is returned should in most cases be the same as the one that was set. 
Parameters: config: The prompt configuration to set Returns: Type DescriptionPromptConfig The prompt configuration that was set Source code ingerd/backends/rest_server.py "},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.set_qa_prompt","title":"set_qa_prompt","text":" Sets the prompt configuration for the QA service. Since the QA service uses multiple prompt configurations, the mode should be specified. For more details, see the documentation of Parameters: Name Type Description DefaultPromptConfig The prompt configuration to set requiredQAModesEnum The mode to set the prompt configuration for requiredReturns: Type DescriptionQAAnswer The answer from the QA service Source code ingerd/backends/rest_server.py "},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.set_qa_prompt(config)","title":"config ","text":""},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.set_qa_prompt(qa_mode)","title":"qa_mode ","text":""},{"location":"reference/gerd/backends/rest_server/#gerd.backends.rest_server.RestServer.set_qa_prompt_rest","title":"set_qa_prompt_rest","text":" Set the QA prompt configuration. The call is forwarded to the bridge. Parameters: config: The QA prompt configuration Returns: The QA prompt configuration; Should be the same as the input in most cases Source code ingerd/backends/rest_server.py "},{"location":"reference/gerd/config/","title":"gerd.config","text":""},{"location":"reference/gerd/config/#gerd.config","title":"gerd.config","text":"Configuration for the application. Classes: Name DescriptionEnvVariables Environment variables. Settings Settings for the application. YamlConfig YAML configuration source. Functions: Name Descriptionload_gen_config Load the LLM model configuration. load_qa_config Load the LLM model configuration. Attributes: Name Type DescriptionCONFIG The global configuration object. 
"},{"location":"reference/gerd/config/#gerd.config.CONFIG","title":"CONFIGmodule-attribute ","text":" The global configuration object. "},{"location":"reference/gerd/config/#gerd.config.EnvVariables","title":"EnvVariables","text":" Bases: Environment variables. "},{"location":"reference/gerd/config/#gerd.config.Settings","title":"Settings","text":" Bases: Settings for the application. Methods: Name Descriptionsettings_customise_sources Customize the settings sources used by pydantic-settings. "},{"location":"reference/gerd/config/#gerd.config.Settings.settings_customise_sources","title":"settings_customise_sourcesclassmethod ","text":" Customize the settings sources used by pydantic-settings. The order of the sources is important. The first source has the highest priority. Parameters: Name Type Description DefaultThe class of the settings. requiredPydanticBaseSettingsSource The settings from the initialization. requiredPydanticBaseSettingsSource The settings from the environment. requiredPydanticBaseSettingsSource The settings from the dotenv file. requiredPydanticBaseSettingsSource The settings from the secret file. requiredReturns: Type DescriptionTuple[PydanticBaseSettingsSource, ...] The customized settings sources. 
Source code ingerd/config.py "},{"location":"reference/gerd/config/#gerd.config.Settings.settings_customise_sources(cls)","title":"cls ","text":""},{"location":"reference/gerd/config/#gerd.config.Settings.settings_customise_sources(init_settings)","title":"init_settings ","text":""},{"location":"reference/gerd/config/#gerd.config.Settings.settings_customise_sources(env_settings)","title":"env_settings ","text":""},{"location":"reference/gerd/config/#gerd.config.Settings.settings_customise_sources(dotenv_settings)","title":"dotenv_settings ","text":""},{"location":"reference/gerd/config/#gerd.config.Settings.settings_customise_sources(file_secret_settings)","title":"file_secret_settings ","text":""},{"location":"reference/gerd/config/#gerd.config.YamlConfig","title":"YamlConfig","text":" Bases: YAML configuration source. Methods: Name Descriptionget_field_value Overrides a method from Fails if it should ever be called. Parameters: field: The field to get the value for. field_name: The name of the field. Raises: Type DescriptionNotImplementedError Always. Source code ingerd/config.py "},{"location":"reference/gerd/config/#gerd.config.load_gen_config","title":"load_gen_config","text":" Load the LLM model configuration. Parameters: Name Type Description Defaultstr The name of the configuration. 'gen_default' Returns: Type DescriptionGenerationConfig The model configuration. 
Source code ingerd/config.py "},{"location":"reference/gerd/config/#gerd.config.load_qa_config(config)","title":"config ","text":""},{"location":"reference/gerd/features/","title":"gerd.features","text":""},{"location":"reference/gerd/features/#gerd.features","title":"gerd.features","text":"Special features to extend the functionality of GERD services. Modules: Name Descriptionprompt_chaining The prompt chaining extension. "},{"location":"reference/gerd/features/prompt_chaining/","title":"gerd.features.prompt_chaining","text":""},{"location":"reference/gerd/features/prompt_chaining/#gerd.features.prompt_chaining","title":"gerd.features.prompt_chaining","text":"The prompt chaining extension. Prompt chaining is a method to improve the factual accuracy of the model's output. To do this, the model generates a series of prompts and uses the output of each prompt as the input for the next prompt. This allows the model to reflect on its own output and generate a more coherent response. Classes: Name DescriptionPromptChaining The prompt chaining extension. PromptChainingConfig Configuration for prompt chaining. "},{"location":"reference/gerd/features/prompt_chaining/#gerd.features.prompt_chaining.PromptChaining","title":"PromptChaining","text":" The prompt chaining extension. The service is initialized with a chaining configuration and an LLM. Parameters: Name Type Description DefaultPromptChainingConfig The configuration for the prompt chaining requiredLLM The language model to use for the generation requiredPromptConfig The prompt that is used to wrap the questions requiredMethods: Name Descriptiongenerate Generate text based on the prompt configuration and use chaining. 
Source code ingerd/features/prompt_chaining.py "},{"location":"reference/gerd/features/prompt_chaining/#gerd.features.prompt_chaining.PromptChaining(config)","title":"config ","text":""},{"location":"reference/gerd/features/prompt_chaining/#gerd.features.prompt_chaining.PromptChaining(llm)","title":"llm ","text":""},{"location":"reference/gerd/features/prompt_chaining/#gerd.features.prompt_chaining.PromptChaining(prompt)","title":"prompt ","text":""},{"location":"reference/gerd/features/prompt_chaining/#gerd.features.prompt_chaining.PromptChaining.generate","title":"generate","text":" Generate text based on the prompt configuration and use chaining. Parameters: Name Type Description Defaultdict[str, str] The parameters to format the prompt with requiredReturns: Type Descriptionstr The result of the last prompt that was chained Source code ingerd/features/prompt_chaining.py "},{"location":"reference/gerd/features/prompt_chaining/#gerd.features.prompt_chaining.PromptChaining.generate(parameters)","title":"parameters ","text":""},{"location":"reference/gerd/features/prompt_chaining/#gerd.features.prompt_chaining.PromptChainingConfig","title":"PromptChainingConfig","text":" Bases: Configuration for prompt chaining. Note that prompts should contain placeholders for the responses to be inserted. The initial question can be used with Attributes: Name Type Descriptionprompts list[PromptConfig] The list of prompts to chain. "},{"location":"reference/gerd/features/prompt_chaining/#gerd.features.prompt_chaining.PromptChainingConfig.prompts","title":"promptsinstance-attribute ","text":" The list of prompts to chain. "},{"location":"reference/gerd/frontends/","title":"gerd.frontends","text":""},{"location":"reference/gerd/frontends/#gerd.frontends","title":"gerd.frontends","text":"A collection of several gradio frontends. A variety of frontends to interact with GERD services and backends. 
Modules: Name Descriptiongen_frontend A gradio frontend to interact with the generation service. generate A simple gradio frontend to interact with the GERD chat and generate service. instruct A gradio frontend to interact with the GERD instruct service. qa_frontend A gradio frontend to query the QA service and upload files to the vectorstore. router A gradio frontend to start and stop the GERD services. training A gradio frontend to train LoRAs with. "},{"location":"reference/gerd/frontends/gen_frontend/","title":"gerd.frontends.gen_frontend","text":""},{"location":"reference/gerd/frontends/gen_frontend/#gerd.frontends.gen_frontend","title":"gerd.frontends.gen_frontend","text":"A gradio frontend to interact with the generation service. This frontend is tailored to the letter of discharge generation task. For a more general frontend see Functions: Name Descriptioncompare_paragraphs Compare paragraphs of two documents and return the modified parts. generate Generate a letter of discharge based on the provided fields. insert_paragraphs Insert modified paragraphs into the source document. response_parser Parse the response from the generation service. "},{"location":"reference/gerd/frontends/gen_frontend/#gerd.frontends.gen_frontend.compare_paragraphs","title":"compare_paragraphs","text":" Compare paragraphs of two documents and return the modified parts. 
Parameters: Name Type Description Defaultstr The source document requiredstr The modified document requiredReturns: Type DescriptionDict[str, str] The modified parts of the document Source code ingerd/frontends/gen_frontend.py "},{"location":"reference/gerd/frontends/gen_frontend/#gerd.frontends.gen_frontend.compare_paragraphs(src_doc)","title":"src_doc ","text":""},{"location":"reference/gerd/frontends/gen_frontend/#gerd.frontends.gen_frontend.compare_paragraphs(mod_doc)","title":"mod_doc ","text":""},{"location":"reference/gerd/frontends/gen_frontend/#gerd.frontends.gen_frontend.generate","title":"generate","text":" Generate a letter of discharge based on the provided fields. Parameters: Name Type Description Defaultstr The fields to generate the letter of discharge from. () Returns: Type Descriptionstr The generated letter of discharge, a text area to display it, str and a button state to continue the generation Source code ingerd/frontends/gen_frontend.py "},{"location":"reference/gerd/frontends/gen_frontend/#gerd.frontends.gen_frontend.generate(*fields)","title":"*fields ","text":""},{"location":"reference/gerd/frontends/gen_frontend/#gerd.frontends.gen_frontend.insert_paragraphs","title":"insert_paragraphs","text":" Insert modified paragraphs into the source document. Parameters: Name Type Description Defaultstr The source document requiredDict[str, str] The modified paragraphs requiredReturns: Type Descriptionstr The updated document Source code ingerd/frontends/gen_frontend.py "},{"location":"reference/gerd/frontends/gen_frontend/#gerd.frontends.gen_frontend.insert_paragraphs(src_doc)","title":"src_doc ","text":""},{"location":"reference/gerd/frontends/gen_frontend/#gerd.frontends.gen_frontend.insert_paragraphs(new_para)","title":"new_para ","text":""},{"location":"reference/gerd/frontends/gen_frontend/#gerd.frontends.gen_frontend.response_parser","title":"response_parser","text":" Parse the response from the generation service. 
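The compare/insert round trip described above can be pictured with a minimal paragraph diff. This is a sketch only: it keys modified paragraphs by index rather than by the string keys the actual functions use, and it assumes paragraphs are separated by blank lines.

```python
def compare_paragraphs(src_doc: str, mod_doc: str) -> dict[int, str]:
    """Return paragraphs of mod_doc that differ from src_doc, keyed by index."""
    src = src_doc.split("\n\n")
    mod = mod_doc.split("\n\n")
    return {i: m for i, (s, m) in enumerate(zip(src, mod)) if s != m}

def insert_paragraphs(src_doc: str, new_para: dict[int, str]) -> str:
    """Replace paragraphs of src_doc with their modified counterparts."""
    paragraphs = src_doc.split("\n\n")
    for i, text in new_para.items():
        paragraphs[i] = text
    return "\n\n".join(paragraphs)

src = "History: unremarkable\n\nFindings: normal\n\nSummary: stable"
mod = "History: unremarkable\n\nFindings: slightly elevated\n\nSummary: stable"
changes = compare_paragraphs(src, mod)
merged = insert_paragraphs(src, changes)
```

Only the changed paragraph travels between the two calls, which keeps the edit round trip small.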
Parameters: Name Type Description Defaultstr The response from the generation service requiredReturns: Type DescriptionDict[str, str] The parsed response Source code ingerd/frontends/gen_frontend.py "},{"location":"reference/gerd/frontends/gen_frontend/#gerd.frontends.gen_frontend.response_parser(response)","title":"response ","text":""},{"location":"reference/gerd/frontends/generate/","title":"gerd.frontends.generate","text":""},{"location":"reference/gerd/frontends/generate/#gerd.frontends.generate","title":"gerd.frontends.generate","text":"A simple gradio frontend to interact with the GERD chat and generate service. Classes: Name DescriptionGlobal Singleton to store the service. Functions: Name Descriptiongenerate Generate text from the model. load_model Load a global large language model. upload_lora Upload a LoRA archive. Attributes: Name Type DescriptionKIOSK_MODE Whether the frontend is running in kiosk mode. "},{"location":"reference/gerd/frontends/generate/#gerd.frontends.generate.KIOSK_MODE","title":"KIOSK_MODEmodule-attribute ","text":" Whether the frontend is running in kiosk mode. Kiosk mode reduces the number of options to a minimum and automatically loads the model. "},{"location":"reference/gerd/frontends/generate/#gerd.frontends.generate.Global","title":"Global","text":"Singleton to store the service. "},{"location":"reference/gerd/frontends/generate/#gerd.frontends.generate.generate","title":"generate","text":" Generate text from the model. 
Parameters: Name Type Description Defaultstr The text to generate from requiredfloat The temperature for the generation requiredfloat The top p value for the generation requiredint The maximum number of tokens to generate requiredReturns: Type Descriptionstr The generated text Source code ingerd/frontends/generate.py "},{"location":"reference/gerd/frontends/generate/#gerd.frontends.generate.generate(textbox)","title":"textbox ","text":""},{"location":"reference/gerd/frontends/generate/#gerd.frontends.generate.generate(temp)","title":"temp ","text":""},{"location":"reference/gerd/frontends/generate/#gerd.frontends.generate.generate(top_p)","title":"top_p ","text":""},{"location":"reference/gerd/frontends/generate/#gerd.frontends.generate.generate(max_tokens)","title":"max_tokens ","text":""},{"location":"reference/gerd/frontends/generate/#gerd.frontends.generate.load_model","title":"load_model","text":" Load a global large language model. Parameters: Name Type Description Defaultstr The name of the model requiredstr Whether to use an extra LoRA requiredReturns: Type Descriptiondict[str, Any] The updated interactive state, returns interactive=True when the model is loaded Source code ingerd/frontends/generate.py "},{"location":"reference/gerd/frontends/generate/#gerd.frontends.generate.load_model(model_name)","title":"model_name ","text":""},{"location":"reference/gerd/frontends/generate/#gerd.frontends.generate.load_model(origin)","title":"origin ","text":""},{"location":"reference/gerd/frontends/generate/#gerd.frontends.generate.upload_lora","title":"upload_lora","text":" Upload a LoRA archive. 
Parameters: Name Type Description Defaultstr The path to the uploaded archive requiredReturns: Type Descriptionstr an empty string to clear the input Source code ingerd/frontends/generate.py "},{"location":"reference/gerd/frontends/generate/#gerd.frontends.generate.upload_lora(file_upload)","title":"file_upload ","text":""},{"location":"reference/gerd/frontends/instruct/","title":"gerd.frontends.instruct","text":""},{"location":"reference/gerd/frontends/instruct/#gerd.frontends.instruct","title":"gerd.frontends.instruct","text":"A gradio frontend to interact with the GERD instruct service. Classes: Name DescriptionGlobal Singleton to store the service. Functions: Name Descriptiongenerate Generate text from the model. load_model Load a global large language model. Attributes: Name Type DescriptionKIOSK_MODE Whether the frontend is running in kiosk mode. "},{"location":"reference/gerd/frontends/instruct/#gerd.frontends.instruct.KIOSK_MODE","title":"KIOSK_MODEmodule-attribute ","text":" Whether the frontend is running in kiosk mode. Kiosk mode reduces the number of options to a minimum and automatically loads the model. "},{"location":"reference/gerd/frontends/instruct/#gerd.frontends.instruct.Global","title":"Global","text":"Singleton to store the service. "},{"location":"reference/gerd/frontends/instruct/#gerd.frontends.instruct.generate","title":"generate","text":" Generate text from the model. 
Parameters: Name Type Description Defaultfloat The temperature for the generation requiredfloat The top-p value for the generation requiredint The maximum number of tokens to generate requiredstr The system text to set up the context requiredstr The user input () Source code in gerd/frontends/instruct.py "},{"location":"reference/gerd/frontends/instruct/#gerd.frontends.instruct.generate(temperature)","title":"temperature ","text":""},{"location":"reference/gerd/frontends/instruct/#gerd.frontends.instruct.generate(top_p)","title":"top_p ","text":""},{"location":"reference/gerd/frontends/instruct/#gerd.frontends.instruct.generate(max_tokens)","title":"max_tokens ","text":""},{"location":"reference/gerd/frontends/instruct/#gerd.frontends.instruct.generate(system_text)","title":"system_text ","text":""},{"location":"reference/gerd/frontends/instruct/#gerd.frontends.instruct.generate(args)","title":"args ","text":""},{"location":"reference/gerd/frontends/instruct/#gerd.frontends.instruct.load_model","title":"load_model","text":" Load a global large language model. Parameters: Name Type Description Defaultstr The name of the model requiredstr Whether to use an extra LoRA 'None' Source code in gerd/frontends/instruct.py "},{"location":"reference/gerd/frontends/instruct/#gerd.frontends.instruct.load_model(model_name)","title":"model_name ","text":""},{"location":"reference/gerd/frontends/instruct/#gerd.frontends.instruct.load_model(origin)","title":"origin ","text":""},{"location":"reference/gerd/frontends/qa_frontend/","title":"gerd.frontends.qa_frontend","text":""},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend","title":"gerd.frontends.qa_frontend","text":"A gradio frontend to query the QA service and upload files to the vectorstore. Functions: Name Descriptionfiles_changed Check if the file upload element has changed. get_qa_mode Get QAMode from string. handle_developer_mode_checkbox_change Enable/disable developer mode. 
handle_type_radio_selection_change Enable/disable GUI elements depending on which mode is selected. query Starts the selected QA Mode. set_prompt Updates the prompt of the selected QA Mode. "},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.files_changed","title":"files_changed","text":" Check if the file upload element has changed. If so, upload the new files to the vectorstore and delete the ones that have been removed. Parameters: Name Type Description DefaultOptional[list[str]] The file paths to upload required Source code ingerd/frontends/qa_frontend.py "},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.files_changed(file_paths)","title":"file_paths ","text":""},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.get_qa_mode","title":"get_qa_mode","text":" Get QAMode from string. Parameters: Name Type Description Defaultstr The search type requiredReturns: Type DescriptionQAModesEnum The QAMode Source code ingerd/frontends/qa_frontend.py "},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.get_qa_mode(search_type)","title":"search_type ","text":""},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.handle_developer_mode_checkbox_change","title":"handle_developer_mode_checkbox_change","text":" Enable/disable developer mode. Enables or disables the developer mode and the corresponding GUI elements. Parameters: check: The current state of the developer mode checkbox Returns: The list of GUI element property changes to update Source code ingerd/frontends/qa_frontend.py "},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.handle_type_radio_selection_change","title":"handle_type_radio_selection_change","text":" Enable/disable GUI elements depending on which mode is selected. The order of the updated elements must be considered
Parameters: Name Type Description Defaultstr The current search type requiredReturns: Type DescriptionList[Any] The list of GUI element property changes to update Source code ingerd/frontends/qa_frontend.py "},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.handle_type_radio_selection_change(search_type)","title":"search_type ","text":""},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.query","title":"query","text":" Starts the selected QA Mode. Parameters: Name Type Description Defaultstr The question to ask requiredstr The search type requiredint The number of sources requiredstr The search strategy requiredReturns: Type Descriptionstr The response from the QA service Source code ingerd/frontends/qa_frontend.py "},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.query(question)","title":"question ","text":""},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.query(search_type)","title":"search_type ","text":""},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.query(k_source)","title":"k_source ","text":""},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.query(search_strategy)","title":"search_strategy ","text":""},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.set_prompt","title":"set_prompt","text":" Updates the prompt of the selected QA Mode. 
Parameters: Name Type Description Defaultstr The new prompt requiredstr The search type requiredOptional[Progress] The progress bar to update None Source code in gerd/frontends/qa_frontend.py "},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.set_prompt(prompt)","title":"prompt ","text":""},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.set_prompt(search_type)","title":"search_type ","text":""},{"location":"reference/gerd/frontends/qa_frontend/#gerd.frontends.qa_frontend.set_prompt(progress)","title":"progress ","text":""},{"location":"reference/gerd/frontends/router/","title":"gerd.frontends.router","text":""},{"location":"reference/gerd/frontends/router/#gerd.frontends.router","title":"gerd.frontends.router","text":"A gradio frontend to start and stop the GERD services. Since most hosts that use a frontend will not have enough memory to run multiple services at the same time, this router is used to start and stop the services as needed. Classes: Name DescriptionAppController The controller for the app. AppState The state of the service. Functions: Name Descriptioncheck_state Checks the app state and waits for the service to start. Attributes: Name Type DescriptionGRADIO_ROUTER_PORT The port the router is running on. GRADIO_SERVER_PORT The port the gradio server is running on. "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.GRADIO_ROUTER_PORT","title":"GRADIO_ROUTER_PORTmodule-attribute ","text":" The port the router is running on. "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.GRADIO_SERVER_PORT","title":"GRADIO_SERVER_PORTmodule-attribute ","text":" The port the gradio server is running on. "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppController","title":"AppController","text":" The controller for the app. The controller is initialized in the stopped state. Methods: Name Descriptioncheck_port Check if the service port is open. 
instance Get the instance of the controller. start Start the service with the given frontend. start_gen Start the generation service. start_instruct Start the instruct service. start_qa Start the QA service. start_simple Start the simple generation service. start_training Start the training service. stop Stop the service when it is running. Source code ingerd/frontends/router.py "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppController.check_port","title":"check_port staticmethod ","text":" Check if the service port is open. Parameters: Name Type Description Defaultint The port to check requiredReturns: Type Descriptionbool True if the port is open, False otherwise. Source code ingerd/frontends/router.py "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppController.check_port(port)","title":"port ","text":""},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppController.instance","title":"instance classmethod ","text":" Get the instance of the controller. Source code ingerd/frontends/router.py "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppController.start","title":"start","text":" Start the service with the given frontend. Parameters: Name Type Description Defaultstr The frontend service name to start. required Source code ingerd/frontends/router.py "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppController.start(frontend)","title":"frontend ","text":""},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppController.start_gen","title":"start_gen","text":" Start the generation service. Returns: Type Descriptionstr The name of the current app state. Source code ingerd/frontends/router.py "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppController.start_instruct","title":"start_instruct","text":" Start the instruct service. 
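A port probe such as check_port can be built on the standard socket module. This is a sketch under the assumption that the router simply attempts a TCP connection; the actual implementation may differ.

```python
import socket

def check_port(port: int, host: str = "127.0.0.1", timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds, False otherwise."""
    try:
        # create_connection raises OSError when nothing is listening
        # or the attempt times out.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

The router can poll such a check in a loop to decide when a starting frontend has finished booting and is ready to receive requests.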
Returns: Type Descriptionstr The name of the current app state Source code ingerd/frontends/router.py "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppController.start_qa","title":"start_qa","text":" Start the QA service. Returns: Type Descriptionstr The name of the current app state Source code ingerd/frontends/router.py "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppController.start_simple","title":"start_simple","text":" Start the simple generation service. Returns: Type Descriptionstr The name of the current app state Source code ingerd/frontends/router.py "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppController.start_training","title":"start_training","text":" Start the training service. Returns: Type Descriptionstr The name of the current app state Source code ingerd/frontends/router.py "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppController.stop","title":"stop","text":" Stop the service when it is running. Returns: Type Descriptionstr The name of the current app state. Source code ingerd/frontends/router.py "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppState","title":"AppState","text":" Bases: The state of the service. Attributes: Name Type DescriptionGENERATE_STARTED The generation service is started. GENERATE_STARTING The generation service is starting. INSTRUCT_STARTED The instruct service is started. INSTRUCT_STARTING The instruct service is starting. QA_STARTED The QA service is started. QA_STARTING The QA service is starting. SIMPLE_STARTED The simple generation service is started. SIMPLE_STARTING The simple generation service is starting. STOPPED All services are stopped. TRAINING_STARTED The training service is started. TRAINING_STARTING The training service is starting. 
"},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppState.GENERATE_STARTED","title":"GENERATE_STARTEDclass-attribute instance-attribute ","text":" The generation service is started. "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppState.GENERATE_STARTING","title":"GENERATE_STARTINGclass-attribute instance-attribute ","text":" The generation service is starting. "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppState.INSTRUCT_STARTED","title":"INSTRUCT_STARTEDclass-attribute instance-attribute ","text":" The instruct service is started. "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppState.INSTRUCT_STARTING","title":"INSTRUCT_STARTINGclass-attribute instance-attribute ","text":" The instruct service is starting. "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppState.QA_STARTED","title":"QA_STARTEDclass-attribute instance-attribute ","text":" The QA service is started. "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppState.QA_STARTING","title":"QA_STARTINGclass-attribute instance-attribute ","text":" The QA service is starting. "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppState.SIMPLE_STARTED","title":"SIMPLE_STARTEDclass-attribute instance-attribute ","text":" The simple generation service is started. "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppState.SIMPLE_STARTING","title":"SIMPLE_STARTINGclass-attribute instance-attribute ","text":" The simple generation service is starting. "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppState.STOPPED","title":"STOPPEDclass-attribute instance-attribute ","text":" All services is stopped. "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppState.TRAINING_STARTED","title":"TRAINING_STARTEDclass-attribute instance-attribute ","text":" The training service is started. 
"},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.AppState.TRAINING_STARTING","title":"TRAINING_STARTINGclass-attribute instance-attribute ","text":" The training service is starting. "},{"location":"reference/gerd/frontends/router/#gerd.frontends.router.check_state","title":"check_state","text":" Checks the app state and waits for the service to start. Returns: Type Descriptionstr The name of the current app state. Source code ingerd/frontends/router.py "},{"location":"reference/gerd/frontends/training/","title":"gerd.frontends.training","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training","title":"gerd.frontends.training","text":"A gradio frontend to train LoRAs with. Classes: Name DescriptionGlobal A singleton class handle to store the current trainer instance. Functions: Name Descriptioncheck_trainer Check if the trainer is (still) running. get_file_list Get a list of files matching the glob pattern. get_loras Get a list of available LoRAs. start_training Start the training process. validate_files Validate the uploaded files. "},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.Global","title":"Global","text":"A singleton class handle to store the current trainer instance. "},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.check_trainer","title":"check_trainer","text":" Check if the trainer is (still) running. When the trainer is running, a progress bar is shown. The method returns a gradio property update of 'visible' which can be used to activate and deactivate elements based on the current training status. Returns: Type Descriptiondict[str, Any] A dictionary with the status of gradio 'visible' property Source code ingerd/frontends/training.py "},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.get_file_list","title":"get_file_list","text":" Get a list of files matching the glob pattern. 
Parameters: Name Type Description Defaultstr The glob pattern to search for files requiredReturns: Type Descriptionstr A string with the list of files Source code ingerd/frontends/training.py "},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.get_file_list(glob_pattern)","title":"glob_pattern ","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.get_loras","title":"get_loras","text":" Get a list of available LoRAs. LoRAs are loaded from the path defined in the default LoraTrainingConfig. Returns: Type Descriptiondict[str, Path] A dictionary with the LoRA names as keys and the paths as values Source code ingerd/frontends/training.py "},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.start_training","title":"start_training","text":" Start the training process. While training, the method will update the progress bar. Parameters: Name Type Description Defaultlist[str] | None The list of files to train on requiredstr The name of the model to train requiredstr The name of the LoRA to train requiredstr The training mode requiredstr The source of the data requiredstr The glob pattern to search for files requiredbool Whether to override existing models requiredlist[str] The modules to train requiredlist[str] The flags to set requiredint The number of epochs to train requiredint The batch size requiredint The micro batch size requiredint The cutoff length requiredint The overlap length requiredReturns: Type Descriptionstr A string with the status of the training Source code ingerd/frontends/training.py "},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.start_training(files)","title":"files ","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.start_training(model_name)","title":"model_name ","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.start_training(lora_name)","title":"lora_name 
","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.start_training(mode)","title":"mode ","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.start_training(data_source)","title":"data_source ","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.start_training(input_glob)","title":"input_glob ","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.start_training(override)","title":"override ","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.start_training(modules)","title":"modules ","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.start_training(flags)","title":"flags ","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.start_training(epochs)","title":"epochs ","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.start_training(batch_size)","title":"batch_size ","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.start_training(micro_batch_size)","title":"micro_batch_size ","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.start_training(cutoff_len)","title":"cutoff_len ","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.start_training(overlap_len)","title":"overlap_len ","text":""},{"location":"reference/gerd/frontends/training/#gerd.frontends.training.validate_files","title":"validate_files","text":" Validate the uploaded files. Whether the property 'interactive' is True depends on whether any files were valid. 
Parameters: file_paths: The list of file paths mode: The training mode Returns: Type Descriptiontuple[list[str], dict[str, bool]] A tuple with the validated file paths and gradio property 'interactive' Source code ingerd/frontends/training.py "},{"location":"reference/gerd/gen/","title":"gerd.gen","text":""},{"location":"reference/gerd/gen/#gerd.gen","title":"gerd.gen","text":"Services and utilities for text generation with LLMs. Modules: Name Descriptionchat_service Implementation of the ChatService class. generation_service Implements the Generation class. "},{"location":"reference/gerd/gen/chat_service/","title":"gerd.gen.chat_service","text":""},{"location":"reference/gerd/gen/chat_service/#gerd.gen.chat_service","title":"gerd.gen.chat_service","text":"Implementation of the ChatService class. This features the currently favoured approach of instruction-based work with large language models. Thus, models fine-tuned for chat or instructions work best with this service. The service can also be used for plain text generation, as long as the model features a chat template. In this case, this service should be preferred over the GenerationService since it is easier to set up a prompt according to the model's requirements. Classes: Name DescriptionChatService Service to generate text based on a chat history. "},{"location":"reference/gerd/gen/chat_service/#gerd.gen.chat_service.ChatService","title":"ChatService","text":" Service to generate text based on a chat history. The service is initialized with a config and parameters. The parameters are used to initialize the message history. However, future resets will not consider them. The used LLM is loaded according to the model configuration right on initialization. Methods: Name Descriptionadd_message Add a message to the chat history. generate Generate a response based on the chat history. get_prompt_config Get the prompt configuration. reset Reset the chat history. 
set_prompt_config Set the prompt configuration. submit_user_message Submit a message with the user role and generate a response. Source code ingerd/gen/chat_service.py "},{"location":"reference/gerd/gen/chat_service/#gerd.gen.chat_service.ChatService.add_message","title":"add_message","text":" Add a message to the chat history. Source code ingerd/gen/chat_service.py "},{"location":"reference/gerd/gen/chat_service/#gerd.gen.chat_service.ChatService.generate","title":"generate","text":" Generate a response based on the chat history. This method can be used as a replacement for GenerationService.generate in cases where the used model provides a chat template. When this is the case, using this method is more reliable as it requires less manual configuration to set up the prompt according to the model's requirements. Parameters: Name Type Description DefaultDict[str, str] The parameters to format the prompt with requiredReturns: Type DescriptionGenResponse The generation result Source code ingerd/gen/chat_service.py "},{"location":"reference/gerd/gen/chat_service/#gerd.gen.chat_service.ChatService.generate(parameters)","title":"parameters ","text":""},{"location":"reference/gerd/gen/chat_service/#gerd.gen.chat_service.ChatService.get_prompt_config","title":"get_prompt_config","text":" Get the prompt configuration. Source code ingerd/gen/chat_service.py "},{"location":"reference/gerd/gen/chat_service/#gerd.gen.chat_service.ChatService.reset","title":"reset","text":" Reset the chat history. Source code ingerd/gen/chat_service.py "},{"location":"reference/gerd/gen/chat_service/#gerd.gen.chat_service.ChatService.set_prompt_config","title":"set_prompt_config","text":" Set the prompt configuration. Source code ingerd/gen/chat_service.py "},{"location":"reference/gerd/gen/chat_service/#gerd.gen.chat_service.ChatService.submit_user_message","title":"submit_user_message","text":" Submit a message with the user role and generate a response. 
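The message handling described for ChatService can be pictured with a small stand-in class. Everything below is illustrative: the real service wraps a configured LLM and prompt templates, while this sketch only shows the add/reset/submit flow over a role-tagged history.

```python
from typing import Callable

class ChatHistory:
    """Illustrative stand-in for a chat service's message handling."""

    def __init__(self, system: str = "You are a helpful assistant.") -> None:
        self.system = system
        self.reset()

    def add_message(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})

    def reset(self) -> None:
        # Drop everything except the system prompt.
        self.messages: list[dict[str, str]] = [
            {"role": "system", "content": self.system}
        ]

    def submit_user_message(
        self, content: str, llm: Callable[[list[dict[str, str]]], str]
    ) -> str:
        self.add_message("user", content)
        reply = llm(self.messages)
        self.add_message("assistant", reply)
        return reply

chat = ChatHistory()
reply = chat.submit_user_message("Hi!", lambda msgs: "echo: " + msgs[-1]["content"])
```

A model's chat template would consume the same role/content list that this history accumulates.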
The service's prompt configuration is used to format the prompt unless a different prompt configuration is provided. Parameters: parameters: The parameters to format the prompt with prompt_config: The optional prompt configuration to be used Returns: Type DescriptionGenResponse The generation result Source code ingerd/gen/chat_service.py "},{"location":"reference/gerd/gen/generation_service/","title":"gerd.gen.generation_service","text":""},{"location":"reference/gerd/gen/generation_service/#gerd.gen.generation_service","title":"gerd.gen.generation_service","text":"Implements the GenerationService class. The generation service is meant to generate text based on a prompt and/or to continue a provided text. Classes: Name DescriptionGenerationService Service to generate text based on a prompt. "},{"location":"reference/gerd/gen/generation_service/#gerd.gen.generation_service.GenerationService","title":"GenerationService","text":" Service to generate text based on a prompt. Initialize the generation service and load the model. Parameters: Name Type Description DefaultGenerationConfig The configuration for the generation service requiredMethods: Name Descriptiongenerate Generate text based on the prompt configuration. get_prompt_config Get the prompt configuration. set_prompt_config Sets the prompt configuration. Source code ingerd/gen/generation_service.py "},{"location":"reference/gerd/gen/generation_service/#gerd.gen.generation_service.GenerationService(config)","title":"config ","text":""},{"location":"reference/gerd/gen/generation_service/#gerd.gen.generation_service.GenerationService.generate","title":"generate","text":" Generate text based on the prompt configuration. The actual prompt is provided by the prompt configuration. The list of parameters is used to format the prompt and replace the placeholders. The list can be empty if the prompt does not contain any placeholders. 
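The placeholder substitution described above can be sketched in a few lines. This is a self-contained illustration, not the actual gerd implementation; SimplePromptConfig is a simplified stand-in for the library's prompt configuration.

```python
from dataclasses import dataclass

@dataclass
class SimplePromptConfig:
    """Minimal stand-in for a prompt configuration with {placeholder} markers."""

    text: str

    def format(self, parameters: dict[str, str]) -> str:
        # An empty parameter dict is fine when the template has no placeholders.
        return self.text.format(**parameters)

prompt = SimplePromptConfig("Summarize the following text:\n{text}")
print(prompt.format({"text": "LLMs generate text."}))
```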
Parameters: Name Type Description DefaultDict[str, str] The parameters to format the prompt with requiredbool Whether to add the prompt to the response False Returns: Type DescriptionGenResponse The generation result Source code ingerd/gen/generation_service.py "},{"location":"reference/gerd/gen/generation_service/#gerd.gen.generation_service.GenerationService.generate(parameters)","title":"parameters ","text":""},{"location":"reference/gerd/gen/generation_service/#gerd.gen.generation_service.GenerationService.generate(add_prompt)","title":"add_prompt ","text":""},{"location":"reference/gerd/gen/generation_service/#gerd.gen.generation_service.GenerationService.get_prompt_config","title":"get_prompt_config","text":" Get the prompt configuration. Returns: Type DescriptionPromptConfig The prompt configuration Source code ingerd/gen/generation_service.py "},{"location":"reference/gerd/gen/generation_service/#gerd.gen.generation_service.GenerationService.set_prompt_config","title":"set_prompt_config","text":" Sets the prompt configuration. Parameters: Name Type Description DefaultPromptConfig The prompt configuration requiredReturns: The prompt configuration; Should be the same as the input in most cases Source code ingerd/gen/generation_service.py "},{"location":"reference/gerd/gen/generation_service/#gerd.gen.generation_service.GenerationService.set_prompt_config(config)","title":"config ","text":""},{"location":"reference/gerd/loader/","title":"gerd.loader","text":""},{"location":"reference/gerd/loader/#gerd.loader","title":"gerd.loader","text":"Module for loading language models. Depending on the configuration, different language models are loaded and different libraries are used. The main goal is to provide a unified interface to the different models and libraries. Classes: Name DescriptionLLM The abstract base class for large language models. LlamaCppLLM A language model using the Llama.cpp library. MockLLM A mock language model for testing purposes. 
RemoteLLM A language model using a remote endpoint. TransformerLLM A language model using the transformers library. Functions: Name Descriptionload_model_from_config Loads a language model based on the configuration. "},{"location":"reference/gerd/loader/#gerd.loader.LLM","title":"LLM","text":" The abstract base class for large language models. Should be implemented by all language model backends. A language model is initialized with a configuration. Parameters: Name Type Description DefaultModelConfig The configuration for the language model requiredMethods: Name Descriptioncreate_chat_completion Create a chat completion based on a list of messages. generate Generate text based on a prompt. Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.LLM(config)","title":"config ","text":""},{"location":"reference/gerd/loader/#gerd.loader.LLM.create_chat_completion","title":"create_chat_completion abstractmethod ","text":" Create a chat completion based on a list of messages. Parameters: Name Type Description Defaultlist[ChatMessage] The list of messages in the chat history requiredReturns: Type Descriptiontuple[ChatRole, str] The role of the generated message and the content Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.LLM.create_chat_completion(messages)","title":"messages ","text":""},{"location":"reference/gerd/loader/#gerd.loader.LLM.generate","title":"generate abstractmethod ","text":" Generate text based on a prompt. Parameters: Name Type Description Defaultstr The prompt to generate text from requiredReturns: Type Descriptionstr The generated text Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.LLM.generate(prompt)","title":"prompt ","text":""},{"location":"reference/gerd/loader/#gerd.loader.LlamaCppLLM","title":"LlamaCppLLM","text":" Bases: A language model using the Llama.cpp library. A language model is initialized with a configuration. 
Parameters: Name Type Description DefaultModelConfig The configuration for the language model requiredMethods: Name Descriptioncreate_chat_completion Create a chat completion based on a list of messages. generate Generate text based on a prompt. Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.LlamaCppLLM(config)","title":"config ","text":""},{"location":"reference/gerd/loader/#gerd.loader.LlamaCppLLM.create_chat_completion","title":"create_chat_completion","text":" Create a chat completion based on a list of messages. Parameters: Name Type Description Defaultlist[ChatMessage] The list of messages in the chat history requiredReturns: Type Descriptiontuple[ChatRole, str] The role of the generated message and the content Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.LlamaCppLLM.create_chat_completion(messages)","title":"messages ","text":""},{"location":"reference/gerd/loader/#gerd.loader.LlamaCppLLM.generate","title":"generate","text":" Generate text based on a prompt. Parameters: Name Type Description Defaultstr The prompt to generate text from requiredReturns: Type Descriptionstr The generated text Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.LlamaCppLLM.generate(prompt)","title":"prompt ","text":""},{"location":"reference/gerd/loader/#gerd.loader.MockLLM","title":"MockLLM","text":" Bases: A mock language model for testing purposes. A language model is initialized with a configuration. Parameters: Name Type Description DefaultModelConfig The configuration for the language model requiredMethods: Name Descriptioncreate_chat_completion Create a chat completion based on a list of messages. generate Generate text based on a prompt. 
Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.MockLLM(config)","title":"config ","text":""},{"location":"reference/gerd/loader/#gerd.loader.MockLLM.create_chat_completion","title":"create_chat_completion","text":" Create a chat completion based on a list of messages. Parameters: Name Type Description Defaultlist[ChatMessage] The list of messages in the chat history requiredReturns: Type Descriptiontuple[ChatRole, str] The role of the generated message and the content Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.MockLLM.create_chat_completion(messages)","title":"messages ","text":""},{"location":"reference/gerd/loader/#gerd.loader.MockLLM.generate","title":"generate","text":" Generate text based on a prompt. Parameters: Name Type Description Defaultstr The prompt to generate text from requiredReturns: Type Descriptionstr The generated text Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.MockLLM.generate(prompt)","title":"prompt ","text":""},{"location":"reference/gerd/loader/#gerd.loader.RemoteLLM","title":"RemoteLLM","text":" Bases: A language model using a remote endpoint. The endpoint can be any service that is compatible with the llama.cpp and OpenAI APIs. For further information, please refer to the llama.cpp server API. A language model is initialized with a configuration. Parameters: Name Type Description DefaultModelConfig The configuration for the language model requiredMethods: Name Descriptioncreate_chat_completion Create a chat completion based on a list of messages. generate Generate text based on a prompt. Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.RemoteLLM(config)","title":"config ","text":""},{"location":"reference/gerd/loader/#gerd.loader.RemoteLLM.create_chat_completion","title":"create_chat_completion","text":" Create a chat completion based on a list of messages. 
Parameters: Name Type Description Defaultlist[ChatMessage] The list of messages in the chat history requiredReturns: Type Descriptiontuple[ChatRole, str] The role of the generated message and the content Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.RemoteLLM.create_chat_completion(messages)","title":"messages ","text":""},{"location":"reference/gerd/loader/#gerd.loader.RemoteLLM.generate","title":"generate","text":" Generate text based on a prompt. Parameters: Name Type Description Defaultstr The prompt to generate text from requiredReturns: Type Descriptionstr The generated text Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.RemoteLLM.generate(prompt)","title":"prompt ","text":""},{"location":"reference/gerd/loader/#gerd.loader.TransformerLLM","title":"TransformerLLM","text":" Bases: A language model using the transformers library. A language model is initialized with a configuration. Parameters: Name Type Description DefaultModelConfig The configuration for the language model requiredMethods: Name Descriptioncreate_chat_completion Create a chat completion based on a list of messages. generate Generate text based on a prompt. Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.TransformerLLM(config)","title":"config ","text":""},{"location":"reference/gerd/loader/#gerd.loader.TransformerLLM.create_chat_completion","title":"create_chat_completion","text":" Create a chat completion based on a list of messages. 
Parameters: Name Type Description Defaultlist[ChatMessage] The list of messages in the chat history requiredReturns: Type Descriptiontuple[ChatRole, str] The role of the generated message and the content Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.TransformerLLM.create_chat_completion(messages)","title":"messages ","text":""},{"location":"reference/gerd/loader/#gerd.loader.TransformerLLM.generate","title":"generate","text":" Generate text based on a prompt. Parameters: Name Type Description Defaultstr The prompt to generate text from requiredReturns: Type Descriptionstr The generated text Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.TransformerLLM.generate(prompt)","title":"prompt ","text":""},{"location":"reference/gerd/loader/#gerd.loader.load_model_from_config","title":"load_model_from_config","text":" Loads a language model based on the configuration. Which language model is loaded depends on the configuration. For instance, if an endpoint is provided, a remote language model is loaded. If a file is provided, Llama.cpp is used. Otherwise, transformers is used. Parameters: Name Type Description DefaultModelConfig The configuration for the language model requiredReturns: Type DescriptionLLM The loaded language model Source code ingerd/loader.py "},{"location":"reference/gerd/loader/#gerd.loader.load_model_from_config(config)","title":"config ","text":""},{"location":"reference/gerd/models/","title":"gerd.models","text":""},{"location":"reference/gerd/models/#gerd.models","title":"gerd.models","text":"Pydantic model definitions and data classes that are shared across modules. Modules: Name Descriptiongen Models for the generation and chat service. label Data definitions for Label Studio tasks. logging Logging configuration and utilities. model Model configuration for supported model classes. qa Data definitions for QA model configuration. server Server configuration model for REST backends. 
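The backend selection performed by load_model_from_config (endpoint present: remote model; file present: Llama.cpp; otherwise: transformers) can be sketched as follows. This is a self-contained illustration of the dispatch rule only; SimpleModelConfig and select_backend are simplified stand-ins, not the gerd API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SimpleModelConfig:
    """Simplified stand-in for gerd.models.model.ModelConfig."""

    name: str = ""
    file: Optional[str] = None      # path to a local model file (Llama.cpp)
    endpoint: Optional[str] = None  # URL of a remote llama.cpp/OpenAI-style server

def select_backend(config: SimpleModelConfig) -> str:
    """Mirror the selection logic described above; returns the backend name."""
    if config.endpoint is not None:
        return "RemoteLLM"
    if config.file is not None:
        return "LlamaCppLLM"
    return "TransformerLLM"

print(select_backend(SimpleModelConfig(endpoint="http://localhost:8080")))  # RemoteLLM
```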
"},{"location":"reference/gerd/models/gen/","title":"gerd.models.gen","text":""},{"location":"reference/gerd/models/gen/#gerd.models.gen","title":"gerd.models.gen","text":"Models for the generation and chat service. Classes: Name DescriptionGenerationConfig Configuration for the generation services. GenerationFeaturesConfig Configuration for the generation-specific features. "},{"location":"reference/gerd/models/gen/#gerd.models.gen.GenerationConfig","title":"GenerationConfig","text":" Bases: Configuration for the generation services. A configuration can be used for the GenerationService or the ChatService. Both support generating text based on a prompt. Methods: Name Descriptionsettings_customise_sources Customize the settings sources used by pydantic-settings. Attributes: Name Type Descriptionfeatures GenerationFeaturesConfig The extra features to be used for the generation service. model ModelConfig The model to be used for the generation service. "},{"location":"reference/gerd/models/gen/#gerd.models.gen.GenerationConfig.features","title":"featuresclass-attribute instance-attribute ","text":" The extra features to be used for the generation service. "},{"location":"reference/gerd/models/gen/#gerd.models.gen.GenerationConfig.model","title":"modelclass-attribute instance-attribute ","text":" The model to be used for the generation service. "},{"location":"reference/gerd/models/gen/#gerd.models.gen.GenerationConfig.settings_customise_sources","title":"settings_customise_sourcesclassmethod ","text":" Customize the settings sources used by pydantic-settings. The order of the sources is important. The first source has the highest priority. Parameters: Name Type Description DefaultThe class of the settings. requiredPydanticBaseSettingsSource The settings from the initialization. requiredPydanticBaseSettingsSource The settings from the environment. requiredPydanticBaseSettingsSource The settings from the dotenv file. 
requiredPydanticBaseSettingsSource The settings from the secret file. requiredReturns: Type DescriptionTuple[PydanticBaseSettingsSource, ...] The customized settings sources. Source code ingerd/models/gen.py "},{"location":"reference/gerd/models/gen/#gerd.models.gen.GenerationConfig.settings_customise_sources(cls)","title":"cls ","text":""},{"location":"reference/gerd/models/gen/#gerd.models.gen.GenerationConfig.settings_customise_sources(init_settings)","title":"init_settings ","text":""},{"location":"reference/gerd/models/gen/#gerd.models.gen.GenerationConfig.settings_customise_sources(env_settings)","title":"env_settings ","text":""},{"location":"reference/gerd/models/gen/#gerd.models.gen.GenerationConfig.settings_customise_sources(dotenv_settings)","title":"dotenv_settings ","text":""},{"location":"reference/gerd/models/gen/#gerd.models.gen.GenerationConfig.settings_customise_sources(file_secret_settings)","title":"file_secret_settings ","text":""},{"location":"reference/gerd/models/gen/#gerd.models.gen.GenerationFeaturesConfig","title":"GenerationFeaturesConfig","text":" Bases: Configuration for the generation-specific features. Attributes: Name Type Descriptionprompt_chaining PromptChainingConfig | None Configuration for prompt chaining. "},{"location":"reference/gerd/models/gen/#gerd.models.gen.GenerationFeaturesConfig.prompt_chaining","title":"prompt_chainingclass-attribute instance-attribute ","text":" Configuration for prompt chaining. "},{"location":"reference/gerd/models/label/","title":"gerd.models.label","text":""},{"location":"reference/gerd/models/label/#gerd.models.label","title":"gerd.models.label","text":"Data definitions for Label Studio tasks. The defined models and enums are used to parse and work with Label Studio data exported as JSON. Classes: Name DescriptionLabelStudioAnnotation Annotation of a Label Studio task. LabelStudioAnnotationResult Result of a Label Studio annotation. LabelStudioAnnotationValue Value of a Label Studio annotation. 
LabelStudioLabel Labels for the GRASCCO Label Studio annotations. LabelStudioTask Task of a Label Studio project. Functions: Name Descriptionload_label_studio_tasks Load Label Studio tasks from a JSON file. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation","title":"LabelStudioAnnotation","text":" Bases: Annotation of a Label Studio task. A collection of annotations is associated with a task. Attributes: Name Type Descriptioncompleted_by int The user ID of the user who completed the annotation. created_at str The creation date of the annotation. draft_created_at Optional[str] The creation date of the draft. ground_truth bool Whether the annotation is ground truth. id int The ID of the annotation. import_id Optional[str] The import ID of the annotation. last_action Optional[str] The last action of the annotation. last_created_by Optional[int] The user ID of the user who last created the annotation. lead_time float The lead time of the annotation. parent_annotation Optional[str] The parent annotation. parent_prediction Optional[str] The parent prediction. prediction Dict[str, str] The prediction of the annotation. project int The project ID of the annotation. result List[LabelStudioAnnotationResult] The results of the annotation. result_count int The number of results. task int The task ID of the annotation. unique_id str The unique ID of the annotation. updated_at str The update date of the annotation. updated_by int The user ID of the user who updated the annotation. was_cancelled bool Whether the annotation was cancelled. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.completed_by","title":"completed_byinstance-attribute ","text":" The user ID of the user who completed the annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.created_at","title":"created_atinstance-attribute ","text":" The creation date of the annotation. 
"},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.draft_created_at","title":"draft_created_atinstance-attribute ","text":" The creation date of the draft. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.ground_truth","title":"ground_truthinstance-attribute ","text":" Whether the annotation is ground truth. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.id","title":"idinstance-attribute ","text":" The ID of the annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.import_id","title":"import_idinstance-attribute ","text":" The import ID of the annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.last_action","title":"last_actioninstance-attribute ","text":" The last action of the annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.last_created_by","title":"last_created_byinstance-attribute ","text":" The user ID of the user who last created the annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.lead_time","title":"lead_timeinstance-attribute ","text":" The lead time of the annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.parent_annotation","title":"parent_annotationinstance-attribute ","text":" The parent annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.parent_prediction","title":"parent_predictioninstance-attribute ","text":" The parent prediction. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.prediction","title":"predictioninstance-attribute ","text":" The prediction of the annotation. 
"},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.project","title":"projectinstance-attribute ","text":" The project ID of the annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.result","title":"resultinstance-attribute ","text":" The results of the annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.result_count","title":"result_countinstance-attribute ","text":" The number of results. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.task","title":"taskinstance-attribute ","text":" The task ID of the annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.unique_id","title":"unique_idinstance-attribute ","text":" The unique ID of the annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.updated_at","title":"updated_atinstance-attribute ","text":" The update date of the annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.updated_by","title":"updated_byinstance-attribute ","text":" The user ID of the user who updated the annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotation.was_cancelled","title":"was_cancelledinstance-attribute ","text":" Whether the annotation was cancelled. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotationResult","title":"LabelStudioAnnotationResult","text":" Bases: Result of a Label Studio annotation. Attributes: Name Type Descriptionfrom_name str The name of the source. id str The ID of the result. origin str The origin of the result. to_name str The name of the target. type str The type of the result. value LabelStudioAnnotationValue The value of the result. 
"},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotationResult.from_name","title":"from_nameinstance-attribute ","text":" The name of the source. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotationResult.id","title":"idinstance-attribute ","text":" The ID of the result. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotationResult.origin","title":"origininstance-attribute ","text":" The origin of the result. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotationResult.to_name","title":"to_nameinstance-attribute ","text":" The name of the target. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotationResult.type","title":"typeinstance-attribute ","text":" The type of the result. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotationResult.value","title":"valueinstance-attribute ","text":" The value of the result. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotationValue","title":"LabelStudioAnnotationValue","text":" Bases: Value of a Label Studio annotation. Attributes: Name Type Descriptionend int The end of the annotation. labels List[LabelStudioLabel] The labels of the annotation. start int The start of the annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotationValue.end","title":"endinstance-attribute ","text":" The end of the annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotationValue.labels","title":"labelsinstance-attribute ","text":" The labels of the annotation. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioAnnotationValue.start","title":"startinstance-attribute ","text":" The start of the annotation. 
"},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioLabel","title":"LabelStudioLabel","text":" Bases: Labels for the GRASCCO Label Studio annotations. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask","title":"LabelStudioTask","text":" Bases: Task of a Label Studio project. A task is a single unit of work that can be annotated by a user. Tasks can be used to train an auto labeler or to evaluate the performance of a model. Attributes: Name Type Descriptionannotations List[LabelStudioAnnotation] The annotations of the task. cancelled_annotations int The number of cancelled annotations. comment_authors List[str] The authors of the comments. comment_count int The number of comments. created_at str The creation date of the task. data Optional[Dict[str, str]] The data of the task. drafts List[str] The drafts of the task. file_name str Extracts the original file name from the file upload. file_upload str The file upload of the task. id int The ID of the task. inner_id int The inner ID of the task. last_comment_updated_at Optional[str] The update date of the last comment. meta Optional[Dict[str, str]] The meta data of the task. predictions List[str] The predictions of the task. project int The project ID of the task. total_annotations int The total number of annotations. total_predictions int The total number of predictions. unresolved_comment_count int The number of unresolved comments. updated_at str The update date of the task. updated_by int The user ID of the user who updated the task. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.annotations","title":"annotationsinstance-attribute ","text":" The annotations of the task. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.cancelled_annotations","title":"cancelled_annotationsinstance-attribute ","text":" The number of cancelled annotations. 
"},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.comment_authors","title":"comment_authorsinstance-attribute ","text":" The authors of the comments. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.comment_count","title":"comment_countinstance-attribute ","text":" The number of comments. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.created_at","title":"created_atinstance-attribute ","text":" The creation date of the task. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.data","title":"datainstance-attribute ","text":" The data of the task. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.drafts","title":"draftsinstance-attribute ","text":" The drafts of the task. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.file_name","title":"file_nameproperty ","text":" Extracts the original file name from the file upload. File uploads are stored as instance-attribute ","text":" The file upload of the task. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.id","title":"idinstance-attribute ","text":" The ID of the task. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.inner_id","title":"inner_idinstance-attribute ","text":" The inner ID of the task. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.last_comment_updated_at","title":"last_comment_updated_atinstance-attribute ","text":" The update date of the last comment. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.meta","title":"metainstance-attribute ","text":" The meta data of the task. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.predictions","title":"predictionsinstance-attribute ","text":" The predictions of the task. 
"},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.project","title":"projectinstance-attribute ","text":" The project ID of the task. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.total_annotations","title":"total_annotationsinstance-attribute ","text":" The total number of annotations. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.total_predictions","title":"total_predictionsinstance-attribute ","text":" The total number of predictions. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.unresolved_comment_count","title":"unresolved_comment_countinstance-attribute ","text":" The number of unresolved comments. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.updated_at","title":"updated_atinstance-attribute ","text":" The update date of the task. "},{"location":"reference/gerd/models/label/#gerd.models.label.LabelStudioTask.updated_by","title":"updated_byinstance-attribute ","text":" The user ID of the user who updated the task. "},{"location":"reference/gerd/models/label/#gerd.models.label.load_label_studio_tasks","title":"load_label_studio_tasks","text":" Load Label Studio tasks from a JSON file. Parameters: Name Type Description Defaultstr The path to the JSON file. requiredReturns: Type DescriptionList[LabelStudioTask] The loaded Label Studio tasks Source code ingerd/models/label.py "},{"location":"reference/gerd/models/label/#gerd.models.label.load_label_studio_tasks(file_path)","title":"file_path ","text":""},{"location":"reference/gerd/models/logging/","title":"gerd.models.logging","text":""},{"location":"reference/gerd/models/logging/#gerd.models.logging","title":"gerd.models.logging","text":"Logging configuration and utilities. Classes: Name DescriptionLogLevel Wrapper for string-based log levels. LoggingConfig Configuration for logging. 
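The string-to-integer translation that LogLevel provides for Python's logging framework can be sketched with the standard library alone. This enum is an illustrative re-creation under the assumption that levels are stored as lowercase names; it is not gerd's actual class.

```python
import logging
from enum import Enum

class SimpleLogLevel(str, Enum):
    """Illustrative stand-in for gerd.models.logging.LogLevel."""

    DEBUG = "debug"
    INFO = "info"
    WARNING = "warning"
    ERROR = "error"

    def as_int(self) -> int:
        # Map the lowercase level name onto logging's integer constants,
        # e.g. "info" -> logging.INFO == 20.
        return logging.getLevelName(self.value.upper())

print(SimpleLogLevel.INFO.as_int())  # 20
```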
"},{"location":"reference/gerd/models/logging/#gerd.models.logging.LogLevel","title":"LogLevel","text":" Bases: Wrapper for string-based log levels. Translates log levels to integers for Python's logging framework. Methods: Name Descriptionas_int Convert the log level to an integer. "},{"location":"reference/gerd/models/logging/#gerd.models.logging.LogLevel.as_int","title":"as_int","text":" Convert the log level to an integer. Source code ingerd/models/logging.py "},{"location":"reference/gerd/models/logging/#gerd.models.logging.LoggingConfig","title":"LoggingConfig","text":" Bases: Configuration for logging. Attributes: Name Type Descriptionlevel LogLevel The log level. "},{"location":"reference/gerd/models/logging/#gerd.models.logging.LoggingConfig.level","title":"levelinstance-attribute ","text":" The log level. "},{"location":"reference/gerd/models/model/","title":"gerd.models.model","text":""},{"location":"reference/gerd/models/model/#gerd.models.model","title":"gerd.models.model","text":"Model configuration for supported model classes. Classes: Name DescriptionChatMessage Data structure for chat messages. ModelConfig Configuration for large language models. ModelEndpoint Configuration for model endpoints where models are hosted remotely. PromptConfig Configuration for prompts. Attributes: Name Type DescriptionChatRole Currently supported chat roles. EndpointType Endpoint for remote llm services. "},{"location":"reference/gerd/models/model/#gerd.models.model.ChatRole","title":"ChatRolemodule-attribute ","text":" Currently supported chat roles. "},{"location":"reference/gerd/models/model/#gerd.models.model.EndpointType","title":"EndpointTypemodule-attribute ","text":" Endpoint for remote llm services. "},{"location":"reference/gerd/models/model/#gerd.models.model.ChatMessage","title":"ChatMessage","text":" Bases: Data structure for chat messages. Attributes: Name Type Descriptioncontent str The content of the chat message. 
role ChatRole The role or source of the chat message. "},{"location":"reference/gerd/models/model/#gerd.models.model.ChatMessage.content","title":"contentinstance-attribute ","text":" The content of the chat message. "},{"location":"reference/gerd/models/model/#gerd.models.model.ChatMessage.role","title":"roleinstance-attribute ","text":" The role or source of the chat message. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig","title":"ModelConfig","text":" Bases: Configuration for large language models. Most llm libraries and/or services share common parameters for configuration. Explaining each parameter is out of scope for this documentation. The most essential parameters are explained for instance here. Default values have been chosen according to ctransformers library. Attributes: Name Type Descriptionbatch_size int The batch size for the generation. context_length int The context length for the model. Currently only LLaMA, MPT and Falcon endpoint Optional[ModelEndpoint] The endpoint of the model when hosted remotely. extra_kwargs Optional[dict[str, Any]] Additional keyword arguments for the model library. file Optional[str] The path to the model file. For local models only. gpu_layers int The number of layers to run on the GPU. last_n_tokens int The number of tokens to consider for the repetition penalty. loras set[Path] The list of additional LoRAs files to load. max_new_tokens int The maximum number of new tokens to generate. name str The name of the model. Can be a path to a local model or a huggingface handle. prompt_config PromptConfig The prompt configuration. prompt_setup List[Tuple[Literal['system', 'user', 'assistant'], PromptConfig]] A list of predefined prompts for the model. repetition_penalty float The repetition penalty. seed int The seed for the random number generator. stop Optional[List[str]] The stop tokens for the generation. stream bool Whether to stream the output. 
temperature float The temperature for the sampling. threads Optional[int] The number of threads to use for the generation. top_k int The number of tokens to consider for the top-k sampling. top_p float The cumulative probability for the top-p sampling. torch_dtype Optional[str] The torch data type for the model. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.batch_size","title":"batch_sizeclass-attribute instance-attribute ","text":" The batch size for the generation. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.context_length","title":"context_lengthclass-attribute instance-attribute ","text":" The context length for the model. Currently only LLaMA, MPT and Falcon "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.endpoint","title":"endpointclass-attribute instance-attribute ","text":" The endpoint of the model when hosted remotely. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.extra_kwargs","title":"extra_kwargsclass-attribute instance-attribute ","text":" Additional keyword arguments for the model library. The accepted keys and values depend on the model library used. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.file","title":"fileclass-attribute instance-attribute ","text":" The path to the model file. For local models only. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.gpu_layers","title":"gpu_layersclass-attribute instance-attribute ","text":" The number of layers to run on the GPU. The actual number is only used by llama.cpp. The other model libraries will determine whether to run on the GPU just by checking whether this value is larger than 0. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.last_n_tokens","title":"last_n_tokensclass-attribute instance-attribute ","text":" The number of tokens to consider for the repetition penalty. 
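As described for gpu_layers above, the backends interpret the value differently: llama.cpp offloads exactly that many layers, while the other libraries only check whether the value is positive. A hedged sketch of that dispatch (function and backend names are illustrative, not part of GERD):

```python
def resolve_gpu_usage(gpu_layers: int, backend: str) -> str:
    """Illustrate how a gpu_layers value could be interpreted per backend."""
    if gpu_layers <= 0:
        return "cpu"
    if backend == "llama.cpp":
        # llama.cpp uses the exact layer count
        return f"gpu ({gpu_layers} layers offloaded)"
    # other libraries treat any positive value as "run on the GPU"
    return "gpu"
```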
"},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.loras","title":"lorasclass-attribute instance-attribute ","text":" The list of additional LoRAs files to load. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.max_new_tokens","title":"max_new_tokensclass-attribute instance-attribute ","text":" The maximum number of new tokens to generate. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.name","title":"nameclass-attribute instance-attribute ","text":" The name of the model. Can be a path to a local model or a huggingface handle. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.prompt_config","title":"prompt_configclass-attribute instance-attribute ","text":" The prompt configuration. This is used to process the input passed to the services. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.prompt_setup","title":"prompt_setupclass-attribute instance-attribute ","text":" A list of predefined prompts for the model. When a model context is inialized or reset, this will be used to set up the context. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.repetition_penalty","title":"repetition_penaltyclass-attribute instance-attribute ","text":" The repetition penalty. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.seed","title":"seedclass-attribute instance-attribute ","text":" The seed for the random number generator. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.stop","title":"stopclass-attribute instance-attribute ","text":" The stop tokens for the generation. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.stream","title":"streamclass-attribute instance-attribute ","text":" Whether to stream the output. 
"},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.temperature","title":"temperatureclass-attribute instance-attribute ","text":" The temperature for the sampling. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.threads","title":"threadsclass-attribute instance-attribute ","text":" The number of threads to use for the generation. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.top_k","title":"top_kclass-attribute instance-attribute ","text":" The number of tokens to consider for the top-k sampling. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.top_p","title":"top_pclass-attribute instance-attribute ","text":" The cumulative probability for the top-p sampling. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelConfig.torch_dtype","title":"torch_dtypeclass-attribute instance-attribute ","text":" The torch data type for the model. "},{"location":"reference/gerd/models/model/#gerd.models.model.ModelEndpoint","title":"ModelEndpoint","text":" Bases: Configuration for model endpoints where models are hosted remotely. "},{"location":"reference/gerd/models/model/#gerd.models.model.PromptConfig","title":"PromptConfig","text":" Bases: Configuration for prompts. Methods: Name Descriptionformat Format the prompt with the given parameters. model_post_init Post-initialization hook for pyandic. Attributes: Name Type Descriptionis_template bool Whether the config uses jinja2 templates. parameters list[str] Retrieves and returns the parameters of the prompt. path Optional[str] The path to an external prompt file. template Optional[Template] Optional template of the prompt. This should follow the Jinja2 syntax. text str The text of the prompt. Can contain placeholders. "},{"location":"reference/gerd/models/model/#gerd.models.model.PromptConfig.is_template","title":"is_templateclass-attribute instance-attribute ","text":" Whether the config uses jinja2 templates. 
"},{"location":"reference/gerd/models/model/#gerd.models.model.PromptConfig.parameters","title":"parametersproperty ","text":" Retrieves and returns the parameters of the prompt. This happens on-the-fly and is not stored in the model. Returns: Type Descriptionlist[str] The parameters of the prompt. "},{"location":"reference/gerd/models/model/#gerd.models.model.PromptConfig.path","title":"pathclass-attribute instance-attribute ","text":" The path to an external prompt file. This will overload the values of text and/or template. "},{"location":"reference/gerd/models/model/#gerd.models.model.PromptConfig.template","title":"templateclass-attribute instance-attribute ","text":" Optional template of the prompt. This should follow the Jinja2 syntax. "},{"location":"reference/gerd/models/model/#gerd.models.model.PromptConfig.text","title":"textclass-attribute instance-attribute ","text":" The text of the prompt. Can contain placeholders. "},{"location":"reference/gerd/models/model/#gerd.models.model.PromptConfig.format","title":"format","text":" Format the prompt with the given parameters. Parameters: Name Type Description DefaultMapping[str, str | list[ChatMessage]] | None The parameters to format the prompt with. None Returns: Type Descriptionstr The formatted prompt Source code ingerd/models/model.py "},{"location":"reference/gerd/models/model/#gerd.models.model.PromptConfig.format(parameters)","title":"parameters ","text":""},{"location":"reference/gerd/models/model/#gerd.models.model.PromptConfig.model_post_init","title":"model_post_init","text":" Post-initialization hook for pyandic. When path is set, the text or template is read from the file and the template is created. Path ending with '.jinja2' will be treated as a template. If no path is set, the text parameter is used to initialize the template if is_template is set to True. 
Parameters: __context: The context of the model (not used) Source code ingerd/models/model.py "},{"location":"reference/gerd/models/qa/","title":"gerd.models.qa","text":""},{"location":"reference/gerd/models/qa/#gerd.models.qa","title":"gerd.models.qa","text":"Data definitions for QA model configuration. Classes: Name DescriptionAnalyzeConfig The configuration for the analyze service. EmbeddingConfig Embedding specific model configuration. QAConfig Configuration for the QA services. QAFeaturesConfig Configuration for the QA-specific features. "},{"location":"reference/gerd/models/qa/#gerd.models.qa.AnalyzeConfig","title":"AnalyzeConfig","text":" Bases: The configuration for the analyze service. Attributes: Name Type Descriptionmodel ModelConfig The model to be used for the analyze service. "},{"location":"reference/gerd/models/qa/#gerd.models.qa.AnalyzeConfig.model","title":"modelinstance-attribute ","text":" The model to be used for the analyze service. "},{"location":"reference/gerd/models/qa/#gerd.models.qa.EmbeddingConfig","title":"EmbeddingConfig","text":" Bases: Embedding specific model configuration. Attributes: Name Type Descriptionchunk_overlap int The overlap between chunks. chunk_size int The size of the chunks stored in the database. db_path Optional[str] The path to the database file. model ModelConfig The model used for the embedding. "},{"location":"reference/gerd/models/qa/#gerd.models.qa.EmbeddingConfig.chunk_overlap","title":"chunk_overlapinstance-attribute ","text":" The overlap between chunks. "},{"location":"reference/gerd/models/qa/#gerd.models.qa.EmbeddingConfig.chunk_size","title":"chunk_sizeinstance-attribute ","text":" The size of the chunks stored in the database. "},{"location":"reference/gerd/models/qa/#gerd.models.qa.EmbeddingConfig.db_path","title":"db_pathclass-attribute instance-attribute ","text":" The path to the database file. 
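The chunk_size and chunk_overlap settings above control how documents are split before embedding: consecutive chunks share chunk_overlap characters so content at the boundaries is not lost. A simplified character-based sketch of that windowing (GERD's actual splitter may work differently):

```python
def chunk_text(text: str, chunk_size: int, chunk_overlap: int) -> list[str]:
    """Split text into overlapping chunks (illustrative, character-based)."""
    if chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")
    step = chunk_size - chunk_overlap
    # each window starts `step` characters after the previous one,
    # so neighboring chunks overlap by `chunk_overlap` characters
    return [text[i : i + chunk_size] for i in range(0, max(len(text) - chunk_overlap, 1), step)]
```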
"},{"location":"reference/gerd/models/qa/#gerd.models.qa.EmbeddingConfig.model","title":"modelinstance-attribute ","text":" The model used for the embedding. This model should be rather small and fast to compute. Furthermore, not every model is suited for this task. "},{"location":"reference/gerd/models/qa/#gerd.models.qa.QAConfig","title":"QAConfig","text":" Bases: Configuration for the QA services. This model can be used to retrieve parameters from a variety of sources. The main source are YAML files (loaded as Methods: Name Descriptionsettings_customise_sources Customize the settings sources used by pydantic-settings. Attributes: Name Type Descriptiondevice str The device to run the model on. embedding EmbeddingConfig The configuration for the embedding service. features QAFeaturesConfig The configuration for the QA-specific features. model ModelConfig The model to be used for the QA service. "},{"location":"reference/gerd/models/qa/#gerd.models.qa.QAConfig.device","title":"deviceclass-attribute instance-attribute ","text":" The device to run the model on. "},{"location":"reference/gerd/models/qa/#gerd.models.qa.QAConfig.embedding","title":"embeddinginstance-attribute ","text":" The configuration for the embedding service. "},{"location":"reference/gerd/models/qa/#gerd.models.qa.QAConfig.features","title":"featuresinstance-attribute ","text":" The configuration for the QA-specific features. "},{"location":"reference/gerd/models/qa/#gerd.models.qa.QAConfig.model","title":"modelinstance-attribute ","text":" The model to be used for the QA service. "},{"location":"reference/gerd/models/qa/#gerd.models.qa.QAConfig.settings_customise_sources","title":"settings_customise_sourcesclassmethod ","text":" Customize the settings sources used by pydantic-settings. The order of the sources is important. The first source has the highest priority. Parameters: Name Type Description DefaultThe class of the settings. 
requiredPydanticBaseSettingsSource The settings from the initialization. requiredPydanticBaseSettingsSource The settings from the environment. requiredPydanticBaseSettingsSource The settings from the dotenv file. requiredPydanticBaseSettingsSource The settings from the secret file. requiredReturns: Type DescriptionTuple[PydanticBaseSettingsSource, ...] The customized settings sources. Source code ingerd/models/qa.py "},{"location":"reference/gerd/models/qa/#gerd.models.qa.QAConfig.settings_customise_sources(cls)","title":"cls ","text":""},{"location":"reference/gerd/models/qa/#gerd.models.qa.QAConfig.settings_customise_sources(init_settings)","title":"init_settings ","text":""},{"location":"reference/gerd/models/qa/#gerd.models.qa.QAConfig.settings_customise_sources(env_settings)","title":"env_settings ","text":""},{"location":"reference/gerd/models/qa/#gerd.models.qa.QAConfig.settings_customise_sources(dotenv_settings)","title":"dotenv_settings ","text":""},{"location":"reference/gerd/models/qa/#gerd.models.qa.QAConfig.settings_customise_sources(file_secret_settings)","title":"file_secret_settings ","text":""},{"location":"reference/gerd/models/qa/#gerd.models.qa.QAFeaturesConfig","title":"QAFeaturesConfig","text":" Bases: Configuration for the QA-specific features. Attributes: Name Type Descriptionanalyze AnalyzeConfig Configuration to extract letter of discharge information from the text. analyze_mult_prompts AnalyzeConfig Configuration to extract predefined infos with multiple prompts from the text. return_source bool Whether to return the source in the response. "},{"location":"reference/gerd/models/qa/#gerd.models.qa.QAFeaturesConfig.analyze","title":"analyzeinstance-attribute ","text":" Configuration to extract letter of discharge information from the text. 
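settings_customise_sources orders the configuration sources so that the first source wins. The effect of that priority ordering can be sketched with plain dictionaries (a conceptual illustration only, not pydantic-settings itself):

```python
def merge_settings(*sources: dict) -> dict:
    """Merge settings sources; earlier arguments have higher priority."""
    merged: dict = {}
    for source in reversed(sources):  # apply low-priority sources first...
        merged.update(source)         # ...so high-priority ones overwrite them
    return merged
```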
"},{"location":"reference/gerd/models/qa/#gerd.models.qa.QAFeaturesConfig.analyze_mult_prompts","title":"analyze_mult_promptsinstance-attribute ","text":" Configuration to extract predefined infos with multiple prompts from the text. "},{"location":"reference/gerd/models/qa/#gerd.models.qa.QAFeaturesConfig.return_source","title":"return_sourceinstance-attribute ","text":" Whether to return the source in the response. "},{"location":"reference/gerd/models/server/","title":"gerd.models.server","text":""},{"location":"reference/gerd/models/server/#gerd.models.server","title":"gerd.models.server","text":"Server configuration model for REST backends. Classes: Name DescriptionServerConfig Server configuration model for REST backends. "},{"location":"reference/gerd/models/server/#gerd.models.server.ServerConfig","title":"ServerConfig","text":" Bases: Server configuration model for REST backends. Attributes: Name Type Descriptionapi_prefix str The prefix of the API. host str The host of the server. port int The port of the server. "},{"location":"reference/gerd/models/server/#gerd.models.server.ServerConfig.api_prefix","title":"api_prefixinstance-attribute ","text":" The prefix of the API. "},{"location":"reference/gerd/models/server/#gerd.models.server.ServerConfig.host","title":"hostinstance-attribute ","text":" The host of the server. "},{"location":"reference/gerd/models/server/#gerd.models.server.ServerConfig.port","title":"portinstance-attribute ","text":" The port of the server. "},{"location":"reference/gerd/qa/","title":"gerd.qa","text":""},{"location":"reference/gerd/qa/#gerd.qa","title":"gerd.qa","text":"Services and utilities for retrieval augmented generation (RAG). Modules: Name Descriptionqa_service Implements the QAService class. "},{"location":"reference/gerd/qa/qa_service/","title":"gerd.qa.qa_service","text":""},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service","title":"gerd.qa.qa_service","text":"Implements the QAService class. 
The question and answer service is used to query a language model with questions related to a specific context. The context is usually a set of documents that are loaded into a vector store. Classes: Name DescriptionQAService The question and answer service class. "},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService","title":"QAService","text":" The question and answer service class. The service is initialized with a configuration. Depending on the configuration, the service will create a new in-memory vector store or load an existing one from a file. Parameters: Name Type Description DefaultQAConfig The configuration for the QA service requiredMethods: Name Descriptionadd_file Add a document to the vectorstore. analyze_mult_prompts_query Reads a set of data from doc. analyze_query Read a set of data from a set of documents. db_embedding Converts a question to an embedding. db_query Queries the vector store with a question. get_prompt_config Returns the prompt config for the given mode. query Pass a question to the language model. remove_file Removes a document from the vectorstore. set_prompt_config Sets the prompt config for the given mode. Source code ingerd/qa/qa_service.py "},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService(config)","title":"config ","text":""},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.add_file","title":"add_file","text":" Add a document to the vectorstore. Parameters: Name Type Description DefaultQAFileUpload The file to add to the vectorstore requiredReturns: Type DescriptionQAAnswer an answer object with status 200 if successful Source code ingerd/qa/qa_service.py "},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.add_file(file)","title":"file ","text":""},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.analyze_mult_prompts_query","title":"analyze_mult_prompts_query","text":" Reads a set of data from doc. 
Loads the data via multiple prompts by asking for each data field separately. Data - patient_name - patient_date_of_birth - attending_doctors - recording_date - release_date Returns: Type DescriptionQAAnalyzeAnswer The answer from the language model Source code ingerd/qa/qa_service.py "},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.analyze_query","title":"analyze_query","text":" Read a set of data from a set of documents. Loads the data via single prompt. Data - patient_name - patient_date_of_birth - attending_doctors - recording_date - release_date Returns: Type DescriptionQAAnalyzeAnswer The answer from the language model Source code ingerd/qa/qa_service.py "},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.db_embedding","title":"db_embedding","text":" Converts a question to an embedding. The embedding to be used is defined by the vector store or more specifically by the configured parameters passed to initialize the vector store. Parameters: Name Type Description DefaultQAQuestion The question to convert to an embedding. requiredReturns: Type DescriptionList[float] The embedding of the question Source code ingerd/qa/qa_service.py "},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.db_embedding(question)","title":"question ","text":""},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.db_query","title":"db_query","text":" Queries the vector store with a question. The number of sources that are returned is defined by the max_sources parameter of the service's configuration. Parameters: Name Type Description DefaultQAQuestion The question to query the vector store with. 
requiredReturns: Type DescriptionList[DocumentSource] A list of document sources Source code ingerd/qa/qa_service.py "},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.db_query(question)","title":"question ","text":""},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.get_prompt_config","title":"get_prompt_config","text":" Returns the prompt config for the given mode. Parameters: Name Type Description DefaultQAModesEnum The mode to get the prompt config for requiredReturns: Type DescriptionPromptConfig The prompt config for the given mode Source code ingerd/qa/qa_service.py "},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.get_prompt_config(qa_mode)","title":"qa_mode ","text":""},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.query","title":"query","text":" Pass a question to the language model. The language model will generate an answer based on the question and the context derived from the vector store. Parameters: Name Type Description DefaultQAQuestion The question to be answered requiredReturns: Type DescriptionQAAnswer The answer from the language model Source code ingerd/qa/qa_service.py "},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.query(question)","title":"question ","text":""},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.remove_file","title":"remove_file","text":" Removes a document from the vectorstore. Parameters: Name Type Description Defaultstr The name of the file to remove requiredReturns: Type DescriptionQAAnswer an answer object with status 200 if successful Source code ingerd/qa/qa_service.py "},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.remove_file(file_name)","title":"file_name ","text":""},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.set_prompt_config","title":"set_prompt_config","text":" Sets the prompt config for the given mode. 
Parameters: Name Type Description DefaultPromptConfig The prompt config to set requiredQAModesEnum The mode to set the prompt config for requiredReturns: Type DescriptionQAAnswer an answer object with status 200 if successful Source code ingerd/qa/qa_service.py "},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.set_prompt_config(config)","title":"config ","text":""},{"location":"reference/gerd/qa/qa_service/#gerd.qa.qa_service.QAService.set_prompt_config(qa_mode)","title":"qa_mode ","text":""},{"location":"reference/gerd/rag/","title":"gerd.rag","text":""},{"location":"reference/gerd/rag/#gerd.rag","title":"gerd.rag","text":"Retrieval-Augmented Generation (RAG) backend. This module provides the RAG backend for the GERD system which is currently based on FAISS. Classes: Name DescriptionRag The RAG backend for GERD. Functions: Name Descriptioncreate_faiss Create a new FAISS store from a list of documents. load_faiss Load a FAISS store from a disk path. "},{"location":"reference/gerd/rag/#gerd.rag.Rag","title":"Rag","text":" The RAG backend for GERD. The RAG backend will check for a context parameter in the prompt. If the context parameter is not included, a warning will be logged. Without the context parameter, no context will be added to the query. Parameters: Name Type Description DefaultLLM The LLM model to use requiredModelConfig The model configuration requiredPromptConfig The prompt configuration requiredFAISS The FAISS store to use requiredbool Whether to return the source documents requiredMethods: Name Descriptionquery Query the RAG backend with a question. 
Source code ingerd/rag.py "},{"location":"reference/gerd/rag/#gerd.rag.Rag(model)","title":"model ","text":""},{"location":"reference/gerd/rag/#gerd.rag.Rag(model_config)","title":"model_config ","text":""},{"location":"reference/gerd/rag/#gerd.rag.Rag(prompt)","title":"prompt ","text":""},{"location":"reference/gerd/rag/#gerd.rag.Rag(store)","title":"store ","text":""},{"location":"reference/gerd/rag/#gerd.rag.Rag(return_source)","title":"return_source ","text":""},{"location":"reference/gerd/rag/#gerd.rag.Rag.query","title":"query","text":" Query the RAG backend with a question. Parameters: Name Type Description DefaultQAQuestion The question to ask requiredReturns: Type DescriptionQAAnswer The answer to the question including the sources Source code ingerd/rag.py "},{"location":"reference/gerd/rag/#gerd.rag.Rag.query(question)","title":"question ","text":""},{"location":"reference/gerd/rag/#gerd.rag.create_faiss","title":"create_faiss","text":" Create a new FAISS store from a list of documents. Parameters: Name Type Description Defaultlist[Document] The list of documents to index requiredstr The name of the Hugging Face model to use for the embeddings requiredstr The device to use for the model requiredReturns: Type DescriptionFAISS The newly created FAISS store Source code ingerd/rag.py "},{"location":"reference/gerd/rag/#gerd.rag.create_faiss(documents)","title":"documents ","text":""},{"location":"reference/gerd/rag/#gerd.rag.create_faiss(model_name)","title":"model_name ","text":""},{"location":"reference/gerd/rag/#gerd.rag.create_faiss(device)","title":"device ","text":""},{"location":"reference/gerd/rag/#gerd.rag.load_faiss","title":"load_faiss","text":" Load a FAISS store from a disk path. 
Parameters: Name Type Description DefaultPath The disk path to load the store from requiredstr The name of the Hugging Face model to use for the embeddings requiredstr The device to use for the model requiredReturns: Type DescriptionFAISS The loaded FAISS store Source code ingerd/rag.py "},{"location":"reference/gerd/rag/#gerd.rag.load_faiss(dp_path)","title":"dp_path ","text":""},{"location":"reference/gerd/rag/#gerd.rag.load_faiss(model_name)","title":"model_name ","text":""},{"location":"reference/gerd/rag/#gerd.rag.load_faiss(device)","title":"device ","text":""},{"location":"reference/gerd/training/","title":"gerd.training","text":""},{"location":"reference/gerd/training/#gerd.training","title":"gerd.training","text":"Collections of training routines for GERD. Modules: Name Descriptiondata Data utilities for training and data processing. instruct Training module for instruction text sets. lora Configuration dataclasses for training LoRA models. trainer Training module for LoRA models. unstructured Training of LoRA models on unstructured text data. "},{"location":"reference/gerd/training/data/","title":"gerd.training.data","text":""},{"location":"reference/gerd/training/data/#gerd.training.data","title":"gerd.training.data","text":"Data utilities for training and data processing. Functions: Name Descriptiondespacyfy Removes spacy-specific tokens from a text. encode Encodes a text using a tokenizer. split_chunks Splits a list of encoded tokens into chunks of a given size. tokenize Converts a prompt into a tokenized input for a model. "},{"location":"reference/gerd/training/data/#gerd.training.data.despacyfy","title":"despacyfy","text":" Removes spacy-specific tokens from a text. For instance, -RRB- is replaced with ')', -LRB- with '(' and -UNK- with '*'. Parameters: Name Type Description Defaultstr The text to despacyfy. 
requiredReturns: Type Descriptionstr The despacyfied text Source code ingerd/training/data.py "},{"location":"reference/gerd/training/data/#gerd.training.data.despacyfy(text)","title":"text ","text":""},{"location":"reference/gerd/training/data/#gerd.training.data.encode","title":"encode","text":" Encodes a text using a tokenizer. Parameters: Name Type Description Defaultstr The text to encode requiredbool Whether to add the beginning of sentence token requiredPreTrainedTokenizer The tokenizer to use requiredint The maximum length of the encoded text requiredReturns: Type DescriptionList[int] The text encoded as a list of tokenizer tokens Source code ingerd/training/data.py "},{"location":"reference/gerd/training/data/#gerd.training.data.encode(text)","title":"text ","text":""},{"location":"reference/gerd/training/data/#gerd.training.data.encode(add_bos_token)","title":"add_bos_token ","text":""},{"location":"reference/gerd/training/data/#gerd.training.data.encode(tokenizer)","title":"tokenizer ","text":""},{"location":"reference/gerd/training/data/#gerd.training.data.encode(cutoff_len)","title":"cutoff_len ","text":""},{"location":"reference/gerd/training/data/#gerd.training.data.split_chunks","title":"split_chunks","text":" Splits a list of encoded tokens into chunks of a given size. Parameters: Name Type Description DefaultList[int] The list of encoded tokens. requiredint The size of the chunks. requiredint The step size for the chunks. 
requiredReturns: Type DescriptionNone A generator that yields the chunks Source code ingerd/training/data.py "},{"location":"reference/gerd/training/data/#gerd.training.data.split_chunks(arr)","title":"arr ","text":""},{"location":"reference/gerd/training/data/#gerd.training.data.split_chunks(size)","title":"size ","text":""},{"location":"reference/gerd/training/data/#gerd.training.data.split_chunks(step)","title":"step ","text":""},{"location":"reference/gerd/training/data/#gerd.training.data.tokenize","title":"tokenize","text":" Converts a prompt into a tokenized input for a model. The method returns the tokenized input as a dictionary with the keys \"input_ids\", \"labels\" and \"attention_mask\" where the input_ids are the tokenized input, the labels assign the same label ('1') to each token and the attention_mask masks out the padding tokens. Parameters: prompt: The prompt to tokenize tokenizer: The tokenizer to use cutoff_len: The maximum length of the encoded text append_eos_token: Whether to append an end of sentence token Returns: Type DescriptionDict[str, Tensor | list[int]] The tokenized input as a dictionary Source code ingerd/training/data.py "},{"location":"reference/gerd/training/instruct/","title":"gerd.training.instruct","text":""},{"location":"reference/gerd/training/instruct/#gerd.training.instruct","title":"gerd.training.instruct","text":"Training module for instruction text sets. In contrast to the Classes: Name DescriptionInstructTrainingData Dataclass to hold training data for instruction text sets. InstructTrainingSample Dataclass to hold a training sample for instruction text sets. Functions: Name Descriptiontrain_lora Train a LoRA model on instruction text sets. "},{"location":"reference/gerd/training/instruct/#gerd.training.instruct.InstructTrainingData","title":"InstructTrainingData","text":" Bases: Dataclass to hold training data for instruction text sets. A training data object consists of a list of training samples. 
Attributes: Name Type Descriptionsamples list[InstructTrainingSample] The list of training samples. "},{"location":"reference/gerd/training/instruct/#gerd.training.instruct.InstructTrainingData.samples","title":"samplesclass-attribute instance-attribute ","text":" The list of training samples. "},{"location":"reference/gerd/training/instruct/#gerd.training.instruct.InstructTrainingSample","title":"InstructTrainingSample","text":" Bases: Dataclass to hold a training sample for instruction text sets. A training sample consists of a list of chat messages. Attributes: Name Type Descriptionmessages list[ChatMessage] The list of chat messages. "},{"location":"reference/gerd/training/instruct/#gerd.training.instruct.InstructTrainingSample.messages","title":"messagesinstance-attribute ","text":" The list of chat messages. "},{"location":"reference/gerd/training/instruct/#gerd.training.instruct.train_lora","title":"train_lora","text":" Train a LoRA model on instruction text sets. Parameters: Name Type Description Defaultstr | LoraTrainingConfig The configuration name or the configuration itself requiredInstructTrainingData | None The training data to train on, if None, the input_glob from the config is used None Returns: Type DescriptionTrainer The trainer instance that is used for training Source code ingerd/training/instruct.py "},{"location":"reference/gerd/training/instruct/#gerd.training.instruct.train_lora(config)","title":"config ","text":""},{"location":"reference/gerd/training/instruct/#gerd.training.instruct.train_lora(data)","title":"data ","text":""},{"location":"reference/gerd/training/lora/","title":"gerd.training.lora","text":""},{"location":"reference/gerd/training/lora/#gerd.training.lora","title":"gerd.training.lora","text":"Configuration dataclasses for training LoRA models. Classes: Name DescriptionLLMModelProto Protocol for the LoRA model. LoraModules Configuration for the modules to be trained in LoRA models. 
LoraTrainingConfig Configuration for training LoRA models. TrainingFlags Training flags for LoRA models. Functions: Name Descriptionload_training_config Load the LLM model configuration. "},{"location":"reference/gerd/training/lora/#gerd.training.lora.LLMModelProto","title":"LLMModelProto","text":" Bases: Protocol for the LoRA model. A model needs to implement the named_modules method to be used in LoRA training. Methods: Name Descriptionnamed_modules Get the named modules of the model. "},{"location":"reference/gerd/training/lora/#gerd.training.lora.LLMModelProto.named_modules","title":"named_modules","text":" Get the named modules of the model. Returns: Type Descriptionlist[tuple[str, Module]] The named modules. Source code ingerd/training/lora.py "},{"location":"reference/gerd/training/lora/#gerd.training.lora.LoraModules","title":"LoraModules","text":" Bases: Configuration for the modules to be trained in LoRA models. Methods: Name Descriptiontarget_modules Get the target modules for the given model. "},{"location":"reference/gerd/training/lora/#gerd.training.lora.LoraModules.target_modules","title":"target_modules","text":" Get the target modules for the given model. Parameters: Name Type Description DefaultLLMModelProto The model to be trained. requiredReturns: Type DescriptionList[str] The list of target modules Source code ingerd/training/lora.py "},{"location":"reference/gerd/training/lora/#gerd.training.lora.LoraModules.target_modules(model)","title":"model ","text":""},{"location":"reference/gerd/training/lora/#gerd.training.lora.LoraTrainingConfig","title":"LoraTrainingConfig","text":" Bases: Configuration for training LoRA models. Methods: Name Descriptionmodel_post_init Post-initialization hook for the model. reset_tokenizer Resets the tokenizer. settings_customise_sources Customize the settings sources used by pydantic-settings. Attributes: Name Type Descriptiontokenizer PreTrainedTokenizer Get the tokenizer for the model. 
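The interplay between LLMModelProto.named_modules and LoraModules.target_modules can be sketched in a few lines. TinyModel and the `enabled` tuple below are hypothetical stand-ins invented for this sketch; the real gerd implementation inspects actual transformers modules and its own configuration flags:

```python
from typing import Protocol


class LLMModelProto(Protocol):
    def named_modules(self): ...


class TinyModel:
    def named_modules(self):
        # Mimics torch.nn.Module.named_modules(): (qualified name, module) pairs
        return [
            ("", self),
            ("layers.0.q_proj", object()),
            ("layers.0.v_proj", object()),
            ("layers.0.mlp", object()),
        ]


def target_modules(model: LLMModelProto, enabled=("q_proj", "v_proj")) -> list:
    # Keep only module-name suffixes that are enabled as LoRA targets
    suffixes = {name.rsplit(".", 1)[-1] for name, _ in model.named_modules() if name}
    return sorted(s for s in suffixes if s in enabled)
```

Because only named_modules is required, any object satisfying the protocol (not just a torch model) can be passed to target module selection.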
"},{"location":"reference/gerd/training/lora/#gerd.training.lora.LoraTrainingConfig.tokenizer","title":"tokenizerproperty ","text":" Get the tokenizer for the model. "},{"location":"reference/gerd/training/lora/#gerd.training.lora.LoraTrainingConfig.model_post_init","title":"model_post_init","text":" Post-initialization hook for the model. This method currently checks whether cutoff is larger than overlap. Source code ingerd/training/lora.py "},{"location":"reference/gerd/training/lora/#gerd.training.lora.LoraTrainingConfig.reset_tokenizer","title":"reset_tokenizer","text":" Resets the tokenizer. When a tokenizer has been used, it needs to be reset before changing parameters to avoid issues with parallelism. Source code ingerd/training/lora.py "},{"location":"reference/gerd/training/lora/#gerd.training.lora.LoraTrainingConfig.settings_customise_sources","title":"settings_customise_sources classmethod ","text":" Customize the settings sources used by pydantic-settings. The order of the sources is important. The first source has the highest priority. Parameters: Name Type Description DefaultThe class of the settings. requiredPydanticBaseSettingsSource The settings from the initialization. requiredPydanticBaseSettingsSource The settings from the environment. requiredPydanticBaseSettingsSource The settings from the dotenv file. requiredPydanticBaseSettingsSource The settings from the secret file. requiredReturns: Type Descriptiontuple[PydanticBaseSettingsSource, ...] The customized settings sources. 
Source code ingerd/training/lora.py "},{"location":"reference/gerd/training/lora/#gerd.training.lora.LoraTrainingConfig.settings_customise_sources(cls)","title":"cls ","text":""},{"location":"reference/gerd/training/lora/#gerd.training.lora.LoraTrainingConfig.settings_customise_sources(init_settings)","title":"init_settings ","text":""},{"location":"reference/gerd/training/lora/#gerd.training.lora.LoraTrainingConfig.settings_customise_sources(env_settings)","title":"env_settings ","text":""},{"location":"reference/gerd/training/lora/#gerd.training.lora.LoraTrainingConfig.settings_customise_sources(dotenv_settings)","title":"dotenv_settings ","text":""},{"location":"reference/gerd/training/lora/#gerd.training.lora.LoraTrainingConfig.settings_customise_sources(file_secret_settings)","title":"file_secret_settings ","text":""},{"location":"reference/gerd/training/lora/#gerd.training.lora.TrainingFlags","title":"TrainingFlags","text":" Bases: Training flags for LoRA models. "},{"location":"reference/gerd/training/lora/#gerd.training.lora.load_training_config","title":"load_training_config","text":" Load the LLM model configuration. Parameters: Name Type Description Defaultstr The name of the configuration. requiredReturns: Type DescriptionLoraTrainingConfig The model configuration. Source code ingerd/training/lora.py "},{"location":"reference/gerd/training/lora/#gerd.training.lora.load_training_config(config)","title":"config ","text":""},{"location":"reference/gerd/training/trainer/","title":"gerd.training.trainer","text":""},{"location":"reference/gerd/training/trainer/#gerd.training.trainer","title":"gerd.training.trainer","text":"Training module for LoRA models. Can be used to train LoRA models on structured or unstructured data. Classes: Name DescriptionCallbacks Custom callbacks for the LoRA training. Tracked Dataclass to track the training progress. Trainer The LoRA trainer class. 
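Since Trainer.train runs asynchronously and Tracked carries the shared state that the Callbacks update, the control flow of the trainer module can be modelled with plain threading. This is a minimal conceptual sketch, not gerd code; the config and lora_model fields are omitted because they require heavy dependencies:

```python
import threading
from dataclasses import dataclass, field


@dataclass
class Tracked:
    # Mirrors the documented tracking fields (config and lora_model omitted)
    current_steps: int = 0
    max_steps: int = 0
    interrupted: bool = False
    did_save: bool = False
    train_log: dict = field(default_factory=dict)


def train(tracked: Tracked) -> threading.Thread:
    # Like Trainer.train, the loop runs in a thread that is returned to the caller
    def loop():
        for step in range(tracked.max_steps):
            if tracked.interrupted:  # checked each step, as the Callbacks do
                break
            tracked.current_steps = step + 1

    t = threading.Thread(target=loop)
    t.start()
    return t


tracked = Tracked(max_steps=5)
train(tracked).join()
```

Interrupting works the same way in spirit: a flag on the shared tracking object is set, and the per-step callbacks observe it on the next step, which matches the documented interruption checks in on_step_begin and on_substep_end.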
"},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks","title":"Callbacks","text":" Bases: Custom callbacks for the LoRA training. Initialize the callbacks based on tracking data config. Parameters: Name Type Description DefaultTracked The tracking data requiredMethods: Name Descriptionon_log Callback to log the training progress. on_save Saves the training log when the model is saved. on_step_begin Update the training progress. on_substep_end Update the training progress and check for interruption. Source code ingerd/training/trainer.py "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks(tracked)","title":"tracked ","text":""},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks.on_log","title":"on_log","text":" Callback to log the training progress. Parameters: Name Type Description DefaultTrainingArguments The training arguments (not used) requiredTrainerState The trainer state (not used) requiredTrainerControl The trainer control requiredDict The training logs required Source code ingerd/training/trainer.py "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks.on_log(_args)","title":"_args ","text":""},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks.on_log(_state)","title":"_state ","text":""},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks.on_log(control)","title":"control ","text":""},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks.on_log(logs)","title":"logs ","text":""},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks.on_save","title":"on_save","text":" Saves the training log when the model is saved. Source code ingerd/training/trainer.py "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks.on_step_begin","title":"on_step_begin","text":" Update the training progress. 
This callback updates the current training steps and checks if the training was interrupted. Parameters: Name Type Description DefaultTrainingArguments The training arguments (not used) requiredTrainerState The trainer state requiredTrainerControl The trainer control required Source code ingerd/training/trainer.py "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks.on_step_begin(_args)","title":"_args ","text":""},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks.on_step_begin(state)","title":"state ","text":""},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks.on_step_begin(control)","title":"control ","text":""},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks.on_substep_end","title":"on_substep_end","text":" Update the training progress and check for interruption. Parameters: Name Type Description DefaultTrainingArguments The training arguments (not used) requiredTrainerState The trainer state (not used) requiredTrainerControl The trainer control required Source code ingerd/training/trainer.py "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks.on_substep_end(_args)","title":"_args ","text":""},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks.on_substep_end(_state)","title":"_state ","text":""},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Callbacks.on_substep_end(control)","title":"control ","text":""},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Tracked","title":"Tracked dataclass ","text":" Dataclass to track the training progress. Attributes: Name Type Descriptionconfig LoraTrainingConfig The training configuration. current_steps int The current training steps. did_save bool Whether the model was saved. interrupted bool Whether the training was interrupted. lora_model PeftModel The training model. 
max_steps int The maximum number of training steps. train_log Dict The training log. "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Tracked.config","title":"configinstance-attribute ","text":" The training configuration. "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Tracked.current_steps","title":"current_stepsclass-attribute instance-attribute ","text":" The current training steps. "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Tracked.did_save","title":"did_saveclass-attribute instance-attribute ","text":" Whether the model was saved. "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Tracked.interrupted","title":"interruptedclass-attribute instance-attribute ","text":" Whether the training was interrupted. "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Tracked.lora_model","title":"lora_modelinstance-attribute ","text":" The training model. "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Tracked.max_steps","title":"max_stepsclass-attribute instance-attribute ","text":" The maximum number of training steps. "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Tracked.train_log","title":"train_logclass-attribute instance-attribute ","text":" The training log. "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Trainer","title":"Trainer","text":" The LoRA trainer class. This class is used to train LoRA models on structured or unstructured data. Since the training process is asynchronous, the trainer can be used to track or interrupt the training process. The LoRA trainer requires a configuration and an optional list of callbacks. If no callbacks are provided, the default Callbacks class Methods: Name Descriptioninterrupt Interrupt the training process. save Save the model and log files to the path set in the trainer configuration. 
setup_training Set up the training process and initialize the transformer trainer. train Start the training process. Source code ingerd/training/trainer.py "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Trainer.interrupt","title":"interrupt","text":" Interrupt the training process. Source code ingerd/training/trainer.py "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Trainer.save","title":"save","text":" Save the model and log files to the path set in the trainer configuration. When the gerd/training/trainer.py "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Trainer.setup_training","title":"setup_training","text":" Set up the training process and initialize the transformer trainer. Parameters: Name Type Description DefaultDataset The training data requiredDict The training template requiredbool Whether to use torch compile False Source code in gerd/training/trainer.py "},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Trainer.setup_training(train_data)","title":"train_data ","text":""},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Trainer.setup_training(train_template)","title":"train_template ","text":""},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Trainer.setup_training(torch_compile)","title":"torch_compile ","text":""},{"location":"reference/gerd/training/trainer/#gerd.training.trainer.Trainer.train","title":"train","text":" Start the training process. Returns: Type DescriptionThread The training thread Source code ingerd/training/trainer.py "},{"location":"reference/gerd/training/unstructured/","title":"gerd.training.unstructured","text":""},{"location":"reference/gerd/training/unstructured/#gerd.training.unstructured","title":"gerd.training.unstructured","text":"Training of LoRA models on unstructured text data. This module provides functions to train LoRA models to 'imitate' the style of a given text corpus. 
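When no texts are passed to unstructured training, the corpus is loaded via the input_glob from the configuration. The glob-loading step can be sketched as follows; load_texts is a hypothetical helper written for this illustration, and the real gerd code additionally chunks and tokenizes the corpus:

```python
import tempfile
from pathlib import Path


def load_texts(input_glob: str, root: str) -> list:
    # Read every file matched by the glob pattern into a list of raw texts
    return [p.read_text(encoding="utf-8") for p in sorted(Path(root).glob(input_glob))]


# Toy corpus in a temporary directory
root = tempfile.mkdtemp()
Path(root, "a.txt").write_text("first letter of discharge", encoding="utf-8")
Path(root, "b.txt").write_text("second letter of discharge", encoding="utf-8")
texts = load_texts("*.txt", root)
```

Passing an explicit list of texts instead simply bypasses this loading step, which is useful for testing or for corpora assembled in memory.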
Functions: Name Descriptiontrain_lora Train a LoRA model on unstructured text data. "},{"location":"reference/gerd/training/unstructured/#gerd.training.unstructured.train_lora","title":"train_lora","text":" Train a LoRA model on unstructured text data. Parameters: Name Type Description Defaultstr | LoraTrainingConfig The configuration name or the configuration itself requiredlist[str] | None The list of texts to train on, if None, the input_glob from the config is used None Returns: Type DescriptionTrainer The trainer instance that is used for training Source code ingerd/training/unstructured.py "},{"location":"reference/gerd/training/unstructured/#gerd.training.unstructured.train_lora(config)","title":"config ","text":""},{"location":"reference/gerd/training/unstructured/#gerd.training.unstructured.train_lora(texts)","title":"texts ","text":""},{"location":"reference/gerd/transport/","title":"gerd.transport","text":""},{"location":"reference/gerd/transport/#gerd.transport","title":"gerd.transport","text":"Module to define the transport protocol. The transport protocol is used to connect the backend and frontend services. Implementations of the transport protocol can be found in the Classes: Name DescriptionDocumentSource Dataclass to hold a document source. FileTypes Enum to hold all supported file types. GenResponse Dataclass to hold a response from the generation service. QAAnalyzeAnswer Dataclass to hold an answer from the predefined queries to the QA service. QAAnswer Dataclass to hold an answer from the QA service. QAFileUpload Dataclass to hold a file upload. QAModesEnum Enum to hold all supported QA modes. QAPromptConfig Prompt configuration for the QA service. QAQuestion Dataclass to hold a question for the QA service. Transport Transport protocol to connect backend and frontend services. "},{"location":"reference/gerd/transport/#gerd.transport.DocumentSource","title":"DocumentSource","text":" Bases: Dataclass to hold a document source. 
Attributes: Name Type Descriptioncontent str The content of the document. name str The name of the document. page int The page of the document. query str The query that was used to find the document. "},{"location":"reference/gerd/transport/#gerd.transport.DocumentSource.content","title":"contentinstance-attribute ","text":" The content of the document. "},{"location":"reference/gerd/transport/#gerd.transport.DocumentSource.name","title":"nameinstance-attribute ","text":" The name of the document. "},{"location":"reference/gerd/transport/#gerd.transport.DocumentSource.page","title":"pageinstance-attribute ","text":" The page of the document. "},{"location":"reference/gerd/transport/#gerd.transport.DocumentSource.query","title":"queryinstance-attribute ","text":" The query that was used to find the document. "},{"location":"reference/gerd/transport/#gerd.transport.FileTypes","title":"FileTypes","text":" Bases: Enum to hold all supported file types. Attributes: Name Type DescriptionPDF PDF file type. TEXT Text file type. "},{"location":"reference/gerd/transport/#gerd.transport.FileTypes.PDF","title":"PDFclass-attribute instance-attribute ","text":" PDF file type. "},{"location":"reference/gerd/transport/#gerd.transport.FileTypes.TEXT","title":"TEXTclass-attribute instance-attribute ","text":" Text file type. "},{"location":"reference/gerd/transport/#gerd.transport.GenResponse","title":"GenResponse","text":" Bases: Dataclass to hold a response from the generation service. Attributes: Name Type Descriptionerror_msg str The error message if the status code is not 200. prompt str | None The custom prompt that was used to generate the text. status int The status code of the response. text str The generated text if the status code is 200. "},{"location":"reference/gerd/transport/#gerd.transport.GenResponse.error_msg","title":"error_msgclass-attribute instance-attribute ","text":" The error message if the status code is not 200. 
"},{"location":"reference/gerd/transport/#gerd.transport.GenResponse.prompt","title":"promptclass-attribute instance-attribute ","text":" The custom prompt that was used to generate the text. "},{"location":"reference/gerd/transport/#gerd.transport.GenResponse.status","title":"statusclass-attribute instance-attribute ","text":" The status code of the response. "},{"location":"reference/gerd/transport/#gerd.transport.GenResponse.text","title":"textclass-attribute instance-attribute ","text":" The generated text if the status code is 200. "},{"location":"reference/gerd/transport/#gerd.transport.QAAnalyzeAnswer","title":"QAAnalyzeAnswer","text":" Bases: Dataclass to hold an answer from the predefined queries to the QA service. Attributes: Name Type Descriptionerror_msg str The error message of the answer if the status code is not 200. status int The status code of the answer. "},{"location":"reference/gerd/transport/#gerd.transport.QAAnalyzeAnswer.error_msg","title":"error_msgclass-attribute instance-attribute ","text":" The error message of the answer if the status code is not 200. "},{"location":"reference/gerd/transport/#gerd.transport.QAAnalyzeAnswer.status","title":"statusclass-attribute instance-attribute ","text":" The status code of the answer. "},{"location":"reference/gerd/transport/#gerd.transport.QAAnswer","title":"QAAnswer","text":" Bases: Dataclass to hold an answer from the QA service. Attributes: Name Type Descriptionerror_msg str The error message of the answer if the status code is not 200. response str The response of the answer. sources List[DocumentSource] The sources of the answer. status int The status code of the answer. "},{"location":"reference/gerd/transport/#gerd.transport.QAAnswer.error_msg","title":"error_msgclass-attribute instance-attribute ","text":" The error message of the answer if the status code is not 200. 
"},{"location":"reference/gerd/transport/#gerd.transport.QAAnswer.response","title":"responseclass-attribute instance-attribute ","text":" The response of the answer. "},{"location":"reference/gerd/transport/#gerd.transport.QAAnswer.sources","title":"sourcesclass-attribute instance-attribute ","text":" The sources of the answer. "},{"location":"reference/gerd/transport/#gerd.transport.QAAnswer.status","title":"statusclass-attribute instance-attribute ","text":" The status code of the answer. "},{"location":"reference/gerd/transport/#gerd.transport.QAFileUpload","title":"QAFileUpload","text":" Bases: Dataclass to hold a file upload. Attributes: Name Type Descriptiondata bytes The file data. name str The name of the file. "},{"location":"reference/gerd/transport/#gerd.transport.QAFileUpload.data","title":"datainstance-attribute ","text":" The file data. "},{"location":"reference/gerd/transport/#gerd.transport.QAFileUpload.name","title":"nameinstance-attribute ","text":" The name of the file. "},{"location":"reference/gerd/transport/#gerd.transport.QAModesEnum","title":"QAModesEnum","text":" Bases: Enum to hold all supported QA modes. Attributes: Name Type DescriptionANALYZE Analyze mode. ANALYZE_MULT_PROMPTS Analyze multiple prompts mode. NONE No mode. SEARCH Search mode. "},{"location":"reference/gerd/transport/#gerd.transport.QAModesEnum.ANALYZE","title":"ANALYZEclass-attribute instance-attribute ","text":" Analyze mode. "},{"location":"reference/gerd/transport/#gerd.transport.QAModesEnum.ANALYZE_MULT_PROMPTS","title":"ANALYZE_MULT_PROMPTSclass-attribute instance-attribute ","text":" Analyze multiple prompts mode. "},{"location":"reference/gerd/transport/#gerd.transport.QAModesEnum.NONE","title":"NONEclass-attribute instance-attribute ","text":" No mode. "},{"location":"reference/gerd/transport/#gerd.transport.QAModesEnum.SEARCH","title":"SEARCHclass-attribute instance-attribute ","text":" Search mode. 
"},{"location":"reference/gerd/transport/#gerd.transport.QAPromptConfig","title":"QAPromptConfig","text":" Bases: Prompt configuration for the QA service. Attributes: Name Type Descriptionconfig PromptConfig The prompt configuration. mode QAModesEnum The mode to set the prompt configuration for. "},{"location":"reference/gerd/transport/#gerd.transport.QAPromptConfig.config","title":"configinstance-attribute ","text":" The prompt configuration. "},{"location":"reference/gerd/transport/#gerd.transport.QAPromptConfig.mode","title":"modeinstance-attribute ","text":" The mode to set the prompt configuration for. "},{"location":"reference/gerd/transport/#gerd.transport.QAQuestion","title":"QAQuestion","text":" Bases: Dataclass to hold a question for the QA service. Attributes: Name Type Descriptionmax_sources int The maximum number of sources to return. question str The question to ask the QA service. search_strategy str The search strategy to use. "},{"location":"reference/gerd/transport/#gerd.transport.QAQuestion.max_sources","title":"max_sourcesclass-attribute instance-attribute ","text":" The maximum number of sources to return. "},{"location":"reference/gerd/transport/#gerd.transport.QAQuestion.question","title":"questioninstance-attribute ","text":" The question to ask the QA service. "},{"location":"reference/gerd/transport/#gerd.transport.QAQuestion.search_strategy","title":"search_strategyclass-attribute instance-attribute ","text":" The search strategy to use. "},{"location":"reference/gerd/transport/#gerd.transport.Transport","title":"Transport","text":" Bases: Transport protocol to connect backend and frontend services. Transport should be implemented by a class that provides the necessary methods to interact with the backend. Methods: Name Descriptionadd_file Add a file to the vector store. analyze_mult_prompts_query Queries the vector store with a set of predefined queries. analyze_query Queries the vector store with a predefined query. 
db_embedding Converts a question to an embedding. db_query Queries the vector store with a question. generate Generates text with the generation service. get_gen_prompt Gets the prompt configuration for the generation service. get_qa_prompt Gets the prompt configuration for a mode of the QA service. qa_query Query the QA service with a question. remove_file Remove a file from the vector store. set_gen_prompt Sets the prompt configuration for the generation service. set_qa_prompt Sets the prompt configuration for the QA service. "},{"location":"reference/gerd/transport/#gerd.transport.Transport.add_file","title":"add_file","text":" Add a file to the vector store. The returned answer has a status code of 200 if the file was added successfully. Parameters: file: The file to add to the vector store. Returns: Type DescriptionQAAnswer The answer from the QA service Source code ingerd/transport.py "},{"location":"reference/gerd/transport/#gerd.transport.Transport.analyze_mult_prompts_query","title":"analyze_mult_prompts_query","text":" Queries the vector store with a set of predefined queries. In contrast to Returns: Type DescriptionQAAnalyzeAnswer The answer from the QA service. Source code ingerd/transport.py "},{"location":"reference/gerd/transport/#gerd.transport.Transport.analyze_query","title":"analyze_query","text":" Queries the vector store with a predefined query. The query should return vital information gathered from letters of discharge. Returns: Type DescriptionQAAnalyzeAnswer The answer from the QA service. Source code ingerd/transport.py "},{"location":"reference/gerd/transport/#gerd.transport.Transport.db_embedding","title":"db_embedding","text":" Converts a question to an embedding. The embedding is defined by the vector store. Parameters: Name Type Description DefaultQAQuestion The question to convert to an embedding. 
requiredReturns: Type DescriptionList[float] The embedding of the question Source code ingerd/transport.py "},{"location":"reference/gerd/transport/#gerd.transport.Transport.db_embedding(question)","title":"question ","text":""},{"location":"reference/gerd/transport/#gerd.transport.Transport.db_query","title":"db_query","text":" Queries the vector store with a question. Parameters: Name Type Description DefaultQAQuestion The question to query the vector store with. requiredReturns: Type DescriptionList[DocumentSource] A list of document sources Source code ingerd/transport.py "},{"location":"reference/gerd/transport/#gerd.transport.Transport.db_query(question)","title":"question ","text":""},{"location":"reference/gerd/transport/#gerd.transport.Transport.generate","title":"generate","text":" Generates text with the generation service. Parameters: Name Type Description DefaultDict[str, str] The parameters to generate text with requiredReturns: Type DescriptionGenResponse The generation result Source code ingerd/transport.py "},{"location":"reference/gerd/transport/#gerd.transport.Transport.generate(parameters)","title":"parameters ","text":""},{"location":"reference/gerd/transport/#gerd.transport.Transport.get_gen_prompt","title":"get_gen_prompt","text":" Gets the prompt configuration for the generation service. Returns: Type DescriptionPromptConfig The current prompt configuration Source code ingerd/transport.py "},{"location":"reference/gerd/transport/#gerd.transport.Transport.get_qa_prompt","title":"get_qa_prompt","text":" Gets the prompt configuration for a mode of the QA service. 
Parameters: Name Type Description DefaultQAModesEnum The mode to get the prompt configuration for requiredReturns: Type DescriptionPromptConfig The prompt configuration for the QA service Source code ingerd/transport.py "},{"location":"reference/gerd/transport/#gerd.transport.Transport.get_qa_prompt(qa_mode)","title":"qa_mode ","text":""},{"location":"reference/gerd/transport/#gerd.transport.Transport.qa_query","title":"qa_query","text":" Query the QA service with a question. Parameters: Name Type Description DefaultQAQuestion The question to query the QA service with. requiredReturns: Type DescriptionQAAnswer The answer from the QA service. Source code ingerd/transport.py "},{"location":"reference/gerd/transport/#gerd.transport.Transport.qa_query(query)","title":"query ","text":""},{"location":"reference/gerd/transport/#gerd.transport.Transport.remove_file","title":"remove_file","text":" Remove a file from the vector store. The returned answer has a status code of 200 if the file was removed successfully. Parameters: file_name: The name of the file to remove from the vector store. Returns: Type DescriptionQAAnswer The answer from the QA service Source code ingerd/transport.py "},{"location":"reference/gerd/transport/#gerd.transport.Transport.set_gen_prompt","title":"set_gen_prompt","text":" Sets the prompt configuration for the generation service. The prompt configuration that is returned should in most cases be the same as the one that was set. Parameters: config: The prompt configuration to set Returns: Type DescriptionPromptConfig The prompt configuration that was set Source code ingerd/transport.py "},{"location":"reference/gerd/transport/#gerd.transport.Transport.set_qa_prompt","title":"set_qa_prompt","text":" Sets the prompt configuration for the QA service. Since the QA service uses multiple prompt configurations, the mode should be specified. 
For more details, see the documentation of Parameters: Name Type Description DefaultPromptConfig The prompt configuration to set requiredQAModesEnum The mode to set the prompt configuration for requiredReturns: Type DescriptionQAAnswer The answer from the QA service Source code ingerd/transport.py "},{"location":"reference/gerd/transport/#gerd.transport.Transport.set_qa_prompt(config)","title":"config ","text":""},{"location":"reference/gerd/transport/#gerd.transport.Transport.set_qa_prompt(qa_mode)","title":"qa_mode ","text":""}]}
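The Transport contract above can be expressed as a typing.Protocol: any backend satisfies it simply by providing the documented methods. DirectTransport below is a hypothetical stub written for this sketch (not a gerd implementation), covering only two of the documented methods and using plain dicts in place of GenResponse/QAAnswer:

```python
from typing import Dict, Protocol, runtime_checkable


@runtime_checkable
class TransportProto(Protocol):
    # Two of the documented Transport methods, for illustration
    def generate(self, parameters: Dict[str, str]): ...
    def qa_query(self, query): ...


class DirectTransport:
    """Hypothetical in-process backend; dicts stand in for GenResponse/QAAnswer."""

    def generate(self, parameters: Dict[str, str]):
        return {"status": 200, "text": "generated text"}

    def qa_query(self, query):
        return {"status": 200, "response": "answer", "sources": []}


backend = DirectTransport()
```

A frontend written against TransportProto works unchanged whether the backend runs in-process or behind a REST client, which is the point of defining the contract as a protocol.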
\ No newline at end of file