# RAG / Finetune

## QA generators

The `QA_generator_*.ipynb` notebooks generate synthetic question/answer JSON files from a given golden context. Each generated record includes:

- `id`: `original-file-location_seed_task_x_y`, where `x` is the ID of the golden context chunk and `y` is the index of the generated question (if we generate 3 questions per chunk, then `y` = 0, 1, 2).
- `context`: the distractor contexts; may also include the golden context (with probability p = 0.8).
- `golden_context`: the context that was used to generate the question/answer pair.
- `cot_answer`: the full chain-of-thought answer.
- `answer`: only the final answer.

Example file
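The record layout above can be sketched in a few lines of Python. This is only an illustration of the field structure described here, not the notebook's actual code; `build_record` and its arguments are hypothetical names, and a `question` field is assumed for completeness.

```python
import random

# Illustrative sketch of assembling one QA record (field names from this README;
# the function and its parameters are hypothetical).
P_GOLDEN = 0.8  # probability that the golden context appears among the contexts


def build_record(source_path, chunk_id, question_id, golden_context,
                 distractors, question, cot_answer, final_answer, rng=random):
    contexts = list(distractors)
    if rng.random() < P_GOLDEN:  # golden context included with p = 0.8
        contexts.append(golden_context)
        rng.shuffle(contexts)
    return {
        "id": f"{source_path}_seed_task_{chunk_id}_{question_id}",
        "context": contexts,
        "golden_context": golden_context,
        "question": question,          # assumed field, not listed above
        "cot_answer": cot_answer,
        "answer": final_answer,
    }


record = build_record("docs/guide.md", 4, 1, "golden text",
                      ["distractor A", "distractor B"],
                      "What is X?", "Reasoning... so X is Y.", "Y")
```

Note that with p = 0.8 some records deliberately lack the golden context, which trains the model to say when the answer is not in the provided context.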

## Pre-finetuning processing

- The `pre-FT-processing.ipynb` notebook generates the data file used for finetuning. We finetune with autotrain, so the output file must contain a `"text"` column; each row follows the format `###Human: question ###Assistant: answer`. Example
- Run autotrain with a config file: `autotrain --config "config file location"`. Config file example
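The conversion step can be sketched as follows. This is a minimal stand-in for what `pre-FT-processing.ipynb` does, assuming the QA records carry `question` and `answer` keys; the row format and the `"text"` column requirement come from the description above.

```python
import csv

# Turn QA records into autotrain rows in the README's format:
# "###Human: question ###Assistant: answer"
def qa_to_text_rows(records):
    return [f"###Human: {r['question']} ###Assistant: {r['answer']}"
            for r in records]


records = [{"question": "What is RAG?",
            "answer": "Retrieval-augmented generation."}]
rows = qa_to_text_rows(records)

# autotrain expects a "text" column, so write a one-column CSV.
with open("train.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["text"])
    writer.writerows([row] for row in rows)
```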

## Post-finetuning processing

- Finetuning produces adapter files. To merge the adapters into the base LLM, use `post-FT-processing.ipynb`.
- To use the new finetuned model with Ollama:
  - Create a Modelfile. How to. Example
  - Use llama.cpp to convert the merged model to a GGUF file.
  - Create a new Ollama model from the Modelfile and GGUF file.
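A minimal Modelfile for the last two steps might look like this; the GGUF filename is a placeholder, and the template assumes the `###Human:`/`###Assistant:` training format described earlier.

```
# Modelfile sketch (illustrative; the GGUF path is a placeholder)
FROM ./finetuned-model.gguf
TEMPLATE """###Human: {{ .Prompt }} ###Assistant: """
PARAMETER stop "###Human:"
```

The model is then registered with `ollama create my-finetuned-model -f Modelfile`. Matching the template to the finetuning format matters: the model was trained to continue text after `###Assistant:`, so prompting it any other way degrades answers.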

## Evaluate with AutoRAG

Create an AutoRAG corpus and QA parquet using the autorag notebook. AutoRAG can compare multiple LLMs, prompts, retrieval methods, `top_k` values, and more.

## Manually test LLM answers

Using the `evaluate.ipynb` notebook, we can test different models against a fixed set of questions. Example
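The manual evaluation loop can be sketched as below. The `ask` function is a placeholder for whatever client `evaluate.ipynb` actually calls (e.g. an Ollama request); the questions and model names are illustrative.

```python
# Illustrative harness: run every model over the same fixed questions
# so their answers can be compared side by side.
FIXED_QUESTIONS = ["What is RAG?", "How is the id field formed?"]


def ask(model_name, question):
    # Placeholder: in the notebook this would call the real model.
    return f"[{model_name}] answer to: {question}"


def run_eval(models, questions=FIXED_QUESTIONS):
    return {m: [ask(m, q) for q in questions] for m in models}


table = run_eval(["base-llm", "finetuned-llm"])
```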

## Run a local RAG model

To run a local RAG model, use the `local_RAG_md.ipynb` notebook.
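The core of such a pipeline can be sketched in plain Python. This is a deliberately naive stand-in for the notebook: paragraph-level chunking and keyword-overlap scoring instead of a real embedding retriever, with the prompt built in the finetuning format used above.

```python
# Minimal local-RAG sketch: chunk a markdown document, retrieve the chunks
# most similar to the question, and build a prompt from them.
def chunk_markdown(text):
    # Naive chunking: split on blank lines.
    return [p.strip() for p in text.split("\n\n") if p.strip()]


def retrieve(question, chunks, top_k=2):
    # Naive scoring: count overlapping lowercase words.
    q_words = set(question.lower().split())
    return sorted(chunks,
                  key=lambda c: len(q_words & set(c.lower().split())),
                  reverse=True)[:top_k]


doc = ("RAG combines retrieval with generation.\n\n"
       "Finetuning adapts a base model.\n\n"
       "Ollama runs models locally.")
chunks = chunk_markdown(doc)
context = retrieve("What is RAG retrieval?", chunks)
prompt = ("###Human: What is RAG? Context: "
          + " ".join(context) + " ###Assistant: ")
```

A production version would swap the word-overlap scorer for vector similarity over embeddings, but the retrieve-then-prompt shape stays the same.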