Evaluating Large Language Models with Instructions and Prompts
KoTox is an automatically generated Korean instruction dataset, used to mitigate the toxicity of LLMs.
This is the official code repository for the ACL Findings Paper "Multi-Task Transfer Matters During Instruction-Tuning"
Instruction fine-tuning BART for Dialogue Summarization | IT4772E | NLP Project 20232
an instruction-tuning dataset generation script
This repository contains the implementation of a fine-tuned Llama2 chatbot using QLoRA, tailored to provide detailed information and recommendations about movies. The model is fine-tuned on the IMDB dataset, enabling it to generate informed and contextually relevant responses.
Baseline: google/flan-t5. Fine-tuning: LMQG, LoRA
Crawls and parses Discourse forum chat data on the fly, directly usable for LLM instruction fine-tuning.
A multimodal model for language-guided socially compliant robot navigation.
Summaries of papers related to the alignment problem in NLP
Implementation of the models from the Universal-NER paper (2024) as a Streamlit-based web application for Named Entity Recognition on PDF documents. Users upload PDF files, from which the application extracts text, images, and tables to identify entities of a user-specified type.
Domain generalization for the Aspect-Based Sentiment Analysis (ABSA) task using a noisy-student architecture.
A collection of completed LLM projects; a good place to start learning about LLMs.
Language Models Resist Alignment
End-to-end MLOps LLM instruction finetuning based on PEFT & QLoRA to solve math problems.
The official implementation of paper "Demystifying Instruction Mixing for Fine-tuning Large Language Models"
Official implementation of "Incubating Text Classifiers Following User Instruction with Nothing but LLM". Users obtain a personalized classifier with only an instruction as input; the incubation is based on a Llama-2-7B model fine-tuned on Hugging Face metadata with self-diversification.
An interpretable KBQA system that operates at the natural language level with the help of LLMs
This repository hosts materials from the Bertinoro International Spring School 2024 course
EMNLP'2023: Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration
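Several repositories above (e.g. the PEFT & QLoRA math fine-tuning project and the QLoRA Llama2 chatbot) rely on low-rank adaptation rather than full fine-tuning. As a rough illustration of the idea behind LoRA — not code from any of these repos — the frozen weight matrix W is augmented with a scaled product of two small trainable factors, which cuts the number of trainable parameters dramatically. A minimal NumPy sketch, with all names and sizes chosen here for illustration:

```python
import numpy as np

# Illustrative LoRA-style low-rank update (hypothetical sketch, not any
# repository's actual code). Instead of training the full d_out x d_in
# matrix W, LoRA trains A (r x d_in) and B (d_out x r) and computes
# W @ x + (alpha / r) * B @ (A @ x).

d_in, d_out, r, alpha = 768, 768, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))                   # zero-init: adapter starts as a no-op

def lora_forward(x):
    """Frozen path plus scaled low-rank path for input vector x of shape (d_in,)."""
    return W @ x + (alpha / r) * (B @ (A @ x))

full_params = d_out * d_in        # parameters a full fine-tune would update
lora_params = r * d_in + d_out * r  # parameters LoRA actually trains
print(full_params, lora_params)
```

Because B is initialized to zero, the adapted model initially matches the frozen model exactly; training then updates only A and B, here 12,288 parameters instead of 589,824.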