
# LLM (Large Language Model) FineTuning Projects and notes on common practical techniques

## Find me here


## Fine-tuning LLM (and YouTube Video Explanations)

| Notebook 🟠 | YouTube Video 🟠 |
| --- | --- |
| CodeLLaMA-34B - Conversational Agent | YouTube Link |
| Inference Yarn-Llama-2-13b-128k with KV Cache to answer quiz on very long textbook | YouTube Link |
| Mistral 7B FineTuning with PEFT and QLoRA | YouTube Link |
| Falcon finetuning on openassistant-guanaco | YouTube Link |
| Fine Tuning Phi 1_5 with PEFT and QLoRA | YouTube Link |
| Web scraping with Large Language Models (LLM) - AnthropicAI + LangChainAI | YouTube Link |
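
The long-context quiz notebook above leans on the model's KV cache: once the long textbook prompt has been encoded, each newly generated token only attends to the cached key/value tensors instead of re-processing the whole prefix. A minimal sketch of that pattern with 🤗 Transformers follows; the model id, prompt, and generation settings are illustrative assumptions, not the exact notebook code.

```python
# Minimal long-context inference sketch with the KV cache enabled.
# Assumptions: the HF repo id below, and that the model fits your GPU(s).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NousResearch/Yarn-Llama-2-13b-128k"  # assumed repo id for the Yarn-Llama-2 128k model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,  # Yarn checkpoints may ship custom RoPE-scaling modeling code
)

long_textbook_chunk = "..."  # stand-in for the very long context you want to quiz the model on
prompt = f"{long_textbook_chunk}\n\nQuestion: Summarise the key result of chapter 2.\nAnswer:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,
        use_cache=True,  # reuse cached keys/values so each new token is cheap despite the long prefix
    )

# Decode only the newly generated tokens, not the (huge) prompt
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```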

## Fine-tuning LLM

| Notebook | Colab |
| --- | --- |
| 📌 Finetune codellama-34B with QLoRA | Open In Colab |
| 📌 Mixtral Chatbot with Gradio | |
| 📌 togetherai api to run Mixtral | Open In Colab |
| 📌 Integrating TogetherAI with LangChain 🦙 | Open In Colab |
| 📌 Mistral-7B-Instruct_GPTQ - Finetune on finance-alpaca dataset 🦙 | Open In Colab |
| 📌 Mistral 7B FineTuning with DPO (Direct Preference Optimization) | Open In Colab |
| 📌 Finetune llama_2_GPTQ | |
| 📌 TinyLlama with Unsloth and RoPE Scaling on the dolly-15 dataset | Open In Colab |
| 📌 TinyLlama fine-tuning with Taylor Swift song lyrics | Open In Colab |

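Most of the fine-tuning notebooks in the table above follow the same QLoRA recipe: load the base model quantized to 4-bit NF4 with bitsandbytes, then attach low-rank LoRA adapters via PEFT and train only those. A minimal sketch of that setup is below; the model id, rank, and target_modules are assumptions for illustration and differ per notebook.

```python
# QLoRA setup sketch: 4-bit base model + trainable LoRA adapters.
# Assumed model id and hyper-parameters; swap in the model/dataset of the notebook you follow.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "mistralai/Mistral-7B-v0.1"  # any causal LM from the table works the same way

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # QLoRA: freeze base weights in 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)  # layernorm/embedding housekeeping for k-bit training

lora_config = LoraConfig(
    r=16,                                   # LoRA rank; see the "rank r" note in the concepts list below
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed; depends on the architecture
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # only the low-rank adapter weights are trainable
```

From here the model can be handed to a regular 🤗 `Trainer` (or trl's `SFTTrainer`) with the dataset of your choice.
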
## LLM Techniques and utils - Explained

### LLM Concepts

- 📌 DPO (Direct Preference Optimization) training and its datasets
- 📌 4-bit LLM Quantization with GPTQ
- 📌 Quantize with HF Transformers
- 📌 Understanding rank r in LoRA and related Matrix_Math
- 📌 Rotary Embeddings (RoPE) is one of the fundamental building blocks of the LLaMA-2 implementation
- 📌 Chat Templates in HuggingFace
- 📌 How Mixtral 8x7B is a dense 47Bn-param model
- 📌 The concept of validation log perplexity in LLM training - a note on fundamentals
- 📌 Why we need to identify target_layers for LoRA/QLoRA
- 📌 Evaluate Tokens per second
- 📌 Traversing through nested attributes (or sub-modules) of a PyTorch module
- 📌 Implementation of Sparse Mixture-of-Experts layer in PyTorch from the Mistral official repo
- 📌 Util method to extract a specific token's representation from the last hidden states of a transformer model
- 📌 Convert a PyTorch model's parameters and tensors to half-precision floating-point format
- 📌 Quantizing 🤗 Transformers models with the GPTQ method
- 📌 Quantize Mixtral-8x7B so it can run on a 24GB GPU
- 📌 What is GGML or GGUF in the world of Large Language Models?

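On the "identify target_layers for LoRA/QLoRA" and "traversing nested sub-modules" notes above: a common way to find candidate `target_modules` is to walk `model.named_modules()` and collect the leaf names of the `nn.Linear` layers. A small sketch follows, using `facebook/opt-125m` purely as a convenient example model.

```python
# List the Linear-layer leaf names of a model; these are the usual candidates
# for LoraConfig(target_modules=...). Example model chosen only because it is small.
import torch.nn as nn
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

linear_layer_names = set()
for name, module in model.named_modules():           # walks every nested sub-module
    if isinstance(module, nn.Linear):
        linear_layer_names.add(name.split(".")[-1])   # keep only the leaf name, e.g. "q_proj"

print(sorted(linear_layer_names))                     # candidate target_modules for LoRA/QLoRA
```
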
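And on validation log perplexity: for a causal LM it is simply the exponential of the mean token-level cross-entropy on a held-out split. A minimal sketch, with a placeholder validation set and a small model chosen only for illustration:

```python
# Validation perplexity sketch: average the causal-LM loss over a held-out set, then exponentiate.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "facebook/opt-125m"                        # small model just for illustration
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).eval()

validation_texts = [                                  # stand-in for a real validation split
    "The quick brown fox jumps over the lazy dog.",
    "Large language models are trained to predict the next token.",
]

total_loss, n_batches = 0.0, 0
with torch.no_grad():
    for text in validation_texts:
        enc = tokenizer(text, return_tensors="pt")
        out = model(**enc, labels=enc["input_ids"])   # HF shifts the labels internally
        total_loss += out.loss.item()                 # mean per-token cross-entropy (in nats)
        n_batches += 1

# Averaging per-batch means keeps the sketch short; a token-weighted average is more precise.
mean_loss = total_loss / n_batches
print(f"validation log-loss: {mean_loss:.3f}   perplexity: {math.exp(mean_loss):.2f}")
```
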
## Other Smaller Language Models
